# Integrating Amortized Inference with Diffusion Models for Learning Clean
Distribution from Corrupted Images
Yifei Wang Weimin Bai∗ Weijian Luo Wenzheng Chen He Sun
Peking University. ∗Equal contribution. Correspondence to <EMAIL_ADDRESS>.
###### Abstract
Diffusion models (DMs) have emerged as powerful generative models for solving
inverse problems, offering a good approximation of prior distributions of
real-world image data. Typically, diffusion models rely on large-scale clean
signals to accurately learn the score functions of ground truth clean image
distributions. However, such a requirement for large amounts of clean data is
often impractical in real-world applications, especially in fields where data
samples are expensive to obtain. To address this limitation, in this work, we
introduce _FlowDiff_ , a novel joint training paradigm that leverages a
conditional normalizing flow model to facilitate the training of diffusion
models on corrupted data sources. The conditional normalizing flow learns to
recover clean images through a novel amortized inference mechanism, and can
thus effectively facilitate the diffusion model's training on corrupted data.
Conversely, the diffusion model provides a strong prior that in turn improves
the quality of image recovery. The flow model and the diffusion model can
therefore promote each other and demonstrate strong empirical performance.
Extensive experiments show that FlowDiff can
effectively learn clean distributions across a wide range of corrupted data
sources, such as noisy and blurry images. It consistently outperforms existing
baselines by significant margins under identical conditions. Additionally,
we study the learned diffusion prior, observing its superior performance
in downstream computational imaging tasks, including inpainting, denoising,
and deblurring.
## 1 Introduction
Diffusion models (DMs) ho2020denoising ; song2020score ; song2019generative ;
sohl2015deep have become a key focus in generative modeling due to their
exceptional ability to capture complex data distributions and generate high-
fidelity samples. Their versatility has led to successful applications across
various data modalities, including images rombach2022high ;
dhariwal2021diffusion , videos ho2022imagen ; ho2022video , text
li2022diffusion , audio kong2020diffwave , 3D shapes tang2023make ;
luo2021diffusion , and scientific domains like molecule design
guo2024diffusion ; huang2023mdm . A particularly promising application of DMs
is in solving computational imaging inverse problems, which aim to recover the
underlying image $\mathbf{x}$ from noisy or corrupted observations
$\mathbf{y}$ chung2022diffusion . This can be probabilistically formulated as:
$p(\mathbf{x}\mid\mathbf{y})\propto p(\mathbf{y}\mid\mathbf{x})p(\mathbf{x}),$
(1)
where $p(\mathbf{y}\mid\mathbf{x})$ represents the forward model mapping
images to observations, and $p(\mathbf{x})$ encodes prior knowledge about the
images. Inverse problems are often ill-posed due to noise and corruption,
leading to ambiguous solutions. DMs serve as powerful priors because they
approximate the gradient of the data’s log-likelihood function
$\nabla_{\mathbf{x}}\log p_{data}(\mathbf{x})$, effectively constraining the
solution space by leveraging their ability to model complex image
distributions, favoring realistic and high-quality reconstructions.
However, training an effective DM typically requires a large dataset of clean
images, which can be expensive or sometimes impossible to acquire. For
example, in structural biology, 3D protein structures cannot be directly
observed, and only low signal-to-noise 2D projections are captured by cryo-
electron microscopy (Cryo-EM) nogales2015cryo . Similarly, in astronomy, black
hole images are impossible to observe directly. Such scenarios, where only
corrupted data is available, are common, especially in scientific
applications. This raises the question: is it possible to train DMs on clean
data distributions using only corrupted observations?
In this paper, we provide an affirmative answer and introduce a general
framework capable of learning clean data distributions from arbitrary or mixed
types of corrupted observations. The key insight is to incorporate an
additional normalizing flow kingma2018glow that estimates clean images from
corrupted observations through amortized inference. The normalizing flow is
trained jointly with the DM in a variational inference framework: the
normalizing flow generates clean images for training the DM, while the DM in
turn imposes an image prior that guides the normalizing flow toward
reasonable estimates. Although this may seem like a chicken-and-egg problem,
we demonstrate that a good equilibrium can be reached using an appropriate
training strategy. The normalizing flow converges to a good inference
function, and the DM converges to the clean data distribution, enabling
simultaneous retrieval of clean images and learning of the clean distribution,
even when no clean signals are provided. Through extensive experiments, we
show that our method significantly outperforms existing approaches across
multiple computational imaging applications, including denoising, deblurring,
and fluorescent microscopy.
## 2 Background
### 2.1 Score-based diffusion models
Score-based diffusion models ho2020denoising ; song2020score ;
song2019generative ; sohl2015deep are a class of generative models that
leverage stochastic processes to generate high-quality data, such as images or
audio. Unlike traditional generative models that directly map latent codes to
data samples, these models operate by gradually transforming a simple noise
distribution, $\pi=\mathcal{N}(\mathbf{0},\mathbf{I})$, into a complex data
distribution, $p_{data}$, through a series of small, iterative steps. This
process involves a forward diffusion phase, which adds noise to the data, and
a reverse diffusion phase, which denoises it. Both phases are governed by
stochastic differential equations (SDEs) defined over the time interval
$t\in[0,T]$:
$\begin{split}\text{Forward-time SDE:}\quad d\mathbf{x}&=\mathbf{f}(\mathbf{x},t)\,dt+g(t)\,d\mathbf{w},\\ \text{Reverse-time SDE:}\quad d\mathbf{x}&=\left[\mathbf{f}(\mathbf{x},t)-g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\right]dt+g(t)\,d\mathbf{\overline{w}},\end{split}$ (2)
where $\mathbf{w}\in\mathbb{R}^{d}$ and
$\mathbf{\overline{w}}\in\mathbb{R}^{d}$ are Brownian motions,
$\mathbf{f}(\cdot,t):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ defines the
drift coefficient that controls the deterministic evolution of
$\mathbf{x}(t)$, and $g(\cdot):\mathbb{R}\rightarrow\mathbb{R}$ is the
diffusion coefficient that controls the rate of noise increase in
$\mathbf{x}(t)$. At the core of the reverse diffusion process is a neural
network, $s_{\theta}$, which is trained to approximate the score function,
i.e., the gradient of the log-density $\nabla_{\mathbf{x}}\log
p_{t}(\mathbf{x})$. This allows constructing a diffusion process
$\mathbf{x}_{t=0:T}$ where $\mathbf{x}_{0}\sim p_{data}$ and
$\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$. Score-based diffusion
models have demonstrated remarkable success in generating high-fidelity data
and have become a significant area of research in machine learning and
artificial intelligence.
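To make the reverse-time SDE concrete, the following is a minimal sketch (ours, not from any released code) of Euler-Maruyama sampling for a variance-preserving SDE, assuming a trained score network `score_net(x, t)`; `beta_min` and `beta_max` are assumed VP-SDE hyperparameters.

```python
import torch

def reverse_sde_sample(score_net, shape, n_steps=1000, beta_min=0.1, beta_max=20.0):
    """Euler-Maruyama discretization of the reverse-time SDE in Eq. 2 (VP-SDE)."""
    dt = 1.0 / n_steps
    x = torch.randn(shape)  # x_T ~ N(0, I)
    for i in reversed(range(n_steps)):
        t = (i + 1) / n_steps
        beta_t = beta_min + t * (beta_max - beta_min)  # g(t)^2 for the VP-SDE
        drift = -0.5 * beta_t * x                      # f(x, t)
        score = score_net(x, torch.full(shape[:1], t))
        # dx = [f(x,t) - g(t)^2 * score] dt + g(t) dw, integrated backward in time
        x = x - (drift - beta_t * score) * dt
        if i > 0:  # no noise on the final step
            x = x + (beta_t * dt) ** 0.5 * torch.randn_like(x)
    return x
```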
### 2.2 Diffusion models for inverse problems
Inverse problems aim to recover an underlying signal or image from
observations and arise in various fields, such as computational imaging. These
problems are often ill-posed due to factors like noise, incomplete data, or
non-invertible forward operators ($\mathbf{x}\to\mathbf{y}$) that map the
underlying signal $\mathbf{x}$ to observations $\mathbf{y}$. Traditional
inverse problem solvers rely on handcrafted priors or regularizers that impose
assumptions about the signal’s structure or smoothness. However, these
assumptions may lead to suboptimal reconstructions, especially for complex
images with intricate details. Diffusion models offer powerful data-driven
priors or regularizers for the inversion process Luo2023ACS ; feng2023score .
By incorporating the forward operator into the diffusion process using Bayes’
rule, a conditional score function, $\nabla_{\mathbf{x}}\log
p_{t}(\mathbf{x}\mid\mathbf{y})$, can be defined, enabling a conditional
diffusion process to gradually recover the underlying clean image from noisy
observations graikos2022diffusion ; kawar2021snips ; chung2022diffusion . As
diffusion models inherently define a sampling approach, they not only provide
point estimates but also quantify reconstruction uncertainties, which is
valuable for scientific and medical imaging applications song2021solving .
### 2.3 Learning generative priors from corrupted data
Generative models need large clean datasets to learn accurate data
distributions. However, when only corrupted observations are available,
directly training generative models on these data may lead to distorted or
biased distributions, resulting in poor generative performance and suboptimal
priors for inverse problems. A promising approach is learning the clean
generative prior directly from corrupted observations. Early approaches like
AmbientGAN bora2018ambientgan and AmbientFlow kelkar2023ambientflow have
explored this concept for GANs and normalizing flows, respectively. AmbientGAN
integrates the forward model into its generator, simulating the measurements
of generated images, while its discriminator differentiates between real and
simulated measurements. AmbientFlow uses a variational Bayesian framework
sun2021deep to train two flow-based models; one predicts clean images from
noisy data, and the other models the clean distribution. However, the limited
capacity of GANs and flows restricts the modeling of complicated distributions.
Recent research has shifted towards training clean diffusion models using
corrupted data. Techniques such as AmbientDiffusion daras2023ambient
introduce additional corruption during training. As the model cannot
distinguish between original and further corruptions, this helps the diffusion
model restore the clean distribution. Methods like SURE-Score aali2023solving
and GSURE kawar2023gsure utilize Stein’s Unbiased Risk Estimate (SURE) loss
to jointly train denoising and diffusion models through denoising score
matching. However, these methods often have restrictive assumptions about the
type of corruption—SURE-Score is limited to denoising, and AmbientDiffusion to
inpainting. This motivates exploring a more generalizable framework for
training expressive diffusion models using arbitrary or mixed types of
corrupted data.
## 3 Methods
In this section, we propose a diffusion-based framework to learn clean
distribution from corrupted observations, as shown in Fig. 1. Specifically, we
first introduce the amortized inference framework with a parameterized
normalizing flow for the image inverse problems in Sec. 3.1. Then, we
elaborate on how to adopt score-based generative models to approximate image
priors in Sec. 3.2. Finally, we discuss the techniques and implementation
details for jointly optimizing the normalizing flow and the diffusion prior in
Sec. 3.3.
### 3.1 Amortized inference with normalizing flows
To solve a general noisy inverse problem $\mathbf{y}=f(\mathbf{x})+\eta$,
where observations $\mathbf{y}$ are given, $f(\cdot)$ is the known forward
model, $\eta\sim\mathcal{N}(\mathbf{0},\sigma_{n}^{2}\mathbf{I})$, we aim to
compute the posterior $p(\mathbf{x}\mid\mathbf{y})$ to recover underlying
signals $\mathbf{x}$ from corrupted observations $\mathbf{y}$. However, it is
intractable to compute the posterior exactly, as the marginal
measurement distribution $p(\mathbf{y})$ is unknown. Therefore, we
consider an amortized inference framework to approximate the underlying
posterior with a deep neural network parameterized by $\varphi$. The goal of
optimization is to minimize the Kullback-Leibler (KL) divergence between the
variational distribution $p_{\varphi}(\mathbf{x}\mid\mathbf{y})$ and the true
posterior $p(\mathbf{x}\mid\mathbf{y})$:
$\displaystyle D_{KL}(p_{\varphi}(\mathbf{x}\mid\mathbf{y})\parallel
p(\mathbf{x}\mid\mathbf{y}))$ $\displaystyle=\int
p_{\varphi}(\mathbf{x}\mid\mathbf{y})\log\frac{p_{\varphi}(\mathbf{x}\mid\mathbf{y})}{p(\mathbf{x}\mid\mathbf{y})}d\mathbf{x}$
(3) $\displaystyle=\int p_{\varphi}(\mathbf{x}\mid\mathbf{y})\log\frac{p_{\varphi}(\mathbf{x}\mid\mathbf{y})p(\mathbf{y})}{p(\mathbf{y}\mid\mathbf{x})p(\mathbf{x})}d\mathbf{x}$
$\displaystyle=\mathbb{E}_{p_{\varphi}(\mathbf{x}\mid\mathbf{y})}\left[\log
p_{\varphi}(\mathbf{x}\mid\mathbf{y})+\log p(\mathbf{y})-\log
p(\mathbf{y}\mid\mathbf{x})-\log p(\mathbf{x})\right].$
Specifically, we introduce a conditional normalizing flow model $G_{\varphi}$
to model the likelihood term $p_{\varphi}(\mathbf{x}\mid\mathbf{y})$.
Normalizing flows dinh2014nice are invertible generative models that can
model complex distributions of target data murphy2018machine . They generate
samples $\mathbf{x}$ by drawing $\mathbf{z}$ from a simple distribution
$\pi(\mathbf{z})$ (e.g., a standard Gaussian) and passing it through a
nonlinear but invertible transformation. The log-likelihood of samples from a
normalizing flow can be analytically computed via the change-of-variables formula:
$\displaystyle\log
p_{\varphi}(\mathbf{x})=\log\pi(\mathbf{z})-\log\left|\mathrm{det}\frac{dG_{\varphi}(\mathbf{z})}{d\mathbf{z}}\right|,$
(4)
where $\mathrm{det}\frac{dG_{\varphi}(\mathbf{z})}{d\mathbf{z}}$ is the
determinant of the generative model’s Jacobian matrix. Once the weights of
$G_{\varphi}$ are trained, one can efficiently sample through the normalizing
flow and exactly compute the log-likelihood of target samples. These excellent
properties make them natural tools for modeling the variational distribution.
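As a toy illustration of Eq. 4 (our own example, not the paper's flow architecture), the snippet below computes exact log-likelihoods for an elementwise affine flow whose Jacobian log-determinant is available in closed form.

```python
import torch

class AffineFlow(torch.nn.Module):
    """x = z * exp(s) + b: an invertible map with a tractable log-determinant."""
    def __init__(self, dim):
        super().__init__()
        self.s = torch.nn.Parameter(torch.zeros(dim))
        self.b = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, z):
        x = z * torch.exp(self.s) + self.b
        log_det = self.s.sum().expand(z.shape[0])  # log |det dG/dz|
        return x, log_det

flow = AffineFlow(dim=4)
z = torch.randn(8, 4)
x, log_det = flow(z)
log_pi_z = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=1)
log_p_x = log_pi_z - log_det  # Eq. 4
```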
In the context of inverse problems, we adapt the unconditional flow to model
the posterior distribution $p_{\varphi}(\mathbf{x}\mid\mathbf{y})$ conditioned
on observations $\mathbf{y}$. Therefore, we can further expand the KL
divergence in Eq. 3 based on conditional normalizing flows,
$G_{\varphi}(\mathbf{z},\mathbf{y})$:
$\displaystyle\mathbb{E}_{\mathbf{z}\sim\pi(\mathbf{z})}\left[\log\pi(\mathbf{z})-\underbrace{\log\left|\mathrm{det}\frac{dG_{\varphi}(\mathbf{z},\mathbf{y})}{d\mathbf{z}}\right|}_{L_{entropy}}+\log
p(\mathbf{y})-\underbrace{\log p(\mathbf{y}\mid
G_{\varphi}(\mathbf{z},\mathbf{y}))}_{L_{datafidelity}}-\underbrace{\log
p(G_{\varphi}(\mathbf{z},\mathbf{y}))}_{L_{prior}}\right].$ (5)
Since $\log\pi(\mathbf{z})$ and $\log p(\mathbf{y})$ contain no
learnable parameters, we drop these terms during training. For
simplification, we denote the posterior samples produced by the conditional
normalizing flow, $G_{\varphi}(\mathbf{z},\mathbf{y})$, as $\hat{\mathbf{x}}$.
The final objective function of the proposed amortized inference framework can
be written as:
$\displaystyle
L=\mathbb{E}_{p(\hat{\mathbf{x}})}\left[-\underbrace{\log\left|\mathrm{det}\frac{d\hat{\mathbf{x}}}{d\mathbf{z}}\right|}_{L_{entropy}}-\underbrace{\log
p(\mathbf{y}\mid\hat{\mathbf{x}})}_{L_{datafidelity}}-\underbrace{\log
p(\hat{\mathbf{x}})}_{L_{prior}}\right],$ (6)
where the entropy loss $L_{entropy}$ and the data fidelity loss
$L_{datafidelity}$ can be computed by the log-determinant of the generative
model’s Jacobian matrix and data consistency, respectively. As for the prior
term, previous works mainly use handcrafted priors such as sparsity or total
variation (TV) kuramochi2018superresolution ; bouman1993generalized . However,
these priors cannot capture the complex nature of natural image distributions
and often introduce human bias. We instead propose data-driven DMs as
powerful priors. Prior works simply adopt DMs pre-trained on a large dataset
of clean images chung2022diffusion ; feng2023efficient ; feng2023score , yet
such clean data can be expensive or sometimes impossible to acquire.
Therefore, we carefully design an amortized inference framework to train the
DM from scratch, without requiring any clean images.
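For concreteness, here is a hedged sketch of the objective in Eq. 6 for the Gaussian noise forward model $\mathbf{y}=f(\mathbf{x})+\eta$; `cond_flow` (with an assumed `latent_dim` attribute), `diffusion_prior_nll`, and `forward_op` are placeholders rather than the paper's actual implementation.

```python
import torch

def flowdiff_loss(cond_flow, diffusion_prior_nll, y, forward_op, sigma_n):
    """Monte-Carlo estimate of Eq. 6 with one posterior sample per observation."""
    z = torch.randn(y.shape[0], cond_flow.latent_dim)
    x_hat, log_det = cond_flow(z, y)        # posterior sample and log|det dG/dz|
    entropy_loss = -log_det                  # L_entropy
    # L_datafidelity: -log p(y | x_hat) for Gaussian noise, up to a constant
    fidelity_loss = ((y - forward_op(x_hat)) ** 2).flatten(1).sum(1) / (2 * sigma_n ** 2)
    prior_loss = diffusion_prior_nll(x_hat)  # L_prior, bounded via Eq. 10 (Sec. 3.2)
    return (entropy_loss + fidelity_loss + prior_loss).mean()
```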
Figure 1: Overview of the FlowDiff. We aim to train a clean diffusion model,
$s_{\theta}$, using only corrupted observations. To achieve this, a
conditional normalizing flow, $G_{\varphi}$, is introduced to recover
underlying clean images through amortized inference. The conditional
normalizing flow and the diffusion model are trained jointly: the flow
generates clean images for training the diffusion model, while the diffusion
model provides an image prior to regularize the output of the flow. Once the
two networks reach equilibrium, clean reconstructions of corrupted
observations are produced, and a clean diffusion prior is learned.
### 3.2 Jointly optimizing score-based priors
Ideally, one adopts a clean DM pre-trained on large-scale signals as a plug-and-
play prior in Eq. 6. Extensive works have focused on solving general inverse
problems in this paradigm. We argue that it is feasible to train this prior
jointly with the amortized inference framework. Assuming that the posterior
samples are clean, we can train the score-based DM through well-established
techniques.
#### Maximum likelihood training of score-based diffusion models
Our goal is to learn the data distribution by approximating its score
function $\nabla_{\mathbf{x}}\log p(\mathbf{x})$ with a neural network
$\mathbf{s}_{\theta}$. To train $s_{\theta}(\mathbf{x},t)$,
song2019generative proposed the score matching loss
$\mathcal{J}_{SM}(\mathbf{\theta};\lambda(\cdot))=\frac{1}{2}\int_{0}^{T}\mathbb{E}_{p_{t}(\mathbf{x})}\left[\lambda(t)\|\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})-\mathbf{s}_{\theta}(\mathbf{x},t)\|_{2}^{2}\right]dt,$ (7)
where $\lambda(t)$ is a weighting factor, e.g., $\lambda(t)=g(t)^{2}$. Eq. 7
is a weighted MSE loss between $\mathbf{s}_{\theta}(\mathbf{x},t)$ and
$\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})$ with a manually chosen weighting
function $\lambda(t)$. During training, we drop $\lambda(t)$, which improves
sample quality and simplifies implementation ho2020denoising .
Notably, the score matching loss can also serve as a data-driven regularizer
during inference.
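The sketch below shows denoising score matching for a VP-SDE with the weighting $\lambda(t)$ dropped, in which case the objective reduces to simple $\epsilon$-prediction ho2020denoising ; the network `eps_net` and the $\beta$ schedule are assumptions, not the paper's exact configuration.

```python
import torch

def dsm_loss(eps_net, x0, beta_min=0.1, beta_max=20.0):
    """Unweighted denoising score matching (epsilon-prediction) for a VP-SDE."""
    t = torch.rand(x0.shape[0])
    # alpha_bar(t) = exp(-int_0^t beta(s) ds), beta(s) = beta_min + s*(beta_max - beta_min)
    log_ab = -(t * beta_min + 0.5 * t ** 2 * (beta_max - beta_min))
    ab = torch.exp(log_ab).view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps   # perturbation kernel sample
    eps_pred = eps_net(xt, t)
    return ((eps_pred - eps) ** 2).flatten(1).sum(1).mean()
```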
#### Prior probability computed through DMs
By removing the Brownian motion from the reverse-time SDE in Eq. 2, that is,
ignoring the stochastic term, we can derive a probability flow ODE sampler
song2020score :
$d\mathbf{x}=\left[\mathbf{f}(\mathbf{x},t)-\frac{1}{2}g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\right]dt.$ (8)
song2021maximum proved that if
$s_{\theta}(\mathbf{x},t)\equiv\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})$,
then $p_{\theta}^{\rm ODE}=p_{\theta}^{\rm SDE}=p_{data}$. Assuming this
identity always holds, we can compute the log-probability of a single image
through integration:
$\displaystyle\log p_{data}(\mathbf{x})$ $\displaystyle=\log p_{\theta}^{\rm ODE}(\mathbf{x})$ (9) $\displaystyle=\log\pi(\mathbf{x}(T))+\int_{0}^{T}\nabla_{\mathbf{x}}\cdot\left[\mathbf{f}(\mathbf{x},t)-\frac{1}{2}g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\right]dt$ $\displaystyle=\log\pi(\mathbf{x}(T))+\int_{0}^{T}\nabla_{\mathbf{x}}\cdot\left[\mathbf{f}(\mathbf{x},t)-\frac{1}{2}g(t)^{2}s_{\theta}(\mathbf{x},t)\right]dt.$
feng2023score verified that a pre-trained $\mathbf{s}_{\theta}$ can serve
as a powerful plug-and-play prior for inverse imaging. However, the log-
probability function in Eq. 9 is computationally expensive, requiring hundreds
of discrete ODE time steps to compute accurately, and is thus impractical to
use during training. To reduce the computational overhead, song2021maximum ;
feng2023efficient show that the evidence lower bound (ELBO) can approximate
$p_{\theta}^{\rm ODE}$ well enough to serve as a prior distribution.
Interestingly, we find that in our setting, choosing the specific weighting
function $\lambda(\cdot)=g(\cdot)^{2}$ turns the score matching loss in Eq. 7
into an upper bound on the negative log-likelihood, which gives us an explicit
function to approximate the prior probability of an image song2021maximum :
$-\mathbb{E}_{p(\mathbf{x})}\left[\log p_{data}(\mathbf{x})\right]=-\mathbb{E}_{p(\mathbf{x})}\left[\log p_{\theta}^{\text{SDE}}(\mathbf{x})\right]\leq\mathcal{J}_{SM}(\mathbf{\theta};g(\cdot)^{2})+C.$
(10)
Consequently, by replacing $\mathbb{E}_{p(\hat{\mathbf{x}})}\left[-\log
p(\hat{\mathbf{x}})\right]$ in Eq. 6 with the upper bound in Eq. 10, we derive
the loss for jointly optimizing the normalizing flow and the score-based
diffusion prior.
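A hedged sketch of the resulting prior term: keeping the weighting $\lambda(t)=g(t)^{2}$ in the $\epsilon$-prediction parameterization yields a per-sample upper bound on $-\log p(\mathbf{x})$ (Eq. 10, up to a constant), which can serve as `diffusion_prior_nll` in the Eq. 6 sketch above.

```python
import torch

def diffusion_prior_nll(eps_net, x0, beta_min=0.1, beta_max=20.0):
    """Per-sample score matching loss with lambda(t) = g(t)^2 = beta(t) (Eq. 10)."""
    t = torch.rand(x0.shape[0])
    beta_t = beta_min + t * (beta_max - beta_min)
    log_ab = -(t * beta_min + 0.5 * t ** 2 * (beta_max - beta_min))
    ab = torch.exp(log_ab).view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    eps_pred = eps_net(xt, t)
    # s_theta = -eps_pred/sqrt(1-ab), target = -eps/sqrt(1-ab), weighted by beta(t)
    w = beta_t.view(-1, *([1] * (x0.dim() - 1))) / (1 - ab)
    return 0.5 * (w * (eps_pred - eps) ** 2).flatten(1).sum(1)
```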
### 3.3 Implementation details
#### Training scheme
We briefly discuss how to jointly train the normalizing flow and the diffusion
model with the objective function in Eq. 6. Figure 1 illustrates our training
framework. Our training loss consists of three terms: the first two terms
(highlighted in green) are influenced only by the amortized inference network
(i.e., conditional normalizing flow), while the third term (highlighted in
blue) is influenced by both the inference network and the diffusion prior. In
our joint optimization implementation, we alternate between updating the
weights of the flow and the diffusion model in each training step. The flow is
trained with all three terms, as defined in Eq. 6, assuming the diffusion prior
is fixed. Then, we fix the flow's weights and use the posterior images sampled
from the flow model to train the DM using only the third term, i.e., the score
matching loss defined in Eq. 7. Note that when training the DM, we remove the
weighting function $\lambda(t)$ in Eq. 7 for better performance, as suggested
by Kingma et al. kingma2024understanding .
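The following schematic captures our reading of this alternating scheme (not the released code); it reuses `flowdiff_loss`, `dsm_loss`, and `diffusion_prior_nll` from the sketches above and assumes `data_loader`, `forward_op`, `sigma_n`, and the two optimizers are already set up.

```python
for y in data_loader:                              # corrupted observations only
    # (1) flow update: all three terms of Eq. 6; only opt_flow steps,
    #     so the diffusion prior stays fixed during this update
    prior_nll = lambda x: diffusion_prior_nll(eps_net, x)
    loss_flow = flowdiff_loss(cond_flow, prior_nll, y, forward_op, sigma_n)
    opt_flow.zero_grad(); loss_flow.backward(); opt_flow.step()

    # (2) DM update: unweighted score matching (Eq. 7) on detached posterior samples
    with torch.no_grad():
        z = torch.randn(y.shape[0], cond_flow.latent_dim)
        x_hat, _ = cond_flow(z, y)
    loss_dm = dsm_loss(eps_net, x_hat)
    opt_dm.zero_grad(); loss_dm.backward(); opt_dm.step()
```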
#### Model reset
Considering the joint optimization of the normalizing flow and diffusion model
is highly non-convex, we often observe unstable model performance during
training. Additionally, because both the normalizing flow and the diffusion
prior are randomly initialized, they are initially trained with poor posterior
samples, hindering optimal convergence due to memorization effects. To help
the networks escape local minima, we periodically reset the weights of the
normalizing flow and diffusion prior after a certain number of joint training
steps.
For example, in denoising tasks, we reset the weights of the normalizing flow
(amortized inference network) after 9000 joint training steps and retrain it
from scratch until convergence, fixing the learned diffusion prior at the
9000th step. We then reverse the process: using the improved normalizing flow
to generate better posterior samples, we retrain the diffusion model from
scratch until it converges. Empirically, this model resetting strategy
significantly reduces the influence of memorization effects, thereby enhancing
both the amortized inference accuracy and the generative performance of the
diffusion model. We also employ a similar strategy for the deblurring task.
#### Posterior sampling
In addition to generating posterior samples using amortized inference, the
inverse problem can also be solved by leveraging the Diffusion Posterior
Sampling (DPS) algorithm chung2022diffusion to sample from corrupted
observations using the learned diffusion model. Specifically, we modify the
reverse-time SDE from Eq. 2 into a conditional reversed process for posterior
sampling:
$\quad
d\mathbf{x}=\left[\mathbf{f}(\mathbf{x},t)-g(t)^{2}\nabla_{\mathbf{x}}\log
p_{t}(\mathbf{x}\mid\mathbf{y})\right]dt+g(t)d\mathbf{\overline{w}},$ (11)
where $\mathbf{y}$ is the given observation. Using Bayes’ theorem, the
conditional score function can be decomposed into:
$\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x}|\mathbf{y})=\nabla_{\mathbf{x}}\log
p_{t}(\mathbf{y}\mid\mathbf{x})+\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x}).$
(12)
Here $\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})$ can be replaced by the
learned score function $\mathbf{s}_{\theta}(\mathbf{x},t)$, while for
$\nabla_{\mathbf{x}}\log p_{t}(\mathbf{y}\mid\mathbf{x})$ we adopt the
approximation proposed by DPS chung2022diffusion :
$p_{t}\left(\mathbf{y}\mid\mathbf{x}_{t}\right)\simeq p\left(\mathbf{y}\mid\hat{\mathbf{x}}_{0}(\mathbf{x}_{t})\right),\quad\text{where}\quad\hat{\mathbf{x}}_{0}(\mathbf{x}_{t}):=\mathbb{E}\left[\mathbf{x}_{0}\mid\mathbf{x}_{t}\right].$
(13)
Substituting Eq. 13 and Eq. 12 into Eq. 11, we obtain a conditional reverse-time
SDE that can reconstruct images from corrupted observations.
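Below is a minimal sketch of one step of this conditional reverse process, assuming a VP-SDE, Gaussian measurement noise, and a Tweedie-style estimate of $\hat{\mathbf{x}}_{0}$; the guidance scale `zeta` is a DPS hyperparameter, not a value from the paper.

```python
import torch

def dps_step(score_net, x, t, y, forward_op, dt, beta_min=0.1, beta_max=20.0, zeta=1.0):
    """One Euler-Maruyama step of the conditional reverse-time SDE (Eqs. 11-13)."""
    x = x.detach().requires_grad_(True)
    beta_t = beta_min + t * (beta_max - beta_min)               # g(t)^2
    ab = torch.exp(torch.tensor(-(t * beta_min + 0.5 * t ** 2 * (beta_max - beta_min))))
    score = score_net(x, torch.full(x.shape[:1], t))
    x0_hat = (x + (1.0 - ab) * score) / ab.sqrt()               # Tweedie estimate (Eq. 13)
    residual = ((y - forward_op(x0_hat)) ** 2).sum()
    grad_ll = -zeta * torch.autograd.grad(residual, x)[0]       # approx. grad log p_t(y|x)
    cond_score = score + grad_ll                                # Eq. 12
    drift = -0.5 * beta_t * x - beta_t * cond_score
    x_next = x - drift * dt + (beta_t * dt) ** 0.5 * torch.randn_like(x)
    return x_next.detach()
```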
Figure 2: Training procedure of the conditional flow model and the diffusion
model. We alternately report the amortized inference results from the flow
model and the generative images from the diffusion model during training. The
diffusion model initially captures low-frequency signals, guiding the
amortized inference model. As the amortized inference improves, it produces
better-quality images, further enhancing the diffusion model’s training.
Eventually, both models converge to produce clean images.
## 4 Experiments
In this section, we first demonstrate our method on image denoising and
deblurring tasks using various datasets, including MNIST, CIFAR-10, and
fluorescent microscopic images of tubulins. After that, we apply the models
learned by our methods to solving inverse problems. Further details on neural
network architectures, training settings, and additional reconstruction and
generation samples are provided in the appendix.
### 4.1 Experimental setting
#### Datasets
Our experiments are conducted on three sets of corrupted observations. First,
we performed a toy experiment on denoising MNIST images corrupted by additive
Gaussian noise with $\sigma=0.3$. Next, we tested our method on a deblurring
task using dog images from CIFAR-10, where all images were blurred by a
Gaussian kernel of size 3$\times$3 and a standard deviation of 1.5 pixels.
Finally, for more realistic, higher-resolution images, we attempted to restore
and learn a clean distribution from noisy microscopic images of tubulins,
assuming they are corrupted by additive Gaussian noise with $\sigma=0.2$.
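For reference, here is a sketch of the corruption operators described above (our rendering, assuming torchvision is available for the Gaussian blur):

```python
import torch
import torchvision.transforms.functional as TF

def corrupt_noise(x, sigma=0.3):   # MNIST (sigma=0.3); tubulins use sigma=0.2
    return x + sigma * torch.randn_like(x)

def corrupt_blur(x, kernel_size=3, sigma=1.5):   # CIFAR-10 dog images
    return TF.gaussian_blur(x, kernel_size=[kernel_size, kernel_size],
                            sigma=[sigma, sigma])
```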
#### Evaluation metrics
We use the Fréchet Inception Distance (FID) parmar2022aliased to assess the
generative ability of our learned diffusion model by comparing 5000 image
samples generated from it to reserved test data from the underlying true
distribution. For posterior sampling results, including those from either the
amortized inference network or the conditional diffusion process using the
learned generative model, we compute Peak Signal-to-Noise Ratio (PSNR),
Structural Similarity (SSIM), and Learned Perceptual Image Patch Similarity
(LPIPS) to evaluate the image reconstruction quality.
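As an example of the reconstruction metrics, a minimal PSNR implementation for images scaled to $[0,1]$ (SSIM, LPIPS, and FID are typically computed with standard library implementations):

```python
import torch

def psnr(x, x_ref, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = ((x - x_ref) ** 2).mean()
    return 10.0 * torch.log10(max_val ** 2 / mse)
```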
[Figure 3 image grid; per-panel FID scores]

| Dataset | (a) Ground Truth | (b) Observations | (c) AmbientFlow kelkar2023ambientflow | (d) Ours |
|---|---|---|---|---|
| Noisy MNIST | - | FID=204 | FID=147 | FID=149 |
| Blurred CIFAR-10 dogs | - | FID=109 | FID=272 | FID=209 |
| Noisy microscope images | - | FID=298 | FID=295 | FID=210 |
Figure 3: Image samples from diffusion models learned from corrupted
observations. The three rows show results from models trained on different
datasets: noisy MNIST handwritten digits, blurred CIFAR-10 dog images, and
noisy fluorescent microscope images. The learned diffusion models generate
samples similar to the ground-truth images, significantly outperforming the
baseline, AmbientFlow. Notably, the blurred observations themselves (2nd row,
(b)) already achieve a low FID score, so directly training a diffusion model
on blurred images would also appear to perform well. This is because FID
mainly measures the similarity of smoothed features among image sets. Our
method (2nd row, (d)) nevertheless produces more reasonable and sharper dog
images, despite its FID score not being superior.
#### Baselines
We compare our methods with three baselines that operate under the same
conditions, where no clean signals are available. AmbientFlow
kelkar2023ambientflow employs a similar amortized inference approach but uses
a normalizing flow model kingma2018glow , instead of a diffusion model, to
learn the unconditional clean distribution. AmbientDiffusion daras2023ambient
learns a clean score-based prior by further corrupting the input data, rather
than restoring the clean images first. SURE-Score aali2023solving combines
the SURE loss stein1981estimation to implicitly regularize the weights of the
learned diffusion model, enabling direct training of clean diffusion models
using corrupted observations. We carefully tune the hyperparameters of all the
baselines and report the best results. More information on these neural
networks’ architectures and hyperparameters can be found in Appendix A.
### 4.2 Results
[Figure 4 image grid] Columns: Observation | AmbientFlow kelkar2023ambientflow | Ours | Ground Truth. Panels: (a) Amortized Inference Results of CIFAR-10; (b) Amortized Inference Results of Tubulins.

Figure 4: Amortized inference results on CIFAR-10 deblurring and microscopy
imaging tasks. Our method achieves superior performance compared to
AmbientFlow because of the diffusion model's stronger generative modeling
capabilities over the flow model employed by AmbientFlow.

[Figure 5 image grid] Columns: Observation | AmbientDiffusion daras2023ambient | SURE-Score aali2023solving | AmbientFlow kelkar2023ambientflow | Ours | Ground Truth. Panels: (a) CIFAR-10, Denoising; (b) CIFAR-10, Deblurring; (c) CIFAR-10, Inpainting; (d) CIFAR-10, Deblurring+Denoising.
Figure 5: Posterior samples from the generative model trained on blurred
CIFAR-10 images. On four downstream tasks (denoising, deblurring, inpainting,
and combined denoising and deblurring), our method surpasses the performance
of baseline approaches including AmbientDiffusion, SURE-Score, and
AmbientFlow.
#### Clean distributions learned from corrupted observations
Fig. 3 compares the FID scores of image samples from the baseline method,
AmbientFlow, and our method, FlowDiff. As shown in Fig. 3, our method
significantly outperforms AmbientFlow across all three tasks, learning high-
quality, complex, clean image distributions from blurred or noisy
observations. During training, as illustrated in Fig. 2, we observed that the
diffusion model captures low-frequency signals first, providing guidance for
the amortized inference model. As the amortized inference improves, it
produces better-quality images, which in turn enhances the training of the
diffusion model. By alternately updating the weights of these models, the
diffusion model eventually learns the distribution of the clean data. The
model reset technique described in Sec. 3.3 is used in our training process.
Table 1: Amortized inference results for three different tasks. In all cases, our method produces a better flow model for amortized inference than AmbientFlow, despite using the same flow architectures. This indicates that the diffusion model in our framework provides a superior prior compared to the flow prior in AmbientFlow. The optimal results are highlighted in bold.

| Tasks | Input (PSNR$\uparrow$ / SSIM$\uparrow$ / LPIPS$\downarrow$) | AmbientFlow (PSNR$\uparrow$ / SSIM$\uparrow$ / LPIPS$\downarrow$) | Ours (PSNR$\uparrow$ / SSIM$\uparrow$ / LPIPS$\downarrow$) |
|---|---|---|---|
| MNIST Denoising | 13.57 / 0.210 / 0.591 | 21.18 / 0.394 / 0.177 | 20.73 / 0.399 / 0.160 |
| CIFAR-10 Deblurring | 20.91 / 0.582 / 0.182 | 20.38 / 0.704 / 0.199 | 21.97 / 0.787 / 0.135 |
| Microscopy Imaging | 14.32 / 0.106 / 0.572 | 16.76 / 0.220 / 0.467 | 18.87 / 0.263 / 0.397 |
#### Amortized inference
Fig. 4 and Table 1 present the amortized inference results, i.e., the
posterior samples drawn from the conditional normalizing flow. Our method
produces better reconstructed images compared to AmbientFlow. Furthermore,
results show that the amortized inference models trained by our method achieve
performance comparable to those trained with a clean diffusion prior, again
demonstrating that our method successfully captures the underlying clean data
distribution. More details can be found in Appendix C.
Table 2: Performance on downstream posterior sampling tasks using the CIFAR-10 dog images. The optimal results are highlighted in bold, and the second-best results are underlined. All metrics are computed using 128 posterior samples.

| Tasks | Metrics | Input | AmbientDiffusion | SURE-Score | AmbientFlow | Ours |
|---|---|---|---|---|---|---|
| Denoising | PSNR$\uparrow$ | 14.79 | 21.37 | 19.04 | 14.95 | 21.70 |
| | SSIM$\uparrow$ | 0.444 | 0.743 | 0.636 | 0.356 | 0.782 |
| | LPIPS$\downarrow$ | 0.107 | 0.033 | 0.111 | 0.142 | 0.040 |
| Deblurring | PSNR$\uparrow$ | 22.34 | 16.14 | 23.52 | 15.03 | 23.91 |
| | SSIM$\uparrow$ | 0.724 | 0.761 | 0.829 | 0.305 | 0.880 |
| | LPIPS$\downarrow$ | 0.228 | 0.074 | 0.078 | 0.163 | 0.031 |
| Deblurring + Denoising | PSNR$\uparrow$ | 19.34 | 16.23 | 21.28 | 15.11 | 22.55 |
| | SSIM$\uparrow$ | 0.609 | 0.767 | 0.666 | 0.339 | 0.816 |
| | LPIPS$\downarrow$ | 0.044 | 0.062 | 0.122 | 0.137 | 0.037 |
| Inpainting | PSNR$\uparrow$ | 13.49 | 20.57 | 21.60 | 13.20 | 22.46 |
| | SSIM$\uparrow$ | 0.404 | 0.639 | 0.732 | 0.217 | 0.836 |
| | LPIPS$\downarrow$ | 0.295 | 0.038 | 0.049 | 0.179 | 0.034 |
#### Posterior sampling with learned clean prior
We leverage the learned clean priors to solve various downstream computational
imaging inverse problems, including inpainting, denoising, deblurring, and a
combination of denoising and deblurring. Table 2 and Fig. 5 present
comprehensive experiments on CIFAR-10 across all four tasks. With the
clean priors learned from blurred images as explained in Sec. 4.1,
our method significantly outperforms all the baselines, including
AmbientDiffusion, SURE-Score, and AmbientFlow, across all tasks.
## 5 Conclusion and limitation
In this work, we present FlowDiff, a framework that integrates amortized
inference with state-of-the-art diffusion models to learn clean signal
distributions directly from corrupted observations. Through amortized
inference, our framework incorporates an additional normalizing flow
kingma2018glow that generates clean images from corrupted observations. The
normalizing flow is trained jointly with the DM in a variational inference
framework: the normalizing flow generates clean images for training the DM,
while the DM imposes an image prior to guide reasonable estimations by the
normalizing flow. After training, our method provides both a diffusion prior
that models a complex, high-quality clean image distribution and a normalizing
flow-based amortized inference network that directly generates posterior
samples from corrupted observations. We demonstrate our method through
extensive experiments on various datasets and multiple computational imaging
tasks. We also apply the models learned by our method to solve inverse
problems, including denoising, deblurring, and inpainting.
However, the different learning speeds of diffusion models and normalizing
flows sometimes make the joint training of the two networks unstable. Besides,
normalizing flows often fail to model complex data distributions due to their
limited model capacity. In the future, we plan to explore better optimization
frameworks, such as alternating optimization methods like expectation-
maximization, to achieve more stable training.
## References
* (1) Asad Aali, Marius Arvinte, Sidharth Kumar, and Jonathan I Tamir. Solving inverse problems with score-based generative priors learned from noisy data. arXiv preprint arXiv:2305.01166, 2023.
* (2) Ashish Bora, Eric Price, and Alexandros G Dimakis. Ambientgan: Generative models from lossy measurements. In International conference on learning representations, 2018.
* (3) Charles Bouman and Ken Sauer. A generalized gaussian image model for edge-preserving map estimation. IEEE Transactions on image processing, 2(3):296–310, 1993.
* (4) Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. arXiv preprint arXiv:2209.14687, 2022.
* (5) Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on image processing, 16(8):2080–2095, 2007.
* (6) Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, and Adam Klivans. Ambient diffusion: Learning clean distributions from corrupted data, 2023.
* (7) Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780–8794, 2021.
* (8) Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
* (9) Berthy T Feng and Katherine L Bouman. Efficient bayesian computational imaging with a surrogate score-based prior. arXiv preprint arXiv:2309.01949, 2023.
* (10) Berthy T Feng, Jamie Smith, Michael Rubinstein, Huiwen Chang, Katherine L Bouman, and William T Freeman. Score-based diffusion models as principled priors for inverse imaging. arXiv preprint arXiv:2304.11751, 2023.
* (11) Alexandros Graikos, Nikolay Malkin, Nebojsa Jojic, and Dimitris Samaras. Diffusion models as plug-and-play priors. Advances in Neural Information Processing Systems, 35:14715–14728, 2022.
* (12) Zhiye Guo, Jian Liu, Yanli Wang, Mengrui Chen, Duolin Wang, Dong Xu, and Jianlin Cheng. Diffusion models in bioinformatics and computational biology. Nature reviews bioengineering, 2(2):136–154, 2024.
* (13) Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303, 2022.
* (14) Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840–6851, 2020.
* (15) Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. Advances in Neural Information Processing Systems, 35:8633–8646, 2022.
* (16) Lei Huang, Hengtong Zhang, Tingyang Xu, and Ka-Chun Wong. Mdm: Molecular diffusion model for 3d molecule generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 5105–5112, 2023.
* (17) Bahjat Kawar, Noam Elata, Tomer Michaeli, and Michael Elad. Gsure-based diffusion model training with corrupted data. arXiv preprint arXiv:2305.13128, 2023.
* (18) Bahjat Kawar, Gregory Vaksman, and Michael Elad. Snips: Solving noisy inverse problems stochastically. Advances in Neural Information Processing Systems, 34:21757–21769, 2021.
* (19) Varun A Kelkar, Rucha Deshpande, Arindam Banerjee, and Mark A Anastasio. Ambientflow: Invertible generative models from incomplete, noisy measurements. arXiv preprint arXiv:2309.04856, 2023.
* (20) Diederik Kingma and Ruiqi Gao. Understanding diffusion objectives as the elbo with simple data augmentation. Advances in Neural Information Processing Systems, 36, 2024.
* (21) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* (22) Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. Advances in neural information processing systems, 31, 2018.
* (23) Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis. arXiv preprint arXiv:2009.09761, 2020.
* (24) Kazuki Kuramochi, Kazunori Akiyama, Shiro Ikeda, Fumie Tazaki, Vincent L Fish, Hung-Yi Pu, Keiichi Asada, and Mareki Honma. Superresolution interferometric imaging with sparse modeling using total squared variation: application to imaging the black hole shadow. The Astrophysical Journal, 858(1):56, 2018.
* (25) Xiang Li, John Thickstun, Ishaan Gulrajani, Percy S Liang, and Tatsunori B Hashimoto. Diffusion-lm improves controllable text generation. Advances in Neural Information Processing Systems, 35:4328–4343, 2022.
* (26) Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2837–2845, 2021.
* (27) Weijian Luo. A comprehensive survey on knowledge distillation of diffusion models. arXiv preprint arXiv:2304.04262, 2023.
* (28) Kevin P Murphy. Machine learning: A probabilistic perspective (adaptive computation and machine learning series). The MIT Press: London, UK, 2018.
* (29) Eva Nogales and Sjors HW Scheres. Cryo-em: a unique tool for the visualization of macromolecular complexity. Molecular cell, 58(4):677–689, 2015.
* (30) Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On aliased resizing and surprising subtleties in gan evaluation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11410–11420, 2022.
* (31) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022.
* (32) Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pages 2256–2265. PMLR, 2015.
* (33) Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum likelihood training of score-based diffusion models. Advances in neural information processing systems, 34:1415–1428, 2021.
* (34) Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019.
* (35) Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon. Solving inverse problems in medical imaging with score-based generative models. arXiv preprint arXiv:2111.08005, 2021.
* (36) Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.
* (37) Charles M Stein. Estimation of the mean of a multivariate normal distribution. The annals of Statistics, pages 1135–1151, 1981.
* (38) He Sun and Katherine L Bouman. Deep probabilistic imaging: Uncertainty quantification and multi-modal solution characterization for computational imaging. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 2628–2637, 2021.
* (39) Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, and Dong Chen. Make-it-3d: High-fidelity 3d creation from a single image with diffusion prior. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22819–22829, 2023.
## Appendix A Neural network architectures and hyper-parameters
We conducted all our experiments using the same neural network architectures,
adjusting only the number of parameters according to the complexity of the
data distribution. The specific numbers are shown in Table 3. For our
amortized inference model, we use the same conditional invertible network as
AmbientFlow kelkar2023ambientflow and employ DDPM ho2020denoising for
learning the clean distribution. To balance the learning speeds between the
flow model and the diffusion model, we set the learning rate of the diffusion
model to be smaller. Specifically, for the MNIST and CIFAR-10 experiments, the
flow network’s learning rate is $1e-3$ and the diffusion model’s learning rate
is $1e-4$. For the microscopic experiment, we train the normalizing flow with
a learning rate of $2e-5$ and the diffusion model with a learning rate of
$1e-5$. We use Adam kingma2014adam as the optimizer for our models. All the
experiments are conducted on an NVIDIA A800 GPU workstation.
Table 3: The number of model parameters for each experiment. We adjusted the size of our models based on the complexity of data distributions as well as the resolution of input images.

| | MNIST | CIFAR-10 | Microscopic images of tubulins |
|---|---|---|---|
| Flow | 8M | 11.5M | 23.9M |
| Diffusion | 6M | 35.7M | 17.2M |
[Figure 6 image grid; per-panel PSNR values]

| Observations | BM3D dabov2007image | AmbientFlow kelkar2023ambientflow | Ours | Ground Truth |
|---|---|---|---|---|
| PSNR: 18.16 | PSNR: 20.77 | PSNR: 17.74 | PSNR: 22.87 | - |
| PSNR: 18.00 | PSNR: 20.38 | PSNR: 17.33 | PSNR: 22.99 | - |
| PSNR: 18.12 | PSNR: 20.28 | PSNR: 16.87 | PSNR: 22.63 | - |
| PSNR: 17.94 | PSNR: 16.01 | PSNR: 17.10 | PSNR: 22.56 | - |
Figure 6: Posterior samples from the generative model trained on noisy
microscopic images. On the downstream image denoising task, our approach
outperforms BM3D dabov2007image and AmbientFlow kelkar2023ambientflow by
effectively removing noise while preserving intricate structural details of
microscopic images of tubulins.
## Appendix B Additional posterior sampling results
Table 4: Performance on downstream posterior sampling tasks using MNIST and microscopic images. The optimal results are highlighted in bold.

| Method | MNIST (PSNR$\uparrow$ / SSIM$\uparrow$ / LPIPS$\downarrow$) | Microscopic images of tubulins (PSNR$\uparrow$ / SSIM$\uparrow$ / LPIPS$\downarrow$) |
|---|---|---|
| Observations | 13.36 / 0.344 / 0.103 | 18.89 / 0.477 / 0.252 |
| BM3D | 13.57 / 0.427 / 0.088 | 21.28 / 0.542 / 0.073 |
| AmbientFlow | 17.67 / 0.476 / 0.047 | 15.35 / 0.120 / 0.537 |
| Ours | 20.97 / 0.618 / 0.053 | 23.33 / 0.687 / 0.111 |
In this section, we present additional results on posterior sampling for the
denoising task using diffusion models trained on corrupted MNIST and
microscopic images. Table 4 demonstrates that our method surpasses both the
classical method, BM3D dabov2007image , and the deep learning baseline,
AmbientFlow, in terms of PSNR and SSIM. Fig. 6 showcases the posterior samples
obtained from denoising problems using generative models trained on noisy
microscopic images. Our method achieves superior denoising performance on
microscopic images of tubulins, preserving detailed cellular structures
visually. These findings underscore the significant potential of our framework
in reconstructing clean fluorescent microscopic images, particularly in
scenarios where acquiring clean signals is impractical or cost-prohibitive.
## Appendix C Comparison with amortized inference models trained with a clean
diffusion prior
We provide an additional comparison of our method to flow models trained with
a clean diffusion prior. The clean diffusion priors are trained on the clean
images of MNIST and CIFAR-10. As illustrated by Fig. 7 and Table 5, the
performance of the amortized inference networks trained using our method
achieves similar results to those trained with clean diffusion priors.
Table 5: Amortized inference results for our method and for a flow trained with a clean diffusion prior.

| Method | MNIST (PSNR$\uparrow$ / SSIM$\uparrow$ / LPIPS$\downarrow$) | CIFAR-10 (PSNR$\uparrow$ / SSIM$\uparrow$ / LPIPS$\downarrow$) |
|---|---|---|
| Observations | 13.57 / 0.210 / 0.591 | 20.91 / 0.582 / 0.182 |
| Clean Prior | 24.61 / 0.482 / 0.064 | 22.37 / 0.787 / 0.117 |
| Ours | 20.73 / 0.399 / 0.160 | 21.97 / 0.787 / 0.135 |
[Figure 7 image grid] Columns: Observation | Ours | Clean Prior | Ground Truth. Panels: (a) Amortized Inference Results of MNIST; (b) Amortized Inference Results of CIFAR-10.
Figure 7: Comparative analysis of amortized inference networks trained with
our method and with clean diffusion priors.
# Quantum Regularized Least Squares
Shantanav Chakraborty, Center for Quantum Science and Technology, IIIT
Hyderabad, Telangana 500032, India; Center for Security, Theory and
Algorithmic Research, IIIT Hyderabad, Telangana 500032, India.
<EMAIL_ADDRESS>
Aditya Morolia, Center for Quantum Science and Technology, IIIT Hyderabad,
Telangana 500032, India; Center for Computational Natural Sciences and
Bioinformatics, IIIT Hyderabad, Telangana 500032, India.
<EMAIL_ADDRESS>
Anurudh Peduri, Faculty of Computer Science, Ruhr University Bochum, 44801
Bochum, Germany; Center for Quantum Science and Technology, IIIT Hyderabad,
Telangana 500032, India; Center for Security, Theory and Algorithmic Research,
IIIT Hyderabad, Telangana 500032, India. <EMAIL_ADDRESS>
###### Abstract
Linear regression is a widely used technique to fit linear models and finds
widespread applications across different areas such as machine learning and
statistics. In most real-world scenarios, however, linear regression problems
are often ill-posed or the underlying model suffers from overfitting, leading
to erroneous or trivial solutions. This is often dealt with by adding extra
constraints, known as regularization. In this paper, we use the frameworks of
block-encoding and quantum singular value transformation (QSVT) to design the
first quantum algorithms for quantum least squares with general
$\ell_{2}$-regularization. These include regularized versions of quantum
ordinary least squares, quantum weighted least squares, and quantum
generalized least squares. Our quantum algorithms substantially improve upon
prior results on quantum ridge regression (polynomial improvement in the
condition number and an exponential improvement in accuracy), which is a
particular case of our result.
To this end, we assume approximate block-encodings of the underlying matrices
as input and use robust QSVT algorithms for various linear algebra operations.
In particular, we develop a variable-time quantum algorithm for matrix
inversion using QSVT, where we use quantum singular value discrimination as a
subroutine instead of gapped phase estimation. This ensures that substantially
fewer ancilla qubits are required for this procedure than in prior results. Owing
to the generality of the block-encoding framework, our algorithms are
applicable to a variety of input models and can also be seen as improved and
generalized versions of prior results on standard (non-regularized) quantum
least squares algorithms.
###### Contents
1. 1 Introduction
1. 1.1 Linear regression with $\ell_{2}$-regularization
2. 1.2 Prior work
3. 1.3 Our contributions
2. 2 Preliminaries
1. 2.1 Notation
2. 2.2 Quantum Input Models
1. 2.2.1 Quantum Data Structure Input Model
2. 2.2.2 Sparse Access Input Model
3. 2.3 Quantum Singular Value Transformation
4. 2.4 Variable Time Amplitude Amplification
3. 3 Algorithmic Primitives
1. 3.1 Arithmetic with Block-Encoded Matrices
2. 3.2 Robust Quantum Singular Value Discrimination
3. 3.3 Variable-Time Quantum Algorithm for Matrix Inversion using QSVT
4. 3.4 Negative Powers of Matrices using QSVT
4. 4 Quantum Least Squares with General $\ell_{2}$-Regularization
1. 4.1 Quantum Ordinary Least Squares
2. 4.2 Quantum Weighted And Generalized Least Squares
1. 4.2.1 Quantum Weighted Least Squares
2. 4.2.2 Quantum Generalized Least Squares
5. 5 Future Directions
6. A Algorithmic Primitives
1. A.1 Arithmetic with Block-Encoded Matrices
## 1 Introduction
The problem of fitting a theoretical model to a large set of experimental data
appears across various fields ranging from the natural sciences to machine
learning and statistics [35]. Linear regression is one of the most widely used
procedures for achieving this. By assuming that, for the underlying model,
there exists a linear relationship between a dependent variable and one or
more explanatory variables, linear regression constructs the best linear fit
to the series of data points. Usually, it does so while minimizing the sum of
squared errors, a procedure known as the least squares method.
In other words, suppose that we are given $N$ data points
$\{(a_{i},b_{i})\}_{i=1}^{N}$, where $a_{i}\in\mathbb{R}^{d}$ and
$b_{i}\in\mathbb{R}$ for all $i$. The assumption is that
each $b_{i}$ is linearly dependent on $a_{i}$ up to some random noise of mean
$0$. Suppose $A$ is the data matrix of dimension $N\times d$, such that its
$i^{\mathrm{th}}$ row is the vector $a_{i}$ and $b\in\mathbb{R}^{N}$ such that
$b=(b_{1},\cdots,b_{N})^{T}$. Then the procedure, known as ordinary least
squares, obtains a vector $x\in\mathbb{R}^{d}$ that minimizes the objective
function $\left|\left|Ax-b\right|\right|^{2}_{2}$. This problem has a closed-
form solution given by $x=(A^{T}A)^{-1}A^{T}b=A^{+}b$, where $A^{+}$ denotes
the Moore-Penrose inverse of the matrix $A$. Thus computationally, finding the
best fit by linear regression reduces to finding the pseudoinverse of a matrix
that represents the data, a task that is expensive for classical machines for
large data sets.
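For concreteness, here is a small numpy sketch of this closed form; the matrix sizes and data are illustrative assumptions.

```python
# The OLS closed form x = (A^T A)^{-1} A^T b = A^+ b, computed three ways,
# assuming a full-column-rank data matrix A.
import numpy as np

rng = np.random.default_rng(1)
N, d = 100, 5
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)  # noisy linear data

x_normal = np.linalg.solve(A.T @ A, A.T @ b)     # normal equations
x_pinv = np.linalg.pinv(A) @ b                   # Moore-Penrose pseudoinverse
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # dedicated least squares solver

assert np.allclose(x_normal, x_pinv) and np.allclose(x_pinv, x_lstsq)
```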
In practice, however, least squares regression runs into problems such as
overfitting. For instance, the solution might fit most data points, even those
corresponding to random noise. Furthermore, the linear regression problem may
also be ill-posed, for instance, when the number of variables exceeds the
number of data points rendering it impossible to fit the data. These issues
come up frequently with linear regression models and result in erroneous or
trivial solutions. Furthermore, another frequent occurrence is that the data
matrix $A$ has linearly dependent columns. In this scenario, the matrix
$A^{T}A$ is not full rank and therefore is not invertible.
Regularization is a widely used technique to remedy these problems, not just
for linear regression but for inverse problems, in general [11]. In the
context of linear regression, broadly, this involves adding a penalty term to
the objective function, which constrains the solution of the regression
problem. For instance, in the case of $\ell_{2}$-regularization, the objective
is to obtain $x$ that minimizes
$\norm{Ax-b}^{2}_{2}+\lambda\norm{Lx}^{2}_{2}$ (1)
where $L$ is an appropriately chosen penalty matrix (or regularization matrix)
of dimension $N\times d$ and $\lambda>0$ is the regularization parameter, an
appropriately chosen constant. This regularization technique is known as
general $\mathit{\ell_{2}}$-regularization or Tikhonov regularization in the
literature [17, 18, 4, 12, 43]. It is a generalization of ridge regression
which corresponds to the case when $L$ is the identity matrix [20, 33, 42].
The closed-form solution of the general $\ell_{2}$-regularized ordinary least
squares problem is given by
$x=\left(A^{T}A+\lambda L^{T}L\right)^{-1}A^{T}b.$ (2)
A straightforward observation is that even when $A^{T}A$ is singular, a
judicious choice of the penalty matrix $L$ can ensure that the effective
condition number (ratio of the maximum and the minimum non-zero singular
values) of the overall matrix is finite and $A^{T}A+\lambda L^{T}L$ is
invertible.
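The following numpy sketch, with illustrative sizes and data, checks exactly this observation: $A^{T}A$ is made singular by construction, yet the regularized system remains solvable.

```python
# The general l2-regularized solution x = (A^T A + lam * L^T L)^{-1} A^T b
# on a data matrix with linearly dependent columns.
import numpy as np

rng = np.random.default_rng(2)
N, d = 50, 4
A = rng.standard_normal((N, d))
A[:, 3] = A[:, 0] + A[:, 1]          # dependent column, so rank(A) < d
b = rng.standard_normal(N)
lam, L = 0.1, np.eye(d)              # L = I recovers ridge regression

assert np.linalg.matrix_rank(A.T @ A) < d            # A^T A is singular here
x = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)  # but this is well defined
```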
In this paper, we develop quantum algorithms for linear regression with
general $\ell_{2}$-regularization. If the optimal solution is
$x=(x_{1},\cdots,x_{d})^{T}$, then our quantum algorithm outputs a quantum
state that is $\delta$-close to $\ket{x}=\sum_{j=1}^{d}x_{j}\ket{j}/\norm{x}$,
assuming access to the matrices $A,L$, and the quantum state $\ket{b}$ via
general quantum input models.
In several practical scenarios, depending on the underlying theoretical model,
generalizations of the ordinary least squares (OLS) technique are more useful
to fit the data. For instance, certain samples may be of more importance (and
therefore have more weight) than the others, in which case weighted least
squares (WLS) is preferred. Generalized least squares (GLS) is used when the
underlying samples obtained are correlated. These techniques also suffer from
the issues commonplace with OLS, warranting the need for regularization [43].
Consequently, we also design algorithms for quantum WLS with general
$\ell_{2}$-regularization and quantum GLS with general
$\ell_{2}$-regularization.
Organization of the paper: In the remainder of Section 1, we formally describe
$\ell_{2}$-regularized versions of OLS, WLS, and GLS (Section 1.1), discuss
prior and related work (Section 1.2), and outline our contributions and
results (Section 1.3). In Section 2, we briefly outline the framework of
block-encoding and quantum input models that are particular instances of it
(Section 2.2). We also briefly introduce quantum singular value transformation
(QSVT) (Section 2.3) and variable time amplitude amplification (VTAA) (Section
2.4). Following this, in Section 3, we develop several algorithmic primitives
involving arithmetic of block-encodings (Section 3.1), quantum singular value
discrimination (Section 3.2) and quantum linear algebra using QSVT (Section
3.3). These are the technical building blocks for designing our quantum
regularized regression algorithms. Using these algorithmic primitives, we
design quantum algorithms for the quantum least squares with
$\ell_{2}$-regularization in Section 4. Finally, we conclude by discussing
some possible future research directions in Section 5.
### 1.1 Linear regression with $\ell_{2}$-regularization
Suppose we are given data points $\{(a_{i},b_{i})\}_{i=1}^{N}$, where
$a_{i}\in\mathbb{R}^{d}$ and $b_{i}\in\mathbb{R}$ for all $i$, such that
$(a_{i},b_{i})\sim_{\mathrm{i.i.d.}}\mathcal{D}$, i.e. they are sampled i.i.d.
from some unknown distribution $\mathcal{D}$ under which the dependence of $b$
on $a$ is assumed to be linear. We want to find a vector $x\in\mathbb{R}^{d}$
such that the inner product $x^{T}a_{j}$ is a good predictor for the target
$b_{j}$ for an unseen $a_{j}$. This can be done by
minimizing the total squared loss over the given data points,
$\mathcal{L}_{O}:=\sum_{j}(x^{T}a_{j}-b_{j})^{2},$ (3)
leading to the ordinary least squares (OLS) optimization problem. The task
then is to find $x\in\mathbb{R}^{d}$ that minimizes $\norm{Ax-b}_{2}^{2}$,
where $A$ is the $N\times d$ data matrix such that the $i^{th}$ row of $A$ is
$a_{i}$, and the $i^{th}$ element of the vector $b$ is $b_{i}$. Assuming that
$A^{T}A$ is non-singular, the optimal $x$ satisfies
$x=(A^{T}A)^{-1}A^{T}b=A^{+}b,$ (4)
which corresponds to solving a linear system of equations.
Suppose that out of the samples present in the data, we have higher confidence
in some of them than others. In such a scenario, the $i^{\mathrm{th}}$
observation can be assigned a weight $w_{i}\in\mathbb{R}$. This leads to a
generalization of the OLS problem to weighted least squares (WLS). In order to
obtain the best linear fit, the task is now to minimize the weighted version
of the loss
$\mathcal{L}_{W}:=\sum_{j}w_{j}(x^{T}a_{j}-b_{j})^{2}.$ (5)
As before, assuming $A^{T}WA$ is non-singular, the above loss function has the
following closed-form solution:
$x=(A^{T}WA)^{-1}A^{T}Wb,$ (6)
where $W$ is a diagonal matrix with $w_{i}$ being the $i^{\mathrm{th}}$
diagonal element.
There can arise scenarios in which pairs of samples are correlated. For
generalized least squares (GLS), the presumed correlations between pairs of
samples are given by a symmetric, non-singular covariance matrix $\Omega$. The
objective is to find the vector $x$ that minimizes
$\mathcal{L}_{\Omega}:=\sum_{i,j}(\Omega^{-1})_{i,j}(x^{T}a_{i}-b_{i})(x^{T}a_{j}-b_{j}).$
(7)
Similarly, the closed-form solution for GLS is given by
$x=(A^{T}\Omega^{-1}A)^{-1}A^{T}\Omega^{-1}b.$ (8)
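A compact numpy sketch of both closed forms follows; the weights and the positive-definite construction of $\Omega$ are assumptions made for the example.

```python
# WLS: x = (A^T W A)^{-1} A^T W b, and GLS: x = (A^T Omega^{-1} A)^{-1} A^T Omega^{-1} b.
import numpy as np

rng = np.random.default_rng(3)
N, d = 60, 3
A, b = rng.standard_normal((N, d)), rng.standard_normal(N)

W = np.diag(rng.uniform(0.5, 2.0, N))            # per-sample confidence weights
x_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

M = rng.standard_normal((N, N))
Omega = M @ M.T + N * np.eye(N)                  # symmetric positive-definite covariance
Oinv_A, Oinv_b = np.linalg.solve(Omega, A), np.linalg.solve(Omega, b)
x_gls = np.linalg.solve(A.T @ Oinv_A, A.T @ Oinv_b)
```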
As mentioned previously, in several practical scenarios, the linear regression
problem may be ill-posed or suffer from overfitting. Furthermore, the data may
be such that some of the columns of the matrix $A$ are linearly dependent.
This shrinks the rank of $A$, and consequently of the matrix $A^{T}A$,
rendering it singular and, therefore non-invertible. Recall that the closed-
form solution of OLS exists only if $A^{T}A$ is non-singular, which is no
longer the case. Such scenarios arise even for WLS and GLS problems [43].
In such cases, one resorts to regularization to deal with them. Let
$\mathcal{L}$ be the loss function to be minimized for the underlying least
squares problem (such as OLS, WLS, or GLS). Then general
$\ell_{2}$-regularization (Tikhonov regularization) involves an additional
penalty term so that the objective now is to find the vector
$x\in\mathbb{R}^{d}$ that minimizes
$\mathcal{L}+\lambda\norm{Lx}^{2}_{2}.$ (9)
Here $\lambda$, known as the regularization parameter, is a positive constant
that controls the size of the vector $x$, while $L$ is known as the penalty
matrix (or regularization matrix) that defines a (semi)norm on the solution
through which the size is measured. The solution to the Tikhonov
regularization problem also has a closed-form solution. For example, in the
OLS problem, when $\mathcal{L}=\mathcal{L}_{O}$, we have that
$x=(A^{T}A+\lambda L^{T}L)^{-1}A^{T}b.$ (10)
It is worth noting that when $L=I$, the $\ell_{2}$-regularized OLS problem is
known as ridge regression. For the unregularized OLS problem, the singular
values $\sigma_{j}$ of $A$ are mapped to $1/\sigma_{j}$. The penalty term due
to $\ell_{2}$-regularization results in a shrinkage of the singular values.
This implies that even in the scenario where $A$ has linearly dependent
columns (some $\sigma_{j}=0$) and $(A^{T}A)^{-1}$ does not exist, the inverse
$(A^{T}A+\lambda L^{T}L)^{-1}$ is well defined for $\lambda>0$ and any
positive-definite $L$. Throughout this article, we refer to such an $L$ (which
is positive definite) as a good regularizer. The penalty matrix $L$ allows for
penalizing each regression parameter differently and leads to joint shrinkage
among the elements of $x$. It also determines the rate and direction of
shrinkage. In the special case of ridge regression, as $L=I$, the penalty
shrinks each element of $x$ equally along the unit vectors $e_{j}$. Also note
that by definition, $I$ is a good regularizer.
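This shrinkage can be made concrete for ridge regression: writing $A=U\,\mathrm{diag}(\sigma)\,V^{T}$, the regularized solution applies $\sigma_{j}/(\sigma_{j}^{2}+\lambda)$ to each singular value in place of $1/\sigma_{j}$, which stays finite even as $\sigma_{j}\to 0$. A numerical check of this standard identity, with illustrative data:

```python
# Ridge regression (L = I) in the SVD basis: the map 1/sigma_j is replaced
# by sigma_j / (sigma_j^2 + lam).
import numpy as np

rng = np.random.default_rng(4)
A, b, lam = rng.standard_normal((30, 5)), rng.standard_normal(30), 0.5

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((s / (s**2 + lam)) * (U.T @ b))
x_direct = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)
assert np.allclose(x_svd, x_direct)
```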
Closed-form expressions can also be obtained for the WLS and the GLS problem
($\mathcal{L}=\mathcal{L}_{W},\mathcal{L}_{\Omega}$ respectively), and finding
the optimal solution $x$ reduces to solving a linear system. The quantum
version of these algorithms output a quantum state that is $\epsilon$-close
$\ket{x}=\sum_{j}x_{j}\ket{j}/\norm{x}$.
Throughout this work, while designing our quantum algorithms, we shall assume
access (via a block-encoding) to the matrices $A$, $W$, $\Omega$, and $L$ and
knowledge of the parameter $\lambda$. Classically, however, the regularization
matrix $L$ and the optimal parameter $\lambda$ are obtained via several
heuristic techniques [18, 12, 43].
### 1.2 Prior work
Quantum algorithms for (unregularized) linear regression was first developed
by Wiebe et al. [45], wherein the authors made use of the HHL algorithm for
solving a linear system of equations [19]. Their algorithm assumes query
access to a sparse matrix $A$ (sparse-access-model) and to a procedure to
prepare $\ket{b}=\sum_{i}b_{i}\ket{i}$. They first prepare a quantum state
proportional to $A^{T}\ket{b}$, and then use the HHL algorithm to apply the
operator $(A^{T}A)^{-1}$ to it. Overall the algorithm runs in a time scaling
as $\kappa^{6}_{A}$ (the condition number of $A$) and inverse polynomial in
the accuracy $\delta$. Subsequent results have considered the problem of
obtaining classical outputs for linear regression. For instance, in Ref. [44],
$A^{+}$ is directly applied to the quantum state $\ket{b}$, followed by
amplitude estimation to obtain the entries of $x$. On the other hand, Ref.
[39] used the techniques of quantum principal component analysis in [27] to
predict a new data point for the regression problem. These algorithms also
work in the sparse access model and run in a time that scales as
$\mathrm{poly}\left(\kappa,1/\delta\right)$. Kerenidis and Prakash [23]
provided a quantum algorithm for the WLS problem wherein they used a classical
data structure to store the entries of $A$ and $W$. Furthermore, they assumed
QRAM access to this data structure [36, 22] that would allow the preparation
of quantum states proportional to the entries of $A$ and $W$ efficiently. They
showed that in this input model (quantum data structure model), an iterative
quantum linear systems algorithm can prepare $\ket{x}$ in time
$\widetilde{O}(\mu\kappa^{3}/\delta)$, where $\kappa$ is the condition number
of the matrix $A^{T}\sqrt{W}$ while $\mu=\norm{\sqrt{W}A}_{F}$. Chakraborty et
al. [7] applied the framework of block-encoding along with (controlled)
Hamiltonian simulation of Low and Chuang [26] to design improved quantum
algorithms for solving linear systems. Quantum algorithms developed in the
block-encoding framework are applicable to a wide variety of input models,
including the sparse access model and the quantum data structure model of
[23]. They applied their quantum linear systems solver to develop quantum
algorithms for quantum weighted least squares and generalized least squares.
Their quantum algorithm for WLS has a complexity that is in
$\widetilde{O}\left(\alpha\kappa\mathrm{polylog}(Nd/\delta)\right)$, where
$\alpha=s$, the sparsity of the matrix $A^{T}\sqrt{W}$ in the sparse access
model while $\alpha=\norm{\sqrt{W}A}_{F}$, for the quantum data structure
input model. For GLS, their quantum algorithm outputs $\ket{x}$ in cost
$\widetilde{O}\left(\kappa_{A}\kappa_{\Omega}(\alpha_{A}+\alpha_{\Omega}\kappa_{\Omega})\mathrm{polylog}(1/\delta)\right)$,
where $\kappa_{A}$ and $\kappa_{\Omega}$ are the condition numbers of $A$ and
$\Omega$ respectively while $\alpha_{A}$ and $\alpha_{\Omega}$ are parameters
that depend on how the matrices $A$ and $\Omega$ are accessed in the
underlying input model.
While quantum linear regression algorithms have been designed and subsequently
improved over the years, quantum algorithms for regularized least squares have
not been developed extensively. Yu et al. [46] developed a quantum algorithm
for ridge regression in the sparse access model using the LMR scheme [27] for
Hamiltonian simulation and quantum phase estimation, which they then used to
determine the optimal value of the parameter $\lambda$. Their algorithm to
output $\ket{x}$ has a cubic dependence on both $\kappa$ and $1/\delta$. They
use this as a subroutine to determine a good value of $\lambda$. A few other
works [40, 10] have considered the quantum ridge regression problem in the
sparse access model, all of which can be implemented with
$\mathrm{poly}(\kappa,1/\delta)$ cost.
Recently, Chen and de Wolf designed quantum algorithms for lasso
($\ell_{1}$-regularization) and ridge regressions from the perspective of
empirical loss minimization [5]. For both lasso and ridge, their quantum
algorithms output a classical vector $\widetilde{x}$ whose loss (mean squared
error) is $\delta$-close to the minimum achievable loss. In this context, they
prove a quantum lower bound of $\Omega(d/\delta)$ for ridge regression which
indicates that in their setting, the dependence on $d$ cannot be improved on a
quantum computer (the classical lower bound is also linear in $d$ and there
exists a matching upper bound). Note that $\widetilde{x}$ is not necessarily
close to the optimal solution $x$ of the corresponding least squares problem,
even though their respective loss values are. Moreover, their result (of
outputting a classical vector $\widetilde{x}$) is incomparable to our
objective of obtaining a quantum state encoding the optimal solution to the
regularized regression problem.
Finally, Gilyén et al. obtained a “dequantized” classical algorithm for ridge
regression assuming norm squared access to input data similar to the quantum
data structure input model [15]. Furthermore, similar to the quantum setting
where the output is the quantum state $\ket{x}=\sum_{j}x_{j}\ket{j}/\norm{x}$
instead of $x$ itself, their algorithm obtains samples from the distribution
$x_{j}^{2}/\norm{x}^{2}$. For the regularization parameter
$\lambda=O\left(\norm{A}\norm{A}_{F}\right)$, the running time of their
algorithm is in $\widetilde{O}\left(\kappa^{12}r_{A}^{3}/\delta^{4}\right)$,
where $r_{A}$ is the rank of $A$. Their result (and several prior results)
does not have a polynomial dependence on the dimension of $A$ and therefore
rules out the possibility of generic exponential quantum speedup (except in
$\delta$) in the quantum data structure input model.
### 1.3 Our contributions
In this work, we design the first quantum algorithms for OLS, WLS, and GLS
with general $\ell_{2}$-regularization. We use the Quantum Singular Value
Transformation (QSVT) framework introduced by Gilyén et al. [14]. We assume
that the relevant matrices are provided as input in the block-encoding model,
in which access to an input matrix $A$ is given by a unitary $U_{A}$ whose
top-left block is (close to) $A/\alpha$. The parameter $\alpha$ takes specific
values depending on the underlying input model. QSVT then allows us to
implement nearly arbitrary polynomial transformations to a block of a unitary
matrix using a series of parameterized, projector-controlled rotations
(quantum signal processing [25]).
More precisely, given approximate block-encodings of the data matrix $A$ and
the regularizing matrix $L$, and a unitary procedure to prepare the state
$\ket{b}$, our quantum algorithms output a quantum state that is
$\delta$-close to $\ket{x}$, the quantum state proportional to the
$\ell_{2}$-regularized ordinary least squares (or weighted least squares or
generalized least squares problem). We briefly summarize the query
complexities of our results in Table 1.
For the OLS problem with general $\ell_{2}$-regularization (Section 4.2,
Theorem 32), we design a quantum algorithm which given an
$(\alpha_{A},a_{A},\varepsilon_{A})$-block-encoding of $A$ (implemented in
cost $T_{A}$), an $(\alpha_{L},a_{L},\varepsilon_{L})$-block-encoding of $L$
(implemented in cost $T_{L}$), a parameter $\lambda>0$, and a procedure to
prepare $\ket{b}$ (in cost $T_{b}$), outputs a quantum state which is
$\delta$-close to $\ket{x}$. The algorithm has a cost
$\order{\kappa\log\kappa\left(\left(\frac{\alpha_{A}+\sqrt{\lambda}\alpha_{L}}{\norm{A}+\sqrt{\lambda}\norm{L}}\right)\log\left(\frac{\kappa}{\delta}\right)\left(T_{A}+T_{L}\right)+T_{b}\right)}$
where $\kappa$ can be thought of as a modified condition number, related to
the effective condition numbers of $A$ and $L$. When $L$ is a good
regularizer, this is given by the expression
$\kappa=\kappa_{L}\left(1+\frac{\norm{A}}{\sqrt{\lambda}\norm{L}}\right).$
Notice that $\kappa$ is independent of $\kappa_{A}$, the condition number of
the data matrix $A$, which underscores the advantage of regularization. The
parameters $\alpha_{A}$ and $\alpha_{L}$ take specific values depending on the
underlying input model. For the sparse access input model, $\alpha_{A}=s_{A}$
and $\alpha_{L}=s_{L}$, the respective sparsities of the matrices $A$ and $L$.
On the other hand for the quantum data structure input model,
$\alpha_{A}=\norm{A}_{F}$ and $\alpha_{L}=\norm{L}_{F}$. Consequently, the
complexity of Quantum Ridge Regression can be obtained by substituting $L=I$
in the above complexity as
$\order{\log\kappa\left(\frac{\alpha_{A}}{\sqrt{\lambda}}\log\left(\frac{\kappa}{\delta}\right)T_{A}+\kappa
T_{b}\right)}$
where $\kappa=1+\norm{A}/\sqrt{\lambda}$, by noting that the block-encoding of
$I$ is trivial while the norm and condition number of the identity matrix is
one. For this problem of quantum ridge regression, our quantum algorithms are
substantially better than prior results [40, 46, 10], exhibiting a polynomial
improvement in $\kappa$ and an exponential improvement in $1/\delta$.
For the $\ell_{2}$-regularized GLS problem (Section 4, Theorem 42), we design
a quantum algorithm that along with approximate block-encodings of $A$ and
$L$, takes as input an
$(\alpha_{\Omega},a_{\Omega},\varepsilon_{\Omega})$-block-encoding of the
matrix $\Omega$ (implementable at a cost of $T_{\Omega}$) to output a state
$\delta$-close to $\ket{x}$ at a cost of
$\order{\kappa\sqrt{{\kappa_{\Omega}}}\log\kappa\left(\left(\frac{\alpha_{A}}{\norm{A}}T_{A}+\frac{\alpha_{L}}{\norm{L}}T_{L}+\frac{\alpha_{\Omega}{\kappa_{\Omega}}}{{\norm{\Omega}}}T_{\Omega}\right)\log^{3}\left(\frac{\kappa{\kappa_{\Omega}}\norm{A}\norm{L}}{\delta{\norm{\Omega}}}\right)+T_{b}\right)}$
In the above complexity, when $L$ is a good regularizer, the modified
condition number $\kappa$ is defined as
$\kappa=\kappa_{L}\left(1+\frac{\sqrt{{\kappa_{\Omega}}}\norm{A}}{\sqrt{\lambda\norm{\Omega}}\norm{L}}\right)$
The WLS problem is a particular case of GLS, wherein the matrix $\Omega$ is
diagonal. However, we show that better complexities for the
$\ell_{2}$-regularized WLS problem can be obtained if we assume QRAM access to
the diagonal entries of $W$ (Section 4, Theorem 39 and Theorem 40).
Table 1 summarizes the complexities of our algorithms for quantum linear
regression with general $\ell_{2}$-regularization. For better exposition, here
we assume that $\norm{A},\norm{L},\norm{\Omega}$ and
$\lambda=\Theta\left(1\right)$. For the general expression of the
complexities, we refer the readers to Section 4.
Problem | Unregularized | $\mathbf{\ell_{2}}$-Regularized
---|---|---
Quantum OLS | $\widetilde{\mathcal{O}}\left(\alpha_{A}\kappa_{A}\log\left(1/\delta\right)\right)$ | $\widetilde{\mathcal{O}}\left((\alpha_{A}+\alpha_{L})\kappa_{L}\log\left(1/\delta\right)\right)$
Quantum GLS | $\widetilde{\mathcal{O}}\left(\left(\alpha_{A}+\alpha_{\Omega}\kappa_{\Omega}\right)\kappa_{A}\sqrt{\kappa_{\Omega}}\log^{3}\left(1/\delta\right)\right)$ | $\widetilde{\mathcal{O}}\left(\left(\alpha_{A}+\alpha_{L}+\alpha_{\Omega}\kappa_{\Omega}\right)\kappa_{L}\sqrt{\kappa_{\Omega}}\log^{3}\left(1/\delta\right)\right)$
Table 1: Complexity of quantum linear regression algorithms with and without
general $\ell_{2}$-regularization. All of these algorithms require only
$\Theta({\log\kappa})$ additional qubits.
In order to derive our results, we take advantage of the ability to
efficiently perform arithmetic operations on block-encoded matrices, as
outlined in Section 3. Along with this, we use QSVT to perform linear
algebraic operations on block-encoded matrices. To this end, we adapt the
results in Refs. [14, 34] to our setting. One of our contributions is that we work
with robust versions of many of these algorithms. In prior works, QSVT is
often applied to block-encoded matrices, assuming perfect block-encoding. For
the quantum algorithms in this paper, we rigorously obtain the precision
$\varepsilon$ required to obtain a $\delta$-approximation of the desired
output state.
For instance, a key ingredient of our algorithm for regularized least squares
is to make use of QSVT to obtain $A^{+}$, given an $\varepsilon$-approximate
block-encoding of $A$. In order to obtain a (near) optimal dependence on the
condition number of $A$ by applying variable-time amplitude amplification
(VTAA) [3], we recast the standard QSVT algorithm as a variable stopping-time
quantum algorithm. Using QSVT instead of controlled Hamiltonian simulation
ensures that the variable-time quantum procedure to prepare $A^{+}\ket{x}$ has
a slightly better running time (by a log factor) and considerably fewer
additional qubits than Refs. [9, 7].
Furthermore, for the variable time matrix inversion algorithm, a crucial
requirement is the application of the inversion procedure to the portion of
the input state spanned by singular vectors whose singular values are larger
than a certain threshold. In order to achieve this, prior results have made use of Gapped
Phase Estimation (GPE), which is a simple variant of the standard phase
estimation procedure that decides whether the eigenvalue of a Hermitian
operator is above or below a certain threshold [3, 9, 7]. However, GPE can
only be applied to a Hermitian matrix and requires additional registers that
store the estimates of the phases, which are never used for variable-time
amplitude amplification. In this work, instead of GPE, we develop a robust
version of quantum singular value discrimination (QSVD) using QSVT, which can
be directly applied to non-Hermitian matrices. This algorithm decides whether
some singular value of a matrix is above or below a certain threshold without
storing estimates of the singular values. This leads to a space-efficient
variable time quantum algorithm for matrix inversion by further reducing the
number of additional qubits required by a factor of
$O(\log^{2}(\kappa/\delta))$ as compared to prior results [9, 7].
Consequently, this also implies that in our framework, quantum algorithms for
(unregularized) least squares (which are special cases of our result) have
better complexities than those of Ref. [7].
## 2 Preliminaries
This section lays down the notation, and introduces the quantum singular value
transformation (QSVT) and block-encoding frameworks, which are used to design
the algorithm for quantum regression.
### 2.1 Notation
For a matrix $A\in\mathbb{R}^{N\times d}$, $A_{i,\cdot}$ denotes the $i^{\mathrm{th}}$ row
of $A$, and $\norm{A_{i,\cdot}}$ denotes the vector norm of $A_{i,\cdot}^{T}$.
$s^{A}_{r}$ and $s^{A}_{c}$ denote the row and column sparsity of the matrix,
i.e. the maximum number of non-zero entries in any row and in any column,
respectively.
Singular Value Decomposition. The decomposition $A=W\Sigma V^{\dagger}$, where
$W$ and $V$ are unitary and $\Sigma$ is a diagonal matrix, represents the
singular value decomposition (SVD) of $A$. All matrices can be decomposed in
this form. The diagonal entries of $\Sigma$, usually denoted by
$\sigma(A)=\\{\sigma_{j}\\}$, is the multiset of all singular values of $A$,
which are real and non-negative. $\sigma_{\max}$ and $\sigma_{\min}$ denote
the maximum and minimum singular values of $A$. $r(A)=\mathrm{rank}(A)$ is the
number of non-zero singular values of $A$. The columns of
$W,\leavevmode\nobreak\ V$ (denoted by $\\{\ket{w_{j}}\\}$ and
$\\{\ket{v_{j}}\\}$) are the left and right singular vectors of $A$. Thus
$A=\sum_{j}^{r}\sigma_{j}\ket{w_{j}}\bra{v_{j}}$. The singular values of $A$
can be computed as the positive square roots of the eigenvalues of
$A^{\dagger}A$ (which is positive semi-definite and therefore has non-negative
real eigenvalues).
Effective Condition Number. $\kappa_{A}$ denotes (an upper bound on) the
effective condition number of $A$, defined as the ratio of the maximum and
minimum non-zero singular values of $A$. Let $\sigma_{\max}\left(A\right)$ be
the largest singular value of $A$, and $\sigma_{\min}\left(A\right)$ be the
smallest singular value of $A$. Additionally, let
$\widetilde{\sigma}_{\min}\left(A\right)$ be the smallest non-zero singular
value of $A$. Then
$\kappa_{A}\geq\frac{\sigma_{\max}\left(A\right)}{\widetilde{\sigma}_{\min}\left(A\right)}=\sqrt{\frac{\lambda_{\max}(A^{\dagger}A)}{\widetilde{\lambda}_{\min}(A^{\dagger}A)}}$
If $A$ is full-rank, then
$\widetilde{\sigma}_{\min}\left(A\right)=\sigma_{\min}\left(A\right)$, and
$\kappa_{A}$ becomes the condition number of the matrix. In this text, unless
stated otherwise, we always refer to $\kappa_{A}$ as (an upper bound on)
effective condition number of a matrix, and not the true condition number.
Norm. Unless otherwise specified, $\norm{A}$ denotes the spectral norm of $A$,
while $\norm{A}_{F}$ denotes the Frobenius norm of $A$, defined as
$\displaystyle\norm{A}:=\max_{x\neq
0}\frac{\norm{Ax}}{\norm{x}}=\sigma_{\max}(A)$
$\displaystyle\norm{A}_{F}:=\sqrt{\sum_{j=1}^{r}\sigma_{j}^{2}}$
Unless otherwise specified, when $A$ is assumed to be normalized, it is with
respect to the spectral norm.
Soft-O Complexity. Finally, we use $f=\widetilde{\mathcal{O}}\left(g\right)$
to denote $f=\order{g\cdot\mathrm{polylog}(g)}$.
Controlled Unitaries. If $U$ is an $s$-qubit unitary, then $C\text{-}U$ is an
$(s+1)$-qubit unitary defined by
$C\text{-}U=\outerproduct{0}{0}\otimes I_{s}+\outerproduct{1}{1}\otimes U$
Throughout this text whenever we state that the time taken to implement a
unitary $U_{A}$ is $T_{A}$ and the cost of an algorithm is $\order{nT_{A}}$,
we imply that the algorithm makes $n$ uses of the unitary $U_{A}$. Thus, if
the circuit depth of $U_{A}$ is $T_{A}$, the circuit depth of our algorithm is
$\order{nT_{A}}$.
### 2.2 Quantum Input Models
The complexities of quantum algorithms often depend on how the input data is
accessed. For instance, in quantum algorithms for linear algebra (involving
matrix operations), it is often assumed that there exists a black-box that
returns the positions of the non-zero entries of the underlying matrix when
queried. The algorithmic running time is expressed in terms of the number of
queries made to this black-box. Such an input model, known as the Sparse
Access Model, helps design efficient quantum algorithms whenever the
underlying matrices are sparse. Various other input models exist, and quantum
algorithms are typically designed and optimized for specific input models.
Kerenidis and Prakash [22, Section 5.1] introduced a different input model,
known as the quantum data structure model, which is more conducive for
designing quantum machine learning algorithms. In this model, the input data
(e.g: entries of matrices) arrive online and are stored in a classical data
structure (often referred to as the KP-tree in the literature), which can be
queried in superposition by using a QRAM. This facilitates efficiently
preparing quantum states corresponding to the rows of the underlying matrix,
that can then be used for performing several matrix operations. Subsequently,
several quantum-inspired classical algorithms have also been developed
following the breakthrough result of Tang [41]. Such classical algorithms have
the same underlying assumptions as the quantum algorithms designed in the data
structure input model and are only polynomially slower provided the underlying
matrix is low rank.
In this work, we will consider the framework of block-encoding, wherein it is
assumed that the input matrix $A$ (up to some sub-normalization) is stored in
the left block of some unitary. The advantage of the block-encoding framework,
which was introduced in a series of works [26, Definition 1], [7, Section 1],
[14, Section 1.3], is that it can be applied to a wide variety of input
models. For instance, it can be shown that both the sparse access input model
as well as the quantum data structure input model are specific instances of
block-encoded matrices [7, Sections 2.2 and 2.4], [14, Section 5.2]. Here we
formally define the framework of block-encoding and also express the sparse
access model as well as the quantum data structure model as block-encodings.
We refer the reader to [7, 14] for proofs.
###### Definition 1 (Block Encoding, restated from [14], Definition 24).
Suppose that $A$ is an $s$-qubit operator,
$\alpha,\varepsilon\in\mathbb{R}^{+}$ and $a\in\mathbb{N}$, then we say that
the $(s+a)$-qubit unitary $U_{A}$ is an $(\alpha,a,\varepsilon)$-block-
encoding of $A$, if
$\norm{A-\alpha(\bra{0}^{\otimes a}\otimes I)U_{A}(\ket{0}^{\otimes a}\otimes
I)}\leq\varepsilon.$ (11)
Let $\ket{\psi}$ be an $s$-qubit quantum state. Then applying $U_{A}$ to
$\ket{\psi}\ket{0}^{\otimes a}$ outputs a quantum state that is
$\frac{\varepsilon}{\alpha}$-close to
$\dfrac{A}{\alpha}\ket{\psi}\ket{0}^{\otimes a}+\ket{\Phi^{\perp}},$
where $\left(I_{s}\otimes\outerproduct{0}{0}^{\otimes
a}\right)\ket{\Phi^{\perp}}=0$. Equivalently, suppose
$\tilde{A}:=\alpha\left(\bra{0}^{\otimes a}\otimes
I_{s}\right)U_{A}\left(\ket{0}^{\otimes a}\otimes I_{s}\right)$ denotes the
actual matrix that is block-encoded into $U_{A}$, then
$\norm{A-\tilde{A}}\leq\varepsilon$.
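As a toy numerical illustration of Definition 1 (an illustration of the definition only, not of how block-encodings arise in the input models below), a square matrix can be embedded into the top-left block of an explicitly constructed unitary dilation:

```python
# A (alpha, 1, 0)-block-encoding of a square matrix A, built from the standard
# unitary dilation of the contraction C = A / alpha.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
alpha = 2 * np.linalg.norm(A, 2)   # any alpha >= ||A|| works; slack keeps sqrtm well-conditioned
C = A / alpha

I = np.eye(n)
U = np.block([[C, sqrtm(I - C @ C.conj().T)],
              [sqrtm(I - C.conj().T @ C), -C.conj().T]])

assert np.allclose(U @ U.conj().T, np.eye(2 * n))  # U is unitary
assert np.allclose(U[:n, :n], A / alpha)           # its top-left block is A / alpha
```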
In the subsequent sections, we provide an outline of the quantum data
structure model and the sparse access model which are particular instances of
the block encoding framework.
#### 2.2.1 Quantum Data Structure Input Model
Kerenidis and Prakash introduced a quantum accessible classical data structure
which has proven to be quite useful for designing several quantum algorithms
for linear algebra [22]. The classical data structure stores entries of
matrices or vectors and can be queried in superposition using a QRAM (quantum
random access memory). We directly state the following theorem from therein.
###### Theorem 2 (Implementing quantum operators using an efficient data
structure, [36, 22]).
Let $A\in\mathbb{R}^{N\times d}$, and $w$ be the number of non-zero entries of
$A$. Then there exists a data structure of size
$\order{w\log^{2}\left(dN\right)}$ that given the matrix elements
$(i,j,a_{ij})$, stores them at a cost of $\order{\log\left(dN\right)}$
operations per element. Once all the non-zero entries of $A$ have been stored
in the data structure, there exist quantum algorithms that are
$\varepsilon$-approximations to the following maps:
$U:\ket{i}\ket{0}\mapsto\frac{1}{\norm{A_{i,\cdot}}}\sum_{j=1}^{d}a_{i,j}\ket{i,j}=\ket{\psi_{i}},$
$V:\ket{0}\ket{j}\mapsto\frac{1}{\norm{A}_{F}}\sum_{i=1}^{N}\norm{A_{i,\cdot}}\ket{i,j}=\ket{\phi_{j}}$
where $\norm{A_{i,\cdot}}$ is the norm of the $i^{\mathrm{th}}$ row of $A$ and
the second register of $\ket{\psi_{i}}$ is the quantum state corresponding to
the $i^{\mathrm{th}}$ row of $A$. These operations can be applied at a cost of
$\order{\mathrm{polylog}(Nd/\varepsilon)}$.
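To make the data structure concrete, here is a compact classical sketch (an illustration, not the full Kerenidis-Prakash construction) of the binary tree kept for a single row of $A$: internal nodes store sums of squared entries, the root stores $\norm{A_{i,\cdot}}^{2}$, and each entry update costs $\order{\log d}$.

```python
# A binary tree over squared amplitudes for one row of A, supporting the
# O(log d) online updates that the KP data structure relies on.
import numpy as np

class KPTreeRow:
    def __init__(self, d):
        self.d = 1 << (d - 1).bit_length()   # pad the leaf level to a power of two
        self.tree = np.zeros(2 * self.d)     # tree[1] holds ||A_i||^2
        self.sign = np.zeros(self.d)         # signs are stored separately at the leaves

    def update(self, j, a_ij):               # O(log d) per entry
        self.sign[j] = np.sign(a_ij)
        k = self.d + j
        delta = a_ij**2 - self.tree[k]
        while k >= 1:                        # propagate the change up to the root
            self.tree[k] += delta
            k //= 2

row = KPTreeRow(5)
for j, a in enumerate([0.3, -1.2, 0.0, 2.0, 0.5]):
    row.update(j, a)
assert np.isclose(row.tree[1], 0.3**2 + 1.2**2 + 2.0**2 + 0.5**2)
```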
It was identified in Ref. [7] that if a matrix $A$ is stored in this quantum
accessible data structure, there exists an efficiently implementable block-
encoding of $A$. We restate their result here.
###### Lemma 3 (Implementing block encodings from quantum data structures,
[7], Theorem 4).
Let the entries of the matrix $A\in\mathbb{R}^{N\times d}$ be stored in a
quantum accessible data structure, then there exist unitaries $U_{R},U_{L}$
that can be implemented at a cost of
$\order{\mathrm{polylog}(dN/\varepsilon)}$ such that $U_{R}^{\dagger}U_{L}$ is
a $(\norm{A}_{F},\lceil\log\left(d+N\right)\rceil,\varepsilon)$-block-encoding
of $A$.
###### Proof.
The unitaries $U_{R}$ and $U_{L}$ can be implemented via $U$ and $V$ in the
previous lemma. Let $U_{R}=U$ and $U_{L}=V.\texttt{SWAP}$. Then for
$s=\lceil\log(d+N)\rceil$ we have
$U_{R}:\leavevmode\nobreak\ \ket{i}\ket{0^{s}}\rightarrow\ket{\psi_{i}},$
and
$U_{L}:\leavevmode\nobreak\ \ket{j}\ket{0^{s}}\rightarrow\ket{\phi_{j}},$
So we have that the top left block of $U^{{\dagger}}_{R}U_{L}$ is
$\sum_{i=1}^{N}\sum_{j=1}^{d}\braket{\psi_{i}}{\phi_{j}}\ket{i,0}\bra{j,0}$
Now
$\displaystyle\braket{\psi_{i}}{\phi_{j}}$
$\displaystyle=\sum_{k=1}^{d}\sum_{\ell=1}^{N}\dfrac{a_{ik}}{\norm{A_{i,\cdot}}}\cdot\dfrac{\norm{A_{\ell,\cdot}}}{\norm{A}_{F}}\underbrace{\braket{i,k}{\ell,j}}_{=\delta_{i,\ell}\,\delta_{k,j}}$
$\displaystyle=\dfrac{a_{ij}}{\norm{A}_{F}}.$
Moreover, since only $\varepsilon$-approximations of $U$ and $V$ can be
implemented, we have that $U^{{\dagger}}_{R}U_{L}$ is a
$(\norm{A}_{F},\lceil\log(N+d)\rceil,\varepsilon)$-block-encoding of $A$,
implementable at the same cost as $U$ and $V$. ∎
Ref. [23] argued that in certain scenarios, storing the entries of $A^{p}$ and
$(A^{1-p})^{\dagger}$ for some $p\in[0,1]$ might be more useful than storing
$A$ itself. In such cases, the quantum data structure is a
$(\mu_{p},\lceil\log(N+d)\rceil,\varepsilon)$-block-encoding of $A$, where
$\mu_{p}(A)=\sqrt{s_{2p}(A)\,s_{2(1-p)}(A^{T})}$ with
$s_{q}(A):=\max_{j}\norm{A_{j,\cdot}}_{q}^{q}$. Throughout this work, whenever
our results are expressed in the quantum data structure input model, we state
our complexity in terms of $\mu_{A}$. When the entries of $A$ are directly
stored in the data structure, $\mu_{A}=\norm{A}_{F}$. Although we will not
state it explicitly each time, our results also hold when fractional powers of
$A$ are stored in the data structure; simply substituting $\mu_{A}=\mu_{p}(A)$
yields the required complexity.
#### 2.2.2 Sparse Access Input Model
The sparse access input model considers that the input matrix
$A\in\mathbb{R}^{N\times d}$ has row sparsity $s_{r}$ and column sparsity
$s_{c}$. Furthermore, it assumes that the entries of $A$ can be queried via an
oracle as
$O_{A}:\ket{i}\ket{j}\ket{0}^{\otimes
b}\mapsto\ket{i}\ket{j}\ket{a_{ij}}\quad\forall i\in[N],j\in[d],$
and the indices of the non-zero elements of each row and column can be queried
via the following oracles:
$\displaystyle O_{r}:\ket{i}\ket{k}\mapsto\ket{i}\ket{r_{ik}}\quad\forall
i\in[N],k\in[s_{r}],$ $\displaystyle
O_{c}:\ket{k}\ket{j}\mapsto\ket{c_{kj}}\ket{j}\quad\forall
j\in[d],k\in[s_{c}]$
where $r_{ik}$ is the column index of the $k^{\mathrm{th}}$ non-zero entry of
the $i^{\mathrm{th}}$ row of $A$ and $c_{kj}$ is the row index of the
$k^{\mathrm{th}}$ non-zero entry of the $j^{\mathrm{th}}$ column of $A$.
Gilyén et al. [14] showed that a
block encoding of a sparse $A$ can be efficiently prepared by using these
three oracles. We restate their lemma below.
###### Lemma 4 (Constructing a block-encoding from sparse-access to matrices,
[13], Lemma 48).
Let $A\in\mathbb{R}^{N\times d}$ be an $s_{r},s_{c}$ row, column sparse matrix
given as a sparse access input. Then for all $\varepsilon\in(0,1)$, we can
implement a
$(\sqrt{s_{c}s_{r}},\mathrm{polylog}(Nd/\varepsilon),\varepsilon)$-block-
encoding of $A$ with $\order{1}$ queries to $O_{r},O_{c},O_{A}$ and
$\mathrm{polylog}(Nd/\varepsilon)$ elementary quantum gates.
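Classically, the three oracles are simply index lookups into a compressed-sparse representation; the following sketch (illustrative only, using scipy) mirrors their definitions:

```python
# Classical stand-ins for the sparse-access oracles: O_A returns an entry,
# O_r the column index of the k-th nonzero in row i, and O_c the row index
# of the k-th nonzero in column j.
import numpy as np
from scipy.sparse import random as sparse_random

A = sparse_random(8, 6, density=0.3, random_state=0, format="csr")
A_csc = A.tocsc()

def O_A(i, j):                        # entry oracle
    return A[i, j]

def O_r(i, k):                        # column index of k-th nonzero in row i
    return A.indices[A.indptr[i] + k]

def O_c(k, j):                        # row index of k-th nonzero in column j
    return A_csc.indices[A_csc.indptr[j] + k]
```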
Throughout the paper, we shall assume input matrices are accessible via
approximate block-encodings. This also allows us to write down the
complexities of our quantum algorithms in this general framework.
Additionally, we state the complexities in both the sparse access input model
as well as the quantum accessible data structure input model as particular
cases.
### 2.3 Quantum Singular Value Transformation
In a seminal work, Gilyén et al. presented a framework to apply an arbitrary
polynomial function to the singular values of a matrix, known as Quantum
Singular Value Transformation (QSVT) [14]. QSVT is quite general: many quantum
algorithms can be recast to this framework, and for several problems, better
quantum algorithms can be obtained [14, 34]. In particular, QSVT has been
extremely useful in obtaining optimal quantum algorithms for linear algebra.
For instance, using QSVT, given the block-encoding of a matrix $A$, one could
obtain $A^{-c}$ with $c\in[0,\infty)$ with optimal complexity and by using
fewer additional qubits than prior art. This section briefly describes this
framework, which is a generalization of Quantum Signal Processing (QSP) [26,
Section 2], [25, Theorem 2], [32]. The reader may refer to [34] for a more
pedagogical overview of these techniques.
Let us begin by discussing the framework of Quantum Signal Processing. QSP is
a quantum algorithm to apply a $d$-degree bounded polynomial transformation
with parity $d\mod 2$ to an arbitrary quantum subsystem, using a quantum
circuit $U_{\Phi}$ consisting of only controlled single qubit rotations. This
is achieved by interleaving a signal rotation operator $W$ (which is an
$x$-rotation by some fixed angle $\theta$) and a signal processing operator
$S_{\phi}$ (which is a $z$-rotation by a variable angle $\phi\in[0,2\pi]$). In
this formulation, the signal rotation operator is defined as
$W(x):=\begin{pmatrix}x&i\sqrt{1-x^{2}}\\\ i\sqrt{1-x^{2}}&x\end{pmatrix},$
(12)
which is an $x$-rotation by angle $\theta=-2\arccos(x)$, and the signal
processing operator is defined as
$S_{\phi}:=e^{i\phi Z},$ (13)
which is a $z$-rotation by an angle $-2\phi$. Interestingly, sandwiching them
together for some
$\Phi:=(\phi_{0},\phi_{1},\ldots\phi_{d})\in\mathbb{R}^{d+1}$, as shown in
Equation 14, gives us a matrix whose elements are polynomial transformations
of $x$,
$\displaystyle U_{\Phi}$
$\displaystyle:=e^{i\phi_{0}Z}\prod_{j=1}^{j=d}\left(W(x)e^{i\phi_{j}Z}\right)$
(14) $\displaystyle=\begin{pmatrix}P(x)&iQ(x)\sqrt{1-x^{2}}\\\
iQ^{*}(x)\sqrt{1-x^{2}}&P^{*}(x)\end{pmatrix},$ (15)
such that
1. 1.
$\deg P\leq d;\ \deg Q\leq d-1$,
2. 2.
$P(x)$ has a parity $d\mod 2$,
3. 3.
$|P(x)|^{2}+(1-x^{2})|Q(x)|^{2}=1\quad\forall x\in[-1,1]$.
Following the application of the quantum circuit $U_{\Phi}$ for an appropriate
$\Phi$, one can project into the top left block of $U_{\Phi}$ to recover the
polynomial $\bra{0}U_{\Phi}\ket{0}=P(x)$. Projecting onto other bases makes it
possible to perform more interesting polynomial transformations, which can be
linear combinations of $P(x),Q(x)$, and their complex conjugates. For example,
projecting onto the $\{\ket{+},\ket{-}\}$ basis gives us
$\bra{+}U_{\Phi}\ket{+}=\Re\left(P(x)\right)+i\,\Re\left(Q(x)\right)\sqrt{1-x^{2}}.$ (16)
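The structure of Equation 14 is easy to verify numerically. In the sketch below (random phases, chosen only for illustration), unitarity of each factor forces the first row of $U_{\Phi}$ to satisfy $|P(x)|^{2}+(1-x^{2})|Q(x)|^{2}=1$, which is exactly property 3 listed above.

```python
# Build the QSP unitary of Eq. (14) for given phases and check the
# normalization condition on its first row.
import numpy as np

def qsp_unitary(phis, x):
    W = np.array([[x, 1j * np.sqrt(1 - x**2)],
                  [1j * np.sqrt(1 - x**2), x]])        # signal rotation W(x)
    Z = lambda phi: np.diag([np.exp(1j * phi), np.exp(-1j * phi)])  # e^{i phi Z}
    U = Z(phis[0])
    for phi in phis[1:]:
        U = U @ W @ Z(phi)
    return U

rng = np.random.default_rng(6)
phis = rng.uniform(0, 2 * np.pi, size=8)               # a degree-7 QSP sequence
for x in np.linspace(-1, 1, 9):
    U = qsp_unitary(phis, x)
    P, Q_term = U[0, 0], U[0, 1]                       # Q_term = i Q(x) sqrt(1-x^2)
    assert np.isclose(abs(P)**2 + abs(Q_term)**2, 1.0)
```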
Quantum Signal Processing can be formally stated as follows.
###### Theorem 5 (Quantum Signal Processing, Corollary 8 from [14]).
Let $P\in\mathbb{C}[x]$ be a polynomial of degree $d\geq 2$, such that
* •
$P$ has parity-$(d\mod 2)$,
* •
$\forall x\in[-1,1]:\absolutevalue{P(x)}\leq 1$,
* •
$\forall x\in(-\infty,-1]\cup[1,\infty):\absolutevalue{P(x)}\geq 1$,
* •
if $d$ is even, then $\forall x\in\mathbb{R}:P(ix)P^{*}(ix)\geq 1$.
Then there exists a $\Phi\in\mathbb{R}^{d}$ such that
$\prod_{j=1}^{d}\left(e^{i\phi_{j}\sigma_{z}}W(x)\right)=\begin{pmatrix}P(x)&\cdot\\\
\cdot&\cdot\end{pmatrix}.$ (17)
Thus, QSP allows us to implement any polynomial $P(x)$ that satisfies the
aforementioned requirements. Throughout this article, we refer to any such
polynomial $P(x)$ as a QSP polynomial. Quantum Singular Value Transformation
is a natural generalization of this procedure. It allows us to apply a QSP
polynomial transformation to each singular value of an arbitrary block of a
unitary matrix. In addition to this generalization, QSVT relies on the
observation that several functions can be well-approximated by QSP
polynomials. Thus, through QSVT one can transform each singular value of a
block-encoded matrix by any function that can be approximated by a QSP
polynomial. Since several linear algebra problems boil down to applying
specific transformations to the singular values of a matrix, QSVT is
particularly useful for developing fast algorithms for quantum linear algebra.
Next, we introduce QSVT formally via the following theorem.
###### Theorem 6 (Quantum Singular Value Transformation [13], Section 3.2).
Suppose $A\in\mathbb{R}^{N\times d}$ is a matrix with singular value
decomposition $A=\sum_{j=1}^{d_{\min}}\sigma_{j}\ket{v_{j}}\bra{w_{j}}$, where
$d_{\min}=\min\\{N,d\\}$ and $\ket{v_{j}}$ $(\ket{w_{j}})$ is the left (right)
singular vector with singular value $\sigma_{j}$. Furthermore, let $U_{A}$ be
a unitary such that $A=\widetilde{\Pi}U_{A}\Pi$, where $\Pi$ and
$\widetilde{\Pi}$ are orthogonal projectors. Then, for any QSP polynomial
$P(x)$ of degree $n$, there exists a vector
$\Phi=\left(\phi_{1},\phi_{2},\cdots\phi_{n}\right)\in\mathbb{R}^{n}$ and a
unitary
$U_{\Phi}=\begin{cases}e^{i\phi_{1}(2\widetilde{\Pi}-I)}U_{A}\left[\prod_{k=1}^{(n-1)/2}e^{i\phi_{2k}(2\widetilde{\Pi}-I)}U_{A}^{\dagger}e^{i\phi_{2k+1}(2\widetilde{\Pi}-I)}U_{A}\right],&n\text{
is odd }\\\
\left[\prod_{k=1}^{n/2}e^{i\phi_{2k-1}(2\widetilde{\Pi}-I)}U_{A}^{\dagger}e^{i\phi_{2k}(2\widetilde{\Pi}-I)}U_{A}\right],&n\text{
is even},\end{cases}$ (18)
such that
$P^{SV}(A)=\begin{cases}\widetilde{\Pi}U_{\Phi}\Pi,&n\text{ is odd}\\\ \Pi
U_{\Phi}\Pi,&n\text{ is even},\end{cases}$ (19)
where $P^{SV}(A)$ is the polynomial transformation of the matrix $A$ defined
as
$P^{SV}(A):=\begin{cases}\sum_{j}P(\sigma_{j})\ket{v_{j}}\bra{w_{j}},&P\text{
is odd}\\\ \sum_{j}P(\sigma_{j})\ket{w_{j}}\bra{w_{j}},&P\text{ is
even}.\end{cases}$ (20)
Theorem 6 tells us that for any QSP polynomial $P$ of degree $n$, we can
implement $P^{SV}(A)$ using one ancilla qubit, $\Theta(n)$ applications of
$U_{A}$, $U^{{\dagger}}_{A}$ and controlled reflections $I-2\Pi$ and
$I-2\widetilde{\Pi}$. Furthermore, if in some well-defined interval, some
function $f(x)$ is well approximated by an $n$-degree QSP polynomial $P(x)$,
then Theorem 6 also allows us to implement a transformation that approximates
$f(A)$, where
$f(A):=\begin{cases}\sum_{j}f(\sigma_{j})\ket{v_{j}}\bra{w_{j}},&P\text{ is
odd}\\\ \sum_{j}f(\sigma_{j})\ket{w_{j}}\bra{w_{j}},&P\text{ is
even}.\end{cases}$ (21)
The following theorem from Ref. [13] deals with the robustness of the QSVT
procedure, i.e. how errors propagate in QSVT. In particular, for two matrices
$A$ and $\tilde{A}$, it shows how close their polynomial transformations
($P^{SV}(A)$ and $P^{SV}(\widetilde{A})$, respectively) are, as a function of
the distance between $A$ and $\tilde{A}$.
###### Lemma 7 (Robustness of Quantum Singular Value Transformation, [13],
Lemma 23).
Let $P\in\mathbb{C}[x]$ be a QSP polynomial of degree $n$. Let
$A,\tilde{A}\in\mathbb{C}^{N\times d}$ be matrices of spectral norm at most 1,
such that
$\norm{A-\tilde{A}}+\norm{\frac{A+\tilde{A}}{2}}^{2}\leq 1.$
Then,
$\norm{P^{SV}(A)-P^{SV}(\tilde{A})}\leq
n\sqrt{\frac{2}{1-\norm{\frac{A+\tilde{A}}{2}}^{2}}}\norm{A-\tilde{A}}.$
We will apply this theorem to develop a robust version of QSVT. More
precisely, in order to implement QSVT, we require access to a unitary $U_{A}$,
which is a block-encoding of some matrix $A$. This block-encoding, in most
practical scenarios, is not perfect: we only have access to a
$\varepsilon$-approximate block-encoding of $A$. If we want an
$\delta$-accurate implementation of $P^{SV}(A)$, how precise should the block-
encoding of $A$ be? Such a robustness analysis has been absent from prior work
involving QSVT and will allow us to develop robust versions of a number of
quantum algorithms in subsequent sections. The following theorem determines
the precision $\varepsilon$ required in the block-encoding of $A$ in terms of
$n$, the degree of the QSP polynomial that we wish to implement and $\delta$,
the accuracy of $P^{SV}(A)$.
###### Theorem 8 (Robust QSVT).
Let $P\in\mathbb{C}[x]$ be a QSP polynomial of degree $n\geq 2$. Let
$\delta\in[0,1]$ be the precision parameter. Let $U$ be an
$(\alpha,a,\varepsilon)$-block-encoding of matrix $A\in\mathbb{C}^{N\times d}$
satisfying $\norm{A}\leq\alpha/2$, implemented in cost $T$ for some
$\varepsilon\leq{\alpha\delta}/{2n}$. Then we can construct a
$(1,a+1,\delta)$-block-encoding of $P(A/\alpha)$ in cost $\order{nT}$.
###### Proof.
Let $\tilde{A}$ be the encoded block of $U$, then
$\norm{A-\tilde{A}}\leq\varepsilon$. Applying QSVT on $U$ with the polynomial
$P$, we get a block-encoding for $P(\tilde{A}/\alpha)$, with $\order{n}$ uses
of $U,U^{\dagger}$, and as many multiply-controlled NOT gates. Observe that
$\norm{\frac{A}{\alpha}-\frac{\tilde{A}}{\alpha}}\leq\frac{\varepsilon}{\alpha}\leq\frac{\delta}{2n}\leq\frac{1}{4}$,
and,
$\displaystyle\norm{\frac{\frac{A}{\alpha}+\frac{\tilde{A}}{\alpha}}{2}}^{2}=\norm{\frac{A}{\alpha}+\frac{\tilde{A}-A}{2\alpha}}^{2}\leq\left(\frac{\norm{A}}{\alpha}+\frac{\norm{\tilde{A}-A}}{2\alpha}\right)^{2}\leq\left(\frac{1}{2}+\frac{1}{8}\right)^{2}\leq\frac{1}{2}$
Therefore the error in the final block-encoding is given by invoking Lemma 7
with matrices $A/\alpha,\tilde{A}/\alpha$:
$\displaystyle\norm{P\left(\frac{A}{\alpha}\right)-P\left(\frac{\tilde{A}}{\alpha}\right)}\leq
n\sqrt{\frac{2}{1-\frac{1}{2}}}\leavevmode\nobreak\
\frac{\varepsilon}{\alpha}=\frac{2n\varepsilon}{\alpha}\leq\delta.$
∎
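As a sanity check of the bound used in this proof, the sketch below instantiates Lemma 7 with an odd Chebyshev polynomial $T_{n}$ (a QSP polynomial), implementing $P^{SV}$ directly through the SVD; the matrices and perturbation size are illustrative assumptions.

```python
# Numerically verify the robustness bound of Lemma 7 for P = T_n (odd n).
import numpy as np

def psv(A, n):
    # P^SV(A) = sum_j T_n(sigma_j) |left_j><right_j|, the odd case of Eq. (20)
    U, s, Vt = np.linalg.svd(A)
    return U @ np.diag(np.cos(n * np.arccos(np.clip(s, -1, 1)))) @ Vt

rng = np.random.default_rng(7)
n = 5
A = rng.standard_normal((6, 6)); A /= 2 * np.linalg.norm(A, 2)   # ||A|| <= 1/2
E = rng.standard_normal((6, 6)); E *= 1e-4 / np.linalg.norm(E, 2)
At = A + E                                                        # perturbed block

lhs = np.linalg.norm(psv(A, n) - psv(At, n), 2)
mid = np.linalg.norm((A + At) / 2, 2) ** 2
rhs = n * np.sqrt(2 / (1 - mid)) * np.linalg.norm(A - At, 2)
assert lhs <= rhs   # the Lemma 7 bound holds
```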
In Section 3, we will make use of Theorem 8, to develop robust quantum
algorithms for singular value discrimination, variable-time matrix inversion,
positive and negative powers of matrices. Subsequently, in Sec. 4, we shall
combine algorithmic primitives to design robust quantum regularized least
squares algorithms.
### 2.4 Variable Time Amplitude Amplification
Ambainis [3] defined the notion of a variable-stopping-time quantum algorithm
and formulated the technique of Variable Time Amplitude Amplification (VTAA),
a tool that can be used to amplify the success probability of a variable-
stopping-time quantum algorithm to a constant by taking advantage of the fact
that computation on some parts of an algorithm can complete earlier than on
other parts. The key idea here is to look at a quantum algorithm $\mathcal{A}$
acting on a state $\ket{\psi}$ as a combination of $m$ quantum sub-algorithms
$\mathcal{A}=\mathcal{A}_{m}\cdot\mathcal{A}_{m-1}\cdot\ldots\mathcal{A}_{1}$,
each acting on $\ket{\psi}$ conditioned on some ancilla flag being set.
Formally, a variable stopping time algorithm is defined as follows
###### Definition 9 (Variable-stopping-time Algorithm, [3]).
A quantum algorithm $\mathcal{A}$ acting on $\mathcal{H}$ that can be written
as $m$ quantum sub-algorithms,
$\mathcal{A}=\mathcal{A}_{m}\cdot\mathcal{A}_{m-1}\cdot\ldots\mathcal{A}_{1}$
is called a variable stopping time algorithm if
$\mathcal{H}=\mathcal{H}_{C}\otimes\mathcal{H}_{\mathcal{A}}$, where
$\mathcal{H}_{C}=\otimes_{i=1}^{m}\mathcal{H}_{C_{i}}$ with
$\mathcal{H}_{C_{i}}=\mathrm{span}(\ket{0},\ket{1})$, and each unitary
$\mathcal{A}_{j}$ acts on
$\mathcal{H}_{C_{j}}\otimes\mathcal{H}_{\mathcal{A}}$ controlled on the first
$j-1$ qubits $\ket{0}^{\otimes j-1}\in\otimes_{i=1}^{j-1}\mathcal{H}_{C_{i}}$
being in the all zero state.
Here $\mathcal{H}_{C_{i}}$ is a single qubit clock register. In VTAA,
$\mathcal{H}_{\mathcal{A}}$ has a flag space consisting of a single qubit to
indicate success,
$\mathcal{H}_{\mathcal{A}}=\mathcal{H}_{F}\otimes\mathcal{H}_{W}$. Here
$\mathcal{H}_{F}=\mathrm{Span}(\ket{g},\ket{b})$ flags the good and bad parts
of the run. Furthermore, for $1\leq j\leq m$, define the stopping times
$t_{j}$ with $t_{1}<t_{2}<\cdots<t_{m}=T_{\max}$, such that the algorithm
$\mathcal{A}_{j}\mathcal{A}_{j-1}\cdots\mathcal{A}_{1}$, having (gate/query)
complexity $t_{j}$, halts with probability
$p_{j}=\norm{\Pi_{C_{j}}\mathcal{A}_{j}\mathcal{A}_{j-1}\cdots\mathcal{A}_{1}\ket{0}_{\mathcal{H}}}^{2},$
where $\ket{0}_{\mathcal{H}}\in\mathcal{H}$ is the all zero quantum state and
$\Pi_{C_{j}}$ is the projector onto $\ket{1}$ in $\mathcal{H}_{C_{j}}$. From
this one can define the average stopping time of the algorithm $\mathcal{A}$
defined as
$\norm{T}_{2}=\sqrt{\sum_{j=1}^{m}p_{j}t^{2}_{j}}.$
For a variable stopping time algorithm if the average stopping time
$\norm{T}_{2}$ is less than the maximum stopping time $T_{\max}$, VTAA can
amplify the success probability $(p_{\mathrm{succ}})$ much faster than
standard amplitude amplification. In this framework, the success probability
of $\mathcal{A}$ is given by
$p_{\mathrm{succ}}=\norm{\Pi_{F}\mathcal{A}_{m}\mathcal{A}_{m-1}\cdots\mathcal{A}_{1}\ket{0}_{\mathcal{H}}}^{2}$
While standard amplitude amplification requires time scaling as
$\order{T_{\max}/\sqrt{p_{\mathrm{succ}}}}$, the complexity of VTAA is more
involved. Following [7], we define the complexity of VTAA as follows.
###### Lemma 10 (Efficient variable time amplitude amplification [6], Theorem
23).
Let $U$ be a state preparation unitary such that $U\ket{0}^{\otimes
k}=\sqrt{p_{\mathrm{prep}}}\ket{0}\ket{\psi_{0}}+\sqrt{1-p_{\mathrm{prep}}}\ket{1}\ket{\psi_{1}}$
that has a query complexity $T_{U}$. Let
$\mathcal{A}=\mathcal{A}_{m}\mathcal{A}_{m-1}\cdots\mathcal{A}_{1}$ be a
variable stopping time quantum algorithm that we want to apply to the state
$\ket{\psi_{0}}$, with the following known bounds: $p_{\mathrm{prep}}\geq
p^{\prime}_{\mathrm{prep}}$ and $p_{\mathrm{succ}}\geq
p^{\prime}_{\mathrm{succ}}$. Define $T^{\prime}_{\max}:=2T_{\max}/t_{1}$ and
$Q:=\left(T_{\max}+\frac{T_{U}+k}{\sqrt{p_{\mathrm{prep}}}}\right)\sqrt{\log\left(T^{\prime}_{\max}\right)}+\frac{\left(\norm{T}_{2}+\frac{T_{U}+k}{\sqrt{p_{\mathrm{prep}}}}\right)\log\left(T^{\prime}_{\max}\right)}{\sqrt{p_{\mathrm{succ}}}}.$
Then with success probability $\geq 1-\delta$, we can create a variable-
stopping time algorithm $\mathcal{A}^{\prime}$ that prepares the state
$a\ket{0}\mathcal{A}^{\prime}\ket{\psi_{0}}+\sqrt{1-a^{2}}\ket{1}\ket{\psi_{\textrm{garbage}}}$,
such that $a=\Theta(1)$ is a constant and $\mathcal{A}^{\prime}$ has the
complexity $\order{Q}$.
One cannot simply replace standard amplitude amplification with VTAA to boost
the success probability of a quantum algorithm: a crucial task is to recast
the underlying algorithm in the VTAA framework. We will be applying VTAA to
the quantum algorithm for matrix inversion by QSVT, so this algorithm must
first be recast as a variable-stopping-time algorithm.
Originally, Ambainis [3] used VTAA to improve the running time of the HHL
algorithm from $\order{\kappa^{2}\log N}$ to $\order{\kappa\log^{3}\kappa\log
N}$. Childs et al. [9] designed a quantum linear systems algorithm with a
polylogarithmic dependence on the accuracy. Additionally, they recast their
algorithm into a framework where VTAA could be applied to obtain a linear
dependence on $\kappa$. Later Chakraborty et al. [7] modified Ambainis’ VTAA
algorithm to perform variable time amplitude estimation.
In this work, to design quantum algorithms for $\ell_{2}$-regularized linear
regression, we use a quantum algorithm for matrix inversion by QSVT. We recast
this algorithm in the framework of VTAA to achieve nearly linear dependence in
$\kappa$ (the effective condition number of the matrix to be inverted). Using
QSVT instead of controlled Hamiltonian simulation improves the complexity of
the overall matrix inversion algorithm (QSVT and VTAA) by a log factor. It
also reduces the number of additional qubits substantially. Furthermore, we
replace a gapped quantum phase estimation procedure with a more efficient
quantum singular value discrimination algorithm using QSVT. This further
reduces the number of additional qubits by $\order{\log^{2}(\kappa/\delta)}$
compared to Refs. [9, 7], where $\kappa$ is the condition number of the underlying
matrix and $\delta$ is the desired accuracy. The details of the variable
stopping time quantum algorithm for matrix inversion by QSVT are laid out in
Section 3.3.
## 3 Algorithmic Primitives
This section introduces the building blocks of our quantum algorithms for
quantum linear regression with general $\ell_{2}$-regularization. As mentioned
previously, we work in the block-encoding framework. We develop robust quantum
algorithms for arithmetic operations, inversion, and positive and negative
powers of matrices using quantum singular value transformation, assuming we
have access to approximate block-encodings of these matrices. While some of
these results were previously derived assuming perfect block-encodings [14,
7], we calculate the precision required in the input block-encodings to output
a block-encoding or quantum state arbitrarily close to the target.
Given an $(\alpha,a,\varepsilon)$-block-encoding of a matrix $A$, we can
efficiently amplify the sub-normalization factor from $\alpha$ to a constant
and obtain an amplified block-encoding of $A$. For our quantum algorithms in
Sec. 4, we show that working with pre-amplified block-encodings often yields
better complexities. We state the following lemma, which was proven in Ref. [24]:
###### Lemma 11 (Uniform Block Amplification of Contractions, [24]).
Let $A\in\mathbb{R}^{N\times d}$ be such that $\norm{A}\leq 1$. If $\alpha\geq 1$
and $U$ is an $(\alpha,a,\varepsilon)$-block-encoding of $A$ that can be
implemented at a cost of $T_{U}$, then there is a
$(\sqrt{2},a+1,\varepsilon+\gamma)$-block-encoding of $A$ that can be
implemented at a cost of $\order{\alpha T_{U}\log\left(1/\gamma\right)}$.
###### Corollary 12 (Uniform Block Amplification).
Let $A\in\mathbb{R}^{N\times d}$ and $\delta\in(0,1]$. Suppose $U$ is an
$(\alpha,a,\varepsilon)$-block-encoding of $A$, such that
$\varepsilon\leq\frac{\delta}{2}$, that can be implemented at a cost of
$T_{U}$. Then a $(\sqrt{2}\norm{A},a+1,\delta)$-block-encoding of $A$ can be
implemented at a cost of $\order{\frac{\alpha
T_{U}}{\norm{A}}\log\left(\norm{A}/\delta\right)}$.
We now obtain the complexity of applying a block-encoded matrix to a quantum
state, which is a generalization of a lemma proven in Ref. [7].
###### Lemma 13 (Applying a Block-encoded Matrix on a Quantum State).
Let $A$ be an $s$-qubit operator such that its non-zero singular values lie in
$[\norm{A}/\kappa,\norm{A}]$. Also let $\delta\in(0,1)$, and $U_{A}$ be an
$(\alpha,a,\varepsilon)$-block-encoding of $A$, implementable in time $T_{A}$,
such that
$\varepsilon\leq\frac{\delta\norm{A}}{2\kappa}.$
Furthermore, suppose $\ket{b}$ is an $s$-qubit quantum state, prepared in time
$T_{b}$. Then we can prepare a state that is $\delta$-close to
$\frac{A\ket{b}}{\norm{A\ket{b}}}$ with success probability
$\Omega\left(1\right)$ at a cost of
$\order{\frac{\alpha\kappa}{\norm{A}}(T_{A}+T_{b})}.$
###### Corollary 14 (Applying a pre-amplified Block-encoded Matrix on a
Quantum State).
Let $A$ be an $s$-qubit operator such that its non-zero singular values lie in
$[\norm{A}/\kappa,\norm{A}]$. Also let $\delta\in(0,1)$, and $U_{A}$ be an
$(\alpha,a,\varepsilon)$-block-encoding of $A$, implementable in time $T_{A}$,
such that
$\varepsilon\leq\frac{\delta\norm{A}}{4\kappa}.$
Furthermore, suppose $\ket{b}$ is an $s$-qubit quantum state that can be
prepared in time $T_{b}$. Then we can prepare a state that is $\delta$-close
to $\frac{A\ket{b}}{\norm{A\ket{b}}}$ with success probability
$\Omega\left(1\right)$ at a cost of
$\order{\frac{\alpha\kappa}{\norm{A}}\log\left(\frac{\kappa}{\delta}\right)T_{A}+\kappa
T_{b}}.$
Now, it may happen that $U_{b}$ prepares a quantum state that is only
$\varepsilon$-close to the desired state $\ket{b}$. In such cases, we have the
following lemma:
###### Lemma 15 (Robustness of state preparation).
Let $A$ be an $s$-qubit operator such that its non-zero singular values lie in
$\left[\norm{A}/\kappa,\norm{A}\right]$. Suppose $\ket{b^{\prime}}$ is a
quantum state that is $\varepsilon/2\kappa$-close to $\ket{b}$ and
$\ket{\psi}$ is a quantum state that is $\varepsilon/2$-close to
$A\ket{b^{\prime}}/\norm{A\ket{b^{\prime}}}$. Then we have that $\ket{\psi}$
is $\varepsilon$-close to $A\ket{b}/\norm{A\ket{b}}$.
The proofs of Corollary 12, Lemma 13, Corollary 14, and Lemma 15 can be found in Appendix A.
### 3.1 Arithmetic with Block-Encoded Matrices
The block-encoding framework embeds a matrix on the top left block of a larger
unitary $U$. It has been demonstrated that this framework allows us to obtain
sums, products, and linear combinations of block-encoded matrices, which is
particularly useful for solving linear algebra problems. Here, we state the
arithmetic operations on block-encoded matrices that we shall use to design
the quantum algorithms of Section 4, tailoring existing results to our
requirements.
First, we prove a slightly more general form of the linear combination of
unitaries in the block-encoding framework, presented in [14]. To do this, we
assume that we are given optimal state preparation unitaries, defined as follows.
###### Definition 16 (Optimal State Preparation Unitary).
Let $m\in\mathbb{Z}^{+}$ and $s=\lceil\log{m}\rceil$. Let
$\eta\in\mathbb{R}^{m}_{+}$. Then we call an $s$-qubit unitary $P$ an $\eta$
state-preparation unitary if
$P\ket{0}=\frac{1}{\sqrt{\sum_{j}\eta_{j}}}\sum_{j}\sqrt{\eta_{j}}\ket{j}.$
###### Lemma 17 (Linear Combination of Block Encoded Matrices).
For each $j\in\\{0,\ldots,m-1\\}$, let $A_{j}$ be an $s$-qubit operator, and
$y_{j}\in\mathbb{R}^{+}$. Let $U_{j}$ be a
$(\alpha_{j},a_{j},\varepsilon_{j})$-block-encoding of $A_{j}$, implemented in
time $T_{j}$. Define the matrix $A=\sum_{j}y_{j}A_{j}$, and the vector
$\eta\in\mathbb{R}^{m}_{+}$ such that $\eta_{j}=y_{j}\alpha_{j}$. Let $U_{\eta}$ be an
$\eta$ state-preparation unitary, implemented in time $T_{\eta}$. Then we can
implement a
$\left(\sum_{j}y_{j}\alpha_{j},\max_{j}(a_{j})+s,\sum_{j}y_{j}\varepsilon_{j}\right)$
block-encoding of $A$ at a cost of $\order{\sum_{j}T_{j}+T_{\eta}}$.
The proof is similar to the one in Ref. [14], with some improvements to the
bounds. The detailed proof can be found in Appendix A. We now specialize the
above lemma for the case where we need a linear combination of just two
unitaries. This is the case used in this work, and we obtain a better error
scaling for this by giving an explicit state preparation unitary.
###### Corollary 18 (Linear Combination of Two Block Encoded Matrices).
For $j\in\\{0,1\\}$, let $A_{j}$ be an $s$-qubit operator and
$y_{j}\in\mathbb{R}^{+}$. Let $U_{j}$ be a
$(\alpha_{j},a_{j},\varepsilon_{j})$-block-encoding of $A_{j}$, implemented in
time $T_{j}$. Then we can implement a
$(y_{0}\alpha_{0}+y_{1}\alpha_{1},1+\max(a_{0},a_{1}),y_{0}\varepsilon_{0}+y_{1}\varepsilon_{1})$
encoding of $y_{0}A_{0}+y_{1}A_{1}$ in time $\order{T_{0}+T_{1}}$.
###### Proof.
Let $\alpha=y_{0}\alpha_{0}+y_{1}\alpha_{1}$ and
$P=\frac{1}{\sqrt{\alpha}}\begin{pmatrix}\sqrt{y_{0}\alpha_{0}}&-\sqrt{y_{1}\alpha_{1}}\\\
\sqrt{y_{1}\alpha_{1}}&\sqrt{y_{0}\alpha_{0}}\end{pmatrix}$. By Definition 16,
$P$ is a $\\{y_{0}\alpha_{0},y_{1}\alpha_{1}\\}$ state-preparation unitary.
Invoking Lemma 17 with $P$, we get the required unitary.
∎
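As a classical sanity check of Corollary 18, the following NumPy sketch builds exact unitary dilations of two subnormalized matrices (the `dilation` helper is an illustrative stand-in for whatever circuits implement the block-encodings), combines them with the state-preparation unitary $P$ from the proof above, and verifies that the top-left block of the result is $(y_{0}A_{0}+y_{1}A_{1})/(y_{0}\alpha_{0}+y_{1}\alpha_{1})$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def psd_sqrt(P):
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def dilation(M):
    """Exact unitary whose top-left block is M (requires ||M|| <= 1)."""
    I = np.eye(M.shape[0])
    return np.block([[M, psd_sqrt(I - M @ M.conj().T)],
                     [psd_sqrt(I - M.conj().T @ M), -M.conj().T]])

# Two matrices with chosen subnormalizations alpha_j >= ||A_j||.
A0, A1 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
a0, a1 = 2 * np.linalg.norm(A0, 2), 2 * np.linalg.norm(A1, 2)
U0, U1 = dilation(A0 / a0), dilation(A1 / a1)
y0, y1 = 0.7, 1.3

# State-preparation unitary P from the proof of Corollary 18.
alpha = y0 * a0 + y1 * a1
P = np.array([[np.sqrt(y0 * a0), -np.sqrt(y1 * a1)],
              [np.sqrt(y1 * a1),  np.sqrt(y0 * a0)]]) / np.sqrt(alpha)

# W = (P^dag x I) (|0><0| x U0 + |1><1| x U1) (P x I)
I2d = np.eye(2 * d)
select = np.block([[U0, np.zeros_like(U0)], [np.zeros_like(U1), U1]])
W = np.kron(P.conj().T, I2d) @ select @ np.kron(P, I2d)

# The top-left block of W encodes (y0*A0 + y1*A1) / alpha.
assert np.allclose(W[:d, :d], (y0 * A0 + y1 * A1) / alpha)
print("linear combination of block-encodings verified")
```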
Given block-encodings of two matrices $A$ and $B$, it is easy to obtain a
block-encoding of $AB$.
###### Lemma 19 (Product of Block Encodings, [13], Lemma 53).
If $U_{A}$ is an $(\alpha,a,\delta)$-block-encoding of an $s$-qubit operator
$A$ implemented in time $T_{A}$, and $U_{B}$ is a
$(\beta,b,\varepsilon)$-block-encoding of an $s$-qubit operator $B$
implemented in time $T_{B}$, then $(I^{\otimes b}\otimes U_{A})(I^{\otimes
a}\otimes U_{B})$ is an
$(\alpha\beta,a+b,\alpha\varepsilon+\beta\delta)$-block-encoding of $AB$
implemented at a cost of $\order{T_{A}+T_{B}}$.
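Lemma 19 can be checked classically in the same spirit: wiring two dilations so that each acts on its own flag register, the product unitary has $AB/(\alpha\beta)$ as its top-left block. A minimal NumPy sketch (the `dilation` helper again stands in for actual circuits):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def psd_sqrt(P):
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def dilation(M):
    """Exact unitary whose top-left block is M (requires ||M|| <= 1)."""
    I = np.eye(M.shape[0])
    return np.block([[M, psd_sqrt(I - M @ M.conj().T)],
                     [psd_sqrt(I - M.conj().T @ M), -M.conj().T]])

A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))
aA, aB = 2 * np.linalg.norm(A, 2), 2 * np.linalg.norm(B, 2)
U_A, U_B = dilation(A / aA), dilation(B / aB)   # one flag register each

# Total space: (flag_B, flag_A, system). U_A acts on (flag_A, system);
# U_B is routed to (flag_B, system) by swapping the two flag registers.
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
S = np.kron(SWAP, np.eye(d))
W = np.kron(np.eye(2), U_A) @ (S @ np.kron(np.eye(2), U_B) @ S)

# Top-left block encodes the product AB / (alpha_A * alpha_B).
assert np.allclose(W[:d, :d], A @ B / (aA * aB))
print("product of block-encodings verified")
```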
Directly applying Lemma 19 results in a block-encoding of
$\frac{AB}{\alpha\beta}$. If $\alpha$ and $\beta$ are large, then the sub-
normalization factor $\alpha\beta$ might incur an undesirable overhead to the
cost of the algorithm that uses it. In many cases, the complexity of obtaining
products of block-encodings can be improved if we first amplify the block-
encodings (using Corollary 12) and then apply Lemma 19. We prove the following
lemma:
###### Lemma 20 (Product of Amplified Block-Encodings).
Let $\delta\in(0,1]$. If $U_{A}$ is an
$(\alpha_{A},a_{A},\varepsilon_{A})$-block-encoding of an $s$-qubit operator
$A$ implemented in time $T_{A}$, and $U_{B}$ is a
$(\alpha_{B},a_{B},\varepsilon_{B})$-block-encoding of an $s$-qubit operator
$B$ implemented in time $T_{B}$, such that
$\varepsilon_{A}\leq\frac{\delta}{4\sqrt{2}\norm{B}}$ and
$\varepsilon_{B}\leq\frac{\delta}{4\sqrt{2}\norm{A}}$. Then we can implement a
$(2\norm{A}\norm{B},a_{A}+a_{B}+2,\delta)$-block-encoding of $AB$ implemented
at a cost of
$\order{\left(\frac{\alpha_{A}}{\norm{A}}T_{A}+\frac{\alpha_{B}}{\norm{B}}T_{B}\right)\log\left(\frac{\norm{A}\norm{B}}{\delta}\right)}.$
###### Proof.
Using Corollary 12 for some $\delta_{A}\geq 2\varepsilon_{A}$ we get a
$(\sqrt{2}\norm{A},a_{A}+1,\delta_{A})$-block-encoding of $A$ at a cost of
$\order{\frac{\alpha_{A}T_{A}}{\norm{A}}\log\left(\norm{A}/\delta_{A}\right)}.$
Similarly for some $\delta_{B}\geq 2\varepsilon_{B}$ we get a
$(\sqrt{2}\norm{B},a_{B}+1,\delta_{B})$-block-encoding of $B$ at a cost of
$\order{\frac{\alpha_{B}T_{B}}{\norm{B}}\log\left(\norm{B}/\delta_{B}\right)}.$
Now using Lemma 19 we get a
$(2,a_{A}+a_{B}+2,\sqrt{2}\left(\norm{A}\delta_{B}+\norm{B}\delta_{A}\right))$-block-
encoding of $AB$. We can choose $\delta_{A}=\frac{\delta}{2\sqrt{2}\norm{B}}$
and $\delta_{B}=\frac{\delta}{2\sqrt{2}\norm{A}}$ which bounds the final
block-encoding error by $\delta$. ∎
Observe that we have assumed that $A$ and $B$ are $s$-qubit operators. For any
two matrices of dimension ${N\times d}$ and $d\times K$, such that $N,d,K\leq
2^{s}$, we can always pad them with rows and columns of zero entries and
convert them to $s$-qubit operators. Thus, in the scenario where $A$ and $B$
are not $s$-qubit operators, one can consider block encodings of padded
versions of these matrices. Note that this does not affect the operations on
the sub-matrix blocks encoding $A$ and $B$. Thus, the above results can be
used to perform block-encoded matrix products for arbitrary (compatible)
matrices.
Next, we show how to obtain the block-encoding of the tensor product of two
matrices from their block-encodings. This procedure will be useful in creating
the dilated matrices required for regularization. The proof can be found in
Appendix A.
###### Lemma 21 (Tensor Product of Block Encoded Matrices).
Let $U_{1}$ and $U_{2}$ be $(\alpha,a,\varepsilon_{1})$ and
$(\beta,b,\varepsilon_{2})$-block-encodings of $A_{1}$ and $A_{2}$, $s$ and
$t$-qubit operators, implemented in time $T_{1}$ and $T_{2}$ respectively.
Define $S:=\Pi_{i=1}^{s}\texttt{SWAP}_{a+b+i}^{a+i}$. Then, $S(U_{1}\otimes
U_{2})S^{\dagger}$ is an
$(\alpha\beta,a+b,\alpha\varepsilon_{2}+\beta\varepsilon_{1}+\varepsilon_{1}\varepsilon_{2})$
block-encoding of $A_{1}\otimes A_{2}$, implemented at a cost of
$\order{T_{1}+T_{2}}$.
We will now use Lemma 21 to augment one matrix into another, given their
approximate block-encodings.
###### Lemma 22 (Block-encoding of augmented matrix).
If $U_{A}$ is an $(\alpha_{A},a_{A},\varepsilon_{A})$-block encoding of an
$s$-qubit operator $A$ that can be implemented in time $T_{A}$ and $U_{B}$ is
an $(\alpha_{B},a_{B},\varepsilon_{B})$-block encoding of an $s$-qubit
operator $B$ that can be implemented in time $T_{B}$, then we can implement an
$(\alpha_{A}+\alpha_{B},\max(a_{A},a_{B})+2,\varepsilon_{A}+\varepsilon_{B})$-block-
encoding of
$A_{B}=\begin{pmatrix}A&0\\\ B&0\end{pmatrix}$
at a cost of $\order{T_{A}+T_{B}}$.
###### Proof.
Let $M_{A}=\begin{pmatrix}1&0\\\ 0&0\end{pmatrix}$. Then the SWAP gate is a
$(1,1,0)$ block encoding of $M_{A}$. By Lemma 21, we can implement
$U_{A}^{\prime}$, an $(\alpha_{A},a_{A}+1,\varepsilon_{A})$-block-encoding of
their tensor product $M_{A}\otimes A=\begin{pmatrix}A&0\\\ 0&0\end{pmatrix}$
at a cost of $\order{T_{A}}$. Similarly, let $M_{B}=\begin{pmatrix}0&0\\\
1&0\end{pmatrix}$. Then $(I\otimes X)\cdot\texttt{SWAP}$ is a $(1,1,0)$-block-
encoding of $M_{B}$. Again by Lemma 21, we can implement $U_{B}^{\prime}$, an
$(\alpha_{B},a_{B}+1,\varepsilon_{B})$-block-encoding of $M_{B}\otimes
B=\begin{pmatrix}0&0\\\ B&0\end{pmatrix}$ at a cost of $\order{T_{B}}$. We add
them using Corollary 18 on $U_{A}^{\prime}$ and $U_{B}^{\prime}$ to implement
$U_{A_{B}}$, an
$(\alpha_{A}+\alpha_{B},2+\max(a_{A},a_{B}),\varepsilon_{A}+\varepsilon_{B})$-block-
encoding of $A_{B}=\begin{pmatrix}A&0\\\ B&0\end{pmatrix}$. This can be
implemented at a cost of $\order{T_{A}+T_{B}}$. ∎
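The single-qubit encodings used in this proof are easy to verify directly. The NumPy sketch below (illustrative only) checks that the top-left block of SWAP is $M_{A}$, that the top-left block of $(I\otimes X)\cdot\texttt{SWAP}$ is $M_{B}$, and that the corresponding tensor products assemble the augmented matrix $A_{B}$.

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
X = np.array([[0, 1], [1, 0]])

# Top-left 2x2 blocks: SWAP encodes M_A and (I x X) SWAP encodes M_B.
M_A = SWAP[:2, :2]
M_B = (np.kron(np.eye(2), X) @ SWAP)[:2, :2]
assert np.allclose(M_A, [[1, 0], [0, 0]])
assert np.allclose(M_B, [[0, 0], [1, 0]])

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Tensor products reproduce the two halves of the augmented matrix.
top = np.kron(M_A, A)    # [[A, 0], [0, 0]]
bot = np.kron(M_B, B)    # [[0, 0], [B, 0]]
A_B = np.block([[A, np.zeros((3, 3))], [B, np.zeros((3, 3))]])
assert np.allclose(top + bot, A_B)
print("augmented-matrix construction verified")
```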
### 3.2 Robust Quantum Singular Value Discrimination
The problem of deciding whether the eigenvalues of a Hamiltonian lie above or
below a certain threshold, known as eigenvalue discrimination, finds
widespread applications. For instance, the problem of determining whether the
ground energy of a generic local Hamiltonian is $<\lambda_{a}$ or
$>\lambda_{b}$ is known to be QMA-Complete [21]. Nevertheless, quantum
eigenvalue discrimination has been useful in preparing ground states of
Hamiltonians. Generally, a variant of quantum phase estimation, which
effectively performs a projection onto the eigenbasis of the underlying
Hamiltonian, is used to perform eigenvalue discrimination [16]. Recently, it
has been shown that QSVT can be used to approximate a projection onto the
eigenspace of an operator by implementing a polynomial approximation of the
sign function [29]. This was then used to design improved quantum algorithms
for ground state preparation.
In our work, we design a more general primitive, known as Quantum Singular
Value Discrimination (QSVD). Instead of eigenvalues, the algorithm
distinguishes whether a singular value $\sigma$ is $\leq\sigma_{a}$ or
$\geq\sigma_{b}$. This is particularly useful when the block-encoded matrix is
not necessarily Hermitian and hence, may not have well-defined eigenvalues. We
use this procedure to develop a more space-efficient variable stopping time
matrix inversion algorithm in Section 3.3. Owing to the widespread use of
singular values in a plethora of fields, we believe that our QSVD procedure is
of independent interest.
Let us define the sign function $\mathrm{sign}:\mathbb{R}\to\mathbb{R}$ as
follows:
$\mathrm{sign}(x)=\begin{cases}-1&\quad x<0\\\ 0&\quad x=0\\\ 1&\quad x>0.\\\
\end{cases}$ (22)
Given a threshold singular value $c$, Low and Chuang [24] showed that there
exists a polynomial approximation to $\mathrm{sign}(c-x)$ (based on its
approximation of the erf function). We use the result of Ref. [34], where such
a polynomial of even parity was considered. This is crucial, as for even
polynomials, QSVT maps right (left) singular vectors to right (left) singular
vectors, which enables us to use the polynomial in [34] for singular value
discrimination.
###### Lemma 23 (Polynomial approximation to the sign function [24, 28, 34]).
For any $\varepsilon,\Delta,c\in(0,1)$, there exists an efficiently computable
even polynomial $P_{\varepsilon,\Delta,c}(x)$ of degree
$l=\order{\frac{1}{\Delta}\log(1/\varepsilon)}$ such that
* 1\.
$\forall x\in[0,1]\colon\absolutevalue{P_{\varepsilon,\Delta,c}(x)}\leq 1$
* 2\.
$\forall
x\in[0,1]\setminus\left(c-\frac{\Delta}{2},c+\frac{\Delta}{2}\right)\colon\absolutevalue{P_{\varepsilon,\Delta,c}(x)-\mathrm{sign}(c-x)}\leq\varepsilon$
Therefore, given a matrix $A$ with singular values between $[0,1]$, we can use
QSVT to implement $P_{\varepsilon,\Delta,c}(A)$ which correctly distinguishes
between singular values of $A$ whose value is less than $c-\Delta/2$ and those
whose value is greater than $c+\Delta/2$. For our purposes, we shall consider
that we are given $U_{A}$, which is an $(\alpha,a,\varepsilon)$ block-encoding
of a matrix $A$. Our goal would be to distinguish whether a certain singular
value $\sigma$ satisfies $0\leq\sigma\leq\varphi$ or $2\varphi\leq\sigma\leq
1$. Since $U_{A}$ (approximately) implements $A/\alpha$, the task can be
rephrased as distinguishing whether a singular value of $A/\alpha$ is in
$[0,\varphi/\alpha]$ or in $[2\varphi/\alpha,1]$. For this, we develop a
robust version of quantum singular value discrimination,
$\mathrm{QSVD}(\varphi,\delta)$, which specifies the precision $\varepsilon$
required in the input block-encoding to commit an error of at most $\delta$.
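Since the polynomial of Lemma 23 is constructed from a polynomial approximation of the erf function [24, 34], its behavior can be previewed numerically. The Python sketch below is illustrative: the parameter values are arbitrary, and we use $\mathrm{erf}(k(c-x))$ itself as a proxy for the polynomial. It checks that a steepness $k=\order{\sqrt{\log(1/\varepsilon)}/\Delta}$ already brings the proxy within $\varepsilon$ of $\mathrm{sign}(c-x)$ outside the $\Delta$-window; approximating this erf by a polynomial contributes the remaining factor, consistent with the overall degree $\order{\frac{1}{\Delta}\log(1/\varepsilon)}$.

```python
import numpy as np
from scipy.special import erf

# erf proxy for the sign-function polynomial of Lemma 23.
c, Delta, eps = 0.5, 0.05, 1e-3
k = 2.0 * np.sqrt(np.log(1.0 / eps)) / Delta   # steepness ~ sqrt(log(1/eps))/Delta

x = np.linspace(0.0, 1.0, 20001)
outside = np.abs(x - c) >= Delta / 2           # x outside (c - D/2, c + D/2)
err = np.abs(erf(k * (c - x)) - np.sign(c - x))
print(f"max error outside the window: {err[outside].max():.2e} (target {eps})")
```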
###### Theorem 24 (Quantum Singular Value Discrimination using QSVT).
Suppose $A\in\mathbb{C}^{N\times N}$ is an $s$-qubit operator (where
$N=2^{s}$) with singular value decomposition
$A=\sum_{j\in[N]}\sigma_{j}\outerproduct{u_{j}}{v_{j}}$ such that all
$\sigma_{j}$ lie in the range $[0,1]$. Let
$\varphi\in\left(0,\frac{1}{2}\right)$ and $\delta\in(0,1]$ be some
parameters. Suppose that for some $\alpha\geq 2$ and $\varepsilon$ satisfying
$\varepsilon=o\left(\frac{\delta\varphi}{\log(1/\delta)}\right)$
we have access to $U_{A}$, an $(\alpha,a,\varepsilon)$-block-encoding of $A$
implemented in cost $T_{A}$. Then there exists a quantum algorithm
$QSVD(\varphi,\delta)$ which implements a $(1,a+1,\delta)$-block-encoding of
some $(s+1)$-qubit operator $D\in\mathbb{C}^{2N\times 2N}$ satisfying the
following constraints for all $j\in[N]$:
* •
$\sigma_{j}\leq\varphi\implies D\ket{0}\ket{v_{j}}=\ket{0}\ket{v_{j}}$
* •
$\sigma_{j}\geq 2\varphi\implies D\ket{0}\ket{v_{j}}=\ket{1}\ket{v_{j}}$
This algorithm has a cost of
$\order{\frac{\alpha}{\varphi}\log\left(\frac{1}{\delta}\right)T_{A}}.$
###### Proof.
We invoke Lemma 23 with parameters $\varepsilon^{\prime}:=\frac{\delta}{2}$,
$c:=\frac{3\varphi}{2\alpha}$ and $\Delta:=\frac{\varphi}{2\alpha}$, to
construct an even polynomial $P:=P_{\varepsilon^{\prime},\Delta,c}$ of degree
$n:=\order{\frac{\alpha}{\varphi}\log(\frac{1}{\varepsilon^{\prime}})}$, which
is an $\varepsilon^{\prime}$-approximation of
$f(x):=\text{sign}\left(\frac{3\varphi}{2\alpha}-x\right)$ for
$x\in\left[0,\frac{\varphi}{\alpha}\right]\cup\left[\frac{2\varphi}{\alpha},1\right]$.
Invoking Theorem 8 with $P$ and $U_{A}$, we get $U_{B}$ – a
$(1,a+1,\gamma)$-block-encoding of $B:=P(A/\alpha)$, implemented in cost
$\order{nT_{A}}$, where $\varepsilon$ must satisfy
$\varepsilon\leq\alpha\gamma/2n$.
Now consider the following unitary $W$ that acts on $s+a+2$ qubits:
$W:=\texttt{SWAP}_{[s,s+a+1]}^{\dagger}(H\otimes
I_{s+a+1})\left(C\text{-}U_{B}\right)(H\otimes
I_{s+a+1})\texttt{SWAP}_{[s,s+a+1]}$
$W$ is the required block-encoding of $D$. Here, $\texttt{SWAP}_{[l,r]}$
sequentially swaps adjacent qubits with indices in the range $[l,r]$,
effectively moving the qubit indexed $l$ to the right of qubit $r$ (qubits are
zero-indexed, with higher indices for ancillas). Let $\tilde{B}$ be the top-left
block of $U_{B}$ (therefore $\norm{B-\tilde{B}}\leq\gamma$). Then we can
extract $\tilde{D}$, the top-left block of $W$ as follows:
$\displaystyle\tilde{D}$ $\displaystyle=\left(\bra{0}^{\otimes a+1}\otimes
I_{s+1}\right)\texttt{SWAP}_{[s,s+a+1]}^{\dagger}\left(\outerproduct{+}{+}\otimes
I_{s+a+1}+\outerproduct{-}{-}\otimes
U_{B}\right)\texttt{SWAP}_{[s,s+a+1]}\left(\ket{0}^{\otimes a+1}\otimes
I_{s+1}\right)$ $\displaystyle=\outerproduct{+}{+}\otimes
I_{s}+\outerproduct{-}{-}\otimes\tilde{B}$
Let us define index sets $L,R\subseteq[N]$ where
$L:=\\{j\in[N]\mid\sigma_{j}\leq\varphi\\}$ and
$R:=\\{j\in[N]\mid\sigma_{j}\geq 2\varphi\\}$; and the corresponding subspace
projections $\Pi_{L}:=\sum_{j\in L}\outerproduct{v_{j}}{v_{j}}$,
$\Pi_{R}:=\sum_{j\in R}\outerproduct{v_{j}}{v_{j}}$, and
$\Pi_{\perp}:=I_{s}-\Pi_{L}-\Pi_{R}$. Using these we pick our required
operator $D$ as follows:
$D:=I\otimes\Pi_{L}+X\otimes\Pi_{R}+\tilde{D}(I\otimes\Pi_{\perp})$
That is, $D$ behaves as expected on the required subspace, and acts identical
to $\tilde{D}$ on the remaining space. The error in the block-encoding can be
computed as
$\displaystyle\norm{D-\tilde{D}}$
$\displaystyle=\norm{I\otimes\Pi_{L}+X\otimes\Pi_{R}+\tilde{D}(I\otimes\Pi_{\perp})-\tilde{D}}$
$\displaystyle=\norm{I\otimes\Pi_{L}+X\otimes\Pi_{R}-\tilde{D}(I\otimes(\Pi_{L}+\Pi_{R}))}$
$\displaystyle=\norm{(I\otimes I_{s}-\tilde{D})(I\otimes\Pi_{L})+(X\otimes
I_{s}-\tilde{D})(I\otimes\Pi_{R})}$
$\displaystyle=\norm{\left(\outerproduct{-}{-}\otimes(I_{s}-\tilde{B})\right)(I\otimes\Pi_{L})-\left(\outerproduct{-}{-}\otimes(I_{s}+\tilde{B})\right)(I\otimes\Pi_{R})}$
$\displaystyle=\norm{(I_{s}-\tilde{B})\Pi_{L}-(I_{s}+\tilde{B})\Pi_{R}}$
$\displaystyle=\norm{(I_{s}-B)\Pi_{L}-(I_{s}+B)\Pi_{R}+(B-\tilde{B})(\Pi_{L}-\Pi_{R})}$
$\displaystyle\leq\norm{(I_{s}-P(A/\alpha))\Pi_{L}-(I_{s}+P(A/\alpha))\Pi_{R}}+\norm{B-\tilde{B}}\norm{\Pi_{L}-\Pi_{R}}$
$\displaystyle\leq\varepsilon^{\prime}+\gamma$
We can choose $\gamma=\delta/2$, therefore
$\varepsilon\leq\frac{\alpha\delta}{4n}=o\left(\frac{\delta\varphi}{\log(\frac{1}{\delta})}\right)$
∎
In Section 3.3, we develop a variable stopping time quantum algorithm for
matrix inversion using QSVT. In order to recast the usual matrix inversion to
the VTAA framework, we need to be able to apply this algorithm to specific
ranges of the singular values of the matrix. This is achieved by applying a
controlled QSVD algorithm, to determine whether the input singular vector
corresponds to a singular value less than (or greater than) a certain
threshold. Based on the outcome of controlled QSVD, the standard inversion
algorithm is applied. These two steps correspond to sub-algorithms
$\mathcal{A}_{j}$ of the VTAA framework.
In prior works such as Refs. [3, 9, 7], gapped phase estimation (GPE) was used
to implement this. GPE requires an additional register of
$\order{\log(\kappa)\log(1/\delta)}$ qubits to store the estimated phases. For
the whole VTAA procedure, $\log\kappa$ such registers are needed. As a result,
substituting GPE with QSVD, we save $\order{\log^{2}(\kappa)\log(1/\delta)}$
qubits.
### 3.3 Variable-Time Quantum Algorithm for Matrix Inversion using QSVT
Matrix inversion by QSVT applies a polynomial approximation of $f(x)=1/x$,
satisfying the constraints laid out in Section 2.3. Here, we make use of the
result of [34] to implement $A^{+}$. We adapt their result to the scenario
where we have an approximate block-encoding of $A$ as input. Finally, we
convert this to a variable stopping time quantum algorithm and apply VTAA to
obtain a linear dependence on the condition number of $A$.
###### Lemma 25 (Matrix Inversion polynomial (Appendix C of [34])).
Given $\kappa\geq 1,\varepsilon\in\mathbb{R}^{+}$, there exists an odd QSP
polynomial $P_{\varepsilon,\kappa}^{\text{MI}}$ of degree
$\order{\kappa\log(\kappa/\varepsilon)}$, which is an
$\frac{\varepsilon}{2\kappa}$ approximation of the function
$f(x)=\frac{1}{2\kappa x}$ in the range
$\mathcal{D}:=[-1,-\frac{1}{\kappa}]\cup[\frac{1}{\kappa},1]$. Also in this
range $P_{\varepsilon,\kappa}^{\text{MI}}$ is bounded from above by $1$, i.e.
$\forall
x\in\mathcal{D}:\left|P_{\varepsilon,\kappa}^{\text{MI}}(x)\right|\leq 1$.
###### Theorem 26 (Inverting Normalized Matrices using QSVT).
Let $A$ be a normalized matrix with non-zero singular values in the range
$[1/\kappa_{A},1]$ for some $\kappa_{A}\geq 1$. Let $\delta\in(0,1].$ For some
$\varepsilon=o\left(\frac{\delta}{\kappa_{A}^{2}\log\left({\kappa_{A}}/{\delta}\right)}\right)$
and $\alpha\geq 2$, let $U_{A}$ be an $(\alpha,a,\varepsilon)$-block-encoding
of $A$, implemented in time $T_{A}$. Then we can implement a
$(2\kappa_{A},a+1,\delta)$-block-encoding of $A^{+}$ at a cost of
$\order{\kappa_{A}\alpha\log\left(\frac{\kappa_{A}}{\delta}\right)T_{A}}.$
###### Proof.
We use the matrix inversion polynomial defined in Lemma 25,
$P:=P^{MI}_{\phi,\kappa}$ for this task, with $\kappa=\kappa_{A}\alpha$ and an
appropriate $\phi$. This has a degree of
$n:=\order{\kappa_{A}\alpha\log\left(\kappa_{A}\alpha/\phi\right)}$. We invoke
Theorem 8 to apply QSVT using the polynomial $P$ above, block-encoding
$U_{A}$, and an appropriate error parameter $\gamma$ such that
$\varepsilon\leq\alpha\gamma/2n$, to get the unitary $U$, a
$(1,a+1,\gamma)$-block-encoding of $P(A/\alpha)$. As $P$ is a
$(\phi/2\kappa)$-approximation of $f(x):=1/2\kappa x$, we have
$\displaystyle\norm{f(A/\alpha)-P(A/\alpha)}\leq\frac{\phi}{2\kappa},$
which implies $U$ is a $(1,a+1,\gamma+\phi/2\kappa)$-block-encoding of
$f(A/\alpha)$. And because $f(A/\alpha)=\frac{\alpha
A^{+}}{2\kappa}=A^{+}/2\kappa_{A}$, we can re-interpret $U$ as a
$(2\kappa_{A},a+1,2\kappa_{A}\gamma+\phi/\alpha)$-block-encoding of $A^{+}$.
Choosing $2\kappa_{A}\gamma=\phi/\alpha=\delta/2$, the final block-encoding
has an error of $\delta$. This gives us $\phi=\alpha\delta/2$ and
$\gamma=\delta/4\kappa_{A}$, and
$\varepsilon\leq\frac{\alpha\gamma}{2n}=\frac{\alpha\delta}{8\kappa_{A}n}=\order{\frac{\delta}{\kappa_{A}^{2}\log(\kappa_{A}/\delta)}}$
∎
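The effect of Theorem 26 can be emulated classically by applying $f(x)=\frac{1}{2\kappa_{A}x}$ directly to the singular values. The NumPy sketch below is illustrative only: it performs the ideal singular value transformation rather than the QSP polynomial, and it glosses over the bookkeeping by which QSVT assigns left and right singular vectors. Rescaling by the subnormalization $2\kappa_{A}$ then recovers the pseudoinverse exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
d, kappa = 5, 10.0

# Normalized matrix with singular values in [1/kappa, 1].
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
sig = np.linspace(1.0 / kappa, 1.0, d)
A = U @ np.diag(sig) @ V.T

# Ideal singular value transformation with f(x) = 1/(2*kappa*x); the odd QSP
# polynomial of Lemma 25 approximates f on [1/kappa, 1] and stays bounded by 1.
f = 1.0 / (2.0 * kappa * sig)
f_SV_A = V @ np.diag(f) @ U.T        # left/right singular vectors exchanged

# Rescaling by the subnormalization 2*kappa recovers the pseudoinverse.
assert np.allclose(2.0 * kappa * f_SV_A, np.linalg.pinv(A))
print("singular-value inversion matches the pseudoinverse")
```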
Next, we design a map $W(\gamma,\delta)$ that uses QSVT to invert the singular
values of a matrix if they belong to a particular domain. This helps us recast
the usual matrix inversion algorithm as a variable-stopping-time algorithm and
will be a key subroutine for boosting the success probability of this
algorithm using VTAA. This procedure was also used in Refs. [9, 7] for the
quantum linear systems algorithms.
###### Theorem 27 (Efficient inversion of block-encoded matrix).
Let $A$ be a normalized matrix with non-zero singular values in the range
$\left[1/\kappa,1\right]$, for some $\kappa\geq 1$. Let
$\delta\in(0,1];\leavevmode\nobreak\ 0<\gamma\leq 1$. Let $U_{A}$ be an
$(\alpha,a,\varepsilon)$-block-encoding of $A$ implemented in time $T_{A}$,
such that $\alpha\geq 2$ and
$\varepsilon=o\left(\frac{\delta\gamma^{2}}{\log\left(\frac{1}{\delta\gamma}\right)}\right)$
. Then for any quantum state $\ket{b}$ that is spanned by the left singular
vectors of $A$ corresponding to the singular values in the range
$\left[\gamma,1\right]$, there exists a unitary $W(\gamma,\delta)$ that
implements
$W(\gamma,\delta):\ket{0}_{F}\ket{0}_{Q}\ket{b}_{I}\mapsto\dfrac{1}{a_{\max}}\ket{1}_{F}\ket{0}_{Q}f(A)\ket{b}_{I}+\ket{0}_{F}\ket{\perp}_{QI}$
(23)
where $a_{\max}=\order{\kappa}$ is a constant independent of $\gamma$,
$\ket{\perp}_{QI}$ is an unnormalized quantum state orthogonal to
$\ket{0}_{Q}$, and $\norm{f(A)\ket{b}-A^{+}\ket{b}}\leq\delta$. Here $F$ is a
1-qubit flag register, $Q$ is an $a$-qubit ancilla register, and $I$ is
the $\log N$-qubit input register. This unitary has a cost
$\order{\frac{\alpha}{\gamma}\log\left(\frac{1}{\gamma\delta}\right)T_{A}}$
(24)
###### Proof.
Since we only need to invert the singular values in a particular range, we can
use the procedure in Theorem 26 with $\kappa_{A}$ modified to the restricted
range. That gives us the description of a quantum circuit
$\widetilde{W}(\gamma,\delta)$ that can implement the following map
$\widetilde{W}(\gamma,\delta):\ket{b}_{I}\ket{0}_{Q}\mapsto\frac{\gamma}{2}f(A)\ket{b}_{I}\ket{0}_{Q}+\ket{\perp}_{QI},$
where $\ket{\perp}$ is an unnormalized state with no component along
$\ket{0}_{Q}$. This has the same cost as Equation 24. Here
$\norm{f(A)\ket{\psi}-A^{+}\ket{\psi}}\leq\delta$ whenever $\ket{\psi}$ is a
unit vector in the span of the singular vectors of $A$ corresponding to the
singular values in $[\gamma,1]$. This follows from the sub-multiplicativity
property of the matrix-vector product.
Next, we must transform the amplitude of the good part of the state to
$a_{\max}^{-1}=\Theta(1/\kappa)$, independent of $\gamma$. To achieve this, we
flag the good part with an ancillary qubit and use a controlled rotation to
modify the amplitude. Thus we add a single-qubit register $\ket{0}_{F}$ and
flip this register controlled on register $Q$ being in the state $\ket{0}$
(the good part). This gives us the transformation
$\widetilde{W}^{\prime}(\gamma,\delta):\ket{0}_{F}\ket{b}_{I}\ket{0}_{Q}\mapsto\frac{\gamma}{2}\ket{1}_{F}f(A)\ket{b}_{I}\ket{0}_{Q}+\ket{0}_{F}\ket{\perp}_{QI}$
Then we use a controlled rotation to replace the amplitude $\gamma/2$ with the
constant $a_{\max}^{-1}$, which is independent of $\gamma$. This is achieved
by applying the rotation
$\ket{1}_{F}\mapsto\frac{2}{\gamma
a_{\max}}\ket{1}_{F}+\sqrt{1-\frac{4}{\gamma^{2}a^{2}_{\max}}}\ket{0}_{F}.$
This gives us the desired $W(\gamma,\delta)$ as in Equation 23. ∎
Given such a unitary $W(\gamma,\delta)$, Ref. [7] laid out a procedure for a
variable time quantum algorithm $\mathcal{A}$ that takes as input the block
encoding of an $N\times d$ matrix $A$, and a state preparation procedure
$U_{b}:\ket{0}^{\otimes n}\mapsto\ket{b}$, and outputs a quantum state that is
a bounded distance away from $A^{+}\ket{b}/\norm{A^{+}\ket{b}}$. In order to
determine the branches of the algorithm on which to apply VTAA at a particular
iteration, [9, 7, 3] use the technique of gapped phase estimation, which given
a unitary $U$, a threshold $\phi$, and one of its eigenstates $\ket{\lambda}$,
decides whether the corresponding eigenvalue is a bounded distance below the
threshold or a bounded distance above it. In this work, we replace gapped
phase estimation with the QSVD algorithm (Theorem 24) which can be applied
directly to any block-encoded (not necessarily Hermitian) matrix $A$, and
allows for saving on $\order{\log^{2}(\kappa/\delta)}$ qubits.
The Variable time Algorithm: This algorithm will be a sequence of $m$ sub-
algorithms
$\mathcal{A}=\mathcal{A}_{m}\cdot\mathcal{A}_{m-1}\cdot\ldots\mathcal{A}_{1}$,
where $m=\lceil\log\kappa\rceil+1$. The overall algorithm acts on the
following registers:
* •
$m$ single qubit clock registers $C_{i}:i\in[m]$.
* •
An input register $I$, initialized to $\ket{0}^{\otimes s}$.
* •
Ancillary register space $Q$ for the block encoding of $A$, initialized to
$\ket{0}^{\otimes a}$.
* •
A single qubit flag register $\ket{0}_{F}$ used to flag success of the
algorithm.
Once we have prepared the above state space, we use the state preparation
procedure to prepare the state $\ket{b}$. Now we can define how each
$\mathcal{A}_{j}$ acts on the state space. Let
$\varepsilon^{\prime}=\frac{\delta}{a_{\max}m}$. The action of
$\mathcal{A}_{j}$ can be broken down into two parts:
1. 1.
If $C_{j-1}\ldots C_{1}$ is in state $\ket{0}^{\otimes(j-1)}$, apply
QSVD$(2^{-j},\varepsilon^{\prime})$ (Theorem 24) to the state $\ket{b}$. The
output is written to the clock register $C_{j}$.
2. 2.
If the state of $C_{j}$ is now $\ket{1}$, apply
$W(2^{-j},\varepsilon^{\prime})$ to $I\otimes F\otimes Q$.
Additionally, we would need algorithms
$\mathcal{A}^{\prime}=\mathcal{A}^{\prime}_{m}\cdots\mathcal{A}^{\prime}_{1}$
which are similar to $\mathcal{A}$, except that in Step 2, it implements
$W^{\prime}$ which sets the flag register to $1$. That is,
$W^{\prime}\ket{b}_{I}\ket{0}_{F}\ket{0}_{Q}=\ket{b}_{I}\ket{1}_{F}\ket{0}_{Q}.$
Now we are in a position to define the variable time quantum linear systems
algorithm using QSVT.
###### Theorem 28 (Variable Time Quantum Linear Systems Algorithm Using
QSVT).
Let $\varepsilon,\delta>0$. Let $A$ be a normalized $N\times d$ matrix such
that its non-zero singular values lie in $[1/\kappa,1]$. Suppose that for
$\varepsilon=o\left(\frac{\delta}{\kappa^{3}\log^{2}\left(\frac{\kappa}{\delta}\right)}\right),$
we have access to $U_{A}$ which is an $(\alpha,a,\varepsilon)$-block-encoding
of $A$, implemented with cost $T_{A}$. Let $\ket{b}$ be a state vector which
is spanned by the left singular vectors of $A$. Suppose there exists a
procedure to prepare the state $\ket{b}$ in cost $T_{b}$. Then there exists a
variable time quantum algorithm that outputs a state that is $\delta$-close to
$\frac{A^{+}\ket{b}}{\norm{A^{+}\ket{b}}}$ at a cost of
$\order{\kappa\log\kappa\left(\alpha
T_{A}\log\left(\frac{\kappa}{\delta}\right)+T_{b}\right)}$ (25)
using $\order{\log\left(\kappa\right)}$ additional qubits.
###### Proof.
The correctness of the algorithm is similar to that of Refs. [9, 7], except
here, we use QSVD instead of gapped phase estimation. According to Lemma 10,
we need $T_{\max}$ (the maximum time any of the sub-algorithms
$\mathcal{A}_{j}$ takes), $\norm{T}_{2}^{2}$ (the $\ell_{2}$-averaged stopping
time of the sub-algorithms), and $\sqrt{p_{\mathrm{succ}}}$ (the square root
of the success probability). Each sub-algorithm consists of two steps:
applying QSVD with precision $2^{-j}$ and error $\varepsilon^{\prime}$,
followed by $W(2^{-j},\varepsilon^{\prime})$. From Theorem 24, the first step
costs
$\order{\alpha T_{A}2^{j}\log\left(\frac{1}{\varepsilon^{\prime}}\right)},$
and the cost of implementing $W(2^{-j},\varepsilon^{\prime})$ is as described
in Equation 24. Thus the overall cost of $\mathcal{A}_{j}$, which is the sum
of these two costs, turns out to be
$\order{\alpha T_{A}2^{j}\log\left(\frac{2^{j}}{\varepsilon^{\prime}}\right)}$
(26)
Note that the time $t_{j}$ required to implement
$\mathcal{A}_{j}\ldots\mathcal{A}_{1}$ is also the same as Equation 26. Also,
$\displaystyle T_{\max}$ $\displaystyle=\max_{j}\leavevmode\nobreak\ t_{j}$
$\displaystyle=\max_{j}\leavevmode\nobreak\ \order{\alpha
T_{A}2^{j}\log\left(\frac{2^{j}}{\varepsilon^{\prime}}\right)}$
$\displaystyle=\order{\alpha
T_{A}\kappa\log\left(\frac{\kappa}{\varepsilon^{\prime}}\right)}$
$\displaystyle=\order{\alpha
T_{A}\kappa\log\left(\frac{\kappa\log\left(\kappa\right)}{\delta}\right)}.$
$\norm{T}_{2}^{2}$ depends on the probability that $\mathcal{A}$ stops at the
$j^{\mathrm{th}}$ step. This is given by
$p_{j}=\norm{\Pi_{C_{j}}\mathcal{A}_{j}\ldots\mathcal{A}_{1}\ket{\psi}_{I}\ket{0}_{CFPQ}}^{2}$,
where $\Pi_{C_{j}}$ is the projector on $\ket{1}_{C_{j}}$, the
$j^{\mathrm{th}}$ clock register. From this, $\norm{T}_{2}^{2}$ can be
calculated as
$\displaystyle\norm{T}_{2}^{2}$ $\displaystyle=\sum_{j}p_{j}t_{j}^{2}$
$\displaystyle=\sum_{j}\norm{\Pi_{C_{j}}\mathcal{A}_{j}\ldots\mathcal{A}_{1}\ket{\psi}_{I}\ket{0}_{CFPQ}}^{2}t_{j}^{2}$
$\displaystyle=\sum_{k}\absolutevalue{c_{k}}^{2}\sum_{j}\left(\norm{\Pi_{C_{j}}\mathcal{A}_{j}\ldots\mathcal{A}_{1}\ket{v_{k}}_{I}\ket{0}_{CFPQ}}^{2}t_{j}^{2}\right)$
$\displaystyle=\order{\alpha^{2}T_{A}^{2}\sum_{k}{\log^{2}\left(\frac{1}{\sigma_{k}\varepsilon^{\prime}}\right)\frac{\absolutevalue{c_{k}}^{2}}{\sigma_{k}^{2}}}}$
Therefore
$\norm{T}_{2}=\order{\alpha
T_{A}\log\left(\frac{\kappa\log{\kappa}}{\delta}\right)\sqrt{\sum_{k}\frac{\absolutevalue{c_{k}}^{2}}{\sigma_{k}^{2}}}}.$
(27)
Next we calculate the success probability.
$\displaystyle\sqrt{p_{\mathrm{succ}}}$
$\displaystyle=\norm{\Pi_{F}\frac{A^{+}}{a_{\max}}\ket{b}_{I}\ket{\phi}_{CFPQ}}+\order{m\varepsilon^{\prime}}$
$\displaystyle=\frac{1}{a_{\max}}\sqrt{\sum_{j}\frac{\absolutevalue{c_{j}}^{2}}{\sigma_{j}^{2}}}+\order{\frac{\delta}{a_{\max}}}$
$\displaystyle=\Omega\left(\frac{1}{\kappa}\sqrt{\sum_{j}\frac{\absolutevalue{c_{j}}^{2}}{\sigma_{j}^{2}}}\right)$
Given these, we can use Lemma 10 to write the final complexity of matrix
inversion with VTAA:
$\displaystyle
T_{\max}+T_{b}+\frac{(\norm{T}_{2}+T_{b})\log\left(T^{\prime}_{\max}\right)}{\sqrt{p_{\textrm{succ}}}}=\order{\kappa\log\kappa\left(\alpha
T_{A}\log\left(\frac{\kappa}{\delta}\right)+T_{b}\right)}$
The upper bound on the precision required for the input block-encoding,
$\varepsilon$, can be calculated from the bounds on the precisions for
$W(\kappa,\varepsilon^{\prime})$ (Theorem 27) and
$\mathrm{QSVD}(\kappa,\varepsilon^{\prime})$ (Theorem 24) as follows:
$\displaystyle\varepsilon=o\left(\min\left(\frac{\varepsilon^{\prime}}{\kappa^{2}\log\left(\frac{\kappa}{\varepsilon^{\prime}}\right)},\frac{\varepsilon^{\prime}}{\kappa\log\left(\frac{1}{\varepsilon^{\prime}}\right)}\right)\right)=o\left({\frac{\varepsilon^{\prime}}{\kappa^{2}\log\left(\frac{\kappa}{\varepsilon^{\prime}}\right)}}\right)=o\left(\frac{\delta}{\kappa^{3}\log^{2}\left(\frac{\kappa}{\delta}\right)}\right)$
∎
The overall complexity is better by a $\log$ factor and requires
$\order{\log^{2}(\kappa/\delta)}$ fewer additional qubits as compared to the
variable time algorithms in Refs. [9, 7].
### 3.4 Negative Powers of Matrices using QSVT
We consider the problem: given an approximate block-encoding of a matrix $A$,
we need to prepare a block-encoding of $A^{-c}$, where $c\in(0,1)$. This
procedure will be used to develop algorithms for $\ell_{2}$-regularized
versions of GLS. We will directly use the results of [14].
###### Lemma 29 (Polynomial approximations of negative power functions, [13],
Corollary 67).
Let $\varepsilon,\delta\in(0,\frac{1}{2}],c>0$ and let
$f(x):=\frac{\delta^{c}}{2}x^{-c}$, then there exist even/odd polynomials
$P_{c,\varepsilon,\delta},P^{\prime}_{c,\varepsilon,\delta}\in\mathbb{R}[x]$
such that $\norm{P_{c,\varepsilon,\delta}-f}_{[\delta,1]}\leq\varepsilon$,
$\norm{P_{c,\varepsilon,\delta}}_{[-1,1]}\leq 1$ and
$\norm{P^{\prime}_{c,\varepsilon,\delta}-f}_{[\delta,1]}\leq\varepsilon$,
$\norm{P^{\prime}_{c,\varepsilon,\delta}}_{[-1,1]}\leq 1$. Moreover, the degree
of the polynomials is
$\order{\frac{\max(1,c)}{\delta}\log\left(\frac{1}{\varepsilon}\right)}$.
###### Theorem 30 (Negative fractional powers of a normalized matrix using
QSVT).
Let $c\in(0,1)$ be some constant and $\delta\in(0,1]$. Let $A$ be a normalized
matrix with non-zero singular values in the range $[1/\kappa,1]$. Let $U_{A}$
be an $(\alpha,a,\varepsilon)$-block-encoding of a matrix $A$, implemented in
time $T_{A}$, such that $\alpha\geq 2$ and
$\varepsilon=o\left(\frac{\delta}{\kappa^{c+1}\log(\kappa/\delta)}\right).$
Then we can construct a $(2\kappa^{c},a+1,\delta)$-block-encoding of $A^{-c}$
at a cost of
$\order{\alpha\kappa\log\left(\frac{\kappa}{\delta}\right)\leavevmode\nobreak\
T_{A}}.$
###### Proof.
From Lemma 29, using $\Delta:=\frac{1}{\kappa\alpha}$ and an appropriate
$\varphi\in(0,\frac{1}{2}]$, we get an even QSP polynomial
$P:=P_{c,\varphi,\Delta}$ which is $\varphi$-close to
$f(x):=\frac{1}{2\kappa^{c}\alpha^{c}x^{c}}$, and has degree $n$ such that
$n=\order{\alpha\kappa\log\left(\frac{1}{\varphi}\right)}$. Therefore
$\norm{f(A/\alpha)-P(A/\alpha)}\leq\varphi.$
Using Theorem 8 we can construct $U_{P}$, a $(1,a+1,\gamma)$-block-encoding of
$P(A/\alpha)$, given that $\varepsilon\leq\frac{\alpha\gamma}{2n}$. Then from
triangle inequality it follows that it is a $(1,a+1,\leavevmode\nobreak\
\varphi+\gamma)$-block-encoding of $f(A/\alpha)$. And because
$f(A/\alpha)=\frac{A^{-c}}{2\kappa^{c}}$, $U_{P}$ can be re-interpreted as a
$(2\kappa^{c},a+1,2\kappa^{c}(\varphi+\gamma))$-block-encoding of $A^{-c}$. We
therefore choose $\varphi=\gamma=\frac{\delta}{4\kappa^{c}}$, and choose
$\varepsilon$ as
$\varepsilon=o\left(\alpha\frac{\delta}{4\kappa^{c}}\frac{1}{\alpha\kappa\log(4\kappa^{c}/\delta)}\right)=o\left(\frac{\delta}{\kappa^{c+1}\log(\kappa/\delta)}\right)$
∎
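Analogously, the ideal transformation behind Theorem 30 can be emulated classically. The NumPy/SciPy sketch below (illustrative; we take $A$ symmetric positive definite so that $A^{-c}$ has an unambiguous meaning) applies $f(x)=x^{-c}/(2\kappa^{c})$ to the spectrum and checks that rescaling by $2\kappa^{c}$ recovers $A^{-c}$.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(3)
d, kappa, c = 5, 16.0, 0.5

# Symmetric positive definite A with eigenvalues in [1/kappa, 1].
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
sig = np.linspace(1.0 / kappa, 1.0, d)
A = Q @ np.diag(sig) @ Q.T

# Ideal transformation f(x) = x^{-c} / (2*kappa^c), which the even polynomial
# of Lemma 29 approximates on [1/kappa, 1].
f_SV_A = Q @ np.diag(sig ** (-c) / (2.0 * kappa ** c)) @ Q.T

# Rescaling by the subnormalization 2*kappa^c recovers A^{-c}.
assert np.allclose(2.0 * kappa ** c * f_SV_A, fractional_matrix_power(A, -c))
print("negative fractional power verified")
```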
Having discussed the necessary algorithmic primitives, we are now in a
position to design quantum algorithms for linear regression with general
$\ell_{2}$-regularization. We will first deal with ordinary least squares
followed by weighted and generalized least squares.
## 4 Quantum Least Squares with General $\ell_{2}$-Regularization
In this section, we derive the main results of our paper, namely quantum
algorithms for quantum ordinary least squares (OLS), quantum weighted least
squares (WLS) and quantum generalized least squares (GLS) with
$\ell_{2}$-regularization.
### 4.1 Quantum Ordinary Least Squares
Given $N$ data points $\\{a_{i},b_{i}\\}_{i=1}^{N}$ such that
$a_{i}\in\mathbb{R}^{d}$ and $b_{i}\in\mathbb{R}$, the objective of linear
regression is to find $x\in\mathbb{R}^{d}$ that minimizes the loss function
$\mathcal{L}_{O}=\sum_{i=1}^{N}(x^{T}a_{i}-b_{i})^{2}.$ (28)
Consider the $N\times d$ matrix $A$ (known as the data matrix) whose
$i^{\mathrm{th}}$ row is the transpose of the vector $a_{i}$, and let
$b=(b_{1}\cdots b_{N})^{T}$ be the column vector of observations. Then, the
solution to the OLS problem is given by $x=(A^{T}A)^{-1}A^{T}b=A^{+}b$.
For the $\ell_{2}$-regularized version of the OLS problem, a penalty term is
added to its objective function. This has the effect of shrinking the singular
values of $A$, which helps overcome problems such as rank deficiency and
overfitting in the OLS problem. The loss function to be minimized is of the
form
$\norm{Ax-b}^{2}_{2}+\lambda\norm{Lx}^{2}_{2},$ (29)
where $L$ is the $N\times d$ penalty matrix and $\lambda>0$ is the
regularization parameter. The solution $x\in\mathbb{R}^{d}$ satisfies
$x=(A^{T}A+\lambda L^{T}L)^{-1}A^{T}b.$ (30)
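Classically, Equation 30 is exactly the normal-equation solution of an ordinary least squares problem on the stacked matrix $\begin{pmatrix}A\\\ \sqrt{\lambda}L\end{pmatrix}$, which is the intuition behind the augmented-matrix construction used later in this section. A quick NumPy check on random data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
N, d, lam = 8, 3, 0.5
A = rng.standard_normal((N, d))
L = rng.standard_normal((N, d))
b = rng.standard_normal(N)

# Closed form of Equation 30.
x_closed = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)

# Same solution from ordinary least squares on the stacked system
# [A; sqrt(lam) L] x ~ [b; 0].
A_stack = np.vstack([A, np.sqrt(lam) * L])
b_stack = np.concatenate([b, np.zeros(N)])
x_stack, *_ = np.linalg.lstsq(A_stack, b_stack, rcond=None)

assert np.allclose(x_closed, x_stack)
print("regularized solution matches stacked least squares")
```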
Therefore, for quantum ordinary least squares with general
$\ell_{2}$-regularization, we assume that we have access to approximate
block-encodings of the data matrix $A$ and the penalty matrix $L$, along with
a procedure to prepare the quantum state
$\ket{b}=\sum_{j=1}^{N}b_{j}\ket{j}/\norm{b}$. Our algorithm outputs a
quantum state that is close to
quantum state that is close to
$\ket{x}=\dfrac{(A^{T}A+\lambda
L^{T}L)^{-1}A^{T}\ket{b}}{\norm{(A^{T}A+\lambda L^{T}L)^{-1}A^{T}\ket{b}}}.$
(31)
A straightforward quantum algorithm for this task would be the following: We
first construct block-encodings of $A^{T}A$ and $L^{T}L$, given
block-encodings of $A$ and $L$,
respectively (Using Lemma 19). We could then implement a block-encoding of
$A^{T}A+\lambda L^{T}L$ using these block encodings (By Lemma 17). On the
other hand, we could also prepare a quantum state proportional to
$A^{T}\ket{b}$ by using the block-encoding for $A$ and the unitary preparing
$\ket{b}$. Finally, using the block encoding of $A^{T}A+\lambda L^{T}L$, we
could implement a block-encoding of $(A^{T}A+\lambda L^{T}L)^{-1}$ (using
Theorem 26) and apply it to the state $A^{T}\ket{b}$. Although this procedure
would output a quantum state close to $\ket{x}$, it is not efficient. It is
easy to see that the inverse of $A^{T}A+\lambda L^{T}L$ would be implemented
with a complexity that has a quadratic dependence on the condition numbers of
$A$ and $L$. This would be undesirable as it would perform worse than the
unregularized quantum least squares algorithm, where one is able to implement
$A^{+}$ directly. However, it is possible to design a quantum algorithm that
performs significantly better than this.
The first observation is that it is possible to recast this problem as finding
the pseudoinverse of some augmented matrix. Given the data matrix
$A\in\mathbb{R}^{N\times d}$, the regularizing matrix $L\in\mathbb{R}^{N\times
d}$, let us define the following augmented matrix
$A_{L}:=\begin{pmatrix}A&0\\\ \sqrt{\lambda}L&0\end{pmatrix}.$ (32)
It is easy to see that the top left block of $A^{+}_{L}$ is $(A^{T}A+\lambda
L^{T}L)^{-1}A^{T}$, which is the required linear transformation to be applied
to $b$. Consequently, our strategy would be to implement a block-encoding of
$A_{L}$, given block-encodings of $A$ and $L$. Following this, we use matrix
inversion by QSVT to implement $A^{+}_{L}\ket{b}\ket{0}$. The first register
is left in the quantum state given in Equation 31.
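The claim about the top left block of $A^{+}_{L}$ is straightforward to verify numerically. The NumPy sketch below (random full-column-rank instances; the zero blocks are padded as in Equation 32) checks it against the closed-form expression:

```python
import numpy as np

rng = np.random.default_rng(5)
N, d, lam = 6, 3, 0.25
A = rng.standard_normal((N, d))
L = rng.standard_normal((N, d))

# Augmented matrix of Equation 32 (zero column blocks pad the shape).
A_L = np.block([[A, np.zeros((N, N))],
                [np.sqrt(lam) * L, np.zeros((N, N))]])

# Top-left d x N block of the pseudoinverse equals the regularized solver.
target = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T)
assert np.allclose(np.linalg.pinv(A_L)[:d, :N], target)
print("top-left block of A_L^+ verified")
```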
From this, it is clear that the complexity of our quantum algorithm would
depend on the effective condition number of the augmented matrix $A_{L}$. In
this regard, we shall assume that the penalty matrix $L$ is a good
regularizer. That is, $L$ is chosen such that it does not have zero singular
values (positive definite). This is a fair assumption as if $L$ has only non-
zero singular values, the minimum singular value of $A_{L}$ is guaranteed to
be lower bounded by the minimum singular value of $L$. This ensures that the
effective condition number of $A_{L}$ depends on $\kappa_{L}$, even when the
data matrix $A$ has zero singular values and $A^{T}A$ is not invertible.
Consequently, this also guarantees that regularized least squares provide an
advantage over their unregularized counterparts.
Next, we obtain bounds on the effective condition number of the augmented
matrix $A_{L}$ for a good regularizer $L$ via the following lemma:
###### Lemma 31 (Condition number and Spectral Norm of $A_{L}$).
Let the data matrix $A$ and the positive definite penalty matrix $L$ have
spectral norms $\norm{A}$ and $\norm{L}$, respectively. Furthermore, suppose
their effective condition numbers are upper bounded by $\kappa_{A}$ and
$\kappa_{L}$. Then the ratio between the maximum and minimum (non-zero)
singular values of $A_{L}$ is upper bounded by
$\kappa=\kappa_{L}\left(1+\frac{\norm{A}}{\sqrt{\lambda}\norm{L}}\right)$
We can also bound the spectral norm as
$\norm{A_{L}}=\Theta\left(\norm{A}+\sqrt{\lambda}\norm{L}\right)$
###### Proof.
To bound the spectral norm and condition number of $A_{L}$, consider the
eigenvalues of the following matrix:
$A_{L}^{T}A_{L}=\begin{pmatrix}A^{T}A+\lambda L^{T}L&0\\\ 0&0\end{pmatrix}$
This implies that the non-zero eigenvalues of $A_{L}^{T}A_{L}$ are the same as
those of $A^{T}A+\lambda L^{T}L$. Therefore, using triangle inequality, the
spectral norm of $A_{L}$ can be upper-bounded as follows:
$\displaystyle\norm{A_{L}}=\sqrt{\norm{A_{L}^{T}A_{L}}}=\sqrt{\norm{A^{T}A+\lambda
L^{T}L}}\leq\sqrt{\norm{A^{T}A}+\lambda\norm{L^{T}L}}=\sqrt{\norm{A}^{2}+\lambda\norm{L}^{2}}\leq\norm{A}+\sqrt{\lambda}\norm{L}$
Similarly $\norm{A_{L}}\geq\norm{A}$ and
$\norm{A_{L}}\geq\sqrt{\lambda}\norm{L}$, which effectively gives the tight
bound for $\norm{A_{L}}$.
As $L^{T}L$ is positive definite, the minimum singular value of $L$ satisfies
$\sigma_{\min}(L)\geq\norm{L}/\kappa_{L}$. We also know that $A^{T}A$ is
positive semidefinite, so by Weyl’s inequality, the minimum singular value
of $A_{L}$ is lower bounded by
$\displaystyle\sigma_{\min}\left(A_{L}\right)\geq\sqrt{\sigma_{\min}\left(A\right)^{2}+\lambda\sigma_{\min}\left(L\right)^{2}}\geq\sqrt{\lambda\frac{\norm{L}^{2}}{\kappa_{L}^{2}}}=\sqrt{\lambda}\frac{\norm{L}}{\kappa_{L}}$
Thus,
$\dfrac{\sigma_{\max}\left(A_{L}\right)}{\sigma_{\min}\left(A_{L}\right)}\leq\kappa=\kappa_{L}\left(1+\frac{\norm{A}}{\sqrt{\lambda}\norm{L}}\right)$
∎
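Both bounds of Lemma 31 can be spot-checked numerically; in the sketch below (NumPy, random instances with $L$ of full column rank so that $L^{T}L$ is positive definite), we take $\kappa_{A}$ and $\kappa_{L}$ to be the exact effective condition numbers.

```python
import numpy as np

def eff_cond(M, tol=1e-12):
    """Ratio of the largest to the smallest non-zero singular value."""
    s = np.linalg.svd(M, compute_uv=False)
    s = s[s > tol]
    return s.max() / s.min()

rng = np.random.default_rng(6)
N, d, lam = 8, 4, 0.3
A = rng.standard_normal((N, d))
L = rng.standard_normal((N, d))            # full column rank a.s.
A_L = np.block([[A, np.zeros((N, N))],
                [np.sqrt(lam) * L, np.zeros((N, N))]])

nA, nL = np.linalg.norm(A, 2), np.linalg.norm(L, 2)
kappa_bound = eff_cond(L) * (1 + nA / (np.sqrt(lam) * nL))
assert eff_cond(A_L) <= kappa_bound
# Spectral norm is sandwiched as in the Theta bound.
assert max(nA, np.sqrt(lam) * nL) <= np.linalg.norm(A_L, 2) <= nA + np.sqrt(lam) * nL
print("Lemma 31 bounds verified on a random instance")
```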
In the theorems and lemmas for regularized quantum linear regression and its
variants that we develop in this section, we consider that $L$ is a good
regularizer in order to provide a simple expression for $\kappa$. However,
this is without loss of generality. When $L$ is not a good regularizer, the
expressions for the respective complexities will remain unaltered, except that
$\kappa$ would now correspond to the condition number of the augmented matrix.
Now, it might be possible that $\ket{b}$ does not belong to the row space of
$(A^{T}A+\lambda L^{T}L)^{-1}A^{T}$, which is equivalent to saying that
$\ket{b}\ket{0}$ may not lie in $\mathrm{row}(A^{+}_{L})$. However, it is
reasonable to expect that the initial hypothesis of the underlying model being
close to linear is correct. That is, we expect $\ket{b}$ to have a good
overlap with
$\mathrm{row}\left(A^{+}_{L}\right)=\mathrm{col}\left(A_{L}\right)$. The
quantity that quantifies how far the model is from being linear is the
so-called normalized residual sum of squares. For $\ell_{2}$-regularized
ordinary least squares, this is given by
$\mathcal{S}_{O}=\dfrac{\norm{(I-\Pi_{\mathrm{col}(A_{L})})\ket{b}\ket{0}}^{2}}{\norm{\ket{b}}^{2}}=1-\norm{\Pi_{\mathrm{col}(A_{L})}\ket{b}\ket{0}}^{2}.$
(33)
If the underlying data can indeed be fit by a linear function,
$\mathcal{S}_{O}$ will be low. Subsequently, we assume that
$\mathcal{S}_{O}=1-\norm{\Pi_{\mathrm{col}(A_{L})}\ket{b}\ket{0}}^{2}\leq\gamma<1/2$.
This in turn implies that
$\norm{\Pi_{\mathrm{col}(A_{L})}\ket{b}\ket{0}}^{2}=\Omega(1)$, i.e., the data
can be reasonably fit by a linear model. (Our results also hold if we assume
that $\mathcal{S}_{O}\leq\gamma$ for some $\gamma\in(0,1)$, i.e.,
$\norm{\Pi_{\mathrm{col}(A_{L})}\ket{b}\ket{0}}^{2}\geq 1-\gamma$; in that
scenario, our complexity to prepare
$A^{+}_{L}\ket{b,0}/\norm{A^{+}_{L}\ket{b,0}}$ is rescaled by
$1/\sqrt{1-\gamma}$.)
Now we are in a position to present our quantum algorithm for the quantum
least squares problem with general $\ell_{2}$-regularization. We also present
an improved quantum algorithm for the closely related quantum ridge
regression, which is a special case of the former.
###### Theorem 32 (Quantum Ordinary Least Squares with General
$\ell_{2}$-Regularization).
Let $A,L\in\mathbb{R}^{N\times d}$ be the data and penalty matrices with
effective condition numbers $\kappa_{A}$ and $\kappa_{L}$ respectively, and
$\lambda\in\mathbb{R}^{+}$ be the regularization parameter. Let $U_{A}$ be an
$(\alpha_{A},a_{A},\varepsilon_{A})$-block-encoding of $A$ implemented in time
$T_{A}$ and $U_{L}$ be an $(\alpha_{L},a_{L},\varepsilon_{L})$-block-encoding
of $L$ implemented in time $T_{L}$. Furthermore, let $U_{b}$ be a unitary
that prepares $\ket{b}$ in time $T_{b}$, and define
$\kappa=\kappa_{L}\left(1+\frac{\norm{A}}{\sqrt{\lambda}\norm{L}}\right)$
Then for any $\delta\in(0,1)$ such that
$\varepsilon_{A},\sqrt{\lambda}\varepsilon_{L}=o\left(\frac{\delta}{\kappa^{3}\log^{2}\left(\frac{\kappa}{\delta}\right)}\right)$
(34)
we can prepare a state that is $\delta$-close to
$\frac{\left(A^{T}A+\lambda
L^{T}L\right)^{-1}A^{T}\ket{b}}{\norm{\left(A^{T}A+\lambda
L^{T}L\right)^{-1}A^{T}\ket{b}}}$
with probability $\Theta(1)$, at a cost of
$\order{\kappa\log\kappa\left(\left(\frac{\alpha_{A}+\sqrt{\lambda}\alpha_{L}}{\norm{A}+\sqrt{\lambda}\norm{L}}\right)\log\left(\frac{\kappa}{\delta}\right)\left(T_{A}+T_{L}\right)+T_{b}\right)}$
(35)
using only $\order{\log\kappa}$ additional qubits.
###### Proof.
We invoke Lemma 22, to obtain a unitary $U$, which is a
$(\alpha_{A}+\sqrt{\lambda}\alpha_{L},\max(a_{A},a_{L})+2,\varepsilon_{A}+\sqrt{\lambda}\varepsilon_{L})$-block-
encoding of the matrix $A_{L}$, implemented at a cost of
$\order{T_{A}+T_{L}}$. Note that in Lemma 22, $A$ and $L$ are considered to be
$s$-qubit operators. For $N\times d$ matrices, such that $N,d\leq 2^{s}$, we
can pad them with zero entries. Padding $A$ and $L$ with zeros may result in
the augmented matrix $A_{L}$ having some zero rows between $A$ and $L$.
However, this is also not an issue as we are only interested in the top left
block of $A^{+}_{L}$ which remains unaffected.
Note that $U$ can be reinterpreted as a
$\left(\frac{\alpha_{A}+\sqrt{\lambda}\alpha_{L}}{\norm{A_{L}}},\max(a_{A},a_{L})+2,\frac{\varepsilon_{A}+\sqrt{\lambda}\varepsilon_{L}}{\norm{A_{L}}}\right)$-block-
encoding of the normalized matrix $A_{L}/\norm{A_{L}}$. Furthermore, we can
prepare the quantum state $\ket{b}\ket{0}$ in time $T_{b}$. Now by using
Theorem 28 with $U$ and an appropriately chosen $\delta$ specified above, we
obtain a quantum state that is $\delta$-close to
$\frac{(A^{T}A+\lambda L^{T}L)^{-1}A^{T}\ket{b}}{\norm{(A^{T}A+\lambda
L^{T}L)^{-1}A^{T}\ket{b}}}$
in the first register. ∎
In the above complexity, when $L$ is a good regularizer, $\kappa$ is
independent of $\kappa_{A}$; by an appropriate choice of $L$, $\kappa$ can be
made much smaller than $\kappa_{A}$. Thus the regularized version can have a
significantly better time complexity than the unregularized case. One example
of a good regularizer arises in quantum ridge regression, where the identity
matrix is used as the penalty. The corollary below elucidates this.
###### Corollary 33 (Quantum Ridge Regression).
Let $A$ be a matrix of dimension $N\times d$ with effective condition number
$\kappa_{A}$ and $\lambda\in\mathbb{R}^{+}$ be the regularization parameter.
Let $U_{A}$ be an $(\alpha_{A},a,\varepsilon)$-block-encoding of $A$
implemented in time $T_{A}$. Let $U_{b}$ be a unitary that prepares $\ket{b}$
in time $T_{b}$. Let $\kappa=1+\norm{A}/\sqrt{\lambda}$. Then for any $\delta$
such that
$\varepsilon=o\left(\frac{\delta}{\kappa^{3}\log^{2}\left(\frac{\kappa}{\delta}\right)}\right)$
we can prepare a state $\delta$-close to
$\frac{\left(A^{T}A+\lambda
I\right)^{-1}A^{T}\ket{b}}{\norm{\left(A^{T}A+\lambda
I\right)^{-1}A^{T}\ket{b}}}$
at a cost of
$\order{\log\kappa\left(\frac{\alpha_{A}}{\sqrt{\lambda}}\log\left(\frac{\kappa}{\delta}\right)T_{A}+\kappa
T_{b}\right)}$ (36)
with probability $\Theta(1)$ using only $\order{\log\kappa}$ additional
qubits.
###### Proof.
The identity matrix $I$ is a trivial $(1,0,0)$-block-encoding of itself, and
$\kappa_{I}=1$. We invoke Theorem 32 with $L=I$ to obtain the solution. ∎
Being in the block-encoding framework allows us to express the complexity of
our quantum algorithm in specific input models such as the quantum data
structure input model and the sparse access model. We express these
complexities via the following corollaries.
###### Corollary 34 (Quantum Ordinary Least Squares with
$\ell_{2}$-Regularization in the Quantum Data Structure Input Model).
Let $A,L\in\mathbb{R}^{N\times d}$ with effective condition numbers
$\kappa_{A},\kappa_{L}$ respectively. Let $\lambda\in\mathbb{R}^{+}$ and
$b\in\mathbb{R}^{N}$. Let $\kappa$ be the effective condition number of the
augmented matrix $A_{L}$. Suppose that $A$, $L$ and $b$ are stored in a
quantum accessible data structure. Then for any $\delta>0$ there exists a
quantum algorithm to prepare a quantum state $\delta$-close to
$\frac{\left(A^{T}A+\lambda
L^{T}L\right)^{-1}A^{T}\ket{b}}{\norm{\left(A^{T}A+\lambda
L^{T}L\right)^{-1}A^{T}\ket{b}}}$
with probability $\Theta(1)$, at a cost of
$\order{\kappa\left(\frac{\mu_{A}+\sqrt{\lambda}\mu_{L}}{\norm{A}+\sqrt{\lambda}\norm{L}}\right)\mathrm{polylog}\left(Nd,\kappa,\frac{1}{\delta},\lambda\right)}.$
(37)
###### Proof.
Since $b$ is stored in the data structure, for some $\varepsilon_{b}>0$, we
can prepare the state $\ket{b^{\prime}}$ that is $\varepsilon_{b}$-close to
$\ket{b}=\sum_{i}b_{i}\ket{i}/\norm{b}$ using
$T_{b}=\order{\mathrm{polylog}(N/\varepsilon_{b})}$ queries to the data
structure (see Section 2.2.1). Similarly, for some parameters
$\varepsilon_{A},\varepsilon_{L}>0$, we can construct a
$\left(\mu_{A},\lceil\log(d+N)\rceil,\varepsilon_{A}\right)$-block-encoding of
$A$ using $T_{A}=\order{\mathrm{polylog}(Nd/\varepsilon_{A})}$ queries to the
data structure and a
$\left(\mu_{L},\lceil\log(d+N)\rceil,\varepsilon_{L}\right)$-block-encoding of
$L$ using $T_{L}=\order{\mathrm{polylog}(Nd/\varepsilon_{L})}$ queries.
We invoke Theorem 32 with a precision $\delta/2$ by choosing $\varepsilon_{A}$
and $\varepsilon_{L}$ such that Equation 34 is satisfied. This gives
us a state that is $\delta/2$-close to
$\frac{\left(A^{T}A+\lambda
L^{T}L\right)^{-1}A^{T}\ket{b^{\prime}}}{\norm{\left(A^{T}A+\lambda
L^{T}L\right)^{-1}A^{T}\ket{b^{\prime}}}}$
To obtain a final precision of $\delta$, we use Lemma 15 by choosing
$\varepsilon_{b}=\frac{\delta}{2\kappa}$. The complexity can be calculated by
plugging the relevant values into Equation 35. ∎
In the previous corollary, $\mu_{A}=\norm{A}_{F}$ and $\mu_{L}=\norm{L}_{F}$
when the matrices $A$ and $L$ are stored in the data structure. Similarly,
$\mu_{A}=\mu_{p}(A)$ and $\mu_{L}=\mu_{p}(L)$ when the matrices
$A^{(p)},A^{(1-p)}$ and $L^{(p)},L^{(1-p)}$ are stored in the data structure.
Now we discuss the complexity of quantum ordinary least squares with
$\ell_{2}$-regularization in the sparse access input model. We call a matrix
$M$ $(s_{r},s_{c})$ row-column sparse if it has row sparsity $s_{r}$ and
column sparsity $s_{c}$.
###### Corollary 35 (Quantum Ordinary least squares with
$\ell_{2}$-regularization in the sparse access model).
Let $A\in\mathbb{R}^{N\times d}$ be $(s^{A}_{r},s^{A}_{c})$ row-column sparse,
and similarly, let $L\in\mathbb{R}^{N\times d}$ be $(s^{L}_{r},s^{L}_{c})$
row-column sparse, with effective condition numbers $\kappa_{A}$ and
$\kappa_{L}$ respectively. Let $\lambda\in\mathbb{R}^{+}$ and $\delta>0$.
Suppose there exists a unitary that prepares $\ket{b}$ at a cost, $T_{b}$.
Then there is a quantum algorithm to prepare a quantum state that is
$\delta$-close to
$\frac{(A^{T}A+\lambda L^{T}L)^{-1}A^{T}\ket{b}}{\norm{(A^{T}A+\lambda
L^{T}L)^{-1}A^{T}\ket{b}}}$
with probability $\Theta(1)$, at a cost of
$\order{\kappa\left(\frac{\sqrt{s^{A}_{r}s^{A}_{c}}+\sqrt{\lambda
s^{L}_{r}s^{L}_{c}}}{\norm{A}+\sqrt{\lambda}\norm{L}}\right)\mathrm{polylog}\left(Nd,\kappa,\frac{1}{\delta},\lambda\right)+\kappa\log\kappa
T_{b}}.$ (38)
###### Proof.
The proof is similar to that of Corollary 34, but with
$\alpha_{A}=\sqrt{s^{A}_{r}s^{A}_{c}}$ and
$\alpha_{L}=\sqrt{s^{L}_{r}s^{L}_{c}}$. ∎
### 4.2 Quantum Weighted And Generalized Least Squares
This technique of working with an augmented matrix also applies to the other
variants of ordinary least squares. In this section, we begin by briefly
describing these variants before moving on to designing quantum algorithms for
the corresponding problems.
Weighted Least Squares: For the WLS problem, each observation
$\\{a_{i},b_{i}\\}$ is assigned some weight $w_{i}\in\mathbb{R}^{+}$ and the
objective function to be minimized is of the form
$\mathcal{L}_{W}:=\sum_{j}w_{j}(x^{T}a_{j}-b_{j})^{2}.$ (39)
If $W\in\mathbb{R}^{N\times N}$ is the diagonal matrix with $w_{i}$ being the
$i^{\mathrm{th}}$ diagonal entry, then the optimal $x$ satisfies
$x=(A^{T}WA)^{-1}A^{T}Wb.$ (40)
The $\ell_{2}$-regularized version of WLS satisfies
$x=(A^{T}WA+\lambda L^{T}L)^{-1}A^{T}Wb$ (41)
Our quantum algorithm outputs a state that is close to
$\ket{x}=\frac{(A^{T}WA+\lambda
L^{T}L)^{-1}A^{T}W\ket{b}}{\norm{(A^{T}WA+\lambda L^{T}L)^{-1}A^{T}W\ket{b}}}$
(42)
given approximate block-encodings of $A$, $W$ and $L$. Much like Equation 32,
finding the optimal solution reduces to finding the pseudo inverse of an
augmented matrix $A_{L}$ given by
$A_{L}:=\begin{pmatrix}\sqrt{W}A&0\\\ \sqrt{\lambda}L&0\end{pmatrix}.$
The top-left block of $A^{+}_{L}$ is $(A^{T}WA+\lambda L^{T}L)^{-1}A^{T}\sqrt{W}$,
which is the required linear transformation to be applied to the vector
$y=\sqrt{W}b$. The effective condition number $\kappa$ of $A_{L}$ can be
obtained analogously to Lemma 31. For the $\ell_{2}$-regularized WLS problem,
the normalized residual sum of squares is given by
$\mathcal{S}_{W}=\dfrac{\norm{(I-\Pi_{\mathrm{col}(A_{L})})\ket{y}\ket{0}}^{2}}{\norm{\ket{y}}^{2}}=1-\norm{\Pi_{\mathrm{col}(A_{L})}\ket{y}\ket{0}}^{2}.$
(43)
Subsequently, we assume that
$\mathcal{S}_{W}=1-\norm{\Pi_{\mathrm{col}(A_{L})}\ket{y}\ket{0}}^{2}\leq\gamma<1/2$.
This in turn implies that
$\norm{\Pi_{\mathrm{col}(A_{L})}\ket{y}\ket{0}}^{2}=\Omega(1)$, that is, the
data can be reasonably fit by a linear model.
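Before moving on, the following NumPy sketch numerically checks the augmented-matrix identity above, i.e., that the top-left block of $A_{L}^{+}$ equals $(A^{T}WA+\lambda L^{T}L)^{-1}A^{T}\sqrt{W}$. The sizes, the zero-padding width, and the random data are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, lam = 8, 3, 0.7
A = rng.standard_normal((N, d))
L = rng.standard_normal((N, d))
w = rng.uniform(0.5, 2.0, size=N)                  # diagonal of W
W, sqrtW = np.diag(w), np.diag(np.sqrt(w))

# Augmented matrix A_L; the width of the zero padding is an arbitrary choice
# here, since it does not affect the top-left block of the pseudoinverse.
A_L = np.block([[sqrtW @ A, np.zeros((N, N))],
                [np.sqrt(lam) * L, np.zeros((N, N))]])

top_left = np.linalg.pinv(A_L)[:d, :N]
closed_form = np.linalg.inv(A.T @ W @ A + lam * L.T @ L) @ A.T @ sqrtW
assert np.allclose(top_left, closed_form)
```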
Generalized Least Squares: Similarly, we can extend this to the GLS problem,
where the input data may be correlated. These correlations are given by a
non-singular covariance matrix $\Omega\in\mathbb{R}^{N\times N}$. The WLS
problem is the special case of the GLS problem in which $\Omega$ is a diagonal
matrix. The objective function to be minimized is
$\mathcal{L}_{\Omega}:=\sum_{i,j}(\Omega^{-1})_{ij}(x^{T}a_{i}-b_{i})(x^{T}a_{j}-b_{j}).$
(44)
The optimal $x\in\mathbb{R}^{d}$ satisfies
$x=(A^{T}\Omega^{-1}A)^{-1}A^{T}\Omega^{-1}b$ (45)
Similarly, the $\ell_{2}$-regularized GLS solver outputs $x$ such that
$x=(A^{T}\Omega^{-1}A+\lambda L^{T}L)^{-1}A^{T}\Omega^{-1}b.$ (46)
So, given approximate block-encodings of $A$, $\Omega$, and $L$, a quantum GLS
solver outputs a quantum state close to
$\ket{x}=\frac{(A^{T}\Omega^{-1}A+\lambda
L^{T}L)^{-1}A^{T}\Omega^{-1}\ket{b}}{\norm{(A^{T}\Omega^{-1}A+\lambda
L^{T}L)^{-1}A^{T}\Omega^{-1}\ket{b}}}$ (47)
The augmented matrix $A_{L}$ is defined as
$A_{L}:=\begin{pmatrix}\Omega^{-1/2}A&0\\\ \sqrt{\lambda}L&0\end{pmatrix}.$
Applying the top-left block of $A^{+}_{L}$ to the vector $y=\Omega^{-1/2}b$
yields the optimal $x$. Thus the quantum GLS algorithm with
$\ell_{2}$-regularization first
prepares $\Omega^{-1/2}\ket{b}\ket{0}$ and then uses the matrix inversion
algorithm by QSVT to implement $A^{+}_{L}\Omega^{-1/2}\ket{b}\ket{0}$.
Analogous to OLS and WLS, we assume that the normalized residual sum of
squares $\mathcal{S}_{\Omega}\leq\gamma<1/2$.
#### 4.2.1 Quantum Weighted Least Squares
In this section, we derive the complexity of the $\ell_{2}$-regularized WLS
problem. We assume that we have a diagonal weight matrix
$W\in\mathbb{R}^{N\times N}$ such that its smallest and largest diagonal
entries are ${w_{\text{min}}}$ and ${w_{\text{max}}}$, respectively. This
implies that $\norm{W}={w_{\text{max}}}$ and
$\kappa_{W}={w_{\text{max}}}/{w_{\text{min}}}$. We take advantage of the fact
that the matrix $W$ is diagonal and then apply controlled rotations to
directly implement a block encoding of $\sqrt{W}A$. Additionally, given a
state preparation procedure for $\ket{b}$, we can easily prepare a state
proportional to $\sqrt{W}\ket{b}$. We then use Theorem 32 to solve QWLS.
We first formalize this idea in Theorem 36, assuming direct access to (i) a
block encoding of $B=\sqrt{W}A$, and (ii) a procedure for preparing the state
$\ket{b_{w}}=\frac{\sqrt{W}\ket{b}}{\norm{\sqrt{W}\ket{b}}}$. Subsequently,
for the specific input models, we show that we can indeed efficiently obtain a
block-encoding of $B$ and prepare the state $\ket{b_{w}}$.
###### Theorem 36 (Quantum Weighted Least Squares with General
$\ell_{2}$-Regularization).
Let $A,L\in\mathbb{R}^{N\times d}$, be the data and penalty matrix, with
effective condition numbers $\kappa_{A}$ and $\kappa_{L}$, respectively. Let
$\lambda\in\mathbb{R}^{+}$ be the regularizing parameter. Let
$W\in\mathbb{R}^{N\times N}$ be a diagonal weight matrix with the largest and
smallest diagonal entries being ${w_{\text{max}}},{w_{\text{min}}}$,
respectively. Let $U_{B}$ be a $(\alpha_{B},a_{B},\varepsilon_{B})$ block
encoding of $B:=\sqrt{W}A$ implemented in time $T_{B}$ and let $U_{L}$ be a
$(\alpha_{L},a_{L},\varepsilon_{L})$ block encoding of $L$ implemented in time
$T_{L}$, such that
$\varepsilon_{B}=o\left(\frac{\delta}{\kappa^{3}\log^{2}\left(\frac{\kappa}{\delta}\right)}\right)$
and
$\varepsilon_{L}=o\left(\frac{\delta}{\sqrt{\lambda}\kappa^{3}\log^{2}\left(\frac{\kappa}{\delta}\right)}\right)$.
Let $U_{b_{w}}$ be a unitary that prepares
$\frac{\sqrt{W}\ket{b}}{\norm{\sqrt{W}\ket{b}}}$ in time $T_{b_{w}}$. Define
$\kappa:=\kappa_{L}\left(1+\frac{\sqrt{{w_{\text{max}}}}\norm{A}}{\sqrt{\lambda}\norm{L}}\right)$
Then for any $\delta>0$ we can prepare a quantum state that is $\delta$-close
to
$\frac{(A^{T}WA+\lambda L^{T}L)^{-1}A^{T}W\ket{b}}{\norm{(A^{T}WA+\lambda
L^{T}L)^{-1}A^{T}W\ket{b}}}$
with probability $\Theta(1)$, at a cost of
$\order{\kappa\log\kappa\left(\frac{\alpha_{B}+\sqrt{\lambda}\alpha_{L}}{\sqrt{{w_{\text{max}}}}\norm{A}+\sqrt{\lambda}\norm{L}}\log\left(\frac{\kappa}{\delta}\right)(T_{B}+T_{L})+T_{b_{w}}\right)},$
(48)
using only $\order{\log\kappa}$ additional qubits.
###### Proof.
We invoke Theorem 32 with $B$ and $L$ as the data and regularization
matrices, respectively. This requires choosing $\varepsilon_{B}$ and
$\varepsilon_{L}$ such that
$\varepsilon_{B}+\sqrt{\lambda}\varepsilon_{L}=o\left(\frac{\delta}{\kappa^{3}\log^{2}\left(\frac{\kappa}{\delta}\right)}\right),$
which yields the stated upper bounds on the required precisions
$\varepsilon_{B}$ and $\varepsilon_{L}$. This gives us a quantum state
$\delta$-close to
$\frac{(A^{T}WA+\lambda L^{T}L)^{-1}A^{T}W\ket{b}}{\norm{(A^{T}WA+\lambda
L^{T}L)^{-1}A^{T}W\ket{b}}}.$
∎
Next, we construct the block encodings for $\sqrt{W}A$ and the state
$\frac{\sqrt{W}\ket{b}}{\norm{\sqrt{W}\ket{b}}}$ efficiently in the quantum
data structure input model. This construction would also apply to the sparse
access input model with slight modifications.
###### Lemma 37 (Efficiently preparing $\sqrt{W}A$ in the Quantum Data
Structure Model).
Let $W\in\mathbb{R}^{N\times N}$ be such that
$W=\mathrm{diag}(w_{1},w_{2},\ldots,w_{N})$ and $w_{\max}:=\max_{i}w_{i}$, and
let $A\in\mathbb{R}^{N\times d}$ be
stored in a quantum-accessible data structure. Then for any $\delta>0$ there
exists a
$(\sqrt{w_{\max}}\norm{A}_{F},\lceil\log\left(N+d\right)\rceil,\delta)$
block-encoding of $\sqrt{W}A$ that can be implemented at the cost
$\order{\mathrm{polylog}(Nd/\delta)}$.
###### Proof.
$\forall j\in[N]$, define
$\ket{\psi_{j}}:=\sqrt{\frac{w_{j}}{w_{\max}}}\ket{j}\frac{1}{\norm{A_{j,\cdot}}}\sum_{k\in[d]}A_{j,k}\ket{k}.$
Similarly, $\forall k\in[d]$, define
$\ket{\phi_{k}}:=\frac{1}{\norm{A}_{F}}\left(\sum_{j\in[N]}\norm{A_{j,\cdot}}\ket{j}\right)\ket{k}.$
Observe that $\forall j\in[N],k\in[d]$,
$\braket{\psi_{j}}{\phi_{k}}=\sqrt{\frac{w_{j}}{w_{\max}}}\frac{A_{j,k}}{\norm{A}_{F}}=\frac{\bra{j}\sqrt{W}A\ket{k}}{\sqrt{w_{\max}}\norm{A}_{F}}.$
Given quantum data structure accesses to $W$ and $A$, one can construct
quantum circuits $W_{R}$ and $W_{L}$ similar to $U_{L}$ and $U_{R}$ from Lemma
3 that prepare $\ket{\phi_{k}}$ and $\ket{\psi_{j}}$ above. The state
$\ket{\phi_{k}}$ can be prepared just as in Lemma 3, while $\ket{\psi_{j}}$
can be prepared from the QRAM access to $A$ by adding an ancilla qubit and
applying controlled rotations conditioned on the register
$\ket{\frac{w_{j}}{w_{\max}}}$ (which can be constructed from the QRAM access
to $W$). Thus, $W_{R}^{\dagger}W_{L}$ is the required block
encoding, which according to Theorem 2 can be implemented using
$\mathrm{polylog}(Nd/\delta)$ queries. ∎
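The inner-product identity at the heart of this proof is easy to check classically; the following sketch does so with assumed random data, using NumPy vectors as stand-ins for the states (note that $\ket{\psi_{j}}$ as written above is subnormalized; the orthogonal ancilla branch that completes it to unit norm does not affect the overlap).

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 5, 4
A = rng.standard_normal((N, d))
w = rng.uniform(0.1, 2.0, size=N)
w_max = w.max()
A_F = np.linalg.norm(A, 'fro')

def psi(j):                       # classical stand-in for |psi_j> on a flattened (j, k) register
    v = np.zeros((N, d))
    v[j] = np.sqrt(w[j] / w_max) * A[j] / np.linalg.norm(A[j])
    return v.ravel()

def phi(k):                       # classical stand-in for |phi_k>
    v = np.zeros((N, d))
    v[:, k] = np.linalg.norm(A, axis=1) / A_F
    return v.ravel()

B = np.diag(np.sqrt(w)) @ A       # B = sqrt(W) A
for j in range(N):
    for k in range(d):
        assert np.isclose(psi(j) @ phi(k), B[j, k] / (np.sqrt(w_max) * A_F))
```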
###### Lemma 38 (Efficiently preparing $\sqrt{W}\ket{b}$ in the Quantum Data
Structure Model).
Let $b\in\mathbb{R}^{N}$ and $W\in\mathbb{R}^{N\times N}$. Suppose that $b$
and $W$ are stored in a quantum-accessible data structure such that we have a
state preparation procedure that acts as
$\displaystyle U_{W}:\ket{j}\ket{0}\mapsto\ket{j}\ket{w_{j}},$ $\displaystyle
U_{b}:\ket{0}\mapsto\sum_{j}\frac{b_{j}}{\norm{b}}\ket{j}.$
Then for any $\delta>0$ we can prepare the quantum state that is
$\delta$-close to $\frac{\sqrt{W}\ket{b}}{\norm{\sqrt{W}\ket{b}}}$ with
constant success probability and at a cost of
$\order{\sqrt{\frac{w_{\max}}{w_{\min}}}\mathrm{polylog}\left(\frac{N}{\delta}\right)}$.
###### Proof.
Use $U_{b}$ to prepare the state
$\ket{b}=\frac{1}{\norm{b}}\sum_{j}b_{j}\ket{j}$
in time $\mathrm{polylog}(N)$. Then, apply the following transformation
$\displaystyle\ket{j}\ket{0}\ket{0}$
$\displaystyle\mapsto\ket{j}\ket{w_{j}}\ket{0}$
$\displaystyle\mapsto\ket{j}\ket{w_{j}}\left(\sqrt{\frac{w_{j}}{w_{\max}}}\ket{0}+\sqrt{1-\frac{w_{j}}{w_{\max}}}\ket{1}\right)$
$\displaystyle\mapsto\ket{j}\ket{0}\left(\sqrt{\frac{w_{j}}{w_{\max}}}\ket{0}+\sqrt{1-\frac{w_{j}}{w_{\max}}}\ket{1}\right)$
which can again be applied using some controlled rotations, a square root
circuit and $U_{W}$. This gives us the state (ignoring some blank registers)
$\sum_{j}\left(\sqrt{\frac{w_{j}}{w_{\max}}}\ket{0}+\sqrt{1-\frac{w_{j}}{w_{\max}}}\ket{1}\right)\frac{b_{j}}{\norm{b}}\ket{j}.$
(49)
The probability for the ancilla to be in the $\ket{0}$ state is
$\Omega\left(\frac{w_{\min}}{w_{\max}}\right).$
Thus performing $\order{\sqrt{\frac{w_{\max}}{w_{\min}}}}$ rounds of amplitude
amplification on $\ket{0}$ gives us a constant probability of observing
$\ket{0}$, and therefore obtaining the desired state
$\frac{\sqrt{W}\ket{b}}{\norm{\sqrt{W}\ket{b}}}$. ∎
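A classical sanity check of this construction is below: it builds the amplitudes of the state in Eq. (49) from assumed random data, confirms the ancilla-$\ket{0}$ probability bound $\Omega(w_{\min}/w_{\max})$, and checks that post-selecting on $\ket{0}$ yields $\sqrt{W}\ket{b}/\norm{\sqrt{W}\ket{b}}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
b = rng.standard_normal(N)
w = rng.uniform(0.2, 3.0, size=N)
w_max, w_min = w.max(), w.min()

amp0 = np.sqrt(w / w_max) * b / np.linalg.norm(b)  # ancilla-|0> branch of Eq. (49)

p0 = np.sum(amp0**2)                               # Pr[ancilla = |0>]
assert p0 >= w_min / w_max - 1e-12                 # the Omega(w_min/w_max) bound

post = amp0 / np.linalg.norm(amp0)                 # state after post-selection
target = np.sqrt(w) * b
target /= np.linalg.norm(target)                   # sqrt(W)|b> / ||sqrt(W)|b>||
assert np.allclose(post, target)
```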
Using the above two lemmas and the quantum OLS solver (Theorem 32), we can
construct an algorithm for regularized quantum WLS.
###### Theorem 39 (Quantum Weighted Least Squares with General
$\ell_{2}$-Regularization in the Quantum Data Structure Model).
Let $A,L\in\mathbb{R}^{N\times d}$ with effective condition numbers
$\kappa_{A},\kappa_{L}$ respectively be stored in an efficient quantum
accessible data structure. Let $W\in\mathbb{R}^{N\times N}$ be a diagonal
matrix with largest and smallest diagonal entries
${w_{\text{max}}}$ and ${w_{\text{min}}}$, respectively, which is also stored in an
efficient quantum accessible data structure. Furthermore, suppose the entries
of the vector $b\in\mathbb{R}^{N}$ are also stored in a quantum-accessible
data structure, and define
$\kappa:=\kappa_{L}\left(1+\frac{\sqrt{{w_{\text{max}}}}\norm{A}}{\sqrt{\lambda}\norm{L}}\right)$
Then for any $\delta>0$ we can prepare a quantum state that is $\delta$-close
to
$\frac{(A^{T}WA+\lambda L^{T}L)^{-1}A^{T}W\ket{b}}{\norm{(A^{T}WA+\lambda
L^{T}L)^{-1}A^{T}W\ket{b}}}$
with probability $\Theta(1)$, at a cost of
$\order{\kappa\left(\frac{\sqrt{w_{\text{max}}}\norm{A}_{F}+\sqrt{\lambda}\norm{L}_{F}}{\sqrt{w_{\text{max}}}\norm{A}+\sqrt{\lambda}\norm{L}}+\sqrt{\frac{{w_{\text{max}}}}{{w_{\text{min}}}}}\right)\mathrm{polylog}\left(Nd,\kappa,\frac{1}{\delta}\right)}$
(50)
###### Proof.
Choose some precision parameter $\varepsilon>0$ for accessing the data
structure. Given access to $W$ and $A$, we can use Lemma 37 to prepare a
$(\sqrt{{w_{\text{max}}}}\norm{A}_{F},\lceil\log\left(N+d\right)\rceil,\varepsilon)$-block-
encoding of $\sqrt{W}A$, using
$T_{A}:=\order{\mathrm{polylog}\left(Nd/\varepsilon\right)}$ queries to the
data structure. Similarly, Lemma 3 allows us to build a
$(\norm{L}_{F},\lceil\log\left(N+d\right)\rceil,\varepsilon)$-block-encoding
of $L$ using $T_{L}:=\order{\mathrm{polylog}(Nd/\varepsilon)}$ queries to the
data structure.
Next, using Lemma 38, for any $\varepsilon_{b}>0$, we can prepare a state
$\varepsilon_{b}$-close to
$\ket{b^{\prime}}:=\frac{\sqrt{W}\ket{b}}{\norm{\sqrt{W}\ket{b}}}$. This
procedure requires
$T_{b}:=\order{\sqrt{\frac{w_{\max}}{w_{\min}}}\mathrm{polylog}\left(N/\varepsilon_{b}\right)}$
queries to the data structure. Now we can invoke the OLS solver in Theorem 32
with a precision of $\delta_{b}$, by considering $\sqrt{W}A$ as the data
matrix and $\frac{\sqrt{W}\ket{b}}{\norm{\sqrt{W}\ket{b}}}$ as the input
state. In order for the input block-encoding precision to satisfy the bound in
Equation 34, we choose $\varepsilon$ such that
$\varepsilon=o\left(\frac{\delta_{b}}{\kappa^{3}\log^{2}\left(\frac{\kappa}{\delta_{b}}\right)}\right).$
Finally, for the output state to be $\delta$-close to the required state, we
choose $\delta_{b}=\delta/2$ and $\varepsilon_{b}=\delta/(2\kappa)$, and use
the robustness result from Lemma 15. This gives us the complexity stated in
Equation 50. ∎
# Solitons of the mean curvature flow in $\mathbb{S}^{2}\times\mathbb{R}$
Rafael López, Departamento de Geometría y Topología, Universidad de Granada, 18071 Granada, Spain<EMAIL_ADDRESS>
Marian Ioan Munteanu, University 'Al. I. Cuza' of Iasi, Faculty of Mathematics, Bd. Carol I, no. 11, 700506 Iasi, Romania<EMAIL_ADDRESS>
###### Abstract.
A soliton of the mean curvature flow in the product space
$\mathbb{S}^{2}\times\mathbb{R}$ is a surface whose mean curvature $H$
satisfies the equation $H=\langle N,X\rangle$, where $N$ is the unit normal of
the surface and $X$ is a Killing vector field. In this paper we consider the
vector field tangent to the fibers and the vector field associated to
rotations about an axis of $\mathbb{S}^{2}$, respectively. We give a
classification of the solitons with respect to these vector fields assuming
that the surface is invariant under a one-parameter group of vertical
translations or rotations of $\mathbb{S}^{2}$.
###### Key words and phrases:
solitons, mean curvature flow, $\mathbb{S}^{2}\times\mathbb{R}$, one-parameter
group
###### 1991 Mathematics Subject Classification:
53A10, 53C42, 53C44.
## 1\. Introduction
Let $\psi\colon\Sigma\to\mathbb{R}^{3}$ be an immersion of a surface $\Sigma$
in Euclidean space $\mathbb{R}^{3}$. A variation
$\\{\psi_{t}\colon\Sigma\to\mathbb{R}^{3}:t\in[0,T)\\}$, $T>0$,
$\psi_{0}=\psi$, evolves by the mean curvature flow (MCF for short) if
$\displaystyle\frac{\partial\psi_{t}}{\partial t}=H(\psi_{t})N(\psi_{t})$,
where $H(\psi_{t})$ is the mean curvature of $\psi_{t}$ and $N(\psi_{t})$ is
its unit normal. The surface $\Sigma$ is called a soliton of MCF if the
evolution of $\Sigma$ under a one-parameter family of dilations or isometries
remains constant. An important type of soliton is the translator, whose
shape is invariant by translations along a direction
$\vec{v}\in\mathbb{R}^{3}$. Translators are characterized by the equation
$H=\langle N,\vec{v}\rangle$, where $H$ and $N$ are the mean curvature and
unit normal of $\Sigma$ respectively. Translators play a special role in the
theory of the MCF because they are, after rescaling, a type of singularity of
the MCF according to Huisken and Sinestrari [6]. In the meantime, the theory
of solitons of the MCF has been developed in other ambient spaces. Without
being exhaustive, we refer to: a general product space
$M^{2}\times\mathbb{R}$ [9]; hyperbolic space [3, 4, 8, 11]; the product
$\mathbb{H}^{2}\times\mathbb{R}$ [1, 2, 5, 7]; the Sol space [14]; the
Heisenberg group [15]; the special linear group [10].
In this paper, we focus on solitons in the product space
$\mathbb{S}^{2}\times\mathbb{R}$, where $\mathbb{S}^{2}$ is the unit sphere of
$\mathbb{R}^{3}$. Looking at the equation $H=\langle N,\vec{v}\rangle$, it is
natural to replace $\vec{v}$ by a Killing vector of the space, which motivates
the following definition.
###### Definition 1.1.
Let $X\in\mathfrak{X}(\mathbb{S}^{2}\times\mathbb{R})$ be a Killing vector
field. A surface $\Sigma$ in $\mathbb{S}^{2}\times\mathbb{R}$ is said to be an
$X$-soliton if its mean curvature $H$ and unit normal vector $N$ satisfy
$H=\langle N,X\rangle.$ (1)
In this paper, $H$ is the sum of the principal curvatures of the surface. The
space of Killing vector fields on $\mathbb{S}^{2}\times\mathbb{R}$ has
dimension $4$. Taking coordinates $(x,y,z,t)$ in
$\mathbb{S}^{2}\times\mathbb{R}$, a relevant Killing vector field is
$V=\partial_{t}$. Here $V$ is tangent to the fibers of the natural submersion
$\mathbb{S}^{2}\times\mathbb{R}\to\mathbb{S}^{2}$. Other Killing vector fields
come from the rotations of $\mathbb{S}^{2}$. After renaming coordinates,
consider the vector field $R=-y\partial_{x}+x\partial_{y}$ about the $z$-axis
of $\mathbb{S}^{2}$. Examples of solitons are the following.
1. (1)
Cylinders over geodesics of $\mathbb{S}^{2}$ are $V$-solitons. Indeed, let
$\Sigma=C\times\mathbb{R}$ be a surface constructed as a cylinder over a curve
$C$ of $\mathbb{S}^{2}$. Then the mean curvature of $\Sigma$ is $H=\kappa$,
where $\kappa$ is the curvature of $C$. Since the unit normal vector $N$ of
$\Sigma$ is orthogonal to $\partial_{t}$, we have $\langle N,V\rangle=0$. Thus
$\Sigma$ is a $V$-soliton if and only if $\kappa=0$, that is, if and only if
$C$ is a geodesic of $\mathbb{S}^{2}$.
2. (2)
Slices $\mathbb{S}^{2}\times\\{t_{0}\\}$, $t_{0}\in\mathbb{R}$, are
$R$-solitons. Notice that $H=0$ because a slice is totally geodesic. Since
$N=\partial_{t}$, then $\langle N,R\rangle=0$, proving that $H=\langle
N,R\rangle$.
In this paper, we are interested in examples of $V$-solitons and $R$-solitons
that are invariant by a one-parameter group of isometries of
$\mathbb{S}^{2}\times\mathbb{R}$. Here we consider two types of such surfaces.
First, vertical surfaces which are invariant by vertical translations in the
$t$-coordinate. Second, rotational surfaces, which are invariant by a group of
rotations about an axis of $\mathbb{S}^{2}$. Under these geometric conditions
on the surfaces, we give a full classifications of $V$-solitons (Sect. 3) and
$R$-solitons (Sect. 4).
## 2\. Preliminaries
In this section, we compute each one of the terms of Eq. (1) for vertical and
rotational surfaces. The isometry group of $\mathbb{S}^{2}\times\mathbb{R}$
is isomorphic to $\mbox{Isom}(\mathbb{S}^{2})\times\mbox{Isom}(\mathbb{R})$. The
group $\mbox{Isom}(\mathbb{S}^{2})$ is generated by the identity, the
antipodal map, rotations and reflections. The group $\mbox{Isom}(\mathbb{R})$
contains the identity, translations, and reflections. Therefore there are two
important one-parameter groups of isometries in
$\mathbb{S}^{2}\times\mathbb{R}$: vertical translations in the factor
$\mathbb{R}$ and rotations in the factor $\mathbb{S}^{2}$. This leads two
types of invariant surfaces.
1. (1)
Vertical surfaces. A vertical translation is a map of type
$T_{\lambda}\colon\mathbb{S}^{2}\times\mathbb{R}\to\mathbb{S}^{2}\times\mathbb{R}$
defined by $T_{\lambda}(p,t)=(p,t+\lambda)$, where $\lambda$ is fixed. This
defines a one-parameter group of vertical translations
$\mathcal{T}=\\{T_{\lambda}:\lambda\in\mathbb{R}\\}$. A vertical surface is a
surface $\Sigma$ invariant by the group $\mathcal{T}$, that is,
$T_{\lambda}(\Sigma)\subset\Sigma$ for all $\lambda\in\mathbb{R}$. The
generating curve of $\Sigma$ is a curve $\alpha\colon
I\subset\mathbb{R}\to\mathbb{S}^{2}$ in the unit sphere $\mathbb{S}^{2}$. Let
us write this curve as
$\alpha(s)=(\cos u(s)\cos v(s),\cos u(s)\sin v(s),\sin u(s)),$ (2)
for some smooth functions $u=u(s)$ and $v=v(s)$. Then a parametrization of
$\Sigma$ is
$\Psi(s,t)=(\cos u(s)\cos v(s),\cos u(s)\sin v(s),\sin u(s),t),\quad s\in
I,t\in\mathbb{R}.$ (3)
In what follows, we parametrize the curve $\beta(s)=(u(s),v(s))$ so that
$u^{\prime}(s)=\cos u(s)\cos\theta(s),\quad v^{\prime}(s)=\sin\theta(s),$
for some smooth function $\theta=\theta(s)$.
2. (2)
Rotational surfaces. These surfaces are invariant by rotations of
$\mathbb{S}^{2}$. To be precise, and after a choice of coordinates on
$\mathbb{S}^{2}$, a rotation in $\mathbb{S}^{2}\times\mathbb{R}$ about the
$z$-axis is a map
$\mathcal{R}_{\varphi}\colon\mathbb{S}^{2}\times\mathbb{R}\to\mathbb{S}^{2}\times\mathbb{R}$,
given by
$R_{\varphi}=\left(\begin{array}[]{llll}\cos\varphi&-\sin\varphi&0&0\\\
\sin\varphi&\cos\varphi&0&0\\\ 0&0&1&0\\\ 0&0&0&1\end{array}\right).$
The set $\mathcal{R}=\\{\mathcal{R}_{\varphi}:\varphi\in\mathbb{R}\\}$ of all
$\mathcal{R}_{\varphi}$, is a one-parameter group of rotations, that is
${\mathrm{SO}}(2)$. A rotational surface is a surface $\Sigma$ invariant by
the group $\mathcal{R}$, that is, $\mathcal{R}_{\varphi}(\Sigma)\subset\Sigma$
for all $\varphi\in\mathbb{R}$. The generating curve of $\Sigma$ is a curve
$\alpha$ contained in the $xzt$-hyperplane which we suppose parametrized by
$\alpha(s)=(\cos u(s),0,\sin u(s),v(s)),\quad s\in I\subset\mathbb{R},$ (4)
where $u=u(s)$ and $v=v(s)$ are smooth functions. Then a parametrization of
$\Sigma$ is
$\Psi(s,\varphi)=(\cos u(s)\cos\varphi,\cos u(s)\sin\varphi,\sin
u(s),v(s)),\quad s\in I,\varphi\in\mathbb{R}.$ (5)
From now on, suppose that the curve $\beta(s)=(u(s),v(s))$ obtained from
$\alpha$ in (4) is parametrized by Euclidean arc length, that is,
$u^{\prime}(s)=\cos\theta(s),\quad v^{\prime}(s)=\sin\theta(s),$
for some smooth function $\theta=\theta(s)$. Notice that $\theta^{\prime}$ is
the curvature of $\beta$ as planar curve of $\mathbb{R}^{2}$.
We now compute the mean curvature $H$ and the unit normal vector $N$ of
vertical surfaces and rotational surfaces.
###### Proposition 2.1.
Suppose that $\Sigma$ is a vertical surface parametrized by (3). Then the unit
normal vector $N$ is
$N=(\cos\theta\sin v-\sin\theta\sin u\cos v,-\cos\theta\cos v-\sin\theta\sin
u\sin v,\sin\theta\cos u,0),$ (6)
and the mean curvature $H$ is
$H=\tan u\sin\theta-\frac{\theta^{\prime}}{\cos u}.$ (7)
###### Proof.
Suppose that $\Sigma$ is parametrized by (3). Then the tangent plane at each
point of $\Sigma$ is spanned by $\\{\Psi_{s},\Psi_{t}\\}$, where
$\begin{split}\Psi_{s}&=(-\cos u(\cos\theta\sin u\cos v+\sin\theta\sin v),\cos
u(\sin\theta\cos v-\cos\theta\sin u\sin v),\cos\theta\cos^{2}u,0),\\\
\Psi_{t}&=(0,0,0,1).\end{split}$ (8)
A straightforward computation yields that the unit normal vector is (6).
As usual, denote by $g_{ij}$ the coefficients of the first fundamental form
of $\Psi$, where
$g_{11}=\langle\Psi_{s},\Psi_{s}\rangle,\quad
g_{12}=\langle\Psi_{s},\Psi_{t}\rangle,\quad
g_{22}=\langle\Psi_{t},\Psi_{t}\rangle.$
The formula of $H$ is
$H=\frac{g_{22}b_{11}-2g_{12}b_{12}+g_{11}b_{22}}{g_{11}g_{22}-{g_{12}}^{2}},$
where $b_{ij}$ are the coefficients of the second fundamental form. Here
$b_{11}=-\langle N_{s},\Psi_{s}\rangle,\quad b_{12}=-\langle
N_{s},\Psi_{t}\rangle,\quad b_{22}=-\langle N_{t},\Psi_{t}\rangle.$
A computation of $g_{ij}$ gives $g_{11}=g_{22}=(\cos u)^{2}$ and $g_{12}=0$.
In particular, $\cos u(s)\not=0$ for all $s\in I$. Then
$g_{11}g_{22}-g_{12}^{2}=(\cos u)^{4}$. For the coefficients of the second
fundamental form, we have $b_{12}=b_{22}=0$ and
$b_{11}=\cos u(\sin u\sin\theta-\theta^{\prime}).$ (9)
Then the mean curvature $H$ is (7). ∎
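As a sanity check of Proposition 2.1, the following SymPy sketch rebuilds $H$ from the parametrization (3), the normal (6), and the fundamental forms, and compares it numerically with formula (7); the numerical values at the end are arbitrary.

```python
import sympy as sp

s, t = sp.symbols('s t')
u, v, th = (sp.Function(f)(s) for f in ('u', 'v', 'theta'))

# Parametrization (3) and the relations u' = cos(u) cos(theta), v' = sin(theta).
Psi = sp.Matrix([sp.cos(u)*sp.cos(v), sp.cos(u)*sp.sin(v), sp.sin(u), t])
subs = {sp.Derivative(u, s): sp.cos(u)*sp.cos(th),
        sp.Derivative(v, s): sp.sin(th)}
Ps, Pt = Psi.diff(s).subs(subs), Psi.diff(t)

# Unit normal (6).
Nv = sp.Matrix([sp.cos(th)*sp.sin(v) - sp.sin(th)*sp.sin(u)*sp.cos(v),
                -sp.cos(th)*sp.cos(v) - sp.sin(th)*sp.sin(u)*sp.sin(v),
                sp.sin(th)*sp.cos(u), 0])

g11, g12, g22 = Ps.dot(Ps), Ps.dot(Pt), Pt.dot(Pt)
b11 = -Nv.diff(s).subs(subs).dot(Ps)               # b12 = b22 = 0
H = g22*b11 / (g11*g22 - g12**2)

# Compare with formula (7) at arbitrary numerical values.
expr = H - (sp.tan(u)*sp.sin(th) - sp.Derivative(th, s)/sp.cos(u))
expr = expr.subs(sp.Derivative(th, s), 0.4).subs({u: 0.3, v: 1.1, th: 0.7})
assert abs(float(expr)) < 1e-12
```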
###### Proposition 2.2.
Suppose that $\Sigma$ is a rotational surface parametrized by (5). Then the
unit normal vector $N$ is
$N(s,\varphi)=(\sin\theta\sin u\cos\varphi,\sin\theta\sin
u\sin\varphi,-\sin\theta\cos u,\cos\theta),$ (10)
and the mean curvature $H$ is
$H=\theta^{\prime}-\sin\theta\tan u.$ (11)
###### Proof.
From (5), the basis $\\{\Psi_{s},\Psi_{\varphi}\\}$ of each tangent plane of
$\Sigma$ is
$\begin{split}\Psi_{s}(s,\varphi)&=(-u^{\prime}\sin
u\cos\varphi,-u^{\prime}\sin u\sin\varphi,u^{\prime}\cos u,v^{\prime}),\\\
\Psi_{\varphi}(s,\varphi)&=(-\sin\varphi\cos u,\cos\varphi\cos
u,0,0).\end{split}$ (12)
Thus $g_{11}=1$, $g_{12}=0$ and $g_{22}=\cos^{2}u$, in particular, $\cos
u\not=0$. As a consequence, the unit normal vector $N$ is (10). The
computation of the coefficients of the second fundamental form gives
$\begin{split}b_{11}&=\theta^{\prime}\\\ b_{12}&=0\\\ b_{22}&=-\sin\theta\sin
u\cos u.\end{split}$ (13)
Hence we deduce that the expression of $H$ is (11).
∎
## 3\. The class of $V$-solitons
Consider the vector field
$V=\partial_{t}.$ (14)
The fact that $V$ is tangent to the fibers of the submersion
$\mathbb{S}^{2}\times\mathbb{R}\to\mathbb{S}^{2}$ gives $V$ special
properties. For example, $V$-solitons of $\mathbb{S}^{2}\times\mathbb{R}$ can
be viewed as weighted minimal surfaces in a space with density. So, let
$e^{t}dA$ and $e^{t}dV$ the area and volume of
$\mathbb{S}^{2}\times\mathbb{R}$ with a weight $e^{t}$, where $t$ is the last
coordinate of the space. Considering the energy functional $\Omega\mapsto
E(\Omega)=\int_{\Omega}e^{t}dA$ defined for compact subdomains
$\Omega\subset\Sigma$, a critical point of this functional, also called a
weighted minimal surface, is a surface characterized by the equation
$H-\langle N,\nabla t\rangle=0$, where $\nabla$ is the gradient in
$\mathbb{S}^{2}\times\mathbb{R}$. Since $\nabla t=\partial_{t}=V$, we have
proved that a weighted minimal surface in
$(\mathbb{S}^{2}\times\mathbb{R},e^{t}\langle,\rangle)$ is a $V$-soliton. A
property of weighted minimal surfaces is that they satisfy a tangency principle
as a consequence of the Hopf maximum principle for elliptic equations of
divergence type. In our context, the tangency principle asserts that if two
$V$-solitons $\Sigma_{1}$ and $\Sigma_{2}$ touch at some interior point
$p\in\Sigma_{1}\cap\Sigma_{2}$ and one surface is in one side of the other
around $p$, then $\Sigma_{1}$ and $\Sigma_{2}$ coincide in a neighborhood of
$p$. The following result proves that slices are the only closed $V$-solitons.
###### Theorem 3.1.
Slices $\mathbb{S}^{2}\times\\{t_{0}\\}$, $t_{0}\in\mathbb{R}$, are the only
closed (compact without boundary) $V$-solitons in
$\mathbb{S}^{2}\times\mathbb{R}$.
###### Proof.
Let $\psi\colon\Sigma\to\mathbb{S}^{2}\times\mathbb{R}$ be a closed
$V$-soliton. Define on $\Sigma$ the height function
$h\colon\Sigma\to\mathbb{R}$ by $h(q)=\langle\psi(q),\partial_{t}\rangle$. It
is known that for any surface of $\mathbb{S}^{2}\times\mathbb{R}$, the
Laplacian of $h$ is $\Delta h=H\langle N,\partial_{t}\rangle$ [16].
Using that $\Sigma$ is a $V$-soliton, then $\Delta h=\langle
N,\partial_{t}\rangle^{2}=\langle N,V\rangle^{2}$. Integrating in $\Sigma$,
the divergence theorem yields $\int_{\Sigma}\langle N,V\rangle^{2}=0$. Thus
$\langle N,V\rangle=0$ on $\Sigma$ and $H=0$. In particular, $\Delta h=0$. By
the maximum principle, $h$ is a constant function, namely $h(q)=t_{0}$, for
some $t_{0}\in\mathbb{R}$. This proves that
$\Sigma\subset\mathbb{S}^{2}\times\\{t_{0}\\}$ and thus, both surfaces
coincide. ∎
We begin with the study of $V$-solitons invariant by the group $\mathcal{T}$.
We prove that any vertical $V$-soliton is trivial in the sense that it is a
minimal surface, even more, we prove that it is a cylinder of type
$\mathbb{S}^{1}\times\mathbb{R}\subset\mathbb{S}^{2}\times\mathbb{R}$.
###### Theorem 3.2.
Suppose that $\Sigma$ is a vertical surface. Then $\Sigma$ is a $V$-soliton if
and only if its generating curve is a geodesic of $\mathbb{S}^{2}$, that is,
$\Sigma$ is a vertical cylinder over a geodesic of $\mathbb{S}^{2}$.
###### Proof.
Let $\Sigma$ be a vertical surface. Since the vertical lines are fibers of the
submersion $\mathbb{S}^{2}\times\mathbb{R}\to\mathbb{S}^{2}$, the mean
curvature $H$ of $\Sigma$ is $H=\kappa$, where $\kappa$ is the curvature of
$\alpha$. Moreover, the unit normal vector is horizontal, hence $\langle
N,V\rangle=0$. This proves the result. ∎
We now study $V$-solitons of rotational type. As we have indicated in the
previous section, we can assume that the rotation axis is the $z$-axis. Thus a
rotational surface $\Sigma$ can be parametrized by (5).
An immediate example of rotational $V$-soliton is the cylinder
$\mathcal{C}=(\mathbb{S}^{1}\times\\{0\\})\times\mathbb{R}$. This surface
corresponds to the curve $(u(s),v(s))=(0,s)$, $s\in\mathbb{R}$, in (4). Thus
$\alpha(s)=(1,0,0,s)$ is the vertical line through the point
$(1,0,0)\in\mathbb{S}^{2}$. The unit normal $N$ is orthogonal to $V$. Since
the generating curve is a geodesic of $\mathbb{S}^{2}$, the surface is
minimal, proving that $\mathcal{C}$ is a $V$-soliton. This surface is also a
vertical $V$-soliton (Thm. 3.2). We now characterize rotational $V$-solitons
in terms of their generating curves.
###### Proposition 3.3.
Let $\Sigma$ be a rotational surface in $\mathbb{S}^{2}\times\mathbb{R}$. If
$\Sigma$ is parametrized by (5), then $\Sigma$ is a $V$-soliton if and only if
the generating curve $\alpha$ satisfies
$\left\\{\begin{split}u^{\prime}&=\cos\theta\\\ v^{\prime}&=\sin\theta\\\
\theta^{\prime}&=\sin\theta\tan u+\cos\theta\end{split}\right.$ (15)
###### Proof.
This is an immediate consequence of Prop. 2.2. Indeed, from the expression of
$N$ in (10), we have
$\langle N,V\rangle=\cos\theta.$
Using (11), then Eq. (1) is $\theta^{\prime}=\sin\theta\tan u+\cos\theta$.
∎
We now study the solutions of (15), describing their main geometric
properties. Recall that $\cos u\not=0$ by regularity of the surface (Prop.
2.2). Since the last equation of (15) does not depend on $v$, we can study the
solutions $\alpha$ of (15) projecting in the $(u,\theta)$-plane, which in turn
leads to the planar autonomous ordinary system
$\left\\{\begin{split}u^{\prime}&=\cos\theta\\\
\theta^{\prime}&=\sin\theta\tan u+\cos\theta.\end{split}\right.$ (16)
The phase plane of (16) is depicted in Fig. 1. By regularity of the surface,
$u(s)\in(-\pi/2,\pi/2)$. Thus the phase plane of (16) is the set
$A=\\{(u,\theta)\colon u\in(-\frac{\pi}{2},\frac{\pi}{2}),\theta\in\mathbb{R}\\}.$
The trajectories of (16) are the solutions $\gamma(s)=(u(s),\theta(s))$,
regarded in $A$, once initial conditions $(u_{0},\theta_{0})\in A$ have been
fixed. These trajectories foliate $A$ as a consequence of the
existence and uniqueness of the Cauchy problem of (16).
Figure 1. The $(u,\theta)$-phase plane of (16). The red points are the
equilibrium points $(0,\pm\frac{\pi}{2})$, where the surface is the cylinder
$\mathcal{C}$.
The equilibrium points of (16) are $(u,\theta)=(0,\frac{\pi}{2})$ and
$(u,\theta)=(0,-\frac{\pi}{2})$. The remaining equilibrium points can be
obtained by translations by multiples of $\pi$ in the $\theta$-coordinate. If
$(u,\theta)=(0,\frac{\pi}{2})$, then $u(s)=0$, $v(s)=s$. Thus the generating
curve $\alpha$ is the vertical fiber at $(1,0,0)\in\mathbb{S}^{2}$
parametrized with increasing variable $s$, $v(s)=s$. For this curve, the
corresponding surface is the vertical right cylinder $\mathcal{C}$ and this
solution is already known. If $(u,\theta)=(0,-\frac{\pi}{2})$, then $v(s)=-s$
and the generating curve is again the above vertical line but parametrized by
decreasing variable $s$. The surface is the cylinder $\mathcal{C}$ again.
The qualitative behaviour of the trajectories near the equilibrium points is
analyzed, as usual, by the linearized system (see [13] as a general
reference). At the point $(u,\theta)=(0,\frac{\pi}{2})$, we find
$\left(\begin{array}[]{ll}0&-1\\\ 1&-1\end{array}\right)$
as the matrix of the linearized system. The eigenvalues of this matrix are the
two conjugate complex numbers $\frac{1}{2}(-1\pm i\sqrt{3})$. Since the real
parts are negative, the point $(0,\frac{\pi}{2})$ is a stable spiral.
Thus trajectories spiral in towards the equilibrium point as $s$
increases. Similarly, for the point $(u,\theta)=(0,-\frac{\pi}{2})$, the
matrix of the corresponding linearized system is
$\left(\begin{array}[]{ll}0&1\\\ -1&1\end{array}\right).$
Then the eigenvalues of this matrix are $\frac{1}{2}(1\pm i\sqrt{3})$ and the
point $(0,-\frac{\pi}{2})$ is an unstable spiral. Since there are no more
equilibrium points, every trajectory starts in the unstable spiral
$(0,-\frac{\pi}{2})$ and ends in the stable spiral $(0,\frac{\pi}{2})$.
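This qualitative picture is easy to reproduce numerically. The sketch below checks the eigenvalues of the linearization at $(0,\frac{\pi}{2})$ and integrates (16) from an arbitrary initial condition, observing convergence to the stable spiral; the initial condition and integration horizon are arbitrary choices of ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linearization at (u, theta) = (0, pi/2); eigenvalues (-1 ± i sqrt(3))/2.
J = np.array([[0.0, -1.0],
              [1.0, -1.0]])
eigs = np.linalg.eigvals(J)
assert np.allclose(eigs.real, -0.5) and np.all(eigs.imag != 0)  # stable spiral

# Integrate (16) from an arbitrary initial condition in A.
def rhs(s, y):
    u, th = y
    return [np.cos(th), np.sin(th) * np.tan(u) + np.cos(th)]

sol = solve_ivp(rhs, [0.0, 80.0], [0.3, 0.0], rtol=1e-10, atol=1e-12)
u_end, th_end = sol.y[:, -1]
assert abs(u_end) < 1e-3 and abs(th_end - np.pi / 2) < 1e-3   # spirals into (0, pi/2)
```

The linearization at the unstable spiral and at the saddle points of the analogous system (20) below can be checked the same way.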
In order to give initial conditions at $s=0$, notice that if we do a vertical
translation in $\mathbb{R}^{4}$ of the generating curve $\alpha$, the surface
is a translated from the original. This vertical translation is simply adding
a constant to the last coordinate function $v=v(s)$. Thus, at the initial time
$s=0$, we can assume $v(0)=0$. On the other hand, the fact that the
trajectories go from $(0,-\frac{\pi}{2})$ towards $(0,\frac{\pi}{2})$ implies
that the function $\theta$ attains the value $0$. As a first case, we
consider the initial condition $\theta(0)=0$, so that the curve
$\beta(s)=(u(s),v(s))$ starts at the $v$-axis. So, let
$u(0)=v(0)=\theta(0)=0.$ (17)
It is immediate from (15) that
$(\bar{u}(s)=-u(-s),\bar{v}(s)=v(-s),\bar{\theta}(s)=-\theta(-s))$ is also a
solution of (15) with the same initial conditions (17). Thus the graph of
$\beta$ is symmetric about the $v$-axis.
Given initial conditions (17), we know that
$(u(s),\theta(s))\to(0,\frac{\pi}{2})$. Then the right-hand sides of (16)
(and of (15)) are bounded functions, proving that the domain of the solutions is
$\mathbb{R}$. Since $|v^{\prime}(s)|\to 1$, then
$\lim_{s\to\pm\infty}v(s)=\infty$ by symmetry of $\beta$. Thus
$\lim_{s\to\pm\infty}\beta(s)=(0,\infty)$, that is, $\beta$ is asymptotic to
the $v$-axis at infinity. The projection of $\alpha$ in the factor
$\mathbb{S}^{2}$ converges to $(1,0,0)$. This implies that $\Sigma$ is
asymptotic to the cylinder $\mathcal{C}$.
Because $(0,\pi/2)$ is a stable spiral, the function $\theta(s)$ converges to
$\pi/2$ oscillating around this value, and the same occurs for the function
$u(s)$ around $u=0$. In particular, the graph of $\beta$ intersects the
$v$-axis infinitely many times. By the symmetry of $\beta$ with respect to
the $v$-axis, we deduce that $\beta$ has (infinitely many) self-intersections.
We claim that the coordinate function $v(s)$ of $\beta$ has no critical points
except $s=0$. We know
$v^{\prime\prime}(s)=\theta^{\prime}(s)\cos\theta(s)=\sin\theta(s)\cos\theta(s)\tan
u(s)+\cos^{2}\theta(s).$
If $v^{\prime}(s)=0$ at $s=s_{0}$, then $\sin\theta(s_{0})=0$, hence
$v^{\prime\prime}(s_{0})=1$. Thus every critical point is a local minimum,
from which we deduce that $s=0$ is the only critical point. Once we have
proved that $v^{\prime}\not=0$ for all $s\not=0$, each branch of $\beta$, that
is, $\beta(0,\infty)$ and $\beta(-\infty,0)$, is a graph over the $v$-axis.
This proves that $\beta$ is a bi-graph over the $v$-axis. See Fig. 2, left.
It remains to study the case where the curve $\beta$ does not start on the
$v$-axis, that is, $u(0)=u_{0}$ with $u_{0}\not=0$. Since the equilibrium
points are of spiral type, the solutions of (16) under this initial condition
converge (or diverge) to the equilibrium points. Thus, the curve $\beta$
meets the $v$-axis and is asymptotic to it.
We summarize the above arguments.
Figure 2. Generating curves of rotational $V$-solitons. Left: the curve
$\beta$. Middle and right: projection of the generating curve $\alpha$ on the
$xzt$-space (middle) and showing it as subset of the cylinder
$\mathbb{S}^{1}\times\mathbb{R}$ (right).
Figure 3. A rotational $V$-soliton after the stereographic projection
$p_{r}$. The surface after rotating $\beta$ in the interval $[0,\infty)$
(left) and in the interval $(-\infty,0]$ (middle). Right: the full surface.
###### Theorem 3.4.
Let $\Sigma$ be a rotational $V$-soliton. Then $\Sigma$ is the cylinder
$\mathcal{C}$ or $\Sigma$ is parametrized by (5) with the following
properties:
1. (1)
The curve $\beta(s)=(u(s),v(s))$ has self-intersections and it is asymptotic
to the $v$-axis at infinity. In case that $\beta$ satisfies the initial
conditions (17), then $\beta$ is a symmetric bi-graph on the $v$-axis.
2. (2)
The surface $\Sigma$ is not embedded, having infinitely many
self-intersection points.
3. (3)
The surface $\Sigma$ is asymptotic to the cylinder $\mathcal{C}$.
In Fig. 3 we show the surface $\Sigma$ after a stereographic projection
$p_{r}$ of the first factor $\mathbb{S}^{2}$ into $\mathbb{R}^{2}$,
$p_{r}\colon\mathbb{S}^{2}\times\mathbb{R}\to\mathbb{R}^{2}\times\mathbb{R}$,
$p_{r}(x,y,z,t)=(\frac{x}{1-z},\frac{y}{1-z},t)$.
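For reference, a direct transcription of this projection into code (a minimal sketch; the sample point is arbitrary):

```python
import numpy as np

def p_r(x, y, z, t):
    """Stereographic projection of the S^2 factor, as defined above."""
    return np.array([x / (1 - z), y / (1 - z), t])

print(p_r(1.0, 0.0, 0.0, 0.5))   # a point on the equator at height 0.5 -> [1. 0. 0.5]
```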
## 4\. The class of $R$-solitons
In this section we study $R$-solitons, where the vector field $R$ is
$R=-y\partial_{x}+x\partial_{y}.$ (18)
Following our scheme, we classify $R$-solitons that are vertical surfaces
and, next, rotational surfaces. First, suppose that $\Sigma$ is a vertical
$R$-soliton. We know that $\Sigma$ is parametrized by (3) and that the
generating curve $\alpha$ is contained in $\mathbb{S}^{2}$, see (2). A first
example of vertical $R$-soliton is a vertical cylinder over the geodesic
$\mathbb{S}^{1}\times\\{0\\}$ of $\mathbb{S}^{2}$. To be precise, let
$(u(s),v(s))=(0,s)$. Then $\alpha(s)=(\cos s,\sin s,0)$ in (2) and the surface
is the vertical cylinder over $\alpha$ which we have denoted by $\mathcal{C}$
in the previous section. This surface is minimal and it is immediate that $N$
is orthogonal to $R$. Thus $\mathcal{C}$ is an $R$-soliton. Recall that
$\mathcal{C}$ is also a rotational $V$-soliton.
###### Proposition 4.1.
Suppose that $\Sigma$ is a vertical surface parametrized by (3). Then $\Sigma$
is an $R$-soliton if and only if the generating curve $\alpha$ satisfies
$\left\\{\begin{split}u^{\prime}&=\cos u\cos\theta\\\
v^{\prime}&=\sin\theta\\\ \theta^{\prime}&=\sin\theta\sin u+(\cos
u)^{2}\cos\theta.\end{split}\right.$ (19)
###### Proof.
The expression of the unit normal $N$ is given in (6). Thus
$\langle N,R\rangle=-\cos u\cos\theta.$
Since the expression of $H$ is given in (7), then Eq. (1) is
$\theta^{\prime}=\sin\theta\sin u+(\cos u)^{2}\cos\theta$, proving the result.
∎
As in the previous section, we project the solutions of (19) on the
$(u,\theta)$-plane, obtaining the autonomous system
$\left\\{\begin{split}u^{\prime}&=\cos u\cos\theta\\\
\theta^{\prime}&=\sin\theta\sin u+(\cos u)^{2}\cos\theta.\end{split}\right.$
(20)
Equilibrium points are $(u,\theta)=(0,\pm\pi/2)$ again, together with the points
$(u,\theta)=(\pm\frac{\pi}{2},0)$ and translations of length $\pi$ of these
points in the $\theta$ variable. For the points $(u,\theta)=(0,\pm\pi/2)$, the
solutions are $u(s)=0$ and $v(s)=\pm s$. In this case, we know that
$\Sigma$ is the vertical cylinder $\mathcal{C}$. The points
$(u,\theta)=(\pm\frac{\pi}{2},0)$ do not correspond to surfaces because
regularity is lost. In fact, coming back to the parametrization (3), the map
$\Psi$ is the parametrization of the vertical fiber at
$(0,0,1)\in\mathbb{S}^{2}$.
Figure 4. The $(u,\theta)$-phase plane of (20). The red points are the
equilibrium points $(0,\pm\frac{\pi}{2})$.
The phase plane is the set $A=(-\pi,\pi)\times(-\pi,\pi)$ in the
$(u,\theta)$-plane by the periodicity of $\theta$. If we now compute the
linearized system at the points $(u,\theta)=(0,\pm\frac{\pi}{2})$, we find
that they have the same character as those of the system (16). Thus we
have that $(u,\theta)=(0,\frac{\pi}{2})$ is a stable spiral and
$(u,\theta)=(0,-\frac{\pi}{2})$ is an unstable spiral.
For the points $(\frac{\pi}{2},0)$ and $(-\frac{\pi}{2},0)$, the linearized
system is
$\left(\begin{array}[]{ll}-1&0\\\
0&1\end{array}\right),\quad\left(\begin{array}[]{ll}1&0\\\
0&-1\end{array}\right),$
respectively. Since the eigenvalues are two real numbers with opposite signs,
both equilibrium points are saddle points. See Fig. 4. However, this does not
affect the reasoning, since the arguments are similar to those in the proof
of Thm. 3.4. We omit the details. Figure 5 shows generating curves of
vertical $R$-solitons.
###### Theorem 4.2.
Let $\Sigma$ be a vertical $R$-soliton. Then $\Sigma$ is the cylinder
$\mathcal{C}$ or $\Sigma$ is parametrized by (3) with the following
properties:
1. (1)
The curve $\beta(s)=(u(s),v(s))$ has self-intersections and it is asymptotic
to the $v$-axis at infinity. In case that $\beta$ satisfies the initial
conditions (17), then $\beta$ is a symmetric bi-graph on the $v$-axis.
2. (2)
The surface $\Sigma$ is not embedded, having infinitely many
self-intersection points.
3. (3)
The surface $\Sigma$ is asymptotic to the cylinder $\mathcal{C}$.
Figure 5. Generating curves of vertical $R$-solitons. Left: solution curve
$\beta(s)=(u(s),v(s))$. Middle: the generating curve $\alpha$. Right: the
generating curve $\alpha$ contained in the unit sphere $\mathbb{S}^{2}$.
The second type of $R$-solitons are those surfaces that are invariant by a
one-parameter group of rotations of the first factor. Since we have defined in
(18) the vector field $R$ as the rotation about the direction
$(0,0,1)\in\mathbb{S}^{2}$, we cannot a priori fix the rotational axis of the
surface.
###### Theorem 4.3.
The only rotational $R$-solitons are:
1. (1)
Slices $\mathbb{S}^{2}\times\\{t_{0}\\}$, $t_{0}\in\mathbb{R}$, viewed as
rotational surfaces with respect to any axis of $\mathbb{S}^{2}$ and;
2. (2)
Rotational minimal surfaces about the $z$-axis.
###### Proof.
Let $\Sigma$ be a rotational $R$-soliton. In order to have manageable
computations of the mean curvature $H$ and the unit normal $N$ of $\Sigma$, we
will assume in this proof that the rotation axis of $\Sigma$ is the $z$-axis.
Thus we are assuming that the surface is parametrized by (5). Hence the vector
field $R$ is now arbitrary, with no a priori relation to the $z$-axis.
The vector field $R$ is determined by an orthonormal basis
$B=\\{E_{1},E_{2},E_{3}\\}$ of $\mathbb{R}^{3}$. With respect to $B$, the
vector field $R$ can be expressed by
$R(x_{1},x_{2},x_{3},t)=-x_{2}E_{1}+x_{1}E_{2},$
where $(x_{1},x_{2},x_{3})$ are coordinates of $\mathbb{S}^{2}$ with respect
to $B$.
We now write $E_{1}$ and $E_{2}$ with respect to the canonical basis of
$\mathbb{R}^{3}$,
$E_{i}=(\cos m_{i}\cos n_{i},\cos m_{i}\sin n_{i},\sin m_{i}),\quad i=1,2,$
where $m_{i},n_{i}\in\mathbb{R}$. The unit normal $N$ and the mean curvature
$H$ of $\Sigma$ were computed in (10) and (11), respectively. Then
$\begin{split}\langle N,R\rangle&=-\langle\Psi,E_{2}\rangle\langle
N,E_{1}\rangle+\langle\Psi,E_{1}\rangle\langle N,E_{2}\rangle\\\ =&(\sin
m_{1}\cos m_{2}\sin n_{2}-\cos m_{1}\sin n_{1}\sin
m_{2})\sin\theta\sin\varphi\\\ &+(\sin m_{1}\cos m_{2}\cos n_{2}-\cos
m_{1}\cos n_{1}\sin m_{2})\sin\theta\cos\varphi.\end{split}$ (21)
Looking now at Eq. (1), the right-hand side, that is, $\langle N,R\rangle$ in
formula (21), depends on the variable $\varphi$. However, the left-hand side
of (1), the mean curvature $H$ in formula (11), does not depend on $\varphi$.
This implies that the coefficients of
$\sin\varphi$ and $\cos\varphi$ in (21) must vanish. Both coefficients contain
the factor $\sin\theta$. This gives the following discussion of cases.
1. (1)
Case $\sin\theta(s)=0$ for all $s$. Then $u(s)=s$ and $v(s)$ is a constant
function, $v(s)=t_{0}$, $t_{0}\in\mathbb{R}$. This proves that $\Sigma$ is a
slice $\mathbb{S}^{2}\times\\{t_{0}\\}$.
2. (2)
Case $\sin\theta(s_{0})\not=0$ at some $s_{0}$. Then in an interval around
$s=s_{0}$, we deduce
$\sin m_{1}\cos m_{2}\sin n_{2}-\cos m_{1}\sin n_{1}\sin m_{2}=0,$ $\sin
m_{1}\cos m_{2}\cos n_{2}-\cos m_{1}\cos n_{1}\sin m_{2}=0.$
Both identities imply $E_{1}\times E_{2}=(0,0,1)$. Thus $R$ coincides with the
vector field defined in (18) and the rotation axis is the $z$-axis. Moreover,
the right hand-side of (1) is $0$, proving that the surface is minimal. This
proves the result.
∎
Minimal surfaces in $\mathbb{S}^{2}\times\mathbb{R}$ of rotational type with
respect to an axis in the first factor were classified by Pedrosa and Ritoré
[12].
## Acknowledgements
Rafael López is a member of the IMAG and of the Research Group “Problemas
variacionales en geometría”, Junta de Andalucía (FQM 325). This research has
been partially supported by MINECO/MICINN/FEDER grant no.
PID2020-117868GB-I00, and by the “María de Maeztu” Excellence Unit IMAG,
reference CEX2020-001105- M, funded by MCINN/AEI/10.13039/501100011033/
CEX2020-001105-M. Marian Ioan Munteanu is thankful to Romanian Ministry of
Research, Innovation and Digitization, within Program 1 – Development of the
national RD system, Subprogram 1.2 – Institutional Performance – RDI
excellence funding projects, Contract no.11PFE/30.12.2021, for financial
support.
## References
* [1] A. Bueno, Translating solitons of the mean curvature flow in the space $\mathbb{H}^{2}\times\mathbb{R}$. J. Geom. 109 (2018), 42.
* [2] A. Bueno, Uniqueness of the translating bowl in $\mathbb{H}^{2}\times\mathbb{R}$. J. Geom. 11 (2020), 43.
* [3] A. Bueno, R. López, A new family of translating solitons in hyperbolic space. arXiv:2402.05533v1 [math.DG] (2024).
* [4] A. Bueno, R. López, Horo-shrinkers in the hyperbolic space. arXiv:2402.05527 [math.DG] (2024).
* [5] A. Bueno, R. López, The class of grim reapers in $\mathbb{H}^{2}\times\mathbb{R}$. arXiv:2402.05772 [math.DG] (2024).
* [6] G. Huisken, C. Sinestrari, Convexity estimates for mean curvature flow and singularities of mean convex surfaces. Acta Math. 183 (1999), 45–70.
* [7] R. F. de Lima, G. Pipoli, Translators to higher order mean curvature flows in $\mathbb{R}^{n}\times\mathbb{R}$ and $\mathbb{H}^{n}\times\mathbb{R}$. arXiv:2211.03918v2 [math.DG] (2024).
* [8] R. F. de Lima, A. K. Ramos, J. P. dos Santos, Solitons to mean curvature flow in the hyperbolic 3-space. arXiv:2307.14136 [math.DG] (2023).
* [9] J. de Lira, F. Martín, Translating solitons in riemannian products. J. Differential Equations, 266 (2019), 7780–7812.
* [10] R. López, M. I. Munteanu, Translators in the special linear group. Submitted, preprint (2024).
* [11] L. Mari, J. Rocha de Oliveira, A. Savas-Halilaj, R. Sodré de Sena, Conformal solitons for the mean curvature flow in hyperbolic space. arXiv:2307.05088 [math.DG] (2023).
* [12] R. Pedrosa, M. Ritoré, Isoperimetric domains in the Riemannian product of a circle with a simply connected space form and applications to free boundary value problems. Indiana Univ. Math. J. 48 (1999), 1357–1394.
* [13] L. Perko, Differential Equations and Dynamical Systems. Springer, New York, 2001.
* [14] G. Pipoli, Invariant translators of the solvable group. Ann. Mat. Pura Appl. 199 (2020), 1961–1978.
* [15] G. Pipoli, Invariant translators of the Heisenberg group. J. Geom. Anal. 31 (2021), 5219–5258.
* [16] H. Rosenberg, Minimal surfaces in $M^{2}\times\mathbb{R}$. Illinois J. Math. 46 (2001), 1177–1195.
# LightCTS: A Lightweight Framework for Correlated Time Series Forecasting
Zhichen Lai†, Dalin Zhang†∗, Huan Li‡∗, Christian S. Jensen†, Hua Lu§, Yan
Zhao† †Department of Computer Science, Aalborg University, Denmark
‡College of Computer Science and Technology, Zhejiang University, China
§Department of People and Technology, Roskilde University, Denmark
(2023)
###### Abstract.
Correlated time series (CTS) forecasting plays an essential role in many
practical applications, such as traffic management and server load control.
Many deep learning models have been proposed to improve the accuracy of CTS
forecasting. However, while models have become increasingly complex and
computationally intensive, they struggle to improve accuracy. Pursuing a
different direction, this study aims instead to enable much more efficient,
lightweight models that preserve accuracy while being able to be deployed on
resource-constrained devices. To achieve this goal, we characterize popular
CTS forecasting models and yield two observations that indicate directions for
lightweight CTS forecasting. On this basis, we propose the LightCTS framework
that adopts plain stacking of temporal and spatial operators instead of
alternate stacking which is much more computationally expensive. Moreover,
LightCTS features light temporal and spatial operator modules, called L-TCN
and GL-Former, that offer improved computational efficiency without
compromising their feature extraction capabilities. LightCTS also encompasses
a last-shot compression scheme to reduce redundant temporal features and speed
up subsequent computations. Experiments with single-step and multi-step
forecasting benchmark datasets show that LightCTS is capable of nearly state-
of-the-art accuracy at much reduced computational and storage overheads.
correlated time series forecasting, lightweight neural networks
*corresponding authors: D. Zhang<EMAIL_ADDRESS>and H. Li<EMAIL_ADDRESS>
## 1\. Introduction
Figure 1. DL-based CTS forecasting frameworks using (a) alternate stacking and
(b) plain stacking.
Driven in part by the availability of increasingly advanced and affordable
sensor technologies, cyber-physical systems (CPSs) (Derler et al., 2011) are
being deployed at a rapid pace. In a typical CPS, multiple sensors sample
physical processes of interest and emit multiple time series with
correlations. One example is sensors that sample power production by
photovoltaic installations in a geographical region (Lai et al., 2018).
Extracting and exploiting correlations in correlated time series (CTS) is
important in many applications, such as the forecasting of traffic situations
(Cirstea et al., 2021; Wu et al., 2021; Papadimitriou and Yu, 2006; Yuan et
al., 2020), air quality (Du et al., 2019), server loads (Faloutsos et al.,
2019; Ma et al., 2018), social activity (Zhu and Shasha, 2002), and wind farm
maintenance (Cheng et al., 2022). In this study, we focus on CTS forecasting.
One significant application occurs in the predictive maintenance of wind
turbines (Cheng et al., 2022), which are often deployed in remote and harsh
locations. Accurate and instant forecasts of a turbine’s operating status,
e.g., covering pitch speed and active power, can enable identification of
potential failures, thereby enabling timely maintenance, and thus reducing
regular maintenance costs, decreases in generated power, and potential safety
hazards (Lai et al., 2022). Hence, forecasting has attracted extensive
research attention.
Deep learning (DL) techniques have recently shown impressive CTS forecasting
performance. A variety of DL modules, such as convolutional neural networks
(CNNs) (Yu et al., 2018b; Wu et al., 2019, 2020; Guo et al., 2019; Huang et
al., 2020; Chen et al., 2022; Rao et al., 2022), recurrent neural networks
(RNNs) (Bai et al., 2020; Chen et al., 2020a; Li et al., 2018; Chang et al.,
2018; Shih et al., 2019), graph convolutional networks (GCNs) (Li et al.,
2018; Pan et al., 2021; Wu et al., 2019, 2021; Guo et al., 2019; Chen et al.,
2022; Rao et al., 2022), and Transformers (Xu et al., 2020; Park et al., 2020;
Zhou et al., 2021; Wu et al., 2021), are used to construct operators for
extracting temporal features from individual time series or spatial features
across correlated time series. These two categories of operators are referred
to as temporal operators (T-operators) and spatial operators (S-operators)
(see the categorization in Table 2). Studies (Yu et al., 2018b; Wu et al.,
2021) show that such operators are effective at feature extraction and enable
state-of-the-art CTS forecasting accuracy. We analyze the commonalities of
these DL-based models and present a generic framework as shown in Figure 1(a).
The framework starts with an embedding module that ingests the raw CTS data;
then, multiple spatio-temporal blocks (ST-blocks) are stacked, each consisting
of a sequence of alternate S- and T-operators for extracting high-order
spatio-temporal features (ST-features); the framework ends with a module that
aggregates ST-features at different depths with residual connections and
outputs a forecast.
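To make this generic architecture concrete, here is a minimal structural sketch in PyTorch; the module names and the wiring are our own placeholders mirroring Figure 1(a), not taken from any specific CTS model.

```python
import torch.nn as nn

def st_block(t_op: nn.Module, s_op: nn.Module) -> nn.Module:
    return nn.Sequential(t_op, s_op)           # alternate T- and S-operators

class GenericCTSModel(nn.Module):
    def __init__(self, embed: nn.Module, blocks: list, head: nn.Module):
        super().__init__()
        self.embed = embed                      # embedding module for raw CTS data
        self.blocks = nn.ModuleList(blocks)     # stacked ST-blocks
        self.head = head                        # aggregation and output module

    def forward(self, x):
        h = self.embed(x)
        outs = []
        for blk in self.blocks:
            h = h + blk(h)                      # residual connection per ST-block
            outs.append(h)
        return self.head(sum(outs))             # aggregate ST-features of all depths
```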
However, DL-based CTS models are often large, and inferencing is often
computationally expensive. This limits the possibilities of deploying CTS
forecasting on resource-constrained edge computing devices, which is otherwise
attractive in CPS applications due to the decentralized computation and
potentially increased service responsiveness (Shi et al., 2016). As an
example, there is a pressing need to monitor and forecast the working status
of wind turbines in real time for maintenance (Cheng et al., 2022). However,
wind turbines are often deployed offshore or in high latitudes, and
transmitting their operational data to a remote data center is costly,
lagging, and fragile to the quality of networking. To detect potential faults
in a timely manner and respond on board, an appealing solution is to place a
lightweight model on an edge device operating locally. Microcontroller units
(MCUs) are widely used edge devices in industry due to their stability and low
cost (Sudharsan et al., 2021). However, they have very limited memory. The
popular STM32F4 series of MCUs have up to 3Mb of memory, which is insufficient
for the deployment of state-of-the-art CTS models like GwNet (Wu et al., 2019).
Moreover, as existing CTS models are not specifically designed for lightweight
applications, simply lightening these models degrades their performance
dramatically (see Table 10).
Moreover, we observe that although recent studies on CTS forecasting focus
mainly on improving accuracy, progress has almost come to a standstill. For
example, AutoCts (Wu et al., 2021), a state-of-the-art forecasting model,
improves GwNet (Wu et al., 2019), a previous state-of-the-art model, by up to
0.06 miles per hour (mph) in terms of mean absolute error (MAE) on traffic
speed forecasting. AutoCts models are much larger and are obtained through
neural architecture search, which involves the training and evaluation of
thousands of large models, which may take up to hundreds of GPU hours,
incurring considerable $CO_{2}$ emissions (Chen et al., 2021). This situation,
characterized by increasingly larger and computationally expensive models with
diminishing accuracy improvements, motivates a different direction, where we
instead aim to achieve lightweight models with competitive accuracy. This will
enable edge computing as well as overall emissions savings (Shi et al., 2016).
Although several lightweight techniques have been proposed in computer vision
(Tan and Le, 2019; Zhang et al., 2018; Han et al., 2020; Li et al., 2021),
these techniques are not readily applicable to CTS forecasting. A key reason
is that lightweight computer vision models focus mainly on simplifying 2D and
3D convolutions for image and video data, while CTS models involve instead 1D
convolution of temporal features and graph convolution of spatial features. In
addition, recent lightweight Transformers (Mehta and Rastegari, 2021; Liu et
al., 2021) reduce the computational cost of the self-attention mechanism by
utilizing the similarities among adjacent and multi-scale image patches, which
are not applicable to CTS data.
We propose LightCTS, a framework that enables lightweight CTS forecasting at
significantly reduced computational cost while retaining forecasting accuracy
comparable to the state-of-the-art. We start with a comprehensive analysis of
existing CTS models, placing them in a generic framework (see Figure 1(a)) and
scrutinizing the computational and storage overheads of their components, both
theoretically and empirically. The analysis yields important observations (see
Section 3) and points to two directions for achieving lightness, namely 1)
simplifying computations associated with ST-feature extraction and 2)
optimizing the generic CTS architecture and compressing redundant temporal
dimensions for costly S-operators.
By following these directions, LightCTS offers a set of novel lightweight
techniques. First, LightCTS includes a novel T-operator module called _Light
Temporal Convolutional Network_ (L-TCN) and a novel S-operator module called
_GlobalLocal TransFormer_ (GL-Former) for temporal and spatial feature
extraction, respectively. Both L-TCN and GL-Former adopt grouping strategies
to reduce the full connections between adjacent layers to local in-group
connections, thus achieving lower complexity than the vanilla TCN and
Transformer. Moreover, LightCTS adopts a simple but effective _plain stacking_
based architecture (see Figure 1(b)) that decouples temporal and spatial
feature extraction and renders the compression and reduction of intermediate
features more flexible. Along with plain stacking, a _last-shot compression_
scheme is employed that retains only the last time-step slice of temporal
features extracted by T-operators. This scheme reduces the features that are
fed to subsequent components with only minor information loss, as TCNs tend to
capture the most significant features in the last time step (Lea et al.,
2016). The plain stacking and last-shot compression combine to considerably
lower the computational and storage overheads of the subsequent S-operators as
well as of the aggregation and output module.
Considering both single-step and multi-step CTS forecasting, we conduct
extensive experiments to evaluate LightCTS on six benchmark datasets. We find
that LightCTS achieves accuracy comparable to those of state-of-the-art
models, but with much lower computational and storage overheads. We have made
our implementation publicly available at https://github.com/AI4CTS/lightcts.
The contributions of the paper are summarized as follows.
* •
We propose LightCTS, a novel lightweight CTS forecasting framework. To the
best of our knowledge, this is the first study of lightweight DL-based CTS
forecasting.
* •
We analyze the architectures, S/T-operators, and resource costs of mainstream
CTS models, and identify key opportunities for achieving lightness.
* •
We contribute L-TCN and GL-Former, two novel lightweight T- and S-operator
modules, targeting the extraction of ST-features of CTS. We also propose a
plain stacking pattern and a last-shot compression scheme, targeting a
reduction of the sizes of the inputs to S-operators and the aggregation and
output module.
* •
We report on experimental findings for different tasks, offering evidence that
LightCTS is capable of state-of-the-art accuracy while reducing computational
costs and model sizes very substantially.
Section 2 covers the definition of CTS and its forecasting tasks; Section 3
analyzes the commonalities of existing CTS models; Section 4 details the
design of LightCTS; Section 5 reports on the experimental study; Section 6 covers
related work on CTS forecasting and knowledge distillation of DL models;
finally, Section 7 concludes and presents research directions.
## 2\. Preliminaries
This section covers preliminaries of CTS and formalizes the problem of CTS
forecasting. Frequently used notations are summarized in Table 1.
Table 1. Summary of notation.
Notation | Description
---|---
$X$ | An indexed set of correlated time series (CTS)
$\mathtt{N}$ | Number of time series in $X$
$\mathtt{T}$ | Number of time steps in $X$
$\mathtt{D}$ | Embedding size of S/T-operators
$\mathtt{P}$ | Number of historical time steps used in CTS forecasting
$\mathtt{Q}$ | Number of future time steps of CTS forecasting
### 2.1. Correlated Time Series
In a cyber-physical system (Derler et al., 2011), $\mathtt{N}$ devices each
generate timestamped data, yielding $\mathtt{N}$ time series. The time series
are called correlated time series (CTS) (Wu et al., 2021), denoted as
$X\in\mathbb{R}^{\mathtt{N}\times\mathtt{T}\times\mathtt{F}}$, where
$\mathtt{T}$ and $\mathtt{F}$ denote the number of time steps and the number
of sensor measurements per time step, respectively. For example, in a wind
turbine farm consisting of 30 turbines, each turbine may emit wind speed and
wind direction measurements at each time step; thus, if the system emits
measurements for 500 time steps, we get $X\in\mathbb{R}^{30\times 500\times
2}$.
Given the $\mathtt{N}$ time series, two kinds of correlations occur: temporal
correlations within time series and spatial correlations across different time
series. On the one hand, consecutive measurements in a time series are
naturally correlated. On the other hand, concurrent measurements by different
devices may be correlated due to, e.g., the spatial proximity of the devices.
For example, traffic flows reported by sensors on connected road segments are
naturally correlated (Pedersen et al., 2020; Wu et al., 2021).
### 2.2. CTS Forecasting Problems
We consider single-step and multi-step CTS forecasting. First, single-step CTS
forecasting aims to predict the $\mathtt{Q}$-th future time step
($\mathtt{Q}\geq 1$); formally,
(1)
$\hat{X}_{t+\mathtt{P+Q}}\leftarrow\mathcal{SF}(X_{t+\mathtt{1}},\ldots,X_{t+\mathtt{P}}),$
where $t$ indexes the beginning time step, $\mathtt{P}$ is the number of
historical time steps used for forecasting, $\hat{X}_{t+\mathtt{P+Q}}$ denotes
the predicted CTS at the future $(t+\mathtt{P+Q})$-th time step, and
$\mathcal{SF}$ denotes a single-step CTS forecasting model.
Next, multi-step CTS forecasting aims to predict $\mathtt{Q}$ ($\mathtt{Q}>1$)
consecutive future time steps in one pass; formally,
(2)
$\\{\hat{X}_{t+\mathtt{P+1}},\ldots,\hat{X}_{t+\mathtt{P+Q}}\\}\leftarrow\mathcal{MF}(X_{t+\mathtt{1}},\ldots,X_{t+\mathtt{P}}),$
where $\mathcal{MF}$ denotes a multi-step CTS forecasting model.
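For concreteness, the tensor shapes involved in the two tasks can be illustrated with a minimal PyTorch sketch; the dimension values below are placeholders, not tied to any dataset.

```python
import torch

# Illustrative values for N series, P historical steps, F measurements
# per step, and Q future steps (placeholders only).
N, P, F, Q = 307, 12, 1, 12

X_hist = torch.randn(N, P, F)   # the window (X_{t+1}, ..., X_{t+P})

y_single = torch.zeros(N, F)    # single-step: only the Q-th future step
y_multi = torch.zeros(N, Q, F)  # multi-step: Q consecutive future steps
```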
For both problems, it is essential to extract the temporal dynamics in each
time series and the spatial correlations among different time series from the
historical data. To this end, deep learning (DL) techniques with powerful
temporal and spatial feature extraction capabilities have been used widely in
CTS models. A review of existing DL-based CTS models is provided in Section 6.
Due to the characteristics of the neural network operators used to extract
spatial and temporal features, the training and inferencing of DL-based CTS
models incur considerable computational and storage overheads. In this study,
_we aim to enable lightweight CTS forecasting models (i.e., models with fewer
computations and parameters) with forecasting accuracy comparable to the
state-of-the-art CTS forecasting models._
## 3\. Analyses and Directions
In Section 3.1, we place existing DL-based CTS modeling proposals in a generic
framework, map the complexity of their internal operators, and investigate the
computational and storage overheads of representative models. Based on this
analysis, we identify directions for achieving a lightweight CTS framework in
Section 3.2.
### 3.1. Analysis of Existing CTS Models
#### 3.1.1. Generic Framework
To gain insight into the prospects of lightening DL-based CTS models, we
consider representative proposals (Li et al., 2018; Yu et al., 2018b; Wu et
al., 2019; Bai et al., 2020; Wu et al., 2020; Xu et al., 2020; Park et al.,
2020; Guo et al., 2019; Chen et al., 2020a; Shih et al., 2019; Wu et al.,
2021; Zhou et al., 2021; Grigsby et al., 2021; Chang et al., 2018; Chen et
al., 2022; Huang et al., 2020; Cirstea et al., 2021; Rao et al., 2022). Figure
1(a) shows a generic framework for these models. Generally, a CTS model has
three components: (1) an _embedding module_ that transforms the raw CTS into
latent representations; (2) a stack of _spatio-temporal blocks_ (ST-blocks)
that extract spatial and temporal correlations as high-order features; and (3)
an _aggregation and output module_ that aggregates ST-features from the ST-
blocks and outputs the result, which is either a single-step or a multi-step
forecast (see Section 2.2).
Being responsible for extracting temporal and spatial correlations, ST-blocks
make up the key component of a CTS model. Typically, an ST-block includes
alternating stacks of _temporal operators_ (T-operators) and _spatial
operators_ (S-operators). For example, the alternate stacking pattern can be
$\langle T,S\rangle$, $\langle S,T\rangle$, $\langle T,S,T\rangle$, etc. The
S/T-operators are the basic ingredients for extracting comprehensible
features.
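The generic framework can also be summarized in code. The following is a minimal PyTorch sketch in which the embedding, ST-blocks, and output module are simple stand-ins rather than actual S/T-operators; the class and parameter names are ours, not taken from any existing implementation.

```python
import torch
import torch.nn as nn

class GenericCTSModel(nn.Module):
    """Minimal sketch of the generic framework in Figure 1(a):
    embedding -> stacked ST-blocks -> aggregation and output."""

    def __init__(self, p_steps, f_feats, d_embed, q_out, n_blocks=2):
        super().__init__()
        # (1) Embedding module: raw CTS (N, P, F) -> latent (N, P, D).
        self.embed = nn.Linear(f_feats, d_embed)
        # (2) ST-blocks; real models alternate T- and S-operators here.
        self.st_blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_embed, d_embed), nn.ReLU())
             for _ in range(n_blocks)]
        )
        # (3) Aggregation and output module: features -> forecast (N, Q).
        self.out = nn.Linear(p_steps * d_embed, q_out)

    def forward(self, x):                 # x: (N, P, F)
        h = self.embed(x)                 # (N, P, D)
        agg = 0
        for block in self.st_blocks:
            h = block(h)                  # extract ST-features
            agg = agg + h                 # aggregate features of all blocks
        return self.out(agg.flatten(1))   # (N, Q)

y_hat = GenericCTSModel(12, 1, 64, 12)(torch.randn(307, 12, 1))  # (307, 12)
```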
#### 3.1.2. S/T-operators
We proceed to study the S/T-operators employed by state-of-the-art models (Li
et al., 2018; Yu et al., 2018b; Wu et al., 2019; Bai et al., 2020; Wu et al.,
2020; Xu et al., 2020; Park et al., 2020; Guo et al., 2019; Chen et al.,
2020a; Shih et al., 2019; Wu et al., 2021; Zhou et al., 2021; Grigsby et al.,
2021; Chang et al., 2018; Chen et al., 2022; Huang et al., 2020; Cirstea et
al., 2021; Rao et al., 2022). Specifically, we analyze each operator’s time
complexity in terms of _FLOPs_ (floating-point operations) and space
complexity in terms of _the number of model parameters_. Referring to Table 2,
we categorize popular S- and T-operators into different families based on the
base operators that they extend. Considering that there are only minor
differences between the operators in a family (e.g., applying different
attention mechanisms or convolution kernels), we report on the time and space
complexity of the base operators in Table 2. We refer interested readers to
the supplemental material (lig, 2023) for a detailed analysis of base
operators.
Table 2. Categorization and analysis of ST-block operators.
Type | Family | Literature | Time Complexity | Space Complexity
---|---|---|---|---
T-operator | CNN | (Yu et al., 2018b; Wu et al., 2019; Guo et al., 2019; Wu et al., 2020; Cirstea et al., 2021; Huang et al., 2020; Wu et al., 2021; Chen et al., 2022; Rao et al., 2022) | $\mathcal{O}(\mathtt{D}^{2}\cdot\mathtt{N}\cdot\mathtt{P})$ | $\mathcal{O}(\mathtt{D}^{2})$
T-operator | RNN | (Bai et al., 2020; Chen et al., 2020a; Chang et al., 2018; Li et al., 2018; Cirstea et al., 2021; Shih et al., 2019) | $\mathcal{O}(\mathtt{D}^{2}\cdot\mathtt{N}\cdot\mathtt{P})$ | $\mathcal{O}(\mathtt{D}^{2})$
T-operator | Transformer | (Xu et al., 2020; Park et al., 2020; Zhou et al., 2021; Wu et al., 2021) | $\mathcal{O}(\mathtt{D}\cdot\mathtt{N}\cdot\mathtt{P}\cdot(\mathtt{P}+\mathtt{D}))$ | $\mathcal{O}(\mathtt{D}^{2})$
S-operator | GCN | (Li et al., 2018; Guo et al., 2019; Wu et al., 2019, 2021; Cirstea et al., 2021; Chen et al., 2022; Rao et al., 2022) | $\mathcal{O}(\mathtt{D}\cdot\mathtt{N}\cdot\mathtt{P}\cdot(\mathtt{N}+\mathtt{D}))$ | $\mathcal{O}(\mathtt{D}^{2})$
S-operator | Transformer | (Xu et al., 2020; Park et al., 2020; Grigsby et al., 2021; Wu et al., 2021) | $\mathcal{O}(\mathtt{D}\cdot\mathtt{N}\cdot\mathtt{P}\cdot(\mathtt{N}+\mathtt{D}))$ | $\mathcal{O}(\mathtt{D}^{2})$

Embedding size $\mathtt{D}$; number of time series $\mathtt{N}$; number of historical time steps $\mathtt{P}$.
Three main T-operator families are identified: (1) _CNN-based T-operators_ (Yu
et al., 2018b; Wu et al., 2019, 2020; Guo et al., 2019; Huang et al., 2020;
Chen et al., 2022; Cirstea et al., 2021; Rao et al., 2022), specifically
Temporal Convolutional Networks (TCNs), that apply dilated causal convolutions
to time series data; (2) _RNN-based T-operators_ , such as long short term
memory networks (LSTMs) (Shih et al., 2019) and gated recurrent unit networks
(GRUs) (Bai et al., 2020; Chen et al., 2020a; Chang et al., 2018; Li et al.,
2018; Cirstea et al., 2021), that process time series based on a recursive
mechanism; and (3) _Transformer-based T-operators_ (Xu et al., 2020; Park et
al., 2020; Zhou et al., 2021; Wu et al., 2021) that adopt the attention
mechanism to establish self-interactions of input time steps, enabling
weighted temporal information extraction over long sequences. While all
T-operator families have the same space complexity, the time complexity of the
operators in the Transformer family is larger than those of the operators in
the CNN and RNN families because of their large-size matrix multiplication (Wu
et al., 2021). Further, although the RNN family operators have the same time
complexity regarding FLOPs as the CNN family operators, the former adopt a
sequential computation scheme that significantly lowers their practical
efficiency. Thus, CNN-based T-operators are the most promising for lightweight
CTS models. Recent TCN-based models, including GwNet (Wu et al., 2019), MtGnn (Wu et
al., 2020), and Fogs (Rao et al., 2022), achieve state-of-the-art accuracy.
There are roughly two S-operator families: (1) _GCN-based S-operators_ ,
specifically Chebyshev GCNs (Guo et al., 2019; Cirstea et al., 2021; Chen et
al., 2022; Rao et al., 2022) or Diffusion GCNs (Li et al., 2018; Wu et al.,
2019, 2021), which utilize predefined or learned spatial adjacency matrices to
capture high-order spatial correlations; and (2) _Transformer-based
S-operators_ (Xu et al., 2020; Park et al., 2020; Grigsby et al., 2021; Wu et
al., 2021), which cast attention operations across different time series to obtain
their weighted spatial correlations. Theoretically, GCNs and Transformers
incur the same space and time complexities for S-operators (see Table 2).
Moreover, no existing studies compare their CTS forecasting performance in the
same setting. We thus include experiments that compare these two S-operators
in our LightCTS framework. The results, in Section 5.4 and Table 9, show that
a Transformer-based S-operator achieves the best accuracy in our framework.
#### 3.1.3. FLOP and Parameter Use in CTS Models
To investigate the resource consumption of each component of CTS models, we
analyze FLOPs and parameters via a case study. We select three
representatives, namely Fogs (Rao et al., 2022) as the most accurate model,
MtGnn (Wu et al., 2020) as the most lightweight model, and GwNet (Wu et al.,
2019) as a widely used model. They are also all included for comparison in the
experimental study in Section 5. We select the METR-LA dataset (see Section
5.1.1) and use the architectures reported by the original papers. Table 3
shows the distribution of FLOPs and parameters associated with different
components of CTS models, namely the embedding module, the T-operators and
S-operators in the ST-component, and the aggregation and output module.
Table 3. Distribution of FLOPs and parameters in CTS models.
Model | Metric | Input Embedding | T-operators | S-operators | Aggregation and Output
---|---|---|---|---|---
Fogs | FLOPs | 0.01% | 2.13% | 95.72% | 2.15%
Fogs | Parameters | 0.01% | 3.25% | 77.19% | 19.55%
MtGnn | FLOPs | 0.12% | 23.54% | 70.19% | 6.15%
MtGnn | Parameters | 0.02% | 75.52% | 9.49% | 14.97%
GwNet | FLOPs | 0.03% | 6.70% | 75.23% | 18.04%
GwNet | Parameters | 0.03% | 10.77% | 19.94% | 69.26%
As expected, a significant portion of FLOPs and parameters occurs in the S- and
T-operators that make up the core component of a CTS model. Moreover,
S-operators consume many more FLOPs than T-operators. Surprisingly, the
aggregation and output module is also responsible for many FLOPs and
parameters. This is likely because the aggregation and output module has to
process massive ST-features extracted by all ST-blocks.
### 3.2. Observations and Resulting Directions
We highlight the main observations from the above analyses and identify
promising directions for designing LightCTS.
###### Observation 1.
S- and T-operators, which make up the main component of CTS models (Section
3.1.1), incur significant computational and storage overheads (Table 3). Their
time and space complexities are both proportional to $\mathtt{D}^{2}$ (Table
2).
Observation 1 indicates that it is essential to lighten S- and T-operators. As
presented in Table 2, the time and space complexities of all S/T-operators are
positively correlated with $\mathtt{N}$, $\mathtt{P}$, and $\mathtt{D^{2}}$.
The numbers of time series $\mathtt{N}$ and historical time steps $\mathtt{P}$
are decided by the raw CTS data and should be left unaltered in a CTS model.
Therefore, manipulating the embedding size $\mathtt{D}$ of S/T-operators is a
direction for achieving lightness. Many studies (Wu et al., 2021; Chen et al.,
2020a; Wu et al., 2020), however, have shown that simply reducing $\mathtt{D}$
inevitably degrades forecasting accuracy. Instead, we propose to simplify and
reduce the neural network computations associated with $\mathtt{D}$, to be
detailed in Sections 4.2 and 4.4 for T- and S-operators, respectively.
###### Observation 2.
S-operators consume many more resources, especially FLOPs, than T-operators
(Table 3).
One reason for Observation 2 is that S-operators usually have higher time
complexity than T-operators (see Table 2). The complexity of S-operators can
be reduced by manipulating $\mathtt{D}$ as discussed for Observation 1.
Another major reason is that the input to S-operators must retain the temporal
dimensions even if these contribute little to extracting spatial correlations.
This is a consequence of the alternate stacking pattern of the existing
generic CTS framework (see Section 3.1.1). Specifically, since executions of
S- and T-operators are intermixed, some T-operators occur after S-operators. To
allow such T-operators to properly extract temporal features, the temporal
dimensions must be preserved. Although T-operators also receive redundant
spatial information due to intermixing, S-operators are affected more due to
their higher time complexity. Moreover, the spatial dimension $\mathtt{N}$
cannot be compressed as the aim is to forecast future values for all
$\mathtt{N}$ time series.
This observation suggests a new stacking pattern of S/T-operators that
compresses temporal features before applying S-operators. We thus propose to
design a new stacking pattern that decouples T-operators and S-operators, to
be detailed in Section 4.1. In addition, we devise a temporal feature
compression scheme to condense the input feature maps for S-operators while
preserving key temporal information to be used in the final forecasting, to be
detailed in Section 4.3. As mentioned in Section 3.1.3, the aggregation and
output module takes all features extracted by ST-blocks as input, which incurs
considerable FLOPs and parameters. As a by-product of the temporal feature
compression, the intermediate T- and S-features can be downsized, which leads
to a significant reduction in FLOPs and parameters for the aggregation and
output module.
## 4\. Construction of LightCTS
We start by presenting the new _plain stacking_ pattern in Section 4.1; next,
we detail the light T-operator module in Section 4.2 and its subsequent _last-
shot compression_ in Section 4.3; we then present the light S-operator module
in Section 4.4 and the assembly of LightCTS in Section 4.5.
### 4.1. Architecture with Plain Stacking
As illustrated in Figure 1(a), the conventional CTS architecture relies on ST-
blocks, each of which stacks S- and T-operators alternately. Such an alternate
stacking pattern maintains a feature size of
$\mathtt{N}\times\mathtt{P}\times\mathtt{D}$ throughout feature extraction.
Indeed, the temporal dimension $\mathtt{P}$ is only considered in T-operators
and is disregarded in S-operators. In other words, the output representation
unnecessarily increases the computational and storage overheads of
S-operators.
We deviate from the alternate stacking pattern and instead combine a
T-operator module consisting of $\mathtt{L}_{T}$ T-operators and an S-operator
module consisting of $\mathtt{L}_{S}$ S-operators serially, yielding what we
call the _plain stacking_ scheme. Figure 1(b) presents the novel architecture,
which enjoys several benefits. First, temporal and spatial feature extraction
are decoupled, allowing compression of the temporal dimension before applying
the more complex S-operators (see Observation 2). In particular, we propose a
_last-shot compression_ scheme (to be detailed in Section 4.3) that reduces
the output size of temporal feature extraction from
$\mathtt{N}\times\mathtt{P}\times\mathtt{D}$ to $\mathtt{N}\times\mathtt{D}$.
Moreover, the computational overhead of the aggregation and output module is
also reduced by the feature compression. Indeed, the architecture further
reduces costs by taking as input only the final features of the temporal and
spatial feature extraction phases, instead of the features from all stacked
ST-blocks in the conventional architecture.
The plain stacking architecture does not lower the effectiveness of feature
extraction but reduces computational costs, as will be shown in Section 5.4.
To construct a specific LightCTS model using the new architecture, we design a
light T-operator module L-TCN and a light S-operator module GL-Former,
presented in Sections 4.2 and 4.4.
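The data flow of plain stacking can be sketched as follows; `t_layers`, `s_layers`, `compress`, and `output` are hypothetical callables standing in for the modules introduced in Sections 4.2–4.4, and the function name is ours.

```python
def plain_stacking_forward(h, t_layers, s_layers, compress, output):
    """Shape-level sketch of plain stacking (Figure 1(b)): all T-operators
    run first, the temporal dimension is compressed once, and all
    S-operators then work on the smaller (N, D) feature."""
    t_outs = []
    for t_op in t_layers:        # L_T T-operators; h stays (N, D, P)
        h = t_op(h)
        t_outs.append(h)
    h_t = compress(t_outs)       # last-shot compression -> H^T: (N, D)
    h_s = h_t
    for s_op in s_layers:        # L_S S-operators on (N, D)
        h_s = s_op(h_s)          # -> H^S: (N, D)
    return output(h_t, h_s)      # aggregation and output -> (N, L)
```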
### 4.2. L-TCN
#### 4.2.1. Background of TCN
We propose a Light Temporal Convolutional Network (L-TCN) that is based on
the Gated TCN (Wu et al., 2019), which incorporates the gating mechanism into
standard TCNs to control the temporal information flow. A TCN adopts dilated
causal convolutions (DCC) (Yu and Koltun, 2016; van den Oord et al., 2016) to
capture both long- and short-term temporal patterns in a non-recursive
fashion, thus alleviating the issue of gradient explosion in RNNs.
Figure 2 (left) illustrates the basic structure of a standard TCN with three
DCC layers. In Layer 1, a convolutional filter slides over the input without
skipping any values in the filter (i.e., _dilation rate_ $\delta$ = 1); in
Layer 2, a convolutional filter that skips one value in the middle (i.e.,
$\delta$ = 2) is applied to the output of Layer 1; in Layer 3, convolutional
operations are performed with three skipped values (i.e., $\delta$ = 4) in the
filter. By stacking multiple DCC layers, a.k.a. TCN layers, with gradually
increased dilation rates, a TCN T-operator module with exponentially enlarged
receptive fields is built. As seen in Figure 2 (left), the receptive field of
the last layer’s rightmost node can cover the entire time series length of the
input data, which implies that the plain stacking does not ignore important
spatio-temporal correlations.
With the raw input CTS
${X}\in\mathbb{R}^{\mathtt{N}\times\mathtt{P}\times\mathtt{F}}$ being embedded
into the latent representation
${H}\in\mathbb{R}^{\mathtt{N}\times\mathtt{P}\times\mathtt{D}}$ by the
embedding module, a TCN layer cast on ${H}$ is formalized as follows.
$\displaystyle\operatorname{TCN}(H\mid\delta,\mathtt{K})=H^{\prime},\;\text{where}$
$\displaystyle
H^{\prime}[i;t;d]=\sum\nolimits_{k=0}^{\mathtt{K}-1}\big{(}H[i;{t-\delta\times
k};:]\cdot\text{W}^{d}[k;:]\big{)}$
is the $d$-th ($d\in[0,\mathtt{D})$) output feature map at time step $t$
($t\in[0,\mathtt{P})$) of the $i$-th ($i\in[0,\mathtt{N})$) time series,
$\text{W}^{d}\in\mathbb{R}^{\mathtt{D}\times\mathtt{K}}$ is the $d$-th
convolutional filter, and $\mathtt{K}$ (often as small as 2 or 3) and $\delta$
are the kernel size and dilation rate, respectively. To keep the temporal
dimension of $\mathtt{P}$ constant in the output, zero padding is applied to
the input of each layer (van den Oord et al., 2016).
Figure 2. A TCN with layers of (a) standard convolution (TCN), (b) group
convolution (GTCN), and (c) shuffled group convolution (SGTCN). The TCN
consists of three layers with dilation rates $\delta=\\{1,2,4\\}$ and the
kernel size $\mathtt{K}$ = 2.
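For concreteness, a single DCC layer can be sketched in PyTorch as follows; left-only zero padding is one common way to realize a causal convolution with constant temporal length, and the class name is ours.

```python
import torch
import torch.nn as nn

class DilatedCausalConv(nn.Module):
    """One DCC layer on features of shape (N, D, P). Left-only zero
    padding keeps the temporal length P constant and prevents the
    filter from seeing future time steps."""

    def __init__(self, d_embed, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(d_embed, d_embed, kernel_size,
                              dilation=dilation)

    def forward(self, h):                          # h: (N, D, P)
        h = nn.functional.pad(h, (self.pad, 0))    # pad the past only
        return self.conv(h)                        # (N, D, P)

# Dilation rates 1, 2, 4 enlarge the receptive field exponentially,
# as in Figure 2 (left).
tcn = nn.Sequential(*(DilatedCausalConv(64, 2, d) for d in (1, 2, 4)))
out = tcn(torch.randn(307, 64, 12))                # (307, 64, 12)
```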
#### 4.2.2. Lightening TCN
Previous studies (Wu et al., 2020; Chen et al., 2022) show that directly
reducing the embedding size $\mathtt{D}$ inevitably lowers the representation
capability of the model, resulting in subpar accuracy. Therefore, we propose
instead to lighten the standard TCN using a grouping strategy. This is
motivated by observations of previous studies (Xie et al., 2017; Zhang et al.,
2018) that a TCN has redundant connections between adjacent TCN layers and
thus can be optimized. Specifically, the standard TCN layer in Figure 2(a) has
full connections between input and output feature maps, while the grouping
strategy in Figure 2(b) first partitions the input TCN feature maps into
$\mathtt{G}^{T}$ equal-sized, consecutive, and non-overlapping groups, and
then performs convolutions within the groups. A group TCN layer, i.e., a TCN
layer with the grouping strategy, is represented as follows.
$\operatorname{GTCN}(H\mid\mathtt{G}^{T})=\operatorname{concat}(\\{\operatorname{TCN}(H_{j}\mid\delta,\mathtt{K})\\}_{j=1}^{\mathtt{G}^{T}}),$
where
$H_{j}=H[:,:,\frac{\mathtt{D}\times(j-1)}{\mathtt{G}^{T}}:\frac{\mathtt{D}\times
j}{\mathtt{G}^{T}}]\in\mathbb{R}^{\mathtt{N}\times\mathtt{P}\times\frac{\mathtt{D}}{\mathtt{G}^{T}}}$
is the $j$-th group of input feature maps and $\operatorname{concat}(\cdot)$
denotes the concatenation operation. The time and space complexity of each
group are
$\mathcal{O}\big{(}(\frac{\mathtt{D}}{\mathtt{G}^{T}})^{2}\cdot\mathtt{N}\cdot\mathtt{P}\big{)}$
and $\mathcal{O}\big{(}(\frac{\mathtt{D}}{\mathtt{G}^{T}})^{2}\big{)}$,
respectively. Therefore, the time and space complexity of a group TCN layer
with $\mathtt{G}^{T}$ groups is
$\mathcal{O}(\frac{\mathtt{D}^{2}}{\mathtt{G}^{T}}\cdot\mathtt{N}\cdot\mathtt{P})$
and $\mathcal{O}(\frac{\mathtt{D}^{2}}{\mathtt{G}^{T}})$, respectively, which
is $\frac{1}{\mathtt{G}^{T}}$ of the corresponding standard TCN. For example,
in Figure 2(b), $\mathtt{D}$ = 4, input and output feature maps are split into
$\mathtt{G}^{T}$ = 2 groups, and each group consists of
$\mathtt{D}/\mathtt{G}^{T}$ = 2 feature maps. The number of convolution
filters is consequently reduced from $\mathtt{D}^{2}$ = 16 to
$\mathtt{D}^{2}/\mathtt{G}^{T}$ = 8.
One drawback of the naive grouping strategy is the lack of information
exchange among groups. Thus, we propose to use a shuffled grouping strategy to
support communications among feature map groups. As depicted in Figure 2(c),
shuffling allows group convolutions to obtain input from different groups,
with one input feature map contributing to all groups. In the implementation,
we stipulate that the number of feature maps in each input group is divisible
by the number of output groups. The shuffling enhances the naive grouping
strategy in a simple but effective way to enable inter-group communication
without increasing model complexity.
The group number $\mathtt{G}^{T}$ is a model structure hyperparameter that
controls the balance between the lightness of an L-TCN and its capacity to
extract temporal information. Intuitively, a larger $\mathtt{G}^{T}$ improves
L-TCN’s lightness but reduces its capacity. Therefore, it is important to tune
$\mathtt{G}^{T}$. First, $\mathtt{G}^{T}$ belongs to a small set of candidate
values because it can only be a factor of the embedding size (e.g., {2, 4, 8,
16, 32} for $\mathtt{D}$ = 64). Therefore, we employ grid search on the small
number of candidates to maximize $\mathtt{G}^{T}$ while maintaining nearly
state-of-the-art accuracy. Alternatively, it is possible to search for an
optimal $\mathtt{G}^{T}$ more efficiently by applying advanced multi-objective
hyperparameter optimization approaches (Morales-Hernández et al., 2022), such
as multi-objective Bayesian optimization. The group number tuning discussed
here also applies to the grouping techniques we use in other modules. The
effect of varying $\mathtt{G}^{T}$ is studied empirically in Section 5.3.2.
Following previous studies (Wu et al., 2019, 2021), we adopt the gating
mechanism to decide the ratio of temporal information extracted by a shuffled
group TCN layer to flow through the model. Thus, an L-TCN layer is given as
follows.
$\operatorname{L-TCN}(H)=\operatorname{tanh}(\operatorname{SGTCN}_{{o}}(H\mid\mathtt{G}^{T}))\odot\sigma(\operatorname{SGTCN}_{{g}}(H\mid\mathtt{G}^{T})),$
where $\operatorname{SGTCN}_{{o}}$ and $\operatorname{SGTCN}_{{g}}$ are two
parallel _shuffled group_ TCN branches: the former extracts temporal features
and the latter controls the ratio at which the features are passed along. The
gating ratio is achieved by the sigmoid function $\sigma(\cdot)$ and is
applied to every temporal feature element by the element-wise product $\odot$.
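A minimal sketch of a shuffled group TCN layer and the gated L-TCN layer built from it is given below. Placing the channel shuffle at the input of each grouped convolution is one plausible realization of Figure 2(c); all class and function names are ours.

```python
import torch
import torch.nn as nn

def channel_shuffle(h, groups):
    """Interleave feature maps so each group receives input from every
    other group (Figure 2(c)); free of parameters and FLOPs."""
    n, d, p = h.shape
    return h.view(n, groups, d // groups, p).transpose(1, 2).reshape(n, d, p)

class SGTCN(nn.Module):
    """Shuffled group TCN layer: a grouped dilated causal convolution
    preceded by a channel shuffle. groups=G^T cuts the layer's
    parameters and FLOPs to 1/G^T of a full convolution."""

    def __init__(self, d_embed, groups, kernel_size=2, dilation=1):
        super().__init__()
        self.groups = groups
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(d_embed, d_embed, kernel_size,
                              dilation=dilation, groups=groups)

    def forward(self, h):                          # h: (N, D, P)
        h = channel_shuffle(h, self.groups)
        return self.conv(nn.functional.pad(h, (self.pad, 0)))

class LTCNLayer(nn.Module):
    """Gated L-TCN layer: the tanh branch extracts temporal features,
    the sigmoid branch gates how much of them flows onward."""

    def __init__(self, d_embed, groups, dilation=1):
        super().__init__()
        self.sgtcn_o = SGTCN(d_embed, groups, dilation=dilation)
        self.sgtcn_g = SGTCN(d_embed, groups, dilation=dilation)

    def forward(self, h):
        return torch.tanh(self.sgtcn_o(h)) * torch.sigmoid(self.sgtcn_g(h))

layer = LTCNLayer(d_embed=64, groups=4, dilation=2)
out = layer(torch.randn(307, 64, 12))              # (307, 64, 12)
```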
To sum up, an L-TCN layer reduces the time and space complexities to
$\frac{1}{\mathtt{G}^{T}}$ of the standard TCN’s counterpart, i.e., to
$\mathcal{O}(\frac{\mathtt{D}^{2}}{\mathtt{G}^{T}}\cdot\mathtt{N}\cdot\mathtt{P})$
and $\mathcal{O}(\frac{\mathtt{D}^{2}}{\mathtt{G}^{T}})$, respectively. So
far, the output feature map is of size
$\mathtt{N}\times\mathtt{P}\times\mathtt{D}$. We proceed to present a last-
shot compression scheme to reduce the input size for subsequent S-operators.
### 4.3. Last-shot Compression
Inspired by the success of residual connections (He et al., 2016), popular CTS
models aggregate the features from every ST-block to get the output. The
bottom left corner of Figure 3 illustrates how such a classical aggregation
scheme is applied to the L-TCN. In particular, the features extracted by each
L-TCN layer are summed to obtain an aggregated feature tensor of shape
$\mathtt{N}\times\mathtt{P}\times\mathtt{D}$, with the temporal dimension
$\mathtt{P}$ preserved for subsequent spatial feature extraction. If we are
able to compress the temporal features extracted by each L-TCN layer, the
computational overheads of the following S-operators and the aggregation and
output module will be reduced. Note that these two components are both costly
(see Table 3).
To achieve this, we propose a simple but effective mechanism, named _last-shot
compression_. The idea comes from the intuition that more distant temporal
features are less important for the forecast at the current moment (Cao and
Tay, 2003; Tay and Cao, 2002; de Freitas et al., 1999). To put it simply,
last-shot compression retains only the features at the most recent time step
as the output of each L-TCN layer. An illustration is given in Figure 2
(left), where the rightmost snippets of each layer’s output (corresponding to
the last time step) are extracted and summed as the input to the subsequent
components.
One concern may be that such aggressive compression will completely lose the
information from the previous ($\mathtt{P}-1$) time steps of the input feature
map. This is not so because the stacking of dilated convolutions by L-TCNs
ensures that the last time step feature of each layer preserves information
from several most recent input time steps at different ranges. For example, in
Figure 2 (left), the last time step feature from Layer 1 captures the 7th and
8th input time steps, while that from Layer 2 captures the 5th to the 8th
input steps, and that from Layer 3 captures all eight input time steps. Hence,
if we aggregate only the last time step feature of each L-TCN layer, we still
perceive the full input, but focus more on recent input time steps while
performing higher compression on more distant input time steps. The ablation
study in Section 5.4 shows that this last-shot compression achieves impressive
cost reductions while maintaining model accuracy.
Figure 3. Feature aggregation after last-shot compression vs the classical
feature aggregation (Wu et al., 2021) (bottom left).
The last-shot compression and the following feature aggregation are
illustrated in Figure 3. Let
$H^{b}\in\mathbb{R}^{\mathtt{N}\times\mathtt{P}\times\mathtt{D}}$ be the
output feature of the $b$-th ($1\leq b\leq\mathtt{L}_{T}$) L-TCN layer, and
let $O^{b}=H^{b}[:,\mathtt{P}-1,:]$ $\in\mathbb{R}^{\mathtt{N}\times
1\times\mathtt{D}}$ be the last-step feature of $H^{b}$. The aggregation sums
the last-step features of all layers, i.e.,
$H=\sum\nolimits_{b=1}^{\mathtt{L}_{T}}O^{b}$. The aggregated feature $H$ is
then sent to a squeeze and excitation (SE) module (Hu et al., 2018) for
attentive feature representation:
(3)
$H^{T}=H\cdot\sigma(\text{W}_{s2}\cdot\operatorname{ReLU}(\text{W}_{s1}\cdot
H^{\circ})),$
where $H^{\circ}=\operatorname{GlobalAvgPool}(H)\in\mathbb{R}^{\mathtt{D}}$ is
achieved through the global average pooling (Hu et al., 2018), and
$H^{T}\in\mathbb{R}^{\mathtt{N}\times\mathtt{D}}$ is the final temporal
feature. Given the reduction ratio $r$ in the SE module,
$\text{W}_{s1}\in\mathbb{R}^{\frac{\mathtt{D}}{r}\times\mathtt{D}}$ and
$\text{W}_{s2}\in\mathbb{R}^{\mathtt{D}\times\frac{\mathtt{D}}{r}}$ are weight
matrices to squeeze the representation that in turn is rescaled back to
produce attentions with a sigmoid function $\sigma(\cdot)$ over the original,
aggregated feature $H$.
Compared to the classical aggregation scheme that leads to the feature size of
$\mathtt{N}\times\mathtt{P}\times\mathtt{D}$, the last-shot compression
achieves the feature size of $\mathtt{N}\times\mathtt{D}$ with the temporal
dimension having been flattened from $\mathtt{P}$ to 1. Despite this
reduction, the gradually enlarged receptive field of L-TCN (see Section 4.2.1)
ensures that the last-shot compression upon L-TCN can obtain the temporal
features across all time steps. This property enables further extraction of
spatial correlations over time. The space and time complexities of the
subsequent spatial feature extraction are thus reduced by a factor of
$1/\mathtt{P}$.
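Last-shot compression and the SE-based aggregation of Equation 3 can be sketched as follows; global average pooling is taken over the series dimension $\mathtt{N}$, per the definition of $H^{\circ}$, and the module names are ours.

```python
import torch
import torch.nn as nn

def last_shot_aggregate(layer_outputs):
    """Keep only the last time step of each L-TCN layer's output
    (each of shape (N, D, P)) and sum them, yielding H of shape (N, D)."""
    return sum(h[:, :, -1] for h in layer_outputs)

class SEModule(nn.Module):
    """Squeeze-and-excitation of Equation 3: pool over the series
    dimension to get H°, squeeze D -> D/r, excite back to D, and
    rescale the aggregated feature."""

    def __init__(self, d_embed, reduction=8):
        super().__init__()
        self.w_s1 = nn.Linear(d_embed, d_embed // reduction)
        self.w_s2 = nn.Linear(d_embed // reduction, d_embed)

    def forward(self, h):                           # h: (N, D)
        pooled = h.mean(dim=0)                      # H°: (D,)
        attn = torch.sigmoid(self.w_s2(torch.relu(self.w_s1(pooled))))
        return h * attn                             # H^T: (N, D)

outs = [torch.randn(307, 64, 12) for _ in range(4)]         # 4 L-TCN layers
h_t = SEModule(64, reduction=8)(last_shot_aggregate(outs))  # (307, 64)
```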
### 4.4. GL-Former
#### 4.4.1. Background and Overall Design
Transformers occur in many state-of-the-art CTS forecasting models (Xu et al.,
2020; Park et al., 2020; Zhou et al., 2021; Wu et al., 2021). Aiming for light
yet effective S-operators, we propose a GlobalLocal TransFormer (GL-Former)
module that aims to extract both global-scale and local-scale spatial
correlations among different time series. This way, GL-Former avoids performing
the exhaustive global-scale spatial correlation extraction of the standard
Transformer (Vaswani et al., 2017) at every block. We proceed to give preliminaries of the
standard Transformer and then detail the GL-Former design that targets
accuracy and lightness.
A standard Transformer consists of an _encoding layer_ and $\mathtt{L}_{S}$
_attention blocks_. Given the input
$H^{T}\in\mathbb{R}^{\mathtt{N}\times\mathtt{D}}$ generated by the last-shot
compression scheme (see Equation 3), a _positional encoding mechanism_ (PE)
(Vaswani et al., 2017) is introduced to the encoding layer. The reason is that,
by default, Transformers’ self-attention operations are permutation-invariant
and thus unable to distinguish the identities of the input nodes. Specifically, the encoding layer converts
$H^{T}$ to a learned positional encoding $H^{\text{PE}}$ by incorporating the
identity information of CTS nodes:
$H^{\text{PE}}=H^{T}+\text{W}^{\text{PE}},$
where $\text{W}^{\text{PE}}\in\mathbb{R}^{\mathtt{N}\times\mathtt{D}}$ is a
learnable matrix capturing the identity information. The resulting encoding
$H^{\text{PE}}$ is fed to sequential attention blocks, each of which consists
of a _multi-head attention_ (MHA) layer and a _feed-forward network_ (FFN)
layer.
An MHA layer is a concatenation of $\mathtt{h}$ repeated self-attention
modules (heads) in parallel:
(4)
$\operatorname{MHA}(H^{\text{PE}})=\operatorname{concat}(\\{\operatorname{head}_{i}(H^{\text{PE}})\\}_{i=1}^{\mathtt{h}})$
(5)
$\operatorname{head}_{i}(H^{\text{PE}})=\operatorname{softmax}\big{(}H^{\text{I}}\big{)}\cdot{V}_{i}$
(6)
$H^{\text{I}}=({{Q}_{i}\cdot{K}_{i}^{\mathsf{T}}})/{\sqrt{\mathtt{D}/\mathtt{h}}}$
(7)
$Q_{i}=H^{\text{PE}}\cdot\text{W}^{Q}_{i},K_{i}=H^{\text{PE}}\cdot\text{W}^{K}_{i},V_{i}=H^{\text{PE}}\cdot\text{W}^{V}_{i},$
where all learnable matrices
$\text{W}^{*}_{i}\in\mathbb{R}^{\mathtt{D}\times\frac{\mathtt{D}}{\mathtt{h}}}$
are used to convert the embedding size from $\mathtt{D}$ to
$\mathtt{D}/\mathtt{h}$. In Equation 5, each head
$\operatorname{head}_{i}(\cdot)$ is a self-attention module that produces
attention scores for its input
$H^{\text{PE}}\in\mathbb{R}^{\mathtt{N}\times\mathtt{D}}$ by making the input
interact with itself. MHA provides impressive power to encode multiple
relationships.
An FFN layer merges the output
$H^{\text{MHA}}\in\mathbb{R}^{\mathtt{N}\times\mathtt{D}}$ of Equation 4 and
provides non-linear activations via two fully-connected layers:
(8)
$\operatorname{FFN}(H^{\text{MHA}})=\operatorname{FFN}^{(1)}(H^{\text{MHA}})\cdot\text{W}_{2}+\text{b}_{2}$
(9)
$\operatorname{FFN}^{(1)}(H^{\text{MHA}})=\operatorname{ReLU}(H^{\text{MHA}}\cdot\text{W}_{1}+\text{b}_{1}),$
where $\text{W}_{1}\in\mathbb{R}^{\mathtt{D}\times\mathtt{D}^{\prime}}$,
$\text{W}_{2}\in\mathbb{R}^{\mathtt{D}^{\prime}\times\mathtt{D}}$,
$\text{b}_{1}\in\mathbb{R}^{\mathtt{D}^{\prime}}$, and
$\text{b}_{2}\in\mathbb{R}^{\mathtt{D}}$ are learnable matrices and biases.
The two layers first enlarge the embedding size from $\mathtt{D}$ to
$\mathtt{D}^{\prime}$, which is typically a quadruple of $\mathtt{D}$ (Vaswani
et al., 2017), and then scale it back to $\mathtt{D}$. The feature generated
by the ($\mathtt{L}_{S}$)-th attention block is the final output of the
Transformer, i.e., $H^{S}$.
A standard attention block introduced above consists of an MHA and an FFN
layer and captures global-scale spatial correlations among all CTS nodes.
Although using _global attention blocks_ yields powerful feature extraction
capabilities, a model that learns such complicated information can be quite
hard to train. Injecting prior knowledge into the model is a sensible way to
increase model training efficiency, as the model does not need to extrapolate
the knowledge from the data itself. Further, the prior knowledge may offer
more information beyond the training data, thus helping to regularize the
model and prevent overfitting (Vladimirova et al., 2019). Specifically, in our
problem setting, using prior knowledge in the data, such as the explicit
spatial proximity information of CTS nodes (modeled as an
$\mathtt{N}$-by-$\mathtt{N}$ adjacency matrix), we can focus on extracting
correlations for only those pairs of nodes that are potentially relevant and
can omit computations for other pairs. This kind of attention block, which we
call a _local attention block_ , is detailed in Section 4.4.2. As shown in
Figure 4, an example GL-Former is a sequence of alternating global and local
attention blocks. The alternation combats the information loss incurred by local
attention. Note that the numbers of global and local attention blocks are not
necessarily the same. For example, one global attention could be followed by
two local attentions. In addition, we adopt the grouping strategy to ease the
computations of the MHA and FFN layers in attention blocks, to be detailed in
Section 4.4.3.
Figure 4. An example of GL-Former consisting of $\mathtt{L}_{S}$ alternating
global attention blocks and local attention blocks.
#### 4.4.2. Local Attention Block
The computation of a local attention follows that of a global attention
(Equations 4 to 7), except that a mask function is applied in Equation 6 to
retain pairs of relevant nodes. Specifically, the mask function
$\operatorname{mask}(Z,M)$ hides an element $Z[i,j]$ in the feature matrix $Z$
if element $M[i,j]$ of the mask matrix is false; formally,
$\operatorname{mask}(Z,M)[i,j]=\begin{cases}Z[i,j],&\text{if~{}}M[i,j]\text{~{}is~{}}\texttt{true}\\ -\infty,&\text{otherwise}\end{cases}$
A hidden value is set to $-\infty$ because the masked feature matrix is sent
to a $\operatorname{softmax}(\cdot)$ function (see Equation 5). Besides, a
domain may be associated with several adjacency matrices. For example, one
adjacency matrix may capture the correlations in terms of the distance between
a pair of nodes, while another may capture the correlations in terms of the
data dependency between nodes. In this setting, we obtain mask matrix $M$ by
aggregating all relevant adjacency matrices of the domain, i.e.,
$M=\sum_{i}A_{i}$, where $A_{i}$ is one of the adjacency matrices. Notably,
$A_{i}$ is a sparse adjacency matrix thresholded by a filtering function (Wu
et al., 2019), which reduces the impact of noise and makes the model more
robust (Chen et al., 2022).
With the mask function, the variable $H^{\text{I}}$ in Equation 6 is
calculated as follows for a local attention block.
(10)
$H^{\text{I}}=\operatorname{mask}\big{(}({{Q}_{i}\cdot{K}_{i}^{\mathsf{T}}})/{\sqrt{\mathtt{D}/\mathtt{h}}},M\big{)}$
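The following sketch implements the attention of Equations 4–7; passing a boolean $\mathtt{N}\times\mathtt{N}$ mask reproduces the local attention of Equation 10, while `mask=None` yields a global attention block. All names are ours, and we assume every node attends to at least itself (e.g., self-loops in $M$), so no softmax row is fully masked.

```python
import math
import torch
import torch.nn as nn

class MaskedMHA(nn.Module):
    """Multi-head self-attention of Equations 4-7 over (N, D) features.
    A boolean N-by-N mask turns it into the local attention of
    Equation 10; mask=None gives a global attention block."""

    def __init__(self, d_embed, heads):
        super().__init__()
        self.h, self.dk = heads, d_embed // heads
        self.wq = nn.Linear(d_embed, d_embed, bias=False)
        self.wk = nn.Linear(d_embed, d_embed, bias=False)
        self.wv = nn.Linear(d_embed, d_embed, bias=False)

    def forward(self, h_pe, mask=None):             # h_pe: (N, D)
        n = h_pe.size(0)
        # Project and split into heads: (heads, N, D/heads).
        q = self.wq(h_pe).view(n, self.h, self.dk).transpose(0, 1)
        k = self.wk(h_pe).view(n, self.h, self.dk).transpose(0, 1)
        v = self.wv(h_pe).view(n, self.h, self.dk).transpose(0, 1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.dk)  # (h, N, N)
        if mask is not None:
            # Hidden pairs get -inf so softmax assigns them zero weight.
            scores = scores.masked_fill(~mask, float("-inf"))
        out = torch.softmax(scores, dim=-1) @ v                # (h, N, dk)
        return out.transpose(0, 1).reshape(n, -1)              # (N, D)

# The mask can be built from the aggregated adjacency matrices,
# e.g., M = (A_1 + A_2) > 0 for a domain with two adjacency matrices.
```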
#### 4.4.3. L-MHA and L-FFN
We propose the light MHA (L-MHA) that adopts a grouping strategy similar to
the one proposed for L-TCN in Section 4.2. The input encoding $H^{\text{PE}}$
of L-MHA is first partitioned into $\mathtt{G}^{M}$ groups, and then the
original MHA (see Equation 4) is applied to each group $H^{\text{PE}}_{j}$
($1\leq j\leq\mathtt{G}^{M}$). The L-MHA is given as follows.
$\displaystyle\operatorname{L-MHA}(H^{\text{PE}})$
$\displaystyle=\operatorname{concat}(\\{\operatorname{MHA}(H^{\text{PE}}_{j})\\}_{j=1}^{\mathtt{G}^{M}}),\;\text{where}$
$\displaystyle H^{\text{PE}}_{j}$
$\displaystyle=H^{\text{PE}}[:,\frac{\mathtt{D}\times(j-1)}{\mathtt{G}^{M}}:\frac{\mathtt{D}\times
j}{\mathtt{G}^{M}}]$
We apply multi-head attention with the same number $\mathtt{h}$ of heads for
each partitioned group. The output of $\operatorname{MHA}(H^{\text{PE}}_{j})$
is of size $\mathtt{N}\times\frac{\mathtt{D}}{\mathtt{G}^{M}}$, and the final
output of L-MHA is of size $\mathtt{N}\times\mathtt{D}$, the same as the
standard MHA in Equation 4. Still, the time and space complexities of L-MHA
are a fraction ${1}/{\mathtt{G}^{M}}$ of those of the standard MHA.
Likewise, we implement a light FFN (L-FFN) that partitions the input features
into $\mathtt{G}^{F}$ groups. We only apply the grouping strategy to the
second layer of an original FFN (see Equation 8) and the computation in the
first layer remains as shown in Equation 9. The first layer
$\operatorname{FFN}^{(1)}$ encapsulates the only non-linear activation in an
attention block, and lightening it will reduce accuracy considerably. As a
result, L-FFN is processed as follows.
$\displaystyle\operatorname{L-FFN}(H^{\text{MHA}})$
$\displaystyle=\operatorname{concat}(\\{\operatorname{FFN}(H^{\text{MHA}}_{j})\\}_{j=1}^{\mathtt{G}^{F}}),\;\text{where}$
$\displaystyle H^{\text{MHA}}_{j}$
$\displaystyle=H^{\text{MHA}}[:,\frac{\mathtt{D}\times(j-1)}{\mathtt{G}^{F}}:\frac{\mathtt{D}\times
j}{\mathtt{G}^{F}}]$
As only the second FFN layer is lightened, the complexity of L-FFN is a
fraction ${(1+1/\mathtt{G}^{F})}/{2}$ of that of its standard FFN counterpart.
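A minimal sketch of the two grouped modules is given below. For L-FFN we group only the second layer, which is consistent with the stated complexity ratio of $(1+1/\mathtt{G}^{F})/2$; the exact partitioning is our reading of the text, and all names are ours.

```python
import torch
import torch.nn as nn

def grouped_apply(h, groups, fn):
    """Split (N, D) features into G consecutive groups along D, apply
    fn to each group independently, and concatenate the results.
    Used for L-MHA: cost drops to 1/G^M of the standard MHA."""
    return torch.cat([fn(part) for part in h.chunk(groups, dim=-1)], dim=-1)

class LFFN(nn.Module):
    """L-FFN sketch: the first (non-linear) layer stays ungrouped
    (Equation 9); only the second layer is grouped, so the total cost
    is (1 + 1/G^F)/2 of the standard FFN."""

    def __init__(self, d_embed, d_hidden, groups):
        super().__init__()
        self.groups = groups
        self.fc1 = nn.Linear(d_embed, d_hidden)        # ungrouped first layer
        self.fc2 = nn.ModuleList(                      # grouped second layer
            [nn.Linear(d_hidden // groups, d_embed // groups)
             for _ in range(groups)]
        )

    def forward(self, h):                              # h: (N, D)
        mid = torch.relu(self.fc1(h))                  # (N, D')
        parts = mid.chunk(self.groups, dim=-1)
        return torch.cat([fc(p) for fc, p in zip(self.fc2, parts)], dim=-1)

out = LFFN(d_embed=64, d_hidden=256, groups=2)(torch.randn(307, 64))
```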
### 4.5. Assembling LightCTS
We compose LightCTS using the plain stacking architecture from Figure 1(b) and
the proposed light T- and S-operator modules (i.e., L-TCN and GL-Former).
In particular, we use one CNN layer to implement the embedding module
and configure the aggregation and output module with two fully-connected
layers:
$\hat{Y}=\operatorname{ReLU}((H^{S}+H^{T})\cdot\text{W}^{o}_{1}+\text{b}^{o}_{1})\cdot\text{W}^{o}_{2}+\text{b}^{o}_{2},$
where $\hat{Y}\in\mathbb{R}^{\mathtt{N}\times\mathtt{L}}$ is the forecast
result, $\mathtt{L}=1$ or $\mathtt{Q}$ denotes the forecast dimension
depending on whether single-step or multi-step forecasting is performed;
$H^{T}$ and $H^{S}$ are the output features of the last-shot compression and
GL-Former, respectively; and $\text{W}^{o}_{1}$, $\text{W}^{o}_{2}$,
$\text{b}^{o}_{1}$, and $\text{b}^{o}_{2}$ are learnable matrices and biases.
Finally, we follow previous work (Rao et al., 2022; Wu et al., 2021, 2019) and
employ mean absolute error (MAE) as the loss function.
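The aggregation and output module and the MAE loss can be sketched as follows; the names are ours.

```python
import torch
import torch.nn as nn

class OutputModule(nn.Module):
    """Aggregation and output module: two fully-connected layers over
    the sum of the temporal and spatial features, per the equation above."""

    def __init__(self, d_embed, d_hidden, horizon):
        super().__init__()
        self.fc1 = nn.Linear(d_embed, d_hidden)
        self.fc2 = nn.Linear(d_hidden, horizon)   # horizon L = 1 or Q

    def forward(self, h_t, h_s):                  # H^T, H^S: (N, D)
        return self.fc2(torch.relu(self.fc1(h_s + h_t)))  # (N, L)

# Training minimizes MAE, e.g.:
# loss = torch.nn.functional.l1_loss(y_hat, y_true)
```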
It should be noted that this simple plain stacking architecture does not lower
the effectiveness of feature extraction. The key reason is twofold: 1) the
last-shot compression (detailed in Section 4.3), combined with the proposed
T-operator module L-TCN (detailed in Section 4.2), emphasizes the most recent
time step’s features and reduces the noise in feature maps, thus improving
the subsequent spatial feature extraction; and 2) the proposed S-operator
module GL-Former (detailed in Section 4.4) addresses both local and global-
scale spatial correlations. By considering adjacency matrices in the attention
block through a mask, GL-Former can capture the prior knowledge of spatial
correlations, which the standard Transformer is unable to do, thus enhancing
the capability of spatial feature extraction. We report on empirical studies
in Section 5.4 that offer evidence of the effectiveness of the proposed last-
shot compression, GL-Former, and plain stacking.
## 5\. Experiments
We evaluate LightCTS on both multi-step and single-step CTS forecasting tasks.
We include many commonly-used benchmark datasets, four for multi-step
forecasting (two on traffic flow forecasting and two on traffic speed
forecasting) and two for single-step forecasting. These benchmarks are
associated with different accuracy metrics. To enable direct and fair
comparisons, we use the metrics employed in the original papers. The code and
datasets are made available (lig, 2023).
### 5.1. Multi-Step Forecasting
#### 5.1.1. Datasets
* •
PEMS04 (Song et al., 2020) is a traffic flow dataset collected from 307
sensors on 29 roads in the San Francisco Bay Area during January – March 2018. The
traffic flow ranges from 0 to 919 with a mean of 91.74.
* •
PEMS08 (Song et al., 2020) is a traffic flow dataset collected from 170
sensors on 8 roads in San Bernardino during July – September 2016. The traffic
flow ranges from 0 to 1,147 with a mean of 98.17.
* •
METR-LA (Li et al., 2018) contains traffic speed data in mph gathered during
March – June 2012 from 207 loop detectors, from the road network of Los
Angeles County. The speed ranges from 0 to 70 mph with a mean of 53.98 mph.
* •
PEMS-BAY (Li et al., 2018) is a traffic speed dataset gathered from 325 loop
detectors in the road network in the San Francisco Bay Area. The speed ranges
from 0 to 85.1 mph with a mean of 62.62 mph.
The sampling interval of each dataset is 5 minutes. The datasets are organized
and split (i.e., in train:validation:test) as in previous studies (Wu et al.,
2019, 2021). The statistics are summarized in Table 4.
Table 4. Dataset statistics for multi-step forecasting.
Dataset | Data type | $\mathtt{N}$ | $\mathtt{T}$ | $\mathtt{P}$ | $\mathtt{Q}$ | Split Ratio
---|---|---|---|---|---|---
PEMS04 | Traffic flow | 307 | 16,992 | 12 | 12 | 6:2:2
PEMS08 | Traffic flow | 170 | 17,856 | 12 | 12 | 6:2:2
METR-LA | Traffic speed | 207 | 34,272 | 12 | 12 | 7:1:2
PEMS-BAY | Traffic speed | 325 | 52,116 | 12 | 12 | 7:1:2
#### 5.1.2. Metrics
We consider accuracy and lightness as follows.
* •
Accuracy Metrics. Following existing multi-step forecasting studies (Li et
al., 2018; Yu et al., 2018b; Wu et al., 2019, 2021), we use mean absolute
error (MAE), root mean squared error (RMSE), and mean absolute percentage
error (MAPE) to measure accuracy comprehensively. The three metrics capture
the forecasting accuracy from different perspectives: MAE gives equal weights
to all errors, RMSE focuses on the most severe errors, and MAPE highlights the
errors when ground truth values are small. Lower MAE, RMSE, and MAPE indicate
higher forecasting accuracy; a computational sketch follows this list.
* •
Lightness Metrics. Consistent with the existing conventions (Howard et al.,
2017; Zhang et al., 2018) and to eliminate the influence of different DL
platforms and operating system conditions (e.g., multiple concurrent running
programs), we evaluate the lightness of CTS forecasting models using FLOPs and
the number of parameters considered during _inferencing_ (FLOPs and parameter
counts are captured by _Fvcore_ (fvc, 2023) from Facebook Research). This
accords with existing studies (Zhang et al., 2018; Li et al., 2021; Tan and
Le, 2019). In addition, we report the latency and peak memory use (abbreviated
as Peak Mem) of models during inferencing on a low-computational-resource
device (see Section 5.1.4). The results are for practical reference and will
vary depending on hardware, software, implementation, and other factors.
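For reference, the three accuracy metrics can be computed as follows; this is a sketch using the standard definitions, with zero-valued ground truths masked out of MAPE as is common for traffic data.

```python
import numpy as np

def mae(y_hat, y):
    return np.mean(np.abs(y_hat - y))

def rmse(y_hat, y):
    return np.sqrt(np.mean((y_hat - y) ** 2))

def mape(y_hat, y):
    # Relative error, dominated by points with small ground truths.
    valid = y != 0
    return np.mean(np.abs((y_hat[valid] - y[valid]) / y[valid])) * 100
```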
#### 5.1.3. CTS Forecasting Models for Comparisons
All models are implemented using their original code; when a model was
evaluated on the same dataset in its original paper, we report the original results.
* •
DcRnn (Li et al., 2018). A relatively early DL-based model that adopts
diffusion GCNs and GRUs for S- and T-operators, respectively.
* •
GwNet (Wu et al., 2019). A widely used benchmark model that integrates
adaptive GCNs and TCNs for S- and T-operators, respectively.
* •
AgCrn (Bai et al., 2020). A comprehensive but costly model that considers
dynamic spatial correlations through different time steps.
* •
MtGnn (Wu et al., 2020). A successor of GwNet with new graph learning layers
and an optimized overall structure.
* •
AutoCts (Wu et al., 2021). An automated framework that allows heterogeneous
ST-blocks with different S/T-operators and their connections through automatic
search.
* •
EnhanceNet (Cirstea et al., 2021). A framework that uses distinct filters for
each time series and dynamic adjacency matrices to capture spatial
correlations over time. TCNs are used to implement T-operators.
* •
Fogs (Rao et al., 2022). A recent model that uses first-order gradients to
avoid fitting irregularly-shaped distributions.
* •
AutoCts-KDf/AutoCts-KDp. We construct two compressed variants of
AutoCts using knowledge distillation (KD) for regression tasks (Takamoto et
al., 2020); AutoCts is selected as the teacher model for compression as it
generally achieves the highest effectiveness among all competitor methods in
our study. Specifically, AutoCts-KDf and AutoCts-KDp are compressed to have
nearly the same numbers of FLOPs and parameters as LightCTS.
#### 5.1.4. Implementation Details
All models are trained on a server with an NVIDIA Tesla P100 GPU. To
investigate models’ inferencing performance in constrained computing
environments, we employ an X86 device with a 380 MHz CPU.
The L-TCN has ($\mathtt{L}_{T}$ = 4) layers with the dilation rate $\delta$ of
each layer set to [1, 2, 4, 8] to ensure that the receptive field of the last
layer can cover the entire time series length of the input CTS. The GL-Former
has ($\mathtt{L}_{S}$ = 6) attention blocks for METR-LA and PEMS-BAY, and
$\mathtt{L}_{S}$ = 4 for PEMS04 and PEMS08. The stacking pattern is one global
block followed by one local block, as shown in Figure 4. Following parameter
tuning, we set the embedding size $\mathtt{D}$ = 48 for METR-LA and $\mathtt{D}$
= 64 for the other three datasets. For all datasets, the group numbers
$\mathtt{G}^{T}$ = 4, $\mathtt{G}^{M}$ = $\mathtt{G}^{F}$ = 2, and the
reduction ratio $r$ of the SE module is set to 8. We adopt the Adam optimizer
with a learning rate of 0.002 to train models for 250 epochs.
#### 5.1.5. Overall Comparisons
Following existing conventions for direct and fair comparison (Wu et al.,
2021; Rao et al., 2022; Wu et al., 2020), we report the average accuracy over
all 12 future time steps for the PEMS04 and PEMS08 datasets and report the
accuracy at the 3rd, 6th, and 12th time steps for the METR-LA and PEMS-BAY
datasets. Tables 5 and 6 show the results for both the accuracy and lightness
measures. In this section, the best results are in bold, and the second-best
results are underlined.
Considering accuracy on PEMS04 and PEMS08 datasets, Table 5 shows that
LightCTS achieves the best MAE and RMSE results and ranks second regarding
MAPE. Although Fogs achieves the best MAPE on both datasets, it is only
marginally better than LightCTS (less than 0.1%), and its MAE and RMSE rank
only around 4th among all models. The accuracy results on METR-LA and PEMS-BAY
datasets in Table 6 show that LightCTS achieves the best MAE, at least the
second-best MAPE, and a competitive RMSE (top-3 in most cases). While LightCTS
does not always rank in the top-3 in terms of RMSE, it is only negligibly
below. For example, for the 15-minute forecast on the METR-LA dataset, LightCTS
achieves an RMSE of 5.16 and ranks 4th, while the top-3 models achieve 5.11,
5.14, and 5.15; the maximum margin is only 0.05, corresponding to 0.05 mph in
traffic speed. Given that there are always fluctuations across real-world
datasets, such a small performance difference is insignificant.
Considering lightness, Tables 5 and 6 show that LightCTS has significantly
fewer FLOPs and model parameters and achieves lower latency and peak memory
use than all other models, with the exception that AgCrn achieves slightly
fewer parameters and lower peak memory use on PEMS08 dataset. However, AgCrn
is much less accurate. LightCTS clearly uses fewer resources than the two most
accurate competing models, EnhanceNet and AutoCts (e.g., less than 1/6 and
1/10 FLOPs), while maintaining comparable accuracy.
Besides, although KD (Takamoto et al., 2020) enables creating the AutoCts-KDf
and AutoCts-KDp models that have FLOPs and parameter counts comparable to
those of LightCTS, their effectiveness metrics are significantly lower, as
seen in Tables 5 and 6.
In summary, LightCTS offers substantially reduced computational and storage
overheads while providing top-tier accuracy at multi-step CTS forecasting.
LightCTS thus offers unique value because it enables CTS forecasting with no
accuracy penalty using the limited resources often found in real-world
applications. In addition, it also lowers costs when deployed in non-
constrained settings, such as servers.
Table 5. Accuracy and lightness comparison for multi-step traffic flow forecasting.
Data | Model | FLOPs (M) | Params (K) | Latency (s) | Peak Mem (Mb) | MAE | RMSE | MAPE
---|---|---|---|---|---|---|---|---
PEMS04 | DcRnn | 3739 | 371 | 22.9 | 8.1 | 24.70 | 38.12 | 17.12%
PEMS04 | GwNet | 1277 | 311 | 4.8 | 6.8 | 19.16 | 30.46 | 13.26%
PEMS04 | AgCrn | 3936 | 749 | 9.5 | 19.2 | 19.83 | 32.26 | 12.97%
PEMS04 | MtGnn | 393 | 547 | 5.3 | 12.1 | 19.32 | 31.57 | 13.52%
PEMS04 | AutoCts | 2043 | 368 | 5.2 | 7.8 | 19.13 | 30.44 | 12.89%
PEMS04 | EnhanceNet | 969 | 283 | 4.2 | 6.0 | 19.11 | 30.34 | 14.33%
PEMS04 | Fogs | 5936 | 2366 | 14.6 | 42.9 | 19.34 | 31.20 | 12.71%
PEMS04 | AutoCts-KDf | 196 | 17 | 3.0 | 5.3 | 25.17 | 38.85 | 17.06%
PEMS04 | AutoCts-KDp | 1278 | 186 | 4.4 | 6.8 | 22.83 | 35.21 | 15.55%
PEMS04 | LightCTS | 147 | 185 | 1.1 | 4.7 | 18.79 | 30.14 | 12.80%
PEMS08 | DcRnn | 2070 | 371 | 14.9 | 4.8 | 17.86 | 27.83 | 11.45%
PEMS08 | GwNet | 479 | 309 | 1.0 | 4.0 | 15.13 | 24.07 | 10.10%
PEMS08 | AgCrn | 726 | 150 | 3.2 | 2.6 | 15.95 | 25.22 | 10.09%
PEMS08 | MtGnn | 153 | 352 | 2.7 | 4.6 | 15.71 | 24.62 | 10.03%
PEMS08 | AutoCts | 808 | 366 | 3.7 | 4.7 | 14.82 | 23.64 | 9.51%
PEMS08 | EnhanceNet | 365 | 275 | 1.7 | 3.8 | 14.82 | 23.60 | 9.58%
PEMS08 | Fogs | 1949 | 1294 | 6.6 | 14.2 | 14.92 | 24.09 | 9.42%
PEMS08 | AutoCts-KDf | 100 | 22 | 1.3 | 3.3 | 19.47 | 30.26 | 13.77%
PEMS08 | AutoCts-KDp | 485 | 181 | 1.9 | 4.0 | 16.89 | 26.40 | 11.08%
PEMS08 | LightCTS | 70 | 177 | 0.4 | 2.8 | 14.63 | 23.49 | 9.43%
Table 6. Accuracy and lightness comparison for multi-step traffic speed forecasting.
Data | Model | FLOPs (M) | Params (K) | Latency (s) | Peak Mem (Mb) | MAE (15 min) | RMSE (15 min) | MAPE (15 min) | MAE (30 min) | RMSE (30 min) | MAPE (30 min) | MAE (60 min) | RMSE (60 min) | MAPE (60 min)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
METR-LA | DcRnn | 2521 | 436 | 16.4 | 13.6 | 2.77 | 5.38 | 7.30% | 3.15 | 6.45 | 8.80% | 3.60 | 7.60 | 10.50%
METR-LA | GwNet | 658 | 309 | 1.7 | 8.9 | 2.69 | 5.15 | 6.90% | 3.07 | 6.22 | 8.37% | 3.53 | 7.37 | 10.01%
METR-LA | AgCrn | 2453 | 748 | 7.5 | 22.6 | 2.83 | 5.45 | 7.56% | 3.20 | 6.55 | 8.79% | 3.58 | 7.41 | 10.13%
METR-LA | MtGnn | 208 | 405 | 3.9 | 11.9 | 2.69 | 5.18 | 6.86% | 3.05 | 6.17 | 8.19% | 3.49 | 7.23 | 9.87%
METR-LA | AutoCts | 1090 | 366 | 2.8 | 11.4 | 2.67 | 5.11 | 6.80% | 3.05 | 6.11 | 8.15% | 3.47 | 7.14 | 9.81%
METR-LA | EnhanceNet | 648 | 453 | 2.6 | 13.4 | 2.69 | 5.14 | 6.93% | 3.06 | 6.10 | 8.29% | 3.49 | 7.23 | 9.96%
METR-LA | Fogs | 2858 | 1524 | 7.4 | 45.9 | 2.72 | 5.20 | 7.05% | 3.12 | 6.30 | 8.60% | 3.64 | 7.61 | 10.62%
METR-LA | AutoCts-KDf | 95 | 15 | 1.6 | 5.9 | 3.04 | 5.80 | 8.49% | 3.57 | 7.03 | 10.49% | 4.19 | 8.34 | 12.73%
METR-LA | AutoCts-KDp | 595 | 155 | 2.2 | 7.1 | 2.78 | 5.21 | 7.33% | 3.18 | 6.23 | 9.00% | 3.64 | 7.28 | 10.97%
METR-LA | LightCTS | 71 | 133 | 0.3 | 5.6 | 2.67 | 5.16 | 6.82% | 3.03 | 6.16 | 8.11% | 3.42 | 7.21 | 9.46%
PEMS-BAY | DcRnn | 5386 | 436 | 22.6 | 13.8 | 1.38 | 2.95 | 2.90% | 1.74 | 3.97 | 3.90% | 2.07 | 4.74 | 4.90%
PEMS-BAY | GwNet | 1408 | 312 | 3.7 | 10.3 | 1.30 | 2.74 | 2.73% | 1.63 | 3.70 | 3.67% | 1.95 | 4.52 | 4.63%
PEMS-BAY | AgCrn | 4224 | 749 | 10.1 | 22.7 | 1.35 | 2.83 | 2.87% | 1.69 | 3.81 | 3.84% | 1.96 | 4.52 | 4.67%
PEMS-BAY | MtGnn | 432 | 573 | 7.6 | 19.2 | 1.32 | 2.79 | 2.77% | 1.65 | 3.74 | 3.69% | 1.94 | 4.49 | 4.53%
PEMS-BAY | AutoCts | 2295 | 369 | 5.9 | 11.9 | 1.30 | 2.71 | 2.69% | 1.61 | 3.62 | 3.55% | 1.89 | 4.32 | 4.36%
PEMS-BAY | EnhanceNet | 1442 | 474 | 5.4 | 14.2 | 1.33 | 2.81 | 2.80% | 1.64 | 3.72 | 3.65% | 1.93 | 4.47 | 4.51%
PEMS-BAY | Fogs | 6608 | 2551 | 16.3 | 76.5 | 1.38 | 2.91 | 2.94% | 1.73 | 3.93 | 3.97% | 2.09 | 4.71 | 4.96%
PEMS-BAY | AutoCts-KDf | 218 | 18 | 3.4 | 9.5 | 1.42 | 2.92 | 2.95% | 1.78 | 4.02 | 3.99% | 2.11 | 4.82 | 5.06%
PEMS-BAY | AutoCts-KDp | 1431 | 248 | 4.4 | 10.4 | 1.38 | 2.85 | 2.87% | 1.72 | 3.82 | 3.86% | 2.06 | 4.62 | 4.73%
PEMS-BAY | LightCTS | 208 | 236 | 1.2 | 9.2 | 1.30 | 2.75 | 2.71% | 1.61 | 3.65 | 3.59% | 1.89 | 4.32 | 4.39%
Table 7. Accuracy and lightness comparison for single-step CTS forecasting.
Data | Model | FLOPs (M) | Params (K) | Latency (s) | Peak Mem (Mb) | RRSE (3rd) | CORR (3rd) | RRSE (6th) | CORR (6th) | RRSE (12th) | CORR (12th) | RRSE (24th) | CORR (24th)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Solar-Energy | DsaNet | 914 | 6377 | 0.8 | 32.5 | 0.1822 | 0.9842 | 0.2450 | 0.9701 | 0.3287 | 0.9444 | 0.4389 | 0.8943
Solar-Energy | MtGnn | 1090 | 348 | 0.5 | 9.9 | 0.1778 | 0.9852 | 0.2348 | 0.9726 | 0.3109 | 0.9509 | 0.4270 | 0.9031
Solar-Energy | MaGnn | 492 | 105 | 0.4 | 9.2 | 0.1771 | 0.9853 | 0.2361 | 0.9724 | 0.3105 | 0.9539 | 0.4108 | 0.9097
Solar-Energy | AutoCts | 2237 | 91 | 1.1 | 17.6 | 0.1750 | 0.9853 | 0.2298 | 0.9763 | 0.2957 | 0.9566 | 0.4143 | 0.9097
Solar-Energy | AutoCts-KDf | 418 | 12 | 0.4 | 9.2 | 0.1802 | 0.9834 | 0.2463 | 0.9696 | 0.3332 | 0.9403 | 0.4277 | 0.9021
Solar-Energy | AutoCts-KDp | 1196 | 41 | 0.7 | 13.4 | 0.1785 | 0.9844 | 0.2371 | 0.9736 | 0.3288 | 0.9435 | 0.4196 | 0.9043
Solar-Energy | LightCTS | 169 | 38 | 0.2 | 8.6 | 0.1714 | 0.9864 | 0.2202 | 0.9765 | 0.2955 | 0.9568 | 0.4129 | 0.9084
Electricity | DsaNet | 2262 | 6377 | 1.2 | 53.9 | 0.0855 | 0.9264 | 0.0963 | 0.9040 | 0.1020 | 0.8910 | 0.1044 | 0.8898
Electricity | MtGnn | 4800 | 362 | 1.5 | 21.4 | 0.0745 | 0.9474 | 0.0878 | 0.9316 | 0.0916 | 0.9278 | 0.0953 | 0.9234
Electricity | MaGnn | 2215 | 120 | 0.8 | 20.3 | 0.0745 | 0.9476 | 0.0876 | 0.9323 | 0.0908 | 0.9282 | 0.0963 | 0.9217
Electricity | AutoCts | 8740 | 95 | 3.2 | 21.3 | 0.0743 | 0.9477 | 0.0865 | 0.9315 | 0.0932 | 0.9247 | 0.0947 | 0.9239
Electricity | AutoCts-KDf | 1858 | 16 | 1.8 | 12.4 | 0.0818 | 0.9292 | 0.0949 | 0.9148 | 0.1003 | 0.9007 | 0.1018 | 0.8935
Electricity | AutoCts-KDp | 3937 | 33 | 2.3 | 17.3 | 0.0764 | 0.9442 | 0.0899 | 0.9275 | 0.0934 | 0.9188 | 0.0983 | 0.9071
Electricity | LightCTS | 239 | 27 | 0.4 | 10.0 | 0.0736 | 0.9445 | 0.0831 | 0.9343 | 0.0898 | 0.9261 | 0.0952 | 0.9215
### 5.2. Single-Step Forecasting
#### 5.2.1. Datasets
* •
Solar-Energy (Lai et al., 2018) contains records of solar power production in
megawatt-hour (MWh) collected from 137 photovoltaic plants in Alabama during
2006. The records range from 0 to 88.9 MWh with a mean of 6.4 MWh. The
sampling interval is 10 minutes.
* •
Electricity (Lai et al., 2018) contains records of electricity consumption in
kilowatt-hour (kWh) for 321 clients in Portugal during 2012 – 2014. The values
range from 0 to 764,000 kWh, with an average of 2,514 kWh. The sampling
interval is 15 minutes.
The data organization and data splitting follow existing work (Wu et al.,
2020, 2021). Dataset statistics are summarized in Table 8.
Table 8. Dataset statistics for single-step forecasting.
Dataset | $\mathtt{N}$ | $\mathtt{T}$ | $\mathtt{P}$ | $\mathtt{Q}$ | Split Ratio
---|---|---|---|---|---
Solar-Energy | 137 | 52,560 | 168 | {3, 6, 12, 24} | 6:2:2
Electricity | 321 | 26,304 | 168 | {3, 6, 12, 24} | 6:2:2
#### 5.2.2. Metrics
We use the same lightness metrics as for the multi-step forecasting task. We use
root relative squared error (RRSE) and correlation coefficient (CORR) to
measure forecasting accuracy, which are the conventional metrics used in
single-step CTS forecasting (Wu et al., 2021; Chen et al., 2022; Chang et al.,
2018). Specifically, RRSE indicates how well a model performs w.r.t. the
average of the true values, while CORR measures the strength of the linear
correlation between the forecast results and the true values. The more
accurate the model, the lower the RRSE and the higher the CORR.
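For illustration, both metrics can be computed in a few lines (a minimal NumPy sketch under the usual LSTNet-style definitions; the function names and the (time, N) array layout are ours, not from the LightCTS codebase):

```python
import numpy as np

def rrse(y_true, y_pred):
    # Root relative squared error: forecast error relative to the error of
    # always predicting the overall mean of the true values (lower is better).
    num = np.sum((y_true - y_pred) ** 2)
    den = np.sum((y_true - y_true.mean()) ** 2)
    return np.sqrt(num / den)

def corr(y_true, y_pred):
    # Empirical correlation between forecasts and true values, averaged
    # over the N time series (columns); higher is better.
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    denom = np.sqrt((yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0)) + 1e-12
    return float(((yt * yp).sum(axis=0) / denom).mean())
```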
#### 5.2.3. CTS Models in Comparisons
We include two CTS models that are specifically designed for single-step
forecasting:
* •
DsaNet (Huang et al., 2019). A dual self-attention network for multivariate
time series forecasting.
* •
MaGnn (Chen et al., 2022). A multi-branch model that extracts temporal
features at different time scales.
In addition, we include MtGnn (Wu et al., 2020) and AutoCts (Wu et al., 2021)
(see Section 5.1.3) as they also support single-step forecasting. For the comparison models, we use the settings that achieve the best accuracy; when a model was originally evaluated on the same dataset, we report its published results.
#### 5.2.4. Implementation Details
To build LightCTS for single-step forecasting, we proceed almost as for multi-step forecasting. The differences are as follows. According to parameter
tuning, we set $\mathtt{D}$ = 32 for the Solar-Energy dataset and $\mathtt{D}$
= 24 for the Electricity dataset, and we set the dilation rates in the
($\mathtt{L}_{T}$ = 8) L-TCN layers to [1, 2, 4, 8, 16, 32, 48, 64]. We adopt
a GL-Former with ($\mathtt{L}_{S}$ = 2) attention blocks (i.e., one global
attention followed by one local attention). We adopt the Adam optimizer with a
learning rate of 0.0005 to train models for 100 epochs.
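For concreteness, these settings can be summarised as a small configuration sketch (the dictionary layout and key names are ours, not taken from the LightCTS codebase):

```python
# Hyperparameters of Section 5.2.4 for single-step forecasting (illustrative).
SINGLE_STEP_CONFIG = {
    "embedding_size_D": {"Solar-Energy": 32, "Electricity": 24},
    "l_tcn_layers_LT": 8,
    "dilation_rates": [1, 2, 4, 8, 16, 32, 48, 64],  # one per L-TCN layer
    "gl_former_blocks_LS": 2,  # one global attention followed by one local
    "optimizer": "Adam",
    "learning_rate": 0.0005,
    "epochs": 100,
}
```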
#### 5.2.5. Overall Comparisons
Table 7 shows the results on Solar-Energy and Electricity datasets. In
accordance with existing research (Chen et al., 2022; Wu et al., 2020), we
report single-step forecasting results for the 3rd, 6th, 12th, and 24th future
time steps.
We observe similar trends as for multi-step forecasting. In terms of accuracy,
LightCTS achieves the best performance on most of the comparison items, and it
is very competitive on the others. Next, LightCTS is the most lightweight
model with far fewer FLOPs and parameters than all competitors. For example,
LightCTS uses fewer than 1/10 (_resp._ 1/3) of the FLOPs of AutoCts (_resp._ MaGnn). In addition, LightCTS has the lowest latency and peak memory use among all baselines. LightCTS's low demand for computing resources gives it great potential for deployment in resource-constrained environments.
### 5.3. Parameter Study
We systematically study the impact of key LightCTS hyperparameters, including
the embedding size $\mathtt{D}$, the group number $\mathtt{G}^{T}$ of L-TCN,
and the attention block number $\mathtt{L}_{S}$ of GL-Former. These
hyperparameters are selected as they are adjustable and affect model
performance noticeably. We summarize the results in Figures 5 and 6 for multi-
step forecasting on PEMS08 and single-step forecasting on Solar-Energy,
respectively. We report the results on the other datasets in the supplemental
material (lig, 2023).
#### 5.3.1. Impact of Embedding Size $\mathtt{D}$
Figure 5(a) shows the impact of $\mathtt{D}$ on model accuracy and lightness
for multi-step forecasting on the PEMS08 dataset. Both the FLOPs and number of
parameters increase steadily as $\mathtt{D}$ increases, and so do the latency
and peak memory use.
Considering accuracy, as $\mathtt{D}$ grows from 32 to 64, MAE, RMSE, and MAPE
decrease moderately. However, when $\mathtt{D}$ goes up from 64 to 80, the
forecasting errors increase slightly. The reason may be that a smaller
$\mathtt{D}$ restricts the model’s ability to extract ST-features, while a
larger $\mathtt{D}$ may introduce redundancy, make the model difficult to train, and lead to overfitting. In Figure 6(a), single-step forecasting exhibits similar trends to its multi-step counterpart: the FLOPs and number of parameters increase, while the forecasting errors first drop and then climb.
The results are consistent with those of existing studies (Wu et al., 2020;
Chen et al., 2022)—directly cutting the embedding size to a small value
inevitably reduces model accuracy. Thus, a new design for manipulating
$\mathtt{D}$, such as our L-TCN and GL-Former, is effective at enabling
lightweight and accurate CTS forecasting models.
Figure 5. Impact of (a) embedding size $\mathtt{D}$, (b) group number $\mathtt{G}^{T}$, and (c) attention block number $\mathtt{L}_{S}$ in GL-Former for multi-step forecasting on the PEMS08 dataset.
Figure 6. Impact of (a) embedding size $\mathtt{D}$, (b) group number $\mathtt{G}^{T}$, and (c) attention block number $\mathtt{L}_{S}$ in GL-Former for single-step forecasting on the Solar-Energy dataset.
#### 5.3.2. Impact of Group Number $\mathtt{G}^{T}$
We further investigate the impact of the group number $\mathtt{G}^{T}$ of
L-TCN on the performance and lightness of LightCTS. This parameter controls
the tradeoff between L-TCN’s capacity for temporal information extraction and
its lightness.
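To make the cost side of this tradeoff concrete, the following PyTorch sketch compares the parameter counts of a standard temporal convolution and a grouped one (the sizes are illustrative; this is not the actual LightCTS code):

```python
import torch.nn as nn

D, kernel = 64, 3  # illustrative embedding size and temporal kernel width

standard = nn.Conv1d(D, D, kernel)            # G^T = 1: all channels connected
grouped = nn.Conv1d(D, D, kernel, groups=4)   # G^T = 4: four channel groups

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(standard))  # 64*64*3 + 64 = 12352
print(n_params(grouped))   # 64*16*3 + 64 = 3136, roughly a 4x reduction
```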
The multi-step forecasting results in Figure 5(b) show that when
$\mathtt{G}^{T}$=4, the evaluation errors reach a minimum that is below the
value for $\mathtt{G}^{T}$=1, i.e., the standard TCN without grouping. This
implies that our grouping strategy can remove redundant connections without
compromising forecasting accuracy. However, due to information loss, the
evaluation errors increase substantially when the group number is increased to
8. A similar pattern is seen for single-step forecasting in Figure 6(b), where
the best accuracy is also obtained when $\mathtt{G}^{T}$=4.
Further, additional experiments show that similar patterns appear for the
other group number hyperparameters, namely $\mathtt{G}^{M}$ and
$\mathtt{G}^{F}$. For brevity, we report on these experiments in the
supplemental material (lig, 2023).
#### 5.3.3. Impact of Attention Block Number $\mathtt{L}_{S}$ in GL-Former
LightCTS supports an elastic number of attention blocks in GL-Former. Hence,
it is of interest to understand how the GL-Former attention block number
$\mathtt{L}_{S}$ affects accuracy and lightness. The number $\mathtt{L}_{T}$
of L-TCN layers is also tunable, but its value is tied to the input time series length $\mathtt{P}$ to produce sufficient receptive fields. We thus
focus on varying and testing the attention block number $\mathtt{L}_{S}$.
As shown in Figures 5(c) and 6(c), for both multi-step and single-step
forecasting, as GL-Former goes deeper (i.e., larger $\mathtt{L}_{S}$), the
evaluation errors first decrease and then increase. This happens because
although deeper models can theoretically better extract information, they are
prone to overfitting or hard to train (Vladimirova et al., 2019). Notably, in
Figure 6(c), since the T-operators consume a dominant proportion of FLOPs due
to the large input length ($\mathtt{P}$ = 168) of time series, the overall
FLOPs and peak memory use of LightCTS increase only slightly when
$\mathtt{L}_{S}$ is increased.
### 5.4. Ablation Study
Table 9. Ablation study on PEMS08. Refer to Section 5.4 for the details of the models.
Model | FLOPs (M) | Params (K) | Latency (s) | Peak Mem (Mb) | MAE | RMSE | MAPE
---|---|---|---|---|---|---|---
LightCTS | 70 | 177 | 0.4 | 2.8 | 14.63 | 23.49 | 9.43%
LightCTS$\backslash$T | 149 | 226 | 0.5 | 3.1 | 14.64 | 23.55 | 9.48%
LightCTS$\backslash$LS | 390 | 285 | 0.6 | 3.7 | 15.31 | 24.21 | 10.55%
LightCTS$\backslash$M | 75 | 239 | 0.5 | 3.0 | 14.57 | 23.48 | 9.48%
LightCTS$\backslash$F | 75 | 238 | 0.5 | 3.0 | 14.64 | 23.59 | 9.66%
LightCTS$\backslash$LA | 70 | 177 | 0.4 | 2.8 | 14.70 | 23.71 | 9.55%
LightCTS-A | 390 | 285 | 0.6 | 3.7 | 15.64 | 24.52 | 10.96%
LightCTS-CGCN | 53 | 74 | 0.3 | 1.7 | 16.53 | 26.34 | 10.63%
LightCTS-DGCN | 95 | 154 | 0.4 | 2.6 | 16.24 | 25.79 | 10.72%
We conduct an ablation study on the PEMS08 dataset to understand the contribution
of each component in LightCTS. Specifically, we implement a group of LightCTS
variants by removing one of the lightweight components and observe the impact
on both accuracy and lightness. The variants include:
* •
LightCTS$\backslash$T that substitutes the L-TCN with the standard TCN as the
T-operator module.
* •
LightCTS$\backslash$LS that substitutes the last-shot compression with the
classical full-shot aggregation method (see Figure 1(a)).
* •
LightCTS$\backslash$M that substitutes the L-MHA component of the GL-Former
with the standard MHA component.
* •
LightCTS$\backslash$F that substitutes the L-FFN component of the GL-Former
with the standard FFN component.
* •
LightCTS$\backslash$LA that substitutes all local attention blocks with global
attention blocks.
We also introduce three other LightCTS variants to assess our design choices:
* •
LightCTS-A that adopts the alternate stacking pattern instead of the proposed
plain stacking.
* •
LightCTS-CGCN that substitutes the GL-Former with Chebyshev GCNs (Defferrard
et al., 2016) as the S-operator module.
* •
LightCTS-DGCN that substitutes the GL-Former with Diffusion GCNs (Gasteiger et
al., 2019) as the S-operator module.
Table 9 shows the results. We make the following observations. 1) LightCTS
achieves almost the best accuracy with far fewer FLOPs and parameters when
all the lightweight techniques are deployed, implying that the proposed
modules (i.e., L-TCN and GL-Former) and the last-shot compression are more
efficient than their standard counterparts. 2) LightCTS$\backslash$LA is
inferior to LightCTS in terms of accuracy, indicating that the local attention
block utilizing prior knowledge of spatial information does help achieve
better forecasting performance. 3) The comparison between LightCTS and
LightCTS-A indicates that the plain stacking strategy is much better at
reducing overheads than the standard alternate stacking strategy, while
simultaneously improving the forecasting accuracy. 4) When comparing LightCTS-
CGCN, LightCTS-DGCN, and LightCTS, we observe that although LightCTS consumes
more resources, its accuracy surpasses the GCN-based models’ by a large
margin. Thus, we choose the Transformer-based S-operators in LightCTS.
### 5.5. Studies on Memory Constraints
We evaluate the performance of LightCTS under memory constraints of 3Mb,
2.5Mb, and 2Mb, as commonly found in commodity MCUs (STM, 2023). We include
representative baselines: AutoCts (the most accurate), AutoCts-KDf/AutoCts-KDp
(KD variants of AutoCts), AgCrn (the lowest peak memory use), and GwNet (the
lowest latency). For fair comparisons, we adjust the embedding size
$\mathtt{D}$ for all models to fit into the constrained memory while
maintaining their structures and components. Results on PEMS08 are presented
in Table 10. Results on other datasets are available elsewhere (lig, 2023).
Even with the lowest possible embedding size, AutoCts and its variants are
unable to comply with the memory constraints due to their large intermediate
results. This is concrete evidence of the inapplicability of these models to
resource-constrained devices such as MCUs. While GwNet and AgCrn are able to
meet the memory constraints, they experience significant accuracy loss. In
contrast, LightCTS shows the least accuracy degradation and surpasses the
baseline models significantly with the lowest latency under all studied
constraints. These results demonstrate the need for specialized lightweight
designs as state-of-the-art CTS models without such considerations fail to
achieve satisfactory accuracy when downscaled for low-memory settings.
Table 10. Models vs memory constraints on PEMS08.
Mem Constraint (Mb) | Model | FLOPs (M) | Params (K) | Latency (s) | Peak Mem (Mb) | MAE | RMSE | MAPE
---|---|---|---|---|---|---|---|---
3 | LightCTS | 70 | 177 | 0.4 | 2.8 | 14.63 | 23.49 | 9.43%
 | GwNet | 328 | 178 | 1.0 | 2.9 | 17.40 | 27.34 | 11.14%
 | AgCrn | 726 | 150 | 3.2 | 2.6 | 15.95 | 25.22 | 10.09%
2.5 | LightCTS | 42 | 103 | 0.3 | 2.5 | 14.82 | 23.78 | 9.66%
 | GwNet | 137 | 49 | 0.8 | 2.4 | 18.00 | 28.20 | 11.49%
 | AgCrn | 586 | 115 | 2.9 | 2.5 | 16.72 | 26.26 | 11.27%
2 | LightCTS | 8 | 15 | 0.2 | 2.0 | 16.70 | 26.28 | 10.75%
 | GwNet | 40 | 10 | 0.6 | 2.0 | 19.38 | 30.14 | 13.58%
 | AgCrn | 100 | 11 | 2.2 | 2.0 | 19.17 | 29.71 | 13.06%
## 6\. Related Work
DL-based Models for CTS Forecasting. Deep learning models dominate CTS
forecasting. Different studies involve different S/T-operators. GCNs and GCN
variants are the most common S-operators (Li et al., 2018; Yu et al., 2018b;
Bai et al., 2020; Wu et al., 2020; Chen et al., 2022). In addition to building
graphs using prior knowledge in GCNs, learned graphs (Wu et al., 2019, 2020;
Chen et al., 2022) and adaptive graphs (Bai et al., 2020) demonstrate
advantages in capturing dynamic and implicit correlations among time series.
Further, TCNs (Yu et al., 2018b; Wu et al., 2019, 2020; Guo et al., 2019;
Huang et al., 2020; Chen et al., 2022) and RNNs (Bai et al., 2020; Chen et
al., 2020a; Chang et al., 2018; Li et al., 2018; Shih et al., 2019) are the
most widely adopted T-operators. Recently, the Transformer and its variants
have been employed as S-operators (Xu et al., 2020; Park et al., 2020; Zhou et
al., 2021; Wu et al., 2021) and T-operators (Xu et al., 2020; Park et al.,
2020; Zhou et al., 2021; Wu et al., 2021), due to the powerful correlation
modeling abilities of Transformers. Neural architecture search (NAS) has been
introduced to automatically select appropriate S/T-operators, resulting in
competitive performance without having to design a CTS forecasting model
manually (Wu et al., 2021; Pan et al., 2021).
All existing studies focus on improving forecasting accuracy. However,
progress is slowing down and becoming marginal (see Tables 6 and 7). In
contrast, LightCTS contributes lightweight S/T-operators and enables
lightweight CTS models (w.r.t. computation and model size) without
compromising forecasting accuracy. In this sense, LightCTS renders forecasting
more cost-effective and extends its potential applicability to edge devices in
CPSs.
Lightweight DL Models. Developing light DL models is motivated by the
requirements of real-time and mobile applications. There are two streams of
related work (Chen et al., 2020b): compressing well-trained big models and
designing lightweight models from scratch. The first stream has been well
studied in areas like CV (Krizhevsky et al., 2012; Han et al., 2015; Yu et
al., 2018a) and NLP (Michel et al., 2019; Jiao et al., 2020; Wang et al.,
2020) fields. However, LightCTS falls outside this stream as there are no
compelling well-trained CTS models.
Next, impressive advances in the design of lightweight models from scratch
have also been widely reported in CV (Zhang et al., 2018; Sandler et al.,
2018; Tan and Le, 2019; Howard et al., 2017, 2019; Wu et al., 2018). A popular
operator is the depth-wise separable convolution that decouples the standard
convolution into intra- and inter-feature map computation. It serves as the
basic block of the famous MobileNets (Howard et al., 2017) and Xception
(Chollet, 2017). A follow-up work is the inverted bottleneck structure (Sandler et al., 2018), which uses a narrow-wide-narrow convolution to reduce computation while maintaining competitive accuracy. From a different perspective,
EfficientNet (Tan and Le, 2019) aims to scale existing modules to meet certain
constraints rather than designing new efficient modules. However, the
aforementioned lightweight modules cannot be applied directly to CTS
forecasting because of the inherently different data structures and tasks
involved. For example, these methods focus mainly on simplifying 2D and 3D
convolutions for extracting local features of image and video data, while CTS
models require uncovering long-term temporal dynamics and non-uniform spatial
correlations. Given this gap, we identify potential directions for achieving lightweight CTS forecasting based on a careful study of existing CTS models.
LightCTS offers a plain stacking architecture together with a last-shot
compression to efficiently deal with the heterogeneity of S/T-operators, which
is unique compared to models used in CV. We also design L-TCN and GL-Former
according to the exclusive characteristics of temporal dynamics and spatial
correlations in CTS.
## 7\. Conclusion
We present LightCTS, a new framework for lightweight forecasting of correlated
time series (CTS) that achieves comparable accuracy to state-of-the-art CTS
forecasting models but consumes much fewer computational resources. LightCTS
integrates a set of novel computational cost reduction techniques, notably a
plain stacking architecture, the L-TCN (Light Temporal Convolutional Network)
and GL-Former (Global-Local TransFormer) modules for extracting spatio-temporal
features, and a last-shot compression scheme for reducing redundant,
intermediate features. Comprehensive experiments offer evidence that LightCTS
is capable of providing state-of-the-art CTS forecasting accuracy with much
fewer FLOPs and parameters than existing CTS forecasting proposals.
## References
* fvc (2023) 2023. Fvcore. https://github.com/facebookresearch/fvcore. (Accessed Jan 2023).
* lig (2023) 2023. LightCTS project: Code, datasets and supplemental material. https://github.com/AI4CTS/lightcts. (Accessed Jan 2023).
* STM (2023) 2023. Microcontrollers and microprocessors, STMicroelectronics. https://www.st.com/en/microcontrollers-microprocessors.html. (Accessed Jan 2023).
* Bai et al. (2020) Lei Bai, Lina Yao, Can Li, Xianzhi Wang, and Can Wang. 2020. Adaptive graph convolutional recurrent network for traffic forecasting. In _NeurIPS_. 17804–17815.
* Cao and Tay (2003) Li-Juan Cao and Francis Eng Hock Tay. 2003. Support vector machine with adaptive parameters in financial time series forecasting. _TNNLS_ 14, 6 (2003), 1506–1518.
* Chang et al. (2018) Yen-Yu Chang, Fan-Yun Sun, Yueh-Hua Wu, and Shou-De Lin. 2018. A memory-network based solution for multivariate time-series forecasting. _arXiv preprint arXiv:1809.02105_ (2018).
* Chen et al. (2021) Boyu Chen, Peixia Li, Baopu Li, Chen Lin, Chuming Li, Ming Sun, Junjie Yan, and Wanli Ouyang. 2021. BN-NAS: Neural architecture search with batch normalization. In _CVPR_. 307–316.
* Chen et al. (2022) Ling Chen, Donghui Chen, Zongjiang Shang, Youdong Zhang, Bo Wen, and Chenghu Yang. 2022. Multi-scale adaptive graph neural network for multivariate time series forecasting. _arXiv preprint arXiv:2201.04828_ (2022).
* Chen et al. (2020a) Weiqi Chen, Ling Chen, Yu Xie, Wei Cao, Yusong Gao, and Xiaojie Feng. 2020a. Multi-range attentive bicomponent graph convolutional network for traffic forecasting. In _AAAI_. 3529–3536.
* Chen et al. (2020b) Yanjiao Chen, Baolin Zheng, Zihan Zhang, Qian Wang, Chao Shen, and Qian Zhang. 2020b. Deep learning on mobile and embedded devices: State-of-the-art, challenges, and future directions. _ACM Comput. Surv._ 53, 4 (2020), 1–37.
* Cheng et al. (2022) Xu Cheng, Fan Shi, Xiufeng Liu, Meng Zhao, and Shengyong Chen. 2022. A novel deep class-imbalanced semisupervised model for wind turbine blade icing detection. _TNNLS_ 33, 6 (2022), 2558–2570.
* Chollet (2017) François Chollet. 2017. Xception: Deep learning with depthwise separable convolutions. In _CVPR_. 1251–1258.
* Cirstea et al. (2021) Razvan-Gabriel Cirstea, Tung Kieu, Chenjuan Guo, Bin Yang, and Sinno Jialin Pan. 2021. Enhancenet: Plugin neural networks for enhancing correlated time series forecasting. In _ICDE_. 1739–1750.
* de Freitas et al. (1999) Nando de Freitas, Marta Milo, Philip Clarkson, Mahesan Niranjan, and Andrew Gee. 1999. Sequential support vector machines. In _IEEE Signal Processing Society Workshop_. 31–40.
* Defferrard et al. (2016) Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In _NIPS_. 3837–3845.
* Derler et al. (2011) Patricia Derler, Edward A Lee, and Alberto Sangiovanni Vincentelli. 2011. Modeling cyber-physical systems. _Proc. IEEE_ 100, 1 (2011), 13–28.
* Du et al. (2019) Shengdong Du, Tianrui Li, Yan Yang, and Shi-Jinn Horng. 2019. Deep air quality forecasting using hybrid deep learning framework. _TKDE_ 33, 6 (2019), 2412–2424.
* Faloutsos et al. (2019) Christos Faloutsos, Jan Gasthaus, Tim Januschowski, and Yuyang Wang. 2019. Classical and contemporary approaches to big time series forecasting. In _SIGMOD_. 2042–2047.
* Gasteiger et al. (2019) Johannes Gasteiger, Stefan Weißenberger, and Stephan Günnemann. 2019\. Diffusion improves graph learning. In _NeurIPS_. 13333–13345.
* Grigsby et al. (2021) Jake Grigsby, Zhe Wang, and Yanjun Qi. 2021. Long-range transformers for dynamic spatiotemporal forecasting. _arXiv preprint arXiv:2109.12218_ (2021).
* Guo et al. (2019) Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, and Huaiyu Wan. 2019. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In _AAAI_. 922–929.
* Han et al. (2020) Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, and Chang Xu. 2020. Ghostnet: More features from cheap operations. In _CVPR_. 1580–1589.
* Han et al. (2015) Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. In _NIPS_. 1135–1143.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In _CVPR_. 770–778.
* Howard et al. (2019) Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. 2019. Searching for mobilenetv3. In _CVPR_. 1314–1324.
* Howard et al. (2017) Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. _arXiv preprint arXiv:1704.04861_ (2017).
* Hu et al. (2018) Jie Hu, Li Shen, and Gang Sun. 2018. Squeeze-and-excitation networks. In _CVPR_. 7132–7141.
* Huang et al. (2020) Rongzhou Huang, Chuyin Huang, Yubao Liu, Genan Dai, and Weiyang Kong. 2020. LSGCN: Long short-term traffic prediction with graph convolutional networks. In _IJCAI_. 2355–2361.
* Huang et al. (2019) Siteng Huang, Donglin Wang, Xuehan Wu, and Ao Tang. 2019. Dsanet: Dual self-attention network for multivariate time series forecasting. In _CIKM_. 2129–2132.
* Jiao et al. (2020) Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In _EMNLP_. 4163–4174.
* Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In _NIPS_. 1106–1114.
* Lai et al. (2018) Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. 2018. Modeling long-and short-term temporal patterns with deep neural networks. In _SIGIR_. 95–104.
* Lai et al. (2022) Zhichen Lai, Xu Cheng, Xiufeng Liu, Lizhen Huang, and Yongping Liu. 2022. Multiscale wavelet-driven graph convolutional network for blade icing detection of wind turbines. _IEEE Sensors Journal_ 22, 22 (2022), 21974–21985.
* Lea et al. (2016) Colin Lea, Rene Vidal, Austin Reiter, and Gregory D Hager. 2016. Temporal convolutional networks: A unified approach to action segmentation. In _ECCV_. 47–54.
* Li et al. (2021) Changlin Li, Guangrun Wang, Bing Wang, Xiaodan Liang, Zhihui Li, and Xiaojun Chang. 2021. Dynamic slimmable network. In _CVPR_. 8607–8617.
* Li et al. (2018) Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. 2018. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In _ICLR_. 1–10.
* Liu et al. (2021) Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In _CVPR_. 10012–10022.
* Ma et al. (2018) Lin Ma, Dana Van Aken, Ahmed Hefny, Gustavo Mezerhane, Andrew Pavlo, and Geoffrey J Gordon. 2018. Query-based workload forecasting for self-driving database management systems. In _SIGMOD_. 631–645.
* Mehta and Rastegari (2021) Sachin Mehta and Mohammad Rastegari. 2021. MobileViT: Light-weight, general-purpose, and mobile-friendly vision transformer. In _ICLR_.
* Michel et al. (2019) Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one?. In _NeurIPS_. 14014–14024.
* Morales-Hernández et al. (2022) Alejandro Morales-Hernández, Inneke Van Nieuwenhuyse, and Sebastian Rojas Gonzalez. 2022. A survey on multi-objective hyperparameter optimization algorithms for Machine Learning. _Artificial Intelligence Review_ (2022), 1–51.
* Pan et al. (2021) Zheyi Pan, Songyu Ke, Xiaodu Yang, Yuxuan Liang, Yong Yu, Junbo Zhang, and Yu Zheng. 2021. AutoSTG: Neural architecture search for predictions of spatio-temporal graph. In _WWW_. 1846–1855.
* Papadimitriou and Yu (2006) Spiros Papadimitriou and Philip Yu. 2006. Optimal multi-scale patterns in time series streams. In _SIGMOD_. 647–658.
* Park et al. (2020) Cheonbok Park, Chunggi Lee, Hyojin Bahng, Yunwon Tae, Seungmin Jin, Kihwan Kim, Sungahn Ko, and Jaegul Choo. 2020. ST-GRAT: A novel spatio-temporal graph attention networks for accurately forecasting dynamically changing road speed. In _CIKM_. 1215–1224.
* Pedersen et al. (2020) Simon Aagaard Pedersen, Bin Yang, and Christian S Jensen. 2020. Anytime stochastic routing with hybrid learning. _PVLDB_ 13, 9 (2020), 1555–1567.
* Rao et al. (2022) Xuan Rao, Hao Wang, Liang Zhang, Jing Li, Shuo Shang, and Peng Han. 2022. FOGS: First-order gradient supervision with learning-based graph for traffic flow forecasting. In _IJCAI_.
* Sandler et al. (2018) Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. 2018. MobileNetV2: Inverted residuals and linear bottlenecks. In _CVPR_. 4510–4520.
* Shi et al. (2016) Weisong Shi, Jie Cao, Quan Zhang, Youhuizi Li, and Lanyu Xu. 2016. Edge computing: Vision and challenges. _IEEE Internet Things J._ 3, 5 (2016), 637–646.
* Shih et al. (2019) Shun-Yao Shih, Fan-Keng Sun, and Hung-yi Lee. 2019. Temporal pattern attention for multivariate time series forecasting. _Mach. Learn._ 108, 8 (2019), 1421–1441.
* Song et al. (2020) Chao Song, Youfang Lin, Shengnan Guo, and Huaiyu Wan. 2020\. Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. In _AAAI_. 914–921.
* Sudharsan et al. (2021) Bharath Sudharsan, John G Breslin, and Muhammad Intizar Ali. 2021. Ml-mcu: A framework to train ml classifiers on mcu-based iot edge devices. _IEEE Internet Things J._ (2021).
* Takamoto et al. (2020) Makoto Takamoto, Yusuke Morishita, and Hitoshi Imaoka. 2020. An efficient method of training small models for regression problems with knowledge distillation. In _MIPR_. IEEE, 67–72.
* Tan and Le (2019) Mingxing Tan and Quoc Le. 2019. Efficientnet: Rethinking model scaling for convolutional neural networks. In _ICML_. 6105–6114.
* Tay and Cao (2002) Francis EH Tay and LJ Cao. 2002. Modified support vector machines in financial time series forecasting. _Neurocomputing_ 48, 1-4 (2002), 847–861.
* van den Oord et al. (2016) Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. 2016. WaveNet: A generative model for raw audio. In _ISCA Speech Synthesis Workshop_. 125.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _NeurIPS_ , Vol. 30.
* Vladimirova et al. (2019) Mariia Vladimirova, Jakob Verbeek, Pablo Mesejo, and Julyan Arbel. 2019. Understanding priors in Bayesian neural networks at the unit level. In _ICML_. 6458–6467.
* Wang et al. (2020) Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In _NeurIPS_. 5776–5788.
* Wu et al. (2018) Bichen Wu, Alvin Wan, Xiangyu Yue, Peter Jin, Sicheng Zhao, Noah Golmant, Amir Gholaminejad, Joseph Gonzalez, and Kurt Keutzer. 2018. Shift: A zero flop, zero parameter alternative to spatial convolutions. In _CVPR_. 9127–9135.
* Wu et al. (2021) Xinle Wu, Dalin Zhang, Chenjuan Guo, Chaoyang He, Bin Yang, and Christian S Jensen. 2021. AutoCTS: Automated correlated time series forecasting. _PVLDB_ 15, 4 (2021), 971–983.
* Wu et al. (2020) Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Xiaojun Chang, and Chengqi Zhang. 2020. Connecting the dots: Multivariate time series forecasting with graph neural networks. In _KDD_. 753–763.
* Wu et al. (2019) Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, and Chengqi Zhang. 2019. Graph WaveNet for deep spatial-temporal graph modeling. In _IJCAI_. 1907–1913.
* Xie et al. (2017) Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. 2017. Aggregated residual transformations for deep neural networks. In _CVPR_. 1492–1500.
* Xu et al. (2020) Mingxing Xu, Wenrui Dai, Chunmiao Liu, Xing Gao, Weiyao Lin, Guo-Jun Qi, and Hongkai Xiong. 2020. Spatial-temporal transformer networks for traffic flow forecasting. _arXiv preprint arXiv:2001.02908_ (2020).
* Yu et al. (2018b) Bing Yu, Haoteng Yin, and Zhanxing Zhu. 2018b. Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting. In _IJCAI_. 3634–3640.
* Yu and Koltun (2016) Fisher Yu and Vladlen Koltun. 2016. Multi-scale context aggregation by dilated convolutions. In _ICLR_. 1–10.
* Yu et al. (2018a) Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I Morariu, Xintong Han, Mingfei Gao, Ching-Yung Lin, and Larry S Davis. 2018a. NISP: Pruning networks using neuron importance score propagation. In _CVPR_. 9194–9203.
* Yuan et al. (2020) Haitao Yuan, Guoliang Li, Zhifeng Bao, and Ling Feng. 2020\. Effective travel time estimation: When historical trajectories over road networks matter. In _SIGMOD_. 2135–2149.
* Zhang et al. (2018) Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. 2018. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In _CVPR_. 6848–6856.
* Zhou et al. (2021) Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. 2021. Informer: Beyond efficient transformer for long sequence time-series forecasting. In _AAAI_. 11106–11115.
* Zhu and Shasha (2002) Yunyue Zhu and Dennis Shasha. 2002. Statstream: Statistical monitoring of thousands of data streams in real time. (2002), 358–369.
# Deep Reinforcement Learning for Controlled Traversing of the Attractor
Landscape of Boolean Models in the Context of Cellular Reprogramming
Andrzej Mizera (University of Warsaw, IDEAS NCBR) and Jakub Zarzycki (University of Warsaw, IDEAS NCBR)
###### Abstract
Cellular reprogramming can be used for both the prevention and cure of
different diseases. However, the efficiency of discovering reprogramming
strategies with classical wet-lab experiments is hindered by lengthy time
commitments and high costs. In this study, we develop a novel computational
framework based on deep reinforcement learning that facilitates the
identification of reprogramming strategies. For this aim, we formulate a
control problem in the context of cellular reprogramming for the frameworks of Boolean networks (BNs) and probabilistic Boolean networks (PBNs) under the asynchronous update mode. Furthermore, we introduce the notion of a pseudo-attractor and a procedure for the identification of pseudo-attractor states during training. Finally, we devise a computational framework
for solving the control problem, which we test on a number of different
models.
## 1 Introduction
Complex diseases pose a great challenge largely because genes and gene
products operate within a complex system – the gene regulatory network (GRN).
There is an inherent dynamic behaviour emerging from the structural wiring of
a GRN: gene expression profiles, i.e., states of a GRN, evolve in time to
finally reach stable states referred to as attractors. Attractors correspond
to cell types or cell fates [13]. During normal development of a multi-
cellular organism, not all attractors are manifested. Some of the ‘abnormal
attractors’, associated with diseases, become accessible by disturbance of the
GRN’s dynamics. This is seldom a consequence of a disruption in a single gene,
but rather arises as an aftermath of GRN perturbations [3]. This could be
cured by guiding cells to desired ‘healthy’ attractors with experimental
techniques of cellular reprogramming, i.e., the artificial changing of cell
fate. Unfortunately, finding effective interventions that trigger desired
changes using solely wet-lab experiments is difficult, costly, and requires
lengthy time commitments. This motivates us to consider _in-silico_
approaches.
Although various computational frameworks are commonly used to model GRNs, the
formalism of Boolean networks (BNs) and its extension, i.e., probabilistic
Boolean networks (PBNs), have the advantage of being simple yet capable of
capturing the important dynamic properties of the system under study. As such,
they facilitate the modelling of large biological systems. This is especially
relevant in the context of _in-silico_ identification of effective cellular
reprogramming strategies, which requires large GRNs to be modelled.
Identification of cellular reprogramming strategies can be stated as a control
problem of BN and PBN models of GRNs. Although many BN/PBN control methods
exist in the literature, the existing structure- and dynamics-based state-of-
the-art computational techniques are limited to small and mid-size networks,
i.e., of up to a hundred of genes or so, usually requiring the systems to be
decomposed in some way. This is often insufficient for cellular reprogramming
considerations.
The issue of scalability can be addressed by devising new methods based on
deep reinforcement learning (DRL) techniques, which have proved very
successful in decision problems characterised by huge state-action spaces. To
contribute to the realisation of this idea, we formulate a control problem in
the context of cellular reprogramming for the frameworks of BNs and PBNs under
the asynchronous update mode. Furthermore, we introduce the notion of a
pseudo-attractor and a procedure for identifying pseudo-attractor states
during DRL agent training. Finally, these contributions allow us to devise a
DRL-based framework for solving the control problem. We consider our
contributions as a relevant step towards achieving scalable control methods
for large Boolean models of GRNs for identifying effective and efficient
cellular reprogramming strategies.
The paper is structured as follows. Related work is discussed in Sec. 2.
Preliminaries are provided in Sec. 3. We formulate our control problem in the
context of cellular reprogramming in Sec. 4 and devise our DRL-based control
framework in Sec. 5. The experiments performed to evaluate our framework and
the obtained results are presented in Sec. 6 and Sec. 7, respectively. Finally,
we conclude our study in Sec. 8.
## 2 Related work
### 2.1 Dynamics-based approaches to GRN control
Identification of proper control strategies for non-linear systems requires
both network structure and dynamics [11]. Thus, we focus on dynamic-based and
DRL-based methods for BN/PBN control. An efficient method based on the ‘divide
and conquer’ strategy was proposed in [18] to solve the minimal one-step
source-target control problem by using instantaneous, temporary, and permanent
gene perturbations. The minimal sequential source-target control and the
target control problems of BNs were considered in [23] and [22], respectively.
All these methods are implemented in the software tool CABEAN [24]. Recently,
semi-symbolic algorithms were proposed in [4] to stabilise partially specified
asynchronous BNs in states exhibiting specific traits. In [19], the control
problem for the most permissive BN update mode in the context of fixed points
and minimal trap spaces is considered.
### 2.2 DRL-based approches to GRN control
The application of reinforcement learning for controlling GRNs was pioneered
in [21], with a focus on controlling GRNs by avoiding undesirable states in terms of the steady-state probabilities of PBNs. The main idea was to treat the
time series gene expression samples as a sequence of experience tuples and use
a batch version of Q-Learning to produce an approximated policy over the
experience tuples. Later, the BOAFQI-Sarsa method that does not require time
series samples was devised in [16]. A batch reinforcement learning method,
mSFQI, was proposed in [15] for control based on probabilities of gene
activity profiles. Recently, the study of [1] used a Deep Q-Network with
prioritised experience replay, for control of synchronous PBNs to drive the
networks from a particular state towards a more desirable one. Finally, a DRL-
based approximate solution to the control problem in synchronous PBNs was
proposed in [14]. The proposed method finds a control strategy from any
network state to a specified target attractor using a Double Deep Q-Network
model.
## 3 Preliminaries
### 3.1 Boolean and probabilistic Boolean networks
Boolean networks are a well-established framework for the modelling of GRNs. A Boolean network consists of nodes that can each be in one of two states, and
functions describing how the individual nodes interact with each other. PBNs
are an extension of the formalism of BNs.
###### Definition 3.1.
(Boolean Network) A _Boolean Network_ is defined as a pair $(V,F)$, where
$V=\\{x_{1},x_{2},\ldots,x_{n}\\}$ is a set of binary-valued nodes (also
referred to as genes) and $F=\\{f_{1},f_{2},\ldots,f_{n}\\}$ is a set of
Boolean predictor functions, where
$f_{i}(x_{i_{1}},x_{i_{2}},\ldots,x_{i_{k}})$ defines the value of node
$x_{i}$ depending on the values of the $k\leq n$ parent nodes
$x_{i_{1}},x_{i_{2}},\ldots,x_{i_{k}}$ with $i_{j}\in[1..n]$ for $j\leq k$.
Since interactions in biology are usually more complex, we need a more general model of a GRN. We achieve this by allowing each node to have multiple
Boolean functions. Formally, probabilistic Boolean networks are defined as
follows:
###### Definition 3.2.
(Probabilistic Boolean Network) A _probabilistic Boolean network_ is defined
as a pair $(V,\mathcal{F})$, where $V=\\{x_{1},x_{2},\ldots,x_{n}\\}$ is a set
of binary-valued nodes (also referred to as genes) and
$\mathcal{F}=(F_{1},F_{2},\ldots,F_{n})$ is a list of sets. Each node
$x_{i}\in V$, $i=1,2,\ldots,n$, has associated a set $F_{i}\in\mathcal{F}$ of
Boolean predictor functions:
$F_{i}=\\{f^{i}_{1},f^{i}_{2},\ldots,f^{i}_{l(i)}\\}$, where $l(i)$ is the
number of predictor functions of node $x_{i}$. Each $f^{i}_{j}\in F_{i}$ is a
Boolean function defined with respect to a subset of $V$ referred to as parent
nodes for $f^{i}_{j}$ and denoted $\textrm{Pa}(f^{i}_{j})$. For each node
$x_{i}\in V$ there is a probability distribution
$\mathbf{c}^{i}=(c^{i}_{1},c^{i}_{2},\ldots,c^{i}_{l(i)})$ on $F_{i}$, where
each predictor function $f^{i}_{j}\in F_{i}$ has an associated selection
probability denoted $c^{i}_{j}$; it holds that $\sum_{j=1}^{l(i)}c^{i}_{j}=1$.
A PBN in which each node admits only one Boolean function is a _Boolean network_.
### 3.2 Network dynamics
We define a _state_ of a BN/PBN as an $n$-dimensional vector
$\mathbf{s}\in\\{0,1\\}^{n}$, where the $i$-th element represents the state of
gene $x_{i}$ for $i\in[1..n]$. A BN/PBN evolves in discrete time steps. It
starts in an initial state $\mathbf{s}_{0}$ and its state gets updated in
every time step in accordance with the predictor functions. In this study, we
focus on the asynchronous updating, which is preferable in the context of GRN
modelling. Under the asynchronous scheme, a single gene $x_{i}$ is selected
and updated in accordance with its predictor function $f_{i}$ (BNs) or one
randomly selected from $F_{i}$ in accordance with $\mathbf{c}^{i}$ (PBNs). The
network dynamics can be depicted in the form of a _state transition graph_.
Based on this concept, we can introduce the notion of a BN/PBN attractor.
###### Definition 3.3.
(State Transition Graph (STG)) A state transition graph of a BN/PBN of $n$
genes under the asynchronous update mode is a graph $G(S,\rightarrow)$, where
$S=\\{0,1\\}^{n}$ is the set of all possible states and $\rightarrow$ is the
set of directed edges such that a directed edge from $s$ to $s^{\prime}$,
denoted $s\rightarrow s^{\prime}$, is in $\rightarrow$ if and only if
$s^{\prime}$ can be obtained from $s$ by a single asynchronous update.
###### Definition 3.4.
(Attractor) An _attractor_ of a BN/PBN is a bottom strongly connected
component in the STG of the network. A _fixed-point attractor_ and a _multi-
state attractor_ are bottom strongly connected components consisting of a
single state or more than one state, respectively.
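Both notions translate directly into code. The sketch below shows one asynchronous update step of a PBN and attractor extraction as bottom SCCs of an STG; the representation is hypothetical (`F[i]` is the list of predictors of gene $x_{i}$, `C[i]` their selection probabilities, and the STG a `networkx` digraph):

```python
import random
import networkx as nx

def async_step(state, F, C):
    # One asynchronous update: pick a single gene uniformly at random,
    # pick one of its predictors according to the selection probabilities,
    # and update only that gene.
    i = random.randrange(len(state))
    f = random.choices(F[i], weights=C[i], k=1)[0]
    new_state = list(state)
    new_state[i] = f(state)
    return tuple(new_state)

def attractors(stg: nx.DiGraph):
    # Attractors (Def. 3.4) are the bottom SCCs of the STG, i.e., strongly
    # connected components with no outgoing edge in the condensation DAG.
    cond = nx.condensation(stg)
    return [cond.nodes[c]["members"] for c in cond.nodes
            if cond.out_degree(c) == 0]
```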
###### Example 3.5.
We consider a PBN of 4 genes $V=\\{x_{0},x_{1},x_{2},x_{3}\\}$ regulated in
accordance with the following Boolean functions:
$f_{0}^{1}(x_{0})=x_{0}$
$f_{0}^{2}(x_{0},x_{1},x_{2},x_{3})=x_{0}\&\neg(x_{0}\&\neg x_{1}\&\neg x_{2}\&x_{3})$
$f_{1}^{1}(x_{0},x_{1})=\neg x_{0}\&x_{1}$
$f_{1}^{2}(x_{0},x_{1},x_{2},x_{3})=\neg x_{0}\&(x_{1}|(x_{2}\&x_{3}))$
$f_{2}^{1}(x_{0},x_{1},x_{2},x_{3})=\neg x_{0}\&(x_{1}\&x_{2}\&x_{3})$
$f_{2}^{2}(x_{0},x_{1},x_{2},x_{3})=x_{0}\&(\neg x_{1}\&\neg x_{2}\&\neg x_{3})$
$f_{3}^{1}(x_{0},x_{1},x_{2},x_{3})=\neg x_{0}\&(x_{1}|x_{2}|x_{3})$
$f_{3}^{2}(x_{0},x_{1},x_{2},x_{3})=\neg x_{0}\&(x_{1}|x_{2}|x_{3})$
Under the asynchronous update mode, the dynamics of the PBN is governed by the
STG depicted in Fig. 1.
### 3.3 Reinforcement Learning
The main task of reinforcement learning (RL) is to solve sequential decision
problems by optimising a cumulative reward function. A _policy_ is a strategy
that determines which action to take, and an _optimal policy_ is one that selects the actions maximising the future cumulative reward. It can be obtained by solving the _Bellman equation_, which expresses
the relationship between the value of a state and the expected future rewards:
$V(s)=\max_{a}[R_{a}(s,s^{\prime})+\gamma\sum_{s^{\prime}}P(s^{\prime}\mid
s,a)V(s^{\prime})],$
where $V(s)$ is the value of state $s$, $R_{a}(s,s^{\prime})$ is the immediate
reward, $P(s^{\prime}\mid s,a)$ is the transition probability to the next
state $s^{\prime}$, and $\gamma$ is the discount factor. The equation guides
the RL agent’s decision-making by considering both immediate rewards and the
discounted value of future states, forming the basis for reinforcement
learning algorithms. To find an approximate solution to the Bellman equation,
the $Q$ function is considered, which is defined as the total discounted reward
received after taking action $a$ in state $s$:
$Q(s,a)=R_{a}(s,s^{\prime})+\gamma\sum_{s^{\prime}}P(s^{\prime}\mid
s,a)\max_{a^{\prime}}Q(s^{\prime},a^{\prime}).$
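In the tabular setting, this fixed point is approached by repeatedly nudging $Q(s,a)$ towards the sampled Bellman target, as in the following generic Q-learning sketch (not specific to any of the methods discussed below):

```python
from collections import defaultdict

Q = defaultdict(float)  # Q-table; DRL methods replace this with a neural network

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    # Move Q(s, a) towards the sampled target r + gamma * max_a' Q(s', a').
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```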
### 3.4 Q function approximations
In the case of large state-action spaces, the $Q$ function values often cannot
be determined, therefore they are approximated using DRL. It was shown that as
the agent explores the environment this approximation converges to the true
values of $Q$ [27]. Under the assumption that the environment is stationary,
i.e., the reward function and transition probabilities do not change in time,
one can keep evaluating the agent on new states without affecting its ability
to train itself [17].
Figure 1: STG of the PBN defined in Example 3.5 under the asynchronous update mode. Shaded states are the attractor states of the three attractors, i.e., two fixed-point attractors $A_{1}=\\{(0,0,0,0)\\}$ and $A_{2}=\\{(0,1,0,1)\\}$, and one multi-state attractor $A_{3}=\\{(1,0,0,0),(1,0,1,0)\\}$.
#### 3.4.1 Branching Dueling Q-Network
Different DRL-based approaches can be used for the approximation of the $Q$
function. In this study, we focus on the Branching Dueling Q-Network (BDQ)
approach introduced in [25] as an extension of another well-known approach,
i.e., the Dueling Double Deep Q-Network (DDQN) [10].
BDQ deep neural network structures are designed to address complex and high-
dimensional action spaces. Instead of using a single output layer for all
actions, BDQ has multiple branches, each responsible for a specific subset of
actions. BDQ aims to enhance the scalability and sample efficiency of
reinforcement learning algorithms in complex scenarios. It can be thought of
as an adaptation of the dueling network into the action branching
architecture. The dueling architecture uses two separate artificial neural
networks, i.e., the _target network_ for evaluation and the _controller
network_ for selection of actions. Its main benefit is that it avoids
overestimating Q-values, can more rapidly identify action redundancies, and generalises more efficiently by learning a general Q-value that is shared
across many similar actions.
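A minimal PyTorch sketch of such an action-branching dueling head is given below; the single shared trunk and the layer sizes are illustrative simplifications of the architecture in [25]:

```python
import torch.nn as nn

class BranchingDuelingQNet(nn.Module):
    # Shared trunk, one state-value head, and one advantage head per action
    # branch; each branch's Q-values combine the state value with its
    # mean-centred advantages, as in the dueling architecture.
    def __init__(self, state_dim, n_branches, actions_per_branch, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.adv_heads = nn.ModuleList(
            nn.Linear(hidden, actions_per_branch) for _ in range(n_branches))

    def forward(self, s):
        h = self.trunk(s)
        v = self.value(h)  # shape: (batch, 1)
        return [v + a - a.mean(dim=1, keepdim=True)  # dueling combination
                for a in (head(h) for head in self.adv_heads)]
```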
## 4 Formulation of the control problem
### 4.1 Pseudo-attractors
Unfortunately, obtaining the attractor landscape for a large BN/PBN network,
i.e., the family of all its attractors, is a challenging problem by itself and
one cannot expect to be in possession of this information in advance. Because
our aim is to devise a scalable computational framework for the control of
large network models based on DRL, we need to be able to identify the BN/PBN
attractors during training, i.e., exploration of the DRL environment. For this
purpose, we first introduce the notion of a _pseudo-attractor_. Then, we
proceed to define the problem of _source-target attractor control_.
In general, identifying attractors of a large Boolean network is a
computationally demanding task. Finding an attractor with the shortest period
is an NP-hard problem [2]. Moreover, in the case of classical PBNs, the fix-
point and limit cycle attractors correspond to the irreducible sets of states
in the underlying Markov chain [20]. For large-size PBNs with different
predictors for numerous individual genes, the limit cycle attractors may be
large, i.e., they may consist of many states. Nevertheless, usually states of
an irreducible set are not revisited with the same probability. From the point
of view of the control problem in the context of cellular reprogramming, only
the frequently revisited states of an attractor are the relevant ones since
they correspond to phenotypical cellular states that are observable in the
lab. This makes these states ‘recognisable’ for the application of cellular
reprogramming interventions in practice in accordance with the control
strategy suggested by our computational framework. We refer to the subset of
frequently revisited states of an attractor as a _pseudo-attractor_
associated with the attractor and define it formally as follows.
###### Definition 4.1 (Pseudo-attractor).
Let $A$ be an attractor of a PBN in the classical formulation, i.e., an
irreducible set of states of the Markov chain underlying the PBN. Let $n:=|A|$
be the size of the attractor $A$ and let $\mathbb{P}_{A}$ be the unique
stationary probability distribution on $A$. The _pseudo-attractor_ associated
with $A$ is the maximal subset $PA\subseteq A$ such that for all $s\in PA$ it
holds that $\mathbb{P}_{A}(s)\geq\frac{1}{n}$. The states of a pseudo-attractor are referred to as _pseudo-attractor states_.
The correctness of the definition is guaranteed by the fact that the state
space of the underlying Markov chain of a PBN is finite and that the Markov
chain restricted to the attractor states is irreducible. It is a well known
fact that all states of a finite and irreducible Markov chain are positive
recurrent. In consequence, the attractor restricted Markov chain has a unique
stationary distribution. Furthermore, for any PBN attractor there exists a
non-empty pseudo-attractor as stated by the following lemma.
###### Observation 4.2.
Let $A$ be an attractor of a PBN. Then there exists a pseudo-attractor
$PA\subseteq A$ such that $|PA|\geq 1$.
###### Proof.
Let $n$ be the size of the attractor $A$, i.e., $n:=|A|$. Since the underlying
Markov chain of the PBN restricted to $A$ is irreducible and positive
recurrent, it has a unique stationary distribution, which we denote
$\mathbb{P}_{A}$. We proceed to show that there exists at least one state
$s^{\prime}\in A$ such that $\mathbb{P}_{A}(s^{\prime})\geq\frac{1}{n}$. For
this, let us assume that no such state exists. Then, we have that
$\sum_{s\in A}\mathbb{P}_{A}(s)<\sum_{s\in A}\frac{1}{n}=n\cdot\frac{1}{n}=1.$
The left-hand side of the above inequality is strictly less than 1 and hence
$\mathbb{P}_{A}$ is not a probability distribution on $A$, which leads to a
contradiction. In consequence, $|PA|\geq 1$. ∎
###### Observation 4.3.
In the case of the uniform stationary distribution on an attractor, the
associated pseudo-attractor is equal to the attractor: Let $A$ be an attractor
of a PBN such that the unique stationary distribution of the underlying Markov
chain of the PBN restricted to $A$ is uniform. Then, for the pseudo-attractor
$PA$ associated with $A$ it holds that $PA=A$.
###### Proof.
Let $n$ be the size of the attractor $A$. By the assumption of uniformity of
the stationary distribution $\mathbb{P}_{A}$ it holds that
$\mathbb{P}_{A}(s)=\frac{1}{n}$ for each $s\in A$. Since the pseudo-attractor
$PA$ is the maximal subset of $A$ such that $\mathbb{P}_{A}(s)\geq\frac{1}{n}$
for each $s\in PA$, it follows that $PA=A$. ∎
Finally, we argue that Def. 4.1 of the pseudo-attractor straightforwardly
extends to BNs under the asynchronous update mode and that Obs. 4.2 and Obs.
4.3 remain valid in this case. Indeed, the asynchronous dynamics of the BN
restricted to a multi-state attractor of the network is a finite and
irreducible Markov chain. Therefore, in the continuation, we use the notion of
the pseudo-attractor both in the context of PBNs and BNs.
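Given an estimate of the stationary distribution on an attractor, e.g., obtained from a long simulation restricted to its states, Def. 4.1 translates directly into code (a sketch with our own naming):

```python
def pseudo_attractor(attractor_states, pi):
    # Def. 4.1: keep the states of attractor A whose stationary probability
    # is at least 1/|A|; pi maps each state of A to its estimated probability.
    threshold = 1.0 / len(attractor_states)
    return {s for s in attractor_states if pi[s] >= threshold}
```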
### 4.2 Source-target attractor control
With the biological context of cellular reprogramming in mind, we proceed to
define our control problem for BN and PBN models of GRNs. We start with
providing the definition of an _attractor-based control strategy_ , also
referred to as _control strategy_ for short. Then, we define _Source-Target
Attractor Control_ , and immediately follow with an example. Note that in Def.
4.4 pseudo-attractor states are considered and not pseudo-attractors. This is
due to the fact that the procedure that will be introduced in Sec. 5.1
identifies pseudo-attractor states but does not assign them to individual
pseudo-attractors. Note that our definition of the source-target attractor
control is a generalisation of the ‘attractor-based sequential instantaneous
control (ASI)’ problem for BNs defined in [23] as our formulation of the
control problem extends to the formalism of PBNs and pseudo-attractor states.
An exact ‘divide-and-conquer’-type algorithm for solving the ASI problem for
BNs was provided in [23], and implemented in the software tool CABEAN [24].
###### Definition 4.4.
(Attractor-based Control Strategy) Given a BN/PBN and a pair of its source-target (pseudo-)attractors, an attractor-based control strategy is a sequence
of interventions which drives the network dynamics from the source to the
target (pseudo-)attractor. Interventions are understood as simultaneous flips
(perturbations) of values for a subset of genes in a particular network state
and their application is limited to (pseudo-)attractor states. We will denote
simultanious inverventions as sets, e.g. $\\{x_{1},x_{3},x_{7}\\}$ and
strategies as lists of sets, e.g.
$[\\{x_{1}x_{7}\\},\\{x_{2}\\}\\{x_{2},x_{4}\\}]$. Furthermore, the _length of
a control strategy_ is defined as the number of interventions in the control
sequence. We refer to an attractor-based control strategy of the shortest
length as the _minimal attractor-based control strategy_.
###### Definition 4.5 (Source-Target Attractor Control).
Given a BN/PBN and a pair of source-target attractors or pseudo-attractor
states, find a minimal attractor-based control strategy.
###### Example 4.6.
The PBN from Example 3.5 may be controlled from state $(1,0,1,0)$ to
$(0,0,0,0)$ by intervening on $x_{1}$ and allowing the PBN to evolve in
accordance with its original dynamics:
$(1,0,1,0)\xrightarrow{i=1}(0,0,1,0)\xrightarrow{\text{evolution}}(0,0,0,0).$
However, the evolution is non-deterministic and the PBN may evolve to another
attractor, see Fig. 1:
$(0,0,1,0)\xrightarrow{\text{evolution}}(0,1,0,1).$
The only way to be sure to move to $(0,0,0,0)$ is to flip genes
$\\{x_{1},x_{3}\\}$ either simultaneously, which gives a strategy of length
one, or one-by-one, which gives a strategy of length two, i.e.,
$[\\{x_{3}\\},\\{x_{1}\\}]$.
## 5 DRL-based framework for the source-target attractor control
We propose a DRL-based computational framework, i.e., pbn-STAC, for solving
the source-target attractor control problem. Since our control problem is to
some extent similar to the one considered in [14] and our implementation is
based on the implementation therein, we compare our framework assumptions and
solutions to theirs during the presentation of pbn-STAC. In contrast to the
synchronous PBN update mode in [14], we consider the asynchronous update mode,
which is commonly considered as more appropriate for the modelling of
biological systems. The approach of [14] allows DRL agents to apply control
actions in any state of the PBN environment. Since our focus is on the
modelling of cellular reprogramming, we believe that this approach may be hard
to apply in experimental practice. It would require the ability to discern
virtually all cellular states, including the transient ones, which is
impossible with currently available experimental techniques. Since attractors
correspond to cellular types or, more generally, to cellular phenotypic
functional states, which are more easily observable in experimental practice,
we allow our DRL agent to intervene only in (pseudo-) attractor states in
consistency with the control problem formulation in Sec. 4.
In the control framework of [14], an action of the DRL agent can perturb at
most one gene at a time. However, for our formulation of the control problem
this is too restrictive. We have encountered examples of source-target
attractor pairs where no control strategy consisting of such actions exists.
Therefore, we need to relax this restriction. However, we do not want to
intervene on too many genes at once as it would be rather pointless – in the
extreme case of allowing all genes to be perturbed at once, one could simply
flip all of the unmatched gene values. Furthermore, such an intervention would
also be hard to realise or even be unworkable in real biological scenarios –
it is expensive and sometimes even impossible to intervene on many genes at
once in the lab. Hence, we introduce a parameter whose value defines an upper
limit for the number of genes that can be simultaneously perturbed. Based on
experiments (data not shown), we set this value to three. This setting is
sufficient for obtaining successful control strategies for all of our case
studies, yet low enough not to trivialise the control problem. Of course, the
value can be tuned to meet particular needs.
The DRL agent in [14] learns how to drive the network dynamics from any state
to the specified target attractor. With the context of cellular reprogramming
in mind, we consider in our framework only attractors as control sources and
targets with both of them specified. This models the process of transforming a
cell from one type into another. To be able to solve the source-target
attractor control problem, we define the reward function $R_{a}(s,s^{\prime})$
as:
$R_{a}(s,s^{\prime})=1000\cdot\mathbbm{1}_{TA}(s^{\prime})-|a|,$
where $\mathbbm{1}_{TA}$ is the indicator function of the target attractor, and
$|a|$ is the number of genes perturbed by applying action $a$. The loss
function is defined as the Mean Squared Error (MSE) between the predicted
Q-values and the target Q-values, calculated using the Bellman equation.
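A minimal Python sketch of this reward is given below; the binary action mask and the set encoding of target (pseudo-)attractor states are our illustrative choices, not the exact data structures of our implementation:

```python
import numpy as np

def reward(next_state, action, target_attractor_states):
    """Reward R_a(s, s'): 1000 when the target (pseudo-)attractor is reached,
    minus |a|, the number of genes flipped by the action."""
    # indicator function of the target attractor
    in_target = tuple(next_state) in target_attractor_states
    # |a| = number of simultaneously perturbed genes (binary mask)
    num_flips = int(np.sum(action))
    return 1000 * in_target - num_flips
```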
To train our DRL agent in each episode, we randomly choose a source-target
attractor pair and terminate each episode after 20 unsuccessful steps. This
approach however requires all the attractors to be known prior to training.
For networks with small numbers of nodes, the attractors can be computed.
However, as already mentioned, obtaining the list of all attractors for large
networks is a challenging problem by itself and one cannot expect the list to
be available in advance. To address this issue, we have introduced the notion
of a pseudo-attractor in Def. 4.1. Now we proceed to present a procedure for
detecting pseudo-attractor states which is exploited by our framework for
solving the control problem for large networks, i.e., ones for which
information on attractors is missing.
### 5.1 Pseudo-attractor states identification procedure
Identification of pseudo-attractors is hindered in large-size PBN models.
Nevertheless, pseudo-attractor states can be identified with simulations due
to their property of being frequently revisited. We propose the following Pseudo-
Attractor States Identification Procedure (PASIP) which consists of two steps
executed in two phases: Step I during PBN environment pre-processing and Step
II with two cases, referred to as Step II-1 and Step II-2, during DRL agent
training.
##### Pseudo-Attractor States Identification Procedure
1. I
During environment pre-processing, a pool of $k$ randomly selected initial
states is considered, from which PBN simulations are started. Each PBN
simulation is run for initial $n_{0}=200$ time steps, which are discarded,
i.e., the so-called burn-in period. Then, the simulation continues for
$n_{1}=1000$ time steps during which the visits to individual states are
counted. All states in which at least $5\%$ of the simulation time $n_{1}$ is
spent are added to the list of pseudo-attractor states.
2. II
During training, the procedure discerns two cases:
* II-1
The simulation of the PBN environment may enter a fix-point attractor not
detected in Step I. If the simulation gets stuck in a particular state for
$n_{2}=1000$ steps, the state is added to the list of pseudo-attractor states.
* II-2
During training, the simulation of the PBN environment may enter a multi-state
attractor that has not been detected in Step I. For this reason, a history of
the most recently visited states is kept. When the history buffer reaches the
size of $n_{3}=10000$ items, revisits for each state are counted and states
revisited more than 15% of times are added to the list of pseudo-attractor
states. If no such state exists, the history buffer is cleared and the
procedure continues for another $n_{3}$ time steps. The new pseudo-attractor
states are added provided no known pseudo-attractor state was reached.
Otherwise, the history information is discarded.
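To make Step I concrete, the following Python sketch implements the pre-processing phase under stated assumptions: `simulate_step` is a hypothetical function advancing the PBN by one asynchronous update, and network states are hashable tuples of gene values.

```python
from collections import Counter
import random

def pasip_step_one(simulate_step, num_genes, k=20, n0=200, n1=1000, threshold=0.05):
    """Step I of PASIP: from a pool of k random initial states, run PBN
    simulations, discard an n0-step burn-in, then count visits over n1 steps.
    States visited at least `threshold` (5%) of the time become
    pseudo-attractor states."""
    pseudo_attractor_states = set()
    for _ in range(k):
        state = tuple(random.randint(0, 1) for _ in range(num_genes))
        for _ in range(n0):                 # burn-in period, discarded
            state = simulate_step(state)
        visits = Counter()
        for _ in range(n1):                 # counting phase
            state = simulate_step(state)
            visits[state] += 1
        for s, c in visits.items():
            if c / n1 >= threshold:
                pseudo_attractor_states.add(s)
    return pseudo_attractor_states
```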
Notice that the procedure allows us to identify the pseudo-attractor states,
but does not allow us to assign them to individual pseudo-attractors.
Therefore, when training a DRL agent with the use of pseudo-attractor states,
we consider the control strategies between all source-target pairs of pseudo-
attractor states. This is why, in our formulation of the control problem, the DRL
agent is restricted to apply its actions only in PBN (pseudo-)attractor
states. Therefore, in the case of large networks where no information on
attractors is available, the environment pre-processing phase is important as
it provides an initial pool of pseudo-attractor states for the training.
When identifying pseudo-attractors, we do not know the size of the PBN
attractor with which the pseudo-attractor is associated. Therefore, we cannot
determine the exact probability threshold of Def. 4.1 for identifying
individual pseudo-attractors. The proposed procedure addresses this issue as
follows. In Step I, the chosen $5\%$ identification threshold enables the
identified pseudo-attractor to comprise a complete attractor, i.e., all of its
states, for attractors of size up to $20$ states, as follows from the
observation below.
###### Observation 5.1.
For any PBN attractor $A$, the size of the associated pseudo-attractor $PA$
found by Step I of the pseudo-attractor identification procedure with $k\%$
identification threshold is tightly upper bounded by
$|PA|\leq\begin{cases}\frac{100}{k}-1,&100\text{ mod }k=0\text{ and }|A|>\frac{100}{k},\\ \left\lfloor\frac{100}{k}\right\rfloor,&\text{otherwise}.\end{cases}$
###### Proof.
Let $S=\frac{100}{k}-1$ if $100\text{ mod }k=0$ and $|A|>\frac{100}{k}$, and
$S=\left\lfloor\frac{100}{k}\right\rfloor$ otherwise, and assume that we have an
attractor $A$ of size strictly greater than $S$. Then, by the pigeonhole
principle, one of the states has to be visited at most $\frac{100}{|A|}\%<k\%$
of the time, so it will not be recovered by the procedure. Hence
$|PA|\leq S$.
Conversely, if we have an attractor $A$ of size $S$ with a uniform stationary
distribution, then $|PA|$ will equal exactly $|A|$, so the upper bound for the
size of $PA$ is attained. ∎
In light of Def. 4.1 and Obs. 4.3, the associated pseudo-attractor of an
attractor of size $20$ can be identified only if the stationary distribution
on the attractor is uniform. If the attractor size is less than 20, it is
still possible to include all attractor states in the pseudo-attractor even if
the distribution is non-uniform. Notice that with decreasing size of the
unknown attractor, our procedure allows more and more pronounced deviations
from the uniform distribution while preserving the complete attractor
detection capability provided the stationary probabilities of all attractor
states are above the threshold.
If an attractor is of size larger than $20$ states, Step I of our procedure
with $5\%$ identification threshold will identify the associated pseudo-
attractor only if the stationary distribution is non-uniform and the pseudo-
attractor will contain only the most frequently revisited states. The maximum
possible size of the identified pseudo-attractor in this case is $19$, which
follows from Obs. 5.1. This is a desired property of our procedure as it keeps
the number of pseudo-attractor states manageable which has significant
positive influence on stabilising the model training as will be discussed
below.
The environment pre-processing phase provides an initial set of pseudo-
attractor states. The initial set is expanded in Step II during the model
training phase. Step II-1 allows us to identify plausible fix-point attractors.
Step II-2 enables the identification of plausible multi-state attractors.
However, here the focus is on smaller attractors than in the case of Step I:
we classify states as pseudo-attractor states if they are revisited at least
$15\%$ of the time, which corresponds to attractors of size up to 6. This is to restrict
the number of spurious pseudo-attractor states in order to stabilise model
training as explained next.
We have encountered an issue related to the late discovery of pseudo-attractor
states during training. As can be observed in Fig. 2(a), the procedure may
detect a new pseudo-attractor at any point in time which destabilises
training: a new state is detected at around 90000 steps, which causes abrupt,
significant increase of the average episode length. We propose a remedy to
this problem in Sec. 5.2. Our experiments with small networks, i.e., ones for
which exact attractors could be computed, revealed that it is beneficial to
underestimate the set of attractor states in Step I of the procedure as the
missed ones are usually discovered later during the training phase.
For big networks, e.g., with hundreds of nodes, the set of pseudo-attractors
may take a long time to stabilise. Yet this approach provides us with the
ability to process networks too big to be handled by traditional methods. The
computations of pseudo-attractors can be parallelised in a rather
straightforward way to speed up the detection. Furthermore, the notion of
pseudo-attractors can easily be generalised to other types of GRN models,
e.g., PBNs with perturbation, which is yet another well-established GRN
modelling framework.
### 5.2 Exploration probability boost
The approach of [14] implements the $\varepsilon$-greedy policy in order to
balance exploitation and exploration of the DRL agent during training. The
$\varepsilon$-greedy policy introduces the _exploration probability_
$\varepsilon$ and with probability $1-\varepsilon$ follows the greedy policy,
i.e., it selects the action $a^{*}=\arg\max_{a\in\mathcal{A}}Q(s,a)$, or with
the $\varepsilon$ exploration probability selects a random action. We set the
initial $\varepsilon$ value to 1 and linearly decrease it to 0.05 over the
initial 3000 steps of training.
Combining the original $\varepsilon$-greedy policy with online pseudo-
attractor states identification gives rise to unstable training. When trying
to train the DRL agent for our control problem while continuing to identify
pseudo-attractor states during training, the stability issues discussed in Sec.
5.1 were observed. To alleviate this negative influence on training, we
introduce the _exploration probability boost_ (EPB) to the
$\varepsilon$-greedy policy. The idea of EPB is to increase the exploration
probability $\varepsilon$ to $0.3$ after each discovery of a new
pseudo-attractor state, provided the current value of $\varepsilon$ is less
than $0.3$. After the increase, the linear decrease to 0.05 follows at the rate
of the initial decrease. As revealed by our experiments, this simple technique
makes learning much more stable. This is illustrated in Fig. 2(b), where the
agent discovers new pseudo-attractor states at around the 150000-th training
step; the improved $\varepsilon$-greedy policy significantly reduces the
resulting increase of the average episode length and leads to a quick return to
the previously attained low value.
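A minimal sketch of the resulting schedule follows (class and method names are ours; the decay rate reproduces the initial linear decrease from 1 to 0.05 over 3000 steps):

```python
class EpsilonScheduleWithEPB:
    """Linearly decaying epsilon-greedy schedule with an exploration
    probability boost (EPB) on discovery of a new pseudo-attractor state."""

    def __init__(self, eps_start=1.0, eps_end=0.05, decay_steps=3000):
        self.eps = eps_start
        self.eps_end = eps_end
        # rate of the initial linear decrease, reused after each boost
        self.rate = (eps_start - eps_end) / decay_steps

    def step(self):
        """Called once per training step: linear decay down to eps_end."""
        self.eps = max(self.eps_end, self.eps - self.rate)
        return self.eps

    def boost(self):
        """Called when a new pseudo-attractor state is discovered."""
        self.eps = max(self.eps, 0.3)
```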
((a)) Training without EPB.
((b)) Training with EPB.
Figure 2: Examples of average episode lengths during training run with and
without EPB. New pseudo-attractor states are being identified during training.
### 5.3 pbn-STAC implementation
We implement pbn-STAC as a fork of gym-PBN [8], an environment for modelling
of PBNs, and pbn-rl [9], a suite of DRL experiments for a different PBN
control problem formulated in [14]. In pbn-STAC, we have adapted the original
code of gym-PBN and pbn-rl to our formulation of the PBN control problem,
i.e., the source-target attractor control. First, we extend gym-PBN by adding
the asynchronous PBN environment to it. Second, to allow for simultaneous
perturbation of a combination of genes within a DRL action, we replace the
original DDQN architecture with the BDQ architecture [25], which, contrary to
DDQN, scales linearly with the dimension of the action space. The architecture
of our BDQ network is depicted in Fig. 3.
Figure 3: Schematic illustration of the BDQ network architecture
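As an illustration of this design choice, a minimal PyTorch sketch of a BDQ-style network for our setting is given below; we assume one branch per gene with a binary (leave/flip) sub-action, and the layer sizes are illustrative rather than those of our implementation. Note that the number of branches, and hence the output size, grows linearly with the number of genes.

```python
import torch
import torch.nn as nn

class BDQNetwork(nn.Module):
    """Branching dueling Q-network: a shared trunk feeds a state-value head
    and one advantage head per gene; Q-values are combined per branch."""

    def __init__(self, num_genes, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(num_genes, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        # one branch per gene, each with two sub-actions: leave or flip
        self.branches = nn.ModuleList(
            [nn.Linear(hidden, 2) for _ in range(num_genes)])

    def forward(self, state):
        h = self.trunk(state)
        v = self.value(h)                                   # (batch, 1)
        qs = []
        for branch in self.branches:
            adv = branch(h)                                 # (batch, 2)
            qs.append(v + adv - adv.mean(dim=1, keepdim=True))
        return torch.stack(qs, dim=1)                       # (batch, genes, 2)
```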
Third, we implement the pseudo-attractor states identification procedure and
the exploration probability boost technique. Finally, the framework takes as
input a source-target pair of attractors or pseudo-attractor states. In the
case of a multi-state target attractor, a training episode is regarded as
successful if any of the target attractor states is reached. For a multi-state
source attractor, we uniformly sample one of its states and set it as the
initial state. In this way, different source attractor states are considered
as initial during DRL agent training. Our DRL-based framework for the source-
target attractor control is made available via the dedicated pbn-STAC GitHub
repository [28].
## 6 Experiments
### 6.1 BN and PBN models of GRNs
Melanoma models. We infer BN and PBN environments of various sizes for the
melanoma GRN using the gene expression data provided by Bittner _et al._ in
[6]. This is a well-known dataset on melanoma, which is extensively studied in
the literature, see, e.g., [14, 5, 7]. To infer the BN/PBN structures, we
follow the approach of [14] implemented in gym-PBN [8]. It is based on the
coefficient of determination (COD), which is a measure of how well the
dependent variable can be predicted by a model, a perceptron in the case of
[14].
The original dataset of Bittner _et al._ is quantised by the method of [14].
Then, the BN and PBN models of sizes 7, 10, and 28 are obtained from these
data. The models are denoted as BN-x or PBN-x, respectively, where x is the
number of genes. To infer the predictors for the models, we set the number of
predictors for each gene to 1 for BN models and to 3 for PBN models. For each
gene, the algorithm selects the Boolean functions with the maximum COD values.
For more details on the inference method, we refer to [14].
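Schematically, the COD of a candidate predictor can be computed as in the following sketch, assuming binary-valued targets and predictions (the exact inference pipeline is that of [14]):

```python
import numpy as np

def cod(y_true, y_pred):
    """Coefficient of determination of a candidate Boolean predictor:
    relative error reduction over the best constant (majority) predictor."""
    p = np.mean(y_true)
    eps0 = min(p, 1 - p)                 # error of the majority predictor
    eps = np.mean(y_true != y_pred)      # error of the candidate predictor
    return (eps0 - eps) / eps0 if eps0 > 0 else 0.0
```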
Case study of B. bronchiseptica. We test our DRL-based control framework on an
existing model of a real biological system, i.e., the network of immune
response against infection with the respiratory bacterium _Bordetella
bronchiseptica_ , which was originally proposed and verified against empirical
findings in [26]. The computational model, denoted IRBB-33, is an asynchronous
BN consisting of 33 genes.
### 6.2 Performance evaluation methodology
We evaluate the performance of pbn-STAC in solving the control problem
formulated in Sec. 4 on BN and PBN models of melanoma of various sizes, i.e.,
incorporating 7, 10, and 28 genes. Moreover, we consider IRBB-33, the 33-gene
BN model. The dynamics under the asynchronous update mode is considered for
all models. The evaluation consists of the agent interacting with the
environment by taking actions, where an action consists of flipping the values
of a particular subset of genes in an attractor or pseudo-attractor state. We
recover a control strategy for a given source-target pair learned by a trained
DRL agent by initialising the BN/PBN environment with the source and target and
letting it run while applying the actions suggested by the DRL agent in the
source and all the intermediate (pseudo-)attractor states encountered on the
path from the source to the target. To evaluate the performance of pbn-STAC on
a particular BN/PBN model, we recover control strategies for all possible
ordered source-target pairs of the model’s attractors or pseudo-attractor
states. For all the BN models of melanoma and IRBB-33, we are able to compute
all their attractors and optimal control strategies for all pairs of
attractors using the CABEAN software tool with the attractor-based sequential
instantaneous source-target control (ASI) method. We use the information on
the exact attractors and optimal control strategies for BN models as ground
truth for the evaluation of pbn-STAC.
For PBN-7 and PBN-10, we compute the attractors with the NetworkX package
[12], which facilitates the analysis of complex networks in Python.
Unfortunately, due to very large memory requirements, we are unable to obtain
the attractors of the 28-gene PBN model of melanoma with this approach, so we
consider pseudo-attractor states. The optimal-length control strategies are
obtained for PBN-7 and PBN-10 models by exhaustive search.
Notice that due to the nondeterministic nature of our environments, i.e., the
asynchronous update mode, the results may vary between runs. Therefore, for
each source-target pair, we repeat the run 10 times. For each recovered
control strategy, we count its length, and record the information whether the
target attractor is reached. For a given BN/PBN model, we report the
percentage of successful control strategies found and the average length of
the successful control strategies.
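In outline, the evaluation loop for a single model can be sketched as follows, with `recover_strategy` a hypothetical helper that runs the trained agent from source to target and returns the list of applied interventions, or `None` if the target is not reached:

```python
from itertools import permutations

def evaluate(model_states, recover_strategy, runs=10):
    """Recover control strategies for all ordered source-target pairs,
    repeating each run 10 times; report the success rate and the average
    length of the successful strategies."""
    lengths, successes, total = [], 0, 0
    for source, target in permutations(model_states, 2):
        for _ in range(runs):
            strategy = recover_strategy(source, target)
            total += 1
            if strategy is not None:        # target attractor was reached
                successes += 1
                lengths.append(len(strategy))
    success_rate = successes / total
    avg_length = sum(lengths) / len(lengths) if lengths else float("nan")
    return success_rate, avg_length
```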
## 7 Results
### 7.1 Identification of pseudo-attractor states
We evaluate the performance of PASIP proposed in Sec. 5.1. For this purpose,
we run pbn-STAC with PASIP on the considered BN and PBN models. We present the
obtained results in Tab. 1. For each model, except the melanoma PBN-28 for
which the exact attractors could not be obtained, we provide the information
on the number of exact attractors, the total number of attractor states, and
the total numbers of identified pseudo-attractor states with our procedure. We
measure the precision of our approach defined as
$\textrm{TP}/(\textrm{TP}+\textrm{FP})$, where TP is the number of true
positives, i.e., the number of pseudo-attractor states that are attractor
states, and FP is the number of false positives, i.e., the number of states
identified as pseudo-attractor states which are not part of any of the
network’s attractor. We can conclude that for all cases in which the exact
attractors are known, our procedure does not introduce any FPs. Moreover, it
recovers all the attractor states in all but one case,
i.e., the BN-28 network, which has 2412 fix-point attractors, of which our
procedure correctly identifies 1053 (43.65%). This justifies our strong belief
that running our procedure for a longer time would recover a considerably
larger fraction also in the case of BN-28. In summary, the presented results
show that PASIP is reliable.
Model | #Attr. | #Attr. states | #PA-states | Precision
---|---|---|---|---
BN-7 | 6 | 6 | 6 | 100%
BN-10 | 26 | 26 | 26 | 100%
BN-28 | 2412 | 2412 | 1053 | 43.65%
IRBB-33 | 3 | 3 | 3 | 100%
PBN-7 | 4 | 4 | 4 | 100%
PBN-10 | 6 | 6 | 6 | 100%
PBN-28 | unknown | unknown | 14 | N/A
Table 1: Comparison of the number of exact attractor states and pseudo-attractor states identified by PASIP for various BN and PBN models. The fact that we were unable to obtain the exact attractors for the PBN-28 model is indicated with ‘unknown’. Attr. is short for Attractor and PA stands for Pseudo-attractor.

Model | #Attractors | Optimal Strategy | pbn-STAC
---|---|---|---
BN-7 | 6 | 1.0 | 3.98
BN-10 | 26 | 1.1 | 2.14
BN-28 | 2412 | 1.1 | -
IRBB-33 | 3 | 1.0 | 9.2
PBN-7 | 4 | 1.1 | 5.5
PBN-10 | 6 | 1.2 | 15.2
PBN-28 | unknown | unknown | 60.7
Table 2: Average lengths of pbn-STAC control strategies and optimal control
strategies obtained with CABEAN (BNs) or exhaustive search (PBNs) for all
source-target pairs of individual models. The fact that we were unable to
obtain the optimal strategy for the PBN-28 model is indicated with ‘unknown’.
### 7.2 Control of BN models of melanoma
We evaluate the ability of pbn-STAC to solve the control problem by comparing
the obtained results to the optimal ASI control strategies computed with
CABEAN. As can be seen in Tab. 2, the strategies obtained with pbn-STAC for
larger BN models tend to be longer on average compared to the optimal ones.
However, the overhead is rather stable across different models. We investigate
the issue of longer control strategies further by computing a histogram of
control strategy lengths for the BN-7 model provided in Fig. 5(a). It is
apparent that in most cases the control strategies are short and close to the
optimal ones. Nevertheless, there are a few cases of longer control strategies
that give rise to the higher average values. The longer strategies are present
due to the fact that the interventions suggested by the trained DRL-agent
often place the system in a so-called _weak basin of attraction_ of an
attractor, i.e., a set of states from which the attractor is reachable, but
not necessarily – the dynamics can still lead the system to another attractor
from these states due to non-determinism arising from the asynchronous update
mode. The strategies computed by CABEAN are optimal since they are obtained by
considering the so-called _strong basins of attraction_ , i.e., states from
which only a single attractor can be reached. Nevertheless, determining strong
basins is challenging, not to say impossible, for large networks (see [18] for
details). In light of this and the fact that pbn-STAC can handle larger
networks, the obtained results can be seen as reasonable and acceptable.
Unfortunately, due to the huge number of attractors of the BN-28 model, the
training of pbn-STAC on this model needs to be run for a much longer time and we
did not manage to finish it within our time limits. Notice that our training
procedure considers all ordered pairs of the attractors. Handling such cases
requires further research.
((a)) Usual reward
((b)) Improved reward
Figure 4: Training of the DRL agent on the IRBB-33 environment with different
reward schemes.
### 7.3 Control of the IRBB-33 model
In the case of the IRBB-33 network, we have to modify the reward scheme. As
can be seen in Fig. 4(a), the reward scheme introduced in Sec. 5, referred to
as the _mixed reward_ , does not lead to any improvement of the average
episode length during training over 200 000 training steps. After trying
different reward schemes for this network (data not shown), we found that the
following scheme
$\displaystyle R_{a}(s,s^{\prime})=-|a|+100\cdot(\mathbbm{1}_{TA}(s^{\prime})-1)$
improves the training of the DRL agent significantly, as can be seen in
Fig. 4(b), where convergence is achieved in tens of thousands of steps.
((a)) BN-7
((b)) BN-10
((c)) PBN-28
((d)) IRBB-33
Figure 5: Histograms of the control strategy lengths for the BN-7, BN-10, PBN-28, and IRBB-33 models.
The average control strategy length obtained with pbn-STAC is $9.2$, as
presented in Tab. 2. The length is again larger than in the optimal case, but
the overhead is consistent with the results obtained for the BN models
of melanoma. Again, as can be observed in Fig. 5(d), in the majority of cases
the strategies are of length one, which perfectly corresponds with the optimal
strategy. Unfortunately, there are a few very long ones, which give rise to
the higher average value.
### 7.4 Control of PBN models of melanoma
We run the pbn-STAC control framework on the three PBN models of melanoma. For
PBN-28, we are not able to compute the set of exact attractors, but we
identify 14 pseudo-attractor states. Unfortunately, we can not obtain the
optimal control strategies for this network with exhaustive search.
As shown in Tab. 2, the control strategies found by pbn-STAC are on average
longer than the optimal ones. Moreover, their lengths seem to increase with
the size of the network faster than in the case of BN models. Unfortunately,
the optimal result is not available for the PBN-28 model to make a comparison.
Although the average length for this network is high, the distribution is
heavily skewed with a long tail of longer control strategies as can be seen in
Fig. 5(c). Nevertheless, once again the majority of the source-target pairs
are controllable with very few interventions. This characteristic of the
control strategies obtained with pbn-STAC is consistent across different
models and their types.
## 8 Conclusions
In this study we formulated a control problem for the BN and PBN frameworks
under the asynchronous update mode that corresponds to the problem of
identifying effective cellular reprogramming strategies. We have developed and
implemented a computational framework, i.e., pbn-STAC, based on DRL that
solves the control problem. It allows finding proper control strategies that
drive a network from the source to the target attractor by intervening only in
other attractor states that correspond to phenotypical functional cellular
states that can be observed in the lab. Since identifying attractors of large
BNs/PBNs is a challenging problem by itself and we consider our framework as a
contribution towards developing scalable control methods for large networks,
we introduced the notion of a pseudo-attractor and developed a procedure that
identifies pseudo-attractor states during DRL agent training. We evaluate the
performance of pbn-STAC on a number of networks of various sizes and a
biological case study and compare its solutions with the exact, optimal ones
wherever possible.
The obtained results show the potential of the framework in terms of
effectiveness and at the same time reveal some bottlenecks that need to be
overcome to improve the performance. The major identified issue is related to
the long tails of the distributions of the lengths of strategies identified by
pbn-STAC, i.e., there are many strategies of lengths close to the optimal ones,
and a few which are very long. This negatively influences the average value.
Addressing this problem would allow us to significantly improve the
performance of pbn-STAC and make it effective on large models. We consider
these developments and the evaluations of pbn-STAC on models of large sizes as
future work.
Finally, we perceive our framework as rather straightforwardly adaptable to
other types of PBNs, such as PBNs with perturbations, or Probabilistic Boolean
Control Networks.
## References
* [1] A. Acernese et al. Double Deep-Q Learning-Based Output Tracking of Probabilistic Boolean Control Networks. IEEE Access, 2020.
* [2] T. Akutsu, S. Kuhara, O. Maruyama, and S. Miyano. Identification of genetic networks by strategic gene disruptions and gene overexpressions under a boolean model. Theoretical Computer Science, 298(1), 2003.
* [3] A.-L. Barabási, N. Gulbahce, and J. Loscalzo. Network medicine: a network-based approach to human disease. Nature Reviews Genetics, 12(1):56–68, 2011.
* [4] N. Beneš et al. Phenotype Control of Partially Specified Boolean Networks. In Proc. 21st International Conference on Computational Methods in Systems Biology (CMSB’23). Springer-Verlag, 2023.
* [5] N. Beneš, L. Brim, O. Huvar, S. Pastva, and D. Šafránek. Boolean network sketches: a unifying framework for logical model inference. Bioinformatics, 39(4), Apr. 2023.
* [6] M. Bittner et al. Molecular classification of cutaneous malignant melanoma by gene expression profiling. Nature, 406(6795):536–540, Aug. 2000.
* [7] Q. Du, Y. Lin, C. Ding, L. Wu, Y. Xu, and Q. Feng. Pharmacological activity of matrine in inhibiting colon cancer cells vm formation, proliferation, and invasion by downregulating claudin-9 mediated emt process and mapk signaling pathway. Drug Design, Development and Therapy, Volume 17:2787–2804, Sept. 2023.
* [8] C. Evangelos. gym-pbn. https://github.com/UoS-PLCCN/gym-PBN.
* [9] C. Evangelos. pbn-rl. https://github.com/UoS-PLCCN/pbn-rl.
* [10] S. Fujimoto, H. van Hoof, and D. Meger. Addressing function approximation error in actor-critic methods, 2018.
* [11] A. J. Gates and L. M. Rocha. Control of complex networks requires both structure and dynamics. Scientific Reports, 6:Article 24456, 2016.
* [12] A. A. Hagberg, D. A. Schult, and P. J. Swart. Exploring network structure, dynamics, and function using NetworkX. In Proceedings of the 7th Python in Science Conference (SciPy2008), 2008.
* [13] S. Huang, G. Eichler, Y. Bar-Yam, and D. E. Ingber. Cell fates as high-dimensional attractor states of a complex gene regulatory network. Physical Review Letters, 2005.
* [14] S. Moschoyiannis, E. Chatzaroulas, V. Šliogeris, and Y. Wu. Deep Reinforcement Learning for Stabilization of Large-scale Probabilistic Boolean Networks. IEEE Transactions on Control of Network Systems, 10(3):1412–1423, 2022.
* [15] C. E. H. Nishida, R. A. C. Bianchi, and A. H. R. Costa. A framework to shift basins of attraction of gene regulatory networks through batch reinforcement learning. Artificial Intelligence in Medicine, 107, 2020.
* [16] C. E. H. Nishida, A. H. R. Costa, and R. A. C. Bianchi. Control of gene regulatory networks basin of attractions with batch reinforcement learning. In Proc. 7th Brazilian Conference on Intelligent Systems. IEEE CS, 2018.
* [17] S. Padakandla. A survey of reinforcement learning algorithms for dynamically varying environments. ACM Computing Surveys, 54(6):1–25, July 2021.
* [18] S. Paul, C. Su, J. Pang, and A. Mizera. An efficient approach towards the source-target control of Boolean networks. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 17(6):1932–1945, 2020.
* [19] L. Paulevé. Marker and source-marker reprogramming of Most Permissive Boolean networks and ensembles with BoNesis. Peer Community Journal, 3:Article e30, 2023.
* [20] I. Shmulevich et al. Probabilistic Boolean networks: a rule-based uncertainty model for gene regulatory networks. Bioinformatics, 18(2), 2002.
* [21] U. Sirin, F. Polat, and R. Alhajj. Employing batch reinforcement learning to control gene regulation without explicitly constructing gene regulatory networks. In Proc. 23rd International Joint Conference on Artificial Intelligence, pages 2042–2048. AAAI Press, 2013.
* [22] C. Su and J. Pang. A dynamics-based approach for the target control of Boolean networks. In Proc. 11th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics, pages 50:1–50:8. ACM Press, 2020.
* [23] C. Su and J. Pang. Sequential temporary and permanent control of Boolean networks. In Proc. 18th International Conference on Computational Methods in Systems Biology, volume 12314 of Lecture Notes in Computer Science, pages 234–251. Springer-Verlag, 2020.
* [24] C. Su and J. Pang. CABEAN: A software for the control of asynchronous Boolean networks. Bioinformatics, 37(6):879–881, 2021.
* [25] A. Tavakoli, F. Pardo, and P. Kormushev. Action branching architectures for deep reinforcement learning. In AAAI Conference on Artificial Intelligence, 2018.
* [26] J. Thakar, A. K. Pathak, L. Murphy, R. Albert, and I. M. Cattadori. Network model of immune responses reveals key effectors to single and co-infection dynamics by a respiratory bacterium and a gastrointestinal helminth. PLoS Computational Biology, 8(1):e1002345, Jan. 2012.
* [27] C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3–4):279–292, May 1992.
* [28] J. Zarzycki. https://github.com/jakub-zarzycki2022/gym-pbn-stac, 2023.
# Performance of wave function and Green’s function methods for non-
equilibrium many-body dynamics
Cian C. Reeves Department of Physics, University of California, Santa
Barbara, Santa Barbara, CA 93117 Gaurav Harsha Department of Chemistry,
University of Michigan, Ann Arbor, Michigan 48109, USA Avijit Shee
Department of Chemistry, University of California, Berkeley, USA Yuanran Zhu
Applied Mathematics and Computational Research Division, Lawrence Berkeley
National Laboratory, Berkeley, CA 94720, USA Chao Yang Applied Mathematics
and Computational Research Division, Lawrence Berkeley National Laboratory,
Berkeley, CA 94720, USA K Birgitta Whaley Department of Chemistry,
University of California, Berkeley, USA Berkeley Center for Quantum
Information and Computation, Berkeley Dominika Zgid Department of Chemistry,
University of Michigan, Ann Arbor, Michigan 48109, USA Department of Physics,
University of Michigan, Ann Arbor, Michigan 48109, USA Vojtěch Vlček
Department of Chemistry and Biochemistry, University of California, Santa
Barbara, Santa Barbara, CA 93117 Department of Materials, University of
California, Santa Barbara, Santa Barbara, CA 93117
###### Abstract
Theoretical descriptions of non-equilibrium dynamics of quantum many-body
systems essentially employ either (i) explicit treatments, relying on
truncation of the expansion of the many-body wavefunction, (ii) compressed
representations of the many-body wavefunction, or (iii) evolution of an
effective (downfolded) representation through Green’s functions. In this work,
we select representative cases of each of the methods and address how these
complementary approaches capture the dynamics driven by intense field
perturbations to non-equilibrium states. Under strong driving, the systems are
characterized by strong entanglement of the single particle density matrix and
natural populations approaching those of a strongly interacting equilibrium
system. We generate a representative set of results that are numerically exact
and form a basis for critical comparison of the distinct families of methods.
We demonstrate that the compressed formulation based on similarity-transformed
Hamiltonians (coupled cluster approach) is practically exact in weak fields
and, hence, weakly or moderately correlated systems. Coupled cluster, however,
struggles for strong driving fields, under which the system exhibits strongly
correlated behavior, as measured by the von Neumann entropy of the single
particle density matrix. The dynamics predicted by Green’s functions in the
(widely popular) $GW$ approximation are less accurate but improve
significantly upon the mean-field results in the strongly driven regime.
## I Introduction
The properties of non-equilibrium quantum systems have drawn great attention
in recent years. Driving a system can alter its properties, leading to phase
changes[1, 2], exotic states of matter[3, 4] and allowing for tuning of
material properties through continuous driving[5, 6]. Motivated by new
experimental techniques and observations, the theoretical study of these non-
equilibrium systems is also attracting wide attention[7, 8, 9, 10, 11, 12,
13]. This has led to a wide range of theoretical techniques that have found
success for equilibrium problems being extended to the non-equilibrium
regime. The available ab-initio techniques that are systematically improvable
employ either wave functions or many-body Green’s functions[14] as the basis of
their solution. The former are typically used in quantum chemistry, while the
latter are a suitable framework for condensed matter physics problems. There is,
however, a lack of critical assessment of their most appropriate domains of
applicability across various non-equilibrium regimes. While these approaches
are, to a large extent, complementary formulations of the time dependent
problem, they practically tackle the many-body description differently and
hence face distinct challenges which are explored in this paper.
Among wave function methods, the time dependent exact diagonalization (TD-ED)
is equivalent to the time evolution of the Schrödinger equation and hence it
provides benchmark results. Due to its cost, it is applicable to only small
finite systems. In the quantum chemistry community, it is referred to as time
dependent full configuration interaction (TD-FCI) (note that in ED the
system Hamiltonian is diagonalized, providing all the eigenvectors, while in
TD-FCI customarily only a few eigenvectors are obtained). The time dependent
methods based on an approximate (truncated) CI expansion enjoy much success;
for example, the time-dependent configuration interaction singles (TD-CIS) [16]
method is very well suited for ionization dynamics. However, such methods have
very limited applications, and active space-based CI methods are often
preferred. One of the methods in that direction is the extension of multi-
configurational time dependent Hartree approaches (MCTDH) [17] to atomic
problems such as ionization of helium [18]. Several other methods in the same
vein are also known, such as the time-dependent restricted active space CI
(TD-RASCI) [19], and the complete active space self-consistent field (TD-
CASSCF) [20]. The computational cost of those techniques scales exponentially
with the size of the Hilbert space; each particular formulation has a
different scaling prefactor, but all are generally limited to small systems.
The reduced (non-exponential) scaling wavefunction-based approaches leverage
some form of compressed representation, e.g., the time dependent density matrix
renormalization group (TD-DMRG) method [21, 22, 23, 24] or time dependent
formulations of coupled cluster (TD-CC) methods [25, 26]. For these approaches,
the price of the reduced computational scaling is a restricted domain of
most suitable applicability. TD-DMRG is usually
most successful in 1D systems, while TD-CC, following the ground-state
version, is typically suitable for a wide variety of weakly correlated
problems. [27, 28, 29, 30] One of the earliest formulations of TD-CC, of
particular interest to this work, was proposed by Arponen [7]. In recent
years, new developments in theory and implementation have emerged. [31, 32,
33, 34]
In contrast to the wavefunction methods, the methods applied to condensed
matter systems are most conveniently based upon the Green’s function (GF)
formulation of the many-body perturbation theory (MBPT)[35, 36, 37]. This
approach recasts the many-body problem onto a set of effective correlators,
most typically the one-quasiparticle GF[38], capturing the probability
amplitude associated with the propagation of a quasiparticle. As an effective
single particle quantity, GF MBPT can be efficiently implemented[39, 40]
treating systems with thousands of electrons [41, 42, 43]. The complexity is
accounted for via the space-time nonlocal self-energy that is systematically
built by expansion in the system fluctuations[44, 45, 46], making GF MBPT one
of the most powerful methods for studying electronic phenomena in large scale
systems. It is formally straightforward to extend GF MBPT to non-equilibrium
settings via the set of integrodifferential Kadanoff-Baym equations (KBEs)[47,
48, 49]. The cost of solving KBEs scales cubically with the number of
timesteps[50]. Multiple cost reduction schemes have been proposed recently[51,
52, 53, 54, 55, 56, 57, 58]. Yet, the most widely used of these is the
Hartree-Fock generalized Kadanoff-Baym ansatz (HF-GKBA)[55], which can be
formulated in a linear-scaling fashion[56, 59, 60, 61, 62]. In some cases the
HF-GKBA can even improve on elements of the KBEs, specifically the tendency of
the KBEs to be overdamped and reach artificial steady states[63, 64].
In this paper, we critically compare the distinct formulations of the time
dynamics and illustrate their performance on a series of numerical benchmarks
for a generalized Hubbard chain driven out of equilibrium by an external
pulse. Using TD-CI, TD-CC, the KBEs, the HF-GKBA, and time dependent Hartree-
Fock (TD-HF), we study the dynamics of the density matrix and the effects
of different strengths of non-equilibrium perturbation. In general, the TD-CI
results serve as a near-exact baseline against which other methods are compared.
The paper first reviews the essential parts of the distinct formulations and
approximations in the theory section. The results then summarize the findings
for lattice problems perturbed by pulses of increasing strengths that drive
the system towards the regime which is shown to exhibit characteristics of
strong correlations. We map the capacity of the distinct formalisms to capture
the dynamics for those regimes and also for systems with short- and long-range
interactions, with and without non-local memory effects (in the context of GF
formalisms), and confirm our findings by testing systems of various sizes. The
details of the theory and implementation of the methods are in the
supplementary information[65], while the main text focuses on the analysis of
the results, which are summarized in the conclusion section.
## II Theory
In this work, we explore representative formulations of three conceptually
distinct methodologies: configuration interaction (based on truncation of the
wavefunction expansion in the space of configurations represented by
determinants), the coupled cluster approach (employing the compression of the
many-body wavefunction), and many-body Green’s function techniques (based on a
downfolded effective representation of excitation evolution). In the following
subsections we present the key underlying ideas, and we leave much of the
technical details of each approach to the supplemental information[65].
### II.1 Time-dependent configuration interaction
The time dependent full configuration interaction (TD-FCI) approach is taken
as a reference point for assessing the performance of the other methods. In
TD-FCI, we express the time-dependent Schrödinger equation in configuration
space:
$i\frac{\partial C_{I}}{\partial t}=\sum_{J}H_{IJ}(t)C_{J}$ (1)
where $I$ and $J$ stand for all electronic configurations generated for a
given problem, and $C_{I}$ and $C_{J}$ are the coefficients of those
configurations. To solve this equation for a general time-dependent
Hamiltonian we numerically implement the time-ordering and exponential of a
large matrix, $H_{IJ}$, based on the Lanczos technique [66]. With this
approach, we construct a propagator in a small subspace based on a reference
form of the wavefunction at time $t$, i.e., $|\Psi(t)\rangle$. For example, at
$t=t_{0}$, we build a propagator based on the ground state wave function of
the time-independent Hamiltonian. Such a propagator is however valid only for
a short time. Through a regular renewed search of the “active” subspace we can
ultimately propagate the TD-FCI expansion for long times. The procedure is
numerically stable and yields unitary dynamics; the details of the
implementation are provided in the supplemental information[65].
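A minimal sketch of a single Krylov (Lanczos-type) propagation step is given below; it assumes a Hermitian Hamiltonian available as a matrix-vector product and illustrates the short-time subspace propagator described above, not our production implementation:

```python
import numpy as np
from scipy.linalg import expm

def krylov_step(apply_H, psi, dt, m=20):
    """Propagate |psi> by exp(-i H dt) within an m-dimensional Krylov
    subspace built by the Lanczos recurrence."""
    n = psi.size
    V = np.zeros((m, n), dtype=complex)    # orthonormal Krylov vectors
    T = np.zeros((m, m))                   # tridiagonal projected Hamiltonian
    beta0 = np.linalg.norm(psi)
    V[0] = psi / beta0
    for j in range(m):
        w = apply_H(V[j])
        alpha = np.real(np.vdot(V[j], w))  # diagonal Lanczos coefficient
        T[j, j] = alpha
        w = w - alpha * V[j]
        if j > 0:
            w = w - T[j, j - 1] * V[j - 1]
        if j + 1 < m:
            b = np.linalg.norm(w)
            if b < 1e-12:                  # invariant subspace reached
                m = j + 1
                break
            T[j + 1, j] = T[j, j + 1] = b
            V[j + 1] = w / b
    # exponentiate the small projected matrix and map back to full space
    c = expm(-1j * dt * T[:m, :m])[:, 0] * beta0
    return V[:m].T @ c
```

In practice, the subspace is rebuilt regularly from the current wavefunction, mirroring the renewed search of the “active” subspace described above.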
### II.2 Time-dependent coupled cluster
Coupled cluster (CC) is another formalism expanding the wavefunction as a
linear combination of configurations. In CC, one introduces a non-Hermitian
approximation to expectation values
$\braket{A}_{CC}=\braket{\Psi_{L}}{A}{\Psi_{R}}$, where the bra and ket
wavefunctions are defined as
$\displaystyle\ket{\Psi_{R}}$ $\displaystyle=e^{T}\ket{\Phi},$ (2a)
$\displaystyle\bra{\Psi_{L}}$
$\displaystyle=\bra{\Phi}\left(1+Z\right)e^{-T},$ (2b)
where $T=\sum_{\mu}\tau_{\mu}\gamma_{\mu}^{\dagger}$ and
$Z=\sum_{\mu}z_{\mu}\gamma_{\mu}$ are composed of particle-hole excitation
$\gamma_{\mu}^{\dagger}$ and de-excitation $\gamma_{\mu}$ operators, defined
with respect to the Slater determinant reference $\ket{\Phi}$.
In time-dependent CC, the amplitudes $\tau_{\mu}$ and $z_{\mu}$ carry all the
time-dependence, and are found by making the action integral,
$\begin{split}S&=\int dt\,\mathcal{L}(t)\\ &=\int dt\,\braket{\Phi}{\left(1+Z(t)\right)e^{-T(t)}\left(i\frac{\partial}{\partial t}-H(t)\right)e^{T(t)}}{\Phi},\end{split}$ (3)
stationary with respect to variations in $\tau_{\mu}(t)$ and $z_{\mu}(t)$.
When no truncation is imposed on the operators $T$ and $Z$, CC is equivalent
to exact diagonalization, i.e., equivalent to FCI (discussed above) where the
wavefunction is constructed as a linear combination of all possible Slater
determinants within the given spin-orbital basis. In practice, however,
truncation is necessary. Fortunately, in weak-to-moderately correlated
systems, it is sufficient to include only single and double particle-hole
excitation (de-excitation) operators in $T$ ($Z$). This leads to the well
known CCSD approximation which is used for all the results and discussion
presented in this paper.
It is noteworthy that due to the exponential parameterization, the ket
wavefunction in truncated CC includes contributions from all Slater
determinants, although these contributions are factorized in terms of the
lower-rank parameters. This is in contrast with truncated CI, where the cut-
off is imposed explicitly on the number of determinants. At the same time, the
asymmetric nature of CC expectation values leads to a violation of the
variational principle. Consequently, in TD-CC, real observable quantities may
develop unphysical or complex parts. However, the unphysical terms often
serve as a diagnostic tool for the CCSD approximation. In the moderately
interacting regimes, where CC works well, the expectation values are well
behaved. The increasing magnitude of unphysical expectation values generally
coincides with the onset of strong correlations, where CC is known to fail.
This point will be discussed in the sections below. Further details regarding
the derivation and implementation of the TD-CC equations are provided in the
supplemental information. [65]
### II.3 Green’s function methods
Finally we employ two approaches based on the time evolution of the one-
particle Green’s function (GF), i.e., the time dependent probability amplitude
associated with propagation of a quasiparticle. At non-equilibrium, the GF is
an explicit function of two time arguments, and their respective ordering on
the real time axis (as well as imaginary time axis for the finite temperature
formalisms). The first is the set of Kadanoff-Baym equations, representing
theoretically exact integro-differential equations for the two-time GF, given
by[67]
$\begin{split}[-\partial_{\tau}-h]G^{\mathrm{M}}(\tau)&=\delta(\tau)+\int_{0}^{\beta}d\bar{\tau}\,\Sigma^{\mathrm{M}}(\tau-\bar{\tau})G^{\mathrm{M}}(\bar{\tau}),\\
i\partial_{t_{1}}G^{\rceil}(t_{1},-i\tau)&=h^{\textrm{HF}}(t_{1})G^{\rceil}(t_{1},-i\tau)+I^{\rceil}(t_{1},-i\tau),\\
-i\partial_{t_{2}}G^{\lceil}(-i\tau,t_{2})&=G^{\lceil}(-i\tau,t_{2})h^{\textrm{HF}}(t_{2})+I^{\lceil}(-i\tau,t_{2}),\\
i\partial_{t_{1}}G^{\lessgtr}(t_{1},t_{2})&=h^{\textrm{HF}}(t_{1})G^{\lessgtr}(t_{1},t_{2})+I_{1}^{\lessgtr}(t_{1},t_{2}),\\
-i\partial_{t_{2}}G^{\lessgtr}(t_{1},t_{2})&=G^{\lessgtr}(t_{1},t_{2})h^{\textrm{HF}}(t_{2})+I_{2}^{\lessgtr}(t_{1},t_{2}).\end{split}$ (4)
Here $\Sigma$ is a space-time nonlocal effective potential embodying all many-
body interactions; $G^{\mathrm{M},<,>,\rceil,\lceil}$ are various components
of the Green’s function that depend on which of the time arguments are in real
time versus in imaginary time and the $I^{<,>,\rceil,\lceil}$ are integrals
that account for many-body correlation effects in the system. A full
description and discussion of these quantities is given in the supplemental
information[65].
The KBEs are approximate only through the choice of the self-energy. In this
work, we focus on the $GW$ approximation, representing the arguably most
popular choice for simulations of condensed matter systems. In essence, this
approach accounts for dynamical density-density interactions, which dominate
the correlation effects in weakly and moderately correlated systems (such as
semiconductors)[44, 68, 14]. The combination of integrals in the equations
above and their two-time nature means solving the KBEs scales cubically in the
number of time steps, making these calculations prohibitively expensive.
Furthermore, the KBEs often suffer from issues of artificial damping coming
from the approximate self-energy[64, 63].
Given the cost of KBE propagation, the KBEs are often solved only approximately, and
the most common route is to employ the Hartree-Fock Generalized Kadanoff-Baym
ansatz (HF-GKBA). It assumes a particular self-energy for the time diagonal
component of the GF and takes the off-diagonal self-energy to be approximated
by only the bare exchange interactions (hence the Hartree-Fock (HF)
approximation in its name). By using the HF form of the time off-diagonals, it
is possible to derive an ordinary differential equation (ODE) scheme for the
evolution of the equal-time component of the GF. This translates to a time-
linear formalism that recasts the equation of motion using the explicit form
of the two particle GF instead of making use of $\Sigma$ [69, 56]. The
equations of motion for this scheme are given by:
$\begin{split}i\partial_{t}G^{<}_{ij}(t)&=[h^{\textrm{HF}}(t),G^{<}(t)]_{ij}+[I+I^{\dagger}]_{ij}(t),\\
I_{ij}(t)&=-i\sum_{klp}w_{iklp}(t)\mathcal{G}_{lpjk}(t),\\
i\partial_{t}\mathcal{G}_{ijkl}(t)&=[h^{(2),\textrm{HF}}(t),\mathcal{G}(t)]_{ijkl}+\Psi_{ijkl}(t)+\Pi_{ijkl}-\Pi_{lkji}^{*}.\end{split}$ (5)
Here $\mathcal{G}(t)$ is a two particle propagator, $\Psi_{ijkl}$ accounts for
pair correlations built up due to two-particle scattering events and
$\Pi_{ijkl}$ accounts for polarization effects[69]. While approximate, the HF-
GKBA does not suffer from artificial damping as the KBEs do and thus in some
cases offers improved results over the KBEs at a much reduced cost. This
further enables the use of more advanced forms of $\Sigma$, the study of
larger systems, and the ability to perform longer time evolution. A more
detailed description of the HF-GKBA and the ODE scheme are given in the
supplemental information[65].
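Schematically, the time-linear scheme amounts to integrating the coupled ODE system of Eq. (5) for $G^{<}(t)$ and $\mathcal{G}(t)$; a generic fourth-order Runge-Kutta step, with the right-hand sides of Eq. (5) supplied as a hypothetical callable `rhs`, could look as follows:

```python
def rk4_step(rhs, t, G_less, G2, dt):
    """One RK4 step for the coupled HF-GKBA equations of motion, where
    rhs(t, G_less, G2) returns the time derivatives (dG<, dG2) as arrays."""
    k1g, k1G = rhs(t, G_less, G2)
    k2g, k2G = rhs(t + dt / 2, G_less + dt / 2 * k1g, G2 + dt / 2 * k1G)
    k3g, k3G = rhs(t + dt / 2, G_less + dt / 2 * k2g, G2 + dt / 2 * k2G)
    k4g, k4G = rhs(t + dt, G_less + dt * k3g, G2 + dt * k3G)
    G_less = G_less + dt / 6 * (k1g + 2 * k2g + 2 * k3g + k4g)
    G2 = G2 + dt / 6 * (k1G + 2 * k2G + 2 * k3G + k4G)
    return G_less, G2
```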
### II.4 Model Systems
To benchmark different time-dependent methods, we use a generalization of the
one-dimensional Hubbard model, which is driven out of equilibrium with the
help of an electric-field pulse. The Hamiltonian for the system is written as:
$\begin{split}\mathcal{H}&=-J\sum_{\langle i,j\rangle\sigma}c^{\dagger}_{i\sigma}c_{j\sigma}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}\\
&\quad+V\sum_{i}(-1)^{i}n_{i}+\sum_{ij}h^{\mathrm{N.E}}_{ij}c_{i}^{\dagger}c_{j}.\end{split}$ (6)
The first two terms are the usual nearest neighbor hopping and onsite
interaction of the Hubbard model. The third term turns the system into a
bipartite lattice and serves to open up a trivial gap in the system. This is
necessary since the HF-GKBA prepares the initial state using adiabatic
switching. The adiabatic theorem requires a gapped system to correctly prepare
the HF-GKBA initial state. Such gapped models have been used in previous
studies of the HF-GKBA[57, 58]. We choose $J=1$ and express all interaction
strengths in units of $J$. Further, all the results and discussion assume half
filling, i.e., one electron per site.
For each method the system is prepared in the corresponding ground state
before being excited with a time dependent field modeled under the dipole
approximation and given by
$h^{\mathrm{N.E}}_{ij}(t)=\delta_{ij}\left(\frac{N-1}{2}-i\right)E\mathrm{e}^{-\frac{(t-t_{0})^{2}}{2\sigma^{2}}}.$
(7)
Here $N$ is the number of lattice sites in the model, $E$ is the strength of
the field, $\sigma$ determines the temporal width of the pulse and $t_{0}$ is
the pulse midpoint.
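For concreteness, the pulse term of Eq. (7) can be generated as in the following NumPy sketch (function and parameter names are ours):

```python
import numpy as np

def pulse_matrix(t, N, E, sigma, t0):
    """Non-equilibrium single-particle term of Eq. (7): a diagonal,
    Gaussian-enveloped linear potential across the N-site chain."""
    sites = np.arange(N)
    envelope = E * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
    return np.diag(((N - 1) / 2 - sites) * envelope)
```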
## III Results
### III.1 Effect of excitation strength
First we investigate the effect of the strength of the perturbation by varying
the parameter $E$ in our model. This will capture the behavior of each method
as we go from linear response type excitations up to strongly perturbed
regimes. In Fig. 1 we show the dynamics of the electronic dipole for the
system in equation (6) and (7) with $E=1.0J$, $2.0J$, $3.0J$ and $5.0J$, as
computed with each of our methods. For this section, we show results obtained
for the number of sites $N=12$ and we choose an interaction strength of
$U=1.0J$. This puts the kinetic and Coulomb energy scales at the same level.
Here each method captures the $U=1.0$ ground states very well. Thus, this
setup allows us to see how each method performs in non-equilibrium without
having to worry about the effects of a poorly captured ground state. While we
have explored various parameters, the findings below are representative of the
behaviors observed. Additional results are discussed in the supplemental
information[65].
We will start this section by discussing the results produced by the reference
TD-CI approach, which is numerically exact. These results will serve as our
benchmark to compare against other methods. For $E=1.0J$, we see two dominant
types of oscillation: a strong low-frequency contribution that manifests as
long wavelength oscillations in the dipole and a weak high-frequency
contribution. This behavior continues with increasing $E$; however, for strong
perturbation strengths, we see that the magnitude of both types of
oscillations diminishes, and a yet lower frequency mode begins to dominate.
This continues until the dynamics become almost flat when we come to the
bottom panel of Fig. 1 with $E=5.0J$. Further results and analysis to
understand these exact trends in dipole moment are discussed in Sec. III.2 and
IV.
We continue this section by comparing the results produced by our other
methods.
For completeness, we include the TD-HF trajectory, which serves as a baseline
for the types of expansion techniques discussed below (either CC or those
based on the GFs, with HF being their underlying reference Hamiltonian around
which the self-energy is expanded). TD-HF is based solely on a time evolution
using a single determinant. For $E=1.0J$ we see TD-HF performs well, capturing
the amplitude and frequency of the oscillations of the TD-CI. It suffers from
a similar issue to the HF-GKBA, in that it has a slight offset in frequency
that causes it to move out of phase with the exact result as the time
evolution progresses. Interestingly, for $E=2.0J$ the TD-HF actually improves
upon the results given by HF-GKBA and the KBEs. This fortuitous improvement is
likely due to the self-energy over-damping the MBPT results. As $E$ is
increased again we see the result worsen. For $E=3.0J$ TD-HF is in agreement
up to $t\approx 10J^{-1}$ and for $E=5.0J$ to around $t=8J^{-1}$. After these
times, however, the TD-HF result differs drastically, although it still oscillates
around the correct result. Judging by the results, it seems the TD-HF
result is now not damped enough, due to the lack of a self-energy, and thus
oscillates with a much greater amplitude than the benchmark results. In the next
paragraphs, we discuss the CC and GF-based techniques, which can be thought of
as expansions around this “correlation-free” TD-HF dynamics.
The TD-CC approach is in excellent agreement with TD-CI for the lowest two $E$
values ($E=1.0J,2.0J$). Note that this is despite the relatively complex
dynamics. For increased strength (starting with $E=3.0J$), TD-CC captures the
dynamics very closely only up until $t\approx 10J^{-1}$, but deviates from the
exact solution soon after. This illustrates a growing error in TD-CC for long-
time evolutions at large excitation strengths despite the initial TD-CC
dynamics for $E=3.0J$ being in reasonable qualitative agreement with TD-CI.
Note that without the TD-CI reference, it is not immediately clear from Fig. 1
that TD-CC dynamics is erroneous. For $E=4.0J,5.0J$, soon after the system is
driven out of equilibrium (around $t\approx 5J^{-1}$), the CC approach starts
to break down, and the results become unphysical as the occupation numbers
develop a large imaginary contribution. In fact, the disagreement of CC with
TD-CI in the presence of strong perturbation can be anticipated by looking at
Fig. S6 of SI, [65] where we plot the imaginary part of CC estimates for the
occupation number on the first site in the Hubbard lattice. It is evident that
even for $E=3.0J$ at long time scales, and especially for $E=4.0J,\,5.0J$, TD-
CC leads to large unphysical contributions, signaling its breakdown. At the same time,
keeping track of imaginary parts or inconsistent behavior in expectation
values in TD-CC can serve as a diagnostic tool to measure the efficacy of the
coupled cluster approximation itself.
Next, we turn to the MBPT methods, starting with the solution for the KBEs
using the $GW$ self-energy. For $E=1.0J$ the agreement is very strong up to
$t\approx 15J^{-1}$. After this, while TD-CC remains quantitatively more accurate,
the KBEs still provide qualitatively correct results. They capture
the low frequency mode of oscillation extremely well, however the high
frequency part is damped out almost entirely by $t\approx 25J^{-1}$. Moving to
$E=2.0J$, the time up to which we see perfect agreement is shortened to
$t\approx 10J^{-1}$. Additionally, unlike the $E=1.0J$ case where reasonable
qualitative agreement held until $t=50J^{-1}$, here we see that after
$t\approx 20J^{-1}$ the KBE results differ significantly from those produced
by TD-CI and TD-CC. The damping in the KBEs is more pronounced now, and the
dynamics are close to flat. This damping represents a well-known issue with
the KBEs and will be discussed later in section IV. This trend continues as we
continue to increase the excitation strength. For $E=3.0J$, perfect agreement
with TD-CI now only holds to $t\approx 10J^{-1}$, around the same as the TD-CC
result. The KBE damping kicks in almost immediately after this at $t\approx
12J^{-1}$. After this the KBE dynamics are essentially flat. Interestingly,
the exact TD-CI result seems to oscillate around the stationary value that the
KBEs reach. For $E=5.0J$ the KBE results fail almost immediately, strongly
suppressing any oscillations at or after $t\approx 5J^{-1}$. Again, we see the
result is damped (leading to only a slowly time-varying dipole moment which
monotonically increases within the simulation window). However, this behavior
is not qualitatively wrong, even though the magnitude of the dipole is now
much further from the TD-CI result than in Fig. 1(a)-(c).
We now turn to the HF-GKBA, which truncates certain memory effects compared to
the KBEs. Similar to the KBEs, we see excellent agreement with TD-CI for
$E=1.0J$ up until $t\approx 15J^{-1}$. After this, the HF-GKBA still shows
strong qualitative and quantitative agreement with the benchmark results,
contrasting with the overdamped behavior of the KBEs. The HF-GKBA
captures the low frequency mode of oscillation very well. For longer times,
the high frequency oscillations from the TD-CI and TD-CC results are offset.
Additionally, the magnitude of these high frequency oscillations is diminished
relative to the benchmark results. However, the HF-GKBA does systematically
improve on the damping seen in the KBE result. For $E=2.0J$ the HF-GKBA
follows a very similar trend to the KBEs. The main difference is, again, a
reduction in the damping seen in the KBE dynamics. Both follow roughly the
same trajectory, which appears to be centered around the main low frequency
oscillations of TD-CI and TD-CC. Moving to $E=3.0J$ the HF-GKBA result now has
a noticeable offset from the KBE solution, but keeps the same trend of
improving upon the overdamping. After $t\approx 10J^{-1}$ the HF-GKBA no
longer captures the benchmark results qualitatively or quantitatively.
Furthermore, the HF-GKBA oscillates slightly around a single value (somewhat
similarly to the KBEs). However, the value the HF-GKBA result centers around
is lower than the central values of TD-CI, TD-CC, and the KBEs, which are more
or less equivalent. Turning finally to Fig. 1(d) with $E=5.0J$, we see the
HF-GKBA actually captures the qualitative behavior of the TD-CI dipole well.
The HF-GKBA displays a slightly larger magnitude of oscillation than TD-CI,
but the mean values are very close, and it offers a marked improvement over
the TD-CC and KBE results.
Figure 1: Dynamics of the electronic dipole of the system described by Eqs.
(6) and (7) for various values of the parameter $E$: a) $E=1.0J$, b) $2.0J$,
c) $3.0J$, d) $5.0J$.
### III.2 Exciting to strongly correlated non-equilibrium regime
In this section, we further investigate the results discussed in section
III.1. As the TD-CI results provide access to the (nearly exact) wavefunction
trajectory, we use them to analyze the time evolution in greater depth. In the
next section (IV), we rely on this analysis to illustrate the origins of
failure in each method in capturing the proper physics of non-equilibrium
systems.
In Fig. 2(a)-(c), we show the natural occupations of the single particle
density matrix for the model discussed in section II.4 and section III.1 for
$E=1.0J$, $E=2.0J$ and $E=5.0J$. The natural occupations correspond to the
eigenvalues of the time dependent single particle density matrix expressed in
spin-orbitals, taken from TD-CI. Our analysis is motivated by the equilibrium,
ground-state physics of the underlying half-filled Hubbard model, where
natural occupations approach $\lambda_{i}\rightarrow 0.5$ as the system
becomes strongly correlated in the $U\rightarrow\infty$ limit (see Fig. S3 in
supplemental information[65]). By examining natural occupations, one can
therefore assess how correlated the system is. Here we propose this quantity
as a measure of correlation in the non-equilibrium setting.
In Fig. 2(d), we also look at the von Neumann entropy for a scan of $E$
values from $1.0J$ to $5.0J$. The entropy is computed as
$\mathcal{S}[\rho]=-\frac{2}{N}\mathrm{Tr}\left[\rho\log_{2}(\rho)\right].$ (8)
Note that for the ground-state, as $U\to\infty$, the entropy approaches unity.
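For concreteness, both quantities follow directly from the single-particle density matrix, as in the sketch below (taking $N$ in Eq. (8) to be the dimension of $\rho$, i.e., the number of spin-orbitals, which reproduces the stated $U\to\infty$ limit of unity):

```python
import numpy as np

def natural_occupations(rho):
    """Eigenvalues of the Hermitian single-particle density matrix
    (expressed in spin-orbitals), sorted in descending order."""
    return np.linalg.eigvalsh(rho)[::-1]

def entropy(rho):
    """Von Neumann entropy of Eq. (8), S = -(2/N) Tr[rho log2(rho)],
    evaluated via the eigenvalues; 0*log(0) is treated as 0."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]  # drop numerically empty orbitals
    return -(2.0 / rho.shape[0]) * np.sum(lam * np.log2(lam))

# Sanity check: all lambda_i = 0.5 (strongly correlated limit) gives S = 1.
print(entropy(0.5 * np.eye(6)))  # 1.0
```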
For $E=1.0J$, in Fig. 2(a), we see very little deviation of the natural
occupations from their equilibrium values. This corresponds to a near-zero
entropy of the single particle density matrix. Correspondingly, each of our
methods matches the benchmark essentially perfectly for the corresponding
results in Fig. 1. This demonstrates that when the natural occupations remain
close to the equilibrium values, all four methods capture the non-equilibrium
TD-CI results well, qualitatively and even quantitatively.
As the excitation strength is increased to $E=2.0J$, we see a more noticeable
difference from the equilibrium result, though the oscillations of the natural
occupations are still relatively small. The entropy reflects this more
clearly: it is only around $\mathcal{S}[\rho]=0.1$, indicating weak mixing
between the natural orbitals. For the dynamics in Fig. 1, the results produced
by TD-CC are still in perfect agreement with the TD-CI result. For the same
conditions, the MBPT methods appear overdamped but still capture elements of
the TD-CI result.
For $E=5.0J$ we see a strong mixing of the natural occupations, nearing the
$\lambda_{i}=0.5$ limit, corresponding to an entropy of
$\mathcal{S}[\rho]\approx 0.9$. In an equilibrium picture this would
correspond to a strongly correlated system with high $U$. Such a transition
was observed for all system sizes and interaction range types. Consequently,
we consider this a representative case of a system driven by a strong
external perturbation from a “weakly correlated” ground state to a “strongly
correlated” excited state.
Understandably, such a physical regime poses difficulty for methods suited to
weakly correlated systems, such as CC or the MBPT expansion. Indeed, the TD-CC
results fail completely in this regime, as do the KBE results. In contrast,
the HF-GKBA offers an improvement in this regime and does a good job of
capturing the qualitative and quantitative features of the TD-CI results. In
Fig. S5 of the supplemental information we show the entropy for different $E$
for $N_{s}=8,12$ and $16$ and see similar behavior for each system size[65].
Figure 2: Left: Natural occupations of the single particle density matrix
taken from TD-CI for the model discussed in section II.4 and section III.1 for
a) $E=1.0J$, b) $E=2.0J$ and c) $E=5.0J$. Right: The entropy for a scan of $E$
values from $1.0J$ to $5.0J$.
## IV Discussion and conclusions
We now return to the implications of the results described in sections III.1
and III.2. Each of the methods we test performs reasonably well in weakly
excited systems. For pulse strengths up to $E=2.0J$, TD-CC gives perfect
agreement with TD-CI. However, as stronger perturbations are applied to drive
the system out of equilibrium, TD-CC breaks down. This is because the CC
wavefunction is expanded around the time-independent, ground-state
Hartree-Fock reference, which becomes an increasingly poor reference as the
system evolves after a strong time-dependent perturbation.
Even so, for $E=3.0J$ TD-CC still offers a good improvement over the HF-GKBA
and the KBEs. Interestingly, at $E=2.0J$ we observe that TD-HF also improves
upon the results for the time-dependent dipole produced by the HF-GKBA and the
KBEs. Note that this implies that the trajectories of equal-time observables
depending on the density matrix (such as the dipole studied here) can be
qualitatively well captured even by a mean-field technique. Further, this
leads to the interesting question of whether in dynamical systems it is always
better to include self-energy effects, or whether there are regimes in which
the inclusion of a (non-exact) self-energy can have detrimental effects by
overdamping the system dynamics. This question deserves further investigation
with additional forms of the self-energy beyond the density-density response
included in the $GW$ formalism, including also higher order
fluctuations[45, 46].
In the $E=1.0J$ case the Kadanoff-Baym equations qualitatively capture the
dynamics after excitation. However, as is commonly seen[64, 63], they suffer
from severe overdamping that removes the high frequency component of the
dipole almost immediately. For $E=1.0J$ the HF-GKBA also captures the
qualitative features of the dipole oscillations and even offers improvement to
the KBEs by removing some of the damping. As we increase $E$ further, the full
KBEs continue to be overdamped, while the HF-GKBA improves the result, adding
back in some of the high frequency oscillation modes lost in the KBEs.
Interestingly, as we show in Fig. S4 of the supplemental information[65], when
long range interactions are included in the Hamiltonian the results of the
HF-GKBA improve drastically. This improvement is likely related to the
suitability of a given self-energy approximation for a given problem. The $GW$
approximation, used here, assumes the dominant contribution to the self-energy
comes from the screened Coulomb interaction, or equivalently from the induced
charge density fluctuations. Due to the extremely localized nature of the
Hubbard model, screening is relatively weak, meaning a different self-energy
approximation may be more appropriate. A long range interacting model
naturally has more potential for screening, thus making the $GW$ approximation
a more relevant contribution to the self-energy and so improving the
agreement with TD-CI.
This observation may also help to explain the improvement of the HF-GKBA for
$E=5.0J$. In this highly excited regime, the particles are much more
delocalized. This increases the screening, and we propose that this leads to a
regime in which the physics accounted for by the $GW$ approximation becomes
more dominant, leading to reasonably well captured dynamics. Indeed, $GW$ can
be interpreted as a dynamically screened exchange interaction, i.e., its
difference compared to TD-HF stems purely from adding correlation effects due
to the density fluctuations, which likely dominate in this regime over other
types of correlations. It is in this regime that CC fails most significantly,
with results that are drastically different from the CI result both
quantitatively and qualitatively. Here, the main difference between GF MBPT
and CC stems from resummations of the screened interactions, which are not
included in CC. TD-CC can possibly be improved via orbital optimization, in
the pursuit of renormalizing the interactions, as well as by including
additional higher-order excitation terms (beyond the CCSD level).[31, 32, 70,
33]
From these results, it is clear that TD-CC performs excellently for relatively
weak perturbations and improves upon MBPT, especially in the finer details.
However, in strongly perturbed regimes, TD-CC breaks down and no longer
provides results that match TD-CI, even at the qualitative level. Notably, in
regimes where the system is strongly perturbed, the HF-GKBA offers results
that are very close to those produced by TD-CI and are much more numerically
stable than those produced by TD-CC. This points to the possibility that the
HF-GKBA and TD-CC have somewhat complementary regimes of applicability;
however, definitive statements about this observation will require further
exploration of various self-energy formulations. As is often seen, the KBEs
show no significant improvement upon the HF-GKBA; furthermore, in this
particular example, due to the severe overdamping, the HF-GKBA outperforms the
KBEs. This points to the fact that inclusion of the time-nonlocal components
of the self-energy, which dominate the “quantum memory” of the KBEs, is
practically detrimental to the quality of the predicted dynamics, despite its
formal correctness. This question warrants further exploration, especially in
connection to applications of more involved self-energy formulations beyond
$GW$.
We link the observed performance of the CC and GF methods (within the given
approximations) to the effective strength of correlations induced by the
non-equilibrium driving. In section III.2, we looked at the different regimes
brought about by varying the pulse strength. The entropy of the density
matrix, given in equation (8), is used here to quantify correlations in the
system[71]. In Fig. 2 we showed that for stronger pulses the natural
populations approach $\lambda=0.5$, consistent with those of a strongly
interacting equilibrium system, as shown in Fig. S3 of the supplemental
information for the model in equation (6) with $N_{s}=6$[65]. Further, in
Fig. 1(d) the dynamics of the dipole are very slowly varying in time. This is
consistent with the type of dynamics one would see in a very strongly
interacting system due to suppression of the kinetic energy term. We interpret
this regime as exhibiting non-equilibrium induced correlations.
In this work we have only investigated properties related to the density
matrix, namely the dipole and the von Neumann entropy. Capturing the dynamics
of the density matrix is an important benchmark for any time dependent method,
and being able to predict these quantities is valuable for quantitative
simulation of carrier dynamics in materials or charge dynamics in quantum
chemistry systems. On the other hand, other measurable quantities of great
importance for non-equilibrium quantum physics, such as the time-dependent
spectral function, require knowledge of the time-non-local terms. We are
actively investigating the ability of each of these methods to capture the
time dependent spectral properties of non-equilibrium systems, as well as
looking into efficient ways of improving existing approaches[72].
A further extension of this study, also underway, concerns the system sizes
studied. Despite the huge reduction in cost afforded by TD-CI compared to full
diagonalization, we are still limited in system size by the still large
dimension of the constructed subspace. One route we are actively pursuing is
implementing a time dependent adaptive sampling CI approach (TD-ASCI). This
approach improves upon traditional CI by adding a step that weights the
wavefunctions making up the subspace based on an estimate of how much they
will contribute to it. This can vastly reduce the subspace size, thus opening
the road to even larger systems. This presents a route to providing accurate
benchmark data in large scale or multi-band periodic systems where such data
are extremely difficult to obtain.
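The selection step can be sketched as follows (a generic importance-based truncation in the spirit of ASCI, with hypothetical names; the actual TD-ASCI algorithm differs in its details):

```python
import numpy as np

def select_subspace(H, c, e_ref, n_keep):
    """Rank candidate determinants k by the perturbative importance
    estimate |sum_i H[k, i] c[i]| / |e_ref - H[k, k]| and return the
    indices of the n_keep most important ones.

    H:     Hamiltonian matrix over the candidate determinant space
    c:     current wavefunction amplitudes on that space
    e_ref: reference energy
    """
    scores = np.abs(H @ c) / (np.abs(e_ref - np.diag(H)) + 1e-12)
    return np.argsort(scores)[::-1][:n_keep]
```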
In conclusion, the numerical benchmarks presented in this work help to
practically assess the performance of a variety of time-dependent numerical
methods under different strengths of perturbation. We demonstrate that TD-CC
is an excellent theory for moderate coupling and weak to moderately strong
perturbations, but that it becomes unreliable for strong perturbations. We
have further seen that the KBEs fail to produce results that improve upon
TD-CC. For weak excitations the HF-GKBA produces worse results than TD-CC, but
for strong excitations the HF-GKBA gives results that are qualitatively and
quantitatively very close to the true result. Thus, we surmise that a
multilayered methodology employing the complementary strengths of TD-CC and
HF-GKBA will likely provide accurate results in a wide variety of
non-equilibrium regimes.
## Acknowledgements
This material is based upon work supported by the U.S. Department of Energy,
Office of Science, Office of Advanced Scientific Computing Research and Office
of Basic Energy Sciences, Scientific Discovery through Advanced Computing
(SciDAC) program under Award Number DE-SC0022198. This research used resources
of the National Energy Research Scientific Computing Center, a DOE Office of
Science User Facility supported by the Office of Science of the U.S.
Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award
BES-ERCAP0029462.
## References
* Beebe _et al._ [2017] M. R. Beebe, J. M. Klopf, Y. Wang, S. Kittiwatanakul, J. Lu, S. A. Wolf, and R. A. Lukaszew, Time-resolved light-induced insulator-metal transition in niobium dioxide and vanadium dioxide thin films, Opt. Mater. Express 7, 213 (2017).
* Disa _et al._ [2023] A. S. Disa, J. Curtis, M. Fechner, A. Liu, A. von Hoegen, M. Först, T. F. Nova, P. Narang, A. Maljuk, A. V. Boris, B. Keimer, and A. Cavalleri, Photo-induced high-temperature ferromagnetism in YTiO3, Nature 617, 73 (2023).
* Dong _et al._ [2021] S. Dong, M. Puppin, T. Pincelli, S. Beaulieu, D. Christiansen, H. Hübener, C. W. Nicholson, R. P. Xian, M. Dendzik, Y. Deng, Y. W. Windsor, M. Selig, E. Malic, A. Rubio, A. Knorr, M. Wolf, L. Rettig, and R. Ernstorfer, Direct measurement of key exciton properties: Energy, dynamics, and spatial distribution of the wave function, Natural Sciences 1, e10010 (2021).
* Bao _et al._ [2022] C. Bao, P. Tang, D. Sun, and S. Zhou, Light-induced emergent phenomena in 2D materials and topological materials, Nature Reviews Physics 4, 33 (2022).
* Nuske _et al._ [2020] M. Nuske, L. Broers, B. Schulte, G. Jotzu, S. A. Sato, A. Cavalleri, A. Rubio, J. W. McIver, and L. Mathey, Floquet dynamics in light-driven solids, Phys. Rev. Res. 2, 043408 (2020).
* Zhou _et al._ [2023] S. Zhou, C. Bao, B. Fan, F. Wang, H. Zhong, H. Zhang, P. Tang, W. Duan, and S. Zhou, Floquet engineering of black phosphorus upon below-gap pumping, Phys. Rev. Lett. 131, 116401 (2023).
* Arponen [1983] J. Arponen, Variational principles and linked-cluster exp S expansions for static and dynamic many-body problems, Annals of Physics 151, 311 (1983).
* Ishikawa and Sato [2015] K. L. Ishikawa and T. Sato, A Review on Ab Initio Approaches for Multielectron Dynamics, IEEE Journal of Selected Topics in Quantum Electronics 21, 1 (2015).
* Li _et al._ [2020] X. Li, N. Govind, C. Isborn, A. E. DePrince, and K. Lopata, Real-Time Time-Dependent Electronic Structure Theory, Chem. Rev. 120, 9951 (2020).
* Sun _et al._ [2021] J. Sun, C.-W. Lee, A. Kononov, A. Schleife, and C. A. Ullrich, Real-Time Exciton Dynamics with Time-Dependent Density-Functional Theory, Phys. Rev. Lett. 127, 077401 (2021).
* Karlsson _et al._ [2021] D. Karlsson, R. van Leeuwen, Y. Pavlyukh, E. Perfetto, and G. Stefanucci, Fast Green’s Function Method for Ultrafast Electron-Boson Dynamics, Phys. Rev. Lett. 127, 036402 (2021).
* Perfetto _et al._ [2022] E. Perfetto, Y. Pavlyukh, and G. Stefanucci, Real-Time $GW$: Toward an ab initio description of the ultrafast carrier and exciton dynamics in two-dimensional materials, Phys. Rev. Lett. 128, 016801 (2022).
* Sverdrup Ofstad _et al._ [2023] B. Sverdrup Ofstad, E. Aurbakken, Ø. Sigmundson Schøyen, H. E. Kristiansen, S. Kvaal, and T. B. Pedersen, Time-dependent coupled-cluster theory, WIREs Computational Molecular Science 13, e1666 (2023).
* Martin _et al._ [2016a] R. M. Martin, L. Reining, and D. M. Ceperley, _Interacting electrons_ (Cambridge University Press, 2016).
* Note [1] Note that in ED the system Hamiltonian is diagonalized, providing all the eigenvectors, while in TD-FCI customarily only a few eigenvectors are obtained.
* Rohringer _et al._ [2006] N. Rohringer, A. Gordon, and R. Santra, Configuration-interaction-based time-dependent orbital approach for ab initio treatment of electronic dynamics in a strong optical laser field, Phys. Rev. A 74, 043420 (2006).
* Lode _et al._ [2020] A. U. J. Lode, C. Lévêque, L. B. Madsen, A. I. Streltsov, and O. E. Alon, Colloquium: Multiconfigurational time-dependent hartree approaches for indistinguishable particles, Rev. Mod. Phys. 92, 011001 (2020).
* Hochstuhl and Bonitz [2011] D. Hochstuhl and M. Bonitz, Two-photon ionization of helium studied with the multiconfigurational time-dependent Hartree–Fock method, The Journal of Chemical Physics 134, 084106 (2011), https://pubs.aip.org/aip/jcp/article-pdf/doi/10.1063/1.3553176/14905108/084106_1_online.pdf .
* Hochstuhl and Bonitz [2012] D. Hochstuhl and M. Bonitz, Time-dependent restricted-active-space configuration-interaction method for the photoionization of many-electron atoms, Phys. Rev. A 86, 053424 (2012).
* Sato and Ishikawa [2013] T. Sato and K. L. Ishikawa, Time-dependent complete-active-space self-consistent-field method for multielectron dynamics in intense laser fields, Phys. Rev. A 88, 023402 (2013).
* Daley _et al._ [2004] A. J. Daley, C. Kollath, U. Schollwöck, and G. Vidal, Time-dependent density-matrix renormalization-group using adaptive effective hilbert spaces, Journal of Statistical Mechanics: Theory and Experiment 2004, P04005 (2004).
* Zwolak and Vidal [2004] M. Zwolak and G. Vidal, Mixed-state dynamics in one-dimensional quantum lattice systems: A time-dependent superoperator renormalization algorithm, Phys. Rev. Lett. 93, 207205 (2004).
* Schollwöck [2011] U. Schollwöck, The density-matrix renormalization group in the age of matrix product states, Annals of Physics 326, 96 (2011), january 2011 Special Issue.
* Schollwöck [2005] U. Schollwöck, The density-matrix renormalization group, Rev. Mod. Phys. 77, 259 (2005).
* Crawford and Schaefer [2000] T. D. Crawford and H. F. Schaefer, An Introduction to Coupled Cluster Theory for Computational Chemists, in _Reviews in Computational Chemistry_, edited by K. B. Lipkowitz and D. B. Boyd (John Wiley & Sons, Inc., 2000) pp. 33–136.
* Bartlett and Musiał [2007] R. J. Bartlett and M. Musiał, Coupled-cluster theory in quantum chemistry, Rev. Mod. Phys. 79, 291 (2007).
* Wälz _et al._ [2012] G. Wälz, D. Kats, D. Usvyat, T. Korona, and M. Schütz, Application of Hermitian time-dependent coupled-cluster response Ansätze of second order to excitation energies and frequency-dependent dipole polarizabilities, Phys. Rev. A 86, 052519 (2012).
* Koulias _et al._ [2019] L. N. Koulias, D. B. Williams-Young, D. R. Nascimento, A. E. I. DePrince, and X. Li, Relativistic Real-Time Time-Dependent Equation-of-Motion Coupled-Cluster, J. Chem. Theory Comput. 15, 6617 (2019).
* Skeidsvoll _et al._ [2020] A. S. Skeidsvoll, A. Balbi, and H. Koch, Time-dependent coupled-cluster theory for ultrafast transient-absorption spectroscopy, Phys. Rev. A 102, 023115 (2020).
* Skeidsvoll and Koch [2023] A. S. Skeidsvoll and H. Koch, Comparing real-time coupled-cluster methods through simulation of collective Rabi oscillations, Phys. Rev. A 108, 033116 (2023).
* Kvaal [2012] S. Kvaal, Ab initio quantum dynamics using coupled-cluster, J. Chem. Phys. 136, 194109 (2012).
* Sato _et al._ [2018] T. Sato, H. Pathak, Y. Orimo, and K. L. Ishikawa, Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics, J. Chem. Phys. 148, 051101 (2018).
* Pathak _et al._ [2020] H. Pathak, T. Sato, and K. L. Ishikawa, Time-dependent optimized coupled-cluster method for multielectron dynamics. II. A coupled electron-pair approximation, The Journal of Chemical Physics 152, 124115 (2020).
* Peng _et al._ [2021] R. Peng, A. F. White, H. Zhai, and G. Kin-Lic Chan, Conservation laws in coupled cluster dynamics at finite temperature, The Journal of Chemical Physics 155, 044103 (2021).
* Reining [2018] L. Reining, The $GW$ approximation: content, successes and limitations, WIREs Computational Molecular Science 8, e1344 (2018).
* Onida _et al._ [2002] G. Onida, L. Reining, and A. Rubio, Electronic excitations: density-functional versus many-body green’s-function approaches, Rev. Mod. Phys. 74, 601 (2002).
* Aulbur _et al._ [2000] W. G. Aulbur, L. Jönsson, and J. W. Wilkins, Quasiparticle calculations in solids, Journal of Physics C: Solid State Physics 54, 1 (2000).
* Martin _et al._ [2016b] R. Martin, L. Reining, and D. Ceperley, _Interacting Electrons_ (Cambridge University Press, 2016).
* Wilhelm _et al._ [2021] J. Wilhelm, P. Seewald, and D. Golze, Low-scaling gw with benchmark accuracy and application to phosphorene nanosheets, Journal of Chemical Theory and Computation 17, 1662 (2021), pMID: 33621085, https://doi.org/10.1021/acs.jctc.0c01282 .
* Neuhauser _et al._ [2014] D. Neuhauser, Y. Gao, C. Arntsen, C. Karshenas, E. Rabani, and R. Baer, Breaking the Theoretical Scaling Limit for Predicting Quasiparticle Energies: The Stochastic $GW$ Approach, Phys. Rev. Lett. 113, 076402 (2014).
* Vlček _et al._ [2018] V. Vlček, W. Li, R. Baer, E. Rabani, and D. Neuhauser, Swift $GW$ beyond 10,000 electrons using sparse stochastic compression, Phys. Rev. B 98, 075107 (2018).
* Gao _et al._ [2016] W. Gao, W. Xia, X. Gao, and P. Zhang, Speeding up gw calculations to meet the challenge of large scale quasiparticle predictions, Scientific Reports 6, 36849 (2016).
* Govoni and Galli [2015] M. Govoni and G. Galli, Large scale gw calculations, Journal of Chemical Theory and Computation 11, 2680 (2015), pMID: 26575564, https://doi.org/10.1021/ct500958p .
* Hedin [1965] L. Hedin, New method for calculating the one-particle green’s function with application to the electron-gas problem, Phys. Rev. 139, A796 (1965).
* Vlcek [2019] V. Vlcek, Stochastic vertex corrections: Linear scaling methods for accurate quasiparticle energies, Journal of chemical theory and computation 15, 6254 (2019).
* Mejuto-Zaera and Vlček [2022] C. Mejuto-Zaera and V. Vlček, Self-consistency in $GW\mathrm{\Gamma}$ formalism leading to quasiparticle-quasiparticle couplings, Phys. Rev. B 106, 165129 (2022).
* Stefanucci and van Leeuwen [2013] G. Stefanucci and R. van Leeuwen, _Nonequilibrium Many-Body Theory of Quantum Systems: A Modern Introduction_ (Cambridge University Press, 2013).
* Kadanoff and Baym [1961] L. Kadanoff and G. Baym, _Quantum Statistical Mechanics_ (The Benjamin/Cummings Publishing Company, New York, 1961).
* Schwinger [1961] J. Schwinger, Brownian Motion of a Quantum Oscillator, Journal of Mathematical Physics 2, 407 (1961), https://pubs.aip.org/aip/jmp/article-pdf/2/3/407/19046507/407_1_online.pdf .
* Bonitz [2015] M. Bonitz, _Quantum Kinetic Theory_ (Springer International Publishing Switzerland, 2015).
* Kaye and Golež [2021] J. Kaye and D. Golež, Low rank compression in the numerical solution of the nonequilibrium Dyson equation, SciPost Phys. 10, 091 (2021).
* Kaye and U. R. Strand [2023] J. Kaye and H. U. R. Strand, A fast time domain solver for the equilibrium Dyson equation, Advances in Computational Mathematics 49, 10.1007/s10444-023-10067-7 (2023).
* Yin _et al._ [2021] J. Yin, Y.-h. Chan, F. da Jornada, D. Qiu, C. Yang, and S. G. Louie, Analyzing and predicting non-equilibrium many-body dynamics via dynamic mode decomposition, arXiv preprint arXiv:2107.09635 (2021).
* Stahl _et al._ [2022] C. Stahl, N. Dasari, J. Li, A. Picano, P. Werner, and M. Eckstein, Memory truncated Kadanoff-Baym equations, Phys. Rev. B 105, 115146 (2022).
* Lipavský _et al._ [1986] P. Lipavský, V. Špička, and B. Velický, Generalized Kadanoff-Baym ansatz for deriving quantum transport equations, Phys. Rev. B 34, 6933 (1986).
* Schlünzen _et al._ [2020] N. Schlünzen, J.-P. Joost, and M. Bonitz, Achieving the Scaling Limit for Nonequilibrium Green Functions Simulations, Phys. Rev. Lett. 124, 076601 (2020).
* Reeves _et al._ [2023a] C. C. Reeves, J. Yin, Y. Zhu, K. Z. Ibrahim, C. Yang, and V. Vlček, Dynamic mode decomposition for extrapolating nonequilibrium Green’s-function dynamics, Phys. Rev. B 107, 075107 (2023a).
* Reeves _et al._ [2023b] C. C. Reeves, Y. Zhu, C. Yang, and V. Vlček, Unimportance of memory for the time nonlocal components of the kadanoff-baym equations, Phys. Rev. B 108, 115152 (2023b).
* Schlünzen _et al._ [2017] N. Schlünzen, J.-P. Joost, F. Heidrich-Meisner, and M. Bonitz, Nonequilibrium dynamics in the one-dimensional Fermi-Hubbard model: Comparison of the nonequilibrium Green-functions approach and the density matrix renormalization group method, Phys. Rev. B 95, 165139 (2017).
* Bonitz _et al._ [2013] M. Bonitz, S. Hermanns, and K. Balzer, Dynamics of Hubbard Nano-Clusters Following Strong Excitation, Contributions to Plasma Physics 53, 778 (2013).
* Balzer _et al._ [2013] K. Balzer, S. Hermanns, and M. Bonitz, The generalized Kadanoff-Baym ansatz. Computing nonlinear response properties of finite systems, Journal of Physics: Conference Series 427, 012006 (2013).
* Hermanns _et al._ [2013] S. Hermanns, K. Balzer, and M. Bonitz, Few-particle quantum dynamics–comparing nonequilibrium Green functions with the generalized Kadanoff–Baym ansatz to density operator theory, Journal of Physics: Conference Series 427, 012008 (2013).
* von Friesen _et al._ [2010] M. P. von Friesen, C. Verdozzi, and C.-O. Almbladh, Artificial damping in the Kadanoff-Baym dynamics of small Hubbard chains, Journal of Physics: Conference Series 220, 012016 (2010).
* von Friesen _et al._ [2009] M. P. von Friesen, C. Verdozzi, and C.-O. Almbladh, Successes and Failures of Kadanoff-Baym Dynamics in Hubbard Nanoclusters, Phys. Rev. Lett. 103, 176404 (2009).
* [65] See Supplemental Material for technical details on each of the methods employed in this paper, as well as the relevant implementation details. Additional results are also included: comparison of methods in a long range interacting model, asymptotic behaviour of the equilibrium natural occupations with increasing $U$, and analysis of the system entropy for different system sizes.
* Park and Light [1986] T. J. Park and J. C. Light, Unitary quantum time evolution by iterative Lanczos reduction, J. Chem. Phys. 85, 5870 (1986).
* Stan _et al._ [2009] A. Stan, N. Dahlen, and R. van Leeuwen, Time propagation of the Kadanoff–Baym equations for inhomogeneous systems, J. Chem. Phys. 130, 224101 (2009).
* Hybertsen and Louie [1986] M. S. Hybertsen and S. G. Louie, Electron correlation in semiconductors and insulators: Band gaps and quasiparticle energies, Physical Review B 34, 5390 (1986).
* Joost _et al._ [2020] J.-P. Joost, N. Schlünzen, and M. Bonitz, G1-G2 scheme: Dramatic acceleration of nonequilibrium Green functions simulations within the Hartree-Fock generalized Kadanoff-Baym ansatz, Phys. Rev. B 101, 245101 (2020).
* Harsha _et al._ [2019] G. Harsha, T. M. Henderson, and G. E. Scuseria, Thermofield Theory for Finite-Temperature Coupled Cluster, J. Chem. Theory Comput. 15, 6127 (2019).
* Yuan _et al._ [2022] S. Yuan, Y. Chang, and L. K. Wagner, Quantification of electron correlation for approximate quantum calculations, The Journal of Chemical Physics 157, 194101 (2022), https://pubs.aip.org/aip/jcp/article-pdf/doi/10.1063/5.0119260/16552862/194101_1_online.pdf .
* Reeves and Vlcek [2024] C. Reeves and V. Vlcek, A real-time dyson expansion scheme: Efficient inclusion of dynamical correlations in non-equilibrium spectral propertie (2024), arXiv:2403.07155 [physics.comp-ph] .
|
Nuclear transparency of the charged hadron produced in the inclusive
electronuclear reaction
Swapan Das (email: <EMAIL_ADDRESS>)
Nuclear Physics Division, Bhabha Atomic Research Centre,
Trombay, Mumbai 400085, India
Homi Bhabha National Institute, Anushakti Nagar, Mumbai 400094, India
###### Abstract
The nuclear transparency of the charged hadron produced in the inclusive
$(e,e^{\prime})$ reaction on the nucleus has been calculated using the Glauber
model for the nuclear reaction. The color transparency (CT) of the produced
hadron and the short-range correlation (SRC) of the nucleons in the nucleus
have been incorporated in the Glauber model to investigate their effects on
the nuclear transparency of the hadron. The calculated nuclear transparencies
for the proton and pion are compared with the data.
The hadron-nucleus cross section is less than that in the plane wave impulse
approximation (PWIA) because the initial and/or final state interactions of
the hadron with the nucleus are neglected in PWIA. The difference in the cross
sections can be characterized by the nuclear transparency $T_{A}$, defined [1]
as
$T_{A}=\frac{\sigma_{hA}}{\sigma_{hA(PWIA)}},$ (1)
where $\sigma_{hA}$ represents the hadron-nucleus cross section.
The transverse size $d_{\perp}$ of the hadron produced in the nucleus due to
the space-like high momentum transfer $Q^{2}$ is reduced as $d_{\perp}\sim
1/Q$ [1, 2]. The size-reduced hadron is referred to as a point like
configuration (PLC) [1]. According to Quantum Chromodynamics, a color neutral
PLC has a reduced interaction with the nucleons in the nucleus because the sum
of its gluon emission amplitudes cancels [1, 3]. The PLC expands to the size
of the physical hadron as it traverses a length ($\sim 1$ fm) called the
hadron formation length $l_{h}$ [1, 4]:
$l_{h}=\frac{2k_{h}}{\Delta M^{2}},$ (2)
where $k_{h}$ is the momentum of the hadron in the laboratory frame. $\Delta
M^{2}$ is related to the mass difference between the hadronic states
originating from the (anti)quark fluctuations in the PLC. The interaction of
the PLC with the nucleons in the nucleus increases as its size grows during
its passage $l_{h}$ through the nucleus. The decrease of the hadron-nucleon
cross section in the nucleus, as explained by the Glauber model [5], leads to
an increase in the hadron-nucleus cross section $\sigma_{hA}$. Therefore, the
transparency $T_{A}$ of the hadron in Eq. (1) rises. The enhancement in
$T_{A}$ due to the above phenomenon is referred to as the color transparency
(CT) of the hadron. The physics of CT for hadrons has been discussed
elaborately in Refs. [3, 6].
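As an order-of-magnitude illustration with representative (not measured) values, a hadron of momentum $k_{h}=5$ GeV/$c$ with $\Delta M^{2}=0.7$ GeV${}^{2}$ has, using $\hbar c\simeq 0.197$ GeV fm,
$l_{h}=\frac{2\times 5\ \mbox{GeV}}{0.7\ \mbox{GeV}^{2}}\times 0.197\ \mbox{GeV fm}\approx 2.8\ \mbox{fm},$
which is comparable to the radius of a light nucleus, so the PLC can traverse a substantial part of the nucleus before fully expanding.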
The experiments searching for the CT of the proton ($p$CT) in the A$(p,pp)$
reactions done at Brookhaven National Laboratory (BNL) [7] could not confirm
the $p$CT. In fact, the experimental results are not understood [8]. The
$p$CT is also not seen in the A$(e,e^{\prime}p)$ experiments done at the
Stanford Linear Accelerator Center (SLAC) [9] and Jefferson Laboratory (JLab)
[10] for $0.64\leq Q^{2}\leq 8.1$ GeV${}^{2}$. The experiment done for $8\leq
Q^{2}\mbox{(GeV${}^{2}$)}\leq 14.2$ [11] at the upgraded JLab facility agrees
with the previous observations [9, 10]. Therefore, it appears that the PLC
required for CT is unlikely to form for a three quark $(qqq)$ system such as
the proton. Since the meson is a bound state of two quarks (i.e.,
quark-antiquark), its PLC formation can be more probable than that of a
baryon, a three quark $(qqq)$ system. The color transparency was unambiguously
reported from the Fermi National Accelerator Laboratory (FNAL) [12] in the
experiment on the nuclear diffractive dissociation of the pion (of 500
GeV/$c$) to dijets. The color transparency is also illustrated in the
$\pi^{-}$ meson photoproduction [13] and $\rho^{0}$ meson electroproduction
(from nuclei) experiments [14]. Several authors studied the $\rho$-meson color
transparency in the energy region available at JLab [1, 15].
The nuclear transparency of the $\pi^{+}$ meson produced in the
A$(e,e^{\prime})$ process was measured at JLab for the photon virtuality
$Q^{2}=1.1-4.7$ GeV${}^{2}$ [16]. The data have been understood in terms of
the pionic color transparency ($\pi$CT) [17]. Larson et al. [18] described the
momentum dependence of $\pi$CT in the above reaction. Cosyn et al. [19]
studied the effects of $\pi$CT and the nucleon short-range correlation in pion
photo- and electro-production from nuclei. Larionov et al. [4] estimated the
$\pi$CT in the $(\pi^{-},l^{+}l^{-})$ reaction on nuclei for $p_{\pi}=5-20$
GeV/$c$, which can be measured at the forthcoming facilities of the Japan
Proton Accelerator Research Complex (J-PARC) [20]. This reaction provides
information complementary to that obtained from the A$(\gamma^{*},\pi)$
reaction. Miller and Strikman [21] illustrated large CT in the pionic knockout
of the proton off nuclei at the energy of 200 GeV available at the CERN
COMPASS experiment.
The enhancement of $T_{A}$ through $\sigma_{hA}$ in Eq. (1) can also occur
because of the short-range correlation (SRC) of nucleons in the nucleus. The
SRC arises because of the repulsive (short-range) interaction between the
nucleons bound in the nucleus. This interaction keeps the bound nucleons apart
($\sim$ 1 fm), which is called nuclear granularity [8]. Therefore, the SRC
prevents the shadowing of the hadron-nucleon interaction by the surrounding
nucleons in the nucleus. This, as elucidated by the Glauber model [5], leads
to an enhancement in $\sigma_{hA}$. The SRC is widely used to investigate
various aspects of nuclear physics [22].
The hadron $h$ is produced in the inclusive A$(e,e^{\prime})$X reaction
through the interaction of the virtual photon $\gamma^{*}$ (emitted at the
$ee^{\prime}$ vertex) with the nucleus A. In this reaction, the nucleus in the
final state, denoted by X, is unspecified. The scattering amplitude for the
$\gamma^{*}\mbox{A}\to h\mbox{X}$ transition, according to the Glauber model
[1], can be written as
$F_{X0}[({\bf{q-k}}_{h})_{\perp}]=\frac{iq}{2\pi}\int d{\bf
b}e^{i({\bf{q-k}}_{h})_{\perp}\cdot{\bf{b}}}\Gamma^{\gamma^{*}h}_{X0}({\bf
b}),$ (3)
where ${\bf q}$ and ${\bf k}_{h}$ are the momenta of $\gamma^{*}$ and $h$
respectively. $\Gamma^{\gamma^{*}h}_{X0}({\bf b})$ describes the matrix
element for the transition of the nucleus from its initial to final states,
i.e.,
$\Gamma^{\gamma^{*}h}_{X0}({\bf b})=<X|\Gamma^{\gamma^{*}h}_{A}({\bf b},{\bf
r}_{1},...,{\bf r}_{A})|0>,$ (4)
where $|0>$ denotes the ground state of the target nucleus and $|X>$
represents the unspecified nuclear state in the exit channel. The nuclear
profile operator $\Gamma^{\gamma^{*}h}_{A}({\bf b},{\bf r}_{1},...,{\bf
r}_{A})$ [1, 23] is given by
$\Gamma^{\gamma^{*}h}_{A}({\bf b},{\bf r}_{1},...,{\bf
r}_{A})=\sum_{i}\Gamma^{\gamma^{*}h}({\bf b-b}_{i})e^{i({\bf
q-k}_{h})_{\parallel}z_{i}}\Pi^{A-1}_{j\not{=}i}[1-\Gamma^{hN}({\bf
b-b}_{j})\theta(z_{j}-z_{i})].$ (5)
The summation over $i$ runs over the nucleons in the nucleus that participate
in the hadron production; e.g., the protons in the nucleus take part in
producing the charged hadron in the reaction.
$\Gamma^{\gamma^{*}h}({\bf\tilde{b}})$ is the two-body profile function for
the hadron produced from the nucleon, i.e., the $\gamma^{*}N\to hN$ process. It is
related to the reaction amplitude $f_{\gamma^{*}h}({\bf\tilde{q}}_{\perp})$
[1] as
$\Gamma^{\gamma^{*}h}({\bf\tilde{b}})=\frac{1}{i2\pi q}\int
d{\bf\tilde{q}}_{\perp}e^{-i{\bf\tilde{q}}_{\perp}\cdot{\bf\tilde{b}}}f_{\gamma^{*}h}({\bf\tilde{q}}_{\perp}).$
(6)
The two-body profile function $\Gamma^{hN}({\bf\tilde{b}})$ is connected to
the $hN$ (hadron-nucleon) elastic scattering amplitude
$f_{hN}({\bf\tilde{q}}_{\perp})$ [1, 5] as
$f_{hN}({\bf\tilde{q}^{\prime}}_{\perp})=\frac{ik_{h}}{2\pi}\int
d{\bf\tilde{b}^{\prime}}e^{i{\bf\tilde{q}^{\prime}}_{\perp}\cdot{\bf\tilde{b}^{\prime}}}\Gamma^{hN}({\bf\tilde{b}^{\prime}}).$
(7)
The nuclear states, assuming the independent particle model [24], can be
written in terms of the single particle state $\Phi$ as
$|0>=\Pi^{A}_{l=1}|\Phi_{0}({\bf r}_{l})>$ and $|X>=|\Phi_{X}({\bf
r}_{m})>\Pi^{A-1}_{n{\not=}m}|\Phi_{0}({\bf r}_{n})>$. Using these,
$\Gamma^{\gamma^{*}h}_{X0}({\bf b})$ in Eq. (4) can be written as
$\Gamma^{\gamma^{*}h}_{X0}({\bf b})=\sum_{i}\int d{\bf r}_{i}\Phi^{*}_{X}({\bf
r}_{i})\Gamma^{\gamma^{*}h}({\bf b-b}_{i})e^{i({\bf
q-k}_{h})_{\parallel}z_{i}}\Phi_{0}({\bf r}_{i})D({\bf b},z_{i}),$ (8)
where $D({\bf b},z_{i})$ is given by
$\displaystyle D({\bf b},z_{i})$ $\displaystyle=$
$\displaystyle\Pi^{A-1}_{j{\not=}i}\int d{\bf r}_{j}\Phi^{*}_{0}({\bf
r}_{j})[1-\Gamma^{hN}({\bf b-b}_{j})\theta(z_{j}-z_{i})]\Phi_{0}({\bf r}_{j})$
(9) $\displaystyle=$ $\displaystyle\left[1-\frac{1}{A}\int d{\bf
b}_{j}\Gamma^{hN}({\bf b-b}_{j})\int dz_{j}\theta(z_{j}-z_{i})\varrho({\bf
r}_{j})\right]^{A-1}.$
In this equation, $\varrho({\bf r}_{j})$ is the matter density distribution of
the nucleus, i.e., $\varrho({\bf r}_{j})=A|\Phi_{0}({\bf r}_{j})|^{2}$.
$\varrho({\bf b}_{j},z_{j})$ can be replaced by $\varrho({\bf b},z_{j})$,
since $\Gamma^{hN}({\bf b-b}_{j})$ varies much more rapidly than $\varrho({\bf
b}_{j},z_{j})$ [1]. Using Eq. (7) and
$\lim_{n\to\infty}(1+\frac{x}{n})^{n}=e^{x}$, the above equation can be
simplified to
$D({\bf b},z_{i})\simeq e^{-\frac{1}{2}\sigma^{hN}_{t}[1-i\alpha_{hN}]T({\bf
b},z_{i})},$ (10)
where $\alpha_{hN}$ denotes the ratio of the real to imaginary part of the
hadron-nucleon scattering amplitude $f_{hN}(0)$, and $\sigma^{hN}_{t}$ =
$\frac{4\pi}{k_{h}}Im[f_{hN}(0)]$ is the hadron-nucleon total cross section.
$T({\bf b},z_{i})$ is the partial thickness function of the nucleus, i.e.,
$T({\bf b},z_{i})=\int^{\infty}_{z_{i}}dz_{j}\varrho({\bf b},z_{j}).$ (11)
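A minimal numerical sketch of Eq. (11), together with the attenuation factor of Eq. (10), assuming an illustrative Woods-Saxon density rather than the measured distributions used later:

```python
import numpy as np
from scipy.integrate import quad

R, a, rho0 = 4.1, 0.55, 0.17  # illustrative radius (fm), diffuseness (fm), central density (fm^-3)

def rho(b, z):
    """Woods-Saxon matter density at impact parameter b and depth z (fm)."""
    return rho0 / (1.0 + np.exp((np.hypot(b, z) - R) / a))

def thickness(b, z_i, z_max=25.0):
    """Partial thickness T(b, z_i) = int_{z_i}^infty dz rho(b, z), Eq. (11)."""
    return quad(lambda z: rho(b, z), z_i, z_max)[0]

def attenuation(b, z_i, sigma_t=4.0):
    """|D(b, z_i)| of Eq. (10); the alpha_hN term contributes only a phase.
    sigma_t in fm^2 (40 mb = 4 fm^2)."""
    return np.exp(-0.5 * sigma_t * thickness(b, z_i))

print(attenuation(0.0, 0.0))  # survival factor for production at the center
```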
Using Eq. (8), $F_{X0}[({\bf{q-k}}_{h})_{\perp}]$ in Eq. (3) can be expressed
as
$\displaystyle F_{X0}$ $\displaystyle=$ $\displaystyle\frac{iq}{2\pi}\int
d{\bf b}e^{i({\bf{q-k}}_{h})_{\perp}\cdot{\bf{b}}}\sum_{i}\int d{\bf
r}_{i}\Phi^{*}_{X}({\bf r}_{i})\Gamma^{\gamma^{*}h}({\bf b-b}_{i})e^{i({\bf
q-k}_{h})_{\parallel}z_{i}}\Phi_{0}({\bf r}_{i})D({\bf b},z_{i}),$ (12)
$\displaystyle=$ $\displaystyle\sum_{i}\int d{\bf r}_{i}\Phi^{*}_{X}({\bf
r}_{i})f^{(i)}_{\gamma^{*}h}([{\bf{q-k}}_{h}]_{\perp})e^{i({\bf q-k}_{h})\cdot{\bf
r}_{i}}\Phi_{0}({\bf r}_{i})D({\bf r}_{i}).$
The elementary production amplitude $f^{(i)}_{\gamma^{*}h}$, defined through
Eq. (6), can be considered identical for all nucleons.
The nucleus in the final state $|X>$ differs from its initial state $|0>$
(i.e., the ground state) for charged hadron production, i.e.,
$\Phi_{X}{\not=}\Phi_{0}$ and $F_{00}=0$. To calculate the cross section,
$|F_{X0}|^{2}$ has to be multiplied by the phase-space factor of the reaction
and divided by the incident flux. Since the final state $|X>$ of the nucleus
is not detected in the inclusive reaction, the summation over all states $X$
has to be carried out. In the multi-GeV region, the phase space of the
reaction can be considered independent of the state $X$, and therefore the
nuclear transparency $T_{A}$ can be written [1] as
$T_{A}=\frac{\sum_{X\not{=}0}|F_{X0}|^{2}}{\sum_{X\not{=}0}|F_{X0}|^{2}_{PWIA}}.$
(13)
The free-space hadron-nucleon cross section $\sigma^{hN}_{t}$ is used in Eq.
(10) to evaluate $T_{A}$ in the Glauber model. To look for the color
transparency (CT), $\sigma^{hN}_{t}$ has to be replaced (according to the
quantum diffusion model [2, 18]) by $\sigma^{hN}_{t,CT}$:
$\sigma^{hN}_{t,CT}(Q^{2},l_{z})=\sigma^{hN}_{t}\left[\left\\{\frac{l_{z}}{l_{h}}+\frac{n_{q}^{2}<k^{2}_{t}>}{Q^{2}}\left(1-\frac{l_{z}}{l_{h}}\right)\right\\}\theta(l_{h}-l_{z})+\theta(l_{z}-l_{h})\right],$
(14)
where $Q^{2}$ is the space-like four-momentum transfer, i.e., the photon
virtuality. $n_{q}$ denotes the number of valence quarks (antiquarks) present
in the hadron, e.g., $n_{q}=2(3)$ for the pion (proton) [2]. $k_{t}$ is the
transverse momentum of the (anti)quark: $<k_{t}^{2}>^{1/2}=0.35$ GeV/$c$.
$l_{z}$ is the path length traversed by the hadron after its production. The
hadron formation length $l_{h}(\propto\frac{1}{\Delta M^{2}})$ is defined in
Eq. (2).
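For illustration, Eq. (14) can be transcribed directly (the argument values in the example call are hypothetical):

```python
import numpy as np

def sigma_ct(sigma_t, Q2, l_z, l_h, n_q=2, kt2=0.35**2):
    """Effective hadron-nucleon cross section of Eq. (14) in the quantum
    diffusion model; n_q = 2 (3) for the pion (proton), kt2 = <k_t^2> in
    (GeV/c)^2, Q2 in GeV^2, and l_z, l_h in the same length units."""
    if l_z >= l_h:
        return sigma_t  # fully formed hadron: free-space cross section
    x = l_z / l_h
    return sigma_t * (x + (n_q**2 * kt2 / Q2) * (1.0 - x))

# A pion just after production (l_z << l_h) at Q^2 = 4.7 GeV^2:
print(sigma_ct(sigma_t=25.0, Q2=4.7, l_z=0.1, l_h=2.8))  # ~3.4, strongly reduced
```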
The short-range correlation (SRC) can be incorporated by replacing the nuclear
density distribution $\varrho$ in Eq. (11) by
$\varrho({\bf b},z_{j})\to\varrho({\bf b},z_{j})C(|z_{j}-z_{i}|),$ (15)
where $C(u)$ represents the correlation function [8]. Using the nuclear matter
estimate, it can be written as
$C(u)=\left[1-\frac{h(u)^{2}}{4}\right]^{1/2}[1+f(u)],$ (16)
with $h(u)=3\frac{j_{1}(k_{F}u)}{k_{F}u}$ and $f(u)=-e^{-\alpha u^{2}}(1-\beta
u^{2})$. The Fermi momentum $k_{F}$ is chosen equal to 1.36 fm${}^{-1}$.
$C(u)$ with the parameters $\alpha=1.1$ fm${}^{-2}$ and $\beta=0.68$
fm${}^{-2}$ agrees well with that derived from the many-body calculations [8].
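For reference, the correlation function of Eq. (16) with the quoted parameters can be evaluated as in this sketch:

```python
import numpy as np
from scipy.special import spherical_jn

def correlation(u, kF=1.36, alpha=1.1, beta=0.68):
    """C(u) of Eq. (16); u in fm, kF in fm^-1, alpha and beta in fm^-2."""
    x = kF * u
    h = 3.0 * spherical_jn(1, x) / x
    f = -np.exp(-alpha * u**2) * (1.0 - beta * u**2)
    return np.sqrt(1.0 - 0.25 * h**2) * (1.0 + f)

# Strong suppression at short distance, approaching unity beyond ~2 fm:
print(correlation(0.5), correlation(2.0))  # ~0.32, ~1.0
```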
The nuclear transparency $T_{A}$ of the charged hadron, i.e., the proton and
the $\pi^{+}$ meson, produced in the inclusive electronuclear reaction has
been calculated using the Glauber model (GM), where the measured nuclear
density distribution $\varrho(r)$ [25] and the hadron-nucleon cross section
$\sigma_{t}^{hN}$ [26] are used. As shown later, the calculated results due to
GM (presented by the dashed curves) underestimate the measured $T_{A}$ for
both the proton and the pion. Therefore, GM has been modified by taking
account of CT and SRC. Since the CT is energy dependent, the calculated
$T_{A}(\pi^{+})$ increases with $Q^{2}$ when CT is included in GM. Unlike CT,
the SRC is independent of energy; therefore, the calculated $T_{A}(\pi^{+})$
due to SRC added in GM does not exhibit a $Q^{2}$ dependence. The
dot-dot-dashed and dot-dashed curves arise from the inclusion of CT in GM for
$\Delta M^{2}$, defined in Eq. (2), taken equal to 0.7 and 1.4 GeV${}^{2}$
respectively. The calculated $T_{A}$ due to SRC incorporated in GM are
presented by the solid curves.
The calculated proton transparency $T_{A}(p)$ vs photon virtuality $Q^{2}$ in
the A$(e,e^{\prime}p)$X reaction is compared with the data in Fig. 1. The data
reported from SLAC [9] and JLab [10, 11] are represented by the white squares
and black circles respectively. Fig. 1(a) shows that CT does not appear for
the proton moving through ${}^{12}$C over a wide range of the photon
virtuality, i.e., $0.64\leq Q^{2}\leq 14.2$ GeV${}^{2}$. This is corroborated
by the results for other nuclei shown in Figs. 1(b) and (c), where the data
are available for a smaller range of $Q^{2}$, i.e., $0.64\leq Q^{2}\leq 8.1$
GeV${}^{2}$ for ${}^{56}$Fe and $0.64\leq Q^{2}\leq 6.77$ GeV${}^{2}$ for
${}^{197}$Au. Therefore, the CT of the proton is distinctly ruled out. Fig. 1
shows that the calculated $T_{A}(p)$ due to the inclusion of SRC in GM
reproduces the data reasonably well for all nuclei.
The measured pionic transparency $T_{A}(\pi^{+})$ for $1.1\leq Q^{2}\leq 4.69$
GeV${}^{2}$ in the A$(e,e^{\prime}\pi^{+})$X reaction has been reported from
JLab [16] for ${}^{12}$C, ${}^{27}$Al, ${}^{63}$Cu and ${}^{197}$Au nuclei.
The data for all nuclei (except ${}^{12}$C) show an enhancement of
$T_{A}(\pi^{+})$ with $Q^{2}$. There are proposals to measure $T_{A}(\pi^{+})$
at JLab for higher $Q^{2}$, i.e., $5\leq Q^{2}\leq 9.5$ GeV${}^{2}$ [3, 27].
Therefore, $T_{A}(\pi^{+})$ for $1.1\leq Q^{2}\leq 9.5$ GeV${}^{2}$ has been
calculated, and the results are presented in Fig. 2 along with the available
data [16]. The calculated results due to GM+CT (i.e., $\pi$CT) accord with
both the $Q^{2}$ dependence and the magnitude of the data. This is in
concurrence with the earlier calculations [4, 17]. The calculated
$T_{A}(\pi^{+})$ due to GM+SRC does not describe the $Q^{2}$ dependence of the
data, but it agrees with a large number of data points within the errors.
Therefore, data on $T_{A}(\pi^{+})$ in the region of $Q^{2}=5-9.5$ GeV${}^{2}$
are necessary to establish the existence of $\pi$CT.
The author appreciates Prof. Dipangkar Dutta for the discussions on the
experimental results, and thanks A. K. Gupta and S. M. Yusuf for their
encouragement to work on theoretical nuclear physics.
## References
* [1] G. T. Howell and G. A. Miller, Phys. Rev. C 88 (2013) 035202.
* [2] G. R. Farrar, H. Liu, L. L. Frankfurt and M. I. Strikman, Phys. Rev. Lett. 61 (1988) 686.
* [3] D. Dutta, K. Hafidi, and M. Strikman, Prog. Part. Nucl. Phys. 69 (2013) 1.
* [4] A. B. Larionov, M. Strikman and M. Bleicher, Phys. Rev. C 93 (2016) 034618.
* [5] R. J. Glauber, in Lectures in Theoretical Physics, edited by W. E. Brittin et al. (Interscience, New York, 1959), Vol. I, p. 315; J. M. Eisenberg and D. S. Kolton, Theory of Meson Interaction with Nuclei (John Wiley & Sons, New York, 1980) p. 158.
* [6] L. L. Frankfurt, G. A. Miller and M. Strikman, Annu. Rev. Nucl. Part. Sci. 44 (1994) 501; L. Frankfurt and M. Strikman, Phys. Rep. 160 (1988) 235; P. Jain, B. Pire and J. P. Ralston, Phys. Rep. 271 (1996) 67.
* [7] A. S. Carroll et al., Phys. Rev. Lett. 61 (1988) 1698; I. Mardor et al., Phys. Rev. Lett. 81 (1998) 5085; A. Leksanov et al., Phys. Rev. Lett. 87 (2001) 212301.
* [8] T.-S. H. Lee and G. A. Miller, Phys. Rev. C 45 (1992) 1863.
* [9] T. G. O’Neill et al., Phys. Lett. B 351 (1995) 87; N. C. R. Makins et al., Phys. Rev. Lett. 72 (1994) 1986.
* [10] D. Dutta et al., Phys. Rev. C 68 (2003) 064603; D. Abbott et al., Phys. Rev. Lett 80 (1998) 5072; K. Garrow et al., Phys. Rev. C 66 (2002) 044613.
* [11] D. Bhetuwal et al.,Phys. Rev. Lett. 126 (2021) 082301.
* [12] E. M. Aitala et al., Phys. Rev. Lett. 86 (2001) 4773.
* [13] D. Dutta et al. (E94104 Collaboration), Phys. Rev. C 68 (2003) 021001R.
* [14] A. Airapetian et al., Phys. Rev. Lett. 90 (2003) 052501; L. El Fassi et al., Phys. Lett. B 712 (2012) 326.
* [15] B. Z. Kopeliovich, J. Nemchik and I. Schmidt, Phys. Rev. C 76 (2007) 015205; L. Frankfurt, G. A. Miller and M. Strikman, Phys. Rev. C 78 (2008) 015208; K. Gallmeister, M. Kaskulov and U. Mosel, Phys. Rev. C 83 (2011) 015201.
* [16] B. Clasie et al., Phys. Rev. Lett. 99 (2007) 242502; X. Qian et al., Phys. Rev. C 81 (2010) 055209.
* [17] M. M. Kaskulov, K. Gallmeister and U. Mosel, Phys. Rev. C 79 (2009) 015207.
* [18] A. Larson, G. A. Miller and M. Strikman, Phys. Rev. C 74 (2006) 018201.
* [19] W. Cosyn, M. C. Martínez and J. Ryckebusch, Phys. Rev. C 77 (2008) 034602.
* [20] S. Kumano, in 21st International Symposium on Spin Physics (SPIN 2014) Beijing, China, October 20-24, 2014: arXiv 1504.05264 [hep-ph]; Int. J. Mod. Phys. (Conference Series) 40 (2016) 1660009.
* [21] G. A. Miller and M. Strikman, Phys. Rev. C 82 (2010) 025205.
* [22] G. A. Miller and J. E. Spencer, Ann. Phys. (N.Y.) 100 (1976) 562; O. Benhar et al., Phys. Rev. C 44 (1991) 2328; S. Das, Phys. Scr. 96 (2021) 035304.
* [23] J. Hüfner, B. Kopeliovich and J. Nemchik, Phys. Lett. B 383 (1996) 362.
* [24] T. H. Bauer, R. D. Spital, D. R. Yennie and F. M. Pipkin, Rev. Mod. Phys. 50 (1978) 261; Erratum, 51 (1979) 407.
* [25] C. W. De Jager, H. De Vries and C. De Vries, At. Data and Nucl. Data Tables 14 (1974) 479; 36 (1987) 495.
* [26] P. A. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2020 (2020) 083C01; https://pdg.lbl.gov./2020/hadronic-xsections/hadron.html; C. Lechanoine-Leluc and F. Lehar, Rev. Mod. Phys. 65 (1993) 47; D. V. Bugg et al., Phys. Rev. 146 (1966) 980; S. Barshay, C. B. Dover and J. P. Vary, Phys. Rev. C 11 (1975) 360.
* [27] D. Dutta, private communication.
Figure 1: (color online). The calculated nuclear transparency of the proton
$T_{A}(p)$ vs. photon virtuality $Q^{2}$. The dashed curve denotes $T_{A}(p)$
calculated using Glauber model (GM). The dot-dot-dashed and dot-dashed curves
illustrate the proton color transparency ($p$CT) for two different values of
$\Delta M^{2}$, see text. The solid curves arise due to the inclusion of the
short-range correlation (SRC) in GM. The data are taken from Refs. [9]-[11].
Figure 2: (color online). Same as those presented in Fig. 1 but for the pion.
The data are taken from Ref. [16].
|
Scalar-Mediated Quantum Forces Between Macroscopic Bodies and Interferometry
Philippe Brax <EMAIL_ADDRESS> and Sylvain Fichet <EMAIL_ADDRESS>
a Institut de Physique Théorique, Université Paris-Saclay, CEA, CNRS,
F-91191 Gif/Yvette Cedex, France
b ICTP South American Institute for Fundamental Research & IFT-UNESP,
R. Dr. Bento Teobaldo Ferraz 271, São Paulo, Brazil
c Centro de Ciencias Naturais e Humanas, Universidade Federal do ABC,
Santo Andre, 09210-580 SP, Brazil
###### Abstract
We study the quantum force between classical objects mediated by massive
scalar fields bilinearly coupled to matter. The existence of such fields is
motivated by dark matter, dark energy, and by the possibility of a hidden
sector beyond the Standard Model. We introduce the quantum work felt by an
arbitrary (either rigid or deformable) classical body in the presence of the
scalar and show that it is finite upon requiring conservation of matter. As an
example, we explicitly show that the quantum pressure inside a Dirichlet
sphere is finite — up to renormalizable divergences. With this method we
compute the scalar-induced quantum force in simple planar geometries. In
plane-point geometry we show how to compute the contribution of the quantum
force to the phase shift observable in atom interferometers. We show that atom
interferometry is likely to become a competitive search method for light
particles bilinearly coupled to matter, provided that the interferometer arms
have lengths below $\sim 10$ cm.
###### Contents
1. 1 Introduction
2. 2 The Quantum Work
1. 2.1 Action and Quantum Vacuum Energy
2. 2.2 The Source and its Deformation
3. 2.3 Quantum Work and Force
4. 2.4 $W_{\lambda}$ is Finite (Renormalizable Case)
3. 3 Weak Coupling: Finiteness Properties and Thin Shell Limit
1. 3.1 $W_{\lambda}$ is Finite (EFT Case)
2. 3.2 The Thin Shell Limit
3. 3.3 The Dirichlet Limit
4. 4 Scalar Quantum Forces between Rigid Bodies
1. 4.1 Rigid Bodies
2. 4.2 Vanishing of Tadpoles
3. 4.3 The Scalar Casimir-Polder Limit
4. 4.4 The Scalar Casimir Limit
5. 5 The Dirichlet Sphere
1. 5.1 Is the quantum pressure on the sphere finite in QFT?
2. 5.2 Review and Discussion
3. 5.3 The Quantum Work on the Dirichlet Sphere
6. 6 Planar Geometry
1. 6.1 Force Between two Plates
2. 6.2 Force Between a Plate and a Point Source
7. 7 Bounding Quantum Forces with Atom Interferometry
1. 7.1 The setting
2. 7.2 Computing the Phase Shift
3. 7.3 Limits
4. 7.4 Sensitivity to New Particles
8. 8 Conclusion
9. A Derivation of the Plate-Point Casimir-Polder Potential
10. B On Divergences from the Heat Kernel Expansion
11. C The Loop Divergence from Momentum Space
## 1 Introduction
The exchange of virtual particles induces macroscopic forces between bodies.
Beyond tree level, such forces are relativistic and quantum in essence; they
are properly described in the framework of quantum field theory (QFT). Here we
refer to such forces simply as “quantum forces”. Even though the seminal works
on Casimir forces are from the nineteen forties CP_original ;
Casimir_original , the topic of quantum forces is still very much active (see
e.g. Milton:2004ya ; Klimchitskaya:2009cw ; 2015AnP…527…45R ; Woods:2015pla ;
Bimonte:2017bir ; Bimonte:2021maf ; Bimonte:2022een for recent papers and
reviews). In the case of electromagnetic interactions, refined calculations
take into account medium properties such as electromagnetic permittivities as
well as effects from finite temperature, see bordag2009advances for a
comprehensive review. In the present paper we work in the simple framework of
scalar-mediated quantum forces at zero temperature, with the key assumption
that the scalar couples bilinearly to the sources (see e.g. Milton:2002vm ;
Graham:2002fw ; Jaffe:2005vp ; Mobassem:2014jma for a selection of related
works involving scalar quantum forces). In this scalar setting, we use a
simple variational method to derive quantum forces between bodies of various
shapes and positions.
The scalar case is of prime relevance for cosmology Hui:2016ltb ;
Joyce:2014kja and deriving quantum forces mediated by massive scalars could
lead to new laboratory tests of important cosmological models, from scalar
dark matter to dark energy. In this paper, we argue that atom interferometry
Hamilton:2015zga could be such a promising technique. Our primary motivation
comes from the pervasiveness of scalar fields bilinearly coupled to matter in
cosmology Joyce:2014kja and in extensions of the Standard Model of particle
physics Allanach:2016yth . For instance, the dark matter in our Universe can
be modelled as a scalar particle with $Z_{2}$ symmetry, which couples thus
bilinearly to matter. A wealth of dark energy models and related modified
gravity models Brax:2021wcv also involve a light scalar field. For these,
when the classical force induced by the linear coupling to sources is screened
(see e.g. Damour:1994zq ; Khoury:2003aq ; Khoury:2003rn ), the quantum force
arising from the bilinear coupling to sources can become dominant Brax:2018grq
, hence motivating our study. Finally, irrespective of observational
motivations, the existence of a sector hidden beyond the Standard Model and
featuring a light scalar with a $Z_{2}$ symmetry (e.g. a pseudo-Nambu
Goldstone boson of a symmetry of the hidden sector) is a logical possibility
that requires investigation. There is thus, in short, a cornucopia of reasons
to study scalar fields bilinearly coupled to matter.
A secondary motivation is that some of the technical and conceptual results
that we present — such as the proof of finiteness of the quantum work — are
best exposed with a scalar field. We use an approach based on the variation of
the quantum vacuum energy which is similar in spirit to the one found in the
seminal work by Schwinger Schwinger:1977pa . The developments presented here
can be taken as a streamlined presentation of this variational method in the
context of scalar QFT.
The vacuum energy of QFT features divergences which are treated via standard
renormalization methods, e.g. using dimensional regularization and introducing
local counterterms Peskin:257493 , which maps these divergences onto the RG
flow of the Lagrangian parameters. The fact that such divergences do not
affect the quantum force can be shown in complete generality at one-loop using
e.g. the heat kernel expansion bordag2009advances . Apart from these well-
behaved divergences, other “spurious” divergences which are non-removable by
renormalization can appear in certain Casimir calculations. This happens for
example for a calculation of the pressure in the Dirichlet sphere
Milton_Sphere (see detailed discussion in Sec. 5), and even for a version of
the calculation of the pressure between plates (see detailed discussion in
Sec. 6.1). While such spurious divergences might be easily identified and
removed in an ad hoc way, in this work we will show that they are
systematically removed by the requirement of conservation of matter in the
sources. Matter conservation has so far not been taken into account in
calculations of quantum forces, to the best of our knowledge. At the
conceptual level it is the new ingredient of our calculations.
Hopefully our presentation will be useful to cosmologists who would like to
see their models tested in the laboratory. In particular we transparently
explain in terms of Feynman diagrams how the screening of the scalar inside
the sources gives rise to two distinct regimes for the quantum force. This
situation is similar to the Casimir and Casimir-Polder forces arising in
electromagnetism. The common nature and unified description of these
electromagnetic quantum forces has been long known, see e.g. Refs.
Dzyaloshinskii_1961 ; Schwinger:1977pa . Here we will give a formula for the
quantum force in the massive scalar case and relate the “scalar Casimir” and
“scalar Casimir-Polder” forces (see e.g. Feinberg:1968zz ; FeinbergSucherNeutrinos ; Grifols:1996fk ; Fichet:2017bng ; Costantino:2019ixl for Casimir-Polder-like forces in the context of particle physics). The use of
this general formula is mandatory for computing certain observables such as
phase shifts in interferometry.
Finally, another purpose of this work is to lay out the foundations for more
phenomenological studies, for which the effect of the quantum force from a
particle beyond the Standard Model needs to be predicted in realistic
experiments. As such, we will present a calculation of the phase shift induced
by a quantum force and measurable in atom interferometers. We will then study
to which extent atom interferometry is a competitive method to search for
light dark particles.
### Technical Review
The present study bears some technical similarities with other works from the
Casimir literature. Here we briefly list a few of these references. Our
definition of the quantum work involves the variation of the quantum vacuum
energy with respect to a deformation of the source. A similar approach relying
on energy variation has been used in Schwinger:1977pa , expressed in the
language of Schwinger’s source theory. The source we consider has finite
density, and the Dirichlet limit is obtained by sending the density to
infinity. A similar framework was considered in Graham:2002fw ; Graham:2002xq
; Graham:2003ib (the conservation equation is not exploited in these
references). In our formalism we consider arbitrary deformations of an
arbitrary body. A somewhat similar approach was taken in Schwinger:1977pa in
the case of deformation of a dielectric fluid. We sometimes use regularisation
by point-splitting; in the context of Casimir calculations this method has for example been used in Milton:2004ya . Finally, under certain conditions
including that the bodies are incompressible, finiteness of the quantum work
at one-loop can be shown using the heat kernel formalism. This is reviewed and
discussed in App. B.
Comparison to Franchino-Vinas:2021lbl : After pre-publication we became aware
of the recent article Franchino-Vinas:2021lbl whose scope and results have
partial overlap with the present study — upon appropriate matching of terms
and concepts. Both works introduce the concept of variable/effective mass. The
principle of virtual work whose proof for arbitrary geometry in a specific
model is presented in Franchino-Vinas:2021lbl is fully compatible with the
general formula of the quantum work presented here. The translation of the key
quantities between the model of Franchino-Vinas:2021lbl and the present work
is $\phi\equiv\Phi$ and $\frac{\lambda_{1}}{4}\sigma^{2}\phi^{2}\equiv{\cal B}_{m}J$, where the left-hand sides are written in the notation of Franchino-Vinas:2021lbl . An important difference of scope is that in Franchino-Vinas:2021lbl
the deformation flow of the body is rigid, which implies that no discussion of
matter conservation is needed. In contrast, the compressible deformation flow
and its relation to matter conservation are a key aspect of our study.
Finally, the formula for the quantum work resulting from the rigid deformation
of an interface is obtained in Franchino-Vinas:2021lbl in terms of a surface
integral of the stress tensor. This result turns out to precisely match the
one we obtain in our Eq. (45) upon setting the variation of density to zero,
even though the intermediate steps of the derivations are rather different.
This is a nontrivial verification of the consistency of both works.
### Outline
The paper is arranged as follows. Section 2 presents the framework for scalar
quantum forces, giving a formula for the quantum work valid at the non-
perturbative level for arbitrary deformable sources. Section 3 specializes to
the weak coupling case (which includes the case of effective field theories).
The quantum work in the limiting case of thin-shell geometries is further
evaluated. Section 4 specializes to two rigid bodies, showing that the Casimir
and Casimir-Polder forces for massive scalar fields are asymptotically
recovered as limits of our unifying formula for quantum forces. The Casimir
pressure on the Dirichlet sphere is revisited and shown to be finite in
section 5. The generalized Casimir forces for finite density objects and a
finite vacuum mass for the scalar field in plane-plane and point-plane
geometry are respectively computed in section 6. In section 7 we then
calculate the phase shift in atom interferometers. In App. A we compare a
computation of the Casimir-Polder interaction from a scattering amplitude and
our derivation, showing explicitly that they coincide. In App. B we review a
proof of the finiteness of the quantum work at the one-loop level using the
heat kernel expansion following the steps of bordag2009advances , which
contributes to motivating our approach. App. C contains details on a loop
calculation in momentum space.
### Definitions and Conventions
We assume $d+1$-dimensional Minkowski spacetime ${\cal M}_{d+1}$ with mostly-
minus signature $(+,-,\ldots,-)$. The $d+1$ Cartesian coordinates are denoted
by $x_{\mu}$, spatial coordinates are denoted by $x_{i}\equiv{\bm{x}}$. We
will be considering a source $J({\bm{x}})$ with arbitrary shape and dimension.
The support of the source is described by the indicator function
${\bm{1}}_{J}({\bm{x}})$ or equivalently using a continuous support function
$l({\bm{x}})$ which is positive where $J$ is supported and negative where it
is not, with ${\bm{1}}_{J}({\bm{x}})\equiv\Theta(l({\bm{x}}))$ where $\Theta$ is the Heaviside distribution. The boundary of the source is denoted by $\partial J=\{{\bm{x}}\in{\cal M}_{d+1}\,|\,l({\bm{x}})=0\}$. Integration over
the support of the source $J$ is denoted $\int_{J}d^{d}{\bm{x}}$. Integration
over the support of the boundary $\partial J$ is denoted $\int_{\partial
J}d\sigma({\bm{x}})$. Although our main focus is ultimately the $d=3$ case, we
keep $d$ general when possible and specialize to $d=3$ in specific settings.
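As a simple illustration of these conventions (our example), a ball-shaped source of radius $R$ centered at the origin can be described by
$l({\bm{x}})=R-|{\bm{x}}|\,,\qquad{\bm{1}}_{J}({\bm{x}})=\Theta(R-|{\bm{x}}|)\,,\qquad\partial J=\{{\bm{x}}\,|\,|{\bm{x}}|=R\}\,,$
so that $\int_{J}d^{d}{\bm{x}}$ runs over the interior of the ball and $\int_{\partial J}d\sigma({\bm{x}})$ runs over the sphere of radius $R$.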
## The Quantum Work
In this section we compute the quantum work felt by a source bilinearly
coupled to a quantum field under an arbitrary deformation of the source. We
focus on a scalar field $\Phi$ for simplicity — generalization to spinning
fields is identical although more technical. The bodies subject to the Casimir
forces are assumed to be classical and static. The set of bodies is
collectively represented in the partition function by a static source term
$J({\bm{x}})$. More precisely, the $J({\bm{x}})$ distribution corresponds to
the vacuum expectation of the density operator $\hat{n}({\bm{x}})$ in the
presence of matter,
$J({\bm{x}})=\langle\Omega|\hat{n}({\bm{x}})|\Omega\rangle$.
### Action and Quantum Vacuum Energy
We consider the fundamental Lagrangian
${\cal L}[\Phi]=\frac{1}{2}(\partial_{\mu}\Phi)^{2}-\frac{1}{2}m^{2}\Phi^{2}+\ldots\,.$ (1)
The ellipses include possible interactions of $\Phi$, which do not need to be
specified. The interacting theory for $\Phi$ can either be renormalizable (with weak or strong coupling), or may be an effective field theory (EFT) involving a series of operators of arbitrary dimensions. In the latter case the theory is weakly coupled below the EFT cutoff scale $\Lambda$, i.e. on distances larger than $\Delta x\sim\frac{1}{\Lambda}$.
We consider the partition function in Minkowski spacetime (we call the generating functional $Z[J]$ the partition function in analogy with the Euclidean case)
$Z[J]=\int{\cal D}\Phi e^{i\left(S[\Phi]-\int d^{d+1}x\,{\cal
B}[\Phi]J({\bm{x}})\right)}$ (2)
where ${\cal B}[\Phi]$ is a bilinear operator in $\Phi$. This operator can
encode an arbitrary number of field derivatives. We distinguish two cases. If
the bilinear operator has no derivative, then the scalar theory can be
renormalizable. In this case we write the operator as
${\cal B}_{m}[\Phi]=\frac{1}{2\Lambda}\Phi^{2}$ (3)
If the bilinear operator has derivatives, then the scalar theory is an EFT and
in general contains a whole series of higher dimensional operators. In this
case, including an arbitrary number of such terms, we can write the operator
as
${\cal B}_{\rm EFT}[\Phi]=\sum_{n>1,i}\frac{c_{n,i}}{2\Lambda^{2n+1}}\Phi{\cal
O}_{n,i}\Phi\,$ (4)
where ${\cal O}_{n,i}$ is a scalar derivative operator with $2n$ derivatives
which act either to the left or to the right (when writing the bilinear interaction in the form of Eq. (4), consistently with the source term defined in Eq. (2), we take into account that any derivative acting on $J$ has been removed using integration by parts, producing derivatives that act on the fields; with this constraint, the most general structure is the one given in Eq. (5)):
${\cal
O}_{n,i}=(\overleftarrow{\partial}^{2})^{p}\overleftarrow{\partial}_{\mu_{1}}\ldots\overleftarrow{\partial}_{\mu_{a}}(\overrightarrow{\partial}^{2})^{n-p-a}\overrightarrow{\partial}_{\mu_{1}}\ldots\overrightarrow{\partial}_{\mu_{a}}$
(5)
The $c_{n,i}$ coefficients are dimensionless.
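As an illustration of Eqs. (4)-(5) (our example, written directly from the definitions above), the series starts at $n=2$; the structure with $p=0$, $a=2$ reads
${\cal B}_{\rm EFT}[\Phi]\supset\frac{c_{2,i}}{2\Lambda^{5}}\,\Phi\,\overleftarrow{\partial}_{\mu_{1}}\overleftarrow{\partial}_{\mu_{2}}\overrightarrow{\partial}^{\mu_{1}}\overrightarrow{\partial}^{\mu_{2}}\,\Phi=\frac{c_{2,i}}{2\Lambda^{5}}\,\partial_{\mu_{1}}\partial_{\mu_{2}}\Phi\,\partial^{\mu_{1}}\partial^{\mu_{2}}\Phi\,,$
which indeed carries $2n=4$ derivatives acting on the fields only.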
Let us comment further about some aspects of EFT. Here we distinguished
between the ${\cal B}_{m}$ and ${\cal B}_{\rm EFT}$ operators, essentially
because our subsequent analysis of finiteness of the quantum work slightly
differs between both cases. In general, a given EFT can feature both the
${\cal B}_{m}$ and ${\cal B}_{\rm EFT}$ terms. In such case the ${\cal B}_{m}$
contribution would tend to dominate, unless it is suppressed by a symmetry
(e.g. a shift symmetry). Still in the EFT framework, we may also notice that
non-derivative higher order operators involving powers of $\Phi$ and $J$ are
in general present. While the higher order operators which are bilinear in
$\Phi$ could simply be accounted for in the generic source $J$, the broader view
is that the EFT validity domain prevents these operators from becoming
important, i.e. schematically $\partial/\Lambda\ll 1$. Finally we emphasize
that our subsequent analyses involving UV divergences can be done at the level
of the EFT, without having to specify the underlying completion, see e.g.
Manohar:1996cq ; Manohar:2018aog for more details on divergences and
renormalization in EFT.
Since the source is static, the partition function takes the form
$Z[J]=e^{-iE[J]T}$ (6)
where $E[J]$ is referred to as the quantum vacuum energy and $T$ is an
arbitrary time interval specified in evaluating the time integrals. This time
scale will drop from the subsequent calculations. (In Minkowski space we have in general $Z=\langle 0|e^{-i{\cal H}T}|0\rangle$, with ${\cal H}$ the Hamiltonian of the system; this is why the eigenvalue $E[J]$ is identified as the vacuum energy. We also mention that upon Wick rotation to Euclidean space, $E[J]$ corresponds to the free energy of the system.) In general, we can set
$J$ as an abstract quantity that can be used to generate the correlators of
the theory. In our case, since the source couples to ${\cal B}$ (see Eq. (2)),
taking functional derivatives of $E[J]$ with respect to $J$ generates the connected
correlators of the composite operator $\cal B$, i.e. $\langle{\cal
B}(x_{1}){\cal B}(x_{2})\ldots\rangle$. In this work, we consider that $J$
represents a physical distribution of matter, i.e. $J({\bm{x}})$ is taken to
be the expectation value of the density operator $\hat{n}({\bm{x}})$ in the
presence of matter,
$J({\bm{x}})=\langle\Omega|\hat{n}({\bm{x}})|\Omega\rangle$. For concreteness,
one can for instance think of a nonrelativistic fermion density, appearing for
example via $\bar{\psi}\psi=n({\bm{x}})$ or
$\bar{\psi}\gamma_{\mu}\psi=\delta_{\mu 0}n({\bm{x}})$ in the relativistic
formulation. (Higher monomial contributions such as $\frac{(\bar{\psi}\psi)^{2}}{\Lambda^{3}}$ are neglected in this example. We mention that in the presence of screening, i.e. in the Dirichlet or Casimir regime defined in the next sections, the validity of the EFT tends to be improved because the coupling to the source tends to be suppressed; validity of the EFT in the presence of screening has been discussed in Brax:2018grq .)
### The Source and its Deformation
The source is parametrized by
$J({\bm{x}})=n({\bm{x}}){\bm{1}}_{J}({\bm{x}})$ (7)
and corresponds to a particle number distribution of mass dimension $d$. The
support of this distribution is encoded in
${\bm{1}}_{J}({\bm{x}})=\Theta(l({\bm{x}}))$ where the continuous function
$l({\bm{x}})$ is positive where $J$ is supported and negative where it is not.
The number density $n({\bm{x}})$ is in general an arbitrary distribution over
the support. The integral $N_{J}=\int d^{d}{\bm{x}}\,J({\bm{x}})$ amounts to the total particle number of the source.
We then introduce a deformation of the source. We assume that matter is
deformable, i.e. both the support and the number density can vary under the
deformation. We will see that such a generalization from rigid to deformable
matter is necessary in order to ensure that the calculation is well-defined
and that no infinities show up.
The infinitesimal deformation of the source is parametrized by a scalar
parameter $\lambda$. Under our assumptions the source depends on this
deformation parameter as
$J_{\lambda}({\bm{x}})=n_{\lambda}({\bm{x}})\Theta[l_{\lambda}({\bm{x}})]$.
The deformed source takes the form
$J_{\lambda+d\lambda}({\bm{x}})=n_{\lambda+d\lambda}({\bm{x}})\Theta[l_{\lambda+d\lambda}({\bm{x}})]$.
The deformation of the support of the source is parametrized by
$l_{\lambda+d\lambda}({\bm{x}})=l_{\lambda}({\bm{x}}-{\bf
L}({\bm{x}})d\lambda)$ (8)
where the ${\bf L}$ vector is the deformation flow. Defining
$\frac{\partial}{\partial\lambda}\equiv\partial_{\lambda}$ the variation of
the source under the $\lambda$ deformation is then given by
$\partial_{\lambda}J_{\lambda}({\bm{x}})=\partial_{\lambda}n_{\lambda}\Theta[l_{\lambda}({\bm{x}})]-n_{\lambda}{\bf
L}\cdot{\bm{\partial}}l_{\lambda}\,\delta[l_{\lambda}({\bm{x}})]$. An
arbitrary deformation of a generic source is pictured in Fig. 1.
We assume that $J({\bm{x}})$ is made out of classical matter and is not a completely abstract distribution. Since the source is made of classical matter, its local number density must be conserved. Any deformation of the source must therefore be subject to the conservation of the number density. The local
conservation equation under the deformation parametrized by $\lambda$ is
$\partial_{\lambda}n_{\lambda}+{\bm{\partial}}\cdot\left(n_{\lambda}{\bf
L}\right)=0\,.$ (9)
It implies the integral form
$\partial_{\lambda}\int_{J_{\lambda}}d^{d}{\bm{x}}\,n_{\lambda}({\bm{x}})=\int
d^{d}{\bm{x}}\,\partial_{\lambda}J_{\lambda}({\bm{x}})=0$ (10)
where the second integral is over all space.
In the special case where the density $n$ is constant in $\lambda$ and ${\bm{x}}$, Eq. (9) reduces to the condition of an incompressible deformation flow, ${\bm{\partial}}\cdot{\bf L}=0$. In this case the source describes incompressible matter (e.g. a fluid). If ${\bm{\partial}}\cdot{\bf L}=0$ and ${\bf L}$ is piecewise constant in ${\bm{x}}$, then the source describes rigid matter. Section 4 specializes to such rigid sources.
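For concreteness (our examples), a rigid translation corresponds to a constant ${\bf L}$, for which Eq. (9) gives $\partial_{\lambda}n_{\lambda}=-{\bf L}\cdot{\bm{\partial}}n_{\lambda}$, i.e. the density profile is simply transported. In contrast, a radial dilation ${\bf L}=L\,{\bm{e}}_{r}$ has ${\bm{\partial}}\cdot{\bf L}=\frac{(d-1)L}{r}\neq 0$, and thus requires a compensating variation of the density; this case is treated in detail for the sphere in Sec. 5.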
### Quantum Work and Force
Figure 1: An arbitrary infinitesimal deformation of a generic source. The
dotted and plain contours respectively correspond to the boundary of
$J_{\lambda}$ and $J_{\lambda+d\lambda}$. The arrows denote the deformation
flow ${\bf L}$.
We now study how the quantum system evolves upon a general, infinitesimal
deformation of the source, $\lambda\to\lambda+d\lambda$. To proceed we
introduce the quantum work under a variation in $\lambda$,
$W_{\lambda}=-\partial_{\lambda}E[J_{\lambda}]\,.$ (11)
In the particular case of a rigid body and if the deformation field can be
factored out, then a quantum force can be defined as
$W_{\lambda}={\bf L}\cdot{\bm{F}}.$ (12)
This defines the quantum force between the objects. Such a simplification is, however, generally not possible; the most fundamental quantity to consider is the quantum work. Our definition of quantum work is closely related to
variational approaches such as the principle of virtual work applied to
quantum physics, see e.g. Schwinger:1977pa and Li:2019ohr ; Franchino-
Vinas:2020okl for related recent studies.
The quantum vacuum energy $E[J]$ is a formally divergent quantity. However, if
one varies it with respect to a physical parameter, the resulting variation is
a physical observable and thus should be finite. (Throughout this work, the physical parameter is typically a geometric variable such as the distance between two bodies.) Hence even though $W_{\lambda}$ encodes all the quantum
effects felt by the source(s), the only divergences remaining in this quantity
are those with a physical meaning, i.e. the ones which must be treated in the
framework of renormalization. One origin for such divergences is the field
interactions, another can be the curvature of spacetime, as pointed out in
Fichet:2021xfn in AdS. Even for a free theory in flat space, there can be
renormalization of surface tension terms, as pointed out in Bordag:2004rx . We
refer to such divergences as “physical” ones: these are the familiar divergences from QFT, which only appear for specific integer values of
spacetime dimension. One of our goals in the following is to show that there
are no other divergences than the physical ones in the QFT calculation of the
quantum force.
Using the definition of Eqs. (2), (6), the quantum work is given by
$W_{\lambda}=-\int d^{d}{\bm{x}}\left\langle{\cal
B}\right\rangle_{J_{\lambda}}(x,x)\partial_{\lambda}J_{\lambda}({\bm{x}})\,$
(13)
where $\left\langle{\cal B}\right\rangle_{J_{\lambda}}$ is the time-ordered
quantum average of ${\cal B}$ in the presence of the source $J_{\lambda}$.
Here and throughout we use the shortcut notation
$W_{J_{\lambda}}=W_{\lambda}$, $\left\langle{\cal
B}\right\rangle_{J_{\lambda}}=\left\langle{\cal B}\right\rangle_{\lambda}$.
While all quantities depend on $\lambda$, the $\lambda$ dependence is relevant
only under the $\partial_{\lambda}$ variation and is often dropped elsewhere.
The two-point function in the presence of the source at coinciding points will sometimes be denoted by $\left\langle{\cal B}\right\rangle_{J}$ in the rest of the paper.
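For the reader's convenience, here is the short derivation of Eq. (13). From Eq. (6), $E[J_{\lambda}]=\frac{i}{T}\log Z[J_{\lambda}]$, so that using Eq. (2),
$\partial_{\lambda}E[J_{\lambda}]=\frac{i}{T}\,\frac{\partial_{\lambda}Z[J_{\lambda}]}{Z[J_{\lambda}]}=\frac{1}{T}\int d^{d+1}x\,\left\langle{\cal B}\right\rangle_{\lambda}(x,x)\,\partial_{\lambda}J_{\lambda}({\bm{x}})\,,$
and since the source is static the time integral produces a factor of $T$ which cancels, yielding Eq. (13) upon $W_{\lambda}=-\partial_{\lambda}E[J_{\lambda}]$.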
Using the general form of the bilinear coupling, ${\cal B}$ is expressed in
terms of the two point function of $\Phi$ in the presence of $J$, $\langle
T\Phi(x)\Phi(y)\rangle_{J}$, evaluated at coinciding points. We assume that
the classical value of $\Phi$ is zero. (This can be the consequence of a $Z_{2}$ symmetry, enforcing that $\Phi$ only appears bilinearly in the action such that $\langle\Phi\rangle=0$. In the presence of nonzero $\langle\Phi\rangle$, the possible $Z_{2}$ symmetry is broken, implying that at weak coupling the fluctuation of $\Phi$ over the $\langle\Phi\rangle$ background has a linear coupling to the source. As a result a classical force is also present in addition to the quantum force; this case need not be investigated in the present paper. This extra classical force appearing in the presence of $\langle\Phi\rangle\neq 0$ does not automatically dominate over the quantum force; instead, various regimes arise as a function of the geometry. These aspects have been partly investigated in Brax:2018grq .) Hence
the disconnected part of the two-point function vanishes and the two-point
function reduces to $\langle T\Phi(x)\Phi(y)\rangle_{J}=\Delta_{J}(x,y)$,
where $\Delta_{J}(x,y)$ is the Feynman propagator in the presence of the
source $J$. The quantum average of ${\cal B}$ is then expressed in terms of
the Feynman propagator $\Delta_{J}$ as
$\langle{\cal B}_{m}\rangle_{J}=\frac{1}{2\Lambda}\Delta_{J}(x_{1},x_{2})$
(14) $\langle{\cal B}_{\rm
EFT}\rangle_{J}=\sum_{n>1,i}\frac{c_{n,i}}{2\Lambda^{2n+1}}{\cal
O}_{n,i}\Delta_{J}(x_{1},x_{2})\,.$ (15)
Here in the EFT case we introduce the shortcut $\langle T\Phi(x){\cal
O}_{n,i}\Phi(y)\rangle_{J}\equiv{\cal O}_{n,i}\Delta_{J}(x,y)$ where the
${\cal O}_{n,i}$ operator as defined in Eq. (5) has derivatives acting to the
left and to the right. The general formula for the quantum work presented in
Eq. (13) is valid at the non-perturbative level.
### $W_{\lambda}$ is Finite (Renormalizable Case)
The quantity $\left\langle{\cal B}\right\rangle_{J}(x,x)$ is formally
divergent since it contains the propagator at coinciding points. It is thus
not obvious why the quantum work $W_{\lambda}$ should be finite. Throughout
this subsection, we regulate the divergence in $\left\langle{\cal
B}\right\rangle_{J}$ by introducing a small splitting of the endpoints,
$\left\langle{\cal B}\right\rangle_{J}(x,x)\equiv\left\langle{\cal
B}\right\rangle_{J}(x,x_{\epsilon})|_{\epsilon\to 0}$ where we defined
$x_{\epsilon}=x+\epsilon$. (The analogous regularization in Fourier space is a momentum cutoff, $p<\frac{1}{\epsilon}$. These regularizations admit a physical meaning: $\epsilon$ can be thought of as the distance scale below which the description of the classical matter as a continuous distribution breaks down. In this view, the cutoff length $\epsilon$ has a physically meaningful value, and the statement of existence of a divergence turns into a statement on the $\epsilon$-dependence of the result.)
We consider the renormalizable case, i.e. the ${\cal B}_{m}$ operator (the EFT case is addressed in section 3.1). We assume that, as a preliminary step, all the divergences which can be removed by the renormalization of the coupling constants of local operators have been removed, e.g. the fundamental mass $m$ is the renormalized mass. We assume
that $n({\bm{x}})$ is finite for any ${\bm{x}}$ over the support of $J$ (the
$n\to\infty$ limit is discussed in section 3.2). No assumption on the
interactions of $\Phi$ is necessary, thus $\Phi$ can be strongly coupled. Under
these conditions the finiteness of the quantum work can be shown as follows.
In the $\epsilon\to 0$ limit, we can decompose the expectation value of $\cal
B$ as the sum of i) the $\epsilon$-dependent, would-be divergent term, and ii)
a finite term in which the $\epsilon$ dependence amounts to $O(\epsilon)$
corrections that can be neglected for $\epsilon\to 0$. This gives
$\left\langle{\cal B}\right\rangle_{J}(x,x_{\epsilon})=\left\langle{\cal
B}\right\rangle_{J}^{\rm div}(x,x_{\epsilon})+\left\langle{\cal
B}\right\rangle^{\rm fin}_{J}(x,x)+O(\epsilon)\,.$ (16)
We then use the assumption that the number density is finite. It implies that
the effective squared mass of $\Phi$ inside the source,
$m^{2}+\frac{n({\bm{x}})}{\Lambda}$, is finite. We recall that $m$ has already been renormalized. A divergence in the short distance behaviour of
$\langle{\cal B}\rangle(x,x_{\epsilon})$ can arise from a propagator going
from $x$ to $x_{\epsilon}$. As the effective mass term amounts to a relevant
operator with finite value, it is negligible in the short distance limit of
the propagator, i.e. in the large momentum limit. As the source $J$ appears in
the propagator only via the effective mass term, we conclude that the
divergent piece $\left\langle{\cal B}\right\rangle_{J}^{\rm
div}(x,x_{\epsilon})$ is independent of $J$. Furthermore, in that short
distance limit, the propagator must be Lorentz invariant, and we conclude that
the divergent piece in Eq. (16) is independent of $x$,
$\left\langle{\cal B}\right\rangle_{J}^{\rm div}(x,x_{\epsilon})|_{{\rm
small}\,\epsilon}=\left\langle{\cal B}\right\rangle^{\rm
div}(x,x_{\epsilon})=\left\langle{\cal B}\right\rangle^{\rm
div}(0,\epsilon)\equiv\left\langle{\cal B}\right\rangle^{\rm
div}_{\epsilon}\,.$ (17)
We thus see that the divergent piece depends only on $\epsilon$ and diverges
in the $\epsilon\to 0$ limit. Using the decomposition Eq. (16) and the
definition of the quantum work, we obtain the decomposition
$W_{\lambda}=W^{\rm fin}_{\lambda}+W^{\rm div}_{\lambda}$ (18)
with
$W^{\rm fin,div}_{\lambda}=-\int d^{d}{\bm{x}}\left\langle{\cal
B}\right\rangle_{J}^{\rm
fin,div}(x,x)\partial_{\lambda}J_{\lambda}({\bm{x}})\,.$ (19)
In the divergent piece, $\left\langle{\cal B}\right\rangle^{\rm
div}_{\epsilon}$ factors out of the integral because it is independent of $x$.
This gives
$W^{\rm div}_{\lambda}=-\left\langle{\cal B}\right\rangle_{\epsilon}^{\rm
div}\int d^{d}{\bm{x}}\,\partial_{\lambda}J_{\lambda}({\bm{x}})\,.$ (20)
The remaining integral corresponds exactly to the variation of the total particle number of the source under the deformation, appearing in the integral form of
the conservation equation Eq. (10). Thus if the equation of conservation Eq.
(10) is satisfied, then Eq. (20) vanishes. We conclude that, upon conservation
of matter in the source, for any deformation and finite $n$ the quantum work
is finite:
$W^{\rm div}_{\lambda}=0\,.$ (21)
This is true at the nonperturbative level. In the particular case of
incompressible matter, Eq. (20) reduces to
$W^{\rm div}_{\lambda}|_{\rm incompressible}=-n\left\langle{\cal B}\right\rangle^{\rm div}_{\epsilon}\int_{J}d^{d}{\bm{x}}\,{\bm{\partial}}\cdot{\bf L}({\bm{x}})\,.$ (22)
In that case we can say that, upon conservation of matter in the source, for
any divergence-free deformation flow and finite $n$, the quantum work is
finite:
$W^{\rm div}_{\lambda}|_{\rm incompressible}=0$ (23)
The finiteness of the quantum work will be exemplified in the upcoming
sections.
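As a concrete illustration of Eq. (17) (our estimate, quoted up to convention-dependent numerical factors): for ${\cal B}_{m}$ in $d=3$ with a spacelike splitting of size $\epsilon$, the short distance behaviour of the propagator is that of the free field, giving
$\left\langle{\cal B}_{m}\right\rangle^{\rm div}_{\epsilon}=\frac{1}{2\Lambda}\Delta_{0}(0,\epsilon)\big{|}_{\rm div}\sim\frac{1}{8\pi^{2}\Lambda\epsilon^{2}}+O\left(\frac{m^{2}}{\Lambda}\log(m\epsilon)\right),$
which depends on $\epsilon$ and on the renormalized mass $m$, but neither on $J$ nor on $x$, as stated.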
Finally we comment on the finite part of the quantum work. The finite part can be put in a useful alternative form by evaluating the integrand, applying the divergence theorem and using the conservation equation,
$W^{\rm fin}_{\lambda}=-\int d^{d}{\bm{x}}\,n_{\lambda}({\bm{x}}){\bf
L}\cdot{\bm{\partial}}\left[\left\langle{\cal
B}\right\rangle_{\lambda}(x,x)\right]\,.$ (24)
This is another way to verify that any constant piece in $\left\langle{\cal
B}\right\rangle(x,x)$ does not contribute to the quantum work as it appears
under a gradient in the integrand.
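The derivation of Eq. (24), spelled out for convenience, combines $\partial_{\lambda}J_{\lambda}=\partial_{\lambda}n_{\lambda}\Theta(l_{\lambda})-n_{\lambda}{\bf L}\cdot{\bm{\partial}}l_{\lambda}\,\delta(l_{\lambda})$ with the conservation equation Eq. (9),
$\partial_{\lambda}J_{\lambda}=-{\bm{\partial}}\cdot\left(n_{\lambda}{\bf L}\right)\Theta(l_{\lambda})-n_{\lambda}{\bf L}\cdot{\bm{\partial}}l_{\lambda}\,\delta(l_{\lambda})=-{\bm{\partial}}\cdot\left(n_{\lambda}{\bf L}\,{\bm{1}}_{J_{\lambda}}\right)\,,$
so that integrating by parts in Eq. (19), with a vanishing boundary term at infinity, directly yields Eq. (24).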
## Weak Coupling: Finiteness Properties and Thin Shell Limit
At weak coupling the $\Phi$ field has an equation of motion (EOM) that we can
use to evaluate the quantum work. We introduce the bilinear operator ${\cal
B}^{\prime\prime}$, defined by
${\cal B}=\frac{1}{2}\Phi(x){\cal B}^{\prime\prime}\Phi(x)\,.$ (25)
This is the operator that appears in the EOM. For example, when applied to
${\cal B}_{m}$ this is ${\cal B}^{\prime\prime}_{m}=\frac{1}{\Lambda}$. In the
EFT case, ${\cal B}^{\prime\prime}$ is the differential operator appearing in
Eq. (4),
${\cal B}_{\rm
EFT}^{\prime\prime}=\sum_{n>1,i}\frac{c_{n,i}}{\Lambda^{2n+1}}{\cal
O}_{n,i}\,.$ (26)
The left and right derivatives in ${\cal O}_{n,i}$ act on the propagators
attached respectively to the left and right of the vertex.
At leading order in the perturbative expansion, the $\Delta_{J}(x,x^{\prime})$
propagator satisfies the equation of motion
${\cal D}_{x}\Delta_{J}(x,x^{\prime})+{\cal
B}^{\prime\prime}J({\bm{x}})\Delta_{J}(x,x^{\prime})=-i\delta^{d+1}(x-x^{\prime})$
(27)
where ${\cal D}=\square+m^{2}$ is the wave operator and $\square$ is the
scalar d’Alembertian. The solution to Eq. (27) is a Born series that describes
the bare propagator $\Delta_{0}$ (i.e. $\Delta_{J}|_{J\to 0}$) dressed by
insertions of ${\cal B}^{\prime\prime}J$. For convenience we define the
insertion
$\Sigma(x,y)=-i{\cal B}^{\prime\prime}J({\bm{x}})\delta^{d+1}(x-y)$ (28)
and we use the inner product $f\star g=\int d^{d+1}u\,f(u)g(u)$. With these
definitions the dressed propagator is given by
$\displaystyle\Delta_{J}(x,x^{\prime})$
$\displaystyle=\sum^{\infty}_{q=0}\Delta_{0}\left[\star\Sigma\star\Delta_{0}\right]^{q}(x,x^{\prime})$
(29) $\displaystyle=\Delta_{0}(x,x^{\prime})-\int
d^{d+1}u\,\Delta_{0}(x,u)i{\cal
B}^{\prime\prime}J({\bm{u}})\Delta_{0}(u,x^{\prime})+\ldots$ (30)
Putting this result back into Eq. (13) provides the leading, one-loop
contribution to the quantum work (another, conceptually similar way to derive this formula is via the heat kernel formalism, see e.g. Bordag:2004rx ; Franchino-Vinas:2020okl ),
$W_{\lambda}^{\rm 1-loop}=-\frac{1}{2\Lambda}\int d^{d}{{\bm{x}}}{\cal
B}^{\prime\prime}\sum^{\infty}_{q=0}\Delta_{0}\left[\star\Sigma\star\Delta_{0}\right]^{q}(x,x)\partial_{\lambda}J({\bm{x}})$
(31)
This is valid for both ${\cal B}_{m}$ and ${\cal B}_{\rm EFT}$ insertions. In
terms of Feynman diagrams Eq. (31) is simply a loop with an arbitrary number
of insertions of ${\cal B}^{\prime\prime}J$ and one insertion of
$\partial_{\lambda}J$. A term of the series is represented (without the
$\partial_{\lambda}$ variation) in Fig. 2, where each insertion is represented
by a black dot.
In the EFT case, the validity domain of the EFT implies that higher derivative terms in ${\cal B}_{\rm EFT}$ must remain small. Still, in the series of Eq. (31), the effect of lower derivative terms may become important in some regime. When the effect of these derivatives is important, convergence issues in Eq. (31) may appear, which need careful consideration taking EFT validity into account. A related example is discussed in detail in Bordag:2001ta . In the following, we will show the finiteness of
every term in $W_{\lambda}^{\rm 1-loop}$ and simply assume that the overall
series is convergent.
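As a minimal numerical sketch of the Born series (our illustration, not part of the derivation), one can discretize the static one-dimensional analogue of Eq. (27), $(-\partial_{x}^{2}+m^{2}+gJ(x))\Delta_{J}=\delta(x-x^{\prime})$ with $g=1/\Lambda$, and compare the truncated series of source insertions to the exact dressed propagator obtained by direct matrix inversion. All grid parameters and the slab-shaped source below are illustrative choices.

```python
import numpy as np

# Toy check of the Born series: dressed propagator of a static 1D problem
# with a finite-density slab source, series vs direct matrix inversion.
N, h, m, g = 400, 0.05, 1.0, 0.5           # grid size, spacing, mass, coupling
x = (np.arange(N) - N // 2) * h
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / h**2
K0 = -lap + m**2 * np.eye(N)               # free operator -d^2/dx^2 + m^2
J = np.diag(np.where(np.abs(x) < 1.0, 1.0, 0.0))  # slab source of unit density

G0 = np.linalg.inv(K0) / h                 # free lattice propagator
G_exact = np.linalg.inv(K0 + g * J) / h    # dressed propagator, exact

# Born series: G = sum_q G0 [(-g J) G0]^q, one factor of h per insertion integral
G_born, term = np.zeros_like(G0), G0.copy()
for q in range(80):
    G_born += term
    term = term @ (-g * J) @ G0 * h        # add one more source insertion

print("max |exact - Born| =", np.abs(G_exact - G_born).max())
```

The printed deviation shrinks rapidly with the truncation order, illustrating for this weakly coupled toy example the convergence assumed above.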
Figure 2: A sample one-loop diagram in the presence of an arbitrary source.
Black dots represent insertions of $-i{\cal B}^{\prime\prime}J({\bm{x}})$.
Each black dot is integrated over the support of the source. Under an
infinitesimal deformation of the source ($\lambda\to\lambda+d\lambda$), the
corresponding variation (i.e. $\partial_{\lambda}$) of this diagram
contributes to the one-loop quantum work Eq. (31).
### $W_{\lambda}$ is Finite (EFT Case)
Our proof of Eq. (21) uses the fact that the effective mass is a relevant operator that becomes negligible at short distances. In contrast, the insertions from ${\cal B}_{\rm EFT}$ correspond to irrelevant operators, hence the same reasoning cannot apply: the operators become more important at short distance. The
solution to this apparent puzzle is that the EFT in its domain of validity is
necessarily weakly coupled, hence instead of using a non-perturbative argument
one can use the series representation Eq. (31) to prove finiteness.
Let us verify finiteness term-by-term. We single out a term from Eq. (31) and
introduce point-splitting, ${\cal
B}^{\prime\prime}\Delta_{0}\left[\star\Sigma\star\Delta_{0}\right]^{q}(x,x_{\epsilon})$,
with $x_{\epsilon}=x+\epsilon$. Our goal is to show that the divergent piece
in this quantity is independent of $x$. The term is
${\cal
B}^{\prime\prime}\Delta_{0}\left[\star\Sigma\star\Delta_{0}\right]^{q}(x,x_{\epsilon})=(-i)^{q}\left(\prod^{q}_{i=1}\int
d^{d}{\bm{\mu}}_{i}J({\bm{\mu}}_{i})\int dt_{i}\right)\prod^{q}_{i=0}{\cal
B}^{\prime\prime}\Delta_{0}(\mu_{i},\mu_{i+1})\bigg{|}_{\mu_{0}=x,\mu_{q+1}=x_{\epsilon}}$
(32)
It is understood that one of the blocks of derivatives in ${\cal B}^{\prime\prime}$ acts to the left and the other acts to the right.
The divergence in the diagram defined by Eq. (32) occurs when all the
positions coincide. This is more easily verified in momentum space, in which
case there is a single loop integral over the internal momentum flowing around
the loop. The divergence is tied to the large momentum for all the
propagators, which in position space corresponds to the limit of coincident
endpoints. We present the explicit momentum space calculation in App. C. The
divergent piece of Eq. (32) is $(-i)^{q}c_{J}^{q}L^{\rm div}_{q,\epsilon}$
where $c_{J}$ is finite and $L^{\rm div}_{q,\epsilon}$ is the divergent part.
The key point is that $L^{\rm div}_{q,\epsilon}$ is position-independent.
Putting this piece into the definition of the quantum work Eq. (31) gives the
divergent piece of the quantum work,
$W_{\lambda}^{\rm
1-loop,div}=-\frac{1}{2\Lambda}\sum_{q=0}^{\infty}(-ic_{J})^{q}L^{\rm
div}_{q,\epsilon}\int d^{d}{\bm{x}}\partial_{\lambda}J\,.$ (33)
The remaining integral corresponds exactly to the variation of the total particle number of the source under the deformation, appearing in the integral form of
the conservation equation Eq. (10). Thus if the equation of conservation Eq.
(10) is satisfied, then Eq. (33) vanishes.
It follows that, upon conservation of matter in the source, for any
deformation and finite $n$ the quantum work is finite:
$W_{\lambda}^{\rm 1-loop,div}=0\,.$ (34)
The incompressible version of this finiteness property trivially follows, like
for Eq. (23).
### The Thin Shell Limit
So far we have considered a generic source as an arbitrary volume in
$d$-dimensional space. Here we investigate a subset of sources for which the
support is a thin shell approaching a codimension-one hypersurface.
We denote the source by
$J_{\eta}=n({\bm{x}}){\bm{1}}_{\mathcal{S}_{\eta},\lambda}({\bm{x}})$ where $\eta$
parametrizes the small width of the shell. For $\eta\to 0$ the support of the
shell tends to a hypersurface denoted by $\mathcal{S}$. To avoid any ambiguity
we always keep $\eta$ small but nonzero in the following calculations. The
volume element can be split as
$\int_{\mathcal{S}_{\eta}}d^{d}{\bm{x}}\overset{{\rm
small}\,\eta}{=}\int_{\mathcal{S}}d\sigma({\bm{x}})\int_{\rm width}dx_{\perp}$
(35)
where the $x_{\perp}$ coordinate parametrizes the direction normal to
$\mathcal{S}$. The boundary of ${\cal S}_{\eta}$ can also be decomposed as
$\partial{\cal S}_{\eta\to 0}={\cal S}_{\rm in}\cup{\cal S}_{\rm out}$ (36)
where ${\cal S}_{\rm in,out}$ are the two hypersurfaces bounding the volume
enclosed by $\cal S_{\eta}$ in the limit $\eta\to 0$. The propagator in the
presence of the thin shell is denoted by $\Delta_{\cal S}(x,x^{\prime})$. The
density can be chosen to scale with $\eta$ such that the surface density $\eta n$ remains finite for $\eta\to 0$, i.e. $\eta n={\rm const}$. In the
following, the deformation of the source is kept arbitrary. All quantities
depend on the deformation parameter $\lambda$. We will drop the $\lambda$
index when appropriate.
We evaluate the quantum work for this specific class of sources, taking $\eta$
small but finite. For simplicity we consider the coupling to the source
induced via the ${\cal B}_{m}$ operator. Our evaluation involves various
manipulations of the EOMs and of the divergence theorem. We emphasize that we
do not use the conservation equation. That way we can explicitly demonstrate
later on, at the level of applications, that the conservation equation is
required to obtain finiteness of the quantum work. For clarity we split the
calculation in various steps.
##### Step 1:
Starting from the general expression of the quantum work Eq. (13), we evaluate
the $\partial_{\lambda}J_{\lambda}$ variation. We use
$\frac{{\bm{\partial}}l}{\lVert{\bm{\partial}}l\lVert}=\mathbf{n}_{\rm in}$
with $\mathbf{n}_{\rm in}$ the inward-pointing normal vector, then use the
divergence theorem $\int_{\partial{\cal S}}d\sigma({\bm{x}})\mathbf{n}_{\rm
out}\cdot{\bm{f}}({\bm{x}})=\int_{{\cal
S}}d^{d}{\bm{x}}{\bm{\partial}}\left({\bm{f}}({\bm{x}})\right)$ with
$\mathbf{n}_{\rm out}=-\mathbf{n}_{\rm in}$. We obtain
$W_{{\cal
S}_{\eta},\lambda}=-\frac{1}{2\Lambda}\int_{\mathcal{S}_{\eta}}d^{d}{\bm{x}}\,\Delta_{\cal
S}({\bm{x}},{\bm{x}})\partial_{\lambda}n_{\lambda}(x)-\frac{1}{2\Lambda}\int_{\mathcal{S}_{\eta}}d^{d}{\bm{x}}\,{\bm{\partial}}\left[{\bf
L}n_{\lambda}(x)\Delta_{\cal S}(x,x)\right]\,.$ (37)
##### Step 2: (Using the equations of motion)
We will further simplify the second term of Eq. (37) by observing that it can
be related to discontinuities determined from the equation of motion. In order
to proceed we introduce the notation for derivatives acting on either the
first or second argument of the propagator,
${\bm{\partial}}_{1}\Delta(x,x^{\prime})\equiv{\bm{\partial}}_{x}\Delta(x,x^{\prime})$,
${\bm{\partial}}_{2}\Delta(x,x^{\prime})\equiv{\bm{\partial}}_{x^{\prime}}\Delta(x,x^{\prime})$.
The coincident-point propagator is regularized via point-splitting using
$x_{\epsilon}=x+\epsilon$ where the shifted point is taken to belong to ${\cal
S}$ when $x\in{\cal S}$.
In the thin shell limit, the second term in Eq. (37) takes the form
$-\frac{1}{2\Lambda}\eta\,\int_{\cal S}d\sigma({\bm{x}})\left({\bm{\partial}}_{1}[{\bf L}(x)n(x)\Delta_{\cal S}(x,x_{\epsilon})]+{\bf L}(x)n(x){\bm{\partial}}_{2}[\Delta_{\cal S}(x,x_{\epsilon})]\right)(1+O(\eta))$ (38)
We used the volume element Eq. (35) and that the integrand is continuous over
$\mathcal{S}_{\eta}$. In the first term of Eq. (38) the vector ${\bf L}$ is
kept inside the derivative for further convenience.
The EOM is
${\cal D}_{x}\Delta_{\cal
S}(x,x^{\prime})+\frac{1}{\Lambda}J_{\eta}({\bm{x}})\Delta_{\cal
S}(x,x^{\prime})=-i\delta^{d+1}(x-x^{\prime})$ (39)
where ${\cal D}_{x}=\square_{x}+m^{2}$. Below, the $\delta^{d+1}$ term always vanishes due to point splitting. Each of the terms in Eq. (38) can be expressed
using an appropriate derivative of the EOM, with the remaining endpoint set to
an appropriate value.
The first term in Eq. (38) is obtained by multiplying the EOM with ${\bf
L}({\bm{x}})$ then applying ${\bm{\partial}}_{x}$. We then integrate across
the normal coordinate and set the remaining endpoint of the propagator
$x^{\prime}$ to coincide with $x$ in the transverse coordinates. This gives
the identity
$-\frac{\eta}{\Lambda}{\bm{\partial}}_{1}\left[{\bf L}n\Delta_{\cal
S}(x,x_{\epsilon})\right](1+O(\eta))\overset{{\rm
small}\,\eta}{=}\left[\int_{\rm width}dx_{\perp}{\bm{\partial}}_{x}\left[{\bf
L}({\bm{x}})\square_{x}\Delta_{\cal
S}(x,x^{\prime})\right]\right]_{x^{\prime}\to x_{\epsilon}}$ (40)
The fundamental mass contributes at $O(\eta)$ and is thus neglected. After
integrating over the transverse coordinates (i.e. applying $\int_{\cal
S}d\sigma({\bm{x}})$), the l.h.s. of Eq. (40) coincides with the first term of
Eq. (38).
The second term in Eq. (38) is obtained by applying
${\bm{\partial}}_{x^{\prime}}$ to the EOM then contracting with ${\bf
L}({\bm{x}}^{\prime})$. The subsequent steps are the same as above and the
result is
$-\frac{\eta}{\Lambda}{\bf L}n{\bm{\partial}}_{2}\left[\Delta_{\cal
S}(x,x_{\epsilon})\right](1+O(\eta))\overset{{\rm
small}\,\eta}{=}\left[\int_{\rm width}dx_{\perp}{\bf
L}({\bm{x}}^{\prime}){\bm{\partial}}_{x^{\prime}}\left[\square_{x}\Delta_{\cal
S}(x,x^{\prime})\right]\right]_{x^{\prime}\to x_{\epsilon}}\,.$ (41)
Using the identities (40) and (41) in Eq. (38) one can eliminate the presence
of the density in favor of d’Alembertians.
##### Step 3:
Finally we use the divergence theorem on the right hand side of both Eqs. (40)
and (41). In Eq.(41) the divergence theorem turns the ${\square}_{x}$ into
$\mathbf{n}\cdot{\bm{\partial}}$. (Notice that $x^{\prime}$ is set to $x_{\epsilon}$ only outside of the integral, hence the $x^{\prime}$ dependence of the integrand is irrelevant when applying the divergence theorem.) Replacing
these results in Eq. (37) we obtain the final form for the quantum work on a
thin shell,
$\displaystyle W_{\lambda}^{\cal S}=$
$\displaystyle-\frac{1}{2}\int_{\mathcal{S}_{\rm in}\cup\mathcal{S}_{\rm
out}}d\sigma({\bm{x}})\left(\,n_{i}L_{j}\,\partial^{i}_{1}\partial^{j}_{2}\Delta_{\cal
S}(x,x_{\epsilon})+\mathbf{n}\cdot{\bf L}\,\square_{1}\Delta_{\mathcal{S}}(x,x_{\epsilon})\right)$
$\displaystyle-\frac{1}{2\Lambda}\int_{\mathcal{S}_{\eta}}d^{d}{\bm{x}}\,\Delta_{\cal
S}(x,x_{\epsilon})\partial_{\lambda}n_{\lambda}({\bm{x}})+O(\eta)$ (42)
The volume term in the second line encodes the variation of the number density
of the source under the deformation. Notice that we have not used the
conservation equation yet in deriving Eq. (42). The conservation equation must
ensure that the quantum work is finite, along the lines of Eq. (21). This will be
exemplified in Sec. 5.
### The Dirichlet Limit
We have so far assumed that $\eta n\equiv n_{\cal S}$ is finite. $n_{\cal S}$
can be understood as the number density on the hypersurface $\cal S$. In this
subsection we take the limit $n_{\cal S}\to\infty$. In this limit we obtain
that the propagator vanishes anywhere inside ${\cal S}_{\eta}$, including on
the boundaries ${\cal S}_{\rm in}$, ${\cal S}_{\rm out}$, i.e. we have
$\Delta_{\mathcal{S}}(x,x^{\prime})=0$ for any $x^{\prime}$ and any $x\in{\cal S}_{\eta}$. On the other hand the derivatives normal to the boundary do not vanish on
the boundary, i.e. $\partial_{1}^{\perp}\Delta_{\cal
S}(x,x^{\prime})|_{x\in{\cal S}_{\rm in,out}}\neq 0$. (These properties are shown at the level of the EOM using field continuity. We integrate the EOM on any domain crossing ${\cal S}_{\eta}$ and use the divergence theorem. This makes the well-known “jump” in the normal derivatives appear,
$\partial_{1}^{\perp}\Delta_{\cal S}(x,x^{\prime})|_{x\in{\cal S}_{\rm out}}-\partial_{1}^{\perp}\Delta_{\cal S}(x,x^{\prime})|_{x\in{\cal S}_{\rm in}}=\frac{n_{\cal S}}{\Lambda}\Delta_{\cal S}(x,x^{\prime})|_{x\in{\cal S}}\times(1+O(\eta))\,.$ (43)
The propagator is continuous everywhere, hence on the r.h.s. it is enough to simply write $x\in{\cal S}$ without further detail.)
Due to the requirement of continuity of the propagator, the discontinuity of
derivatives must remain finite. As a result, taking the limit $n_{\cal
S}\to\infty$ implies that $\Delta_{\cal S}(x,x^{\prime})|_{x\in{\cal S}}\to 0$
for any $x^{\prime}$. In summary, the limit $n_{\cal S}\to\infty$ amounts to
a Dirichlet boundary condition for the propagator.
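A minimal illustration of the jump condition and of the Dirichlet limit (our example, in one spatial dimension, with a static shell of strength $\mu=n_{\cal S}/\Lambda$ located at $x=0$): solving $(-\partial_{x}^{2}+m^{2}+\mu\,\delta(x))\Delta(x,x^{\prime})=\delta(x-x^{\prime})$ gives
$\Delta(x,x^{\prime})=\frac{1}{2m}e^{-m|x-x^{\prime}|}-\frac{\mu}{1+\frac{\mu}{2m}}\,\frac{1}{4m^{2}}\,e^{-m(|x|+|x^{\prime}|)}\,,$
whose normal derivative jumps by $\mu\,\Delta(0,x^{\prime})$ across the shell, in agreement with Eq. (43), and which satisfies $\Delta(0,x^{\prime})=\frac{1}{2m}\,\frac{e^{-m|x^{\prime}|}}{1+\frac{\mu}{2m}}\to 0$ in the limit $\mu\to\infty$, realizing the Dirichlet boundary condition.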
We now apply this Dirichlet limit to the quantum work Eq. (42). The second
surface term in the first line of Eq. (42) vanishes in the Dirichlet limit
since the second endpoint belongs to the boundary and has no derivative acting
on it. In contrast the first surface term involves derivatives on both
endpoints and thus does not vanish in the Dirichlet limit. This term further
simplifies: the first derivatives of $\Delta(x,x^{\prime})$ across ${\cal S}$
are discontinuous only in the normal coordinate, while they are continuous in
the other directions, therefore
$n_{i}L_{j}\,\partial^{i}_{1}\partial^{j}_{2}\Delta_{\cal
S}(x,x_{\epsilon})=L_{\perp}\,\partial^{\perp}_{1}\partial^{\perp}_{2}\Delta_{\cal
S}(x,x_{\epsilon})$. We see that only the normal component of the deformation
flow $L_{\perp}=\mathbf{n}\cdot{\bf L}$ contributes in the Dirichlet limit.
We can then make contact with the scalar stress-energy tensor
$T^{\mu\nu}=\partial^{\mu}\phi\partial^{\nu}\phi-\frac{1}{2}\eta^{\mu\nu}(\partial_{\rho}\phi\partial^{\rho}\phi-m^{2}\phi^{2})$.
Namely, when considering the normal component $T^{\perp\perp}$, we recognize
that the difference of the time-ordered expectation value $\langle
T^{\perp\perp}\rangle$ between ${\cal S}_{\rm out}$ and ${\cal S}_{\rm in}$ is
$\left[\langle T^{\perp\perp}\rangle\right]^{{\cal S}_{\rm out}}_{{\cal
S}_{\rm
in}}=\frac{1}{2}\left[\langle\partial^{\perp}\phi\partial^{\perp}\phi\rangle\right]^{{\cal
S}_{\rm out}}_{{\cal S}_{\rm
in}}=\frac{1}{2}\left[\partial_{1}^{\perp}\partial_{2}^{\perp}\Delta_{\cal
S}(x,x)\right]^{{\cal S}_{\rm out}}_{{\cal S}_{\rm in}}$ (44)
Notice that there is a contribution from the normal derivatives inside the
isotropic $\eta_{\mu\nu}$ term. The derivatives in the transverse directions
do not contribute due to continuity. It follows that the quantum work on a
thin shell in the Dirichlet limit can be expressed using the stress-energy
tensor as
$W_{\lambda}^{\cal S}=-\int_{\mathcal{S}_{\rm in}\cup\mathcal{S}_{\rm
out}}d\sigma({\bm{x}})\,L_{\perp}\,\langle\Omega|T^{\perp\perp}|\Omega\rangle-\frac{1}{2\Lambda}\int_{\mathcal{S}_{\eta}}d^{d}{\bm{x}}\,\Delta_{\cal
S}(x,x_{\epsilon})\partial_{\lambda}n_{\lambda}({\bm{x}})+O(\eta)$ (45)
This is, of course, not a coincidence. This contribution to the quantum work
reproduces exactly the difference of stress-energy tensors that is used to
compute the Casimir forces or pressures on thin shells, see e.g. Milton_Sphere
. The second term, which is the new term arising in our calculation, must
ensure that any spurious divergence arising from the first term cancels out
upon requiring matter conservation of the source, as dictated by the
finiteness property Eq. (21). We will exemplify this in the case of the
Dirichlet sphere in Sec. 5.
## Scalar Quantum Forces between Rigid Bodies
Figure 3: Sample Feynman diagrams for a scalar field in the presence of two
extended sources. (i): A generic one-loop contribution. (ii): “Tadpole” loops
vanishing under $\partial_{\lambda}$. (iii): Casimir limit (strong coupling to the sources). (iv): Casimir-Polder limit (weak coupling to the sources).
### Rigid Bodies
In this section we choose a specific shape for the source and the deformation.
The source is assumed to be the compound of two rigid bodies $J=J_{1}+J_{2}$
with number densities $n_{1,2}$. We assume that the $J_{2}$ source moves
rigidly with respect to $J_{1}$. The deformation flow ${\bf L}$ thus reduces
to a constant vector over $J_{2}$ and vanishes elsewhere. In this particular
case the ${\bf L}$ factors out in $W_{\lambda}$ and we can talk about the
quantum force ${\bf F}_{1\to 2}$ between $J_{1}$ and $J_{2}$. Using Eq. (31)
the general expression for the quantum work is expressed as
$W_{\lambda}^{\rm 1-loop}={\bf L}\cdot{\bf F}_{1\to 2}=-\frac{1}{2\Lambda}\int
d^{d}{\bm{x}}\sum^{\infty}_{q=0}\Delta_{0}\left[\star\Sigma\star\Delta_{0}\right]^{q}(x,x){\bf
L}\cdot{\bm{\partial}}J_{2}(x)$ (46)
where $\Sigma=-\frac{i}{\Lambda}(J_{1}+J_{2})\delta^{d+1}(x-y)$. We will then
evaluate the general formula Eq. (46) in specific limits. An arbitrary term of
the series is represented in Fig. 3i.
### Vanishing of Tadpoles
In Eq. (46), each insertion of $\Sigma$ contains both $J_{1}$ and $J_{2}$. Let
us focus on subterms involving only $J_{2}$. Such terms amount to the
generalization of “tadpole” diagrams for an extended source, here $J_{2}$.
Using integration by parts and the fact that the propagators $\Delta_{0}$ in empty spacetime are translation invariant, i.e. are functions of $u-v$ only, one
can check that any such tadpole term is equal to minus itself and thus
vanishes. This makes sense since such terms do not involve the $J_{1}$ source
at all and should not contribute to the force between the two sources. A
tadpole diagram is represented in Fig. 3ii. These diagrams contain divergent
contributions to the quantum work. Therefore the vanishing of the tadpole
diagrams ensures that the perturbative finiteness Eq. (34) is satisfied.
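As an explicit check, spelling out the argument for the simplest case, the $q=0$ term of Eq. (46) reads
$-\frac{1}{2\Lambda}\int d^{d}{\bm{x}}\,\Delta_{0}(x,x)\,{\bf L}\cdot{\bm{\partial}}J_{2}({\bm{x}})=-\frac{1}{2\Lambda}\,\Delta_{0}(x,x)\,{\bf L}\cdot\int d^{d}{\bm{x}}\,{\bm{\partial}}J_{2}({\bm{x}})=0\,,$
since $\Delta_{0}(x,x)$ is $x$-independent and the remaining integrand is a total derivative of a compactly supported function.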
### The Scalar Casimir-Polder Limit
Let us assume that the values of $\frac{n_{1,2}}{\Lambda}$ are small enough
such that the leading contributions come from the first terms in the series.
The first term of the series has $q=0$. This term amounts to a tadpole
diagram, hence it vanishes as shown in Sec. 4.2. We thus turn to the $q=1$
term. This term is
$W^{\rm 1-loop}_{q=1}=\frac{i}{2\Lambda^{2}}\int
d^{d}{\bm{u}}\,d^{d+1}v\,\Delta_{0}(u,v)J(v)\Delta_{0}(v,u){\bf
L}\cdot{\bm{\partial}}J_{2}(u)\,.$ (47)
We then decompose $J(u)=J_{1}(u)+J_{2}(u)$. The $\int
J_{2}\Delta_{0}^{2}{\bm{\partial}}J_{2}$ piece is again a tadpole and thus
vanishes as shown in Sec. 4.2: using integration by parts, one can check that it is equal to minus itself. The remaining term is
$W^{\rm 1-loop}_{q=1}=\frac{i}{2\Lambda^{2}}\int
d^{d}{\bm{u}}d^{d+1}v\,\Delta_{0}(u,v)J_{1}(v)\Delta_{0}(v,u){\bf
L}\cdot{\bm{\partial}}J_{2}(u)\,.$ (48)
Upon integrating by parts (or evaluating ${\bm{\partial}}J_{2}$ and using the
divergence theorem), we recognize the variation of a bubble diagram that
corresponds precisely to the definition of the Casimir-Polder potential
$V_{\rm CP}(R)$ between two point sources. Namely,
$W^{\rm 1-loop}_{q=1}=-n_{1}n_{2}\int d^{d}{\bm{u}}d^{d}{\bm{v}}\,{\bf
L}\cdot{\bm{\partial}}V_{\rm CP}(u-v)\,$ (49)
where we have defined the potential
$V_{\rm CP}(r)=-\frac{i}{2\Lambda^{2}}\int dt\left(\Delta_{0}(0;r,t)\right)^{2}=-\frac{1}{32\pi^{3}\Lambda^{2}}\frac{m}{r^{2}}K_{1}(2mr)\,.$ (50)
We can see the explicit dependence on the fundamental mass $m$ in this result.
Details of the explicit evaluation in the last step can be found in e.g. Ref.
Costantino:2019ixl .
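As a quick numerical sanity check of Eq. (50) and of its limiting behaviours (our sketch; the values of $m$ and $\Lambda$ are illustrative):

```python
import numpy as np
from scipy.special import k1

m, Lam = 1.0, 1.0  # illustrative scalar mass and coupling scale

def V_CP(r):
    """Scalar Casimir-Polder potential between point sources, Eq. (50)."""
    return -m * k1(2.0 * m * r) / (32.0 * np.pi**3 * Lam**2 * r**2)

# Short distance (m r << 1): K_1(z) ~ 1/z gives V -> -1/(64 pi^3 Lam^2 r^3)
r = 1e-4
print(V_CP(r) * 64.0 * np.pi**3 * Lam**2 * r**3)   # -> approx -1

# Long distance (m r >> 1): Yukawa-squared falloff with log-slope ~ 2m
r1, r2 = 5.0, 6.0
print(np.log(V_CP(r1) / V_CP(r2)) / (r2 - r1))     # -> approx 2m, up to power-law corrections
```

The massless limit thus reproduces a $1/r^{3}$ potential, while a finite mass cuts the potential off exponentially at distances $r\gtrsim 1/(2m)$.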
We can notice that Eq. (49) amounts to integrating over the $J_{1,2}$ supports the elementary quantum work between two point sources, given by the directional derivative of the potential, i.e. $W^{\rm 1-loop}_{q=1}=n_{1}n_{2}\int d^{d}{\bm{u}}d^{d}{\bm{v}}\,W_{\rm CP}$ with $W_{\rm CP}=-{\bf L}\cdot{\bm{\partial}}V_{\rm CP}$.
A diagram in the Casimir-Polder limit is shown in Fig. 3iv. In this limit the
quantum loop penetrates the whole bodies.
### The Scalar Casimir Limit
A different limit is obtained when the effective mass inside the sources,
$m^{2}+\frac{n_{1,2}({\bm{x}})}{\Lambda}$, is large enough for the dressed
propagator to be repelled from the sources. This occurs whenever ${\cal
D}_{x}\Delta(x,x^{\prime})\ll J({\bm{x}})\Delta_{J}(x,x^{\prime})$ for any
$x,x^{\prime}$. In this Dirichlet limit, the EOM Eq. (27) then gives
$J({\bm{x}})\Delta_{J}(x,x^{\prime})\approx 0$, which enforces
$\Delta_{J}(x,x^{\prime})\approx 0$ for any $x$ in the source and any
$x^{\prime}$ in the whole space. The propagator vanishes on the boundary,
$\Delta_{J}(x\in\partial J,x^{\prime})\approx 0$, by continuity. Therefore the
propagator has Dirichlet boundary conditions in this regime. We refer to this
limit as the scalar Casimir limit since it reproduces a Dirichlet problem for
which the quantum force is usually referred to as “Casimir” even when the
underlying theory is not electrodynamics, e.g. here a massive scalar field
theory.
A sample diagram from the Casimir limit is shown in Fig. 3iii. In this limit
the quantum loop does not penetrate inside the bodies.
Summarizing, we have shown that our formula for the one-loop quantum work
Eq.(46) interpolates between the scalar Casimir-Polder force and the scalar
Casimir force. The two limits are realised in different physical regimes,
which essentially depend on the competition between the magnitudes of the
effective mass and of the d’Alembertian in the EOM. Qualitatively, we can say
that for fixed fundamental parameters and densities, the Casimir-Polder limit
emerges in the short separation regime while the Casimir limit emerges in the
large separation regime.
## The Dirichlet Sphere
In this section we consider the Casimir pressure on a spherical shell in the
presence of a scalar field with Dirichlet boundary condition on the shell,
i.e. a “Dirichlet sphere”. This is a standard problem that we revisit here. An early calculation can be found in Boyer:1968uf , and an expression
describing the Casimir pressure on a $d-1$-sphere has been derived in
Milton_Sphere , which will be our main reference.
### Is the quantum pressure on the sphere finite in QFT?
The QFT prediction obtained in Milton_Sphere features a “spurious”
divergence. Here we discuss why such a divergence is not expected in light of
the finiteness properties derived in Sec. 2. We have emphasised in Sec. 2 the
role of matter conservation to obtain a finite quantum work. In the case of
the sphere, the deformation flow describing the radial deformation of the
sphere is not divergence-free, ${\bm{\partial}}\cdot{\bf L}_{\rm Sphere}\neq 0$ for arbitrary space dimension $d$ except $d=1$. If the sphere density were assumed to be constant, then the matter of the sphere would not be conserved
under the deformation and it would then follow that neither of the finiteness
properties (21) and (23) would apply. Hence a divergent piece would show up in
the expression of the quantum pressure as an artefact. This is exactly what we
find below. We also show how this spurious divergence exactly cancels when
matter conservation is used.
### Review and Discussion
We first review the result from Milton_Sphere . The radius of the sphere is
denoted by $a$. The pressure on the sphere obtained in this reference can be
put in the form
$\frac{F_{S_{d-1}}}{A_{d-1}}=\frac{F^{\rm fin}_{S_{d-1}}}{A_{d-1}}+\frac{F^{\rm div}_{S_{d-1}}}{A_{d-1}}$ (51)
where
$\displaystyle\frac{F^{\rm fin}_{S_{d-1}}}{A_{d-1}}=i\frac{1}{a^{d}}\sum^{\infty}_{h=0}c_{h}\int^{\infty}_{-\infty}d\omega\,\omega\frac{d}{d\omega}\log\left(\omega aJ_{h-1+\frac{d}{2}}(|\omega|a)H^{(1)}_{h-1+\frac{d}{2}}(|\omega|a)\right)$ $\displaystyle\frac{F^{\rm div}_{S_{d-1}}}{A_{d-1}}=i\frac{1-d}{a^{d}}\sum^{\infty}_{h=0}c_{h}\int^{\infty}_{-\infty}d\omega\,.$ (52)
and the coefficients are given by
$c_{h}=\frac{(h-1+\frac{d}{2})\Gamma(h+d-2)}{2^{d}\pi^{\frac{d+1}{2}}h!\Gamma(\frac{d-1}{2})}.$ (53)
$A_{d-1}$ is the area of the $d-1$-sphere with radius $a$. All the terms are
real upon rotation to Euclidean time; here for convenience we keep the Lorentzian integrals. The result Eq. (51) is obtained from the difference
between the radial component of the stress tensor on each side of the sphere.
In our language, this is equivalently obtained by considering the sphere as a
rigid source with infinite density and deforming it along the radial flow
${\bf L}={\bm{e}}_{r}$.
The $F_{S}^{\rm fin}$ term is finite for any $d$ different from positive even
integers. The divergence showing up when $d=2k$, $k=1,2\ldots$ is a familiar
feature in QFT that can be treated in the framework of renormalization. This
physical divergence is not our focus here. In contrast, the $F_{S}^{\rm div}$
is infinite for any $d\neq 1$. This is not the behavior of a meromorphic function of $d$, and should thus draw our attention.
In Milton_Sphere it was nicely observed that for $d<1$ the
$\sum^{\infty}_{h=0}c_{h}$ series vanishes identically. A proposal was then
made to remove the $F^{\rm div}$ term using an analytical continuation of $d$
to the $d<1$ region. In summary the argument amounts to stating that since in
this region one has $\sum^{\infty}_{h=0}c_{h}=0$ identically, any term
constant in $h$ arising from the integral is irrelevant since it is multiplied
by zero and can thus be subtracted. Notice that the proposed argument is
different from usual dimensional regularization which simply turns the
quantity of interest into a meromorphic function of $d$.
An issue remains, however, as $\sum^{\infty}_{h=0}c_{h}=0$ in the $d<1$ region
only guarantees that the divergent term $F^{\rm div}$ takes the indefinite
form “$0\times\infty$”. As a result, even if one requires the
$\sum^{\infty}_{h=0}c_{h}$ sum to vanish for any $d>1$ by analytical
continuation of zero, the proposed method remains inconclusive in the sense
that $0\times\infty$ is undefined. For this reason, we will see that the use
of our formalism (namely the quantum work on a Dirichlet shell Eq. (42)
together with matter conservation) allows one to bypass these ambiguities and
make the result finite.
### The Quantum Work on the Dirichlet Sphere
We now proceed with our calculation of the quantum work on the Dirichlet
sphere with radial deformation. We first define the source and the
deformation. We consider a source with the geometry of a spherical shell of
width $\eta$ and with a finite number density $n$ in the $\eta\to 0$ limit,
$J_{\eta}({\bm{x}})=\frac{n_{\lambda}}{\eta}{\bm{1}}_{a-\frac{\eta}{2}<r<a+\frac{\eta}{2}}({\bm{x}})\,$
(54)
The propagator in the presence of this source is denoted by
$\Delta_{S}(x,x^{\prime})$. The boundary of the shell for small $\eta$ is
identified with $\partial S_{\eta\to 0}=S_{\rm in}\cup S_{\rm out}$ where
$S_{\rm in,out}$ are the $d-1$-spheres with respective radii $r=a_{-},a_{+}$.
The deformation of the source is parametrized by $\lambda$ and changes the
sphere radius such that $a_{\lambda+d\lambda}=a_{\lambda}+Ld\lambda$.
Equivalently, in terms of the support function, the deformation is
$l_{\lambda+d\lambda}(r)=l_{\lambda}(r-Ld\lambda)$. The deformation flow
vector is thus ${\bf L}=L{\bm{e}}_{r}$. Using the conservation equation Eq.
(9) we can easily derive the variation of density corresponding to such a
deformation. We find
$\partial_{\lambda}n=-\frac{d-1}{a}nL\,.$ (55)
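Explicitly, this follows from Eq. (9) in one line: for ${\bf L}=L{\bm{e}}_{r}$ with constant $L$ and a uniform density $n_{\lambda}$ on the shell,
$\partial_{\lambda}n=-{\bm{\partial}}\cdot\left(n{\bf L}\right)=-\frac{nL}{r^{d-1}}\,\partial_{r}\left(r^{d-1}\right)\Big{|}_{r=a}=-\frac{d-1}{a}\,nL\,.$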
We can now compute the quantum work. Since we are interested in a sphere we
can readily use the general formula for the quantum work on a thin shell Eq.
(42). In the Dirichlet limit the quantum work reads
$\displaystyle W_{S_{d-1}}=$
$\displaystyle-\frac{1}{2}A_{d-1}L\left[\partial_{r^{\prime}}\partial_{r^{\prime\prime}}\Delta_{S}(r^{\prime},t;r^{\prime\prime},t)|_{r^{\prime}=r^{\prime\prime}\in
S_{\rm
out}}-\partial_{r^{\prime}}\partial_{r^{\prime\prime}}\Delta_{S}(r^{\prime},t;r^{\prime\prime},t)|_{r^{\prime}=r^{\prime\prime}\in
S_{\rm in}}\right]$
$\displaystyle-\frac{1}{2\Lambda}\int_{S}d^{d}{\bm{x}}\Delta_{S}(x,x)\partial_{\lambda}n_{\lambda}\Big{|}_{n\to\infty}\,.$
(56)
The first term in Eq. (56) matches precisely the quantity computed in
Milton_Sphere . Namely we find
$\displaystyle W_{S_{d-1}}=L\left(F^{\rm fin}_{S_{d-1},[26]}+F^{\rm div}_{S_{d-1},[26]}\right)-\frac{1}{2\Lambda}\int_{S}d\sigma({\bm{x}})\Delta_{S}(x,x)\partial_{\lambda}n_{\lambda}\Big{|}_{n\to\infty}\,$
(57)
where the components of the force are defined in Eqs. (5.2).
The remaining task is to evaluate the last term in Eq. (57), which encodes the
variation of density. To this end we first have to evaluate the propagator in
the presence of the sphere with finite density. This is done by recomputing
the propagator in Milton_Sphere , replacing the two Dirichlet boundary
conditions on $S$ by two boundary conditions obtained from integrating the EOM
on the shell enclosing $r=a$ and using the divergence theorem. Introducing the
Fourier transform in time
$\Delta(x,x)=\int\frac{d\omega}{2\pi}\Delta_{\omega}({\bm{x}},{\bm{x}})$, we
find for the propagator at coinciding points
$\Delta_{\omega}(a,a)\overset{{\rm large}\,n}{\to}i\frac{\Lambda}{na^{d-1}}\sum^{\infty}_{h=0}\frac{(h-1+\frac{d}{2})\Gamma(h+d-2)}{2^{d-2}\pi^{\frac{d-1}{2}}h!\Gamma(\frac{d-1}{2})}=i\frac{\Lambda}{na^{d-1}}\sum^{\infty}_{h=0}4\pi c_{h}$ (58)
with $c_{h}$ defined in Eq. (53). Using the variation of density dictated by
matter conservation Eq. (55) we then obtain
$\frac{1}{2\Lambda}\int_{S}d\sigma({\bm{x}})\Delta_{S}(x,x)\partial_{\lambda}n_{\lambda}\Big{|}_{n\to\infty}=i\frac{1-d}{a^{d}}L\sum^{\infty}_{h=0}c_{h}\int^{\infty}_{-\infty}d\omega=LF^{\rm div}_{S_{d-1},[26]}\,.$ (59)
We see that this contribution from the variation of density exactly cancels
the divergent piece in Eq. (57). It follows that our final result for the
quantum work on the Dirichlet sphere amounts to the finite part of the result
from Milton_Sphere , namely
$W_{S_{d-1}}=LF^{\rm fin}_{S_{d-1},[26]}\,.$ (60)
The fact that the term from the variation of density cancels the $F^{\rm div}_{S_{d-1},[26]}$ divergence upon the requirement of matter
conservation is nontrivial. This cancellation provides a check of our
expression for the quantum work on a thin shell Eq. (42) and illustrates how
the finiteness of the quantum work can manifest itself concretely.
## Planar Geometry
In this section we evaluate the quantum force in simple planar geometries.
While the geometric setup in itself is very well-known, the exact results for
the specific quantum force considered here in the case of a massive scalar
have not been presented elsewhere. This detailed calculation serves to
illustrate how the quantum work remains finite as a result of matter
conservation. It also exhibits the transition between the scalar Casimir and
Casimir-Polder regimes. The plate-point result obtained here is also key for
the search for new particles via atom interferometry that is presented in
section 7.
### Force Between two Plates
We focus on the classic Casimir setup with two plates facing each other and
separated by a distance $\ell$ along the $z$ axis. The deformation we consider
amounts to a variation of $\ell$. We compute the force induced by a massive
scalar field with bilinear coupling to the constituents of the plates.
The quantum field $\Phi$ is described by the Lagrangian
${\cal
L}=\frac{1}{2}(\partial_{\mu}\Phi)^{2}-\frac{m^{2}}{2}\Phi^{2}-\frac{1}{2\Lambda^{2}}\Phi^{2}J({\bf
x})\,.$ (61)
In this application, it is convenient to define $J$ to be the mass density
distribution of the sources. In this setting there are five regions along $z$,
with the plates supported on regions $1$ and $3$. The source $J$ is defined as
$J(z)=\rho_{1}\Theta(-\bar{z}_{\infty}<z<0)+\rho_{3}\Theta(\ell<z<z_{\infty})\,.$
(62)
The width of the plates is taken to be much larger than the separation,
$z_{\infty},\bar{z}_{\infty}\gg\ell$. The fact that the plates actually end
instead of continuing to infinity, i.e.
$|z_{\infty}|,|\bar{z}_{\infty}|<\infty$, is crucial in order to ensure matter
conservation, and therefore that the quantum work is finite as dictated by Eq.
(23). The effective mass can be written as
$\displaystyle
m^{2}(z)=m_{\infty}^{2}\Theta(z<-\bar{z}_{\infty})+m_{1}^{2}\Theta(-\bar{z}_{\infty}<z<0)+m_{2}^{2}\Theta(0<z<\ell)$
$\displaystyle+m_{3}^{2}\Theta(\ell<z<z_{\infty})+m_{\infty}^{2}\Theta(z>z_{\infty})\,,$
(63)
where
$m_{\infty,2}^{2}=m^{2}$ (64)
and
$m_{1,3}^{2}=\frac{\rho_{1,3}}{\Lambda^{2}}+m^{2}\,.$ (65)
In Eq. (61) the boundary operator amounts to
${\cal B}_{m}(\Phi)=\frac{\Phi^{2}}{2\Lambda^{2}}\,.$ (66)
Derivative operators such as ${\cal
B}(\Phi)=\frac{(\partial\Phi)^{2}}{2\Lambda^{4}}$ could be treated along the
same lines.
The deformation of the source that we consider amounts to shifting the right
plate (i.e. region 3, the second term in Eq. (62)). This corresponds to an
infinitesimal shift of $\ell$ and $z_{\infty}$,
$\ell_{\lambda+d\lambda}=\ell_{\lambda}+Ld\lambda$,
$z_{\infty,\lambda+d\lambda}=z_{\infty,\lambda}+Ld\lambda$. Notice that the
left plate is left untouched, hence $\bar{z}_{\infty}$ does not vary.
Equivalently, in terms of the support function of the right plate, this is
described by $l_{\lambda+d\lambda}(z)=l_{\lambda}(z-Ld\lambda)$. This fully
specifies the geometry and the deformation.
Since the source moves rigidly, the formula for the quantum work Eq. (46)
applies and a quantum force can be defined from the quantum work, $W^{\rm
1-loop}=LF_{\rm quant}$. In the following we determine $F_{\rm quant}$.
#### Propagator
The Feynman propagator in position-momentum space, defined by
$\Delta(x,x^{\prime})=\int\frac{d^{3}p}{(2\pi)^{3}}e^{ip^{\alpha}(x-x^{\prime})_{\alpha}}\Delta_{p}(z,z^{\prime})$
(67)
with $(p^{\alpha},z)$, $\alpha=(0,1,2)$, has been calculated in the presence
of a piece-wise constant mass in Brax:2018grq . Defining the $z$-dependent
momentum with the Feynman $\epsilon$-prescription
$\omega(z)=\sqrt{(p_{0})^{2}-(p_{1})^{2}-(p_{2})^{2}+i\epsilon-m^{2}(z)},$
(68)
the homogeneous equation of motion becomes
$(\partial_{z}^{2}+\omega^{2}(z))\Phi(z)=0$ (69)
whose solutions in a given region $i$ are simply $e^{\pm i\omega_{i}z}$. The
solution everywhere can be found by continuity of the solution and its
derivative at each of the interfaces. The propagator is obtained by solving
the equations of motion in the five regions and matching them at these interfaces.
Details can be found in the appendix of Brax:2018grq . For the calculation of
the quantum force, we will only need to evaluate the propagator at coinciding
points on the two boundaries of the right-hand plate, $z=\ell$ and
$z=z_{\infty}$.
#### Quantum Force
The deformation of the source term is found to be
$\partial_{\lambda}J=-(\rho_{3}-\rho_{2})L\left(\delta(z-\ell)-\delta(z-z_{\infty})\right)\,.$
(70)
Putting this variation back into the definition of the quantum work, we can
factor out the $L$ term and obtain the quantum force as defined in Eq. (46).
As a result the quantum force is given by
$\displaystyle{F}_{\rm quant}$
$\displaystyle=\frac{1}{2}(m_{3}^{2}-m_{2}^{2})\int
d^{2}{\bm{x}}_{\parallel}(\Delta(x^{\alpha},\ell;x^{\alpha},\ell)-\Delta(x^{\alpha},z_{\infty};x^{\alpha},z_{\infty}))$
$\displaystyle=\frac{1}{2}(m_{3}^{2}-m_{2}^{2})\int
d^{2}{\bm{x}}_{\parallel}\int\frac{d^{3}p}{(2\pi)^{3}}(\Delta_{p}(\ell,\ell)-\Delta_{p}(z_{\infty},z_{\infty}))$
(71)
with ${\bm{x}}_{\parallel}=(x_{1},x_{2})$. The cancellation between the
divergent parts of the two propagators at coinciding points is evident in
Eq. (71), using the fact that the divergence is location-independent. In the
second line we have introduced the propagator in position-momentum space. Here
we have explicitly
$\Delta_{p}(\ell,\ell)-\Delta_{p}(z_{\infty},z_{\infty})=\frac{(\omega_{1}+\omega_{2})+e^{2i\ell\omega_{2}}(\omega_{2}-\omega_{1})}{(\omega_{1}+\omega_{2})(\omega_{2}+\omega_{3})-e^{2i\ell\omega_{2}}(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})}-\frac{1}{\omega_{2}+\omega_{3}}\,.$
(72)
with $\omega_{i}=\sqrt{(p_{\alpha})^{2}+i\epsilon-m^{2}_{i}}$.
The surface integral $\int d^{2}{\bm{x}}_{\parallel}=S$ is factored out, hence
defining a pressure. The final expression for the quantum pressure between the
two plates (i.e. regions $1$ and $3$) is then
$\frac{F_{\rm
quant}}{S}=\int_{0}^{\infty}\frac{dkk^{2}}{2\pi^{2}}\frac{\gamma_{2}(\gamma_{2}-\gamma_{1})(\gamma_{2}-\gamma_{3})}{(\gamma_{2}-\gamma_{1})(\gamma_{2}-\gamma_{3})-e^{2\ell\gamma_{2}}(\gamma_{1}+\gamma_{2})(\gamma_{2}+\gamma_{3})}\,\,$
(73)
after a Wick rotation with $\omega_{i}=i\gamma_{i}=i\sqrt{k^{2}+m^{2}_{i}}$.
This is the general expression of the plate-plate quantum pressure in the
presence of a scalar coupled quadratically to matter.
In the limit of large density-induced effective mass
$m_{1,3}\rightarrow\infty$, the general expression Eq. (73) becomes
$\frac{F_{\rm
quant}}{S}=\int_{0}^{\infty}\frac{dkk^{2}}{2\pi^{2}}\frac{\gamma_{2}}{1-e^{2\ell\gamma_{2}}}\,.$
(74)
In this limit the effective mass in the plates becomes so large that the field
obeys Dirichlet boundary conditions. Accordingly, Eq. (74) matches exactly the
Casimir pressure from a massive scalar with Dirichlet boundary conditions. For
a massless scalar the integral can be explicitly performed and we retrieve
$\frac{F_{\rm quant}}{S}=-\frac{\pi^{2}}{480\ell^{4}}\,.$ (75)
This is the classic Casimir pressure for a massless scalar field.
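As a side illustration (ours, not part of the original derivation), Eq. (74) is easy to evaluate numerically; the following minimal Python sketch, assuming natural units, checks it against the closed form Eq. (75):
```python
# Numerical check of the Dirichlet-limit pressure Eq. (74) against the
# massless closed form Eq. (75). Natural units; `ell` is the plate
# separation and `m2` the fundamental mass in the gap.
import numpy as np
from scipy.integrate import quad

def pressure(ell, m2=0.0):
    def integrand(k):
        g2 = np.sqrt(k**2 + m2**2)
        # gamma_2/(1 - exp(2*ell*gamma_2)) rewritten in an overflow-safe form
        return k**2 / (2 * np.pi**2) * g2 * np.exp(-2 * ell * g2) / np.expm1(-2 * ell * g2)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

ell = 1.0
print(pressure(ell))               # ~ -0.020562
print(-np.pi**2 / (480 * ell**4))  # Eq. (75): ~ -0.020562
```
Running the same routine with $m_{2}>0$ exhibits the exponential suppression of the pressure at separations $\ell\gg\frac{1}{2m_{2}}$.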
In the limit of small density-induced effective mass defined as
$(m^{2}_{1,3}-m_{2}^{2})/m^{2}_{2}\ll 1$, i.e. when the contribution of the
density to the effective mass is small with respect to the fundamental mass,
the pressure becomes
$\frac{F_{\rm
quant}}{S}=-(m^{2}_{1}-m_{2}^{2})(m^{2}_{3}-m_{2}^{2})\int_{0}^{\infty}\frac{dkk^{2}}{2\pi^{2}}\frac{e^{-2\ell\gamma_{2}}}{16(\gamma_{2})^{3}}\,.$
(76)
We checked that this corresponds exactly to the Casimir-Polder force
integrated over regions 1 and 3. We will see how this calculation can be
cross-checked in the next example.
In summary, we have verified that both the scalar Casimir and scalar Casimir-
Polder pressures are recovered as limits of the more general expression of the
plate-plate quantum pressure Eq. (73). Qualitative considerations are given in
the next example.
#### On Finiteness
In Eq. (71) we have observed the cancellation between the divergent piece of
$\Delta_{p}(\ell,\ell)$ and $\Delta_{p}(z_{\infty},z_{\infty})$ in the
integrand. This cancellation makes the expression for the force finite. To
illustrate how the presence of the $\Delta_{p}(z_{\infty},z_{\infty})$ term is
tied to matter conservation, we consider the following counterexample.
Let us imagine that we ignored the displacement of the outer edge
($z=z_{\infty}$) of the plate. This would imply that the matter of the plate
is not conserved since the plate’s width would change while the density
remains constant. In such a setup, the result would be the same as Eq. (71)
but without the $\Delta_{p}(z_{\infty},z_{\infty})$ contribution. Therefore
the expression would be infinite. This simple counterexample illustrates that,
when dropping the requirement of matter conservation, the prediction of the
force becomes infinite, i.e. property (23) does not hold.
### Force Between a Plate and a Point Source
As a third application of our formalism, we focus on the interaction between a
point particle and a plate. As before we assume the Lagrangian
${\cal
L}=\frac{1}{2}(\partial_{\mu}\Phi)^{2}-\frac{m^{2}}{2}\Phi^{2}-\frac{1}{2\Lambda^{2}}\Phi^{2}\,\,J({\bf
x})\,.$ (77)
The plate is supported on $z<0$ and has mass density $\rho_{1}$. The source is
taken to be
$J({\bf x})=\rho_{1}\Theta(-z_{\infty}<z<0)+m_{N}{\delta^{2}}({\bf
x_{\parallel}})\delta(z-\ell)\,.$ (78)
The mass of the point particle is $m_{N}$. We define the effective mass of
$\Phi$ in the plate as
$m^{2}_{1}=\frac{\rho_{1}}{\Lambda^{2}}+m^{2}\,$ (79)
which depends on the coupling to matter, $\frac{1}{\Lambda^{2}}$. The
effective mass of $\Phi$ is then piecewise constant,
$m^{2}(z)=m_{\infty}^{2}\Theta(z<-z_{\infty})+m_{1}^{2}\Theta(-z_{\infty}<z<0)+m_{2}^{2}\Theta(z>0)$
(80)
with $m_{\infty,2}=m$.
The deformation we consider is an infinitesimal shift of the point particle
position $\ell$, $\ell_{\lambda+d\lambda}=\ell_{\lambda}+Ld\lambda$. This
fully specifies the geometry.
#### Propagator
The Feynman propagator in position-momentum space $(p^{\alpha},z)$,
$\alpha=(0,1,2)$ in the presence of a piecewise constant mass has been
calculated in Brax:2018grq . The effect of the point source on the propagation
is negligible. (This can be checked by evaluating the dressed propagator
in energy-position space $(p_{0},{\bm{x}})$: in the resummed propagator, the
effect of the insertion is small within the EFT validity range, leaving the
term with one point-source insertion as the main non-vanishing contribution to
the quantum work.) For the present calculation, if the frame is chosen such
that the deformation changes the position of the point source and not of the
plate, one can safely ignore the region at $-z_{\infty}$ and thus consider the
propagator over two regions, with
$m^{2}(z)=m_{1}^{2}\Theta(z<z_{12})+m_{2}^{2}\Theta(z>z_{12})$. The propagator
is found to be
$\Delta_{p}(z,z^{\prime})=\begin{cases}\frac{e^{i\omega_{2}(z_{>}-z_{<})}}{2\omega_{2}}E_{2}(z_{<})\quad\quad
z_{12}<z_{<}\\\
\frac{e^{i(\omega_{2}(z_{>}-z_{12})-\omega_{1}(z_{<}-z_{12}))}}{\omega_{1}+\omega_{2}}\quad
z_{<}<z_{12}<z_{>}\\\
\frac{e^{i\omega_{1}(z_{>}-z_{<})}}{2\omega_{1}}E_{1}(z_{>})\quad\quad
z_{>}<z_{12}\end{cases}\,$ (81)
where
$\displaystyle E_{1}(z)=1+e^{i2(z_{12}-z)\omega_{1}}\frac{\omega_{1}-\omega_{2}}{\omega_{1}+\omega_{2}}$
$\displaystyle E_{2}(z)=1+e^{i2(z-z_{12})\omega_{2}}\frac{\omega_{2}-\omega_{1}}{\omega_{1}+\omega_{2}}\,.$ (82)
We have defined $z_{<}=\min(z,z^{\prime})$ and $z_{>}=\max(z,z^{\prime})$. We
have introduced $\omega_{i}=\sqrt{(p_{\alpha})^{2}-m^{2}_{i}+i\epsilon}$. The
$\epsilon$ prescription guarantees that the propagators decay at infinity. The
$E_{1}$, $E_{2}$ functions essentially describe how the presence of the
boundary affects the propagator with both endpoints in the same region. When
the boundary $z_{12}$ is sent to infinity, one recovers the usual expression
for a fully homogeneous space.
#### Quantum Force
The deformation of the source is
$\partial_{\lambda}J=-m_{N}L{\delta^{2}}({\bf
x_{\parallel}})\partial_{z}\delta(z-\ell)\,.$ (83)
Inserting this expression into the quantum work, one obtains, after one
integration by parts, the quantum force
$\displaystyle{F}_{\rm quant}$
$\displaystyle=-\frac{1}{2}\frac{m_{N}}{\Lambda^{2}}\partial_{z}\Delta_{J}(x^{\alpha},z;x^{\alpha},z)|_{z\to\ell}$
(84)
$\displaystyle=-\frac{1}{2}\frac{m_{N}}{\Lambda^{2}}\int\frac{d^{3}p}{(2\pi)^{3}}\partial_{z}\Delta_{J}(p;z,z)|_{z\to\ell}$
In the last line we have introduced the position-momentum space propagator.
Using Eq. (81) with $z_{12}=0$ since the plate is placed at the origin, we
have
$\displaystyle F_{\rm quant}=$
$\displaystyle-\frac{m_{N}}{\Lambda^{2}}\frac{1}{2}\int\frac{d^{3}{p}}{(2\pi)^{3}}\frac{1}{2\omega_{2}}\partial_{z}E_{2}(z)|_{z=\ell}$
$\displaystyle=$
$\displaystyle-(m_{1}^{2}-m_{2}^{2})\frac{m_{N}}{\Lambda^{2}}\frac{1}{4\pi^{2}}\int
dk\,k^{2}\frac{e^{-2\ell\gamma_{2}}}{(\gamma_{1}+\gamma_{2})^{2}}\,$ (85)
where we have performed a Wick rotation and introduced
$\gamma_{i}=\sqrt{k^{2}+m^{2}_{i}}$. This is the general expression of the
plate-point quantum force in the presence of a scalar coupled quadratically to
matter. Diagrammatically, the particle will interact with the plate via loops
starting at the point particle, going into the plate and coming back to the
point particle.
In the limit of large density-induced effective mass $m_{1}\to\infty$, the
force takes the form
$F_{\rm quant}=-\frac{m_{N}}{\Lambda^{2}}\frac{1}{4\pi^{2}}\int
dk\,k^{2}e^{-2\ell\gamma_{2}}\,.$ (86)
This is the limit for which the density is so large that the field is repelled
by the plate and the field obeys Dirichlet boundary conditions at the boundary
of the plate. We refer to this case as the Casimir limit. In the massless case
we obtain
$F_{\rm quant}=-\frac{m_{N}}{16\pi^{2}\Lambda^{2}}\frac{1}{\ell^{3}}\,.$ (87)
In the limit of small density-induced effective mass we expand in
$(m^{2}_{1}-m_{2}^{2})\ll m_{i}^{2}$ and obtain
$F_{\rm
quant}=-(m_{1}^{2}-m_{2}^{2})\frac{m_{N}}{\Lambda^{2}}\frac{1}{4\pi^{2}}\int
dk\,k^{2}e^{-2\ell\gamma_{2}}\frac{1}{4\gamma_{2}^{2}}$ (88)
In the massless case we obtain
$F_{\rm quant}=-\frac{m_{1}^{2}\,m_{N}}{32\pi^{2}\Lambda^{2}}\frac{1}{\ell}\,.$ (89)
This is the Casimir-Polder limit. As a cross check, in the Appendix we show
that this limit is exactly recovered by integrating the point-point Casimir-
Polder potential over the extended source.
In summary, we have verified that both the scalar Casimir and scalar Casimir-
Polder forces are recovered as limits of the more general expression of the
plate-point quantum force Eq. (85). The two limits of the massless formula
Eqs. (87) and (89) make transparent that there is a transition between the two
regimes as a function of the separation $\ell$. Namely, the Casimir regime
occurs for $\ell\gg m^{-1}_{1}$ while the Casimir-Polder regime occurs for
$\ell\ll m^{-1}_{1}$, with $m^{2}_{1}=\frac{\rho_{1}}{\Lambda^{2}}$. One way
to think about this phenomenon is that, while at large distance the plate
behaves as a mirror, leading to a Casimir force, at short distance the quantum
fluctuations start penetrating the mirror. As a result the behaviour of the
Casimir force gets softened into the Casimir-Polder one at short distance.
This behaviour is confirmed numerically.
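The transition can be made concrete by tracking the local power law of the exact force Eq. (85). The following minimal Python sketch (our illustration, not from the original text) takes a massless field in natural units and drops the constant prefactor $\frac{m_{N}}{4\pi^{2}\Lambda^{2}}$, which does not affect the exponent:
```python
# Local power law d(ln|F|)/d(ln ell) of the plate-point force Eq. (85),
# for a massless field (m2 = 0) and density-induced mass m1 in the plate.
import numpy as np
from scipy.integrate import quad

def force(ell, m1):
    def integrand(k):
        gamma1 = np.sqrt(k**2 + m1**2)  # gamma_1; gamma_2 = k when m2 = 0
        return m1**2 * k**2 * np.exp(-2 * ell * k) / (gamma1 + k)**2
    val, _ = quad(integrand, 0.0, np.inf)
    return -val

def log_slope(ell, m1, h=1e-4):
    lo, hi = ell * (1 - h), ell * (1 + h)
    return (np.log(-force(hi, m1)) - np.log(-force(lo, m1))) / (np.log(hi) - np.log(lo))

m1 = 1.0
for ell in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(ell, round(log_slope(ell, m1), 3))
# slope ~ -1 for ell << 1/m1 (Casimir-Polder), ~ -3 for ell >> 1/m1 (Casimir)
```
The slope interpolates smoothly between $-1$ and $-3$ around $\ell\sim m_{1}^{-1}$, in line with Eqs. (87) and (89).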
## Bounding Quantum Forces with Atom Interferometry
### The Setting
In this section we calculate a somewhat more involved observable in the context
of atom (or neutron) interferometry experiments. Namely we compute the matter-
wave phase shift measurable in atom interferometers, which is generated in the
presence of a quantum force between the atom and a neighbouring plate. The
calculation uses the results from section 6.2.
Atom interferometry has been used to test gravity (see e.g. KasChu ; KasChu2 ;
1999Natur.400..849P ; 2006PhyS…74C..15M ) and to search for classical fifth
forces, often in the context of dark energy-motivated models Burrage:2014oza ;
Hamilton:2015zga ; Jaffe:2016fsh ; Sabulsky:2018jma . Interferometry has never
been used to search for quantum forces such as the one modelled by the scalar
field used throughout this paper. In this section we thus i) compute the phase
shift induced by a quantum force and ii) demonstrate that interferometry is a
competitive method to search for a dark field bilinearly coupled to matter.
Figure 4: Spacetime paths followed by the atoms in the interferometer. The
path in the presence of the force ($\Gamma$, solid lines) is deformed with
respect to the path in the absence of the force ($\Gamma_{0}$, dotted lines).
The pictures indicate the asymptotic behaviors (Casimir-Polder and Casimir) of
the force between the atom and the plate.
Our focus is on a setup which is essentially an adaptation of the simplest
“Kasevich-Chu” experiment (see KasChu ; KasChu2 ; 1999Natur.400..849P , and
e.g. 2006PhyS…74C..15M for a review). We describe briefly the setup and refer
to the above references and to e.g. Storey:1994oka for further details. Atom
or neutron interferometry uses the difference of phase of two coherent
wavepackets following two different spacetime paths as shown in Fig. 4. The
two paths amount to two broken worldlines, $ACB$ and $ADB$, with a change of
direction at $C$ and $D$ respectively, and with same endpoint $B$. The changes
of velocity of the wavepackets are induced by laser pulses.
We assume that the interferometry experiment is carried out along the $z$ axis
over a plate located at $z=0$. We assume that the setup is oriented
horizontally at the surface of the Earth since we are not interested in
measuring the strength of the gravity field. The phase shift is caused by the
force between the atom and the plate. Experimentally, the plate might be a
large ball whose radius is assumed to be much larger than the length of
$\Gamma$. In this setup the potential $V(z)$ is the one for the plane-point
quantum force derived in section 6.2.
Using the WKB approximation, the leading phase shift between the two paths
induced by a potential $V(z)$ is given by (see Storey:1994oka )
$\delta\phi=-\int_{\rm\Gamma_{0}}dt\ V(z(t))\ $ (90)
where $\Gamma_{0}=A_{0}C_{0}B_{0}D_{0}$ denotes the unperturbed closed path
and $z(t)$ the corresponding classical trajectory of the wavepackets. Along
each segment of $\Gamma_{0}$, the trajectories are straight,
$z(t)=z_{i}+v_{i}(t-t_{i})\,.$ (91)
The velocities along each segment of the path can in general differ. In the
present setup the time spacing between each pulse is $T$ and the velocities on
the segments of $\Gamma_{0}$ are $v_{A_{0}C_{0}}=v_{D_{0}B_{0}}=v$,
$v_{C_{0}B_{0}}=v_{A_{0}D_{0}}=v^{\prime}$, as shown in Fig. 4.
### Computing the Phase Shift
We consider the scalar model defined in Eq. (77), where the fundamental mass
of the field is denoted by $m$ and the mass density of the plate is denoted
$\rho$. The effective mass of the $\Phi$ field for $z<0$ and $z>0$ is given by
$m_{1}^{2}=\frac{\rho}{\Lambda^{2}}+m^{2}$ and $m_{2}^{2}=m^{2}$. The plane-
point force (85) derives from a potential via $F(\ell)=-\frac{\partial
V(\ell)}{\partial\ell}$, with $V(\ell)$ given by
$V(\ell)=-\frac{m_{N}\rho}{\Lambda^{4}}\frac{1}{4\pi^{2}}\int
dk\,k^{2}\frac{e^{-2\ell\,\gamma_{2}}}{2\gamma_{2}(\gamma_{1}+\gamma_{2})^{2}}\,$
(92)
where $\ell$ is the distance from the particle to the plate, here taken to be
along the $z$ direction. Let us consider one segment of the path $\Gamma_{0}$,
on which the particle evolves between times $t_{i}$ and $t_{j}$, where $i,j$
denote the endpoints of the segment. The associated phase shift is
$\displaystyle\delta\phi_{ij}$
$\displaystyle=\int_{t_{i}}^{t_{j}}dt\,\,\frac{m_{N}\rho}{\Lambda^{4}}\frac{1}{4\pi^{2}}\int
dk\,k^{2}\frac{e^{-2(z_{i}+v(t-t_{i}))\,\gamma_{2}}}{2\gamma_{2}(\gamma_{1}+\gamma_{2})^{2}}\,$
(93)
$\displaystyle=\frac{1}{v}\frac{m_{N}\rho}{\Lambda^{4}}\frac{1}{4\pi^{2}}\int
dk\,k^{2}\frac{e^{-2z_{i}\,\gamma_{2}}-e^{-2z_{j}\,\gamma_{2}}}{4\gamma^{2}_{2}(\gamma_{1}+\gamma_{2})^{2}}\,$
(94)
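For completeness, the time integral leading from Eq. (93) to Eq. (94) is elementary: along the straight segment,
$\int_{t_{i}}^{t_{j}}dt\,e^{-2(z_{i}+v(t-t_{i}))\gamma_{2}}=\frac{e^{-2z_{i}\gamma_{2}}-e^{-2z_{j}\gamma_{2}}}{2v\gamma_{2}}\,,\qquad z_{j}=z_{i}+v(t_{j}-t_{i})\,,$
which provides the extra factor of $\frac{1}{2v\gamma_{2}}$ in the second line.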
This is an exact result following from the exact expression Eq. (92). If one
instead used the approximate expressions for either the Casimir or the
Casimir-Polder regime to compute the phase shift, one would need to ensure
that all distances involved are respectively much bigger or much smaller than
the Compton wavelength in the plate, $m_{1}^{-1}$ (see section 6). Since in
the interferometry experiment the separation between the point source and the
plane varies, using either the Casimir or the Casimir-Polder approximation may
potentially give an erroneous result. The phase shift calculation provides a
concrete example of a prediction for which the use of the exact result Eq. (92)
is in general mandatory.
### Limits
The phase shift given by Eq. (94) can be further evaluated in some limits when
taking $m=0$. All approximations below have been checked numerically.
#### $m_{1}z_{i}\gg 1$, $m_{1}z_{j}\gg 1$
This case amounts to computing the phase shift in the Casimir regime. It is
obtained by approximating $\gamma_{1}+\gamma_{2}\approx m_{1}$ in the
denominator of Eq. (94). We obtain
$\displaystyle\delta\phi_{ij}$
$\displaystyle=\int^{t_{j}}_{t_{i}}dt\frac{m_{N}}{32\pi^{2}\Lambda^{2}z^{2}}$
(95)
$\displaystyle=\frac{m_{N}(z_{j}-z_{i})}{32\pi^{2}\Lambda^{2}\,v\,z_{i}z_{j}}\,=\frac{m_{N}(t_{j}-t_{i})}{32\pi^{2}\Lambda^{2}z_{i}z_{j}}\,.$
(96)
The overall factor $\Lambda^{-2}$ is characteristic of the Casimir regime.
#### $m_{1}z_{i}\ll 1$, $m_{1}z_{j}\ll 1$
This case would amount to computing the phase shift in the Casimir-Polder
regime. However, approximating $\gamma_{1}+\gamma_{2}\approx 2k$ in the
denominator gives a divergent result; therefore we have to go beyond the
Casimir-Polder approximation to obtain a finite expression. This is possible
only in our formalism: the small but nonzero effective mass in the plate
regularizes the divergence. It is obtained by taking
$\gamma_{1}+\gamma_{2}\approx 2k+\frac{m_{1}^{2}}{2k}$ in the denominator. We
obtain
$\delta\phi_{ij}=\frac{m_{N}(t_{j}-t_{i})\rho}{128\pi^{2}\Lambda^{4}}\left(\frac{1}{2}-\gamma+\frac{z_{j}\log\left(\frac{\Lambda}{z_{j}\sqrt{\rho}}\right)-z_{i}\log\left(\frac{\Lambda}{z_{i}\sqrt{\rho}}\right)}{z_{j}-z_{i}}\right)$
(97)
The overall $\Lambda^{-4}$ is characteristic of the Casimir-Polder regime.
#### $m_{1}z_{i}\ll 1$, $m_{1}z_{j}\gg 1$
In this nontrivial case we do not find an accurate approximation, only
expressions valid up to $O(1)$ uncertainty, which are nevertheless very
useful. Taking $\gamma_{1}+\gamma_{2}\approx m_{1}$ in the denominator gives
$\delta\phi_{ij}=\frac{m_{N}(t_{j}-t_{i})\rho}{16\pi^{2}z_{j}\Lambda^{3}}\,.$
(98)
Taking $\gamma_{1}+\gamma_{2}\approx 2k+m_{1}^{2}/2k$ in the denominator gives
$\delta\phi_{ij}=\frac{m_{N}(t_{j}-t_{i})\rho}{128\pi z_{j}\Lambda^{3}}\,.$
(99)
The exact result lies in between these two expressions. We can see that the
overall scaling for this case is $\Lambda^{-3}$.
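As a cross-check of the first limit, the exact $k$-integral of Eq. (94) can be compared numerically with the Casimir-regime formula Eq. (96). A minimal Python sketch (our illustration; massless field, with the common prefactor $\frac{1}{v}\frac{m_{N}\rho}{4\pi^{2}\Lambda^{4}}$ stripped from both sides):
```python
# Compare the exact k-integral of the segment phase shift Eq. (94) with
# its Casimir-regime approximation Eq. (96), for a massless field (m = 0).
import numpy as np
from scipy.integrate import quad

def I_exact(zi, zj, m1):
    def integrand(k):
        gamma1 = np.sqrt(k**2 + m1**2)  # gamma_2 = k when m = 0
        return (np.exp(-2 * zi * k) - np.exp(-2 * zj * k)) / (4 * (gamma1 + k)**2)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

def I_casimir(zi, zj, m1):
    return (1.0 / zi - 1.0 / zj) / (8 * m1**2)

m1, zi, zj = 50.0, 1.0, 2.0  # deep Casimir regime: m1*zi >> 1
print(I_exact(zi, zj, m1), I_casimir(zi, zj, m1))  # agree up to O(1/(m1*zi))
```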
### Sensitivity to New Particles
Figure 5: Bounds on the quantum force induced by a scalar field bilinearly
coupled to nucleons. Red lines correspond to the sensitivity from atom
interferometry using $a_{\rm ex}=10^{-8}\,{\rm m\,s^{-2}}$. Yellow regions correspond to
bounds from other experiments and match Ref. Brax:2017xho . Those were
computed in the Casimir-Polder approximation, except the bound from bouncing
neutrons.
The phase shift over the closed path $\Gamma_{0}$ can be written as
$\Delta\Phi=a\kappa T^{2}$, where $\kappa=M(v-v^{\prime})$ is the transferred
momentum from the laser pulses and $T$ is the period of pulses (i.e. $2T$ is
the total time between splitting ($A$) and recombination ($B$) of the
wavepackets, see Fig. 4). The coefficient $a$ has dimension of an acceleration
and can be taken as the figure of merit for the precision of the atom
interferometer. The sensitivity of existing experiments can typically reach
$a_{\rm ex}\sim 10^{-9}g=10^{-8}\,{\rm m}\,{\rm s}^{-2}$ (100)
(see e.g. 1999Natur.400..849P ), that we use as our reference value.
The predicted value of $a$ given by the quantum force in our model is easily
obtained in the different regimes discussed above. Comparing it to $a_{\rm
ex}$ we obtain an experimental bound on the parameters of the $\Phi$ field,
i.e. an exclusion region in the $(m,\Lambda)$ plane. For better comparison to
other experimental constraints we introduce
$\bar{\Lambda}=\frac{\Lambda^{2}}{m_{N}}$, which then matches the convention
in Brax:2017xho .
The result is shown in Fig. 5. In the regime relevant for the presented
sensitivities, the phase shift is dominated by the contributions from the
segments near the plate, $\delta\phi\approx\delta\phi_{AC}-\delta\phi_{AD}$.
Moreover these contributions are typically in the nontrivial regime of section
7.3.3, which depends only on $z_{C}\approx z_{D}\equiv\bar{z}$, which we refer
to as the length of the arm of the interferometer. The sensitivity greatly
increases when $\bar{z}$ decreases, reflecting the fact that the quantum force
quickly increases at short distance. The universal diagonal line corresponds to
the suppression of the quantum force due to the short Compton wavelength of
the $\Phi$ field. A different sensitivity in $a$ would amount to a change in
$\bar{z}$. All the lines are in the nontrivial regime of section 7.3.3,
because for the relevant values of $\Lambda$ the effective Compton wavelength
in the plate $m^{-1}_{1}$ is much smaller than $\bar{z}$.
Other experimental bounds are shown in Fig. 5 for comparison. In the case of
dark matter, searches via quantum forces are typically complementary to direct
detection searches: the former probe low DM masses while the latter probe high
DM masses, as illustrated by Fig. 2 in Fichet:2017bng . Besides interferometry,
another kind of experiment carried out between a particle and a plane is the
neutron bouncer, see e.g. Nesvizhevsky:2007by ; Brax:2011hb ; Brax:2013cfa ;
Jenke:2014yel ; Cronenberg:2018qxf ; Brax:2017hna . The quantum levels of the
neutrons in the gravitational field of the Earth are probed, e.g. via Rabi
oscillation techniques, which constrain the difference between energy levels.
The quantum levels of the neutrons are classified by an integer $n$ and have
the energies $E_{n}=m_{N}gz_{0}\epsilon_{n}$, where $m_{N}$ is the neutron’s
mass, $g$ the acceleration of gravity on Earth and
$z_{0}=(2m_{N}^{2}g)^{-1/3}$ is a characteristic scale of the order of
$6\,\mu$m. The numbers $-\epsilon_{n}$ are the zeros of the Airy function
${\rm Ai}$, the wave functions being
$\psi_{n}(z)\propto{\rm Ai}(\frac{z}{z_{0}}-\epsilon_{n})$ with $z$ the
distance to the plate. The perturbation to the energy levels due to the
anomalous plate-neutron interaction is given by $\delta
E_{n}=\langle\psi_{n}|V(z)|\psi_{n}\rangle$, which is constrained by $|\delta
E_{3}-\delta E_{1}|\leq 10^{-14}\,{\rm eV}$ Cronenberg:2018qxf . The
subsequent bound obtained using the particle-plane force Eq. (85) is shown in
Fig. 5. This bound is subdominant with respect to the other bounds, even when
using the Casimir-Polder approximation Brax:2017hna . These other bounds have
been so far computed only in the Casimir-Polder regime. In light of the present
work, we can see that, while this is exact for molecular bounds and neutron
scattering, the Casimir-Polder regime is in general only an approximation in
the presence of macroscopic bodies. Using the exact prediction would tend to
weaken the bound (see section 6).
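For the neutron bouncer just described, the matrix elements $\delta E_{n}=\langle\psi_{n}|V(z)|\psi_{n}\rangle$ are straightforward to evaluate numerically. A minimal Python sketch (our illustration, in units where $z_{0}=1$; the Casimir-limit potential $V(z)=-C/z^{2}$ with an arbitrary constant $C$ is used as a stand-in for the exact potential Eq. (92)):
```python
# delta E_n = <psi_n|V|psi_n> for the gravitational quantum states
# psi_n(z) ~ Ai(z/z0 - eps_n), in units where z0 = 1. The Casimir-limit
# potential V(z) = -C/z^2 is used purely for illustration.
import numpy as np
from scipy.special import airy, ai_zeros
from scipy.integrate import quad

eps = -ai_zeros(3)[0]  # eps_n: minus the first three zeros of Ai

def delta_E(n, C=1.0):
    psi2 = lambda z: airy(z - eps[n])[0] ** 2
    norm, _ = quad(psi2, 0.0, 40.0)                        # normalization
    num, _ = quad(lambda z: psi2(z) * (-C / z**2), 1e-9, 40.0)
    return num / norm

print(abs(delta_E(2) - delta_E(0)))  # |delta E_3 - delta E_1|, the bounded combination
```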
We can thus conclude that atom interferometry turns out to be a rather
competitive method to search for quantum dark forces, provided the
interferometer arms (or at least those near the plate, i.e. $AC$ and $AD$)
have length below $\bar{z}\sim 10$ cm. This is a reasonable length scale from
the experimental viewpoint, which already appears in recent experiments such
as the one of Hamilton:2015zga .
## Conclusion
How does a classical body respond to an arbitrary deformation in the quantum
vacuum? This question can be tackled by introducing the notion of quantum work
$W$, an observable quantity which goes beyond the quantum force ${\bm{F}}$ and
reduces to $W={\bm{F}}\cdot{\bf L}$ in specific cases when the deformation
flow ${\bf L}$ is simple enough, e.g. when rigid bodies are displaced with
respect to each other. In this paper we have studied the quantum work induced
by a massive scalar field bilinearly coupled to macroscopic bodies made of
classical matter. Unlike for abstract sources, the number densities of such
bodies must satisfy the local conservation of matter. We have shown that the
prediction of the quantum work turns out to be finite, up to physical,
renormalizable divergences, upon requiring conservation of matter. This
result applies to any shape and geometry, either rigid or deformable. This is
shown both for a renormalizable (possibly strongly-coupled) theory, and in a
more general effective field theory setup allowing for higher derivative
interactions between the scalar and matter.
Our result about the finiteness of the quantum work readily explains why the
QFT prediction of quantum forces sometimes features seemingly “unremovable”
divergences for certain geometries. A key example is the quantum work felt by
a Dirichlet sphere under a radial deformation. The radial deformation flow is
not divergence-free and thus the sphere density must vary to ensure matter
conservation. Not taking this into account implies that the matter of the sphere
is not conserved, thus the finiteness property is not ensured, and therefore
the expression of the quantum force can have a spurious divergence. We have
explicitly verified that taking into account the variation of the density
removes the spurious divergence in the case of the Dirichlet sphere.
When specializing to rigid bodies, the quantum work leads to a quantum force
that reduces to the scalar Casimir and Casimir-Polder forces as special
limits. There is a clear diagrammatic understanding of this interpolation. In
the short distance regime, the main contribution comes from the loop with only
one coupling to each body, which corresponds to Casimir-Polder. In the long
distance regime, loops with an arbitrary number of insertions contribute, but
their resummation amounts to having a Dirichlet condition on the boundary of
the bodies, which corresponds to the traditional Casimir setup.
We have computed the quantum forces in plate-plate and plate-point geometries.
If, for example, the scalar has zero fundamental mass, in the plate-plate
geometry the force behaves as $\frac{1}{\ell^{4}}$ at long distance (Casimir)
and as $\frac{1}{\ell^{2}}$ at short distance (Casimir-Polder). For plate-
point geometry the force behaves as $\frac{1}{\ell^{3}}$ at long distance
(Casimir) and as $\frac{1}{\ell}$ at short distance (Casimir-Polder). For
nonzero fundamental mass the two limits can still exist, but the Casimir force
drops exponentially for separations $\ell\gg\frac{1}{2m}$.
These results have concrete applications for physical setups aimed at
searching for light dark particles. Such particles are ubiquitous in dark
matter models, dark energy models, and in extensions of the Standard Model.
Here we have briefly illustrated an application of the point-plane quantum
force for atom interferometers. In this type of experiment, a non-relativistic
atom with trajectory in the vicinity of a large sphere or a plane is sensitive
to the existence of a new force, which induces a phase shift that can be
computed in the WKB approximation. We have computed the atomic phase shift
induced by the scalar quantum force, and pointed out that the full result (as
opposed to a simple Casimir or Casimir-Polder approximation) is required to
obtain a sensible prediction. Using inputs from existing experiments, we show
that atom interferometry is likely to become a competitive method to search
for light dark particles bilinearly coupled to matter, provided that the
interferometer arms are shorter than $\sim 10$ cm, as summarized in Fig. 5.
This is a reasonable length scale from the experimental viewpoint, which
already appears in setups such as the one of Hamilton:2015zga .
In future work we will further investigate and revisit the constraints and
sensitivities of future experiments to macroscopic dark quantum forces with
Casimir-like behaviors.
## Acknowledgments
SF thanks Daniel Davies for a useful discussion. This work has been supported
by the São Paulo Research Foundation (FAPESP) under grants #2011/11973,
#2014/21477-2 and #2018/11721-4, by CAPES under grant #88887.194785, and by
the University of California, Riverside.
## Appendix A Derivation of the Plate-Point Casimir-Polder Potential
The Casimir-Polder potential between a particle and a plate can be obtained by
direct calculation in the weak coupling regime (i.e. the limit of small
density in the plate). We start by computing the potential between two point
sources. The corresponding source term is
${\cal
L}\supset-\frac{1}{2}\Phi^{2}\left(\frac{m_{a}}{\Lambda^{2}}\delta^{3}({\bm{x}}-{\bm{x}}_{a})+\frac{m_{N}}{\Lambda^{2}}\delta^{3}({\bm{x}}-{\bm{x}}_{b})\right)\,.$
(101)
We recall that this is the non-relativistic approximation of the 4-point
interaction
${\cal
L}\supset-\frac{1}{2}\Phi^{2}\left(\frac{m_{a}}{\Lambda^{2}}\bar{\psi}_{a}\psi_{a}+\frac{m_{N}}{\Lambda^{2}}\bar{\psi}_{b}\psi_{b}\right)\,$
(102)
between the scalar and two fermion species. The scattering amplitude with two
insertions of the two particle-antiparticle pairs leads to a bubble diagram
which reads
$i{\cal M}=-\frac{m_{N}m_{a}}{\Lambda^{4}}\,4m_{N}m_{a}\,\frac{1}{2}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{e^{i\omega_{2}|z_{1}-z_{2}|}}{2\omega_{2}}\frac{e^{i\omega^{\prime}_{2}|z_{1}-z_{2}|}}{2\omega^{\prime}_{2}}$
(103)
where $\omega_{2}=\sqrt{k^{2}-m_{2}^{2}}$,
$\omega^{\prime}_{2}=\sqrt{(k+p)^{2}-m_{2}^{2}}$. The factor of $\frac{1}{2}$
is a symmetry factor and the external fermions are such that their
nonrelativistic wavefunctions are normalised as $\bar{u}_{a}u_{a}=2m_{a},\
\bar{u}_{b}u_{b}=2m_{N}$. The non-relativistic scattering potential is given
by
$\displaystyle\tilde{V}(p,z_{1}-z_{2})=-\frac{\cal{M}}{4m_{a}m_{N}}$
$\displaystyle=-i\frac{m_{N}m_{a}}{\Lambda^{4}}\frac{1}{2}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{e^{i\omega_{2}|z_{1}-z_{2}|}}{2\omega_{2}}\frac{e^{i\omega_{2}^{\prime}|z_{1}-z_{2}|}}{2\omega_{2}^{\prime}}\,.$ (104)
The spatial potential is then obtained from the Fourier transform of
$\tilde{V}$,
$V\left(\sqrt{(z_{1}-z_{2})^{2}+{\bm{x}}_{\parallel}^{2}}\right)=\int\frac{d^{2}{\bm{p}}_{\parallel}}{(2\pi)^{2}}\tilde{V}({\bm{p}}_{\parallel},z_{1},z_{2})e^{i{\bm{p}}_{\parallel}\cdot{\bm{x}}_{\parallel}}$
(105)
where ${\bm{x}}_{\parallel}=(x_{1},x_{2})$.
We now consider an ensemble of $N_{1}$ particles of the species $a$ in a volume
$V_{1}$ with a number density $n_{1}=\frac{N_{1}}{V_{1}}$. We average the
potential over the plate with a separation $\ell$ to the point particle as
$V(\ell)=n_{1}\int
d^{2}{\bm{x}}_{\parallel}\int_{-\infty}^{0}dz_{1}\int\frac{d^{2}{\bm{p}}_{\parallel}}{(2\pi)^{2}}\tilde{V}({\bm{p}}_{\parallel},z_{1},\ell)e^{i{\bm{p}}_{\parallel}\cdot{\bm{x}}_{\parallel}}.$
(106)
The transverse integrals simplify and the potential becomes simply
$V(\ell)=n_{1}\int_{-\infty}^{0}dz_{1}\tilde{V}(0,z_{1},\ell)=-n_{1}\frac{m_{a}m_{N}}{\Lambda^{4}}\,\int\frac{d^{3}k_{E}}{(2\pi)^{3}}\frac{e^{-2\gamma_{2}\ell}}{16(\gamma_{2})^{3}}$
(107)
after a Wick rotation. Using $m_{1}^{2}-m_{2}^{2}=n_{1}m_{a}/\Lambda^{2}$, we
find
$V(\ell)=-(m_{1}^{2}-m_{2}^{2})\frac{m_{N}}{\Lambda^{2}}\,\int\frac{d^{3}k_{E}}{(2\pi)^{3}}\frac{e^{-2\gamma_{2}\ell}}{16(\gamma_{2})^{3}}\,.$
(108)
Finally, the force is obtained by taking the derivative with respect to $\ell$,
$F=-\partial_{\ell}V=-(m_{1}^{2}-m_{2}^{2})\frac{m_{N}}{\Lambda^{2}}\,\int\frac{d^{3}k_{E}}{(2\pi)^{3}}\frac{e^{-2\gamma_{2}\ell}}{8(\gamma_{2})^{2}}\,.$
(109)
This reproduces Eq. (88).
## Appendix B On Divergences from the Heat Kernel Expansion
In this appendix we review how the heat kernel expansion allows for a proof of
the finiteness of the quantum work at one-loop level under the assumptions
that (i) the bodies are incompressible and (ii) the fluctuation is repelled
from the sources so that we can set boundary conditions at the source
surfaces, analogously to the Dirichlet limit described in section 4.4. We will
then see that the argument fails when condition (i) is dropped i.e.
considering a compressible body, which signals the need for an approach that
includes matter conservation.
We essentially review the exposition from bordag2009advances . We consider two
rigid bodies such that $J=J_{1}+J_{2}$ as in section 4. First of all, let us
rewrite the scalar action as
$S(\Phi)=S_{2}(\Phi)+S_{\rm int}(\Phi)$ (110)
where we have included the $J$-dependent contribution to the effective mass in
the quadratic part of the action. This action reads
$S_{2}(\Phi)=-\frac{1}{2}\int d^{4}x\Phi\Box_{J}\Phi$ (111)
where $\Box_{J}$ is the operator which includes the effective mass term. In
the renormalizable case, this is simply
$\Box_{J}=\Box+m^{2}+\frac{J}{\Lambda}\,.$ (112)
In the EFT case, higher order derivatives are present.
We assume weak coupling. When the sources are static, the quantum vacuum
energy reads
$E(J)=\frac{i}{T}\ln
Z(J)=-\frac{i}{2T}{\rm{Tr}}\ln\Box_{J}+\frac{i}{T}W_{2}(J)$ (113)
where the first contribution is the one-loop contribution to the vacuum
diagrams, and $W_{2}(J)$ is the sum over all connected vacuum diagrams at two
loops and higher. Here we focus on the one-loop contribution, which is
independent of other possible interactions (e.g. polynomial self-interactions
of $\Phi$). The one-loop piece can be evaluated using the heat kernel
$K_{J}=e^{-t\Box_{J}}$ and its trace as
${\rm Tr}\ln\Box_{J}=-\int_{0}^{\infty}\frac{dt}{t}{\rm Tr}\,K_{J}(t)\,.$
(114)
See Vassilevich:2003xt for a conceptual review on heat kernel methods. The
trace of the heat kernel has a $t\to 0$ expansion
${\rm Tr}\,K_{J}(t)=\frac{1}{(2\pi
t)^{3/2}}(a_{0}+a_{1/2}t^{1/2}+a_{1}t+a_{3/2}t^{3/2}+\dots)$ (115)
We can see that the heat kernel coefficients $a_{n/2},\ n=0,\dots,3$ are
responsible for the divergences of $E(J)$. These coefficients are universal
local quantities Vassilevich:2003xt ; bordag2009advances and can be expressed
as volume integrals over the support of the $J_{i}$ sources and surface
integrals over their boundaries, $\partial J_{i}$. The first coefficient
simply amounts to the volume of the support of the fluctuation, which is the
space outside of the sources,
$a_{0}=V-V_{1}-V_{2}$ (116)
with $V_{1,2}={\rm Vol}(J_{1,2})$ and $V$ the volume of the total space.
Depending on the geometric setup, these various volumes may be finite or
infinite; this is a minor detail in the argument.
We can now vary the quantum vacuum energy under some deformation flow. We use
the notation of section 2. In the case of a rigid deformation flow (i.e.
satisfying ${\bm{\partial}}\cdot{\bf L}=0$), e.g. two bodies moving apart, the
coefficients are independent of the relative position of the bodies
Fulling:2003zx ; bordag2009advances . As a result all the divergences cancel
when computing the quantum work by varying the relative position of the
bodies. This is easily illustrated at the level of the $a_{0}$ coefficient,
for which we simply have
$\partial_{\lambda}a_{0}=-\partial_{\lambda}\int_{J_{\lambda}}d^{d}{{\bm{x}}}=-\int_{J_{\lambda}}d^{d}{{\bm{x}}}\,{\bm{\partial}}\cdot{\bf
L}=0\,,$ (117)
where we have used that $\partial_{\lambda}V=0$. Therefore under the
assumption of incompressibility the quantum work is finite. This is a version
of the classic argument given in bordag2009advances .
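For instance, a rigid translation has ${\bf L}={\rm const}$ and thus ${\bm{\partial}}\cdot{\bf L}=0$, whereas the radial flow ${\bf L}=L{\bm{e}}_{r}$ of section 5 has ${\bm{\partial}}\cdot{\bf L}=\frac{(d-1)L}{r}\neq 0$.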
In contrast, in the case of a deformation flow for which
${\bm{\partial}}\cdot{\bf L}\neq 0$, i.e. dropping the assumption of
incompressibility, we can see that $\partial_{\lambda}a_{0}\neq 0$, which then
results in a divergence in the quantum work. There is no parameter in the
Lagrangian into which this divergence could be absorbed; therefore there is no
hope to renormalize this divergence away. The only way out we found is to let
the number density of the sources vary in such a way that matter conservation in
the source is satisfied. As shown in section 2, 3.1 and exemplified in section
5, this is the condition that ensures that all possible divergences in the
quantum work vanish.
## Appendix C The Loop Divergence from Momentum Space
We are interested in the loop integral given in Eq. (32),
$I=\left(\prod^{q}_{i=1}\int d^{d}{\bm{\mu}}_{i}J({\bm{\mu}}_{i})\int
dt_{i}\right)\prod^{q}_{i=0}{\cal
B}^{\prime\prime}\Delta_{0}(\mu_{i},\mu_{i+1})\bigg{|}_{\mu_{0}=x,\mu_{q+1}=x_{\epsilon}}.$
(118)
We introduce the $(d+1)$-dimensional Fourier transform of the propagators
defined as
$\Delta_{0}(\mu_{i},\mu_{i+1})=\int\not{d}^{d+1}{k}\,e^{ik\cdot(\mu_{i+1}-\mu_{i})}\tilde{\Delta}_{0}(k)\,.$
(119)
We introduce the $d$-dimensional Fourier transform of the sources,
$J({\bm{\mu}}_{i})=\int\not{d}^{d}{\bf k}e^{i{\bf
k}\cdot{\bm{\mu}}_{i}}\tilde{J}({\bf k})\,.$ (120)
We have defined $\not{d}k=\frac{dk}{2\pi}$. The contraction between the
$(d+1)$-vectors is done using the Minkowski metric $k\cdot
y=\eta_{\mu\nu}k^{\mu}y^{\nu}$ with $k^{\mu}=(\omega,\bf k)$. The loop
integral becomes
$I=\int\not{d}^{d+1}{k}_{0}\left(\prod_{i=1}^{q}\int\not{d}^{d}{\bf
p}_{i}\tilde{J}({\bf
p}_{i})\right)\left(\prod_{i=0}^{q}\tilde{B}^{\prime\prime}(k_{i-1},k_{i})\tilde{\Delta}_{0}(k_{i})\right)e^{ik_{q}\cdot
x_{\epsilon}-ik_{0}x}$ (121)
with ${\bf p}_{i}={\bf k}_{i}-{\bf k}_{i-1}$ the momentum transferred to the
source.
Performing the time and position integrals we obtain an integral over the
$(d+1)$-momentum $k_{0}$ running through the loop. The derivative operators in
real space become simply multiplicative factors depending on the two momenta
entering each vertex, $\tilde{B}^{\prime\prime}(k_{i-1},k_{i})$. In the EFT
case this factor contains at least two powers of the momenta.
Since the source $J$ has compact support, the Riemann-Lebesgue lemma
guarantees that $\tilde{J}({\bf p})$ vanishes for large momenta and in fact
falls off for $|{\bf p}|\gtrsim R_{J}^{-1}$. This implies a smooth cutoff on
the value of ${\bf p}$. On the other hand the divergence in $I$ occurs for
arbitrarily large loop momentum $k_{0}$. Since the $\bf p$ are bounded, at
large loop momentum we have $k_{0}\simeq k_{1}\simeq\ldots\simeq k_{q}$, thus
the divergent part of the diagram is $\bf p$-independent and is only a
function of $({x}_{\epsilon}-{x})=\epsilon$. Writing
$c_{J}=\int\not{d}^{d}{\bf p}_{i}\tilde{J}({\bf p}_{i})$, we have the
structure
$I\approx c_{J}^{q}L^{\rm div}_{\epsilon}\,$ (122)
where $L^{\rm div}_{\epsilon}$ is the position-independent divergent quantity.
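As a concrete illustration of this smooth cutoff, for a uniform ball of radius $R_{J}$ in $d=3$ one has $\tilde{J}({\bf p})\propto\left(\sin(|{\bf p}|R_{J})-|{\bf p}|R_{J}\cos(|{\bf p}|R_{J})\right)/|{\bf p}|^{3}$, which indeed falls off for $|{\bf p}|\gtrsim R_{J}^{-1}$.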
## References
* (1) H. B. G. Casimir and D. Polder, The influence of retardation on the london-van der waals forces, Phys. Rev. 73 (Feb, 1948) 360–372.
* (2) H. B. G. Casimir, On the Attraction Between Two Perfectly Conducting Plates, Indag. Math. 10 (1948) 261–263.
* (3) K. A. Milton, The Casimir effect: Recent controversies and progress, J. Phys. A 37 (2004) R209, [hep-th/0406024].
* (4) G. L. Klimchitskaya, U. Mohideen, and V. M. Mostepanenko, The Casimir force between real materials: Experiment and theory, Rev. Mod. Phys. 81 (2009) 1827–1885, [arXiv:0902.4022].
* (5) A. W. Rodriguez, P.-C. Hui, D. P. Woolf, S. G. Johnson, M. Lončar, and F. Capasso, Classical and fluctuation-induced electromagnetic interactions in micron-scale systems: designer bonding, antibonding, and Casimir forces, Annalen der Physik 527 (Jan., 2015) 45–80, [arXiv:1409.7348].
* (6) L. M. Woods, D. A. R. Dalvit, A. Tkatchenko, P. Rodriguez-Lopez, A. W. Rodriguez, and R. Podgornik, Materials perspective on Casimir and van der Waals interactions, Rev. Mod. Phys. 88 (2016), no. 4 045003, [arXiv:1509.03338].
* (7) G. Bimonte, T. Emig, M. Kardar, and M. Krüger, Nonequilibrium Fluctuational Quantum Electrodynamics: Heat Radiation, Heat Transfer, and Force, Ann. Rev. Condensed Matter Phys. 8 (2017) 119–143, [arXiv:1606.03740].
* (8) G. Bimonte and T. Emig, Unifying Theory for Casimir Forces: Bulk and Surface Formulations, Universe 7 (2021), no. 7 225, [arXiv:2108.07112].
* (9) G. Bimonte, T. Emig, N. Graham, and M. Kardar, Something Can Come of Nothing: Quantum Fluctuations and the Casimir Force, arXiv:2202.05386.
* (10) M. Bordag, G. L. Klimchitskaya, U. Mohideen, and V. M. Mostepanenko, Advances in the Casimir effect, vol. 145. Oxford University Press, 2009.
* (11) K. A. Milton, Calculating Casimir energies in renormalizable quantum field theory, Phys. Rev. D68 (2003) 065020, [hep-th/0210081].
* (12) N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, M. Scandurra, and H. Weigel, Casimir energies in light of quantum field theory, Phys. Lett. B 572 (2003) 196–201, [hep-th/0207205].
* (13) R. L. Jaffe, The Casimir effect and the quantum vacuum, Phys. Rev. D 72 (2005) 021301, [hep-th/0503158].
* (14) S. Mobassem, Casimir effect for massive scalar field, Mod. Phys. Lett. A 29 (2014), no. 31 1450160, [arXiv:1403.0501].
* (15) L. Hui, J. P. Ostriker, S. Tremaine, and E. Witten, Ultralight scalars as cosmological dark matter, Phys. Rev. D 95 (2017), no. 4 043541, [arXiv:1610.08297].
* (16) A. Joyce, B. Jain, J. Khoury, and M. Trodden, Beyond the Cosmological Standard Model, Phys. Rept. 568 (2015) 1–98, [arXiv:1407.0059].
* (17) P. Hamilton, M. Jaffe, P. Haslinger, Q. Simmons, H. Müller, and J. Khoury, Atom-interferometry constraints on dark energy, Science 349 (2015) 849–851, [arXiv:1502.03888].
* (18) B. C. Allanach, Beyond the Standard Model Lectures for the 2016 European School of High-Energy Physics, in 2016 European School of High-Energy Physics, pp. 123–152, 2017. arXiv:1609.02015.
* (19) P. Brax, S. Casas, H. Desmond, and B. Elder, Testing Screened Modified Gravity, Universe 8 (2021), no. 1 11, [arXiv:2201.10817].
* (20) T. Damour and A. M. Polyakov, The String dilaton and a least coupling principle, Nucl. Phys. B 423 (1994) 532–558, [hep-th/9401069].
* (21) J. Khoury and A. Weltman, Chameleon fields: Awaiting surprises for tests of gravity in space, Phys. Rev. Lett. 93 (2004) 171104, [astro-ph/0309300].
* (22) J. Khoury and A. Weltman, Chameleon cosmology, Phys. Rev. D 69 (2004) 044026, [astro-ph/0309411].
* (23) P. Brax and S. Fichet, Quantum Chameleons, Phys. Rev. D99 (2019), no. 10 104049, [arXiv:1809.10166].
* (24) J. S. Schwinger, L. L. DeRaad, Jr., and K. A. Milton, Casimir Effect in Dielectrics, Annals Phys. 115 (1979) 1–23.
* (25) M. E. Peskin and D. V. Schroeder, An introduction to quantum field theory. Westview, Boulder, CO, 1995.
* (26) C. M. Bender and K. A. Milton, Scalar casimir effect for a d-dimensional sphere, Phys. Rev. D 50 (Nov, 1994) 6547–6555.
* (27) I. E. Dzyaloshinskii, E. M. Lifshitz, and L. P. Pitaevskii, General theory of van der Waals’ forces, Soviet Physics Uspekhi 4 (Feb, 1961) 153–176.
* (28) G. Feinberg and J. Sucher, Long-Range Forces from Neutrino-Pair Exchange, Phys. Rev. 166 (1968) 1638–1644.
* (30) J. A. Grifols, E. Masso, and R. Toldra, Majorana neutrinos and long range forces, Phys. Lett. B389 (1996) 563–565, [hep-ph/9606377].
* (31) S. Fichet, Quantum Forces from Dark Matter and Where to Find Them, Phys. Rev. Lett. 120 (2018), no. 13 131801, [arXiv:1705.10331].
* (32) A. Costantino, S. Fichet, and P. Tanedo, Exotic Spin-Dependent Forces from a Hidden Sector, arXiv:1910.02972. To appear in JHEP.
* (33) N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, M. Scandurra, and H. Weigel, Calculating vacuum energies in renormalizable quantum field theories: A New approach to the Casimir problem, Nucl. Phys. B 645 (2002) 49–84, [hep-th/0207120].
* (34) N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, O. Schroeder, and H. Weigel, The Dirichlet Casimir problem, Nucl. Phys. B 677 (2004) 379–404, [hep-th/0309130].
* (35) S. A. Franchino-Viñas, M. N. Mantiñan, and F. D. Mazzitelli, Quantum vacuum fluctuations and the principle of virtual work in inhomogeneous backgrounds, Phys. Rev. D 105 (2022), no. 8 085023, [arXiv:2110.14692].
* (36) A. V. Manohar, Effective field theories, Lect. Notes Phys. 479 (1997) 311–362, [hep-ph/9606222].
* (37) A. V. Manohar, Introduction to Effective Field Theories, arXiv:1804.05863.
* (38) Y. Li, K. A. Milton, X. Guo, G. Kennedy, and S. A. Fulling, Casimir forces in inhomogeneous media: renormalization and the principle of virtual work, Phys. Rev. D 99 (2019), no. 12 125004, [arXiv:1901.09111].
* (39) S. A. Franchino-Viñas and F. D. Mazzitelli, Effective action for delta potentials: spacetime-dependent inhomogeneities and Casimir self-energy, Phys. Rev. D 103 (2021), no. 6 065006, [arXiv:2010.11144].
* (40) S. Fichet, Field Holography in General Background and Boundary Effective Action from AdS to dS, arXiv:2112.00746.
* (41) M. Bordag and D. V. Vassilevich, Nonsmooth backgrounds in quantum field theory, Phys. Rev. D 70 (2004) 045003, [hep-th/0404069].
* (42) M. Bordag, D. Vassilevich, H. Falomir, and E. M. Santangelo, Multiple reflection expansion and heat kernel coefficients, Phys. Rev. D 64 (2001) 045017, [hep-th/0103037].
* (43) T. H. Boyer, Quantum electromagnetic zero point energy of a conducting spherical shell and the Casimir model for a charged particle, Phys. Rev. 174 (1968) 1764–1774.
* (44) M. Kasevich and S. Chu, Atomic interferometry using stimulated Raman transitions, Phys. Rev. Lett. 67 (1991) 181–184.
* (45) M. Kasevich and S. Chu, Measurement of the gravitational acceleration of an atom with a light-pulse atom interferometer, Applied Physics B: Lasers and Optics 54 (May, 1992) 321–332.
* (46) A. Peters, K. Y. Chung, and S. Chu, Measurement of gravitational acceleration by dropping atoms, Nature 400 (Aug., 1999) 849–852.
* (47) A. Miffre, M. Jacquey, M. Büchner, G. Trénec, and J. Vigué, Atom interferometry, quant-ph/0605055.
* (48) C. Burrage, E. J. Copeland, and E. A. Hinds, Probing Dark Energy with Atom Interferometry, JCAP 1503 (2015), no. 03 042, [arXiv:1408.1409].
* (49) M. Jaffe, P. Haslinger, V. Xu, P. Hamilton, A. Upadhye, B. Elder, J. Khoury, and H. Müller, Testing sub-gravitational forces on atoms from a miniature, in-vacuum source mass, Nature Phys. 13 (2017) 938, [arXiv:1612.05171].
* (50) D. O. Sabulsky, I. Dutta, E. A. Hinds, B. Elder, C. Burrage, and E. J. Copeland, Experiment to detect dark energy forces using atom interferometry, Phys. Rev. Lett. 123 (2019), no. 6 061102, [arXiv:1812.08244].
* (51) P. Storey and C. Cohen-Tannoudji, The Feynman path integral approach to atomic interferometry: A tutorial, J. Phys. II 4 (1994), no. 11 1999–2027.
* (52) P. Brax, S. Fichet, and G. Pignol, Bounding Quantum Dark Forces, Phys. Rev. D97 (2018), no. 11 115034, [arXiv:1710.00850].
* (53) V. V. Nesvizhevsky, G. Pignol, and K. V. Protasov, Neutron scattering and extra short range interactions, Phys. Rev. D77 (2008) 034020, [arXiv:0711.2298].
* (54) P. Brax and G. Pignol, Strongly Coupled Chameleons and the Neutronic Quantum Bouncer, Phys. Rev. Lett. 107 (2011) 111301, [arXiv:1105.3420].
* (55) P. Brax, G. Pignol, and D. Roulier, Probing Strongly Coupled Chameleons with Slow Neutrons, Phys. Rev. D88 (2013) 083004, [arXiv:1306.6536].
* (56) T. Jenke et al., Gravity Resonance Spectroscopy Constrains Dark Energy and Dark Matter Scenarios, Phys. Rev. Lett. 112 (2014) 151105, [arXiv:1404.4099].
* (57) G. Cronenberg, P. Brax, H. Filter, P. Geltenbort, T. Jenke, G. Pignol, M. Pitschmann, M. Thalhammer, and H. Abele, Acoustic Rabi oscillations between gravitational quantum states and impact on symmetron dark energy, Nature Phys. 14 (2018), no. 10 1022–1026.
* (58) P. Brax and M. Pitschmann, Exact solutions to nonlinear symmetron theory: One- and two-mirror systems, Phys. Rev. D97 (2018), no. 6 064015, [arXiv:1712.09852].
* (59) D. V. Vassilevich, Heat kernel expansion: User’s manual, Phys. Rept. 388 (2003) 279–360, [hep-th/0306138].
* (60) S. A. Fulling, Systematics of the relationship between vacuum energy calculations and heat kernel coefficients, J. Phys. A 36 (2003) 6857–6873, [quant-ph/0302117].
# Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical
Reasoning
Pan Lu1,3, Liang Qiu1, Kai-Wei Chang1, Ying Nian Wu1, Song-Chun Zhu1,
Tanmay Rajpurohit2, Peter Clark3, Ashwin Kalyan3
1University of California, Los Angeles, 2Georgia Institute of Technology,
3Allen Institute for AI
###### Abstract
Mathematical reasoning, a core ability of human intelligence, presents unique
challenges for machines in abstract thinking and logical reasoning. Recent
large pre-trained language models such as GPT-3 have achieved remarkable
progress on mathematical reasoning tasks written in text form, such as math
word problems (MWP). However, it is unknown if the models can handle more
complex problems that involve math reasoning over heterogeneous information,
such as tabular data. To fill the gap, we present Tabular Math Word Problems
(TabMWP), a new dataset containing 38,431 open-domain grade-level problems
that require mathematical reasoning on both textual and tabular data. Each
question in TabMWP is aligned with a tabular context, which is presented as an
image, semi-structured text, and a structured table. There are two types of
questions: free-text and multi-choice, and each problem is annotated with gold
solutions to reveal the multi-step reasoning process. We evaluate different
pre-trained models on TabMWP, including the GPT-3 model in a few-shot setting.
As earlier studies suggest, since few-shot GPT-3 relies on the selection of
in-context examples, its performance is unstable and can degrade to near
chance. This instability is more severe when handling complex problems like
TabMWP. To mitigate this, we further propose a novel approach, PromptPG, which
utilizes policy gradient to learn to select in-context examples from a small
amount of training data and then constructs the corresponding prompt for the
test example. Experimental results show that our method outperforms the best
baseline by 5.31% on the accuracy metric and reduces the prediction variance
significantly compared to random selection, which verifies its effectiveness
in the selection of in-context examples. The data and code will be available
at https://tabmwp.github.io.
Work was partially done while Pan Lu was an intern at AI2.
## 1 Introduction
Developing machines equipped with mathematical reasoning capabilities is one
of the long-standing goals of artificial intelligence. Solving math word
problems (MWPs) is a well-defined task to diagnose the ability of intelligent
systems to perform numerical reasoning and problem-solving as humans. A surge
of datasets has been proposed to facilitate the research in this domain
(Upadhyay & Chang, 2017; Amini et al., 2019; Miao et al., 2020; Cobbe et al.,
2021). However, most existing MWP datasets focus on textual math word problems
only. Tables, widely distributed in different documents such as invoices,
health records, and financial reports, contain rich structured information
different from unstructured text. Solving math word problems in such a tabular
context is much more challenging than existing MWP benchmarks since the system
needs to make cell selections and align heterogeneous information before
performing further numerical reasoning.
Figure 1: Two examples from the TabMWP dataset. The example above is a free-
text problem with a numerical answer; the example below is a multi-choice
problem with a textual answer.
To fill this gap, we propose Tabular Math Word Problems (TabMWP), a new large-
scale dataset that contains 38,431 math word problems with tabular context,
taken from grade-level math curricula. There are two question types: free-text
questions in which the answer is an integer or decimal number, and multi-
choice questions where the answer is a text span chosen from option
candidates. Different from existing MWP datasets, each problem in TabMWP is
accompanied by a tabular context, which is represented in three formats: an
image, a semi-structured text, and a structured table. Each problem is also
annotated with a detailed solution that reveals the multi-step reasoning steps
to ensure full explainability. To solve problems in TabMWP, a system requires
multi-hop mathematical reasoning over heterogeneous information by looking up
table cells given textual clues and conducting multi-step operations to
predict the final answer. Take the problem above in Figure 1 as an example. To
answer the question “how much will she spend (if Tracy buys three kinds of
beads)?”, we first need to look up the corresponding three rows in the given
table, calculate the individual cost for each kind of bead, and finally sum
three costs up to get the answer of 31.44.
Inspired by the success of the large pre-trained language model GPT-3 (Brown
et al., 2020) in solving math word problems (Wei et al., 2022; Wang et al.,
2022), we first build a strong baseline using few-shot GPT-3 on TabMWP. A few
in-context examples are randomly selected from the training set, along with
the test example, and are constructed as a prompt for GPT-3 to predict the
answer. However, recent studies have shown that this type of few-shot learning
can be highly unstable across different selections of in-context examples
(Zhao et al., 2021; Liu et al., 2022a; Lu et al., 2022b). It could be worse on
TabMWP since its problems are distributed across multiple question types and
diverse table layouts. Liu et al. (2022a) try to address this issue by
retrieving semantically similar examples. However, this method might not work
well on TabMWP because it is not capable of measuring the similarity of
structured information, such as the number of cells in tables.
To alleviate this challenge, we further propose a novel approach that can
learn to select in-context examples from a small amount of training data via
policy gradient for prompt learning, termed PromptPG. As illustrated in Figure
2, an agent learns to find optimal in-context examples from a candidate pool,
with the goal of maximizing the prediction rewards on given training examples
when interacting with the GPT-3 environment. A policy network defines the
strategy of how to select the in-context examples given the current training
example. The policy network is built on top of the language model BERT (Devlin
et al., 2018) with fixed parameters, followed by a one-layer linear neural
network with learnable parameters. The learnable parameters are updated
following the policy gradient strategy (Sutton et al., 1998). Unlike random
selection (Wei et al., 2022; Wang et al., 2022), brute-force search, or
retrieval-based selection (Liu et al., 2022a), PromptPG learns to construct
the prompt dynamically given the candidate pool when interacting with the
GPT-3 API.
We implement two state-of-the-art methods as baselines, i.e., UnifiedQA
(Khashabi et al., 2020) on general question answering and TAPEX (Liu et al.,
2022b) on tabular question answering. Both are implemented in pre-trained and
fine-tuned settings. Experimental results show that our model PromptPG can
achieve an overall accuracy of 68.23% on TabMWP, which greatly surpasses
previous methods by a large margin of up to 5.31%. Further analysis
demonstrates that PromptPG can select better in-context examples compared with
a wide range of existing selection strategies and reduce the prediction
variance significantly compared to random selection.
The main contributions of our work are as follows: (a) We present a new large-
scale dataset, TabMWP, the first dataset for math word problems with tabular
context; (b) We propose a novel approach, PromptPG, which learns the prompt
dynamically via policy gradient to select in-context examples for few-shot
GPT-3. To the best of our knowledge, it is the first work that applies
reinforcement learning to select in-context examples for the few-shot GPT-3
model; (c) Experimental results show that PromptPG achieves an improvement of
up to 5.31% on TabMWP over existing methods, with reduced selection
instability compared to random selection.
Figure 2: Our proposed PromptPG learns to select well-performing in-context
examples via policy gradient by interacting with the GPT-3 API, without any
manually designed heuristics.
## 2 The TabMWP Dataset
### 2.1 Task Formulation
A tabular math word problem $p$ is represented as a pair ($t$, $q$), where $t$
is a table context and $q$ is a question. The table $t$ can be represented as
an image (visual format), semi-structured text, or a structured database. In
this work, we focus on the semi-structured format as the table
context for simplicity. The table $t$ features complicated layouts and
formats: it contains multiple rows and columns, and each cell can be a string
of text, a string of a number, or a mix of them. Depending on the question and
answer types, the question $q$ may be accompanied by multiple-choice options
$c=\\{c_{1},c_{2},\dots,c_{n}\\}$ or a unit $u$. Given a semi-structured
tabular context $t$ and an unstructured question text $q$, the task is to
generate the answer $a$, which is either numerical-only text for a free-text
question or a text span from the given options for a multi-choice question.
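As a concrete illustration of this formulation, a problem instance $p=(t,q)$ can be represented as in the following Python sketch; the field names are illustrative assumptions, not the released dataset's schema.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TabMWPProblem:
    """A tabular math word problem p = (t, q); field names are illustrative."""
    table: str                    # semi-structured tabular context t
    question: str                 # unstructured question text q
    choices: Optional[List[str]]  # options c_1..c_n for multi-choice, else None
    unit: Optional[str]           # answer unit u for free-text, else None
    solution: str                 # gold multi-step solution
    answer: str                   # gold answer a (number or text span)

    @property
    def is_multi_choice(self) -> bool:
        return self.choices is not None
```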
### 2.2 Dataset Construction
Data collection. We construct TabMWP based on openly available content and
more details are provided in Appendix A.1. Only math word problems that are
accompanied by a tabular context and a detailed solution are collected. We
develop a script to extract the tabular context, the question, options that
apply, the correct answer, and the solution for each problem. These elements
can be precisely identified using HTML tags. For each table, we take a
screenshot and store its raw text.
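A minimal extraction sketch in this spirit is shown below, assuming BeautifulSoup; the CSS class names are hypothetical placeholders, since the source site's actual markup is not given here.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def extract_problem(html: str) -> dict:
    """Pull out the problem elements by (hypothetical) HTML classes."""
    soup = BeautifulSoup(html, "html.parser")

    def text_of(cls):
        node = soup.find(class_=cls)
        return node.get_text(strip=True) if node else None

    return {
        "table":    text_of("table-context"),   # hypothetical class name
        "question": text_of("question-text"),   # hypothetical class name
        "options":  [o.get_text(strip=True)
                     for o in soup.find_all(class_="option")],
        "answer":   text_of("correct-answer"),  # hypothetical class name
        "solution": text_of("solution"),        # hypothetical class name
    }
```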
Data preprocessing. To make TabMWP compatible with various baselines, we
represent the tabular context as three formats: an image, semi-structured
text, and a structured spreadsheet. The semi-structured format is created by
converting the raw table text into a flattened token sequence, with each row
separated by a newline character ‘$\backslash$n’ and each column separated by
‘$\mid$’. The semi-structured text is further transformed to the structured
format, which can be easily retrieved and executed by SQL-based methods (Liu
et al., 2022b) using packages like pandas. For clarity, the table title is
separated from the raw table. Examples of three formats are shown in Appendix
A.1.
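A minimal sketch of these two conversions, assuming the table is given as a list of rows of cell strings (the exact whitespace conventions in the released data may differ):

```python
import pandas as pd

def linearize_table(rows, title=None):
    """Flatten a table into the semi-structured format: ' | ' between
    columns and a newline between rows; the title is kept separate."""
    body = "\n".join(" | ".join(row) for row in rows)
    return f"[TITLE]: {title}\n{body}" if title else body

def to_dataframe(rows):
    """Structured format: a pandas DataFrame that SQL-style methods can
    retrieve from and execute over."""
    header, *data = rows
    return pd.DataFrame(data, columns=header)

rows = [["Option", "Change in phone price"],
        ["Add an upgrade", "$60"],
        ["Buy a used phone", "-$75"]]
print(linearize_table(rows))   # semi-structured text
df = to_dataframe(rows)        # structured table
```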
For better quantitative evaluation, we formalize the TabMWP problems as two
question types: (a) free-text questions, where the answer is numerical text
only and the unit text is separately extracted; and (b) multi-choice
questions, where the answer is a text span from the choice options. The order
of choice options is shuffled to alleviate distribution bias. Redundant
information in solutions is removed, and some solutions are manually rewritten
to be more human-readable. Finally, problems with the same table, question,
and answer text are regarded as redundant and thus removed. We further conduct
quality control to ensure data quality, which is discussed in Appendix A.1.
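Two of these steps, shuffling options and removing duplicates, can be sketched as follows; this is a simplified illustration, assuming problems are dicts with the keys used earlier.

```python
import random

def dedup_and_shuffle(problems, seed=0):
    """Shuffle multi-choice options to reduce position bias, and drop
    problems sharing the same table, question, and answer text."""
    rng = random.Random(seed)
    seen, kept = set(), []
    for p in problems:
        key = (p["table"], p["question"], p["answer"])
        if key in seen:          # redundant problem
            continue
        seen.add(key)
        if p.get("choices"):
            rng.shuffle(p["choices"])
        kept.append(p)
    return kept
```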
### 2.3 Dataset Statistics
Statistic | Number
---|---
Total questions | 38,431
* free-text questions | 28,719
* multi-choice questions | 9,712
# of different questions | 28,876
# of different answers | 6,153
# of different solutions | 35,442
# of different tables | 37,644
# of tables with a title | 23,259
# of table cells (Average/Max) | 12.9 / 54
# of table rows (Average/Max) | 5.9 / 11
# of table columns (Average/Max) | 2.2 / 6
Question length (Average/Max) | 22.1 / 92
Answer length (Average/Max) | 1.1 / 27
Solution length (Average/Max) | 49.5 / 350
Table 1: Key statistics for TabMWP.
Key statistics. The TabMWP dataset contains 38,431 tabular math word problems,
which are partitioned with 6:2:2 into the training, development, and test
splits, corresponding to 23,059, 7,686, and 7,686 problems. Their main
statistics are shown in Table 1. 74.7% of the questions in TabMWP belong to
free-text questions, while 25.3% are multi-choice questions. There are 28,876
different questions, 6,153 different answers, and 35,442 different solutions,
indicating that TabMWP has a rich diversity in the problem distribution. The
questions have an average length of 22.1 words and the solutions of 49.5
words, showing their lexical richness.
One distinct characteristic of TabMWP is that each problem is accompanied by a
tabular context, without which the problem would be unsolvable. There are
37,644 different tables in total, and 60.5% of the tables have a title. The
table has an average of 5.9 rows and 2.2 columns, which results in an average
of 12.9 cells and a maximum of 54 cells. These statistics suggest that tables
in TabMWP are diverse in both semantics and layout.
Comparison to existing datasets. As shown in Table 2, TabMWP differs from
related datasets in various aspects: (1) TabMWP is the first dataset to study
math word problems over tabular context on open domains and is the largest in
terms of data size; (2) Problems in TabMWP are annotated with the tabular
context, unlike previous MWP datasets in the first segment; (3) Different from
Table QA datasets like FinQA, TAT-QA, and MultiHiertt, a lack of either
mathematical reasoning or the tabular context renders the problems in TabMWP
unanswerable; (4) There are two question types in TabMWP, and the answer could
be a text span, an integer number, or a decimal number; (5) Each problem is
annotated with natural language solutions to reveal multi-hop reasoning steps.
Dataset | Size | #Table | Need Math? | Need Table? | Table Domain | Table Format | Free-text | MC | Text | Integer | Decimal | Solution Type
---|---|---|---|---|---|---|---|---|---|---|---|---
Dolphin18K (2016) | 831 | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | formula
DRAW-1K (2017) | 1,000 | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | formula
Math23K (2017) | 23,162 | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | formula
MathQA (2019) | 37,297 | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ | ✓ | formula
ASDiv (2020) | 2,305 | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ | ✓ | ✓ | formula
SVAMP (2021) | 1,000 | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | formula
GSM8K (2021) | 8,792 | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | text
IconQA (2021b) | 107,439 | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗
FinQA (2021) | 8,281 | 2,766 | ✓ | 76.6% | finance | text | ✓ | ✗ | ✗ | ✓ | ✓ | program
TAT-QA (2021) | 16,552 | 2,747 | 50.0% | ✓ | finance | text | ✓ | ✗ | ✗ | ✓ | ✓ | ✗
MultiHiertt (2022) | 10,440 | 9,843 | ✓ | 89.8% | finance | text | ✓ | ✗ | ✗ | ✓ | ✓ | ✗
TabMWP (ours) | 38,431 | 37,644 | ✓ | ✓ | open | text* | ✓ | ✓ | ✓ | ✓ | ✓ | text
Table 2: A comparison of MWP and Table QA datasets that require numerical
reasoning. text*: each table in TabMWP is accompanied by an image format.
## 3 Methods
### 3.1 Few-shot GPT-3 for TabMWP
Provided with a few in-context examples of math word problems as the context,
GPT-3 can generate the answer for a test problem, and show impressive
performance across different MWP datasets (Wei et al., 2022; Wang et al.,
2022). Inspired by its success, we first build a strong baseline using few-
shot GPT-3 on our TabMWP dataset. Specifically, a few training examples, along
with the test example $p_{i}$, are provided to GPT-3 for the answer
prediction. Each training example consists of a table context $t$, a question
$q$, options $c$ that apply, and an answer $a$. To make the few-shot GPT-3
model workable on TabMWP, we utilize the semi-structured format as the tabular
context. Following Wei et al. (2022), a solution $s$ can be augmented in front
of the answer $a$ to reveal the multi-step reasoning process, which is able to
boost the prediction performance.
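The prompt construction can be sketched as follows; the template labels ("Table:", "Question:", "Options:", "Answer:") mirror the case-study figures in Appendix A.6, but the exact released templates may differ.

```python
def format_example(ex):
    """Render one demonstration in the TQ(C)->SA template."""
    parts = [f"Table: {ex['table']}", f"Question: {ex['question']}"]
    if ex.get("choices"):
        opts = " ".join(f"({chr(65 + i)}) {c}"
                        for i, c in enumerate(ex["choices"]))
        parts.append(f"Options: {opts}")
    parts.append(f"Answer: {ex['solution']} The answer is {ex['answer']}.")
    return "\n".join(parts)

def format_test_stub(ex):
    """Render the test problem, leaving the answer for GPT-3 to complete."""
    parts = [f"Table: {ex['table']}", f"Question: {ex['question']}"]
    if ex.get("choices"):
        opts = " ".join(f"({chr(65 + i)}) {c}"
                        for i, c in enumerate(ex["choices"]))
        parts.append(f"Options: {opts}")
    parts.append("Answer:")
    return "\n".join(parts)

def build_prompt(in_context_examples, test_example):
    demos = [format_example(e) for e in in_context_examples]
    return "\n\n".join(demos + [format_test_stub(test_example)])
```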
### 3.2 Dynamic Prompting via Policy Gradient
The in-context examples can be selected randomly (Wei et al., 2022; Wang et
al., 2022) or by retrieval (Liu et al., 2022a) from the training set. Recent
research, however, has shown that few-shot GPT-3 can be highly unstable across
different selections of in-context examples and permutations of those examples
(Zhao et al., 2021; Liu et al., 2022a; Lu et al., 2022b). This instability may
be more severe on TabMWP, where examples are more distinct because they
include both unstructured questions of various types and semi-structured
tables in various layouts. To alleviate this issue, we propose a novel
approach that learns to select well-performing in-context examples using a
policy gradient strategy, without brute-force search or manually designed
heuristics.
Formally, given a TabMWP problem $p_{i}$, we want the agent to find $K$ in-
context examples $e_{i}=\\{e_{i}^{1},e_{i}^{2},...,e_{i}^{K}\\}$ from a
candidate pool $E_{\text{cand}}$, and generate the answer $\hat{a}_{i}$,
maximizing a reward $r_{i}=R(\hat{a}_{i}|p_{i})$. The in-context examples are
selected according to a policy
$e_{i}^{k}\sim\pi_{\theta}(e_{i}|p_{i}),~{}e_{i}^{k}\in
E_{\text{cand}},~{}e_{i}^{k}~{}\text{independent for}~{}k=1,2,\dots,K,$
(1)
where $\theta$ are the policy’s parameters. The answer is generated through:
$\hat{a}_{i}=\text{GPT-3}(e_{i},p_{i})$ using the selected examples and the
given problem as the input prompt. The reward is then computed by evaluating
the generated answer $\hat{a}_{i}$ with respect to the ground truth answer
$a_{i}$:
$r_{i}=R(\hat{a}_{i}|p_{i})=\textsc{Eval}(\hat{a}_{i},a_{i}),~{}r_{i}\in\\{-1,1\\}.$
(2)
The function $\textsc{Eval}()$ returns a reward of $1$ if the generated answer
aligns with the label and $-1$ otherwise. Our goal is to maximize the
expected reward of the generated answer under the policy
$\mathbb{E}_{e_{i}\sim\pi_{\theta}(e_{i}|p_{i})}[R(\text{GPT-3}(e_{i},p_{i}))]$.
We optimize the reward with respect to the parameters of the policy network
using the Policy Gradient method (Sutton et al., 1998). The expected reward
cannot be computed in closed form, so we compute an unbiased estimation with
Monte Carlo Sampling,
$\mathbb{E}_{e_{i}\sim\pi_{\theta}(e_{i}|p_{i})}\left[R(\text{GPT-3}(e_{i},p_{i}))\right]\approx\frac{1}{N}\sum_{i=1}^{N}R(\text{GPT-3}(e_{i},p_{i})),~{}e_{i}\sim\pi_{\theta}(e_{i}|p_{i}),$
(3)
where $N$ is the size of each batch yielded from our training problem set
$P_{\text{train}}$. In this work, we experiment using the REINFORCE policy
gradient algorithm (Williams, 1992):
$\displaystyle\nabla\mathbb{E}_{e_{i}\sim\pi_{\theta}(e_{i}|p_{i})}\left[R(\text{GPT-3}(e_{i},p_{i}))\right]$
$\displaystyle=\mathbb{E}_{e_{i}\sim\pi_{\theta}(e_{i}|p_{i})}\nabla_{\theta}\log(\pi_{\theta}(e_{i}|p_{i}))R(\text{GPT-3}(e_{i},p_{i}))$
(4)
$\displaystyle\approx\frac{1}{N}\sum_{i=1}^{N}\nabla_{\theta}\log(\pi_{\theta}(e_{i}|p_{i}))R(\text{GPT-3}(e_{i},p_{i})),~{}e_{i}\sim\pi_{\theta}(e_{i}|p_{i}).$
Intuitively, if the predicted answer is correct, we update the policy to
increase the probability of selecting the same in-context examples; otherwise,
we update the policy to reduce the probability of selecting such poorly
matched examples. The learning process is summarized in Algorithm 1 in the appendix.
To get the contextualized representation of the given problem and candidate
examples, we use the BERT (Devlin et al., 2018) [CLS] token representation as
the problem encoding. We add a small linear layer on top of the BERT final
pooling layer. That allows our model to learn both the semantic similarity
that the pre-trained BERT model provides and the hidden logical similarity
shared among the math problems. During training, the parameters of BERT are
fixed and only the appended linear layer is updated, i.e., $\theta$ is
composed of the learnable parameters $\mathbf{W}$ and $\mathbf{b}$:
$\displaystyle\mathbf{h}(e_{i})$
$\displaystyle=\mathbf{W}(\textsc{BERT}(e_{i}))+\mathbf{b},$ (5)
$\displaystyle\mathbf{h}(p_{i})$
$\displaystyle=\mathbf{W}(\textsc{BERT}(p_{i}))+\mathbf{b},$
$\displaystyle\pi_{\theta}(e_{i}|p_{i})$
$\displaystyle=\frac{\exp{[\mathbf{h}(e_{i})\cdot\mathbf{h}(p_{i})}]}{\sum_{e_{i}^{\prime}\in
E_{\text{cand}}}\exp{[\mathbf{h}(e_{i}^{\prime})\cdot\mathbf{h}(p_{i})}]}.$
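A minimal PyTorch sketch of this policy network, assuming the Hugging Face transformers library; batching and truncation details are simplified relative to the actual implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class PolicyNetwork(nn.Module):
    """Frozen BERT encoder + one learnable linear layer, as in Eq. (5):
    pi(e_i | p_i) is a softmax over dot products of the projected [CLS]
    embeddings of the problem and each candidate example."""
    def __init__(self, hidden=768):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():          # BERT stays fixed
            p.requires_grad = False
        self.linear = nn.Linear(hidden, hidden)   # learnable W and b

    def encode(self, texts):
        batch = self.tokenizer(texts, padding=True, truncation=True,
                               max_length=512, return_tensors="pt")
        with torch.no_grad():                     # no gradients through BERT
            cls = self.bert(**batch).last_hidden_state[:, 0]
        return self.linear(cls)                   # h(.) = W BERT(.) + b

    def forward(self, problem_text, candidate_texts):
        h_p = self.encode([problem_text])         # (1, d)
        h_e = self.encode(candidate_texts)        # (|E_cand|, d)
        logits = (h_e @ h_p.T).squeeze(-1)        # dot products
        return torch.log_softmax(logits, dim=-1)  # log pi(e | p)
```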
## 4 Experiments
### 4.1 Experimental Settings
Baselines. We first develop two large language models, UnifiedQA (Khashabi et
al., 2020) and TAPEX (Liu et al., 2022b), in both pre-trained and fine-tuned
settings, as strong baselines on TabMWP. Different model sizes are included to
examine the performance across different model capacities. We further
implement the zero-shot GPT-3 model, the few-shot GPT-3 model, and their
chain-of-thought (CoT) reasoning variants (Wei et al., 2022). We also study
the heuristic guess baseline and human performance to analyze the lower and
upper bounds on TabMWP, respectively.
Evaluation metric. The answer part is extracted from the GPT-3 generation
using manually designed regular expressions. To evaluate the baselines and our
method, we utilize the accuracy metric to determine if the generated answer is
correct given the ground truth answer. For free-text problems where the answer
is set as a number, we normalize the prediction and the label to decimal
numbers with two-digit precision and check if their values are equivalent. For
multi-choice problems, we choose the most similar one from options to the
generated answer following Khashabi et al. (2020).
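A rough stand-in for this evaluation logic; the paper's actual regular expressions and similarity measure are not given here, so the ones below are assumptions.

```python
import re
from difflib import SequenceMatcher

def normalize_number(text):
    """Extract the first number and round it to two-digit precision."""
    m = re.search(r"-?\d*\.?\d+", text.replace(",", ""))
    return round(float(m.group()), 2) if m else None

def eval_answer(prediction, label, choices=None):
    if choices:  # multi-choice: pick the option most similar to the prediction
        best = max(choices, key=lambda c: SequenceMatcher(
            None, prediction.lower(), c.lower()).ratio())
        return best.lower() == label.lower()
    return normalize_number(prediction) == normalize_number(label)
```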
Implementation details. Fine-tuned UnifiedQA and TAPEX baselines are trained
on the train split and evaluated on the test split. Few-shot GPT-3 and few-
shot-CoT GPT-3 randomly select two in-context examples from the training data
to build the prompt. Our PromptPG is built on top of few-shot GPT-3 with a
different selection strategy: (a) in the training stage, the agent learns to
select two examples from 20 candidates and is evaluated on 160 training
examples to calculate the reward; (b) in the test stage, the agent with an
optimal policy chooses two examples from 20 candidates for each test example.
The candidates are randomly selected from the training set. Experiments for
two few-shot GPT-3 baselines and our PromptPG are repeated three times, and
the average accuracy is reported in Table 3. More implementation details can
be found in Appendix A.4.
### 4.2 Experimental Results
Table 3 demonstrates the results of different baselines and our method on the
TabMWP dataset. Benefiting from pre-training on the tabular corpus, the TAPEX
baseline performs better on average than UnifiedQA with a similar model size,
which is only pre-trained on unstructured textual data. Increasing the model
size can improve the prediction accuracy for both UnifiedQA and TAPEX. Fine-
tuned on TabMWP, the baseline models can significantly improve the prediction
performance on the average and all aggregated accuracy metrics.
Method | Training Data | Selection Strategy | FREE | MC | INT | DEC | EXTR | BOOL | OTH | Grades 1-6 | Grades 7-8 | Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---
Heuristic Baselines
Heuristic guess | - | - | 6.71 | 39.81 | 8.37 | 0.26 | 30.80 | 51.22 | 26.67 | 17.55 | 12.27 | 15.29
Human performance | - | - | 84.61 | 93.32 | 84.95 | 83.29 | 97.18 | 88.69 | 96.20 | 94.27 | 81.28 | 90.22
pre-trained Baselines
UnifiedQA${}_{\textsc{Small}}$ | - | - | 1.18 | 43.62 | 1.37 | 0.43 | 38.70 | 49.78 | 37.14 | 15.57 | 7.65 | 12.18
UnifiedQA${}_{\textsc{Base}}$ | - | - | 4.60 | 43.02 | 5.28 | 1.97 | 37.08 | 50.11 | 38.10 | 17.14 | 11.11 | 14.56
UnifiedQA${}_{\textsc{Large}}$ | - | - | 4.48 | 48.80 | 5.19 | 1.72 | 48.33 | 50.33 | 40.00 | 19.78 | 10.87 | 15.96
TAPEX${}_{\textsc{Base}}$ | - | - | 7.32 | 39.76 | 8.68 | 2.06 | 35.06 | 47.11 | 20.95 | 18.67 | 11.81 | 15.73
TAPEX${}_{\textsc{Large}}$ | - | - | 8.80 | 46.59 | 10.62 | 1.72 | 46.91 | 48.11 | 30.48 | 22.65 | 13.18 | 18.59
fine-tuned Baselines
UnifiedQA${}_{\textsc{Small}}$ | 7,686 | - | 22.27 | 51.31 | 27.27 | 2.83 | 52.28 | 48.11 | 69.52 | 35.85 | 21.71 | 29.79
UnifiedQA${}_{\textsc{Base}}$ | 7,686 | - | 34.02 | 70.68 | 40.74 | 7.90 | 84.09 | 55.67 | 73.33 | 53.31 | 30.46 | 43.52
UnifiedQA${}_{\textsc{Large}}$ | 7,686 | - | 48.67 | 82.18 | 55.97 | 20.26 | 94.63 | 68.89 | 79.05 | 65.92 | 45.92 | 57.35
TAPEX${}_{\textsc{Base}}$ | 7,686 | - | 39.59 | 73.09 | 46.85 | 11.33 | 84.19 | 61.33 | 69.52 | 56.70 | 37.02 | 48.27
TAPEX${}_{\textsc{Large}}$ | 7,686 | - | 51.00 | 80.02 | 59.92 | 16.31 | 95.34 | 64.00 | 73.33 | 67.11 | 47.07 | 58.52
Prompting Baselines w/ GPT-3
Zero-shot | - | - | 53.57 | 66.67 | 55.55 | 45.84 | 78.22 | 55.44 | 54.29 | 63.37 | 48.41 | 56.96
Zero-shot-CoT | - | - | 54.36 | 66.92 | 55.82 | 48.67 | 78.82 | 55.67 | 51.43 | 63.62 | 49.59 | 57.61
Few-shot (2-shot) | 2 | Random | 54.69 | 64.11 | 58.36 | 40.40 | 75.95 | 52.41 | 53.02 | 63.10 | 49.16 | 57.13
Few-shot-CoT (2-shot) | 2 | Random | 60.76 | 69.09 | 60.04 | 63.58 | 76.49 | 61.19 | 67.30 | 68.62 | 55.31 | 62.92
PromptPG w/ GPT-3 (Ours)
Few-shot-CoT (2-shot) | 160+20 | Dynamic | 66.17 | 74.11 | 64.12 | 74.16 | 76.19 | 72.81 | 65.71 | 71.20 | 64.27 | 68.23 (+5.31)
Table 3: Evaluation results of various baselines and our method on TabMWP.
Training Data: number of used training data; Selection Strategy: strategy of
selecting in-context examples for few-shot GPT-3; FREE: free-text questions;
MC: multi-choice questions; INT: integer answers; DEC: decimal answers; EXTR:
extractive text answers; BOOL: Boolean text answers; OTH: other text answers.
Without any example provided, zero-shot GPT-3 achieves accuracy comparable to
the best fine-tuned baselines UnifiedQA${}_{\textsc{Large}}$ and
TAPEX${}_{\textsc{Large}}$, showing its surprisingly good generalization
ability on TabMWP. Provided with two randomly sampled in-context examples as
the prompt, few-shot GPT-3 gets an improvement of 0.17%. Generating the multi-
step solution before the answer, the few-shot-CoT GPT-3 model reports the best
performance among all of these baseline models, with an accuracy of 62.92%.
Unlike few-shot-CoT GPT-3, which selects in-context examples randomly, our
proposed PromptPG learns to select well-performing examples with the help of policy
gradient. PromptPG establishes a state-of-the-art performance on the TabMWP
dataset: it surpasses the best baseline few-shot-CoT GPT-3 by 5.31% on
average. PromptPG shows its consistent advantages on two question types, two
grade groups, and most of the answer types.
Heuristic guess and human performance. The accuracy of multi-choice questions
by heuristic guess is 39.81%, which aligns with the fact that there are 2.88
options on average. The accuracy for free-text questions is considerably low
since the inputs of TabMWP problems do not have direct clues for the answers.
Humans outperform all benchmarks consistently across question types, answer
types, and grade groups, with a 21.99% average accuracy advantage over our
best-performing PromptPG. This gap remains to be closed by future research on semi-
structured mathematical reasoning.
Problem types and difficulty. Among all the baselines, we find it is easier
for models to answer multi-choice questions than free-text questions.
Questions with the boolean (BOOL) and other (OTH) answer types tend to have
lower accuracy than the extractive (EXTR) answer type, because they require
fact verification and language understanding over diverse options,
respectively. It is also unsurprising that all the models perform worse on
problems in grades 7-8 than on the lower-level group of grades 1-6.
### 4.3 Ablation Study
Here, we will study how different factors have an effect on the performances
of baselines and our method on TabMWP. Experiments are conducted on 1,000
development examples.
Blind study of the dataset. We evaluate the information gain of each component
of the TabMWP problems by removing it from model inputs. To eliminate the
impact and variance caused by example selection, the study is conducted using
the zero-shot GPT-3 model. As shown in Table 4, there is a dramatic decline
when either the tabular context (T) or the question text (Q) is missing from
the inputs. For example, T$\rightarrow$A and Q$\rightarrow$A only attain an
average accuracy of 6.10% and 7.00%, respectively, and their accuracies are
near zero on the multi-choice questions. Taking both tabular and textual
data as inputs (TQ$\rightarrow$A), the model significantly beats the heuristic
guess. With the complete input information (TQ(C)$\rightarrow$A), the full
model achieves the best performance. The blind study shows that our TabMWP is
robust and reliable in distribution, and all input components are
indispensable parts that provide necessary information for answering the
questions.
Model | Format | FREE | MC | INT | DEC | EXTR | BOOL | OTH | 1-6 | 7-8 | Avg.
---|---|---|---|---|---|---|---|---|---|---|---
Heuristic guess | TQ(C)$\rightarrow$A | 7.31 | 40.36 | 9.20 | 0.00 | 34.44 | 47.32 | 50.00 | 17.99 | 13.96 | 16.40
Zero-shot GPT-3 | T$\rightarrow$A | 8.28 | 0.36 | 10.24 | 0.67 | 0.66 | 0.00 | 0.00 | 9.41 | 1.02 | 6.10
Zero-shot GPT-3 | Q$\rightarrow$A | 9.24 | 1.09 | 10.94 | 2.68 | 1.32 | 0.89 | 0.00 | 10.23 | 2.03 | 7.00
Zero-shot GPT-3 | T(C)$\rightarrow$A | 8.28 | 41.82 | 10.24 | 0.67 | 36.42 | 50.89 | 25.00 | 23.60 | 8.12 | 17.50
Zero-shot GPT-3 | Q(C)$\rightarrow$A | 9.10 | 33.09 | 10.94 | 2.01 | 25.17 | 44.64 | 25.00 | 21.29 | 7.11 | 15.70
Zero-shot GPT-3 | TQ$\rightarrow$A | 55.31 | 68.36 | 56.60 | 50.34 | 79.47 | 54.46 | 58.33 | 66.34 | 47.46 | 58.90
Zero-shot GPT-3 (full model) | TQ(C)$\rightarrow$A | 54.76 | 72.00 | 56.42 | 48.32 | 76.82 | 66.07 | 66.67 | 67.00 | 47.97 | 59.50
Table 4: Blind studies on TabMWP. T: tabular context; Q: question; C: choice
options; A: answer.
(a) Accuracy w.r.t. different numbers of training examples, given 20 candidate
examples.
(b) Accuracy w.r.t. different numbers of candidates, given 80 and 160 training
examples.
Figure 3: Accuracy w.r.t. different numbers of training and candidate
examples. Experiments are conducted on 1,000 development instances, and each
setting is repeated with four random seeds.
Number of training examples. We study the effect of different numbers of
training examples on our dynamic prompt learning in Figure 3 (a). With more
training examples, the prediction accuracy first increases gradually, peaking
at around 160 training examples. After that, the accuracy declines with
growing variance. We conjecture that the policy gradient algorithm benefits
from additional training data up to this point but fails to exploit more
examples efficiently.
Number of candidate examples. In Figure 3 (b), we investigate how different
numbers of candidate examples can affect policy learning performance. With the
increasing candidate number, the prediction accuracy first rises and then
falls after a threshold, given 80 or 160 training examples. This is probably
because a candidate pool that is too small gives the policy gradient algorithm
a limited action space that does not cover enough problem types, whereas too
many candidates make it hard for the algorithm to learn an optimal policy in a
large search space.
Selection strategy | Acc. (%)
---|---
Same question type | 66.2 $\pm$ 0.60
Same answer type | 67.9 $\pm$ 0.38
Same grade level | 67.9 $\pm$ 1.87
Most complex (# of table cells) | 64.0 $\pm$ 0.42
Most complex (# of ques. words) | 68.2 $\pm$ 0.26
Random selection | 65.2 $\pm$ 4.01
Nearest neighbor | 68.2 $\pm$ 0.29
PromptPG (Ours) | 70.9 $\pm$ 1.27
Table 5: Evaluation results w.r.t. different strategies for selecting in-
context examples.
Different selection strategies. In Table 5, we compare the proposed PromptPG
with random selection and other heuristic-based example selection strategies
for the few-shot-CoT GPT-3 model. Compared to random selection, selecting the
same question or answer type of examples helps the model to take the task-
relevant examples as the prompt, thus improving the accuracy and reducing the
variance. Choosing the most complex examples does not boost the prediction
performance consistently. The most semantically similar examples, found via a
nearest-neighbor search around the test example, help construct a
well-performing and stable prompt for GPT-3. PromptPG shows its effectiveness in selecting optimal
in-context examples over other strategies and largely reduces the instability
caused by randomness.
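For reference, the nearest-neighbor baseline in Table 5 amounts to the following sketch, assuming fixed problem embeddings (e.g., from the frozen BERT encoder of Section 3.2):

```python
import numpy as np

def nearest_neighbor_select(test_vec, cand_vecs, k=2):
    """Pick the k candidates whose embeddings have the highest cosine
    similarity to the test problem's embedding."""
    a = test_vec / np.linalg.norm(test_vec)
    b = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    return np.argsort(-(b @ a))[:k]
```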
### 4.4 Case Study
We conduct the case study in Appendix A.6. We visualize the two in-context
examples selected by our PromptPG, nearest-neighbor search, and random
selection in Figures 5, 6, and 7, respectively. The nearest neighbor
search strategy selects the “superficially” similar examples to the test
example. Instead, PromptPG tends to select examples that have multiple
reasoning steps in the solution and similar abilities in mathematical
reasoning, which results in higher prediction accuracy. Successful examples in
Figures 8–12 show that PromptPG is able to generate reasonable reasoning
steps to predict correct answers for a wide range of TabMWP problems. Failure
examples in Figures 13–18 suggest that PromptPG has limitations when solving
problems provided with complex tabular contexts or requiring a high-level
ability of mathematical reasoning.
## 5 Related Work
### 5.1 Math Word Problems
The task of solving Math Word Problems (MWPs) is to predict the answer given a
natural language description of a math problem. There have been great efforts
in developing datasets for MWPs, including Dolphin18K (Huang et al., 2016),
DRAW-1K (Upadhyay & Chang, 2017), Math23K (Wang et al., 2017), MathQA (Amini
et al., 2019), ASDiv (Miao et al., 2020), and SVAMP (Patel et al., 2021).
However, these datasets only involve the textual modality, and most are
limited to a small data scale. Some recent datasets like DVQA (Kafle et al.,
2018), Geometry3K (Lu et al., 2021a) and IconQA (Lu et al., 2021b) introduce
math problems with diagrams as the visual context, where the system needs to
perform mathematical reasoning over multi-modal information. To the best of
our knowledge, our dataset TabMWP is the first dataset that requires
mathematical reasoning over heterogeneous information from both the textual
question and the tabular context. To solve MWPs, one popular line of previous
methods is to generate the intermediate expressions and execute them to get
the final answers (Huang et al., 2017; Roy & Roth, 2017; Amini et al., 2019).
Inspired by the recent progress achieved by GPT-3 in solving MWPs (Wei et al.,
2022; Wang et al., 2022; Kojima et al., 2022), we evaluate TabMWP using GPT-3
models in zero-shot and few-shot learning manners.
### 5.2 Table QA Datasets
Table Question Answering (Table QA) refers to the task of answering questions
about tabular data. Numerous datasets have been developed for Table QA. For
example, TabMCQ (Jauhar et al., 2016) is an early dataset collected from grade
exams. Datasets like WTQ (Pasupat & Liang, 2015), WikiSQL (Zhong et al.,
2017), and SQA (Iyyer et al., 2017) contain semi-structured tables from
Wikipedia, while Spider (Yu et al., 2018) collects structured tables sourced
from databases. Recent work aims at introducing datasets that require multi-
hop reasoning between the textual and tabular data: HybridQA (Chen et al.,
2020c), OTTQA (Chen et al., 2020b), MultiModalQA (Talmor et al., 2020), AIT-QA
(Katsis et al., 2021), and FeTaQA (Nan et al., 2022). Datasets most related to
our TabMWP dataset are FinQA (Chen et al., 2021), TAT-QA (Zhu et al., 2021),
and MultiHiertt (Zhao et al., 2022) because they need numerical reasoning on
financial reports with tabular data. Note that, per Table 2, 50.0% of the
questions in TAT-QA can be solved without mathematical reasoning, and 23.4% of
the questions in FinQA can be answered without the tabular context. In contrast, our proposed TabMWP collects
questions where both mathematical reasoning and tabular context are necessary.
### 5.3 Prompt Learning for Language Models
Large pre-trained language models, such as GPT-3 (Brown et al., 2020), have
shown their remarkable ability of few-shot learning on a wide range of
downstream tasks (Houlsby et al., 2019; Brown et al., 2020; Lu et al., 2022a).
Given a few in-context examples as demonstrations, GPT-3 can generalize to
unseen test examples without parameter updating. For example, Wei et al.
(2022) randomly select different in-context examples from the training set and
formulate their corresponding prompt with a test sample. However, recent
studies show that few-shot GPT-3 highly depends on the selection of in-context
examples and could be unstable, varying from the near chance to near state-of-
the-art performance (Zhao et al., 2021; Liu et al., 2022a). To mitigate the
volatility of selecting in-context examples, Liu et al. (2022a) propose
retrieving relevant examples that are semantically similar to the test sample.
Other possible strategies could be using brute-force permutation search or
relying on manually designed heuristics like choosing the most complex
examples. Inspired by reinforcement learning’s ability to search for an
optimal action policy, we propose applying the policy gradient strategy
(Sutton et al., 1998) to learn to select in-context examples more efficiently
and stably, without relying on manually designed heuristics.
## 6 Conclusion
In this paper, we propose TabMWP, the first large-scale dataset for math word
problems in tabular contexts. TabMWP contains 38,431 open-domain problems with
two question types and three answer types, and each problem is annotated with
a multi-step solution. We evaluate TabMWP using state-of-the-art QA and
TableQA methods in both pre-trained and fine-tuned settings, as well as the
large pre-trained language model GPT-3. We further propose a novel approach,
PromptPG, for few-shot GPT-3, which utilizes policy gradient to learn to
select in-context examples from the training data and construct a well-performing
prompt for the test example. Experimental results show that PromptPG
outperforms existing strong baselines by a large margin of 5.31% and reduces
the accuracy volatility compared to random selection. To the best of our
knowledge, it is the first work that applies reinforcement learning to select
in-context examples for the few-shot GPT-3 model.
## 7 Acknowledgments
We would like to thank Zhou Yu and Jiuxiang Gu for insightful discussions on
dataset collection. We thank Chenhao Mu and Yao Fu for constructive
suggestions in developing baselines and experiments. The work does not relate
to Liang Qiu’s position at Amazon Alexa.
## References
* Amini et al. (2019) Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. Mathqa: Towards interpretable math word problem solving with operation-based formalisms. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)_ , pp. 2357–2367, 2019.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in Neural Information Processing Systems (NeurIPS)_ , 33:1877–1901, 2020.
* Chen et al. (2020a) Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. Big self-supervised models are strong semi-supervised learners. _Advances in Neural Information Processing Systems (NeurIPS)_ , 33:22243–22255, 2020a.
* Chen et al. (2020b) Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Yang Wang, and William W Cohen. Open question answering over tables and text. In _International Conference on Learning Representations (ICLR)_ , 2020b.
* Chen et al. (2020c) Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. Hybridqa: A dataset of multi-hop question answering over tabular and textual data. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pp. 1026–1036, 2020c.
* Chen et al. (2021) Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan R Routledge, et al. Finqa: A dataset of numerical reasoning over financial data. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pp. 3697–3711, 2021.
* Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. _arXiv preprint arXiv:2110.14168_ , 2021.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ , 2018.
* Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In _International Conference on Machine Learning (ICML)_ , pp. 2790–2799. PMLR, 2019.
* Huang et al. (2016) Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. How well do computers solve math word problems? large-scale dataset construction and evaluation. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)_ , pp. 887–896, 2016.
* Huang et al. (2017) Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian Yin. Learning fine-grained expressions to solve math word problems. In _Proceedings of Empirical Methods in Natural Language Processing (EMNLP)_ , pp. 805–814, 2017.
* Iyyer et al. (2017) Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. Search-based neural structured learning for sequential question answering. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)_ , pp. 1821–1831, 2017.
* Jauhar et al. (2016) Sujay Kumar Jauhar, Peter Turney, and Eduard Hovy. Tabmcq: A dataset of general knowledge tables and multiple-choice questions. _arXiv preprint arXiv:1602.03960_ , 2016.
* Kafle et al. (2018) Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. Dvqa: Understanding data visualizations via question answering. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 5648–5656, 2018.
* Katsis et al. (2021) Yannis Katsis, Saneem Chemmengath, Vishwajeet Kumar, Samarth Bharadwaj, Mustafa Canim, Michael Glass, Alfio Gliozzo, Feifei Pan, Jaydeep Sen, Karthik Sankaranarayanan, et al. Ait-qa: Question answering dataset over complex tables in the airline industry. _arXiv preprint arXiv:2106.12944_ , 2021.
* Khashabi et al. (2020) Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. Unifiedqa: Crossing format boundaries with a single qa system. In _Findings of the Association for Computational Linguistics (EMNLP)_ , pp. 1896–1907, 2020.
* Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Kojima et al. (2022) Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. _arXiv preprint arXiv:2205.11916_ , 2022.
* Lewis et al. (2020) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pp. 7871–7880, Online, July 2020. Association for Computational Linguistics (ACL).
* Liu et al. (2022a) Jiachang Liu, Dinghan Shen, Yizhe Zhang, William B Dolan, Lawrence Carin, and Weizhu Chen. What makes good in-context examples for gpt-3? In _Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures_ , pp. 100–114, 2022a.
* Liu et al. (2022b) Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. Tapex: Table pre-training via learning a neural sql executor. In _International Conference on Learning Representations (ICLR)_ , 2022b.
* Lu et al. (2021a) Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning. In _The 59th Annual Meeting of the Association for Computational Linguistics (ACL)_ , 2021a.
* Lu et al. (2021b) Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning. In _The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks_ , 2021b.
* Lu et al. (2022a) Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In _The 36th Conference on Neural Information Processing Systems (NeurIPS 2022)_ , 2022a.
* Lu et al. (2022b) Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_ , pp. 8086–8098, 2022b.
* Miao et al. (2020) Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing english math word problem solvers. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)_ , pp. 975–984, 2020.
* Nan et al. (2022) Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Lin, Neha Verma, Rui Zhang, Wojciech Kryściński, Nick Schoelkopf, Riley Kong, Xiangru Tang, et al. Fetaqa: Free-form table question answering. _Transactions of the Association for Computational Linguistics (TACL)_ , 10:35–49, 2022.
* Pasupat & Liang (2015) Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJNLP)_ , pp. 1470–1480, 2015.
* Patel et al. (2021) Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math word problems? In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)_ , pp. 2080–2094, 2021.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research (JMLR)_ , 21:1–67, 2020.
* Roy & Roth (2017) Subhro Roy and Dan Roth. Unit dependency graph and its application to arithmetic word problem solving. In _Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)_ , 2017.
* Sutton et al. (1998) Richard S Sutton, Andrew G Barto, et al. _Introduction to reinforcement learning_. MIT press Cambridge, 1998.
* Talmor et al. (2020) Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, and Jonathan Berant. Multimodalqa: complex question answering over text, tables and images. In _International Conference on Learning Representations (ICLR)_ , 2020.
* Upadhyay & Chang (2017) Shyam Upadhyay and Ming-Wei Chang. Annotating derivations: A new evaluation strategy and dataset for algebra word problems. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (ACL)_ , pp. 494–504, 2017\.
* Wang et al. (2022) Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. _arXiv preprint arXiv:2203.11171_ , 2022.
* Wang et al. (2017) Yan Wang, Xiaojiang Liu, and Shuming Shi. Deep neural solver for math word problems. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pp. 845–854, 2017.
* Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. _arXiv preprint arXiv:2201.11903_ , 2022.
* Williams (1992) Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine learning_ , 8(3):229–256, 1992.
* Yu et al. (2018) Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pp. 3911–3921, 2018.
* Zhao et al. (2022) Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang. Multihiertt: Numerical reasoning over multi hierarchical tabular and textual data. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_ , pp. 6588–6600, 2022.
* Zhao et al. (2021) Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In _International Conference on Machine Learning (ICML)_ , pp. 12697–12706. PMLR, 2021.
* Zhong et al. (2017) Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. _arXiv preprint arXiv:1709.00103_ , 2017.
* Zhu et al. (2021) Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat-Seng Chua. Tat-qa: A question answering benchmark on a hybrid of tabular and textual content in finance. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-JCNLP)_ , pp. 3277–3287, 2021.
## Appendix A Appendix
### A.1 Dataset collection
The raw problems are collected from an online learning website,
IXL222https://www.ixl.com/math, which hosts a large number of high-quality
math problems curated by educational experts.
Quality control. The goal of constructing TabMWP is to collect math word
problems that necessitate multi-hop mathematical reasoning between the
question and the tabular context. Therefore, we ask human experts to filter
problems that can be solved either without the context of the table or by
looking up table cells without numerical reasoning. To further ensure data
quality, we ask human experts to perform a final review to re-check the
dataset and manually revise incorrect annotations.
Question types | Answer types (%) | Descriptions
---|---|---
Free-text | Integer (59.50%) | The answer is an integer number, e.g., “40”, “1,207”, “-3”.
Free-text | Decimal (15.23%) | The answer is a decimal or a fraction number, e.g., “192.80”, “68/217”.
Multi-choice | Extractive (13.01%) | The answer can be extracted from the table context.
Multi-choice | Boolean (10.97%) | The answer is Boolean, e.g., “yes”/“no”, “true”/“false”, “linear”/“nonlinear”.
Multi-choice | Other (1.29%) | The answer belongs to other text types, e.g., a statement.
Table 6: Format diversity of questions and answers in TabMWP.
Table 7: Three different formats for the tables in the TabMWP dataset.
### A.2 Human study
To examine how humans perform on our TabMWP dataset, we released the human
evaluation task on Amazon Mechanical Turk (AMT) for the test split. We designed
two sub-tasks for the human study: answering the free-text questions and
answering the multi-choice questions. The user interfaces for the two sub-
tasks are shown in Figure 4. Each human intelligence task (HIT) contains 5
exam questions and 15 test questions. A worker should have a HIT Approval Rate
of 98% or higher and be approved with 5,000 or more HITs. The worker is
provided with detailed instructions at the beginning and needs to pass at
least 3 free-text exam questions or 4 multi-choice exam questions to be
qualified for the human study. Each HIT is assigned to two different workers.
We assign a reward of $0.80 and $0.60 for one HIT of free-text and multi-
choice sub-tasks, respectively.
Figure 4: User interfaces of human study for free-text and multi-choice
questions.
### A.3 The PromptPG Algorithm
The pipeline of PromptPG to learn to select in-context examples is summarized
in Algorithm 1.
Algorithm 1 Dynamic Prompt Learning via Policy Gradient (PromptPG)
Input: Initial policy $\pi_{\theta_{0}}$, training example set
$P_{\text{train}}$, candidate example set $E_{\text{cand}}$, # of training
epochs $N$
Output: Learned policy $\pi_{\theta}$
1:function REINFORCE($\pi_{\theta_{0}}$, $P_{\text{train}}$,
$E_{\text{cand}}$, $N$)
2: Initialize policy network $\pi$ with parameter $\theta_{0}$
3: for epoch = $1,2,...,N$ do
4: for $P_{\text{batch}}\in P_{\text{train}}$ do $\triangleright$ get a batch
from the training set
5: $\mathcal{L}_{\text{batch}}\leftarrow 0$
6: for $p_{i}\in P_{\text{batch}}$ do
7: Sample $e_{i}^{k}\sim\pi_{\theta}(e_{i}|p_{i}),e_{i}^{k}\in
E_{\text{cand}},k=\\{1,...,K\\}$ $\triangleright$ $K$ is # of in-context
examples
8: $\hat{a}_{i}\leftarrow\text{GPT-3}(e_{i}^{1},...,e_{i}^{k},p_{i})$
$\triangleright$ $\hat{a}_{i}$ is the GPT-3 generated answer
9: $r_{i}\leftarrow\text{Eval}(\hat{a}_{i},a_{i}),r_{i}\in\\{-1,1\\}$
$\triangleright$ $a_{i}$ is the ground truth answer of $p_{i}$
10:
$\mathcal{L}_{\text{batch}}\leftarrow\mathcal{L}_{\text{batch}}-r_{i}\cdot\ln\pi_{\theta}(e_{i}|p_{i})$
11: end for
12: Optimize $\mathcal{L}_{\text{batch}}$ w.r.t. $\theta$
13: end for
14: end for
15: return $\pi_{\theta}$
16:end function
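A runnable Python sketch of this loop is given below; `gpt3_generate` and `eval_fn` are stand-ins for the GPT-3 call and the answer checker, the defaults follow the hyperparameters reported in Appendix A.4, and sampling without replacement is a simplification of the independent sampling in Eq. (1).

```python
import torch

def train_promptpg(policy, train_problems, candidates, gpt3_generate,
                   eval_fn, epochs=30, batch_size=20, k=2, lr=1e-3):
    """REINFORCE loop mirroring Algorithm 1; only the linear layer of the
    policy network receives gradients."""
    params = [p for p in policy.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    cand_texts = [c["text"] for c in candidates]
    for _ in range(epochs):
        for start in range(0, len(train_problems), batch_size):
            loss = 0.0
            for prob in train_problems[start:start + batch_size]:
                log_pi = policy(prob["text"], cand_texts)   # log pi(e | p)
                idx = torch.multinomial(log_pi.exp(), k)    # sample k examples
                demos = [candidates[int(i)] for i in idx]
                answer = gpt3_generate(demos, prob)
                r = 1.0 if eval_fn(answer, prob["answer"]) else -1.0
                loss = loss - r * log_pi[idx].sum()         # L -= r * ln pi
            if torch.isnan(torch.as_tensor(loss)).any():    # early stop on NaN
                return policy
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```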
### A.4 Implementation Details
Heuristic guess. To investigate the lower bound of the accuracy on TabMWP, we
design simple heuristics to guess answers for each question type. For multi-
choice questions, we randomly select one from the given options with even
probabilities. For free-text questions on TabMWP, the answers can only be
integer or decimal numbers. We therefore use regular
expressions to extract all the numbers from the tabular context and the
question text as candidates, and then randomly choose one number as the
prediction.
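A sketch of this guessing procedure, where the exact regular expression is an assumption:

```python
import random
import re

def heuristic_guess(problem):
    """Lower-bound baseline: a random option for multi-choice, or a random
    number mentioned in the table or question for free-text."""
    if problem.get("choices"):
        return random.choice(problem["choices"])
    numbers = re.findall(r"-?\d+(?:\.\d+)?",
                         problem["table"] + " " + problem["question"])
    return random.choice(numbers) if numbers else "0"
```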
UnifiedQA baselines. UnifiedQA (Khashabi et al., 2020) is a T5-based (Raffel
et al., 2020) QA system that was pre-trained on 8 seed QA datasets of multiple
formats but with a unified text-to-text paradigm. We load the pre-trained
checkpoint as the pre-trained baseline and train it on TabMWP as the fine-
tuned baseline. Three different parameter sizes are compared: small (60M),
base (220M), and large (770M).
TAPEX baselines. TAPEX (Liu et al., 2022b) is a BART-based (Lewis et al.,
2020) language model pre-trained on structured tabular data to mimic the
behavior of a SQL executor that can answer table-based questions. TAPEX shows
state-of-the-art performance on four table-related datasets. We establish the
pre-trained and fine-tuned baselines on top of TAPEX with two model sizes:
base (140M) and large (400M).
Zero-shot GPT-3 and zero-shot-CoT GPT-3. We establish the zero-shot baseline
based on GPT-3 (Brown et al., 2020). The zero-shot setup follows the format of
TQ(C)$\rightarrow$A where the input is the concatenation of tokens of the
tabular context (T), the question text (Q), and choice options (C) that apply
while the output is to predict the answer (A). Following Kojima et al. (2022),
we further build zero-shot-CoT GPT-3, which refers to the GPT-3 model with a
chain-of-thought (CoT) prompt. Specifically, we add the prompt “Let’s think
step by step” at the end of the input to ask the model to generate the multi-
step solution (S) to mimic the reasoning process as humans. Then the model
takes the raw input and the newly generated solution to predict the final
answer.
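This two-stage procedure can be sketched as follows, where `gpt3` is a stand-in callable mapping a prompt string to generated text; the second-stage extraction phrase is an assumption following Kojima et al. (2022).

```python
def zero_shot_cot(gpt3, table, question, options=None):
    """Stage 1 elicits a step-by-step solution; stage 2 extracts the answer."""
    base = f"Table: {table}\nQuestion: {question}"
    if options:
        base += "\nOptions: " + " ".join(options)
    solution = gpt3(base + "\nAnswer: Let's think step by step.")
    answer = gpt3(base + "\nAnswer: Let's think step by step. " + solution
                  + "\nTherefore, the answer is")
    return solution, answer
```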
Few-shot GPT-3 and few-shot-CoT GPT-3. In the few-shot setting, we follow the
standard prompting (Wei et al., 2022) where in-context examples are randomly
selected from the training data as demonstrations for the test example.
Similarly, the few-shot-CoT GPT-3 baseline takes the prompt template of
TQ(C)$\rightarrow$SA to generate the solution before the final answer.
Experimental details. Our experiments for UnifiedQA baselines, TAPEX
baselines, and our proposed PromptPG are conducted using PyTorch on two Nvidia
RTX 3090 GPUs.
For fine-tuning the UnifiedQA and TAPEX baselines, we use the Adam optimizer
(Kingma & Ba, 2014) with an initial learning rate of $5\mathrm{e}{-5}$. The
training process takes 10 epochs with a batch size of 16. The maximum number
of input tokens is set as 200 and the maximum output length is 100.
In our proposed PromptPG, the embedding size of the added linear neural
network is 768. To learn the policy network, we use the Adam optimizer with an
initial learning rate of $1\mathrm{e}{-3}$. The maximum number of training
epochs is 30, with a batch size of 20. The training process is stopped early
if there is any NaN value in the loss for a batch of training data.
For the GPT-3 engine, we use text-davinci-002, the most capable engine
recommended by the official documentation. The temperature is set as 0 and the
top probability is set as 1.0 to get the most deterministic prediction. The
maximum number of tokens allowed for generating text is 512. Both the
frequency penalty and the presence penalty are set as the default value, i.e.,
0.
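With the legacy (pre-1.0) OpenAI Python SDK, these settings correspond to a call like the following; note that the text-davinci-002 engine has since been deprecated.

```python
import openai  # legacy (pre-1.0) SDK interface

def call_gpt3(prompt: str) -> str:
    """Deterministic completion with the settings reported above."""
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        temperature=0,        # most deterministic prediction
        top_p=1.0,
        max_tokens=512,       # generation budget
        frequency_penalty=0,  # default values
        presence_penalty=0,
    )
    return response["choices"][0]["text"].strip()
```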
### A.5 More Experimental Results
Method | Selection strategy | # training examples | # candidate examples | # few-shot examples | Trial 1 | Trial 2 | Trial 3 | Average (%)
---|---|---|---|---|---|---|---|---
Few-shot GPT-3 | Random selection | 0 | 0 | 2 | 58.12 | 57.00 | 56.27 | 57.13 $\pm$ 0.93
Few-shot-CoT GPT-3 | Random selection | 0 | 0 | 2 | 59.85 | 63.52 | 65.39 | 62.92 $\pm$ 2.30
Few-shot-CoT GPT-3 | PromptPG (ours) | 160 | 20 | 2 | 68.85 | 65.63 | 70.22 | 68.23 $\pm$ 1.92
Table 8: Experimental settings and raw accuracy results of random selection
and our PromptPG for the few-shot GPT-3 model on the TabMWP test split. For
each setting, we repeat the experiment with the same set of three different
random seeds.
### A.6 Case study examples
$\triangleright$ In-context example 1 (ID: 28463) Table: Option $|$ Change in
phone price Add an upgrade $|$ $60 Buy a used phone $|$ -$75 Question: Luna is
looking at the price of new cell phones online. Her favorite company,
OrangeTech, has a special this weekend. Luna can add an upgrade to a phone for
an additional cost, or she can buy a used phone to get a discount. The change
in price for each option is shown in the table. Which option results in a
greater change in price? Options: (A) adding an upgrade (B) buying a used
phone Answer: (Step 1) To find the option that results in a greater change in
price, use absolute value. Absolute value tells you how much the price
changes. (Step 2) Add an upgrade: $|$$60$|$ = $60 (Step 3) Buy a used phone:
$|$-$75$|$ = $75 (Step 4) Buying a used phone results in a greater change in
price. It reduces the price by $75. The answer is buying a used phone.
$\triangleright$ In-context example 2 (ID: 13974) Table: heart-shaped beads
$|$ $3/kilogram rectangular beads $|$ $2/kilogram spherical beads $|$
$2/kilogram oval beads $|$ $2/kilogram Question: Rebecca bought 2.5 kilograms
of oval beads. How much did she spend? (Unit: $)
Answer:
(Step 1) Find the cost of the oval beads. Multiply the price per kilogram by
the number of kilograms.
(Step 2) $2 × 2.5 = $5
(Step 3) She spent $5. The answer is 5.
$\triangleright$ Test example (ID: 17417) Table: $[$TITLE$]$: Birthday party
Activity $|$ Parents $|$ Children Singing $|$ 14 $|$ 20 Eating cake $|$ 5 $|$
10 Jumping rope $|$ 16 $|$ 20 Swimming $|$ 16 $|$ 19 Playing tag $|$ 4 $|$ 9
Question: At Josie’s birthday party, children and parents celebrated by
participating in some activities. How many more children than parents are
playing tag? (Unit: children)
Answer:
(Step 1) To find the difference between the number of children and parents
playing tag, subtract the number of parents from the number of children.
(Step 2) 9 - 4 = 5
(Step 3) There are 5 more children than parents playing tag. The answer is 5.
Output: 5
Figure 5: Two in-context examples selected by PromptPG, the prompt, and the
correct prediction. The selected examples require similar abilities of
mathematical reasoning to the test example.
$\triangleright$ In-context example 1 (ID: 18429) Table: $[$TITLE$]$:
Children’s weights (lbs) Stem $|$ Leaf 1 $|$ 7 2 $|$ 4 3 $|$ 4 $|$ 5 $|$ 2, 2,
8 6 $|$ 6 7 $|$ 1, 3 8 $|$ 7, 8 9 $|$ 0 Question: Dr. Thornton, a
pediatrician, weighed all the children who recently visited her office. How
many children weighed exactly 52 pounds? (Unit: children)
Answer:
(Step 1) For the number 52, the stem is 5, and the leaf is 2. Find the row
where the stem is 5. In that row, count all the leaves equal to 2.
(Step 2) You counted 2 leaves, which are blue in the stem-and-leaf plot above.
(Step 3) 2 children weighed exactly 52 pounds. The answer is 2.
$\triangleright$ In-context example 2 (ID: 18129) Table: whole wheat pancake
mix $|$ $1/lb buttermilk pancake mix $|$ $2/lb chocolate chip pancake mix $|$
$2/lb blueberry pancake mix $|$ $1/lb plain pancake mix $|$ $2/lb raspberry
pancake mix $|$ $2/lb Question: If Vince buys 3.4 pounds of buttermilk pancake
mix, how much will he spend? (Unit: $)
Answer:
(Step 1) Find the cost of the buttermilk pancake mix. Multiply the price per
pound by the number of pounds.
(Step 2) $2 × 3.4 = $6.80
(Step 3) He will spend $6.80. The answer is 6.80.
$\triangleright$ Test example (ID: 17417) Table: $[$TITLE$]$: Birthday party
Activity $|$ Parents $|$ Children Singing $|$ 14 $|$ 20 Eating cake $|$ 5 $|$
10 Jumping rope $|$ 16 $|$ 20 Swimming $|$ 16 $|$ 19 Playing tag $|$ 4 $|$ 9
Question: At Josie’s birthday party, children and parents celebrated by
participating in some activities. How many more children than parents are
playing tag? (Unit: children)
Answer:
(Step 1) For the activity “Playing tag”, the stem is 4, and the leaf is 9.
Find the row where the stem is 4. In that row, count all the leaves equal to
9.
(Step 2) You counted 9 leaves, which are blue in the stem-and-leaf plot above.
9 children are playing tag. The answer is 9.
Output: 9
Ground truth: 5
Figure 6: Two in-context examples selected by nearest neighbor search, the
prompt, and the final wrong prediction. The selected examples are only
semantically similar to the test example.
$\triangleright$ In-context example 1 (ID: 13033) Table: $[$TITLE$]$:
Watermelons harvested Day $|$ Number of watermelons Wednesday $|$ 59 Thursday
$|$ 51 Friday $|$ 53 Saturday $|$ 52 Question: A farmer recalled how many
watermelons were harvested in the past 4 days. On which day were the most
watermelons harvested?
Options: (A) Wednesday (B) Thursday (C) Friday (D) Saturday
Answer:
(Step 1) Find the greatest number in the table. Remember to compare the
numbers starting with the highest place value. The greatest number is 59.
(Step 2) Now find the corresponding day. Wednesday corresponds to 59. The
answer is Wednesday.
$\triangleright$ In-context example 2 (ID: 32386) Table: $[$TITLE$]$:
Basketball hoops Park $|$ Number of basketball hoops Heron Park $|$ 2 Kelly
Park $|$ 7 Westfield Park $|$ 4 Pinehurst Park $|$ 4 Linden Park $|$ 3
Mooreland Park $|$ 7 Crestview Park $|$ 2 Riverfront Park $|$ 4 Question: The
parks department compared how many basketball hoops there are at each park.
What is the mode of the numbers?
Answer:
(Step 1) Read the numbers from the table.
(Step 2) First, arrange the numbers from least to greatest: 2, 2, 3, 4, 4, 4,
7, 7.
(Step 3) Now count how many times each number appears.
(Step 4) 2 appears 2 times. 3 appears 1 time. 4 appears 3 times. 7 appears 2
times.
(Step 5) The number that appears most often is 4.
(Step 6) The mode is 4. The answer is 4.
$\triangleright$ Test example (ID: 17417) Table: $[$TITLE$]$: Birthday party
Activity $|$ Parents $|$ Children Singing $|$ 14 $|$ 20 Eating cake $|$ 5 $|$
10 Jumping rope $|$ 16 $|$ 20 Swimming $|$ 16 $|$ 19 Playing tag $|$ 4 $|$ 9
Question: At Josie’s birthday party, children and parents celebrated by
participating in some activities. How many more children than parents are
playing tag? (Unit: children)
Answer:
(Step 1) Find the difference between the number of children and the number of
parents for each activity.
(Step 2) For singing, the difference is 6.
(Step 3) For eating cake, the difference is 5.
(Step 4) For jumping rope, the difference is 4.
(Step 5) For swimming, the difference is 3.
(Step 6) For playing tag, the difference is 5.
(Step 7) The greatest difference is 6. The answer is 6.
Output: 6
Ground truth: 5
Figure 7: Two randomly selected in-context examples, the prompt, and the final
prediction. The selected examples have limited relevance to the test example,
resulting in the wrong output.
Table: $[$TITLE$]$ Math teachers High school $|$ Number of math teachers
Central High $|$ 9 Hillview High $|$ 10 Westside High $|$ 4 Moore High $|$ 2
River High $|$ 6 Northside High $|$ 4 Lincoln High $|$ 8 Thompson High $|$ 5
Question: The school district compared how many math teachers each high school
has. What is the mean of the numbers? Answer: (Step 1) The mean is the average
of the numbers. (Step 2) To find the mean, add all the numbers and divide by
the number of items. (Step 3) 9 + 10 + 4 + 2 + 6 + 4 + 8 + 5 = 48 (Step 4)
There are 8 items, so the mean is 48 ÷ 8 = 6. (Step 5) The mean of the numbers
is 6. Output: 6
Figure 8: The correct prediction from our PromptPG for a free-text question
example. This example requires taking the mean of eight numbers from the table
via addition and division.
Table: topaz $|$ $18.55 per lb amethyst $|$ $19.88 per lb tiger’s eye $|$
$10.29 per lb fool’s gold $|$ $16.00 per lb quartz $|$ $14.63 per lb calcite
$|$ $15.39 per lb granite $|$ $19.23 per lb Question: Austin went to the store
and bought 3 pounds of fool’s gold, 1 pound of topaz, and 2 pounds of tiger’s
eye. How much did he spend? (unit: $) Answer: (Step 1) To find the total
amount Austin spent, multiply the price per pound of each type of stone by the
number of pounds of that stone that he bought, and then add all of the
products together. (Step 2) 3 pounds of fool’s gold: $16.00 per pound $\times$
3 pounds = $48.00 (Step 3) 1 pound of topaz: $18.55 per pound $\times$ 1 pound
= $18.55 (Step 4) 2 pounds of tiger’s eye: $10.29 per pound $\times$ 2 pounds
= $20.58 (Step 5) $48.00 + $18.55 + $20.58 = $87.13 (Step 6) Austin spent
$87.13. Output: 87.13
Figure 9: The correct prediction from our PromptPG for a free-text question
example. This example requires looking up three items and their corresponding
prices in the table, calculating their costs, and finally summing them up to
get the final answer.
Table: Employee $|$ Pay period $|$ Dhruba Khanal $|$ December 9-15 $|$ Total
earnings $|$ $|$ $620.00 Federal income tax $|$ $71.14 $|$ State income tax
$|$ $48.90 $|$ Other taxes $|$ $47.00 $|$ Total taxes $|$ $|$ ? Pay after
taxes $|$ $|$ ? Question: Look at Dhruba’s pay stub. Dhruba lives in a state
that has state income tax. How much did Dhruba make after taxes? (unit: $)
Answer: (Step 1) To find Dhruba’s pay after taxes, subtract the total taxes
from Dhruba’s total earnings. (Step 2) $620.00 - $71.14 - $48.90 - $47.00 =
$452.96 (Step 3) Dhruba made $452.96 after taxes. The answer is 452.96.
Output: 452.96
Figure 10: The correct prediction from our PromptPG for a free-text question
example. In this example, the model is asked to understand a hierarchical tax
report and calculate the pay after taxes.
Table: $[$TITLE$]$ Bus schedule the school $|$ 8:00 A.M. $|$ 9:15 A.M. $|$
9:30 A.M. $|$ 10:00 A.M. $|$ 11:00 A.M. the zoo $|$ 9:00 A.M. $|$ 10:15 A.M.
$|$ 10:30 A.M. $|$ 11:00 A.M. $|$ 12:00 P.M. the mall $|$ 9:15 A.M. $|$ 10:30
A.M. $|$ 10:45 A.M. $|$ 11:15 A.M. $|$ 12:15 P.M. the grocery store $|$ 9:30
A.M. $|$ 10:45 A.M. $|$ 11:00 A.M. $|$ 11:30 A.M. $|$ 12:30 P.M. the science
museum $|$ 10:30 A.M. $|$ 11:45 A.M. $|$ 12:00 P.M. $|$ 12:30 P.M. $|$ 1:30
P.M. the library $|$ 11:15 A.M. $|$ 12:30 P.M. $|$ 12:45 P.M. $|$ 1:15 P.M.
$|$ 2:15 P.M. the kickball field $|$ 11:45 A.M. $|$ 1:00 P.M. $|$ 1:15 P.M.
$|$ 1:45 P.M. $|$ 2:45 P.M. the playground $|$ 12:45 P.M. $|$ 2:00 P.M. $|$
2:15 P.M. $|$ 2:45 P.M. $|$ 3:45 P.M. the doctor’s office $|$ 1:15 P.M. $|$
2:30 P.M. $|$ 2:45 P.M. $|$ 3:15 P.M. $|$ 4:15 P.M. Question: Look at the
following schedule. Lena is at the mall. If she wants to arrive at the library
at 1.15 P.M., what time should she get on the bus? Options: (A) 11:15 A.M. (B)
10:30 A.M. (C) 12:30 P.M. (D) 12:15 P.M. Answer: (Step 1) Find the mall on the
schedule. (Step 2) The mall is on the bus route at 9:15 A.M., 10:30 A.M.,
10:45 A.M., 11:15 A.M., and 12:15 P.M. Find the library on the schedule. (Step
3) The library is on the bus route at 11:15 A.M., 12:30 P.M., 12:45 P.M., 1:15
P.M., and 2:15 P.M. (Step 4) Lena should get on the bus at 11:15 A.M. to
arrive at the library at 1:15 P.M. The answer is 11:15 A.M. Output: (A) 11:15
A.M.
Figure 11: The correct prediction from our PromptPG for a multi-choice
question. There are 9 rows and 6 columns in the given tabular context. Our
model successfully locates the target cells in the table and performs multi-
hop reasoning to predict the correct answer.
Table: science-fiction book $|$ $4.31 mystery novel $|$ $8.26 crossword puzzle
book $|$ $8.74 geography book $|$ $8.61 coloring book $|$ $8.08 paperback book
$|$ $8.45 Question: Ariana has $16.50. Does she have enough to buy a paperback
book and a mystery novel? Options: (A) yes (B) no Answer: (Step 1) To answer
this question, look at the table to find the price of a paperback book and the
price of a mystery novel. (Step 2) A paperback book costs $8.45 and a mystery
novel costs $8.26. (Step 3) $8.45 + $8.26 = $16.71 (Step 4) Since $16.71 is
greater than $16.50, Ariana does not have enough money to buy a paperback book
and a mystery novel. The answer is no. Output: (B) no
Figure 12: The correct prediction from our PromptPG for a multi-choice
question with Boolean options. It needs to compare the budget and the total
costs to verify if Ariana has enough money.
Table: yellow tiger’s eye $|$ $0.85 piece of green slate $|$ $0.59 piece of
red sandstone $|$ $0.19 piece of rose quartz $|$ $0.61 smooth piece of marble
$|$ $0.45 Question: How much money does Connor need to buy a piece of rose
quartz, a piece of green slate, and a piece of red sandstone? (unit: $)
Answer: (Step 1) Connor needs to buy a piece of rose quartz, a piece of green
slate, and a piece of red sandstone. To find the total amount of money Connor
needs, add the prices of the three items. (Step 2) $0.85 \+ $0.59 + $0.19 =
$1.63 (Step 3) Connor needs $1.63 to buy a piece of rose quartz, a piece of
green slate, and a piece of red sandstone. The answer is 1.63. Output: 1.63
Ground truth: 1.39
Figure 13: The wrong prediction from our PromptPG for a free-text question
example. Our model retrieves the wrong price for the rose quartz, thus
calculating the wrong cost sum of three items.
Table: $[$TITLE$]$ Apples per tree Stem $|$ Leaf 1 $|$ 1, 3, 6 2 $|$ 2, 3, 3,
6 3 $|$ 0 4 $|$ 3 5 $|$ 2, 6 6 $|$ 0, 0, 2 7 $|$ 2, 8 8 $|$ 4, 5, 5, 6, 7
Question: A farmer counted the number of apples on each tree in her orchard.
How many trees have at least 51 apples but fewer than 83 apples? (unit: trees)
Answer: (Step 1) To solve this problem, look at the table and find all the
trees that have at least 51 apples but fewer than 83 apples. (Step 2) There
are three trees that have at least 51 apples but fewer than 83 apples. The
answer is 3. Output: 3 Ground truth: 7
Figure 14: The wrong prediction from our PromptPG for a free-text question
example. In this example, the problem is provided with an abstract Stem-Leaf
table. Our model fails to understand this domain-specific table and lacks a
high-level ability of logical reasoning to get the correct answer.
Table: $[$TITLE$]$ Lawn mowers sold Day $|$ Number of lawn mowers Tuesday $|$
8 Wednesday $|$ 7 Thursday $|$ 5 Friday $|$ 7 Saturday $|$ 9 Sunday $|$ 8
Monday $|$ 8 Question: A garden supply store kept track of how many lawn
mowers it sold in the past 7 days. What is the median of the numbers? Answer:
(Step 1) The median is the "middle" value. To find the median, put the numbers
in order from least to greatest and find the number in the middle. (Step 2) 8,
7, 5, 7, 9, 8, 8 (Step 3) The median is 7. The answer is 7. Output: 7 Ground
truth: 8
Figure 15: The wrong prediction from our PromptPG for a free-text question
example. It seems that our model has a limited ability to order numbers in the
mathematical domain.
Table: $[$TITLE$]$ Tour boat schedule Ocean City $|$ 8:15 A.M. $|$ 9:00 A.M.
$|$ 9:15 A.M. $|$ 9:30 A.M. $|$ 10:00 A.M. Whale Watch Harbor $|$ 9:30 A.M.
$|$ 10:15 A.M. $|$ 10:30 A.M. $|$ 10:45 A.M. $|$ 11:15 A.M. Oyster Lighthouse
$|$ 10:15 A.M. $|$ 11:00 A.M. $|$ 11:15 A.M. $|$ 11:30 A.M. $|$ 12:00 P.M.
Fisherman’s Cove $|$ 11:15 A.M. $|$ 12:00 P.M. $|$ 12:15 P.M. $|$ 12:30 P.M.
$|$ 1:00 P.M. Surfing Beach $|$ 12:00 P.M. $|$ 12:45 P.M. $|$ 1:00 P.M. $|$
1:15 P.M. $|$ 1:45 P.M. Question: Look at the following schedule. Haley is at
Ocean City at 9.45 A.M. How soon can she get to Surfing Beach? Options: (A)
1:45 P.M. (B) 12:00 P.M. (C) 10:30 A.M. (D) 12:15 P.M. Answer: (Step 1) Find
9:45 A.M. on the schedule. (Step 2) The next stop is Whale Watch Harbor at
10:15 A.M. (Step 3) The earliest she can get to Surfing Beach is 12:00 P.M.
The answer is 12:00 P.M. Output: (B) 12:00 P.M. Ground truth: (A) 1:45 P.M.
Figure 16: The wrong prediction from our PromptPG for a multi-choice question
example. In this example, the query of the current time provided in the
question text does not hit any time slot exactly in the tabular context.
Therefore, the model fails to locate the accurate time for the next stop.
Table: $[$TITLE$]$ Blocks from school Name $|$ Number of blocks Wendy $|$ 14
Carson $|$ 8 Zach $|$ 17 Lila $|$ 17 Nicole $|$ 3 Shane $|$ 7 Henry $|$ 5
Devin $|$ 17 Question: Some students compared how many blocks they live from
school. What is the mean of the numbers? Answer: (Step 1) The mean is the
average of the numbers. (Step 2) To find the mean, add the numbers and divide
by the number of students. (Step 3) 14 + 8 + 17 + 17 + 3 + 7 + 5 + 17 = 86
(Step 4) There are 8 students, so divide 86 by 8. (Step 5) 86 $\div$ 8 = 10.75
(Step 6) The mean is 10.75. The answer is 10.75. Output: 10.75 Ground truth:
11
Figure 17: The wrong prediction from our PromptPG for a free-text question
example. The example poses a challenge for the model to perform an arithmetic
operation on a long chain of numbers.
Table: $[$TITLE$]$ Bus schedule Name $|$ Score Anita $|$ 128 Krysta $|$ 135
Hugo $|$ 137 Devin $|$ 115 Question: Some friends played miniature golf and
wrote down their scores. How many more strokes did Hugo take than Anita?
(unit: strokes) Answer: (Step 1) Anita’s score is 128. Hugo’s score is 137.
(Step 2) Hugo took 9 more strokes than Anita. Output: 137 Ground truth: 9
Figure 18: The wrong prediction from our PromptPG for a free-text question
example. Although our model includes the correct answer in the generated
output, the output does not follow the format that is designed in in-context
examples. It makes our answer extractor fail to get the target answer. This
issue could be alleviated by completing human-designed rules or developing an
additional module to extract the answer from the prediction more accurately in
various cases.
# Enhancing Dynamic CT Image Reconstruction with Neural Fields Through
Explicit Motion Regularizers
Pablo Arratia, Matthias Ehrhardt, Lisa Kreusser
University of Bath
Bath, UK
{pial20, me549<EMAIL_ADDRESS>
###### Abstract
Image reconstruction for dynamic inverse problems with highly undersampled
data poses a major challenge: not accounting for the dynamics of the process
leads to a non-realistic motion with no time regularity. Variational
approaches that penalize time derivatives or introduce PDE-based motion model
regularizers have been proposed to relate subsequent frames and improve image
quality using grid-based discretization. Neural fields are an alternative to
parametrize the desired spatiotemporal quantity with a deep neural network, a
lightweight, continuous representation with an inductive bias towards
smoothness. This inductive bias has been exploited to enforce time regularity
in dynamic inverse problems, resulting in neural fields optimized by
minimizing a data-fidelity term only.
introducing explicit PDE-based motion regularizers, namely, the optical flow
equation, in 2D+time computed tomography for the optimization of neural
fields. We also compare neural fields against a grid-based solver and show
that the former outperforms the latter.
Keywords: Dynamic Inverse Problems $\cdot$ Neural fields $\cdot$ Explicit
regularizer $\cdot$ Optical flow
## 1 Introduction
In many imaging tasks, the target object changes during the data acquisition.
In clinical settings for instance, imaging techniques such as Computed
Tomography (CT), Positron Emission Tomography (PET) or Magnetic Resonance
Imaging (MRI) are used to study moving organs such as the heart or the lungs.
Usually, the acquired data is a time series collected at several discretized
times $0=t_{0}<t_{1}<\ldots<t_{N_{T}}=T$ but the rapid and constant motion of
these organs prevents the scanners from taking enough measurements in a single
time instant $t_{i}$ and thus, measurements are highly undersampled in space.
One way to proceed is by neglecting the time component and solving several
static inverse problems. However, the lack of information makes this naive
frame-by-frame reconstruction a severely ill-posed problem leading to a poor
reconstruction. It is therefore necessary to seek a spatiotemporal quantity
with coherence between subsequent frames whose reconstruction takes into
account the dynamic of the process. Typical approaches are variational methods
that penalize first-order temporal derivative of the sequence [1, 2, 3] and
variational methods that incorporate explicit motion models based on partial
differential equations (PDEs) [4, 5, 6, 7]. We focus on the latter approach,
which aims at penalizing unrealistic dynamics at the cost of introducing the
motion as an additional quantity to discover. We refer to [8] for an extensive
review of dynamic inverse problems. The aforementioned papers use classical
grid-based representations of the spatiotemporal image which suffer from two
issues: (1) their lack of regularity which motivates the use of regularizers
such as Tikhonov, total variation, or the previously mentioned motion model,
and (2) their complexity grows exponentially with the dimension and
polynomially with the discretization due to the curse of dimensionality, which
can incur a significant memory burden.
In the last couple of years, coordinate-based multilayer perceptrons (MLPs)
have been employed as a new way of parameterizing quantities of interest. In
computer vision, these are referred to as neural fields or implicit neural
representations [9, 10], while the term Physics-Informed Neural Networks
(PINNs) has been adopted when solving PDEs [11, 12]. The main idea is to use a
neural network $u_{\theta}$ with trainable weights $\theta$ as an ansatz for
the solution of the problem. It takes as input a low-dimensional point in the
domain $x\in\Omega$, e.g., a pixel location, and outputs the value
$u_{\theta}(x)$ at that point. The problem is then rephrased as a non-convex
optimization that seeks optimal weights $\theta$. The method requires training
a neural network for every new instance, thus, it is said to be self-
supervised and differs from the usual learning framework where a solution map
is found by training a network over large datasets. Applications of neural
fields include image reconstruction in CT [13, 14, 15], MRI [16, 17, 18, 19,
20], image registration [21, 22, 23], continuous shape representation via
signed distance functions [24] and volume rendering [25] among others.
It is well-known that, under mild conditions, neural networks can approximate
functions at any desired tolerance [26], but their widespread use has been
justified by other properties such as (1) the implicit regularization they
introduce, (2) overcoming the curse of dimensionality, and (3) their
lightweight, continuous and differentiable representation. In [27, 28] it is
shown that the number of weights needed to approximate the solution of
particular PDEs grows only polynomially with the dimension of the domain. For the
same reason, only a few weights can represent complex images, leading to a
compact and memory-efficient representation. Finally, numerical experiments
and theoretical results show that neural fields tend to learn smooth functions
early during training [29, 30, 31]. This is both advantageous and
disadvantageous: neural fields can capture smooth regions of natural images
but will struggle at capturing edges. The latter can be overcome with Fourier
feature encoding [32].
In the context of dynamic inverse problems and neural fields, most of the
literature relies entirely on the smoothness introduced by the network on the
spatial and temporal variables to get a regularized solution. This allows
minimizing a data-fidelity term only without considering any explicit
regularizers. Applications can be found on dynamic cardiac MRI in [17, 20,
19], where the network outputs the real and imaginary parts of the signal,
while in [18] the neural field is used to directly fit the measurements and
then inference is performed by inpainting the k-space with the neural field
and taking the inverse Fourier transform. In [33, 34] neural fields are used
to solve a photoacoustic tomography dynamic reconstruction emphasizing their
memory efficiency. In [15], a 3D+time CT inverse problem is addressed with a
neural field parametrizing the initial frame and a polynomial tensor warping
it to get the subsequent frames. To the best of our knowledge, it is the only
work making use of neural fields and a motion model via a deformable template.
In this paper, we investigate the performance of neural fields regularized by
explicit PDE-based motion models in the context of dynamic inverse problems in
CT in a highly undersampled measurement regime with two dimensions in space.
Motivated by [4] and leveraging automatic differentiation to compute spatial
and time derivatives, we study the optical flow equation as an explicit motion
regularizer imposed as a soft constraint as in PINNs. Our findings are based
on numerical experiments and are summarized as follows:
* •
An explicit motion model constrains the neural field to a physically
feasible manifold, improving the reconstruction when compared to a motionless
model.
* •
Neural fields outperform grid-based representations in the context of dynamic
inverse problems in terms of the quality of the reconstruction.
* •
We show that, once the neural field has been trained, it generalizes well to
higher resolutions.
The paper is organized as follows: in section 2 we introduce dynamic inverse
problems, motion models and the optical flow equation, and the joint image
reconstruction and motion estimation variational problem as in [4]; in section
3 we state the main variational problem to be minimized and study how to
minimize it with neural fields and with a grid-based representation; in
section 4 we study our method on a synthetic phantom which, by construction,
perfectly satisfies the optical flow constraint, and show the improvements
given by explicit motion regularizers; we finish with the conclusions in
section 5.
## 2 Dynamic Inverse Problems in Imaging
The development of imaging devices has allowed us to obtain accurate images
with non-invasive methods, which has been particularly exploited in areas such
as biomedical imaging with Computed Tomography, Magnetic Resonance Imaging,
Positron Emission Tomography, etc. Those methods collect measurements from
where the imaged object can be recovered. In (static) inverse problems, we aim
to reconstruct a quantity $u:\Omega\subset\mathbb{R}^{d}\to\mathbb{R}$ from
measurements $f$ by solving an equation of the form
$Ku+\varepsilon=f,$ (1)
where $K$ is the forward operator modelling the imaging process and
$\varepsilon$ is some noise. For dynamic inverse problems, the time variable
is included and thus we seek a time-dependent quantity
$u:\Omega\times[0,T]\to\mathbb{R}$ solving an equation of the form
$K(t)[u(\cdot,t)](\cdot)+\varepsilon=f(\cdot,t),\quad\text{in
}\Omega^{\prime}\times[0,T],$ (2)
where $K$ now is a potentially time-dependent forward operator modelling the
imaging process (e.g., a rotating CT scanner).
When the motion is slow compared to the acquisition speed, it is possible to
take enough measurements to get a reconstruction by neglecting the time
variable, considering the static inverse problem (1) instead, and solving for
each $t$ a variational problem of the form
$\min_{u(\cdot,t)}\mathcal{D}(u(\cdot,t),f(\cdot,t))+\alpha\mathcal{R}(u(\cdot,t)),\quad
t\in[0,T],$
where $\mathcal{D}$ represents a data-fidelity term measuring how well
equation (1) for a particular time instance $t$ is satisfied and whose choice
depends on the nature of the noise, e.g., $L^{2}$ error for gaussian noise or
Kullback-Leibler divergence for Poisson noise. $\mathcal{R}$ is a
regularization term such as the total variation [35] that adds prior
information on $u(\cdot,t)$, and $\alpha>0$ is a regularization parameter
balancing both terms.
However, when the imaged object undergoes some dynamics during measurement
acquisition, then the scanner may fail at sampling enough measurements at a
certain time $t$ and using the previous time-independent variational
formulation will lead to a poor reconstruction with artifacts even with the
use of regularizers due to the lack of sufficient measurements. This has motivated
adding temporal regularity in the solution by introducing motion models [4,
6]. Such models aim to solve a joint variational problem with the unknowns
being the image sequence $u:\Omega\times[0,T]\to\mathbb{R}$ and the motion
given, for instance, by the velocity field
$v:\Omega\times[0,T]\to\mathbb{R}^{d}$.
### 2.1 Motion Model
A motion model describes the relation between pixel intensities of the
sequence and the velocity flow from frame to frame through an equation of the
form $r(u,v)=0$ in $\Omega\times[0,T]$. Its choice is application-dependent,
for instance, the continuity equation is a common choice for 3D problems in
space while the optical flow equation is more suitable for 2D problems. These
models are typically employed for the task of motion estimation, that is,
given the image sequence $u$, recovering the velocity field. For example, the
optical flow equation reads as
$r(u,v):=\partial_{t}u+v\cdot\nabla u=0,\quad\text{ in }\Omega\times[0,T].$
It is derived from the brightness constancy assumption, that is, pixels keep
constant intensity along their trajectory in time. This model poses only one
equation for $d$ unknowns, the components of the velocity field, leading to an
underdetermined set of equations. This is solved by considering a variational
problem in $v$ with a regularization term:
$\min_{v}\mathcal{A}(r(u,v))+\beta\mathcal{S}(v),$ (3)
where $\mathcal{A}$ is a metric measuring how well the equation $r(u,v)=0$ is
satisfied, $\mathcal{S}$ is a regularizer, and $\beta>0$ is the regularization
parameter balancing both terms. This variational model was first introduced
in [36] with $\mathcal{A}$ the $L^{2}$ norm and the $L^{2}$ norm of the
gradient as the regularizer. Since then, different, even non-smooth, norms and
regularizers have been tried: for instance, in [37] the $L^{1}$ norm is used
to impose the motion model, and in [38] the $L^{1}$ norm is employed together
with the total variation for regularization.
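For illustration, the residual $r(u,v)$ can be evaluated on a discrete image sequence with finite differences; the sketch below assumes `u` is an $(N_T, N_x, N_y)$ array and `v` stacks the two velocity components along its last axis.

```python
import numpy as np

def optical_flow_residual(u, v, dt=1.0, dx=1.0, dy=1.0):
    """Evaluate r(u, v) = u_t + v . grad(u) with finite differences.

    u: (Nt, Nx, Ny) image sequence; v: (Nt, Nx, Ny, 2) velocity field.
    """
    u_t = np.gradient(u, dt, axis=0)   # central differences in time
    u_x = np.gradient(u, dx, axis=1)   # and in each spatial direction
    u_y = np.gradient(u, dy, axis=2)
    return u_t + v[..., 0] * u_x + v[..., 1] * u_y

# The data term in (3) with the L1 choice of A would then be, e.g.:
# penalty = np.abs(optical_flow_residual(u, v)).sum() * dt * dx * dy
```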
### 2.2 Joint Image Reconstruction and Motion Estimation
To solve highly undersampled dynamic inverse problems, a joint variational
problem is proposed in [4] where not only the dynamic process $u$ is sought,
but also the underlying motion, expressed in terms of a velocity field $v$. The
main hypothesis is that a joint reconstruction can enhance the discovery of
both quantities, image sequence and motion, improving the final reconstruction
compared to motionless models. Hence, the sought solution $(u^{*},v^{*})$ is a
minimizer for the variational problem given below:
$\min_{u,v}\mathcal{D}(u,f)+\alpha\mathcal{R}(u)+\beta\mathcal{S}(v)+\gamma\mathcal{A}(r(u,v)),$
(4)
with $\alpha,\beta,\gamma>0$ being regularization parameters balancing the
four terms. In [4], the domain is 2D+time and, among other results, it is
shown how the pure motion estimation task for a noisy sequence can be enhanced
by solving the joint task of image denoising and motion estimation.
This model was further employed for 2D+time problems in [6] and [7]. The
former studies its application to dynamic CT with sparse limited angles and
compares the $L^{1}$ and $L^{2}$ norms for the data-fidelity term, with better
results for the $L^{1}$ norm. The latter applies the same logic to dynamic
cardiac MRI. In 3D+time domains, we mention [39] and [40] for dynamic CT and
dynamic photoacoustic tomography, respectively.
## 3 Methods
Depending on the nature of the noise, different data-fidelity terms can be
considered. In this work, we consider Gaussian noise $\varepsilon$, so, to
satisfy equation (2) we use an $L^{2}$ distance between predicted measurements
and data
$\mathcal{D}(u,f):=\displaystyle\int_{0}^{T}\dfrac{1}{2}\|K(t)[u(\cdot,t)](\cdot)-f(\cdot,t)\|^{2}_{L^{2}(\Omega^{\prime})}dt.$
Since $u$ represents a natural image, a suitable choice for regularizer
$\mathcal{R}$ is the total variation to promote noiseless images and capture
edges:
$\mathcal{R}(u):=\displaystyle\int_{0}^{T}\text{TV}(u(\cdot,t))dt.$
For the motion model, we consider the optical flow equation from section 2.1, and to
measure its distance to 0 we use the $L^{1}$ norm. For the regularizer in $v$
we consider the total variation on each of its components.
$\mathcal{A}(r(u,v)):=\|r(u,v)\|_{L^{1}(\Omega\times[0,T])},\quad\mathcal{S}(v):=\displaystyle\sum_{p=1}^{d}\displaystyle\int_{0}^{T}\text{TV}(v^{p}(\cdot,t))dt.$
(5)
Thus, the whole variational problem reads as follows:
$\displaystyle\min_{u,v}\int_{0}^{T}$
$\displaystyle\dfrac{1}{2}\|K(t)[u(\cdot,t)](\cdot)-f(\cdot,t)\|^{2}_{L^{2}(\Omega^{\prime})}+\alpha\text{TV}(u(\cdot,t))+$
(6)
$\displaystyle\beta\sum_{p=1}^{d}\text{TV}(v^{p}(\cdot,t))+\gamma\|\partial_{t}u(\cdot,t)+v(\cdot,t)\cdot\nabla
u(\cdot,t)\|_{L^{1}(\Omega)}dt.$
We now describe how to solve this variational problem numerically for the
neural field and the grid-based representation. In both cases, we proceed with
a discretize-then-optimize approach and assume that measurements
$f\in\mathbb{R}^{N_{T}\times N^{\prime}}$ are given on a uniform grid.
### 3.1 Numerical evaluation with Neural Fields
Since _Tomosipo_ acts on voxelated images and the measurements are given on a
grid as well, the predicted measurement at frame $i$ requires the evaluation
of the network at points on a cartesian grid to get a grid-based
representation
$(u_{\theta})_{i}:=\\{u_{\theta}(x_{j},t_{i})\\}_{j=1,\ldots,N}$. The operator
$K(t_{i})$ maps $(u_{\theta})_{i}\in\mathbb{R}^{N}$ to
$K(t_{i})(u_{\theta})_{i}\in\mathbb{R}^{N^{\prime}}$, for $i=1,\ldots,N_{T}$. To
simplify the notation, we let $(Ku_{\theta})_{i}:=K(t_{i})(u_{\theta})_{i}$.
For the regularization terms, since neural fields are mesh-free, they can be
evaluated at any point of the domain. Additionally, derivatives can be
computed exactly through automatic differentiation, so there is no need to use
finite difference schemes. We then
discretize (6) as follows:
$\displaystyle\min_{\theta,\phi}$
$\displaystyle\dfrac{T}{N_{T}}\dfrac{|\Omega^{\prime}|}{N^{\prime}}\sum_{i=1}^{N_{T}}\sum_{j=1}^{N^{\prime}}\dfrac{1}{2}((Ku_{\theta})_{ij}-f_{ij})^{2}+$
(7) $\displaystyle\mathbb{E}_{(x,t)\sim
U(\Omega\times[0,T])}\left[\alpha\|\nabla
u_{\theta}(x,t)\|_{2}+\beta\left(\displaystyle\sum_{p=1}^{d}\|\nabla
v_{\phi}^{p}(x,t)\|_{2}\right)+\gamma|\partial_{t}u_{\theta}(x,t)+v_{\phi}(x,t)\cdot\nabla
u_{\theta}(x,t)|\right].$
The evaluation of the data fidelity term in (7) would require first the
evaluation of the network at $N_{T}\times N$ fixed grid-points to get the
image $\\{(u_{\theta})_{ij}\\}_{i=1,\ldots,N_{T};j=1,\ldots,N}$, and, second,
the application of the forward model $K(t_{i})$ to each frame
$\\{(u_{\theta})_{i}\\}_{i=1,\ldots,N_{T}}$. This might be expensive and time-
consuming, so we proceed with a stochastic-gradient-descent-like approach in
time instead; that is, at each iteration, the network is evaluated at a
randomly sampled frame, say, the $i$-th frame, with points of the form
$\\{(x_{j},t_{i})\\}_{j=1,\ldots,N}$ to get the representation of the image at
time $t_{i}$. Then, the forward model is applied on this frame only and the
parameters are updated to minimize the difference between predicted data and
the measured data $f_{i}$ at this frame. One epoch then consists of $N_{T}$
iterations. This represents considerable benefits in terms of memory since the
whole scene is never explicitly represented in the whole space-time grid but
adds variability during training. Additionally, at each iteration, we sample
$N_{c}$ collocation points of the form
$\\{(x^{c}_{l},t^{c}_{l})\\}_{l=1\ldots,N_{c}}$, using a Latin Hypercube
Sampling strategy [41] on the domain
$\Omega\times[t_{i}-\delta,t_{i}+\delta]$, for some $\delta>0$, with $t_{i}$
the frame used to evaluate the data fidelity term as explained previously. In
conclusion, at the $k$-th iteration, some $i$-th frame is sampled and the
parameters of the network are updated by taking the gradient with respect to
$\theta$ and $\phi$ of the following function:
$\begin{array}[]{l}\dfrac{T}{N_{T}}\dfrac{|\Omega^{\prime}|}{N^{\prime}}\displaystyle\sum_{j=1}^{N^{\prime}}\dfrac{1}{2}((Ku_{\theta})_{ij}-f_{ij})^{2}+\\\
\dfrac{T|\Omega|}{N_{c}}\displaystyle\sum_{l=1}^{N_{c}}\left[\alpha\|\nabla
u_{\theta}(x^{c}_{l},t^{c}_{l})\|_{2}+\beta\left(\displaystyle\sum_{p=1}^{d}\|\nabla
v_{\phi}^{p}(x^{c}_{l},t^{c}_{l})\|_{2}\right)+\gamma|\partial_{t}u_{\theta}(x^{c}_{l},t^{c}_{l})+v_{\phi}(x^{c}_{l},t^{c}_{l})\cdot\nabla
u_{\theta}(x^{c}_{l},t^{c}_{l})|\right].\end{array}$
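As a sketch of how the regularization part of this per-iteration objective can be evaluated, the snippet below samples Latin Hypercube collocation points with SciPy and computes the required derivatives by automatic differentiation. Here `u_theta` and `v_phi` stand for the two networks, the constant scaling factor $T|\Omega|/N_{c}$ is omitted, and the sampling and weighting details are assumptions rather than the authors' exact code.

```python
import torch
from scipy.stats import qmc

def regularizer_terms(u_theta, v_phi, t_i, delta, n_c, alpha, beta, gamma):
    # Latin Hypercube collocation points in Omega x [t_i - delta, t_i + delta],
    # with Omega = [-1, 1]^2 as in the experiments.
    unit = qmc.LatinHypercube(d=3).random(n_c)            # (n_c, 3) in [0, 1]^3
    lo = torch.tensor([-1.0, -1.0, t_i - delta])
    hi = torch.tensor([1.0, 1.0, t_i + delta])
    xt = lo + torch.as_tensor(unit, dtype=torch.float32) * (hi - lo)
    xt.requires_grad_(True)

    u = u_theta(xt)                                       # (n_c, 1)
    v = v_phi(xt)                                         # (n_c, 2)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, :2], du[:, 2]                        # exact autodiff derivatives

    of = (u_t + (v * u_x).sum(dim=1)).abs().mean()        # |u_t + v . grad u|
    tv_u = u_x.norm(dim=1).mean()                         # TV-like term on u
    tv_v = sum(torch.autograd.grad(v[:, p].sum(), xt,
                                   create_graph=True)[0][:, :2].norm(dim=1).mean()
               for p in range(2))                         # TV-like term on each v^p
    return alpha * tv_u + beta * tv_v + gamma * of
```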
Finally, we recall that the choice of $N_{c}$, the number of collocation
points sampled at each iteration, is not obvious. One would like to sample as
many points as possible to have a better approximation of the regularizer,
however, this might be time-consuming and prohibitive in terms of memory
because of the use of automatic differentiation. Thus, we define the _Sampling
Rate_ $SR$ as the ratio between $N_{c}$ and the number of points $N$ on the
spatial grid:
$SR:=\dfrac{N_{c}}{N}.$ (8)
In the experiments, we use a sampling rate of 0.1.
### 3.2 Numerical evaluation with grid-based representation
In this section we briefly describe the numerical realization as in [4]. A
uniform grid
$\\{(x_{j},t_{i})\\}_{i=1,\ldots,N_{T};j=1,\ldots,N}\subset\Omega\times[0,T]$
is assumed. Next, the quantities of interest are vectorized as
$u\in\mathbb{R}^{N_{T}\times N}$, $v\in\mathbb{R}^{N_{T}\times N\times d}$,
such that, $u_{ij}$ denotes the value of $u$ at the point $(x_{j},t_{i})$. The
operator $K(t_{i})$ maps $u_{i}\in\mathbb{R}^{N}$ to
$K(t_{i})u_{i}\in\mathbb{R}^{N^{\prime}}$, for $i=1,\ldots,N_{T}$. To simplify the notation, we let
$(Ku)_{i}:=K(t_{i})u_{i}$. Finite difference schemes are employed to compute
the corresponding derivatives, namely, $D$ and $D_{t}$ denote the discrete
gradients in space and time respectively (these could be forward or centred
differences). Thus $(Du)\in\mathbb{R}^{N_{T}\times Nd}$,
$(D_{t}u)\in\mathbb{R}^{N_{T}\times N}$, and
$(Dv^{p})\in\mathbb{R}^{N_{T}\times N\times d}$. Using the previous, the
variational problem is discretized as
$\displaystyle\min_{u,v}$
$\displaystyle\dfrac{T}{N_{T}}\dfrac{|\Omega^{\prime}|}{N^{\prime}}\sum_{i=1}^{N_{T}}\sum_{j=1}^{N^{\prime}}\dfrac{1}{2}((Ku)_{ij}-f_{ij})^{2}+$
$\displaystyle\dfrac{T}{N_{T}}\dfrac{|\Omega|}{N}\sum_{i=1}^{N_{T}}\sum_{j=1}^{N}\left[\alpha\|(Du)_{ij}\|_{2}+\beta\left(\displaystyle\sum_{p=1}^{d}\|(Dv^{p})_{ij}\|_{2}\right)+\gamma|(D_{t}u)_{ij}+v_{ij}\cdot(Du)_{ij}|\right].$
It can be easily seen that this problem is biconvex, hence, in [4], the
proposed optimization routine updates the current iteration $(u^{k},v^{k})$ by
alternating between the following two subproblems:
* •
Problem in $u$. Fix $v^{k}$ and solve the following minimization problem for
$u$:
$\displaystyle u^{k+1}:=\operatorname*{arg\,min}_{u}$
$\displaystyle\dfrac{T}{N_{T}}\dfrac{|\Omega^{\prime}|}{N^{\prime}}\sum_{i=1}^{N_{T}}\sum_{j=1}^{N^{\prime}}\dfrac{1}{2}((Ku)_{ij}-f_{ij})^{2}+\dfrac{T}{N_{T}}\dfrac{|\Omega|}{N}\sum_{i=1}^{N_{T}}\sum_{j=1}^{N}\left[\alpha\|(Du)_{ij}\|_{2}+\gamma|(D_{t}u)_{ij}+v^{k}_{ij}\cdot(Du)_{ij}|\right].$
(9)
* •
Problem in $v$. Fix $u^{k+1}$ and solve the following minimization problem for
$v$:
$v^{k+1}:=\operatorname*{arg\,min}_{v}\dfrac{T}{N_{T}}\dfrac{|\Omega|}{N}\displaystyle\sum_{i=1}^{N_{T}}\displaystyle\sum_{j=1}^{N}\beta\left(\displaystyle\sum_{p=1}^{d}\|(Dv^{p})_{ij}\|_{2}\right)+\gamma|(D_{t}u^{k+1})_{ij}+v_{ij}\cdot(Du^{k+1})_{ij}|.$
(10)
Each subproblem is convex with non-smooth terms involved that can be solved
using the Primal-Dual Hybrid Gradient (PDHG) algorithm [42]. We refer to [4]
for the details.
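Schematically, the alternating scheme reads as in the sketch below, where `pdhg_solve_u` and `pdhg_solve_v` are hypothetical placeholders for PDHG solvers of the two convex subproblems (9) and (10), not a real library API.

```python
def pdhg_solve_u(u, v, K, f, alpha, gamma):
    """Hypothetical placeholder: PDHG solver for the convex subproblem (9)."""
    raise NotImplementedError

def pdhg_solve_v(u, v, beta, gamma):
    """Hypothetical placeholder: PDHG solver for the convex subproblem (10)."""
    raise NotImplementedError

def alternating_minimization(u, v, K, f, alpha, beta, gamma, n_outer=20):
    # Biconvex problem: alternate between the two convex subproblems.
    for _ in range(n_outer):
        u = pdhg_solve_u(u, v, K, f, alpha, gamma)  # update image, v fixed
        v = pdhg_solve_v(u, v, beta, gamma)         # update motion, u fixed
    return u, v
```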
## 4 Numerical Experiments
In our numerical experiments, we make use of _Tomosipo_ [43] to compute the
X-ray transform and its transpose for both, the neural field and the grid-
based representation. This library provides an integration of the _ASTRA-
toolbox_ [44, 45] with PyTorch for deep learning purposes. We make use of fan-
beam projection geometry for the forward operator. The architecture for the
neural networks in $u$ and $v$ consists of an input layer of size 3, then
Fourier feature mappings of the form
$(x,t)\to(\Gamma_{1}(x),\Gamma_{2}(t))\in\mathbb{R}^{128}$, then three hidden
layers of size 128 and we finish with an output layer of size 1 for $u$ and
size $2$ for $v$. Here $\Gamma_{1}(x):=(\sin(2\pi B_{x}x),\cos(2\pi B_{x}x))$
and $\Gamma_{2}(t):=(\sin(2\pi B_{t}t),\cos(2\pi B_{t}t))$, with the matrices
$B_{x}$ and $B_{t}$ having non-trainable entries
$(B_{x})_{ij}\sim\mathcal{N}(0,\sigma_{x}^{2})$ and
$(B_{t})_{ij}\sim\mathcal{N}(0,\sigma_{t}^{2})$. We found
$\sigma_{x}=\sigma_{t}=0.1$ to give the best results. The neural networks are
trained with the Adam optimizer with an initial learning rate of $10^{-3}$.
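A minimal PyTorch sketch of this architecture is given below. It assumes 32 frequencies per Fourier block, so that the concatenated sine/cosine features have dimension 128 as described above, and a tanh activation, which is an assumption since the activation is not specified.

```python
import math
import torch
import torch.nn as nn

class FourierFeatureNet(nn.Module):
    """Sketch of the neural field described above (assumed layer details)."""
    def __init__(self, out_dim=1, n_feat=32, sigma_x=0.1, sigma_t=0.1, width=128):
        super().__init__()
        # Non-trainable Gaussian frequency matrices B_x (2 -> n_feat) and
        # B_t (1 -> n_feat); sin/cos of each gives 2 * n_feat features per block.
        self.register_buffer("B_x", torch.randn(2, n_feat) * sigma_x)
        self.register_buffer("B_t", torch.randn(1, n_feat) * sigma_t)
        self.mlp = nn.Sequential(
            nn.Linear(4 * n_feat, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, out_dim),
        )

    def forward(self, xt):                     # xt: (batch, 3) = (x, y, t)
        x, t = xt[:, :2], xt[:, 2:]
        gx = 2 * math.pi * x @ self.B_x
        gt = 2 * math.pi * t @ self.B_t
        feats = torch.cat([gx.sin(), gx.cos(), gt.sin(), gt.cos()], dim=-1)
        return self.mlp(feats)
```

With this class, `u_theta = FourierFeatureNet(out_dim=1)` and `v_phi = FourierFeatureNet(out_dim=2)` would parametrize the image and the velocity field, respectively.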
### 4.1 Synthetic experiments
Here we assess our method and compare it against the grid-based one with two
synthetic experiments. The considered domain is the square
$\Omega=[-1,1]^{2}$. To define the ground-truth phantom $u$ we proceed as
follows:
* •
Let $u_{0}:\Omega\to[0,1]$ be the initial frame, i.e., we let
$u(x,0):=u_{0}(x)$.
* •
Define $\varphi:\Omega\times[0,T]\to\Omega$ describing the motion of the
process. It takes a point $(x_{0},t)\in\Omega\times[0,T]$ and outputs
$\varphi(x_{0},t)=x\in\Omega$, the new position of $x_{0}$ at time $t$. For
each time we can define the function $\varphi_{t}:\Omega\to\Omega$ by
$\varphi_{t}(x_{0})=\varphi(x_{0},t)$. We ask for $\varphi_{t}$ to be a
diffeomorphism for every $t\in[0,T]$. Hence we can define the trajectory of
the point $x_{0}$ as $t\mapsto x(t)=\varphi_{t}(x_{0})$.
* •
Define $u(x,t):=u_{0}(\varphi_{t}^{-1}(x))$.
The phantom thus generated solves exactly the optical flow equation with
velocity field $v=\frac{d}{dt}\varphi$.
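This construction can be implemented in a few lines; in the sketch below, `u0` and `phi_inv` are assumed callables for the initial frame and the inverse motion $\varphi_{t}^{-1}$, and the grid matches the $\Omega=[-1,1]^{2}$ domain used here.

```python
import numpy as np

def make_phantom(u0, phi_inv, n=64, n_t=100, T=1.0):
    """Sketch: u(x, t) = u0(phi_t^{-1}(x)) sampled on a grid over [-1, 1]^2.

    u0(x, y) -> intensity; phi_inv(x, y, t) -> (x0, y0), both assumed callables.
    """
    xs = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    frames = []
    for t in np.linspace(0, T, n_t):
        X0, Y0 = phi_inv(X, Y, t)      # pull back grid points to time 0
        frames.append(u0(X0, Y0))      # brightness constancy along trajectories
    return np.stack(frames)            # (n_t, n, n) image sequence
```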
#### 4.1.1 Two-squares phantom
The first phantom is depicted in figure 1 with two squares moving within an
ellipse-shaped background. The inverse motions for the squares on the
left and right are $\varphi_{1}^{-1}$ and $\varphi_{2}^{-1}$ respectively,
each one given by the following expressions:
$\varphi_{1}^{-1}(x,y,t)=\begin{pmatrix}x-\frac{t}{5}\cos(2\pi t)\\\
y-\frac{3t}{4}\sin(2\pi
t)\end{pmatrix},\quad\varphi_{2}^{-1}(x,y,t)=\begin{pmatrix}x-0.3t\\\
y-0.8t\end{pmatrix}.$
From this, the velocity fields are easily expressed as:
$v_{1}(x,y,t)=\begin{pmatrix}\frac{1}{5}\cos(2\pi t)-\frac{2\pi t}{5}\sin(2\pi
t)\\\ \frac{3}{4}\sin(2\pi t)+\frac{3\pi t}{2}\cos(2\pi t)\end{pmatrix},\quad
v_{2}(x,y,t)=\begin{pmatrix}0.3\\\ 0.8\end{pmatrix}.$
$v_{1}$ produces a spiral-like motion for the square on the left and $v_{2}$ a
constant diagonal motion for the square on the right. These are depicted in
the second row of figure 1 (see remark 1 to understand this representation).
###### Remark 1
The second row in figure 1 represents the velocity field as follows: the
coloured boundary frame indicates the direction of the velocity field. The
intensities of the image indicate the magnitude of the vector. As an example,
the square on the right moves constantly up and slightly to the right during
the motion.
Measurements are obtained by sampling one random angle per frame and further
corrupted with Gaussian noise with standard deviation $\sigma=0.01$. See the
third row in figure 1. To highlight the necessity of motion models, two naive
reconstructions are shown in the fourth row of figure 1. The one on the left
corresponds to a time-static reconstruction, i.e., assuming that the squares
are not moving. The result is an image that blurs those regions where the
squares moved. The one on the right is a frame-by-frame reconstruction which,
as expected, cannot get a reliable reconstruction from one projection only.
Figure 1: First row: ground truth image at frames 15, 35, 55, 75, 95 (out of
100). Second row: velocity field at frames 15, 35, 55, 75, 95 (out of 100).
Third row: fan-beam measurements with one random angle projection per time
instant. Fourth row left: reconstruction neglecting the time component. Fourth
row right: frame-by-frame reconstruction from angle at $t=0$.
_Effect of the motion regularization parameter $\gamma$._
We begin our study by choosing $\alpha,\beta=0$ and varying
$\gamma\in\\{0,10^{-5},10^{-4},10^{-3},10^{-2}\\}$. In this case, we find that
the best reconstruction in terms of PSNR is given by $\gamma=10^{-3}$. We show
this reconstruction and the one with $\gamma=0$ in figure 2. The
reconstruction with $\gamma=10^{-3}$ achieves a PSNR of 25.7 versus a PSNR of
22.08 for the implicitly regularized neural field. We also show in figure 3(a)
how the PSNR changes during the optimization. The curves $\gamma=0$ and
$\gamma=10^{-5}$ are the ones performing the worst since almost no motion
model is being imposed. As we increase $\gamma$ the reconstruction improves
and is consistently better than the motionless one. We note, however, the
variability of the metric during training, which makes it difficult to decide
when to stop training. We attribute this behaviour to the randomness coming
from the collocation points; an increase in the sampling rate can alleviate
it.
Figure 2: Reconstructions with neural fields. First row: prediction with
neural field $(\alpha,\beta,\gamma)=(0,0,10^{-3})$, PSNR: 25.7. Second row:
prediction with implicitly regularized neural field
$(\alpha,\beta,\gamma)=(0,0,0)$, PSNR: 22.08.
Figure 3: Left: Evolution of PSNR during training for different values of
$\gamma$ and every 100 epochs (in our setting each epoch consists of 100
iterations). Right: PSNR along different discretization levels for neural
field and grid-based representations.
_Neural fields versus Grid-based method._
We now compare explicitly regularized neural fields against the grid-based
method outlined in section 3.2. We do not consider the case $\alpha=\beta=0$
for the grid-based approach, since the lack of a TV regularizer leads to poor
performance. For this approach, it was found that the choice
$(\alpha,\beta,\gamma)=(10^{-3},10^{-3},10^{-3})$ led to the best results in
terms of PSNR, achieving a value of 22.8. We try the same choice for the
neural field representation in which case the PSNR is 24.1. Results are shown
in figure 4. There it is clear that neural fields outperform the grid-based
solution even for the choice of regularization parameters that led to the best
behaviour for the grid-based method. Moreover, when comparing the choices
$(\alpha,\beta,\gamma)=(10^{-3},10^{-3},10^{-3})$ and
$(\alpha,\beta,\gamma)=(0,0,10^{-3})$ from the previous point it can be seen
that adding the total variation regularizers on $u$ and $v$ has a negative
effect in terms of PSNR for the image reconstruction task.
Figure 4: Reconstructions with different methods for the choice
$(\alpha,\beta,\gamma)=(10^{-3},10^{-3},10^{-3})$. From left to right: frames
15, 35, 55, 75, 95 (out of 100). First row: prediction with neural field,
PSNR: 24.1. Second row: prediction with grid-based representation, PSNR: 22.8.
_Generalization into higher resolution._
We finish this section by assessing the continuous representation of the
neural field and its generalization by comparing it against the ground truth
at different resolutions. We recall that both the neural field and the
grid-based representation were optimized on a fixed grid of size $64\times
64$. Once trained, the neural field can be evaluated at any resolution, while
the grid-based solution is interpolated into higher resolutions via
nearest-neighbour interpolation. The ground truth image, originally defined at
$1024\times 1024$, is downsampled to the corresponding resolutions for
comparison. For the neural field we take the model with
$(\alpha,\beta,\gamma)=(0,0,10^{-3})$, while for the grid-based method we take
the model with $(\alpha,\beta,\gamma)=(10^{-3},10^{-3},10^{-3})$. Results are
shown in figure 3(b), where it can be seen that the neural field solution
yields the larger gain in reconstruction quality as the resolution increases.
## 5 Conclusion
In this work we studied neural fields for dynamic inverse problems. We saw how
to enhance neural field reconstructions for dynamic inverse problems by making
use of explicit PDE-based motion regularizers, namely, the optical flow
equation. Constraining the neural field to this physically feasible motion
gave a significant improvement over the more widely used motionless,
implicitly regularized network. Since most of the literature relies entirely
on the implicit regularization of the network, this opens the option of
studying further motion models for neural fields, e.g., the continuity
equation for 3D+time problems. We saw that the motion regularization parameter
$\gamma$ played a relevant role in the quality of the reconstruction; however,
its choice is not obvious: a small value led to a behaviour similar to the
implicitly regularized neural field, while a very large value promotes no
motion at all. Finally, we highlight that our goal was to improve the
reconstruction of the image, leaving the motion estimation as an auxiliary
problem rather than a goal. However, there are applications where the motion
is a relevant quantity in itself, for example in cardiac imaging for the
clinical assessment of the heart. In such cases, it can be necessary to
consider explicit regularizers for the motion as well.
We have also compared the performance of neural fields against a classical
grid-based representation, in this case an alternating scheme combined with
PDHG. Even for the choice of regularization parameters $\alpha,\beta,\gamma$
for which this approach performed best, neural fields were still better in
terms of PSNR.
We conclude that neural fields with explicit regularizers can significantly
improve the discovery of spatiotemporal quantities. Their mesh-free nature
makes them suitable for such tasks since derivatives can be computed via
automatic differentiation but also their memory consumption can remain
controlled even for large-scale imaging tasks [33].
## Acknowledgments
Pablo Arratia is supported by a scholarship from the EPSRC Centre for Doctoral
Training in Statistical Applied Mathematics at Bath (SAMBa), under the project
EP/S022945/1.
## References
* [1] Jennifer A Steeden, Grzegorz T Kowalik, Oliver Tann, Marina Hughes, Kristian H Mortensen, and Vivek Muthurangu. Real-time assessment of right and left ventricular volumes and function in children using high spatiotemporal resolution spiral bssfp with compressed sensing. Journal of Cardiovascular Magnetic Resonance, 20(1):79, 2018.
* [2] Tatiana A Bubba, Maximilian März, Zenith Purisha, Matti Lassas, and Samuli Siltanen. Shearlet-based regularization in sparse dynamic tomography. In Wavelets and Sparsity XVII, volume 10394, pages 236–245. SPIE, 2017.
* [3] Esa Niemi, Matti Lassas, Aki Kallonen, Lauri Harhanen, Keijo Hämäläinen, and Samuli Siltanen. Dynamic multi-source x-ray tomography using a spacetime level set method. Journal of Computational Physics, 291:218–237, 2015.
* [4] Martin Burger, Hendrik Dirks, and Carola-Bibiane Schönlieb. A variational model for joint motion estimation and image reconstruction, Jan 2018.
* [5] Martin Burger, Jan Modersitzki, and Sebastian Suhr. A nonlinear variational approach to motion-corrected reconstruction of density images. arXiv preprint arXiv:1511.09048, 2015.
* [6] Martin Burger, Hendrik Dirks, Lena Frerking, Andreas Hauptmann, Tapio Helin, and Samuli Siltanen. A variational reconstruction method for undersampled dynamic x-ray tomography based on physical motion models. Inverse Problems, 33(12):124008, 2017.
* [7] Angelica I Aviles-Rivero, Noémie Debroux, Guy Williams, Martin J Graves, and Carola-Bibiane Schönlieb. Compressed sensing plus motion (cs+ m): a new perspective for improving undersampled mr image reconstruction. Medical Image Analysis, 68:101933, 2021.
* [8] Andreas Hauptmann, Ozan Öktem, and Carola Schönlieb. Image reconstruction in dynamic inverse problems with temporal models. Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging: Mathematical Imaging and Vision, pages 1–31, 2021.
* [9] Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond, May 2022.
* [10] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33:7462–7473, 2020.
* [11] M. Raissi, P. Perdikaris, and G.E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, Feb 2019.
* [12] Salvatore Cuomo, Vincenzo Schiano Di Cola, Fabio Giampaolo, Gianluigi Rozza, Maizar Raissi, and Francesco Piccialli. Scientific machine learning through physics-informed neural networks: Where we are and what’s next. arXiv preprint arXiv:2201.05624, 2022.
* [13] Guangming Zang, Ramzi Idoughi, Rui Li, Peter Wonka, and Wolfgang Heidrich. Intratomo: self-supervised learning-based tomography via sinogram synthesis and prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1960–1970, 2021.
* [14] Yu Sun, Jiaming Liu, Mingyang Xie, Brendt Wohlberg, and Ulugbek S Kamilov. Coil: Coordinate-based internal learning for imaging inverse problems. arXiv preprint arXiv:2102.05181, 2021.
* [15] Albert W Reed, Hyojin Kim, Rushil Anirudh, K Aditya Mohan, Kyle Champley, Jingu Kang, and Suren Jayasuriya. Dynamic ct reconstruction from limited views with implicit neural representations and parametric motion fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2258–2268, 2021.
* [16] Junshen Xu, Daniel Moyer, Borjan Gagoski, Juan Eugenio Iglesias, P Ellen Grant, Polina Golland, and Elfar Adalsteinsson. Nesvor: Implicit neural representation for slice-to-volume reconstruction in mri. IEEE Transactions on Medical Imaging, 2023.
* [17] Johannes F Kunz, Stefan Ruschke, and Reinhard Heckel. Implicit neural networks with fourier-feature inputs for free-breathing cardiac mri reconstruction. arXiv preprint arXiv:2305.06822, 2023.
* [18] Wenqi Huang, Hongwei Bran Li, Jiazhen Pan, Gastao Cruz, Daniel Rueckert, and Kerstin Hammernik. Neural implicit k-space for binning-free non-cartesian cardiac mr imaging. In International Conference on Information Processing in Medical Imaging, pages 548–560. Springer, 2023.
* [19] Jie Feng, Ruimin Feng, Qing Wu, Zhiyong Zhang, Yuyao Zhang, and Hongjiang Wei. Spatiotemporal implicit neural representation for unsupervised dynamic mri reconstruction. arXiv preprint arXiv:2301.00127, 2022.
* [20] Tabita Catalán, Matías Courdurier, Axel Osses, René Botnar, Francisco Sahli Costabal, and Claudia Prieto. Unsupervised reconstruction of accelerated cardiac cine mri using neural fields. arXiv preprint arXiv:2307.14363, 2023.
* [21] Jelmer M Wolterink, Jesse C Zwienenberg, and Christoph Brune. Implicit neural representations for deformable image registration. In International Conference on Medical Imaging with Deep Learning, pages 1349–1359. PMLR, 2022.
* [22] Pablo Arratia López, Hernán Mella, Sergio Uribe, Daniel E Hurtado, and Francisco Sahli Costabal. Warppinn: Cine-mr image registration with physics-informed neural networks. Medical Image Analysis, 89:102925, 2023.
* [23] Jing Zou, Noémie Debroux, Lihao Liu, Jing Qin, Carola-Bibiane Schönlieb, and Angelica I Aviles-Rivero. Homeomorphic image registration via conformal-invariant hyperelastic regularisation. arXiv preprint arXiv:2303.08113, 2023.
* [24] Dieuwertje Alblas, Christoph Brune, Kak Khee Yeung, and Jelmer M Wolterink. Going off-grid: continuous implicit neural representations for 3d vascular modeling. In International Workshop on Statistical Atlases and Computational Models of the Heart, pages 79–90. Springer, 2022.
* [25] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
* [26] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators, Jan 1989.
* [27] Martin Hutzenthaler, Arnulf Jentzen, Thomas Kruse, and Tuan Anh Nguyen. A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations. SN partial differential equations and applications, 1(2):10, 2020.
* [28] Arnulf Jentzen, Diyora Salimova, and Timo Welti. A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients. arXiv preprint arXiv:1809.07321, 2018.
* [29] Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In International conference on machine learning, pages 5301–5310. PMLR, 2019.
* [30] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in neural information processing systems, 31, 2018.
* [31] Sifan Wang, Hanwen Wang, and Paris Perdikaris. On the eigenvector bias of fourier feature networks: From regression to solving multi-scale pdes with physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 384:113938, 2021.
* [32] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in neural information processing systems, 33:7537–7547, 2020.
* [33] Luke Lozenski, Mark A Anastasio, and Umberto Villa. A memory-efficient dynamic image reconstruction method using neural fields. arXiv preprint arXiv:2205.05585, 2022.
* [34] Luke Lozenski, Refik Mert Cam, Mark A Anastasio, and Umberto Villa. Proxnf: Neural field proximal training for high-resolution 4d dynamic image reconstruction. arXiv preprint arXiv:2403.03860, 2024.
* [35] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms, Nov 1992.
* [36] Berthold K.P. Horn and Brian G. Schunck. Determining optical flow, Aug 1981.
* [37] Gilles Aubert, Rachid Deriche, and Pierre Kornprobst. Computing optical flow via variational techniques. SIAM Journal on Applied Mathematics, 60(1):156–182, 1999.
* [38] Christopher Zach, Thomas Pock, and Horst Bischof. A duality based approach for realtime tv-l 1 optical flow. In Pattern Recognition: 29th DAGM Symposium, Heidelberg, Germany, September 12-14, 2007. Proceedings 29, pages 214–223. Springer, 2007\.
* [39] Nargiza Djurabekova, Andrew Goldberg, Andreas Hauptmann, David Hawkes, Guy Long, Felix Lucka, and Marta Betcke. Application of proximal alternating linearized minimization (palm) and inertial palm to dynamic 3d ct. In 15th international meeting on fully three-dimensional image reconstruction in radiology and nuclear medicine, volume 11072, pages 30–34. SPIE, 2019.
* [40] Felix Lucka, Nam Huynh, Marta Betcke, Edward Zhang, Paul Beard, Ben Cox, and Simon Arridge. Enhancing compressed sensing 4d photoacoustic tomography by simultaneous motion estimation. SIAM Journal on Imaging Sciences, 11(4):2224–2253, 2018.
* [41] Michael Stein. Large sample properties of simulations using latin hypercube sampling. Technometrics, 29(2):143–151, 1987.
* [42] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging, Dec 2010.
* [43] Allard Hendriksen, Dirk Schut, Willem Jan Palenstijn, Nicola Viganò, Jisoo Kim, Daniël Pelt, Tristan van Leeuwen, and K. Joost Batenburg. Tomosipo: Fast, flexible, and convenient 3D tomography for complex scanning geometries in Python. Optics Express, Oct 2021.
* [44] Wim Van Aarle, Willem Jan Palenstijn, Jan De Beenhouwer, Thomas Altantzis, Sara Bals, K Joost Batenburg, and Jan Sijbers. The astra toolbox: A platform for advanced algorithm development in electron tomography. Ultramicroscopy, 157:35–47, 2015.
* [45] Wim Van Aarle, Willem Jan Palenstijn, Jeroen Cant, Eline Janssens, Folkert Bleichrodt, Andrei Dabravolski, Jan De Beenhouwer, K Joost Batenburg, and Jan Sijbers. Fast and flexible x-ray tomography using the astra toolbox. Optics express, 24(22):25129–25147, 2016.
|
Inputting an arbitrary pure state $|\psi\rangle$ or mixed state to a quantum verification procedure is analogous, in the classical case, to inputting a probability distribution over $y$; see Theorem 3.
The analogs of the accepting and rejecting subspaces (see Definition 19) are the following sets:
$\displaystyle S_{C}^{\geq a}(x)=\left\{y\,:\,\Pr_{z}[C_{n}(x,y,z)=1]\geq a\right\},$ (152)
$\displaystyle S_{C}^{\leq b}(x)=\left\{y\,:\,\Pr_{z}[C_{n}(x,y,z)=1]\leq b\right\}.$ (153)
The class Functional $\rm MA$ ($\rm FMA$) can be defined in terms of these sets, as can the class Total Functional $\rm MA$ ($\rm TFMA$). (Note that functional classes based on witnesses are not relevant in the classical case, as they would correspond to probability distributions over $y$.)
In the classical case, for fixed $x$ and $y$, one can sample repeatedly from the distribution $p_{y}$. Remarkably, in the quantum case this is also possible, even though cloning of quantum states is impossible [13].
For fixed $x$ and $y$, if one wants to modify the acceptance probability
$p_{y}$, a simple and natural procedure is to sample $N$ times from the
distribution $p_{y}$, yielding a Binomial distribution $B(N,p_{y})$. If one
obtains $k$ heads, one tosses a new coin with bias $g(k)$. This yields a new
acceptance probability $p^{\prime}_{y}=P_{g}(p_{y})$ (where the function
$P_{g}$ depends on $N$ and the choice of function $g(k)$). This classical
procedure can be implemented quantumly, in which case we call it an iterative
procedure, see Definition 29.
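Concretely, the induced map described above is just the expectation of $g$ under the binomial distribution:
$p^{\prime}_{y}=P_{g}(p_{y})=\mathbb{E}_{k\sim B(N,p_{y})}[g(k)]=\sum_{k=0}^{N}\binom{N}{k}\,p_{y}^{k}(1-p_{y})^{N-k}\,g(k).$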
We show in Theorem 9 that if $g$ is increasing and non-constant, the function relating the new and old acceptance probabilities, $p^{\prime}_{y}(p_{y})$, is strictly increasing. We then show in Theorem 10 that by taking $N$ sufficiently large and choosing $g$ appropriately, the strictly increasing function $p^{\prime}_{y}(p_{y})$ can be made to pass through a finite number of points. These two results are purely classical, and apply to both the classical and quantum iterative procedures.
The notion of a non-destructive procedure, Definition 30, is non-trivial in the quantum case. In the classical case it is trivial, as one can copy the input at will.
In Section 13 we give three definitions of $\mbox{{QMA}}\cap\mbox{{coQMA}}$ and show that they are equivalent. Exactly the same definitions can be given for $\rm MA\cap\rm coMA$, and the proofs of equivalence also hold in the classical case. In particular, $\rm MA\cap\rm coMA$ can be defined in terms of a single probabilistic verification procedure. The class Functional $\rm MA\cap\rm coMA$ can then be defined in terms of the sets
$\displaystyle S_{C}^{[0,\frac{1-a^{\prime}}{2}]\cup[\frac{1+a}{2},1]}(x)=\left\{y\,:\,\Pr_{z}[C_{n}(x,y,z)=1]\in[0,\tfrac{1-a^{\prime}}{2}]\cup[\tfrac{1+a}{2},1]\right\},$
$\displaystyle S_{C}^{[\frac{1-a^{\prime}}{2},\frac{1+a}{2}]}(x)=\left\{y\,:\,\Pr_{z}[C_{n}(x,y,z)=1]\in[\tfrac{1-a^{\prime}}{2},\tfrac{1+a}{2}]\right\}.$ (154)
as in Definition 36.
One can then prove analogs of the key results of Sections 15, 16, and 17, namely: that $\rm TFMA$ equals Functional $\rm MA\cap\rm coMA$; that if $\rm TFMA$ is included in $\rm FBPP$ then $\rm MA\cap\rm coMA$ equals $\rm BPP$; and that if there exists an $\rm MA$-complete problem that robustly reduces to a problem in $\rm TFMA$, then $\rm MA=\rm MA\cap\rm coMA$. Indeed, the proofs of all these results are in fact classical.
## References
* [1] N. Megiddo and C. H. Papadimitriou (1991). On total functions, existence theorems and computational complexity. Theoretical Computer Science, 81, pp. 317–324.
* [2] D. S. Johnson, C. H. Papadimitriou and M. Yannakakis (1988). How easy is local search? Journal of Computer and System Sciences, 37(1), pp. 79–100.
* [3] C. H. Papadimitriou, A. A. Schaeffer and M. Yannakakis (1990). On the complexity of local search. In Proceedings of the ACM 22nd Annual Symposium on Theory of Computing, pp. 438–445.
* [4] M. W. Krentel (1989). Structure in locally optimal solutions. In Proceedings of the IEEE 30th Annual Symposium on Foundations of Computer Science, pp. 216–221.
* [5] C. H. Papadimitriou (1994). On the complexity of the parity argument and other inefficient proofs of existence. Journal of Computer and System Sciences, 48(3), pp. 498–532.
* [6] C. Daskalakis, P. W. Goldberg and C. H. Papadimitriou (2009). The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39(1), pp. 195–259.
* [7] X. Chen, X. Deng and S. H. Teng (2009). Settling the complexity of computing two-player Nash equilibria. Journal of the ACM (JACM), 56(3), pp. 14–57.
* [8] P. W. Goldberg and C. Papadimitriou (2018). Towards a unified complexity theory of total functions. Journal of Computer and System Sciences, 94, pp. 167–192.
* [9] A. Y. Kitaev, A. H. Shen and M. N. Vyalyi (2002). Classical and quantum computation. Graduate Studies in Mathematics, Vol. 47 (AMS, Providence, RI).
* [10] A. D. Bookatz (2014). QMA-complete problems. Quantum Information and Computation 14, pp. 361–383. arXiv preprint arXiv:1212.6312.
* [11] D. Janzing, P. Wocjan and T. Beth (2003). Cooling and low energy state preparation for 3-local Hamiltonians are FQMA-complete. arXiv preprint arXiv:quant-ph/0303186.
* [12] S. Massar and M. Santha (2021). Total functions in QMA. Quantum Information Processing 20, pp. 35. arXiv:1805.00670.
* [13] C. Marriott and J. Watrous (2005). Quantum Arthur-Merlin games. Computational Complexity, 14(2), pp. 122–152.
* [14] M. Fürer, O. Goldreich, Y. Mansour, M. Sipser and S. Zachos (1989). On completeness and soundness in interactive proof systems. In Advances in Computing Research: a research annual, Vol. 5 (Randomness and Computation, S. Micali, ed.), pp. 429–442.
* [15] D. Nagaj, P. Wocjan and Y. Zhang (2009). Fast amplification of QMA. Quantum Information and Computation, 9(11), pp. 1053–1068.
* [16] D. Aharonov, M. Ben-Or, F. G. S. L. Brandão and O. Sattath (2008). The pursuit for uniqueness: extending Valiant-Vazirani theorem to the probabilistic and quantum settings. arXiv preprint arXiv:0810.4840.
* [17] A. Deshpande, A. V. Gorshkov and B. Fefferman (2020). The importance of the spectral gap in estimating ground-state energies. arXiv preprint arXiv:2007.11582.
Proc. of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024), May 6–10, 2024, Auckland, New Zealand. N. Alechina, V. Dignum, M. Dastani, J.S. Sichman (eds.).
# Conversational Language Models for Human-in-the-Loop Multi-Robot
Coordination
Demonstration Track
William Hunt (ORCID 0000-0003-4269-5050), Toby Godfrey (ORCID 0009-0004-4501-5051), and Mohammad D. Soorati (ORCID 0000-0001-6954-1284), University of Southampton, United Kingdom
###### Abstract.
With the increasing prevalence and diversity of robots interacting in the real
world, there is a need for flexible, on-the-fly planning and cooperation. Large
Language Models are starting to be explored in a multimodal setup for
communication, coordination, and planning in robotics. Existing approaches
generally use a single agent building a plan, or have multiple homogeneous
agents coordinating for a simple task. We present a decentralised, dialogical
approach in which a team of agents with different abilities plans solutions
through peer-to-peer and human-robot discussion. We suggest that argument-
style dialogues are an effective way to facilitate adaptive use of each
agent’s abilities within a cooperative team. Two robots discuss how to solve a
cleaning problem set by a human, define roles, and agree on paths they each
take. Each step can be interrupted by a human advisor and agents check their
plans with the human. Agents then execute this plan in the real world,
collecting rubbish from people in each room. Our implementation uses text at
every step, maintaining transparency and effective human-multi-robot
interaction.
###### Key words and phrases:
Mixed Human-Robot teams; Multi-robot coordination and collaboration; Large
Language Models
## 1\. Introduction
Multirobot systems, while still reasonably rare in everyday life, are becoming
increasingly common in domains such as agriculture Lytridis et al. (2021);
Albiero et al. (2022), search and rescue Clark et al. (2022), and construction
Werfel et al. (2011). It is projected that the household robotics market will
grow considerably this decade Straits Research (2023), leading to large
numbers of robots interacting not only with their users but also with each
other. This presents increasing demand for robotic systems which can flexibly
adapt to new tasks without prior training. There is also a recent trend
towards generalist robotics; the same model or overarching structure can be
deployed on a wide variety of platforms and tasks Open X-Embodiment
Collaboration (2023). To this end, it is desirable to move towards a domain-
agnostic coordination structure that is common across platforms so that the
group can be treated as a single entity. A popular area of recent development
in the AI landscape is that of Large Language Models (LLMs) Yang et al.
(2023): agents that operate in the domain of text through next-symbol
prediction. LLMs may present an effective way to interpret task or agent
descriptions, as well as calling on internalised understanding and reasoning
to develop the available information. LLMs can also facilitate some degree of
communication between agents that helps them organise while keeping the
control flow understandable and accessible to a human operator. Multi-modal AI
development is now influencing robotics and other agents-based research fields
Driess et al. (2023) by using LLMs to power end-to-end approaches which can
understand and use human-written inputs to inform their actions Brohan et al.
(2023). This fits into an overarching vision towards a multimodal, generalist
agent which can understand and operate on text, images, and other inputs in
robotics Reed et al. (2022). Although conversational agents such as ChatGPT
are typically used to model a human-agent conversation, some works have
focused on modelling a conversation between multiple agents. This is typically
done through “role-playing”: an agent is told to “imagine” that it is a person
with a certain role and then enters a conversation whilst assuming that role
Li et al. (2023). This process can be used to model internal monologue for a
single agent who “talks to themself” about what they can perceive and do Huang
et al. (2023). Conversational approaches can also use multiple personas, each
of whom brings a different specialised perspective to the collective
generation of text by editing the group solution to fulfill different goals
that they each have for the end result Wang et al. (2023).
Role-playing has been used for a variety of tasks including debate Chan et al.
(2023), auctions, haggling Nascimento et al. (2023); Fu et al. (2023), and
checking with a human supervisor that they are not hallucinating Ren et al.
(2023). These approaches set the conditions for a dialogue and leave the
agents to talk, assuming that an intelligent solution emerges naturally from
the conversation. This has been used to create a team of software developers
who each write code and pass responsibilities to the next developer Hong et
al. (2023). LLMs have also been integrated into robotic simulators for
communication to organise who performs each task, allowing robots to
coordinate their workspaces and decide which of them is able to reach an
object Mandi et al. (2023). A similar approach simulates agents fetching items in a house; the agents communicate and use short-term memory to request assistance from each other on the fly Zhang et al. (2023). Some works include diverse skill sets in which each agent has different capabilities; for example, in one such work the agents build a team with the required skills before planning and acting Chen et al. (2023).
We present a proof-of-concept system that leverages the knowledge of
pretrained models by building language-based agents which talk to each other,
and with humans, using natural language. This allows agents to discuss and
debate their strategies towards a collaborative solution to a high-level
mission objective with observation or assistance from a human supervisor. This
forms a pipeline that allows agents to take a high-level task description and
autonomously perform every step of the process, from planning to assignment.
## 2\. Demonstration
An LLM is used to create a conversation where agents build paths to be
executed on hardware, which in turn can detect problems and prompt the LLM to
re-plan. The pipeline uses text at every step of the process to retain deep
meaning from end to end. A Python program takes human input, and calls the GPT
API (“gpt-4-vision-preview”) OpenAI (2023) for language generation. The
conversation produces plans which are passed directly to the robots. In brief,
the steps are: (1) Agent Ego: The “system message” (identity) for each agent
is set to a description of each agent, plus some general guidance on how to
debate; (2) Environment Description: We provide a flowchart-style environment
model that shows agents which rooms are connected with arrows (see Fig. 2(c));
(3) Human Supervisor: The supervisor presents a task to the agents; (4)
Discussion: Agents discuss the task and plan their approach; (5) Calling the
supervisor: Agents call the supervisor when done, or if they need help. The
supervisor can ask for alterations if desired or approve the plan. Agents are
added to the chat and a human supervisor is given a chat box to speak with
them. When agents discuss, they start each message with their name and it is
added to the log. From each agent’s perspective, they perceive every other
agent’s message as one from the human, but the name tags allow them to
understand the conversation properly. The system prompt, which the LLM
considers with every message, encourages agents to negotiate with each other.
This is important because the default GPT configuration is polite and rarely contradicts its interlocutor; for collective planning, however, agents should point out mistakes. When they reach a decision, a path of rooms for each agent is extracted and passed to hardware for execution. Two TurtleBot3 robots (see Fig. 2(a)) are controlled with ROS2 Humble on a Raspberry Pi 4; each is equipped with a LIDAR and an optical camera. ROS gathers data from the LIDAR and controls the differential-drive motors. The LIDAR is used for collision avoidance to protect the hardware as it moves around the environment. The optical camera is used to detect ArUco markers which indicate rooms.
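A minimal sketch of the name-tagged, round-robin dialogue loop is given below; the personas, environment description, and path-extraction step are illustrative stand-ins, not the demo's exact prompts.

```python
# Sketch of the shared-log dialogue loop; AGENTS and ENV are illustrative.
from openai import OpenAI

client = OpenAI()
ENV = "Rooms: Hall -> Kitchen, Hall -> Lab (arrows show connectivity)."
AGENTS = {
    "Alpha": "You are robot Alpha with a large bin. Negotiate; point out mistakes.",
    "Beta": "You are robot Beta, small and fast. Negotiate; point out mistakes.",
}

def discuss(task: str, rounds: int = 3) -> list[str]:
    log = [f"Supervisor: {task}"]
    for _ in range(rounds):
        for name, persona in AGENTS.items():
            # Each agent sees all other messages as user input; the name
            # tags let it follow who said what in the shared log.
            reply = client.chat.completions.create(
                model="gpt-4-vision-preview",
                messages=[{"role": "system", "content": persona + " " + ENV},
                          {"role": "user", "content": "\n".join(log)}],
            ).choices[0].message.content
            log.append(f"{name}: {reply}")
    return log  # a path of rooms per agent is then parsed from the log
```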
We demonstrate a conversational planning system deployed on real robots to
simulate autonomous waste collection. A bin is mounted on top of each robot
using a 3D-printed structure (see Fig. 2(b)), turning the robots into mobile bins (see Fig. 2(a)). Participants can engage by typing a task into the
supervisor PC, and then watching the agents converse. The participants can
also offer advice to the robots or point out issues before approving the plan.
The robots then move around the arena to solve the task in real time.
Participants are encouraged to provide vague and unusual instructions or
interrupt the execution with novel information to test the system and explore
the challenges of the approach as well as its potential. A cut-down example of this process is shown in Fig. 1, where the agents must clean up after lunch at a conference. They define two roles, correct a mistake, and decide on the division of labour. (Our demo video can be found here: https://www.youtube.com/watch?v=cVCwG8aLIvI.)
Figure 1. An example dialogue where the rooms are extracted.
(a) Cleaning Robots
(b) 3D printed connection
(c) Map of environment
(d) View of arena
Figure 2. Cleaning Robots and the demonstration setup.
## 3\. Conclusion
Robotic systems would benefit greatly from being able to understand,
interpret, and utilise otherwise ignored text data. Inter-agent conversation
may be a useful tool for decentralised mission planning with a human in the
loop. We demonstrate conversational multi-agent coordination that allows
agents to be represented with few-word names and calls on the deep knowledge
of Large Language Models. The language-based approach can leverage expert
opinion across many scenarios as the entire system is understandable and can
be interfaced directly by a human user as much as is required. The proposed
demonstration uses a language model to allow two robots to plan and execute a
garbage collection task with a human supervisor in the loop and a large screen
will display the conversations. Participants can interact and interrupt the
system with different textual inputs to learn more about the capabilities and
limitations of using language models for multi-robot coordination.
This project was done as part of the Fast-PI project funded by the UKRI
Trustworthy Autonomous Systems Hub [EP/V00784X/1]. It was also supported by UK
Research and Innovation [EP/S024298/1].
## References
* Albiero et al. (2022) Daniel Albiero, Angel Pontin Garcia, Claudio Kiyoshi Umezu, and Rodrigo Leme de Paulo. 2022. Swarm robots in mechanized agricultural operations: A review about challenges for research. _Computers and Electronics in Agriculture_ 193 (2022), 106608.
* Brohan et al. (2023) Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, et al. 2023\. Do as i can, not as i say: Grounding language in robotic affordances. In _Conference on Robot Learning_. PMLR, 287–318.
* Chan et al. (2023) Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. Chateval: Towards better llm-based evaluators through multi-agent debate. _arXiv preprint arXiv:2308.07201_ (2023).
* Chen et al. (2023) Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, et al. 2023\. AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents. _arXiv preprint arXiv:2308.10848_ (2023).
* Clark et al. (2022) Jediah R. Clark, Mohammad Naiseh, Joel Fischer, Marise Galvez Trigo, Katie Parnell, Mario Brito, Adrian Bodenmann, Sarvapali D. Ramchurn, and Mohammad Divband Soorati. 2022. Industry Led Use-Case Development for Human-Swarm Operations. arXiv:2207.09543 [cs.RO]
* Driess et al. (2023) Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. 2023\. Palm-e: An embodied multimodal language model. _arXiv preprint arXiv:2303.03378_ (2023).
* Fu et al. (2023) Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. 2023. Improving language model negotiation with self-play and in-context learning from ai feedback. _arXiv preprint arXiv:2305.10142_ (2023).
* Hong et al. (2023) Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. 2023\. Metagpt: Meta programming for multi-agent collaborative framework. _arXiv preprint arXiv:2308.00352_ (2023).
* Huang et al. (2023) Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. 2023\. Inner Monologue: Embodied Reasoning through Planning with Language Models. In _Conference on Robot Learning_. PMLR, 1769–1782.
* Li et al. (2023) Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023. Camel: Communicative agents for” mind” exploration of large scale language model society. _arXiv preprint arXiv:2303.17760_ (2023).
* Lytridis et al. (2021) Chris Lytridis, Vassilis G Kaburlasos, Theodore Pachidis, Michalis Manios, Eleni Vrochidou, Theofanis Kalampokas, and Stamatis Chatzistamatis. 2021. An overview of cooperative robotics in agriculture. _Agronomy_ 11, 9 (2021), 1818.
* Mandi et al. (2023) Zhao Mandi, Shreeya Jain, and Shuran Song. 2023. Roco: Dialectic multi-robot collaboration with large language models. _arXiv preprint arXiv:2307.04738_ (2023).
* Nascimento et al. (2023) Nathalia Nascimento, Paulo Alencar, and Donald Cowan. 2023. Self-adaptive large language model (llm)-based multiagent systems. In _2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C)_. IEEE, 104–109.
* Open X-Embodiment Collaboration (2023) Open X-Embodiment Collaboration. 2023. Open X-Embodiment: Robotic Learning Datasets and RT-X Models. https://robotics-transformer-x.github.io.
* OpenAI (2023) OpenAI. 2023. Introducing ChatGPT and Whisper APIs. https://openai.com/blog/introducing-chatgpt-and-whisper-apis
* Reed et al. (2022) Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. 2022\. A generalist agent. _arXiv preprint arXiv:2205.06175_ (2022).
* Ren et al. (2023) Allen Z Ren, Anushri Dixit, Alexandra Bodrova, Sumeet Singh, Stephen Tu, Noah Brown, Peng Xu, Leila Takayama, Fei Xia, Jake Varley, et al. 2023\. Robots that ask for help: Uncertainty alignment for large language model planners. _arXiv preprint arXiv:2307.01928_ (2023).
* Straits Research (2023) Straits Research. 2023. _Household Robotics Market: Information by Application (Robotic Vacuum Mopping, Lawn Mowing), Offering (Products, Services), and Region - Forecast till 2030_.
* Wang et al. (2023) Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. 2023. Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration. _arXiv preprint arXiv:2307.05300_ (2023).
* Werfel et al. (2011) Justin K Werfel, Kirsten Petersen, and Radhika Nagpal. 2011. Distributed multi-robot algorithms for the TERMES 3D collective construction system. In _Proceedings of Robotics: Science and Systems_. Institute of Electrical and Electronics Engineers.
* Yang et al. (2023) Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, and Xia Hu. 2023. Harnessing the power of llms in practice: A survey on chatgpt and beyond. _arXiv preprint arXiv:2304.13712_ (2023).
* Zhang et al. (2023) Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B Tenenbaum, Tianmin Shu, and Chuang Gan. 2023. Building Cooperative Embodied Agents Modularly with Large Language Models. _arXiv preprint arXiv:2307.02485_ (2023).
# Beyond Confidence: Reliable Models Should Also Consider Atypicality
Mert Yuksekgonul
Stanford University
<EMAIL_ADDRESS>
Linjun Zhang
Rutgers University
<EMAIL_ADDRESS>
James Zou (Joint Advisor)
Stanford University
<EMAIL_ADDRESS>
Carlos Ernesto Guestrin (Joint Advisor)
Stanford University, CZ Biohub
<EMAIL_ADDRESS>
###### Abstract
While most machine learning models can provide confidence in their
predictions, confidence is insufficient to understand a prediction’s
reliability. For instance, the model may have a low confidence prediction if
the input is not well-represented in the training dataset or if the input is
inherently ambiguous. In this work, we investigate the relationship between
how atypical (rare) a sample or a class is and the reliability of a model’s
predictions. We first demonstrate that atypicality is strongly related to
miscalibration and accuracy. In particular, we empirically show that
predictions for atypical inputs or atypical classes are more overconfident and
have lower accuracy. Using these insights, we show incorporating atypicality
improves uncertainty quantification and model performance for discriminative
neural networks and large language models. In a case study, we show that using
atypicality improves the performance of a skin lesion classifier across
different skin tone groups without having access to the group attributes.
Overall, _we propose that models should use not only confidence but also
atypicality to improve uncertainty quantification and performance_. Our
results demonstrate that simple post-hoc atypicality estimators can provide significant value. (Our code is available at https://github.com/mertyg/beyond-confidence-atypicality.)
## 1 Introduction
_Typicality_ is an item’s resemblance to other category members [59]. For
example, while a dove and a sparrow are typical birds, a penguin is an
atypical bird. Many works from cognitive science (e.g., [58, 61, 47]) suggest
that typicality plays a crucial role in category understanding. For instance,
humans have been shown to learn, remember, and refer to typical items faster
[49]. Similarly, the representativeness heuristic is the tendency of humans to
use the typicality of an event as a basis for decisions [66]. This cognitive
bias is effective for making swift decisions, but it can lead to poor
judgments of uncertainty. For instance, the likelihood of typical events can
be overestimated [66] or uncertainty judgments can be inferior for atypical
events [67].
While it is hard to quantify the uncertainty of human judgments, machine
learning models provide confidence in their predictions. However, confidence
alone can be insufficient to understand the reliability of a prediction. For
instance, a low-confidence prediction could arise from an ambiguity that is
easily communicated, or due to the sample being underrepresented in the
training distribution. Similarly, a high-confidence prediction could be
reliable or miscalibrated. Our main proposal is that _models should quantify
not only the confidence but also the atypicality_ to understand the
reliability of predictions or the coverage of the training distribution.
However, many machine learning applications rely on pretrained models that
solely provide confidence levels, devoid of any measure of atypicality.
Contributions: To support our position, we use a simple formalization of
atypicality estimation. With the following studies, we show that by using
simple atypicality estimators, we can:
1\. Understand Prediction Quality: Calibration is a measure that assesses the
alignment between predicted probabilities of a model and the true likelihoods
of outcomes [21]. Neural networks [20] or even logistic regression [7] can be
miscalibrated out-of-the-box. Here, we argue that using atypicality can give
insights into when a model’s confidence is reliable. Through theoretical
analysis and extensive experimentation, we demonstrate that atypicality
results in lower-quality predictions. Specifically, _we show that predictions
for atypical inputs and samples from atypical classes are more overconfident
and have lower accuracy._
2\. Improve Calibration and Accuracy: _Recalibration_ methods offer some
mitigation to miscalibration [20] by adjusting a probabilistic model. We show
that models need different adjustments according to the atypicality of inputs
and classes, and atypicality is a key factor in recalibration. In light of
these findings, we propose a simple method: _Atypicality-Aware Recalibration_.
Our recalibration algorithm takes into account the atypicality of the inputs
and classes and is simple to implement. We show that complementing
recalibration methods with atypicality improves uncertainty quantification and
the accuracy of predictors. Further, in a case study for skin lesion
classification, we show that atypicality awareness can improve performance
across different skin-tone subgroups without access to group annotations.
3\. Improve Prediction sets: An alternative approach to quantify uncertainty
is to provide prediction sets that contain the label with high probability
[2]. Here, we investigate existing methods with atypicality and show that
prediction sets could underperform for atypical or low-confidence samples. By
using atypicality, we demonstrate the potential for improving prediction sets.
Overall, we propose that models should also consider atypicality, and we show
simple- and easy-to-implement atypicality estimators can provide significant
value.
Figure 1: Atypicality in Uncertainty. Left: We show examples from the
ImageNet-R dataset with our atypicality framework. Right: We provide a
conceptualization of the quadrants. Using atypicality, we can understand
prediction quality (§3), improve predictions (§4), and prediction sets (§5).
## 2 Interpreting Uncertainty with Atypicality
Motivation: In many machine learning applications, we have access to a model’s
confidence, which aims to quantify the likelihood that a prediction will be
accurate. In classification, model output is a probability distribution over
classes and confidence is the predicted probability of the top class, i.e.
$\max_{y}\hat{\mathbb{P}}(Y=y|X=x)$. In practical scenarios, confidence is
the primary tool used to evaluate the reliability of a prediction where higher
confidence is associated with better predictions. However, the uncertainty in
confidence can stem from different sources that require different treatment
[46].
Here, we call a prediction _reliable_ if it is high-confidence and well-
calibrated. High confidence could be reliable or miscalibrated, and low
confidence could be due to ambiguity or rare inputs. We propose that
_atypicality_ provides a natural way to understand reliability when combined
with confidence. A sample is called typical if it is well-represented in the
previously observed samples, e.g., an image of a dog that is similar to other
dogs in the training data. However, if the image is unlike any other seen
during training, it is atypical. We argue that atypicality can help us
interpret a prediction’s reliability. Below we categorize samples and
predictions according to atypicality and confidence in four quadrants (Figure
1).
High-confidence and representative: Reliable predictions often fall within the
Reliable Quadrant, which includes _typical, high-confidence_ samples. These
samples are well-represented in the training dataset (typical), thus we expect
the high-confidence prediction to be reliable. For instance, the first image
on the top left (Figure 1) is a typical golden retriever and the model makes a
reliable prediction.
High-confidence yet far from the support: Having high-confidence does not
always indicate reliability. If the sample does not have support in the
training distribution, the confidence could be miscalibrated. Such samples lie
in the Extrapolation Quadrant which contains _atypical, high-confidence_
samples. For instance, the second image in the top right of Figure 1 is a
_toy_ hog and the model has not seen similar ones during training.
Low confidence due to ambiguity: In contrast, low confidence could also be
reliable when it correctly reflects an ambiguity. Such samples are in the
Ambiguous Quadrant that contains _typical, low-confidence_ samples. These are
typical since they may represent multiple classes; yet, due to ambiguity, the
model’s confidence is low. For instance, the second image in the bottom left
of Figure 1 can both be a hog and a comic book.
Low confidence and rare: For samples that are not well-represented in training
data, we expect to have low-quality predictions. Untrustworthy Quadrant
comprises _atypical, low-confidence_ samples that can include extremely rare
subgroups, for which we expect miscalibration and lower accuracy. For example,
the image in Figure 1 bottom right is an origami hog that was not seen in
training.
These examples suggest that relying solely on confidence does not provide a
complete understanding of the reliability of the predictions, and we can use
atypicality to interpret and improve reliability.
Formalizing Atypicality: Atypicality here is defined with respect to the
training distribution. Informally, an input or a class is atypical if it is
not _well-represented_ in the training distribution. For instance, if there
are no or limited similar examples to an input, it can be called atypical.
Note that this notion is not restricted to being ‘out-of-distribution’ [24],
since in-distribution groups could also be atypical or rare, and our goal is
to perform reliably for the entire spectrum.
Formally, let $X\in\mathbb{R}^{d}$ be the random variable denoting features
and $Y\in\mathcal{Y}=\{1,2,\ldots,C\}$ denote the class, where we focus on
classification.
###### Definition 2.1 (Input Atypicality).
We define the atypicality of the input $x$ as (note that atypicality here differs from ‘typical sets’ in information theory, which refer to sequences of variables [65]):
$a_{X}(x)=-\max_{y}\log\mathbb{P}(X=x|Y=y).$
We use the logarithm of the class-conditional densities due to high
dimensionality and density values being close to zero. Intuitively, for a dog
image $x$, if $\mathbb{P}(X=x|Y=\textrm{dog})$ has a low value, we call $x$ an
atypical dog image. Overall, if $a(x)$ is high, then we call $x$ an atypical
input. Specifically, if an input is not typical for any class, then it is
atypical with respect to the training distribution. Similarly, we can also use
marginal density, $\mathbb{P}(X=x)$, or distance333For an input $x$, if the
nearest neighbor (NN) distance is large, then we call $x$ atypical as all
inputs in the training set are far from $x$. Density and distance are
connected through non-parametric density estimation and [29] shows that NN
distance can recover high-density regions. to quantify atypicality.
Similarly, the notion of atypical (rare) classes is prevalent in imbalanced
classification [13, 71]. Ensuring reliable performance for atypical classes
can be safety-critical, e.g., for a rare presence of dangerous melanoma [16].
We define class atypicality in the following:
###### Definition 2.2 (Class Atypicality).
For a class $y$, atypicality of a class is defined as
$a_{Y}(y)=-\log\mathbb{P}(Y=y).$
Estimating Atypicality for Discriminative Models: Quantifying input
atypicality requires access to the class-conditional / marginal distributions.
In practice, for neural networks trained for classification, these
distributions are unavailable and we need to perform the estimation. This
estimation can be challenging if the dimensionality is large, or the data is
unstructured, requiring assumptions about the distributions. Prior works [46,
36] showed that Gaussian Mixture Models (GMMs) in the embedding space of
neural networks can be used to model these distributions.
In experiments, we use Gaussians with shared covariance, i.e.
$\hat{\mathbb{P}}(X=x|Y=c)\sim N(\hat{\mu}_{c},\hat{\Sigma})$, to estimate
input atypicality. We perform the estimation in the penultimate layer of
neural networks used to make predictions, using maximum-likelihood estimation
with samples from the training data. We explore other metrics, such as
$k$-Nearest Neighbors distance. We give implementation details and results
with different metrics in Appendix A.1. With these estimators, atypicality
estimation is cheap and can run on a CPU. Our goal is to show that simple
estimators can already reap large benefits. Our framework is flexible and
exploring more sophisticated estimators is a topic for future work.
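As a concrete illustration, the following is a minimal sketch of the shared-covariance Gaussian estimator on precomputed penultimate-layer embeddings; the array names `Z` and `y` are illustrative, and this is not the exact implementation in our repository.

```python
# Fit one mean per class and a pooled covariance on embeddings Z with labels y,
# then score a_X(x) = -max_y log N(x; mu_y, Sigma).
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(Z, y, n_classes):
    mus = np.stack([Z[y == c].mean(axis=0) for c in range(n_classes)])
    centered = Z - mus[y]                    # pooled within-class scatter
    Sigma = centered.T @ centered / len(Z)
    return mus, Sigma

def input_atypicality(z, mus, Sigma):
    logps = [multivariate_normal.logpdf(z, mean=mu, cov=Sigma) for mu in mus]
    return -max(logps)
```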
Atypicality for LLMs: LLMs are increasingly used for classification [6].
Modern LLMs are autoregressive models that compute a marginal distribution,
$\hat{\mathbb{P}}_{\textrm{LLM}}(X)$. We compute the negative log-likelihood
of a prompt or a label and use this as an atypicality metric, i.e.
$a_{X}(x)=-\log\hat{\mathbb{P}}_{\textrm{LLM}}(x)$,
$a_{Y}(y)=-\log\hat{\mathbb{P}}_{\textrm{LLM}}(y)$. Similar to the
discriminative setting, atypicality here aims to quantify whether a prompt is
well-represented in the training data. A larger value for $a(X)$ would imply
that a prompt is more atypical. Below, we present typical and atypical prompts
for the AGNews dataset:
Atypical prompt (atypicality: $353.45$; percentile: $94.5\%$): “Classify the news articles into the categories of World, Sports, Business, and Technology. Article: Safin tallest obstacle to host's patriotic games hope AS tennis fans go, Houston's Jim 'Mattress Mack' McIngvale is very rich, extremely forthright, exceedingly patriotic and unflinchingly Republican. Answer:”

Typical prompt (atypicality: $171.50$; percentile: $0.9\%$): “Classify the news articles into the categories of World, Sports, Business, and Technology. Article: Delta Air Lines Prepares Chapter 11 Filing Delta Air Lines Inc. could file for Chapter 11 bankruptcy protection as soon as next week, a source familiar with the matter said yesterday. Answer:”
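The sketch below computes this quantity with a Hugging Face causal language model; GPT-2 stands in for Alpaca7B purely for illustration.

```python
# Prompt atypicality as the total negative log-likelihood under an
# autoregressive LM; the model name is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def prompt_atypicality(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, HF returns the mean token NLL over the
        # seq_len - 1 predicted positions; rescale to a summed NLL.
        mean_nll = lm(ids, labels=ids).loss
    return float(mean_nll * (ids.shape[1] - 1))  # a_X(x) = -log P_LM(x)
```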
## 3 Understanding the Prediction Quality with Atypicality
In this section, we show how our framework can be applied to understand the
quality of predictions. Experimental Setup: We investigate three
classification settings across a range of datasets:
1. Balanced Supervised Classification: We use ResNet18/50/152 [28], WideResNet28 [73], and RoBERTa [39] trained on ImageNet [15], CIFAR10/100 [32], and MNLI [70], respectively.
2. Imbalanced Supervised Classification: We use ResNet18, ResNext50, and ResNet152 trained on CIFAR-LT, ImageNet-LT, and Places365-LT, where models and data are mostly from [71, 45]. Note that all test and validation sets have balanced class distributions.
3. Classification with LLMs: We use the open-source Alpaca7B [64] on the IMDB [43], TREC [40], and AG News [76] datasets with the prompts from [75].
Figure 2: Atypical Samples Have Low-Quality Predictions. (a) Here, samples are
grouped according to Input Atypicality (x-axis) and Confidence (y-axis); further right means more atypical. Values show the difference between
confidence and accuracy, lighter color indicates more overconfidence. Within
the same confidence range, atypical groups have more miscalibration and are
more overconfident. (b,c,d) Predictions for atypical samples are less accurate
and more miscalibrated in balanced and imbalanced supervised classification
and classification with LLMs.
Details on datasets, models, and prompts are in Appendix B. Our experiments
were run on a single NVIDIA A100-80GB GPU. We report error bars over 10 random
calibration/test splits.
### 3.1 Atypicality is Correlated with Miscalibration
We first explore the importance of atypicality to understand model
calibration. Calibration quantifies the quality of a probabilistic model [21].
Informally, a model is considered perfectly calibrated if all events that are
predicted to occur $P\%$ of the time occur $P\%$ of the time for any
$P\in[0,100]$.
For the sake of simplicity, consider a binary classification problem where the
predictor is $\hat{\mathbb{P}}:\mathcal{X}\to[0,1]$. We quantify
miscalibration with Calibration Error (CE):
$\textrm{CE}[\hat{\mathbb{P}}]=\mathbb{E}[|\mathbb{P}(Y|\hat{\mathbb{P}}(X)=p)-p|].$
It is computationally infeasible to calculate the above expectation with the
conditional probability $\mathbb{P}(Y|\hat{\mathbb{P}}(X)=p)$. In practice, we
use a binned version of this quantity, Expected Calibration Error (ECE) [50,
20], to estimate CE. See Appendix C.1 for a formal definition.
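For reference, a minimal equal-width binned ECE estimator looks as follows (a simplified version of the definition in Appendix C.1; array names are illustrative):

```python
# Binned ECE: confidences and correct are 1-D numpy arrays (correct is 0/1).
import numpy as np

def ece(confidences, correct, n_bins=15):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():  # weight each bin by its share of the samples
            total += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return total
```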
Here, we aim to examine the relationship between model calibration and
atypicality. Given any $K>1$, we consider the quantiles of $a(X)$,
$a_{1},a_{2},\ldots,a_{K+1}$ such that
$\mathbb{P}(a(X)\in(a_{k},a_{k+1}])=1/K$ for $k\in[K]$. For imbalanced
classification problems, we compute the quantiles using the class atypicality.
Specifically, we investigate the atypicality-conditional calibration error
$\textrm{ECE}[\hat{\mathbb{P}}\mid a(X)\in(a_{k},a_{k+1}]]$, i.e., the
expected calibration error of an input that falls within the atypicality
quantile $k$.
Atypical Examples are Poorly Calibrated: In Figure 2a, we show the
distribution of miscalibration where each bin within the grid contains the
intersection of the corresponding confidence and atypicality quantiles. We
observe that within the same confidence range, predictions for atypical points
have lower accuracies and are more overconfident. In other words, predictions
in the Extrapolation or Untrustworthy regions are more miscalibrated than
the ones in the typical regions.
In Figure 2b, we split inputs into quantiles according to atypicality and
compute the ECE and Accuracy for each group. Results show a monotonic
relationship between atypicality and ECE or Accuracy across the three
settings. Specifically, we see that predictions for atypical inputs or samples
from rare classes are more miscalibrated and have lower accuracy. For samples
from rare classes, the model overpredicts the probabilities of the typical
class, hence we have overconfidence and low accuracy. Appendix C.3, and §4
present figures and tables for all model and dataset pairs.
### 3.2 Theoretical Analysis: Characterizing Calibration Error with
Atypicality
We characterize how calibration error varies with atypicality in a tractable
model that is commonly used in machine learning theory [7, 8, 72, 12]. Our
theoretical analysis further supports our empirical findings.
Data Generative Model: We consider the well-specified logistic model for
binary classification with Gaussian data, where $Y\in\{-1,1\}$ and the
$\mathbb{P}(Y=1|X)$ is defined by the sigmoid function:
$\mathbb{P}(Y=1\mid X)=\sigma(\langle\beta^{*},X\rangle),\quad X\sim
N(0,I_{d}).$
Where $I_{d}$ denotes the $d$-dimensional identity matrix, $\beta^{*}$ is the
ground truth coefficient vector, $\sigma(x)=1/(1+e^{-x})$, and we have
i.i.d. observations $\{(x_{i},y_{i})\}_{i=1}^{n}$ sampled from the above
distribution.
The Estimator: We focus on studying the solution produced by minimizing the
logistic loss
$\hat{\beta}=\arg\min_{\beta}\frac{1}{n}\sum_{i=1}^{n}[\log(1+\exp(\beta^{\top}x_{i}))-y_{i}\cdot\beta^{\top}x_{i}].$
For $k\in\{-1,1\}$, $\hat{\mathbb{P}}_{k}(x)$ is an estimator of
$\mathbb{P}(y=k|x)$, with the form
$\hat{\mathbb{P}}_{k}(x)=\frac{1}{e^{-k\cdot\hat{\beta}^{\top}x}+1}$.
Calibration: We consider all $x$ where $\mathbb{P}_{1}(x)>1/2$, as
$\mathbb{P}_{1}(x)\leq 1/2$ can be analyzed similarly by symmetry (see
Appendix G). For $u\in(1/2,1)$, the signed calibration error at a confidence
level $u$ is
$u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u).$
We want to show that when $X$ is atypical, i.e., when $a(X):=\|X\|^{2}/2$ is
larger (this definition of atypicality follows from the marginal likelihood of the data model: the density of a Gaussian with zero mean and identity covariance), the accuracy $\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)$ would generally be smaller than the confidence $u$ (over-confidence).
###### Theorem 3.1.
Consider the data generative model and the learning setting above. For any
$K>1$, suppose we consider the quantiles of $a(X)$,
$a_{1},a_{2},...,a_{K},a_{K+1}$ such that
$\mathbb{P}(a(X)\in(a_{k},a_{k+1}])=1/K$ for $k\in[K]$. We assume
$\|\beta^{*}\|\leq c_{0}$, and $d/n=\kappa$, for some sufficiently small
$c_{0}$. Then, for sufficiently large $n$, for $k=2,\ldots,K$, we have
$\displaystyle\mathbb{E}_{u\sim\hat{\mathbb{P}}_{1}(X)}[u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\mid
a(X)\in(a_{k},a_{k+1}]]>$
$\displaystyle\mathbb{E}_{u\sim\hat{\mathbb{P}}_{1}(X)}[u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\mid
a(X)\in(a_{k-1},a_{k}]]\geq 0.$
That is, the resulting classifier is over-confident, and the level of over-
confidence becomes larger when the data is more atypical (with larger $a(X)$).
Further, the gap becomes larger for smaller sample sizes $n$. The proof of the
theorem is in Appendix G.2 and builds on the results from [7, 62].
## 4 Using Atypicality to Improve Recalibration
Here, we show how atypicality can complement and improve post-hoc calibration.
In §2, we observed that predictions for atypical inputs and samples from
atypical classes are more overconfident with lower accuracy. We next show that
taking input and class atypicality into account improves calibration.
### 4.1 Parametric Recalibration: Different Groups Need Different
Temperatures
Temperature scaling (TS), a single parameter variant of Platt Scaling [52], is
a simple recalibration method that calibrates the model using a single
parameter. The predictor is of the form
$\log\hat{\mathbb{P}}_{\textrm{TS}}(Y|X)\propto\log\hat{\mathbb{P}}(Y|X)/\tau,$
(1)
where $\hat{\mathbb{P}}(Y|X)$ is the model that takes an input and outputs
scores/logits, and $\tau$ is the temperature parameter. In practice, $\tau$ is
optimized using a calibration set to minimize a proper scoring rule [21, 9]
such as the cross-entropy loss.
To understand the behavior of TS with respect to atypicality, we separately
perform TS on points grouped according to the atypicality quantiles. Let us
denote the temperature fitted to the quantile covering
$a(X)\in(a_{k-1},a_{k}]$ by $\tau_{a_{k}}$. In Appendix Figure 10 we observe
an increasing relationship between $a_{k}$ and $\tau_{a_{k}}$. Different
atypicality groups need different adjustments, and more atypical groups need
larger temperatures. _This suggests that being atypicality-aware can improve
calibration. While a single temperature value improves average calibration, it
may hurt certain groups._
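A sketch of this per-quantile temperature fitting is given below, assuming precomputed calibration logits, labels, and atypicality scores (illustrative names):

```python
# Fit one temperature per atypicality quantile by minimizing NLL on a
# calibration split; logits, labels, atyp are numpy arrays.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def nll(tau, logits, labels):
    z = logits / tau
    return (logsumexp(z, axis=1) - z[np.arange(len(labels)), labels]).mean()

def fit_quantile_temperatures(logits, labels, atyp, K=5):
    edges = np.quantile(atyp, np.linspace(0, 1, K + 1))
    taus = []
    for k in range(K):
        m = (atyp > edges[k]) & (atyp <= edges[k + 1]) if k else atyp <= edges[1]
        res = minimize_scalar(nll, bounds=(0.05, 20.0),
                              args=(logits[m], labels[m]), method="bounded")
        taus.append(res.x)
    return edges, taus
```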
Figure 3: Post-hoc Recalibration for Classification. (a) Balanced Supervised
Classification: Atypicality-Aware Recalibration improves the calibration of
models trained with balanced datasets, across atypicality groups. (b)
Imbalanced Supervised Classification: Atypicality-Aware Recalibration improves
both the calibration across groups and the overall accuracy of models trained
with imbalanced datasets. (c) Classification with LLMs: Atypicality-Aware
Recalibration improves both the calibration across groups and the overall
accuracy of LLMs performing classification.
### 4.2 Atypicality-Aware Recalibration
We showed that predictions are more reliable when the input is typical.
However, predictions are less reliable for atypical inputs, and we may need
further revision. An analogy can be drawn to decision-making literature where
opinions of individuals are combined with geometric averaging weighted by
their expertise [17, 3]. Analogously, we propose _Atypicality-Aware
Recalibration (AAR)_ a method designed to address the reliability issues
identified in dealing with atypical inputs:
$\hat{\mathbb{P}}_{\textrm{AAR}}(Y|X)=\frac{\hat{\mathbb{P}}(Y|X)^{\psi(a(X))}\exp({S_{Y}})^{1-\psi(a(X))}}{Z(X)},$
(2)
where $\psi(a(X))$ is a function of input atypicality, $S_{Y}$ is a tunable
score for class $Y$, $Z(X)$ is the normalization term. Intuitively, when the
input is typical, we trust the model confidence; otherwise, we use a score for
the given class estimated from the calibration set. Note that this form
simplifies to
$\log{\hat{\mathbb{P}}_{\textrm{AAR}}(Y|X)}\propto{\phi(a(X))}\log{\hat{\mathbb{P}}(Y|X)}+S_{Y},$
(3)
where we subsume $(1-\psi(a(X)))$ into $\phi(a(X))$. We give a simple
interpretation of this form: the multiplicative term is an atypicality-
dependent temperature, and the additive term is a class-dependent correction
where $\exp{(S_{Y})}$ can be considered to induce a correction distribution
over classes estimated from the calibration set.
Intuitively, when $\psi(a(X))=0$, the output reduces to a fixed distribution
over classes that was estimated using the calibration set. This distribution
can be seen to induce a prior probability over classes, and $\psi$ controls
the tradeoff between this prior and the model’s predictive distribution. As
the point becomes more typical, this distribution is closer to the model’s
predictive distribution. In Appendix Figure 11, we show how these values
behave with class atypicality. We find that rare classes require larger
positive corrections with larger $S_{Y}$.
Implementation Details: Following TS, we minimize the cross-entropy loss on a
calibration set. With the temperature-atypicality relationship observed in
Figure 10, we choose to instantiate the multiplicative factor as a quadratic function, where $\phi(a(X))=c_{2}a(X)^{2}+c_{1}a(X)+c_{0}$; in total we have $|\{S_{1},\ldots,S_{|\mathcal{Y}|},c_{0},c_{1},c_{2}\}|=|\mathcal{Y}|+3$
interpretable parameters. Once the embeddings and logits are computed, AAR
runs on a CPU in under 1 minute for all experimented settings.
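A minimal PyTorch sketch of fitting these $|\mathcal{Y}|+3$ parameters by minimizing cross-entropy on the calibration set follows; initializing $\phi$ near the identity is our illustrative choice, not a detail from the method itself.

```python
# Atypicality-Aware Recalibration (Eq. 3): logits = phi(a(x)) * log p(y|x) + S_y.
import torch

def fit_aar(log_probs, atyp, labels, n_classes, steps=500, lr=0.05):
    # log_probs: (n, C) model log-probabilities; atyp: (n,); labels: (n,) ints.
    c = torch.tensor([0.0, 0.0, 1.0], requires_grad=True)  # c2, c1, c0
    S = torch.zeros(n_classes, requires_grad=True)          # per-class scores S_y
    opt = torch.optim.Adam([c, S], lr=lr)
    for _ in range(steps):
        phi = c[0] * atyp**2 + c[1] * atyp + c[2]
        logits = phi.unsqueeze(1) * log_probs + S
        # cross_entropy normalizes the logits, supplying the Z(X) term
        loss = torch.nn.functional.cross_entropy(logits, labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return c.detach(), S.detach()
```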
Similar to our adaptive interpretation, a concurrent work, Adaptive
Temperature Scaling (AdaTS) [30], uses temperature scaling where the
temperature is parameterized by a Variational Autoencoder (VAE) [34] and a multi-layer perceptron on top of the VAE embeddings. In the experiments below,
we give results with AdaTS as a baseline when applicable.
For Balanced Supervised Classification, in Figure 3a, we observe that being
atypicality aware improves recalibration across all groups. We perform
comparably to AdaTS, where the temperature function has in the order of
millions of parameters, whereas AAR has only $|\mathcal{Y}|+3$ parameters.
In Imbalanced Supervised Classification (Figure 3b), our algorithm not only
provides better calibration rates across all classes but also improves overall
accuracy. Note that only our method can change accuracy (due to the additive
term), and it performs better than other baselines in terms of ECE across all
classes. Further, the second column shows results when Progressive Balancing [75] is used during training, demonstrating that our post-hoc method can complement methods that modify training procedures.
For Classification with LLMs, we add an LLM calibration baseline Content-Free
Calibration (CF) [75]. We cannot use AdaTS as the embeddings are not fixed in
size. In Figure 3c, we see AAR has better calibration and accuracy across the
three datasets. Namely, by adjusting the LLM output using the LLM atypicality,
we can adjust the probabilities to increase the prediction quality.
### 4.3 Case Study: Fairness through Atypicality-Awareness
Machine learning models reportedly have performance disparity across subgroups
[5] due to factors such as varying sample size or noise levels [11]. For
instance, skin lesion classifiers can exhibit performance disparity across
different skin tones [16]. Fitzpatrick17k [18] is a dataset of clinical images
with Fitzpatrick skin tone annotations between 1-to-6, where a larger number
means darker skin tones, and when annotators do not agree, it is labeled as
‘Unknown’. We explore the classification problem with 9 classes indicating the
malignancy and the type of skin condition, using a ResNet18/34 pretrained on
ImageNet and finetuned on this task (See Appendix F).
When the goal is to improve performance across groups, one can use group
annotations and optimize performance within each group [26, 31]. Here, we
investigate how complementing recalibration with atypicality can improve
prediction quality across all groups _without group annotations_. For
comparison, we perform 3 recalibration methods: TS, AAR, and Skin-Tone
Conditional TS which calibrates the model individually for each skin-tone
group with TS. Since the skin-tone conditional calibration uses group
attributes, ideally it should act as an oracle.
In Figure 4, we give the Accuracy and ECE analyses where AAR improves
performance across all groups. For instance, the worst-group Accuracy (0.69)
or ECE (0.072) with AAR is close to the best-group Accuracy (0.63) or ECE
(0.062) with the other two methods. Overall, _our findings suggest that
Atypicality-Awareness can complement fairness-enforcing methods, and improve
performance even when the group annotations are unavailable._ We hypothesize
that with AAR, we can perform better than using supervised group attributes
since groups may not have sufficient sample size in the calibration set (131,
1950, 1509, 555 samples for Unknown, 1&2, 3&4, and 5&6 respectively), and we
can leverage atypicality to offer some mitigation. Further investigating how
to leverage atypicality to improve fairness and factors affecting performance
disparities is a promising direction for future work [11].
Figure 4: Improving Group Performance through Atypicality-Awareness. Here we
show that AAR improves the calibration and accuracy of models across different
skin tone groups. With AAR, we can improve both the worst group performance
and overall performance significantly without using group attributes. TS curve
is less visible since it significantly overlaps with Skin Tone Conditional.
## 5 Improving Prediction Sets with Atypicality
Conformal Prediction [63, 1] is a framework that assigns a calibrated
prediction set to each instance. The goal is to find a function
$\mathcal{C}:\mathcal{X}\rightarrow 2^{\mathcal{Y}}$ that returns a subset of
the label space such that $Y\in\mathcal{C}(X)$ with high probability. The
framework aims to guarantee _marginal coverage_ , i.e.,
$\mathbb{P}(Y\in\mathcal{C}(X))\geq 1-\alpha$, for a choice of $\alpha$. We
investigate two conformal calibration methods, Adaptive Prediction Sets (APS)
[60] and Regularized APS (RAPS) [2]. Let $\pi(X)$ be the permutation of the
label set that sorts $\hat{\mathbb{P}}(Y=c|X)$, i.e. the predicted
probabilities for each class $c$ after TS. The prediction sets are produced by
the function $\mathcal{C}(x)=\\{y:s(x,y)\leq\hat{q}\\}$, and these methods fit
the threshold $\hat{q}$ for a choice of the scoring function. APS uses the
cumulative sum of the predicted probabilities
$s(x,y)=\sum_{j=1}^{c}\hat{\mathbb{P}}(Y=j|X)$, where $y=\pi_{c}(X)$.
Intuitively, if the model was perfectly calibrated, we would have expected to
have $\hat{q}=1-\alpha$. Similarly, RAPS builds on the idea that tail
probabilities are noisy and regularizes the number of samples in the
prediction set.
Building on our ideas in the previous sections, we implement Atypicality-Aware
prediction sets, namely _AA-APS_ and _AA-RAPS_, in the following way: We first
group points according to their confidence and atypicality quantiles. A group
$G$ here is defined using 4 thresholds, namely
$G=\{x:(l_{a}^{(G)}<q_{a}(x)\leq h_{a}^{(G)})\land(l_{c}^{(G)}<q_{c}(x)\leq h_{c}^{(G)})\}$,
where $q_{a}(x)$ and $q_{c}(x)$ denote the atypicality and confidence
quantiles of the sample $x$, $l_{a}^{(G)}$ and $h_{a}^{(G)}$ denote the
atypicality lower and upper bounds for group $G$, and $l_{c}^{(G)}$ and
$h_{c}^{(G)}$ denote the confidence lower and upper bounds for group $G$.
Using a calibration set, these bounds are simply determined by the quantiles
of the confidence and atypicality statistics.
Then, we fit separate thresholds $\hat{q}_{G}$ for each group’s prediction
sets with APS or RAPS as subroutines. This allows us to have an adaptive
threshold depending only on the atypicality and confidence of predictions.
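For concreteness, below is a minimal sketch of the grouping and per-group
threshold fitting, assuming the conformal scores $s(x_{i},y_{i})$ have already
been computed on the calibration set with APS or RAPS as a subroutine; the
function names, the number of bins, and NumPy's default quantile interpolation
are illustrative choices rather than the exact implementation.

```python
import numpy as np

def quantile_bins(values, edges):
    # Map each value to a bin index in {0, ..., n_bins - 1}.
    return np.digitize(values, edges[1:-1])

def fit_group_thresholds(conf, atyp, scores, alpha=0.05, n_bins=3):
    """Fit one conformal threshold per (confidence, atypicality) group.

    conf, atyp: confidence and atypicality on the calibration set.
    scores: conformal scores s(x_i, y_i) on the calibration set,
            computed with APS or RAPS as a subroutine.
    """
    conf_edges = np.quantile(conf, np.linspace(0, 1, n_bins + 1))
    atyp_edges = np.quantile(atyp, np.linspace(0, 1, n_bins + 1))
    ci, ai = quantile_bins(conf, conf_edges), quantile_bins(atyp, atyp_edges)
    qhat = np.ones((n_bins, n_bins))  # conservative full-coverage default
    for i in range(n_bins):
        for j in range(n_bins):
            s = scores[(ci == i) & (ai == j)]
            if len(s) == 0:
                continue  # sparse cells keep the conservative default
            # Conformal quantile with the finite-sample correction.
            level = min(1.0, np.ceil((len(s) + 1) * (1 - alpha)) / len(s))
            qhat[i, j] = np.quantile(s, level)
    return conf_edges, atyp_edges, qhat
```

At test time, a sample is routed to its (confidence, atypicality) cell, and
its prediction set is built with that cell's threshold $\hat{q}_{G}$.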
Figure 5: Improving Conformal Calibration with Atypicality for ResNet50 on
ImageNet. Here we show that atypicality awareness improves conformal
calibration performance across different groups. Methods are fitted to satisfy
$95\%$ coverage. We observe that APS and RAPS do not satisfy conditional
coverage for high atypicality regions or low confidence regions.
In Figure 5, we provide the coverage plots for APS and RAPS in the first and
third columns. Even though marginal coverage is satisfied, models do not
satisfy conditional coverage for atypical inputs or low-confidence
predictions. We observe that being Atypicality-Aware improves coverage across
otherwise underperforming groups. Further, AA-APS has a lower average set size
than APS ($15.6$ vs. $21.3$). While RAPS has a lower average set size than
AA-RAPS ($4.2$ vs. $9.1$), AA-RAPS produces smaller sets for high-confidence
samples and larger sets for low-confidence samples, where RAPS does not meet
coverage. In Appendix D.3, we provide the same
analysis for ResNet18,50,152 at different coverage levels along with analyzing
the performance in the Confidence and Atypicality dimensions individually. For
instance in Figure 8, we observe that RAPS and APS do not satisfy coverage for
high atypicality regions, even when averaged across different confidence
levels.
## 6 Additional Related Work
Uncertainty and Atypicality: [46, 53] use density estimation to disentangle
epistemic and aleatoric uncertainty. Following this, they show improvements in
active learning and OOD detection [36]. We note that our goal is not this
disentanglement: e.g., the Untrustworthy quadrant can involve either aleatoric
or epistemic uncertainty, and Ambiguity could stem from a lack of features or
from noise. [37] propose the related notion of distance awareness and show
that it leads to better uncertainty quantification. They offer architecture
and training modifications, whereas we analyze existing models using our
framework, including imbalanced and LLM settings, and propose simple,
post-hoc approaches. ‘OOD’ [25] or ‘anomaly’ [27] notions are tied to atypicality, yet
our goal is not to make a binary distinction between ‘in’ or ‘out’. We argue
that in-distribution samples could also be atypical (e.g. rare groups), and
the goal is to perform reliably across the entire spectrum. Other works with
an atypicality notion include bounding the calibration of groups by the excess
risk [41], miscalibration under distribution shifts [51], uncertainty in
Gaussian Processes [56], forgetting time for rare examples [44], the poor
performance of groups with lower sample sizes [11], energy-based models
improving calibration [22], relating perplexity to zero-shot classification
performance for LLMs [19], grouping loss and local definitions of
miscalibration [55], and the relationship between active learning and
atypicality [23]. [54] provide insightful
discussion around the nature of softmax confidence, and here we show that its
reliability depends on the atypicality of the input. Our new findings include
showing that predictions for atypical samples are more miscalibrated and
overconfident, and atypicality awareness improves prediction quality.
_Overall, while there are other relevant notions in the literature, our
distinct goal is to show that post-hoc atypicality estimation and
recalibration is a simple yet useful framework to understand and improve
uncertainty quantification that complements existing methods._
Recalibration and Conformal Prediction: There is a rich literature on
recalibration methods and prediction sets: TS [20], Platt Scaling [52], and
conformal calibration [63, 2], among others. [35, 57, 10, 4] make a relevant
observation, showing that the coverage of conformal prediction is not equal
across all groups. They propose group conformal calibration, which requires
group labels, whereas our proposal is unsupervised and does not depend on any
attribute information. Concurrent work [30] explores AdaTS, which trains a
separate VAE and MLP to produce an adaptive temperature. In contrast, our
parameterization of the temperature has only 3 parameters and is interpretable.
## 7 Conclusion
Atypicality offers a simple yet flexible framework to better understand and
improve model reliability and uncertainty. We propose that pretrained models
should be released not only with confidence but also with an atypicality
estimator. While there are other relevant notions in the literature, our main
goal is to show that atypicality can provide a unifying perspective to discuss
uncertainty, understand individual data points, and improve fairness. Here we
focus on classification problems; it would be interesting to extend
atypicality to regression and generation settings. Furthermore, we would like
to extend the theoretical analysis to more general settings, as our empirical
results demonstrate that the observed phenomena hold more broadly.
## Acknowledgments
We would like to thank Adarsh Jeewajee, Bryan He, Edward Chen, Federico
Bianchi, Kyle Swanson, Natalie Dullerud, Ransalu Senanayake, Sabri Eyuboglu,
Shirley Wu, Weixin Liang, Xuechen Li, Yongchan Kwon, Yu Sun, and Zach Izzo for
their comments and suggestions on the manuscript. Linjun Zhang’s research is
partially supported by NSF DMS-2015378. Carlos Ernesto Guestrin is a Chan
Zuckerberg Biohub – San Francisco Investigator.
## References
* AB [21] Anastasios N Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511, 2021.
* ABMJ [20] Anastasios Angelopoulos, Stephen Bates, Jitendra Malik, and Michael I Jordan. Uncertainty sets for image classifiers using conformal prediction. arXiv preprint arXiv:2009.14193, 2020.
* AR [89] János Aczél and Fred S Roberts. On the possible merging functions. Mathematical Social Sciences, 17(3):205–243, 1989.
* BGJ+ [22] Osbert Bastani, Varun Gupta, Christopher Jung, Georgy Noarov, Ramya Ramalingam, and Aaron Roth. Practical adversarial multivalid conformal prediction. arXiv preprint arXiv:2206.01067, 2022.
* BHN [17] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning. Nips tutorial, 1:2, 2017.
* BMR+ [20] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
* [7] Yu Bai, Song Mei, Huan Wang, and Caiming Xiong. Don’t just blame over-parametrization for over-confidence: Theoretical analysis of calibration in binary classification. In International Conference on Machine Learning, pages 566–576. PMLR, 2021.
* [8] Yu Bai, Song Mei, Huan Wang, and Caiming Xiong. Understanding the under-coverage bias in uncertainty estimation. Advances in Neural Information Processing Systems, 34:18307–18319, 2021.
* BW [19] David Bolin and Jonas Wallin. Local scale invariance and robustness of proper scoring rules. arXiv preprint arXiv:1912.05642, 2019.
* BYR+ [21] Noam Barda, Gal Yona, Guy N Rothblum, Philip Greenland, Morton Leibowitz, Ran Balicer, Eitan Bachmat, and Noa Dagan. Addressing bias in prediction models by improving subpopulation calibration. Journal of the American Medical Informatics Association, 28(3):549–558, 2021.
* CJS [18] Irene Chen, Fredrik D Johansson, and David Sontag. Why is my classifier discriminatory? Advances in neural information processing systems, 31, 2018.
* CLKZ [22] Lucas Clarté, Bruno Loureiro, Florent Krzakala, and Lenka Zdeborová. Theoretical characterization of uncertainty in high-dimensional linear classification. arXiv preprint arXiv:2202.03295, 2022.
* CWG+ [19] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. Advances in neural information processing systems, 32, 2019.
* DCLT [19] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv, abs/1810.04805, 2019.
* DDS+ [09] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* DVN+ [22] Roxana Daneshjou, Kailas Vodrahalli, Roberto A Novoa, Melissa Jenkins, Weixin Liang, Veronica Rotemberg, Justin Ko, Susan M Swetter, Elizabeth E Bailey, Olivier Gevaert, et al. Disparities in dermatology ai performance on a diverse, curated clinical image set. Science advances, 8(31):eabq6147, 2022.
* FP [98] Ernest Forman and Kirti Peniwati. Aggregating individual judgments and priorities with the analytic hierarchy process. European journal of operational research, 108(1):165–169, 1998.
* GHS+ [21] Matthew Groh, Caleb Harris, Luis Soenksen, Felix Lau, Rachel Han, Aerin Kim, Arash Koochek, and Omar Badri. Evaluating deep neural networks trained on clinical images in dermatology with the fitzpatrick 17k dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1820–1828, 2021.
* GIB+ [22] Hila Gonen, Srini Iyer, Terra Blevins, Noah A Smith, and Luke Zettlemoyer. Demystifying prompts in language models via perplexity estimation. arXiv preprint arXiv:2212.04037, 2022.
* GPSW [17] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International conference on machine learning, pages 1321–1330. PMLR, 2017.
* GR [07] Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association, 102(477):359–378, 2007.
* GWJ+ [20] Will Grathwohl, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your classifier is secretly an energy based model and you should treat it like one. In International Conference on Learning Representations, 2020.
* HDW [22] Guy Hacohen, Avihu Dekel, and Daphna Weinshall. Active learning on a budget: Opposite strategies suit high and low budgets. arXiv preprint arXiv:2202.02794, 2022.
* [24] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016.
* [25] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. ArXiv, abs/1610.02136, 2016.
* HJKRR [18] Ursula Hébert-Johnson, Michael Kim, Omer Reingold, and Guy Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In International Conference on Machine Learning, pages 1939–1948. PMLR, 2018.
* HMD [18] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606, 2018.
* HZRS [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* JKGG [18] Heinrich Jiang, Been Kim, Melody Guan, and Maya Gupta. To trust or not to trust a classifier. Advances in neural information processing systems, 31, 2018.
* JPL+ [23] Tom Joy, Francesco Pinto, Ser-Nam Lim, Philip HS Torr, and Puneet K Dokania. Sample-dependent adaptive temperature scaling for improved calibration. AAAI, 2023.
* KGZ [19] Michael P Kim, Amirata Ghorbani, and James Zou. Multiaccuracy: Black-box post-processing for fairness in classification. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 247–254, 2019.
  * Kri [09] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
* Kum [22] Sawan Kumar. Answer-level calibration for free-form multiple choice question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 665–679, 2022.
* KW [13] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
* LLC+ [22] Charles Lu, Andréanne Lemay, Ken Chang, Katharina Höbel, and Jayashree Kalpathy-Cramer. Fair conformal predictors for applications in medical imaging. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 12008–12016, 2022.
* LLLS [18] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. Advances in neural information processing systems, 31, 2018.
* LLP+ [20] Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax Weiss, and Balaji Lakshminarayanan. Simple and principled uncertainty estimation with deterministic deep learning via distance awareness. Advances in Neural Information Processing Systems, 33:7498–7512, 2020.
* LN [89] Dong C Liu and Jorge Nocedal. On the limited memory bfgs method for large scale optimization. Mathematical programming, 45(1):503–528, 1989.
* LOG+ [19] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
* LR [02] Xin Li and Dan Roth. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics, 2002.
* LSH [19] Lydia T Liu, Max Simchowitz, and Moritz Hardt. The implicit fairness criterion of unconstrained learning. In International Conference on Machine Learning, pages 4051–4060. PMLR, 2019.
* LVdMJ+ [21] Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics, 2021.
* MDP+ [11] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
* MGLK [22] Pratyush Maini, Saurabh Garg, Zachary C Lipton, and J Zico Kolter. Characterizing datapoints via second-split forgetting. arXiv preprint arXiv:2210.15031, 2022.
  * MKS+ [20] Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip HS Torr, and Puneet K Dokania. Calibrating deep neural networks using focal loss. 2020.
* MKvA+ [21] Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip HS Torr, and Yarin Gal. Deep deterministic uncertainty: A simple baseline. arXiv e-prints, pages arXiv–2102, 2021.
* MP [80] Carolyn B Mervis and John R Pani. Acquisition of basic object categories. Cognitive Psychology, 12(4):496–522, 1980.
* MR [10] Sébastien Marcel and Yann Rodriguez. Torchvision the machine-vision package of torch. In Proceedings of the 18th ACM international conference on Multimedia, pages 1485–1488, 2010.
* Mur [04] Gregory Murphy. The big book of concepts. MIT press, 2004.
* NCH [15] Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
* OFR+ [19] Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift. Advances in neural information processing systems, 32, 2019.
* P+ [99] John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61–74, 1999.
* PBC+ [20] Janis Postels, Hermann Blum, Cesar Cadena, Roland Siegwart, Luc Van Gool, and Federico Tombari. Quantifying aleatoric and epistemic uncertainty using density estimation in latent space. arXiv preprint arXiv:2012.03082, 2020.
* PBZ [21] Tim Pearce, Alexandra Brintrup, and Jun Zhu. Understanding softmax confidence and uncertainty. arXiv preprint arXiv:2106.04972, 2021.
* PLMV [23] Alexandre Perez-Lebel, Marine Le Morvan, and Gael Varoquaux. Beyond calibration: estimating the grouping loss of modern neural networks. In The Eleventh International Conference on Learning Representations, 2023.
* Ras [04] Carl Edward Rasmussen. Gaussian processes in machine learning. In Summer school on machine learning, pages 63–71. Springer, 2004.
* RBSC [19] Yaniv Romano, Rina Foygel Barber, Chiara Sabatti, and Emmanuel J Candès. With malice towards none: Assessing uncertainty via equalized coverage. arXiv preprint arXiv:1908.05428, 2019.
  * Rip [89] Lance J. Rips. Similarity, typicality, and categorization, pages 21–59. Cambridge University Press, 1989.
* RM [75] Eleanor Rosch and Carolyn B Mervis. Family resemblances: Studies in the internal structure of categories. Cognitive psychology, 7(4):573–605, 1975.
* RSC [20] Yaniv Romano, Matteo Sesia, and Emmanuel Candes. Classification with valid and adaptive coverage. Advances in Neural Information Processing Systems, 33:3581–3591, 2020.
* RSS [73] Lance J Rips, Edward J Shoben, and Edward E Smith. Semantic distance and the verification of semantic relations. Journal of verbal learning and verbal behavior, 12(1):1–20, 1973.
* SC [19] Pragya Sur and Emmanuel J Candès. A modern maximum-likelihood theory for high-dimensional logistic regression. Proceedings of the National Academy of Sciences, 116(29):14516–14525, 2019.
* SV [08] Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 2008.
* TGZ+ [23] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
  * TJ [06] Thomas M. Cover and Joy A. Thomas. Elements of information theory. Wiley-Interscience, 2006.
* TK [74] Amos Tversky and Daniel Kahneman. Judgment under uncertainty: Heuristics and biases: Biases in judgments reveal some heuristics of thinking under uncertainty. science, 185(4157):1124–1131, 1974.
* TK [92] Amos Tversky and Daniel Kahneman. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and uncertainty, 5(4):297–323, 1992.
* VGS [05] Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. Algorithmic learning in a random world. Springer Science & Business Media, 2005.
* WDS+ [20] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics, 2020.
* WNB [18] Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, 2018.
* ZCLJ [21] Zhisheng Zhong, Jiequan Cui, Shu Liu, and Jiaya Jia. Improving calibration for long-tailed recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16489–16498, 2021.
* ZDKZ [22] Linjun Zhang, Zhun Deng, Kenji Kawaguchi, and James Zou. When and how mixup improves calibration. In International Conference on Machine Learning, pages 26135–26160. PMLR, 2022.
* ZK [16] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
* ZKL+ [16] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Antonio Torralba, and Aude Oliva. Places: An image database for deep scene understanding. arXiv preprint arXiv:1610.02055, 2016.
* ZWF+ [21] Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697–12706. PMLR, 2021.
* ZZL [15] Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.
## Appendix A Atypicality Estimation
### A.1 Input Atypicality Estimation
To estimate input atypicality, we use two approaches to estimate the
likelihood of a point under the training distribution. First, we describe the
methods for discriminative models.
##### Fitting individual Gaussians to class conditionals
Here, we follow a similar approach to [46]. Namely, we model the class
conditionals with Gaussian distributions whose covariance matrix is tied
across classes:
$\hat{\mathbb{P}}(X|Y=y)=N(X;\mu_{y},\Sigma)$ (4)
We fit the parameters $\mu_{y}$ and $\Sigma$ with maximum likelihood
estimation. We tie the covariance matrix because of the number of samples
required to fit the density: for a $d$-dimensional problem, fitting individual
covariance matrices requires $O(|\mathcal{Y}|d^{2})$ parameters in total,
which results in low-quality estimates. The atypicality then becomes
$a_{X}(x)=-\max_{y\in\mathcal{Y}}\log\hat{\mathbb{P}}(X=x|Y=y)$ (5)
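As an illustration, a minimal sketch of this estimator on precomputed
embeddings follows; the small ridge term added to the covariance is a
numerical-stability choice of ours, not part of the method description.

```python
import numpy as np

def fit_tied_gaussians(Z, y, n_classes):
    """Fit per-class means and a single covariance matrix shared across
    classes to embeddings Z (N x d) with integer labels y."""
    d = Z.shape[1]
    mus = np.stack([Z[y == c].mean(axis=0) for c in range(n_classes)])
    centered = Z - mus[y]               # subtract each sample's class mean
    sigma = centered.T @ centered / len(Z)
    sigma += 1e-6 * np.eye(d)           # ridge for numerical stability
    return mus, np.linalg.inv(sigma)

def gmm_atypicality(z, mus, sigma_inv):
    """a_X(x) = -max_y log N(z; mu_y, Sigma). The log-normalizer is shared
    across classes, so the minimum squared Mahalanobis distance suffices
    up to an additive constant."""
    diffs = mus - z                     # (n_classes, d)
    m2 = np.einsum("cd,de,ce->c", diffs, sigma_inv, diffs)
    return 0.5 * m2.min()
```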
##### Computing distance with k-Nearest Neighbors
Similarly, we can use distances to nearest neighbors. Concretely, we use the
nearest-neighbor distance,
$a_{X}(x)=d_{\min}(x,\mathcal{D}_{\textup{train}})=\min_{x^{\prime}\in\mathcal{D}_{\textup{train}}}\lVert x^{\prime}-x\rVert$,
as the atypicality metric. Alternatively, one can use related notions such as
the average distance to the k nearest neighbors or the distance to the kth
neighbor. Below, we report results using the average distance to the 5 nearest
neighbors.
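A minimal sketch, using scikit-learn's nearest-neighbor index as one possible
implementation choice:

```python
from sklearn.neighbors import NearestNeighbors

def knn_atypicality(train_embeddings, test_embeddings, k=5):
    """Average distance to the k nearest training embeddings;
    k=1 recovers the plain nearest-neighbor distance."""
    index = NearestNeighbors(n_neighbors=k).fit(train_embeddings)
    dists, _ = index.kneighbors(test_embeddings)
    return dists.mean(axis=1)
```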
##### Fitting the estimators
We fit all atypicality estimators using samples from the training sets and
make inferences for the calibration and test sets.
For instance, we use the training split of ImageNet to fit the corresponding
density estimator and compute the atypicality for the samples from the
validation/test split of ImageNet. All of our results using atypicality are
reported for the test splits of the below datasets.
##### Atypicality Estimation with LLMs:
For language models, we simply compute the negative log-likelihood of each
prompt as the atypicality metric:
$a_{X}(x)=-\log\hat{\mathbb{P}}_{\textrm{LLM}}(x)$. To define confidence, we
use the logits of the language model, conditioned on the prompt. We use the
logit of the first token of each class label and compute the predicted
probabilities by applying softmax to the logits of each class.
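A minimal sketch for a HuggingFace-style causal language model is given below;
summing (rather than averaging) the per-token negative log-likelihoods is a
normalization choice we assume here.

```python
import torch

@torch.no_grad()
def prompt_atypicality(model, tokenizer, prompt):
    """a_X(x) = -log P_LLM(x): the total negative log-likelihood of the
    prompt's tokens under a causal language model."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model(ids, labels=ids)
    # out.loss is the mean NLL over the seq_len - 1 predicted tokens;
    # rescale it to obtain the sequence-level NLL.
    return out.loss.item() * (ids.shape[1] - 1)
```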
### A.2 Class Atypicality
To estimate class atypicality, we simply count the fraction of examples from a
particular label in the training dataset. Let the training dataset be
$\mathcal{D}_{\textrm{train}}=\{(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{N},y_{N})\}$.
Then, we estimate class atypicality with
$a_{Y}(y)=-\log\frac{\sum_{i\in[N]}\mathbf{1}[y_{i}=y]}{N}.$ (6)
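Equivalently, as a short sketch (the helper name is ours):

```python
import numpy as np

def class_atypicality(train_labels, n_classes):
    """a_Y(y) = -log of the label's frequency in the training set (Eq. 6)."""
    counts = np.bincount(train_labels, minlength=n_classes)
    return -np.log(counts / len(train_labels))
```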
### A.3 Atypicality and Confidence
Figure 6: Input Atypicality and Confidence. Here, the x-axis reflects the
input atypicality quantile, and the y-axis indicates confidence. The coloring
for the figure on the left indicates the accuracy within a bin, and the figure
on the right shows the difference between confidence and accuracy within a bin.
We observe that even within the same confidence range, atypical examples tend
to be more miscalibrated and overconfident compared to typical examples.
Are atypicality and confidence equally informative? Beyond the data
perspective given in Figure 1, here we provide quantitative results to
demonstrate the difference. In Figure 6, we give a grid plot where the x-axis
indicates the atypicality quantile of a point and the y-axis indicates its
confidence. The coloring on the left indicates the accuracy within each bin,
and the coloring on the right shows the difference between average confidence
and accuracy. Observe that within a given confidence interval, lower
atypicality corresponds to better-quality probabilistic estimates, while
larger atypicality corresponds to more miscalibration.
## Appendix B Experimental Details
### B.1 Balanced Supervised Classification
#### B.1.1 Datasets
Below is a full list of datasets for balanced classification:
1. ImageNet [15] from Torchvision [48] is an object recognition dataset with 1000 classes. We use the ImageNet-1k version.
2. CIFAR10/100 [32] from Torchvision [48] are object recognition datasets with 10/100 classes.
3. MNLI [70] from Huggingface Datasets [42] is a natural language inference dataset with 3 classes, indicating entailment, neutral, and contradiction outcomes.
#### B.1.2 Models
Most of the models are public models, e.g., obtained from the Transformers
Library [69] or Torchvision [48]. Below we give the full model details and how
one can access them:
1. RoBERTa (HuggingFace roberta-large-mnli) trained on the MNLI dataset. One can use this id on HuggingFace to download the model.
2. ResNet18, ResNet50, and ResNet152 from Torchvision [48], trained on ImageNet.
3. WideResNet28 trained on CIFAR10/100, obtained from [45].
For all BERT-style [14] models, we use the [CLS] token embeddings in the final
layer to perform classification. For all vision models, we use the
penultimate-layer embeddings to fit the density estimators and perform the
analyses. In the experiments, we randomly split the test sets into two equal
halves to obtain a calibration split and a test split, and repeat the
experiments over 10 random seeds.
### B.2 Imbalanced Supervised Classification
#### B.2.1 Datasets
All of our imbalanced classification datasets are previous benchmarks obtained
from the GitHub repository of [71]
(https://github.com/dvlab-research/MiSLAS), with the corresponding training,
validation, and test splits. All of these datasets have an exponential class
imbalance.
1. ImageNet-LT is the long-tailed variant of ImageNet with 1000 classes.
2. CIFAR10/100-LT are the long-tailed variants of CIFAR10/100 with 10/100 classes.
3. Places365-LT is the long-tailed variant of Places365 [74] with 365 classes.
#### B.2.2 Models
Similarly, most of these models are obtained from [71].
1. ResNeXt50 trained on ImageNet-LT, with and without Progressive Balancing, a strategy that addresses class imbalance during training.
2. ResNet152 trained on Places365-LT.
3. ResNet18 trained on CIFAR100-LT by us; this model is pretrained on ImageNet and finetuned on CIFAR100-LT.
We use the validation splits of these datasets as the calibration set, and
report the results on the test set.
### B.3 Classification with LLMs
#### B.3.1 Model
We use Alpaca-7B [64] in a zero-shot setting, where we simply prompt the model
with the classification question. We use the prompting strategy from Content-
Free Calibration [75].
Below, we show examples of each dataset and prompt.
#### B.3.2 Datasets
IMDB is a binary classification dataset of movie reviews, where the goal is to
classify the sentiment in a review. The example prompt has the form ‘The
following review was written for a movie: [Review].\n What is the sentiment,
Positive or Negative? Answer: ’. Below is an example:
The following review was written for a movie: I and a friend rented this
movie. We both found the movie soundtrack and production techniques to be
lagging. The movie’s plot appeared to drag on throughout with little surprise
in the ending. We both agreed that the movie could have been compressed into
roughly an hour giving it more suspense and moving plot.
What is the sentiment, Positive or Negative? Answer:
where the correct answer should be ‘Negative’. We filter out the examples that
exceed the context length limit of Alpaca7B ($512$ tokens). We noticed that
the ‘validation’ split of IMDB leads to significantly worse calibration than
splitting the test set. Thus, for all experiments, we use the test split of
IMDB and split it into 2 sets (instead of using the validation split as the
calibration set, as in the other two datasets).
TREC is a 6-class question classification dataset where the goal is to predict
whether a question will have an answer that is an ‘Abbreviation’, ‘Entity’,
‘Description’, ‘Human’, ‘Location’, or a ‘Number’. We format the prompts with
‘Classify the questions based on their Answer Type. Potential Answer Types
are: Number, Location, Person, Description, Entity, or
Abbreviation.\n\nQuestion: [question]\n\nAnswer Type: ’. Below is an example
prompt:
Classify the questions based on their Answer Type. Potential Answer Types are:
Number, Location, Person, Description, Entity, or Abbreviation.
Question: What county is Modesto , California in ? Answer Type:
where the correct answer should be ‘Location’.
AG News is a news classification dataset. The goal is to classify a given news
into 4 potential classes: ‘World’, ‘Sports’, ‘Business’, or ‘Science and
Technology’. We format the prompts with ‘Classify the news articles into the
categories of World, Sports, Business, and Technology.\n\n Article:
[article]\n\nAnswer: ’. Below is an example prompt:
Classify the news articles into the categories of World, Sports, Business, and
Technology.
Article: Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-
sellers, Wall Street’s dwindling band of ultra-cynics, are seeing green again.
Answer:
where the correct answer should be ‘Business’.
Furthermore, we use [75] as another calibration baseline. Following their
paper, we use N/A, [MASK], and the empty string as content-free inputs.
Concretely, we first obtain the average predicted probabilities over the label
tokens for the content-free inputs, denoted by $p_{cf}$. We then let
$W=\textrm{diag}(p_{cf})^{-1}.$
When making test-time predictions, we compute $\textrm{Softmax}(W^{T}p)$ as
the new predicted probabilities. In our experiments, we observe that it does
not perform as well in this setting, as was previously suggested by [33].
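A minimal sketch of this rescaling for a single test example; the
max-subtraction inside the softmax is a numerical-stability choice of ours.

```python
import numpy as np

def contentfree_calibrate(p_cf, p_test):
    """Rescale test-time probabilities p_test (shape (C,)) by the average
    content-free probabilities p_cf, i.e., Softmax(W^T p) with
    W = diag(p_cf)^{-1}."""
    W = np.diag(1.0 / p_cf)
    scores = W.T @ p_test           # W is diagonal, so W.T == W
    scores = scores - scores.max()  # stabilize the softmax
    return np.exp(scores) / np.exp(scores).sum()
```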
## Appendix C Calibration
We run all our experiments with 10 different random seeds,
$\{0,1,2,\ldots,9\}$. The randomness is over the fitting of the atypicality
estimators and the calibration-test splits (we use the same splits as in the
recalibration experiments for the sake of consistency).
### C.1 Expected Calibration Error
To compute ECE, we generate $\mathbb{B}=\{B_{1},B_{2},\ldots,B_{M}\}$, $M$
equally spaced bins into which samples are sorted and grouped according to
their confidence. $B_{m}$ denotes the set of data indices for which the
confidence of the prediction falls into the interval
$(\frac{m-1}{M},\frac{m}{M}]$. We compute ECE by
$\textrm{ECE}[\hat{\mathbb{P}}]=\sum_{m=1}^{M}\frac{|B_{m}|}{N}\lvert\textup{acc}(B_{m})-\textup{conf}(B_{m})\rvert,$
(7)
where
$\textup{acc}(B_{m})=\frac{1}{|B_{m}|}\sum_{i=1}^{|B_{m}|}\mathbf{1}[\hat{y}_{i}=y_{i}]$
is the accuracy for the bin $m$, and
$\textup{conf}(B_{m})=\frac{1}{|B_{m}|}\sum_{i=1}^{|B_{m}|}\hat{\mathbb{P}}(Y=\hat{y}_{i}|X=x_{i})$
gives the average confidence within the bin. $|B_{m}|$ is the size of the bin
$m$, $N$ is the total number of samples, and $\mathbf{1}[\cdot]$ is the
indicator function.
Throughout our experiments, we let the number of bins $|\mathbb{B}|=10$ by
default when computing ECE.
Similarly, below we report results with RMSCE (Root Mean Squared Calibration
Error) [27] as another calibration metric, which is formulated as follows:
$\textrm{RMSCE}[\hat{\mathbb{P}}]=\sqrt{\sum_{m=1}^{M}\frac{|B_{m}|}{N}(\textup{acc}(B_{m})-\textup{conf}(B_{m}))^{2}}$
(8)
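For reference, a minimal implementation of both metrics following Eqs. 7 and
8, binning by top-label confidence:

```python
import numpy as np

def calibration_errors(conf, correct, n_bins=10):
    """ECE (Eq. 7) and RMSCE (Eq. 8) with equally spaced confidence bins.
    conf holds top-label confidences; correct holds 0/1 indicators of
    whether the prediction was right."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(conf)
    ece, msce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if not mask.any():
            continue  # empty bins contribute nothing
        gap = abs(correct[mask].mean() - conf[mask].mean())
        weight = mask.sum() / n
        ece += weight * gap
        msce += weight * gap ** 2
    return ece, np.sqrt(msce)
```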
### C.2 ECE and Atypicality Results with Different Atypicality Metrics
We further experiment with different atypicality metrics, such as the average
distance to the 5 nearest neighbors (Figure 7). We broadly observe that while
there are slight quantitative differences between atypicality metrics, the
qualitative phenomena remain intact. Tables 1, 2, and 3 give all the results
in tabular form.
Figure 7: Atypicality with 5-nearest neighbors and Uncertainty. Here, we
report the results of the same experiments as Figure 3, with the average
distance to the 5 nearest neighbors as the atypicality metric. See Table 1 for
the results in tabular format.
### C.3 Results in the Tabular Format
Table 1: Recalibration Results for Balanced Classification. For each dataset
and atypicality quantile, the best results are marked in bold. We provide the
standard errors next to the means over 10 random seeds.
Abbreviations: Uncal. = Uncalibrated, AA = Atypicality-Aware; Atyp. denotes the atypicality quantile.

| Estimator | Model / Dataset | Atyp. | Uncal. ECE | Uncal. RMSE | TS ECE | TS RMSE | AdaTS ECE | AdaTS RMSE | AA ECE | AA RMSE |
|---|---|---|---|---|---|---|---|---|---|---|
| GMM | ResNet152 / ImageNet | 0.2 | 0.029±0.001 | 0.077±0.001 | 0.014±0.001 | 0.066±0.002 | 0.014±0.001 | 0.064±0.002 | 0.016±0.001 | 0.065±0.001 |
| GMM | ResNet152 / ImageNet | 0.4 | 0.039±0.001 | 0.086±0.001 | 0.020±0.001 | 0.074±0.001 | 0.020±0.002 | 0.075±0.002 | 0.019±0.001 | 0.070±0.001 |
| GMM | ResNet152 / ImageNet | 0.6 | 0.050±0.001 | 0.097±0.001 | 0.026±0.001 | 0.079±0.002 | 0.025±0.002 | 0.081±0.002 | 0.026±0.000 | 0.078±0.001 |
| GMM | ResNet152 / ImageNet | 0.8 | 0.062±0.001 | 0.106±0.001 | 0.024±0.001 | 0.076±0.001 | 0.023±0.002 | 0.078±0.002 | 0.024±0.001 | 0.077±0.001 |
| GMM | ResNet152 / ImageNet | 1.0 | 0.084±0.001 | 0.123±0.001 | 0.037±0.001 | 0.088±0.001 | 0.033±0.004 | 0.085±0.003 | 0.026±0.001 | 0.080±0.001 |
| GMM | ResNet18 / ImageNet | 0.2 | 0.017±0.001 | 0.074±0.001 | 0.031±0.001 | 0.084±0.001 | 0.028±0.004 | 0.081±0.004 | 0.016±0.001 | 0.069±0.001 |
| GMM | ResNet18 / ImageNet | 0.4 | 0.016±0.001 | 0.074±0.001 | 0.020±0.001 | 0.079±0.001 | 0.023±0.003 | 0.079±0.003 | 0.015±0.001 | 0.072±0.002 |
| GMM | ResNet18 / ImageNet | 0.6 | 0.025±0.001 | 0.080±0.001 | 0.023±0.001 | 0.077±0.001 | 0.024±0.003 | 0.080±0.002 | 0.019±0.001 | 0.074±0.001 |
| GMM | ResNet18 / ImageNet | 0.8 | 0.040±0.001 | 0.092±0.001 | 0.024±0.001 | 0.078±0.001 | 0.029±0.002 | 0.082±0.003 | 0.018±0.001 | 0.074±0.001 |
| GMM | ResNet18 / ImageNet | 1.0 | 0.054±0.001 | 0.100±0.001 | 0.033±0.001 | 0.085±0.001 | 0.032±0.003 | 0.083±0.003 | 0.017±0.002 | 0.073±0.002 |
| GMM | ResNet50 / ImageNet | 0.2 | 0.021±0.001 | 0.069±0.002 | 0.018±0.001 | 0.071±0.001 | 0.017±0.001 | 0.068±0.002 | 0.018±0.001 | 0.067±0.002 |
| GMM | ResNet50 / ImageNet | 0.4 | 0.029±0.001 | 0.079±0.001 | 0.023±0.001 | 0.077±0.001 | 0.020±0.002 | 0.074±0.001 | 0.022±0.001 | 0.074±0.001 |
| GMM | ResNet50 / ImageNet | 0.6 | 0.041±0.001 | 0.091±0.001 | 0.023±0.001 | 0.078±0.001 | 0.026±0.002 | 0.079±0.002 | 0.026±0.001 | 0.078±0.001 |
| GMM | ResNet50 / ImageNet | 0.8 | 0.042±0.001 | 0.092±0.001 | 0.024±0.001 | 0.078±0.001 | 0.024±0.001 | 0.080±0.002 | 0.020±0.001 | 0.075±0.001 |
| GMM | ResNet50 / ImageNet | 1.0 | 0.078±0.000 | 0.118±0.000 | 0.049±0.001 | 0.095±0.001 | 0.047±0.004 | 0.096±0.003 | 0.031±0.001 | 0.082±0.001 |
| GMM | RoBERTa / MNLI | 0.2 | 0.005±0.001 | 0.062±0.004 | 0.013±0.001 | 0.077±0.002 | 0.006±0.002 | 0.063±0.005 | 0.006±0.001 | 0.063±0.002 |
| GMM | RoBERTa / MNLI | 0.4 | 0.016±0.001 | 0.091±0.004 | 0.013±0.001 | 0.083±0.002 | 0.010±0.002 | 0.075±0.005 | 0.008±0.001 | 0.072±0.002 |
| GMM | RoBERTa / MNLI | 0.6 | 0.034±0.002 | 0.116±0.002 | 0.015±0.001 | 0.092±0.004 | 0.016±0.001 | 0.093±0.005 | 0.017±0.002 | 0.090±0.004 |
| GMM | RoBERTa / MNLI | 0.8 | 0.061±0.002 | 0.153±0.003 | 0.031±0.002 | 0.113±0.004 | 0.024±0.002 | 0.104±0.004 | 0.028±0.002 | 0.114±0.004 |
| GMM | RoBERTa / MNLI | 1.0 | 0.065±0.002 | 0.156±0.002 | 0.028±0.002 | 0.112±0.003 | 0.021±0.002 | 0.106±0.004 | 0.023±0.003 | 0.107±0.004 |
| GMM | WideResNet28 / CIFAR10 | 0.2 | 0.003±0.000 | 0.041±0.001 | 0.009±0.000 | 0.057±0.000 | 0.003±0.000 | 0.051±0.002 | 0.001±0.000 | 0.041±0.001 |
| GMM | WideResNet28 / CIFAR10 | 0.4 | 0.005±0.001 | 0.051±0.002 | 0.007±0.001 | 0.055±0.001 | 0.002±0.001 | 0.047±0.002 | 0.002±0.000 | 0.048±0.002 |
| GMM | WideResNet28 / CIFAR10 | 0.6 | 0.004±0.001 | 0.050±0.003 | 0.008±0.001 | 0.058±0.001 | 0.004±0.001 | 0.049±0.004 | 0.002±0.000 | 0.044±0.002 |
| GMM | WideResNet28 / CIFAR10 | 0.8 | 0.009±0.001 | 0.066±0.003 | 0.004±0.001 | 0.056±0.002 | 0.003±0.001 | 0.058±0.003 | 0.002±0.001 | 0.055±0.002 |
| GMM | WideResNet28 / CIFAR10 | 1.0 | 0.142±0.002 | 0.240±0.002 | 0.073±0.002 | 0.168±0.003 | 0.046±0.002 | 0.129±0.004 | 0.041±0.002 | 0.123±0.002 |
| GMM | WideResNet28 / CIFAR100 | 0.2 | 0.007±0.001 | 0.059±0.003 | 0.018±0.001 | 0.084±0.001 | 0.021±0.003 | 0.089±0.005 | 0.021±0.001 | 0.094±0.002 |
| GMM | WideResNet28 / CIFAR100 | 0.4 | 0.029±0.001 | 0.110±0.002 | 0.005±0.001 | 0.070±0.003 | 0.008±0.002 | 0.073±0.003 | 0.005±0.001 | 0.078±0.003 |
| GMM | WideResNet28 / CIFAR100 | 0.6 | 0.101±0.002 | 0.201±0.002 | 0.052±0.001 | 0.136±0.002 | 0.044±0.004 | 0.124±0.007 | 0.049±0.001 | 0.132±0.003 |
| GMM | WideResNet28 / CIFAR100 | 0.8 | 0.257±0.004 | 0.297±0.002 | 0.106±0.003 | 0.191±0.003 | 0.101±0.003 | 0.187±0.002 | 0.105±0.003 | 0.190±0.003 |
| GMM | WideResNet28 / CIFAR100 | 1.0 | 0.371±0.004 | 0.351±0.002 | 0.105±0.004 | 0.200±0.003 | 0.101±0.004 | 0.199±0.003 | 0.109±0.005 | 0.207±0.005 |
| KNN | ResNet152 / ImageNet | 0.2 | 0.035±0.001 | 0.085±0.001 | 0.014±0.001 | 0.062±0.002 | 0.016±0.002 | 0.068±0.001 | 0.011±0.001 | 0.061±0.001 |
| KNN | ResNet152 / ImageNet | 0.4 | 0.039±0.001 | 0.085±0.001 | 0.015±0.001 | 0.067±0.001 | 0.018±0.001 | 0.069±0.002 | 0.017±0.001 | 0.071±0.001 |
| KNN | ResNet152 / ImageNet | 0.6 | 0.035±0.001 | 0.083±0.001 | 0.021±0.001 | 0.076±0.002 | 0.024±0.003 | 0.079±0.004 | 0.019±0.001 | 0.072±0.001 |
| KNN | ResNet152 / ImageNet | 0.8 | 0.057±0.002 | 0.102±0.001 | 0.031±0.001 | 0.084±0.001 | 0.033±0.003 | 0.085±0.003 | 0.032±0.001 | 0.082±0.001 |
| KNN | ResNet152 / ImageNet | 1.0 | 0.099±0.001 | 0.129±0.001 | 0.036±0.001 | 0.090±0.001 | 0.042±0.006 | 0.093±0.004 | 0.037±0.001 | 0.091±0.001 |
| KNN | ResNet18 / ImageNet | 0.2 | 0.025±0.001 | 0.077±0.002 | 0.020±0.001 | 0.074±0.001 | 0.037±0.021 | 0.115±0.042 | 0.019±0.001 | 0.072±0.001 |
| KNN | ResNet18 / ImageNet | 0.4 | 0.017±0.001 | 0.072±0.001 | 0.021±0.001 | 0.076±0.001 | 0.045±0.025 | 0.120±0.046 | 0.019±0.001 | 0.074±0.001 |
| KNN | ResNet18 / ImageNet | 0.6 | 0.021±0.001 | 0.074±0.001 | 0.027±0.001 | 0.083±0.001 | 0.052±0.029 | 0.130±0.050 | 0.020±0.001 | 0.074±0.002 |
| KNN | ResNet18 / ImageNet | 0.8 | 0.033±0.002 | 0.085±0.001 | 0.022±0.001 | 0.079±0.001 | 0.061±0.038 | 0.136±0.058 | 0.021±0.001 | 0.077±0.001 |
| KNN | ResNet18 / ImageNet | 1.0 | 0.054±0.001 | 0.102±0.001 | 0.030±0.001 | 0.084±0.001 | 0.079±0.045 | 0.152±0.064 | 0.027±0.001 | 0.081±0.002 |
| KNN | ResNet50 / ImageNet | 0.2 | 0.029±0.001 | 0.078±0.001 | 0.016±0.001 | 0.066±0.001 | 0.015±0.002 | 0.068±0.002 | 0.016±0.001 | 0.067±0.001 |
| KNN | ResNet50 / ImageNet | 0.4 | 0.029±0.001 | 0.078±0.001 | 0.018±0.001 | 0.072±0.001 | 0.019±0.001 | 0.074±0.002 | 0.017±0.001 | 0.070±0.002 |
| KNN | ResNet50 / ImageNet | 0.6 | 0.029±0.001 | 0.078±0.001 | 0.021±0.001 | 0.077±0.001 | 0.020±0.002 | 0.077±0.002 | 0.020±0.001 | 0.075±0.001 |
| KNN | ResNet50 / ImageNet | 0.8 | 0.046±0.001 | 0.096±0.001 | 0.028±0.001 | 0.079±0.001 | 0.026±0.001 | 0.079±0.001 | 0.031±0.001 | 0.082±0.001 |
| KNN | ResNet50 / ImageNet | 1.0 | 0.076±0.001 | 0.116±0.001 | 0.039±0.002 | 0.092±0.001 | 0.040±0.004 | 0.091±0.003 | 0.038±0.002 | 0.090±0.001 |
| KNN | RoBERTa / MNLI | 0.2 | 0.003±0.000 | 0.056±0.003 | 0.011±0.001 | 0.079±0.002 | 0.011±0.002 | 0.067±0.005 | 0.003±0.001 | 0.061±0.003 |
| KNN | RoBERTa / MNLI | 0.4 | 0.014±0.001 | 0.080±0.003 | 0.015±0.002 | 0.091±0.003 | 0.016±0.004 | 0.092±0.007 | 0.008±0.001 | 0.074±0.002 |
| KNN | RoBERTa / MNLI | 0.6 | 0.029±0.002 | 0.111±0.003 | 0.014±0.001 | 0.077±0.002 | 0.023±0.005 | 0.095±0.008 | 0.015±0.001 | 0.081±0.005 |
| KNN | RoBERTa / MNLI | 0.8 | 0.067±0.002 | 0.156±0.003 | 0.034±0.002 | 0.108±0.004 | 0.030±0.003 | 0.107±0.004 | 0.026±0.003 | 0.101±0.005 |
| KNN | RoBERTa / MNLI | 1.0 | 0.064±0.002 | 0.153±0.003 | 0.027±0.002 | 0.110±0.003 | 0.033±0.007 | 0.119±0.009 | 0.026±0.003 | 0.107±0.005 |
| KNN | WideResNet28 / CIFAR10 | 0.2 | 0.001±0.000 | 0.031±0.003 | 0.011±0.000 | 0.061±0.001 | 0.005±0.001 | 0.049±0.004 | 0.003±0.000 | 0.037±0.001 |
| KNN | WideResNet28 / CIFAR10 | 0.4 | 0.000±0.000 | 0.010±0.005 | 0.012±0.000 | 0.062±0.001 | 0.007±0.001 | 0.054±0.004 | 0.004±0.000 | 0.037±0.001 |
| KNN | WideResNet28 / CIFAR10 | 0.6 | 0.005±0.000 | 0.049±0.002 | 0.007±0.001 | 0.055±0.002 | 0.003±0.001 | 0.046±0.004 | 0.001±0.000 | 0.048±0.002 |
| KNN | WideResNet28 / CIFAR10 | 0.8 | 0.009±0.001 | 0.061±0.001 | 0.004±0.001 | 0.058±0.001 | 0.004±0.001 | 0.062±0.003 | 0.003±0.000 | 0.052±0.002 |
| KNN | WideResNet28 / CIFAR10 | 1.0 | 0.148±0.002 | 0.241±0.002 | 0.079±0.003 | 0.170±0.003 | 0.053±0.004 | 0.136±0.005 | 0.039±0.003 | 0.121±0.003 |
| KNN | WideResNet28 / CIFAR100 | 0.2 | 0.006±0.000 | 0.054±0.002 | 0.020±0.001 | 0.089±0.001 | 0.021±0.002 | 0.089±0.003 | 0.014±0.001 | 0.083±0.001 |
| KNN | WideResNet28 / CIFAR100 | 0.4 | 0.022±0.001 | 0.092±0.002 | 0.004±0.001 | 0.070±0.002 | 0.006±0.001 | 0.067±0.004 | 0.005±0.001 | 0.076±0.002 |
| KNN | WideResNet28 / CIFAR100 | 0.6 | 0.090±0.001 | 0.183±0.002 | 0.054±0.001 | 0.140±0.002 | 0.049±0.003 | 0.132±0.004 | 0.041±0.002 | 0.120±0.002 |
| KNN | WideResNet28 / CIFAR100 | 0.8 | 0.266±0.005 | 0.295±0.003 | 0.124±0.004 | 0.205±0.003 | 0.118±0.004 | 0.202±0.003 | 0.091±0.003 | 0.185±0.003 |
| KNN | WideResNet28 / CIFAR100 | 1.0 | 0.380±0.003 | 0.356±0.002 | 0.086±0.003 | 0.187±0.004 | 0.087±0.004 | 0.185±0.005 | 0.117±0.005 | 0.216±0.004 |
Table 2: Recalibration Results for Imbalanced Classification. For each dataset
and atypicality quantile, the best results are marked in bold. We provide the
standard errors next to the means over 10 random seeds.
Abbreviations: Uncal. = Uncalibrated, AA = Atypicality-Aware, Acc. = Accuracy; Atyp. denotes the atypicality quantile.

| Estimator | Model / Dataset | Atyp. | Uncal. ECE | Uncal. RMSE | Uncal. Acc. | TS ECE | TS RMSE | TS Acc. | AdaTS ECE | AdaTS RMSE | AdaTS Acc. | AA ECE | AA RMSE | AA Acc. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GMM | ResNet152 / Places365-LT | 0.2 | 0.159±0.000 | 0.144±0.000 | 0.492±0.000 | 0.064±0.000 | 0.099±0.000 | 0.492±0.000 | 0.044±0.004 | 0.088±0.002 | 0.492±0.000 | 0.065±0.000 | 0.101±0.000 | 0.412±0.000 |
| GMM | ResNet152 / Places365-LT | 0.4 | 0.208±0.000 | 0.166±0.000 | 0.387±0.000 | 0.061±0.000 | 0.095±0.000 | 0.387±0.000 | 0.053±0.001 | 0.099±0.001 | 0.387±0.000 | 0.042±0.000 | 0.090±0.000 | 0.415±0.000 |
| GMM | ResNet152 / Places365-LT | 0.6 | 0.270±0.000 | 0.186±0.000 | 0.317±0.000 | 0.073±0.000 | 0.111±0.000 | 0.317±0.000 | 0.076±0.005 | 0.111±0.003 | 0.317±0.000 | 0.054±0.000 | 0.097±0.000 | 0.398±0.000 |
| GMM | ResNet152 / Places365-LT | 0.8 | 0.334±0.000 | 0.214±0.000 | 0.218±0.000 | 0.120±0.000 | 0.150±0.000 | 0.218±0.000 | 0.127±0.006 | 0.152±0.002 | 0.218±0.000 | 0.058±0.000 | 0.103±0.000 | 0.371±0.000 |
| GMM | ResNet152 / Places365-LT | 1.0 | 0.437±0.000 | 0.243±0.000 | 0.081±0.000 | 0.211±0.000 | 0.185±0.000 | 0.081±0.000 | 0.212±0.007 | 0.187±0.002 | 0.081±0.000 | 0.103±0.000 | 0.137±0.000 | 0.283±0.000 |
| GMM | ResNet18 / CIFAR10-LT | 0.2 | 0.017±0.000 | 0.077±0.000 | 0.927±0.000 | 0.060±0.000 | 0.164±0.000 | 0.927±0.000 | 0.084±0.008 | 0.190±0.008 | 0.927±0.000 | 0.030±0.000 | 0.104±0.000 | 0.874±0.000 |
| GMM | ResNet18 / CIFAR10-LT | 0.4 | 0.054±0.000 | 0.145±0.000 | 0.825±0.000 | 0.082±0.000 | 0.182±0.000 | 0.825±0.000 | 0.102±0.008 | 0.201±0.006 | 0.825±0.000 | 0.027±0.000 | 0.104±0.000 | 0.741±0.000 |
| GMM | ResNet18 / CIFAR10-LT | 0.6 | 0.152±0.000 | 0.236±0.000 | 0.672±0.000 | 0.049±0.000 | 0.133±0.000 | 0.672±0.000 | 0.069±0.005 | 0.174±0.007 | 0.672±0.000 | 0.044±0.000 | 0.127±0.000 | 0.770±0.000 |
| GMM | ResNet18 / CIFAR10-LT | 0.8 | 0.098±0.000 | 0.188±0.000 | 0.779±0.000 | 0.022±0.000 | 0.107±0.000 | 0.779±0.000 | 0.032±0.005 | 0.113±0.007 | 0.779±0.000 | 0.034±0.000 | 0.104±0.000 | 0.810±0.000 |
| GMM | ResNet18 / CIFAR10-LT | 1.0 | 0.244±0.000 | 0.286±0.000 | 0.584±0.000 | 0.128±0.000 | 0.206±0.000 | 0.584±0.000 | 0.157±0.011 | 0.248±0.009 | 0.584±0.000 | 0.034±0.000 | 0.109±0.000 | 0.829±0.000 |
| GMM | ResNet18 / CIFAR100-LT | 0.2 | 0.138±0.000 | 0.231±0.000 | 0.660±0.000 | 0.106±0.000 | 0.203±0.000 | 0.660±0.000 | 0.105±0.012 | 0.198±0.010 | 0.660±0.000 | 0.031±0.000 | 0.112±0.000 | 0.594±0.000 |
| GMM | ResNet18 / CIFAR100-LT | 0.4 | 0.235±0.000 | 0.291±0.000 | 0.523±0.000 | 0.056±0.000 | 0.137±0.000 | 0.523±0.000 | 0.058±0.006 | 0.145±0.007 | 0.523±0.000 | 0.040±0.000 | 0.109±0.000 | 0.556±0.000 |
| GMM | ResNet18 / CIFAR100-LT | 0.6 | 0.295±0.000 | 0.320±0.000 | 0.431±0.000 | 0.063±0.000 | 0.166±0.000 | 0.431±0.000 | 0.067±0.009 | 0.160±0.010 | 0.431±0.000 | 0.043±0.000 | 0.142±0.000 | 0.508±0.000 |
| GMM | ResNet18 / CIFAR100-LT | 0.8 | 0.381±0.000 | 0.360±0.000 | 0.344±0.000 | 0.121±0.000 | 0.221±0.000 | 0.344±0.000 | 0.100±0.016 | 0.193±0.013 | 0.344±0.000 | 0.080±0.000 | 0.179±0.000 | 0.432±0.000 |
| GMM | ResNet18 / CIFAR100-LT | 1.0 | 0.403±0.000 | 0.382±0.000 | 0.269±0.000 | 0.116±0.000 | 0.234±0.000 | 0.269±0.000 | 0.129±0.015 | 0.233±0.010 | 0.269±0.000 | 0.051±0.000 | 0.165±0.000 | 0.423±0.000 |
| GMM | ResNext50 / ImageNet-LT | 0.2 | 0.019±0.000 | 0.061±0.000 | 0.716±0.000 | 0.073±0.000 | 0.097±0.000 | 0.716±0.000 | 0.069±0.006 | 0.095±0.003 | 0.716±0.000 | 0.013±0.000 | 0.062±0.000 | 0.627±0.000 |
| GMM | ResNext50 / ImageNet-LT | 0.4 | 0.032±0.000 | 0.075±0.000 | 0.592±0.000 | 0.059±0.000 | 0.089±0.000 | 0.592±0.000 | 0.056±0.006 | 0.087±0.003 | 0.592±0.000 | 0.022±0.000 | 0.068±0.000 | 0.576±0.000 |
| GMM | ResNext50 / ImageNet-LT | 0.6 | 0.081±0.000 | 0.103±0.000 | 0.481±0.000 | 0.045±0.000 | 0.079±0.000 | 0.481±0.000 | 0.046±0.002 | 0.081±0.001 | 0.481±0.000 | 0.020±0.000 | 0.063±0.000 | 0.525±0.000 |
| GMM | ResNext50 / ImageNet-LT | 0.8 | 0.171±0.000 | 0.144±0.000 | 0.309±0.000 | 0.079±0.000 | 0.115±0.000 | 0.309±0.000 | 0.086±0.007 | 0.116±0.002 | 0.309±0.000 | 0.032±0.000 | 0.080±0.000 | 0.446±0.000 |
| GMM | ResNext50 / ImageNet-LT | 1.0 | 0.324±0.000 | 0.198±0.000 | 0.113±0.000 | 0.230±0.000 | 0.174±0.000 | 0.113±0.000 | 0.235±0.008 | 0.175±0.002 | 0.113±0.000 | 0.082±0.000 | 0.113±0.000 | 0.318±0.000 |
| GMM | ResNext50 (+P.B.) / ImageNet-LT | 0.2 | 0.017±0.000 | 0.064±0.000 | 0.653±0.000 | 0.081±0.000 | 0.100±0.000 | 0.653±0.000 | 0.084±0.004 | 0.101±0.002 | 0.653±0.000 | 0.020±0.000 | 0.070±0.000 | 0.579±0.000 |
| GMM | ResNext50 (+P.B.) / ImageNet-LT | 0.4 | 0.023±0.000 | 0.068±0.000 | 0.579±0.000 | 0.062±0.000 | 0.090±0.000 | 0.579±0.000 | 0.063±0.004 | 0.089±0.002 | 0.579±0.000 | 0.013±0.000 | 0.064±0.000 | 0.535±0.000 |
| GMM | ResNext50 (+P.B.) / ImageNet-LT | 0.6 | 0.065±0.000 | 0.090±0.000 | 0.508±0.000 | 0.027±0.000 | 0.074±0.000 | 0.508±0.000 | 0.027±0.003 | 0.070±0.002 | 0.508±0.000 | 0.018±0.000 | 0.067±0.000 | 0.508±0.000 |
| GMM | ResNext50 (+P.B.) / ImageNet-LT | 0.8 | 0.129±0.000 | 0.121±0.000 | 0.388±0.000 | 0.042±0.000 | 0.082±0.000 | 0.388±0.000 | 0.043±0.003 | 0.083±0.002 | 0.388±0.000 | 0.039±0.000 | 0.076±0.000 | 0.444±0.000 |
| GMM | ResNext50 (+P.B.) / ImageNet-LT | 1.0 | 0.236±0.000 | 0.164±0.000 | 0.224±0.000 | 0.147±0.000 | 0.134±0.000 | 0.224±0.000 | 0.147±0.004 | 0.133±0.002 | 0.224±0.000 | 0.078±0.000 | 0.100±0.000 | 0.359±0.000 |
| KNN | ResNet152 / Places365-LT | 0.2 | 0.159±0.000 | 0.144±0.000 | 0.492±0.000 | 0.064±0.000 | 0.099±0.000 | 0.492±0.000 | 0.047±0.004 | 0.089±0.002 | 0.492±0.000 | 0.091±0.000 | 0.116±0.000 | 0.387±0.000 |
| KNN | ResNet152 / Places365-LT | 0.4 | 0.208±0.000 | 0.166±0.000 | 0.387±0.000 | 0.061±0.000 | 0.095±0.000 | 0.387±0.000 | 0.054±0.001 | 0.097±0.001 | 0.387±0.000 | 0.048±0.000 | 0.090±0.000 | 0.448±0.000 |
| KNN | ResNet152 / Places365-LT | 0.6 | 0.270±0.000 | 0.186±0.000 | 0.317±0.000 | 0.073±0.000 | 0.111±0.000 | 0.317±0.000 | 0.068±0.004 | 0.107±0.002 | 0.317±0.000 | 0.057±0.000 | 0.098±0.000 | 0.416±0.000 |
| KNN | ResNet152 / Places365-LT | 0.8 | 0.334±0.000 | 0.214±0.000 | 0.218±0.000 | 0.120±0.000 | 0.150±0.000 | 0.218±0.000 | 0.117±0.004 | 0.149±0.002 | 0.218±0.000 | 0.065±0.000 | 0.106±0.000 | 0.367±0.000 |
| KNN | ResNet152 / Places365-LT | 1.0 | 0.437±0.000 | 0.243±0.000 | 0.081±0.000 | 0.211±0.000 | 0.185±0.000 | 0.081±0.000 | 0.202±0.004 | 0.184±0.001 | 0.081±0.000 | 0.111±0.000 | 0.152±0.000 | 0.207±0.000 |
| KNN | ResNet18 / CIFAR10-LT | 0.2 | 0.017±0.000 | 0.077±0.000 | 0.927±0.000 | 0.060±0.000 | 0.164±0.000 | 0.927±0.000 | 0.098±0.009 | 0.204±0.008 | 0.927±0.000 | 0.025±0.000 | 0.085±0.000 | 0.873±0.000 |
| KNN | ResNet18 / CIFAR10-LT | 0.4 | 0.054±0.000 | 0.145±0.000 | 0.825±0.000 | 0.082±0.000 | 0.182±0.000 | 0.825±0.000 | 0.127±0.008 | 0.220±0.005 | 0.825±0.000 | 0.024±0.000 | 0.106±0.000 | 0.742±0.000 |
| KNN | ResNet18 / CIFAR10-LT | 0.6 | 0.152±0.000 | 0.236±0.000 | 0.672±0.000 | 0.049±0.000 | 0.133±0.000 | 0.672±0.000 | 0.054±0.005 | 0.157±0.005 | 0.672±0.000 | 0.039±0.000 | 0.124±0.000 | 0.768±0.000 |
| KNN | ResNet18 / CIFAR10-LT | 0.8 | 0.098±0.000 | 0.188±0.000 | 0.779±0.000 | 0.022±0.000 | 0.107±0.000 | 0.779±0.000 | 0.028±0.002 | 0.106±0.005 | 0.779±0.000 | 0.029±0.000 | 0.102±0.000 | 0.810±0.000 |
| KNN | ResNet18 / CIFAR10-LT | 1.0 | 0.244±0.000 | 0.286±0.000 | 0.584±0.000 | 0.128±0.000 | 0.206±0.000 | 0.584±0.000 | 0.153±0.009 | 0.248±0.007 | 0.584±0.000 | 0.037±0.000 | 0.123±0.000 | 0.832±0.000 |
| KNN | ResNet18 / CIFAR100-LT | 0.2 | 0.138±0.000 | 0.231±0.000 | 0.660±0.000 | 0.106±0.000 | 0.203±0.000 | 0.660±0.000 | 0.088±0.008 | 0.184±0.008 | 0.660±0.000 | 0.029±0.000 | 0.108±0.000 | 0.595±0.000 |
| KNN | ResNet18 / CIFAR100-LT | 0.4 | 0.235±0.000 | 0.291±0.000 | 0.523±0.000 | 0.056±0.000 | 0.137±0.000 | 0.523±0.000 | 0.049±0.003 | 0.135±0.004 | 0.523±0.000 | 0.049±0.000 | 0.123±0.000 | 0.554±0.000 |
| KNN | ResNet18 / CIFAR100-LT | 0.6 | 0.295±0.000 | 0.320±0.000 | 0.431±0.000 | 0.063±0.000 | 0.166±0.000 | 0.431±0.000 | 0.073±0.008 | 0.171±0.008 | 0.431±0.000 | 0.030±0.000 | 0.113±0.000 | 0.509±0.000 |
| KNN | ResNet18 / CIFAR100-LT | 0.8 | 0.381±0.000 | 0.360±0.000 | 0.344±0.000 | 0.121±0.000 | 0.221±0.000 | 0.344±0.000 | 0.119±0.012 | 0.210±0.009 | 0.344±0.000 | 0.078±0.000 | 0.175±0.000 | 0.435±0.000 |
| KNN | ResNet18 / CIFAR100-LT | 1.0 | 0.403±0.000 | 0.382±0.000 | 0.269±0.000 | 0.116±0.000 | 0.234±0.000 | 0.269±0.000 | 0.145±0.011 | 0.244±0.008 | 0.269±0.000 | 0.059±0.000 | 0.163±0.000 | 0.424±0.000 |
| KNN | ResNext50 / ImageNet-LT | 0.2 | 0.019±0.000 | 0.061±0.000 | 0.716±0.000 | 0.073±0.000 | 0.097±0.000 | 0.716±0.000 | 0.067±0.004 | 0.094±0.002 | 0.716±0.000 | 0.014±0.000 | 0.065±0.000 | 0.626±0.000 |
| KNN | ResNext50 / ImageNet-LT | 0.4 | 0.032±0.000 | 0.075±0.000 | 0.592±0.000 | 0.059±0.000 | 0.089±0.000 | 0.592±0.000 | 0.052±0.004 | 0.086±0.002 | 0.592±0.000 | 0.023±0.000 | 0.068±0.000 | 0.575±0.000 |
| KNN | ResNext50 / ImageNet-LT | 0.6 | 0.081±0.000 | 0.103±0.000 | 0.481±0.000 | 0.045±0.000 | 0.079±0.000 | 0.481±0.000 | 0.042±0.001 | 0.080±0.001 | 0.481±0.000 | 0.023±0.000 | 0.068±0.000 | 0.524±0.000 |
| KNN | ResNext50 / ImageNet-LT | 0.8 | 0.171±0.000 | 0.144±0.000 | 0.309±0.000 | 0.079±0.000 | 0.115±0.000 | 0.309±0.000 | 0.088±0.005 | 0.116±0.002 | 0.309±0.000 | 0.037±0.000 | 0.079±0.000 | 0.447±0.000 |
| KNN | ResNext50 / ImageNet-LT | 1.0 | 0.324±0.000 | 0.198±0.000 | 0.113±0.000 | 0.230±0.000 | 0.174±0.000 | 0.113±0.000 | 0.238±0.005 | 0.176±0.001 | 0.113±0.000 | 0.086±0.000 | 0.116±0.000 | 0.318±0.000 |
| KNN | ResNext50 (+P.B.) / ImageNet-LT | 0.2 | 0.017±0.000 | 0.064±0.000 | 0.653±0.000 | 0.081±0.000 | 0.100±0.000 | 0.653±0.000 | 0.083±0.005 | 0.100±0.003 | 0.653±0.000 | 0.018±0.000 | 0.069±0.000 | 0.578±0.000 |
| KNN | ResNext50 (+P.B.) / ImageNet-LT | 0.4 | 0.023±0.000 | 0.068±0.000 | 0.579±0.000 | 0.062±0.000 | 0.090±0.000 | 0.579±0.000 | 0.062±0.005 | 0.089±0.003 | 0.579±0.000 | 0.015±0.000 | 0.062±0.000 | 0.535±0.000 |
| KNN | ResNext50 (+P.B.) / ImageNet-LT | 0.6 | 0.065±0.000 | 0.090±0.000 | 0.508±0.000 | 0.027±0.000 | 0.074±0.000 | 0.508±0.000 | 0.028±0.003 | 0.073±0.001 | 0.508±0.000 | 0.024±0.000 | 0.069±0.000 | 0.507±0.000 |
| KNN | ResNext50 (+P.B.) / ImageNet-LT | 0.8 | 0.129±0.000 | 0.121±0.000 | 0.388±0.000 | 0.042±0.000 | 0.082±0.000 | 0.388±0.000 | 0.043±0.005 | 0.083±0.002 | 0.388±0.000 | 0.038±0.000 | 0.080±0.000 | 0.444±0.000 |
| KNN | ResNext50 (+P.B.) / ImageNet-LT | 1.0 | 0.236±0.000 | 0.164±0.000 | 0.224±0.000 | 0.147±0.000 | 0.134±0.000 | 0.224±0.000 | 0.148±0.005 | 0.134±0.002 | 0.224±0.000 | 0.076±0.000 | 0.101±0.000 | 0.358±0.000 |
Table 3: Recalibration Results for LLM Classification. For each dataset and
atypicality quantile, the best results are marked in bold. We provide the
standard errors next to the means over 10 random seeds.
Abbreviations: Uncal. = Uncalibrated, CF = Content-Free, AA = Atypicality-Aware, Acc. = Accuracy; Atyp. denotes the atypicality quantile.

| Model / Dataset | Atyp. | Uncal. ECE | Uncal. RMSE | Uncal. Acc. | CF ECE | CF RMSE | CF Acc. | AA ECE | AA RMSE | AA Acc. |
|---|---|---|---|---|---|---|---|---|---|---|
| Alpaca7B / AG News | 0.25 | 0.180±0.000 | 0.219±0.000 | 0.681±0.000 | 0.452±0.000 | 0.411±0.000 | 0.527±0.000 | 0.070±0.000 | 0.142±0.000 | 0.760±0.000 |
| Alpaca7B / AG News | 0.50 | 0.165±0.000 | 0.202±0.000 | 0.671±0.000 | 0.413±0.000 | 0.370±0.000 | 0.571±0.000 | 0.027±0.000 | 0.101±0.000 | 0.775±0.000 |
| Alpaca7B / AG News | 0.75 | 0.169±0.000 | 0.204±0.000 | 0.657±0.000 | 0.429±0.000 | 0.364±0.000 | 0.549±0.000 | 0.030±0.000 | 0.099±0.000 | 0.752±0.000 |
| Alpaca7B / AG News | 1.00 | 0.202±0.000 | 0.222±0.000 | 0.580±0.000 | 0.448±0.000 | 0.350±0.000 | 0.524±0.000 | 0.028±0.000 | 0.110±0.000 | 0.715±0.000 |
| Alpaca7B / IMDB | 0.25 | 0.023±0.001 | 0.087±0.002 | 0.887±0.001 | 0.141±0.001 | 0.194±0.001 | 0.883±0.001 | 0.011±0.001 | 0.068±0.002 | 0.927±0.001 |
| Alpaca7B / IMDB | 0.50 | 0.024±0.001 | 0.095±0.002 | 0.851±0.001 | 0.117±0.002 | 0.185±0.002 | 0.838±0.002 | 0.014±0.001 | 0.078±0.002 | 0.920±0.001 |
| Alpaca7B / IMDB | 0.75 | 0.051±0.002 | 0.124±0.002 | 0.795±0.002 | 0.110±0.001 | 0.181±0.002 | 0.823±0.002 | 0.009±0.001 | 0.069±0.001 | 0.895±0.001 |
| Alpaca7B / IMDB | 1.00 | 0.063±0.001 | 0.136±0.001 | 0.756±0.001 | 0.122±0.001 | 0.201±0.001 | 0.752±0.001 | 0.021±0.001 | 0.088±0.002 | 0.883±0.001 |
| Alpaca7B / TREC | 0.25 | 0.139±0.000 | 0.355±0.000 | 0.744±0.000 | 0.286±0.000 | 0.265±0.000 | 0.776±0.000 | 0.109±0.000 | 0.266±0.000 | 0.896±0.000 |
| Alpaca7B / TREC | 0.50 | 0.217±0.000 | 0.444±0.000 | 0.680±0.000 | 0.314±0.000 | 0.503±0.000 | 0.672±0.000 | 0.068±0.000 | 0.148±0.000 | 0.816±0.000 |
| Alpaca7B / TREC | 0.75 | 0.238±0.000 | 0.482±0.000 | 0.592±0.000 | 0.464±0.000 | 0.622±0.000 | 0.520±0.000 | 0.067±0.000 | 0.198±0.000 | 0.784±0.000 |
| Alpaca7B / TREC | 1.00 | 0.207±0.000 | 0.451±0.000 | 0.544±0.000 | 0.528±0.000 | 0.685±0.000 | 0.432±0.000 | 0.150±0.000 | 0.199±0.000 | 0.696±0.000 |
## Appendix D Recalibration
Throughout all our recalibration experiments, we first split the test set into
two equally sized calibration and test splits. Then, we fit the recalibration
method using the calibration split and compute the performance on the test
split. We run all our experiments with 10 different random seeds.
### D.1 Temperature Scaling
To perform temperature scaling [20], we use the calibration set to fit the
temperature parameter. To perform the optimization, we use the LBFGS [38]
algorithm from PyTorch with strong Wolfe line search, following [20]. Namely,
we optimize the parameter $\tau$ in
$\hat{\mathbb{P}}_{\textup{TS}}(X)=\textup{Softmax}(f(X)/\tau)$ (9)
and then use it during inference to rescale the logits produced by $f$. We use
a learning rate of $0.1$ and a maximum of $3000$ iterations across all
experiments, and initialize the temperature at $1$; we find that TS is fairly
robust to these hyperparameter choices.
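A minimal sketch of this fitting step in PyTorch, assuming precomputed calibration logits and labels as tensors (the function and variable names are illustrative, not from an existing codebase):

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, lr=0.1, max_iter=3000):
    """Fit the temperature tau on the calibration split by minimizing the NLL.

    logits: (N, C) uncalibrated logits f(X); labels: (N,) integer class labels.
    """
    tau = torch.ones(1, requires_grad=True)  # initialized at 1, as above
    opt = torch.optim.LBFGS([tau], lr=lr, max_iter=max_iter,
                            line_search_fn="strong_wolfe")

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits / tau, labels)  # NLL of Softmax(f(X)/tau)
        loss.backward()
        return loss

    opt.step(closure)  # LBFGS runs up to max_iter iterations inside one step()
    return tau.item()

# At inference: probs = torch.softmax(test_logits / tau, dim=-1)
```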
### D.2 Atypicality-Aware Recalibration
Here we describe the implementation details for Atypicality-Aware
Recalibration (AAR). We formulate AAR as
$\log{\hat{\mathbb{P}}_{\textrm{AAR}}(Y|X)}\propto{\phi(a(X))}\log{\hat{\mathbb{P}}(Y|X)}+S_{Y},$
(10)
In total, this gives us $|\mathcal{Y}|+3$ parameters. Using exactly the same
setting as TS, we use LBFGS with strong Wolfe line search to optimize these
parameters, with the same splits as temperature scaling. We normalize the
atypicality values (subtract the mean and divide by the standard deviation of
the calibration set) for numerical stability. We use the same hyperparameters
as TS (with $0.1$ learning rate and $3000$ maximum iterations) without any
modification across all experiments, and initialize $c_{0},c_{1},c_{2}$ to $0$
and the $S_{Y}$ parameters to $1$. We run the recalibration procedure on a CPU with
precomputed logits.
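A sketch of the corresponding fitting step; for concreteness we assume $\phi(a)=c_{0}+c_{1}a+c_{2}a^{2}$ in the normalized atypicality, matching the parameter count above, though this exact parametrization of $\phi$ is our assumption here:

```python
import torch

def fit_aar(log_probs, labels, atypicality, lr=0.1, max_iter=3000):
    """Fit AAR (Eq. 10) on the calibration split.

    log_probs:   (N, C) log predicted probabilities log P(Y|X).
    labels:      (N,) integer labels.
    atypicality: (N,) atypicality scores, already normalized with the
                 calibration-set mean and standard deviation.
    Assumes phi(a) = c0 + c1*a + c2*a**2 (illustrative parametrization).
    """
    num_classes = log_probs.shape[1]
    c = torch.zeros(3, requires_grad=True)           # c0, c1, c2 initialized to 0
    s = torch.ones(num_classes, requires_grad=True)  # S_Y initialized to 1
    opt = torch.optim.LBFGS([c, s], lr=lr, max_iter=max_iter,
                            line_search_fn="strong_wolfe")
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        phi = c[0] + c[1] * atypicality + c[2] * atypicality**2  # (N,)
        scores = phi.unsqueeze(1) * log_probs + s                # unnormalized log-probs
        loss = nll(scores, labels)  # softmax inside renormalizes, matching Eq. 10
        loss.backward()
        return loss

    opt.step(closure)
    return c.detach(), s.detach()
```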
#### D.2.1 Adaptive Temperature Scaling
For AdaTS [30], we use the implementation provided with the paper
(https://github.com/thwjoy/adats). We use exactly the hyperparameters and the
architecture provided in the paper and their repository: an encoder and
decoder with $[1024,512,512]$ hidden units each, and a temperature predictor
network with $[128,128]$ hidden units, trained with an Adam optimizer with a
learning rate of $5\times 10^{-4}$ and a batch size of $128$.
### D.3 Conformal Prediction
We follow the presentation in [1, 2]. Let $\pi(X)$ be the permutation of
$\mathcal{Y}=\\{1,\ldots,C\\}$ that sorts $\hat{\mathbb{P}}(Y=c|X)$, the
predicted probabilities for each class $c$, in decreasing order. We define a
score function
$s(x,y)=\sum_{j=1}^{c}\hat{\mathbb{P}}(Y=\pi_{j}|X),\textup{ where }y=\pi_{c}.$ (11)
This amounts to greedily including classes until the set contains the true
label, and using the cumulative sum of the probabilities as the score. We
compute all of the scores for the calibration set,
$S_{\textrm{calib}}=\\{s(x_{1},y_{1}),...,s(x_{N},y_{N})\\}$, and take the
$\frac{\lceil(N+1)(1-\alpha)\rceil}{N}$th quantile of the scores, $\hat{q}$.
Then, the prediction set is defined as
$\mathcal{C}(x)=\\{y:s(x,y)\leq\hat{q}\\}$ (12)
The procedure can be further randomized, using a prediction set function
$\mathcal{C}(x,u):\mathcal{X}\times[0,1]\to 2^{\mathcal{Y}}$, in order to
satisfy exact coverage. We refer to [68, 2, 1] for a more thorough
presentation.
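A sketch of the (non-randomized) APS procedure described above, assuming predicted probabilities and labels as NumPy arrays (names are illustrative):

```python
import numpy as np

def aps_calibrate(probs, labels, alpha=0.05):
    """Compute the APS threshold q_hat on the calibration split (Eq. 11).

    probs: (N, C) predicted probabilities; labels: (N,) integer labels.
    """
    n = len(labels)
    order = np.argsort(-probs, axis=1)                 # classes by decreasing probability
    sorted_probs = np.take_along_axis(probs, order, axis=1)
    cumsum = np.cumsum(sorted_probs, axis=1)
    rank = np.where(order == labels[:, None])[1]       # position of the true label
    scores = cumsum[np.arange(n), rank]                # cumulative mass up to the true label
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, level, method="higher")  # method= requires numpy >= 1.22

def aps_predict(p, q_hat):
    """Prediction set (Eq. 12): include classes greedily until mass reaches q_hat."""
    order = np.argsort(-p)
    k = np.searchsorted(np.cumsum(p[order]), q_hat) + 1
    return order[:min(k, len(p))]
```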
RAPS is a variant of APS that regularizes the set sizes by adding a
regularization term to the scoring function. This is controlled by the
set-size offset $k_{reg}$, which determines the rank beyond which the
regularization is applied, and $\lambda_{reg}$, which gives the strength of
the regularization. To fit the $k_{reg},\lambda_{reg}$ parameters in RAPS, we
follow the procedure in [2]. Namely, we fit $k_{reg}$ by Algorithm 4 in their
paper, which leverages the set sizes in the calibration set, and we fit
$\lambda_{reg}$ as the largest regularization parameter that achieves the
smallest set sizes, searched over the grid $\\{0.001,0.01,0.1,0.2,0.5\\}$
following their presentation.
### D.4 Atypicality-Aware Conformal Prediction
We use a simple discrete grouping scheme to make conformal prediction
atypicality-aware. Namely, we group points using their atypicality and
confidence percentiles and fit individual thresholds. Concretely, we construct
a dataset
$\mathcal{D}_{AA}=\\{(c_{i},c_{i+1}],(a_{j},a_{j+1}],\hat{q}_{i,j}\\}_{i,j\in[N]}$
using the calibration set, where $(c_{i},c_{i+1}]$ denotes the confidence range
for quantile $i$, $(a_{j},a_{j+1}]$ denotes the atypicality range for quantile
$j$, and $\hat{q}_{i,j}$ denotes the threshold fitted to the group specified by
these intervals. We let $N=6$ be the number of groups per axis, so in total we
end up with $36$ thresholds. At test time, we look up the confidence and
atypicality quantiles of a point and use the corresponding threshold. For
AA-RAPS, we use the same $k_{reg}$ and $\lambda_{reg}$ values as were found
with the RAPS procedure. For practical purposes, we do not allow empty sets
(prediction sets always include at least the top prediction).
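A sketch of this grouping scheme, assuming precomputed conformal scores, confidences, and atypicality values on the calibration split; the default value for empty cells is our addition:

```python
import numpy as np

def fit_group_thresholds(scores, conf, atyp, n_groups=6, alpha=0.05):
    """Fit one threshold per (confidence, atypicality) quantile cell;
    6 x 6 groups give the 36 thresholds described above.

    scores: (N,) conformal scores (e.g., APS scores); conf, atyp: (N,) each.
    """
    edges = np.linspace(0, 1, n_groups + 1)
    c_edges, a_edges = np.quantile(conf, edges), np.quantile(atyp, edges)
    c_bin = np.clip(np.searchsorted(c_edges, conf, side="right") - 1, 0, n_groups - 1)
    a_bin = np.clip(np.searchsorted(a_edges, atyp, side="right") - 1, 0, n_groups - 1)
    q_hat = np.ones((n_groups, n_groups))  # fallback threshold 1.0 for empty cells
    for i in range(n_groups):
        for j in range(n_groups):
            s = scores[(c_bin == i) & (a_bin == j)]
            if len(s) > 0:
                level = min(np.ceil((len(s) + 1) * (1 - alpha)) / len(s), 1.0)
                q_hat[i, j] = np.quantile(s, level, method="higher")
    return (c_edges, a_edges), q_hat

# At test time, bin a point's confidence and atypicality with the same edges
# and apply q_hat[i, j] in place of the single APS threshold.
```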
We remark that the marginal coverage can sometimes exceed the desired value
(e.g., Figure 9). This is often because the underlying model is already very
confident for a majority of data points (e.g., more than half of the data
points have $92\%$ confidence). The gains we provide are concentrated in the
lower-confidence regions, where coverage is otherwise not satisfied.
Figure 8: Atypicality-Aware Conformal Prediction for ResNet18 and ImageNet. Target coverage rate is $95\%$.
Figure 9: Atypicality-Aware Conformal Prediction for ResNet152 and ImageNet. Target coverage rate is $95\%$.
Figure 10: Fitted Temperature vs. Atypicality. We observe a monotonically increasing relationship between the atypicality of a group and the temperature parameter fitted to that group with TS.
Figure 11: Fitted Additive Correction Factor vs. Class Atypicality. We observe a monotonically increasing relationship between the atypicality of a class and the additive correction parameter fitted to that class with AAR.
Figure 12: Atypicality-Aware Conformal Prediction for ResNet50 and ImageNet. Target coverage rate is $95\%$.
## Appendix E Tables for Results
Here, we present the table version of the results in Figure 3. Tables 1 and 2
contain the ECE analysis.
## Appendix F Fitzpatrick17k and Skin Lesion Classification
We use the training script from [18] to finetune models on the Fitzpatrick17k
dataset. We train the models for $50$ epochs, fixing the backbone and training
only the probe on top of the penultimate layer. The probe consists of $2$
layers, one layer of $256$ units followed by ReLU and Dropout with probability
$0.4$, followed by the classifier layer with an output dimensionality of $9$.
We use an Adam optimizer with a $0.0001$ learning rate.
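A sketch of this probe in PyTorch; the feature dimensionality `feat_dim` depends on the backbone and is a placeholder here:

```python
import torch
import torch.nn as nn

feat_dim = 2048  # penultimate width of the frozen backbone (placeholder value)

# Two-layer probe on top of the frozen backbone's penultimate features.
probe = nn.Sequential(
    nn.Linear(feat_dim, 256),  # hidden layer of 256 units
    nn.ReLU(),
    nn.Dropout(p=0.4),
    nn.Linear(256, 9),         # classifier over the 9 label categories
)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-4)
```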
The entire dataset consists of $16,577$ images, where the potential labels
are: $10,886$ inflammatory, $1,352$ malignant epidermal, $1,194$
genodermatoses, $1,067$ benign dermal, $931$ benign epidermal, $573$ malignant
melanoma, $236$ benign melanocyte, $182$ malignant cutaneous lymphoma, and
$156$ malignant dermal. We split the dataset into 3 sets (Training ($0.5$),
Validation ($0.25$), and Test ($0.25$)). We use the validation set as the
calibration set and perform the experiments with 10 random splits.
## Appendix G Proofs
### G.1 Detailed derivation of the claim on Page 5
When $\mathbb{P}_{1}(X)\leq 1/2$, the signed calibration error at level
$u\in(1/2,1)$ becomes
$u-\mathbb{P}(Y=-1\mid\hat{\mathbb{P}}_{-1}(X)=u)=u-\mathbb{P}(Y=-1\mid\hat{\mathbb{P}}_{1}(-X)=u)=u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u).$
The last equality is due to symmetry. More specifically, we claim
$(X,Y)\stackrel{{\scriptstyle d}}{{=}}(-X,-Y)$, where the notation
$\stackrel{{\scriptstyle d}}{{=}}$ denotes equal in distribution. In fact, as
$X\stackrel{{\scriptstyle d}}{{=}}-X$, it suffices to show that for any
$y\in\\{-1,1\\}$, and $x\in\mathbb{R}^{d}$, we have
$\mathbb{P}(Y=y\mid X=x)=\mathbb{P}(-Y=y\mid-X=x).$
When $y=-1$, the right hand side satisfies
$\mathbb{P}(-Y=-1\mid-X=x)=\mathbb{P}(Y=1\mid X=-x)=\sigma(\langle\beta^{*},-x\rangle)=1-\sigma(\langle\beta^{*},x\rangle)=1-\mathbb{P}(Y=1\mid X=x)=\mathbb{P}(Y=-1\mid X=x).$
Similarly, when $y=1$,
$\mathbb{P}(-Y=1\mid-X=x)=\mathbb{P}(Y=-1\mid X=-x)=1-\sigma(\langle\beta^{*},-x\rangle)=\sigma(\langle\beta^{*},x\rangle)=\mathbb{P}(Y=1\mid X=x).$
We complete the proof.
### G.2 Proof of Theorem 3.1
###### Theorem G.1 (Restatement of Theorem 3.1).
Consider the data generative model with the algorithm described in Section
3.2. For any $K>1$, suppose we consider the quantiles of $a(X)$,
$a_{1},a_{2},...,a_{K},a_{K+1}$ such that
$\mathbb{P}(a(X)\in(a_{k},a_{k+1}])=1/K$ for $k\in[K]$. In addition, we assume
$\|\beta^{*}\|\leq c_{0}$, and $d/n=\kappa$ for some sufficiently small
$c_{0},\kappa>0$. Then for sufficiently large $n$, we have
$\mathbb{E}_{u}[u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\mid a(X)\in[a_{k-1},a_{k}]]<\mathbb{E}_{u}[u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\mid a(X)\in(a_{k},a_{k+1}]],$
for $k=2,\ldots,K$.
###### Proof.
Following [7], we have
$u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)=u-\mathbb{E}_{Z}\left[\sigma\left(\frac{\|\beta^{*}\|}{\|\hat{\beta}\|}\cos\hat{\theta}\cdot\sigma^{-1}(u)+\sin\hat{\theta}\cdot\|\beta^{*}\|Z\right)\right],$
where
$\cos\hat{\theta}=\frac{\hat{\beta}^{\top}\beta^{*}}{\|\hat{\beta}\|\cdot\|\beta^{*}\|}$
and $Z\sim N(0,1)$.
According to the results in Section 2.2 of [62], we have $\|\hat{\beta}\|\to
R^{*}=R^{*}(\kappa,\beta^{*})$ and $\cos\hat{\theta}\to
c^{*}=c^{*}(\kappa,\beta^{*})$, for two quantities $R^{*}$ and $c^{*}$ that
depend on $\kappa$ and $\beta^{*}$. We then have
$u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\to u-\mathbb{E}_{Z}\left[\sigma\left(\frac{\|\beta^{*}\|}{R^{*}}c^{*}\cdot\sigma^{-1}(u)+\sqrt{1-c^{*2}}\cdot\|\beta^{*}\|Z\right)\right].$
Using the proof of Theorem 3 in [7], we have that
$u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)=C_{\kappa}(u)\cdot\kappa+o(\kappa),$
where
$C_{\kappa}(u)=c_{1}\sigma^{\prime}(\sigma^{-1}(u))\cdot\sigma^{-1}(u)-c_{2}\sigma^{\prime\prime}(\sigma^{-1}(u)),$
for two positive constants $c_{1},c_{2}$.
As a result, for $u>1/2$ we have
$\displaystyle u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\geq 0.$ (13)
In addition, since $z\cdot\sigma^{\prime}(z)$ and $-\sigma^{\prime\prime}(z)$
are both increasing for $z\in[-1,1]$, $C_{\kappa}(u)$ is increasing in $u$ for
$\hat{\beta}^{\top}x=\sigma^{-1}(u)\in(-1,1)$.
##### Proving the result for $\\{k=2,\ldots,K-1\\}$
In addition, by our model assumption $x\sim N(0,I_{d})$, $\|x\|$ and
$\frac{x}{\|x\|}$ are independent, and $\frac{x}{\|x\|}\sim S$, where $S$ is
the uniform distribution on the unit sphere in $\mathbb{R}^{d}$. Since
monotonic transformations do not change the events defined by quantiles, and
the likelihood $\exp(-\|x\|^{2}/2)$ is monotonic in $\|x\|$, for simplicity of
presentation we use $a(X)=\|X\|$ in the rest of this proof. As a result, given
$\|x\|=a$, we have
$\hat{\beta}^{\top}x\mid\|x\|=a\stackrel{{\scriptstyle
d}}{{=}}a\cdot\hat{\beta}^{\top}S=a\cdot\|\hat{\beta}\|\cdot S_{1},$
where $S_{1}$ is the first coordinate of $S$.
Consequently, if we further condition on the event where
$\hat{\beta}^{\top}x>0$ (as we assume $u>0$ throughout Section 3.2), we have
$\hat{\beta}^{\top}x\stackrel{{\scriptstyle d}}{{=}}a\cdot\|\hat{\beta}\|\cdot
S_{1}\mid S_{1}>0\stackrel{{\scriptstyle
d}}{{=}}a\cdot\|\hat{\beta}\|\cdot\frac{Z_{1}}{\sqrt{Z_{1}^{2}+Q}}\to a\cdot
R^{*}\cdot\frac{Z_{1}}{\sqrt{Z_{1}^{2}+Q}},$
where $Q\sim\chi_{p-1}^{2}$, $Z_{1}\sim N(0,1)$ and they are independent.
Due to the monotonicity of $C_{\kappa}(u)$ in $u$, we have that for any
$a_{1}>a_{2}$,
$C_{\kappa}(u)\mid\|x\|=a_{1}\stackrel{{\scriptstyle
d}}{{>}}C_{\kappa}(u)\mid\|x\|=a_{2},$
where the notation $\stackrel{{\scriptstyle d}}{{>}}$ denotes stochastic
dominance.
Consequently, we have
$\mathbb{E}_{u}[u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\mid
a(X)\in[a_{k-1},a_{k}]]<\mathbb{E}_{u}[u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\mid
a(X)\in(a_{k},a_{k+1}]],$
for $k=2,\ldots,K-1$.
##### Proving the result for $k=K$
To complete the proof, it suffices to show that the inequality also holds for
the $K$th quantile:
$\mathbb{E}_{u}[u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\mid a(X)\in[a_{K-1},a_{K}]]<\mathbb{E}_{u}[u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\mid a(X)\in(a_{K},a_{K+1}]],$
which is equivalent to
$\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[a_{K-1},a_{K}]\\}]<\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in(a_{K},a_{K+1}]\\}].$
In the above inequality, the right hand side can be decomposed into
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in(a_{K},a_{K+1}]\\}]$
$\displaystyle=$
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[a_{K},2p]\\}]$
$\displaystyle+\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[2p,a_{K+1}]\\}].$
Denote the $\alpha$ quantile of $\chi_{p}^{2}$ by $\chi_{\alpha,p}^{2}$. We
then have $a_{k}=\chi_{\frac{k}{K+1},p}^{2}$. We further decompose the first
term as
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[a_{K},2p]\\}]$
$\displaystyle=$
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[a_{K},\chi_{\frac{K+\delta}{K+1},p}^{2}]\\}]$
$\displaystyle+\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[\chi_{\frac{K+\delta}{K+1},p}^{2},2p]\\}].$
In the following, we proceed to prove
$\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[\chi_{\frac{K+\delta}{K+1},p}^{2},2p]\\}]>\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[a_{K-1},a_{K}]\\}].$
(14)
We now use the classical approximation of the chi-square quantile: as
$p\to\infty$, we have
$a_{K}=\frac{1}{2}(z_{\frac{K}{K+1}}+\sqrt{2p})^{2}+o(1),\text{ and
}\chi_{\frac{K+\delta}{K+1},p}^{2}=\frac{1}{2}(z_{\frac{K+\delta}{K+1}}+\sqrt{2p})^{2}+o(1),$
where $z_{\alpha}$ denotes the $\alpha$-quantile of a standard normal random
variable.
Then
$\chi_{\frac{K+\delta}{K+1},p}^{2}-a_{K}=\frac{1}{2}(z_{\frac{K+\delta}{K+1}}-z_{\frac{K}{K+1}})(z_{\frac{K+\delta}{K+1}}+z_{\frac{K}{K+1}}+2\sqrt{2p}).$
Using the fact that $z_{1-\frac{1}{K}}=\sqrt{2\log K}+o(1)$ as $K\to\infty$,
we have
$z_{\frac{K+\delta}{K+1}}-z_{\frac{K}{K+1}}=\frac{-\log(1-\delta)}{\sqrt{2\log K}}+o(1).$
In addition, for any $a\in[\chi_{\frac{K+\delta}{K+1},p}^{2},2p]$ and
$a^{\prime}\in[a_{K-1},a_{K}]$, we have
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid
a(X)=a]-\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid
a(X)=a^{\prime}]$ $\displaystyle\geq$ $\displaystyle
C(z_{\frac{K+\delta}{K+1}}-z_{\frac{K}{K+1}}),$
for some universal constant $C$.
Therefore
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid
a(X)\in[\chi_{\frac{K+\delta}{K+1},p}^{2},2p]]-\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid
a(X)\in[a_{K-1},a_{K}]]$ $\displaystyle\geq$ $\displaystyle
C(z_{\frac{K+\delta}{K+1}}-z_{\frac{K}{K+1}}).$
Then
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[\chi_{\frac{K+\delta}{K+1},p}^{2},2p]\\}]$
$\displaystyle=$
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid
a(X)\in[\chi_{\frac{K+\delta}{K+1},p}^{2},2p]]\cdot\mathbb{P}(a(X)\in[\chi_{\frac{K+\delta}{K+1},p}^{2},2p])$
$\displaystyle\geq$
$\displaystyle\left(\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid
a(X)\in[a_{K-1},a_{K}]]+C(z_{\frac{K+\delta}{K+1}}-z_{\frac{K}{K+1}})\right)\cdot(\frac{1}{K}-\frac{\delta}{K+1}+o(\frac{\delta}{K+1}))$
$\displaystyle=$
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[a_{K-1},a_{K}]\\}]$
$\displaystyle+C(z_{\frac{K+\delta}{K+1}}-z_{\frac{K}{K+1}})-(1+o(1))\frac{\delta}{K+1}\cdot\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid
a(X)\in[a_{K-1},a_{K}]].$
The last equality uses the fact that $\mathbb{P}(a(X)\in[a_{K-1},a_{K}])=1/K,$
and therefore
$\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid a(X)\in[a_{K-1},a_{K}]]\cdot\frac{1}{K}=\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[a_{K-1},a_{K}]\\}].$
We then use the fact that
$|\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid a(X)\in[a_{K-1},a_{K}]]|=O(1)$
and choose $\delta=o(1/\log K)$, so that
$\frac{\delta}{K}=o\left(\left|\frac{\log(1-\delta)}{\sqrt{\log K}}\right|\right).$
Consequently,
$C(z_{\frac{K+\delta}{K+1}}-z_{\frac{K}{K+1}})-(1+o(1))\frac{\delta}{K+1}\cdot\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\mid
a(X)\in[a_{K-1},a_{K}]]>0,$
which implies
$\displaystyle\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[\chi_{\frac{K+\delta}{K+1},p}^{2},2p]\\}]>\mathbb{E}_{u}[(u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u))\cdot\bm{1}\\{a(X)\in[a_{K-1},a_{K}]\\}].$
Combining with equation 13, we prove equation 14 and complete the proof. ∎
### G.3 Theoretical Justification of the calibration improvement using the
atypicality score
In this section, we provide theoretical justification for why incorporating
the atypicality score improves calibration. In particular, we consider the
binary classification problem with a predictor $f:\mathcal{X}\to[0,1]$ giving
the predicted probability of $Y=1$ given $X=x$.
For a predictor $f$, let us denote its conditional calibration error at an
atypicality level $\gamma$ by
$\textrm{CE}_{\gamma}(f)=\mathbb{E}[(f(X)-\mathbb{E}[Y|f(X)])^{2}|a(X)=\gamma].$
###### Theorem G.2.
Consider the same setting as Theorem 3.1. Suppose the temperature function
$\hat{\tau}(a(X))=\operatorname*{arg\,min}_{\tau}\mathbb{E}[l(Y,\textup{Softmax}(f(X)/\tau(a(X))))]$
with $l$ being the cross entropy loss, and let
$\hat{\mathbb{P}}_{\textup{AAR}}(X)=\textup{Softmax}(f(X)/\hat{\tau}(a(X)))$.
Then
$\textup{CE}_{\gamma}(\hat{\mathbb{P}}_{\textup{AAR}})\leq\min\\{\textup{CE}_{\gamma}(\hat{\mathbb{P}}_{\textup{TS}}),\textup{CE}_{\gamma}(f)\\}.$
(15)
Proof: For a prediction function $f$, we first define the conditional mean
squared error of $f$ at an atypicality level $\gamma$ by
$\textup{MSE}_{\gamma}(f)=\mathbb{E}[(f(X)-Y)^{2}\mid a(X)=\gamma]$. Writing
$m(X)=\mathbb{E}[Y\mid f(X),a(X)=\gamma]$ for brevity, we then have
$\displaystyle\textup{MSE}_{\gamma}(f)-CE_{\gamma}(f)=$
$\displaystyle\mathbb{E}[(f(X)-Y)^{2}\mid a(X)=\gamma]-\mathbb{E}[(f(X)-m(X))^{2}\mid a(X)=\gamma]$
$\displaystyle=$
$\displaystyle\mathbb{E}[(m(X)-Y)\cdot(2f(X)-m(X)-Y)\mid a(X)=\gamma]$
$\displaystyle=$
$\displaystyle\mathbb{E}[(m(X)-Y)^{2}\mid a(X)=\gamma]+2\,\mathbb{E}[(m(X)-Y)\cdot(f(X)-m(X))\mid a(X)=\gamma].$
Since
$\mathbb{E}[Y\,m(X)\mid a(X)=\gamma]=\mathbb{E}_{f(X)\mid a(X)=\gamma}\big[\mathbb{E}[Y\,m(X)\mid f(X),a(X)=\gamma]\big]=\mathbb{E}[m(X)^{2}\mid a(X)=\gamma],$
we have
$\mathbb{E}[(m(X)-Y)\cdot(f(X)-m(X))\mid a(X)=\gamma]=0,$
and therefore
$\textup{MSE}_{\gamma}(f)-CE_{\gamma}(f)=\mathbb{E}[(m(X)-Y)^{2}\mid a(X)=\gamma].$
Since $\hat{\mathbb{P}}_{AAR}(f(x),a(x))$ is monotonic in $f(x)$ for each
fixed value of $a(x)$, we have
$\mathbb{E}[Y\mid f(x),a(X)=\gamma]=\mathbb{E}[Y\mid\hat{\mathbb{P}}_{AAR}(f(x),a(X)),a(X)=\gamma],$
implying
$\textup{MSE}_{\gamma}(\hat{\mathbb{P}}_{AAR})-CE_{\gamma}(\hat{\mathbb{P}}_{AAR})=\textup{MSE}_{\gamma}(\hat{\mathbb{P}})-CE_{\gamma}(\hat{\mathbb{P}}).$
(16)
Similarly, we have
$\textup{MSE}_{\gamma}(\hat{\mathbb{P}}_{TS})-CE_{\gamma}(\hat{\mathbb{P}}_{TS})=\textup{MSE}_{\gamma}(\hat{\mathbb{P}})-CE_{\gamma}(\hat{\mathbb{P}}).$
(17)
In the following, we will show that
$\textup{MSE}_{\gamma}(\hat{\mathbb{P}}_{AAR})\leq\min\\{\textup{MSE}_{\gamma}(\hat{\mathbb{P}}_{TS}),\textup{MSE}_{\gamma}(\hat{\mathbb{P}})\\}.$
(18)
First, as we consider the binary classification setting with $l$ being the
cross-entropy loss, we have
$l(Y,\textup{Softmax}(f(X)/\tau(a(X))))=-\big[Y\log(\sigma(f(X)/\tau(a(X))))+(1-Y)\log(1-\sigma(f(X)/\tau(a(X))))\big],$
where $\sigma(x)=1/(1+e^{-x})$.
Then, by the definition of $\hat{\tau}(a(X))$, we have
$\displaystyle\hat{\tau}(a(X))=$
$\displaystyle\operatorname*{arg\,min}_{\tau}\,-\mathbb{E}\big[Y\log(\sigma(f(X)/\tau(a(X))))+(1-Y)\log(1-\sigma(f(X)/\tau(a(X))))\big]$
$\displaystyle=$
$\displaystyle\operatorname*{arg\,min}_{\tau}\,-\mathbb{E}\big[\mathbb{E}[Y\log(\sigma(f(X)/\tau(a(X))))+(1-Y)\log(1-\sigma(f(X)/\tau(a(X))))\mid a(X)]\big].$
Taking the derivative of the last line and setting it to zero, we have
$\mathbb{E}\Big[\frac{Y}{\sigma(f(X)/\hat{\tau}(a(X)))}-\frac{1-Y}{1-\sigma(f(X)/\hat{\tau}(a(X)))}\;\Big|\;a(X)\Big]=0,$
implying
$\mathbb{E}[\sigma(f(X)/\hat{\tau}(a(X)))\mid a(X)]=\mathbb{E}[Y\mid a(X)].$
This also makes the derivative of
$\mathbb{E}[(Y-\sigma(f(X)/\tau(a(X))))^{2}\mid a(X)]$ vanish at $\hat{\tau}$,
and therefore $\hat{\tau}(a(X))$ is also a minimizer of
$\mathbb{E}[(Y-\sigma(f(X)/\tau(a(X))))^{2}\mid a(X)]$:
$\hat{\tau}(a(X))=\operatorname*{arg\,min}_{\tau}\,-\mathbb{E}\big[Y\log(\sigma(f(X)/\tau(a(X))))+(1-Y)\log(1-\sigma(f(X)/\tau(a(X))))\big]=\operatorname*{arg\,min}_{\tau}\mathbb{E}[(Y-\sigma(f(X)/\tau(a(X))))^{2}].$
Letting
$g(\gamma)=\operatorname*{arg\,min}_{c}\mathbb{E}[(Y-\sigma(f(X)/c))^{2}\mid a(X)=\gamma]$,
we have that
$g(a(X))=\operatorname*{arg\,min}_{\tau}\mathbb{E}[(Y-\sigma(f(X)/\tau(a(X))))^{2}\mid a(X)],$
and therefore
$g(a(X))=\operatorname*{arg\,min}_{\tau}\mathbb{E}\big[\mathbb{E}[(Y-\sigma(f(X)/\tau(a(X))))^{2}\mid a(X)]\big]=\hat{\tau}(a(X)).$
As a result,
$\displaystyle\textup{MSE}_{\gamma}(\hat{\mathbb{P}}_{AAR})=$
$\displaystyle\mathbb{E}[(\hat{\mathbb{P}}_{AAR}(X)-Y)^{2}\mid a(X)=\gamma]$
$\displaystyle=$
$\displaystyle\mathbb{E}[(\sigma(f(X)/\hat{\tau}(a(X)))-Y)^{2}\mid a(X)=\gamma]$
$\displaystyle=$
$\displaystyle\mathbb{E}[(\sigma(f(X)/g(a(X)))-Y)^{2}\mid a(X)=\gamma]$
$\displaystyle=$
$\displaystyle\min_{c}\mathbb{E}[(\sigma(f(X)/c)-Y)^{2}\mid a(X)=\gamma]$
$\displaystyle\leq$
$\displaystyle\mathbb{E}[(\sigma(f(X))-Y)^{2}\mid a(X)=\gamma]=\textup{MSE}_{\gamma}(\hat{\mathbb{P}}).$
Similarly, we have
$\textup{MSE}_{\gamma}(\hat{\mathbb{P}}_{AAR})\leq\textup{MSE}_{\gamma}(\hat{\mathbb{P}}_{TS})$,
and therefore equation 18 holds.
Combining with equation 16 and equation 17, we have
$CE_{\gamma}(\hat{\mathbb{P}}_{AAR})\leq\min\\{CE_{\gamma}(\hat{\mathbb{P}}_{TS}),CE_{\gamma}(\hat{\mathbb{P}})\\}.$ ∎
Table 4: Conformal Calibration with Atypicality-Awareness. Rows are indexed by input atypicality group and confidence group; each cell reports Coverage / SetSize (mean ± standard error).

ResNet152, ImageNet
| Atyp. | Conf. | APS | AA-APS | RAPS | AA-RAPS |
|---|---|---|---|---|---|
| 1 | 1 | 0.982±0.002 / 51.192±1.570 | 0.970±0.003 / 32.694±3.517 | 0.951±0.004 / 7.168±0.127 | 0.963±0.004 / 14.408±2.275 |
| 1 | 2 | 0.984±0.001 / 80.425±2.506 | 0.961±0.002 / 39.752±1.670 | 0.940±0.003 / 7.771±0.180 | 0.966±0.004 / 15.713±0.977 |
| 1 | 3 | 0.980±0.001 / 114.526±1.293 | 0.955±0.004 / 55.887±2.830 | 0.885±0.005 / 8.485±0.217 | 0.952±0.004 / 25.191±1.349 |
| 1 | 4 | 0.979±0.001 / 118.819±1.787 | 0.949±0.002 / 60.998±1.293 | 0.833±0.005 / 8.677±0.235 | 0.949±0.002 / 33.256±0.812 |
| 1 | 5 | 0.975±0.001 / 111.696±1.090 | 0.953±0.002 / 66.884±1.347 | 0.807±0.006 / 8.801±0.242 | 0.948±0.002 / 42.109±0.829 |
| 1 | 6 | 0.958±0.002 / 90.283±1.460 | 0.949±0.002 / 77.404±1.682 | 0.777±0.005 / 8.565±0.229 | 0.946±0.003 / 52.214±1.401 |
| 2 | 1 | 0.961±0.004 / 16.767±0.389 | 0.953±0.004 / 14.533±1.036 | 0.977±0.002 / 5.279±0.035 | 0.954±0.005 / 3.888±0.102 |
| 2 | 2 | 0.964±0.002 / 18.359±0.530 | 0.951±0.003 / 12.902±0.814 | 0.974±0.001 / 5.801±0.046 | 0.950±0.003 / 4.484±0.079 |
| 2 | 3 | 0.961±0.001 / 21.409±0.413 | 0.948±0.002 / 14.636±0.318 | 0.966±0.002 / 5.964±0.051 | 0.952±0.003 / 5.172±0.112 |
| 2 | 4 | 0.964±0.002 / 23.162±0.321 | 0.956±0.003 / 20.246±0.634 | 0.966±0.002 / 6.253±0.081 | 0.962±0.004 / 6.485±0.112 |
| 2 | 5 | 0.953±0.002 / 24.833±0.486 | 0.952±0.003 / 25.470±0.829 | 0.938±0.002 / 6.331±0.092 | 0.953±0.003 / 9.446±0.332 |
| 2 | 6 | 0.929±0.003 / 21.124±0.290 | 0.943±0.004 / 32.341±1.343 | 0.909±0.003 / 6.409±0.106 | 0.952±0.002 / 16.970±0.828 |
| 3 | 1 | 0.957±0.002 / 6.843±0.249 | 0.954±0.004 / 5.860±0.354 | 0.984±0.001 / 4.839±0.046 | 0.951±0.004 / 3.015±0.098 |
| 3 | 2 | 0.963±0.003 / 6.654±0.202 | 0.965±0.003 / 6.689±0.406 | 0.986±0.001 / 5.274±0.044 | 0.966±0.003 / 3.494±0.071 |
| 3 | 3 | 0.951±0.002 / 7.609±0.195 | 0.955±0.003 / 8.542±0.472 | 0.968±0.001 / 5.435±0.042 | 0.951±0.003 / 4.052±0.097 |
| 3 | 4 | 0.938±0.002 / 8.415±0.205 | 0.955±0.002 / 12.476±0.349 | 0.961±0.002 / 5.728±0.048 | 0.951±0.002 / 5.085±0.085 |
| 3 | 5 | 0.945±0.002 / 8.500±0.204 | 0.955±0.003 / 12.539±0.815 | 0.964±0.002 / 5.724±0.050 | 0.956±0.003 / 5.531±0.154 |
| 3 | 6 | 0.932±0.004 / 8.009±0.199 | 0.953±0.004 / 13.942±0.657 | 0.944±0.003 / 5.744±0.048 | 0.953±0.003 / 7.718±0.745 |
| 4 | 1 | 0.958±0.001 / 1.480±0.034 | 0.962±0.003 / 1.828±0.131 | 0.985±0.001 / 3.810±0.132 | 0.962±0.003 / 1.613±0.078 |
| 4 | 2 | 0.953±0.002 / 1.388±0.021 | 0.958±0.002 / 1.913±0.118 | 0.982±0.001 / 3.816±0.141 | 0.957±0.002 / 1.692±0.070 |
| 4 | 3 | 0.931±0.002 / 1.451±0.028 | 0.947±0.003 / 2.699±0.146 | 0.967±0.002 / 3.920±0.148 | 0.951±0.003 / 2.237±0.071 |
| 4 | 4 | 0.935±0.001 / 1.450±0.020 | 0.958±0.003 / 3.488±0.224 | 0.976±0.002 / 3.880±0.128 | 0.957±0.003 / 2.577±0.117 |
| 4 | 5 | 0.928±0.002 / 1.424±0.018 | 0.951±0.003 / 5.173±0.391 | 0.963±0.002 / 4.160±0.127 | 0.952±0.002 / 3.185±0.077 |
| 4 | 6 | 0.902±0.003 / 1.476±0.028 | 0.958±0.005 / 8.900±0.726 | 0.955±0.004 / 3.925±0.130 | 0.959±0.004 / 4.235±0.163 |
| 5 | 1 | 0.988±0.001 / 1.000±0.000 | 0.988±0.001 / 1.000±0.000 | 0.989±0.001 / 1.344±0.116 | 0.988±0.001 / 1.000±0.000 |
| 5 | 2 | 0.982±0.001 / 1.000±0.000 | 0.982±0.001 / 1.000±0.000 | 0.984±0.001 / 1.363±0.122 | 0.982±0.001 / 1.000±0.000 |
| 5 | 3 | 0.976±0.001 / 1.000±0.000 | 0.976±0.001 / 1.001±0.001 | 0.980±0.002 / 1.351±0.122 | 0.976±0.001 / 1.001±0.001 |
| 5 | 4 | 0.977±0.001 / 1.000±0.000 | 0.977±0.001 / 1.020±0.009 | 0.980±0.001 / 1.363±0.109 | 0.977±0.001 / 1.004±0.003 |
| 5 | 5 | 0.965±0.001 / 1.000±0.000 | 0.965±0.001 / 1.137±0.055 | 0.968±0.002 / 1.342±0.115 | 0.965±0.001 / 1.063±0.028 |
| 5 | 6 | 0.970±0.002 / 1.000±0.000 | 0.972±0.003 / 1.607±0.125 | 0.973±0.003 / 1.339±0.107 | 0.973±0.003 / 1.539±0.080 |
| 6 | 1 | 0.992±0.000 / 1.000±0.000 | 0.992±0.000 / 1.000±0.000 | 0.992±0.000 / 1.000±0.000 | 0.992±0.000 / 1.000±0.000 |
| 6 | 2 | 0.989±0.001 / 1.000±0.000 | 0.989±0.001 / 1.000±0.000 | 0.989±0.001 / 1.000±0.000 | 0.989±0.001 / 1.000±0.000 |
| 6 | 3 | 0.992±0.000 / 1.000±0.000 | 0.992±0.000 / 1.000±0.000 | 0.992±0.000 / 1.000±0.000 | 0.992±0.000 / 1.000±0.000 |
| 6 | 4 | 0.995±0.001 / 1.000±0.000 | 0.995±0.001 / 1.000±0.000 | 0.995±0.001 / 1.000±0.000 | 0.995±0.001 / 1.000±0.000 |
| 6 | 5 | 0.989±0.001 / 1.000±0.000 | 0.989±0.001 / 1.000±0.000 | 0.989±0.001 / 1.000±0.000 | 0.989±0.001 / 1.000±0.000 |
| 6 | 6 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 |

ResNet18, ImageNet
| Atyp. | Conf. | APS | AA-APS | RAPS | AA-RAPS |
|---|---|---|---|---|---|
| 1 | 1 | 0.982±0.001 / 167.265±2.058 | 0.952±0.004 / 79.307±3.180 | 0.876±0.008 / 20.623±1.724 | 0.952±0.003 / 67.925±2.552 |
| 1 | 2 | 0.986±0.001 / 157.710±1.132 | 0.956±0.002 / 85.303±1.015 | 0.848±0.009 / 20.740±1.745 | 0.958±0.003 / 80.333±3.054 |
| 1 | 3 | 0.970±0.001 / 147.730±1.267 | 0.943±0.003 / 95.805±2.283 | 0.810±0.009 / 20.689±1.725 | 0.938±0.003 / 91.786±1.213 |
| 1 | 4 | 0.963±0.002 / 129.188±1.035 | 0.954±0.003 / 109.238±1.837 | 0.815±0.009 / 20.328±1.592 | 0.953±0.002 / 102.926±2.651 |
| 1 | 5 | 0.957±0.002 / 111.271±1.097 | 0.952±0.002 / 103.457±2.047 | 0.809±0.011 / 19.932±1.441 | 0.942±0.004 / 91.695±3.655 |
| 1 | 6 | 0.941±0.004 / 82.127±1.167 | 0.951±0.005 / 96.879±4.165 | 0.828±0.008 / 19.108±1.138 | 0.950±0.004 / 87.490±3.277 |
| 2 | 1 | 0.972±0.001 / 37.918±1.031 | 0.958±0.001 / 25.001±0.687 | 0.978±0.002 / 15.096±0.135 | 0.959±0.003 / 10.650±0.322 |
| 2 | 2 | 0.959±0.002 / 42.117±0.743 | 0.951±0.004 / 31.357±1.294 | 0.954±0.002 / 15.879±0.162 | 0.951±0.004 / 16.748±0.617 |
| 2 | 3 | 0.964±0.003 / 43.312±0.868 | 0.957±0.003 / 39.329±1.425 | 0.951±0.003 / 16.138±0.225 | 0.960±0.002 / 22.619±1.125 |
| 2 | 4 | 0.951±0.003 / 41.803±0.701 | 0.955±0.003 / 45.617±1.420 | 0.930±0.002 / 16.357±0.295 | 0.953±0.003 / 31.513±1.447 |
| 2 | 5 | 0.945±0.002 / 36.697±0.482 | 0.957±0.002 / 45.379±1.101 | 0.935±0.002 / 16.094±0.249 | 0.961±0.001 / 32.569±0.974 |
| 2 | 6 | 0.930±0.003 / 27.148±0.321 | 0.949±0.005 / 43.307±2.384 | 0.924±0.003 / 15.730±0.144 | 0.948±0.005 / 28.152±1.657 |
| 3 | 1 | 0.964±0.001 / 15.809±0.414 | 0.954±0.002 / 13.080±0.586 | 0.985±0.002 / 13.208±0.471 | 0.955±0.002 / 6.864±0.146 |
| 3 | 2 | 0.960±0.002 / 18.034±0.515 | 0.956±0.003 / 16.515±0.618 | 0.978±0.001 / 13.659±0.394 | 0.955±0.002 / 8.275±0.221 |
| 3 | 3 | 0.964±0.002 / 19.231±0.344 | 0.957±0.003 / 17.463±0.722 | 0.979±0.001 / 13.901±0.310 | 0.957±0.003 / 8.794±0.163 |
| 3 | 4 | 0.950±0.001 / 19.479±0.207 | 0.952±0.003 / 21.470±0.744 | 0.974±0.001 / 13.899±0.304 | 0.955±0.003 / 11.390±0.313 |
| 3 | 5 | 0.940±0.002 / 17.801±0.160 | 0.953±0.003 / 22.066±0.545 | 0.957±0.002 / 14.036±0.296 | 0.951±0.003 / 12.743±0.271 |
| 3 | 6 | 0.931±0.002 / 13.096±0.153 | 0.949±0.004 / 21.352±1.080 | 0.956±0.002 / 13.169±0.394 | 0.949±0.005 / 12.924±0.663 |
| 4 | 1 | 0.964±0.002 / 5.377±0.082 | 0.957±0.003 / 4.595±0.226 | 0.990±0.002 / 11.670±0.870 | 0.957±0.004 / 3.823±0.144 |
| 4 | 2 | 0.958±0.002 / 6.371±0.130 | 0.954±0.003 / 5.887±0.275 | 0.986±0.003 / 11.954±0.767 | 0.953±0.003 / 4.648±0.106 |
| 4 | 3 | 0.946±0.002 / 5.810±0.109 | 0.953±0.002 / 7.575±0.321 | 0.984±0.003 / 12.145±0.751 | 0.954±0.003 / 5.364±0.157 |
| 4 | 4 | 0.940±0.001 / 6.698±0.159 | 0.950±0.003 / 9.025±0.498 | 0.980±0.002 / 12.432±0.700 | 0.950±0.004 / 6.110±0.225 |
| 4 | 5 | 0.926±0.003 / 5.817±0.102 | 0.949±0.004 / 10.234±0.682 | 0.977±0.003 / 12.325±0.750 | 0.950±0.004 / 6.925±0.198 |
| 4 | 6 | 0.915±0.002 / 4.948±0.098 | 0.949±0.003 / 11.054±0.601 | 0.968±0.004 / 11.478±0.813 | 0.948±0.003 / 7.368±0.183 |
| 5 | 1 | 0.963±0.002 / 1.193±0.008 | 0.968±0.002 / 1.497±0.052 | 0.989±0.002 / 8.559±1.391 | 0.967±0.002 / 1.413±0.041 |
| 5 | 2 | 0.959±0.002 / 1.232±0.014 | 0.963±0.002 / 1.456±0.063 | 0.984±0.003 / 8.755±1.352 | 0.962±0.002 / 1.485±0.057 |
| 5 | 3 | 0.957±0.001 / 1.243±0.012 | 0.961±0.002 / 1.693±0.085 | 0.986±0.003 / 8.947±1.344 | 0.961±0.002 / 1.637±0.071 |
| 5 | 4 | 0.945±0.002 / 1.264±0.013 | 0.959±0.002 / 2.492±0.131 | 0.985±0.003 / 8.933±1.325 | 0.959±0.002 / 2.309±0.082 |
| 5 | 5 | 0.943±0.002 / 1.234±0.013 | 0.953±0.002 / 2.770±0.345 | 0.976±0.004 / 8.913±1.365 | 0.954±0.002 / 2.540±0.195 |
| 5 | 6 | 0.932±0.002 / 1.214±0.010 | 0.961±0.002 / 3.875±0.249 | 0.976±0.004 / 8.459±1.351 | 0.963±0.003 / 3.635±0.192 |
| 6 | 1 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 | 0.993±0.001 / 4.540±1.729 | 0.991±0.001 / 1.000±0.000 |
| 6 | 2 | 0.986±0.001 / 1.000±0.000 | 0.986±0.001 / 1.000±0.000 | 0.990±0.002 / 4.395±1.738 | 0.986±0.001 / 1.000±0.000 |
| 6 | 3 | 0.988±0.001 / 1.000±0.000 | 0.988±0.001 / 1.000±0.000 | 0.992±0.001 / 4.449±1.739 | 0.988±0.001 / 1.000±0.000 |
| 6 | 4 | 0.988±0.001 / 1.000±0.000 | 0.988±0.001 / 1.000±0.000 | 0.991±0.002 / 4.481±1.729 | 0.988±0.001 / 1.000±0.000 |
| 6 | 5 | 0.986±0.001 / 1.000±0.000 | 0.986±0.001 / 1.000±0.000 | 0.990±0.002 / 4.390±1.741 | 0.986±0.001 / 1.000±0.000 |
| 6 | 6 | 0.981±0.001 / 1.000±0.000 | 0.981±0.001 / 1.003±0.003 | 0.987±0.002 / 4.146±1.767 | 0.981±0.001 / 1.009±0.007 |

ResNet50, ImageNet
| Atyp. | Conf. | APS | AA-APS | RAPS | AA-RAPS |
|---|---|---|---|---|---|
| 1 | 1 | 0.980±0.002 / 80.190±1.594 | 0.955±0.004 / 50.203±2.778 | 0.945±0.002 / 9.874±0.169 | 0.964±0.002 / 16.099±0.593 |
| 1 | 2 | 0.986±0.002 / 118.839±1.737 | 0.955±0.003 / 52.066±2.094 | 0.896±0.003 / 10.681±0.209 | 0.950±0.003 / 25.545±0.845 |
| 1 | 3 | 0.979±0.001 / 134.702±0.697 | 0.951±0.003 / 60.743±1.012 | 0.848±0.004 / 11.056±0.228 | 0.940±0.002 / 35.518±0.870 |
| 1 | 4 | 0.976±0.001 / 126.892±0.939 | 0.952±0.003 / 79.627±2.106 | 0.835±0.004 / 11.079±0.230 | 0.953±0.004 / 51.339±1.445 |
| 1 | 5 | 0.964±0.001 / 114.559±0.975 | 0.954±0.003 / 93.609±3.277 | 0.797±0.005 / 11.011±0.232 | 0.942±0.003 / 63.441±1.422 |
| 1 | 6 | 0.949±0.002 / 88.034±0.799 | 0.950±0.003 / 88.418±1.898 | 0.806±0.005 / 10.744±0.221 | 0.957±0.002 / 62.049±1.862 |
| 2 | 1 | 0.961±0.002 / 17.461±0.533 | 0.952±0.004 / 14.757±0.785 | 0.976±0.002 / 7.080±0.116 | 0.956±0.004 / 5.205±0.127 |
| 2 | 2 | 0.967±0.002 / 24.896±0.732 | 0.958±0.002 / 20.892±1.441 | 0.965±0.002 / 8.013±0.106 | 0.958±0.002 / 6.879±0.191 |
| 2 | 3 | 0.961±0.001 / 29.101±0.759 | 0.947±0.003 / 21.754±1.123 | 0.963±0.001 / 8.134±0.126 | 0.956±0.003 / 8.773±0.321 |
| 2 | 4 | 0.958±0.002 / 30.078±0.379 | 0.956±0.003 / 28.199±0.951 | 0.946±0.002 / 8.398±0.129 | 0.951±0.003 / 11.264±0.347 |
| 2 | 5 | 0.944±0.001 / 29.401±0.357 | 0.946±0.002 / 32.362±1.180 | 0.937±0.003 / 8.559±0.143 | 0.952±0.002 / 15.040±0.547 |
| 2 | 6 | 0.939±0.002 / 22.100±0.488 | 0.954±0.004 / 33.527±1.955 | 0.922±0.003 / 8.269±0.135 | 0.955±0.001 / 18.645±0.283 |
| 3 | 1 | 0.958±0.002 / 9.385±0.202 | 0.953±0.002 / 7.633±0.346 | 0.988±0.001 / 6.471±0.129 | 0.955±0.002 / 3.820±0.076 |
| 3 | 2 | 0.951±0.003 / 9.619±0.194 | 0.949±0.005 / 8.739±0.516 | 0.977±0.002 / 6.919±0.118 | 0.950±0.004 / 4.648±0.082 |
| 3 | 3 | 0.957±0.002 / 11.449±0.274 | 0.957±0.003 / 10.853±0.446 | 0.975±0.001 / 7.163±0.112 | 0.954±0.004 / 5.130±0.102 |
| 3 | 4 | 0.953±0.001 / 10.809±0.223 | 0.951±0.002 / 10.546±0.396 | 0.972±0.001 / 7.600±0.113 | 0.952±0.003 / 5.872±0.235 |
| 3 | 5 | 0.947±0.002 / 11.507±0.228 | 0.951±0.003 / 14.754±0.650 | 0.966±0.003 / 7.414±0.114 | 0.953±0.004 / 6.874±0.157 |
| 3 | 6 | 0.920±0.003 / 9.150±0.188 | 0.951±0.005 / 20.041±1.624 | 0.944±0.002 / 7.143±0.101 | 0.950±0.002 / 9.505±0.576 |
| 4 | 1 | 0.955±0.002 / 1.739±0.025 | 0.953±0.002 / 1.667±0.070 | 0.985±0.002 / 5.088±0.196 | 0.954±0.002 / 1.574±0.038 |
| 4 | 2 | 0.948±0.002 / 1.958±0.016 | 0.950±0.002 / 1.964±0.083 | 0.983±0.001 / 5.285±0.189 | 0.951±0.003 / 1.845±0.061 |
| 4 | 3 | 0.936±0.003 / 2.082±0.034 | 0.955±0.002 / 4.255±0.202 | 0.976±0.002 / 5.495±0.164 | 0.957±0.003 / 3.423±0.123 |
| 4 | 4 | 0.936±0.003 / 2.173±0.045 | 0.955±0.003 / 4.508±0.303 | 0.973±0.002 / 5.573±0.170 | 0.952±0.002 / 3.013±0.110 |
| 4 | 5 | 0.928±0.002 / 2.071±0.041 | 0.953±0.003 / 8.483±0.789 | 0.968±0.002 / 5.720±0.167 | 0.951±0.002 / 4.338±0.138 |
| 4 | 6 | 0.895±0.004 / 1.932±0.026 | 0.950±0.005 / 10.274±1.147 | 0.950±0.003 / 5.379±0.151 | 0.948±0.004 / 5.270±0.220 |
| 5 | 1 | 0.979±0.001 / 1.000±0.000 | 0.979±0.001 / 1.001±0.001 | 0.985±0.001 / 1.918±0.222 | 0.979±0.001 / 1.000±0.000 |
| 5 | 2 | 0.982±0.001 / 1.000±0.000 | 0.982±0.001 / 1.015±0.008 | 0.985±0.001 / 1.880±0.221 | 0.982±0.001 / 1.019±0.007 |
| 5 | 3 | 0.964±0.002 / 1.000±0.000 | 0.964±0.002 / 1.126±0.028 | 0.971±0.002 / 1.977±0.222 | 0.965±0.002 / 1.178±0.045 |
| 5 | 4 | 0.972±0.002 / 1.000±0.000 | 0.973±0.002 / 1.186±0.047 | 0.978±0.002 / 1.958±0.205 | 0.973±0.002 / 1.156±0.050 |
| 5 | 5 | 0.968±0.002 / 1.000±0.000 | 0.971±0.002 / 1.417±0.102 | 0.974±0.002 / 1.911±0.222 | 0.971±0.002 / 1.293±0.067 |
| 5 | 6 | 0.946±0.002 / 1.000±0.000 | 0.957±0.002 / 1.842±0.143 | 0.956±0.003 / 1.833±0.193 | 0.957±0.002 / 1.763±0.112 |
| 6 | 1 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 |
| 6 | 2 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 | 0.991±0.001 / 1.000±0.000 |
| 6 | 3 | 0.994±0.001 / 1.000±0.000 | 0.994±0.001 / 1.000±0.000 | 0.994±0.001 / 1.000±0.000 | 0.994±0.001 / 1.000±0.000 |
| 6 | 4 | 0.992±0.001 / 1.000±0.000 | 0.992±0.001 / 1.000±0.000 | 0.992±0.001 / 1.000±0.000 | 0.992±0.001 / 1.000±0.000 |
| 6 | 5 | 0.992±0.001 / 1.000±0.000 | 0.992±0.001 / 1.000±0.000 | 0.992±0.001 / 1.000±0.000 | 0.992±0.001 / 1.000±0.000 |
| 6 | 6 | 0.987±0.001 / 1.000±0.000 | 0.987±0.001 / 1.000±0.000 | 0.987±0.001 / 1.000±0.000 | 0.987±0.001 / 1.000±0.000 |
Figure 13: Post-hoc Recalibration for Classification with 5-NN distance as an
Atypicality Metric. (a) Balanced Supervised Classification: Atypicality-Aware
Recalibration improves the calibration of models trained with balanced
datasets, across atypicality groups. (b) Imbalanced Supervised Classification:
Atypicality-Aware Recalibration improves both the calibration across groups
and the overall accuracy of models trained with imbalanced datasets with
5-nearest neighbors distance as an atypicality metric.
## Appendix H Limitations
### H.1 Quantifying Atypicality
Since we do not have access to the true distribution $\mathbb{P}(X)$, we
estimate it through the model, e.g., using the embeddings. This means we
capture atypicality not solely with respect to the training distribution but
also with respect to the model. It is possible that a model that does not fit
the data well produces low-quality atypicality estimates. _We would like to
stress that our goal here is to show that even simple estimators can
demonstrate significant benefits._ In general, we observe that our findings
hold for large datasets and widely used models, and atypicality gives a
semantically
# Extremal stability for configuration spaces
Ben Knudsen (Department of Mathematics, Northeastern University, USA;
<EMAIL_ADDRESS>), Jeremy Miller (Department of Mathematics, Purdue
University, USA; <EMAIL_ADDRESS>), and Philip Tosteson (Department of
Mathematics, University of Chicago, Chicago, IL; <EMAIL_ADDRESS>)
###### Abstract.
We study stability patterns in the high dimensional rational homology of
unordered configuration spaces of manifolds. Our results follow from a general
approach to stability phenomena in the homology of Lie algebras, which may be
of independent interest.
Ben Knudsen was supported in part by NSF grant DMS-1906174.
Jeremy Miller was supported in part by a Simons collaboration grant.
Philip Tosteson was supported in part by NSF grant DMS-1903040.
## 1\. Introduction
The purpose of this paper is to investigate a stability phenomenon in the high
dimensional rational homology of the unordered configuration spaces of a
$d$-manifold $M$. Writing
$\mathrm{Conf}_{n}(M)=\\{(x_{1},\ldots,x_{n})\in M^{n}\,|\,x_{i}\neq
x_{j}\text{ for }i\neq j\\}$
for the ordered configuration space of $n$ points in $M$ and
$B_{n}(M)=\mathrm{Conf}_{n}(M)/S_{n}$ for the unordered configuration space,
classical results of McDuff, Segal, and Church show that the spaces $B_{n}(M)$
exhibit _rational homological stability_. We will restrict our attention to
the case of even $d$, since there is a simple closed form available for $d$
odd [BCT89].
###### Theorem 1.1 (Homological stability [McD75, Seg79, Chu12]).
Let $M$ be a manifold of finite type and even dimension $d\geq 2$. For $n$
sufficiently large with respect to $i$, the function $n\mapsto\dim
H_{i}(B_{n}(M);\mathbb{Q})$ is equal to a polynomial in $n$ of degree at most
$\dim H_{0}(M;\mathbb{Q})-1$.
Traditionally, Theorem 1.1 is stated in the case that $M$ is connected, where
it is the statement that the function $n\mapsto\dim
H_{i}(B_{n}(M);\mathbb{Q})$ is eventually constant; the general case follows
easily from the connected case and the Künneth theorem.
While homological stability is a stability pattern in low homological
dimension, extremal stability, our main theorem, is a pattern in low
homological _codimension_. Since $H_{i}(B_{n}(M))=0$ for
$i>\nu_{n}:=n(d-1)+1$, it is reasonable to think of $H_{\nu_{n}-i}(B_{n}(M))$
as the codimension $i$ homology of $B_{n}(M)$, at least generically.
###### Theorem 1.2 (Extremal stability).
Let $M$ be a manifold of finite type and even dimension $d\geq 2$. For $n$
sufficiently large with respect to $i$, the function $n\mapsto\dim
H_{\nu_{n}-i}(B_{n}(M);\mathbb{Q})$ is equal to a quasi-polynomial in $n$ of
degree at most $\dim H_{d-1}(M;\mathbb{Q}^{w})-1$ and period dividing $2$.
Here, we have written $\mathbb{Q}^{w}$ for the orientation sheaf of $M$.
Equivalently, Theorem 1.2 states that there are two polynomials $p_{{\rm
even}}^{i}(n)$ and $p_{\rm odd}^{i}(n)$, governing the codimension $i$
homology of $B_{n}(M)$ for even and odd $n$ respectively. See §2.3 for our
conventions on quasi-polynomials.
###### Example 1.3 ([DCK17]).
For $\Sigma$ a compact, orientable surface of genus $2$, we have
$\dim H_{\nu_{n}}(B_{n}(\Sigma);\mathbb{Q})=\begin{cases}\frac{n^{3}+n^{2}+16}{16}&\quad n\text{ even}\\\ \frac{n^{3}+n^{2}-9n-9}{16}&\quad n\text{ odd}.\end{cases}$
Figure 1. Homological stability and extremal stability. (Schematic of $H_{i}(B_{n}(M),\mathbb{Q})$ in the $(n,i)$-plane: homological stability governs the region of low degree $i$, extremal stability the region of low codimension near the vanishing line, above which the homology is zero.)
Instances of extremal stability for configuration spaces were first noticed by
Maguire in the case $M=\mathbb{C}P^{3}$ [Mag]. The only other prior example of
extremal stability that we are aware of is the high dimensional cohomology
groups of congruence subgroups of $SL_{n}(\mathbb{Z})$ [MNP20].
If $H_{d-1}(M,\mathbb{Q}^{w})=0$, then Theorem 1.2 is just the statement that
the groups $H_{\nu_{n}-i}(B_{n}(M),\mathbb{Q})$ eventually vanish. In this
situation, $H_{\nu_{n}-i}(B_{n}(M),\mathbb{Q})$ should not be thought of as
the codimension $i$ homology of $B_{n}(M)$. In Theorem 4.9, we prove a more
refined stability result for manifolds with vanishing high degree homology
groups.
### 1.1. Stabilization maps
Extremal stability is induced by a family of maps of the form
$H_{i}(B_{n}(M);\mathbb{Q})\to H_{i+2d-2}(B_{n+2}(M);\mathbb{Q}).$
Heuristically, such a map is obtained as follows. Fixing a class $\alpha\in
H_{d-1}(M;\mathbb{Q})$, which we imagine as a $(d-1)$-parameter family of
configurations of a single point in $M$, we write $\alpha\otimes[v,v]\in
H_{2d-2}(B_{2}(M);\mathbb{Q})$ for the class obtained by replacing this single
point with a pair of orbiting points. Here, we think of $v$ as the class of a
single point in $\mathbb{R}^{d}$ and $[\cdot,\cdot]$ as the Browder bracket in
the homology of Euclidean configuration spaces.
We wish to define our stabilization map by superposition with the class
$\alpha\otimes[v,v]$; that is, we attempt to send the cycle depicted on the
left of Figure 2 to the cycle depicted on the right. Unfortunately, this cycle
does not lie in the configuration space, since it contains points of
intersection among distinct particles!
Figure 2. Extremal stabilization map
This issue can be resolved in the example depicted (where $i=1$) by performing
surgery in a Euclidean neighborhood of the intersection point, resulting in a
$(2d-1)$-chain in $B_{3}(M)$ whose boundary corresponds to the image of the
nested Browder bracket $[v,[v,v]]\in
H_{2d-2}(B_{3}(\mathbb{R}^{d});\mathbb{Q})$ under the coordinate embedding
$\mathbb{R}^{d}\subseteq M$. Since $[v,[v,v]]=0$ by the Jacobi relation, we
may choose a bounding chain, removing the point of intersection.
The key fact exploited in this construction is that $[v,v]$ lies in the center
of the Lie algebra formed by the homology of the configuration spaces of
$\mathbb{R}^{d}$. This algebraic observation points the way to the rigorous
definition of the desired extremal stabilization maps.
### 1.2. Transit algebras
We approach Theorem 1.2, a priori a topological statement, in an entirely
algebraic setting. As shown by the first author [Knu17], the total homology of
the configuration spaces of a manifold is the homology of a certain Lie
algebra—see Theorem 4.7 below—so stability phenomena in Lie algebra homology
give rise to stability phenomena in the homology of configuration spaces.
The homology of a Lie algebra $\mathfrak{g}$ supports two types of action
relevant to stability. First, given an Abelian quotient
$\mathfrak{g}\to\mathfrak{k}$, there results an action of
$\operatorname{Sym}(\mathfrak{k}^{\vee}[-1])$; in the example of configuration
spaces, this degree lowering action gives rise to the maps
$H_{*}(B_{n}(M),\mathbb{Q})\to H_{*}(B_{n-1}(M),\mathbb{Q})$
obtained from the inclusion ${\rm Conf}_{n}(M)\subseteq{\rm
Conf}_{n-1}(M)\times M$ by pairing with an element of $H^{*}(M)$ and applying
transfer. Second, given a central subalgebra
$\mathfrak{h}\subseteq\mathfrak{g}$, there results an action of
$\operatorname{Sym}(\mathfrak{h}[1])$; in the example of configuration spaces,
this degree raising action gives rise to the extremal stabilization maps
described heuristically in Section 1.1.
These two actions interact via Weyl relations, and we call the resulting
algebra a _transit algebra_. As shown in §4.2, transit algebras provide a
common source for classical homological stability and extremal stability.
### 1.3. Acknowledgments
We thank Megan Maguire for helpful conversations and for the initial
inspiration to investigate this question.
## 2\. Algebraic background
In this section, we detail our conventions on graded objects, algebras, and
modules.
### 2.1. Degrees and slopes
We work in the setting of bigraded vector spaces over $\mathbb{Q}$. Such a
vector space $V$ arrives equipped with a decomposition
$V=\bigoplus_{n,i\in\mathbb{Z}}V_{n,i}$. We write $\langle r\rangle$ and $[s]$
for the relevant shift operations, i.e.,
$(V\langle r\rangle[s])_{n,i}=V_{n-r,i-s}.$
The parameters $n$ and $i$ are called the weight and the (homological) degree,
respectively, and we write $w(v)=n$ and $d(v)=i$ for a bihomogeneous element
$v\in V$ of bidegree $(n,i)$.
Duals and tensor products of bigraded vector spaces are defined by the
stipulations
$(V^{\vee})_{n,i}=\mathrm{Hom}_{\mathbb{Q}}(V_{-n,-i},\mathbb{Q})\qquad\qquad(V\otimes
W)_{n,i}=\bigoplus_{a+b=n,c+d=i}V_{a,c}\otimes W_{b,d}.$
With this tensor product, the category of bigraded vector spaces is monoidal,
and we equip it with the symmetric monoidal structure whose symmetry
incorporates Koszul signs in the homological degree _but not in the weight._
Because of this symmetry, parity of degree will play an important role in what
follows, and we write
$V^{\epsilon}=\bigoplus_{n\in\mathbb{Z}}\,\bigoplus_{i\equiv\epsilon\,\mathrm{mod}\,2}V_{n,i}$
for $\epsilon\in\\{0,1\\}$, considered as a bigraded subspace of $V$.
We say that $V$ is of finite type if $V_{n,i}$ is finite dimensional for
$n,i\in\mathbb{Z}$. If $V$ is of finite type and vanishes in a cofinite set of
bidegrees, we say that $V$ is finite dimensional. We say that $V$ is bounded
below if $V_{n,i}=0$ whenever either $n$ or $i$ is a negative number of
sufficiently large absolute value. We say that $V$ is connected if
$V_{0,0}=0$.
We record the following simple observations regarding the interaction of these
finiteness properties with duals and tensor products.
###### Lemma 2.1.
Let $V$ and $W$ be bigraded vector spaces.
1. (1)
If $V$ is of finite type or finite dimensional, then so is $V^{\vee}$.
2. (2)
If $V$ and $W$ are finite dimensional, then so is $V\otimes W$.
3. (3)
If $V$ and $W$ are bounded below, then so is $V\otimes W$.
4. (4)
If $V$ and $W$ are bounded below of finite type, then so is $V\otimes W$.
We think of a bigraded vector space as a planar grid of vector spaces, with
the weight recorded on the horizontal axis and the degree on the vertical. Our
language reflects this idea; for example, we say that $V$ is first-quadrant if
$V_{n,i}=0$ whenever $n<0$ or $i<0$ (in which case $V^{\vee}$ is third-quadrant).
This picture also informs the following.
###### Definition 2.2.
Fix a bigraded vector space $V$.
1. (1)
Let $0\neq v\in V$ be a bihomogeneous element of nonzero weight. The _slope_
of $v$ is the rational number $m(v)=d(v)/w(v)$.
2. (2)
Suppose that $V$ is concentrated in nonzero weights. The _maximal slope_ of
$V$ is
$m_{\max}(V)=\max\\{C\in\mathbb{Q}\mid\exists v\in V:m(v)=C\\}$
(resp. minimal slope, $m_{\min}(V)$, $\min$).
Given $C\in\mathbb{Q}$, we write $V_{C}\subseteq V$ for the span of the
bihomogeneous elements of slope $C$, and similarly for $V_{<C}$ etc. If
$V=V_{C}$, then we say that $V$ is of slope $C$. Note that, if $V$ is of slope
$C$, then $V$ is concentrated in nonzero weight.
A related notion of slope will also be important in what follows.
###### Definition 2.3.
Fix a bigraded vector space $V$ and $C\in\mathbb{Q}$. A _ray of slope_ $C$ in
$V$ is a subspace of the form $\mathcal{R}=\bigoplus_{u\geq
0}V_{au,bu+i_{0}}$, where $C=b/a$ with $(a,b)=1$. The _graded dimension_ of
$\mathcal{R}$ is the function
$n\mapsto\begin{cases}\dim\mathcal{R}_{n,Cn+i_{0}}&\quad a\mid n\\\
0&\quad\text{otherwise}.\end{cases}$
At times, it will be convenient to work with a third grading, by polynomial
degree, which will always be non-negative. Bigraded vector spaces are regarded
as trigraded vector spaces concentrated in polynomial degree $1$. Duals and
tensor products of trigraded vector spaces are defined analogously, with
Koszul signs reflecting only the homological degree. Discussions of slope will
never involve the third grading.
### 2.2. Symmetric (co)algebras and their duals
Given a bigraded vector space $V$, we write
$\operatorname{Sym}^{k}(V)=(V^{\otimes k})_{S_{k}}$, where the symmetric group
acts via the symmetry of the symmetric monoidal structure introduced above.
###### Definition 2.4.
Let $V$ be a bigraded vector space. The _symmetric algebra_ on $V$ is the
trigraded vector space $\operatorname{Sym}(V)=\bigoplus_{k\geq
0}\operatorname{Sym}^{k}(V)$, equipped with the product given componentwise by
the dashed filler in the commuting diagram
$\begin{array}{ccc}V^{\otimes k}\otimes V^{\otimes\ell}&\xrightarrow{\;\sim\;}&V^{\otimes(k+\ell)}\\ \downarrow&&\downarrow\\ \operatorname{Sym}^{k}(V)\otimes\operatorname{Sym}^{\ell}(V)&\dashrightarrow&\operatorname{Sym}^{k+\ell}(V),\end{array}$
where the vertical arrows are the respective projections.
This multiplication map furnishes the symmetric algebra with the structure of
a commutative algebra object in the category of trigraded vector spaces. It is
a universal commutative algebra: any map $V\to\mathcal{A}$ of trigraded vector
spaces into a commutative algebra $\mathcal{A}$ extends uniquely along the
canonical inclusion to a map of algebras,
$V\longrightarrow\operatorname{Sym}(V)\xrightarrow{\ \exists!\ }\mathcal{A}.$
In particular, there is a canonical, natural map
of algebras
$\operatorname{Sym}(V\oplus
W)\to\operatorname{Sym}(V)\otimes\operatorname{Sym}(W)$
induced by the assignment $(v,w)\mapsto v\otimes 1+1\otimes w$, which is
easily seen to be an isomorphism. This map furnishes the symmetric algebra
functor with an oplax monoidal structure; in particular,
$\operatorname{Sym}(V)$ is canonically a bicommutative bialgebra with
comultiplication induced by the diagonal of $V$.
Given a bihomogeneous basis $\\{t_{i}\\}_{i\in I}$ for $V$, a bihomogeneous
basis for $\operatorname{Sym}^{k}(V)$ is provided by the set of equivalence
classes of degree $k$ monomials $t_{i_{1}}\cdots t_{i_{k}}$ under the
equivalence relation
$t_{i_{1}}\cdots t_{i_{j}}t_{i_{j+1}}\cdots
t_{i_{k}}\sim(-1)^{d(t_{i_{j}})d(t_{i_{j+1}})}t_{i_{1}}\cdots
t_{i_{j+1}}t_{i_{j}}\cdots t_{i_{k}}.$
In this way, we obtain a trihomogeneous basis for $\operatorname{Sym}(V)$,
which we refer to as the _monomial basis_. In this basis, multiplication is
given by concatenation of monomials and comultiplication by the usual shuffle
coproduct. It should be emphasized that a generator behaves as an exterior or
polynomial generator according to the parity of its homological degree alone.
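For instance (a minimal illustration of the conventions above), if $V$ is spanned by bihomogeneous elements $t$ and $s$ with $d(t)$ even and $d(s)$ odd, then the relation above forces $s^{2}=-s^{2}=0$, so $\operatorname{Sym}(V)\cong\mathbb{Q}[t]\otimes\Lambda[s]$, with monomial basis $\\{t^{k},t^{k}s\\}_{k\geq 0}$.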
In the presence of the above basis, we write $\partial_{t_{i}}\in V^{\vee}$
for the functional dual to $t_{i}$, i.e.,
$\partial_{t_{i}}(t_{j})=\delta_{ij}.$ Allowing these functionals to act
according to the usual rules of differential calculus, with appropriate Koszul
signs, defines a linear embedding
$\operatorname{Sym}(V^{\vee})\to\operatorname{Sym}(V)^{\vee}$
which is an isomorphism of trigraded vector spaces under appropriate
finiteness assumptions, e.g., if $V$ is bounded below of finite type.
In this situation, by adjunction, there is a one-to-one correspondence of sets
of maps
$\varphi:\operatorname{Sym}(V^{\vee})\otimes\mathcal{M}\to\mathcal{M}\qquad\iff\qquad\widetilde{\varphi}:\mathcal{M}\to\operatorname{Sym}(V)\otimes\mathcal{M}$
given explicitly by the formulas
$\widetilde{\varphi}(m)=\sum_{p}p\otimes\varphi(\partial_{p}\otimes m)$ and
$\varphi(\partial_{p}\otimes m)=\partial_{p}\cap\widetilde{\varphi}(m),$ where
$p$ ranges over the monomial basis. In this way, a
$\operatorname{Sym}(V^{\vee})$-module determines a
$\operatorname{Sym}(V)$-comodule, referred to as the _adjoint comodule_. If
$\mathcal{M}$ is of finite type (as a trigraded vector space), then this
comodule structure is in turn equivalent to the
$\operatorname{Sym}(V^{\vee})$-module structure on $\mathcal{M}^{\vee}$
determined by the map
$\operatorname{Sym}(V^{\vee})\otimes\mathcal{M}^{\vee}\cong(\operatorname{Sym}(V)\otimes\mathcal{M})^{\vee}\xrightarrow{\widetilde{\varphi}^{\vee}}\mathcal{M}^{\vee}.$
We refer to this module as the _dual adjoint module_.
### 2.3. Modules and growth
In what follows, we will be interested in the eventual growth rates of graded
dimensions of vector spaces.
###### Definition 2.5.
A _quasi-polynomial_ is an element of $\Pi[t]$, where $\Pi$ is the ring of
periodic functions from $\mathbb{Z}$ to $\mathbb{Q}$. The _period_ of a quasi-
polynomial is the least common multiple of the periods of its coefficients.
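For instance (an illustration under the conventions above), the graded dimension in non-negative weights of $\operatorname{Sym}(\mathbb{Q}\,x)$, for a single generator $x$ of even homological degree and weight $2$, is
$n\mapsto\frac{1+(-1)^{n}}{2},$
a quasi-polynomial of degree $0$ and period $2$.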
When working over a symmetric algebra with generators of fixed slope, rays of
the same slope exhibit predictable growth. As a matter of notation, we write
$\mathrm{lcm}(V)=\mathrm{lcm}\\{n\in\mathbb{Z}\mid\exists
i\in\mathbb{Z}:V_{n,i}\neq 0\\}.$
###### Lemma 2.6.
Let $V$ be a finite dimensional, first-quadrant bigraded vector space of slope
$C\in\mathbb{Q}$ and $\mathcal{M}$ a finitely generated
$\operatorname{Sym}(V)$-module. The graded dimension of any ray in
$\mathcal{M}$ of slope $C$ is eventually equal to a quasi-polynomial of degree
at most $\dim V^{0}-1$ and period dividing $\mathrm{lcm}(V^{0})$.
###### Proof.
Any ray in $\mathcal{M}$ is a $\operatorname{Sym}(V)$-submodule, since $V$ has
slope $C$, and $\operatorname{Sym}(V)$ is Noetherian, since $V$ is finite
dimensional, so we assume that $\mathcal{M}$ is itself a ray of slope $C$.
Applying the shift functor $[r]$ to this ray does not change its graded
dimension, so we may assume that $\mathcal{M}=\mathcal{M}_{C}$. Lastly, we may
assume that $V=V^{0}$; indeed,
$\operatorname{Sym}(V)\cong\operatorname{Sym}(V^{0})\otimes\operatorname{Sym}(V^{1})$,
and $\operatorname{Sym}(V^{1})$ is finite dimensional, since $V^{1}$ is so.
With these assumptions in place, the homological degree in
$\operatorname{Sym}(V)$ and in $\mathcal{M}$ is determined by the weight, so
the claim follows from the classical theory of the Hilbert function of a
graded module over a graded polynomial ring [AM16, Theorem 11.1]. ∎
We pair this result with a criterion for detecting finite generation over
symmetric algebras with generators of fixed slope.
###### Lemma 2.7.
Let $V$ be a finite dimensional bigraded vector space and $\mathcal{M}$ a
finitely generated $\operatorname{Sym}(V)$-module. If $m_{\max}(V^{0})\leq C$,
then every ray in $\mathcal{M}$ of slope $C$ is finitely generated over
$\operatorname{Sym}(V^{0}_{C})$.
###### Proof.
If $m_{\max}(V^{0})<C$, then $V_{C}^{0}=0$ by assumption, so
$\operatorname{Sym}(V_{C}^{0})=\mathbb{Q}$, the monoidal unit. A finitely
generated $\mathbb{Q}$-module is simply a finite dimensional bigraded vector
space, so the first claim implies the second.
For the first claim, we may reduce to the case
$\mathcal{M}=\operatorname{Sym}(V)$ using shifts, sums, and quotients. As in
the proof of Lemma 2.6, we may further assume that $V=V^{0}$. Choosing a
bihomogeneous basis $\\{u_{1},\dots,u_{m}\\}$ for $V_{<C}$, we observe that
$\mathcal{M}$ is freely generated over $\operatorname{Sym}(V^{0}_{C})$ by the
set of monomials of the form $u_{1}^{j_{1}}\dots u_{m}^{j_{m}}$ with
$j_{1},\dots,j_{m}\in\mathbb{Z}_{\geq 0}$. Fixing a ray
$\mathcal{R}=\bigoplus_{t\geq 0}\mathcal{M}_{at,bt+i_{0}}$ with $b/a=C$ and
$(a,b)=1$, such a monomial lies in $\mathcal{R}$ if and only if
$\sum_{k=1}^{m}j_{k}(ad(u_{k})-bw(u_{k}))=ai_{0}.$
By assumption, we have $ad(u_{k})-bw(u_{k})<0$ for $1\leq k\leq m$, so only
finitely many monomials lie in $\mathcal{R}$. Since the product of a monomial
and an element of $\operatorname{Sym}(V_{C}^{0})$ lies in $\mathcal{R}$ if and
only if the monomial does, the claim follows. ∎
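The finiteness step at the heart of this proof is elementary enough to check mechanically; the following sketch (with illustrative values of our own choosing) enumerates the exponent vectors solving $\sum_{k}j_{k}(ad(u_{k})-bw(u_{k}))=ai_{0}$ when all coefficients are strictly negative.

```python
from itertools import product

def ray_monomials(c, target):
    """Nonnegative exponent vectors j with sum(j[k] * c[k]) == target,
    assuming every c[k] < 0 and target <= 0, so each j[k] <= target / c[k]."""
    bounds = [target // ck + 1 for ck in c]
    return [j for j in product(*(range(b) for b in bounds))
            if sum(jk * ck for jk, ck in zip(j, c)) == target]

print(ray_monomials(c=[-1, -2], target=-4))  # [(0, 2), (2, 1), (4, 0)]
```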
At times, an alternative to finite generation will be the relevant property.
###### Definition 2.8.
Let $V$ be a first-quadrant bigraded vector space and $\mathcal{M}$ a
$\operatorname{Sym}(V^{\vee})$-module. We say that $\mathcal{M}$ is _finitely
detected_ if there is a surjective linear map $q:\mathcal{M}\to N$ with finite
dimensional target such that, for every $m\in\mathcal{M}\setminus\\{0\\}$,
there is a monomial $p\in\operatorname{Sym}(V)$ with $q(\partial_{p}\cdot
m)\neq 0$.
Through duality, finite detection is closely related to other notions of
finiteness. For simplicity, we do not state the following results in the
greatest possible generality.
###### Lemma 2.9.
Let $V$ be a finite dimensional, first-quadrant bigraded vector space. The
following are equivalent.
1. (1)
The $\operatorname{Sym}(V^{\vee})$-module $\mathcal{M}$ is finitely detected.
2. (2)
The adjoint $\operatorname{Sym}(V)$-comodule structure on $\mathcal{M}$ is
finitely cogenerated.
3. (3)
The dual adjoint $\operatorname{Sym}(V^{\vee})$-module structure on
$\mathcal{M}^{\vee}$ is finitely generated.
###### Proof.
Let $q:\mathcal{M}\to N$ be a surjection with finite dimensional target. The
composite
$\mathcal{M}\xrightarrow{\delta}\operatorname{Sym}(V)\otimes\mathcal{M}\xrightarrow{\mathrm{id}\otimes
q}\operatorname{Sym}(V)\otimes N$
is given by the formula $m\mapsto\sum_{p}p\otimes q(\partial_{p}\cdot m)$,
where $p$ ranges over the monomial basis for $\operatorname{Sym}(V)$. Since
$q$ cogenerates if and only if this composite is injective, the first claim
follows. Since the dual of an injection is surjective, the second claim
follows as well. ∎
It follows that finitely detected modules enjoy many of the properties of
finitely generated modules.
###### Lemma 2.10.
Let $V$ be a finite dimensional, first-quadrant bigraded vector space. Any
submodule or quotient of a finitely detected
$\operatorname{Sym}(V^{\vee})$-module is also finitely detected.
###### Proof.
The claim follows from Lemma 2.9 and Noetherianity of
$\operatorname{Sym}(V^{\vee})$. ∎
###### Lemma 2.11.
Let $V$ be a finite dimensional, first-quadrant bigraded vector space of slope
$C\in\mathbb{Q}$ and $\mathcal{M}$ a finitely detected
$\operatorname{Sym}(V^{\vee})$-module. The graded dimension of any ray in
$\mathcal{M}$ of slope $C$ is eventually equal to a quasi-polynomial of degree
at most $\dim V^{0}-1$ and period dividing $\mathrm{lcm}(V^{0})$.
###### Proof.
The claim follows from Lemmas 2.6 and 2.9 after negating bidegrees. ∎
## 3\. Transits and transit algebras
In this section, we introduce a family of algebras acting on the homology of a
Lie algebra.
### 3.1. Lie algebras and transits
Throughout, the term “Lie algebra” refers to a Lie algebra in the category of
bigraded vector spaces detailed above. Explicitly, a Lie algebra is a bigraded
vector space $\mathfrak{g}$ equipped with a map of bigraded vector spaces
$[-,-]:\mathfrak{g}\otimes\mathfrak{g}\to\mathfrak{g}$, called the bracket of
$\mathfrak{g}$, satisfying the equations
1. (1)
$[x,y]+(-1)^{d(x)d(y)}[y,x]=0$
2. (2)
$(-1)^{d(x)d(z)}[[x,y],z]+(-1)^{d(y)d(x)}[[y,z],x]+(-1)^{d(z)d(y)}[[z,x],y]=0$
for bihomogeneous elements $x,y,z\in\mathfrak{g}$. A map of Lie algebras is a
map of bigraded vector spaces intertwining the respective brackets.
We emphasize that, while weight is additive under the bracket, all signs are
independent of weight. We write $\mathfrak{z}=\mathfrak{z}(\mathfrak{g})$ for
the center of the Lie algebra $\mathfrak{g}$ and
$\mathfrak{a}=\mathfrak{a}(\mathfrak{g})$ for its Abelianization.
###### Definition 3.1.
Let $\mathfrak{g}$ be a Lie algebra. A _transit_ of $\mathfrak{g}$ is a pair
of maps of Lie algebras
$\mathfrak{h}\xrightarrow{f}\mathfrak{g}\xrightarrow{g}\mathfrak{k}$
with $\mathfrak{h}$ and $\mathfrak{k}$ Abelian and $f$ central. We say the
transit is _null_ or _split_ if $gf$ is trivial or bijective, respectively,
and _exact_ if $f$ and $g$ form a short exact sequence. A _map of transits_
from $(f_{1},g_{1})$ to $(f_{2},g_{2})$ is a pair of maps of Lie algebras
$\mathfrak{h}_{1}\to\mathfrak{h}_{2}$ and $\mathfrak{k}_{2}\to\mathfrak{k}_{1}$
making the evident diagram commute, i.e., the composite
$\mathfrak{h}_{1}\to\mathfrak{h}_{2}\xrightarrow{f_{2}}\mathfrak{g}$ equals
$f_{1}$ and the composite
$\mathfrak{g}\xrightarrow{g_{2}}\mathfrak{k}_{2}\to\mathfrak{k}_{1}$ equals
$g_{1}$.
Maps of transits compose in the obvious way, forming a category. Note that a
Lie algebra admits an exact transit if and only if it is two-step nilpotent.
###### Example 3.2.
The pair $\mathfrak{z}\to\mathfrak{g}\to\mathfrak{a}$ given by inclusion and
projection is a transit, called the _universal transit_.
The term “universal” refers to the property of being a terminal object in the
category of transits.
###### Example 3.3.
Given transits $(f_{1},g_{1})$ and $(f_{2},g_{2})$ of $\mathfrak{g}$, the
_product transit_ is the pair
$\mathfrak{h}_{1}\times\mathfrak{h_{2}}\xrightarrow{f_{1}+f_{2}}\mathfrak{g}\xrightarrow{(g_{1},g_{2})}\mathfrak{k}_{1}\times\mathfrak{k}_{2}$
(we use that the $\mathfrak{h}_{i}$ are Abelian). We say the product is
_clean_ if $g_{2}f_{1}=g_{1}f_{2}=0$.
###### Lemma 3.4.
Every transit is isomorphic (non-canonically) to the clean product of a split
transit and a null transit.
###### Proof.
Set $\mathfrak{h}_{1}=\mathfrak{k}_{1}=\mathrm{im}(gf)$,
$\mathfrak{h}_{2}=\ker(gf)$, and $\mathfrak{k}_{2}=\mathrm{coker}(gf)$.
Choosing splittings, which are splittings of Lie algebras by Abelianness, the
product decomposition
$\mathfrak{h}_{1}\times\mathfrak{h}_{2}\cong\mathfrak{h}\xrightarrow{f}\mathfrak{g}\xrightarrow{g}\mathfrak{k}\cong\mathfrak{k}_{1}\times\mathfrak{k}_{2}$
has the desired properties by inspection. ∎
### 3.2. Transit algebras
In this section, we fix a Lie algebra $\mathfrak{g}$ and a transit
$\mathfrak{h}\xrightarrow{f}\mathfrak{g}\xrightarrow{g}\mathfrak{k}$. We write
$\langle-,-\rangle$ for the graded commutator of an algebra $\mathcal{A}$,
i.e., $\langle x,y\rangle=xy-(-1)^{d(x)d(y)}yx$ for bihomogeneous elements
$x,y\in\mathcal{A}$.
###### Definition 3.5.
The _transit algebra_ associated to the transit $(f,g)$ is the quotient
$W(f,g)=T(\mathfrak{h}[1]\oplus\mathfrak{k}[1]^{\vee})/I,$
where $T$ denotes the tensor algebra and $I$ the two-sided ideal generated by
the relations
$\langle x,y\rangle=\langle\lambda,\mu\rangle=0\qquad\text{and}\qquad\langle\lambda,x\rangle=\lambda(g(f(x)))$
for $x,y\in\mathfrak{h}[1]$ and $\lambda,\mu\in\mathfrak{k}[1]^{\vee}$.
The transit algebra extends in the obvious way to a functor from transits to
algebras. In particular, given an arbitrary transit $(f,g)$, there is a
canonical map of algebras
$W(f,g)\to W(\mathfrak{g}),$
where $W(\mathfrak{g})$ is the transit algebra of the universal transit.
###### Example 3.6.
If $(f,g)$ is null, then
$W(f,g)=\operatorname{Sym}(\mathfrak{h}[1]\oplus\mathfrak{k}[1]^{\vee})$.
###### Example 3.7.
If $(f,g)$ is split, then $W(f,g)$ is isomorphic to the Weyl algebra on the
vector space $\mathfrak{h}[1]$.
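In the simplest split case (one even generator with $gf=\mathrm{id}$), the defining relation $\langle\lambda,x\rangle=\lambda(g(f(x)))=1$ is exactly the Weyl relation; a hedged SymPy check on polynomial representatives:

```python
import sympy as sp

t = sp.symbols("t")
p = sp.Function("p")(t)

# [d/dt, t] acting on an arbitrary function p equals p itself, i.e. the
# commutator of the two generators acts as the identity, as in Example 3.7.
commutator = sp.diff(t * p, t) - t * sp.diff(p, t)
assert sp.simplify(commutator - p) == 0
```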
###### Lemma 3.8.
The transit algebra of a clean product is canonically isomorphic to the tensor
product of the transit algebras of the factors.
###### Proof.
Cleanness implies that $\langle\lambda,x\rangle=0$ for $x\in\mathfrak{h}_{i}$
and $\lambda\in\mathfrak{k}_{j}$ when $i\neq j\in\\{1,2\\}$, and the claim
follows. ∎
We anticipate that the following simple result will be fundamental to future
applications, although we make no use of it in what follows.
###### Proposition 3.9.
Transit algebras are Noetherian.
###### Proof.
By Lemmas 3.4 and 3.8 and Examples 3.6 and 3.7, it suffices to show that the
tensor product of a finitely generated symmetric algebra and a finitely
generated Weyl algebra is Noetherian, which follows by repeated application of
[MR01, Thm. 2.9]. ∎
### 3.3. Action on homology
The role of the transit algebras introduced above is as a source of extra
structure on the homology of Lie algebras.
###### Definition 3.10.
The _Chevalley–Eilenberg complex_ of the Lie algebra $\mathfrak{g}$ is the
bigraded vector space
$\mathrm{CE}(\mathfrak{g})=\operatorname{Sym}(\mathfrak{g}[1])$ equipped with
the differential $\partial$ determined as a coderivation by the formula
$\partial(xy)=(-1)^{d(x)}[x,y].$
The _Lie algebra homology_ of $\mathfrak{g}$ is the bigraded vector space
$H_{*}^{\mathrm{Lie}}(\mathfrak{g})=H_{*}(\mathrm{CE}(\mathfrak{g}),\partial)$.
###### Remark 3.11.
Our notation is abusive in that the symbols $x$, $y$, and $[x,y]$ refer to the
corresponding elements of $\mathfrak{g}[1]$, while $d(x)$ is the degree of $x$
as an element of $\mathfrak{g}$.
The implicit claim that $\partial^{2}=0$ is equivalent to the Jacobi identity
(2) above. Note that $\partial\equiv 0$ if $\mathfrak{g}$ is Abelian.
###### Theorem 3.12.
For any transit $(f,g)$ of $\mathfrak{g}$, there is a canonical, functorial
action of $W(f,g)$ on $H_{*}^{\mathrm{Lie}}(\mathfrak{g})$.
###### Proof.
It suffices to consider the universal case. By Abelianness, addition equips
$\mathfrak{z}$ with the structure of a commutative monoid with respect to the
Cartesian monoidal structure on Lie algebras. For the same reason, the
composite map
$\mathfrak{z}\times\mathfrak{g}\subseteq\mathfrak{g}\times\mathfrak{g}\xrightarrow{+}\mathfrak{g}$
is a map of Lie algebras, equipping $\mathfrak{g}$ with the structure of a
$\mathfrak{z}$-module. The Chevalley–Eilenberg complex is a symmetric monoidal
functor, so we obtain an action of
$\mathrm{CE}(\mathfrak{z})=\operatorname{Sym}(\mathfrak{z}[1])$ on
$\mathrm{CE}(\mathfrak{g})$, hence on its homology (we again use that
$\mathfrak{z}$ is Abelian).
On the other hand, the diagonal equips any Lie algebra with the structure of
cocommutative comonoid with respect to the Cartesian monoidal structure, and
the composite map
$\mathfrak{g}\xrightarrow{\Delta}\mathfrak{g}\times\mathfrak{g}\xrightarrow{\pi\times\mathrm{id}}\mathfrak{a}\times\mathfrak{g}$
equips $\mathfrak{g}$ with the structure of an $\mathfrak{a}$-comodule, where
$\pi$ is the projection to the quotient. Applying the Chevalley–Eilenberg
complex and dualizing, we obtain an action of
$\operatorname{Sym}(\mathfrak{a}[1]^{\vee})\subseteq\mathrm{CE}(\mathfrak{a})^{\vee}$
on $\mathrm{CE}(\mathfrak{g})$, hence on its homology (we use that
$\mathfrak{a}$ is Abelian).
It remains to check that these two actions descend to an action of
$W(\mathfrak{g})$, for which it suffices to verify the last relation. To
begin, we note that, at the level of underlying graded objects, the first
action is given by the composite
$\operatorname{Sym}(\mathfrak{z}[1])\otimes\operatorname{Sym}(\mathfrak{g}[1])\subseteq\operatorname{Sym}(\mathfrak{g}[1])\otimes\operatorname{Sym}(\mathfrak{g}[1])\xrightarrow{m}\operatorname{Sym}(\mathfrak{g}[1]),$
where $m$ is the usual multiplication (note that $m$ is not itself a chain
map), while the second action is given by the composite (cap product)
$\operatorname{Sym}(\mathfrak{a}[1]^{\vee})\otimes\operatorname{Sym}(\mathfrak{g}[1])\xrightarrow{\operatorname{Sym}(\pi^{\vee})\otimes\delta}\operatorname{Sym}(\mathfrak{g}[1]^{\vee})\otimes\operatorname{Sym}(\mathfrak{g}[1])\otimes\operatorname{Sym}(\mathfrak{g}[1])\xrightarrow{\mathrm{ev}\cdot\mathrm{id}}\operatorname{Sym}(\mathfrak{g}[1]),$
where $\delta$ is the usual comultiplication.
Fix nonzero bihomogeneous elements $x\in\mathfrak{z}[1]$ and
$\lambda\in\mathfrak{a}[1]^{\vee}$ and a monomial
$p\in\operatorname{Sym}(\mathfrak{g}[1])$. Writing
$\delta(p)=\sum_{i}p_{i}\otimes p_{i}^{\prime}$, and abusively identifying
$\lambda$ and $\operatorname{Sym}(\pi^{\vee})(\lambda)$, we calculate that
$\displaystyle\langle\lambda,x\rangle\cdot p=\sum_{i}\left[\lambda(xp_{i})p_{i}^{\prime}+(-1)^{|x||p_{i}|}\lambda(p_{i})xp_{i}^{\prime}-(-1)^{|\lambda||x|}\lambda(p_{i})xp_{i}^{\prime}\right]$
$\displaystyle=\lambda(x)p+\sum_{i}\left[(-1)^{|x||p_{i}|}\lambda(p_{i})xp_{i}^{\prime}-(-1)^{|\lambda||x|}\lambda(p_{i})xp_{i}^{\prime}\right]$
$\displaystyle=\lambda(x)p+\sum_{i:|\lambda|+|p_{i}|=0}\left[(-1)^{|x||p_{i}|}\lambda(p_{i})xp_{i}^{\prime}-(-1)^{|\lambda||x|}\lambda(p_{i})xp_{i}^{\prime}\right]$
$\displaystyle=\lambda(x)p,$
where we use that $\lambda$ vanishes on monomials of polynomial degree
different from $1$ and on those of bidegree different from
$(-w(\lambda),-d(\lambda))$. Since $p$ was arbitrary, this calculation
establishes the desired relation. ∎
## 4\. Stability phenomena
In this section, we give examples of stability phenomena in Lie algebra
homology arising from the transit algebra action introduced above. We then
deduce our results about the homology of configuration spaces.
### 4.1. Transit algebras and stability
The purpose of this section is to demonstrate that the transit algebra actions
described in Theorem 3.12 give rise to stability phenomena in the homology of
Lie algebras. For simplicity, we work under the following standing
assumptions.
###### Assumption 4.1.
The Lie algebra $\mathfrak{g}$ is finite dimensional with $\mathfrak{g}[1]$
first-quadrant and connected.
There are two versions of the following result. We state the version for the
maximal slope; the version for the minimal slope is obtained by reversing
inequalities and interchanging $\min$ and $\max$.
###### Proposition 4.2.
Let $g:\mathfrak{g}\to\mathfrak{k}$ be an Abelian quotient, and suppose that
we have the inequalities
$m_{\max}(\mathfrak{k}[1]^{0})\leq C\qquad\text{and}\qquad m_{\max}(\ker g[1]^{0})<C.$
Each ray of slope $C$ in $H_{*}^{\mathrm{Lie}}(\mathfrak{g})$ is finitely
detected over $\operatorname{Sym}(\mathfrak{k}^{\vee}[-1]^{0}_{C})\subseteq
W(0,g).$ In particular, the graded dimension of such a ray is eventually equal
to a quasi-polynomial of degree at most $\dim(\mathfrak{k}[1]^{0}_{C})-1$ and
period dividing $\mathrm{lcm}(\mathfrak{k}[1]^{0}_{C})$.
###### Proof.
Any ray of slope $C$ in $H_{*}^{\mathrm{Lie}}(\mathfrak{g})$ is a
$\operatorname{Sym}(\mathfrak{k}^{\vee}[-1]_{C}^{0})$-submodule and a
subquotient of the corresponding ray in $\mathrm{CE}(\mathfrak{g})$;
therefore, by Lemmas 2.9, 2.10, and 2.11, it suffices to show that every ray
of slope $C$ in the dual adjoint module $\mathrm{CE}(\mathfrak{g})^{\vee}$ is
finitely generated.
The action of $\operatorname{Sym}(\mathfrak{k}^{\vee}[-1]^{0}_{C})$ on
$\mathrm{CE}(\mathfrak{g})^{\vee}$, which respects the differential,
extends along the monomorphism $g^{\vee}$ to an action of
$\operatorname{Sym}(\mathfrak{g}^{\vee}[-1])$, which does not respect the
differential. Under this action, $\mathrm{CE}(\mathfrak{g})^{\vee}$ is free of
rank $1$, hence finitely generated. Since
$m_{\max}(\mathfrak{g}^{\vee}[-1]^{0})\leq C$, it follows from Lemma 2.7 that
each ray of slope $C$ is finitely generated over
$\operatorname{Sym}(\mathfrak{g}^{\vee}[-1]_{C}^{0})$. Our assumption implies
that $\mathfrak{g}[1]^{0}_{C}\cap\ker g[1]^{0}=0$, so
$\mathfrak{g}^{\vee}[-1]^{0}_{C}=\mathfrak{k}^{\vee}[-1]^{0}_{C}$. ∎
In a sense, the next result is dual to Proposition 4.2. Again, there are two
versions, only one of which we state.
###### Proposition 4.3.
Let $f:\mathfrak{h}\to\mathfrak{g}$ be a central subalgebra, and suppose that
we have the inequalities
$m_{\max}(\mathfrak{h}[1]^{0})\leq C\qquad\text{and}\qquad m_{\max}(\operatorname{coker}f[1]^{0})<C.$
Each ray of slope $C$ in $H_{*}^{\mathrm{Lie}}(\mathfrak{g})$ is finitely
generated over $\operatorname{Sym}(\mathfrak{h}[1]^{0}_{C})\subseteq W(f,0).$
In particular, the graded dimension of such a ray is eventually equal to a
quasi-polynomial of degree at most $\dim(\mathfrak{h}[1]^{0}_{C})-1$ and
period dividing $\mathrm{lcm}(\mathfrak{h}[1]^{0}_{C})$.
###### Proof.
Since each ray of slope $C$ in $\mathrm{CE}(\mathfrak{g})$ is a submodule with
subquotient the corresponding ray of slope $C$ in
$H_{*}^{\mathrm{Lie}}(\mathfrak{g})$, it suffices by Noetherianity and Lemma
2.6 to show that each ray of slope $C$ in $\mathrm{CE}(\mathfrak{g})$ is
finitely generated.
The action of $\operatorname{Sym}(\mathfrak{h}[1]_{C}^{0})$, which respects
the differential, extends along the inclusion of $\mathfrak{h}$ to an action
of $\operatorname{Sym}(\mathfrak{g}[1])$, which does not respect the
differential. Under this action, $\mathrm{CE}(\mathfrak{g})$ is free of rank
$1$, hence finitely generated. Since $m_{\max}(\mathfrak{g}[1]^{0})\leq C$, it
follows from Lemma 2.7 that each ray of slope $C$ is finitely generated over
$\operatorname{Sym}(\mathfrak{g}[1]_{C}^{0})$, but
$\mathfrak{g}[1]^{0}_{C}=\mathfrak{h}[1]^{0}_{C}$ by our assumption. ∎
The same method yields the following mild extension.
###### Corollary 4.4.
Let $\mathfrak{g}$ be as in Proposition 4.3 and consider a semidirect product
$\widetilde{\mathfrak{g}}=\mathfrak{g}\rtimes\mathfrak{l}$ with $\mathfrak{l}$
free and finitely generated. If $\mathfrak{l}$ centralizes every odd degree
element of $\mathfrak{h}$ having slope $C$ in $\mathfrak{h}[1]$, then the
conclusion of Proposition 4.3 holds for $\widetilde{\mathfrak{g}}$.
###### Proof.
The action of $\operatorname{Sym}(\mathfrak{h}[1]^{0}_{C})$ on
$\mathrm{CE}(\mathfrak{g})$ described in the proof of Proposition 4.3 extends
to an action on the underlying bigraded vector space of
$\mathrm{CE}(\widetilde{\mathfrak{g}})$, the latter being a sum of bigraded
shifts of the former, and our assumption on the relationship between
$\mathfrak{l}$ and $\mathfrak{h}$ guarantees that this action is compatible
with the differential. This action is compatible with the filtration by
polynomial degree in elements of $\mathfrak{l}$, so the
Lyndon–Hochschild–Serre spectral sequence
$H_{*}^{\mathrm{Lie}}(\mathfrak{l};H_{*}^{\mathrm{Lie}}(\mathfrak{g}))\implies
H_{*}^{\mathrm{Lie}}(\widetilde{\mathfrak{g}})$
is a spectral sequence of
$\operatorname{Sym}(\mathfrak{h}[1]^{0}_{C})$-modules. By Noetherianity and
Lemma 2.6, it suffices to show that each ray of slope $C$ in the initial page
of this spectral sequence is finitely generated.
Writing $V$ for the finite dimensional bigraded vector space generating
$\mathfrak{l}$ as a free Lie algebra, the page in question is the homology of
the complex
$H_{*}^{\mathrm{Lie}}(\mathfrak{g})\otimes V\to
H_{*}^{\mathrm{Lie}}(\mathfrak{g})$
of $\operatorname{Sym}(\mathfrak{h}[1]^{0}_{C})$-modules. Each ray of slope
$C$ in this complex is a finite sum of bigraded shifts of rays of slope $C$ in
$H_{*}^{\mathrm{Lie}}(\mathfrak{g})$, each of which has already been shown to
be finitely generated. The claim follows by Noetherianity. ∎
###### Remark 4.5.
The same method of proof establishes the analogous conclusion for iterated
semidirect products with free Lie algebras. Details are left to the reader.
Our final corollary uses the action of both halves of the transit algebra to
establish that rays are free modules in certain cases.
###### Corollary 4.6.
Let $\mathfrak{h}\xrightarrow{f}\mathfrak{g}\xrightarrow{g}\mathfrak{k}$ be a
split transit, and suppose that we have the inequalities
$m_{\max}(\mathfrak{h}[1]^{0})\leq C\qquad\text{and}\qquad m_{\max}(\operatorname{coker}f[1]^{0})<C.$
Each ray of slope $C$ of $H^{\mathrm{Lie}}_{*}(\mathfrak{g})$ is a finitely
generated free module over
$\operatorname{Sym}(\mathfrak{h}[1]^{0}_{C})\subseteq W(f,g)$.
###### Proof.
Finite generation follows from Proposition 4.3. By Example 3.7 and Theorem 3.12, the
action of $\operatorname{Sym}(\mathfrak{h}[1]^{0}_{C})$ extends to an action
of the Weyl algebra on $\mathfrak{h}[1]^{0}_{C}$. It is well known that a
finitely generated module over a polynomial ring whose action extends to the
Weyl algebra is free; briefly, choosing a minimal set of generators and a
relation $R$ of minimal degree, we note that, unless $R$ is constant
(contradicting the first requirement of minimality), the derivative of $R$ is
a relation of strictly smaller degree. ∎
### 4.2. Application to configuration spaces
Fix a $d$-manifold $M$ of finite type and write
$\mathfrak{g}_{M}=H_{c}^{*}(M;\mathbb{Q}^{w})\otimes v\oplus
H_{c}^{*}(M;\mathbb{Q})\otimes[v,v],$
where $v$ and $[v,v]$ are formal parameters of bidegree $(1,d-1)$ and
$(2,2d-2)$, respectively, and cohomology is regarded as concentrated in
negative degrees and weight $0$ (the reader is reminded that $\mathbb{Q}^{w}$
denotes the orientation sheaf). This bigraded vector space becomes a Lie
algebra with bracket defined to be
$[\alpha\otimes v,\beta\otimes v]=(-1)^{d(\beta)(d-1)}\alpha\beta\otimes[v,v]$
and zero otherwise, where $\alpha$ and $\beta$ are multiplied via the twisted
cup product.
###### Theorem 4.7 ([Knu17]).
Let $M$ be a manifold of even dimension $d$. There is an isomorphism of
bigraded vector spaces
$\bigoplus_{n\geq 0}H_{*}(B_{n}(M);\mathbb{Q})\cong
H_{*}^{\mathrm{Lie}}(\mathfrak{g}_{M}).$
Through Theorem 4.7, stability phenomena in Lie algebra homology become
stability phenomena in the homology of configuration spaces. As a matter of
notation, we write $\mathfrak{h}_{M}=H_{c}^{*}(M;\mathbb{Q})\otimes[v,v]$ and
$\mathfrak{k}_{M}=H_{c}^{*}(M;\mathbb{Q}^{w})\otimes v$, considered as Abelian
Lie algebras. Thus, we have the exact transit
$\mathfrak{h}_{M}\to\mathfrak{g}_{M}\to\mathfrak{k}_{M}$. We observe that, for
a degree $i$ cohomology class $\alpha$, as elements of $\mathfrak{g}_{M}[1]$,
we have
$d(\alpha\otimes v)=m(\alpha\otimes v)=d-i,\qquad d(\alpha\otimes[v,v])=2d-i-1,\qquad m(\alpha\otimes[v,v])=d-\frac{i+1}{2}.$
Note that $\mathfrak{g}_{M}$ satisfies Assumption 4.1.
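As a hedged tabulation of these formulas (our own illustration, not part of the argument), the following sketch lists the degrees and slopes of the generators of $\mathfrak{g}_{M}[1]$ for a given even $d$; this is how the inequalities in the proofs below are obtained.

```python
from fractions import Fraction

def generator_bidegrees(d):
    """Degree and slope in g_M[1] of a (x) v and a (x) [v,v], for a class a
    of cohomological degree i on a d-manifold, per the displayed formulas."""
    rows = []
    for i in range(d + 1):
        rows.append(("v", i, d - i, Fraction(d - i)))
        rows.append(("[v,v]", i, 2 * d - i - 1, d - Fraction(i + 1, 2)))
    return rows

table = generator_bidegrees(2)
assert min(m for (kind, i, deg, m) in table if kind == "v") == 0  # used with C = 0
assert max(m for (kind, i, deg, m) in table if i > 0) == 2 - 1    # no compact
# component forces i > 0, giving the bound m_max <= d - 1 used with C = d - 1
```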
As a first example, we give a proof of classical homological stability.
###### Proof of Theorem 1.1.
Since $M$ is a manifold, we have $0\leq i\leq d$ in the equations above, so we
have the inequalities
$m_{\min}(\mathfrak{h}_{M}[1])\geq\frac{d-1}{2}>0\qquad\text{and}\qquad m_{\min}(\mathfrak{k}_{M}[1])\geq 0$
(we use that $d>1$). The claim follows after invoking the minimal slope
version of Proposition 4.2 with $C=0$ and noting that
$\mathfrak{k}_{M}[1]_{0}^{0}\cong H_{c}^{d}(M;\mathbb{Q}^{w})\cong
H_{0}(M;\mathbb{Q})\neq 0$ by Poincaré duality. ∎
We turn now to extremal stability, our main result.
###### Proof of Theorem 1.2.
Assume first that $M$ has no compact component. Then $0<i\leq d$ in the
equations above, so
$m_{\max}(\mathfrak{h}_{M}[1])\leq d-1\qquad\text{and}\qquad m_{\max}(\mathfrak{k}_{M}[1])\leq d-1.$
We observe that $d(\alpha\otimes v)$ is even if and only if $i$ is even, in
which case the second inequality is strict; thus, we may apply Proposition 4.3
with $C=d-1$. The claim in this case follows upon noting that
$\mathfrak{h}[1]_{d-1}^{0}\cong H_{c}^{1}(M;\mathbb{Q})\cong
H_{d-1}(M;\mathbb{Q}^{w})$ by Poincaré duality.
In general, we proceed by induction on the number of compact components of
$M$. Writing $\dot{M}$ for the complement of a point in $M$ lying in a compact
component, we have the exact sequence of Lie algebras
$\mathfrak{g}_{\dot{M}}\to\mathfrak{g}_{M}\to\mathfrak{l},$
where $\mathfrak{l}$ is free on one generator of bidegree $(1,d-1)$ or
$(2,2d-2)$ according to whether the component in question is orientable. In
either case, this sequence splits by freeness, expressing $\mathfrak{g}_{M}$
as a semidirect product. Since $\mathfrak{l}$ centralizes every element of
$\mathfrak{h}_{M}=H_{c}^{*}(M;\mathbb{Q})\otimes[v,v]$, the claim follows from
Corollary 4.4 (see Remark 4.5). ∎
We note the following structural statement, which we have established in the
proof of Theorem 1.2.
###### Theorem 4.8.
Let $M$ be as in Theorem 1.2. Then every ray of slope $d-1$ in
$\bigoplus_{n\geq 0}H_{*}(B_{n}(M))$ is a finitely generated
$\operatorname{Sym}(H_{d-1}(M;\mathbb{Q}^{w})\langle 2\rangle[2d-2])$-module.
If $H_{d}(M;\mathbb{Q})=0$, then these modules are free.
###### Proof.
The proof of Theorem 1.2 uses the central subalgebra
$H^{1}_{c}(M)\otimes[v,v]\subseteq\mathfrak{h}_{M}\subseteq\mathfrak{g}_{M}$.
By Poincaré duality, $H^{1}_{c}(M)\cong H_{d-1}(M;\mathbb{Q}^{w})$. There it is
shown that every slope $d-1$ ray is a finitely generated
$\operatorname{Sym}(H^{1}_{c}(M)\langle 2\rangle[1])$-module.
When $H^{0}_{c}(M;\mathbb{Q}^{w})\cong H_{d}(M;\mathbb{Q})=0,$ we have that
$\mathfrak{j}_{M}:=\bigoplus_{i>0}H^{i}_{c}(M;\mathbb{Q}^{w})\otimes
v\oplus\bigoplus_{i\neq 1}H^{i}_{c}(M;\mathbb{Q})\otimes[v,v]$
is an ideal; in fact,
$[\mathfrak{g}_{M},\mathfrak{g}_{M}]\subseteq\mathfrak{j}_{M}$ since any
element of $\mathfrak{g}_{M}$ of the form $\alpha\otimes[v,v]$ is central, and
any pair of elements $\beta_{1}\otimes v,\beta_{2}\otimes
v\in\mathfrak{g}_{M}$ with $d(\beta_{i})>0$ satisfies $[\beta_{1}\otimes
v,\beta_{2}\otimes v]\in\mathfrak{j}_{M}$.
Further,
$H^{1}_{c}(M)\otimes[v,v]\to\mathfrak{g}_{M}\to\mathfrak{g}_{M}/\mathfrak{j}_{M}$
is a split transit. Hence as in the proof of Corollary 4.6, the action of
$\operatorname{Sym}(H_{d-1}(M;\mathbb{Q}^{w})[1]\langle 2\rangle)$ extends to
an action of the Weyl algebra. Therefore every ray of slope $d-1$ is a
finitely generated free module. ∎
### 4.3. Vanishing and a loose end
If $H_{d-1}(M;\mathbb{Q})=0$, then the conclusion of Theorem 1.2 is that
$H_{\nu_{n}-i}(B_{n}(M);\mathbb{Q})$ is eventually zero, i.e., the homology of
the configuration spaces of $M$ has maximal radial slope strictly less than
$d-1$. In such a situation, the question of extremal stability should concern
rays of smaller slope. For simplicity, we assume that $M$ is orientable.
###### Theorem 4.9.
Let $M$ be an orientable manifold of finite type and even dimension $d\geq 2$.
Fix $r\geq 3$, and suppose that $H_{d-s}(M;\mathbb{Q})=0$ for $0<s<r$. For $n$
sufficiently large with respect to $i$, the function $n\mapsto\dim
H_{i+n(d-1-\lfloor\frac{r}{2}\rfloor)}(B_{n}(M);\mathbb{Q})$ is equal to a
quasi-polynomial in $n$ of period dividing two, and degree at most ${\dim
H_{d-r}(M;\mathbb{Q}^{w})-1}$ if $r$ is odd and ${\dim
H_{d-r-1}(M;\mathbb{Q}^{w})-1}$ if $r$ is even.
###### Proof.
The proof mirrors that of Theorem 1.2, with the reduction to the non-compact
case proceeding unchanged. We aim to apply Proposition 4.3 with
$C=d-1-\lfloor\frac{r}{2}\rfloor$. Since $M$ is orientable, twisted and
untwisted cohomology coincide, and $H_{c}^{s}(M;\mathbb{Q})\cong
H_{d-s}(M;\mathbb{Q})=0$ for $0\leq s<r$. Thus, we have
$m_{\max}(\mathfrak{h}_{M}[1])\leq d-\frac{r+1}{2}\qquad\text{and}\qquad m_{\max}(\mathfrak{k}_{M}[1])\leq d-r<C,$
where we use that $r\geq 3$. We observe that $d(\alpha\otimes[v,v])$ is even
if and only if $i$ is odd, so
$m_{\max}(\mathfrak{h}_{M}[1]^{0})\leq\begin{cases}d-\frac{r+1}{2}&\quad
r\text{ odd}\\\ d-\frac{r+2}{2}&\quad r\text{ even},\end{cases}$
as desired. As before, the claim follows from Poincaré duality. ∎
In the case $r=2$, the situation is unclear, and we ask the following.
###### Question 4.10.
Suppose that $H_{d-1}(M;\mathbb{Q})=0$. For $i\in\mathbb{N}$, is the Hilbert
function
$n\mapsto\dim H_{{n(d-2)+i}}(B_{n}(M);\mathbb{Q})$
eventually a quasi-polynomial?
This question has an affirmative answer under the further assumption that
$H_{d-2}(M;\mathbb{Q})=0$, by Theorem 4.9, and it is not hard to show that the
same is true when $H_{d-3}(M;\mathbb{Q})=0$ using Proposition 4.2.
## References
* [AM16] M. F. Atiyah and I. G. Macdonald, _Introduction to commutative algebra_ , economy ed., Addison-Wesley Series in Mathematics, Westview Press, Boulder, CO, 2016, For the 1969 original see [ MR0242802]. MR 3525784
* [BCT89] C.-F. Bödigheimer, F. Cohen, and L. Taylor, _On the homology of configuration spaces_ , Topology 28 (1989), no. 1, 111–123. MR 991102 (90h:57031)
* [Chu12] T. Church, _Homological stability for configuration spaces of manifolds_ , Invent. Math. 188 (2012), no. 2, 465–504. MR 2909770
* [DCK17] G. C. Drummond-Cole and B. Knudsen, _Betti numbers of configuration spaces of surfaces_ , J. Lond. Math. Soc. (2) 96 (2017), no. 2, 367–393. MR 3708955
* [Knu17] B. Knudsen, _Betti numbers and stability for configuration spaces via factorization homology_ , Algebr. Geom. Topol. 17 (2017), no. 5, 3137–3187. MR 3704255
* [Mag] M. Maguire, _Computing cohomology of configuration spaces_ , https://arxiv.org/abs/1612.06314.
* [McD75] D. McDuff, _Configuration spaces of positive and negative particles_ , Topology 14 (1975), 91–107. MR 0358766 (50 #11225)
* [MNP20] J. Miller, R. Nagpal, and P. Patzt, _Stability in the high-dimensional cohomology of congruence subgroups_ , Compos. Math. 156 (2020), no. 4, 822–861. MR 4079629
* [MR01] J.C. McConnell and J.C. Robson, _Noncommutative Noetherian Rings_ , American Mathematical Society, 2001.
* [Seg79] G. Segal, _The topology of spaces of rational functions_ , Acta Math. 143 (1979), no. 1-2, 39–72. MR 533892 (81c:55013)
# Quantum memory error correction computation based on Chamon model
Jian Zhao Yu-Chun Wu<EMAIL_ADDRESS>Guo-Ping Guo
###### Abstract
Quantum error correction codes play a central role in the realisation of
fault-tolerant quantum computing. Chamon model is a 3D generalization of the
toric code. The error correction computation on this model has not been
explored so far. In this work, the Chamon model is turned to a non-CSS error
correction code. Logical qubits are built by the construct of logical Pauli
operators. The property of logical operators reveals the expressions of code
distance. According to the topological properties of Chamon models, an error
elimination algorithm is proposed. Based on the error elimination algorithm,
we propose a global randomized error correction algorithm to decode Chamon
models in every single-qubit depolarized channel. This decoding algorithm is
improved by adding the pretreatment process, termed the probabilistic greedy
local algorithm, which adapts to different kinds of high-dimensional models.
The estimated threshold error rate for numerical experiment can be raised to
$4.92\%$.
###### keywords:
Chamon model , quantum error correction computation , decoding algorithm
###### PACS:
03.65.Ud, 03.67.Mn
###### MSC:
81-08 , 94B35
[inst1]Key Laboratory of Quantum Information, Chinese Academy of Sciences,
School of Physics, University of Science and Technology of China, Hefei,
Anhui, 230026, P. R. China [inst2]CAS Center For Excellence in Quantum
Information and Quantum Physics, University of Science and Technology of
China, Hefei, Anhui, 230026, P. R. China [inst3]Hefei National Laboratory,
University of Science and Technology of China, Hefei 230088, P. R. China
[inst4]Institute of Artificial Intelligence, Hefei Comprehensive National
Science Center, Hefei, Anhui, 230088, P. R. China [inst5]Origin Quantum
Computing Hefei, Anhui 230026, P. R. China
## 1 Introduction
For certain computing tasks, it is reasonable to expect that quantum computers
may outperform classical computers[1]. Quantum computing promises exponentially
or quadratically faster processing of factoring problems[2, 3], quantum
simulation problems[4] and searching problems[5]. However, due to the
unavoidable presence of quantum noise, the realization of practical quantum
computers faces serious challenges.
Quantum error correction codes play a central role in the realisation of
fault-tolerant quantum computing[6, 7]. The type of encoding determines the
arrangement of qubits in the hardware and the control of quantum gates at the
software level. Many error-correcting codes have been realised, including the
Shor code[8], surface code[9, 10, 11], color code[12, 13], repetition code[14,
15, 16] and bosonic quantum error-correcting codes[17]. Work on
high-dimensional error-correcting codes remains mainly theoretical, covering
the $4$-D toric code[18], high-dimensional color codes[19] and cubic codes[20].
These proposed high-dimensional error correction codes with geometric
topological structures belong to CSS codes, whose stabilizers can be divided
into two classes.
A natural question is whether fault-tolerant quantum computation can be
analysed in non-CSS codes with topological structures. The $XZZX$ code, a
variant of the surface code, has been proposed [21]. Out of curiosity, we turn
our attention to high-dimensional non-CSS codes.
The original Chamon model was proposed as a many-body Hamiltonian whose system
cannot reach its ground states even when the environment temperature is close
to absolute zero; such states are termed topologically ordered ground
states[22]. It can be regarded as a $3$-dimensional generalization of toric
codes with six-qubit nearest-neighbor interactions on a $3$-dimensional
lattice. Bravyi thoroughly investigated the types of the excitations in this
model and demonstrated the ground state degeneracy[23]. Recently, Shirley has
studied the fermionic gauge theory and fracton topological order in Chamon
models[24]. Nevertheless, error correction quantum computation based on Chamon
models has not been considered so far. To fill this gap, and inspired by
previous work, we turn the Chamon model into a kind of error correction code.
In this paper, a construction method for logical qubits is proposed in Chamon
models of arbitrary scale. The expression for the code distance of Chamon
models is derived. According to the topological properties of Chamon models,
this paper presents a specific error correction algorithm. An improved error
correction algorithm is also proposed, which slightly raises the threshold to
$4.92\%$.
For given $N$ data qubits, one can construct a stabilizer code with parameters
$[N,k,d]$, where $k$ is the number of constructed logical qubits and $d$ is
called the code distance[25]. The sizes of $k$ and $d$ respectively reflect the
number of logical qubits the code accommodates and its fault-tolerance
capability[26]. In Chamon models, there exists a trade-off between $k$ and $d$.
Asymptotically, when the number of physical qubits is $\mathcal{O}(n^{3})$, the
maximum number of logical qubits can reach $\mathcal{O}(n)$, with
$d=\mathcal{O}(n)$; if the number of logical qubits is sacrificed down to a
constant, the code distance $d$ can reach $\mathcal{O}(n^{2})$. For example, to
encode a constant number of logical qubits, $\mathcal{O}(d^{2})$ physical
qubits are needed in the surface code, but in Chamon models only
$\mathcal{O}(d^{3/2})$ qubits are required.
Theoretically, the code distance of Chamon models with dimensions
$\alpha_{x},\alpha_{y},\alpha_{z}$ is still an unresolved problem. For Chamon
models with $N$ qubits, the code distance $d$ was previously estimated only by
a rough upper bound $\mathcal{O}(\sqrt{N})$[23]. We solve this problem in two
cases: the code distance is
$d=\min\\{\alpha_{y}\alpha_{z},\alpha_{z}\alpha_{x},\alpha_{x}\alpha_{y}\\}$
when $\alpha_{x},\alpha_{y},\alpha_{z}$ are pairwise coprime, and $d=2\alpha$
when $\alpha_{x}=\alpha_{y}=\alpha_{z}=\alpha$.
The paper is organized as follows. In Sec.2, the definition of the Chamon model
is introduced. In Sec.3, through the construction of logical operators, the
Chamon model is turned into $4\alpha$ logical qubits and the properties of the
logical operators are derived. The code distances of different Chamon models
are analysed in Sec.4. When the three dimensions are coprime, the logical
operators become half-chain operators forming a loop in the toric
representation, which is discussed in A. In Sec.5, a global error correction
algorithm for decoding Chamon models is constructed and the threshold is
estimated to be $p_{th}=4.45\%$. Considering the local properties of sparse
qubit errors, the decoding algorithm is improved, raising the threshold to
$4.92\%$. To organize the material, the proof of Lemma 3 is given in B and the
procedure of the probabilistic greedy local algorithm is expounded in C.
Finally, in Sec.6, we draw the conclusions of this paper.
Notation. The paper uses capital Roman letters $A$, $B$,$\ldots$, for matrices
or operators, lower case Roman letters $x$, $y$,$\ldots$, for vectors, and
Greek letters $\alpha$, $\beta$,$\ldots$, for scalars. In three-dimensional
space, denote the unit vectors by $e_{x}^{\pm}=(\pm 1,0,0)$,
$e_{y}^{\pm}=(0,\pm 1,0)$, $e_{z}^{\pm}=(0,0,\pm 1)$. In particular, denote the
Pauli operators by $X,Z$, and $Y=ZX$.
## 2 Chamon model
In this section, we first introduce the definition of the Chamon model and then
analyse the degrees of freedom of the code space. The number of degrees of
freedom determines the maximum number of logical qubits that can be used.
### 2.1 Building Chamon model
On a three-dimensional cubic Bravais lattice, qubits are placed on the face
centers (each shared by two neighbouring cubes) and on the vertices (each
shared by eight cubes), with exactly one qubit per site. Each stabilizer acts
on the six qubits closest to either the midpoint of a fixed cube edge or the
body center of a cube. According to the directions of the cube edges, called
the $x$-, $y$- and $z$-directions, these six qubits are divided into three
classes. Each class contains the two qubits whose sites lie on a line parallel
to one of the three directions, and the stabilizer acts on them by $X$, $Y$ and
$Z$, respectively.
For convenience and rigor, we introduce some notation. Given
a positive integer $\alpha$, the set
$\mathcal{Z}_{\alpha}=\\{0,1,\dots,\alpha-1\\}$ equipped with a
modulo-$\alpha$ addition operation forms a group. The Chamon model is
constructed by the group
$\mathcal{A}=\mathcal{Z}_{2\alpha_{x}}\times\mathcal{Z}_{2\alpha_{y}}\times\mathcal{Z}_{2\alpha_{z}}$.
###### Definition 1.
Given a group
$\mathcal{A}=\mathcal{Z}_{2\alpha_{x}}\times\mathcal{Z}_{2\alpha_{y}}\times\mathcal{Z}_{2\alpha_{z}}$,
$\alpha_{x},\alpha_{y},\alpha_{z}\in\mathcal{Z}^{+}$, let
$\mathcal{D}=\\{(x,y,z)\in\mathcal{A}|(x+y+z)/2\in\mathcal{Z}\\}$. Data qubits
are placed on each point of $\mathcal{D}$, called the data qubit set. For each
$s=(s_{x},s_{y},s_{z})\in\mathcal{A}\backslash\mathcal{D}$, denote $s+e^{\pm}$
briefly by $e^{\pm}$; then the stabilizer group is generated by
$\displaystyle
S_{s}=X_{e_{x}^{+}}X_{e_{x}^{-}}Y_{e_{y}^{+}}Y_{e_{y}^{-}}Z_{e_{z}^{+}}Z_{e_{z}^{-}}.$
(1)
Denote the stabilizer group by $\mathcal{S}_{\mathcal{D}}=\langle
S_{s}\rangle$, $s\in\mathcal{A}\backslash\mathcal{D}$. The data qubit set
$\mathcal{D}$ together with the corresponding stabilizer group
$\mathcal{S}_{\mathcal{D}}$ forms the Chamon model.
By the definition, the Chamon model exhibits a one-to-one correspondence
between qubits and stabilizers. Specifically, a measure qubit can be placed one
unit length from every data qubit along the positive $x$-axis; see Fig.1. A
minimal construction of the data qubits and stabilizer supports is sketched
below.
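The following sketch is our own illustration (in Python, not the authors' code) of Definition 1; the variable names and example dimensions are ours.

```python
# A hedged sketch of Definition 1: data qubits sit on sites of even coordinate
# sum; each remaining site labels a stabilizer acting on its six axis-neighbors
# by X, Y, Z according to the direction.
from itertools import product

ax, ay, az = 2, 3, 5                       # example linear dimensions
dims = (2 * ax, 2 * ay, 2 * az)
sites = list(product(range(dims[0]), range(dims[1]), range(dims[2])))

D = [s for s in sites if sum(s) % 2 == 0]  # data qubits: (x+y+z)/2 integral
S = [s for s in sites if sum(s) % 2 == 1]  # stabilizer labels, A \ D

def support(s):
    """Map each of the six qubits of S_s to its Pauli, with periodic wrapping."""
    x, y, z = s
    wrap = lambda q: tuple(c % n for c, n in zip(q, dims))
    return {wrap((x + 1, y, z)): "X", wrap((x - 1, y, z)): "X",
            wrap((x, y + 1, z)): "Y", wrap((x, y - 1, z)): "Y",
            wrap((x, y, z + 1)): "Z", wrap((x, y, z - 1)): "Z"}

assert len(D) == 4 * ax * ay * az == len(S)
```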
###### Remark 1.
In the special case $\alpha_{y}=1$, every stabilizer $S_{s}$ in the Chamon
model couples to only four data qubits, because the two coupled qubits in the
$y$-direction coincide for each fixed stabilizer. Hence
$Y_{e_{y}^{+}}=Y_{e_{y}^{-}}$, and the stabilizers degenerate to an interesting
case:
$\displaystyle S_{s}=X_{e_{x}^{+}}X_{e_{x}^{-}}Z_{e_{z}^{+}}Z_{e_{z}^{-}},$
which is exactly two independent layers of the surface code with $XZZX$-type
stabilizers [21]. The same holds when $\alpha_{x}=1$ or $\alpha_{z}=1$ by
rotation symmetry. For the rest of this paper, we only consider the general
situation where $\alpha_{x},\alpha_{y},\alpha_{z}>1$.
Figure 1: The Chamon model. The stabilizer $S_{(1,0,0)}$ acts on the six blue
qubits. Data qubits are placed on the face centers and the vertices of the
cubes. Each side of the cubes represents one stabilizer. For clarity, some data
qubits are omitted.
The definition of the stabilizers is consistent: all stabilizers commute. The
case where $S_{s}$ and $S_{s^{\prime}}$ share no qubits is trivial. When they
do share qubits, they either share exactly one data qubit with the same acting
type, in which case they clearly commute, or share two qubits, on each of which
the acting operators anti-commute, so the two signs cancel. Thus for all
stabilizers, $S_{s}S_{s^{\prime}}=S_{s^{\prime}}S_{s}$; this can also be
verified numerically, as sketched below.
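A hedged check reusing `S` and `support` from the sketch above: two Pauli products commute iff their single-qubit factors anticommute on an even number of shared qubits.

```python
def commute(s1, s2):
    a, b = support(s1), support(s2)
    anti = sum(1 for q in a if q in b and a[q] != b[q])  # X, Y, Z pairwise anticommute
    return anti % 2 == 0

assert all(commute(s1, s2) for s1 in S for s2 in S)
```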
### 2.2 Analysing the Eigenspace
The Chamon model, as a stabilizer code, is a subspace of the Hilbert space
$\mathcal{H}^{\otimes|\mathcal{D}|}$. For any quantum state
$|\psi\rangle\in\mathcal{H}^{\otimes|\mathcal{D}|}$, the stabilizers $S_{s}$
force the Hilbert space into common eigen-subspaces: based on the measurement
results of the stabilizers, the initial state collapses onto a fixed
eigenstate. Without accounting for dependencies among the generators, the
number of stabilizers in Eq.(1) is
$|\mathcal{D}|=4\alpha_{x}\alpha_{y}\alpha_{z}$, and we relabel them $S_{i}$,
$i=0,\dots,|\mathcal{D}|-1$. The measurement outcomes can be denoted by a
$|\mathcal{D}|$-dimensional two-valued vector
$r=(r_{0},\dots,r_{|\mathcal{D}|-1})$ with
$S_{i}|\phi\rangle=r_{i}|\phi\rangle$, $r_{i}\in\\{-1,1\\}$. For a fixed
measurement $r$, the eigenspace is denoted by
$\displaystyle\mathcal{E}_{r}=\left\\{|\phi\rangle\in\mathcal{H}^{\otimes|D|}\big{|}S_{i}|\phi\rangle=r_{i}|\phi\rangle,S_{i}\in\mathcal{S}_{\mathcal{D}},i=0,\dots,|\mathcal{D}|-1\right\\}.$
Denote the greatest common divisor of $\alpha_{x},\alpha_{y},\alpha_{z}$ by
$\alpha$. Bravyi has proved that the dimension of the eigenspace
$\mathcal{E}_{r}$ is $2^{4\alpha}$ [23]. Our purpose is to build $4\alpha$
logical qubits in any eigenspace.
## 3 Logical qubits
In this section, by constructing proper logical operators, logical qubits are
built in any Chamon model.
### 3.1 The special case in $\alpha=1$
When $\alpha=1$, at most Chamon model can form $4\alpha=4$ logical qubits. The
purpose is to construct a kind of logical $X$ operators and logical $Z$
operators, denoted by $X_{L}^{(i)}$ and $Z_{L}^{(i)}$, satisfying
$\big{(}X_{L}^{(i)}\big{)}^{2}=\big{(}Z_{L}^{(i)}\big{)}^{2}=I\qquad\text{and}\qquad X_{L}^{(i)}Z_{L}^{(j)}=(-1)^{\delta_{ij}}Z_{L}^{(j)}X_{L}^{(i)},$ (2)
where $\delta_{ij}=0$ when $i\neq j$ and $\delta_{ij}=1$ when $i=j$,
$i,j=0,1,2,3$. Given an eigenspace $\mathcal{E}_{r}$, all logical operators
must commute with every $S_{i}$, which ensures that logical operations are
performed inside the eigenspace. In analogy with the Pauli relations on data
qubits, the logical $Y$ operators are defined as $Y_{L}^{(i)}\triangleq
Z_{L}^{(i)}X_{L}^{(i)}$.
The construction of proper logical operators $X_{L}$ and $Z_{L}$ is not unique.
A natural idea is that $X_{L}^{(i)}$ can be generated by bit flips $X$ on data
qubits only, and $Z_{L}^{(i)}$ by phase flips $Z$ only. Next, we introduce the
construction of the logical operators through a classification of the data
qubits. Note that the data qubit set $\mathcal{D}$ can be divided into four
subsets: one set of vertex qubits and three sets of face-centered qubits. These
subsets are generated by the points $g_{0}=(0,0,0)$, $g_{1}=(1,1,0)$,
$g_{2}=(1,0,1)$ and $g_{3}=(0,1,1)$, respectively. Define the data qubit sets
$\mathcal{D}_{x}^{g_{i}},\mathcal{D}_{z}^{g_{i}}\subset\mathcal{D}$ as
$\displaystyle\mathcal{D}_{x}^{g_{i}}=\\{d|d=g_{i}+2\lambda_{z}e_{z}^{+}+2\lambda_{y}e_{y}^{+},\lambda_{z},\lambda_{y}\in\mathcal{Z}\\},\qquad\mathcal{D}_{z}^{g_{i}}=\\{d|d=g_{i}+2\lambda_{x}e_{x}^{+}+2\lambda_{y}e_{y}^{+},\lambda_{x},\lambda_{y}\in\mathcal{Z}\\},$
(3)
where $i=0,1,2,3$. Clearly,
$\mathcal{D}_{x}^{g_{i}}\cap\mathcal{D}_{x}^{g_{j}}=\varnothing$ for $i\neq j$.
The union of $\mathcal{D}_{x}^{g_{0}}$ and $\mathcal{D}_{x}^{g_{3}}$ is
$\\{d\in\mathcal{D}|d_{x}=0\\}$, denoted by $\mathcal{D}_{x=0}$, and
$\mathcal{D}_{x}^{g_{1}}\cup\mathcal{D}_{x}^{g_{2}}=\mathcal{D}_{x=1}$; each
constitutes a lattice plane parallel to the $yOz$ plane. The same
considerations apply to $\mathcal{D}_{z}^{g_{i}}$. Define
$X|_{\mathcal{D}^{\prime}}=\prod_{j\in\mathcal{D}^{\prime}}X_{j}$; then
###### Theorem 1.
Given a Chamon model with $\alpha=1$, the operators
$X|_{\mathcal{D}_{x}^{g_{i}}}$ and $Z|_{\mathcal{D}_{z}^{g_{i}}}$ make the
Chamon model into $4$ logical qubits.
###### Proof.
Firstly, for a fixed $i$,
$X^{2}|_{\mathcal{D}_{x}^{g_{i}}}=I=Z^{2}|_{\mathcal{D}_{z}^{g_{i}}}$. Note
that
$\mathcal{D}_{x}^{g_{i}}\cap\mathcal{D}_{z}^{g_{i}}=\\{d|d=g_{i}+2\lambda_{y}e_{y}^{+},\lambda_{y}\in\mathcal{Z}\\}$,
which contains $\alpha_{y}$ elements. Since $\alpha_{y}$ is odd by assumption,
the operators $X|_{\mathcal{D}_{x}^{g_{i}}}$ and $Z|_{\mathcal{D}_{z}^{g_{i}}}$
anti-commute. Since all the stabilizers commute with each other,
$X|_{\mathcal{D}_{x}^{g_{i}}}$ and $Z|_{\mathcal{D}_{z}^{g_{i}}}$ cannot be
generated by stabilizers. One can easily check that
$X|_{\mathcal{D}_{x}^{g_{i}}}$ and $Z|_{\mathcal{D}_{z}^{g_{i}}}$ commute with
each stabilizer generator, so the Chamon model can indeed carry logical qubits.
Further, since $D_{x}^{g_{i}}\cap D_{z}^{g_{j}}=\varnothing$, the operators
$X|_{\mathcal{D}_{x}^{g_{i}}}$ and $Z|_{\mathcal{D}_{z}^{g_{j}}}$ pairwise
commute for all $i\neq j$. It follows that the operators satisfy the conditions
in Eq.(3.1). The commutation relations guarantee the independence of the
logical operators $X|_{\mathcal{D}_{x}^{g_{i}}},Z|_{\mathcal{D}_{z}^{g_{i}}}$,
that is, they cannot be generated by the other logical operators
$X|_{\mathcal{D}_{x}^{g_{j}}},Z|_{\mathcal{D}_{z}^{g_{j}}}$, $i\neq j$. We have
shown that the Chamon model indeed forms 4 logical qubits.
∎
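A hedged numerical check of Theorem 1 (our own sketch, reusing `dims`, `S` and `support` from Sec.2 with $(\alpha_{x},\alpha_{y},\alpha_{z})=(2,3,5)$, so $\alpha=1$ and $\alpha_{y}$ is odd): the two half-plane operators anticommute with each other and commute with every stabilizer.

```python
from math import lcm  # Python >= 3.9

def plane_op(g, d1, d2, pauli):
    """All qubits g + 2*l1*d1 + 2*l2*d2 on the torus, carrying a fixed Pauli."""
    L = lcm(*dims)
    qs = {tuple((g[i] + 2 * l1 * d1[i] + 2 * l2 * d2[i]) % dims[i] for i in range(3))
          for l1 in range(L) for l2 in range(L)}
    return {q: pauli for q in qs}

XL = plane_op((0, 0, 0), (0, 0, 1), (0, 1, 0), "X")  # X on D_x^{g_0}
ZL = plane_op((0, 0, 0), (1, 0, 0), (0, 1, 0), "Z")  # Z on D_z^{g_0}

def sign(a, b):
    """+1 if the two Pauli products commute, -1 if they anticommute."""
    anti = sum(1 for q in a if q in b and a[q] != b[q])
    return (-1) ** anti

assert sign(XL, ZL) == -1  # the overlap has alpha_y = 3 qubits, an odd number
assert all(sign(P, support(s)) == 1 for P in (XL, ZL) for s in S)
```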
To analyse the properties of the logical operators, we introduce some concepts.
Consider the centralizer of $\mathcal{S}$ in the Pauli group, denoted by
$\displaystyle\mathcal{C}(\mathcal{S})=\langle\mathcal{S},X|_{\mathcal{D}_{x}^{g_{i}}},Z|_{\mathcal{D}_{z}^{g_{i}}}\rangle,$
where $i=0,1,2,3$. We are mainly concerned with the logical operators up to
$\mathcal{S}$ invariance, so we introduce the quotient group generated by
$\displaystyle\mathcal{C}(\mathcal{S})/\mathcal{S}=\langle
X|_{\mathcal{D}_{x}^{g_{i}}},Z|_{\mathcal{D}_{z}^{g_{i}}}\rangle,$
where we denote the equivalence classes of logical operators by the same notation.
Note that the logical operators can be translated. For example, for
$g_{0}=(0,0,0)$ and $g_{0}^{\prime}=(2,0,0)$, clearly,
$X|_{\mathcal{D}_{x}^{g_{0}}}=X|_{\mathcal{D}_{x}^{g_{0}^{\prime}}}$, due to
$\displaystyle\prod_{S_{j}\in\mathcal{S}^{\prime}}S_{j}X|_{\mathcal{D}_{x}^{g_{0}}}=X|_{\mathcal{D}_{x}^{g_{0}^{\prime}}},$
where
$\mathcal{S}^{\prime}=\\{S_{s}\in\mathcal{S}|s=e_{x}^{+}+2\lambda_{y}e_{y}^{+}+2\lambda_{z}e_{z}^{+},\lambda_{y},\lambda_{z}\in\mathcal{Z}\\}$.
Thus, it can be concluded that the logical operators
$X|_{\mathcal{D}_{x}^{g_{i}}},Z|_{\mathcal{D}_{z}^{g_{i}}}$ can be translated
along the $x$-axis and $z$-axis directions, respectively.
Since they are supported on half of the qubits of a certain lattice plane, the
operators $X|_{\mathcal{D}_{x}^{g_{i}}}$ and $Z|_{\mathcal{D}_{z}^{g_{i}}}$ are
termed half-plane operators. Clearly, the translates of the half-plane
operators defined in Thm.1 are still half-planar.
### 3.2 The general case
In this section, the logical $X$ operators and logical $Z$ operators in
$\mathcal{C}(\mathcal{S})/\mathcal{S}$ are constructed in general.
Before the construction, we first visualize the geometric configuration of the
logical operators by an example with $\alpha_{x}=\alpha_{y}=\alpha_{z}=3$, so
that the greatest common divisor is $\alpha=3$. Let $g_{0,1}=(0,2,0)$ and
define a square half-plane region in the $yOz$ plane as
$\displaystyle\mathcal{D}_{x}^{{0,1}}=\\{d|d=g_{0,1}+\lambda(e_{z}^{+}+e_{y}^{+})+\lambda^{\prime}(e_{z}^{+}+e_{y}^{-}),\lambda,\lambda^{\prime}\in\mathcal{Z}_{\alpha}\\}.$
Then apply an $X$ operator to each qubit located in $\mathcal{D}_{x}^{{0,1}}$,
giving the operator $X|_{\mathcal{D}_{x}^{{0,1}}}$. It is supported on $9$
qubits, which form a square occupying half of the $yOz$ plane. One can check
that it commutes with all stabilizers; see the operator $X_{L0}^{1}$ in Fig.2.
Applying this idea to the case $\alpha_{x}=\alpha_{y}=\alpha_{z}=\alpha$,
define $g_{i,j}=g_{i}+2je_{y}^{+}$, and let
$\displaystyle\mathcal{D}_{x}^{i,j}=\\{d|d=g_{i,j}+\lambda(\mu_{ij}e_{z}^{+}+e_{y}^{+})+\lambda^{\prime}(\mu_{ij}e_{z}^{+}+e_{y}^{-}),\lambda,\lambda^{\prime}\in\mathcal{Z}_{\alpha}\\},$
$\displaystyle\mathcal{D}_{z}^{i,j}=\\{d|d=g_{i,j}+\lambda(\nu_{ij}e_{x}^{+}+e_{y}^{+})+\lambda^{\prime}(\nu_{ij}e_{x}^{+}+e_{y}^{-}),\lambda,\lambda^{\prime}\in\mathcal{Z}_{\alpha}\\},$
(4)
where $i=0,1,2,3$, $j=0,1,2,\dots,\alpha-1$, $\mu_{ij}=(-1)^{[i/2]}$ and
$\nu_{ij}=(-1)^{\lceil i/2\rceil\ (\mathrm{mod}\ 2)}$. Let
$X_{Li}^{j}=X|_{\mathcal{D}_{x}^{i,j}}$ and
$Z_{Li}^{j}=Z|_{\mathcal{D}_{z}^{i,j}}$, termed square logical operators.
###### Lemma 1.
Given a Chamon model with $\alpha_{x}=\alpha_{y}=\alpha_{z}=\alpha$, the
operators $X_{Li}^{j}$ and $Z_{Li}^{j}$ make the Chamon model into $4\alpha$
logical qubits.
###### Proof.
Since $D_{x}^{i,j}\cap D_{z}^{i,j}=\\{g_{i,j}\\}$, the operators $X_{Li}^{j}$
and $Z_{Li}^{j}$ anti-commute. These square half-plane operators commute with
all the stabilizers. Lastly, to verify that $X_{Li}^{j}$ and
$Z_{Li^{\prime}}^{j^{\prime}}$ commute, one can check $D_{x}^{i,j}\cap
D_{z}^{i^{\prime},j^{\prime}}=\varnothing$ for all
$(i,j)\neq(i^{\prime},j^{\prime})$. Thus, $X_{Li}^{j}$ and $Z_{Li}^{j}$, as
logical operators, make the Chamon model into $4\alpha$ logical qubits. ∎
A specific example with $\alpha_{x}=\alpha_{y}=\alpha_{z}=3$ is presented in
Fig.2. Note that each logical operator occupies a square half-plane.
Figure 2: The Chamon model with $\alpha_{x}=\alpha_{y}=\alpha_{z}=3$. Data
qubits are marked with square borders in different colors, corresponding to the
logical operators. Among them, the blue borders correspond to the case $i=0$,
lighter blue to $j=1$ and darker blue to $j=2$. The sites $g_{i,j}$ are filled
with the corresponding colors. Solid borders denote logical $X$ operators and
dotted borders denote logical $Z$ operators. Part of the logical operators is
omitted.
The general construction of logical operators follows the idea of square
half-plane operators. Denote the greatest common divisor of
$\alpha_{x},\alpha_{y},\alpha_{z}$ by
$\alpha=g(\alpha_{x},\alpha_{y},\alpha_{z})$. Then
$\alpha_{x}=\beta_{x}\alpha$, $\alpha_{y}=\beta_{y}\alpha$ and
$\alpha_{z}=\beta_{z}\alpha$, where $\beta_{x},\beta_{y}$ and $\beta_{z}$
satisfy $g(\beta_{x},\beta_{y},\beta_{z})=1$. At least one of
$\beta_{x},\beta_{y},\beta_{z}$ must be odd; without loss of generality,
suppose $\beta_{y}$ is odd. The intuition is that a plane parallel to the $yOz$
plane in the Chamon model can be divided into $\beta_{y}\beta_{z}$ congruent
squares; applying the corresponding $X$ operators on every square yields
logical $X$ operators, and the same applies to logical $Z$ operators.
Based on the previous analysis, to handle the general case it is necessary to
modify the definition in Eq.(3.2). Let
$\displaystyle\mathcal{D}_{x}^{i,j}=\\{d|d=g+\lambda(\mu_{ij}e_{z}^{+}+e_{y}^{+})+\lambda^{\prime}(\mu_{ij}e_{z}^{+}+e_{y}^{-}),\lambda,\lambda^{\prime}\in\mathcal{Z}_{\alpha},g\in\mathcal{G}_{i,j}^{x}\\},$
(5)
$\displaystyle\mathcal{D}_{z}^{i,j}=\\{d|d=g+\lambda(\nu_{ij}e_{x}^{+}+e_{y}^{+})+\lambda^{\prime}(\nu_{ij}e_{x}^{+}+e_{y}^{-}),\lambda,\lambda^{\prime}\in\mathcal{Z}_{\alpha},g\in\mathcal{G}_{i,j}^{z}\\},$
(6)
where
$\displaystyle\mathcal{G}_{i,j}^{x}=\\{g|g=g_{i,j}+2\lambda\alpha e_{y}^{+}+2\lambda^{\prime}\alpha e_{z}^{+},\lambda\in\mathcal{Z}_{\beta_{y}},\lambda^{\prime}\in\mathcal{Z}_{\beta_{z}}\\},$
$\displaystyle\mathcal{G}_{i,j}^{z}=\\{g|g=g_{i,j}+2\lambda\alpha e_{y}^{+}+2\lambda^{\prime}\alpha e_{x}^{+},\lambda\in\mathcal{Z}_{\beta_{y}},\lambda^{\prime}\in\mathcal{Z}_{\beta_{x}}\\}.$
###### Theorem 2.
Given a Chamon model with $\alpha=g(\alpha_{x},\alpha_{y},\alpha_{z})$, the
operators $X_{Li}^{j}$ and $Z_{Li}^{j}$ make the Chamon model into $4\alpha$
logical qubits.
Let us go on to study further properties of the square logical $X,Z$ operators.
First, observe the relation between nearest-neighbor square operators. In
Fig.2, $X_{L0}^{1}$ and $X_{L3}^{1}$ share 6 qubits, and the support of
$X_{L3}^{1}$ is a translate of that of $X_{L0}^{1}$, that is,
$\displaystyle\mathcal{D}_{x}^{3,1}=\mathcal{D}_{x}^{0,1}+e_{z}^{+}+e_{y}^{+},$
(7)
where $\mathcal{D}^{\prime}=\mathcal{D}+x$ denotes
$\mathcal{D}^{\prime}=\\{d+x,d\in\mathcal{D}\\}$. It implies
$X_{L0}^{1}X_{L3}^{1}=X|_{\mathcal{D}}$ with
$\mathcal{D}=\\{d|d=g_{0,1}+\lambda(e_{z}^{+}+e_{y}^{-}),\lambda\in\mathcal{Z}_{2\alpha}\\}$,
which forms a rigid chain of length $2\alpha$. In general:
###### Corollary 1.
There exists a logical $X$ operator forming a rigid chain of length
$2\beta_{y}\beta_{z}\alpha$; similarly, there exists a logical $Z$ operator
forming a rigid chain of length $2\beta_{y}\beta_{x}\alpha$.
First, for any square $X$ operator in a plane parallel to $yOz$, we explain
that it is a logical operator and express it in terms of the $X_{Li}^{j}$. Let
$\displaystyle\mathcal{D}_{x}^{0,2}=\mathcal{D}_{x}^{0,1}+2e_{y}^{+},\qquad\mathcal{D}^{\prime}\triangleq\mathcal{D}_{x}^{0,1}+e_{z}^{-}+e_{y}^{+}.$
(8)
One can check that $X_{L0}^{2}X|_{\mathcal{D}^{\prime}}=X_{L0}^{1}X_{L3}^{1}$,
because they represent the same rigid chain. This implies that the square
operator $X|_{\mathcal{D}^{\prime}}$ can be generated by the other three
logical operators, $X|_{\mathcal{D}^{\prime}}=X_{L0}^{1}X_{L3}^{1}X_{L0}^{2}$.
We call $X_{L0}^{1},X_{L3}^{1},X_{L0}^{2},X|_{\mathcal{D}^{\prime}}$ adjacent
$4$ square operators, whose supports satisfy the adjacency relations in Eq.(7)
and Eq.(3.2). Thus, in the $yOz$ plane, any square $X$ operator can always be
generated by $X_{L0}^{j},X_{L3}^{j}$, $j=0,\dots,\alpha-1$. In other words, the
product of any adjacent $4$ square operators is the identity operator up to
$\mathcal{S}$ invariance. Taking the square half-plane logical $X$ operator as
an example, define the square half-plane operator
$X_{sq}(g)=X|_{\mathcal{D}_{x}^{\mu_{l}\nu_{l}}(g)}$, where
$\displaystyle\mathcal{D}_{x}^{\mu_{l}\nu_{l}}(g)=\\{d|d=g+\lambda\mu_{l}(e_{z}^{+}+e_{y}^{+})+\lambda^{\prime}\nu_{l}(e_{z}^{+}+e_{y}^{-}),\lambda,\lambda^{\prime}\in\mathcal{Z}_{\alpha},g\in\mathcal{D}\\},$
where $\mu_{l},\nu_{l}\in\\{1,-1\\}$. In fact, the full set of relations among
adjacent $4$ square operators is as follows.
###### Corollary 2.
With $\mathcal{S}$ invariance,
$\displaystyle X_{sq}(s+e_{x}^{+})X_{sq}(s+e_{x}^{-})X_{sq}(s+e_{y}^{+})X_{sq}(s+e_{y}^{-})=I,$
$\displaystyle X_{sq}(s+e_{z}^{+})X_{sq}(s+e_{z}^{-})X_{sq}(s+e_{x}^{+})X_{sq}(s+e_{x}^{-})=I,$
$\displaystyle X_{sq}(s+e_{z}^{+})X_{sq}(s+e_{z}^{-})X_{sq}(s+e_{y}^{+})X_{sq}(s+e_{y}^{-})=I,$
(9)
where $s\in\mathcal{A}\backslash\mathcal{D}$.
###### Proof.
Each of these products either forms the envelope surface of a set of
stabilizers or is the identity transformation. One can check that the first two
products are exactly the product of all stabilizers in the corresponding
envelope. ∎
This result can be extended similarly to square half-plane $Y,Z$ operators.
Further, it reveals that any square half-plane operator can be generated by
$4\alpha$ square half-plane logical Pauli operators defined in Thm.2.
Let us revisit the half-plane operators from the special case $\alpha=1$. In
general, when $\alpha\neq 1$, we still denote the corresponding half-plane
operators by $X|_{\mathcal{D}_{x}^{g_{i}}},Z|_{\mathcal{D}_{z}^{g_{i}}}$, where
$\mathcal{D}_{x}^{g_{i}},\mathcal{D}_{z}^{g_{i}}$ have the same expressions as
in Eq.(3.1). The relations between half-plane operators and square logical
operators are as follows.
###### Corollary 3.
Each half-plane operator is a logical operator and can be generated by square
logical operators. Specifically,
$\displaystyle X|_{\mathcal{D}_{x}^{g_{i}}}=\prod_{j=0}^{\alpha-1}X_{Li}^{j},$
$\displaystyle Z|_{\mathcal{D}_{z}^{g_{i}}}=\prod_{j=0}^{\alpha-1}Z_{Li}^{j}.$
(10)
###### Proof.
Consider the product of the square operators $X_{L0}^{j}$, $j=0,\dots,\alpha-1$.
The support of this product operator is contained in the $yOz$ plane. In this
plane, the data qubits in $\mathcal{D}_{x}^{g_{0}}$ are flipped an odd number
of times, whereas those in $\mathcal{D}_{x}^{g_{3}}$ are all flipped an even
number of times, which implies $X|_{\mathcal{D}_{x}^{g_{0}}}=\prod_{j}X_{L0}^{j}$.
The remaining cases in Eq. (10) can be proved similarly. Note that half-plane
logical operators can be translated, which means any half-plane operator can
be generated by square logical operators with $\mathcal{S}$ invariance. ∎
The result shows that for any Chamon model, not only can $4$ logical qubits be
constructed, but each of these logical qubit spaces encodes $\alpha$ logical
qubits.
## 4 The distance of Chamon model
In this section, we introduce the code distance of Chamon models to analyze
their error correction capability. There exists a trade-off between the code
distance and the number of logical qubits in a Chamon model.
###### Definition 2.
For a stabilizer code $(\mathcal{D},\mathcal{S})$, the weight of a logical
operator $L$ is defined as the minimum number of qubit bit flips or phase
flips needed in $L$ with $\mathcal{S}$ invariance, denoted by $|L|$. Then the
code distance $d$ is defined as the minimum weight over all nontrivial logical
operators, that is
$\displaystyle d=\min_{L\neq I,L\in\mathcal{C}(\mathcal{S})/\mathcal{S}}|L|.$
In some cases, explicit expressions for the code distance can be given.
###### Theorem 3.
Suppose $\alpha_{x},\alpha_{y}$ and $\alpha_{z}$ are pairwise coprime, then
for Chamon models with linear dimensions $\alpha_{x},\alpha_{y},\alpha_{z}$,
the expression of code distance $d$ is described as
$\displaystyle
d=\min\\{\alpha_{x}\alpha_{y},\alpha_{y}\alpha_{z},\alpha_{z}\alpha_{x}\\}.$
(11)
The proof of Thm.3 is based on the toric representations; see Appendix A.
###### Theorem 4.
Given a Chamon model with linear dimensions
$\alpha_{x}=\alpha_{y}=\alpha_{z}=\alpha$, the expression of code distance $d$
is described as
$\displaystyle d=2\alpha.$ (12)
###### Proof.
Consider a single data qubit flip, which always locally affects the nearest 4
stabilizers in a plane. There are two ways to modify the stabilizer
measurements by applying flips to the nearest qubits: one along the 3 diagonal
lines, the other along the coordinate axes. Since logical operators commute
with all stabilizers, the affected stabilizer measurements must be eliminated,
which means the support of a logical operator needs to form a loop. If the
loop runs along a coordinate axis, the operator becomes a half-plane operator
with weight $\alpha^{2}$.
The other scenario has been discussed in Corollary 1: one can always find a
logical chain operator arranged diagonally with weight $2\alpha$. Thus the
minimum weight is $2\alpha$. ∎
For a general Chamon model with dimensions $\alpha_{x},\alpha_{y}$ and
$\alpha_{z}$, the logical operator generators can always be divided, according
to the greatest common divisor $\alpha$, into fragments distributed over
different squares. It becomes too complicated to guarantee that a
minimum-weight logical operator can be regularly divided. However, one can
easily construct logical chain operators according to Corollary 1, showing
that the code distance satisfies
$d\leq\min\\{2\beta_{y}\beta_{z}\alpha,2\beta_{y}\beta_{x}\alpha\\}$.
This reveals the relationship between the number of physical qubits, the
number of constructed logical qubits, and the code distance, and there is a
balance between these parameters. For a given number of physical qubits, the
larger the number of logical qubits, the smaller the code distance in general.
Asymptotically, when the number of physical qubits is $\mathcal{O}(n^{3})$,
the maximum number of logical qubits can be $\mathcal{O}(n)$ with a code
distance that is also $\mathcal{O}(n)$; if the number of logical qubits is
sacrificed down to a constant, the code distance can reach
$\mathcal{O}(n^{2})$.
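As a concrete illustration of this trade-off, take
$(\alpha_{x},\alpha_{y},\alpha_{z})=(2,3,5)$, which are pairwise coprime:
Theorem 3 gives $d=\min\\{2\cdot 3,\,3\cdot 5,\,5\cdot 2\\}=6$, while
$\alpha=g(2,3,5)=1$, so by Theorem 2 only $4\alpha=4$ logical qubits are
encoded. Enlarging the common divisor $\alpha$ thus buys logical qubits at the
price of code distance.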
## 5 Decoding Chamon model
Our goal is to use the Chamon model for fault-tolerant quantum computing. In
this section, let us introduce noise to Chamon models and propose an error
correction algorithm.
The influence of errors on the Chamon model is complex. If we were to analyze
errors on the measurement qubits and the error propagation between measurement
qubits and data qubits, recovery would not even be possible. Thus, here we
only analyze memory errors in the Chamon model, assuming a perfect measurement
process.
### 5.1 Noise model
To analyze memory errors on each data qubit, we naturally introduce the
depolarizing channel to Chamon models. Suppose the error probability of a data
qubit is $p$; then the depolarizing channel is represented as
$\displaystyle\mathcal{\epsilon}(\rho)=\frac{p^{\prime}}{2}I+(1-p^{\prime})\rho=(1-p)\rho+\frac{p}{3}(X\rho X+Y\rho Y+Z\rho Z),$ (13)
where $p=3p^{\prime}/4$. This channel comprises $X$, $Y$ and $Z$ flips, each
with error rate $p/3$.
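For illustration, this noise model can be sampled as follows; a minimal
sketch, assuming a flat indexing of the data qubits (an assumption for the
example, not part of the model definition):

```python
import numpy as np

def sample_depolarizing_errors(num_qubits: int, p: float, rng=None):
    """Draw i.i.d. depolarizing errors per Eq. (13): each data qubit suffers
    X, Y or Z with probability p/3 each, and no error with probability 1 - p."""
    rng = rng or np.random.default_rng()
    # encode 0 = I, 1 = X, 2 = Y, 3 = Z
    return rng.choice(4, size=num_qubits, p=[1 - p, p / 3, p / 3, p / 3])
```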
To quantify the error correction capability of Chamon models, it is necessary
to define the logical error rate and the threshold error rate. The success or
failure of a single error correction run is random and therefore unsuitable
for measuring the error correction ability of a stabilizer code. Usually, the
Monte Carlo method is used to estimate the error probability of logical
qubits. If $N$ experiments are performed and decoding fails $N_{r}$ times, the
logical error rate is defined as
$\displaystyle P_{L}=f(p)=\lim_{N\to\infty}N_{r}/N.$
It can be seen that $f$ is an increasing function. For a fixed stabilizer
code, the asymptotic behavior of the logical error rate $P_{L}$ as a function
of $p$ depends on the code distance $d$: when $d$ is relatively large and $p$
becomes smaller, the error correction process succeeds with higher
probability, i.e., $P_{L}$ becomes smaller. Strictly speaking, for
sufficiently large $d$ and sufficiently small $p$, $P_{L}$ can be made
arbitrarily small.
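Numerically, $P_{L}$ is approximated with a finite number of trials. A sketch
of the estimator, where `decode` is a hypothetical placeholder for the error
correction algorithm of Section 5.2 (returning `True` on success) and
`sample_depolarizing_errors` is the sampler sketched above:

```python
def estimate_logical_error_rate(num_trials, num_qubits, p, decode):
    """Monte Carlo estimate P_L ~= N_r / N over num_trials experiments."""
    failures = sum(
        not decode(sample_depolarizing_errors(num_qubits, p))
        for _ in range(num_trials)
    )
    return failures / num_trials
```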
###### Definition 3.
$\forall\varepsilon>0$, $\exists\,d_{0},p^{\prime}_{th}$ such that when
$d>d_{0}$ and $p<p^{\prime}_{th}$, $P_{L}<\varepsilon$. The threshold error
rate is defined as $p_{th}=\sup\\{p^{\prime}_{th}\\}$.
### 5.2 Decoding algorithm
To estimate the threshold, we propose a decoding algorithm for computing the
logical error rate. The core idea of our error correction algorithm is first
to eliminate errors globally and then to search the logical state space.
Exploiting the local nature of data qubit errors, the algorithm is further
improved by adding a pretreatment process.
The error elimination step is challenging in the Chamon model, in contrast to
general topological codes. For example, the widely used minimum-weight
perfect-matching algorithm decodes surface codes by letting any given matching
correspond to one way of eliminating errors. In the Chamon model, however, the
distribution of flipped stabilizers is chaotic, so error elimination cannot be
obtained directly.
#### 5.2.1 Error elimination algorithm
Define a recovery Pauli operator, denoted by $P_{\rm r}$, which eliminates all
measurement errors. In this section, one expression for $P_{\rm r}$ is
obtained by the proposed error elimination algorithm. Although it may not
recover the correct logical state, the errors on the stabilizers are
eliminated.
The algorithm consists of $4$ steps and applies when
$g(\alpha_{x},\alpha_{y})=1$.
Step 1. The first step is to apply $Z$-flips in the $2\alpha_{z}$ planes
perpendicular to the $z$ axis, proceeding towards the positive $x$ axis. The
details are as follows. In each plane, denoted briefly by $z=z_{i}$ with
$i=0,\cdots,2\alpha_{z}-1$, let $x=x_{j}$ run from $0$ to $2\alpha_{x}-3$,
increasing by one each time. Each fixed pair $z=z_{i}$, $x=x_{j}$ defines a
line with $\alpha_{y}$ stabilizers, whose coordinates are determined by $x,z$.
Start from the stabilizer with the minimum $y$ coordinate and observe the
measurement results. If the measurement result of the $k$-th stabilizer on
this line, with coordinate $s_{i,j,k}$, is flipped, apply a $Z$ flip to the
nearest data qubit in the positive $x$-axis direction, denoted by
$Z_{s_{i,j,k}+e_{x}^{+}}$. The measurement results are updated after each
operation. Finally, record all qubit $Z$-flips and obtain
$\displaystyle P_{1}=\bigotimes_{i,j,k}Z_{s_{i,j,k}+e_{x}^{+}}.$
This step restricts the errors to the planes $x=2\alpha_{x}-2$ and
$x=2\alpha_{x}-1$.
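The control flow of this sweep can be sketched as follows; all lattice
bookkeeping is hidden behind hypothetical helpers
(`model.stabilizers_on_line`, `model.shift`, `model.apply_z`), so this is an
illustration of the scan order rather than the implementation used here:

```python
def step1_sweep(model, syndrome):
    """Step 1: push flipped stabilizer measurements towards the +x boundary."""
    z_flips = []                                  # support of P_1
    for z_i in range(2 * model.alpha_z):          # planes perpendicular to z
        for x_j in range(2 * model.alpha_x - 2):  # x_j = 0, ..., 2*alpha_x - 3
            for s in model.stabilizers_on_line(z_i, x_j):  # sorted by y
                if syndrome[s]:                   # measurement flipped
                    q = model.shift(s, "+x")      # nearest data qubit in +x
                    model.apply_z(q, syndrome)    # Z flip; syndrome updated
                    z_flips.append(q)
    return z_flips
```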
Step 2. Apply $X$-flips in the two planes remaining from Step 1, proceeding
towards the positive $z$ axis. Specifically, for $j=2\alpha_{x}-2,2\alpha_{x}-1$,
let $z=z_{i}$ run from $0$ to $2\alpha_{z}-3$. As in Step 1, each pair
$z=z_{i}$, $x=x_{j}$ defines a line with $\alpha_{y}$ stabilizers. Start from
the stabilizer with the minimum $y$ coordinate and observe the measurement
results. If the measurement of the $k$-th stabilizer is flipped, apply an $X$
flip to the nearest data qubit in the positive $z$-axis direction and update
the measurement results. Record all bit flips, that is,
$\displaystyle P_{2}=\bigotimes_{i,j,k}X_{s_{i,j,k}+e_{z}^{+}}.$
This step confines the errors to the stabilizer set
$\mathcal{S}|_{\\{x=2\alpha_{x}-2,2\alpha_{x}-1\\}\cap\\{z=2\alpha_{z}-2,2\alpha_{z}-1\\}}$.
Intuitively, the errors are restricted to four lines parallel to the $y$ axis.
Step 3. Eliminate errors on every line. Before describing Step 3, we state
some auxiliary properties. Similar to data qubits, the stabilizers can also
be classified into $4$ classes, denoted by $\mathcal{S}^{s_{i}}$,
$\displaystyle\mathcal{S}^{s_{i}}=\\{S_{s}|s=s_{i}+2\lambda_{x}e_{x}^{+}+2\lambda_{y}e_{y}^{+}+2\lambda_{z}e_{z}^{+},$
$\displaystyle(\lambda_{x},\lambda_{y},\lambda_{z})\in\mathcal{Z}_{\alpha_{x}}\times\mathcal{Z}_{\alpha_{y}}\times\mathcal{Z}_{\alpha_{z}}\\},$
(14)
where $s_{0}=(0,1,0),s_{1}=(1,0,0),s_{2}=(0,0,1),s_{3}=(1,1,1).$ In Chamon
models, one can check that
###### Lemma 2.
For any $i=0,1,2,3$, the number of stabilizers with flipped measurements in
$\mathcal{S}^{s_{i}}$ is even.
Note that all the stabilizers on the line parallel to the $y$ axis belong to
exactly one class of stabilizers. By Lemma 2, the number of stabilizers whose
measurements change on each line is even.
Before explaining the process of Step 3, one more property remains to be
proved. On a line parallel to the $y$-axis, consider two stabilizers
$S_{s_{(1)}},S_{s_{(2)}}\in\mathcal{S}^{s_{i}}$. We call two stabilizers
second-nearest neighbor stabilizers if they obey
$\displaystyle s_{(1)}-s_{(2)}=4e_{y}^{\pm}.$
Then another useful property is shown as follows:
###### Lemma 3.
Any two second-nearest neighbor stabilizers are denoted as $S_{s_{(1)}}$ and
$S_{s_{(2)}}$, and the midpoint of $s_{(1)}$ and $s_{(2)}$ is denoted as
$s_{(0)}$. In the Chamon model, if $\alpha_{x},\alpha_{y}$ are coprime, then
there exists some $Z$ flips, whose support parallel to the $xOy$ plane,
denoted as $P_{Z}(s_{(0)})$, so that the measurements of $S_{s_{(1)}}$ and
$S_{s_{(2)}}$ are changed, and all else are unchanged.
The proof of Lemma 3 is given in Appendix B. Here we give an example
illustrating the idea of the proof.
Figure 3: The product of chain $Z$ operators changes the outcomes of the
second-nearest neighbor stabilizers. (a-b) In the direction of
$e_{x}^{+}+e_{y}^{-}$ or $e_{x}^{+}+e_{y}^{+}$, the chain $Z$ operator, whose
support colored black, changes only $4$ adjacent stabilizers, colored blue.
(c) The product of two chain $Z$ operators, denoted by $P_{Z}$, guarantees the
outcomes change of the second-nearest neighbor stabilizers.
###### Example 1.
Set $\alpha_{x}=3$, $\alpha_{y}=5$, as shown in Fig.3. Consider four adjacent
stabilizers, belonging to two classes of stabilizers. There is a chain $Z$
operator, acting on the data qubits arranged diagonally in the direction of
$e_{x}^{+}+e_{y}^{+}$, which can just change the outcomes of these
stabilizers, as shown in Fig.3(a). The support has $12$ data qubits.
Similarly, consider the symmetrical case, as shown in Fig.3(b). The chain $Z$
operators in the direction of $e_{x}^{+}+e_{y}^{-}$ can also change the
measurement results of the four adjacent stabilizers. Finally, the product of
these two chain operators results in the changes of the second-nearest
neighbor stabilizers measurements, as shown in Fig.3(c).
Now we describe Step 3. On each line formed by $\alpha_{y}$ stabilizers, sort
the stabilizers along the $y$ axis from $0$ to $\alpha_{y}-1$. Consider the
$k$-th stabilizer, whose coordinate is denoted by $s_{k}$,
$k=0,\dots,\alpha_{y}-3$. Increase $k$ sequentially from $0$; if the $k$-th
stabilizer measurement is flipped, apply $P_{Z}(s_{k}+2e_{y}^{+})$. This step
restores the measurements of the stabilizers located at positions $0$ through
$\alpha_{y}-3$. That is, on each line there remain at most two stabilizers
with errors, at coordinates $s_{\alpha_{y}-2}$ and $s_{\alpha_{y}-1}$.
Before Step 4, let us briefly analyze the measurements of the two remaining
stabilizers on each line. The two stabilizers belong to one class, so by
Lemma 2 their measurement results either both flip or both stay unchanged. If
the stabilizer measurements are unchanged, the errors on this line have been
eliminated; otherwise, perform Step 4.
Step 4. Repeat Step 3 from $s_{\alpha_{y}-2}$ with step-length 2.
The reason this eliminates all errors is as follows. Observe the
$(\alpha_{y}-2)$-th stabilizer: if its measurement result flips, so does that
of the $(\alpha_{y}-1)$-th stabilizer. Therefore, following Step 3, apply the
second-nearest neighbor operator $P_{Z}(s_{\alpha_{y}-1})$. The measurements
of the $0$-th and the $(\alpha_{y}-1)$-th stabilizers then still flip. Since
$\alpha_{y}-1$ is an even number, all errors are eliminated by applying a
series of second-nearest neighbor operators
$P_{Z}(s_{\alpha_{y}-2})\cdots P_{Z}(s_{3})\cdot P_{Z}(s_{1})$.
Here is an example to illustrate the process in Step 4.
###### Example 2.
Considering a Chamon model with $\alpha_{y}=5$, after the first three steps,
suppose there are two errors on a line parallel to the $y$ axis. These two
errors are on the $k$-th stabilizer, $k=3,4$, as shown in Fig. 4(a). After the
second-nearest neighbor Pauli $Z$ operator $P_{Z}(s_{4})$, the errors are
distributed in the $0$-th and the $4$-th stabilizers, as shown in Fig. 4(b).
The error propagates from the $0$-th stabilizer to the $k$-th stabilizer,
where $k$ is even, until $k=\alpha_{y}-1=4$. Thus, errors on this line are
eliminated, as shown in Fig. 4(b) and Fig. 4(c).
Figure 4: Step 4 of error elimination algorithm. (a) Translate the error in
$s_{3}$ to $s_{0}$ by applying $P_{Z}(s_{4})$; (b-c) Translate the error in
$s_{0}$ to $s_{4}$ by applying $P_{Z}(s_{3})\cdot P_{Z}(s_{1})$.
Denote the product of all second-nearest-neighbor Pauli $Z$ operators in
Step 3 and Step 4 as $P_{34}=\prod_{i}P_{Z}(s_{i})$. The error elimination
algorithm thus yields the recovery Pauli operator $P_{\rm r}=P_{34}P_{2}P_{1}$.
Note that this algorithm only requires that two of the dimensions be coprime,
so it applies to a wide range of Chamon models. In the construction of a
Chamon model, given the scale of the total number of data qubits, say
$\mathcal{O}(n^{3})$, one can build a Chamon model whose three dimensions are
$\mathcal{O}(n)$ and pairwise coprime.
#### 5.2.2 Global randomized error correction algorithm
In this section, the global randomized error correction algorithm is proposed
for Chamon models with $\alpha_{x},\alpha_{y},\alpha_{z}$ pairwise coprime. In
a numerical experiment, the basic idea is first to run the randomized error
elimination algorithm $L$ times to obtain corresponding recovery Pauli
operators, denoted by $P_{{\rm r}l}$, $l=0,\cdots,L-1$. Then apply half-plane
logical operators $X|_{\mathcal{D}_{x}^{g_{i}}}$ and
$Z|_{\mathcal{D}_{z}^{g_{i}}}$ to each $P_{{\rm r}l}$ and choose the final
recovery Pauli operator $P_{\rm r}$ with the minimum weight.
The process of randomization is described in three ways.
(1) The starting point is chosen randomly. In Step 1, randomize the starting
data qubits: the original algorithm starts from the data qubit located at
$(0,0,0)$, but any data qubit position can serve as a starting point. After
$P_{1}$, the errors are contained in two adjacent planes. In Step 2, similarly
randomize the starting data qubits; the same idea applies to Steps $3$-$4$.
This is all enabled by the translation invariance of the Chamon model as a 3D
torus.
(2) The order of Step 1 and Step 2 can also be swapped: one can apply the $X$
operators first and then the $Z$ operators.
(3) In Steps 3 and 4, applying the second-nearest neighbor Pauli $Z$ operator
requires $\alpha_{x}$ and $\alpha_{y}$ to be coprime. Similarly, since
$\alpha_{y},\alpha_{z}$ are coprime, the second-nearest neighbor $X$ operator
can be selected instead.
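The resulting outer loop can be sketched as follows; the operator algebra and
lattice helpers are hypothetical stand-ins, not the code used for the
experiments:

```python
def global_randomized_decode(model, syndrome, num_restarts, rng):
    """Run the randomized error elimination num_restarts times, then search
    products with half-plane logical operators for the minimum weight."""
    candidates = []
    for _ in range(num_restarts):
        # randomized start point / step order / flip type, cf. (1)-(3) above
        p_r = model.eliminate_errors(syndrome, rng=rng)
        for logical in model.half_plane_logicals():  # includes the identity
            candidates.append(p_r * logical)
    return min(candidates, key=lambda op: op.weight())
```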
Finally, let us summarize the global randomized algorithm. First, introduce
the standard depolarizing channel to Chamon models with perfect measurements.
Then apply the recovery Pauli operator $P_{\rm r}$ to correct the data qubit
errors. For each recovered state, test for a logical $X$ failure by checking
whether the number of $X$ flips on any plane perpendicular to the $z$-axis is
odd or even: such a plane carries an odd number of $X$-flips exactly if a
half-plane logical $X$ operator has been applied, indicating a logical $X$
error. By performing many simulations, the probability of logical $X$ error
$P_{L}$ versus $p$ can be plotted, as shown in Fig. 5. The threshold error
rate is about $4.45\%$. The dimensions of the tested models, denoted by
$(\alpha_{x},\alpha_{y},\alpha_{z})$, are $(2,3,2)$, $(2,3,5)$, $(2,3,7)$,
$(2,3,11)$ and $(5,7,11)$, respectively.
Figure 5: Logical $X$ error rate $P_{L}$ as a function of data qubit $X$-flip
probability $p$. Here shows various Chamon models denoted by
$\alpha_{x},\alpha_{y},\alpha_{z}$ with different colors and the threshold
error rate is $4.45\%$ for the global error correction algorithm.
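The parity test itself is inexpensive; a minimal sketch, assuming the residual
$X$ flips are stored as a boolean array indexed by lattice coordinates:

```python
import numpy as np

def logical_x_failure(x_flips: np.ndarray) -> bool:
    """x_flips: boolean array of residual X flips indexed by (x, y, z).
    A half-plane logical X operator flips an odd number of qubits in every
    plane perpendicular to the z axis, so the parity of one plane suffices."""
    return bool(x_flips[:, :, 0].sum() % 2)  # odd parity -> logical X error
```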
Note that this algorithm does not exploit the local form of qubit errors but
considers all stabilizer measurements; in this sense, the error correction
algorithm is global. A natural question is whether the error correction
capability of the Chamon model can be improved when the local form of the
errors is taken into account. Next, we introduce this idea to Chamon models
and find that the threshold improves slightly.
### 5.3 Improved error correction algorithm
In this section, the global error correction algorithm is improved by a
pretreatment process, termed the probabilistic greedy local algorithm.
Although the global algorithm is guaranteed to eliminate errors, recovering
the original logical state may suffer from randomness due to the global
search. The basic idea of the probabilistic greedy local algorithm is to
greedily decrease the number of flipped stabilizers by using the property that
any qubit memory error affects the nearest $4$ stabilizers. Details of the
specific procedures are given in Appendix C. The improved algorithm proceeds
as follows:
* (a)
Execute the probabilistic greedy local algorithm. If the errors are all
eliminated, skip to Step (c).
* (b)
Execute the global randomized error elimination algorithm.
* (c)
Apply half-plane logical operators $X|_{\mathcal{D}_{x}^{g_{i}}}$ and
$Z|_{\mathcal{D}_{z}^{g_{i}}}$ to search the recover Pauli operator with the
minimum weight.
* (d)
Use parity check method to compute $P_{L}$.
Using the improved error correction algorithm, the probability of logical $X$
error $P_{L}$ versus $p$ can be plotted, as shown in Fig. 6. The threshold
error rate of this algorithm is about $4.92\%$, slightly higher than that
computed by the global error correction algorithm.
Figure 6: Logical $X$ error rate $P_{L}$ as a function of $X$-flip probability
$p$. Chamon models in different dimensions are denoted by
$\alpha_{x},\alpha_{y},\alpha_{z}$ with different colors. The threshold error
rate for the improved error correction algorithm is about $4.92\%$.
Clearly, the improvement of the local error correction algorithm over the
global algorithm is not essential. For a given small data qubit error rate
$p$, the logical error rate $P_{L}$ of the improved algorithm is lower; around
the threshold, however, the two algorithms exhibit essentially similar
performance. The reason is that the effect of the stabilizers is excluded from
consideration when constructing the recovery Pauli operators; taking the
stabilizers into account would turn decoding into an exponential search
problem. Improving the decoding algorithms for Chamon models is a direction of
our future research.
## 6 Conclusion
In this paper, the Chamon model is turned into a quantum error correction
code. Compared to previous work, this paper thoroughly explains how $4\alpha$
logical qubits are constructed. The logical operators and their related
properties are given for an arbitrary Chamon model, and the threshold for the
Chamon model is obtained under perfect measurements.
In the decoding process, the proposed probabilistic greedy local algorithm
adapts to different kinds of high-dimensional models. The advantage of this
pretreatment process lies in its low cost, since it does not rely on global
properties of the error correction code. A possible future research topic is
to generalize the error correction algorithm to other high-dimensional codes.
The global decoding algorithm depends on the linear dimensions of the Chamon
model, which reflects topological properties of the model. It is not
applicable when no two dimensions are coprime, so how to generalize its range
of applicability remains to be studied. On the other hand, the local error
correction algorithm also has room for improvement; for example, one could
design a more refined algorithm that takes second-nearest interactions into
account to improve performance.
## Acknowledgements
This work was supported by the National Natural Science Foundation of China
(Grant No. 12034018) and Innovation Program for Quantum Science and Technology
(Grant No. 2021ZD0302300).
## Appendix A Chamon model in toric representations
In this section, the Chamon model is represented on a torus. The support of a
logical operator in this representation is a non-contractible loop on the
torus, and the stabilizers can be regarded as a circle with some discrete
points. Our goal is to find the shortest loop on the torus, which corresponds
to the logical operator of minimum weight.
### A.1 Stabilizers and data qubits in toric representations [23]
First, we introduce the representation of the stabilizers. A stabilizer and
its corresponding 6 data qubits are represented by a regular hexagon, in which
the center represents the stabilizer and each vertex represents a data qubit.
A regular hexagon has three longest diagonals, whose two endpoints carry the
same measurement type, forming the three positive directions of $x,y,z$, as
shown in Fig. 7.
Figure 7: The stabilizer and corresponding hexagon representations.
$S=X_{a}X_{f}Y_{A}Y_{B}Z_{\delta_{1}}Z_{\delta_{2}}$.
Then consider the stabilizers in the $xOy$ plane. Note that these stabilizers
can be arranged diagonally in the $e_{x}+e_{y}$ direction, and every two
adjacent stabilizers share two data qubits. Suppose $\alpha_{x}$ and
$\alpha_{y}$ are coprime; then these diagonally arranged stabilizers fill the
whole $xOy$ plane under the periodic boundary conditions. In the
$e_{x}+e_{y}$ direction, two adjacent stabilizers sharing two data qubits
correspond to two regular hexagons sharing an edge. Fig. 8 shows the hexagon
representation for $\alpha_{x}=2,\alpha_{y}=3$. Note that each data qubit
corresponds to two vertices of the regular hexagons, because each data qubit
in the plane couples to four stabilizers, whereas a vertex is shared by only
two regular hexagons. Thus the data qubits and stabilizers in the $xOy$ plane
can be represented by a row of regular hexagons.
Figure 8: The representations of stabilizers and data qubits on $xOy$ plane.
(a) Data qubits are marked with dots and letters, and stabilizers are
represented by blue edges. The data qubits used by the half-plane logical $Z$
operator in the $xOy$ plane are marked with different colors. (b) Stabilizers
are represented by the centers of regular hexagons and data qubits are
represented by vertices.
Last, we generalize the method from the $xOy$ plane to the entire Chamon
model. A natural idea is that the data qubits and stabilizers in each plane
perpendicular to the $z$-axis can similarly be represented as a row of
hexagons, so intuitively there are $2\alpha_{z}$ rows of regular hexagons. In
this tessellation, each vertex is shared by three hexagonal faces and each
face has 6 vertices. This is inconsistent with each data qubit being measured
by its nearest 6 stabilizers, which is twice the number of shared faces.
Therefore, a straightforward and concise remedy is to map each data qubit to
two vertices. Due to the periodic boundary conditions in the $e_{z}$ and
$e_{x}+e_{y}$ directions, the Chamon model can mathematically be represented
as a torus. We illustrate the toric representation of a Chamon model with an
example.
###### Example 3.
Given a Chamon model with $\alpha_{x}=2,\alpha_{y}=3$ and $\alpha_{z}=5$, as
shown in Fig.9, there are a total of $2\alpha_{x}\alpha_{y}$ generators of
stabilizers in the plane $z=i$, corresponding to a row of
$2\alpha_{x}\alpha_{y}$ regular hexagons, $i=0,\cdots,2\alpha_{z}-1$. When
$i=0$, the data qubits on the plane inherit the notation of Fig.8. There are a
total of $2\alpha_{z}=10$ layers, thus a total of
$4\alpha_{x}\alpha_{y}\alpha_{z}$ regular hexagons.
Figure 9: The Chamon model in toric representations. The logical operators are
displayed in three color chains, in which the blue chain and red chain
represent the logical $Z$ operator and logical $X$ operator, respectively. The
yellow one depends on the parity of $\alpha_{x}$ and $\alpha_{z}$. The
composite of colors indicates the overlapping of chains.
### A.2 Logical Pauli operators in toric representations
Suppose for simplicity that $\alpha_{x},\alpha_{y}$ and $\alpha_{z}$ are
pairwise coprime. Recall the half-plane logical $Z$ operator
$Z_{Li}=Z|_{\mathcal{D}_{z}^{g_{i}}}$, with $\mathcal{D}_{z}^{g_{i}}$ defined
as in Eq.(3.1). As in Fig.8(b), in the toric representation the half-plane
logical operator forms a non-contractible loop. A simple chain is the $Z$
operator acting on the data qubits arranged diagonally, which yields
$Z_{L}=Z_{A}Z_{a}Z_{C}Z_{c}\cdots$, that is, $Z_{L0}Z_{L1}$. This kind of
full-plane logical operator, termed a chain operator in the toric
representation, also forms a non-contractible loop.
Now we give the proof of Thm.3. Our goal reduces to finding the loop of
minimum weight. Analogously, the half-plane logical $Z$ operator is called a
half-chain operator, coupled to half of the data qubits. In the toric
representation, the length of this chain is $2\alpha_{x}\alpha_{y}$; in this
direction, any shorter chain breaks and fails to form a loop. Since each data
qubit corresponds to two points on the chain, the weight of this logical $Z$
operator is $\alpha_{x}\alpha_{y}$. As in Fig.9, there are half-chain
operators in two other directions. It can be proved that one data qubit
corresponds to two points in each chain, and that the weights of the
corresponding logical operators are $\alpha_{y}\alpha_{z}$ and
$\alpha_{z}\alpha_{x}$, respectively, when $\alpha_{x},\alpha_{y}$ and
$\alpha_{z}$ are pairwise coprime. In each of the three directions, the
half-chain operators are the logical operators of minimum weight. Thus, the
code distance is the minimum of
$\alpha_{x}\alpha_{y},\alpha_{y}\alpha_{z},\alpha_{z}\alpha_{x}$ for a Chamon
model with pairwise coprime linear dimensions.
## Appendix B The proof of Lemma 3
By translation symmetry, let $s_{(1)}=(0,1,0)$ and set $s_{(2)}=(0,5,0)$. Our
goal is to find a set of data qubits in the $xOy$ plane such that, after
applying $Z$ operators to all data qubits in the set, only the outcomes of
$S_{s_{(1)}}$ and $S_{s_{(2)}}$ change. Since $\alpha_{x}$ and $\alpha_{y}$
are coprime, let
$\displaystyle n_{xy}=\min_{n\in\mathbb{Z}^{+}}\\{n\,|\,n\alpha_{x}\bmod\alpha_{y}=1\\}.$
In Example 1, $n_{xy}=2$. The geometric meaning of $n_{xy}$ in the chain $Z$
operator is the number of basic units, each of length $2\alpha_{x}$. The
weights of the two chain Pauli $Z$ operators are then $2\alpha_{x}n_{xy}$.
Define the supports of the two chain Pauli $Z$ operators as
supports of two chain Pauli $Z$ operators as
$\displaystyle\mathcal{D}_{z}^{++}=$
$\displaystyle\\{d|d=s_{(0)}+e_{x}^{+}+\lambda(e_{x}^{+}+e_{y}^{+}),\lambda\in\mathcal{Z}_{2\alpha_{x}n_{xy}-1}\\},$
$\displaystyle\mathcal{D}_{z}^{+-}=$
$\displaystyle\\{d|d=s_{(0)}+e_{x}^{+}+\lambda(e_{x}^{+}-e_{y}^{+}),\lambda\in\mathcal{Z}_{2\alpha_{x}n_{xy}-1}\\}.$
Then the Pauli $Z$ operator satisfying Lemma 3 is constructed as
$\displaystyle P_{Z}(s_{(0)})=Z|_{\mathcal{D}_{z}^{++}}\cdot
Z|_{\mathcal{D}_{z}^{+-}}.$
By translation invariance, any two second-nearest neighbor stabilizers
correspond to such a Pauli $Z$ operator. This proof establishes not only the
existence of $P_{Z}(s_{(0)})$ but also its explicit expression.
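Since $n_{xy}$ is simply the modular inverse of $\alpha_{x}$ modulo
$\alpha_{y}$, it can be computed directly; a one-line sketch (Python 3.8+
supports modular inverses via `pow` with exponent $-1$):

```python
def n_xy(alpha_x: int, alpha_y: int) -> int:
    """Smallest positive n with n * alpha_x = 1 (mod alpha_y); it exists
    because alpha_x and alpha_y are coprime."""
    return pow(alpha_x, -1, alpha_y)

assert n_xy(3, 5) == 2  # matches Example 1
```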
## Appendix C The probabilistic greedy local algorithm
The procedures of the greedy local algorithm for Chamon model are as follows:
Step 1. Assignment. A three-dimensional vector, written
$w=(w_{x},w_{y},w_{z})$, is attached to each data qubit, representing three
weights, where $w_{x}$ is defined as the number of flipped stabilizers among
the four adjacent stabilizers located in the plane perpendicular to the $x$
axis, so that $0\leq w_{x}\leq 4$; $w_{y}$ and $w_{z}$ are defined
analogously. Each data qubit also carries an auxiliary three-dimensional
vector with initialized value, denoted by $v=(0,0,0)$.
Step 2. Eliminating $w_{p}=4$. Read the weights $w_{p}$ of the data qubits in
order, $p=x,y,z$. If $w_{p}=4$, apply the Pauli operator $P$ to the
corresponding data qubit and update the weights of that qubit and its adjacent
data qubits. Repeat Step 2 until all data qubit weights satisfy $w_{p}<4$.
Then check whether the stabilizer errors are eliminated; if so, the algorithm
succeeds, otherwise continue to Step 3.
Step 3. Eliminating $w_{p}=3$. Read the weights $w_{p}$ of the data qubits in
order, $p=x,y,z$. If $w_{p}=3$, apply the Pauli operator $P$ to the
corresponding data qubit, update the weights, and return to Step 2. If all
data qubit weights satisfy $w_{p}<3$, check whether the stabilizer errors are
eliminated; if so, the algorithm succeeds, otherwise continue to Step 4.
Step 4. Eliminating $w_{p}=2$. Read the weights $w_{p}$ of the data qubits in
order, $p=x,y,z$. If $w_{p}=2$ and $v_{p}=0$, apply the Pauli operator $P$ to
the corresponding data qubit, update the weights, set $v_{p}=1$, and return to
Step 2. Otherwise, declare the algorithm failed.
If the algorithm terminates in Step 2 or Step 3, it succeeds; otherwise it
fails. In particular, the greedy local algorithm cannot handle the case in
which all remaining weights satisfy $w_{p}\leq 1$.
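For concreteness, the control flow of Steps 2-4 can be sketched as follows;
the weight bookkeeping is hidden behind hypothetical helpers (`model.weight`,
`model.apply`), so this is a sketch rather than the simulation code:

```python
def probabilistic_greedy_local(model, syndrome):
    """Greedily apply the available flip with the highest weight w_p;
    weight-2 flips are gated by the visited flags (v_p) to avoid loops."""
    visited = set()                            # (qubit, axis) pairs with v_p = 1
    while syndrome.any():
        hit = next(
            ((q, p, w) for w in (4, 3, 2)      # Steps 2, 3, 4 in order
             for q in model.qubits for p in "xyz"
             if model.weight(q, p) == w and (w > 2 or (q, p) not in visited)),
            None,
        )
        if hit is None:
            return False                       # only w_p <= 1 left: failure
        q, p, w = hit
        model.apply(q, p)                      # Pauli flip; syndrome updated
        if w == 2:
            visited.add((q, p))                # record v_p = 1
    return True                                # all errors eliminated
```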
In Steps 2 and 3, the number of flipped stabilizers keeps decreasing, so the
computational complexity is $\mathcal{O}(N^{2})$, where $N$ is the number of
data qubits. In Step 4, recording $v$ prevents infinite loops; in the worst
case, all data qubits have $v_{p}=1$, meaning Step 4 is executed
$\mathcal{O}(N)$ times. Thus the complexity of the probabilistic greedy local
algorithm is $\mathcal{O}(N^{3})$.
## References
* [1] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, S. Lloyd, Quantum machine learning, Nature 549 (7671) (2017) 195–202.
* [2] P. W. Shor, Algorithms for quantum computation: discrete logarithms and factoring, in: Proceedings 35th annual symposium on foundations of computer science, IEEE, 1994, pp. 124–134.
* [3] T. Monz, D. Nigg, E. A. Martinez, M. F. Brandl, P. Schindler, R. Rines, S. X. Wang, I. L. Chuang, R. Blatt, Realization of a scalable Shor algorithm, Science 351 (6277) (2016) 1068–1070.
* [4] S. Lloyd, Universal quantum simulators, Science 273 (5278) (1996) 1073–1078.
* [5] L. K. Grover, A fast quantum mechanical algorithm for database search, in: Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, 1996, pp. 212–219.
* [6] B. M. Terhal, Quantum error correction for quantum memories, Reviews of Modern Physics 87 (2) (2015) 307.
* [7] J. Roffe, Quantum error correction: an introductory guide, Contemporary Physics 60 (3) (2019) 226–245.
* [8] Y.-H. Luo, M.-C. Chen, M. Erhard, H.-S. Zhong, D. Wu, H.-Y. Tang, Q. Zhao, X.-L. Wang, K. Fujii, L. Li, et al., Quantum teleportation of physical qubits into logical code spaces, Proceedings of the National Academy of Sciences 118 (36) (2021).
* [9] C. K. Andersen, A. Remm, S. Lazar, S. Krinner, N. Lacroix, G. J. Norris, M. Gabureac, C. Eichler, A. Wallraff, Repeated quantum error detection in a surface code, Nature Physics 16 (8) (2020) 875–880.
* [10] Google Quantum AI, Exponential suppression of bit or phase errors with cyclic error correction, Nature 595 (7867) (2021) 383.
* [11] A. Erhard, H. Poulsen Nautrup, M. Meth, L. Postler, R. Stricker, M. Stadler, V. Negnevitsky, M. Ringbauer, P. Schindler, H. J. Briegel, et al., Entangling logical qubits with lattice surgery, Nature 589 (7841) (2021) 220–224.
* [12] D. Nigg, M. Mueller, E. A. Martinez, P. Schindler, M. Hennrich, T. Monz, M. A. Martin-Delgado, R. Blatt, Quantum computations on a topologically encoded qubit, Science 345 (6194) (2014) 302–305.
* [13] C. Ryan-Anderson, J. Bohnet, K. Lee, D. Gresh, A. Hankin, J. Gaebler, D. Francois, A. Chernoguzov, D. Lucchetti, N. Brown, et al., Realization of real-time fault-tolerant quantum error correction, Physical Review X 11 (4) (2021) 041058.
* [14] J. Chiaverini, D. Leibfried, T. Schaetz, M. D. Barrett, R. Blakestad, J. Britton, W. M. Itano, J. D. Jost, E. Knill, C. Langer, et al., Realization of quantum error correction, Nature 432 (7017) (2004) 602–605.
* [15] P. Schindler, J. T. Barreiro, T. Monz, V. Nebendahl, D. Nigg, M. Chwalla, M. Hennrich, R. Blatt, Experimental repetitive quantum error correction, Science 332 (6033) (2011) 1059–1061.
* [16] J. R. Wootton, D. Loss, Repetition code of 15 qubits, Physical Review A 97 (5) (2018) 052313.
* [17] P. Campagne-Ibarcq, A. Eickbusch, S. Touzard, E. Zalys-Geller, N. E. Frattini, V. V. Sivak, P. Reinhold, S. Puri, S. Shankar, R. J. Schoelkopf, et al., Quantum error correction of a qubit encoded in grid states of an oscillator, Nature 584 (7821) (2020) 368–372.
* [18] E. Dennis, A. Kitaev, A. Landahl, J. Preskill, Topological quantum memory, Journal of Mathematical Physics 43 (9) (2002) 4452–4505.
* [19] C. Castelnovo, C. Chamon, Topological order in a three-dimensional toric code at finite temperature, Physical Review B 78 (15) (2008) 155120.
* [20] S. Bravyi, J. Haah, Analytic and numerical demonstration of quantum self-correction in the 3d cubic code, arXiv preprint arXiv:1112.3252 (2011).
* [21] J. P. Bonilla Ataides, D. K. Tuckett, S. D. Bartlett, S. T. Flammia, B. J. Brown, The XZZX surface code, Nature Communications 12 (1) (2021) 1–12.
* [22] C. Chamon, Quantum glassiness in strongly correlated clean systems: An example of topological overprotection, Physical review letters 94 (4) (2005) 040402.
* [23] S. Bravyi, B. Leemhuis, B. M. Terhal, Topological order in an exactly solvable 3d spin model, Annals of Physics 326 (4) (2011) 839–866.
* [24] W. Shirley, X. Liu, A. Dua, Emergent fermionic gauge theory and foliated fracton order in the chamon model, Physical Review B 107 (3) (2023) 035136.
* [25] M. A. Nielsen, I. Chuang, Quantum computation and quantum information (2002).
* [26] E. Knill, R. Laflamme, L. Viola, Theory of quantum error correction for general noise, Physical Review Letters 84 (11) (2000) 2525.
###### Abstract
Meta-learning of numerical algorithms for a given task consists of the data-
driven identification and adaptation of an algorithmic structure and the
associated hyperparameters. To limit the complexity of the meta-learning
problem, neural architectures with a certain inductive bias towards favorable
algorithmic structures can, and should, be used. We generalize our previously
introduced Runge–Kutta neural network to a recursively recurrent neural
network (R2N2) superstructure for the design of customized iterative
algorithms. In contrast to off-the-shelf deep learning approaches, it features
a distinct division into modules for generation of information and for the
subsequent assembly of this information towards a solution. Local information
in the form of a subspace is generated by subordinate, inner, iterations of
recurrent function evaluations starting at the current outer iterate. The
update to the next outer iterate is computed as a linear combination of these
evaluations, reducing the residual in this space, and constitutes the output
of the network. We demonstrate that regular training of the weight parameters
inside the proposed superstructure on input/output data of various
computational problem classes yields iterations similar to Krylov solvers for
linear equation systems, Newton-Krylov solvers for nonlinear equation systems,
and Runge–Kutta integrators for ordinary differential equations. Due to its
modularity, the superstructure can be readily extended with functionalities
needed to represent more general classes of iterative algorithms traditionally
based on Taylor series expansions.
A Recursively Recurrent Neural Network (R2N2) Architecture
for Learning Iterative Algorithms
Danimir T. Doncevic$^{a,b}$, Alexander Mitsos$^{c,a,d}$, Yue Guo$^{e}$, Qianxiao Li$^{e}$, Felix Dietrich$^{f}$, Manuel Dahmen$^{a,*}$, Ioannis G. Kevrekidis$^{g,*}$
* a
Institute of Energy and Climate Research, Energy Systems Engineering (IEK-10),
Forschungszentrum Jülich GmbH, Jülich 52425, Germany
* b
RWTH Aachen University, Aachen 52062, Germany
* c
JARA-ENERGY, Jülich 52425, Germany
* d
RWTH Aachen University, Process Systems Engineering (AVT.SVT), Aachen 52074,
Germany
* e
Department of Mathematics, National University of Singapore, Singapore
* f
Department of Informatics, Technical University of Munich, Boltzmannstr. 3,
85748 Garching b. Munich, Germany
* g
Departments of Applied Mathematics and Statistics & Chemical and Biomolecular
Engineering, Johns Hopkins University, Baltimore, Maryland 21218, USA
Keywords: Numerical Analysis, Meta-Learning, Machine Learning, Runge-Kutta
Methods, Newton-Krylov Solvers, Data-driven Algorithm Design
## 1 Introduction
The relationship between a class of residual recurrent neural networks (RNN)
and numerical integrators has been known since Rico-Martinez et al., (1992)
proposed the architecture shown in Figure 1(a) for nonlinear system
identification. Together with more recent work of the authors, Mitsos et al.,
(2018) and Guo et al., (2022), it motivates the present work. The salient point of
the RNN proposed by Rico-Martinez et al., (1992) is that it provides the
structure of an integrator, in that case a fourth-order explicit Runge–Kutta
(RK) scheme, within which the right-hand-side of an ordinary differential
equation (ODE) can be approximately learned from time series data. To this
end, the integrator is essentially hard-wired in the forward pass of the RNN –
a first instance of the body of work nowadays referred to as “neural ordinary
differential equations” (Chen et al.,, 2018). However, upon closer inspection,
the architecture reveals a complementary utility: When the model equations are
known, it is the structure and weights associated with an algorithm that can
be discovered, see Figure 1(b). We study the latter setting in this work.
(a) When the architecture and integrator weights are hard-wired (black
connections and their weights), a model of the unknown system (thick red
squares) can be approximated, e.g., by multi-layer perceptrons. (b) When the
model equations are known (black squares), the structure and weights (thick
red connections and their weights) associated with a solution algorithm can be
discovered for that system.
Figure 1: A recurrent neural network architecture templated on Runge–Kutta
integrators, adapted from Rico-Martinez et al., (1992). The illustration shows
the classical fourth-order Runge–Kutta scheme, where $\vec{\mathrm{Y}}$ and
$\vec{\mathrm{X}}$ are the dependent and independent variables, respectively,
$\mathrm{f}$ is the right-hand side of an ODE, and $\vec{\mathrm{k}}_{i}$ are
stage values. Recurrent function evaluations are computed internally, and the
network itself can be applied iteratively via external recurrence. Given input
and output data for training, a model (1(a)) or an algorithm (here, an initial
value solver) for that model (1(b)) can be learned.
Algorithms encoded by this architecture inherit two structural features from a
RK scheme: i) they are based on recurrent function evaluations (inner
recurrence), and ii) they compute iterates by adding a weighted sum of these
inner evaluations that acts as correction to the input (outer recurrence).
These nested recurrent features also characterize matrix-free Krylov
subspace methods such as restarted GMRES (Saad and Schultz,, 1986) or Newton-
Krylov GMRES (Kelley,, 1995), where finite differences between recurrent
function evaluations provide estimates of directional derivatives in various
directions. This estimation of derivatives, or, more broadly, of Taylor series
as in the RK case, forms the backbone of either of these two classical
algorithm families, RK and Krylov methods. Observing these parallels, we
conjecture that a recursively recurrent neural network (R2N2) can approximate
many traditional numerical algorithms based on Taylor series expansions.
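For reference, the directional-derivative estimate underlying these
matrix-free methods takes just two function evaluations; a generic sketch, not
code from any of the cited works:

```python
import numpy as np

def directional_derivative(f, x, v, eps=1e-6):
    """Forward-difference estimate of J_f(x) @ v, the Jacobian action on v,
    as used in matrix-free Newton-Krylov methods."""
    return (f(x + eps * v) - f(x)) / eps

# e.g., f(x) = (x0^2, x0*x1): the Jacobian at x = (1, 2) applied to v = (1, 0)
f = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
print(directional_derivative(f, np.array([1.0, 2.0]), np.array([1.0, 0.0])))
# ~ [2., 2.]
```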
Returning to Figure 1(b), we see that the (hyper-)parameters of the R2N2 are
naturally linked to those of the algorithm it furnishes, e.g., weights
connecting to the output can encode quadratures, and the internal recurrence
count determines the number of function evaluations and the dimension of the
subspace that is generated. It is this particular connectivity, highlighted in
red, that is subject to discovery, whether it be encoding a Butcher tableau in
the case of RK (Butcher,, 2016; Guo et al.,, 2022) or finding the combination
of weights that convert function evaluations into directional derivatives for
Newton–Krylov.
The question that motivates this work is thus: Can meta-learning applied to
the architecture and parameters of the R2N2 discover old and new algorithms
“personalized” to certain problem classes? We advocate that the R2N2 provides
an architecture search space particularly suited for answering this question.
The parameters of the R2N2 can be partitioned a priori into those that are
hard-wired and those subject to meta-learning. We can focus on the discrete
features, i.e., determining the operations and connections that yield a best
performant algorithm, or on the (continuous) weights, i.e., on how to best
combine these operations. The former is also referred to as algorithm
configuration, while the latter is a special case of parameter tuning (Hoos,,
2011). Algorithm configuration has been optimized in a previous effort of the
authors (Mitsos et al.,, 2018), albeit for a different, less expressive
architecture with all weight parameters collapsed into a scalar stepsize. More
recently, we worked with a fixed architecture as in Figure 1(b) such that
Butcher tableaus personalized to specific classes of initial-value problems
(IVPs) were learned (Guo et al.,, 2022). The present work seeks to extend this
application to further algorithm families, in particular Newton-Krylov
solvers, and to thereby demonstrate that R2N2 gives rise to a potent
superstructure for optimal algorithm discovery. In follow-up work, we aim to
show that jointly optimizing the weights and architecture from such a
superstructure realizes learning of optimal iterative numerical algorithms,
and to explore the use of approximate physical models as preconditioners
within the superstructure.
### 1.1 Related work
Automated algorithm configuration and tuning dates back to Rice, (1976) and
studies meta-algorithms or meta-heuristics that pick the best algorithm from a
set of (parametrized) algorithms for certain problem classes by manipulating
the (hyper)parameters of a solver (Hoos,, 2011). Speed-ups up to a factor of
$50$ have been demonstrated, e.g., for satisfiability problems (KhudaBukhsh et
al.,, 2016) or mixed-integer problems (Hutter et al.,, 2010) by leveraging
problem structure. This is in contrast to classical numerical algorithms that
are designed to work with little or no a priori knowledge about the internal
structure of the problem instances to be solved and are often biased towards
worst-case performance criteria (Gupta and Roughgarden,, 2020). Conversely,
algorithms adapted to specific problem classes typically incur a
generalization weakness on other problems (Wolpert and Macready,, 1997).
Mitsos et al., (2018) modeled algorithms as feedback schemes with a cost
ascribed to each operation such that the design of iterative algorithms was
posed as an optimal control problem of mixed-integer nonlinear (MINLP) type.
They restricted operations to monomials of function evaluations and
derivatives, thus specifying a family of algorithms. Depending on the analyzed
problem, the procedure would recover known, established algorithms from this
family, but also new algorithms that were optimal for the problems considered.
Mitsos et al., (2018) solved the algorithm generation MINLPs via the
deterministic global solver BARON (Tawarmalani and Sahinidis,, 2005). Possible
gains of tailored algorithms must be weighed up against the required effort
for finding them. Machine learning (ML) can decrease this effort. This has
recently been demonstrated forcefully by Fawzi et al., (2022) who improved the
best known algorithms on several instances of matrix multiplication, an NP-
hard problem, through deep reinforcement learning built on top of the
AlphaZero framework (Silver et al.,, 2018). This landmark achievement is
expected to spur new interest in algorithm discovery via ML. ML methods can
optimize the performance of algorithms on a problem class implicitly given
through data (Balcan,, 2020; Gupta and Roughgarden,, 2020). Such data-driven
algorithm design has been applied to several computational problems, e.g.,
learning to solve graph-related problems (e.g., Tang et al., (2020)), learning
sorting algorithms (e.g., Schwarzschild et al., (2021)), learning to branch
(e.g., Khalil et al., (2016), and Balcan et al., (2018)), meta-learning
optimizers (e.g., Andrychowicz et al., (2016), and Metz et al., (2020)), and
our previous work of meta-learning RK integrators Guo et al., (2022). In the
context of this problem, the R2N2 defines a superstructure for a class of
iterative algorithms. The notion of superstructure is used in many disciplines
to denote a union of structures that are candidate solutions to a problem,
e.g., in optimization-based flowsheet design within process systems
engineering (Yeomans and Grossmann,, 1999; Mencarelli et al.,, 2020). In
general, optimizing a superstructure requires integer optimization techniques
as in Mitsos et al., (2018). Given the neural network interpretation of the
R2N2, however, (heuristic) methods for neural architecture search, e.g.,
Elsken et al., 2019a , and Li et al., 2021a , may be considered. Consequently,
the optimal algorithmic procedure corresponds to an optimized neural
architecture. In this work, however, we avoid integer optimization by
essentially fixing the neural architecture for each respective numerical
experiment, and optimizing the weights therein.
The R2N2 belongs to the class of recurrent neural networks (RNNs), which are a
natural fit for iterative algorithms. For instance, RNN architectures
templated on RK integrators have been suggested several decades ago (Rico-
Martinez et al.,, 1992, 1994, 1995; González-García et al.,, 1998). These
architectures can be used both for computing the outputs of an integrator,
e.g., Rico-Martinez et al., (1992), and González-García et al., (1998) and
identifying terms in a differential equation, e.g., Rico-Martinez et al.,
(1994), Nascimento et al., (2020), Lovelett et al., (2020), Zhao and Mau,
(2020), and Goyal and Benner, (2021). Further, as the RK network of Rico-
Martinez et al., (1992) learns the residual between the input and the output
data, it constitutes the first occurrence of a residual network (ResNet, He et
al., (2016)). Several newer ResNet-based architectures retain structural
similarities with numerical algorithms (Lu et al.,, 2018). For instance,
FractalNet (Larsson et al.,, 2017) resembles higher-order RK schemes in its
macrostructure, as does the ResNet-derived architecture by Schwarzschild et
al., (2021). Lu et al., (2018) proposed a ResNet augmented by a module derived
from linear multistep methods (Butcher,, 2016). Beyond architectural
similarity with algorithms, neural networks proposed in literature can also be
designed such that the mathematical mapping they represent shares desirable
properties with that of an algorithm, e.g., symplectic mappings (Jin et al.,,
2020) for symplectic integrators or contracting layers (Chevalier et al.,,
2021) for fixed-point iterations. Dufera, (2021) and Guo et al., (2022)
trained networks to match the derivatives of ODEs or of their RK-based
solution expansion at training points, respectively. Finally, direct learning
of algorithms from neural network-like graphs has been proposed, e.g., by
Tsitouras, (2002), Denevi et al., (2018), Mishra, (2018) and Venkataraman and
Amos, (2021).
### 1.2 Contributions
We build on the RK-NN of our previous work (Guo et al.,, 2022) and introduce
the R2N2 superstructure for iterative numerical algorithms. Our particular
realization of this superstructure has a strong inductive bias that alleviates
the need for configuration decisions, i.e., integer optimization, in algorithm
design. We further propose to encode finite differencing into the architecture
to aid the numerical estimation of directional derivative terms. Thus, higher-
order iterative algorithms for equation solving and numerical integration that
are traditionally constructed through Taylor series expansion can be
approximated by the R2N2 superstructure. The function to be evaluated inside
the R2N2 architecture is itself explicitly given as part of the input problem
instance, which is a major difference to operator networks that learn
parameter-to-solution mappings like those in Lu et al., (2019) and Li et al.,
2021b . The remaining trainable weights correspond to coefficients or
hyperparameters of the algorithms in question, and we tune these using PyTorch
(Paszke et al.,, 2019). We demonstrate that certain classical iterative
algorithms are encompassed by the proposed R2N2, e.g., Krylov subspace methods
for solving systems of equations, and RK methods for solving IVPs. We further
show that iterating the trained superstructure on problems similar to the
training data can match, and sometimes improve, the accuracy of Krylov solvers
and RK integrators for a given number of function evaluations.
The remainder of this article is structured as follows. In Section 2, we give
a general problem definition for learning algorithms from task data and
specify the problem for iterative algorithms. Section 3 introduces the R2N2
superstructure for learning iterative algorithms and shows its relation to
steps of iterative equation solvers and integrators. Section 4 presents
results of numerical experiments, where the R2N2 is trained to perform
iterations of linear and nonlinear equation solvers and integrators. We
summarize our results in Section 5 and discuss future research opportunities
in Section 6.
## 2 Problem definition
Various generic problem formulations for learning algorithms from problem data
exist in the literature, e.g., in Balcan, (2020), Gupta and Roughgarden,
(2020), and our prior work Guo et al., (2022). This section provides
background and notation for a generic algorithm learning problem, Section 2.1,
and specifies the problem formulation for learning iterative algorithms,
Section 2.2.
### 2.1 Generic problem formulation
Let $\mathbf{x}\in\mathbb{R}^{m}$ and let $\mathcal{F}$ be a set such that
each $F\in\mathcal{F}$ is a problem instance composed of a continuous function
$\bm{f}\vcentcolon\mathbb{R}^{m}\mapsto\mathbb{R}^{m}$ and some optional,
additional problem parameters $\mathbf{p}$, e.g., time, such that we can write
$F=\left(\bm{f},\mathbf{p}\right)$. Then, a traditional class of problems can
be characterized by a functional $\Pi$ acting on $F$ and a point in
$\mathbb{R}^{m}$. Further, a solution of $F$ is any
$\mathbf{x}^{\star}\in\mathbb{R}^{m}$ for which
$\Pi(\mathbf{x}^{\star},F)=0.$ (1)
This abstract form encompasses many problem types. The first type we consider
here is finding the solution $\mathbf{x}^{\star}$ of a system of algebraic
equations, such that
$\Pi(\mathbf{x}^{\star},F)=\left\|\bm{f}(\mathbf{x}^{\star})\right\|=0.$ (2)
The second problem type is finding the solution of initial-value problems
(IVPs) for a specific end time $t$, such that
$\Pi(\mathbf{x}^{\star},F)=\left\|\mathbf{y}_{0}+\int_{\tau=t_{0}}^{\tau=t}\bm{f}(\mathbf{y}(\tau))d\tau-\mathbf{x}^{\star}\right\|=0,$
(3)
where $\bm{f}$ is the right-hand side of an ordinary differential equation
(ODE) and $\mathbf{y}(\tau)\in\mathbb{R}^{m}$ is the state of the system at
time $\tau$ with the initial value $\mathbf{y}_{0}=\mathbf{y}(\tau=t_{0})$.
Equation (3) illustrates the definition of a problem instance and optional
parameters $\mathbf{p}$, where $\mathbf{p}=\left(\mathbf{y}_{0},t\right)$. The
solution is the final value of the states, i.e.,
$\mathbf{x}^{\star}=\mathbf{y}(\tau=t)$.
We only consider problem instances $F$ for which a solution
$\mathbf{x}^{\star}$ exists. Then, we call any operator mapping
$F\mapsto\mathbf{x}^{\star}$ a solver $\mathcal{S}(F)$. In general, we do not
explicitly know these solvers and hence need to approximate their action by
the design and use of numerical algorithms. That is, algorithms act as
approximate solvers $\mathcal{A}(F,\bm{\uptheta})\approx\mathcal{S}(F)$,
parametrized by $\bm{\uptheta}$, such that
$\Pi\left(\mathcal{A}(F,\bm{\uptheta}),F\right)\leq\delta,$
where $\delta\in\mathbb{R}^{+}$ is sufficiently small and ideally user-
defined. The challenge of designing numerical algorithms pertains to finding a
structure linking mathematical operations parametrized by a set of algorithmic
parameters $\bm{\uptheta}$, which together yield performant solvers for
problems that can be recast to Problem (1).
Due to the effort expended in designing algorithms, another important
consideration is the range of applicability of the algorithm, i.e., the size
of the set of problems it can solve. Traditional algorithms for problems like
(2) and (3) often consider a certain worst-case performance in a given problem
class $\mathcal{F}$. In contrast, we are interested in the average/expected
performance of algorithms over a specific problem class, which is defined by a
distribution $\mu$ over $\mathcal{F}$. Then, finding performant, or even
optimal, algorithms for such a class of problems requires solving
$\operatorname{min}_{\bm{\uptheta}}\mathbb{E}_{\mu}\left[\left\|\Pi\left(\mathcal{A}(F,\bm{\uptheta}),F\right)\right\|\right]+R(\mathcal{A}(F,\bm{\uptheta}),\bm{\uptheta}).$
(4)
Problem (4) seeks algorithm parameters $\bm{\uptheta}$ which minimize the
expectation of the residual norm of solutions computed to Equation (1) using
$\mathcal{A}(F,\bm{\uptheta})$ over problem instances distributed according to
$\mu$. The second term in (4),
$R(\mathcal{A}(F,\bm{\uptheta}),\bm{\uptheta})$, is reserved for some
additional regularization penalty that can promote certain properties in the
approximate solution $\mathcal{A}(\cdot,\bm{\uptheta}^{\star})$, such as a
desired convergence order (Guo et al.,, 2022). The regularization can also
penalize the parameter values $\bm{\uptheta}$ directly, e.g., in $L_{2}$
regularization. The algorithmic structure itself can be described by discrete
variables, e.g., to indicate whether an operation or module exists in the
algorithm. However, to include such discrete variables one requires a metric
that determines which structure is optimal. This is typically assessed over a
prolonged number of iterations and requires integer optimization techniques,
see, e.g., Mitsos et al., (2018). In this work, the goal is rather to
demonstrate that the iterations of several iterative algorithms have a common
superstructure. Consequently, we focus on tuning the parameters of this
superstructure towards different problem classes. Thus, Problem (4) resembles
a regular multi-task learning problem in the context of statistical learning
(Baxter,, 2000), where tasks are equated with problem instances $F$. To handle
such a problem, the expectation term can be approximated by drawing samples of
task data from $\mu$ and minimizing some loss function for them.
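In this empirical form, a schematic sketch of the learning objective could read as follows; the solver `algorithm(F, theta)` and the residual functional are generic placeholders of ours, and the regularizer $R$ is set to zero:

```python
import torch

def empirical_risk(theta, tasks, algorithm, residual):
    # Monte-Carlo approximation of the expectation in Problem (4) with R = 0:
    # average the residual of the approximate solutions over sampled tasks.
    losses = [residual(algorithm(F, theta), F) for F in tasks]
    return torch.stack(losses).mean()

# theta can then be optimized by any gradient-based method, e.g.
#   opt = torch.optim.Adam([theta], lr=1e-3)
#   loss = empirical_risk(theta, tasks, algorithm, residual)
#   loss.backward(); opt.step()
```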
### 2.2 Iterative numerical algorithms
We focus on iterative algorithms, which construct a sequence of iterates
$\{\mathbf{x}_{k}\}$ approaching a solution of $F$. Starting with the
initial point $\mathbf{x}_{0}$, the algorithm computes new iterates by
applying
$\mathbf{x}_{k+1}=\mathcal{A}^{iter}(F,\bm{\uptheta};\mathbf{x}_{k}),$ (5)
such that the sequence converges to some $\hat{\mathbf{x}}$:
$\hat{\mathbf{x}}=\lim\limits_{k\rightarrow\infty}{\mathbf{x}_{k}}.$
If we have $\Pi(\hat{\mathbf{x}},F)=0$, then $\mathcal{A}^{iter}$ is
convergent to the solution of $F$.
In the following, we restrict the possible realizations of
$\mathcal{A}^{iter}$ strongly by narrowing our attention to iterative
algorithms that apply additive step updates in a generalized Krylov-type
subspace $\mathcal{K}_{n}$. This subspace is spanned by vectors
$\mathbf{v}_{0},\ldots,\mathbf{v}_{n-1}$, i.e.,
$\mathcal{K}_{n}(\bm{f},\mathbf{x})=\operatorname{span}\{\mathbf{v}_{0},\mathbf{v}_{1},\ldots,\mathbf{v}_{n-1}\},$
(6)
that are generated by recurrent function evaluations at $\mathbf{x}_{k}$,
i.e.,
$\mathbf{v}_{0}\vcentcolon=\bm{f}\left(\mathbf{x}_{k}\right),$ (7a)
$\mathbf{v}_{j}=\bm{f}\left(\mathbf{x}_{k}+\sum_{l=0}^{j-1}b_{jl}\mathbf{v}_{l}\right),\quad
j=1,\ldots,n-1,$ (7b)
for some $b_{jl}\in\mathbb{R}$. The next iterate $\mathbf{x}_{k+1}$ is
computed by adding a linear combination of this basis to $\mathbf{x}_{k}$,
i.e.,
$\mathbf{x}_{k+1}=\mathbf{x}_{k}+\sum_{j=0}^{n-1}c_{j}\mathbf{v}_{j},$ (8)
for some $c_{j}\in\mathbb{R}$. Equation (7b) describes an inner iteration of
the algorithm, while Equation (8) constitutes an outer iteration. Together,
Equations (7)–(8) define a common superstructure for iterative algorithms.
This structure has parallels to Krylov subspace methods (Saad,, 2003) and to RK
methods (Butcher,, 2016), respectively; see Section 3.3.
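Written out in code, one outer iteration of this superstructure, Equations (7)–(8), amounts to the following sketch; the coefficient containers b and c are placeholders of ours for the quantities later identified with the trainable parameters:

```python
import numpy as np

def superstructure_step(f, x_k, b, c):
    """One outer iteration, Eqs. (7)-(8). b[j-1] holds the j coefficients
    b_{j,0..j-1} of layer j; c holds the n output coefficients c_j."""
    v = [f(x_k)]                                     # v_0 = f(x_k), Eq. (7a)
    for coeffs in b:                                 # layers j = 1, ..., n-1
        shift = sum(b_l * v_l for b_l, v_l in zip(coeffs, v))
        v.append(f(x_k + shift))                     # Eq. (7b)
    return x_k + sum(c_j * v_j for c_j, v_j in zip(c, v))   # Eq. (8)
```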
Considering only iterative algorithms within this superstructure has several
advantages compared to alternatives that rely on deep learning with heavily
parametrized models. First, the iterative nature of the superstructure greatly
reduces the total amount of parameters needed to learn solution procedures.
Second, Equations (7) and (8) eliminate most of the functional forms
admissible under Equation (5). And, third, directly embedding $\bm{f}$ in the
superstructure avoids the need to learn an extra representation of the problem
within the solver mapping. The resulting superstructure resembles an RNN, and
automatic differentiation frameworks for the training of neural networks such
as PyTorch (Paszke et al.,, 2019) can be used to determine the remaining free
parameters $\bm{\uptheta}$.
## 3 R2N2 superstructure for iterative algorithms
We recently proposed the RK-NN, a neural network architecture templated on RK
integrators, to personalize coefficients of a RK method to a specific problem
class (Guo et al.,, 2022). In this work, we extend our view of the RK-NN to
that of a more general superstructure for iterative algorithms, the R2N2, that
is applicable to different problem classes such as equation solving and
numerical integration. Section 3.1 describes the architecture of the R2N2.
Each forward pass through the R2N2 is interpreted as an iteration of an
algorithm (outer recurrence) that invokes one or many recurrent function
evaluations (inner recurrence), starting at the current iterate, to compute
the next iterate. The R2N2 is equivalent to the RK-NN from Guo et al., (2022)
for the specific case of minimizing empirical risk (Problem (4)) for classes
of IVPs, Problem (3). However, as a small addition to the original RK-NN, the
R2N2 superstructure now allows presetting various routines for function
evaluation, see Section 3.2. In Section 3.3, we show that the applicability of
the R2N2 as a superstructure is substantially extended beyond the case of
solving IVPs. Section 3.4 and Section 3.5 conclude this section with remarks
about training and implementation of the R2N2 superstructure.
### 3.1 Neural architecture underlying the R2N2 superstructure
The proposed R2N2 superstructure is portrayed by Figure 2. It represents the
computation of one iteration of an algorithm, i.e., the mathematical function
defined by the RNN architecture can substitute for
$\mathcal{A}^{iter}(F,\bm{\uptheta};\mathbf{x}_{k})$ in Equation (5), where a
problem instance $F$ contains $\bm{f}$ and $\mathbf{p}$. Each step requires
the current iterate $\mathbf{x}_{k}$ as an input, where the initial
$\mathbf{x}_{0}$ is typically supplied within $\mathbf{p}$. Further, a
function $\bm{f}\vcentcolon\mathbb{R}^{m}\mapsto\mathbb{R}^{m}$ is prescribed
externally as part of a task, but remains unchanged for all iterates. Finally,
a parameter $h$ for scaling of the layer computations is part of the input to
the superstructure. For some problem classes, we choose $h$ according to
problem parameters $\mathbf{p}$, e.g., the timestep in integration. Whenever
$h$ is not specified, assume $h=1$, i.e., no scaling.
Figure 2: Proposed recursively recurrent neural network-based superstructure
of an iterative algorithm. The recurrent cell is delimited by the dashed blue
line and computes $\mathbf{x}_{k+1}$ as a function of the current iterate
$\mathbf{x}_{k}$, a scaling parameter $h$ (cyan), and a function $\bm{f}$
(magenta). Trainable weights $\bm{\uptheta}_{j}$ are contained within each of
the layer modules $\bm{N}_{j}$ with $j=1,2,\ldots,n-1$. $\bm{\uptheta}_{n}$
denote the trainable weights of the output module $\bm{N}_{n}$ (all in blue).
$\mathbf{x}_{k}$ has a skip connection to each layer and to the output module.
Intermediate function arguments, i.e., inputs to $\bm{f}$ in layer $j$, are
called $\mathbf{x}^{\prime}_{j}$ and subspace members, i.e., outputs of
$\bm{f}$ in layer $j$, are denoted by $\mathbf{v}_{j}$ (orange).
The initial layer, which is the left-most in Figure 2, is always a direct
function evaluation at the current iterate, $\bm{f}(\mathbf{x}_{k})$, i.e., we
have $\mathbf{v}_{0}=\bm{f}(\mathbf{x}_{k})$. The remaining $n-1$ layers of
the superstructure each output a $\mathbf{v}_{j}$, $j=1,\ldots,n-1$ by
applying a composition of $\bm{f}$ and $\bm{N}_{j}$. $\bm{N}_{j}$ linearly
combines its inputs $\mathbf{v}_{0},\ldots,\mathbf{v}_{j-1}$ using trainable
parameters $\bm{\uptheta}_{j}$, scales the result by $h$, and adds it to
$\mathbf{x}_{k}$ to provide $\mathbf{x}^{\prime}_{j}$, the input to $\bm{f}$
in the $j$-th layer:
$\mathbf{x}^{\prime}_{j}=\bm{N}_{j}\left(\mathbf{x}_{k};\mathbf{v}_{0},\ldots,\mathbf{v}_{j-1};h\right)=\mathbf{x}_{k}+h\sum_{l=0}^{j-1}\theta_{j,l}\mathbf{v}_{l}$
(9)
The $\{\mathbf{v}_{j}\}$, including $\mathbf{v}_{0}$, span an $n$-dimensional
subspace in which the output layer $\bm{N}_{n}$ computes the next iterate
$\mathbf{x}_{k+1}$ using the trainable parameters $\bm{\uptheta}_{n}$, i.e.,
$\mathbf{x}_{k+1}=\mathbf{x}_{k}+h\sum_{j=0}^{n-1}\theta_{n,j}\mathbf{v}_{j}.$
(10)
This completes one forward pass through the R2N2 superstructure. Note that
algorithm parameters $\bm{\uptheta}$ are partitioned into
$\left\{\bm{\uptheta}_{1},\ldots,\bm{\uptheta}_{n-1},\bm{\uptheta}_{n}\right\}$
and that we can identify $h\theta_{j,l}$ with $b_{jl}$ from Equation (7b) for
$j=1,\ldots,n-1$ and $\theta_{n,j}$ with $c_{j}$ from Equation (8). Figure 3
shows the computation of multiple consecutive iterations with the proposed
R2N2 superstructure.
Figure 3: $T$ consecutive iterations using the R2N2 superstructure from Figure
2 compute a sequence $\left(\mathbf{x}_{1},\ldots,\mathbf{x}_{T}\right)$. The
blue dashed cell delimits the iteration $\mathcal{A}^{iter}$ computed in a
forward pass, i.e., one pass through the R2N2. The task-related inputs $h$
(cyan) and $\bm{f}$ (magenta) apply to each iteration. The initial guess
$\mathbf{x}_{0}$ is an input only to the first iteration.
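A minimal PyTorch sketch of one forward pass, Equations (9)–(10), could look as follows; the module and parameter names are ours and do not refer to the reference implementation:

```python
import torch
import torch.nn as nn

class R2N2Cell(nn.Module):
    """One iteration x_k -> x_{k+1} of the R2N2 superstructure (Figure 2)."""

    def __init__(self, n):
        super().__init__()
        # theta_j of the layer modules N_j (Eq. 9); theta_j has j entries.
        self.theta = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(j)) for j in range(1, n)]
        )
        # theta_n of the output module N_n (Eq. 10).
        self.theta_out = nn.Parameter(0.1 * torch.randn(n))

    def forward(self, f, x_k, h=1.0):
        v = [f(x_k)]                                  # v_0 = f(x_k)
        for theta_j in self.theta:
            # N_j: x'_j = x_k + h * sum_l theta_{j,l} v_l   (Eq. 9)
            x_prime = x_k + h * sum(t * v_l for t, v_l in zip(theta_j, v))
            v.append(f(x_prime))
        # N_n: x_{k+1} = x_k + h * sum_j theta_{n,j} v_j    (Eq. 10)
        return x_k + h * sum(t * v_j for t, v_j in zip(self.theta_out, v))
```

Recurrent application over $T$ iterations, as in Figure 3, then reduces to looping `x = cell(f, x, h)`.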
### 3.2 Black-box function evaluations
In order to promote discovery of time-proven algorithms, the superstructure
induces some prior structure. For instance, the RK-NN developed in our
previous work (Guo et al.,, 2022) essentially sets
$\mathbf{v}_{j}=\bm{f}(\mathbf{x}_{k}+h\sum_{l=0}^{j-1}a_{j,l}\mathbf{v}_{l})$
to promote RK methods with lower-triangular Butcher tableaus. In the R2N2
superstructure, this corresponds to Equation (9) with
$\bm{\uptheta}_{j}\vcentcolon=\left(a_{j,0},\ldots,a_{j,j-1}\right)$ followed
by a direct function evaluation, $\bm{f}(\mathbf{x}^{\prime}_{j})$.
We can also hard-wire some of the operations of the superstructure to add yet
another implicit bias. For example, we encode forward-differencing, i.e.,
$\mathbf{v}_{j}=\frac{1}{\epsilon}\left(\bm{f}(\mathbf{x}_{k}+\epsilon\mathbf{x}^{\prime}_{j})-\bm{f}(\mathbf{x}_{k})\right),$
(11)
inside the superstructure for use in Section 3.3 and Section 4.2. Forward-
differencing allows estimation of directional derivatives. For instance,
Equation (11) estimates the derivative of $\bm{f}$ at $\mathbf{x}_{k}$ in the
direction of some $\mathbf{x}^{\prime}_{j}$. That is, the directional
derivative in the $j$-th layer is always approximated at the current iterate
$\mathbf{x}_{k}$ but the direction is determined by $\bm{N}_{j}$ according to
Equation (9) as a function of $\mathbf{x}_{k}$ and the previous $j-1$ layer
outputs. The corresponding pass through a layer of the superstructure is
sketched in Figure 4.
Figure 4: The dashed cell indicates forward-differencing for estimation of
directional derivatives with a small $\epsilon$ in the superstructure. The
directional derivative at $\mathbf{x}_{k}$ requires recalling the first layer
output $\bm{f}(\mathbf{x}_{k})$ to explicitly construct finite differences
between the outputs of the two function evaluations.
While Equation (11) and Figure 4 appear to deviate from Equation (7b) and
Figure 2, respectively, we illustrate that forward-differencing is contained
as a special case in the superstructure in supplementary material (SM1).
However, without hard-wiring, forward-differencing is only recovered in the
limit where certain trainable parameters tend to zero and may therefore be hard
to learn during training. Hence, fixing a routine a priori can be beneficial.
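The hard-wired routine of Equation (11) is a two-point finite difference; a minimal sketch (function name ours):

```python
def forward_difference(f, x_k, x_prime, eps=1e-8):
    # Equation (11): estimate the directional derivative of f at x_k in the
    # direction x_prime; f(x_k) can be reused from the initial layer (Figure 4).
    return (f(x_k + eps * x_prime) - f(x_k)) / eps
```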
### 3.3 Relation to Krylov subspace methods
We now discuss to what extent the operations required for one iteration of
Krylov and Newton-Krylov (NK) algorithms are related to the forward pass
through such a superstructure. For RK methods, the analogy between an RK-$n$
method and the superstructure with $n$ total layers, i.e., function
evaluations, was already demonstrated in Guo et al., (2022). Here, we make the
point that the superstructure also mimics some key aspects of Krylov subspace
methods.
#### 3.3.1 Krylov subspace solvers
Krylov subspace solvers compute an approximate solution to linear systems of
the form
$\mathbf{A}\mathbf{x}=\mathbf{b},$ (12)
where $\mathbf{A}\in\mathbb{R}^{m\times m}$ and $\mathbf{b}\in\mathbb{R}^{m}$,
in an $n$-dimensional subspace $\mathcal{K}_{n}$, $n\leq m$, such that the
norm of a residual $\mathbf{r}$, i.e.,
$\left\|\mathbf{r}\right\|=\left\|\mathbf{A}\mathbf{x}-\mathbf{b}\right\|,$
is minimized (Saad,, 2003). The Krylov subspace $\mathcal{K}_{n}$ is given by
$\mathcal{K}_{n}(\mathbf{A},\mathbf{r}_{0})=\operatorname{span}\left\{\mathbf{r}_{0},\mathbf{A}\mathbf{r}_{0},\mathbf{A}^{2}\mathbf{r}_{0},\ldots,\mathbf{A}^{n-1}\mathbf{r}_{0}\right\},$
(13)
and is fully determined by $\mathbf{A}$ and the initial residual
$\mathbf{r}_{0}$. In absence of prior knowledge, $\mathbf{r}_{0}$ is usually
computed by choosing $\mathbf{x}_{0}=\mathbf{0}$, i.e.,
$\mathbf{r}_{0}=-\mathbf{b}$. The remaining Krylov vectors spanning the
subspace are obtained by recurrent left-side multiplication of
$\mathbf{r}_{0}$ with $\mathbf{A}$. Finally, the approximate solution follows
as
$\overline{\mathbf{x}}=\mathbf{x}_{0}+\mathbf{q}^{\star},$ (14)
$\mathbf{q}^{\star}\in\arg\operatorname{min}_{\mathbf{q}\in\mathcal{K}_{n}}\left\|\mathbf{A}\left(\mathbf{x}_{0}+\mathbf{q}\right)-\mathbf{b}\right\|.$
(15)
$\mathbf{q}^{\star}$ can be expressed as a linear combination of the vectors
spanning the subspace $\mathcal{K}_{n}$, i.e.,
$\mathbf{q}^{\star}=\mathbf{V}_{n}\bm{\upbeta},$ (16)
where $\mathbf{V}_{n}$ is an $m\times n$ matrix that contains the members of
$\mathcal{K}_{n}$ as columns and $\bm{\upbeta}\in\mathbb{R}^{n}$ is a vector
of coefficients found by solving Problem (15). Contemporary Krylov methods
like GMRES (Saad and Schultz,, 1986) orthonormalize the basis of
$\mathcal{K}_{n}(\mathbf{A},\mathbf{r}_{0})$. This operation requires
additional vector-vector products but leads to an overall easier residual
minimization.
Instead of solving Equation (15), the R2N2 superstructure approximates the
solutions generated by the Krylov method, Equations (13)–(16), as follows.
We set $\bm{f}(\mathbf{x})\vcentcolon=\mathbf{A}\mathbf{x}-\mathbf{b}$ such
that the initial function evaluation returns
$\mathbf{v}_{0}=\bm{f}(\mathbf{0})=\mathbf{r}_{0}=-\mathbf{b}$. Due to the
nature of the layer modules $\bm{N}_{j}$, Equation (9), all layers after the
initial layer will return outputs $\mathbf{v}_{j}$ that are in the span of
$\left\{\mathbf{r}_{0},\mathbf{A}\mathbf{r}_{0},\ldots,\mathbf{A}^{j-1}\mathbf{r}_{0}\right\}$
such that the subspace generated by the R2N2 coincides with the Krylov
subspace, Equation (13). The output module of the R2N2, Equation (10), learns
a fixed linear combination of these $\mathbf{v}_{j}$ through its parameters
$\bm{\uptheta}_{n}$. By identifying these $\bm{\uptheta}_{n}$ as
$\bm{\upbeta}$, $\mathbf{x}_{k}$ as $\mathbf{x}_{0}$ and $h=1$, one forward
pass through the R2N2 can, in principle, imitate one outer iteration of the
Krylov subspace method for a single problem instance
$\left(\mathbf{A},\mathbf{b}\right)$.
On the other hand, one pass through the R2N2 is cheaper than an iteration of a
Krylov method, since the R2N2 does not perform orthonormalization and explicit
residual minimization. Additionally, both methods require $n-1$ function
evaluations to build the $n$-dimensional subspace, given that $\mathbf{r}_{0}$
is obtained for free. Finally, we point out that some Krylov-based solvers,
e.g., GMRES, can be iteratively restarted to compute a better approximation to
a solution of a linear system (Saad and Schultz,, 1986). This restarting
procedure is naturally represented by recurrent passes through the R2N2, i.e.,
a restart corresponds to updating the iterate from $\mathbf{x}_{k}$ to
$\mathbf{x}_{k+1}$.
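For reference, the explicit residual minimization of Equations (13)–(16) that the R2N2 replaces by a learned, fixed linear combination can be sketched as follows (a plain least-squares variant without the orthonormalization of GMRES; the helper name is ours):

```python
import numpy as np

def krylov_step(A, b, x0, n):
    # Build the Krylov basis of Eq. (13) by recurrent mat-vecs on r_0.
    r0 = A @ x0 - b
    V = np.stack([np.linalg.matrix_power(A, j) @ r0 for j in range(n)], axis=1)
    # Eqs. (15)-(16): choose beta to minimize ||A (x0 + V beta) - b||.
    beta, *_ = np.linalg.lstsq(A @ V, b - A @ x0, rcond=None)
    return x0 + V @ beta  # Eq. (14)
```

The R2N2 output module instead applies one set of coefficients $\bm{\uptheta}_{n}$ learned for the whole problem class.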
#### 3.3.2 Newton-Krylov solvers
Newton-Krylov solvers essentially approximate Newton iterations for the
solution of a nonlinear equation system $\bm{f}(\mathbf{x})=\mathbf{0}$,
$\bm{f}\vcentcolon\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}$, where the linear
subproblem that arises in each Newton iteration is addressed using a Krylov
subspace method (Kelley,, 1995, 2003; Knoll and Keyes,, 2004). The $k$-th
linear subproblem requires solving
$\mathbf{J}(\mathbf{x}_{k})\Delta\mathbf{x}_{k}=-\bm{f}(\mathbf{x}_{k}).$
Therefore, the residual $\mathbf{r}_{0}$ of the linear solve corresponds to
$-\bm{f}(\mathbf{x}_{k})$ and the matrix $\mathbf{A}$ is substituted by the
Jacobian of $\bm{f}$ at $\mathbf{x}_{k}$,
$\mathbf{J}(\mathbf{x}_{k})\in\mathbb{R}^{m\times m}$. The corresponding
$k$-th Krylov subspace then reads
$\mathcal{K}_{n}^{(k)}\left(\mathbf{J}(\mathbf{x}_{k}),-\bm{f}(\mathbf{x}_{k})\right)=\operatorname{span}\left\{\mathbf{v}_{0},\ldots,\mathbf{v}_{n-1}\right\},$
(17)
$\mathbf{v}_{j}=-\mathbf{J}(\mathbf{x}_{k})^{j}\bm{f}(\mathbf{x}_{k})\quad\forall\,j\in\{0,\ldots,n-1\}.$
Since practical use of Newton-Krylov methods avoids
computing $\mathbf{J}(\mathbf{x}_{k})$ explicitly, a matrix-vector product
$\mathbf{J}(\mathbf{x}_{k})\mathbf{z}$, where $\mathbf{z}\in\mathbb{R}^{m}$ is
a vector, can be approximated using a directional derivative of $\bm{f}$ at
$\mathbf{x}_{k}$ (Kelley,, 2003), i.e.,
$\mathbf{J}(\mathbf{x}_{k})\mathbf{z}\approx\frac{\bm{f}(\mathbf{x}_{k}+\epsilon\mathbf{z})-\bm{f}(\mathbf{x}_{k})}{\epsilon},$
where $\epsilon$ is a small value that in this paper we choose on the order of
$10^{-8}$. The right-hand side corresponds to forward-differencing, Equation
(11), which is shown to lie within the R2N2 superstructure in the
supplementary material (SM1). Therefore, the solution of the $k$-th linear
subproblem of the Newton-Krylov algorithm can be approximated by one forward
pass through the R2N2 superstructure if the remaining identities noted in the
previous section, Section 3.3.1, on linear Krylov solvers are applied again.
Consequently, the limited expressivity of $\bm{\uptheta}_{n}$ compared to the
minimization in a Krylov method is inherited too. Typically, multiple iterates
$\mathbf{x}_{k}$ have to be computed to sufficiently approximate a solution of
$\bm{f}(\mathbf{x})=\mathbf{0}$. Such iterative behavior can be reproduced by
performing multiple recurrent passes through the R2N2.
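A sketch of one such approximate Newton-Krylov step, built from the matrix-free Jacobian-vector product above, might read as follows; the helper names are ours, and the least-squares solve stands in for the residual minimization that the R2N2 replaces by learned coefficients:

```python
import numpy as np

def jvp(f, x_k, z, eps=1e-8):
    # Matrix-free J(x_k) z via forward differences, as in Section 3.3.2.
    return (f(x_k + eps * z) - f(x_k)) / eps

def newton_krylov_step(f, x_k, n, eps=1e-8):
    # Krylov basis of the k-th linear subproblem, Eq. (17):
    # v_j = -J(x_k)^j f(x_k), built by repeated matrix-free products.
    v = [-f(x_k)]
    for _ in range(1, n):
        v.append(jvp(f, x_k, v[-1], eps))
    V = np.stack(v, axis=1)
    # Least-squares surrogate for the linear solve J(x_k) dx = -f(x_k).
    JV = np.stack([jvp(f, x_k, V[:, j], eps) for j in range(n)], axis=1)
    beta, *_ = np.linalg.lstsq(JV, -f(x_k), rcond=None)
    return x_k + V @ beta
```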
### 3.4 Training the R2N2 superstructure
Training the R2N2 superstructure implies solving Problem (4) for the trainable
parameters $\bm{\uptheta}$. The input data for each problem class is sampled
from a distribution representing a set of problem instances, the solutions to
which are the training targets. In the following, we indicate data samples by
an additional subscript $i=1,\ldots,N$, where $N$ is the total number of
samples, i.e., $\mathbf{x}_{i,k}$ refers to the $i$-th sample after the $k$-th
iteration. The output of the $k$-th pass through an R2N2 is the iterate
denoted by $\hat{\mathbf{x}}_{i,k+1}$. As a loss function we use a weighted
version of the mean squared error (MSE) between $\hat{\mathbf{x}}_{i,k}$ and
$\mathbf{x}_{i,k}^{target}$ or the corresponding residual
$\Pi\left(\hat{\mathbf{x}}_{i,k},F_{i}\right)$, summed over all iterations,
i.e.,
$MSE_{x}=\frac{1}{N}\sum_{k=1}^{T}\sum_{i=1}^{N}w_{i,k}{\left\|\hat{\mathbf{x}}_{i,k}-\mathbf{x}^{target}_{i,k}\right\|_{2}}^{2},$
(18a)
$MSE_{\Pi}=\frac{1}{N}\sum_{k=1}^{T}\sum_{i=1}^{N}w_{i,k}{\Pi(\hat{\mathbf{x}}_{i,k},F_{i})}^{2},$
(18b)
where $w_{i,k}$ are the weights belonging to the $i$-th sample in the $k$-th
iteration. Whether we utilize Equation (18a) or Equation (18b) is problem-
specific. For instance, for equation solvers, we can make use of
$\bm{f}(\mathbf{x}^{\star}_{i}):=\mathbf{0}$ to compute the residual for
Equation (18b) from $\bm{f}(\mathbf{x}_{i,k+1})$. For integrators on the other
hand, we can sample training targets $\mathbf{x}_{i,k+1}^{target}$ from the
trajectory computed by some high-order integrator or, if available, use an
analytic solution to generate target data. We do not use any regularizers for
training the R2N2 in this work.
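As an illustration, the residual-based loss of Equation (18b) for equation solving, unrolled over $T$ applications of an iteration cell such as the sketch in Section 3.1, could be assembled as follows; the task list and weight array are placeholders of ours:

```python
import torch

def training_loss(cell, tasks, T, w):
    # Equation (18b) for equation solving: Pi(x, F) = ||f(x)||, so the
    # weighted squared residual is w_{i,k} * ||f(x_hat_{i,k})||^2.
    loss = torch.zeros(())
    for i, (f, x0) in enumerate(tasks):
        x = x0
        for k in range(T):
            x = cell(f, x)                    # one forward pass, Eq. (5)
            loss = loss + w[i][k] * (f(x) ** 2).sum()
    return loss / len(tasks)
```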
### 3.5 Implementation
We implemented the R2N2 superstructure in PyTorch (version $1.8.0$) (Paszke et
al.,, 2019). For the implementation of the forward-differencing using function
evaluations, see supplementary material (SM2). We trained on an Intel i7-9700K
CPU using Adam (Kingma and Ba,, 2015) for $50{,}000$ epochs with the default
learning rate of $0.001$ or L-BFGS (Liu and Nocedal,, 1989) for $20{,}000$
epochs with a learning rate tuned by hand. Note that L-BFGS is commonly used
to fine-tune networks with architectures comparable to ours that were
pretrained with Adam, e.g., by Zhao and Mau, (2020), suggesting that L-BFGS
can improve the training result. A detailed assessment of the capability of
different optimizers and strategies to train the R2N2 superstructure is beyond
the scope of this work.
## 4 Numerical experiments
In this section, we demonstrate the ability of the R2N2 superstructure to
resemble efficient iterations for various problem types. As a case study, we
consider equation solvers and integrators.
### 4.1 Solving linear equation systems
First, we study the solution of linear equations, see Equation (12). For the
illustrative experiments we consider examples where a single task is given by
$F_{i}=\left(\mathbf{A},\mathbf{b}_{i}\right)$ with
$\mathbf{A}\in\mathbb{R}^{m\times m}$, $\mathbf{b}_{i}\in\mathbb{R}^{m}$,
$\mathbf{x}\in\mathbb{R}^{m}$ and $m=5$. $\mathbf{A}$ is generated as
$\mathbf{A}=\tilde{\mathbf{A}}^{T}\tilde{\mathbf{A}}+\operatorname{diag}(\bm{\uplambda})$,
where $\tilde{\mathbf{A}}$ is a matrix with random entries and
$\bm{\uplambda}$ can be used to manipulate the spectrum of $\mathbf{A}$. Here
we use $\bm{\uplambda}=\left(1,0.75,0.5,0.1,0.1\right)^{T}$. These choices
guarantee that $\mathbf{A}$ is symmetric positive definite with eigenvalues
fairly suited for Krylov methods (Kelley,, 1995), and, in extension, that
Problem (12) has a unique solution. $\mathbf{b}_{i}$ is sampled by adding
noise to a fixed randomly-chosen mean $\tilde{\mathbf{b}}$, i.e.,
$\mathbf{b}_{i}=\tilde{\mathbf{b}}+\mathbf{b}^{\prime}_{i}$, where
$\mathbf{b}^{\prime}_{i}\sim\mathcal{N}\left(0,0.2\left\|\tilde{\mathbf{b}}\right\|\mathbf{I}_{m}\right)$.
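This data-generation recipe translates into a few lines; treating $0.2\|\tilde{\mathbf{b}}\|$ as the noise scale is our reading of the stated distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 5, 100
lam = np.array([1.0, 0.75, 0.5, 0.1, 0.1])

A_tilde = rng.standard_normal((m, m))
A = A_tilde.T @ A_tilde + np.diag(lam)    # symmetric positive definite by construction

b_mean = rng.standard_normal(m)           # the fixed randomly-chosen mean b~
scale = 0.2 * np.linalg.norm(b_mean)      # our reading of the noise scale
b_samples = b_mean + scale * rng.standard_normal((N, m))   # the tasks b_i
```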
As training input we use the data set of tasks
$\{F_{i}\}=\{(\mathbf{A},\mathbf{b}_{i})\}$ and always select
$\mathbf{x}_{0}=\mathbf{0}$ as the initial point. Therefore, the resulting
$n$-dimensional Krylov subspace is always spanned by
$\left\{\mathbf{b}_{i},\mathbf{A}\mathbf{b}_{i},\ldots,\mathbf{A}^{n-1}\mathbf{b}_{i}\right\}$.
When injecting the input data to the R2N2, we resort to
$\bm{f}(\mathbf{x}_{i})\vcentcolon=\mathbf{A}\mathbf{x}_{i}-\mathbf{b}_{i}$.
We use $\bm{f}\left(\mathbf{x}_{i,k+1}^{t}\right)=\mathbf{0}$ as training
targets for all samples $i$ and all steps $k$. Therefore, the training loss,
derived from Equation (18b), becomes
$MSE_{f}=\sum_{k=1}^{T}w_{k}\sum_{i=1}^{N}\left\|\mathbf{A}\hat{\mathbf{x}}_{i,k}-\mathbf{b}_{i}\right\|^{2},$
(19)
where $w_{k}$ becomes relevant when training over multiple iterations, $T>1$,
and is tuned by hand.
To evaluate the performance of the R2N2, we compute the reduction of the norm
of the residual that was achieved after $k$ iterations through the R2N2, i.e.,
$\Delta\mathbf{r}_{i,k}=\left\|\mathbf{b}_{i}\right\|-\left\|\mathbf{A}\hat{\mathbf{x}}_{i,k}-\mathbf{b}_{i}\right\|.$
(20)
For the results that follow, we compare the R2N2 to the SciPy implementation
of the solver GMRES (Saad and Schultz,, 1986; Virtanen et al.,, 2020). GMRES
is set to use the same number of inner iterations, i.e., function evaluations,
as the R2N2. Considering the restarted version of GMRES, GMRES(r), a forward
pass through the R2N2 represents one outer iteration (Saad and Schultz,,
1986). We evaluate the reduction in residual norm divided by the one achieved
using GMRES, i.e.,
$\frac{\Delta\mathbf{r}_{i,1}^{NN}}{\Delta\mathbf{r}_{i,1}^{GMRES}}$,
with the subscripts ‘NN’ indicating the R2N2 and ‘GMRES’ indicating the solver
GMRES.
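The baseline comparison can be sketched with SciPy's GMRES, using the restart argument to cap the number of inner iterations per outer cycle; we rely on the default tolerances, and the helper names are ours:

```python
import numpy as np
from scipy.sparse.linalg import gmres

def residual_reduction(A, b, x_hat):
    # Equation (20) with x_0 = 0, i.e., ||r_0|| = ||b||.
    return np.linalg.norm(b) - np.linalg.norm(A @ x_hat - b)

def gmres_baseline(A, b, n_inner):
    # One outer (restart) cycle of GMRES with n_inner inner iterations.
    x, _ = gmres(A, b, restart=n_inner, maxiter=1)
    return residual_reduction(A, b, x)

# Relative performance for one sample b_i, given a learned step x_nn:
#   ratio = residual_reduction(A, b_i, x_nn) / gmres_baseline(A, b_i, n)
```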
Figure 5: Relative performance of R2N2 vs GMRES with $n=4$ inner iterations
for problem instances of Problem (12) with fixed $\mathbf{A}$ and various
samples $\mathbf{b}_{i}$ (red dots). The blue line indicates the baseline
achieved by GMRES. The residual reduction achieved on each sample is computed
using Equation (20). Subscripts ‘NN’ and ‘GMRES’ stand for the R2N2 and GMRES,
respectively. Each method returns a step in a $4$-dimensional subspace, which
is generated via $3$ function evaluations. $\mathbf{b}^{\star}$ denotes the
sample for which the relative performance of R2N2 is maximized.
Figure 5 shows that the R2N2 resembles a solver for the given samples. Its
performance is upper-bounded by the performance of GMRES, as GMRES minimizes
the residual in the subspace spanned by the Krylov vectors. Given that this
subspace is invariant with respect to the operations that can be learned by
the layers of the R2N2 for the linear problem instances, the R2N2 cannot
improve on the performance of GMRES in this first case study.
$\mathbf{b}^{\star}$ denotes the sample for which the R2N2 performs best. As
the R2N2 only learns a linear combination in the output layer – instead of the
residual minimization done in GMRES – its performance can at best be optimal
for a single problem instance.
When $\mathbf{A}$ is additionally drawn from a distribution, the R2N2
generally does not exactly match the performance of GMRES for any problem
instance from the training set. For instance, when the training set consists
of problem instances that are formed with either $\mathbf{A}_{1}$ or
$\mathbf{A}_{2}$ in addition to $\mathbf{b}_{i}$, the residual reduction
reaches about $98\%$ of that achieved by GMRES for problem instances with
$\mathbf{A}_{1}$ and $95\%$ for those with $\mathbf{A}_{2}$, see Figure 6.
Moreover, a correlation between the performance of the R2N2 and the cosine
similarity between $\mathbf{b}_{i}$ and $\mathbf{b}_{i}^{\star}$ is indicated
by Figure 6.
(a) Problem instances with $\mathbf{A}_{1}$.
(b) Problem instances with $\mathbf{A}_{2}$.
Figure 6: Relative performance of R2N2 vs GMRES with $n=4$ inner iterations
for problem instances of Problem (12) having various samples $\mathbf{b}_{i}$
(red dots). Further, the instances are separated into those featuring
$\mathbf{A}_{1}$ (6(a)) and those with $\mathbf{A}_{2}$ (6(b)). The blue lines
indicate the baseline achieved by GMRES. The residual reduction is computed
using Equation (20) with either $\mathbf{A}_{1}$ or $\mathbf{A}_{2}$ in place
of $\mathbf{A}$. Subscripts ‘NN’ and ‘GMRES’ stand for the R2N2 and GMRES,
respectively. By $\mathbf{b}_{1}^{\star}$ and $\mathbf{b}_{2}^{\star}$ we
denote the sample for which the relative performance of R2N2 is best for each
case, respectively.
Finally, we study the performance of the R2N2 over multiple iterations. Here,
the R2N2 is compared with the restarted version of GMRES, GMRES(r), where each
iteration uses the output of the previous iteration as an initial value. We
train the R2N2 to minimize the loss after three outer iterations, given by
Equation (19). We set $w_{k}=3^{k}$, which was found to yield decent results,
see Figure 7. The R2N2 reduces the residual over all consecutive outer
iterations, i.e., even beyond the third outer iteration, showcasing a
capability for generalization.
Figure 7: Convergence of the R2N2 vs GMRES(r) for solving (12) for multiple
samples of $\mathbf{b}_{i}$. Both solvers use $4$ inner iterations. The R2N2
was trained on $T=3$ outer iterations, using Equation (19) with $w_{k}=3^{k}$
as a loss, and then applied over $5$ iterations. The shaded area denotes
extrapolation.
### 4.2 Solving nonlinear equation systems
As an illustrative nonlinear equation system, we consider Chandrasekhar’s
$H$-function in conservative form, an example that was extensively used in
Kelley’s book on Newton-Krylov solvers (Kelley,, 2003). The discretized form
of this equation reads
$\left(\bm{f}(\mathbf{x})\right)_{j}=\mathbf{x}_{j}-\left(1-(\mathbf{A}_{c}\mathbf{x})_{j}\right)^{-1},$
(21)
for $j=1,\ldots,m$, where $\mathbf{A}_{c}\in\mathbb{R}^{m\times m}$ is
a parametric matrix that depends on a parameter $c$. We choose $m=10$
discretization points and $c\in\left[0.89,0.92\right]$ to generate a set of
training problems. We sample initial values $\mathbf{x}_{0,i}$ from a normal
distribution with mean $\mathbbm{1}^{(m)}$, where $\mathbbm{1}^{(m)}$ is an
$m$-dimensional vector of ones, and variance $0.2\cdot\mathbbm{1}^{(m)}$. We
again use (19) as a loss function. Further, we set all targets
$\bm{f}\left(\mathbf{x}_{k+1,i}^{t}\right)=0$ for samples $i$ and iterations
$k$. We explicitly encode forward-differencing, Equation (11), setting
$\epsilon=10^{-8}$. This choice emulates directional derivatives as computed
by Kelley’s Newton-Krylov GMRES (NK-GMRES, Kelley, (2003)), the SciPy
implementation of which we select as a benchmark (Virtanen et al.,, 2020).
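As a sketch, the discretized $H$-equation and the NK-GMRES benchmark can be set up as follows; the midpoint-rule construction of $\mathbf{A}_{c}$ follows Kelley (2003) and is our assumption about the exact discretization used here:

```python
import numpy as np
from scipy.optimize import newton_krylov

def make_h_residual(m=10, c=0.9):
    # Midpoint nodes and the parametric matrix A_c of Eq. (21); the entries
    # (A_c)_{jl} = c * mu_j / (2 m (mu_j + mu_l)) follow Kelley (2003) and
    # are our assumption about the exact discretization.
    mu = (np.arange(m) + 0.5) / m
    A_c = c * mu[:, None] / (2.0 * m * (mu[:, None] + mu[None, :]))

    def f(x):
        return x - 1.0 / (1.0 - A_c @ x)      # Eq. (21)
    return f

f = make_h_residual()
rng = np.random.default_rng(0)
x0 = np.ones(10) + 0.2 * rng.standard_normal(10)   # initial guess near ones
x_star = newton_krylov(f, x0, method="gmres", inner_maxiter=3)
print(np.linalg.norm(f(x_star)))                   # ~0 at convergence
```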
We evaluate the performance after $k$ nonlinear iterations by the reduction in
the norm of the residual:
$\Delta\mathbf{r}_{k}=\left\|\bm{f}(\mathbf{x}_{0})\right\|-\left\|\bm{f}(\mathbf{x}_{k})\right\|.$
(22)
Figure 8 showcases the performance of the R2N2 after training for $1$ and $2$
nonlinear iterations, respectively, and demonstrates that the R2N2 is able to
achieve more progress within these iterations than NK-GMRES. Note that the
R2N2 accomplishes this even though it does not explicitly perform the
minimization in the Krylov subspace to compute the step.
(a) After $k=1$ nonlinear iterations.
(b) After $k=2$ nonlinear iterations.
Figure 8: Relative performance of R2N2 vs NK-GMRES using $n=3$ inner
iterations and one (8(a)) or two (8(b)) outer iterations on Problem (21) with
various coefficients $c$ and various initial guesses $\mathbf{x}_{0,i}$. Note
the differences in scales between the two subfigures. The residuals are
computed using Equation (22) with subscripts ‘NN’ and ‘NKG’ for the R2N2 and
the NK-GMRES solver, respectively. $\mathbf{x}_{0}^{\star}$ refers to the
sample for which the relative performance of R2N2 is best in the respective
experiment.
Figure 8 further demonstrates that the advantage in performance can be
maintained over a range of values for coefficient $c$ in $\mathbf{A}_{c}$ of
Problem (21). We therefore conjecture that the R2N2 has learned to construct a
subspace that contains more of the true solution for a specific problem class
than the subspace formed by NK-GMRES. This is possible because the subspace
$\mathcal{K}_{n}\left(\mathbf{J}(\mathbf{x}_{k}),-\bm{f}(\mathbf{x}_{k})\right)$,
Equation (17), used in the $k$-th iteration depends not only on
$\left(\bm{f},\mathbf{x}_{k}\right)$ but also on the trainable parameters
$\bm{\uptheta}_{j}$ in the $\bm{N}_{j}$ modules of the R2N2, cf. Equation
(7b). When we train for $2$ (or more) nonlinear iterations we find that the
advantage of the R2N2 over NK-GMRES decays, see Figure 8(b). Similar to the
linear case shown in Figure 7, by processing the second iteration during
training, the R2N2 effectively has to adapt to a joint input distribution of
$\mathbf{x}_{0}$ together with $\hat{\mathbf{x}}_{1}$, such that the
performance for either step is diminished. However, the R2N2 obtained for
Problem (21) does converge to an approximate solution when applied iteratively
outside of its training range, see Figure 9. We applied the R2N2 from Figure
8(b) that was trained on $2$ nonlinear iterations, for a total of $8$
nonlinear iterations, and we observe that each iteration reduces the residual.
However, only the first two iterations are able to maintain an advantage over
NK-GMRES. We note that the promising finding in Figure 9 does not guarantee
that the R2N2 would always converge to a solution of a nonlinear equation
system.
Figure 9: Convergence of the R2N2 vs NK-GMRES for $8$ nonlinear iterations of
solving (21) for multiple samples of $\mathbf{x}_{0,i}$. Both solvers use $3$
inner iterations. NK-GMRES is stopped after reaching error tolerance at the
fifth outer iteration. The R2N2 used herein coincides with the one shown in
Figure 8(b), i.e., it was trained only on the first $2$ nonlinear iterations.
The shaded area denotes extrapolation.
### 4.3 Solving initial-value problems
Finally, we study the solution of initial-value problems with the known right-
hand side $\bm{f}$, i.e., Problem (3). We covered learning integrators
thoroughly in our previous work (Guo et al.,, 2022) where we employed a Taylor
series-based regularization to promote certain orders of convergence as
property of the RK-NN. We now demonstrate that RK-NN integrators that
outperform classical RK integrators can also be learned without special
regularizers and, as a further extension of our previous work, that the RK-NN
integrators work over multiple timesteps. All RK-NNs in this section are
trained from the R2N2 superstructure and we thus name them R2N2 in the
remainder of the section.
We reiterate the van der Pol oscillator from our previous work, i.e.,
$\displaystyle\dot{x}_{(1)}(t)$ $\displaystyle=x_{(2)}(t),$ (23a)
$\displaystyle\dot{x}_{(2)}(t)$
$\displaystyle=a\left(1-x_{(1)}^{2}(t)\right)x_{(2)}(t)-x_{(1)}(t),$ (23b)
with $\mathbf{x}=\left(x_{(1)},x_{(2)}\right)$. For data generation, we sample
coefficients $a\sim\mathcal{U}(1.35,1.65)$, initial values
$x_{(1)}(t=t_{0})\sim\mathcal{U}(-4,-3)$ and
$x_{(2)}(t=t_{0})\sim\mathcal{U}(0,2)$ and timesteps
$h\in\left[0.01,0.1\right]$ equidistantly. Note that $\mathbf{x}(t=t_{0})$ and
$h$ are contained in the problem parameters $\mathbf{p}$. For Problem (3), we
use them directly as the inputs $\mathbf{x}_{k}$ and $h$ of the R2N2, cf.
Figure 2. For target data (and as a ground truth), we use solution
trajectories computed by SciPy’s odeint (Virtanen et al.,, 2020), which uses a
stepsize-controlled RK-45 (Dormand and Prince,, 1980) with error tolerance set
to $10^{-8}$. Training loss is calculated by the following specification of
Equation (18a):
$MSE_{x}=\sum_{k=1}^{T}\sum_{i=1}^{N}\frac{\left\|\hat{\mathbf{x}}_{i,k}-\mathbf{x}_{i,k}^{t}\right\|^{2}}{h_{i}^{p}}.$
The losses are summed over $T$ integration steps of a trajectory with the
timestep $h$, i.e., at times $t_{0}+h,t_{0}+2h,\ldots,t_{0}+Th$. Further, the
denominator of the loss terms allows weighting samples based on the timestep
of a specific sample, $h_{i}$, and, in particular, weighting according to an
expected convergence order $p$. We set $p=n$ for the results in Section 4.3,
where $n$ is the number of layers of the RK-NN. The function evaluations in
the R2N2 directly evaluate the right-hand side of Equation (23). One pass
through the R2N2 therefore resembles one step of an RK method.
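A sketch of the target-data generation for this problem class, with the sampling ranges stated above and odeint as ground truth, might read (function names are ours):

```python
import numpy as np
from scipy.integrate import odeint

rng = np.random.default_rng(0)

def vdp(x, t, a):
    # Right-hand side of Equation (23).
    return [x[1], a * (1.0 - x[0] ** 2) * x[1] - x[0]]

def sample_task(T=1):
    a = rng.uniform(1.35, 1.65)
    x0 = np.array([rng.uniform(-4.0, -3.0), rng.uniform(0.0, 2.0)])
    h = rng.choice(np.linspace(0.01, 0.1, 10))     # equidistant timesteps
    t = h * np.arange(T + 1)
    # Tight-tolerance adaptive reference integration as ground truth.
    traj = odeint(vdp, x0, t, args=(a,), rtol=1e-8, atol=1e-8)
    return a, x0, h, traj[1:]                      # targets at t0+h, ..., t0+T*h
```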
To assess the performance of the learned integrators, we compare the R2N2
instantiated with $n$ layers to classical RK methods of $n$ stages, denoted by
RK-$n$. The training data contains samples with varying coefficients $a_{i}$,
timesteps $h_{i}$ and initial values $\mathbf{x}_{i}(t=0)$. The error for
either integrator is evaluated against a ground truth approximated by odeint
and denoted $\mathbf{x}_{k}^{t}$. Results using $n=3$ on the van der Pol
oscillator, Equation (23), are shown in Figure 10.
(a) The R2N2 error after one step is shown.
(b) The R2N2 error after two steps is shown.
(c) The R2N2 error after three steps is shown.
(d) The R2N2 error after five steps is shown.
Figure 10: Performance of R2N2 vs RK-$3$ integrating the van der Pol
oscillator equations, Equation (23), with varying coefficients $a$, stepsizes
$h_{i}$ and initial conditions $\mathbf{x}_{0,i}$. The figures show the same
R2N2, trained on one timestep, and evaluated after one (10(a)), two (10(b)),
three (10(c)) and five timesteps (10(d)), respectively. The slope of 4 is
added to indicate the nominal local truncation error of RK-3.
In Figure 10(a)–10(d), we show results for training on $T=1$ step of
integration and evaluating on $T=1$, $T=2$, $T=3$, and $T=5$ steps,
respectively, i.e., we test its ability to extrapolate outside of the training
range of seen values for iterates $\mathbf{x}_{k}$. Over the first three
timesteps, the R2N2 integrates the van der Pol oscillator equations more
accurately than RK-$3$. We want to stress here that the classical RK method
uses fixed coefficients for the computation of stage values and for the
computation of the step itself. Therefore, the R2N2 can improve over the
classical RK with regard to both of these computational steps. Evidently, we
are able to learn coefficients $\bm{\uptheta}_{j}$ in the stages $\bm{N}_{j}$
and $\bm{\uptheta}_{n}$ for the output $\bm{N}_{n}$ that lead to a better
approximation of the integral than the RK method does, for problems from the
training distribution and under small extrapolation beyond it.
Overall, we find that the accuracy of the R2N2 gradually decreases over the
number of timesteps, such that the R2N2 is no longer more accurate than RK-$3$
by the $5$-th iteration. The trend observed in Figure 10(a)–10(d),
although less pronounced, is consistent with the behavior of the R2N2 over
multiple iterations of equation solving.
## 5 Conclusion
This work proposes an alternative, augmented perspective for the use of the
RK-NN, a recurrent neural network templated on RK integrators, as a
recursively recurrent superstructure for a wider class of iterative numerical
algorithms. The R2N2 superstructure embeds function evaluations inside its
layers and feeds the output to all successive layers via forward skip
connections and linear combinations. The embedding of function calls into the
architecture disentangles the algorithm to be learned from the function it
acts on. We have shown that the R2N2 superstructure provides an inductive bias
towards iterative algorithms based on recurrent function evaluation, e.g., the
well-established Krylov subspace solvers and RK methods. Our numerical
experiments demonstrate that the R2N2 superstructure can thus mimic steps of
Krylov, Newton-Krylov, and RK algorithms, respectively.
In particular, when learning a single step of linear equation solvers (Section
4.1), the performance of the R2N2 is bounded by that of the benchmark GMRES.
This is to be expected, considering that the subspaces generated by the two
approaches are equal but GMRES further minimizes the residual in this
subspace, whereas the R2N2 learns a linear combination that can coincide with
the minimizer, cf. Equation (15), for at most a single problem instance. In
contrast, a nonlinear equation solver (Section 4.2) can improve upon
iterations performed by NK-GMRES, because in this case the R2N2 can learn to
construct subspaces in which the residual is reduced more strongly than by NK-
GMRES. Applying the R2N2 for multiple outer iterations resembles a restarted
iterative solver for linear problems or a Newton-Krylov method for nonlinear
problems, respectively, and can require the R2N2 to extrapolate outside of its
training range. We have empirically demonstrated its capability of such
generalization. However, we cannot guarantee the convergence of trained
solvers for arbitrary problems. Finally, we have revisited our previous work
about learning RK integrators (Guo et al.,, 2022) by demonstrating successful
integration over multiple timesteps (Section 4.3). Our results suggest that
the advantage of the R2N2 or RK-NN, respectively, over a classical RK method
cannot be sustained over longer time horizons of integration.
In summary, iterative algorithms trained within the R2N2 superstructure can,
when possible, find a subspace in which the residual can be reduced more than
by their classical counterparts, given the same cost in function evaluations.
The R2N2-derived solvers may even require fewer overall operations per
iteration than Krylov subspace methods. Future studies should extend the
comparison to algorithms where the cost of single iterations may vary and the
overall cost until convergence is evaluated.
## 6 Future research directions
### 6.1 Application to other computational problems
For further application, the compatibility of the R2N2 superstructure with
other problem classes that comply with the general form of Problem (1) and
that are solved with iterative algorithms, e.g., eigenvalue computation, PCA
decomposition (Gemp et al.,, 2021), or computation of Neumann series (Liao et
al.,, 2018), could be assessed. Moreover, besides learning an algorithm as a
neural network architecture that maps from a set of problems to their
solutions, the superstructure proposed in this work can also be deployed in
the inverse setting, i.e., to identify a problem or function under the action
of the known algorithm given input/output data. This was the original
motivation in Rico-Martinez et al., (1995), leading to nonlinear system
identification, which can be analyzed using inverse backward error analysis
(Zhu et al.,, 2020).
### 6.2 Extension of the superstructure
Other, more intricate algorithms can be represented by the proposed
superstructure if its architecture is extended with additional trainable
modules. For instance, general linear methods for integration (see Butcher,
(2016)) can be captured by the superstructure if not just the final output is
subject to recurrence, but the layer outputs $\mathbf{v}_{j}$ are too (cf.
Figure 2). Moreover, a preconditioner (see Section 8 of Saad and Van Der
Vorst, (2000)) can be inserted inside the layers of the superstructure: for
instance, a cheap, approximate model of $\bm{f}$ can be used to effectively
build such a preconditioner (Qiao et al.,, 2006). Finally, non-differentiable
operations that are part of some algorithms, e.g., the checking of an error
tolerance, can be included via smoothed relaxations (Ying et al.,, 2018; Tang
et al.,, 2020).
### 6.3 Neural architecture search
Larger architectures will eventually call for superstructure optimization to
yield parsimonious algorithms, i.e., determining the optimal (sub-)structure
for a given set of problem instances together with the optimized parameters.
This challenge can be formulated as a mixed-integer nonlinear program (MINLP).
Presently, these problems are addressed by heuristic methods, referred to as
neural architecture search (NAS; Elsken et al., 2019b; Hospedales et al.,
2021), which are relatively efficient in finding good architectures. For the
superstructure, three types of NAS methods appear suited: i) structured search
spaces to exploit modularity, e.g., Liu et al., (2017), Zoph et al., (2018),
Negrinho et al., (2019) and Schrodi et al., (2022), ii) adaptively growing
search spaces for refining the architecture, e.g., Cortes et al., (2017),
Elsken et al., (2019a) and Schiessler et al., (2021), and iii) differentiable
architecture search, e.g., Liu et al., (2019) and Li et al., (2021a). Sparsity-
promoting training techniques like pruning typically address dense, fully-
connected layers and are therefore expected to be of little use in the
current architecture.
More traditional MINLP solvers like those used for superstructure optimization
of process systems (Grossmann,, 2002; Burre et al.,, 2022) exhibit certain
advantages over NAS methods. They explicitly deal with integer variables,
allowing sophisticated use of discrete decisions, e.g., for mutually exclusive
architecture choices. Moreover, MINLPs can be solved deterministically to
guarantee finding a global solution, e.g., by a branch-and-bound algorithm
(Belotti et al.,, 2013). Global solution of MINLPs involving the
superstructure is challenging, since it requires neural network training
subproblems to be solved globally. With future advances in computational
hardware and algorithms this may become a viable approach. However,
substantial effort is needed to utilize such MINLP solvers for the training
tasks considered herein.
### 6.4 Implicit layers
An orthogonal approach for increasing the scope of the superstructure and
capitalizing on its modularity is to endow only a subset of its modules or
layers with trainable variables. The remaining modules that are not subject to
meta-optimization can be realized by so-called implicit layers that
provide their functionality, see, e.g., Rajeswaran et al., (2019) and
Lorraine et al., (2020). In future work, we plan to emulate the residual
minimization of Krylov solvers by substituting the output layer $\bm{N}_{n}$
of the superstructure with differentiable convex optimization layers (Amos and
Kolter,, 2017; Agrawal et al.,, 2019). Then, only the optimal subspace
generation, i.e., the parameters of the $\bm{N}_{j}$ modules, is left to
learn, with the subspace generation being optimized with respect to the
minimization subsequently performed in that subspace.
### 6.5 Dynamical systems perspective
A joint perspective on neural networks and dynamical systems has emerged
recently, e.g., E, (2017), Haber and Ruthotto, (2017), and Chang et al.,
(2017). Similarly, a connection between dynamical systems and continuous-time
limits of iterative algorithms has been discussed in literature (Stuart and
Humphries,, 1998; Chu,, 2008; Dietrich et al.,, 2020), especially for convex
optimization (Su et al.,, 2014; Krichene et al.,, 2015; Wibisono et al.,,
2016). Researchers have applied numerical integration schemes to these
continuous forms to recover discretized algorithms (Scieur et al.,, 2017;
Betancourt et al.,, 2018; Zhang et al.,, 2018). Conversely, the underlying
continuous-time dynamics of discrete algorithms encoded by the proposed R2N2
can be identified based on the iterates they produce, e.g., by their
associated Koopman operators (Dietrich et al.,, 2020). These Koopman operators
can then be analyzed to compare various algorithms and, even, to identify
conjugacies between them (Redman et al.,, 2022).
## Declaration of Competing Interest
We have no conflict of interest.
## Acknowledgements
DTD, AM and MD received funding from the Helmholtz Association of German
Research Centres and performed this work as part of the Helmholtz School for
Data Science in Life, Earth and Energy (HDS-LEE). YG and QL are supported by
the National Research Foundation, Singapore, under the NRF fellowship (project
No. NRF-NRFF13-2021-0005). FD received funding from the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) – 468830823. The
work of IGK is partially supported by the US Department of Energy and the US
Air Force Office of Scientific Research.
## Bibliography
* Agrawal et al., (2019) Agrawal, A., Amos, B., Barratt, S. T., Boyd, S. P., Diamond, S., and Kolter, J. Z. (2019). Differentiable convex optimization layers. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E. B., and Garnett, R., editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 9558–9570.
* Amos and Kolter, (2017) Amos, B. and Kolter, J. Z. (2017). Optnet: Differentiable optimization as a layer in neural networks. In Precup, D. and Teh, Y. W., editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 136–145. PMLR.
* Andrychowicz et al., (2016) Andrychowicz, M., Denil, M., Colmenarejo, S. G., Hoffman, M. W., Pfau, D., Schaul, T., and de Freitas, N. (2016). Learning to learn by gradient descent by gradient descent. In Lee, D. D., Sugiyama, M., von Luxburg, U., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3981–3989.
* Balcan, (2020) Balcan, M. (2020). Data-driven algorithm design. In Roughgarden, T., editor, Beyond the Worst-Case Analysis of Algorithms, pages 626–645. Cambridge University Press.
* Balcan et al., (2018) Balcan, M., Dick, T., Sandholm, T., and Vitercik, E. (2018). Learning to branch. In Dy, J. G. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 353–362. PMLR.
* Baxter, (2000) Baxter, J. (2000). A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198.
* Belotti et al., (2013) Belotti, P., Kirches, C., Leyffer, S., Linderoth, J., Luedtke, J., and Mahajan, A. (2013). Mixed-integer nonlinear optimization. Acta Numerica, 22:1–131.
* Betancourt et al., (2018) Betancourt, M., Jordan, M. I., and Wilson, A. C. (2018). On symplectic optimization. arXiv preprint arXiv:1802.03653.
* Burre et al., (2022) Burre, J., Bongartz, D., and Mitsos, A. (2022). Comparison of MINLP formulations for global superstructure optimization. Optimization and Engineering, pages 1–30.
* Butcher, (2016) Butcher, J. C. (2016). Numerical methods for ordinary differential equations. Wiley, Chichester, West Sussex, third edition edition.
* Chang et al., (2017) Chang, B., Meng, L., Haber, E., Tung, F., and Begert, D. (2017). Multi-level residual networks from dynamical systems view. arXiv preprint arXiv:1710.10348.
* Chen et al., (2018) Chen, T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. (2018). Neural ordinary differential equations. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 6572–6583.
* Chevalier et al., (2021) Chevalier, S., Stiasny, J., and Chatzivasileiadis, S. (2021). Contracting neural-newton solver. arXiv preprint arXiv:2106.02543.
* Chu, (2008) Chu, M. T. (2008). Linear algebra algorithms as dynamical systems. Acta Numerica, 17:1–86.
* Cortes et al., (2017) Cortes, C., Gonzalvo, X., Kuznetsov, V., Mohri, M., and Yang, S. (2017). Adanet: Adaptive structural learning of artificial neural networks. In International Conference on Machine Learning, pages 874–883. PMLR.
* Denevi et al., (2018) Denevi, G., Ciliberto, C., Stamos, D., and Pontil, M. (2018). Learning to learn around A common mean. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 10190–10200.
* Dietrich et al., (2020) Dietrich, F., Thiem, T. N., and Kevrekidis, I. G. (2020). On the Koopman operator of algorithms. SIAM Journal on Applied Dynamical Systems, 19(2):860–885.
* Dormand and Prince, (1980) Dormand, J. and Prince, P. (1980). A family of embedded Runge–Kutta formulae. Journal of Computational and Applied Mathematics, 6(1):19–26.
* Dufera, (2021) Dufera, T. T. (2021). Deep neural network for system of ordinary differential equations: Vectorized algorithm and simulation. Machine Learning with Applications, 5:100058.
* E, (2017) E, W. (2017). A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1–11.
* Elsken et al., (2019a) Elsken, T., Metzen, J. H., and Hutter, F. (2019a). Efficient multi-objective neural architecture search via Lamarckian evolution. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
* Elsken et al., (2019b) Elsken, T., Metzen, J. H., and Hutter, F. (2019b). Neural architecture search: A survey. Journal of Machine Learning Research, 20(55):1–21.
* Fawzi et al., (2022) Fawzi, A., Balog, M., Huang, A., Hubert, T., Romera-Paredes, B., Barekatain, M., Novikov, A., R Ruiz, F. J., Schrittwieser, J., Swirszcz, G., et al. (2022). Discovering faster matrix multiplication algorithms with reinforcement learning. Nature, 610(7930):47–53.
* Gemp et al., (2021) Gemp, I. M., McWilliams, B., Vernade, C., and Graepel, T. (2021). Eigengame: PCA as a Nash equilibrium. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
* González-García et al., (1998) González-García, R., Rico-Martínez, R., and Kevrekidis, I. G. (1998). Identification of distributed parameter systems: A neural net based approach. Computers & Chemical Engineering, 22:S965–S968.
* Goyal and Benner, (2021) Goyal, P. and Benner, P. (2021). Learning dynamics from noisy measurements using deep learning with a Runge–Kutta constraint. arXiv preprint arXiv:2109.11446.
* Grossmann, (2002) Grossmann, I. E. (2002). Review of nonlinear mixed-integer and disjunctive programming techniques. Optimization and Engineering, 3(3):227–252.
* Guo et al., (2022) Guo, Y., Dietrich, F., Bertalan, T., Doncevic, D. T., Dahmen, M., Kevrekidis, I. G., and Li, Q. (2022). Personalized algorithm generation: A case study in learning ODE integrators. SIAM Journal on Scientific Computing, 44(4):A1911–A1933.
* Gupta and Roughgarden, (2020) Gupta, R. and Roughgarden, T. (2020). Data-driven algorithm design. Communications of the ACM, 63(6):87–94.
* Haber and Ruthotto, (2017) Haber, E. and Ruthotto, L. (2017). Stable architectures for deep neural networks. Inverse problems, 34(1):014004.
* He et al., (2016) He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society.
* Hoos, (2011) Hoos, H. H. (2011). Automated algorithm configuration and parameter tuning. In Autonomous search, pages 37–71. Springer.
* Hospedales et al., (2021) Hospedales, T., Antoniou, A., Micaelli, P., and Storkey, A. (2021). Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9):5149–5169.
* Hutter et al., (2010) Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2010). Automated configuration of mixed-integer programming solvers. In International Conference on Integration of Artificial Intelligence (AI) and Operations Research (OR) Techniques in Constraint Programming, pages 186–202. Springer.
* Jin et al., (2020) Jin, P., Zhang, Z., Zhu, A., Tang, Y., and Karniadakis, G. E. (2020). Sympnets: Intrinsic structure-preserving symplectic networks for identifying hamiltonian systems. Neural Networks, 132:166–179.
* Kelley, (1995) Kelley, C. T. (1995). Iterative Methods for Linear and Nonlinear Equations. Society for Industrial and Applied Mathematics.
* Kelley, (2003) Kelley, C. T. (2003). Solving nonlinear equations with Newton’s method. SIAM.
* Khalil et al., (2016) Khalil, E. B., Bodic, P. L., Song, L., Nemhauser, G. L., and Dilkina, B. (2016). Learning to branch in mixed integer programming. In Schuurmans, D. and Wellman, M. P., editors, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 724–731. AAAI Press.
* KhudaBukhsh et al., (2016) KhudaBukhsh, A. R., Xu, L., Hoos, H. H., and Leyton-Brown, K. (2016). SATenstein: Automatically building local search SAT solvers from components. Artificial Intelligence, 232:20–42.
* Kingma and Ba, (2015) Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y., editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
* Knoll and Keyes, (2004) Knoll, D. A. and Keyes, D. E. (2004). Jacobian-free Newton–Krylov methods: a survey of approaches and applications. Journal of Computational Physics, 193(2):357–397.
* Krichene et al., (2015) Krichene, W., Bayen, A. M., and Bartlett, P. L. (2015). Accelerated mirror descent in continuous and discrete time. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2845–2853.
* Larsson et al., (2017) Larsson, G., Maire, M., and Shakhnarovich, G. (2017). Fractalnet: Ultra-deep neural networks without residuals. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
* Li et al., (2021a) Li, L., Khodak, M., Balcan, N., and Talwalkar, A. (2021a). Geometry-aware gradient algorithms for neural architecture search. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
* Li et al., (2021b) Li, Z., Kovachki, N. B., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A. M., and Anandkumar, A. (2021b). Fourier neural operator for parametric partial differential equations. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
* Liao et al., (2018) Liao, R., Xiong, Y., Fetaya, E., Zhang, L., Yoon, K., Pitkow, X., Urtasun, R., and Zemel, R. S. (2018). Reviving and improving recurrent back-propagation. In Dy, J. G. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 3088–3097. PMLR.
* Liu and Nocedal, (1989) Liu, D. C. and Nocedal, J. (1989). On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503–528.
* Liu et al., (2017) Liu, H., Simonyan, K., Vinyals, O., Fernando, C., and Kavukcuoglu, K. (2017). Hierarchical representations for efficient architecture search. arXiv preprint arXiv:1711.00436.
* Liu et al., (2019) Liu, H., Simonyan, K., and Yang, Y. (2019). DARTS: differentiable architecture search. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
* Lorraine et al., (2020) Lorraine, J., Vicol, P., and Duvenaud, D. (2020). Optimizing millions of hyperparameters by implicit differentiation. In Chiappa, S. and Calandra, R., editors, The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy], volume 108 of Proceedings of Machine Learning Research, pages 1540–1552. PMLR.
* Lovelett et al., (2020) Lovelett, R. J., Avalos, J. L., and Kevrekidis, I. G. (2020). Partial observations and conservation laws: Gray-box modeling in biotechnology and optogenetics. Industrial & Engineering Chemistry Research, 59(6):2611–2620.
* Lu et al., (2019) Lu, L., Jin, P., and Karniadakis, G. E. (2019). DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193.
* Lu et al., (2018) Lu, Y., Zhong, A., Li, Q., and Dong, B. (2018). Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. In Dy, J. G. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 3282–3291. PMLR.
* Mencarelli et al., (2020) Mencarelli, L., Chen, Q., Pagot, A., and Grossmann, I. E. (2020). A review on superstructure optimization approaches in process system engineering. Computers & Chemical Engineering, 136:106808.
* Metz et al., (2020) Metz, L., Maheswaranathan, N., Freeman, C. D., Poole, B., and Sohl-Dickstein, J. (2020). Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves. arXiv preprint arXiv:2009.11243.
* Mishra, (2018) Mishra, S. (2018). A machine learning framework for data driven acceleration of computations of differential equations. arXiv preprint arXiv:1807.09519.
* Mitsos et al., (2018) Mitsos, A., Najman, J., and Kevrekidis, I. G. (2018). Optimal deterministic algorithm generation. Journal of Global Optimization, 71(4):891–913.
* Nascimento et al., (2020) Nascimento, R. G., Fricke, K., and Viana, F. A. (2020). A tutorial on solving ordinary differential equations using python and hybrid physics-informed neural network. Engineering Applications of Artificial Intelligence, 96:103996.
* Negrinho et al., (2019) Negrinho, R., Gormley, M. R., Gordon, G. J., Patil, D., Le, N., and Ferreira, D. (2019). Towards modular and programmable architecture search. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E. B., and Garnett, R., editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13715–13725.
* Paszke et al., (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E. Z., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. (2019). PyTorch: An imperative style, high-performance deep learning library. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E. B., and Garnett, R., editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024–8035.
* Qiao et al., (2006) Qiao, L., Erban, R., Kelley, C. T., and Kevrekidis, I. G. (2006). Spatially distributed stochastic systems: Equation-free and equation-assisted preconditioned computations. The Journal of Chemical Physics, 125(20):204108.
* Rajeswaran et al., (2019) Rajeswaran, A., Finn, C., Kakade, S. M., and Levine, S. (2019). Meta-learning with implicit gradients. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E. B., and Garnett, R., editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 113–124.
* Redman et al., (2022) Redman, W. T., Fonoberova, M., Mohr, R., Kevrekidis, I. G., and Mezić, I. (2022). Algorithmic (semi-) conjugacy via Koopman operator theory. arXiv preprint arXiv:2209.06374.
* Rice, (1976) Rice, J. R. (1976). The algorithm selection problem. In Advances in Computers, volume 15, pages 65–118. Elsevier.
* Rico-Martinez et al., (1994) Rico-Martinez, R., Anderson, J. S., and Kevrekidis, I. G. (1994). Continuous-time nonlinear signal processing: A neural network based approach for gray box identification. In Proceedings of IEEE Workshop on Neural Networks for Signal Processing, pages 596–605.
* Rico-Martinez et al., (1995) Rico-Martinez, R., Kevrekidis, I. G., and Krischer, K. (1995). Nonlinear system identification using neural networks: Dynamics and instabilities. In Neural Networks for Chemical Engineers, pages 409–442. Elsevier Amsterdam, The Netherlands.
* Rico-Martinez et al., (1992) Rico-Martinez, R., Krischer, K., Kevrekidis, I. G., Kube, M., and Hudson, J. (1992). Discrete- vs. continuous-time nonlinear signal processing of Cu electrodissolution data. Chemical Engineering Communications, 118(1):25–48.
* Saad, (2003) Saad, Y. (2003). Iterative methods for sparse linear systems. SIAM.
* Saad and Schultz, (1986) Saad, Y. and Schultz, M. H. (1986). GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM Journal on Scientific and Statistical Computing, 7(3):856–869.
* Saad and Van Der Vorst, (2000) Saad, Y. and Van Der Vorst, H. A. (2000). Iterative solution of linear systems in the 20th century. Journal of Computational and Applied Mathematics, 123(1-2):1–33.
* Schiessler et al., (2021) Schiessler, E. J., Aydin, R. C., Linka, K., and Cyron, C. J. (2021). Neural network surgery: Combining training with topology optimization. Neural Networks, 144:384–393.
* Schrodi et al., (2022) Schrodi, S., Stoll, D., Ru, B., Sukthanker, R., Brox, T., and Hutter, F. (2022). Towards discovering neural architectures from scratch. arXiv preprint arXiv:2211.01842.
* Schwarzschild et al., (2021) Schwarzschild, A., Borgnia, E., Gupta, A., Huang, F., Vishkin, U., Goldblum, M., and Goldstein, T. (2021). Can you learn an algorithm? generalizing from easy to hard problems with recurrent networks. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W., editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 6695–6706.
* Scieur et al., (2017) Scieur, D., Roulet, V., Bach, F. R., and d’Aspremont, A. (2017). Integration methods and optimization algorithms. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R., editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 1109–1118.
* Silver et al., (2018) Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419):1140–1144.
* Stuart and Humphries, (1998) Stuart, A. and Humphries, A. R. (1998). Dynamical systems and numerical analysis, volume 2. Cambridge University Press.
* Su et al., (2014) Su, W., Boyd, S. P., and Candès, E. J. (2014). A differential equation for modeling Nesterov’s accelerated gradient method: Theory and insights. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. D., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2510–2518.
* Tang et al., (2020) Tang, H., Huang, Z., Gu, J., Lu, B., and Su, H. (2020). Towards scale-invariant graph-related problem solving by iterative homogeneous GNNs. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H., editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
* Tawarmalani and Sahinidis, (2005) Tawarmalani, M. and Sahinidis, N. V. (2005). A polyhedral branch-and-cut approach to global optimization. Mathematical Programming, 103(2):225–249.
* Tsitouras, (2002) Tsitouras, C. (2002). Neural networks with multidimensional transfer functions. IEEE Transactions on Neural Networks, 13(1):222–228.
* Venkataraman and Amos, (2021) Venkataraman, S. and Amos, B. (2021). Neural fixed-point acceleration for convex optimization. arXiv preprint arXiv:2107.10254.
* Virtanen et al., (2020) Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., Carey, C. J., Polat, İ., Feng, Y., Moore, E. W., VanderPlas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E. A., Harris, C. R., Archibald, A. M., Ribeiro, A. H., Pedregosa, F., van Mulbregt, P., and SciPy 1.0 Contributors (2020). SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261–272.
* Wibisono et al., (2016) Wibisono, A., Wilson, A. C., and Jordan, M. I. (2016). A variational perspective on accelerated methods in optimization. Proceedings of the National Academy of Sciences, 113(47):E7351–E7358.
* Wolpert and Macready, (1997) Wolpert, D. H. and Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82.
* Yeomans and Grossmann, (1999) Yeomans, H. and Grossmann, I. E. (1999). A systematic modeling framework of superstructure optimization in process synthesis. Computers & Chemical Engineering, 23(6):709–731.
* Ying et al., (2018) Ying, Z., You, J., Morris, C., Ren, X., Hamilton, W. L., and Leskovec, J. (2018). Hierarchical graph representation learning with differentiable pooling. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 4805–4815.
* Zhang et al., (2018) Zhang, J., Mokhtari, A., Sra, S., and Jadbabaie, A. (2018). Direct Runge–Kutta discretization achieves acceleration. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 3904–3913.
* Zhao and Mau, (2020) Zhao, J. and Mau, J. (2020). Discovery of governing equations with recursive deep neural networks. arXiv preprint arXiv:2009.11500.
* Zhu et al., (2020) Zhu, A., Jin, P., Zhu, B., and Tang, Y. (2020). Inverse modified differential equations for discovery of dynamics. arXiv preprint arXiv:2009.01058.
* Zoph et al., (2018) Zoph, B., Vasudevan, V., Shlens, J., and Le, Q. V. (2018). Learning transferable architectures for scalable image recognition. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 8697–8710. Computer Vision Foundation / IEEE Computer Society.
|
Jet modifications from colour rope formation in dense systems of non-parallel
strings
Christian Bierlich, Smita Chakraborty$\star$, Gösta Gustafson, and Leif
Lönnblad
Department of Astronomy and Theoretical Physics, Sölvegatan 14A, S-223 62
Lund, Sweden
⋆<EMAIL_ADDRESS>
$\dagger$ MCNET-22-02, LU-TP 22-09
## Abstract
We revisit our rope model for string fragmentation, which has been shown to give
a reasonable description of strangeness and baryon enhancement in high-
multiplicity $\mathrm{\bf pp}$ events at the LHC. A key feature of the model
is that the enhancement is driven by the increased string tension due to
strings overlapping in dense systems. By introducing an improved space–time
picture for the overlap between fragmenting strings, in which also non-parallel
strings are properly taken into account, we are now able to investigate the
enhancement both in jets and in the underlying event in a consistent way.
###### Contents
1. 1 Introduction
2. 2 String hadronization and colour fluxtubes
1. 2.1 Lund string hadronization
2. 2.2 Strings as colour fluxtubes
3. 3 Rope hadronization with non-parallel strings
1. 3.1 The parallel frame formalism
2. 3.2 Overlap in the parallel frame
3. 3.3 Monte Carlo implementation
4. 3.4 Interplay with the Shoving model
4. 4 Results in $\mathrm{pp}$ collisions
1. 4.1 Model behaviour
2. 4.2 Underlying event observables in reconstructed $\mathrm{Z}$ events
3. 4.3 Strangeness yields in Z$+$jet events
5. 5 Conclusion
6. A Dependence of fragmentation parameters on $\kappa_{\mathrm{eff}}$
## 1 Introduction
One of the most characteristic features of Quark–Gluon Plasma (QGP) formation
in heavy ion (AA) collisions is so-called "jet quenching" [1]. In heavy ion
collisions, jet quenching is mainly seen in energy loss or dispersion effects,
manifest as, for example, the suppression of high $p_{\perp}$ particle yields
with respect to the scaled proton–proton ($\mathrm{pp}$) case [2], or the
suppression of away-side jets in central collisions [3]. With the higher
energies available at the LHC, the phenomenon has also been explored using
$\mathrm{Z}$ bosons plus jets, where the $\mathrm{Z}$ decaying to leptons is
used as an unaffected probe to gauge the effect on the jet traversing the QGP
[4].
Several experimental signatures for QGP production have, however, also been
observed in high-multiplicity pp collisions, including strangeness enhancement
[5] and long-range multi-particle correlations [6], more commonly known as
"collective flow". Jet quenching effects have so far not been observed in
small systems (pp or proton–ion, $\mathrm{p}A$), which raises the question of
whether jet modification phenomena are completely absent in small systems, or
whether the correct way to look for them has simply not been established [7].
One obvious reason for the difficulty is that, unlike in $AA$ collisions, there
is no smaller reference system against which differences can be measured.
Comparisons with theoretical expectations are also difficult, as the expected
effects from quenching in small systems are very small, and the signal is
strongly affected by (uncertain) effects from initial state radiation. It
should also be mentioned that most theoretical descriptions of jet quenching
assume that the jet is formed in a deconfined "bath" of free partons (_i.e._
the QGP), which may not be appropriate in small systems, where at most a few
droplets would form. This includes approaches such as QGP-modified splitting
kernels [8] at high virtualities, coupled with shower modifications by
transport theory [9, 10] at lower ones, but also approaches like the one
offered by JEWEL [11], where partonic rescattering off medium partons is
combined with the Landau–Pomeranchuk–Migdal effect [12].
It should be noted that there are other mechanisms that may influence jet
production. In particular, some generators include "colour reconnections",
especially in combination with multi-parton interactions (see, _e.g._, [13,
14]), but it has been shown (_e.g._, in [15] and [16]) that colour
reconnections may influence jet shapes even in $e^{+}e^{-}$ annihilations. In
addition, it has been shown that the baryonic colour reconnection model in
HERWIG [17] can give rise to strangeness enhancement in dense environments in
$\mathrm{pp}$ collisions.
In a series of papers we have demonstrated that collective flow and
enhancement of strangeness and baryons can be reproduced in high-multiplicity
pp events as a result of string–string interactions, when the infinitely thin
string is generalized to a confining colour fluxtube, similar to a vortex line
in a superconductor [18]. As discussed in ref. [19], models of string
interactions offer a novel and convenient framework for studying jet
modifications in small systems, as they are implemented in the general purpose
Monte Carlo event generator PYTHIA, which allows the user to generate
realistic collision events with the effects switched "on" or "off". The study
of jet modification effects therefore does not need to rely on a
(non-existent) reference system.
The aim of this paper is therefore to look at possible effects of jet
modification via increased strangeness and baryon yields in jets. A very
important tool here is the method developed in ref. [20] to account for the
interaction between strings which are not parallel to each other. This was not
possible in earlier versions of string–string interaction, but is naturally
very important for handling the interaction between string pieces connected to
a jet and strings in the underlying event.
The remainder of this paper is organised as follows. In section 2, we recap
the Lund string hadronization framework, taking into account the transverse
extension of strings, and discuss how the string tension increases when such
strings overlap, leading to strangeness and baryon enhancement. We then
present the parallel frame and our updated rope model in section 3. In section
4 we investigate how the average string tension varies as a function of
multiplicity and transverse momentum, and then investigate the observable
modifications the updated rope model predicts for jets and the underlying
event in $\mathrm{pp}$ collisions at the LHC, before we present our
conclusions in section 5.
## 2 String hadronization and colour fluxtubes
In this section we will briefly introduce the relevant parts of the Lund
string hadronization model, building up to the rope hadronization model used
for the results in this paper. For more detailed reviews of Lund strings, we
refer the reader to the large body of existing literature. The original papers
deal mainly with the hadronization of a single straight string [21, 22].
Gluons were introduced as "kinks" on a string in refs. [23, 24]. Somewhat
dated reviews are presented in refs. [25, 26], and a number of recent papers
on Lund strings present the model in a more modern context [27, 28, 20, 29],
including our original paper on rope hadronization [30].
The Lund string is a "massless relativistic string" (or "Nambu–Goto string").
Such a string has no transverse extension, and it also has no longitudinal
momentum, which implies that it is boost invariant (for the kinematics of such
a string, see ref. [31]). This may be a good approximation for a linear colour
fluxtube, where the width is not important. In section 2.2 we will discuss
going beyond this approximation.
### 2.1 Lund string hadronization
##### Hadronization of a straight string
We first look at a single straight string stretched between a quark and an
anti-quark. The string can break via $q\bar{q}$ pair creation, in a process
which can be regarded as quantum tunnelling, as discussed in ref. [32]. For a
single quark species the production probability is given by
$\frac{\mathrm{d}\mathcal{P}}{\mathrm{d}^{2}p_{\perp}}\propto\kappa\exp\left(-\frac{\pi\mu^{2}_{\perp}}{\kappa}\right).$
(1)
Here $\mu^{2}_{\perp}=\mu^{2}+p^{2}_{\perp}$ is the squared transverse mass of
the quark. The exponential conveniently factorizes, leaving separate
expressions for the selection of mass and $p_{\perp}$ to be used in the Monte
Carlo event generator. With $\kappa\approx 1$ GeV/fm, this result implies that
strange quarks are suppressed by roughly a factor 0.3 relative to a u- or
d-quark (and that the probability to produce a c-quark with this mechanism is
$\sim 10^{-11}$). It also means that the quarks are produced with an average
$p_{\perp}\sim 250$ MeV, independent of flavour.
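As a quick numerical illustration of eq. (1), the following minimal Python sketch reproduces these numbers; the tunnelling masses used here are our own illustrative assumptions, not the parameters of any particular PYTHIA tune.

```python
import math

# Numerical check of eq. (1); the quark masses below are illustrative
# assumptions, not the parameters of a specific PYTHIA tune.
HBARC = 0.1973            # GeV fm
kappa = 1.0 * HBARC       # kappa ~ 1 GeV/fm expressed in GeV^2

def tunnel_weight(mass):
    # relative production weight exp(-pi * m^2 / kappa) at p_perp = 0
    return math.exp(-math.pi * mass**2 / kappa)

m_u, m_s, m_c = 0.0, 0.28, 1.27   # GeV (illustrative)
print("s/u suppression:", tunnel_weight(m_s) / tunnel_weight(m_u))  # ~0.3
print("c/u suppression:", tunnel_weight(m_c) / tunnel_weight(m_u))  # ~1e-11
print("rms quark p_perp:", math.sqrt(kappa / math.pi), "GeV")       # ~0.25 GeV
```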
When the quarks and antiquarks from neighbouring breakups combine into mesons,
their momenta can be calculated in an iterative process. The hadrons are
"peeled off" one at a time, each taking a fraction $z$ of the remaining
light-cone momentum ($p^{\pm}=E\pm p_{z}$) along the positive or negative
light-cone direction, respectively. The probability for a given $z$-value is
given by
$f(z)\propto\frac{(1-z)^{a}}{z}\exp(-bm_{\perp}^{2}/z).$ (2)
Here $m_{\perp}$ is the transverse mass of the meson, and the two parameters
$a$ and $b$ are determined by tuning to data from
$\mathrm{e}^{+}\mathrm{e}^{-}$ collisions. In principle the $a$-parameter
could depend on the quark species, but in default PYTHIA (the Monash tune) it
is the same for strange and non-strange quarks. Baryon–antibaryon pairs can be
produced via the production of a diquark–antidiquark pair, and in this case
the $a$-parameter has to be modified. The parameter $b$ must, however, be
universal.
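For illustration, eq. (2) can be sampled with simple accept-reject; the sketch below uses illustrative values for $a$, $b$ and $m_{\perp}$ that do not correspond to a specific tune.

```python
import math, random

# Accept-reject sampling of the Lund splitting function f(z), eq. (2).
# The values of a, b and m_perp^2 are illustrative, not a specific tune.
A, B, MPERP2 = 0.68, 0.98, 0.25   # b in GeV^-2, m_perp^2 in GeV^2

def f(z):
    return (1.0 - z)**A / z * math.exp(-B * MPERP2 / z)

# crude envelope from a grid scan (adequate for a sketch)
FMAX = max(f(0.001 * i) for i in range(1, 1000))

def sample_z(rng):
    while True:
        z = rng.uniform(1e-6, 1.0 - 1e-6)
        if rng.uniform(0.0, FMAX) <= f(z):
            return z

rng = random.Random(1)
zs = [sample_z(rng) for _ in range(20000)]
print("mean z:", sum(zs) / len(zs))
```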
An important consequence of eq. (2) is the probability distribution in proper
time ($\tau$) for string breakup vertices. Expressed in terms of the quantity
$\Gamma=(\kappa\tau)^{2}$, the distribution is given by:
$\mathcal{P}(\Gamma)\mathrm{d}\Gamma\propto\Gamma^{a}\exp(-b\Gamma)\mathrm{d}\Gamma.$
(3)
The breakup time is an important ingredient for string interactions, as the
hadronization time sets an upper limit on the time available for strings to
push each other and form ropes. As such, hadronization of a system of
interacting strings will in general not wait until the system has reached
equilibrium, but will be cut off when the strings hadronize. For strings
hadronizing early, one can then imagine a mixed phase of strings and hadrons,
before the transition to a pure hadron cascade. In this paper we consider only
the effect of string interactions, and leave the interplay with the hadronic
cascade for a future paper. We note, however, that a full hadronic cascade has
recently been implemented in PYTHIA [33, 34], revealing only minor effects in
proton collisions. Typical values for $a$, $b$ and $\kappa$ give an average
breakup time of around 1.5 fm. This cannot be identified as the hadronization
time (or freeze-out time); it could equally well be interpreted as the time
when the quark and the antiquark meet for the first time. In addition, the
breakup times fluctuate, and each string will hadronize at a different time.
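Since $\Gamma=(\kappa\tau)^{2}$ in eq. (3) follows a Gamma distribution with shape $a+1$ and rate $b$, the quoted average breakup time is easy to check numerically; the parameter values below are again illustrative rather than a specific tune.

```python
import math, random

# Monte Carlo check of the average breakup proper time implied by eq. (3):
# Gamma = (kappa*tau)^2 is Gamma-distributed with shape a+1 and rate b.
# The a, b values are illustrative Lund parameters, not a specific tune.
A, B = 0.68, 0.98            # b in GeV^-2
KAPPA = 1.0                  # GeV/fm

rng = random.Random(2)
n = 100000
mean_tau = sum(
    math.sqrt(rng.gammavariate(A + 1.0, 1.0 / B)) / KAPPA  # sqrt(Gamma) in GeV,
    for _ in range(n)                                      # so tau comes out in fm
) / n
print("mean breakup proper time: %.2f fm" % mean_tau)      # ~1.2 fm here,
# the same order as the ~1.5 fm quoted above for typical tuned parameters
```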
##### Gluons and non-straight strings
An essential component in the Lund hadronization model is that a gluon is
treated as a point-like “kink” on the string, carrying energy and momentum. A
gluon carries both colour and anti-colour, and the string can be stretched
from a quark, via a set of colour-ordered gluons, to an anti-quark (or
alternatively in a closed loop of colour-ordered gluons).
When a gluon has lost its energy, the momentum-carrying kink is split into two
corners, moving with the speed of light but carrying no momentum, stretching a
new straight string piece between them. When two such corners meet, they can
"bounce off"; the string connecting them then disappears, but a new one is
"born". In a pp collision a typical string will contain several gluons,
connected by string pieces which are stretched out, may disappear, and are
then replaced by new string pieces. All these string pieces move transversely
in different directions, but at any time the string consists of a set of
straight pieces. For a description of how such a string hadronizes, we refer
to refs. [35, 20]. The interaction between strings with several non-parallel
pieces is discussed in section 3.
### 2.2 Strings as colour fluxtubes
The description of a confining colour field by an infinitely thin string is
necessarily an approximation, relevant only when the result is insensitive to
the width. In high multiplicity events this is no longer the case, and the
strings have to be treated as colour fluxtubes, with a non-zero width. We here
first discuss the properties of a single fluxtube, and then the interaction
between two or more parallel fluxtubes. The generalization to non-parallel
fluxtubes is presented in section 3.
#### 2.2.1 A single fluxtube
The simplest model for a QCD fluxtube is the MIT bag model [36]. Here a
homogeneous longitudinal colour-electric field is kept inside a tube by the
pressure from the vacuum condensate. An improved description is obtained in
lattice calculations. A common tool here is the method of Abelian projections,
proposed by 't Hooft [37], which is based on partial gauge fixing. The results
of these calculations show that the field is dominated by a longitudinal
colour-electric field, surrounded by a transverse colour-magnetic current in
the confining vacuum condensate [38, 39]. This picture is very similar to the
confinement of the magnetic field in a vortex line in a superconductor (with
electric and magnetic fields interchanged, see e.g. ref. [40]).
As observed in ref. [20], the measured shape of the colour electric field
obtained in ref. [41] is well approximated by a Gaussian distribution:
$E(\rho)=E_{0}\exp\left(-\rho^{2}/2R^{2}\right),$ (4)
where $\rho$ is the transverse distance in cylinder coordinates. The width of
a fluxtube is difficult to estimate in lattice calculations, as it is
naturally given in lattice units, see e.g. ref. [42]. In the bag model, the
width is roughly given by $\sqrt{\langle\rho^{2}\rangle}\approx 1$ fm, while
lattice calculations give radii of $\sim 0.5$ fm or even less [43, 44, 38].
The field density in eq. (4) is related to the string tension through
$\int d^{2}\rho E^{2}(\rho)/2=\pi E_{0}^{2}R^{2}=g\kappa,$ (5)
where $g$ is the fraction of the total energy of the string associated with
the colour electric field. We expect $g$ to be of the order 1/2, which is the
value obtained in the bag model, where the energy in the field and the
expelled condensate are of equal size. For a further discussion of the vacuum
condensate and colour fluxtubes we refer to ref. [45] and references therein.
#### 2.2.2 Interacting parallel fluxtubes
High-multiplicity collisions will give a high density of fluxtubes, with a
correspondingly high energy density. In ref. [20] we discussed the collective
effects expected from the initial expansion; in this paper we will concentrate
on the effects of rope hadronization, and in particular study the production
of strange hadrons. We first restate our treatment of the interaction between
parallel fluxtubes, presented in ref. [30]. How this can be generalized to the
situation with non-parallel fluxtubes will be discussed in section 3 below.
##### Rope formation
For two overlapping parallel fluxtubes, separated by a transverse distance
$\delta$, we get from eq. (4) the interaction energy of the field
$\int d^{2}\rho(\mathbf{E}_{1}(\rho)+\mathbf{E}_{2}(\rho))^{2}/2-2\int
d^{2}\rho E^{2}(\rho)/2=\int
d^{2}\rho\mathbf{E}_{1}(\rho)\cdot\mathbf{E}_{2}(\rho)=2\pi
E_{0}^{2}R^{2}e^{-\delta^{2}/4R^{2}}.$ (6)
Such a system will expand transversely, and if it does not hadronize before,
it will reach equilibrium, where the energy density corresponds to the free
energy density in the vacuum condensate.
The expression in eq. (6) does not include the surface energy of the combined
fluxtube. In the bag model this is zero; in equilibrium the transverse area
will be doubled, and the interaction energy will be zero. For a vortex line in
a dual QCD superconductor, it depends on the properties of the superconductor,
but also here the interaction energy will be much reduced at the time of
hadronization. It will then be necessary to go beyond the Abelian
approximation. For two fluxtubes stretched by quarks, the two quarks can
either form a colour sextet or an anti-triplet, and with more fluxtubes higher
multiplets are also possible. Here lattice calculations show that a set of
overlapping strings forms a "rope", with a tension proportional to the second
Casimir operator of the colour multiplet at the end of the rope [46].
Biro, Nielsen, and Knoll pointed out [47] that if a rope is formed by a number
of strings with random colour charges, these add up as a random walk in colour
space.
This implies that the net colour grows as the square root of the number of
strings. A rope stretched by $m$ colour charges and $n$ anti-charges can then
form a colour multiplet characterised by two numbers $p$ and $q$, such that an
arbitrary state, by a rotation in colour space, can be transformed into a
state with $p$ coherent colours (_e.g._ red) and $q$ coherent anti-colours
(_e.g._ anti-blue), such that the colour and the anti-colour do not form a
colour singlet. Such a multiplet is denoted $\\{p,q\\}$, and we always have
$p\leq m$ and $q\leq n$.
For any such multiplet we can write down the number of states, _i.e._ the
multiplicity of the multiplet (the multiplicity provides the standard
nomenclature for multiplets, where $N=1$ is called a "singlet", $N=3$ a
"triplet", $N=6$ a "sextet", etc.; we will here, when necessary, use the
slightly more verbose notation $\\{p,q\\}$, which allows one to distinguish
between _e.g._ a triplet and an anti-triplet):
$N=\frac{1}{2}(p+1)(q+1)(p+q+2).$ (7)
As mentioned above, the total tension of such a rope is proportional to the
second Casimir operator for the multiplet, which gives
$\kappa^{\\{p,q\\}}=\frac{C_{2}({p,q})}{C_{2}({1,0})}\kappa^{\\{1,0\\}}=\frac{1}{4}\left(p^{2}+pq+q^{2}+3p+3q\right)\kappa^{\\{1,0\\}},$
(8)
where $\kappa^{\\{1,0\\}}\equiv\kappa$ is the tension in a single string.
In the PYTHIA treatment used here there are, however, other effects addressing
string coherence. Importantly, part of this colour summation is treated, in an
approximate way, by "colour reconnection". As a simple example we can look at
two anti-parallel strings with triplet–anti-triplet pairs at each end. These
can either form an octet or a singlet, with probabilities 8/9 and 1/9
respectively. Here the octet (denoted $\\{1,1\\}$) gives
$\kappa^{\\{1,1\\}}=\kappa\cdot
C_{2}^{\\{1,1\\}}/C_{2}^{\\{1,0\\}}=9\kappa/4.$ (9)
The singlet ($\\{0,0\\}$), with no string at all, gives
$\kappa^{\\{0,0\\}}=0$.
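The bookkeeping in eqs. (7)-(9) is easily tabulated; the small sketch below verifies, among others, the octet and singlet values quoted above.

```python
# A small check of eqs. (7)-(9): multiplet multiplicities and rope tensions.
def multiplicity(p, q):
    # eq. (7); the product is always even, so integer division is exact
    return (p + 1) * (q + 1) * (p + q + 2) // 2

def kappa_ratio(p, q):
    # C2(p,q)/C2(1,0) from eq. (8), i.e. the rope tension in units of kappa
    return (p * p + p * q + q * q + 3 * p + 3 * q) / 4.0

for p, q in [(1, 0), (1, 1), (2, 0), (0, 0)]:
    print(f"{{{p},{q}}}: N = {multiplicity(p, q)}, kappa ratio = {kappa_ratio(p, q)}")
# {1,0}: N=3 (triplet), ratio 1.0;   {1,1}: N=8 (octet), ratio 2.25 = 9/4
# {2,0}: N=6 (sextet), ratio 2.5;    {0,0}: N=1 (singlet), ratio 0.0
```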
The colour reconnection process in a situation with several strings can be
related to an expansion in powers of $1/N_{c}$, as discussed in refs. [48,
49].
For the special case of $N_{c}=3$ there is also a different kind of
reconnection. For a rope formed by two parallel strings, the two triplets in
one end can give either a sextet or an anti-triplet (and a corresponding anti-
sextet or triplet in the other end) with probabilities 2/3 and 1/3
respectively. For the latter we simply have a single string.
The two original colour triplets are connected in a “junction”, and such a
reconnection can be particularly important for baryon production. This
possibility is not implemented in the present version of our Monte Carlo, but
will be included in future work. We note that for an arbitrary number of
colours, the corresponding situation is only obtained when $N_{c}-1$ colour
charges combine to one anti-colour charge. The junction formation with three
strings does therefore, for $N_{c}\neq 3$, correspond to a configuration where
$N_{c}$ strings are connected, which cannot be directly interpreted as a
$1/N_{c}$ correction.
In the following we will adopt a picture where the process of string (rope)
fragmentation follows after a process of colour reconnections, which leaves
the system in a state with $p$ parallel and $q$ anti-parallel strings forming
a coherent multiplet $\\{p,q\\}$.
##### Rope hadronization
A rope specified by the multiplet $\\{p,q\\}$ can break via a succession of
single $q\bar{q}$ productions, through the tunnelling mechanism in eq. (1). In
each step a multiplet $\\{p,q\\}$ is changed to either $\\{p-1,q\\}$ or
$\\{p,q-1\\}$. It is here important to note that _the tunneling is not
determined by the total tension in the rope, but by the energy released,
determined by the reduction in the tension_ caused by the production of the
new $q\bar{q}$ pair. Hence, we get from eq. (8) an effective string tension,
when the field goes from $\\{p+1,q\\}$ to $\\{p,q\\}$, given by
$\kappa_{\mathrm{eff}}=\kappa^{\\{p+1,q\\}}-\kappa^{\\{p,q\\}}=\frac{2p+q+4}{4}\kappa.$
(10)
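For later reference, eq. (10) directly gives the transition values used in section 3.2; a minimal check:

```python
# Effective string tension released in a {p+1,q} -> {p,q} breakup, eq. (10),
# in units of the single-string tension kappa.
def kappa_eff(p, q):
    return (2 * p + q + 4) / 4.0

print(kappa_eff(1, 0))   # {2,0} -> {1,0}: 1.5  (3*kappa/2)
print(kappa_eff(0, 1))   # {1,1} -> {0,1}: 1.25 (5*kappa/4)
```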
The consequence of this picture is that we can treat rope fragmentation as the
sequential decay of the individual strings forming the rope, much in the same
way as an everyday rope would break thread by thread. Technically it means
that we can use the normal string fragmentation procedure in PYTHIA 8, with
the modification that at each break-up we change the fragmentation parameters
according to the effective string tension calculated from the overlap of
neighbouring strings. The changes to these parameters are explained in detail
in ref. [30], and are for reference also listed in appendix A. The changes are
somewhat convoluted, since most of the parameters only indirectly depend on
the string tension, but the main effect is easily seen in eq. (1), namely that
an increased string tension will increase the probability of strange quarks
and diquarks relative to light quarks in the string breakup.
## 3 Rope hadronization with non-parallel strings
Our previous work on rope formation [30] relied on the assumption that strings
in high energy hadron collisions are approximately parallel to each other and
to the beam axis. This prevented a detailed investigation of possible effects
in hard jets, especially those traversing the dense environment of an $AA$
collision. In our recent work on the shoving model [20] we found a remedy,
where the interaction between any pair of strings can be studied in a special
Lorentz frame, even if they are not parallel to each other or to the beam. We
call it "the parallel frame", and it can be shown that any pair of straight
string pieces can be transformed into such a frame, where they will always lie
in parallel planes.
Below we will use this parallel frame to calculate the increased string
tension in the rope formation of arbitrarily complex string configurations.
### 3.1 The parallel frame formalism
In the previous rope implementation [30], the way to determine whether any two
string pieces overlap was to boost them to their common centre-of-mass frame
and measure the distance between them at a given space–time point of break-up.
This was done in a fairly crude way, not really taking into account that the
two string pieces typically cannot be considered parallel in this frame. In
general there is no frame where two arbitrary string pieces are exactly
parallel, but in the parallel frame introduced in ref. [20] it can be shown
that any two string pieces will always be stretched out in parallel planes in
a symmetric way. This works for all pairs of string pieces, even if one piece
is in a high transverse momentum jet and the other is in the underlying event.
Figure 1: The parallel frame showing the parallel planes of two strings and
the opening angle $\theta$ and skewness angle $\phi$.
In figure 1 we show a space–time picture of two string pieces stretched
between two pairs of partons in this parallel frame. Since massless partons
propagate at the speed of light irrespective of the magnitude of their
momenta, only the angles between them are important in the following. In the
parallel frame the two string pieces have the same opening angle $\theta$, and
the partons of one piece propagate with an angle $\theta/2$ w.r.t. the
$z$-axis.
$z$-axis. The partons of the other propagate in the opposite direction, with
an angle $\pi-\theta/2$. At any given time, both string pieces will lie in
planes parallel to the $xy$-plane and to each other. Looking at the
projections of the string pieces on the $xy$-plane, we denote the angle
between them by $\phi$, and the frame is chosen such that all partons form an
angle $\phi/2$ with the $x$-axis.
To simplify the calculations we write the momenta of the partons using their
transverse momentum, $p_{\perp}$, and pseudo-rapidity difference, $\eta$, with
respect to the $z$-axis, rather than the energy and opening polar angle (where
$p_{z}=e\cos\frac{\theta}{2}=p_{\perp}\sinh\frac{\eta}{2}$), and get, using
the notation $p=(e;p_{x},p_{y},p_{z})$,
$p_{1}=p_{\perp 1}\left(\cosh\tfrac{\eta}{2};\;\cos\tfrac{\phi}{2},\;\sin\tfrac{\phi}{2},\;\sinh\tfrac{\eta}{2}\right),$
$p_{2}=p_{\perp 2}\left(\cosh\tfrac{\eta}{2};\;-\cos\tfrac{\phi}{2},\;-\sin\tfrac{\phi}{2},\;\sinh\tfrac{\eta}{2}\right),$
$p_{3}=p_{\perp 3}\left(\cosh\tfrac{\eta}{2};\;\cos\tfrac{\phi}{2},\;-\sin\tfrac{\phi}{2},\;-\sinh\tfrac{\eta}{2}\right),$
$p_{4}=p_{\perp 4}\left(\cosh\tfrac{\eta}{2};\;-\cos\tfrac{\phi}{2},\;\sin\tfrac{\phi}{2},\;-\sinh\tfrac{\eta}{2}\right).$ (11)
Clearly we have six degrees of freedom, and we can construct six independent
squared invariant masses, $s_{ij}=(p_{i}+p_{j})^{2}$. This means that for any
set of four _massless_ partons we can (as long as no two momenta are
completely parallel) solve for the $p_{\perp i}$, which gives us:
$p^{2}_{\perp 1}=\frac{s_{12}}{4}\sqrt{\frac{s_{13}s_{14}}{s_{23}s_{24}}},\quad p^{2}_{\perp 2}=\frac{s_{12}}{4}\sqrt{\frac{s_{23}s_{24}}{s_{13}s_{14}}},\quad p^{2}_{\perp 3}=\frac{s_{34}}{4}\sqrt{\frac{s_{13}s_{23}}{s_{14}s_{24}}},\quad p^{2}_{\perp 4}=\frac{s_{34}}{4}\sqrt{\frac{s_{14}s_{24}}{s_{13}s_{23}}},$ (12)
and furthermore solve for the angles $\phi$ and $\eta$:
$\cosh\eta=\frac{s_{14}}{4p_{\perp 1}p_{\perp 4}}+\frac{s_{13}}{4p_{\perp 1}p_{\perp 3}}\quad\text{and}\quad\cos\phi=\frac{s_{14}}{4p_{\perp 1}p_{\perp 4}}-\frac{s_{13}}{4p_{\perp 1}p_{\perp 3}}.$ (13)
To further specify the frame we renumber the particles so that $\phi<\pi/2$,
making the strings more parallel to the $x$-axis than to the $y$-axis, and we
define the $x$-axis to be their combined _rope_ axis. The result is that for a
breakup at a given space–time point in one string piece, we have in the
parallel frame a reasonable handle on the overlap with any other string piece.
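As an illustration of eqs. (11)-(13), the sketch below constructs four massless momenta from the parametrization in eq. (11) with made-up values, and recovers $\eta$ and $\phi$ from the six invariants; all numbers are invented for the round-trip check.

```python
import math

# Toy round-trip check of eqs. (11)-(13); all momentum values are invented.
def momentum(pt, eta, phi, sx, sy, sz):
    # eq. (11): p = pt*(cosh(eta/2); ±cos(phi/2), ±sin(phi/2), ±sinh(eta/2))
    return (pt * math.cosh(eta / 2), pt * sx * math.cos(phi / 2),
            pt * sy * math.sin(phi / 2), pt * sz * math.sinh(eta / 2))

def sij(p, k):   # squared invariant mass of two massless momenta
    return 2.0 * (p[0]*k[0] - p[1]*k[1] - p[2]*k[2] - p[3]*k[3])

def parallel_frame_angles(p1, p2, p3, p4):
    s12, s13, s14 = sij(p1, p2), sij(p1, p3), sij(p1, p4)
    s23, s24, s34 = sij(p2, p3), sij(p2, p4), sij(p3, p4)
    pt1 = math.sqrt(s12 / 4 * math.sqrt(s13 * s14 / (s23 * s24)))  # eq. (12)
    pt3 = math.sqrt(s34 / 4 * math.sqrt(s13 * s23 / (s14 * s24)))
    pt4 = math.sqrt(s34 / 4 * math.sqrt(s14 * s24 / (s13 * s23)))
    a, b = s14 / (4 * pt1 * pt4), s13 / (4 * pt1 * pt3)
    return math.acosh(a + b), math.acos(a - b)                     # eq. (13)

ETA, PHI = 1.3, 0.7
ps = [momentum(2.0, ETA, PHI, +1, +1, +1), momentum(1.0, ETA, PHI, -1, -1, +1),
      momentum(1.5, ETA, PHI, +1, -1, -1), momentum(0.8, ETA, PHI, -1, +1, -1)]
print(parallel_frame_angles(*ps))   # recovers (1.3, 0.7)
```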
### 3.2 Overlap in the parallel frame
In eq. (6) we wrote down the interaction energy of two completely parallel
strings separated by a small distance. We now want to use this to estimate the
effective overlap of two strings that are not completely parallel, but lie in
parallel planes.
At a specific point along the $x$-axis in the parallel frame we denote the
separation between the strings in the $yz$-plane by $(\delta_{y},\delta_{z})$
and integrate the interaction of the field given the skewness angle $\phi$ to
obtain
$\mathcal{I}(\delta_{y},\delta_{z},\phi)=\int d^{2}\rho\,\mathbf{E}_{1}(\rho)\cdot\mathbf{E}_{2}(\rho)=E_{0}^{2}\cos\phi\int dy\,dz\,\exp\left(-\frac{y^{2}\cos\frac{\phi}{2}+z^{2}}{2R^{2}}\right)\exp\left(-\frac{(y-\delta_{y})^{2}\cos\frac{\phi}{2}+(z-\delta_{z})^{2}}{2R^{2}}\right)=2\pi E_{0}^{2}R^{2}\,\frac{\cos\phi}{\cos\frac{\phi}{2}}\exp\left(-\frac{\delta_{y}^{2}\cos\frac{\phi}{2}+\delta_{z}^{2}}{4R^{2}}\right).$ (14)
Here we note that the skewness angle enters both in the scalar product and in
the strength of the field along the $y$-axis, and that the overlap vanishes
for orthogonal strings.
We can now define the relative overlap as
$\mathcal{I}(\delta_{y},\delta_{z},\phi)/\mathcal{I}(0,0,0)$ and use it as the
probability (assuming that $\mathbf{E}_{1}\cdot\mathbf{E}_{2}>0$) that a
breakup in one string is affected by an increased string tension due to the
overlap with the other. This then corresponds to a $\\{2,0\\}\to\\{1,0\\}$
transition, giving an effective string tension
$\kappa_{\mathrm{eff}}=3\kappa/2$ in eq. (10). If the strings instead point in
opposite directions along the $x$-axis
($\mathbf{E}_{1}\cdot\mathbf{E}_{2}<0$), this corresponds to a
$\\{1,1\\}\to\\{0,1\\}$ breakup with $\kappa_{\mathrm{eff}}=5\kappa/4$.
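A minimal implementation of this relative overlap could look as follows; the separations and angle in the example calls are invented, and the frame is assumed to have been chosen with $\phi<\pi/2$ as in section 3.1.

```python
import math

# Relative overlap I(dy, dz, phi) / I(0, 0, 0) from eq. (14);
# the frame is chosen with phi < pi/2 (cf. section 3.1).
def relative_overlap(dy, dz, phi, R=1.0):
    geom = math.cos(phi) / math.cos(phi / 2.0)   # skewness factor
    return geom * math.exp(-(dy * dy * math.cos(phi / 2.0) + dz * dz)
                           / (4.0 * R * R))

print(relative_overlap(0.0, 0.0, 0.0))        # 1.0: parallel, fully overlapping
print(relative_overlap(0.5, 0.2, 0.6))        # reduced by separation and skew
print(relative_overlap(0.0, 0.0, math.pi/2))  # ~0: orthogonal strings
```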
In this way we can, for each breakup in one string piece, take all other
string pieces in the event and, for each, go to the parallel frame to
determine whether it contributes to $p$ or $q$. In our implementation
described below, we sum the relative overlaps in $p$ and $q$ respectively and
round them off to integers, rather than treating them as individual
probabilities for each pair of string pieces, which on average gives the same
result.
It should be pointed out that in the parallel frame we also have a handle on
which string breaks up first. If we assume that the string breaks at a common
average proper time along the string, $\tau_{H}$, we can in the parallel frame
calculate the proper time of the other string at the space–time point where we
calculate the overlap. If the latter is larger than $\tau_{H}$, we conclude
that the other string has already broken up, and can no longer contribute to
an increased string tension in the break-up being considered.
### 3.3 Monte Carlo implementation
The main technical problem with implementing the rope model in PYTHIA 8 is the
order in which the string fragmentation proceeds. First, the flavour and
transverse momentum of the break-up are chosen (eq. (1)) together with the
type of the chopped-off hadron. Only then is the momentum fraction, $z$,
chosen according to eq. (2), and only then do we know exactly where the string
breaks and can calculate $\kappa_{\mathrm{eff}}$ at that point. But we need to
know $\kappa_{\mathrm{eff}}$ to be able to calculate the break-up, so we have
a kind of Catch-22 situation.
We solve this by performing a trial break-up to pre-sample the overlap of a
given string, and use the overlap there to get an approximate
$\kappa_{\mathrm{eff}}$. Then we discard the sampled break-up and produce a
new one using this $\kappa_{\mathrm{eff}}$. On average we will then get a
reasonable estimate of the overlap around a break-up. For a general break-up
in the underlying event this should be good enough, but if we are interested
in the details of the hadron production in, _e.g._, the tip of a jet, this
procedure may be inappropriate (see the further discussion in section 4.3).
The procedure to calculate $\kappa_{\mathrm{eff}}$ looks as follows (a toy
code sketch is given after the list):
1. 1.
Produce a trial break-up in the string being fragmented, and deduce from which
string piece it comes.
2. 2.
Pair this piece with every other string piece in the event, make a Lorentz
transformation to the parallel frame of each pair.
3. 3.
Using the pseudo-rapidity of the produced hadron in each such frame, and
assuming the break-up occurred at the proper time, $\tau_{H}$, find the space-
time point of the break-up of the first string piece.
4. 4.
In the corresponding $yz$-plane determine the proper time of the other string
piece and if that is less than $\tau_{H}$, calculate the overlap according to
eq. (14), and determine if this overlap should contribute to $p$ or $q$ in the
breakup.
5. 5.
With the summed $p$ and $q$ (rounded off to integer values), we now calculate
$\kappa_{\mathrm{eff}}$ according to eq. (10).
6. 6.
Throw away the trial break-up with its produced hadron, change the PYTHIA 8
fragmentation parameters according to the obtained $\kappa_{\mathrm{eff}}$,
and generate the final break-up.
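A runnable toy version of steps 4-6 is sketched below; the per-pair overlap weights, field orientations, and proper times are simply given as invented inputs, rather than computed from an actual event record.

```python
# Toy version of steps 4-6 above. Each entry in `others` holds, for one other
# string piece: its relative overlap weight at the trial break-up (e.g. from
# eq. (14)), whether its field is parallel to that of the fragmenting string,
# and its proper time at that point. All numbers are invented.
def kappa_eff_at_breakup(others, tau_H=1.5, kappa=1.0):
    p_sum = q_sum = 0.0
    for weight, parallel, tau_other in others:
        if tau_other >= tau_H:        # step 4: that string already broke up
            continue
        if parallel:
            p_sum += weight           # contributes to p ...
        else:
            q_sum += weight           # ... or, if anti-parallel, to q
    p, q = round(p_sum), round(q_sum)           # step 5
    return (2 * p + q + 4) / 4.0 * kappa        # step 6, via eq. (10)

others = [(0.96, True, 0.8),   # close, nearly parallel, not yet broken
          (0.60, False, 0.5),  # skewed, anti-parallel, not yet broken
          (0.90, True, 2.0)]   # overlapping, but already hadronized
print(kappa_eff_at_breakup(others))   # p=1, q=1 -> 7*kappa/4 = 1.75
```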
As mentioned in section 2.1, some care has to be taken when it comes to soft
gluons. Normally, all string pieces can be said to be dipoles between
colour-connected partons, and in any parallel frame such a string piece is
parallel to the $xy$-plane. But a soft gluon may have lost all its momentum
before the string breaks, and the break-up can then occur in a piece of the
string that is not parallel to the string pieces of the connected dipoles. To
include this possibility we introduce secondary dipoles: if we have two
dipoles connected to a soft gluon, _e.g._ $q_{i}-g_{j}$ and
$g_{j}-\bar{q}_{k}$, then a secondary string is included, spanned between the
momenta of $q_{i}$ and $\bar{q}_{k}$, but using the space–time point where the
gluon has lost all its momentum to the connected string pieces as its point of
origin.
The problem with soft gluons is also present for our shoving model in [20],
and the solution with secondary dipoles is now used there as well. This will
be described in more detail in a future publication, where we also describe
the procedure for including these higher order dipoles in cases where there
are several consecutive soft gluons along a string.
### 3.4 Interplay with the Shoving model
Clearly our rope model is very tightly connected with our shoving model. They
both rely on the parallel frame and technically they both use the same
infrastructure for looking at overlaps between string pieces. However, here
there is again a kind of Catch-22.
Physically the shoving precedes the hadronization, and pushes the strings
apart before they hadronize. As this affects the value of
$\kappa_{\mathrm{eff}}$, the shoving should be executed first. However, for
technical reasons the pushes are applied directly to the produced hadrons
rather than to the individual string pieces. Therefore we must calculate the
hadronization before we can execute the pushes.
We are currently working on a solution to this problem, and plan to present it
in a future publication. The main effect of the shoving is expected to be a
dilution of the strings, resulting in a lowered $\kappa_{\mathrm{eff}}$. As
discussed in [20], the precise value of the string radius is not known, and in
that paper we simply used a canonical value of 1 fm. The string radius will
also affect the value of $\kappa_{\mathrm{eff}}$, and preliminary studies show
that the effects of string dilution from shoving are of the same order as a
moderate decrease of the string radius of around 10%.
## 4 Results in $\mathrm{pp}$ collisions
In this section, features of the rope hadronization model with the parallel
frame formalism are investigated in $\mathrm{pp}$ collisions. Since the main
feature of this new formalism is the much improved handling of string pieces
which are not parallel to the beam axis (_i.e._ jets), we will mostly
concentrate on observables in events containing a process with high momentum
transfer, but in section 4.1 we first show the behaviour in minimum bias
collisions. Here the most fundamental check is the dependence of
$\kappa_{\mathrm{eff}}$ on final state multiplicity, but more relevant for the
parallel frame formalism is the dependence of $\kappa_{\mathrm{eff}}$ on
particle $p_{\perp}$. In section 4.2 we compare to existing experimental
results in the underlying event (UE) for $\mathrm{Z}$-triggered events. This
is to ensure that the existing description of such observables is not altered
by our model. Finally, in section 4.3, we present predictions for the jet
observables that are affected by rope formation in $\mathrm{pp}$ collisions.
### 4.1 Model behaviour
Figure 2: $\langle\kappa_{\mathrm{eff}}/\kappa\rangle$ vs. $N_{\text{ch.}}$
(left) and vs. $p_{\perp,\text{prim.}}$ for $N_{\text{ch.}}>10$ (right). Solid
lines have string radius $R=1$ fm and dot-dashed lines have $R=0.5$ fm. Blue
and red lines are for minimum-bias events at 7 and 13 TeV, respectively.
In this section we explore the variation of the effective string tension
$\kappa_{\mathrm{eff}}$ with rope hadronization, for minimum bias
$\mathrm{pp}$ events. The $\kappa_{\mathrm{eff}}$ is shown for primary
hadrons, _i.e._ it is the effective string tension used to form a given hadron
produced directly in the hadronization process. Results are shown for two
collision energies, $\sqrt{s}=7$ and 13 TeV, and two values of the string
radius, $R=0.5$ and 1 fm.
In figure 2, the dependence of $\langle\kappa_{\mathrm{eff}}/\kappa\rangle$ on
$N_{\text{ch.}}$ in $|\eta|<0.5$ is shown on the left, and on
$p_{\perp,\text{prim.}}$ on the right. On the right, only events with
$\mathrm{d}N_{\text{ch.}}/\mathrm{d}\eta>10$ are shown, to focus on events
with several parton interactions. (At 13 TeV this corresponds to keeping
roughly the 30% of events with the highest multiplicity [50].)
In the left plot of figure 2, it is seen that
$\langle\kappa_{\mathrm{eff}}/\kappa\rangle$ rises by around 30% for $R=1$ fm
and 10% for $R=0.5$ fm, almost irrespective of $\sqrt{s}$, with the rise at 13
TeV being only slightly higher. The two main points to take away from this
figure are (a) that $\mathrm{d}N_{\text{ch.}}/\mathrm{d}\eta$ is a good proxy
for string density irrespective of collision energy, and thus works as a good
scaling variable, and (b) that any result will be very sensitive to the choice
of $R$.
In the right plot of figure 2, we observe that the increase in
$\kappa_{\mathrm{eff}}$ is larger for primary hadrons in the lower $p_{\perp}$
bins, for both values of $R$. This means that the lower $p_{\perp}$ primary
hadrons are formed in regions with a high density of strings, with many
overlaps between adjacent strings. Higher $p_{\perp}$ partons, on the other
hand, correspond to "mini-jet" situations and are more separated in space–time
from the bulk of the strings. Such strings have fewer overlaps, resulting in a
lower $\kappa_{\mathrm{eff}}$, and the high $p_{\perp}$ primary hadrons formed
from such string break-ups show this effect.
In the lowest $p_{\perp}$ bins of the
$\langle\kappa_{\mathrm{eff}}/\kappa\rangle$ vs. $p_{\perp,\text{prim.}}$
plot, it is seen that $\kappa_{\mathrm{eff}}$ drops to lower values. This
behaviour arises because low $p_{\perp}$ particles are biased towards low
$\kappa_{\mathrm{eff}}$ values, due to the $\kappa$-dependence of the
$p_{\perp}$ distribution in the tunnelling probability in eq. (1).
Overall we observe that rope hadronization significantly increases the string
tension at high multiplicities and for low $p_{\perp}$ final-state particles.
For higher $p_{\perp}$ the effect is smaller, but does not disappear
completely.
### 4.2 Underlying event observables in reconstructed $\mathrm{Z}$ events
Figure 3: Associated particle production in Z$\rightarrow\ell^{-}\ell^{+}$
events at $\sqrt{s}=7$ TeV, compared to the default PYTHIA tune and with rope
hadronization. Top row: distribution of charged particle multiplicity,
$N_{\text{ch}}$ (top left), and summed scalar transverse momenta, $\Sigma
p_{\perp}$ (top right), measured for events with $p_{\perp}^{Z}$ in the range
0-6 GeV [51]. Bottom row: $\Sigma p_{\perp}$ distributions in different
azimuthal regions, in events with $p_{\perp}^{Z}$ in the range 10-20 GeV [52].
Left: _transverse_ region, $\pi/3<|\Delta\phi_{Z}|<2\pi/3$; right: _towards_
region $|\Delta\phi_{Z}|<\pi/3$.
Before moving on to study rope effects on jets, it is important to assess
whether rope formation drastically changes existing observables that are
currently well described by the default model. In events with a $\mathrm{Z}$
boson present, the most likely place for such a change to occur is in the UE.
To this end, we use a standard UE analysis implemented in the Rivet program
[53]. In figure 3, $N_{\text{ch.}}$ and $\Sigma p_{\perp}$ for
Z$\rightarrow\ell^{-}\ell^{+}$ events in $\mathrm{pp}$ collisions at 7 TeV are
compared to ATLAS data [51, 52]. The $\mathrm{Z}$ boson is reconstructed from
the electron or muon channel with invariant mass $66<m_{\ell^{-}\ell^{+}}<116$
GeV in $|\eta|<2.5$.
The charged particle multiplicity and summed scalar $p_{\perp}$ distributions
for the Z$\rightarrow\mu^{-}\mu^{+}$ channel with $0<p_{\perp}^{Z}<6$ GeV are
shown in the top row of figure 3. It is seen that adding rope hadronization
overall preserves the distributions as produced by default PYTHIA 8. We note
that rope hadronization has a slight effect of pushing particles from lower to
higher $\Sigma p_{\perp}$ regions, which follows from the
$p_{\perp}$-dependence of the tunnelling probability in eq. (1).
The particle $p_{\perp}$ in the away region (the azimuthal region opposite to
that of the Z boson) balances the $p_{\perp}^{Z}$. Hence the towards and
transverse regions with respect to the Z boson are much less affected by a
recoiling jet, and therefore have cleaner UE activity. (It should be noted
that the charged particle activity in events with a hard interaction, such as
Z production, is generally higher than in minimum bias events.) Since these
regions are sensitive to the hadronization mechanism, rope hadronization
effects should be apparent here. We therefore look at UE-sensitive observables
such as the scalar summed $p_{\perp}/\delta\eta\delta\phi$ distributions for
charged particles, in events with $p_{\perp}^{Z}$ in the range 10-20 GeV, in
the bottom row of figure 3. These plots show the $\Sigma p_{\perp}$
distributions in the transverse $(\pi/3<|\Delta\phi_{Z}|<2\pi/3)$ and towards
$(|\Delta\phi_{Z}|<\pi/3)$ regions [52]. We see that the rope hadronization
curve follows the default PYTHIA 8 curve, again preserving the overall physics
behaviour of PYTHIA 8, except for a slight shift in $\Sigma p_{\perp}$, as in
the top right plot.
We conclude that UE measurements are equally well described with rope
hadronization as without, and it is therefore not necessary to re-tune
fragmentation parameters before proceeding to give predictions for jet
observables.
Figure 4: Pion yields in Z$+$jet events in 13 TeV $\mathrm{pp}$ collisions vs.
$p_{\perp,\text{particle}}$ in the UE (top left), vs. $p_{\perp,\text{jet}}$
in the jet cone (top right), as a function of
$z=p_{\perp,\text{particle}}/p_{\perp,\text{jet}}$ (bottom left), and vs.
$p_{\perp,\text{jet}}$ for $0.4<z<0.6$ (bottom right).
### 4.3 Strangeness yields in Z$+$jet events
To investigate experimentally observable consequences of our rope model for
the yield of different hadron species inside jets, we have chosen to study its
effects in Z$+$jet events at LHC energies. It has been shown, _e.g._, in ref.
[54], that such events are very useful for separating regions of phase space
dominated by the UE from the regions dominated by jets. By selecting events
where the Z boson is well balanced by a hard jet in the opposite azimuthal
region, we can study the UE in a cone around the Z, where there should be very
little activity related to the jet, and thus obtain a good estimate of the UE
activity on an event-by-event basis. This gives a reliable way of correcting
jet observables for UE effects, not only for the transverse momentum of the
jet but also for the flavour content.
#### 4.3.1 Overall jet features
To observe the modification of flavour production in the jet, we want to look
at the yield ratios of different hadron species. We have therefore written a
Rivet analysis in which we first locate a reconstructed Z boson with
$m_{\mu^{-}\mu^{+}}$ in the range 80-100 GeV and $|\eta|<2.5$, and search for
the hardest associated jet in the opposite azimuthal hemisphere. We further
restrict the Z boson by requiring it to be within $|\eta|<1.9$ and
$p_{\perp}^{Z}>8$ GeV, using the standard Z-finding projection in Rivet. Once
we find such a Z boson in the event, we search for the associated hardest
(charged particle) jet using the anti-$k_T$ [55] algorithm with radius
$R_{j}=0.4$ in $|\eta|<2.1$, with the azimuthal separation
$\Delta\phi_{\text{jet},Z}\geq 2\pi/3$.
Figure 5: Yield ratio of different strange hadron species and protons to pions
in the UE cone vs. $p_{\perp,\text{particle}}$, scaled by factors to show them
clearly. Solid lines are with rope hadronization and dot-dashed lines are for
default P YTHIA 8.
To subtract UE contributions from the jet $p_{\perp}$, we calculate a
characteristic $\Sigma p_{\perp,\text{UE}}$, by summing up the $p_{\perp}$ of
the charged final state particles (not including muons from the Z decay) that
lie within a cone of radius $\sqrt{2}R_{j}$ around the Z boson. Therefore, for
a given event, the yields of the particles are calculated twice: once within
the jet cone, then within a cone of radius $\sqrt{2}R_{j}$ with respect to the
Z boson. The latter serves as our underlying event reference and we subtract
half of this yield from the yield inside the jet cone to get the final yield
of the hadrons in that event associated with the jet. Denoting the initial
jet-$p_{\perp}$ as $p_{\perp,\text{pseudojet}}$, the corrected
$p_{\perp,\text{jet}}$ becomes:
$p_{\perp,\text{jet}}=p_{\perp,\text{pseudojet}}-0.5\times\Sigma p_{\perp,\text{UE}},$ (15)
and the corresponding yields:
$\text{yield}_{\text{jet}}=\text{yield}_{\text{pseudojet}}-0.5\times\text{yield}_{\text{UE}}.$ (16)
This method of UE subtraction can easily be extended to $\mathrm{p}A$ and $AA$
collisions to give a comparable result among the three systems. Similar
methods have previously been used in heavy ion collisions [56]. We do this
analysis for $\mathrm{pp}$ collisions at $\sqrt{s}=13$ TeV with
$p_{\perp,\text{jet}}\geq$ 10 GeV for string radius $R=1$ fm.
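To make the subtraction concrete, here is a minimal Python sketch of eqs. (15)-(16); the argument names are hypothetical. The factor 0.5 reflects that the UE cone of radius $\sqrt{2}R_{j}$ covers twice the area of the jet cone of radius $R_{j}$.

```python
def ue_subtract(pt_pseudojet, n_jet_cone, ue_cone_pts):
    """ue_cone_pts: pT values of charged final-state particles (excluding
    the Z decay muons) inside the cone of radius sqrt(2)*R_j around the Z;
    n_jet_cone: particle yield inside the jet cone."""
    sum_pt_ue = sum(ue_cone_pts)                      # Sigma pT,UE
    pt_jet = pt_pseudojet - 0.5 * sum_pt_ue           # eq. (15)
    yield_jet = n_jet_cone - 0.5 * len(ue_cone_pts)   # eq. (16)
    return pt_jet, yield_jet
```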
Figure 6: Yield ratio of different strange hadrons and protons to pions in the
jet cone, for $R_{j}=0.4$ vs. $p_{\perp,\text{jet}}$, scaled by factors to
show them clearly. Solid lines are with rope hadronization and dot-dashed
lines are for default PYTHIA 8.
To examine the model performance in reproducing general features of the jets,
such as particle multiplicity as a function of their transverse momentum and
of the transverse momentum of the jets, we look at the pions. Our rope model
is known to have very small effects on the overall multiplicity [30], and we
know that pions in general dominate particle production, even though we
expect a slight drop in pions, since a high $\kappa_{\mathrm{eff}}$ will
favour strange hadrons and baryons over pions. In figure 4 we show the pion
yield as a function of particle $p_{\perp}$ in the UE cone, and the
UE-subtracted yield as a function of $p_{\perp,\text{jet}}$ in the jet cone.
We also show the pion yield with respect to
$z=p_{\perp,\text{particle}}/p_{\perp,\text{jet}}$ and in the mid-$z$ region
as a function of $p_{\perp,\text{jet}}$. Indeed we find that the rope effects
are very small for pion production, both in the UE and in the jet, with the
possible exception of the lowest bin in the $z$ distribution. We will revisit
the bottom row plots in connection with strangeness yields in the jet cone in
section 4.3.2.
In the UE region, the density of strings is high resulting in a higher number
of overlaps among them. As a result, we would expect large effects due to rope
hadronization in the UE. In order to observe this effect, we look at the yield
ratio of the strange hadrons to pions in the UE cone. In figure 5, we show the
yield ratio to pions for strange mesons ($K^{0}_{S}$) and baryons ($\Lambda$,
$\Xi$ and $\Omega$) and protons with respect to $p_{\perp,\text{particle}}$.
Yields of $\Xi$ and $\Omega$ baryons have been scaled by a multiplicative
factor to show them in comparison to the other species. As expected, the
different yields are higher with rope hadronization turned on compared to
default PYTHIA 8. The highest enhancement for each species is observed in
the lowest $p_{\perp,\text{particle}}$ range and subsequently decreases for
higher particle $p_{\perp}$ (in line with figure 2 in section 4.1). This plot
therefore shows that with rope hadronization we get increased yields of
baryons and strangeness. It also lets us compare the UE contribution to
strangeness yields with that inside the jet.
Turning to flavour production inside the jet cone in figure 6, we show the UE-
subtracted yield ratio to pions for the same set of hadron species as before,
now with respect to $p_{\perp,\text{jet}}$. As rope hadronization will
increase both strangeness and baryon production, the largest enhancement is
expected for multistrange baryons. For $K^{0}_{S}$, only a slight increase is
observed, while the increase for protons is higher. The $\Lambda$ yield due to
rope hadronization is even higher due to combined baryon and strangeness
enhancement. The yield of $\Xi$ is $\sim$ 20% higher due to rope
hadronization, than default P YTHIA 8 and the $\Omega$ yield with rope
hadronization is more than 50% higher. This shows that both baryon and
strangeness yields are enhanced by rope hadronization. We note that the
increase in the yield ratio due to rope hadronization is rather constant over
all $p_{\perp,\text{jet}}$. Hence if we look at the enhancement as a function
of the transverse momentum ratio of the particle species to that of the jet,
that would help us identify the $p_{\perp}$ ranges where rope effects are
higher.
Figure 7: Yield ratios as a function of
$z=p_{\perp,\text{particle}}/p_{\perp,\text{jet}}$ for $\mathrm{pp}$
collisions at $\sqrt{s}$=13 TeV: $2K^{0}_{S}/(\pi^{+}\pi^{-})$ (top left),
$(\Lambda+\bar{\Lambda})/(\pi^{+}\pi^{-})$ (top right),
$(\Xi^{-}+\Xi^{+})/(\pi^{+}\pi^{-})$ (bottom left) and
$(\Omega^{-}+\Omega^{+})/(\pi^{+}\pi^{-})$ (bottom right).
#### 4.3.2 Jet substructure observables
Now we take a closer look at the particle to pion yield ratios as a function
of $z$ and $p_{\perp,\text{jet}}$. Studies have been performed where the ratio
of $p_{\perp}$ of the individual sub-jets to that of the leading jet serves as
a distinguishing observable for jet modification [57]. Since we want to look at
the strange flavour yields in the jet cone, we take a simpler approach. We
only plot the yield ratios in bins of $z$, which is the ratio of the particle
$p_{\perp}$ to the jet $p_{\perp}$.
In figure 7, we show the yield ratio of strange hadrons to pions vs. $z$. We
observe that the particle yields are increased at low (close to the UE) to
intermediate $z$ values. Furthermore, this enhancement is smaller for
$K^{0}_{S}$ and larger for the strange baryon $\Lambda$, and for multistrange
baryons $\Xi$ and $\Omega$ as expected. However, strangeness and baryon
enhancement drops at higher $z$. This highlights the behaviour that rope
hadronization effects decrease with higher $p_{\perp}$, as we noted in figure
2 in section 4.1.
Figure 8: Yield ratios of particles with $0.4<z<0.6$, as a function of
$p_{\perp,\text{jet}}$ for $\mathrm{pp}$ collisions at $\sqrt{s}$=13 TeV:
$2K^{0}_{s}/(\pi^{+}\pi^{-})$ (top left),
$(\Lambda+\bar{\Lambda})/(\pi^{+}\pi^{-})$ (top right),
$(\Xi^{-}+\Xi^{+})/(\pi^{+}\pi^{-})$ (bottom left) and
$(\Omega^{-}+\Omega^{+})/(\pi^{+}\pi^{-})$ (bottom right).
We note that, even though the parallel frame formalism allows the calculation
of $\kappa_{\mathrm{eff}}$ in events with jets, the current implementation is
lacking in the region $z\approx 1$, as already mentioned in section 3.3. The
previously mentioned Catch-22 situation is purely related to the
implementation, and can be further understood by considering the shape of the
Lund symmetric fragmentation function in eq. (2), which is vanishing near
$z=1$. For a particle with $z$ close to one, the pre-sampled overlap is
therefore likely to have been calculated with a too-small $z$, which in turn
means that it is calculated for the wrong part of the string. In $\mathrm{pp}$
collisions this effect is small but non-negligible, which we have confirmed by
an a posteriori check (as the correct overlaps can be calculated after the
fact, but too late to be used in event generation). Another issue, which would
be present even in a perfect implementation, and therefore potentially more
severe, is the absence of interactions between hadrons formed early in time,
and their surrounding environment. For most of the produced particles, and in
particular in $\mathrm{pp}$, this effect should also be small. But in the case
of high $z$, the particle is always produced early, and the effect could be
larger. We plan to develop the model further in this direction, but in the
meantime we will in the following show results for particles at intermediate
$z$ values ($0.4<z<0.6$), where the effects arising from both these issues
should be negligible.
To test the modification in flavour yields at mid-$z$ values, we look at
particle yields as a function of $p_{\perp,\text{jet}}$. Since these particles
are neither close to the tip of the jet nor to the UE, it is more reasonable
to trust the trial-hadron sampling of $\kappa_{\mathrm{eff}}$ in these regions.
Moreover, as the jet $p_{\perp}$ increases, the particles get further and
further away from the UE. In figure 8, we show the yield ratio of strange
hadrons to pions in the $0.4<z<0.6$ region vs. $p_{\perp,\text{jet}}$. We
observe that the yields from the rope hadronization case are distinct compared
to default PYTHIA 8. The individual strange hadron to pion yield
ratio increases as we go from the $K^{0}_{s}$ meson to the $\Lambda$ baryon
(top row plots). For the multistrange baryons $\Xi$ and $\Omega$ (bottom row
plots), rope effects are amplified due to the higher number of strange quarks,
resulting in a 20%-50% increase in their yields in the low
$p_{\perp,\text{jet}}$ ranges. However, as mentioned before, we would expect
the enhancement in the yields to drop at higher $p_{\perp,\text{jet}}$ bins.
This effect is rather small for $\Lambda$ but prominent for $\Xi$ and
$\Omega$. The $\Omega$ (bottom right plot) is only shown up to 45 GeV due to
limited statistics.
## 5 Conclusion
We have here presented a study on how an effect from a dense system of colour
fluxtubes might be observed as strangeness enhancement in jets in high
multiplicity pp events. In such events it is essential to properly estimate
the interaction between non-parallel strings, including strings connected to a
hard scattered parton and strings in the underlying event. This problem was
solved in ref. [20], where the interaction of all string pairs can be
calculated in a Lorentz frame, where two string pieces lie symmetrically in
two parallel planes. We here show results for jet-triggered high-multiplicity
$\mathrm{pp}$ collisions. The generalization to $\mathrm{p}A$ and $AA$
collisions (using the Angantyr model [58]) will be presented in a future
publication.
The interacting strings can form “colour ropes”, which hadronize in a stepwise
manner by $q\bar{q}$ pair creation. The increased energy in the rope gives a
higher "effective string tension", $\kappa_{\mathrm{eff}}$, which increases
the number of strange quarks and diquarks in the breakups. In section 4.1 we
found that this results in an increase of $\kappa_{\mathrm{eff}}$ with
multiplicity in $\mathrm{pp}$ events at LHC energies. It is interesting to
note that the increase for a given multiplicity is almost independent of the
collision energy.
As expected we also found that the increase is quite dependent on the
transverse momentum, since high-$p_{\perp}$ particles are typically produced
in jets where the strings are not parallel with the bulk of the strings in the
underlying event, thus reducing the effective overlap with these. The
important question is then if the rope model, despite being reduced in jets,
anyway will result in a modification of the hadron composition of jets.
To study the effects on jets we focused our investigation on Z+jet events,
with the Z decaying to lepton pairs. As pointed out in _e.g._ ref. [54], it is
possible, in such events, to get a relatively clean separation between the
jets and the particle production in the underlying event. In particular the
hadrons produced in a cone around the direction of the Z particle should have
very little to do with the recoiling jet, and can therefore be used to correct
any observable in the jet cone for underlying-event contributions on an event-
by-event basis.
The modified $\kappa_{\mathrm{eff}}$ also affects the fragmentation
parameters. In section 4.2 results for multiplicity and the transverse
momentum distribution in the underlying event in pp Z+jet events, were
compared with results from default PYTHIA 8 and with data from ATLAS. After
confirming that the rope hadronization gives negligible effects on these
general features of the underlying event, we feel comfortable that we can
study strangeness and baryon enhancement in the jets in a way, which is not
biased by the underlying-event corrections.
In section 4.3 our main results for strangeness and baryon number enhancement
in jets were presented, with the underlying event subtracted. We note that the
effect is most important for strange baryons, and growing with the number of
strange quarks. Thus it is largest for $\Omega$ baryons, and from the plots
showing the $\Omega/\pi$ ratio as a function of the jet transverse momentum,
we note that rope effects are very small for large jet $p_{\perp}$ as
expected, but quite noticeable for low jet $p_{\perp}$.
From this we conclude that it may indeed be possible to find jet modifications
due to collective effects, in our rope model, in small collision systems. The
size of the effect is, however, a bit uncertain. In part this is due to the
uncertainty in the transverse size of the string, and our canonical choice of
$R=1$ fm may be a bit large. Although it should be possible to tune this
parameter to fit the overall strangeness and baryon enhancement, it is then
also important to take into account the effects of repulsion between the
strings. Both of these effects will be addressed in future publications.
Looking ahead, it is also interesting to investigate the effects of colour
reconnection, in particular models that include junction formations, which
will also influence the baryon production. In the end we hope to develop a
picture where most collective effects can be interpreted as interactions among
strings, not only in $\mathrm{pp}$ collisions but also in $\mathrm{p}A$ and
$AA$.
## Acknowledgements
This work was funded in part by the Knut and Alice Wallenberg foundation,
contract number 2017.0036, the Swedish Research Council, contract numbers
2016-03291, 2016-05996 and 2017-0034, in part by the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation
programme, grant agreement No 668679, and in part by the MCnetITN3 H2020 Marie
Curie Initial Training Network, contract 722104.
## Appendix A Dependence of fragmentation parameters on
$\kappa_{\mathrm{eff}}$
There are several hadronization parameters in PYTHIA 8, and even if they are
in principle independent, several of them have an implicit dependence on the
string tension. In our implementation of the rope hadronization, we take the
parameters as tuned to $\mathrm{e}^{+}\mathrm{e}^{-}$ data, where we expect no
rope effects, and for each breakup in the string fragmentation we rescale the
parameters according to the estimated change in string tension at that point,
due to the presence of overlapping string fields. The parameters under
consideration are the same as in our previous implementation [30], and their
dependence on the string tension is also the same. For completeness we list
them here, but for further details we refer to [30].
In the following we will denote the change in string tension by $h$, according
to $\kappa\mapsto\kappa_{\mathrm{eff}}=h\kappa$. The following parameters are
affected:
* •
$\rho$ (StringFlav:probStoUD; this is the parameter name in PYTHIA 8): the
suppression of $s$-quark production relative to $u$- or $d$-type production.
This parameter has a simple scaling
$\rho\mapsto\tilde{\rho}=\rho^{1/h}.$ (17)
* •
$x$ (StringFlav:probSQtoQQ): the suppression of diquarks with strange quark
content relative to diquarks without strange quarks (in addition to the factor
$\rho$ for each extra $s$-quark) also scales like
$x\mapsto\tilde{x}=x^{1/h}.$ (18)
* •
$y$ (StringFlav:probQQ1toQQ0): the suppression of spin 1 diquarks relative to
spin 0 diquarks (not counting a factor three due to the number of spin states
of spin 1 diquarks) again scales like
$y\mapsto\tilde{y}=y^{1/h}.$ (19)
* •
$\sigma$ (StringPT:sigma): the width of the transverse momentum distribution
in string break-ups. This is directly proportional to $\sqrt{\kappa}$, giving
$\sigma\mapsto\tilde{\sigma}=\sigma\sqrt{h}.$ (20)
* •
$\xi$ (StringFlav:probQQtoQ): the global probability of having a diquark
break-up relative to a simple quark break-up. This has a somewhat more
complicated $\kappa$ dependence and also has uncertainties related to the so-
called popcorn model as described in [30]. We decompose it as three different
parameters, $\xi=\alpha\beta\gamma$ with different $\kappa$-dependence, where
$\beta$ is related to the probability to have a $q\bar{q}$ fluctuation in
general in the popcorn model which is independent of $\kappa$ and is treated
as an independent parameter, while $\gamma$ is related to the masses and
scales as
$\gamma\mapsto\tilde{\gamma}=\gamma^{1/h},$ (21)
and $\alpha$ is related to the different di-quark states with an indirect
dependence on $\rho$, $x$, and $y$
$\alpha\mapsto\tilde{\alpha}=\frac{1+2\tilde{x}\tilde{\rho}+9\tilde{y}+6\tilde{x}\tilde{\rho}\tilde{y}+3\tilde{y}\tilde{x}^{2}\tilde{\rho}^{2}}{2+\tilde{\rho}}.$ (22)
Taken together we get the following dependence:
$\xi=\alpha\beta\gamma\mapsto\tilde{\xi}=\tilde{\alpha}\beta\left(\frac{\xi}{\alpha\beta}\right)^{1/h}.$ (23)
* •
$b$ (StringZ:bLund): the parameter in the symmetric fragmentation function eq.
(2) scales with the $\rho$-parameter as follows
$b\mapsto\tilde{b}=\frac{2+\tilde{\rho}}{2+\rho}\,b.$ (24)
* •
$a$ (StringZ:aLund): the other parameter in eq. (2) has an indirect dependence
on $b$ through the normalisation of the splitting function, $f(z)$. Keeping
the normalisation unchanged does not give a simple analytic form for the
scaling of $a\mapsto\tilde{a}$, and instead we use a numeric integration
procedure.
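To make the combined scaling concrete, the following sketch applies eqs. (17)-(24) for a given $h=\kappa_{\mathrm{eff}}/\kappa$. The function is our illustration, not part of the PYTHIA 8 code; $\beta$ is supplied unchanged since it is $\kappa$-independent, and the $a$-parameter is omitted because it requires the numeric integration mentioned above.

```python
import math

def rescale_parameters(h, rho, x, y, sigma, xi, beta, b):
    """Rescale the e+e- tuned parameters for an effective string tension
    kappa_eff = h * kappa, following eqs. (17)-(24)."""
    rho_t   = rho ** (1.0 / h)        # eq. (17)
    x_t     = x ** (1.0 / h)          # eq. (18)
    y_t     = y ** (1.0 / h)          # eq. (19)
    sigma_t = sigma * math.sqrt(h)    # eq. (20)

    def alpha_of(xx, rr, yy):         # diquark-state factor, eq. (22)
        return (1 + 2 * xx * rr + 9 * yy + 6 * xx * rr * yy
                + 3 * yy * xx ** 2 * rr ** 2) / (2 + rr)

    alpha, alpha_t = alpha_of(x, rho, y), alpha_of(x_t, rho_t, y_t)
    # xi = alpha * beta * gamma, with gamma scaling as gamma**(1/h);
    # the pieces combine into eq. (23).
    xi_t = alpha_t * beta * (xi / (alpha * beta)) ** (1.0 / h)
    b_t = (2 + rho_t) / (2 + rho) * b                 # eq. (24)
    return {"rho": rho_t, "x": x_t, "y": y_t,
            "sigma": sigma_t, "xi": xi_t, "b": b_t}
```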
## References
* [1] K. Adcox _et al._ , _Formation of dense partonic matter in relativistic nucleus-nucleus collisions at RHIC: Experimental evaluation by the PHENIX collaboration_ , Nucl. Phys. A 757, 184 (2005), 10.1016/j.nuclphysa.2005.03.086, nucl-ex/0410003.
* [2] J. Adams _et al._ , _Evidence from d + Au measurements for final state suppression of high p(T) hadrons in Au+Au collisions at RHIC_ , Phys. Rev. Lett. 91, 072304 (2003), 10.1103/PhysRevLett.91.072304, nucl-ex/0306024.
* [3] C. Adler _et al._ , _Disappearance of back-to-back high $p_{T}$ hadron correlations in central Au+Au collisions at $\sqrt{s_{NN}}$ = 200-GeV_, Phys. Rev. Lett. 90, 082302 (2003), 10.1103/PhysRevLett.90.082302, nucl-ex/0210033.
* [4] A. M. Sirunyan _et al._ , _Study of Jet Quenching with $Z+\text{jet}$ Correlations in Pb-Pb and $pp$ Collisions at ${\sqrt{s}}_{NN}=5.02\text{ }\text{ }\mathrm{TeV}$_, Phys. Rev. Lett. 119(8), 082301 (2017), 10.1103/PhysRevLett.119.082301, 1702.01060.
* [5] J. Adam _et al._ , _Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions_ , Nature Phys. 13, 535 (2017), 10.1038/nphys4111, 1606.07424.
* [6] V. Khachatryan _et al._ , _Evidence for collectivity in pp collisions at the LHC_ , Phys. Lett. B 765, 193 (2017), 10.1016/j.physletb.2016.12.009, 1606.06198.
* [7] J. Adolfsson _et al._ , _QCD challenges from pp to A–A collisions_ , Eur. Phys. J. A 56(11), 288 (2020), 10.1140/epja/s10050-020-00270-1, 2003.10997.
* [8] A. Majumder, _The In-medium scale evolution in jet modification_ (2009), 0901.4516.
* [9] B. Schenke, C. Gale and S. Jeon, _MARTINI: An Event generator for relativistic heavy-ion collisions_ , Phys. Rev. C 80, 054913 (2009), 10.1103/PhysRevC.80.054913, 0909.2037.
* [10] Y. He, T. Luo, X.-N. Wang and Y. Zhu, _Linear Boltzmann Transport for Jet Propagation in the Quark-Gluon Plasma: Elastic Processes and Medium Recoil_ , Phys. Rev. C 91, 054908 (2015), 10.1103/PhysRevC.91.054908, [Erratum: Phys.Rev.C 97, 019902 (2018)], 1503.03313.
* [11] K. C. Zapp, _JEWEL 2.0.0: directions for use_ , Eur. Phys. J. C 74(2), 2762 (2014), 10.1140/epjc/s10052-014-2762-1, 1311.0048.
* [12] K. C. Zapp, F. Krauss and U. A. Wiedemann, _A perturbative framework for jet quenching_ , JHEP 03, 080 (2013), 10.1007/JHEP03(2013)080, 1212.1599.
* [13] T. Sjöstrand and M. van Zijl, _A Multiple Interaction Model for the Event Structure in Hadron Collisions_ , Phys. Rev. D 36, 2019 (1987), 10.1103/PhysRevD.36.2019.
* [14] S. Gieseke, C. Rohr and A. Siodmok, _Colour reconnections in Herwig++_ , Eur. Phys. J. C 72, 2225 (2012), 10.1140/epjc/s10052-012-2225-5, 1206.0041.
* [15] P. Gras, S. Höche, D. Kar, A. Larkoski, L. Lönnblad, S. Plätzer, A. Siódmok, P. Skands, G. Soyez and J. Thaler, _Systematics of quark/gluon tagging_ , JHEP 07, 091 (2017), 10.1007/JHEP07(2017)091, 1704.03878.
* [16] G. S. Chahal and F. Krauss, _Cluster Hadronisation in Sherpa_ (2022), 2203.11385.
* [17] S. Gieseke, P. Kirchgaeßer and S. Plätzer, _Baryon production from cluster hadronisation_ , Eur. Phys. J. C 78(2), 99 (2018), 10.1140/epjc/s10052-018-5585-7, 1710.10906.
* [18] M. Baker, J. S. Ball and F. Zachariasen, _Dual QCD: A Review_ , Phys. Rept. 209, 73 (1991), 10.1016/0370-1573(91)90123-4.
* [19] C. Bierlich, _Soft modifications to jet fragmentation in high energy proton–proton collisions_ , Phys. Lett. B 795, 194 (2019), 10.1016/j.physletb.2019.06.018, 1901.07447.
* [20] C. Bierlich, S. Chakraborty, G. Gustafson and L. Lönnblad, _Setting the string shoving picture in a new frame_ , JHEP 03, 270 (2021), 10.1007/JHEP03(2021)270, 2010.07595.
* [21] B. Andersson, G. Gustafson and C. Peterson, _A Semiclassical Model for Quark Jet Fragmentation_ , Z. Phys. C 1, 105 (1979), 10.1007/BF01450386.
* [22] B. Andersson, G. Gustafson and B. Söderberg, _A General Model for Jet Fragmentation_ , Z. Phys. C 20, 317 (1983), 10.1007/BF01407824.
* [23] B. Andersson and G. Gustafson, _Semiclassical Models for Gluon Jets and Leptoproduction Based on the Massless Relativistic String_ , Z. Phys. C 3, 223 (1980), 10.1007/BF01577421.
* [24] B. Andersson, G. Gustafson and T. Sjöstrand, _A Three-Dimensional Model for Quark and Gluon Jets_ , Z. Phys. C 6, 235 (1980), 10.1007/BF01557774.
* [25] B. Andersson, G. Gustafson, G. Ingelman and T. Sjöstrand, _Parton Fragmentation and String Dynamics_ , Phys. Rept. 97, 31 (1983), 10.1016/0370-1573(83)90080-7.
* [26] B. Andersson, _The Lund model_ , vol. 7, Cambridge University Press, ISBN 978-0-521-01734-3, 978-0-521-42094-5, 978-0-511-88149-7, 10.1017/CBO9780511524363 (2005).
* [27] S. Ferreres-Solé and T. Sjöstrand, _The space–time structure of hadronization in the Lund model_ , Eur. Phys. J. C 78(11), 983 (2018), 10.1140/epjc/s10052-018-6459-8, 1808.04619.
* [28] C. B. Duncan and P. Skands, _Fragmentation of Two Repelling Lund Strings_ , SciPost Phys. 8, 080 (2020), 10.21468/SciPostPhys.8.5.080, 1912.09639.
* [29] C. Bierlich, S. Chakraborty, G. Gustafson and L. Lönnblad, _Hyperfine splitting effects in string hadronization_ (2022), 2201.06316.
* [30] C. Bierlich, G. Gustafson, L. Lönnblad and A. Tarasov, _Effects of Overlapping Strings in pp Collisions_ , JHEP 03, 148 (2015), 10.1007/JHEP03(2015)148, 1412.6259.
* [31] X. Artru, _Classical String Phenomenology. 1. How Strings Work_ , Phys. Rept. 97, 147 (1983), 10.1016/0370-1573(83)90081-9.
* [32] E. Brezin and C. Itzykson, _Pair production in vacuum by an alternating field_ , Phys. Rev. D 2, 1191 (1970), 10.1103/PhysRevD.2.1191.
* [33] T. Sjöstrand and M. Utheim, _A Framework for Hadronic Rescattering in pp Collisions_ , Eur. Phys. J. C 80(10), 907 (2020), 10.1140/epjc/s10052-020-8399-3, 2005.05658.
* [34] C. Bierlich, T. Sjöstrand and M. Utheim, _Hadronic rescattering in pA and AA collisions_ , Eur. Phys. J. A 57(7), 227 (2021), 10.1140/epja/s10050-021-00543-3, 2103.09665.
* [35] T. Sjöstrand, _Jet Fragmentation of Nearby Partons_ , Nucl. Phys. B 248, 469 (1984), 10.1016/0550-3213(84)90607-2.
* [36] K. Johnson, _The M.I.T. Bag Model_ , Acta Phys. Polon. B 6, 865 (1975).
* [37] G. ’t Hooft, _Topology of the Gauge Condition and New Confinement Phases in Nonabelian Gauge Theories_ , Nucl. Phys. B 190, 455 (1981), 10.1016/0550-3213(81)90442-9.
* [38] S. Nishino, K.-I. Kondo, A. Shibata, T. Sasago and S. Kato, _Type of dual superconductivity for the $SU(2)$ Yang–Mills theory_, Eur. Phys. J. C 79(9), 774 (2019), 10.1140/epjc/s10052-019-7280-8, 1903.10488.
* [39] A. Shibata, K.-I. Kondo, S. Nishino, T. Sasago and S. Kato, _Type of dual superconductivity for $SU(2)$ and $SU(3)$ Yang–Mills theories_ (2019), 1903.10487.
* [40] K.-I. Kondo, S. Kato, A. Shibata and T. Shinohara, _Quark confinement: Dual superconductor picture based on a non-Abelian Stokes theorem and reformulations of Yang–Mills theory_ , Phys. Rept. 579, 1 (2015), 10.1016/j.physrep.2015.03.002, 1409.1599.
* [41] P. Cea, L. Cosmai, F. Cuteri and A. Papa, _Flux tubes at finite temperature_ , JHEP 06, 033 (2016), 10.1007/JHEP06(2016)033, 1511.01783.
* [42] R. Sommer, _Scale setting in lattice QCD_ , PoS LATTICE2013, 015 (2014), 10.22323/1.187.0015, 1401.3270.
* [43] P. Cea, L. Cosmai, F. Cuteri and A. Papa, _Flux tubes in the SU(3) vacuum: London penetration depth and coherence length_ , Phys. Rev. D 89(9), 094505 (2014), 10.1103/PhysRevD.89.094505, 1404.1172.
* [44] M. Baker, P. Cea, V. Chelnokov, L. Cosmai, F. Cuteri and A. Papa, _Isolating the confining color field in the SU(3) flux tube_ , Eur. Phys. J. C 79(6), 478 (2019), 10.1140/epjc/s10052-019-6978-y, 1810.07133.
* [45] C. Bierlich, G. Gustafson and L. Lönnblad, _Collectivity without plasma in hadronic collisions_ , Phys. Lett. B 779, 58 (2018), 10.1016/j.physletb.2018.01.069, 1710.09725.
* [46] G. S. Bali, _Casimir scaling of SU(3) static potentials_ , Phys. Rev. D 62, 114503 (2000), 10.1103/PhysRevD.62.114503, hep-lat/0006022.
* [47] T. S. Biro, H. B. Nielsen and J. Knoll, _Color Rope Model for Extreme Relativistic Heavy Ion Collisions_ , Nucl. Phys. B 245, 449 (1984), 10.1016/0550-3213(84)90441-3.
* [48] L. Lönnblad, _Reconnecting colored dipoles_ , Z. Phys. C 70, 107 (1996), 10.1007/s002880050087.
* [49] J. R. Christiansen and P. Z. Skands, _String Formation Beyond Leading Colour_ , JHEP 08, 003 (2015), 10.1007/JHEP08(2015)003, 1505.01681.
* [50] S. Acharya _et al._ , _Charged-particle production as a function of multiplicity and transverse spherocity in pp collisions at $\sqrt{s}=5.02$ and 13 TeV_, Eur. Phys. J. C 79(10), 857 (2019), 10.1140/epjc/s10052-019-7350-y, 1905.07208.
* [51] G. Aad _et al._ , _Measurement of distributions sensitive to the underlying event in inclusive Z-boson production in $pp$ collisions at $\sqrt{s}=7$ TeV with the ATLAS detector_, Eur. Phys. J. C 74(12), 3195 (2014), 10.1140/epjc/s10052-014-3195-6, 1409.3433.
* [52] G. Aad _et al._ , _Measurement of event-shape observables in $Z\rightarrow\ell^{+}\ell^{-}$ events in $pp$ collisions at $\sqrt{s}=$ 7 TeV with the ATLAS detector at the LHC_, Eur. Phys. J. C 76(7), 375 (2016), 10.1140/epjc/s10052-016-4176-8, 1602.08980.
* [53] C. Bierlich _et al._ , _Robust Independent Validation of Experiment and Theory: Rivet version 3_ , SciPost Phys. 8, 026 (2020), 10.21468/SciPostPhys.8.2.026, 1912.05451.
* [54] S. Chatrchyan _et al._ , _Measurement of the Underlying Event Activity at the LHC with $\sqrt{s}=7$ TeV and Comparison with $\sqrt{s}=0.9$ TeV_, JHEP 09, 109 (2011), 10.1007/JHEP09(2011)109, 1107.0330.
* [55] M. Cacciari, G. P. Salam and G. Soyez, _The anti- $k_{t}$ jet clustering algorithm_, JHEP 04, 063 (2008), 10.1088/1126-6708/2008/04/063, 0802.1189.
* [56] R. B. Neufeld and I. Vitev, _The $Z^{0}$-tagged jet event asymmetry in heavy-ion collisions at the CERN Large Hadron Collider_, Phys. Rev. Lett. 108, 242001 (2012), 10.1103/PhysRevLett.108.242001, 1202.5556.
* [57] L. Apolinário, J. G. Milhano, M. Płoskoń and X. Zhang, _Novel subjet observables for jet quenching in heavy-ion collisions_ , Eur. Phys. J. C 78(6), 529 (2018), 10.1140/epjc/s10052-018-5999-2, 1710.07607.
* [58] C. Bierlich, G. Gustafson, L. Lönnblad and H. Shah, _The Angantyr model for Heavy-Ion Collisions in PYTHIA8_ , JHEP 10, 134 (2018), 10.1007/JHEP10(2018)134, 1806.10820.
# Adaptive Machine Translation with Large Language Models
Yasmin Moslem
ADAPT Centre
School of Computing
Dublin City University
Dublin, Ireland
<EMAIL_ADDRESS>
Rejwanul Haque
ADAPT Centre
Department of Computing
South East Technological University
Carlow, Ireland
<EMAIL_ADDRESS>
John D. Kelleher
ADAPT Centre
School of Computer Science
Technological University Dublin
Dublin, Ireland
<EMAIL_ADDRESS>
Andy Way
ADAPT Centre
School of Computing
Dublin City University
Dublin, Ireland
<EMAIL_ADDRESS>
###### Abstract
Consistency is a key requirement of high-quality translation. It is especially
important to adhere to pre-approved terminology and adapt to corrected
translations in domain-specific projects. Machine translation (MT) has
achieved significant progress in the area of domain adaptation. However, real-
time adaptation remains challenging. Large-scale language models (LLMs) have
recently shown interesting capabilities of in-context learning, where they
learn to replicate certain input-output text generation patterns, without
further fine-tuning. When an LLM is fed at inference time with a prompt that
consists of a list of translation pairs, it can then simulate the domain and
style characteristics. This work aims to investigate how we can utilize in-
context learning to improve real-time adaptive MT. Our extensive experiments
show promising results at translation time. For example, LLMs can adapt to a
set of in-domain sentence pairs and/or terminology while translating a new
sentence. We observe that the translation quality with few-shot in-context
learning can surpass that of strong encoder-decoder MT systems, especially for
high-resource languages. Moreover, we investigate whether we can combine MT
from strong encoder-decoder models with fuzzy matches, which can further
improve translation quality, especially for less supported languages. We
conduct our experiments across five diverse language pairs, namely English-to-
Arabic (EN-AR), English-to-Chinese (EN-ZH), English-to-French (EN-FR),
English-to-Kinyarwanda (EN-RW), and English-to-Spanish (EN-ES).
Figure 1: Evaluation results for GPT-3.5 zero-shot, and few-shot translation
with random context or fuzzy matches. Average scores across EN-AR, EN-ES, EN-
FR, and EN-ZH language pairs. While using a random context outperforms zero-
shot translation, using fuzzy matches reveals the best results.
## 1 Introduction
Adaptive MT is a type of machine translation that utilizes feedback from users
to improve the quality of the translations over time. Feedback usually
includes corrections to previous translations, terminology and style guides,
as well as ratings of the quality of the translations. This can be
particularly useful for domain-specific scenarios, where baseline MT systems
may have insufficient relevant data to accurately translate certain terms or
phrases. There are still several challenges to effectively incorporate user
feedback into the translation process, especially at inference time. In this
work, we use a relatively wide definition of adaptive MT to refer to learning
from similar translations (fuzzy matches) found in approved translation
memories (TMs) on the fly [Farajian et al., 2017, Wuebker et al., 2018, Peris
and Casacuberta, 2019, Etchegoyhen et al., 2021], as well as real-time
terminology-constrained MT [Hokamp and Liu, 2017, Post and Vilar, 2018, Dinu
et al., 2019, Michon et al., 2020].
Autoregressive decoder-only LLMs, such as GPT-3 [Brown et al., 2020, Ouyang et
al., 2022], BLOOM [BigScience Workshop et al., 2022], PaLM [Chowdhery et al.,
2022], and LLaMA [Touvron et al., 2023] are trained to predict the next word
given the previous context. During unsupervised pre-training, a language model
develops a broad set of pattern recognition abilities. It then uses these
abilities at inference time to rapidly recognize and adapt to the desired
task. In their experiments, Brown et al. [Brown et al., 2020] use the term
“in-context learning” to describe a scenario where a pre-trained language
model at inference time learns to replicate certain input-output text
generation patterns without further fine-tuning. They show that autoregressive
LLMs such as GPT-3 can perform well on diverse tasks, through zero-shot, one-
shot, and few-shot in-context learning without weight updates. Instead of
asking the model to directly perform a given task, the input can be augmented
with relevant examples, which help the model adapt its output. The key idea of
in-context learning is to learn from analogy. The model is expected to learn
the pattern hidden in the demonstration and accordingly make better
predictions [Dong et al., 2022].
Previous researchers investigated using neural language models for MT through
few-shot in-context learning [Vilar et al., 2022] and even in zero-shot
settings [Wang et al., 2021]. Other researchers proposed using LLMs for
generating synthetic domain-specific data for MT domain adaptation [Moslem et
al., 2022]. Recently, researchers [Agrawal et al., 2022, Zhang et al., 2023]
confirmed the importance of in-context example selection for the quality of MT
with LLMs.
The main contribution of this paper is investigating the capabilities of LLMs
such as GPT-3.5, GPT-4 (including ChatGPT), and BLOOM for real-time adaptive
MT through in-context learning. As illustrated by Figure 1, such LLMs can
achieve better translation quality by adapting their output to adhere to
the terminology and style used in previously approved translation pairs. In
particular, we would like to understand the quality with which such models can
perform the following tasks, without any further training:
* •
Adapting new translations to match the terminology and style of previously
approved TM fuzzy matches, at inference time;
* •
Matching or outperforming the quality of translations generated by encoder-
decoder MT models across a number of languages;
* •
Fixing translations from stronger encoder-decoder MT systems using fuzzy
matches, which is especially useful for low-resource languages; and
* •
Terminology-constrained MT, by first defining terminology in the relevant
sentences or dataset, and then forcing new translations to use these terms.
## 2 Experimental Setup
In all our experiments, we use the GPT-3.5 _text-davinci-003_ model via its
official API (https://openai.com/api/). For parameters, we use _top-p_ 1, with
_temperature_ 0.3 for the three translation tasks, and 0 for the terminology
extraction task. (To avoid over-generation, the option _stop_ can be set to
[‘\n’]. However, if a new line is generated by the model before the
translation, this might result in not generating a translation. Alternatively,
over-generation can be handled manually.) For the maximum length of tokens, we
observe that French and Spanish tokens can be 3–4 times the number of English
source words, while other languages can be longer. Hence, we roughly choose a
length-multiplier value, which we set to 8 for Arabic, 5 for Chinese and
Kinyarwanda, and 4 for French and Spanish. We used batch requests with a batch
size of 20 segments. (For higher values of few-shot translation into Arabic
using _text-davinci-003_, we had to decrease the batch size to avoid exceeding
the tokens-per-minute limit.) Our scripts are publicly available at
https://github.com/ymoslem/Adaptive-MT-LLM
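For concreteness, below is a minimal sketch of this setup. It assumes the legacy openai Python package (pre-1.0) through which _text-davinci-003_ was served; the helper names, the placeholder API key, and the word-count heuristic for _max_tokens_ are our illustrative assumptions.

```python
import openai  # legacy client, e.g. pip install "openai<1.0"

openai.api_key = "YOUR_API_KEY"  # placeholder

# Per-target-language length multipliers from the text above.
LENGTH_MULTIPLIER = {"AR": 8, "ZH": 5, "RW": 5, "FR": 4, "ES": 4}

def translate_batch(prompts, src_segments, target_lang):
    """Send a batch of prompts (up to 20 segments) in one request.
    max_tokens is estimated from the longest source segment times the
    per-language multiplier; a stop sequence could be added with the
    caveats discussed above."""
    max_tokens = (max(len(s.split()) for s in src_segments)
                  * LENGTH_MULTIPLIER[target_lang])
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompts,       # the API accepts a list of prompts
        temperature=0.3,      # 0 for the terminology extraction task
        top_p=1,
        max_tokens=max_tokens,
    )
    # Sort by index to restore the input order of the batch.
    choices = sorted(response["choices"], key=lambda c: c["index"])
    return [c["text"].strip() for c in choices]
```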
As we aim to simulate a document-level scenario where translators are required
to adhere to a project’s or client’s TM, we use the domain-specific dataset,
TICO-19 [Anastasopoulos et al., 2020], which includes 3070 unique segments.
From now on, we will refer to it as the “context dataset”. We focus on a range
of languages with diverse scripts and amounts of resources, namely English as
the source language, and Arabic, Chinese, French, Kinyarwanda, and Spanish as
the target languages.
Lang | Context | spBLEU ↑ | chrF++ ↑ | TER ↓ | COMET ↑
---|---|---|---|---|---
EN-AR | zero-shot | 27.6 | 48.36 | 70.6 | 41.28
random 2-shot | 28.94 | 49.35 | 70.55 | 43.32
fuzzy 1-shot | 36.38 | 55.08 | 63.99 | 55.1
fuzzy 2-shot | 38.41 | 56.57 | 62.31 | 57.36
fuzzy 3-shot | 39.75 | 57.52 | 61.12 | 59.68
fuzzy 4-shot | 40.84 | 58.27 | 60.39 | 62.16
fuzzy 5-shot | 41.33 | 58.64 | 59.95 | 62.65
fuzzy 7-shot | 41.81 | 59.1 | 59.38 | 64.01
EN-ES | zero-shot | 53.91 | 72.61 | 36.86 | 84.0
random 2-shot | 54.78 | 73.12 | 36.09 | 85.25
fuzzy 2-shot | 59.64 | 75.83 | 32.56 | 90.37
fuzzy 5-shot | 61.24 | 76.73 | 31.32 | 91.51
fuzzy 10-shot | 61.77 | 77.05 | 30.9 | 92.0
EN-FR | zero-shot | 44.87 | 65.29 | 50.34 | 58.67
random 2-shot | 45.91 | 65.4 | 49.92 | 57.6
fuzzy 1-shot | 48.39 | 66.58 | 48.18 | 59.49
fuzzy 2-shot | 49.79 | 67.41 | 46.79 | 61.38
fuzzy 3-shot | 50.96 | 68.06 | 45.85 | 61.97
fuzzy 4-shot | 51.89 | 68.5 | 44.94 | 62.7
fuzzy 5-shot | 51.94 | 68.43 | 45.09 | 62.81
fuzzy 10-shot | 53.72 | 69.39 | 43.82 | 63.57
EN-RW | zero-shot | 2.82 | 22.53 | 143.12 | N/A
random 2-shot | 3.8 | 25.19 | 129.88 | N/A
fuzzy 2-shot | 12.23 | 36.66 | 105.54 | N/A
fuzzy 5-shot | 14.96 | 39.84 | 100.11 | N/A
fuzzy 10-shot | 17.87 | 41.44 | 92.84 | N/A
EN-ZH | zero-shot | 32.41 | 40.82 | 99.45 | 59.87
random 2-shot | 38.72 | 44.06 | 87.56 | 68.39
fuzzy 2-shot | 46.18 | 49.12 | 69.0 | 73.9
fuzzy 5-shot | 47.94 | 50.28 | 64.96 | 74.86
fuzzy 10-shot | 49.11 | 51.22 | 63.14 | 75.3
Table 1: Adaptive MT with fuzzy matches for GPT-3.5 few-shot in-context
learning outperforms using random sentence pairs as context examples.
Increasing the number of fuzzy matches can improve the translation quality
further. The table shows consistent results for EN-AR, EN-ES, EN-FR, EN-RW,
and EN-ZH language pairs.
## 3 Adaptive MT with Fuzzy Matches
In translation environments, similar approved translated segments are usually
referred to as “fuzzy matches”, and are stored in parallel datasets, known as
translation memories (TMs).555Segments stored in a TM can be smaller than a
full sentence (e.g. a title) or larger. However, as most segments in a TM are
supposed to be sentence pairs, we use the two words interchangeably throughout
the paper. Researchers have investigated the possibilities of improving MT
quality and consistency with fuzzy matches [Knowles et al., 2018, Bulte and
Tezcan, 2019, Xu et al., 2020]. Incorporating fuzzy matches into the MT
process can help the system generate more accurate translations, and try to
ensure adherence to pre-approved terminology and preferred style requirements.
In this set of experiments, we investigate the possibility of forcing the
translation of a new sentence pair to adapt to fuzzy matches in the context
dataset. To extract fuzzy matches, we use embedding similarity-based
retrieval. Previous researchers have shown that approaches that depend on
embeddings to retrieve fuzzy matches can outperform those that use Edit
Distance [Hosseini et al., 2020, Pham et al., 2020]. To this end, we employ
the paraphrase mining module from the Sentence-Transformers library [Reimers
and Gurevych, 2019]. We use the _all-MiniLM-L6-v2_ model because of its high
accuracy and efficiency (https://www.sbert.net/). For each sentence, we
retrieve up to $top\_k$ other sentences. We experiment with diverse values of
1 to 10 sentence(s) from the context dataset. (For Arabic, we could only
integrate up to 7 matches, not 10, because the tokenizer used by GPT-3.5
generates many more tokens for some Unicode languages, which can easily hit
the max length of 4097 tokens. We observe that the issue has been alleviated
by newer models.) Table 2 elaborates on the statistics of fuzzy matches based
on their similarity to the new source sentence in 2-shot and 5-shot scenarios.
(While creating prompts, we arrange fuzzy matches in descending order, making
higher matches closer to the segment to be translated. We experimented with
reversing the order, and there was no significant difference in terms of
translation quality.)
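A minimal sketch of this retrieval step is shown below, assuming the paraphrase-mining utility of Sentence-Transformers behaves as documented; the variable names are illustrative.

```python
from collections import defaultdict
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def mine_fuzzy_matches(source_sentences, top_k=10):
    """For each sentence index, collect up to top_k (score, other_index)
    pairs mined over the English source side of the context dataset."""
    pairs = util.paraphrase_mining(model, source_sentences, top_k=top_k)
    matches = defaultdict(list)
    for score, i, j in pairs:
        # Each pair is returned once; record it in both directions.
        matches[i].append((score, j))
        matches[j].append((score, i))
    for idx in matches:
        matches[idx] = sorted(matches[idx], reverse=True)[:top_k]
    return matches
```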
The following illustrations show the difference between zero-shot and few-shot
translation prompts. In the zero-shot prompt, only the source sentence and
language names are provided, encouraging the model to generate the
translation. The few-shot prompt incorporates translation examples to
influence the style of the output.
Prompt: EN-AR zero-shot translation
English: <source_segment>
Arabic:

Prompt: EN-AR two-shot translation
English: <source_fuzzy_match_2>
Arabic: <target_fuzzy_match_2>
English: <source_fuzzy_match_1>
Arabic: <target_fuzzy_match_1>
English: <source_segment>
Arabic:
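The prompt assembly itself can be sketched as follows. The function is our illustration of the format shown above; fuzzy matches are supplied in ascending similarity order, so the closest match ends up adjacent to the segment to be translated, as described earlier.

```python
def build_prompt(src_segment, fuzzy_matches,
                 src_lang="English", tgt_lang="Arabic"):
    """fuzzy_matches: list of (source, target) pairs sorted from least
    to most similar, so the best match sits closest to the new segment."""
    lines = []
    for src, tgt in fuzzy_matches:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    lines.append(f"{src_lang}: {src_segment}")
    lines.append(f"{tgt_lang}:")  # the model completes the translation
    return "\n".join(lines)
```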
Results illustrated by Figure 1 show that few-shot translation with GPT-3.5
using fuzzy matches as context outperforms few-shot translation with random
examples, although using random sentence pairs outperforms zero-shot
translation. As demonstrated by Table 1, across five language pairs, adding
more fuzzy matches improves translation quality further. At some point, there
might be diminishing returns of adding more similar sentences as their
similarity score decreases. In other words, increasing the number of fuzzy
matches from 2 sentences to 5 or 10 sentences incrementally improves
translation quality, but with smaller quality gains.
Figure 2: Evaluation results for GPT-3.5 few-shot translation with 5 or 10 fuzzy matches compared to encoder-decoder MT models (DeepL, Google, OPUS, and NLLB). Specifically, for EN-ES, EN-FR, and EN-ZH language pairs, few-shot translation with GPT-3.5 outperforms conventional systems.
Similarity Score | fuzzy 2-shot (segments) | fuzzy 2-shot (%) | fuzzy 5-shot (segments) | fuzzy 5-shot (%)
---|---|---|---|---
>90% | 167 | 2.7% | 168 | 1.1%
89-80% | 751 | 12.2% | 1,103 | 7.2%
79-70% | 1,593 | 25.9% | 3,143 | 20.5%
69-60% | 1,825 | 29.7% | 4,661 | 30.4%
<60% | 1,804 | 29.4% | 6,275 | 40.9%
Total | 6,140 = 3,070*2 | | 15,350 = 3,070*5 |
Table 2: Numbers and percentages of segments based on their similarity to the
new source segment, in the 2-shot and 5-shot experiments using fuzzy matches
for in-context learning. The English source is used to calculate similarity
across the 5 language pairs.
## 4 GPT-3 vs Encoder-Decoder MT Models
In this section, we aim to compare evaluation results we obtained from various
MT encoder-decoder Transformer-based systems [Vaswani et al., 2017] with those
from GPT-3.5. To this end, we translate our context dataset with a range of
open-source and commercial MT models, including the DeepL Translate API
(DeepL supports French, Spanish and Chinese, but not Arabic and Kinyarwanda),
the Google Cloud Translation API, OPUS [Tiedemann, 2020] (we use OPUS models
from the Tatoeba-Challenge, specifically the models augmented with
back-translation and trained with Transformer-Big), and NLLB-200 [NLLB Team
et al., 2022]. We converted OPUS and NLLB models to the CTranslate2 [Klein et
al., 2020] format with int8 quantization for efficiency. Inference parameters
include _beam_size 4_ and _max_batch_size 2024_ , on an _A100-SXM4-40GB_ GPU
(Google Colab Pro). For tokenization, we used SentencePiece [Kudo and
Richardson, 2018] with the source and target sub-word models provided for each
OPUS model, and the multilingual model provided by NLLB
($flores200\_sacrebleu\_tokenizer\_spm.model$), which is used both for NLLB
tokenization and for spBLEU [Goyal et al., 2022] in sacreBLEU.
We observe that for high-resource languages, adaptive MT with fuzzy matches
using GPT-3.5 few-shot in-context learning (cf. Section 3) can outperform
strong encoder-decoder MT systems. For the English-to-French and English-to-
Spanish language pairs, few-shot translation with GPT-3.5 incorporating only 5
fuzzy matches outperforms strong encoder-decoder MT models, as demonstrated by
Figure 2. For English-to-Chinese translation, only when we used 10 fuzzy
matches could we achieve better results. However, for English-to-Arabic and
English-to-Kinyarwanda translations, results were not on par with the other
three language pairs. The results are detailed in Table 3.
Among the popular adaptive encoder-decoder MT systems is
ModernMT (https://www.modernmt.com/). Originally, the system adopted the
instance-based adaptation approach proposed by Farajian et al. [Farajian et
al., 2017]. To control our experiments with ModernMT to match those with
GPT-3.5 few-shot translation, we created a new TM for each segment to include
only the top-10 fuzzy matches for this segment. Table 3 illustrates the
evaluation results of ModernMT translation with and without a TM. In general,
using a TM with ModernMT improves translation quality. Moreover, we observe
that zero-shot translation performance (without a TM) of ModernMT outperforms
GPT-3.5 for the 4 supported language pairs. However, except for English-to-
Arabic, few-shot translation with GPT-3.5 using either 5 or 10 fuzzy matches
outperforms the translation quality of ModernMT using a TM with 10 fuzzy
matches per segment, for English-to-Chinese, English-to-French, and English-
to-Spanish language pairs.
## 5 Incorporating Encoder-Decoder MT
As we demonstrated in the previous section, encoder-decoder MT models have
achieved high translation quality for several language pairs. Nevertheless,
adaptive MT with LLM few-shot in-context learning can surpass such quality,
especially for high-resource languages. In this section, we investigate
whether we can utilize encoder-decoder MT models to further improve adaptive
translation with GPT-3.5. In the next subsections, we study two scenarios:
* •
appending fuzzy matches with MT from an encoder-decoder model to enhance in-
context learning.
* •
translating the source side of fuzzy matches, and using these MT translations
for few-shot in-context learning along with the original translations.
### 5.1 Fuzzy matches + new segment MT
By incorporating a translation from an encoder-decoder MT model along with
fuzzy matches, we could achieve substantial improvements over the baseline MT
performance. As illustrated by Table 5, although OPUS English-to-Arabic
translation quality outperforms GPT-3.5 few-shot translation with 5 fuzzy
matches, appending these fuzzy matches with OPUS translation outperforms both
OPUS translation only and GPT-3.5 translation with fuzzy matches only.
Similarly, adding Google English-to-Chinese translation to 5 fuzzy matches
outperforms both baselines. Even for the very low-resource English-to-
Kinyarwanda language pair, we notice a relatively similar behaviour, using the
MT outputs of the OPUS or NLLB models.
However, we observe that if the translation with only fuzzy matches is
significantly better than the encoder-decoder MT baseline, we may not achieve
further gains. For example, the GPT-3.5 translations with 5 fuzzy matches are
already much better than the OPUS translation for English-to-French or Google
translation for English-to-Spanish. That is why incorporating the MT output
from OPUS or Google did not enhance the GPT-3.5 translation quality for these
language pairs.
Lang | System | spBLEU ↑ | chrF++ ↑ | TER ↓ | COMET ↑
---|---|---|---|---|---
EN-AR | OPUS (bt-big) | 43.11 | 60.79 | 57.24 | 63.64
NLLB 600M | 35.66 | 54.6 | 62.07 | 54.53
NLLB 1.2B | 41.1 | 58.51 | 57.15 | 63.85
NLLB 3.3B | 43.42 | 60.11 | 55.58 | 66.8
Google API | 43.56 | 61.58 | 57.79 | 65.5
ModernMT (no TM) | 47.17 | 62.82 | 53.53 | 66.64
ModernMT (TM) | 50.33 | 65.19 | 50.19 | 71.0
GPT-3 zero-shot | 27.6 | 48.36 | 70.6 | 41.28
GPT-3 fuzzy 5-shot | 41.33 | 58.64 | 59.95 | 62.65
GPT-3 fuzzy 7-shot | 41.81 | 59.1 | 59.38 | 64.01
EN-ES | OPUS (bt-big) | 54.99 | 72.66 | 36.26 | 83.69
NLLB 600M | 53.31 | 72.19 | 37.13 | 83.09
NLLB 1.2B | 56.1 | 73.85 | 34.96 | 85.91
NLLB 3.3B | 57.47 | 74.6 | 33.99 | 86.86
DeepL API | 55.39 | 72.87 | 36.21 | 85.68
Google API | 58.98 | 75.17 | 32.46 | 86.62
ModernMT (no TM) | 57.09 | 74.2 | 34.27 | 85.53
ModernMT (TM) | 59.22 | 75.4 | 32.79 | 86.99
GPT-3 zero-shot | 53.91 | 72.61 | 36.86 | 84.0
GPT-3 fuzzy 5-shot | 61.24 | 76.73 | 31.32 | 91.51
GPT-3 fuzzy 10-shot | 61.77 | 77.05 | 30.9 | 92.0
EN-FR | OPUS (bt-big) | 46.05 | 65.08 | 49.8 | 56.29
NLLB 600M | 43.25 | 64.17 | 51.28 | 56.16
NLLB 1.2B | 46.3 | 66.25 | 48.68 | 59.76
NLLB 3.3B | 47.27 | 66.89 | 48.19 | 60.91
DeepL API | 47.38 | 66.45 | 48.47 | 61.01
Google API | 46.81 | 66.34 | 47.01 | 59.01
ModernMT (no TM) | 47.17 | 66.28 | 47.91 | 58.46
ModernMT (TM) | 49.24 | 67.41 | 46.17 | 59.84
GPT-3 zero-shot | 44.87 | 65.29 | 50.34 | 58.67
GPT-3 fuzzy 5-shot | 51.94 | 68.43 | 45.09 | 62.81
GPT-3 fuzzy 10-shot | 53.72 | 69.39 | 43.82 | 63.57
EN-RW | OPUS (Tatoeba 2021) | 1.38 | 15.32 | 153.58 | N/A
OPUS (2020) | 5.58 | 27.05 | 101.25 | N/A
NLLB 600M | 19.46 | 47.61 | 80.01 | N/A
NLLB 1.2B | 23.6 | 50.73 | 74.53 | N/A
NLLB 3.3B | 25.17 | 52.59 | 73.06 | N/A
Google API | 20.63 | 48.37 | 73.54 | N/A
GPT-3 zero-shot | 2.82 | 22.53 | 143.12 | N/A
GPT-3 fuzzy 5-shot | 14.96 | 39.84 | 100.11 | N/A
GPT-3 fuzzy 10-shot | 17.87 | 41.44 | 92.84 | N/A
EN-ZH | OPUS (bt-big) | 37.51 | 40.72 | 121.49 | 50.4
NLLB 600M | 24.9 | 33.87 | 109.37 | 39.28
NLLB 1.2B | 29.02 | 37.45 | 110.22 | 50.05
NLLB 3.3B | 31.35 | 39.08 | 109.52 | 53.89
DeepL API | 37.79 | 47.67 | 100.83 | 69.92
Google API | 48.58 | 52.02 | 70.87 | 73.62
ModernMT (no TM) | 37.61 | 48.46 | 102.18 | 67.45
ModernMT (TM) | 39.85 | 50.95 | 101.53 | 69.64
GPT-3 zero-shot | 32.41 | 40.82 | 99.45 | 59.87
GPT-3 fuzzy 5-shot | 47.94 | 50.28 | 64.96 | 74.86
GPT-3 fuzzy 10-shot | 49.11 | 51.22 | 63.14 | 75.3
Table 3: Comparing GPT-3.5 few-shot translation using fuzzy matches with
encoder-decoder MT systems, DeepL Translate API, Google Cloud Translation API,
OPUS (Tatoeba-Challenge, with back-translation and Transformer-Big), and
NLLB-200 (600M, 1.2B & 3.3B parameters).
### 5.2 Fuzzy matches + all segments MT
In Section 5.1, we added MT of the new segment from an encoder-decoder model
to fuzzy matches, which enhanced GPT-3.5 in-context learning. In this
experiment, we include MT for all fuzzy matches and also for the new source
segment to be translated. For the English-to-Kinyarwanda and English-to-
Spanish language pairs, it is not clear whether including MT for all in-
context examples can significantly outperform including MT for only the new
source segment to be translated. Again, this depends on the quality of the
original MT and requires further investigation.
## 6 Bilingual Terminology Extraction
Terminology extraction is the task of automatically defining domain-specific
terms in a dataset. Extracted terms are naturally used for building glossaries
to help translators. Furthermore, it is possible to improve MT performance
through finding sentences that include these terms and fine-tuning the system
with them [Hu et al., 2019, Haque et al., 2020].
In this set of experiments, we ask GPT-3.5 to extract 5 bilingual terms from
each sentence pair in the context dataset. For parameters, we use temperature
0 and $top\\_p$ 1.
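A minimal sketch of such a request follows; the instruction wording is our assumption, while the decoding parameters (temperature 0, _top_p_ 1) follow the text.

```python
import openai  # same legacy client as in the section 2 sketch

def extract_terms(src, tgt, src_lang="English", tgt_lang="Arabic", n=5):
    """Ask the model for n bilingual term pairs from one sentence pair."""
    prompt = (f"Extract {n} bilingual terminology pairs from the "
              f"following translation pair.\n"
              f"{src_lang}: {src}\n{tgt_lang}: {tgt}\nTerms:")
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,
        top_p=1,
        max_tokens=150,
    )
    return response["choices"][0]["text"].strip()
```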
Lang | Sentences | Terms | Correct | %
---|---|---|---|---
EN-AR | 500 | 2,500 | 2,427 | 97.08
EN-ES | 500 | 2,500 | 2,397 | 95.88
EN-FR | 500 | 2,500 | 2,382 | 95.28
Table 4: Human evaluation results for the terminology extraction task for
English-to-Arabic (EN-AR), English-to-Spanish (EN-ES), and English-to-French
(EN-FR) language pairs. The majority of the terms that GPT-3 extracted ($>$
95%) were accurate.
Human evaluation was performed for Arabic, French, and Spanish. (We observe
that the original English-to-French TICO-19 dataset includes several
misaligned translation pairs. This can negatively affect the quality of tasks
using such sentences, which is why it is important to filter parallel datasets
to remove possible misalignments. The evaluation sample has been manually
refined to include only well-aligned translation pairs. Automatic semantic
filtering approaches can be applied to large datasets.) We provided the
evaluators with a random sample of 500 sentences and their extracted terms.
They were asked to use a 0-1 scale to determine whether each source and target
term were equivalent, and whether the extracted terms were actually in the
sentence pair (relevant inflexions are acceptable). In several cases where the
evaluators marked the extracted term pair with 0, the model had made up either
the source, target, or both; although it might be correct, it was not in the
provided sentence pair. In other cases, the extracted term was partial,
sometimes due to reaching the maximum length of tokens. Nevertheless, as Table
4 illustrates, the majority of the terms in the provided sample were
accurately extracted by the model.
Lang | System | spBLEU ↑ | chrF++ ↑ | TER ↓ | COMET ↑
---|---|---|---|---|---
EN-AR | MT (OPUS) | 43.11 | 60.79 | 57.24 | 63.64
GPT-3 fuzzy 5-shot | 41.33 | 58.64 | 59.95 | 62.65
GPT-3 fuzzy 5-shot + 1-MT | 45.9 | 62.9 | 55.14 | 67.74
EN-ES | MT (Google) | 58.98 | 75.17 | 32.46 | 86.62
GPT-3 fuzzy 2-shot | 59.64 | 75.83 | 32.56 | 90.37
GPT-3 fuzzy 2-shot + 1-MT | 59.82 | 75.73 | 32.16 | 89.0
GPT-3 fuzzy 2-shot + all-MT | 60.2 | 76.06 | 32.32 | 92.0
GPT-3 fuzzy 5-shot | 61.24 | 76.73 | 31.32 | 91.51
GPT-3 fuzzy 5-shot + 1-MT | 60.49 | 76.16 | 31.49 | 89.55
GPT-3 fuzzy 5-shot + all-MT | 61.1 | 76.52 | 31.8 | 92.07
EN-FR | MT (OPUS) | 46.05 | 65.08 | 49.8 | 56.29
GPT-3 fuzzy 5-shot | 51.94 | 68.43 | 45.09 | 62.81
GPT-3 fuzzy 5-shot + 1-MT | 47.95 | 66.72 | 48.34 | 59.69
EN-RW | MT #1 (Google) | 20.63 | 48.37 | 73.54 | N/A
GPT-3 fuzzy 5-shot | 14.96 | 39.84 | 100.11 | N/A
GPT-3 fuzzy 5-shot + 1-MT #1 | 22.51 | 49.69 | 72.97 | N/A
GPT-3 fuzzy 5-shot + all-MT #1 | 25.01 | 49.43 | 74.75 | N/A
MT #2 (NLLB 3.3B) | 25.17 | 52.59 | 73.06 | N/A
GPT-3 fuzzy 5-shot + 1-MT #2 | 25.59 | 53.12 | 72.73 | N/A
GPT-3 fuzzy 5-shot + all-MT #2 | 27.52 | 53.23 | 73.79 | N/A
EN-ZH | MT (Google) | 48.58 | 52.02 | 70.87 | 73.62
GPT-3 fuzzy 5-shot | 47.94 | 50.28 | 64.96 | 74.86
GPT-3 fuzzy 5-shot + 1-MT | 49.45 | 52.4 | 67.81 | 74.61
Table 5: Combining fuzzy matches with high-quality MT from encoder-decoder
systems can improve translation quality with GPT-3.5 few-shot in-context
learning, especially for low-resource and medium-resource languages. 1-MT
refers to appending fuzzy matches with the MT of the segment to be translated,
while all-MT refers to additionally adding MT for each segment of the fuzzy
matches along with its approved translation. For EN-AR and EN-RW improvements
are clearer than for EN-ES, EN-FR and EN-ZH, potentially due to the limited
support of EN-AR and EN-RW by GPT-3.5, which made them benefit more from
incorporating MT from stronger encoder-decoder models.
## 7 Terminology-Constrained MT
As observed in Section 3, adding more fuzzy matches enhances in-context
learning and hence improves translation quality. However, early in a real-
world translation project, we might not have that many fuzzy matches. By
incorporating domain-specific terminology, the system can produce translations
that are more accurate and consistent with the terminology used in that field.
In this section, we investigate integrating terms in the process when there
are $N$ fuzzy matches. For example, if we have only two fuzzy matches, we
either extract terms from these similar sentences or from a glossary, and use
those that match up to 5-gram phrases in the source sentence to be translated.
In this work, we use the terminology extraction process elaborated in Section
6. Obviously, if a pre-approved glossary is available, it can be used instead.
We investigate three scenarios:
* •
Few-shot translation with 2 fuzzy matches and their terms. As we do not have
terms for the segment to be translated, we use terms from the 2 fuzzy matches
if they are found in a set of n-grams (1-5) of the source segment to be
translated. Integrating terms into two-shot prediction, i.e. using both terms
and two fuzzy matches for in-context learning, outperforms using fuzzy matches
only.
* •
We automatically compile a glossary including all terms from the dataset, with
2+ frequency, and up to 5-grams. If there are multiple targets for the same
source, the term pair with the highest frequency is selected. Stop words and
terms with empty source or target sides are excluded. The list is sorted by
n-gram length, so terms with longer n-grams are prioritized. As illustrated by
Table 6, integrating terms from a glossary outperforms adding terms from only
two fuzzy matches, most likely due to the diversity that this option offers.
In prompts (cf. Appendix A), we use terms found in a set of n-grams (1-5) of
the source segment to be translated. We experiment with adding maximum 5 terms
and maximum 10 terms, which does not show a huge difference in performance; in
some cases only a smaller number of terms is available in the glossary. (A
sketch of this glossary compilation and term matching follows after this
list.)
* •
Zero-shot translation, i.e. without any fuzzy matches. This is similar to the
previous scenario, except that we only use terms from the glossary. In zero-
shot prediction, adding terms from the glossary improves translation quality.
As shown in Table 6, improvements are significant across all 5 language pairs.
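The glossary compilation described in the second scenario can be sketched as
follows; this is a minimal illustration under the stated rules (frequency of
2+, up to 5-grams, stop-word filtering, most frequent target per source,
longer n-grams first), and the function signature is our own assumption.

```python
from collections import Counter
from typing import Iterable, List, Set, Tuple

def compile_glossary(term_pairs: Iterable[Tuple[str, str]],
                     stop_words: Set[str],
                     min_freq: int = 2, max_n: int = 5) -> List[Tuple[str, str]]:
    # Count the (source, target) term pairs extracted from the dataset.
    counts = Counter((s.strip(), t.strip()) for s, t in term_pairs)
    best = {}  # source term -> (target term, frequency)
    for (s, t), freq in counts.items():
        if freq < min_freq or not s or not t:
            continue  # drop rare pairs and pairs with an empty side
        if s.lower() in stop_words or len(s.split()) > max_n:
            continue  # drop stop words and terms longer than 5-grams
        # For multiple targets of the same source, keep the most frequent.
        if s not in best or freq > best[s][1]:
            best[s] = (t, freq)
    glossary = [(s, t) for s, (t, _) in best.items()]
    # Longer n-grams first, so they are prioritized in prompts.
    glossary.sort(key=lambda pair: len(pair[0].split()), reverse=True)
    return glossary
```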
Lang | GPT-3.5 Context | spBLEU ↑ | chrF++ ↑ | TER ↓ | COMET ↑
---|---|---|---|---|---
EN-AR | zero-shot | 27.6 | 48.36 | 70.6 | 41.28
zero-shot + max 5 terms (glossary) | 35.38 | 54.53 | 65.36 | 54.91
fuzzy 2-shot | 38.41 | 56.57 | 62.31 | 57.36
fuzzy 2-shot + terms (fuzzy) | 39.38 | 57.22 | 62.01 | 59.36
fuzzy 2-shot + max 5 terms (glossary) | 41.27 | 58.84 | 60.09 | 62.17
fuzzy 2-shot + max 10 terms (glossary) | 41.95 | 59.34 | 59.45 | 62.48
EN-ES | zero-shot | 53.91 | 72.61 | 36.86 | 84.0
zero-shot + max 5 terms (glossary) | 55.99 | 74.18 | 35.3 | 87.21
fuzzy 2-shot | 59.64 | 75.83 | 32.56 | 90.37
fuzzy 2-shot + terms (fuzzy) | 59.66 | 75.91 | 32.53 | 90.04
fuzzy 2-shot + max 5 terms (glossary) | 60.5 | 76.55 | 31.93 | 91.05
fuzzy 2-shot + max 10 terms (glossary) | 60.54 | 76.58 | 32.02 | 91.05
EN-FR | zero-shot | 44.87 | 65.29 | 50.34 | 58.67
zero-shot + max 5 terms (glossary) | 45.94 | 66.01 | 49.22 | 59.78
fuzzy 2-shot | 49.79 | 67.41 | 46.79 | 61.38
fuzzy 2-shot + terms (fuzzy) | 50.58 | 67.93 | 45.81 | 62.04
fuzzy 2-shot + max 3 terms (glossary) | 50.46 | 67.69 | 46.22 | 68.94
fuzzy 2-shot + max 5 terms (glossary) | 50.55 | 67.78 | 46.19 | 60.24
fuzzy 2-shot + max 10 terms (glossary) | 49.64 | 66.86 | 47.34 | 58.57
EN-RW | zero-shot | 2.82 | 22.53 | 143.12 | N/A
zero-shot + max 5 terms (glossary) | 7.26 | 30.83 | 115.44 | N/A
fuzzy 2-shot | 12.23 | 36.66 | 105.54 | N/A
fuzzy 2-shot + terms (fuzzy) | 12.43 | 36.48 | 102.22 | N/A
fuzzy 2-shot + max 5 terms (glossary) | 15.34 | 39.96 | 96.09 | N/A
fuzzy 2-shot + max 10 terms (glossary) | 15.49 | 40.53 | 96.0 | N/A
EN-ZH | zero-shot | 32.41 | 40.82 | 99.45 | 59.87
zero-shot + max 5 terms (glossary) | 36.31 | 44.72 | 96.45 | 68.6
zero-shot + max 10 terms (glossary) | 36.64 | 45.06 | 96.24 | 68.94
fuzzy 2-shot | 46.18 | 49.12 | 69.0 | 73.9
fuzzy 2-shot + terms (fuzzy) | 46.16 | 49.11 | 68.79 | 73.41
fuzzy 2-shot + max 5 terms (glossary) | 46.6 | 49.51 | 69.46 | 73.88
fuzzy 2-shot + max 10 terms (glossary) | 46.31 | 49.25 | 69.39 | 73.57
Table 6: Terminology-constrained MT with GPT-3.5 outperforms both zero-shot
and 2-shot translation with fuzzy matches, although gains are much higher for
zero-shot translation. For zero-shot translation, we experimented with adding
terms from a glossary. For 2-shot translation with fuzzy matches, we compared
adding terms from these 2 fuzzy matches to adding terms from a glossary. The
latter yielded better results.
We conducted human evaluation for English-to-Arabic, English-to-French, and
English-to-Spanish terminology-constrained MT, to assess to what extent the
model adheres to the required terms, and how this affects the overall
translation quality. The evaluators are professional linguists in the
respective languages. We provided the evaluators with 4 sets of 100 randomly
selected sentence pairs (zero-shot, zero-shot with glossary terms, fuzzy
two-shot, and fuzzy two-shot with glossary terms). They were asked to evaluate
the sentence-level translation quality on a 1-4 scale [Coughlin, 2003] and the
usage of each provided term in the translation on a 0-1 scale, as detailed in
Table 7.
Lang | GPT-3 Context | Human Eval. ↑ | Terms ↑
---|---|---|---
EN-AR | Zero-shot | 2.80 | 0.67
Zero-shot + glossary terms | 3.19 | 0.94
Fuzzy two-shot | 2.89 | 0.80
Fuzzy two-shot + glossary terms | 3.03 | 0.94
EN-ES | Zero-shot | 3.76 | 0.87
Zero-shot + glossary terms | 3.93 | 0.96
Fuzzy two-shot | 3.77 | 0.89
Fuzzy two-shot + glossary terms | 3.84 | 0.97
EN-FR | Zero-shot | 3.55 | 0.89
Zero-shot + glossary terms | 3.64 | 0.97
Fuzzy two-shot | 3.50 | 0.91
Fuzzy two-shot + glossary terms | 3.55 | 0.92
Table 7: Human evaluation of terminology-constrained MT, for EN-AR, EN-ES, and
EN-FR. The results cover zero-shot and two-shot translation without and with
(maximum 5) glossary terms. The column “Human Eval.” refers to the average
evaluation score on a 1-4 scale. The column “Terms” refers to the average rate
at which the model successfully transferred the provided terms into the
translation, on a 0-1 scale.
According to the evaluators, for Arabic, French and Spanish, terminology-
constrained MT successfully transferred the provided glossary terms into the
target more often than zero-shot and few-shot translation without terminology
incorporation. In several cases, forcing glossary terms to be used could help
improve the overall translation quality; however, sometimes it was detrimental
to grammatical accuracy. Although we provided the model with longer terms
before shorter ones, contradictory terms can hurt translation quality. Hence,
it might be better to exclude shorter terms if they overlap with longer ones.
(For example, “New York Times” can be transferred without translation into the
target, while “New York” might be translated. If the model is provided with
both terms while it is actually supposed to use the former, this can cause
confusion.) In production workflows, linguists can be provided with
translation alternatives with and without fuzzy matches and/or terminology to
be able to use the best translation. Alternatively, automatic quality
estimation can be conducted to select the best translation.
An interesting observation from the human evaluation is that in few-shot
translation with fuzzy matches (even _without_ terms), the number of
successfully used terms is higher than in zero-shot translation. This can help
enhance consistency with approved translations. Moreover, incorporating
glossary terms in a zero-shot prompt can result in quality gains comparable to
those of few-shot translation with fuzzy matches.
## 8 ChatGPT
At the time of writing this paper, OpenAI has released new conversational
models, publicly referred to as ChatGPT. This range of models includes GPT-3.5
Turbo and GPT-4. In this section, we briefly investigate the translation
capabilities of these models compared to GPT-3.5 Davinci. Generally, we
observe that both of the new models solve some tokenization issues, especially
for non-Latin languages such as Arabic. While _gpt-3.5-turbo_ is more
efficient than _text-davinci-003_, it shows comparable quality for both
zero-shot and few-shot translation (with fuzzy matches). The newest model,
_gpt-4_, provides better zero-shot translation quality, while the quality of
its few-shot translation is relatively similar to that of the other two
models. Table 8 demonstrates the results.
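To make the comparison concrete, the following is a minimal sketch of how both
model families can be queried, assuming the openai Python package as it
existed at the time of writing (pre-1.0 interface); the prompt string and
parameter values are illustrative, following the settings recommended in
Section 2.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "English: <source_segment>\nArabic:"

# text-davinci-003 uses the plain completions endpoint.
davinci = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0.3,
    top_p=1,
    max_tokens=200,
)
print(davinci["choices"][0]["text"].strip())

# gpt-3.5-turbo and gpt-4 use the chat completions endpoint,
# with the same prompt wrapped in a single user message.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or "gpt-4"
    temperature=0.3,
    top_p=1,
    messages=[{"role": "user", "content": prompt}],
)
print(chat["choices"][0]["message"]["content"].strip())
```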
Lang | Model | Context | spBLEU ↑ | chrF++ ↑ | TER ↓ | COMET ↑
---|---|---|---|---|---|---
EN-AR | GPT-3.5 Davinci | 0-shot | 27.6 | 48.36 | 70.6 | 41.28
GPT-3.5 Turbo | 38.06 | 56.35 | 61.34 | 62.68
GPT-4 | 40.29 | 57.86 | 59.55 | 64.25
GPT-3.5 Davinci | 2-shot | 38.41 | 56.57 | 62.31 | 57.36
GPT-3.5 Turbo | 46.04 | 62.18 | 55.03 | 73.35
GPT-4 | 47.52 | 63.28 | 53.04 | 73.7
EN-ES | GPT-3.5 Davinci | 0-shot | 53.91 | 72.61 | 36.86 | 84.0
GPT-3.5 Turbo | 52.91 | 70.87 | 38.86 | 82.28
GPT-4 | 56.93 | 74.41 | 34.35 | 87.89
GPT-3.5 Davinci | 2-shot | 59.64 | 75.83 | 32.56 | 90.37
GPT-3.5 Turbo | 60.35 | 76.51 | 32.05 | 91.57
GPT-4 | 60.16 | 76.51 | 31.77 | 91.86
EN-FR | GPT-3.5 Davinci | 0-shot | 44.87 | 65.29 | 50.34 | 58.67
GPT-3.5 Turbo | 46.85 | 66.75 | 48.31 | 61.34
GPT-4 | 47.39 | 67.14 | 48.03 | 61.93
GPT-3.5 Davinci | 2-shot | 49.79 | 67.41 | 46.79 | 61.38
GPT-3.5 Turbo | 49.88 | 68.33 | 46.27 | 63.62
GPT-4 | 49.75 | 68.38 | 45.97 | 64.04
EN-RW | GPT-3.5 Davinci | 0-shot | 2.82 | 22.53 | 143.12 | N/A
GPT-3.5 Turbo | 5.31 | 29.77 | 114.34 | N/A
GPT-4 | 8.95 | 35.28 | 93.15 | N/A
GPT-3.5 Davinci | 2-shot | 12.23 | 36.66 | 105.54 | N/A
GPT-3.5 Turbo | 12.49 | 39.37 | 105.51 | N/A
GPT-4 | 16.78 | 44.21 | 83.31 | N/A
EN-ZH | GPT-3.5 Davinci | 0-shot | 32.41 | 40.82 | 99.45 | 59.87
GPT-3.5 Turbo | 36.83 | 45.77 | 99.83 | 69.13
GPT-4 | 37.65 | 47.02 | 99.37 | 70.75
GPT-3.5 Davinci | 2-shot | 46.18 | 49.12 | 69.0 | 73.9
GPT-3.5 Turbo | 45.95 | 49.79 | 74.53 | 74.63
GPT-4 | 45.37 | 50.26 | 79.29 | 74.9
Table 8: Comparing GPT-3.5 _text-davinci-003_ to ChatGPT models
_gpt-3.5-turbo_ and _gpt-4_ for zero-shot and few-shot translation with 2
fuzzy matches
## 9 BLOOM and BLOOMZ
In this section, we compare GPT-3.5 to open-source multilingual models, namely
BLOOM [BigScience Workshop et al., 2022] and BLOOMZ [Muennighoff et al.,
2022]. While BLOOM is a general-purpose LLM, BLOOMZ belongs to a family of
models capable of following human instructions in a zero-shot manner.
We use BLOOM and BLOOMZ via Hugging Face’s Inference API
(https://huggingface.co/inference-api). As mentioned in Section 2, the
recommended (sampling) parameters for translation with GPT-3.5 are top-p 1 and
temperature up to 0.3. For BLOOM, the same parameters are not good for
translation: using lower values of top-p and temperature, such as 0.9 and 0.1,
respectively, can generate good outputs, but we found that greedy search
achieves better results for BLOOM; these are reported in Table 9. We use a
batch size of 1, and set the max_new_tokens parameter to double the number of
words of the source sentence if this is less than 250, the maximum number of
new tokens allowed by BLOOM’s API; otherwise, we set it to 250 tokens. For
comparison purposes, we use the same values for BLOOMZ. (BLOOMZ is trained to
generate the required output only; with BLOOM, we had to truncate
over-generated text, excluding anything generated on a new line.)
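A minimal request sketch follows, using Hugging Face’s Inference API for text
generation with greedy decoding; the helper function, the token placeholder,
and the word-count heuristic are our own illustration.

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
HEADERS = {"Authorization": "Bearer YOUR_HF_API_TOKEN"}  # placeholder

def translate(prompt: str, src_word_count: int) -> str:
    # Cap max_new_tokens at double the source word count,
    # up to the 250-token limit of BLOOM's API.
    payload = {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": min(2 * src_word_count, 250),
            "do_sample": False,  # greedy search
            "return_full_text": False,
        },
    }
    response = requests.post(API_URL, headers=HEADERS, json=payload).json()
    text = response[0]["generated_text"]
    # BLOOM tends to over-generate; keep only the first line.
    return text.strip().split("\n")[0]
```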
When providing each system with two fuzzy matches, GPT-3.5 generally
outperforms both BLOOM and BLOOMZ for most language pairs, except for
English-to-Arabic translation. The English-to-French translation quality of
BLOOM and GPT-3.5 is comparable.
Lang | System | spBLEU ↑ | chrF++ ↑ | TER ↓ | COMET ↑
---|---|---|---|---|---
EN-AR | BLOOM fuzzy 2-shot | 43.19 | 59.48 | 57.58 | 67.36
BLOOMZ fuzzy 2-shot | 36.29 | 53.33 | 66.86 | 58.4
GPT-3 fuzzy 2-shot | 38.41 | 56.57 | 62.31 | 57.36
EN-ES | BLOOM fuzzy 2-shot | 57.67 | 74.25 | 34.86 | 86.48
BLOOMZ fuzzy 2-shot | 53.07 | 70.44 | 40.45 | 81.38
GPT-3 fuzzy 2-shot | 59.64 | 75.83 | 32.56 | 90.37
EN-FR | BLOOM fuzzy 2-shot | 50.52 | 66.81 | 46.45 | 55.74
BLOOMZ fuzzy 2-shot | 45.1 | 62.73 | 51.69 | 47.49
GPT-3 fuzzy 2-shot | 49.79 | 67.41 | 46.79 | 61.38
EN-RW | BLOOM fuzzy 2-shot | 10.95 | 31.87 | 91.07 | N/A
BLOOMZ fuzzy 2-shot | 12.26 | 35.44 | 88.36 | N/A
GPT-3 fuzzy 2-shot | 12.23 | 36.66 | 105.54 | N/A
EN-ZH | BLOOM fuzzy 2-shot | 40.62 | 40.62 | 75.24 | 66.23
BLOOMZ fuzzy 2-shot | 34.82 | 38.23 | 80.03 | 59.92
GPT-3 fuzzy 2-shot | 46.18 | 49.12 | 69.0 | 73.9
Table 9: Comparing GPT-3.5 to BLOOM and BLOOMZ for few-shot translation with 2
fuzzy matches
## 10 Conclusion
In this work, we conducted several experiments to assess the performance of
GPT-3.5 across multiple translation tasks, namely adaptive MT using fuzzy
matches (cf. Section 3), MT post-editing (cf. Section 5), terminology
extraction (cf. Section 6), and terminology-constrained MT (cf. Section 7).
Moreover, we compared its translation quality with strong encoder-decoder MT
systems. Generally speaking, results obtained from these experiments are very
promising. While some high-resource languages such as English-to-French,
English-to-Spanish and even English-to-Chinese show excellent results, other
languages have lower support either because they are low-resource languages
such as English-to-Kinyarwanda or because of issues in the GPT-3.5 tokenizer
such as English-to-Arabic. Nevertheless, when we used GPT-3.5 for MT post-
editing of the English-to-Arabic translation obtained from OPUS, the quality
significantly surpassed that obtained from both OPUS and Google Translation
API. This means that different pipelines can be adopted in production for
different language pairs, based on the level of support of these languages by
an LLM.
Furthermore, we briefly compared GPT-3.5 translation quality with open-source
LLMs such as BLOOM and BLOOMZ. In the future, we would like to expand our
experiments with open-source LLMs to cover more aspects.
For adaptive MT with fuzzy matches, it would be interesting to investigate
_dynamic_ few-shot example selection. For instance, instead of selecting 5
fuzzy matches for all sentences, only high-quality fuzzy matches at or above a
certain similarity threshold would be used. Similarly, when incorporating
glossary terms or MT outputs from other systems, only those with certain
quality characteristics would be utilized. This can potentially enhance
performance gains.
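A possible selection heuristic is sketched below; the threshold value, the
tuple format, and the function name are illustrative assumptions only.

```python
from typing import List, Tuple

def select_examples(candidates: List[Tuple[float, str, str]],
                    min_score: float = 0.6,
                    max_k: int = 5) -> List[Tuple[float, str, str]]:
    # candidates: (similarity score, source, approved translation) triples.
    # Keep only matches at or above the threshold, best first, up to max_k,
    # so each prompt uses between 0 and max_k high-quality examples.
    good = [c for c in candidates if c[0] >= min_score]
    good.sort(key=lambda c: c[0], reverse=True)
    return good[:max_k]
```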
For terminology extraction, we would like to try “phrases” instead of “terms”.
This would generate longer strings. We would like to see the effect of using
such longer phrases, especially for low-resource languages.
This work mainly aims at understanding the quality and level of support that
LLMs can achieve (out of the box) for a range of translation tasks across
diverse language pairs. In the future, we might consider starting with fine-
tuning the model, and then conducting similar experiments. This can be
especially beneficial for low-resource languages and rare domains, and can
help enhance quality and efficiency.
## Acknowledgements
This work is supported by the Science Foundation Ireland (SFI) Centre for
Research Training in Digitally-Enhanced Reality (d-real) under Grant No.
18/CRT/6224, the ADAPT Centre for Digital Content Technology under SFI’s Grant
No. 13/RC/2106_P2, and Microsoft Research.
We would like to extend our sincere thanks to Julie Locquet, Senior Linguist;
Philippe Locquet, Senior Linguist and Academic Program Manager at Wordfast;
and Dr Muhammed Yaman Muhaisen, Ophthalmologist and Linguist, for conducting
the evaluation of our translation tasks.
## References
* [Agrawal et al., 2022] Agrawal, Sweta, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, and Marjan Ghazvininejad. 2022. In-context Examples Selection for Machine Translation. arXiv [cs.CL], December.
* [Anastasopoulos et al., 2020] Anastasopoulos, Antonios, Alessandro Cattelan, Zi-Yi Dou, Marcello Federico, Christian Federmann, Dmitriy Genzel, Francisco Guzmán, et al. 2020. TICO-19: the Translation Initiative for COvid-19. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020, Online, December.
* [BigScience Workshop et al., 2022] BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, et al. 2022. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model. arXiv [cs.CL], November.
* [Brown et al., 2020] Brown, Tom B, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems (NeurIPS 2020), volume 33, pages 1877–1901.
* [Bulte and Tezcan, 2019] Bulte, Bram and Arda Tezcan. 2019. Neural Fuzzy Repair: Integrating Fuzzy Matches into Neural Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1800–1809, Florence, Italy, July.
* [Chowdhery et al., 2022] Chowdhery, Aakanksha, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, et al. 2022. PaLM: Scaling Language Modeling with Pathways. arXiv [cs.CL], April.
* [Coughlin, 2003] Coughlin, Deborah. 2003. Correlating automated and human assessments of machine translation quality. In Proceedings of Machine Translation Summit IX: Papers, New Orleans, USA.
* [Dinu et al., 2019] Dinu, Georgiana, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training Neural Machine Translation to Apply Terminology Constraints. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3063–3068, Florence, Italy, July.
* [Dong et al., 2022] Dong, Qingxiu, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. 2022. A Survey on In-context Learning. arXiv [cs.CL], December.
* [Etchegoyhen et al., 2021] Etchegoyhen, Thierry, David Ponce, Harritxu Gete, and Victor Ruiz. 2021. Online Learning over Time in Adaptive Neural Machine Translation. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 411–420, Held Online, September.
* [Farajian et al., 2017] Farajian, M Amin, Marco Turchi, Matteo Negri, and Marcello Federico. 2017. Multi-Domain Neural Machine Translation through Unsupervised Adaptation. In Proceedings of the Second Conference on Machine Translation, pages 127–137, Copenhagen, Denmark, September.
* [Goyal et al., 2022] Goyal, Naman, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The Flores-101 evaluation benchmark for low-resource and multilingual machine translation. Trans. Assoc. Comput. Linguist., 10:522–538, May.
* [Haque et al., 2020] Haque, Rejwanul, Yasmin Moslem, and Andy Way. 2020. Terminology-Aware Sentence Mining for NMT Domain Adaptation: ADAPT’s Submission to the Adap-MT 2020 English-to-Hindi AI Translation Shared Task. In Proceedings of the 17th International Conference on Natural Language Processing (ICON): Adap-MT 2020 Shared Task, pages 17–23, Patna, India, December.
* [Hokamp and Liu, 2017] Hokamp, Chris and Qun Liu. 2017. Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1535–1546, Vancouver, Canada, July.
* [Hosseini et al., 2020] Hosseini, Kasra, Federico Nanni, and Mariona Coll Ardanuy. 2020. DeezyMatch: A Flexible Deep Learning Approach to Fuzzy String Matching. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 62–69, Online, October.
* [Hu et al., 2019] Hu, Junjie, Mengzhou Xia, Graham Neubig, and Jaime Carbonell. 2019. Domain Adaptation of Neural Machine Translation by Lexicon Induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2989–3001, Florence, Italy, July.
* [Klein et al., 2020] Klein, Guillaume, Dakun Zhang, Clément Chouteau, Josep Crego, and Jean Senellart. 2020. Efficient and high-quality neural machine translation with OpenNMT. In Proceedings of the Fourth Workshop on Neural Generation and Translation, pages 211–217, Stroudsburg, PA, USA, July.
* [Knowles et al., 2018] Knowles, Rebecca, John Ortega, and Philipp Koehn. 2018. A Comparison of Machine Translation Paradigms for Use in Black-Box Fuzzy-Match Repair. In Proceedings of the AMTA 2018 Workshop on Translation Quality Estimation and Automatic Post-Editing, pages 249–255, Boston, MA, March.
* [Kudo and Richardson, 2018] Kudo, Taku and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium, November.
* [Michon et al., 2020] Michon, Elise, Josep Crego, and Jean Senellart. 2020. Integrating Domain Terminology into Neural Machine Translation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3925–3937, Barcelona, Spain (Online), December. International Committee on Computational Linguistics.
* [Moslem et al., 2022] Moslem, Yasmin, Rejwanul Haque, John Kelleher, and Andy Way. 2022. Domain-Specific Text Generation for Machine Translation. In Proceedings of the 15th biennial conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 14–30, Orlando, USA, September.
* [Muennighoff et al., 2022] Muennighoff, Niklas, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, et al. 2022. Crosslingual Generalization through Multitask Finetuning. arXiv [cs.CL], November.
* [NLLB Team et al., 2022] NLLB Team, Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, et al. 2022. No Language Left Behind: Scaling Human-Centered Machine Translation. arXiv [cs.CL], July.
* [Ouyang et al., 2022] Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, et al. 2022. Training language models to follow instructions with human feedback. arXiv [cs.CL], March.
* [Peris and Casacuberta, 2019] Peris, Álvaro and Francisco Casacuberta. 2019. Online learning for effort reduction in interactive neural machine translation. Comput. Speech Lang., 58:98–126, November.
* [Pham et al., 2020] Pham, Minh Quang, Jitao Xu, Josep Crego, François Yvon, and Jean Senellart. 2020. Priming Neural Machine Translation. In Proceedings of the Fifth Conference on Machine Translation, pages 516–527, Online, November.
* [Post and Vilar, 2018] Post, Matt and David Vilar. 2018. Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1314–1324, New Orleans, Louisiana, June.
* [Reimers and Gurevych, 2019] Reimers, Nils and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China, November.
* [Tiedemann, 2020] Tiedemann, Jörg. 2020. The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT. In Proceedings of the Fifth Conference on Machine Translation, pages 1174–1182, Online, November.
* [Touvron et al., 2023] Touvron, Hugo, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, et al. 2023. LLaMA: Open and Efficient Foundation Language Models. arXiv [cs.CL], February.
* [Vaswani et al., 2017] Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems (NIPS 2017), volume 30.
* [Vilar et al., 2022] Vilar, David, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, and George Foster. 2022. Prompting PaLM for Translation: Assessing Strategies and Performance. arXiv [cs.CL], November.
* [Wang et al., 2021] Wang, Shuo, Zhaopeng Tu, Zhixing Tan, Wenxuan Wang, Maosong Sun, and Yang Liu. 2021. Language Models are Good Translators. ArXiv.
* [Wuebker et al., 2018] Wuebker, Joern, Patrick Simianer, and John DeNero. 2018. Compact Personalized Models for Neural Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 881–886, Brussels, Belgium.
* [Xu et al., 2020] Xu, Jitao, Josep Crego, and Jean Senellart. 2020. Boosting Neural Machine Translation with Similar Translations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1580–1590, Online, July.
* [Zhang et al., 2023] Zhang, Biao, Barry Haddow, and Alexandra Birch. 2023. Prompting Large Language Model for Machine Translation: A Case Study. arXiv [cs.CL], January.
## Appendix A Prompts
This appendix provides examples of the prompts we used for our experiments.
### A.1 Zero-shot Translation
Prompt: EN-AR zero-shot translation
English: <source_segment>
Arabic:
### A.2 Adaptive MT with Fuzzy Matches
Prompt: EN-AR two-shot translation
English: <source_fuzzy_match_2>
Arabic: <target_fuzzy_match_2>
English: <source_fuzzy_match_1>
Arabic: <target_fuzzy_match_1>
English: <source_segment>
Arabic:
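As a minimal illustration (our own sketch, not part of the experimental code),
such a prompt can be assembled as follows; the function name and tuple format
are assumptions.

```python
from typing import List, Tuple

def few_shot_prompt(fuzzy_matches: List[Tuple[str, str]],
                    source: str,
                    src_lang: str = "English",
                    tgt_lang: str = "Arabic") -> str:
    # fuzzy_matches are (source, target) pairs ordered so that the best
    # match comes last, i.e. closest to the segment to be translated.
    lines = []
    for src, tgt in fuzzy_matches:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    lines.append(f"{src_lang}: {source}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)
```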
### A.3 MT Post-editing
Prompt: EN-ZH two-shot + 1-MT
English: <source_fuzzy_match_2>
Chinese: <target_fuzzy_match_2>
English: <source_fuzzy_match_1>
Chinese: <target_fuzzy_match_1>
English: <source_segment>
MT: <mt_segment>
Chinese:

Prompt: EN-ZH two-shot + all-MT
English: <source_fuzzy_match_2>
MT: <mt_fuzzy_match_2>
Chinese: <target_fuzzy_match_2>
English: <source_fuzzy_match_1>
MT: <mt_fuzzy_match_1>
Chinese: <target_fuzzy_match_1>
English: <source_segment>
MT: <mt_segment>
Chinese:
### A.4 Terminology Extraction
Prompt: terminology extraction
<source_lang>: <source_sentence>
<target_lang>: <target_sentence>
Extract <number> terms from the above sentence pair. Type each <source_lang> term and its <target_lang> equivalent in one line, separated by '<separator>'.
1.
### A.5 Terminology-constrained MT
Prompt: EN-ES zero-shot + glossary terms
Terms: <src_term_1> = <tgt_term_1> - <src_term_2> = <tgt_term_2> … <src_term_5> = <tgt_term_5>
English: <source_segment>
Spanish:

Prompt: EN-ES two-shot + fuzzy terms
Terms: <terms_fuzzy_match_2>
English: <source_fuzzy_match_2>
Spanish: <target_fuzzy_match_2>
Terms: <terms_fuzzy_match_1>
English: <source_fuzzy_match_1>
Spanish: <target_fuzzy_match_1>
Terms: <terms_from_fuzzy_matches_1+2>
English: <source_segment>
Spanish:

Prompt: EN-ES two-shot + glossary terms
Terms: <terms_fuzzy_match_2>
English: <source_fuzzy_match_2>
Spanish: <target_fuzzy_match_2>
Terms: <terms_fuzzy_match_1>
English: <source_fuzzy_match_1>
Spanish: <target_fuzzy_match_1>
Terms: <terms_from_glossary>
English: <source_segment>
Spanish:
|
# Almost CR manifolds with contracting CR automorphism
Jae-Cheon Joo Department of Mathematics, King Fahd University of Petroleum
and Minerals, 31261 Dhahran, Saudi Arabia<EMAIL_ADDRESS>and Kang-Hyurk
Lee Department of Mathematics and Research Institute of Natural Science,
Gyeongsang National University, Jinju, Gyeongnam, 52828, Republic of Korea
<EMAIL_ADDRESS>
###### Abstract.
In this paper, we deal with strongly pseudoconvex almost CR manifolds
admitting a CR contraction. We will prove that the stable manifold of the CR
contraction is CR equivalent to the Heisenberg group model.
Research of the second named author was supported by the National Research
Foundation of Korea (NRF) grant funded by the Korea government (No.
NRF-2019R1F1A1060891).
## 1. Introduction
The aim of this paper is to characterize strongly pseudoconvex almost CR
manifolds admitting a CR contraction, equivalently, pseudo-Hermitian manifolds
with a contracting pseudoconformal automorphism.
In geometry, one of the main questions is how large the transformation group
of a manifold can be, and how to characterize manifolds with large
transformation groups. In conformal and CR geometry, there is an interesting
history around this question. Obata considered conformal structures in
Riemannian geometry and succeeded in characterizing the sphere $S^{n}$ under a
noncompact action of the conformal transformation group on compact Riemannian
manifolds ([7, 8]); these results were extended to the CR case by Webster
([12]). In 1972, Alekseevskii claimed that a Riemannian manifold, compact or
noncompact, with a nonproper action of the conformal transformation group is
conformally equivalent to either $S^{n}$ or $\mathbb{R}^{n}$ ([1]); however,
it later turned out that his proof contains an error. Alekseevskii’s claim was
reproved by Schoen in both the conformal and CR cases in 1995 ([10]; cf. [2,
11]). His theorem is as follows.
###### Theorem 1.1 (Schoen [10]).
1. (1)
If the conformal automorphism group of a Riemannian manifold $M$ acts
nonproperly, then $M$ is conformally equivalent to either the unit sphere or
the Euclidean space.
2. (2)
If $M$ is a strongly pseudoconvex CR manifold whose CR automorphism group acts
nonproperly on $M$, then $M$ is CR equivalent to either the Heisenberg group
or the unit sphere in the complex Euclidean space.
On the other hand, in complex geometry, the characterization of domains in
$\mathbb{C}^{n}$ by noncompact automorphism group actions has also been
studied. Among many interesting results, one of the most famous is the
Wong-Rosay theorem ([13, 9]): _a strongly pseudoconvex bounded domain
$\Omega$ in $\mathbb{C}^{n}$ with a noncompact automorphism group is
biholomorphic to the unit ball $\mathbb{B}^{n}$ in $\mathbb{C}^{n}$_. Note
that the holomorphic automorphism group $\mathrm{Aut}(\Omega)$ of $\Omega$
extends to a CR transformation group on the boundary $\partial\Omega$, and the
noncompactness of $\mathrm{Aut}(\Omega)$ implies that $\mathrm{Aut}(\Omega)$
acts nonproperly on $\partial\Omega$. Since the unit sphere $S^{2n-1}$ in
$\mathbb{C}^{n}$ is the boundary of the unit ball and the Heisenberg group is
the unit sphere with one point removed, $S^{2n-1}\setminus\\{p\\}$, the
Wong-Rosay theorem can be regarded as the counterpart of Schoen’s result for
CR manifolds.
The Wong-Rosay theorem was generalized by Gaussier-Sukhov [3] and the second
author [5, 6] to the almost complex setting. Without the integrability of the
almost complex structure, the unit ball with the standard complex structure is
no longer the unique model. There are many strongly pseudoconvex domains,
called _model strongly pseudoconvex domains_, whose automorphism groups are
noncompact and whose almost complex structures are non-integrable. In [6], a
full classification of such domains and a description of their automorphism
groups are given.
As a CR counterpart, it is natural to ask whether Schoen’s result can be
generalized to the almost CR case. In the previous paper [4], we proved that
the boundary of each model strongly pseudoconvex domain is a strongly
pseudoconvex almost CR manifold which carries a group structure isomorphic to
that of the standard Heisenberg group. We call them generalized Heisenberg
groups. It also turned out that the CR automorphism group of a generalized
Heisenberg group acts nonproperly, since it always admits a contracting CR
automorphism, as the standard one does. A generalized Heisenberg group is
parametrized by a skew-symmetric complex $n\times n$ matrix $P$, and we denote
it by $\mathcal{H}_{P}$ (see Section 2.3 for the definition).
###### Theorem 1.2 (Theorem 1.2 in [4]).
Let $M$ be a manifold with a strongly pseudoconvex almost CR structure of real
dimension $5$ or $7$. Suppose that the CR automorphism group acts nonproperly
on $M$. If $M$ is noncompact, then $M$ is CR equivalent to $\mathcal{H}_{P}$
for a skew symmetric complex matrix $P$. If $M$ is compact, then the almost CR
structure of $M$ is integrable and $M$ is CR equivalent to the unit sphere in
the complex Euclidean space.
The difficulty of the proof of this theorem in arbitrary dimension comes from
the construction of the Yamabe equation, which plays a crucial role in the
proof of Theorem 1.2 as well as in Schoen’s theorem [10].
In order to obtain a characterization of $\mathcal{H}_{P}$ in arbitrary
dimension by its automorphism action, we restrict our attention to the case of
manifolds admitting a contracting automorphism. Recall that the generalized
Heisenberg groups admit contracting CR automorphisms. The main result of this
paper is as follows.
###### Theorem 1.3.
Let $M$ be a strongly pseudoconvex almost CR manifold. Suppose $M$ admits a
contracting CR automorphism. Then the stable manifold $\mathcal{W}$ of the
contracting automorphism is CR equivalent to $\mathcal{H}_{P}$ for some
complex skew-symmetric matrix $P$.
The idea of the proof of Theorem 1.3 is to construct a special contact form
$\theta_{0}$ on $\mathcal{W}$ for which $(\mathcal{W},\theta_{0})$ is indeed
equivalent to $(\mathcal{H}_{P},\theta_{P})$ as a pseudo-Hermitian manifold,
where $\theta_{P}$ is the canonical contact form on $\mathcal{H}_{P}$ defined
by (2.9). The construction is achieved by a standard dynamical method in
Section 3.
Outline of the paper. In Section 2, we introduce basic notions for strongly
pseudoconvex almost CR manifolds, the pseudo-Hermitian structure, and their
equivalence problem as studied in [4]. The contracting CR automorphism and its
stable manifold are then discussed in Section 3. In Section 3.2, we show that
there is a smooth canonical contact form for the contracting CR automorphism
whose pseudo-Hermitian structure is CR equivalent to the Heisenberg model;
this is used in the proof of Theorem 1.3 (see Section 4). We also characterize
the ambient manifold as the standard sphere, under the further assumption that
the contracting automorphism has another fixed point which is contracting for
the inverse map, in Theorem 4.2.
Convention. Throughout this paper, we assume that every structure is
$C^{\infty}$-smooth. The summation convention is always assumed. Greek indices
will be used to indicate coefficients of complex and real tensors. We will
take the bar on Greek indices to denote the complex conjugation of the
corresponding tensor coefficients:
$\overline{Z}_{\alpha}=Z_{\bar{\alpha}}$,
$\bar{\omega}^{\alpha}=\omega^{\bar{\alpha}}$,
${\overline{R}}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\lambda\bar{\mu}}={R}^{\phantom{\bar{\beta}}\bar{\alpha}}_{\bar{\beta}\phantom{\bar{\alpha}}\bar{\lambda}\mu}$.
## 2. Strongly pseudoconvex almost CR manifolds
In this section, we review the pseudo-Hermitian structure of the strongly
pseudoconvex almost CR manifold and its equivalence problem as in [4].
### 2.1. The pseudo-Hermitian structure
By an _almost CR manifold_ , we mean a real $(2n+1)$-dimensional manifold $M$
with a $2n$-dimensional subbundle $H$ of $TM$ equipped with an almost complex
structure $J$, which is called the almost CR structure. Since
$J^{2}=-\mathrm{Id}$, the complexified bundle $\mathbb{C}\otimes H$ splits
into two eigensubbundles $H_{1,0}$ and $H_{0,1}$ of complex dimension $n$,
corresponding to eigenvalues $i$ and $-i$ of $J$, respectively:
$\displaystyle\mathbb{C}\otimes H=H_{1,0}\oplus H_{0,1}\;,$ $\displaystyle
H_{1,0}=\left\\{X-iJX:X\in H\right\\}\;,\quad H_{0,1}=\left\\{X+iJX:X\in
H\right\\}\;.$
A complex-valued vector field is of type $(1,0)$ or $(0,1)$ if it is a section
to $H_{1,0}$ or $H_{0,1}$, respectively. The CR structure $(H,J)$ is
_integrable_ if the space of $(1,0)$-vector fields, $\Gamma(H_{1,0})$ is
closed under the Lie bracket:
$[\Gamma(H_{1,0}),\Gamma(H_{1,0})]\subset\Gamma(H_{1,0})$. If the Lie bracket
of two $(1,0)$-vector fields is always a section to $\mathbb{C}\otimes
H=H_{1,0}\oplus H_{0,1}$, then the CR structure is called _partially
integrable_.
A diffeomorphism $f:M\rightarrow M$ is called a _CR automorphism_ if
$f_{*}H_{1,0}=H_{1,0}$. The group of all CR automorphisms of $M$ will be
denoted by $\mathrm{Aut}_{\mathrm{CR}}(M)$. It is a topological group with
respect to the compact-open topology.
An almost CR manifold is said to be _strongly pseudoconvex_ if $H$ is a
contact distribution and if
$d\theta(X,JX)>0$
for a contact form $\theta$ and for any nonzero vector $X\in H$. Then the pair
$(M,\theta)$ is called a _pseudo-Hermitian manifold_. Let
$(Z_{\alpha})=(Z_{1},\ldots,Z_{n})$ be a $(1,0)$-frame (a local frame of
$H_{1,0}$). A $\mathbb{C}^{n}$-valued $1$-form
$(\theta^{\alpha})=(\theta^{1},\ldots,\theta^{n})$ is called the _admissible
coframe_ for $\theta$ with respect to $(Z_{\alpha})$ if
$\theta^{\alpha}(Z_{\beta})=\delta^{\alpha}_{\beta},\quad\theta^{\alpha}(Z_{\bar{\beta}})=0,\quad\theta^{\alpha}(T)=0$
for $\alpha,\beta=1,...,n$, where $T$ is the _characteristic vector field_ of
$\theta$, that is, the unique real vector field satisfying $\theta(T)\equiv 1$
and $T\,\raisebox{1.72218pt}{$\lrcorner$}\;d\theta\equiv 0$. The admissibility
of $(\theta^{\alpha})$ is equivalent to the condition that the differential of
the contact form $\theta$ can be written as
(2.1)
$d\theta=2ig_{\alpha\bar{\beta}}\,\theta^{\alpha}\wedge\theta^{\bar{\beta}}+p_{\alpha\beta}\,\theta^{\alpha}\wedge\theta^{\beta}+p_{\bar{\alpha}\bar{\beta}}\,\theta^{\bar{\alpha}}\wedge\theta^{\bar{\beta}}$
where $(g_{\alpha\bar{\beta}})$ is a positive-definite Hermitian matrix called
the Levi form, and $(p_{\alpha\beta})$ is a skew-symmetric matrix. Throughout
this paper, $(g^{\alpha\bar{\beta}})$ stands for
the inverse matrix of the Levi form $(g_{\alpha\bar{\beta}})$. If
$(p_{\alpha\beta})$ vanishes identically, then the almost CR structure is
partially integrable; thus we call $(p_{\alpha\beta})$ the _non-partial-
integrability_ with respect to $\theta,\theta^{\alpha}$.
Let
$\tilde{\theta}=e^{2f}\theta$
be another contact form, where $f$ is a real-valued smooth function on $M$.
###### Proposition 2.1 ([4]).
Let $(\theta^{\alpha})$ be a local admissible coframe for $\theta$ with
respect to a $(1,0)$-frame $(Z_{\alpha})$. Then the $\mathbb{C}^{n}$-valued
$1$-form $(\tilde{\theta}^{\alpha})$ defined by
$\tilde{\theta}^{\alpha}=e^{f}(\theta^{\alpha}+v^{\alpha}\theta)\;,$
is an admissible coframe for $\tilde{\theta}$, where $v^{\alpha}$ is defined
by
(2.2) $f_{\alpha}=iv_{\alpha}+p_{\alpha\beta}\,v^{\beta}\;,$
where $f_{\alpha}=Z_{\alpha}f$ and
$v_{\alpha}=v^{\bar{\beta}}g_{\alpha\bar{\beta}}$. Moreover, if we denote the
Levi form and non-partial-integrability for $\tilde{\theta}$ with respect to
$(\tilde{\theta}^{\alpha})$ by $(\tilde{g}_{\alpha\bar{\beta}})$ and
$(\tilde{p}_{\alpha\beta})$, then
$\tilde{g}_{\alpha\bar{\beta}}=g_{\alpha\bar{\beta}},\quad\tilde{p}_{\alpha\beta}=p_{\alpha\beta}.$
The last statement of the proposition means that Equation (2.1) for
$\tilde{\theta}=e^{2f}\theta$ takes the form
$d\tilde{\theta}=2ig_{\alpha\bar{\beta}}\,\tilde{\theta}^{\alpha}\wedge\tilde{\theta}^{\bar{\beta}}+p_{\alpha\beta}\,\tilde{\theta}^{\alpha}\wedge\tilde{\theta}^{\beta}+p_{\bar{\alpha}\bar{\beta}}\,\tilde{\theta}^{\bar{\alpha}}\wedge\tilde{\theta}^{\bar{\beta}}\;.$
### 2.2. The pseudo-Hermitian structure equation
Let $(M,\theta)$ be a pseudo-Hermitian manifold and $(\theta^{\alpha})$ be an
admissible coframe. The pseudo-Hermitian connection form
$({\omega}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}})$ for the Hermitian
metric $(g_{\alpha\bar{\beta}})$ in (2.1) is defined as follows.
###### Proposition 2.2 ([4]).
The pseudo-Hermitian connection form
$({\omega}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}})$ is uniquely
determined by the following equations:
(2.3) $\displaystyle
d\theta^{\alpha}=\theta^{\beta}\wedge{\omega}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}}+{T}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma}\theta^{\beta}\wedge\theta^{\gamma}+{N}^{\phantom{\bar{\beta}}\alpha}_{\bar{\beta}\phantom{\alpha}\bar{\gamma}}\theta^{\bar{\beta}}\wedge\theta^{\bar{\gamma}}+{A}^{\phantom{}\alpha}_{\phantom{\alpha}\bar{\beta}}\theta\wedge\theta^{\bar{\beta}}+{B}^{\phantom{}\alpha}_{\phantom{\alpha}\beta}{}\theta\wedge\theta^{\beta}\;,$
(2.4) $\displaystyle
dg_{\alpha\bar{\beta}}=\omega_{\alpha\bar{\beta}}+\omega_{\bar{\beta}\alpha}\;,$
(2.5)
$\displaystyle{T}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma}=-{T}^{\phantom{\gamma}\alpha}_{\gamma\phantom{\alpha}\beta}\;,\quad{N}^{\phantom{\bar{\beta}}\alpha}_{\bar{\beta}\phantom{\alpha}\bar{\gamma}}=-{N}^{\phantom{\bar{\gamma}}\alpha}_{\bar{\gamma}\phantom{\alpha}\bar{\beta}}\;,\quad
B_{\alpha\bar{\beta}}=B_{\bar{\beta}\alpha}\;.$
Then the connection form
$({\omega}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}})$ defines the
covariant derivative of tensor fields by
(2.6) $\displaystyle\nabla Z_{\alpha}$
$\displaystyle={\omega}^{\phantom{\alpha}\beta}_{\alpha\phantom{\beta}}\otimes
Z_{\beta}\;,$ $\displaystyle\nabla Z_{\bar{\alpha}}$
$\displaystyle={\omega}^{\phantom{\bar{\alpha}}\bar{\beta}}_{\bar{\alpha}\phantom{\bar{\beta}}}\otimes
Z_{\bar{\beta}}\;,$ $\displaystyle\nabla\theta^{\alpha}$
$\displaystyle=-{\omega}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}}\otimes\theta^{\beta}\;,$
$\displaystyle\nabla\theta^{\bar{\alpha}}$
$\displaystyle=-{\omega}^{\phantom{\bar{\beta}}\bar{\alpha}}_{\bar{\beta}\phantom{\bar{\alpha}}}\otimes\theta^{\bar{\beta}}\;.$
Equations (2.1) and (2.3) are called the structure equations, and (2.4) and
(2.5) are called the compatibility conditions. Tensor coefficients
$p_{\alpha\beta}$,
${T}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma}$,
${N}^{\phantom{\bar{\beta}}\alpha}_{\bar{\beta}\phantom{\alpha}\bar{\gamma}}$,
${A}^{\phantom{}\alpha}_{\phantom{\alpha}\bar{\beta}}$,
${B}^{\phantom{}\alpha}_{\phantom{\alpha}\beta}{}$ are called the torsion
coefficients.
We define the curvature $2$-form by
${\Theta}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}}=d{\omega}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}}-{\omega}^{\phantom{\beta}\gamma}_{\beta\phantom{\gamma}}\wedge{\omega}^{\phantom{\gamma}\alpha}_{\gamma\phantom{\alpha}}\;.$
The coefficients of the pseudo-Hermitian curvature are defined by
${R}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma\bar{\sigma}}=2{\Theta}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}}(Z_{\gamma},Z_{\bar{\sigma}})\;.$
Let $\tilde{\theta}=e^{2f}\theta$ be a pseudoconformal change of the pseudo-
Hermitian structure $(M,\theta)$ and let $(\tilde{\theta}^{\alpha})$ be the
admissible coframe for $\tilde{\theta}$ defined as in Proposition 2.1.
###### Proposition 2.3 ([4]).
The coefficients of torsion and curvature of $\theta$ and $\tilde{\theta}$
with respect to $(\theta^{\alpha})$ and $(\tilde{\theta}^{\alpha})$ are
related as follows.
$\displaystyle{\widetilde{T}}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma}$
$\displaystyle=e^{-f}\left({T}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma}+v^{\alpha}p_{\beta\gamma}-p_{\gamma\rho}v^{\rho}{\delta}^{\phantom{}\alpha}_{\phantom{\alpha}\beta}+p_{\beta\rho}v^{\rho}{\delta}^{\phantom{}\alpha}_{\phantom{\alpha}\gamma}\right)\;,$
$\displaystyle{\widetilde{N}}^{\phantom{\bar{\beta}}\alpha}_{\bar{\beta}\phantom{\alpha}\bar{\gamma}}$
$\displaystyle=e^{-f}\left({N}^{\phantom{\bar{\beta}}\alpha}_{\bar{\beta}\phantom{\alpha}\bar{\gamma}}+v^{\alpha}p_{\bar{\beta}\bar{\gamma}}\right)\;,$
$\displaystyle{\widetilde{A}}^{\phantom{}\alpha}_{\phantom{\alpha}\bar{\beta}}$
$\displaystyle=e^{-2f}\left({A}^{\phantom{}\alpha}_{\phantom{\alpha}\bar{\beta}}-g^{\alpha\bar{\gamma}}v_{\bar{\gamma};\beta}-2iv^{\alpha}v_{\bar{\beta}}-2v^{\bar{\gamma}}{N}^{\phantom{\bar{\gamma}}\alpha}_{\bar{\gamma}\phantom{\alpha}\bar{\beta}}-2v^{\alpha}v^{\bar{\gamma}}p_{\bar{\gamma}\bar{\beta}}\right)\;,$
$\displaystyle{\widetilde{B}}^{\phantom{}\alpha}_{\phantom{\alpha}\beta}$
$\displaystyle=e^{-2f}\Big{(}{B}^{\phantom{}\alpha}_{\phantom{\alpha}\beta}-\frac{1}{2}(g^{\alpha\bar{\gamma}}v_{\bar{\gamma};\beta}+v_{\beta;\bar{\gamma}}g^{\alpha\bar{\gamma}})+{\delta}^{\phantom{}\alpha}_{\phantom{\alpha}\beta}f_{0}$
$\displaystyle\qquad\qquad-v^{\alpha}v^{\gamma}p_{\gamma\beta}-v_{\beta}v^{\bar{\gamma}}{p}^{\phantom{\bar{\gamma}}\alpha}_{\bar{\gamma}\phantom{\alpha}}-v^{\gamma}{T}^{\phantom{\gamma}\alpha}_{\gamma\phantom{\alpha}\beta}-v^{\bar{\gamma}}{T}^{\phantom{\bar{\gamma}\beta}\alpha}_{\bar{\gamma}\beta\phantom{\alpha}}\Big{)}\;,$
$\displaystyle{\widetilde{R}}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\lambda\bar{\mu}}$
$\displaystyle=e^{-2f}\Big{(}{R}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\lambda\bar{\mu}}-{\delta}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}}f_{\lambda\bar{\mu}}-{\delta}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}}f_{\bar{\mu}\lambda}+2ig_{\beta\bar{\mu}}{v}^{\phantom{}\alpha}_{\phantom{\alpha};\lambda}-2iv_{\beta\bar{\mu}}{\delta}^{\phantom{}\alpha}_{\phantom{\alpha}\lambda}$
$\displaystyle\qquad+i(g^{\alpha\bar{\gamma}}v_{\bar{\gamma};\beta}+v_{\beta;\bar{\gamma}}g^{\alpha\bar{\gamma}})g_{\lambda\bar{\mu}}-4{\delta}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}}g_{\lambda\bar{\mu}}v^{\gamma}v_{\gamma}-4g_{\beta\bar{\mu}}{\delta}^{\phantom{}\alpha}_{\phantom{\alpha}\lambda}v^{\gamma}v_{\gamma}$
$\displaystyle\qquad-2ip_{\beta\gamma}v^{\gamma}v^{\alpha}g_{\lambda\bar{\mu}}-2iv_{\beta}v^{\bar{\gamma}}{p}^{\phantom{\bar{\gamma}}\alpha}_{\bar{\gamma}\phantom{\alpha}}g_{\lambda\bar{\mu}}+2iv^{\gamma}{T}^{\phantom{\gamma}\alpha}_{\gamma\phantom{\alpha}\beta}g_{\lambda\bar{\mu}}-2iv^{\bar{\gamma}}{T}^{\phantom{\bar{\gamma}\beta}\alpha}_{\bar{\gamma}\beta\phantom{\alpha}}g_{\lambda\bar{\mu}}\Big{)}\;.$
Moreover, the coefficients of covariant derivatives of $p_{\alpha\beta}$
change as follows.
$\displaystyle\tilde{p}_{\alpha\beta;\gamma}$
$\displaystyle=e^{-f}\left(p_{\alpha\beta;\gamma}-2p_{\alpha\beta}f_{\gamma}-2ip_{\alpha\gamma}v_{\beta}-2ip_{\gamma\beta}v_{\alpha}\right)\;,$
$\displaystyle\tilde{p}_{\alpha\beta;\bar{\gamma}}$
$\displaystyle=e^{-f}\left(p_{\alpha\beta;\bar{\gamma}}+2p_{\alpha\beta}f_{\bar{\gamma}}-2ip_{\alpha\lambda}v^{\lambda}g_{\beta\bar{\gamma}}-2ip_{\lambda\beta}v^{\lambda}g_{\alpha\bar{\gamma}}\right)\;.$
Here $v_{\bar{\gamma};\beta}$, $v_{\beta;\bar{\gamma}}$,
$p_{\alpha\beta;\gamma}$, $p_{\alpha\beta;\bar{\gamma}}$ are the coefficients
of the covariant derivatives of the tensors $(v_{\alpha})$,
$(p_{\alpha\beta})$ as defined in (2.6):
$\displaystyle v_{\bar{\gamma};\beta}$
$\displaystyle=Z_{\beta}v_{\bar{\gamma}}-{\omega}^{\phantom{\bar{\gamma}}\bar{\sigma}}_{\bar{\gamma}\phantom{\bar{\sigma}}}(Z_{\beta})v_{\bar{\sigma}}\;,$
$\displaystyle v_{\beta;\bar{\gamma}}$
$\displaystyle=Z_{\bar{\gamma}}v_{\beta}-{\omega}^{\phantom{\beta}\sigma}_{\beta\phantom{\sigma}}(Z_{\bar{\gamma}})v_{\sigma}\;,$
$\displaystyle p_{\alpha\beta;\gamma}$
$\displaystyle=Z_{\gamma}p_{\alpha\beta}-{\omega}^{\phantom{\alpha}\sigma}_{\alpha\phantom{\sigma}}(Z_{\gamma})p_{\sigma\beta}-{\omega}^{\phantom{\beta}\sigma}_{\beta\phantom{\sigma}}(Z_{\gamma})p_{\alpha\sigma}\;,$
$\displaystyle p_{\alpha\beta;\bar{\gamma}}$
$\displaystyle=Z_{\bar{\gamma}}p_{\alpha\beta}-{\omega}^{\phantom{\alpha}\sigma}_{\alpha\phantom{\sigma}}(Z_{\bar{\gamma}})p_{\sigma\beta}-{\omega}^{\phantom{\beta}\sigma}_{\beta\phantom{\sigma}}(Z_{\bar{\gamma}})p_{\alpha\sigma}\;.$
### 2.3. Generalized Heisenberg groups
Let $(z,t)=(z^{1},...,z^{n},t)$ be the standard coordinates on
$\mathbb{C}^{n}\times\mathbb{R}$ and let $P=(P_{\alpha\beta})$ be a constant
complex skew-symmetric matrix of size $n\times n$. Let
(2.7) $Z_{\alpha}=\frac{\partial}{\partial
z^{\alpha}}+\left(iz^{\bar{\alpha}}+P_{\alpha\beta}z^{\beta}\right)\frac{\partial}{\partial
t}$
for $\alpha=1,...,n$. The almost CR structure whose $H_{1,0}$ space is spanned
by $\\{Z_{\alpha}\\}$ will be denoted by $J_{P}$. We define the generalized
Heisenberg group corresponding to $P$ by the almost CR manifold
$(\mathbb{C}^{n}\times\mathbb{R},J_{P})$ and will denote it by
$\mathcal{H}_{P}$. In the case $P=0$, $\mathcal{H}_{0}$ is the classical
Heisenberg group. In general, we have
###### Proposition 2.4 ([4]).
Let $*=*_{P}$ be the binary operation on $\mathcal{H}_{P}$ defined by
$(z,t)*(z^{\prime},t^{\prime})=\left(z+z^{\prime},t+t^{\prime}+2\mathrm{Im}\,(z^{\alpha}z^{\prime\bar{\alpha}})-2\mathrm{Re}\,(P_{\alpha\beta}z^{\alpha}z^{\prime\beta})\right).$
Then this operation makes $\mathcal{H}_{P}$ a Lie group and $Z_{\alpha}$
defined in (2.7) is left invariant under $*$. In particular, $\mathcal{H}_{P}$
is a homogeneous almost CR manifold.
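As a quick sanity check (a direct computation, not a statement from [4]):
$(0,0)$ is the identity for $*$, and since the skew-symmetry of $P$ gives
$P_{\alpha\beta}z^{\alpha}z^{\beta}=0$, the inverse of $(z,t)$ is simply
$(-z,-t)$:
$(z,t)*(-z,-t)=\left(0,\,2\mathrm{Im}\,(-z^{\alpha}z^{\bar{\alpha}})+2\mathrm{Re}\,(P_{\alpha\beta}z^{\alpha}z^{\beta})\right)=(0,0)\;.$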
Since the canonical $(1,0)$-vector fields $Z_{1},\ldots,Z_{n}$ of $J_{P}$
satisfy
(2.8)
$[Z_{\alpha},Z_{\bar{\beta}}]=-2i\delta_{\alpha\bar{\beta}}\frac{\partial}{\partial
t}\;,\quad[Z_{\alpha},Z_{\beta}]=-2P_{\alpha\beta}\frac{\partial}{\partial
t}\;,$
the CR structure of $\mathcal{H}_{P}$ is integrable if and only if $P=0$. Each
model $\mathcal{H}_{P}$ admits the _dilation_
$D_{\tau}(z,t)=(e^{\tau}z,e^{2\tau}t)$ as a CR automorphism. Therefore the
Heisenberg group model has a CR contraction at the origin $0$ (take
$\tau<0$), and hence at every point of $\mathcal{H}_{P}$ by homogeneity.
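For completeness, here is a direct verification of (2.8) from (2.7) (our own
computation). Writing $Z_{\alpha}=\partial/\partial
z^{\alpha}+K_{\alpha}\,\partial/\partial t$ with
$K_{\alpha}=iz^{\bar{\alpha}}+P_{\alpha\beta}z^{\beta}$, we get
$[Z_{\alpha},Z_{\bar{\beta}}]=\left(Z_{\alpha}K_{\bar{\beta}}-Z_{\bar{\beta}}K_{\alpha}\right)\frac{\partial}{\partial t}=-2i\delta_{\alpha\bar{\beta}}\frac{\partial}{\partial t}\;,\quad[Z_{\alpha},Z_{\beta}]=(P_{\beta\alpha}-P_{\alpha\beta})\frac{\partial}{\partial t}=-2P_{\alpha\beta}\frac{\partial}{\partial t}\;,$
where the last equality uses the skew-symmetry of $P$.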
Let $\theta_{P}$ be a real $1$-form on $\mathcal{H}_{P}$ defined by
(2.9)
$\theta_{P}=dt+iz^{\alpha}dz^{\bar{\alpha}}-iz^{\bar{\alpha}}dz^{\alpha}+P_{\alpha\beta}z^{\alpha}dz^{\beta}+P_{\bar{\alpha}\bar{\beta}}z^{\bar{\alpha}}dz^{\bar{\beta}}.$
Note that $\theta_{P}(Z_{\alpha})=0$ for every $\alpha=1,...,n$. Moreover,
$d\theta_{P}=2idz^{\alpha}\wedge
dz^{\bar{\alpha}}+P_{\alpha\beta}dz^{\alpha}\wedge
dz^{\beta}+P_{\bar{\alpha}\bar{\beta}}dz^{\bar{\alpha}}\wedge
dz^{\bar{\beta}}.$
Therefore, we see that $\mathcal{H}_{P}$ is a strongly pseudoconvex almost CR
manifold and that $(\theta^{\alpha}=dz^{\alpha})$ is an admissible coframe for
$\theta_{P}$, dual to $(Z_{\alpha})$. Since
$d\theta^{\alpha}=ddz^{\alpha}=0$, we see that $(\mathcal{H}_{P},\theta_{P})$
has a vanishing pseudo-Hermitian connection
$({\omega}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}})$. Moreover all
coefficients of torsion and curvature, except $p_{\alpha\beta}$, vanish
identically and $p_{\alpha\beta}\equiv P_{\alpha\beta}$. Conversely,
###### Proposition 2.5 ([4]).
Let $(M,\theta)$ be a pseudo-Hermitian manifold. If
$p_{\alpha\beta;\gamma}\equiv{T}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma}\equiv{N}^{\phantom{\bar{\beta}}\alpha}_{\bar{\beta}\phantom{\alpha}\bar{\gamma}}\equiv{A}^{\phantom{}\alpha}_{\phantom{\alpha}\bar{\beta}}\equiv{B}^{\phantom{}\alpha}_{\phantom{\alpha}\beta}\equiv{R}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\lambda\bar{\mu}}\equiv
0$
for some (and hence for all) admissible coframe, then $(M,\theta)$ is locally
equivalent to $(\mathcal{H}_{P},\theta_{P})$ as a pseudo-Hermitian manifold.
### 2.4. CR mappings and diffeomorphisms of Heisenberg models
As we mentioned, except for the standard Heisenberg group $\mathcal{H}_{0}$,
the CR structure of every $\mathcal{H}_{P}$ is non-integrable. The
non-integrability of the structure imposes some restrictions on CR mappings.
The following lemma, based on this observation, will be used in the proof of
Theorem 4.2.
###### Lemma 2.6 (cf. Proposition 3.3 in [4]).
Let $\mathcal{H}_{P}$ and $\mathcal{H}_{P^{\prime}}$ be non-integrable
Heisenberg group models of the same dimension. If there is a local CR
diffeomorphism
$G(z,t)=(w,s)=(w^{1},\ldots,w^{n},s)$
from $\mathcal{H}_{P}$ to $\mathcal{H}_{P}^{\prime}$, then
* $(1)$
each $w^{\lambda}$ ($\lambda=1,\ldots,n$) is indepedent of $t$-variable and
holomorphic in $z$-variables;
* $(2)$
$s(t)=rt+c(z)$ for some constant $r$ and real-valued function $c$;
* $(3)$
the constant $r$ is determined by
$r=\frac{1}{n}\sum_{\alpha,\beta=1}^{n}\left\lvert\frac{\partial
w^{\beta}}{\partial z^{\alpha}}\right\rvert^{2}\;.$
###### Proof.
We will use $(w,s)=(w^{1},\ldots,w^{n},s)$ as standard coordinates on
$\mathcal{H}_{P^{\prime}}$ as well. Let
$K_{\alpha}(z)=iz^{\bar{\alpha}}+P_{\alpha\beta}z^{\beta}$ and
$K^{\prime}_{\lambda}(w)=iw^{\bar{\lambda}}+P^{\prime}_{\lambda\mu}w^{\mu}$.
Then the canonical $(1,0)$-frames $(Z_{\alpha})$ and $(Z^{\prime}_{\lambda})$
of $\mathcal{H}_{P}$ and $\mathcal{H}_{P^{\prime}}$ can be determined by
$Z_{\alpha}=\frac{\partial}{\partial
z^{\alpha}}+K_{\alpha}\frac{\partial}{\partial t}\quad\text{and}\quad
Z^{\prime}_{\lambda}=\frac{\partial}{\partial
w^{\lambda}}+K^{\prime}_{\lambda}\frac{\partial}{\partial s}\;,$
respectively. Let us consider
$dF(Z_{\alpha})=(Z_{\alpha}w^{\lambda})\frac{\partial}{\partial
w^{\lambda}}+(Z_{\alpha}w^{\bar{\mu}})\frac{\partial}{\partial
w^{\bar{\mu}}}+(Z_{\alpha}s)\frac{\partial}{\partial s}\;.$
Since $F$ is a CR mapping, we have
$dF(Z_{\alpha})={a}^{\phantom{\alpha}\lambda}_{\alpha\phantom{\lambda}}Z^{\prime}_{\lambda}$
for some functions ${a}^{\phantom{\alpha}\lambda}_{\alpha\phantom{\lambda}}$.
Comparing two expressions of $dF(Z_{\alpha})$, we can conclude that
${a}^{\phantom{\alpha}\lambda}_{\alpha\phantom{\lambda}}=Z_{\alpha}w^{\lambda}$
so
(2.10) $dF(Z_{\alpha})=(Z_{\alpha}w^{\lambda})Z^{\prime}_{\lambda}\;,\quad
Z_{\alpha}w^{\bar{\mu}}=0\;,\quad
Z_{\alpha}s=(Z_{\alpha}w^{\lambda})K^{\prime}_{\lambda}\circ F\;.$
From the non-integrability of $\mathcal{H}_{P}$, we can choose $\alpha,\beta$
so that $P_{\alpha\beta}\neq 0$. Then we have
(2.11)
$[dF(Z_{\alpha}),dF(Z_{\beta})]=dF([Z_{\alpha},Z_{\beta}])=-2P_{\alpha\beta}dF\left(\partial/\partial
t\right)$
from (2.8). One can see that the local vector field
$[dF(Z_{\alpha}),dF(Z_{\beta})]=[(Z_{\alpha}w^{\lambda})Z^{\prime}_{\lambda},(Z_{\beta}w^{\mu})Z^{\prime}_{\mu}]$
on $\mathcal{H}_{P^{\prime}}$ has no terms in $\partial/\partial
w^{\bar{\mu}}$. From
$-2P_{\alpha\beta}dF\left(\partial/\partial
t\right)=-2P_{\alpha\beta}\left(\frac{\partial w^{\lambda}}{\partial
t}\frac{\partial}{\partial w^{\lambda}}+\frac{\partial w^{\bar{\mu}}}{\partial
t}\frac{\partial}{\partial w^{\bar{\mu}}}+\frac{\partial s}{\partial
t}\frac{\partial}{\partial s}\right)$
we have
$\frac{\partial w^{\bar{\mu}}}{\partial t}=0\quad\text{so}\quad\frac{\partial
w^{\mu}}{\partial t}=0$
for each $\mu=1,\ldots,n$. Simultaneously, we have
$Z_{\alpha}w^{\lambda}=\frac{\partial w^{\lambda}}{\partial
z^{\alpha}}\;,\quad Z_{\bar{\alpha}}w^{\lambda}=\frac{\partial
w^{\lambda}}{\partial z^{\bar{\alpha}}}=0$
from (2.10). This implies that each $w^{\lambda}$ is independent of the
$t$-variable and holomorphic in the $z$-variables. Indeed, each $w^{\lambda}$
is defined
on the open set $\mathcal{U}_{1}=\pi_{1}(\mathcal{U})$ in $\mathbb{C}^{n}$
where $\mathcal{U}$ is the domain of $F$ in $\mathbb{C}^{n}\times\mathbb{R}$
and $\pi_{1}:\mathbb{C}^{n}\times\mathbb{R}\to\mathbb{C}^{n}$ is the natural
projection. When we write
$[dF(Z_{\alpha}),dF(Z_{\beta})]=\left[\frac{\partial w^{\lambda}}{\partial
z^{\alpha}}Z^{\prime}_{\lambda},\frac{\partial w^{\mu}}{\partial
z^{\beta}}Z^{\prime}_{\mu}\right]={b}^{\phantom{\alpha\beta}\lambda}_{\alpha\beta\phantom{\lambda}}Z^{\prime}_{\lambda}-2P^{\prime}_{\lambda\mu}\frac{\partial
w^{\lambda}}{\partial z^{\alpha}}\frac{\partial w^{\mu}}{\partial
z^{\beta}}\frac{\partial}{\partial s}$
by some functions
${b}^{\phantom{\alpha\beta}\lambda}_{\alpha\beta\phantom{\lambda}}$, applying
$dF\left(\partial/\partial t\right)=(\partial s/\partial t)\partial/\partial
s$ to (2.11) we have that
${b}^{\phantom{\alpha\beta}\lambda}_{\alpha\beta\phantom{\lambda}}=0$ for any
$\lambda$ so that
$\frac{\partial s}{\partial
t}=\frac{P^{\prime}_{\lambda\mu}}{P_{\alpha\beta}}(\partial_{\alpha}w^{\lambda})(\partial_{\beta}w^{\mu})\;.$
Note that $P_{\alpha\beta}\neq 0$ by the choice of $\alpha,\beta$. This
implies that $\partial s/\partial t$ is independent of the $t$-variable, so
$s(z,t)=r(z)t+c(z)$ for some smooth real-valued functions $r=\partial
s/\partial t$ and $c$ defined on $\mathcal{U}_{1}$. The third equation of
(2.10) can be written as
$Z_{\alpha}s=\frac{\partial s}{\partial z^{\alpha}}+K_{\alpha}\frac{\partial
s}{\partial t}=\frac{\partial w^{\lambda}}{\partial
z^{\alpha}}K^{\prime}_{\lambda}\circ F$
Since $K_{\alpha}$, $\partial s/\partial t$, $\partial w^{\lambda}/\partial
z^{\alpha}$, $K^{\prime}_{\lambda}\circ F$ are all independent of
the $t$-variable, so is $\partial s/\partial z^{\alpha}$. This means that
$\partial r/\partial z^{\alpha}=0$ for each $\alpha$ because $\partial
s/\partial z^{\alpha}=(\partial r/\partial z^{\alpha})t+\partial c/\partial
z^{\alpha}$. Hence the real-valued function $r$ is a constant:
$s(z,t)=rt+c(z)$.
From
$[dF(Z_{\alpha}),dF(Z_{\bar{\beta}})]=dF([Z_{\alpha},Z_{\bar{\beta}}])=-2i\delta_{\alpha\bar{\beta}}dF(\partial_{t})$,
one can easily get
$\delta_{\alpha\bar{\beta}}\frac{\partial s}{\partial
t}=\delta_{\lambda\bar{\mu}}\frac{\partial w^{\lambda}}{\partial
z^{\alpha}}\frac{\partial w^{\bar{\mu}}}{\partial
z^{\bar{\beta}}}\;,\quad\text{so}\quad n\frac{\partial s}{\partial
t}=\sum_{\alpha,\lambda}\left\lvert\frac{\partial w^{\lambda}}{\partial
z^{\alpha}}\right\rvert^{2}\;.$
This completes the proof. ∎
The standard Heisenberg group $\mathcal{H}_{0}$ is CR-equivalent to
$S^{2n+1}\setminus\\{p\\}$, the unit sphere minus one point. Therefore
$\mathcal{H}_{0}\setminus\\{0\\}$ is also equivalent to
$S^{2n+1}\setminus\\{p,-p\\}$. By the symmetry of $S^{2n+1}$,
$\mathcal{H}_{0}\setminus\\{0\\}$ admits a non-trivial involutive CR
automorphism:
$(z,t)\mapsto\left(\frac{z}{-\left\lvert
z\right\rvert^{2}+it},\frac{-t}{\left\lvert
z\right\rvert^{4}+t^{2}}\right)\;.$
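One can verify the involution property by a direct computation: writing $w=-\left\lvert z\right\rvert^{2}+it$, the map takes the form
$(z,t)\mapsto\left(\frac{z}{w},\operatorname{Im}\frac{1}{w}\right)\;,$
and since $\operatorname{Re}(1/w)=-\left\lvert z/w\right\rvert^{2}$, the image point has parameter $-\left\lvert z/w\right\rvert^{2}+i\operatorname{Im}(1/w)=1/w$; applying the map once more therefore returns $\left(\frac{z/w}{1/w},\operatorname{Im}w\right)=(z,t)$.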
But for a non-integrable Heisenberg model $\mathcal{H}_{P}$, every CR
automorphism of $\mathcal{H}_{P}\setminus\\{0\\}$ is extended to a global CR
automorphism of $\mathcal{H}_{P}$ (Proposition 3.3 in [4]).
As an application of Lemma 2.6, we can describe the CR automorphism group of
the Heisenberg group model (see [4]).
###### Theorem 2.7.
Let $\mathcal{H}_{P}$ be a Heisenberg group model.
* $(1)$
$\mathcal{H}_{P}$ is CR equivalent to a Heisenberg group model
$\mathcal{H}_{P^{\prime}}$ if and only if there is a unitary matrix
$U=({U}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}})\in\mathrm{U}(n)$ with
$P_{\alpha\beta}={U}^{\phantom{\alpha}\lambda}_{\alpha\phantom{\lambda}}P^{\prime}_{\lambda\mu}{U}^{\phantom{\beta}\mu}_{\beta\phantom{\mu}}$.
* $(2)$
The isotropy group
$\mathrm{Aut}_{0}(\mathcal{H}_{P})=\left\\{F\in\mathrm{Aut}(\mathcal{H}_{P}):F(0)=0\right\\}$
can be decomposed as
$\mathrm{Aut}_{0}(\mathcal{H}_{P})=\mathrm{Aut}_{0}(\mathcal{H}_{P},\theta_{P})\oplus\\{D_{\tau}\\}$
where $\mathrm{Aut}_{0}(\mathcal{H}_{P},\theta_{P})$ is the pseudo-Hermitian
isotropy group.
* $(3)$
If $P=0$ equivalently $J_{P}$ is integrable, the pseudo-Hermitian isotropy
group $\mathrm{Aut}_{0}(\mathcal{H}_{0},\theta_{0})$ is isomorphic to the
unitary group $\mathrm{U}(n)$. If $P\neq 0$, then
$\mathrm{Aut}_{0}(\mathcal{H}_{P},\theta_{P})\simeq\left\\{U\in\mathrm{U}(n):U^{t}PU=P\right\\}\;.$
More precisely, every element of
$\mathrm{Aut}_{0}(\mathcal{H}_{P},\theta_{P})$ is of the form
$(z,t)\mapsto(U(z),t)$
for some $U\in\mathrm{U}(n)$.
## 3\. Contracting CR automorphism
In this section, we introduce the CR contraction and its stable manifold and
show that there is a canonical contact form associated with the CR contraction.
### 3.1. The CR contractions and the stable manifolds
Let $(M,\theta)$ be a pseudo-Hermitian manifold. We say that a CR automorphism
$\varphi$ of $M$ is _(weakly) contracting_ at $o\in M$ if
$\varphi^{*}\theta|_{o}=\mu\,\theta|_{o}$
for some real $0<\mu<1$. This definition is independent of the choice of contact
form. Let $(\theta^{\alpha})$ be an admissible coframe for $\theta$ satisfying
$g_{\alpha\bar{\beta}}(o)=\delta_{\alpha\bar{\beta}}\;.$
Since $\varphi$ preserves the CR structure, we can write
(3.1)
$\varphi^{*}\theta^{\alpha}=a^{\alpha}_{\beta}\theta^{\beta}+c^{\alpha}\theta,$
for some functions $a^{\alpha}_{\beta}$ and $c^{\alpha}$. Then from (2.1), we
see that
$\sum_{\gamma}a^{\gamma}_{\alpha}a^{\bar{\gamma}}_{\bar{\beta}}=\mu\delta_{\alpha\bar{\beta}}\quad\text{at
$o$.}$
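In matrix form this identity says $A^{*}A=\mu I$ at $o$, i.e. $\mu^{-1/2}A$ is unitary at $o$.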
Therefore, the matrix $A=(a^{\alpha}_{\beta})$ is a normal operator (hence,
diagonalizable) and its eigenvalues have modulus $\sqrt{\mu}$ at $o$. By a
unitary change of the frame, we may assume
$A=(a^{\alpha}_{\beta})=\begin{pmatrix}\lambda_{1}&0&\cdots&0\\\
0&\lambda_{2}&\cdots&0\\\ \vdots&&\ddots&\vdots\\\
0&\cdots&0&\lambda_{n}\end{pmatrix}$
at $o$, where $|\lambda_{\alpha}|=\sqrt{\mu}$ for $\alpha=1,...,n$.
###### Proposition 3.1.
There exists a contact form $\theta$ such that $c^{\alpha}$ in (3.1) vanishes
at $o$.
###### Proof.
Let $\theta$ be any pseudo-Hermitian structure on $M$. Assume again that
$(\theta^{\alpha})$ be an admissible coframe for $\theta$ such that
$g_{\alpha\bar{\beta}}(o)=\delta_{\alpha\bar{\beta}}$ and
$A=(a^{\alpha}_{\beta})$ in (3.1) is diagonal at $o$. Let
$\tilde{\theta}=e^{2f}\theta$ be a pseudoconformal change of $\theta$, where
$f$ is a real-valued smooth function with $f(o)=0$ and let
$(\tilde{\theta}^{\alpha})$ be its admissible coframe defined by
$\tilde{\theta}^{\alpha}=e^{f}(\theta^{\alpha}+v^{\alpha}\theta)$ as in
Proposition 2.1. When we let
$\varphi^{*}\tilde{\theta}^{\alpha}=\tilde{a}_{\beta}^{\alpha}\tilde{\theta}^{\beta}+\tilde{c}^{\alpha}\tilde{\theta}$,
it follows that $\tilde{a}_{\beta}^{\alpha}\equiv a_{\beta}^{\alpha}$ and
$\tilde{c}^{\alpha}=c^{\alpha}-(a^{\alpha}_{\beta}-\mu\delta^{\alpha}_{\beta})v^{\beta}\quad\text{at
$o$.}$
We want to have
$\varphi^{*}\tilde{\theta}^{\alpha}=a^{\alpha}_{\beta}\tilde{\theta}^{\beta}\quad\text{at
$o$.}$
From (3.1) and Proposition 2.1, $v=(v^{\alpha}(o))$ must be a solution of
$(A-\mu I)v=c,$
where $c=(c^{\alpha}(o))$. Note that the matrix $A-\mu I$ is invertible at $o$
since the eigenvalues of $A$ have modulus $\sqrt{\mu}>\mu$. Therefore, the
above equation is uniquely solved for given $c$, and by taking $f$ whose
derivative $(f_{\alpha})$ at $o$ gives rise to the solution
$v=(v^{\alpha}(o))$ of (2.2), we see that $\tilde{\theta}$ satisfies the
condition of this proposition. ∎
From this proposition, we may assume that
(3.2) $\varphi^{*}|_{o}=\begin{pmatrix}\mu&0&\cdots&0\\\
0&\lambda_{1}&\cdots&0\\\ \vdots&&\ddots&\vdots\\\
0&\cdots&0&\lambda_{n}\end{pmatrix}$
with respect to $\theta,\theta^{\alpha}$.
Note that $TM=H\oplus\mathbb{R}T$. With regard to this decomposition, we can
define a Riemannian metric $ds^{2}$ on $M$ so that
$ds^{2}(T,T)=1,\quad ds^{2}(T,X)=0,\quad ds^{2}(X,Y)=d\theta(X,JY)$
for any $X,Y\in H$. For $x\in M$, we denote by $|x|$ the geodesic distance
from $o$ to $x$. Then (3.2) implies that
(3.3) $|\varphi(x)|\leq\eta|x|$
for some $0<\eta<1$ if $x$ is in a sufficiently small neighborhood $U$ of $o$.
Let $\mathcal{W}$ be the _stable manifold_ of $\varphi$. That is,
$x\in\mathcal{W}$ if and only if there exists a neighborhood $V$ of $x$ such
that $\varphi^{k}\rightarrow o$ uniformly on $V$ as $k\rightarrow\infty$. From
(3.3), it turns out $\mathcal{W}$ is a nonempty open subset of $M$. In fact,
it can be seen easily that
$\mathcal{W}=\bigcup_{k\in\mathbb{Z}}\varphi^{k}(U).$
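Indeed, $U\subset\mathcal{W}$ by (3.3) and $\mathcal{W}$ is invariant under $\varphi$ and $\varphi^{-1}$; conversely, if $x\in\mathcal{W}$ then $\varphi^{k}(x)\in U$ for some $k\geq 0$, that is, $x\in\varphi^{-k}(U)$.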
### 3.2. The canonical contact form of the CR contraction
Let $M$ be a strongly pseudoconvex almost CR manifold and $\varphi$ be a CR
contraction at $o\in M$. We have $0<\mu<1$ with
$\varphi^{*}\theta|_{o}=\mu\theta|_{o}$ for any contact form $\theta$. If a
contact form $\theta$ on the stable manifold $\mathcal{W}$ satisfies
(3.4) $\varphi^{*}\theta=\mu\theta,$
we call $\theta$ a _canonical contact form_ of $\varphi$.
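For example, on a Heisenberg group model the dilation $\Lambda(z,t)=(\sqrt{\mu}\,z,\mu t)$ of (4.1) below satisfies $\Lambda^{*}\theta_{P}=\mu\theta_{P}$, so $\theta_{P}$ itself is a canonical contact form of $\Lambda$, with stable manifold $\mathcal{W}=\mathcal{H}_{P}$.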
###### Proposition 3.2.
There exists a unique (up to constant multiple) continuous canonical contact
form $\theta$ on $\mathcal{W}$.
###### Proof.
We first show the uniqueness. Suppose that $\theta_{1}$ and $\theta_{2}$ are
two continuous contact forms satisfying (3.4) and that
$\theta_{1}|_{o}=\theta_{2}|_{o}$. Then $\theta_{1}-\theta_{2}=u\theta_{1}$
for some continuous function $u$ on $\mathcal{W}$ with $u(o)=0$. Then for any
$x\in\mathcal{W}$,
$u(\varphi(x))\mu\theta_{1}|_{x}=\varphi^{*}(u\theta_{1})|_{x}=\varphi^{*}(\theta_{1}-\theta_{2})|_{x}=\mu(\theta_{1}-\theta_{2})|_{x}=u(x)\mu\theta_{1}|_{x}.$
Therefore, we have $u(\varphi(x))=u(x)$ for every $x\in\mathcal{W}$.
Consequently,
$u(x)=u(\varphi^{k}(x))\rightarrow u(o)=0$
as $k\rightarrow\infty$ for every $x\in\mathcal{W}$.
Let $\tilde{\theta}$ be any smooth contact form on $M$. Let
$\theta_{k}=\frac{1}{\mu^{k}}\,(\varphi^{k})^{*}\tilde{\theta}$
for $k=1,2,...$. If we denote
$\theta_{k}=u_{k}\tilde{\theta},$
then it turns out that
$u_{k}(x)=\frac{1}{\mu^{k}}\,v(\varphi^{k-1}(x))\cdots
v(x)=\prod_{j=1}^{k}a_{j}(x),$
where $\varphi^{*}\tilde{\theta}=v\tilde{\theta}$ and
$a_{j}(x)=v(\varphi^{j-1}(x))/\mu$. Since $\varphi^{k}\rightarrow o$ locally
uniformly on $\mathcal{W}$, it suffices to show that $u_{k}$ converges
uniformly on $U$ to guarantee the convergence of $\theta_{k}$ on
$\mathcal{W}$. Note that the infinite product $\prod a_{j}(x)$ converges
absolutely and uniformly on $U$ if the infinite series
$\sum|a_{j}(x)-1|$ does. Since $0<v(x)\leq\mu+C|x|$ for some constant $C$ in $U$,
we see that
$0<a_{j}(x)\leq 1+C|\varphi^{j-1}(x)|\leq 1+C|x|\eta^{j-1}$
on $U$, for some constant $C>0$ by (3.3). Since $0<\eta<1$, we conclude that
$\sum_{j}|a_{j}-1|\leq C|x|\sum_{j}\eta^{j-1}$ converges uniformly on $U$.
This yields that $u_{k}\rightarrow u$ locally uniformly on $\mathcal{W}$ for
some positive continuous function $u$ on $\mathcal{W}$. It is obvious that
$\theta=u\tilde{\theta}$ satisfies (3.4). ∎
The next proposition implies that the canonical contact form $\theta$ in
Proposition 3.2 is smooth.
###### Proposition 3.3.
A canonical contact form $\theta$ induced by a contracting CR automorphism
$\varphi$ is indeed $C^{\infty}$-smooth on $\mathcal{W}$.
###### Proof.
We will prove that for every positive integer $s$, $\\{D^{(s)}u_{k}:k\geq 0\\}$ is
locally uniformly bounded on $\mathcal{W}$, where $D^{(s)}$ represents the
$s$-th order differential operator. We assume $\tilde{\theta}$ and
$\theta^{\alpha}$ were chosen such that $\varphi^{*}$ has the form of (3.2) at
$o$. Let $T$ be the characteristic vector field for $\tilde{\theta}$ and
$Z_{\alpha}$ be the dual frame for $\\{\theta^{\alpha}\\}$. Then there exist
local coordinates $(z^{1},...,z^{n},t)\in\mathbb{C}^{n}\times\mathbb{R}$
such that
$\left.\frac{\partial}{\partial
z^{\alpha}}\right|_{o}=Z_{\alpha}|_{o},\quad\left.\frac{\partial}{\partial
t}\right|_{o}=T|_{o}.$
Therefore, the Jacobian of $\varphi$ in this coordinate system has the same
form as (3.2) at $o$. Shrinking the neighborhood $U$ of $o$ if necessary, we
may assume
(3.5) $|D\varphi|\leq\eta$
on $U$ for some $0<\eta<1$, where $D$ is the differential operator in this
coordinate system.
Claim. For each positive integer $s$, there exists a polynomial $P_{s}$
depending on $s$ and $\|\varphi\|_{(s)}$ such that
$|D^{(s)}\varphi^{j}|\leq P_{s}(j)\eta^{j-s+1}$
on $U$ whenever $j\geq s$, where $\|\cdot\|_{(s)}$ denotes the
$C^{s}(U)$-norm.
Assume for a while that this claim is true. Since
$a_{j}=v(\varphi^{j-1})/\mu$, if $j\geq s$, then
(3.6)
$|D^{(s)}a_{j}|=\mu^{-1}|D^{(s)}(v(\varphi^{j-1}))|\leq\widetilde{P}_{s}(j)\eta^{j-s}$
on $U$ for some polynomial $\widetilde{P}_{s}$ depending on $\|v\|_{(s)}$ and
$P_{1},...,P_{s}$, from the chain rule and Claim.
Now, we finish the proof of this proposition. Recall that
$u_{k}=\prod_{j=1}^{k}a_{j}.$ Therefore,
$\displaystyle D^{(s)}u_{k}$ $\displaystyle=$
$\displaystyle\sum_{j}D^{(s)}a_{j}\prod_{l\neq j}a_{l}+\sum_{j_{1}\neq
j_{2}}D^{(s-1)}a_{j_{1}}Da_{j_{2}}\prod_{l\neq j_{1},l\neq j_{2}}a_{l}$
$\displaystyle+\cdots+{\sum_{j_{1},...,j_{s}}}^{\prime}Da_{j_{1}}\cdots
Da_{j_{s}}\prod_{l\neq j_{1},...,l\neq j_{s}}a_{l},$
where ${\sum}^{\prime}$ means the summation over mutually distinct indices.
Taking $U$ sufficiently small, we may assume $\mu/2\leq v(x)\leq 2\mu$ for
$x\in U$. Then we have $a_{j}\geq 1/2$ on $U$ for every $j$. Since the
infinite product $\prod_{j}a_{j}$ converges uniformly on $U$, we see that
$\prod_{l\neq j_{1},...,l\neq j_{s}}a_{l}=a_{j_{1}}\cdots
a_{j_{s}}\prod_{l}a_{l}\leq 2^{s}\prod_{l}a_{l}\leq C<\infty$
on $U$ for some constant $C>0$. Therefore from (3.6),
$|D^{(s)}u_{k}|\leq C\sum_{j}\widetilde{P}_{s}(j)\eta^{j-s}\leq
C^{\prime}<\infty$
on $U$ for some constants $C$ and $C^{\prime}$ independent of $k$. This
implies that $\\{D^{(s)}u_{k}:k\geq 1\\}$ is locally uniformly bounded on
$\mathcal{W}$. Therefore, for each $s>1$, we have a subsequence of
$\\{u_{k}\\}$ convergent to $u$ in local $C^{s-1}$-sense. Since $s$ is
arbitrary, we conclude that $u$ is $C^{\infty}$-smooth.
Now we prove the claim above. In case $s=1$, it turns out easily that
$|D\varphi^{j}|\leq\eta^{j}$
on $U$ for every $j\geq 1$, from the chain rule and (3.5). If $s=2$, then
$\displaystyle D^{(2)}\varphi^{j}$ $\displaystyle=$ $\displaystyle
D(D\varphi(\varphi^{j-1})\cdot D\varphi^{j-1})$ $\displaystyle=$
$\displaystyle D^{(2)}\varphi(\varphi^{j-1})\cdot D\varphi^{j-1}\cdot
D\varphi^{j-1}+D\varphi(\varphi^{j-1})\cdot D^{(2)}\varphi^{j-1}.$
Therefore,
(3.7)
$|D^{(2)}\varphi^{j}|\leq\|\varphi\|_{(2)}\eta^{2j-2}+\eta|D^{(2)}\varphi^{j-1}|\leq\|\varphi\|_{(2)}\eta^{j-1}+\eta|D^{(2)}\varphi^{j-1}|.$
for every $j\geq 2$. If $j=2$, then $|D^{(2)}\varphi^{2}|\leq
2\|\varphi\|_{(2)}\eta.$ Therefore, if we choose
$P_{2}(j)=\|\varphi\|_{(2)}j$, the recursive relation (3.7) implies that
$|D^{(2)}\varphi^{j}|\leq P_{2}(j)\eta^{j-1}$
on $U$ for every $j\geq 2$.
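Indeed, the induction is immediate: if $|D^{(2)}\varphi^{j-1}|\leq\|\varphi\|_{(2)}(j-1)\eta^{j-2}$, then (3.7) gives
$|D^{(2)}\varphi^{j}|\leq\|\varphi\|_{(2)}\eta^{j-1}+\eta\,\|\varphi\|_{(2)}(j-1)\eta^{j-2}=\|\varphi\|_{(2)}\,j\,\eta^{j-1}=P_{2}(j)\eta^{j-1}\;.$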
For more general $s>1$, if we have already chosen $P_{1}\equiv
1,P_{2},\ldots,P_{s-1}$, then we can also show that
(3.8) $|D^{(s)}\varphi^{j}|\leq Q(j)\eta^{j-s+1}+\eta|D^{(s)}\varphi^{j-1}|$
on $U$ for $j\geq s$, where $Q(j)$ is a polynomial in $j$ determined by
$\|\varphi\|_{(s)}$ and $P_{1},...,P_{s-1}$. Therefore, if $j=s$, then
$|D^{(s)}\varphi^{j}|\leq C_{s}\eta$ for some $C_{s}$ depending on
$\|\varphi\|_{(s)}$, and if we choose $P_{s}$ a polynomial with degree greater
than that of $Q$ such that
$P_{s}(s)\geq C_{s},\quad P_{s}(j)\geq Q(j)+P_{s}(j-1),$
then the relation (3.8) yields the conclusion. ∎
## 4\. Proofs of main theorems
In this section, we will prove Theorem 1.3 and characterize the ambient
manifold $M$ as the standard sphere under further assumption that the
contracting automorphism has another fixed point which is contracting for the
inverse map.
Proof of Theorem 1.3. Assume that $M$ is a strongly pseudoconvex almost CR
manifold and $\varphi$ is a contracting CR automorphism at $o\in M$. Let
$\theta$ be a canonical pseudo-Hermitian structure on $\mathcal{W}$ as
obtained in Section 3.2 so that
$\varphi^{*}\theta=\mu\theta$
on the stable manifold $\mathcal{W}$ for some constant $0<\mu<1$. We denote by
$\|\cdot\|_{\theta}$ the norm of tensors measured by $\theta$. For instance,
for
$T={T}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma}\theta^{\beta}\wedge\theta^{\gamma}\otimes
Z_{\alpha}$,
$\|T\|_{\theta}^{2}={T}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma}{T}^{\phantom{\bar{\rho}}\bar{\eta}}_{\bar{\rho}\phantom{\bar{\eta}}\bar{\sigma}}\,g_{\alpha\bar{\eta}}g^{\beta\bar{\rho}}g^{\gamma\bar{\sigma}}.$
Obviously, $\|\cdot\|_{\theta}$ does not depend on the choice of coframe
$\\{\theta^{\alpha}\\}$.
Let $U$ be a sufficiently small neighborhood of $o$. Since $\mu$ is constant,
$\varphi^{-k}(U)$ strictly increases as $k$ increases. Therefore, for a point
$x\in\mathcal{W}$, there exists $k_{0}\geq 1$ such that $x\in\varphi^{-k}(U)$
for every $k\geq k_{0}$. Let
$\theta_{k}=(\varphi^{-k})^{*}\theta=\mu^{-k}\theta$. Since
$\varphi^{k}:(\varphi^{-k}(U),\theta)\rightarrow(U,\theta_{k})$
is a pseudo-Hermitian equivalence, we have
$\|T(x)\|_{\theta}=\|T_{k}(x_{k})\|_{\theta_{k}},$
where $x_{k}=\varphi^{k}(x)\in U$ and $T_{k}$ is the torsion tensor for
$\theta_{k}$ on $U$. Since $\theta_{k}=\mu^{-k}\theta$ a psudoconformal change
of $\theta$, Proposition 2.3 yields that
$\|T_{k}(x_{k})\|_{\theta_{k}}=\mu^{k/2}\|T(x_{k})\|_{\theta}.$
Therefore,
$\|T(x)\|_{\theta}=\mu^{k/2}\|T(x_{k})\|_{\theta}\leq\mu^{k/2}\sup_{y\in
U}\|T(y)\|_{\theta}\rightarrow 0$
as $k\rightarrow\infty$. This means that $T\equiv 0$ on $\mathcal{W}$.
Similarly, we can show that
${N}^{\phantom{\bar{\beta}}\alpha}_{\bar{\beta}\phantom{\alpha}\bar{\gamma}}={A}^{\phantom{}\alpha}_{\phantom{\alpha}\bar{\beta}}={B}^{\phantom{}\alpha}_{\phantom{\alpha}\beta}=p_{\alpha\beta;\gamma}={R}^{\phantom{\beta}\alpha}_{\beta\phantom{\alpha}\gamma\bar{\sigma}}\equiv
0$
on $\mathcal{W}$. Therefore, we can conclude that $(M,\theta)$ is locally
equivalent to $(\mathcal{H}_{P},\theta_{P})$ from Proposition 2.5. We may
assume $(U,\theta)$ is pseudo-Hermitian equivalent to $(V,\theta_{P})$ for
some neighborhood $V$ of $0$ in $\mathcal{H}_{P}$ and let
$F:(U,\theta)\rightarrow(V,\theta_{P})$
be a pseudo-Hermitian equivalence map such that $F(o)=0$. Let
$\Lambda:\mathcal{H}_{P}\rightarrow\mathcal{H}_{P}$ be the contracting CR
automorphism on $\mathcal{H}_{P}$ defined by
(4.1) $\Lambda(z,t)=(\sqrt{\mu}\,z,\mu t).$
Note that
$\Lambda^{*}\theta_{P}=\mu\theta_{P}.$
Therefore, the map
$F_{k}:=\Lambda^{-k}\circ
F\circ\varphi^{k}:\varphi^{-k}(U)\rightarrow\Lambda^{-k}(V)$
is a pseudo-Hermitian equivalence between $(\varphi^{-k}(U),\theta)$ and
$(\Lambda^{-k}(V),\theta_{P})$. Since $\\{F_{k}:k\geq 1\\}$ is not compactly
divergent ($F_{k}(o)=0$ for all $k$), we conclude that $\\{F_{k}\\}$ has a
subsequence converging to a pseudo-Hermitian equivalence
$\widetilde{F}:(\mathcal{W},\theta)\rightarrow(\mathcal{H}_{P},\theta_{P})$.
Altogether, we have proved Theorem 1.3. ∎
###### Remark 4.1.
Unlike Theorem 1.2, it is not very clear how to characterize the ambient
manifold $M$ of arbitrary dimension. According to former results in [10] and
[4], it may be natural to expect either $M=\mathcal{W}\simeq\mathcal{H}_{P}$
for some $P$ in case $M$ is noncompact, or $M$ is CR equivalent with the
standard sphere and
$\mathcal{W}=M\setminus\\{\mbox{pt}\\}\simeq\mathcal{H}_{0}$ in case $M$ is
compact. In [10, 4], this global characterization could be done from the
derivative estimates obtained from a PDE theory on the CR or subconformal
Yamabe equations (see Proposition 2.1 and 2.1’ in [10]). On the other hand, it
is still unknown whether there exists a curvature invariant which satisfies a
Yamabe-type equation under pseudoconformal changes when $M$ is a
strongly pseudoconvex almost CR manifold of arbitrary dimension.
If $M$ is the standard sphere and if $\varphi$ is a contracting CR
automorphism with a contracting fixed point $o$, then $\varphi^{-1}$ is also a
contracting CR automorphism with another contracting fixed point $o^{\prime}$.
The next theorem states that the converse is also true even in almost CR
cases.
###### Theorem 4.2.
Let $M$ be a strongly pseudoconvex almost CR manifold and let $\varphi$ be a
contracting CR automorphism with a contracting fixed point $o$. Let
$\mathcal{W}$ be the stable manifold of $\varphi$ with respect to $o$. Suppose
that $\mathcal{W}\neq M$ and that there is a contracting fixed point
$o^{\prime}\in\partial\mathcal{W}$ of $\varphi^{-1}$. Then $M$ is CR
equivalent to the standard sphere $S^{2n+1}\subset\mathbb{C}^{n+1}$.
###### Proof.
Let $\widetilde{F}:(\mathcal{W},\theta)\to(\mathcal{H}_{P},\theta_{P})$ be a
pseudo-Hermitian equivalence for the canonical contact form $\theta$ of $o$.
Applying Theorem 1.3, we have also a pseudo-Hermitian equivalence
$\widetilde{F}^{\prime}:(\mathcal{W}^{\prime},\theta^{\prime})\rightarrow(\mathcal{H}_{P^{\prime}},\theta_{P^{\prime}})$
where $\mathcal{W}^{\prime}$ is the stable manifold of $o^{\prime}$ with
respect to $\varphi^{-1}$ and $\theta^{\prime}$ is the canonical contact form
on $\mathcal{W}^{\prime}$ with
$(\varphi^{-1})^{*}\theta^{\prime}=\mu^{\prime}\theta^{\prime}$
for some $0<\mu^{\prime}<1$. Let
$\mathcal{V}=\mathcal{W}\cap\mathcal{W}^{\prime}$ which is an open subset of
$M$ admitting $o,o^{\prime}$ as boundary points. Take a point
$p\in\mathcal{V}$. Then
(4.2) $\varphi^{k}(p)\to o\quad\text{and}\quad\varphi^{-k}(p)\to o^{\prime}$
as $k\to\infty$. Let us consider
1. (1)
$\mathcal{U}=\widetilde{F}(\mathcal{V})$,
$\mathcal{U}^{\prime}=\widetilde{F}^{\prime}(\mathcal{V})$ open subsets of
$\mathcal{H}_{P}$ and $\mathcal{H}_{P^{\prime}}$, respectively
2. (2)
$(z_{0},t_{0})=\widetilde{F}(p)\in\mathcal{U}$,
$(w_{0},s_{0})=\widetilde{F}^{\prime}(p)\in\mathcal{U}^{\prime}$,
3. (3)
$\psi=\widetilde{F}\circ\varphi\circ\widetilde{F}^{-1}$,
$\psi^{\prime}=\widetilde{F}^{\prime}\circ\varphi\circ\widetilde{F}^{\prime-1}$,
the CR automorphisms on $\mathcal{H}_{P}$ and
$\mathcal{H}_{P^{\prime}}$ corresponding to $\varphi$, respectively.
Since $\mathcal{U}$ and $\mathcal{U}^{\prime}$ are open in
$\mathbb{C}^{n}\times\mathbb{R}$, we may assume that $z_{0}\neq 0$ and
$w_{0}\neq 0$. Theorem 2.7 implies that
$\psi(z,t)=(\mu^{1/2}U(z),\mu
t)\;,\quad\psi^{\prime}(w,s)=(\mu^{\prime-1/2}U^{\prime}(w),\mu^{\prime-1}s)$
for some $U,U^{\prime}\in\mathrm{U}(n)$. Since $z_{0},w_{0}\neq 0$, we have
$z_{k}=\mu^{k/2}U^{k}(z_{0})\to 0\quad\text{and}\quad
w_{k}=\mu^{\prime-k/2}U^{\prime k}(w_{0})\to\infty$
in $\mathbb{C}^{n}$.
Let us consider the local CR diffeomorphism
$G=\widetilde{F}^{\prime}\circ\widetilde{F}^{-1}:\mathcal{U}\subset\mathcal{H}_{P}\to\mathcal{U}^{\prime}\subset\mathcal{H}_{P^{\prime}}$
and denote by
$G(z,t)=(w,s)=(w^{1},\ldots,w^{n},s)\;.$
Now suppose that $\mathcal{H}_{P}$ is non-integrable and let $\mathcal{U}_{1}$
be the projection image of $\mathcal{U}$ to $\mathbb{C}^{n}$. Then
$w=(w^{1},\ldots,w^{n})$ is independent of $t$ and is a holomorphic mapping
from $\mathcal{U}_{1}$ to $\mathbb{C}^{n}$ by (1) of Lemma 2.6. Moreover
$\displaystyle G\circ\psi^{k}$
$\displaystyle=(\widetilde{F}^{\prime}\circ\widetilde{F}^{-1})\circ(\widetilde{F}\circ\varphi^{k}\circ\widetilde{F}^{-1})$
$\displaystyle=(\widetilde{F}^{\prime}\circ\varphi^{k}\circ\widetilde{F}^{\prime-1})\circ(\widetilde{F}^{\prime}\circ\widetilde{F}^{-1})=\psi^{\prime
k}\circ G$
and
$\displaystyle G(\psi^{k}(z_{0},t_{0}))$
$\displaystyle=\big{(}w(\psi^{k}(z_{0},t_{0})),s(\psi^{k}(z_{0},t_{0}))\big{)}=\big{(}w(z_{k}),s(\psi^{k}(z_{0},t_{0}))\big{)}\;,$
$\displaystyle\psi^{\prime k}(G(z_{0},t_{0}))$ $\displaystyle=\psi^{\prime
k}(w_{0},s_{0})=(\mu^{\prime-k/2}U^{\prime k}(w_{0}),\mu^{\prime-k}s_{0})\;.$
Therefore we have $w(z_{k})=w_{k}=\mu^{\prime-k/2}U^{\prime k}(w_{0})$. As a
conclusion, the holomorphic mapping $w:\mathcal{U}_{1}\to\mathbb{C}^{n}$ cannot
be extended to the origin $0$ of $\mathbb{C}^{n}$; it diverges to infinity there,
namely,
$z_{k}\to 0\quad\text{but}\quad w(z_{k})=w_{k}\to\infty$
in $\mathbb{C}^{n}$. This implies that
$\sum_{\alpha,\beta=1}^{n}\left\lvert\frac{\partial w^{\beta}}{\partial
z^{\alpha}}(z_{k})\right\rvert^{2}\to\infty\quad\text{as $k\to\infty$.}$
This contradicts statement (3) of Lemma 2.6; hence
$\mathcal{H}_{P}$ is integrable. Similarly $\mathcal{H}_{P^{\prime}}$ is also
an integrable Heisenberg group and as a consequence,
$\mathcal{W}\cup\mathcal{W}^{\prime}$ is an open submanifold of $M$ whose CR
structure is integrable. Notice that the automorphism $\varphi$ acts on
$\mathcal{W}\cup\mathcal{W}^{\prime}$ and generates a noncompact orbit with
two fixed points $o$ and $o^{\prime}$. Therefore
$\mathcal{W}\cup\mathcal{W}^{\prime}$ must be CR equivalent to the standard
sphere by Schoen’s theorem [10]. Then the conclusion follows since
$\mathcal{W}\cup\mathcal{W}^{\prime}$ is an open submanifold of $M$ without
boundary and hence $M=\mathcal{W}\cup\mathcal{W}^{\prime}$. ∎
## References
* [1] D. V. Alekseevskiĭ. Groups of conformal transformations of Riemannian spaces. Mat. Sb. (N.S.), 89(131):280–296, 356, 1972.
* [2] J. Ferrand. The action of conformal transformations on a Riemannian manifold. Math. Ann., 304(2):277–291, 1996.
* [3] H. Gaussier and A. Sukhov. On the geometry of model almost complex manifolds with boundary. Math. Z., 254(3):567–589, 2006.
* [4] J.-C. Joo and K.-H. Lee. Subconformal Yamabe equation and automorphism groups of almost CR manifolds. J. Geom. Anal., 25(1):436–470, 2015.
* [5] K.-H. Lee. Domains in almost complex manifolds with an automorphism orbit accumulating at a strongly pseudoconvex boundary point. Michigan Math. J., 54(1):179–205, 2006.
* [6] K.-H. Lee. Strongly pseudoconvex homogeneous domains in almost complex manifolds. J. Reine Angew. Math., 623:123–160, 2008.
* [7] M. Obata. Conformal transformations of Riemannian manifolds. J. Differential Geometry, 4:311–333, 1970.
* [8] M. Obata. The conjectures on conformal transformations of Riemannian manifolds. J. Differential Geometry, 6:247–258, 1971/72.
* [9] J.-P. Rosay. Sur une caractérisation de la boule parmi les domaines de ${\bf C}^{n}$ par son groupe d’automorphismes. Ann. Inst. Fourier (Grenoble), 29(4):ix, 91–97, 1979.
* [10] R. Schoen. On the conformal and CR automorphism groups. Geom. Funct. Anal., 5(2):464–481, 1995.
* [11] A. F. Spiro. Smooth real hypersurfaces in $\mathbb{C}^{n}$ with a non-compact isotropy group of CR transformations. Geom. Dedicata, 67(2):199–221, 1997.
* [12] S. M. Webster. On the transformation group of a real hypersurface. Trans. Amer. Math. Soc., 231(1):179–190, 1977.
* [13] B. Wong. Characterization of the unit ball in ${\bf C}^{n}$ by its automorphism group. Invent. Math., 41(3):253–257, 1977.
# Existence and structure of solutions for general $P$-area minimizing
surfaces
Amir Moradifam (Department of Mathematics, University of California,
Riverside, California, USA. E-mail: <EMAIL_ADDRESS>. Amir Moradifam is supported
by the NSF grant DMS-1953620.) Alexander Rowell (Department of Mathematics,
University of California, Riverside, California, USA. E-mail:
<EMAIL_ADDRESS>)
###### Abstract
We study existence and structure of solutions to the Dirichlet and Neumann
boundary problems associated with minimizers of the functional
$I(u)=\int_{\Omega}({\varphi}(x,Du+F)+Hu)\,dx$, where ${\varphi}(x,\xi)$,
among other properties, is convex and homogeneous of degree $1$ with respect
to $\xi$. We show that there exists an underlying vector field $N$ that
characterizes the existence and structure of all minimizers. In particular we
prove that if $\Omega$ satisfies a so called barrier condition, then $I(u)$
has a minimizer in $BV_{f}(\Omega)=\\{u\in
BV(\Omega):u|_{\partial\Omega}=f\\}$. The results in this paper generalize and
unify many results in the literature on the existence of minimizers of least
gradient problems and $P$-area minimizing surfaces.
## 1 Introduction and Statement of Results
In the last two decades, numerous interesting works have been published on
the existence, uniqueness and regularity of minimizers of functionals of the form
$\int_{\Omega}g(x,Du(x))+k(x,u)\,dx,$
where $g$ is convex and $k$ is locally Lipschitz or identically zero. For
background, we encourage the reader to explore the tree of references stemming
from [7, 8, 12, 13, 16, 23, 33, 24, 25, 26, 39]. This paper is a continuation
of the authors’ work in [33], where the authors proved existence and structure
of minimizers of P-area minimizing surfaces in the Heisenberg group (see also
[12, 33, 39] for background literature on P-minimal surfaces in the Heisenberg
group). Let $\Omega$ be a bounded open set in $\R^{2n}$, and
$X=(x_{1},x^{\prime}_{1},x_{2},x^{\prime}_{2},\dots,x_{n},x^{\prime}_{n})\in\Omega.$
Let $u:\R^{2n}\rightarrow\R$, and consider the graph $(X,u(X))$ in the
Heisenberg group of dimension $2n+1$ with prescribed $p$-mean curvature
$H(X)$. Then $u$ satisfies the equation
$\nabla\cdot\left(\frac{\nabla u-X^{*}}{|\nabla u-X^{*}|}\right)=H,$ (1)
where
$X^{*}=(x^{\prime}_{1},-x_{1},x^{\prime}_{2},-x_{2},\dots,x^{\prime}_{n},-x_{n})$.
Equation (1) is the Euler-Lagrange equation associated to the energy
functional
$\mathbb{E}(u)=\int_{\Omega}\left(|\nabla u-X^{*}|+Hu\right)dx_{1}\wedge
dx^{\prime}_{1}\wedge\dots\wedge dx_{n}\wedge dx^{\prime}_{n}.$ (2)
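Equation (1) arises from (2) by a formal first-variation computation (valid away from the set where $\nabla u=X^{*}$): for every $v\in C_{c}^{\infty}(\Omega)$,
$\left.\frac{d}{ds}\right|_{s=0}\mathbb{E}(u+sv)=\int_{\Omega}\left(\frac{(\nabla u-X^{*})\cdot\nabla v}{|\nabla u-X^{*}|}+Hv\right)dx=\int_{\Omega}\left(H-\nabla\cdot\frac{\nabla u-X^{*}}{|\nabla u-X^{*}|}\right)v\,dx,$
so stationarity of $\mathbb{E}$ for all such $v$ is exactly (1).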
In [33] the authors investigated existence and structure of minimizers of the
more general energy functional
$\mathbb{I}(u)=\int_{\Omega}\left(a|\nabla u+F|+Hu\right)\,dx,$ (3)
under Dirichlet and Neumann boundary conditions and showed that there always
exists a vector field $N$ that determines existence and structure of
minimizers. Here $a\in L^{\infty}(\Omega)$ is a positive function and
$F\in(L^{\infty}(\Omega))^{n}$.
In this paper, we study a more general class of functionals which includes (3)
as a special case, namely
$I(u)=\int_{\Omega}{\varphi}(x,Du+F)+Hu,$ (4)
where ${\varphi}:\Omega\times\R^{n}\rightarrow\R$ is convex, continuous, and
homogeneous of degree 1 with respect to the second argument. Unless
otherwise stated, we assume that $\Omega$ is a bounded open set in $\R^{n}$ with
Lipschitz boundary, $F\in(L^{2}(\Omega))^{n}$, $H\in L^{2}(\Omega)$, and
${\varphi}$ is assumed to satisfy the following conditions
1. ($C_{1}$)
There exists $\alpha>0$ such that
$0\leq{\varphi}(x,\xi)\leq\alpha\left|\xi\right|$ for all $\xi\in\R^{n}$.
2. ($C_{2}$)
$\xi\mapsto{\varphi}(x,\xi)$ is a norm for every $x$.
While it is not generally required, for some of our results we will also assume
that
1. ($C_{3}$)
There exists $\beta>0$ such that
$0\leq\beta\left|\xi\right|\leq{\varphi}(x,\xi)$ for all $\xi\in\R^{n}$.
This problem is of particular interest since the energy functional $I(u)$ is
not strictly convex, which makes the analysis of existence and uniqueness of
minimizers a highly non-trivial problem. The Rockafellar-Fenchel duality shall
play a key role in our study of this problem.
A broad and active area of research is weighted least gradient problems, a
special case of (4) in which $F\equiv 0$, $H\equiv 0$, and
${\varphi}(x,\xi)=a|\xi|$, where $a\in L^{\infty}(\Omega)$ is a positive
function. This sub-class of problems has applications in
conductivity imaging and has been extensively studied by many authors, see
[21, 22, 29, 30, 31, 32, 34, 35, 36, 37, 40, 41, 42, 43]. Another interesting
special case of (4) is when $F\equiv 0$, $H\equiv 0$, and ${\varphi}$ is given
by
${\varphi}(x,\xi)=a(x)\left(\sum_{i,j=1}^{n}\sigma_{0}^{ij}(x)\xi_{i}\xi_{j}\right)^{1/2},$
where $\sigma_{0}=(\sigma^{ij})_{n\times n}$ with $\sigma^{ij}\in
C^{\alpha}(\Omega)$. This problem has applications in imaging of anisotropic
conductivity from interior measurements of the current
density vector field (see [21]). In [14], the authors study the case with
$H\equiv 0$ and show that, under the so-called bounded slope condition, the
minimizers are Lipschitz continuous.
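We note in passing (an elementary observation) that this anisotropic example satisfies ($C_{1}$)–($C_{3}$) whenever $a$ is bounded between positive constants and the symmetric matrix $\sigma_{0}$ is uniformly elliptic, say $c|\xi|^{2}\leq\sum_{i,j}\sigma_{0}^{ij}(x)\xi_{i}\xi_{j}\leq C|\xi|^{2}$ with $0<c\leq C$; then $(\inf a)\sqrt{c}\,|\xi|\leq{\varphi}(x,\xi)\leq(\sup a)\sqrt{C}\,|\xi|$.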
Next we present a few preliminaries which are required to understand the
energy functional (4). For an arbitrary $u\in BV_{loc}(\R^{n})$, an associated
measure ${\varphi}(x,Du+F)$ is defined by
$\int_{A}{\varphi}(x,Du+F)=\int_{A}{\varphi}(x,v^{u}(x))|Du+F|\quad\text{for each bounded Borel set }A,$ (5)
with the vector-valued measure $Du+F$ having a corresponding total variation
measure $|Du+F|$, and $v^{u}(x)=\frac{d(Du+F)}{d|Du+F|}$ is the Radon-Nikodym
derivative. We use standard facts about functions of bounded variation as in
[2], [22], and [29]. For any open set $U$, we also have
$\int_{U}{\varphi}(x,Du+F)=\sup\left\\{\int_{U}(u\nabla\cdot Y-Y\cdot F)dx:Y\in C_{c}^{\infty}(U;\R^{n}),\ \sup_{x}{\varphi}^{0}(x,Y(x))\leq 1\right\\},$
(6)
where ${\varphi}^{0}(x,\xi)$ is the dual norm of ${\varphi}(x,\xi)$ on $\R^{n}$, defined
by
${\varphi}^{0}(x,\xi):=\sup\left\\{\xi\cdot p:{\varphi}(x,p)\leq 1\right\\}.$
As a consequence of condition ($C_{1}$), the dual norm
${\varphi}^{0}(x,\cdot)$ has the equivalent definition
${\varphi}^{0}(x,\xi)=\sup\left\\{\frac{\xi\cdot p}{{\varphi}(x,p)}:p\in\R^{n}\setminus\\{0\\}\right\\}.$ (7)
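For instance, in the weighted isotropic case ${\varphi}(x,\xi)=a(x)|\xi|$ considered above, the Cauchy-Schwarz inequality (with equality at $p=\xi$) gives
${\varphi}^{0}(x,\xi)=\sup_{p\neq 0}\frac{\xi\cdot p}{a(x)|p|}=\frac{|\xi|}{a(x)},$
so the constraint ${\varphi}^{0}(x,Y(x))\leq 1$ in (6) reduces to $|Y(x)|\leq a(x)$.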
###### Remark 1.1
The definition in (6) allows one to define $\int_{\Omega}{\varphi}(x,Du+F)$ for
functions $u\in BV(\Omega)$ that do not belong to $W^{1,1}(\Omega)$. Indeed the
right hand side of (6) is well-defined for any integrable function $u$. To see
the motivation behind the definition (6), suppose $u\in W^{1,1}(\Omega)$ and
${\varphi}^{0}(x,Y)\leq 1$. For $p=\frac{Du+F}{|Du+F|}$ and $\xi=-Y$ it
follows from (7) that
$-Y\cdot\frac{Du+F}{|Du+F|}\leq{\varphi}\left(x,\frac{Du+F}{|Du+F|}\right).$
This implies
$\displaystyle\int_{\Omega}{\varphi}(x,Du+F)$
$\displaystyle=\int_{\Omega}{\varphi}\left(x,\frac{Du+F}{|Du+F|}\right)|Du+F|$
$\displaystyle\geq\int_{\Omega}-Y\cdot\frac{Du+F}{|Du+F|}|Du+F|$
$\displaystyle=\int_{\Omega}-Y\cdot Du-Y\cdot F$
$\displaystyle=\int_{\Omega}(u\nabla\cdot Y-Y\cdot F),\ \ \forall\ \ Y\in
C_{c}^{\infty}(U;\R^{n}).$
It is also easy to see that the inequality above becomes an equality
in the limit for a suitable sequence of functions $Y_{n}\in C_{c}^{\infty}(U;\R^{n}).$
This paper is outlined as follows. In Section 2, we prove existence results
under the Neumann boundary condition. In Section 3, we study existence of
minimizers with Dirichlet boundary condition. Finally, in Section 4 we prove
existence of P-area minimizing surfaces under a so-called barrier condition on
the boundary $\partial\Omega$.
## 2 Existence of minimizers with Neumann boundary condition
In this section we study the minimization problem
$\inf_{u\in\mathring{BV}(\Omega)}I(u):=\int_{\Omega}{\varphi}\left(x,Du+F\right)+Hu,$
(8)
where
$\mathring{BV}(\Omega)=\left\\{u\in BV(\Omega):\int_{\Omega}u=0\right\\}.$
We commence our study of minimizers of (8) by applying the Rockafellar-Fenchel
duality to the problem. Consider the functions
$E:(L^{2}(\Omega))^{n}\rightarrow\R$ and
$G:\mathring{H}^{1}(\Omega)\rightarrow\R$ defined as
$E(b)=\int_{\Omega}{\varphi}\left(x,b+F\right)\hskip
14.22636pt\text{and}\hskip 14.22636ptG(u)=\int_{\Omega}Hu,$
where $\mathring{H}^{1}(\Omega)=\\{u\in H^{1}(\Omega):\int_{\Omega}u=0\\}$. Then
(8) can be equivalently written as
$(P)\hskip 14.22636pt\inf_{u\in\mathring{H}^{1}(\Omega)}\\{E(\nabla
u)+G(u)\\}.$ (9)
The dual problem corresponding to (9), as defined by Rockafellar-Fenchel
duality [15], is
$(D)\hskip
14.22636pt\max_{b\in(L^{2}(\Omega))^{n}}\\{-E^{*}(b)-G^{*}(-\nabla^{*}b)\\}.$
(10)
Note that the convex functions $E$ and $G$ have convex conjugates $E^{*}$ and
$G^{*}$. Furthermore, the gradient operator
$\nabla:\mathring{H}^{1}(\Omega)\rightarrow (L^{2}(\Omega))^{n}$ has a corresponding
adjoint operator $\nabla^{*}$. As computed in [33], we have
$G^{*}(-\nabla^{*}b)=\sup_{u\in\mathring{H}^{1}(\Omega)}\left\\{-\int_{\Omega}\nabla
u\cdot b-\int_{\Omega}Hu\right\\}.$
This can be more explicitly calculated by noting that for all real numbers
$c$, $cu\in\mathring{H}^{1}(\Omega)$ whenever $u\in\mathring{H}^{1}(\Omega)$.
Thus,
$G^{*}(-\nabla^{*}b)=\begin{cases}0&\text{ if }b\in\mathcal{D}_{0},\\\
\infty&\text{ if }b\not\in\mathcal{D}_{0}\end{cases}$ (11)
where
$\mathcal{D}_{0}:=\left\\{b\in(L^{2}(\Omega))^{n}:\int_{\Omega}\nabla u\cdot
b+Hu=0,\ \ \hbox{for all}\ \ u\in\mathring{H}^{1}(\Omega)\right\\}.$ (12)
The computation of $E^{*}(b)$ is done in Lemma 2.1 of [29], which yields
$E^{*}(b)=\begin{cases}-\langle F,b\rangle&\text{ if
}{\varphi}^{0}(x,b(x))\leq 1\ \ \hbox{in}\ \ \Omega\\\ \infty&\text{ otherwise
}.\end{cases}$ (13)
Thus the dual problem can be rewritten as
$(D)\hskip 14.22636pt\sup\\{\langle F,b\rangle:b\in\mathcal{D}_{0}\ \
\hbox{and}\ \ {\varphi}^{0}(x,b(x))\leq 1\ \ \hbox{in}\ \ \Omega\\}.$ (14)
Let the outer unit normal vector to $\partial\Omega$ be denoted by
$\nu_{\Omega}$. For every $b\in(L^{\infty}(\Omega))^{n}$ with $\nabla\cdot b\in
L^{n}(\Omega)$, there is a unique function $[b,\nu_{\Omega}]\in
L^{\infty}_{\mathcal{H}^{n-1}}(\partial\Omega)$ such that
$\int_{\partial\Omega}[b,\nu_{\Omega}]u\,d\mathcal{H}^{n-1}=\int_{\Omega}u\nabla\cdot
b\,dx+\int_{\Omega}b\cdot Du\,dx,\quad u\in C^{1}(\bar{\Omega}).$ (15)
Indeed in [1, 3] it was proved that the integration by parts formula (15)
holds for every $u\in BV(\Omega)$, as $u\mapsto(b\cdot Du)$ gives rise to a
Radon measure on $\Omega$ for $u\in BV(\Omega)$,
$b\in(L^{\infty}(\Omega))^{n}$, and $\nabla\cdot b\in L^{n}(\Omega)$.
###### Lemma 2.1
Let $b\in(L^{\infty}(\Omega))^{n}\cap\mathcal{D}_{0}.$ Then
$\nabla\cdot b=H-\int_{\Omega}Hdx\ \ \hbox{a.e. in}\ \ \Omega,$
and
$[b,\nu_{\Omega}]=0\ \ \mathcal{H}^{n-1}-a.e.\ \ \hbox{ on }\ \
\partial\Omega.$
The above lemma follows directly from equation (15) and the definition of
$\mathcal{D}_{0}$. It also provides the insight that every solution $N$ to the dual
problem (D) satisfies equation $\nabla\cdot N=H-\int_{\Omega}Hdx\ \ \hbox{a.e.
in}\ \ \Omega$. Moreover, at every point on $\partial\Omega$, the unit normal
vector is orthogonal to $N$ in a weak sense.
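To sketch the argument: for $u\in C_{c}^{\infty}(\Omega)$ with $\int_{\Omega}u=0$, the boundary term in (15) vanishes, so membership of $b$ in $\mathcal{D}_{0}$ gives $\int_{\Omega}(H-\nabla\cdot b)u\,dx=0$ for all such $u$, whence $H-\nabla\cdot b$ is a.e. constant; testing (12) with mean-zero $u\in C^{1}(\bar{\Omega})$ that are nonzero on the boundary then forces $[b,\nu_{\Omega}]=0$, and the constant is fixed by integrating $\nabla\cdot b$ over $\Omega$.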
###### Theorem 2.2
Let $\Omega$ be a bounded domain in $\R^{n}$ with Lipschitz boundary,
$F\in(L^{2}(\Omega))^{n}$, $H\in L^{2}(\Omega)$, and
${\varphi}:\Omega\times\R^{n}\rightarrow\R$ be a convex function satisfying
($C_{1}$) and ($C_{2}$). Then the duality gap is zero and the dual problem
$(D)$ has a solution, i.e. there exists a vector field $N\in\mathcal{D}_{0}$
with ${\varphi}^{0}(x,N)\leq 1$ such that
$\inf_{u\in\mathring{H}^{1}(\Omega)}\int_{\Omega}\left({\varphi}\left(x,Du+F\right)+Hu\right)dx=\langle
F,N\rangle.$ (16)
Moreover
${\varphi}\left(x,\frac{Du+F}{|Du+F|}\right)=N\cdot\frac{Du+F}{|Du+F|},\ \ \ \
|Du+F|-a.e.\ \ \hbox{in}\ \ \Omega,$ (17)
for any minimizer $u$ of (9).
Proof. It is easy to see that $I(v)=\int_{\Omega}({\varphi}(x,Dv+F)+Hv)$ is
convex, and $J:(L^{2}(\Omega))^{n}\rightarrow\R$ with
$J(p)=\int_{\Omega}({\varphi}(x,p+F)+Hu_{0})dx$ is continuous at $p=0$, for a
fixed $u_{0}$, due to ($C_{2}$). Thus, the conditions of Theorem III.4.1 in
[15] are satisfied. We infer that the optimization problems (D) and (P) have
the same optimum value, and the dual problem has a solution $N$ such that the
duality gap is zero, i.e., (16) holds.
Now let $u\in\mathring{H}^{1}(\Omega)$ be a minimizer of (9). Since
$N\in\mathcal{D}_{0}$, we have
$\displaystyle\langle N,F\rangle$
$\displaystyle=\int_{\Omega}{\varphi}(x,Du+F)+Hu$
$\displaystyle=\int_{\Omega}{\varphi}\left(x,\frac{Du+F}{|Du+F|}\right)|Du+F|+\int_{\Omega}Hu$
$\displaystyle\geq\int_{\Omega}N\cdot\frac{Du+F}{|Du+F|}|Du+F|+\int_{\Omega}Hu$
$\displaystyle=\int_{\Omega}N\cdot(Du+F)+Hu$
$\displaystyle=\int_{\Omega}N\cdot F+\int_{\Omega}N\cdot Du+Hu$
$\displaystyle=\langle N,F\rangle.$
Hence, the inequality above becomes an equality and (17) holds. $\Box$
###### Remark 2.3
The primal problem $(P)$ may not have a minimizer in
$u\in\mathring{H}^{1}(\Omega)$, but the dual problem $(D)$ always has a
solution $N\in(L^{2}(\Omega))^{n}$. Note also that the functional $I(u)$ is
not strictly convex, and it may have multiple minimizers (see [22]).
Furthermore, Theorem 2.2 asserts that $N$ determines $\frac{Du+F}{|Du+F|}$,
$|Du+F|-$a.e. in $\Omega$, for all minimizers $u$ of $(P)$. More precisely,
since ${\varphi}^{0}(x,N)\leq 1$ almost everywhere in $\Omega$, we have
${\varphi}(x,p)\geq N\cdot p$
for every $p\in S^{n-1}$. Therefore, the equality in (17) indicates that
$\frac{N\cdot p}{{\varphi}(x,p)}$
is maximized by $p=\frac{Du+F}{|Du+F|}$, $|Du+F|-$a.e. In the case that
$F\equiv 0$, $N$ determines the direction of the gradient of $u$
($\frac{Du}{|Du|}$) and hence the structure of the level sets of minimizers to
$(P)$.
We proceed to show that a solution to the primal problem $(P)$ exists in
$BV(\Omega)$ provided that the functional is bounded below. The proof relies on
standard facts about $BV$ functions.
###### Proposition 2.1
Let ${\varphi}:\Omega\times\R^{n}\rightarrow\R$ be a convex function
satisfying ($C_{1}$), ($C_{2}$), and ($C_{3}$). If there exists a constant
$C$, depending on $\Omega$, such that
$\max_{x\in\overline{\Omega}}|H(x)|<C,$ (18)
then the primal problem (P) has a minimizer.
Proof. Consider a minimizing sequence $u_{n}$ of the functional $I(u)$. By
condition ($C_{3}$) we have
$\int_{\Omega}\beta|\nabla u_{n}+F|+Hu_{n}\leq\int_{\Omega}{\varphi}(x,\nabla
u_{n}+F)+Hu_{n}<c,$
for some constant $c$ independent of $n$. Moreover, the triangle inequality
implies
$\int\beta|\nabla u_{n}|-\int\beta|F|-\int|H||u_{n}|\leq\int\beta|\nabla
u_{n}|-\int\beta|F|+\int Hu_{n}\leq\int\beta|\nabla u_{n}+F|+Hu_{n}<c$
and
$\int\beta|\nabla u_{n}|\leq C+\int|H||u_{n}|+\int\beta|F|.$
Applying Poincaré’s inequality implies that there exists a constant
$C_{\Omega}$, independent of $n$, where
$\int\beta|\nabla u_{n}|\leq C+||H||_{L^{\infty}(\Omega)}C_{\Omega}\int|\nabla
u_{n}|+\int\beta|F|$ $\Rightarrow\left(\beta-
C_{\Omega}||H||_{L^{\infty}(\Omega)}\right)\int|\nabla u_{n}|\leq
C+\int\beta|F|.$
Finally,
$\int|\nabla u_{n}|\leq C^{\prime}=\frac{C+\int\beta|F|}{\left(\beta-
C_{\Omega}||H||_{L^{\infty}(\Omega)}\right)}$
provided that $\beta-C_{\Omega}||H||_{L^{\infty}(\Omega)}>0$, or equivalently
$||H||_{L^{\infty}(\Omega)}<C:=\frac{\beta}{C_{\Omega}}.$
It follows from standard compactness results for $BV$ functions that $u_{n}$
has a subsequence, denoted by $u_{n}$ again, such that $u_{n}$ converges
strongly in $L^{1}$ to a function $\hat{u}\in BV$, and $Du_{n}$ converges to
$D\hat{u}$ in the sense of measures. Since the functional $I(u)$ is lower
semicontinuous, $\hat{u}$ is a solution of the primal problem (8). $\Box$
## 3 Existence of minimizers with Dirichlet boundary condition
Now, let us consider minimizers of the main functional with a given Dirichlet
boundary condition on $\partial\Omega$. Let $\Omega$ be a bounded region in $\R^{n}$
with Lipschitz boundary, $F\in(L^{2}(\Omega))^{n}$, $H\in L^{2}(\Omega)$,
$f\in L^{1}(\partial\Omega)$, ${\varphi}:\Omega\times\R^{n}\rightarrow\R$ a
convex function satisfying ($C_{1}$) and ($C_{2}$), and the minimization
problem becomes
$\inf_{u\in
BV_{f}(\Omega)}I(u):=\int_{\Omega}{\varphi}\left(x,Du+F\right)+Hu,$ (19)
where
$BV_{f}(\Omega)=\\{u\in BV(\Omega):u|_{\partial\Omega}=f\\}.$
We perform the substitution $\tilde{F}=F+\nabla f$ to rewrite (19) in terms of
BV functions that are zero on $\partial\Omega$. Since any function in
$L^{1}(\partial\Omega)$ can be extended to a function $f\in W^{1,1}(\Omega)$, we have
$\inf_{u\in
BV_{0}(\Omega)}I(u):=\int_{\Omega}{\varphi}\left(x,Du+\tilde{F}\right)+Hu+\int_{\Omega}Hfdx.$
Note that $\int_{\Omega}Hfdx$ is a constant, which implies (19) can be
represented by the minimization problem (relabeling $\tilde{F}$ as $F$)
$\inf_{u\in
BV_{0}(\Omega)}I(u):=\int_{\Omega}{\varphi}\left(x,Du+F\right)+Hu.$ (20)
In Section 2, boundedness from below of the functional $I(u)$ was sufficient to
provide existence of minimizers in $\mathring{BV}(\Omega)$. This is not the
case for (19), nor (20). The main reason for nonexistence of minimizers is
that for a given minimizing sequence such that $u_{n}\rightarrow\hat{u}$ in
$L^{1}(\Omega)$ and $\hat{u}\in BV(\Omega)$, we have
$I(\hat{u})\leq\inf_{u\in BV_{0}(\Omega)}I(u),$
by the lower semicontinuity of $I(u).$ However, since $\partial\Omega$ is a
set of measure zero, the trace of $\hat{u}$ is not guaranteed to be zero. One
of our main goals in this section is to prove existence of minimizers for the
highly nontrivial problem (20), and in turn for (19).
### 3.1 The Dual Problem
The setup of the dual problem here is identical to that of Section 2, with the
exception of the function space of potential solutions. Let
$E:(L^{2}(\Omega))^{n}\rightarrow\R$ and $G:H^{1}_{0}(\Omega)\rightarrow\R$ be
defined as
$E(b)=\int_{\Omega}{\varphi}\left(x,b+F\right)\hskip
14.22636pt\text{and}\hskip 14.22636ptG(u)=\int_{\Omega}Hu.$
Then (20) can be equivalently written as
$(P^{\prime})\hskip 14.22636pt\inf_{u\in H^{1}_{0}(\Omega)}\\{E(\nabla
u)+G(u)\\}.$ (21)
The dual problem corresponding to (21), as defined by Rockafellar-Fenchel
duality [15], is
$(D^{\prime})\hskip
14.22636pt\sup_{b\in(L^{2}(\Omega))^{n}}\\{-E^{*}(b)-G^{*}(-\nabla^{*}b)\\}.$
(22)
Then $G^{*}(-\nabla^{*}b)$ is given by
$G^{*}(-\nabla^{*}b)=\sup_{u\in H^{1}_{0}(\Omega)}\left\\{-\int_{\Omega}\nabla
u\cdot b-\int_{\Omega}Hu\right\\},$
and more explicitly
$G^{*}(-\nabla^{*}b)=\begin{cases}0&\text{ if
}b\in\widetilde{\mathcal{D}}_{0}\\\ \infty&\text{ if
}b\not\in\widetilde{\mathcal{D}}_{0},\end{cases}$ (23)
where
$\widetilde{\mathcal{D}}_{0}:=\left\\{b\in(L^{2}(\Omega))^{n}:\int_{\Omega}\nabla
u\cdot b+Hu=0,\ \ \hbox{for all}\ \ u\in
H^{1}_{0}(\Omega)\right\\}\subseteq\mathcal{D}_{0}.$ (24)
Finally, we use Lemma 2.1 in [29] to get
$E^{*}(b)=\begin{cases}-\langle F,b\rangle&\text{ if
}{\varphi}^{0}(x,b(x))\leq 1\ \ \hbox{in}\ \ \Omega\\\ \infty&\text{ otherwise
}.\end{cases}$ (25)
We can therefore rewrite the dual problem as
$(D^{\prime})\hskip 14.22636pt\sup\\{\langle
F,b\rangle:b\in\widetilde{\mathcal{D}}_{0}\ \ \hbox{and}\ \
{\varphi}^{0}(x,b(x))\leq 1\ \ \hbox{in}\ \ \Omega\\}.$ (26)
A direct application of the integration by parts formula (15) implies that
$b\in(L^{\infty}(\Omega))^{n}\cap\widetilde{\mathcal{D}}_{0}$ if and only if
$\nabla\cdot b=H\ \ \hbox{a.e. in}\ \ \Omega.$
Next we proceed to prove the analog of Theorem 2.2.
###### Theorem 3.1
Let $\Omega$ be a bounded domain in $\R^{n}$ with Lipschitz boundary,
$F\in(L^{2}(\Omega))^{n}$, $H\in L^{2}(\Omega)$,
${\varphi}:\Omega\times\R^{n}\rightarrow\R$ a convex function satisfying
($C_{1}$), ($C_{2}$), and assume $(P^{\prime})$ is bounded below. Then the
duality gap is zero and the dual problem $(D^{\prime})$ has a solution, i.e.
there exists a vector field $N\in\widetilde{\mathcal{D}}_{0}$ with
${\varphi}^{0}(x,N)\leq 1$ such that
$\inf_{u\in
H_{0}^{1}(\Omega)}\int_{\Omega}\left({\varphi}\left(x,Du+F\right)+Hu\right)dx=\langle
F,N\rangle.$ (27)
Moreover
${\varphi}\left(x,\frac{Du+F}{|Du+F|}\right)=N\cdot\frac{Du+F}{|Du+F|},\ \ \ \
|Du+F|-a.e.\ \ \hbox{in}\ \ \Omega,$ (28)
for any minimizer $u$ of (21).
Proof. It is easy to show that $I(v)=\int_{\Omega}({\varphi}(x,Dv+F)+Hv)$ is
convex, and $J:(L^{2}(\Omega))^{n}\rightarrow\R$ with
$J(p)=\int_{\Omega}({\varphi}(x,p+F)+Hu_{0})dx$ is continuous at $p=0$, for a
fixed $u_{0}$, due to ($C_{2}$). Thus, the conditions of Theorem III.4.1 in
[15] are satisfied. We infer that the optimal values of $(D^{\prime})$ and $(P^{\prime})$ are
equal, and the dual problem has a solution $N$ such that the duality gap is
zero, i.e. (27) holds.
Now let $u\in H^{1}_{0}(\Omega)$ be a minimizer of (21). Since
$N\in\widetilde{\mathcal{D}}_{0}$, we have
$\displaystyle\langle N,F\rangle$
$\displaystyle=\int_{\Omega}{\varphi}(x,Du+F)+Hu$
$\displaystyle=\int_{\Omega}{\varphi}\left(x,\frac{Du+F}{|Du+F|}\right)|Du+F|+\int_{\Omega}Hu$
$\displaystyle\geq\int_{\Omega}N\cdot\frac{Du+F}{|Du+F|}|Du+F|+\int_{\Omega}Hu$
$\displaystyle=\int_{\Omega}N\cdot(Du+F)+Hu$
$\displaystyle=\int_{\Omega}N\cdot F+\int_{\Omega}N\cdot Du+Hu$
$\displaystyle=\langle N,F\rangle.$
Hence the inequality becomes an equality and (28) holds. $\Box$
###### Remark 3.2
Similar to the comments we made in Remark 2.3, the primal problem
$(P^{\prime})$ may not have a minimizer in $H_{0}^{1}(\Omega)$, but the dual
problem $(D^{\prime})$ always has a solution $N\in(L^{2}(\Omega))^{n}$. Note
also that the functional $I(u)$ is not strictly convex, and it may have
multiple minimizers (see [22]). Furthermore, Theorem 3.1 asserts that $N$
determines $\frac{Du+F}{|Du+F|}$, $|Du+F|-$a.e. in $\Omega$, for all
minimizers $u$ of $(P^{\prime})$. See Remark 2.3 for more details.
### 3.2 The relaxed problem
Now we investigate the existence of minimizers for the relaxed problem
associated to (20), namely
$\inf_{u\in A_{0}}I(u)=\inf_{u\in
A_{0}}\int_{\Omega}({\varphi}(x,Du+F)+Hu)dx+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|u|ds,$
(29)
where
$A_{0}:=\left\\{u\in BV(\R^{n}):u=0\text{ in }\Omega^{c}\right\\}.$
The benefit of considering the relaxed problem above is that any minimizing
sequence of (29) converges (up to a subsequence) to a minimizer in $A_{0}$. This convergence result
is not guaranteed for (20). It can be easily verified that Proposition 2.1 can
be adapted to the relaxed problem, and (29) has a solution in $A_{0}$ when
bounded below.
###### Proposition 3.1
Let ${\varphi}:\Omega\times\R^{n}\rightarrow\R$ be a convex function
satisfying ($C_{1}$), ($C_{2}$), and ($C_{3}$). If there exists a constant
$C$, depending on $\Omega$, such that
$\max_{x\in\overline{\Omega}}|H(x)|<C,$ (30)
then the relaxed problem (29) has a minimizer in $A_{0}$.
Proof. Note that $\hat{u}\in A_{0}$ whenever $u_{n}\in A_{0}$ converges to
$\hat{u}$ in $L^{1}(\Omega)$. Then the proof follows as outlined in
Proposition 2.1. $\Box$
The stage is now set for the major result of this section. While proving
existence of minimizers to (20) is very difficult, the following theorem
demonstrates how problems (20) and (29) are related.
###### Theorem 3.3
Let $\Omega\subset\R^{n}$ be a bounded open set with Lipschitz boundary,
$F\in(L^{2}(\Omega))^{n}$, $H\in L^{2}(\Omega)$, and
${\varphi}:\Omega\times\R^{n}\rightarrow\R$ a convex function satisfying
($C_{1}$), ($C_{2}$), and ($C_{3}$). If the minimization problem (20) is
bounded below, then
$\min_{u\in
A_{0}}\left(\int_{\Omega}({\varphi}(x,Du+F)+Hu)dx+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|u|ds\right)=\inf_{\begin{subarray}{c}u\in
BV_{0}(\Omega)\end{subarray}}\int_{\Omega}{\varphi}(x,Du+F)+Hu$ (31)
Moreover, if $u$ is a minimizer of (29), then
${\varphi}(x,\nu_{\Omega})=[N,\operatorname{sign}(-u)\nu_{\Omega}]\ \ \mathcal{H}^{n-1}-a.e.\ \ \hbox{on}\ \ \partial\Omega.$ (32)
Proof. It can be easily shown that $BV_{0}(\Omega)$ has a continuous embedding
into $A_{0}$, which implies
$\min_{u\in
A_{0}}\left(\int_{\Omega}({\varphi}(x,Du+F)+Hu)dx+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|u|ds\right)\leq\inf_{\begin{subarray}{c}u\in
BV_{0}(\Omega)\end{subarray}}\int_{\Omega}{\varphi}(x,Du+F)+Hu.$
It follows from Theorem 3.1 that there exists a vector field
$N\in\widetilde{\mathcal{D}}_{0}$ with
${\varphi}\left(x,\frac{Du+F}{|Du+F|}\right)=N\cdot\frac{Du+F}{|Du+F|},\ \ \ \
|Du+F|-a.e.\ \ \hbox{in}\ \ \Omega.$
Consider a minimizer $u$ of the relaxed problem with
$u|_{\partial\Omega}=h|_{\partial\Omega}$, where $h\in W^{1,1}(\Omega)$. Since
$u-h$ vanishes on $\partial\Omega$ and $N\in\widetilde{\mathcal{D}}_{0}$, we have
$\displaystyle\min_{u\in A_{0}}(\int_{\Omega}({\varphi}(x,Du+F)+Hu)dx$
$\displaystyle+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|u|ds)=\int_{\Omega}{\varphi}(x,Du+F)+Hu+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|u|$
$\displaystyle=\int_{\Omega}{\varphi}\left(x,\frac{Du+F}{|Du+F|}\right)|Du+F|+\int_{\Omega}Hu+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|u|$
$\displaystyle\geq\int_{\Omega}N\cdot\frac{Du+F}{|Du+F|}|Du+F|+\int_{\Omega}Hu+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|u|$
$\displaystyle=\int_{\Omega}N\cdot(Du+F)+Hu+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|u|$
$\displaystyle=\int_{\Omega}N\cdot F+\int_{\Omega}N\cdot
Du+Hu+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|u|$
$\displaystyle=\langle N,F\rangle+\int_{\Omega}N\cdot D(u-h)+H(u-h)$
$\displaystyle+\int_{\Omega}N\cdot
Dh+Hh+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|h|$
$\displaystyle=\langle N,F\rangle+\int_{\Omega}N\cdot
Dh+Hh+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|h|$
$\displaystyle=\langle
N,F\rangle+\int_{\partial\Omega}[N,\nu_{\Omega}]h+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|h|$
$\displaystyle\geq\langle N,F\rangle$
$\displaystyle=\inf_{\begin{subarray}{c}u\in
BV_{0}(\Omega)\end{subarray}}\int_{\Omega}{\varphi}(x,Du+F)+Hu.$
The last inequality was achieved using integration by parts and the fact that
${\varphi}^{0}(x,N)\leq
1\implies[N,\nu_{\Omega}]\leq{\varphi}(x,\nu_{\Omega})$. Therefore, (31) holds
and all the inequalities in the above computation are equalities. This
provides the relationship
$\int_{\partial\Omega}[N,\nu_{\Omega}]h+\int_{\partial\Omega}{\varphi}(x,\nu_{\Omega})|h|=0$,
which implies that (32) holds. $\Box$
The next theorem follows directly from Theorem 3.1 and Theorem 3.3.
###### Theorem 3.4
Let $\Omega\subset\R^{n}$ be a bounded open set with Lipschitz boundary,
$F\in(L^{2}(\Omega))^{n}$, $H\in L^{2}(\Omega)$,
${\varphi}:\Omega\times\R^{n}\rightarrow\R$ a convex function satisfying
($C_{1}$), ($C_{2}$), ($C_{3}$), and assume $(P^{\prime})$ is bounded below.
Then there exists a vector field $N\in\widetilde{\mathcal{D}}_{0}$ with
${\varphi}^{0}(x,N)\leq 1$ such that
${\varphi}\left(x,\frac{Du+F}{|Du+F|}\right)=N\cdot\frac{Du+F}{|Du+F|},\ \ \ \
|Du+F|-a.e.\ \ \hbox{in}\ \ \Omega,$ (33)
for any minimizer $u$ of (20). Moreover, every minimizer of (20) is a
minimizer of (29), and if $u$ is a minimizer of (29), then
${\varphi}(x,\nu_{\Omega})=[N,\operatorname{sign}(-u)\nu_{\Omega}]\ \ \ \mathcal{H}^{n-1}-a.e.\ \ \hbox{on}\ \ \partial\Omega.$ (34)
## 4 Existence of minimizers under the Barrier condition
Consider $F\in(L^{1}(\Omega))^{n}$, $H\in L^{\infty}(\Omega)$, and
$\psi:\R^{n}\times BV_{0}(\Omega)\rightarrow\R$ given by
$\psi(x,u):={\varphi}(x,Du+F\chi_{E_{u}})+Hu,$ (35)
with $E_{u}$ representing the closure of the support of $u$ in $\Omega$.
also define the $\psi$-perimeter of $E$ in $A$ by
$P_{\psi}(E;A):=\int_{A}{\varphi}\left(x,D\chi_{E}+F\chi_{E}\right)+H\chi_{E}.$
###### Definition 1
A function $u\in BV(\R^{n})$ is $\psi$-total variation minimizing in
$\Omega\subset\R^{n}$ if
$\int_{\Omega}\psi(x,u)\leq\int_{\Omega}\psi(x,v)\text{ for all }v\in BV(\R^{n})\text{ such that }u=v\text{ a.e. in }\Omega^{c}.$
Also a set $E\subset\R^{n}$ of finite perimeter is $\psi$-area minimizing in
$\Omega$ if
$P_{\psi}(E;\Omega)\leq P_{\psi}(\tilde{E};\Omega)$
for all $\tilde{E}\subset\R^{n}$ such that $\tilde{E}\cap\Omega^{c}=E\cap\Omega^{c}$ a.e.
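For orientation (a simple special case): when ${\varphi}(x,\xi)=|\xi|$, $F\equiv 0$, and $H\equiv 0$, one has $P_{\psi}(E;A)=\int_{A}|D\chi_{E}|$, the classical perimeter of $E$ in $A$, and Definition 1 reduces to the usual notions of total variation minimizing functions and area minimizing sets.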
In order to state the two major results of this section, Theorems 4.3 and 4.5,
we need the following preliminary lemmas. The arguments in this section are
inspired by and similar to those in [33]. For a given function $u\in
BV(\Omega)$, define functions
$u_{1}=\max(u-\lambda,0)\text{ and }u_{2}=u-u_{1},$ (36)
for an arbitrary $\lambda\in\R$. Moving forward we shall use the function
$\chi_{\epsilon,\lambda}:=\min\left(1,\frac{1}{\epsilon}u_{1}\right)=\begin{cases}0&\text{
if }u\leq\lambda,\\\ \frac{1}{\epsilon}(u-\lambda)&\text{ if
}\lambda<u\leq\lambda+\epsilon,\\\ 1&\text{ if
}u>\lambda+\epsilon.\end{cases}$ (37)
which is shown to be $\psi$-total variation minimizing in Theorem 4.3.
###### Lemma 4.1
For $\chi_{\epsilon,\lambda}$ as defined in (37) and $E=\\{u\geq\lambda\\}$,
$P_{\psi}(E,\Omega)\leq\liminf_{\epsilon\rightarrow
0}\int_{\Omega}{\varphi}(x,D\chi_{\epsilon,\lambda}+F\chi_{\epsilon,\lambda})+H\chi_{\epsilon,\lambda}.$
Proof. Due to condition ($C_{2}$) we have
$\displaystyle\int_{\Omega}{\varphi}(x,D\chi_{\epsilon,\lambda}+F\chi_{\epsilon,\lambda})+H\chi_{\epsilon,\lambda}-\int_{\Omega}{\varphi}(x,D\chi_{E}+F\chi_{E})+H\chi_{E}$
$\displaystyle=$
$\displaystyle\int_{\Omega\cap\\{\lambda-\epsilon<u<\lambda+\epsilon\\}}{\varphi}(x,D\chi_{\epsilon,\lambda}+F\chi_{\epsilon,\lambda})+H\chi_{\epsilon,\lambda}-{\varphi}(x,D\chi_{E}+F\chi_{E})-H\chi_{E}$
$\displaystyle\geq$
$\displaystyle\int_{\Omega\cap\\{\lambda-\epsilon<u<\lambda+\epsilon\\}}{\varphi}(x,D\chi_{\epsilon,\lambda})-{\varphi}(x,F\chi_{\epsilon,\lambda})+H\chi_{\epsilon,\lambda}-{\varphi}(x,D\chi_{E})-{\varphi}(x,F\chi_{E})-H\chi_{E}$
$\displaystyle=$
$\displaystyle\int_{\Omega\cap\\{\lambda-\epsilon<u<\lambda+\epsilon\\}}{\varphi}(x,D\chi_{\epsilon,\lambda})-{\varphi}(x,D\chi_{E})+H\chi_{\epsilon,\lambda}-H\chi_{E}-{\varphi}(x,F\chi_{\epsilon,\lambda})-{\varphi}(x,F\chi_{E})$
$\displaystyle=$
$\displaystyle\int_{\Omega}{\varphi}(x,D\chi_{\epsilon,\lambda})-\int_{\Omega}{\varphi}(x,D\chi_{E})+\int_{\Omega}(H\chi_{\epsilon,\lambda}-H\chi_{E})$
$\displaystyle-\int_{\Omega\cap\\{\lambda-\epsilon<u<\lambda+\epsilon\\}}{\varphi}(x,F\chi_{\epsilon,\lambda})+{\varphi}(x,F\chi_{E}).$
Since the last two integrals converge to zero as $\epsilon\rightarrow 0$,
$\displaystyle\liminf_{\epsilon\rightarrow
0}\int_{\Omega}{\varphi}(x,D\chi_{\epsilon,\lambda}+F\chi_{\epsilon,\lambda})+H\chi_{\epsilon,\lambda}-P_{\psi}(E,\Omega)$
$\displaystyle=$ $\displaystyle\liminf_{\epsilon\rightarrow
0}\int_{\Omega}{\varphi}(x,D\chi_{\epsilon,\lambda}+F\chi_{\epsilon,\lambda})+H\chi_{\epsilon,\lambda}-\int_{\Omega}{\varphi}(x,D\chi_{E}+F\chi_{E})+H\chi_{E}$
$\displaystyle\geq$ $\displaystyle\liminf_{\epsilon\rightarrow
0}\int_{\Omega}{\varphi}(x,D\chi_{\epsilon,\lambda})-\int_{\Omega}{\varphi}(x,D\chi_{E})\geq
0,$
where the lower semi-continuity of $\int_{\Omega}{\varphi}(x,Dv)$ justifies
the last inequality (see [22]). $\Box$
The outer and inner trace of $w$ on $\partial\Omega$ are denoted by $w^{+}$
and $w^{-}$ respectively, under the assumptions that $\Omega$ is an open set
with Lipschitz boundary and $w\in BV(\R^{n})$.
###### Lemma 4.2
Suppose $\Omega\subset\R^{n}$ is a bounded open region with Lipschitz
boundary, $g\in L^{1}(\partial\Omega;\mathcal{H}^{n-1})$, and define
$I_{\psi}(v;\Omega,g):=\
\int_{\partial\Omega}{\varphi}(x,g-v^{-}+F_{\chi_{v}})d\mathcal{H}^{n-1}+\int_{\Omega}\psi(x,Dv).$
Then $u\in BV(\R^{n})$ is $\psi$-total variation minimizing in $\Omega$ if and
only if $u|_{\Omega}$ minimizes $I_{\psi}(\,\cdot\,;\Omega,g)$ for some $g$,
and moreover $g=u^{+}$.
Proof. Note that $v^{+},v^{-}\in L^{1}(\partial\Omega;\mathcal{H}^{n-1})$
whenever $v\in BV(\R^{n})$. Conversely, there is a $v\in BV(\R^{n})$ with
$g=v^{+}$ for each $g\in L^{1}(\partial\Omega;\mathcal{H}^{n-1})$.
Additionally
$\int_{\partial\Omega}\psi(x,Dv)=\int_{\partial\Omega}{\varphi}(x,Dv+F_{\chi_{v}})d\mathcal{H}^{n-1}=\
\int_{\partial\Omega}{\varphi}(x,v^{+}-v^{-}+F_{\chi_{v}})d\mathcal{H}^{n-1}.$
(38)
To see this, note that $|Dv|$ can only concentrate on a set of dimension $n-1$
if that set is a subset of the jump set of $v$, so (38) follows from standard
descriptions of the jump part of $Dv$.
Now if $u,v\in BV(\R^{n})$ satisfy $u=v$ a.e. in $\Omega^{c}$, then
$\int_{\bar{\Omega}^{c}}\varphi(x,Du)=\int_{\bar{\Omega}^{c}}\varphi(x,Dv)$.
In addition, $u^{+}=v^{+}$, so using (38) we deduce that
$\int_{\R^{n}}\psi(x,Du)-\int_{\R^{n}}\psi(x,Dv)\ =\
I_{\psi}(u;\Omega,u^{+})-I_{\psi}(v;\Omega,u^{+}).$
The lemma easily follows from the above equality. $\Box$
The next theorem shows that the super level sets of $\psi$-total variation
minimizing functions in $\Omega$ are $\psi$-area minimizing in $\Omega$.
###### Theorem 4.3
Let $\Omega\subset\R^{n}$ be a bounded Lipschitz domain and $u\in BV(\R^{n})$
a $\psi$-total variation minimizing function in $\Omega$. The super level sets
of $u$ are written as
$E_{\lambda}:=\left\{x\in\R^{n}:u(x)\geq\lambda\right\}.$ (39)
Then $E_{\lambda}$ is $\psi$-area minimizing in $\Omega$.
Proof. For a fixed $\lambda\in\R$, let $u_{1}$ and $u_{2}$ be as defined in
(36). Consider $g\in BV(\R^{n})$ with
$\text{supp}(g)\subset\overline{\Omega}.$ Then
$\displaystyle\int_{\Omega}{\varphi}\left(x,Du_{1}+F\chi_{\\{u\geq\lambda\\}}\right)+Hu_{1}$
$\displaystyle+\int_{\Omega}{\varphi}\left(x,Du_{2}+F\chi_{\\{u<\lambda\\}}\right)+Hu_{2}=\int_{\Omega}{\varphi}\left(x,Du+F\right)+Hu$
$\displaystyle\leq\int_{\Omega}{\varphi}\left(x,D(u+g)+F\right)+H(u+g)$
$\displaystyle=\int_{\Omega}{\varphi}\left(x,Du_{1}+D(g\chi_{\\{u\geq\lambda\\}})+F\chi_{\\{u\geq\lambda\\}}\right)+H(u_{1}+g)$
$\displaystyle+\int_{\Omega}{\varphi}\left(x,Du_{2}+D(g\chi_{\\{u<\lambda\\}})+F\chi_{\\{u<\lambda\\}}\right)+Hu_{2}$
$\displaystyle\leq\int_{\Omega}{\varphi}\left(x,Du_{1}+D(g\chi_{\\{u\geq\lambda\\}})+F\chi_{\\{u\geq\lambda\\}}\right)+H(u_{1}+g)$
$\displaystyle+\int_{\Omega}{\varphi}\left(x,D(g\chi_{\\{u<\lambda\\}})\right)+\int_{\Omega}{\varphi}\left(x,Du_{2}+F\chi_{\\{u<\lambda\\}}\right)+Hu_{2}$
$\displaystyle=\int_{\Omega}{\varphi}\left(x,D(u_{1}+g)+F\chi_{\\{u\geq\lambda\\}}\right)+H(u_{1}+g)$
$\displaystyle+\int_{\Omega}{\varphi}\left(x,Du_{2}+F\chi_{\\{u<\lambda\\}}\right)+Hu_{2}.$
This implies
$\int_{\Omega}{\varphi}\left(x,Du_{1}+F\chi_{u_{1}}\right)+Hu_{1}\leq\int_{\Omega}{\varphi}\left(x,D(u_{1}+g)+F\chi_{u_{1}}\right)+H(u_{1}+g),$
for any $g\in BV({}^{n})$ such that $\text{supp}(g)\subset\overline{\Omega}$.
By definition, $u_{1}$ is $\psi$-total variation minimizing. By the
argument outlined above, $\chi_{\epsilon,\lambda}$, as defined in (37), is also
$\psi$-total variation minimizing.
For a.e. $\lambda\in\R$, the boundary of $E_{\lambda}$ has measure zero, in the sense that
$\mathcal{L}^{n}\left(\{x\in\Omega:u(x)=\lambda\}\right)=\mathcal{H}^{n-1}\left(\{x\in\partial\Omega:u^{\pm}(x)=\lambda\}\right)=0.$
(40)
Thus
$\chi_{\epsilon,\lambda}\rightarrow\chi_{\lambda}:=\chi_{E_{\lambda}}\text{ in }L^{1}_{\text{loc}}(\R^{n}),\qquad\chi_{\epsilon,\lambda}^{\pm}\rightarrow\chi_{\lambda}^{\pm}\text{ in }L^{1}(\partial\Omega;\mathcal{H}^{n-1}),$
as $\epsilon\rightarrow 0$.
We apply Lemma 4.1 to get
$P_{\psi}(\chi_{\lambda},\Omega)\leq\liminf_{\epsilon\rightarrow
0}P_{\psi}(\chi_{\epsilon,\lambda},\Omega).$ (41)
It follows from the $L^{1}$ convergence of the traces that
$I_{\varphi}(\chi_{\lambda};\Omega,\chi_{\lambda}^{+})\leq\liminf_{\epsilon\rightarrow 0}I_{\varphi}(\chi_{\epsilon,\lambda};\Omega,\chi_{\epsilon,\lambda}^{+}).$
(42)
For an arbitrary $F\subset\R^{n}$ with $\chi_{\lambda}=\chi_{F}$ a.e. in
$\Omega^{c}$,
$\displaystyle
I_{\varphi}(\chi_{\epsilon,\lambda};\Omega,\chi_{\epsilon,\lambda}^{+})$
$\displaystyle\leq I_{\varphi}(\chi_{F};\Omega,\chi_{\epsilon,\lambda}^{+})$
$\displaystyle\leq
I_{\varphi}(\chi_{F};\Omega,\chi_{\lambda}^{+})+\int_{\partial\Omega}{\varphi}(x,\chi_{\lambda}^{+}-\chi_{\epsilon,\lambda}^{+})\
d\mathcal{H}^{n-1}$ $\displaystyle\leq
I_{\varphi}(\chi_{F};\Omega,\chi_{\lambda}^{+})+\int_{\partial\Omega}\alpha|\chi_{\lambda}^{+}-\chi_{\epsilon,\lambda}^{+}|\
d\mathcal{H}^{n-1}$ $\displaystyle\leq
I_{\varphi}(\chi_{F};\Omega,{\chi_{\lambda}}^{+})+C\int_{\partial\Omega}|\chi_{\lambda}^{+}-\chi_{\epsilon,\lambda}^{+}|\
d\mathcal{H}^{n-1}.$
Combining the above with (42) and the convergence
$\chi_{\epsilon,\lambda}^{+}\rightarrow\chi_{\lambda}^{+}$ in
$L^{1}(\partial\Omega;\mathcal{H}^{n-1})$, we obtain
$I_{\varphi}(\chi_{\lambda};\Omega,\chi_{\lambda}^{+})\leq I_{\varphi}(\chi_{F};\Omega,\chi_{\lambda}^{+}).$
This establishes that $E_{\lambda}$ is $\psi$-area minimizing in $\Omega$.
If $\lambda$ does not satisfy (40), then there exists an increasing sequence
$\lambda_{k}$ that converges to $\lambda$ and satisfies (40) for each $k$. In
this case,
$\chi_{\lambda_{k}}\rightarrow\chi_{\lambda}\text{ in }L^{1}_{\text{loc}}(\R^{n}),\qquad\chi_{\lambda_{k}}^{\pm}\rightarrow\chi_{\lambda}^{\pm}\text{ in }L^{1}(\partial\Omega;\mathcal{H}^{n-1}).$
Thus, by Lemma 4.2, $E_{\lambda}$ is $\psi$-area minimizing in $\Omega$.
$\Box$
It remains to lay out a few more definitions, which will play a key role in
the proof of our main result in this section. Let
$BV_{f}(\Omega):=\left\{u\in BV(\Omega):\lim_{r\rightarrow 0}\operatorname*{ess\,sup}_{y\in\Omega,|x-y|<r}|u(y)-f(x)|=0\text{ for every }x\in\partial\Omega\right\}.$
For any measurable set $E$, consider the set of its points of density one,
$E^{(1)}:=\{x\in\R^{n}:\lim_{r\to 0}\frac{{\mathcal{H}}^{n}(B(r,x)\cap E)}{{\mathcal{H}}^{n}(B(r,x))}=1\}.$
###### Definition 2
Let $\Omega\subset\R^{n}$ be a bounded Lipschitz domain. We say that $\Omega$
satisfies the barrier condition if, for every $x_{0}\in\partial\Omega$ and
every sufficiently small $\epsilon>0$, whenever $V$ minimizes $P_{\psi}(\cdot;\R^{n})$ in
$\{W\subset\Omega:W\setminus B(\epsilon,x_{0})=\Omega\setminus B(\epsilon,x_{0})\},$ (43)
it follows that
$\partial V^{(1)}\cap\partial\Omega\cap B(\epsilon,x_{0})=\emptyset.$
Intuitively speaking, (43) means that at any point $x_{0}\in\partial\Omega$
one can decrease the $\psi$-perimeter of $\Omega$ by pushing the boundary
inwards.
###### Lemma 4.4
Suppose $\Omega\subset\R^{n}$ is a bounded Lipschitz domain satisfying the
barrier condition, and $E\subset\R^{n}$ minimizes $P_{\psi}(\cdot;\Omega)$.
Then
$\left\{x\in\partial\Omega\cap\partial E^{(1)}:B(\epsilon,x)\cap\partial E^{(1)}\subset\overline{\Omega}\text{ for some }\epsilon>0\right\}=\emptyset.$
Proof. We proceed by contradiction. Suppose there exists
$x_{0}\in\partial\Omega\cap\partial E^{(1)}$ such that
$B(\epsilon,x_{0})\cap\partial E^{(1)}\subset\bar{\Omega}$ for some
$\epsilon>0$. Then $\tilde{V}=E\cap\Omega$ is a minimizer of
$P_{\psi}(\,\cdot\,;\R^{n})$ in (43), and
$x_{0}\in\partial{\tilde{V}}^{(1)}\cap\partial\Omega\cap B(\epsilon,x_{0})\,,$
so this intersection is nonempty. This contradicts the barrier condition. $\Box$
Finally, we are ready to prove the main existence result of this section.
###### Theorem 4.5
Consider $\psi:\R^{n}\times\R^{n}\rightarrow\R$ as defined in (35) and a
bounded Lipschitz domain $\Omega\subset\R^{n}$. Let
$||H||_{L^{\infty}(\overline{\Omega})}$ be small enough that Proposition 3.1
holds. If $\Omega$ satisfies the barrier condition with respect to $\psi$,
then for every $f\in C(\partial\Omega)$ the minimization problem (19) has a
minimizer in $BV_{f}(\Omega)$.
Proof. For a given $f\in C(\partial\Omega)$, extend it to $f\in
C(\Omega^{c})$. Furthermore, we can assume $f\in BV(\R^{n})$, since every
${\mathcal{H}}^{n-1}$-integrable function on $\partial\Omega$ is the trace of some
(continuous) function in $BV(\Omega^{c})$. Let
$\mathcal{A}_{f}:=\{v\in BV(\R^{n}):\ v=f\ \hbox{on}\ \Omega^{c}\},$
where any element $v$ of $BV_{f}(\Omega)$ is the restriction to $\Omega$ of a
unique element of $\mathcal{A}_{f}$. Then, in view of Proposition 3.1, the
functional $v\mapsto\int_{\R^{n}}\psi(x,Dv)$ has a minimizer $u\in\mathcal{A}_{f}$.
Next we prove that $u\in BV_{f}(\Omega)$. Suppose not; then
there is an $x\in\partial\Omega$ and $\delta>0$ such that
$\operatorname*{ess\,sup}_{y\in\Omega,|x-y|<r}\big(f(x)-u(y)\big)\geq\delta\qquad\mbox{or}\qquad\operatorname*{ess\,sup}_{y\in\Omega,|x-y|<r}\big(u(y)-f(x)\big)\geq\delta$ (44)
for every $r>0$. First, suppose that the latter condition holds. For
$E:=E_{f(x)+\delta/2}$ we have that $x\in\partial E^{(1)}$, justified by the
second alternative of (44) and the continuity of $f$. Note that Theorem 4.3
implies $E$ is $\psi$-area minimizing in $\Omega$. Moreover, there exists
$\epsilon>0$ such that $u<f(x)+\delta/2$ in $B(\epsilon,x)\setminus\Omega$,
since $u\in{\mathcal{A}}_{f}$ and $f$ is continuous in $\Omega^{c}$. However,
Lemma 4.4 shows that this is impossible. In the case of the first alternative
of (44), a similar contradiction arises with $E:=\{y\in\R^{n}:u(y)\leq f(x)-\delta/2\}$. Therefore we conclude that $u\in BV_{f}(\Omega)$. Moreover,
$u$ is $\psi$-total variation minimizing in $BV_{f}(\Omega)$ by Theorem 3.3.
$\Box$
## References
* [1] G. Alberti, A Lusin type theorem for gradients, J. Funct. Anal., Vol. 100 (1991), pp. 110-118.
* [2] M. Amar, G. Bellettini, A notion of total variation depending on a metric with discontinuous coefficients, Annales de l'Institut Henri Poincaré (C) Analyse non linéaire 11 (1994), 91-133.
* [3] G. Anzellotti, Pairings between measures and bounded functions and compensated compactness, Ann. Mat. Pura Appl. (4) 135 (1983), 293-318 (1984).
* [4] F. Andreu-Vaillo, V. Caselles, J. M. Mazón, Parabolic quasilinear equations minimizing linear growth functionals, Progress in Mathematics, 223. Birkhäuser Verlag, Basel, 2004.
* [5] Z.M. Balogh, _Size of characteristic sets and functions with prescribed gradient_. J. Reine Angew. Math. 564 (2003), 63-83.
* [6] P. Bousquet, Boundary continuity of solutions to a basic problem in the calculus of variations, Adv. Calc. Var. 3 (2010), 1-27.
* [7] P. Bousquet, F. Clarke, Local Lipschitz continuity of solutions to a problem in the calculus of variations, J. Differential Equations 243 (2007), 489–503.
* [8] A. Cellina, On the bounded slope condition and the validity of the Euler Lagrange equation, SIAM J. Control Optim. 40 (2001/02), 1270–1279 (electronic).
* [9] J.-H. Cheng, J.-F. Hwang, _Properly embedded and immersed minimal surfaces in the Heisenberg group_. Bull. Aus. Math. Soc. 70 (2004), 507-520.
* [10] J.-H. Cheng, J.-F. Hwang, _Uniqueness of generalized p-area minimizers and integrability of a horizontal normal in the Heisenberg group_. Calc. Var. Partial Differential Equations 50 (2014), no. 3-4, 579-597.
* [11] J.-H. Cheng, J.-F. Hwang, A. Malchiodi, P. Yang, _Minimal surfaces in pseudohermitian geometry_. Annali della Scuola Normale Superiore di Pisa, Classe di Scienze 4(5) (2005), 129-177.
* [12] J.-H. Cheng, J.-F. Hwang, A. Malchiodi, P. Yang, _Existence and uniqueness for p-area minimizers in the Heisenberg group_. Math. Ann. 337 (2007), no. 2, 253-293.
* [13] F. Clarke, Continuity of solutions to a basic problem in the calculus of variations, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 4 (2005), 511–530.
* [14] S. Don, L. Lussardi, A. Pinamonti, G. Treu, _Lipschitz minimizers for a class of integral functionals under the bounded slope condition_. Nonlinear Analysis, Theory, Methods and Applications, 216 (2022), 112689.
* [15] I. Ekeland, R. Témam, _Convex analysis and variational problems_ , North-Holland-Elsevier, 1976.
* [16] A. Fiaschi, G. Treu, The bounded slope condition for functionals depending on $x,u,$ and $\nabla u$, SIAM J. Control Optim., 50 (2012), 991–1011.
* [17] B. Franchi, R. Serapioni, F. Serra Cassano, _Rectifiability and perimeter in the Heisenberg group_. Math. Ann. 321, 479-531 (2001).
* [18] N. Garofalo, D.-M Nhie, _Isoperimetric and Sobolev inequalities for Carnot-Caratheodory spaces and the existence of minimal surfaces_. Comm. Pure Appl. Math. 49, 1081-1144 (1996).
* [19] E. Giusti, Minimal Surfaces and Functions of Bounded Variations, Birkhäuser, Boston, 1984.
* [20] W. Górny, Planar least gradient problem: existence, regularity and anisotropic case, https://arxiv.org/abs/1608.02617.
* [21] N. Hoell, A. Moradifam, A. Nachman, Current Density Impedance Imaging with an Anisotropic Conductivity in a Known Conformal Class, SIAM J. Math. Anal., 46 (2014), 3969-3990.
* [22] R. L. Jerrard, A. Moradifam, A. Nachman, Existence and uniqueness of minimizers of general least gradient problems, J. Reine Angew. Math., 734 (2018), 71-97.
* [23] L. Lussardi, E.Mascolo, A uniqueness result for a class of non strictly convex variational problems, J. Math. Anal. Appl. 446 (2017), no. 2, 1687–1694.
* [24] C. Mariconda, G. Treu, Existence and Lipschitz regularity for minima, Proc. Amer. Math. Soc. 130 (2002), 395–404 (electronic).
* [25] C. Mariconda, G. Treu, Lipschitz regularity for minima without strict convexity of the Lagrangian, J. Differential Equations 243 (2007), 388–413.
* [26] C. Mariconda, G. Treu, Local Lipschitz regularity of minima for a scalar problem of the calculus of variations, Commun. Contemp. Math. 10 (2008), 1129–1149.
* [27] J. M. Mazón, The Euler–Lagrange equation for the Anisotropic least gradient problem, Nonlinear Analysis: Real World Applications 31 (2016) 452-472.
* [28] J. M. Mazón, J. D. Rossi, S. S. De León, Functions of Least Gradient and 1-Harmonic Functions, Indiana University Mathematics Journal 63 (2013) (4): 1067-1084.
* [29] A. Moradifam, Existence and structure of minimizers of least gradient problems, Indiana University Mathematics Journal 63 (2014), no. 6, 1819-1837.
* [30] A. Moradifam, Least gradient problems with Neumann boundary condition, J. Differential Equations 263 (2017), no. 11, 7900-7918.
* [31] A. Moradifam, A. Nachman, and A. Timonov, A convergent algorithm for the hybrid problem of reconstructing conductivity from minimal interior data, Inverse Problems, 28 (2012) 084003.
* [32] A. Moradifam, A. Nachman, and A. Tamasan, Conductivity imaging from one interior measurement in the presence of perfectly conducting and insulating inclusions, SIAM J. Math. Anal., 44 (2012) (6), 3969-3990.
* [33] A. Moradifam, A. Rowell, Existence and structure of P-area minimizing surfaces in the Heisenberg group, Journal of Differential Equations, 342 (2023), 325-342.
* [34] A. Nachman, A. Tamasan, and A. Timonov, Conductivity imaging with a single measurement of boundary and interior data, Inverse Problems, 23 (2007), pp. 2551–2563.
* [35] A. Nachman, A. Tamasan, and A. Timonov, Recovering the conductivity from a single measurement of interior data, Inverse Problems, 25 (2009) 035014 (16pp).
* [36] A. Nachman, A. Tamasan, and A. Timonov, Reconstruction of Planar Conductivities in Subdomains from Incomplete Data, SIAM J. Appl. Math. 70(2010), Issue 8, pp. 3342–3362.
* [37] A. Nachman, A. Tamasan, and A. Timonov, Current density impedance imaging, Tomography and inverse transport theory, 135-149, Contemp. Math. 559, AMS, 2011.
* [38] S. D. Pauls, _Minimal surfaces in the Heisenberg group_. Geometriae Dedicata, 104 (2004), 201-231.
* [39] A. Pinamonti, F. Serra Cassano, G. Treu, D. Vittone, _BV minimizers of the area functional in the Heisenberg group under the bounded slope condition_. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 14 (2015), no. 3, 907-935.
* [40] P. Sternberg, G. Williams, and W. P. Ziemer, Existence, uniqueness and regularity for functions of least gradient, J. Reine Angew. Math. 430 (1992), 35-60.
* [41] P. Sternberg and W. P. Ziemer, Generalized motion by curvature with a Dirichlet condition, J. Differ. Eq., 114(1994), pp. 580–600.
* [42] P. Sternberg and W. P. Ziemer, The Dirichlet problem for functions of least gradient. Degenerate diffusions (Minneapolis, MN, 1991), 197–214, in IMA Vol. Math. Appl., 47, Springer, New York, 1993.
* [43] G. S. Spradlin and A. Tamasan, Not all traces on the circle come from functions of least gradient in the disk, Indiana University Mathematics Journal 63 (2014), no. 6, 1819-1837.
# Orders of Vanishing and U(1) Charges in F-theory

Nikhil Raghuram$^{1}$ and Andrew P. Turner$^{2}$

raghuram.nikhil at gmail.com, turnerap at sas.upenn.edu

$^{1}$Department of Physics, Robeson Hall, 0435, Virginia Tech, 850 West Campus Drive, Blacksburg, VA 24061, USA
$^{2}$Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104, USA
###### Abstract
Many interesting questions about F-theory models, including several concerning
the F-theory swampland, involve massless matter charged under U(1) gauge
symmetries. It is therefore important to better understand the geometric
properties of F-theory models realizing various U(1) charges. We propose that,
for F-theory models described by elliptic fibrations in Weierstrass form, the
U(1) charge of light matter is encoded in the orders of vanishing of the
section components corresponding to the U(1) gauge symmetry. We give specific
equations relating the U(1) charges to the orders of vanishing that seem to
hold for both U(1)-charged singlets and for matter additionally charged under
a simply-laced nonabelian gauge algebra. Our formulas correctly describe
properties of F-theory models in the prior literature, and we give an argument
that they should describe the orders of vanishing for arbitrarily high U(1)
charges. They also resemble formulas for the $p$-adic valuations of elliptic
divisibility sequences developed by Stange StangeEllTrouble . These proposals
could serve as a U(1) analogue of the Katz–Vafa method, allowing one to
determine U(1) charges without resolution. Additionally, they predict
geometric information about F-theory models with general U(1) charges, which
may be useful for exploring the F-theory landscape and swampland.
## 1 Introduction
A major goal of the string theory program is understanding how to construct a
compactification of string theory realizing a desired massless spectrum. This
problem is important for both characterizing the landscape of string
compactifications and finding stringy realizations of our observed universe.
F-theory VafaF-theory ; MorrisonVafaI ; MorrisonVafaII has emerged as a
powerful tool for constructing string compactifications in large part because
it geometrizes several physical features. Much of the information about an
F-theory model’s massless spectrum, particularly regarding its unbroken gauge
symmetry and its light charged matter, is encoded in the mathematical
properties of its corresponding elliptic fibration. To be more specific, an
F-theory compactification down to $12-2d$ real dimensions is described by a
complex $d$-dimensional elliptically fibered Calabi–Yau manifold. Massless
nonabelian gauge bosons are supported along divisors in the base with singular
elliptic fibers. After resolution, the components in the fibers at codimension
one intersect in the pattern of an affine Dynkin diagram, in line with the
Kodaira classification of singularity types KodairaII . When supplemented with
information about monodromy TateMath , this affine Dynkin diagram tells us
which nonabelian gauge algebra is supported along this locus MorrisonVafaI ;
MorrisonVafaII ; BershadskyEtAlSingularities . Light charged matter,
meanwhile, is typically localized at codimension-two loci in the base where
the singularity type enhances. The representation of this charged matter can
be determined by resolving the singularities and determining how the
introduced fiber components at codimension two intersect the resolution
divisors for the codimension-one singularities.
While properly determining the nonabelian gauge algebra and matter content
with this approach requires resolving singularities, one can often read off
singularity types and their associated physical data through easier methods.
(Since this paper focuses on massless matter, we typically omit the word
“massless” and use the term “matter” to refer to massless matter. Note also
that the string junction program offers a valid method for determining
nonabelian gauge groups and matter representations that does not require
resolutions; see, for instance, GrassiHalversonShanesonPhysics ;
GrassiHalversonShanesonMath . We do not significantly discuss this approach
here, although it would be interesting to investigate how the results obtained
here tie into these methods.) If the elliptic fibration is in the Weierstrass form
$y^{2}=x^{3}+fxz^{4}+gz^{6}\,,$ (1)
the Kodaira table (given in Table 4) relates the singularity types and their
corresponding Lie algebras to the orders of vanishing of $f$, $g$, and
$\Delta=4f^{3}+27g^{2}$ at a particular divisor in the base. Furthermore, even
though the Kodaira classification strictly holds only at codimension one, we
can often heuristically use the Kodaira table to find singularity types at
codimension two in the base as well. One can then determine matter
representations with the Katz–Vafa method KatzVafa , in which one breaks the
adjoint of the enhanced singularity type’s corresponding Lie algebra to
representations of the nonabelian gauge algebra. These techniques allow one to
calculate the gauge group and charged matter content of a model simply by
considering orders of vanishing, making the process of constructing and
analyzing nonabelian F-theory models significantly easier. In addition to
letting us quickly determine the massless spectrum of a given model, they can
guide the process of constructing an F-theory model with a desired nonabelian
gauge algebra and charged matter spectrum by describing expected features of
the elliptic fibration.
However, many interesting questions in the F-theory program involve abelian
gauge algebras and light matter charged under them. Clearly, $\mathfrak{u}(1)$
algebras are important for phenomenological model building in F-theory: the
Standard Model gauge algebra has a $\mathfrak{u}(1)$ factor, and many
proposals for extending the Standard Model include extra $\mathfrak{u}(1)$
algebras RizzoZprime ; LangackerZprime ; GrimmWeigand ; HalversonTASI . They
also offer an interesting arena for questions about the landscape and
swampland VafaSwamp of F-theory models. As an example, consider 6D F-theory
models with a $\operatorname{U}(1)$ gauge group. It has not yet been
definitively determined which $\operatorname{U}(1)$ charges of massless matter
can and cannot occur in such models, even though there has been significant
progress on this problem and more general questions regarding constraints on
$\operatorname{U}(1)$ theories ParkTaylor ; ParkIntersection ; MorrisonParkU1
; KleversEtAlToric ; MonnierMoorePark ; RaghuramTaylorLargeCharge ;
CianciMayorgaPenaValandroHighU1 ; LeeWeigandAbelianSwamp ;
CollinucciEtAlHighCharge ; AndersonGrayOehlmannQuotients . However, there are
infinite families of charged matter spectra with unbounded
$\operatorname{U}(1)$ charges that are consistent with the known 6D low-energy
constraints ParkTaylor ; TaylorTurnerSwamp . Only a finite number of these
matter spectra can be realized in F-theory models
KumarMorrisonTaylorGlobalAspects , presenting the possibility of an infinite
swampland of such models. (See RaghuramTaylorTurnerEnhancement for recent
developments regarding some of these infinite families.)
For these reasons, much work has focused on $\mathfrak{u}(1)$ charged matter
in F-theory constructions, both for matter charged only under
$\mathfrak{u}(1)$ algebras and matter additionally charged under nonabelian
algebras GrimmWeigand ; DolanMarsanoSaulinaSchaferNameki ;
MarsanoSaulinaSchaferNamekiU1 ; MorrisonParkU1 ; CveticGrimmKlevers ;
MayrhoferPaltiWeigand ; BorchmannSU5TopSummary ; CveticKleversPiraguaMultU1 ;
GrimmKapferKeitelRational ; BraunGrimmKeitel ; CveticGrassiKleversPiragua ;
BorchmannSU5Top ; CveticKleversPiraguaSong ; KrippendorfEtAlGUT ;
BraunCollinucciValandro ; AntoniadisLeontaris ; KuntzlerTateTrees ;
KleversEtAlToric ; EsoleKangYau ; LawrieSacco ; LawrieEtAlRational ;
CveticKleversPiraguaTaylor ; GrimmKapferKleversArithmetic ; CveticEtAlHetU1 ;
MorrisonParkTall ; MorrisonParkTaylor ; WangU1s ; CveticLinU1 ;
MayorgaPenaValandro ; BuchmullerEtAlSO10 ; BaumeEtAlTorsion ; Raghuram34 ;
KimuraK3U1a ; LeeRegaladoWeigand ; CianciMayorgaPenaValandroHighU1 ;
TaylorTurnerGeneric ; KimuraK3U1b ; LeeWeigandAbelianSwamp ;
CollinucciEtAlHighCharge ; KimuraZ4 ; KimuraHalf3Fold ; KimuraHalfFourfold ;
OehlmannSchimannek ; RaghuramTaylorTurnerSM ; KnappScheideggerSchimannek .
However, there are still open questions regarding $\mathfrak{u}(1)$ algebras
in F-theory, in part because, unlike nonabelian algebras, they are not
associated with codimension-one singularities of the fibration. Instead, they
are associated with extra rational sections of the elliptic fibration
MorrisonVafaII . In more concrete terms, a rational section of a fibration in
Weierstrass form is described by a solution
$[x:y:z]=[\hat{x}:\hat{y}:\hat{z}]$ (2)
of the Weierstrass equations, where $\hat{x}$, $\hat{y}$, $\hat{z}$ are
sections of appropriate line bundles on the base. An elliptic fibration may
admit several or even an infinite number of such sections. They form a
finitely generated group under an operation known as elliptic curve addition.
There is a $\mathfrak{u}(1)$ algebra corresponding to each generating section
of this group with infinite order. Matter charged under a $\mathfrak{u}(1)$
algebra still occurs at codimension-two loci in the base with singular fibers,
but the $\mathfrak{u}(1)$ charge is determined by how the section intersects
the resolved singular fiber at the matter locus. Since $\mathfrak{u}(1)$
algebras involve sections of the elliptic fibration, much of the technology
for nonabelian gauge algebras does not carry over. The Katz–Vafa method, for
instance, cannot be used to read off $\mathfrak{u}(1)$ charges, as matter with
different $\mathfrak{u}(1)$ charges can occur at codimension-two loci with the
same singularity type. This can be seen, for instance, in the Morrison–Park
model MorrisonParkU1 , which has a single $\mathfrak{u}(1)$ gauge factor: in
this model, the $\bm{1}_{1}$ loci and $\bm{1}_{2}$ loci are all of type
$\text{I}_{2}$, where $(f,g,\Delta)$ vanish to orders $(0,0,2)$.
To the authors’ knowledge, the prior F-theory literature does not present any
systematic procedure analogous to the Katz–Vafa method for reading off
$\mathfrak{u}(1)$ charges from simple features such as orders of vanishing.
However, there were tentative indications in Raghuram34 that, at least for
singlets, the $\mathfrak{u}(1)$ charge is encoded in orders of vanishing of
the $\hat{x}$, $\hat{y}$, and $\hat{z}$ section components at a matter locus.
It was noted there that in Weierstrass models with a $\operatorname{U}(1)$
gauge symmetry admitting $\bm{1}_{3}$ and $\bm{1}_{4}$ matter, the section
components, particularly $\hat{z}$, vanish to orders larger than 1 at the
corresponding matter loci. The work also argued that the complicated non-UFD
structure KleversEtAlExotic of these models directly reflects these higher
orders of vanishing. These observations naturally suggest some correlation
between the singlet charge and the orders of vanishing of the section
components; in fact, Raghuram34 hypothesized a specific relation between
these quantities.
Our goal here is to develop these observations into a systematic framework
that could serve as a Katz–Vafa analogue for $\mathfrak{u}(1)$ charges.
Specifically, we aim to find rules relating $\mathfrak{u}(1)$ charges to the
orders of vanishing of the $\hat{x}$, $\hat{y}$, $\hat{z}$, and
$\hat{w}=3\hat{x}^{2}+f\hat{z}^{4}$ components of the generating section.
(When an elliptic fibration is in Weierstrass form, singularities occur at
points on the fiber where $y=3x^{2}+fz^{4}=0$; thus, $\hat{w}$ provides
valuable information about where and how a section hits a singular point on a
fiber.)
In addition to singlet matter, we consider $\mathfrak{u}(1)$-charged matter
that is also charged under a semi-simple nonabelian gauge algebra. To make the
scope of the analysis more manageable, we assume that the nonabelian part of
the gauge algebra is simply-laced (some speculative thoughts on
non-simply-laced situations are presented in Section 13). Additionally, we only
consider matter in generic TaylorTurnerGeneric representations of the
nonabelian gauge factors, and we focus on matter loci where the elliptic fiber
singularity type undergoes a rank-one enhancement. (There are notable examples
even for $\mathfrak{u}(1)$-charged singlet matter where the singularity type
undergoes higher-rank enhancement, such as Grassi:2021wii , where the
$\mathfrak{u}(1)$-charged singlets are all supported at codimension-two
$\text{II}\to\text{IV}$ loci.) Even with these simplifying assumptions, we
consider several matter representations commonly found in F-theory models (we
do not significantly analyze matter supported along type III or IV loci; it
would be interesting to investigate these situations in future work):
* •
the singlet representation occurring at codimension-two
$\text{I}_{1}\to\text{I}_{2}$ loci;
* •
the fundamental and antisymmetric representations of $\mathfrak{su}(n)$
occurring, respectively, at codimension-two
$\text{I}_{n}^{s}\to\text{I}_{n+1}$ and
$\text{I}_{n}^{s}\to\text{I}^{*}_{n-4}$ loci;
* •
the vector representations of $\mathfrak{so}(2n)$ occurring at codimension-two
$\text{I}_{n-4}^{*s}\to\text{I}_{n-3}^{*}$ loci;
* •
the spinor representations of $\mathfrak{so}(8)$, $\mathfrak{so}(10)$,
$\mathfrak{so}(12)$, and $\mathfrak{so}(14)$ occurring, respectively, at
codimension-two $\text{I}_{0}^{*s}\to\text{I}_{1}^{*}$,
$\text{I}_{1}^{*s}\to\text{IV}^{*}$, $\text{I}_{2}^{*s}\to\text{III}^{*}$, and
$\text{I}_{3}^{*s}\to\text{II}^{*}$ loci;
* •
the $\bm{27}$ representation of $\mathfrak{e}_{6}$ occurring at codimension-
two $\text{IV}^{*s}\to\text{III}^{*}$ loci;
* •
and the $\bm{56}$ representation of $\mathfrak{e}_{7}$ occurring at
codimension-two $\text{III}^{*}\to\text{II}^{*}$ loci.
We propose a set of formulas, described in Section 2, that seem to correctly
relate $\mathfrak{u}(1)$ charges and orders of vanishing for all of these
cases. The most important of these formulas describes the order of vanishing
of $\hat{z}$ at a codimension-two matter locus. To illustrate the basic idea,
consider matter that occurs at the intersection of a gauge divisor supporting
a simple nonabelian gauge algebra $\mathfrak{g}$ with the residual
$\text{I}_{1}$ discriminant locus. Suppose that, at this matter locus, the
singularity type enhances from one associated with $\mathfrak{g}$ to one
associated with an enhanced Lie algebra $\mathfrak{h}$. Then, if $G$ and $H$
are the universal covering groups of $\mathfrak{g}$ and $\mathfrak{h}$, the
order of vanishing for $\hat{z}$ at the matter locus,
$\operatorname{ord}_{2}(\hat{z})$, is schematically given by
$\operatorname{ord}_{2}(\hat{z})=\frac{1}{2}\left(\frac{d_{G}}{d_{H}}q^{2}+\left(\mathcal{C}^{-1}_{G}\right)_{\mathcal{I}\mathcal{I}}-\left(\mathcal{C}^{-1}_{H}\right)_{\mathcal{J}\mathcal{J}}\right)\,,$
(3)
where $q$ is the $\mathfrak{u}(1)$ charge,
$\left(\mathcal{C}^{-1}_{G}\right)_{\mathcal{I}\mathcal{I}}$ and
$\left(\mathcal{C}^{-1}_{H}\right)_{\mathcal{J}\mathcal{J}}$
are particular diagonal elements of the inverse Cartan matrices for $G$ and
$H$, and $d_{G}$ and $d_{H}$ are the orders (numbers of elements) of the
centers of $G$ and $H$.
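As a quick sanity check of this formula (a worked example of ours, anticipating the conventions spelled out in Section 2), consider a charge-$q$ singlet at an $\text{I}_{1}\times\text{I}_{1}\to\text{I}_{2}$ collision: take $G$ to be the trivial $\operatorname{SU}(1)$, so that $d_{G}=1$ and the $\mathcal{C}^{-1}_{G}$ term drops out, and $H=\operatorname{SU}(2)$, so that $d_{H}=2$, with $\left(\mathcal{C}^{-1}_{H}\right)_{\mathcal{J}\mathcal{J}}=\frac{1}{2}$ when the section passes through the non-identity fiber component (which, on our reading of the patterns discussed below, happens for odd $q$) and $0$ otherwise. Then
$\operatorname{ord}_{2}(\hat{z})\big|_{q=1}=\frac{1}{2}\left(\frac{1}{2}\cdot 1^{2}-\frac{1}{2}\right)=0\,,\qquad\operatorname{ord}_{2}(\hat{z})\big|_{q=2}=\frac{1}{2}\left(\frac{1}{2}\cdot 2^{2}-0\right)=1\,,$
consistent with $\hat{z}$ being generically nonvanishing at charge-1 loci and vanishing to increasingly high order at the larger charges discussed above.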
We present three lines of evidence in support of these proposals. First, they
are satisfied by independently derived F-theory models from the prior
literature with $\mathfrak{u}(1)$ gauge algebras, as discussed in Section 12.
Yet these previous F-theory models support only relatively small
$\mathfrak{u}(1)$ charges, and we would like to test the formulas over a large
range of charges. Fortunately, a previously used strategy MorrisonParkU1 ;
Raghuram34 , which is reviewed in Section 5, allows us to probe the behavior
of models supporting large charges without explicitly constructing them.
Roughly, the $\mathfrak{u}(1)$ charge of matter is determined by how a
generating section $\hat{s}$ of an elliptic fibration behaves at a matter
locus in the base. One can also consider sections $m\hat{s}$ that are
multiples of the generating section under elliptic curve addition. If a model
admits matter with $\mathfrak{u}(1)$ charge $q$, then at the matter locus, the
non-generating section $m\hat{s}$ behaves as though it were a generating
section supporting charge $mq$. Therefore, if we wish to obtain the orders of
vanishing for generating section components in models supporting large
charges, we can start with a model admitting smaller charges and examine
multiples of the generating section. When we perform this analysis for the
various types of charged matter mentioned above, the resulting orders of
vanishing agree exactly with the proposed formulas. Finally, similar formulas
describe the $p$-adic valuations of elliptic divisibility sequences (EDSs)
associated with singular elliptic curves StangeEllTrouble , although these
formulas are written in a somewhat different format. Elliptic divisibility
sequences are closely related to the procedure above involving multiples of
generating sections, suggesting that the formulas from StangeEllTrouble
should resemble our proposals, as observed.
These three points provide strong evidence in favor of the proposals, but they
do not constitute a formal proof. We will not attempt to give such a proof
here. One might in fact expect that the proposals hold only heuristically,
given that the analogous Katz–Vafa method (at least as described above) is
itself somewhat heuristic KatzVafa ; GrassiMorrisonGroupReps ;
MorrisonTaylorMaS ; GrassiMorrison ; EsoleKangSpell ; Grassi:2018rva . In
particular, both methods involve applying the Kodaira classification at
codimension two in the base; even though this often gives correct results, the
Kodaira classification strictly holds only at codimension one. It would be
important in future work to more properly establish our proposals and
determine their exact range of validity.
However, even if they only hold heuristically, the proposals offer many
potential benefits. Just as the Katz–Vafa method allows one to quickly
determine nonabelian representations without resolution, these proposals would
allow one to easily determine $\mathfrak{u}(1)$ charges. Moreover, if one
wishes to find an F-theory model realizing particular $\mathfrak{u}(1)$
charges, these formulas predict properties of the model’s generating section.
Said another way, the formulas provide properties to aim for when constructing
an F-theory model with desired $\mathfrak{u}(1)$ charges. Since the proposals
give the orders of vanishing for arbitrary charges, they could be an
invaluable tool for exploring the F-theory landscape and swampland. Of course,
they are intrinsically interesting for the mathematics of elliptic fibrations,
particularly given their connection to $p$-adic valuations of elliptic
divisibility sequences.
The formulas also exemplify the general theme that simple representations of
light matter are easier to realize in string models than more complicated ones
IbanezUrangaSimpleReps ; TaylorTurnerGeneric . This contrasts with the
situation in quantum field theory: while one must still satisfy conditions
such as anomaly cancellation, the process of writing down a quantum field
theory with complicated matter representations is not significantly more
difficult than that for simpler representations. One can observe these ideas
when working with nonabelian gauge symmetries, for which the more complicated
representations tend to have larger dimensions. F-theory models with
$\mathfrak{su}(2)$ algebras, for instance, almost automatically support light
matter in the fundamental ($\bm{2}$) and adjoint ($\bm{3}$) representations.
By contrast, the $\bm{4}$ representation only occurs when the
$\mathfrak{su}(2)$ algebra is tuned in an intricate way KleversTaylor ;
KleversEtAlExotic . Even though matter representations with larger
$\mathfrak{u}(1)$ charges do not necessarily have larger dimensions, one would
still expect that it is easier to obtain small $\mathfrak{u}(1)$ matter
charges in F-theory models than large ones. Indeed, our proposals suggest that
the orders of vanishing for the section components increase with
$\mathfrak{u}(1)$ charge. According to the observations in Raghuram34 , these
larger orders of vanishing are associated with complicated structures in the
Weierstrass models. The proposals therefore give a concrete explanation as to
how large $\mathfrak{u}(1)$ charges are more difficult to realize than small
$\mathfrak{u}(1)$ charges.
The rest of this paper is organized as follows. Section 2 summarizes our
notations and results and shows how to apply the proposals in a particular
example. Given the length of this paper, we have attempted to make this
section as self-contained as possible. As such, a reader with sufficient
background knowledge of F-theory should be able to understand and apply our
formulas after reading only Section 2, at least at a mechanical level. In
Section 3, we review those aspects of $\mathfrak{u}(1)$ gauge algebras in
F-theory that are used in this paper. Because the centers of compact Lie
groups are important for our results, Section 4 discusses these centers and
their connection to $\mathfrak{u}(1)$ charges in more detail. This section
reviews ideas in CveticLinU1 , but it also explains in more detail our
notations for the centers first mentioned in Section 2. Additionally, we list
the allowed $\mathfrak{u}(1)$ charges for the various matter representations
considered here. Section 5 describes the general strategy for our analysis,
particularly the tactic of using multiples of generating sections to derive
information about models supporting large $\mathfrak{u}(1)$ charges. Section
5.1 discusses how the signs of $\mathfrak{u}(1)$ charges fit into our
strategies and our proposed formulas.
Sections 6, 7, 8, 9, 10 and 11, which are the bulk of this paper, contain the
detailed investigations of the specific types of charged matter we consider
here. Section 6 focuses on singlet matter with $\mathfrak{u}(1)$ charges,
while Sections 7, 8, 9 and 10 focus on $\mathfrak{u}(1)$-charged matter that
is additionally charged in representations of $\mathfrak{su}(n)$,
$\mathfrak{so}(2n)$, $\mathfrak{e}_{6}$, and $\mathfrak{e}_{7}$, respectively.
For each type of matter, we list explicit order-of-vanishing data for various
$\mathfrak{u}(1)$ charges and show that they satisfy the proposed formulas. We
also relate the expressions for each case to similar formulas for EDS
valuations in StangeEllTrouble . In Section 11, we consider matter charged
under multiple $\mathfrak{u}(1)$ and simple Lie algebras, although we do not
perform an exhaustive analysis of these situations. While these sections may
not be necessary for readers simply interested in the final results, they
provide important evidence in favor of the formulas in Section 2. In Section
12, we discuss how previous models in the F-theory literature follow our
proposed formulas. Section 13 provides some concluding thoughts and future
directions.
## 2 Summary of results
This section summarizes our results and describes our conventions. We also
provide an example of how to use these results to determine the
$\mathfrak{u}(1)$ charges in a particular model. We have attempted to make
this section as self-contained as possible, such that a reader solely
interested in the final results can consult this section without referring to
the rest of this paper. Nevertheless, such readers may benefit from Sections 3
and 4, which review the background material underlying these results. Section
5.1, which discusses signs of $\mathfrak{u}(1)$ charges, may also be helpful.
Finally, Appendix A summarizes how the results given in this section are
specialized for particular gauge algebras and representations. While Appendix
A does not contain any results that cannot be obtained using the material in
this section, some readers may find the specialized expressions given there to
be useful.
### 2.1 Notations and conventions
##### Weierstrass form
We typically work with elliptic fibrations in the global Weierstrass form
$y^{2}=x^{3}+fxz^{4}+gz^{6}\,.$ (4)
Here, $[x:y:z]$ are the homogeneous coordinates of a $\mathbb{P}^{2,3,1}$
space, and $f$, $g$ are holomorphic sections of line bundles on the base. The
discriminant of this elliptic fibration is
$\Delta\equiv 4f^{3}+27g^{2}\,.$ (5)
Since we focus on elliptically fibered Calabi–Yau manifolds, $f$ and $g$ are
respectively sections of $\mathcal{O}(-4K_{B})$ and $\mathcal{O}(-6K_{B})$,
where $K_{B}$ is the canonical class of the base $B$ of the elliptic
fibration. We may also write elliptic fibrations in the local Weierstrass form
$y^{2}=x^{3}+fx+g$ (6)
more commonly found in the F-theory literature. The global Weierstrass form
and the local form are related by going to a chart where $z=1$. However, the
zero section, which occurs at $[x:y:z]=[1:1:0]$, is more clearly visible in
the global Weierstrass form.
##### Section components
Rational sections of the elliptic fibration are described as
$[\hat{x}:\hat{y}:\hat{z}]$, where the section components $\hat{x}$,
$\hat{y}$, and $\hat{z}$ solve the Weierstrass form above. In line with the
embedding of the elliptic fiber in $\mathbb{P}^{2,3,1}$, we are free to
rescale the section components as
$[\hat{x}:\hat{y}:\hat{z}]\cong[\lambda^{2}\hat{x}:\lambda^{3}\hat{y}:\lambda\hat{z}]\,.$
(7)
Thus, even though the section components can in principle be rational, we can
clear denominators through a rescaling. Throughout this work, we assume that
the section components have been rescaled to clear denominators and to remove
any common factors that can be scaled away. Therefore, we take the section
components $\hat{x}$, $\hat{y}$, and $\hat{z}$ to be holomorphic sections of
line bundles on the base that solve the global Weierstrass form above. We also
define the section component
$\hat{w}=3\hat{x}^{2}+f\hat{z}^{4}\,.$ (8)
Under the rescaling above, $\hat{w}$ becomes $\lambda^{4}\hat{w}$.
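As a small consistency check, the following snippet (ours, not part of the paper; it assumes a standard sympy installation) verifies symbolically that $\hat{w}$ has weight 4 under the rescaling of Eq. 7:

```python
# Check that w-hat = 3 x-hat^2 + f z-hat^4 (Eq. 8) picks up lambda^4 under the
# rescaling x-hat -> lambda^2 x-hat, z-hat -> lambda z-hat of Eq. 7.
import sympy as sp

x, z, f, lam = sp.symbols('x z f lam')
w = 3 * x**2 + f * z**4
w_rescaled = w.subs({x: lam**2 * x, z: lam * z})
assert sp.expand(w_rescaled - lam**4 * w) == 0
```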
##### Singularity types
As described more fully in Section 3, each singularity type in the Kodaira
classification is associated with an ADE Lie algebra and, in turn, a universal
covering group for the ADE algebra. (As we are interested in simply-laced
gauge algebras in this paper, we focus on the split versions of the
singularity types.) Therefore, in an abuse of language, we often refer to the singularity
types by their associated universal covering groups. For instance, we may
refer to a split $\text{I}_{n}$ singularity as an $\operatorname{SU}(n)$
singularity. While this can be done unambiguously for most singularity types,
there are a few cases that require clarifications. The $\text{I}_{2}$ and III
singularity types are both associated with $\operatorname{SU}(2)$, while the
$\text{I}_{3}$ and IV singularity types are both associated with
$\operatorname{SU}(3)$. In this paper, the terms “$\operatorname{SU}(2)$
singularity” and “$\operatorname{SU}(3)$ singularity” will correspond to
$\text{I}_{2}$ and $\text{I}_{3}$ singularities, respectively; we always refer
to the III and IV singularity types using the Kodaira notation. Meanwhile, we
take the group for the $\text{I}_{1}$ and II singularity types to be
$\operatorname{SU}(1)$.
##### Shioda map
The Shioda map is defined to be ParkIntersection ; MorrisonParkU1 ;
GrimmKapferKeitelRational
$\sigma(\hat{s})=\mathcal{S}-\mathcal{Z}-\pi^{*}\left(D_{B}\right)+\sum_{\kappa,I,J}\left(\mathcal{S}\cdot\alpha_{\kappa,I}\right)\left(\mathcal{C}^{-1}_{\kappa}\right)_{IJ}\mathcal{T}_{\kappa,J}\,.$
(9)
Here, $\mathcal{S}$ is the divisor corresponding to the section $\hat{s}$,
$\mathcal{Z}$ is the divisor corresponding to the zero section, and
$\pi^{*}(D_{B})$ is the pullback of a divisor $D_{B}$ in the base $B$. For a
6D F-theory model, described by an elliptically fibered threefold, the
$\pi^{*}(D_{B})$ term can be written as
$\sum_{\alpha}\left((\mathcal{S}-\mathcal{Z})\cdot\mathcal{Z}\cdot B^{\alpha}\right)B_{\alpha}\,,$ (10)
where the $B_{\alpha}$ are pullbacks of the basis divisors of $H_{2}(B)$ and
the $\alpha$ indices are lowered and raised using the symmetric bilinear form
$\Omega_{\alpha\beta}$ for $H_{2}(B)$. The index $\kappa$ labels the simple
nonabelian gauge factors making up the gauge algebra, and $I,J$ run from $1$
to the rank of the $\kappa$th gauge factor. Finally,
$\mathcal{C}^{-1}_{\kappa}$ is the
inverse Cartan matrix for the $\kappa$th gauge factor. Each nonabelian gauge
factor is associated with a codimension-one locus in the base where the fiber
(after resolution) consists of irreducible curves forming an affine Dynkin
diagram. The $\alpha_{\kappa,I}$ are the irreducible curves corresponding to
the simple roots of the $\kappa$th Lie algebra, and the
$\mathcal{T}_{\kappa,I}$ are the fibral divisors formed by fibering the
$\alpha_{\kappa,I}$ over the codimension-one locus. Note that we will always
assume the zero section is holomorphic; our primary focus is on elliptic
fibrations in Weierstrass form, for which this assumption is valid.
##### $\mathfrak{u}(1)$ charge units
We choose units for charges consistent with the definition of the Shioda map
in Eq. 9, as proposed in CveticLinU1 . In these units, singlet
$\mathfrak{u}(1)$ charges are integers, and the lattice of singlet charges has
unit spacing. The $\mathfrak{u}(1)$ charge of matter that is also charged
under a nonabelian gauge algebra may be fractional, owing to the term in
Shioda map involving the inverse Cartan matrix. However, our formulas can
easily be adapted for alternative charge normalizations: if the lattice of
singlet charges has a spacing of $n$, one simply replaces $q$ in Eqs. 18 and
20 with $q/n$.
##### Matter representations
When we refer to matter in some representation $\bm{\mathrm{R}}$ of a gauge
algebra, we often implicitly mean the matter supported at a particular
codimension-two locus in the F-theory base, which may involve matter fields in
the representations $\bm{\mathrm{R}}$ and its conjugate
$\overline{\bm{\mathrm{R}}}$. In 6D F-theory models, which have
$\mathcal{N}=(1,0)$ supersymmetry, codimension-two loci typically support full
hypermultiplets of $\bm{\mathrm{R}}$ matter, which contain fields in both
$\bm{\mathrm{R}}$ and $\overline{\bm{\mathrm{R}}}$. (If $\bm{\mathrm{R}}$ is a
pseudoreal representation, matter can occur in half-hypermultiplets; however,
we are most interested in representations that have nonzero $\mathfrak{u}(1)$
charges, so cases where the entire representation, including the
$\mathfrak{u}(1)$ charge, is pseudoreal are not too important here.) In
4D F-theory models, a chiral multiplet in a representation $\bm{\mathrm{R}}$
contains fields only in the $\bm{\mathrm{R}}$ representation, but there must
be an accompanying CPT-conjugate antichiral multiplet in the
$\overline{\bm{\mathrm{R}}}$ representation. Both of these multiplets are
supported at the same codimension-two locus. Thus, while we often describe
matter as being in a representation $\bm{\mathrm{R}}$ as a shorthand, one
should remember that, due to supersymmetry considerations, there actually may
be fields in both $\bm{\mathrm{R}}$ and $\overline{\bm{\mathrm{R}}}$.
##### Residues
We denote the least non-negative residue of $a$ modulo $b$ as
$\overline{a}_{b}$:
$\overline{a}_{b}=b\left(\frac{a}{b}-\left\lfloor\frac{a}{b}\right\rfloor\right)\,.$
(11)
For instance,
$\overline{11}_{6}=5\,,\quad\overline{12}_{6}=0\,,\quad\overline{13}_{6}=1\,.$
(12)
We also define $u_{b}(a)$ as
$u_{b}(a)=\min(\overline{a}_{b},b-\overline{a}_{b})\,.$ (13)
Roughly, $u_{b}(a)$ gives the distance from an integer $a$ to the nearest
multiple of $b$. For instance,
$u_{6}(11)=1\,,\quad u_{6}(12)=0\,,\quad u_{6}(13)=1\,.$ (14)
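These definitions are easy to mechanize. The following sketch (our illustration, not code from the paper) implements $\overline{a}_{b}$ and $u_{b}(a)$ and reproduces the examples above:

```python
# bar(a, b) is the least non-negative residue of a mod b (Eq. 11);
# u(a, b) is the distance from a to the nearest multiple of b (Eq. 13).

def bar(a: int, b: int) -> int:
    return a % b

def u(a: int, b: int) -> int:
    r = bar(a, b)
    return min(r, b - r)

# Reproduces Eqs. (12) and (14):
assert [bar(a, 6) for a in (11, 12, 13)] == [5, 0, 1]
assert [u(a, 6) for a in (11, 12, 13)] == [1, 0, 1]
```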
##### Orders of vanishing
We denote the order of vanishing of an expression $y$ at a locus $\\{x=0\\}$
as
$\operatorname*{ord}_{x=0}(y)\,.$ (15)
We also denote the order of vanishing of $y$ at a codimension-two locus
$\\{x_{1}=x_{2}=0\\}$ as
$\operatorname*{ord}_{x_{1},x_{2}}(y)\,.$ (16)
Since we are giving general proposals about the behavior at loci supporting
$\mathfrak{u}(1)$ charges, we often want to describe the orders of vanishing
at a codimension-one or codimension-two locus supporting a gauge group or
charged matter without specifying a particular locus. Therefore, we denote the
order of vanishing of $y$ at an unspecified codimension-one locus as
$\operatorname{ord}_{1}(y)$ and the order of vanishing at an unspecified
codimension-two locus as $\operatorname{ord}_{2}(y)$.
##### Gauge group centers
At various points, we discuss elements of the center $Z(G)$ of a simple Lie
group $G$. We refer to the order of $Z(G)$ (the number of elements in the
center of $G$) as $d_{G}$. This number also equals the determinant of the
Cartan matrix of $G$. We label elements of $Z(G)$ with the integer $\nu$; we
also use $\nu$ to denote the discrete quotient involving the $\nu$ element of
the center. For most of the gauge groups discussed here, the center is
$\mathbb{Z}_{m}$ for some $m$, and we let $\nu$ run from $0$ to $m-1$. For
$\operatorname{Spin}(4k)$, however, the center is
$\mathbb{Z}_{2}\times\mathbb{Z}_{2}$. Since there are four elements in this
center, we let $\nu$ run from 0 to 3, with $\nu=0$ referring to the identity
element of the center. At codimension-two loci in the base supporting charged
matter, the singularity type enhances, and one can associate a Lie group with
the enhanced singularity type. It is useful to consider the center of this
enhanced group, even though the enhanced group does not represent a physical
gauge group. We label elements of the enhanced group’s center with the integer
$\mu$.
It is also useful to define the function $T_{G}(\nu)$ and the triplet of
functions $\vec{\tau}_{G}(\nu)$, where $G$ is a simple Lie group. The
expressions for $T_{G}(\nu)$ and $\vec{\tau}_{G}(\nu)$, which are given in
Table 1, depend on the group $G$ in question. As discussed in Section 4, the
$T_{G}(\nu)$ are essentially diagonal elements of the inverse Cartan matrix of
$G$. The $\vec{\tau}_{G}(\nu)$ provide information about the orders of
vanishing of the $\hat{x}$, $\hat{y}$, and $\hat{w}$ section components.
Singularity Type | $G$ | $T_{G}(\nu)$ | $\vec{\tau}_{G}(\nu)$ | Valid Values of $\nu$
---|---|---|---|---
$\text{I}_{1}$ | — | $0$ | $(0,0,0)$ | $0$
II | — | $0$ | $(0,0,0)$ | $0$
$\text{I}_{n}$ | $\operatorname{SU}(n)$ | $\frac{\nu(n-\nu)}{n}$ | $(0,1,1)\,u_{n}(\nu)$ | $0,1,\ldots,n-1$
III | $\operatorname{SU}(2)$ | $\frac{\nu(2-\nu)}{2}$ | $(1,1,1)\,u_{2}(\nu)$ | $0,1$
IV | $\operatorname{SU}(3)$ | $\frac{\nu(3-\nu)}{3}$ | $(1,1,2)\,u_{3}(\nu)$ | $0,1,2$
$\text{I}_{n-4}^{*}$ | $\operatorname{Spin}(2n)$ | $\begin{cases}0&\nu=0\\ 1&\nu=2\\ \frac{n}{4}&\nu=1,3\end{cases}$ | $\begin{cases}(0,0,0)&\nu=0\\ (1,2,2)&\nu=2\\ (1,\lfloor\frac{n}{2}\rfloor,\lceil\frac{n}{2}\rceil)&\nu=1,3\end{cases}$ | $0,1,2,3$
$\text{IV}^{*}$ | $\operatorname{E}_{6}$ | $\frac{2\nu(3-\nu)}{3}$ | $(2,2,3)\,u_{3}(\nu)$ | $0,1,2$
$\text{III}^{*}$ | $\operatorname{E}_{7}$ | $\frac{3}{2}\nu$ | $(2,3,3)\,\nu$ | $0,1$
$\text{II}^{*}$ | $\operatorname{E}_{8}$ | $0$ | $(0,0,0)$ | $0$
Table 1: Expressions for $T_{G}(\nu)$ and $\vec{\tau}_{G}(\nu)$. While the
singularity types $\text{I}_{1}$ and II do not have a listed gauge group, $G$
can be thought of as $\operatorname{SU}(1)$ in these situations. In this
paper, we focus on the split versions of these singularity types.
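For readers who wish to evaluate the table programmatically, the following sketch (ours; the string labels for the groups are an arbitrary convention) transcribes the $T_{G}(\nu)$ column:

```python
from fractions import Fraction

def T(group: str, n: int, nu: int) -> Fraction:
    """T_G(nu) from Table 1. For 'SU', n labels SU(n); for 'Spin', n labels
    Spin(2n). The I_1 and II rows correspond to T = 0 (formally SU(1))."""
    if group == 'SU':
        return Fraction(nu * (n - nu), n)
    if group == 'Spin':
        return {0: Fraction(0), 2: Fraction(1)}.get(nu, Fraction(n, 4))
    if group == 'E6':
        return Fraction(2 * nu * (3 - nu), 3)
    if group == 'E7':
        return Fraction(3 * nu, 2)
    if group == 'E8':
        return Fraction(0)
    raise ValueError(f'unrecognized group label: {group}')

assert T('SU', 5, 1) == Fraction(4, 5)    # nu(n - nu)/n for SU(5), nu = 1
assert T('Spin', 6, 3) == Fraction(3, 2)  # n/4 for Spin(12), nu = 1, 3
```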
### 2.2 Proposal
Suppose a codimension-one locus $\\{\sigma=0\\}$ in the base supports a
simply-laced gauge algebra $\mathfrak{g}$ whose universal covering group is
$G$. The elliptic fibers along this locus are singular, and the singularity
type is that corresponding to $G$ in the Kodaira classification. If there is
also a $\mathfrak{u}(1)$ gauge algebra, the section components of its
associated generating section should vanish to orders
$\operatorname{ord}_{1}(\hat{z})=0\,,\quad\left(\operatorname{ord}_{1}(\hat{x}),\operatorname{ord}_{1}(\hat{y}),\operatorname{ord}_{1}(\hat{w})\right)=\vec{\tau}_{G}(\nu)$
(17)
for some value of $\nu$ denoting an element of the center of $G$. As described
in more detail in Section 4, $\nu$ encodes information about which component
of the resolved singular fibers along $\\{\sigma=0\\}$ is hit by the
generating section. This in turn (at least partially) describes the global
structure of the gauge group CveticLinU1 .
Now suppose there is some matter with $\mathfrak{u}(1)$ charge $q$ occurring
at a codimension-two locus in the base with an enhanced singularity type. At
least locally, this codimension-two locus can be thought of as the
intersection of the codimension-one loci $\{\sigma_{i}=0\}$, with $i$ running
from $1$ to $N$. (Some situations involve matter localized at nodal
singularities of irreducible codimension-one loci in the base. The most
notable examples are $\mathfrak{u}(1)$-charged singlets, which occur at double
points of the discriminant locus $\{\Delta=0\}$. Locally, such situations
still look like the intersection of multiple codimension-one loci, even though
these loci may be identified globally. For the case of singlets, the
singularity type can be thought of as enhancing from
$\text{I}_{1}\times\text{I}_{1}$ to $\text{I}_{2}$, or from
$\operatorname{SU}(1)\times\operatorname{SU}(1)$ to $\operatorname{SU}(2)$.)
In almost all cases of interest, $N$ will be 2; however, we leave open the
possibility that $N$ can be greater than 2 to account for more exotic matter
types, such as trifundamental representations. Each of these loci has singular
fibers of type $G_{i}$, where we are referring to singularity types by their
corresponding ADE groups in the Kodaira classification. (For the singularity
types $\text{I}_{1}$ and II, whose corresponding group can roughly be thought
of as $\operatorname{SU}(1)$, the only allowed value of $\nu$ is 0; as
indicated in Table 1, $T(\nu)$ and $\vec{\tau}(\nu)$ are 0 in these
situations.) Each $\{\sigma_{i}=0\}$ locus also has a corresponding $\nu_{i}$
consistent with the codimension-one orders of vanishing described above. At
the codimension-two locus, the singularity type enhances from
$G_{1}\times\cdots\times G_{N}$ to some singularity type $H$, where $H$ is
again an ADE group. The generating section components for the
$\mathfrak{u}(1)$ should then vanish to orders
$\operatorname{ord}_{2}(\hat{z})=\frac{1}{2}\left(\frac{\prod_{i}d_{G_{i}}}{d_{H}}q^{2}+\sum_{i}T_{G_{i}}(\nu_{i})-T_{H}(\mu)\right)$
(18)
and
$\left(\operatorname{ord}_{2}(\hat{x}),\operatorname{ord}_{2}(\hat{y}),\operatorname{ord}_{2}(\hat{w})\right)=(2,3,4)\times\operatorname{ord}_{2}(\hat{z})+\vec{\tau}_{H}(\mu)$
(19)
at the codimension-two locus. Here $d_{G}$ represents the order (the number of
elements) of the center of $G$, while $\mu$ is an integer representing a
particular element of the center of $H$.
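To make Eq. 18 concrete, here is a sketch (ours, not the authors' code) applied to the simplest case of a charge-$q$ singlet at an $\text{I}_{1}\times\text{I}_{1}\to\text{I}_{2}$ collision, where both $G_{i}$ are $\operatorname{SU}(1)$ (so $d_{G_{i}}=1$, $T_{G_{i}}=0$) and $H=\operatorname{SU}(2)$ (so $d_{H}=2$). The assignment $\mu=q\bmod 2$ is our assumption, consistent with the singlet examples of Section 6; Table 2 gives the general relation between $\nu$, $q$, and $\mu$:

```python
from fractions import Fraction

def ord2_z(d_Gs, T_Gs, d_H, T_H, q):
    # Eq. (18): order of vanishing of z-hat at the codimension-two locus.
    prod_dG = 1
    for d in d_Gs:
        prod_dG *= d
    return Fraction(1, 2) * (Fraction(prod_dG, d_H) * q**2 + sum(T_Gs) - T_H)

def T_SU2(mu):
    # T_G(nu) for SU(2), read off from Table 1: nu(2 - nu)/2.
    return Fraction(mu * (2 - mu), 2)

for q in (1, 2, 3, 4):
    mu = q % 2  # our assumed center assignment for singlets
    print(q, ord2_z([1, 1], [0, 0], 2, T_SU2(mu), q))  # -> 0, 1, 2, 4
```

The increasing output ($0$, $1$, $2$, $4$) matches the observation recalled in the introduction that $\hat{z}$ vanishes to orders larger than 1 at $\bm{1}_{3}$ and $\bm{1}_{4}$ loci.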
An important special case is matter charged under the gauge algebra
$\mathfrak{g}\oplus\mathfrak{u}(1)$, where $\mathfrak{g}$ is a simple Lie
algebra. We again assume that $\mathfrak{g}$ is simply-laced. There is a
codimension-one locus $\\{\sigma=0\\}$ along which the elliptic fibers are
singular with singularity type $G$. The orders of vanishing at
$\\{\sigma=0\\}$ are still given by Eq. 17 for some $\nu$ in the center of
$G$. Now suppose that matter charged under $\mathfrak{g}\oplus\mathfrak{u}(1)$
occurs at the intersection of $\\{\sigma=0\\}$ with the residual discriminant
locus, which has singularity type $\text{I}_{1}$. For the residual
discriminant locus, whose ADE group can be thought of as
$\operatorname{SU}(1)$, the associated $\nu$, $T(\nu)$, and $\vec{\tau}(\nu)$
are 0. If the enhanced singularity type at the codimension-two locus is $H$,
the orders of vanishing at the codimension-two locus are
$\operatorname{ord}_{2}(\hat{z})=\frac{1}{2}\left(\frac{d_{G}}{d_{H}}q^{2}+T_{G}(\nu)-T_{H}(\mu)\right)$
(20)
and
$\left(\operatorname{ord}_{2}(\hat{x}),\operatorname{ord}_{2}(\hat{y}),\operatorname{ord}_{2}(\hat{w})\right)=(2,3,4)\times\operatorname{ord}_{2}(\hat{z})+\vec{\tau}_{H}(\mu)\,.$
(21)
These expressions provide a procedure for determining the $\mathfrak{u}(1)$
charge of some matter given the orders of vanishing of the section components,
at least up to sign. From $f$, $g$, and the discriminant $\Delta$, one can
determine the $G_{i}$ and $H$, which in turn determine the $d_{G_{i}}$ and
$d_{H}$. One can also use Equations 17 and 19 to read off $\nu$ and $\mu$ for
the matter in question. (There may be two values of $\nu$ (or $\mu$) corresponding to a particular value of $\vec{\tau}_{G}(\nu)$ (or $\vec{\tau}_{H}(\mu)$); however, $T_{G}(\nu)$ will be the same for these two values of $\nu$, so this ambiguity does not cause any problems when using Eq. 18.) This information can then be plugged into Eq. 18, and one can solve for
$q$ up to a sign.
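Explicitly, solving Eq. 18 for the charge gives
$q=\pm\sqrt{\frac{d_{H}}{\prod_{i}d_{G_{i}}}\mathopen{}\mathclose{{}\left(2\operatorname{ord}_{2}(\hat{z})-\sum_{i}T_{G_{i}}(\nu_{i})+T_{H}(\mu)}\right)}\,,$
which makes the sign ambiguity manifest.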
To use these relations to predict the orders of vanishing for matter with a
particular $\mathfrak{u}(1)$ charge, one must determine the value of $\mu$
corresponding to $\nu$ and $q$. The relations between these parameters, which
depend on the specific groups and representations in question, are described
in Table 2. Alternatively, one can try all values of $\mu$ allowed for the
codimension-two singularity type $H$ and determine which values of $\mu$ give
sensible orders of vanishing for a desired $\nu$ and $q$. However, note that even if the formulas predict reasonable orders of vanishing for a choice of $\nu$ and $q$, this does not imply that one can actually find an F-theory model that realizes the appropriate matter with this $\mathfrak{u}(1)$ charge. The proposals should be thought of as giving the orders of vanishing assuming that a model supporting the appropriate matter exists. In fact, one could likely use these formulas to argue that certain $\mathfrak{u}(1)$ charges cannot be realized in F-theory. One would first calculate the orders of vanishing predicted by the formulas for the $\mathfrak{u}(1)$ charge in question. If attempts to construct an F-theory model with a rational section admitting these orders of vanishing always lead to obstructions, it would hint that the corresponding $\mathfrak{u}(1)$ charge cannot be realized.
For all the models discussed here, including several examples previously seen
in the literature, these formulas are satisfied exactly. However, these
formulas may more generally give the lowest orders of vanishing of the section
components. As a helpful analogy, consider the Kodaira table in Table 4, which
lists the orders of vanishing of $f$, $g$, and $\Delta$ for various
singularity types. In some elliptic fibrations, the orders of vanishing at a
locus supporting a particular singularity type may exceed those listed in the
table, so long as these larger orders of vanishing do not correspond to an
enhanced singularity type. For instance, while $f$, $g$, and $\Delta$
typically vanish to orders $(3,4,8)$ at an $\text{IV}^{*}$ locus, they could
alternatively vanish to orders $(4,4,8)$. However, if they vanish to orders
$(3,5,9)$, the singularity type would be $\text{III}^{*}$ rather than
$\text{IV}^{*}$. One might expect that the orders of vanishing of the section
components behave in a similar way. As an example, our formulas predict that,
for an $\mathfrak{su}(5)\oplus\mathfrak{u}(1)$ model with $\nu=1$,
$(\hat{x},\hat{y},\hat{z},\hat{w})$ should vanish to orders $(0,1,0,1)$ at the
codimension-one $\mathfrak{su}(5)$ locus. An
$\mathfrak{su}(5)\oplus\mathfrak{u}(1)$ model with $\nu=1$ and codimension-one
orders of vanishing such as $(0,1,0,2)$ may still be consistent with our
proposals. However, orders of vanishing such as $(0,2,0,2)$ would correspond to a different value of $\nu$, so the proposals would suggest that $\mathfrak{su}(5)\oplus\mathfrak{u}(1)$ models with such codimension-one orders of vanishing have a different global gauge group structure. Since
we do not analyze F-theory models exhibiting such behaviors here, these
thoughts on models with higher orders of vanishing are somewhat speculative.
It would therefore be important to more properly establish whether such
situations can occur and how to handle them in future work.
Gauge Factor | Enhancement | Representation | Allowed $q$ | $\mu$
---|---|---|---|---
— | $\text{I}_{1}\to\text{I}_{2}$ | $\bm{1}_{q}$ | $\mathbb{Z}$ | $\overline{q}_{2}$
$\operatorname{SU}(n)$ | $\text{I}_{n}^{(s)}\to\text{I}_{n+1}$ | $\bm{n}_{q}$ | $\frac{\nu}{n}+\mathbb{Z}$ | $\overline{nq}_{n+1}$
$\operatorname{SU}(n)$, odd $n$ | $\text{I}_{n}^{(s)}\to\text{I}^{*}_{n-4}$ | $\bm{\frac{n}{2}(n-1)}_{q}$ | $\frac{2\nu}{n}+\mathbb{Z}$ | $\overline{nq}_{4}$
$\operatorname{SU}(n)$, even $n$ | $\text{I}_{n}^{(s)}\to\text{I}^{*}_{n-4}$ | $\bm{\frac{n}{2}(n-1)}_{q}$ | $\frac{2\nu}{n}+\mathbb{Z}$ | $\overline{(q-\frac{2}{n}\nu)}_{2}+\overline{2\nu}_{4}$
$\operatorname{Spin}(2n)$ | ${\text{I}_{n-4}^{*(s)}}\to\text{I}^{*}_{n-3}$ | $\bm{2n}_{q}$ | $\frac{\nu}{2}+\mathbb{Z}$ | $\overline{(2q-\nu)}_{4}+\overline{2q}_{2}$
$\operatorname{Spin}(8)$ | ${\text{I}_{0}^{*}}^{(s)}\to\text{I}^{*}_{1}$ | ${\bm{8_{\text{s}}}}_{,q}$ | $\begin{cases}\frac{1}{2}+\mathbb{Z}&\nu=1,2\\\ \mathbb{Z}&\nu=0,3\end{cases}$ | $\overline{(2q-2\nu)}_{4}$
$\operatorname{Spin}(8)$ | ${\text{I}_{0}^{*}}^{(s)}\to\text{I}^{*}_{1}$ | ${\bm{8_{\text{c}}}}_{,q}$ | $\begin{cases}\frac{1}{2}+\mathbb{Z}&\nu=2,3\\\ \mathbb{Z}&\nu=0,1\end{cases}$ | $\overline{(2q-2\nu)}_{4}$
$\operatorname{Spin}(10)$ | ${\text{I}_{1}^{*}}^{(s)}\to{\text{IV}^{*}}$ | $\bm{16}_{q}$ | $\frac{4-\nu}{4}+\mathbb{Z}$ | $\overline{4q}_{3}$
$\operatorname{Spin}(12)$ | ${\text{I}_{2}^{*}}^{(s)}\to{\text{III}}^{*}$ | $\bm{32}_{q}\oplus\bm{1}_{2q}$ | $\begin{cases}\frac{1}{2}+\mathbb{Z}&\nu=2,3\\\ \mathbb{Z}&\nu=0,1\end{cases}$ | $\overline{(2q+\nu)}_{2}$
$\operatorname{Spin}(12)$ | ${\text{I}_{2}^{*}}^{(s)}\to{\text{III}}^{*}$ | $\bm{32^{\prime}}_{q}\oplus\bm{1}_{2q}$ | $\begin{cases}\frac{1}{2}+\mathbb{Z}&\nu=1,2\\\ \mathbb{Z}&\nu=0,3\end{cases}$ | $\overline{(2q+\nu)}_{2}$
$\operatorname{Spin}(14)$ | ${\text{I}_{3}^{*}}^{(s)}\to{\text{II}}^{*}$ | $\bm{64}_{q}\oplus\bm{14}_{2q}$ | $\frac{\nu}{4}+\mathbb{Z}$ | 0
$\operatorname{E}_{6}$ | ${\text{IV}^{*}}^{(s)}\to{\text{III}}^{*}$ | $\bm{27}_{q}$ | $\frac{\nu}{3}+\mathbb{Z}$ | $\overline{3q}_{2}$
$\operatorname{E}_{7}$ | ${\text{III}^{*}}\to{\text{II}}^{*}$ | $\bm{56}_{q}\oplus\bm{1}_{2q}$ | $\frac{\nu}{2}+\mathbb{Z}$ | 0
Table 2: Relations between $\nu$, $\mu$, and the $\mathfrak{u}(1)$ charge $q$
for various representations. Note that, in our conventions, the $\bm{32}$
representation of $\operatorname{Spin}(12)$ has highest weight
$[0,0,0,0,0,1]$, while the $\bm{32^{\prime}}$ representation has highest
weight $[0,0,0,0,1,0]$. This convention differs from Slansky but agrees with
YamatsuGroupTheory .
### 2.3 Example
As an example of how to read off $\mathfrak{u}(1)$ charges, consider the
F-theory model described by a Weierstrass equation of the form
$\displaystyle y^{2}$
$\displaystyle=x^{3}+\mathopen{}\mathclose{{}\left(c_{1}c_{3}-b^{2}c_{0}-\frac{1}{3}c_{2}^{2}}\right)xz^{4}$
(22)
$\displaystyle\qquad\quad+\mathopen{}\mathclose{{}\left(c_{0}c_{3}^{2}-\frac{1}{3}c_{1}c_{2}c_{3}+\frac{2}{27}c_{2}^{3}-\frac{2}{3}b^{2}c_{0}c_{2}+\frac{1}{4}b^{2}c_{1}^{2}}\right)z^{6}$
with
$\displaystyle c_{0}$
$\displaystyle=\sigma\Big{[}8c_{1,1}c_{3,0}^{3}+\sigma\mathopen{}\mathclose{{}\left(2c_{3,0}^{2}\mathopen{}\mathclose{{}\left(2b^{2}c_{1,1}^{2}+c_{1,2}}\right)-8bc_{1,1}c_{3,0}c_{2,1}+c_{2,1}^{2}}\right)$
(23)
$\displaystyle\qquad\quad+\sigma^{2}\mathopen{}\mathclose{{}\left(c_{2,1}\mathopen{}\mathclose{{}\left(4b^{3}c_{1,1}^{2}-bc_{1,2}}\right)+2c_{3,0}\mathopen{}\mathclose{{}\left(-4b^{4}c_{1,1}^{3}+b^{2}c_{1,1}c_{1,2}+c_{1,3}}\right)}\right)-\sigma^{3}c_{0,4}\Big{]}\,,$
$\displaystyle c_{1}$
$\displaystyle=\sigma\Big{[}2c_{3,0}\mathopen{}\mathclose{{}\left(2bc_{1,1}c_{3,0}+c_{2,1}}\right)+b\sigma\mathopen{}\mathclose{{}\left(c_{1,2}c_{3,0}-2bc_{1,1}c_{2,1}}\right)+b\sigma^{2}c_{1,3}+\sigma^{3}c_{1,4}\Big{]}\,,$
$\displaystyle c_{2}$ $\displaystyle=c_{3,0}^{2}+b\sigma c_{2,1}+\sigma^{4}c_{2,4}\,,$
$\displaystyle c_{3}$ $\displaystyle=bc_{3,0}+\sigma^{4}c_{3,4}\,.$
We assume that the base of the elliptic fibration is complex two-dimensional,
giving us a 6D F-theory model. The parameters $b$ and $c_{i,j}$ are
holomorphic sections of line bundles over the base, with the line bundles
chosen such that $f$ and $g$ are holomorphic sections of
$\mathcal{O}(-4K_{B})$ and $\mathcal{O}(-6K_{B})$. This model is a tuned
version of the Morrison–Park form MorrisonParkU1 , and it admits a generating
section with components
$\displaystyle\hat{x}$
$\displaystyle=\frac{1}{3}b^{2}c_{3,0}^{2}-\frac{2}{3}\sigma\mathopen{}\mathclose{{}\left(b^{3}c_{2,1}}\right)+\sigma^{4}\mathopen{}\mathclose{{}\left(2bc_{3,0}c_{3,4}-\frac{2}{3}b^{2}c_{2,4}}\right)+\sigma^{8}c_{3,4}^{2}\,,$
(24) $\displaystyle\hat{y}$
$\displaystyle=-2\sigma\mathopen{}\mathclose{{}\left(b^{5}c_{1,1}c_{3,0}^{2}}\right)+\sigma^{2}\mathopen{}\mathclose{{}\left(b^{6}c_{1,1}c_{2,1}-\frac{1}{2}b^{5}c_{1,2}c_{3,0}}\right)-\frac{1}{2}\sigma^{3}\mathopen{}\mathclose{{}\left(b^{5}c_{1,3}}\right)$
$\displaystyle\qquad-\frac{1}{2}\sigma^{4}\mathopen{}\mathclose{{}\left(b^{2}\mathopen{}\mathclose{{}\left(b^{2}c_{1,4}-2bc_{2,4}c_{3,0}+4c_{3,0}^{2}c_{3,4}}\right)}\right)+b^{3}\sigma^{5}c_{2,1}c_{3,4}$
$\displaystyle\qquad+b\sigma^{8}c_{3,4}\mathopen{}\mathclose{{}\left(bc_{2,4}-3c_{3,0}c_{3,4}}\right)-\sigma^{12}c_{3,4}^{3}\,,$
$\displaystyle\hat{z}$ $\displaystyle=b\,,$ $\displaystyle\hat{w}$
$\displaystyle=-4\sigma\mathopen{}\mathclose{{}\left(b^{6}c_{1,1}c_{3,0}^{3}}\right)+b^{6}\sigma^{2}c_{3,0}\mathopen{}\mathclose{{}\left(6bc_{1,1}c_{2,1}-c_{3,0}\mathopen{}\mathclose{{}\left(4b^{2}c_{1,1}^{2}+c_{1,2}}\right)}\right)$
$\displaystyle\qquad+b^{6}\sigma^{3}\mathopen{}\mathclose{{}\left(bc_{2,1}\mathopen{}\mathclose{{}\left(c_{1,2}-4b^{2}c_{1,1}^{2}}\right)+c_{3,0}\mathopen{}\mathclose{{}\left(8b^{4}c_{1,1}^{3}-2b^{2}c_{1,1}c_{1,2}-c_{1,3}}\right)}\right)$
$\displaystyle\qquad+\sigma^{4}\mathopen{}\mathclose{{}\left(b^{6}c_{0,4}+b^{3}c_{3,0}\mathopen{}\mathclose{{}\left(b^{2}c_{1,4}-2bc_{2,4}c_{3,0}+4c_{3,0}^{2}c_{3,4}}\right)}\right)$
$\displaystyle\qquad+2b^{4}\sigma^{5}\mathopen{}\mathclose{{}\left(bc_{2,1}c_{2,4}+c_{3,0}c_{3,4}\mathopen{}\mathclose{{}\left(2bc_{1,1}c_{3,0}-3c_{2,1}}\right)}\right)$
$\displaystyle\qquad+b^{5}\sigma^{6}c_{3,4}\mathopen{}\mathclose{{}\left(c_{1,2}c_{3,0}-2bc_{1,1}c_{2,1}}\right)+b^{5}\sigma^{7}c_{1,3}c_{3,4}$
$\displaystyle\qquad+b^{2}\sigma^{8}\mathopen{}\mathclose{{}\left(b^{2}c_{2,4}^{2}+bc_{3,4}\mathopen{}\mathclose{{}\left(bc_{1,4}-8c_{2,4}c_{3,0}}\right)+14c_{3,0}^{2}c_{3,4}^{2}}\right)-4\sigma^{9}b^{3}c_{2,1}c_{3,4}^{2}$
$\displaystyle\qquad-\sigma^{12}c_{3,4}^{2}\mathopen{}\mathclose{{}\left(4b^{2}c_{2,4}-12bc_{3,0}c_{3,4}-3c_{3,4}^{2}\sigma^{4}}\right)\,.$
This indicates that the model has a $\mathfrak{u}(1)$ gauge algebra.
Additionally, the discriminant is proportional to $\sigma^{5}$:
$\Delta=2bc_{1,1}c_{3,0}^{4}\Delta^{\prime}\sigma^{5}+\mathcal{O}(\sigma^{6})\,,$
(25)
where
$\displaystyle\Delta^{\prime}$
$\displaystyle=-112b^{8}c_{1,1}^{3}c_{2,1}c_{3,0}^{2}-4b^{4}c_{1,3}c_{2,1}c_{3,0}^{2}+4b^{3}c_{0,4}c_{3,0}^{3}+128b^{9}c_{1,1}^{4}c_{3,0}^{3}+8b^{2}c_{1,4}c_{3,0}^{4}$
(26)
$\displaystyle\qquad-16bc_{2,4}c_{3,0}^{5}-4b^{6}c_{1,1}c_{2,1}\mathopen{}\mathclose{{}\left(c_{2,1}^{2}-3c_{1,2}c_{3,0}^{2}}\right)+12b^{7}c_{1,1}^{2}c_{3,0}\mathopen{}\mathclose{{}\left(3c_{2,1}^{2}-2c_{1,2}c_{3,0}^{2}}\right)$
$\displaystyle\qquad+b^{5}\mathopen{}\mathclose{{}\left(-2c_{1,2}c_{2,1}^{2}c_{3,0}+\mathopen{}\mathclose{{}\left(c_{1,2}^{2}+8c_{1,1}c_{1,3}}\right)c_{3,0}^{3}}\right)+32c_{3,0}^{6}c_{3,4}.$
One can verify that the split condition GrassiMorrison
$\displaystyle\frac{9g}{2f}\Bigg{|}_{\sigma=0}=-\psi^{2}$ (27)
is satisfied for an appropriate $\psi$, indicating that the singularity type along $\\{\sigma=0\\}$ is $\text{I}_{5}^{s}$ and that the supported gauge algebra is $\mathfrak{su}(5)$.
There are no other codimension-one loci supporting nonabelian gauge algebras
in the model, and the total gauge algebra is
$\mathfrak{su}(5)\oplus\mathfrak{u}(1)$.
There are two sources of charged matter in this model. If $\\{\sigma=0\\}$ is
a curve of genus $g$, there are $g$ hypermultiplets of adjoint ($\bm{24}$)
matter not localized at a codimension-two locus. The $\mathfrak{u}(1)$ charge
of this matter is 0, and this matter is therefore not of significant interest
to us; however, we must account for it to properly satisfy the 6D anomaly
cancellation conditions. The remaining matter is localized at codimension-two
loci. Along $\\{\sigma=0\\}$, the singularity type enhances to type
$\text{I}_{1}^{*}$ (or $\operatorname{Spin}(10)$) at $\\{\sigma=c_{3,0}=0\\}$
and to type $\text{I}_{6}$ (or $\operatorname{SU}(6)$) at $\\{\sigma=b=0\\}$,
$\\{\sigma=c_{1,1}=0\\}$, and $\\{\sigma=\Delta^{\prime}=0\\}$. By the
Katz–Vafa method, the $\text{I}_{1}^{*}$ locus supports hypermultiplets of
two-index antisymmetric ($\bm{10}$) matter, while the $\text{I}_{6}$ loci
support hypermultiplets of fundamental ($\bm{5}$) matter. There are also
codimension-two loci not along $\\{\sigma=0\\}$ where the singularity type
enhances to $\text{I}_{2}$: $\\{b=c_{3,4}=0\\}$ and
$\displaystyle V_{q=1}$
$\displaystyle=\\{\hat{y}/\sigma=\hat{w}/\sigma=0\\}\setminus$ (28)
$\displaystyle\qquad\Big{(}\\{\sigma=c_{3,0}=0\\}\cup\\{\sigma=b=0\\}\cup\\{\sigma=c_{1,1}=0\\}\cup\\{b=c_{3,4}=0\\}\Big{)}\,.$
These loci support singlets uncharged under the $\mathfrak{su}(5)$ algebra.
To determine the $\mathfrak{u}(1)$ charges, we should look at the orders of
vanishing of the section components. First, along $\\{\sigma=0\\}$, the
section components, listed in Eq. 24, vanish to orders
$\operatorname*{ord}_{\sigma=0}(\hat{z})=0\,,\quad\mathopen{}\mathclose{{}\left(\operatorname*{ord}_{\sigma=0}(\hat{x}),\operatorname*{ord}_{\sigma=0}(\hat{y}),\operatorname*{ord}_{\sigma=0}(\hat{w})}\right)=\mathopen{}\mathclose{{}\left(0,1,1}\right)\,.$
(29)
Comparing to Eq. 17 and noting that
$\vec{\tau}_{\operatorname{SU}(5)}(\nu)=(0,1,1)\times u_{5}(\nu)\,,$ (30)
we see that $\nu$ is either 1 or 4. Either can be chosen without affecting the
end results of our calculation, so we take $\nu$ to be 1. We additionally know
that the center of $\operatorname{SU}(5)$ is $\mathbb{Z}_{5}$, so
$d_{\operatorname{SU}(5)}=5$.
The codimension-two orders of vanishing at the matter loci are listed in Table
3. This information allows us to determine the $\mathfrak{u}(1)$ charges up to
sign, which we demonstrate for three of the matter loci.
* First, consider the locus $\\{\sigma=c_{3,0}=0\\}$, which supports $\bm{10}$ matter. From Table 3, we see that
$\mathopen{}\mathclose{{}\left(\operatorname{ord}_{2}(\hat{x}),\operatorname{ord}_{2}(\hat{y}),\operatorname{ord}_{2}(\hat{w})}\right)-(2,3,4)\times\operatorname{ord}_{2}(\hat{z})=\vec{\tau}_{\operatorname{Spin}(10)}(\mu)=(1,2,3)\,.$
(31)
According to Table 1, $\mu$ is therefore either $1$ or $3$ for this matter
locus. Additionally, the center of $\operatorname{Spin}(10)$ is
$\mathbb{Z}_{4}$, and $d_{\operatorname{Spin}(10)}$ is 4. Plugging this
information into either Eq. 18 or Eq. 20 leads to
$q^{2}=\frac{d_{\operatorname{Spin}(10)}}{d_{\operatorname{SU}(5)}}\Big{(}2\operatorname{ord}_{2}(\hat{z})+T_{\operatorname{Spin}(10)}(\mu)-T_{\operatorname{SU}(5)}(\nu)\Big{)}=\frac{4}{5}\Big{(}2\times
0+\frac{5}{4}-\frac{4}{5}\Big{)}=\frac{9}{25}\,.$ (32)
Therefore, $\lvert q\rvert$ is $\frac{3}{5}$ for the matter supported here.
* Second, consider the locus $\\{\sigma=b=0\\}$, which supports $\bm{5}$ matter.
From Table 3,
$\mathopen{}\mathclose{{}\left(\operatorname{ord}_{2}(\hat{x}),\operatorname{ord}_{2}(\hat{y}),\operatorname{ord}_{2}(\hat{w})}\right)-(2,3,4)\times\operatorname{ord}_{2}(\hat{z})=\vec{\tau}_{\operatorname{SU}(6)}(\mu)=(0,3,3)\,.$
(33)
According to Table 1, $\mu$ is $3$ for this matter locus, and since the center
of $\operatorname{SU}(6)$ is $\mathbb{Z}_{6}$, $d_{\operatorname{SU}(6)}$ is
6. Plugging this information into either Eq. 18 or Eq. 20 leads to
$q^{2}=\frac{d_{\operatorname{SU}(6)}}{d_{\operatorname{SU}(5)}}\Big{(}2\operatorname{ord}_{2}(\hat{z})+T_{\operatorname{SU}(6)}(\mu)-T_{\operatorname{SU}(5)}(\nu)\Big{)}=\frac{6}{5}\Big{(}2\times
1+\frac{9}{6}-\frac{4}{5}\Big{)}=\frac{81}{25}\,.$ (34)
Therefore, $\lvert q\rvert$ is $\frac{9}{5}$ for the matter supported here.
* Third, consider the locus $\\{b=c_{3,4}=0\\}$, which supports singlet matter,
uncharged under $\operatorname{SU}(5)$, with a potentially nonzero
$\mathfrak{u}(1)$ charge. This locus consists of double points of the
discriminant curve. The singularity type enhances from
$\text{I}_{1}\times\text{I}_{1}$ to $\text{I}_{2}$, or, in terms of ADE
groups, from $\operatorname{SU}(1)\times\operatorname{SU}(1)$ to
$\operatorname{SU}(2)$. For the $\operatorname{SU}(1)$ loci, $\nu$ is 0, and
there are no nontrivial contributions from either
$\vec{\tau}_{\operatorname{SU}(1)}(\nu)$ or $T_{\operatorname{SU}(1)}(\nu)$.
We still need to determine $\mu$, however. According to the codimension-two
orders of vanishing,
$\mathopen{}\mathclose{{}\left(\operatorname{ord}_{2}(\hat{x}),\operatorname{ord}_{2}(\hat{y}),\operatorname{ord}_{2}(\hat{w})}\right)-(2,3,4)\times\operatorname{ord}_{2}(\hat{z})=\vec{\tau}_{\operatorname{SU}(2)}(\mu)=(0,0,0)\,,$
(35)
implying that $\mu$ is 0. Additionally, $d_{\operatorname{SU}(2)}$ is 2. Plugging this information into either Eq. 18 or Eq. 20 leads to
$q^{2}=d_{\operatorname{SU}(2)}\Big{(}2\operatorname{ord}_{2}(\hat{z})+T_{\operatorname{SU}(2)}(\mu)\Big{)}=2\Big{(}2\times 1+0\Big{)}=4\,.$ (36)
Therefore, $\lvert q\rvert$ is $2$ for the matter supported here.
The values of $\lvert q\rvert$ for the other loci can be found by similar
procedures, and the results are summarized in the penultimate column of Table
3.
Locus | Enhancement | $\operatorname{ord}_{2}(\hat{x},\hat{y},\hat{z},\hat{w})$ | $\lvert q\rvert$ | $\mathfrak{su}(5)\oplus\mathfrak{u}(1)$ Rep.
---|---|---|---|---
$\\{\sigma=c_{3,0}=0\\}$ | $\text{I}_{5}\times\text{I}_{1}\rightarrow\text{I}_{1}^{*}$ | (1,2,0,3) | $\frac{3}{5}$ | $\bm{10}_{-\frac{3}{5}}$
$\\{\sigma=b=0\\}$ | $\text{I}_{5}\times\text{I}_{1}\rightarrow\text{I}_{6}$ | (2,6,1,7) | $\frac{9}{5}$ | $\bm{5}_{-\frac{9}{5}}$
$\\{\sigma=c_{1,1}=0\\}$ | $\text{I}_{5}\times\text{I}_{1}\rightarrow\text{I}_{6}$ | (0,2,0,2) | $\frac{4}{5}$ | $\bm{5}_{-\frac{4}{5}}$
$\\{\sigma=\Delta^{\prime}=0\\}$ | $\text{I}_{5}\times\text{I}_{1}\rightarrow\text{I}_{6}$ | (0,1,0,1) | $\frac{1}{5}$ | $\bm{5}_{\frac{1}{5}}$
$\\{b=c_{3,4}=0\\}$ | $\text{I}_{1}\times\text{I}_{1}\rightarrow\text{I}_{2}$ | (2,3,1,4) | $2$ | $\bm{1}_{2}$
$V_{q=1}$ | $\text{I}_{1}\times\text{I}_{1}\rightarrow\text{I}_{2}$ | (0,1,0,1) | $1$ | $\bm{1}_{1}$
Table 3: Matter loci for $\mathfrak{su}(5)\oplus\mathfrak{u}(1)$ example along
with the codimension-two orders of vanishing of the section components. For
typographical reasons, we describe the singularity types using the Kodaira
notation rather than the associated ADE gauge groups.
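The arithmetic in the three computations above is mechanical and easy to automate. The following Python sketch is our own illustration (the helper names are ours; the $T$ values are those quoted in the text, and $T_{\operatorname{SU}(n)}(\nu)$ follows Eq. 66); it reproduces the $\lvert q\rvert$ entries of Table 3 for these loci by inverting Eqs. 18 and 20:

```python
from fractions import Fraction
from math import isqrt

def T_su(n, nu):
    # T_{SU(n)}(nu) = nu (n - nu) / n, cf. Eq. 66
    return Fraction(nu * (n - nu), n)

def abs_q(ord2_z, d_G, T_G, d_H, T_H):
    # Invert Eq. 18/20: q^2 = (d_H / d_G) (2 ord_2(z-hat) + T_H - T_G)
    q2 = Fraction(d_H, d_G) * (2 * ord2_z + T_H - T_G)
    num, den = isqrt(q2.numerator), isqrt(q2.denominator)
    assert num * num == q2.numerator and den * den == q2.denominator
    return Fraction(num, den)

T_su5 = T_su(5, 1)  # nu = 1 along {sigma = 0}

# 10 matter at {sigma = c_30 = 0}: H = Spin(10), mu = 1, T_H = 5/4  -> 3/5
print(abs_q(0, 5, T_su5, 4, Fraction(5, 4)))
# 5 matter at {sigma = b = 0}: H = SU(6), mu = 3                    -> 9/5
print(abs_q(1, 5, T_su5, 6, T_su(6, 3)))
# singlet at {b = c_34 = 0}: G = SU(1) x SU(1), H = SU(2), mu = 0   -> 2
print(abs_q(1, 1, Fraction(0), 2, Fraction(0)))
```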
We have not yet found the signs of the charges, which cannot be directly
determined from our relations. For singlets, the sign of the charge is
irrelevant, as the hypermultiplets contain fields in both the $\bm{1}_{q}$ and
$\bm{1}_{-q}$ representations. For the other representations, the relative
sign of the $\mathfrak{u}(1)$ charges must be consistent with the global
structure of the gauge group, as described in more detail in CveticLinU1 and
in Section 4. In particular, the $\mathfrak{u}(1)$ charges for matter in a
specific representation of $\mathfrak{su}(5)$ must be separated by integers,
even if the charges themselves are fractional. Therefore, if we take the
$\mathfrak{u}(1)$ charge at $\\{\sigma=\Delta^{\prime}=0\\}$ to be
$+\frac{1}{5}$, the $\mathfrak{u}(1)$ charges at $\\{\sigma=b=0\\}$ and
$\\{\sigma=c_{1,1}=0\\}$ should be $-\frac{9}{5}$ and $-\frac{4}{5}$,
respectively. By similar arguments, the charge at $\\{\sigma=c_{3,0}=0\\}$
should be $-\frac{3}{5}$. (A quick way of seeing this is to consider the decomposition $\bm{5}\otimes\bm{5}=\bm{15}\oplus\bm{10}$. If $\bm{5}_{\frac{1}{5}}$ and $\bm{5}_{-\frac{4}{5}}$ are valid representations, then $\bm{5}_{\frac{1}{5}}\otimes\bm{5}_{-\frac{4}{5}}$ would have $\mathfrak{u}(1)$ charge $-\frac{3}{5}$.)
This analysis leads to the representations listed in the last column of Table 3. If one calculates the multiplicities of the matter loci and includes the $\bm{24}_{0}$ adjoint matter, one can verify that this matter spectrum satisfies the 6D gauge and gauge–gravitational anomaly cancellation conditions ErlerAnomaly ; ParkTaylor ; ParkIntersection for the generating section height
$h=-2K_{B}+2[b]-\frac{4}{5}[\sigma]\,.$ (37)
Note that we have determined the spectrum without performing any resolutions
of the singular Weierstrass model, illustrating the power of this approach.
## 3 Review of F-theory
In this section, we will review some basic aspects of F-theory that will be
important for the following discussion. Further information on F-theory can be
found in DenefLesHouches ; TaylorTASI ; WeigandTASI ; ParkIntersection ;
CveticLinReview .
F-theory can be thought of most straightforwardly as a non-perturbative
version of Type IIB string theory that allows for consistent compactifications
in the presence of 7-branes. These 7-branes cause the axiodilaton field to
vary over the compactification space, leading the theory to be strongly coupled in some regions. We track the variation of the axiodilaton by encoding
it as the complex structure of an elliptic curve; this leads to the structure
of an elliptic fibration $X$ over the compactification space $B$. We assume
here that the base $B$ is smooth.
The elliptic fibration associated with an F-theory model can be described mathematically by a Weierstrass model,
$y^{2}=x^{3}+fxz^{4}+gz^{6}\,,$ (38)
where $[x:y:z]$ are the homogeneous coordinates of $\mathbb{P}^{2,3,1}$ and
$f,g$ are holomorphic sections of line bundles on the base $B$. We will always
consider elliptically fibered Calabi–Yau manifolds, in which case $f$ and $g$
are respectively sections of the line bundles $\mathcal{O}(-4K_{B})$ and
$\mathcal{O}(-6K_{B})$, with $K_{B}$ the canonical class of the base.
The elliptic fiber of this fibration may become singular over certain loci in
the base. Fiber singularities at codimension one in the base signal that
nonabelian gauge groups are supported at these loci (in the context of Type
IIB, this indicates the presence of 7-branes wrapping these loci), while fiber
singularities at codimension two in the base herald the presence of localized
charged matter. The possible codimension-one fiber singularities were
classified by Kodaira and Néron KodairaI ; KodairaII ; NeronClassification ,
and are listed in Table 4. Using the Kodaira classification, one can read off
the singularity type at a codimension-one locus by examining the orders of
vanishing of $f$, $g$, and the discriminant $\Delta=4f^{3}+27g^{2}$.
Singularity Type | $\operatorname*{ord}(f)$ | $\operatorname*{ord}(g)$ | $\operatorname*{ord}(\Delta)$ | Dynkin Diagram | Universal Covering Group
---|---|---|---|---|---
$\text{I}_{0}$ | $\geq 0$ | $\geq 0$ | $0$ | — | —
$\text{I}_{1}$ | $0$ | $0$ | $1$ | — | —
II | $\geq 1$ | $1$ | $2$ | — | —
III | $1$ | $\geq 2$ | $3$ | $A_{1}$ | $\operatorname{SU}(2)$
IV | $\geq 2$ | $2$ | $4$ | $A_{2}$ | $\operatorname{SU}(3)$
$\text{I}_{n}$ | $0$ | $0$ | $n$ | $A_{n-1}$ | $\operatorname{SU}(n)$
$\text{I}^{*}_{0}$ | $\geq 2$ | $\geq 3$ | $6$ | $D_{4}$ | $\operatorname{Spin}(8)$
$\text{I}^{*}_{n}$ | $2$ | $3$ | $n+6$ | $D_{n+4}$ | $\operatorname{Spin}(2n+8)$
$\text{IV}^{*}$ | $\geq 3$ | $4$ | $8$ | $E_{6}$ | $\operatorname{E}_{6}$
$\text{III}^{*}$ | $3$ | $\geq 5$ | $9$ | $E_{7}$ | $\operatorname{E}_{7}$
$\text{II}^{*}$ | $\geq 4$ | $5$ | $10$ | $E_{8}$ | $\operatorname{E}_{8}$
Non-minimal | $\geq 4$ | $\geq 6$ | $\geq 12$ | not valid in F-theory at codimension one |
Table 4: The Kodaira classification of codimension-one singular fibers. The
orders of vanishing of $f,g,\Delta$ at the singular locus in the base are
given. The Dynkin diagram entry gives the intersection pattern of the
exceptional $\mathbb{P}^{1}$s introduced by the resolution procedure; these
are potentially subject to Tate monodromy, though we will only consider ADE
cases here. The gauge algebra entries list the universal cover of the gauge
algebras associated with the given singularity type; which gauge algebra is
actually realized by a given singularity is determined via the Tate split
conditions mentioned in the text. Note that the final row lists non-minimal
singularities, which cannot be resolved crepantly such that the fibration
remains flat.
If one resolves the codimension-one fiber singularities via a sequence of
blowups, new $\mathbb{P}^{1}$ curves are introduced in the fiber over the
codimension-one locus. The $\mathbb{P}^{1}$ curves (including the
$\mathbb{P}^{1}$ hit by the zero section that exists prior to resolution)
intersect in the pattern of an affine ADE Dynkin diagram, with the
$\mathbb{P}^{1}$ hit by the zero section serving as the affine node. Thus,
each singularity type can be associated with an ADE Lie algebra. In
compactifications to six or fewer dimensions, the $\mathbb{P}^{1}$ curves can
undergo monodromy when traversing a closed path along the discriminant locus,
identifying them as being associated with the same exceptional divisor. This
corresponds to a folding of the Dynkin diagram and thus a reduction of the
associated Lie algebra to one of non-ADE type. Whether this occurs can be
determined by considering certain algebraic conditions on $f,g,\Delta$ known
as the Tate split conditions, which we will not elaborate on further here. In
this paper, we will always consider split fibers, in which case the associated
Lie algebra is always the ADE algebra corresponding to the given Kodaira
singularity. This Lie algebra is precisely the nonabelian gauge algebra
supported at the given codimension-one locus in the F-theory model. (It is worth noting that, in some situations, the true gauge algebra of the theory can differ from that indicated by the geometry due to T-branes CecottiCordovaHeckmanVafaTBranes ; AndersonHeckmanVafaTBranes . In this paper, we assume that T-brane effects can be ignored and that the geometry accurately reflects the gauge algebra and light charged matter of the model.) In the discussion that follows, it is important to consider the exceptional divisors
found by fibering the new $\mathbb{P}^{1}$ curves over the codimension-one
locus, which we denote by the symbol $\mathcal{T}$. The number of such
exceptional divisors associated with a codimension-one locus equals the rank
of the supported gauge algebra.
As mentioned above, matter is supported at codimension-two loci where the
singularity type enhances. In the Type IIB picture, this occurs at the
intersection of the 7-brane stacks associated with the codimension-one
singular loci. While the Kodaira classification provides a complete
classification of all possible codimension-one singularities, there is as yet
no complete classification of the singularity types at codimension two;
nonetheless, we often refer to codimension-two singularities using the Kodaira
singularity type associated to the orders of vanishing of $(f,g,\Delta)$ at
the given codimension-two locus.
In most situations, the matter representations supported at a given
codimension-two locus can be read off without performing a full resolution to
codimension two using what is known as the Katz–Vafa method KatzVafa .
Consider a given codimension-one singular locus $\\{\sigma=0\\}$ supporting a
gauge algebra $\mathfrak{g}$. At a codimension-two locus where the singularity
enhances further, we can naively associate a larger gauge algebra
$\mathfrak{g}^{\prime}\supset\mathfrak{g}$ with the enhanced singularity type
using the Kodaira classification (note, however, that there is not actually an
enhanced gauge algebra supported at the codimension-two locus). We can then
consider the branching rule for the adjoint representation of
$\mathfrak{g}^{\prime}$ to representations of $\mathfrak{g}$. This branching
rule will always include the adjoint representation of $\mathfrak{g}$ as well
as some number of singlets; the remaining representations, which always appear
in conjugate pairs, are the Katz–Vafa prediction for the charged light matter
supported at the codimension-two locus.
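For instance, at an $\text{I}_{6}$ enhancement of an $\text{I}_{5}^{s}$ locus supporting $\mathfrak{g}=\mathfrak{su}(5)$, the adjoint of $\mathfrak{g}^{\prime}=\mathfrak{su}(6)$ branches as $\bm{35}\to\bm{24}\oplus\bm{1}\oplus\bm{5}\oplus\bar{\bm{5}}$, so the Katz–Vafa method predicts fundamental ($\bm{5}$) matter at such a locus, in agreement with the $\text{I}_{6}$ loci of the example in Section 2.3.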
The Katz–Vafa method, while convenient, does not always produce the correct
result; in order to properly determine the matter representations supported at
a codimension-two locus, one must carry out a resolution of the singular
Calabi–Yau manifold to codimension two. The charged matter representations can
then be read off from the dual M-theory picture. In this context, nonabelian
charged matter arises from M2 branes wrapping particular combinations of the
$\mathbb{P}^{1}$ curves making up the resolved fiber over the codimension-two
locus; the group theoretic weight associated with a particular combination of
curves is determined by computing its intersection with the various
exceptional divisors. More concretely, suppose we have a model with a
semisimple gauge algebra $\mathfrak{g}$. We can expand $\mathfrak{g}$ as a
direct sum of $K$ simple Lie algebras labeled by the index $\kappa$:
$\mathfrak{g}=\bigoplus_{\kappa=1}^{K}\mathfrak{g}_{\kappa}\,.$ (39)
If a codimension-two locus supports matter in a representation $\bm{R}$ of
$\mathfrak{g}$, then the resolved fiber at that locus consists of a set of
irreducible curves. A (possibly reducible) curve $c$ that supports the weight
$\vec{w}$ for the representation $\bm{R}$ satisfies
$\mathcal{T}_{\kappa,I}\cdot c=w_{\kappa,I}\,.$ (40)
Here, $\mathcal{T}_{\kappa,I}$ is the exceptional divisor corresponding to the
$I$th simple root of $\mathfrak{g}_{\kappa}$, while $w_{\kappa,I}$ is the
element of $\vec{w}$ corresponding to the $I$th Cartan generator of
$\mathfrak{g}_{\kappa}$ in the Dynkin basis (the basis of fundamental
weights).
### 3.1 Abelian gauge algebras in F-theory and the Mordell–Weil group
Abelian $\mathfrak{u}(1)$ gauge factors arise in F-theory in a different
fashion, and are not associated with codimension-one fiber singularities.
Rather, abelian gauge factors are associated with additional rational sections
of the elliptic fibration MorrisonVafaII .
F-theory constructions described by a Weierstrass model, Eq. 38, will always have at
least one rational section, the zero section $\hat{o}$, given by
$[\hat{x}:\hat{y}:\hat{z}]=[1:1:0]$. However, an elliptic fibration may have
additional rational sections. These rational sections form a group, known as
the Mordell–Weil group. We discuss the addition operation of this group in the
following section. According to the Mordell–Weil theorem, this group is
finitely generated, taking the form LangNeron
$\mathbb{Z}^{r}\oplus\Gamma\,.$ (41)
The group $\Gamma$ is the torsion subgroup, which is associated in the
F-theory context with the global structure of the gauge group; we will not
discuss the torsional part of the Mordell–Weil group further here. The integer
$r$ is known as the Mordell–Weil rank, and the $r$ independent sections other
than the zero section are known as generating sections.
An F-theory model associated with an elliptic fibration that has Mordell–Weil
rank $r$ has an abelian sector including a $\mathfrak{u}(1)^{r}$ gauge
algebra. The motivation for this most naturally comes from the dual M-theory
picture. To each generating section $\hat{s}$ there is associated a divisor
class $\sigma(\hat{s})$ whose Poincaré dual acts as an additional zero mode
for the M-theory 3-form $C_{3}$, indicating the presence of an additional
$\operatorname{U}(1)$ gauge boson associated with $\hat{s}$. The divisor
$\sigma(\hat{s})$ is given by the Shioda map, Eq. 9, which takes the form
$\sigma(\hat{s})=\mathcal{S}-\mathcal{Z}-\sum_{\alpha}\mathopen{}\mathclose{{}\left((\mathcal{S}-\mathcal{Z})\cdot\mathcal{Z}\cdot
B^{\alpha}}\right)B_{\alpha}+\sum_{\kappa,I,J}\mathopen{}\mathclose{{}\left(\mathcal{S}\cdot\alpha_{\kappa,I}}\right)\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\kappa}}\right)_{IJ}\mathcal{T}_{\kappa,J}\,.$
(42)
This is a homomorphism from the Mordell–Weil group to the Néron–Severi group
with rational coefficients. As discussed above, weights of nonabelian matter
representations are associated with particular combinations of fibral curves
in the resolved threefold; the $\operatorname{U}(1)$ charges of the matter
associated with a fibral curve $c$ are given by
$q=\sigma(\hat{s}_{i})\cdot c\,,$ (43)
with $\hat{s}_{i}$ being the generating sections of the Mordell–Weil group.
One final important ingredient is the Néron–Tate height pairing between
rational sections $\hat{s}_{i},\hat{s}_{j}$ (with $i,j$ possibly equal), given
by
$\displaystyle h_{ij}$
$\displaystyle=-\pi_{*}(\sigma(\hat{s}_{i})\cdot\sigma(\hat{s}_{j}))$ (44)
$\displaystyle=-\pi_{*}(\mathcal{S}_{i}\cdot\mathcal{S}_{j})-K_{B}+n_{i}+n_{j}-\sum_{\kappa,I,J}\mathopen{}\mathclose{{}\left(\mathcal{S}_{i}\cdot\alpha_{\kappa,I}}\right)\mathopen{}\mathclose{{}\left(\mathcal{R}_{\kappa}^{-1}}\right)_{IJ}\mathopen{}\mathclose{{}\left(\mathcal{S}_{j}\cdot\alpha_{\kappa,J}}\right)b_{\kappa}\,,$
with $\pi_{*}$ the pushforward to the homology lattice of the base, $n_{i}=\pi_{*}(\mathcal{S}_{i}\cdot\mathcal{Z})$ the class of the locus along which the section $\hat{s}_{i}$ intersects the zero section, and $\mathcal{R}_{\kappa}$ the normalized root matrix for the $\kappa$th nonabelian gauge factor. The height matrix $h_{ij}$ plays the role of the 6D anomaly coefficients for the abelian gauge factors.
### 3.2 The elliptic curve group law
As we have seen, the rational sections of an elliptic fibration form a group,
called the Mordell–Weil group. The addition operation of rational sections is
a fiberwise extension of the group law for elliptic curves, which we will
summarize here. For more information, see Silverman:1338326 .
The addition operation of the Mordell–Weil group of rational points of an
elliptic curve is induced by the standard addition in $\mathbb{C}$ via the
definition of the elliptic curve as a complex torus $\mathbb{C}/\Lambda$.
Under the map from this description of the elliptic curve to that given by the
Weierstrass equation, the induced operation $[+]$ is a well-defined addition
on rational points of the elliptic curve with identity element $Z$, typically
chosen to be the point $[1:1:0]$ when the elliptic curve is written in global
Weierstrass form. This addition operation is defined by the property that if
$P,Q,R$ are the three points of intersection of the elliptic curve with a line
(counted with multiplicity), then $P[+]Q[+]R=Z$. Algorithmically, given two
points $P$ and $Q$, we can find their sum $P[+]Q$ by forming the line that
passes through both $P$ and $Q$ (if $P=Q$, then we instead use the tangent
line to the elliptic curve at $P$). This line intersects the elliptic curve in
a third point $R$; the line passing through $R$ and $Z$ again intersects the
elliptic curve in a third point, which is the result $P[+]Q$.
As we will make frequent use of elliptic curve addition in this paper, we
include here explicit expressions for the addition law when the elliptic curve
is written in global Weierstrass form, with identity $Z=[1:1:0]$. Given two
distinct points $P_{1}=[x_{1}:y_{1}:z_{1}]$ and $P_{2}=[x_{2}:y_{2}:z_{2}]$,
$P_{3}=P_{1}[+]P_{2}$ has coordinates
$\displaystyle x_{3}$
$\displaystyle=x_{1}z_{1}^{2}\mathopen{}\mathclose{{}\left(x_{2}^{2}+fz_{2}^{4}}\right)+x_{2}z_{2}^{2}\mathopen{}\mathclose{{}\left(x_{1}^{2}+fz_{1}^{4}}\right)-2z_{1}z_{2}(y_{1}y_{2}-gz_{1}^{3}z_{2}^{3})\,,$
(45) $\displaystyle y_{3}$
$\displaystyle=-y_{1}^{2}y_{2}z_{2}^{3}-3x_{2}x_{1}^{2}y_{2}z_{2}z_{1}^{2}+3x_{1}x_{2}^{2}y_{1}z_{1}z_{2}^{2}+y_{2}^{2}y_{1}z_{1}^{3}-3gz_{1}^{3}z_{2}^{3}\mathopen{}\mathclose{{}\left(y_{2}z_{1}^{3}-y_{1}z_{2}^{3}}\right)$
$\displaystyle\qquad-
fz_{1}z_{2}\mathopen{}\mathclose{{}\left(x_{2}y_{2}z_{1}^{5}+2x_{1}y_{2}z_{1}^{3}z_{2}^{2}-2x_{2}y_{1}z_{2}^{3}z_{1}^{2}-x_{1}y_{1}z_{2}^{5}}\right)\,,$
$\displaystyle z_{3}$ $\displaystyle=x_{2}z_{1}^{2}-x_{1}z_{2}^{2}\,.$
Given a point $P=[x:y:z]$, the point $2P=P[+]P$ has coordinates
$\displaystyle x_{2P}$
$\displaystyle=\mathopen{}\mathclose{{}\left(3x^{2}+fz^{4}}\right)^{2}-8xy^{2}\,,$
(46) $\displaystyle y_{2P}$
$\displaystyle=-\mathopen{}\mathclose{{}\left(3x^{2}+fz^{4}}\right)^{3}+12xy^{2}\mathopen{}\mathclose{{}\left(3x^{2}+fz^{4}}\right)-8y^{4}\,,$
$\displaystyle z_{2P}$ $\displaystyle=2yz\,.$
The inverse of a point $P=[x:y:z]$ is simply $-P=[x:-y:z]$.
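As a concrete sanity check of these expressions, the following Python code is a minimal sketch of our own (not part of the original references) that transcribes Eqs. 45 and 46 and verifies them on the curve $y^{2}=x^{3}+z^{6}$, i.e., $f=0$ and $g=1$, using the rational point $P=[2:3:1]$:

```python
# Group law of Eqs. (45)-(46) in weighted coordinates [x : y : z]
# on the global Weierstrass curve y^2 = x^3 + f x z^4 + g z^6.

def on_curve(P, f, g):
    x, y, z = P
    return y**2 == x**3 + f*x*z**4 + g*z**6

def add(P1, P2, f, g):
    """P1 [+] P2 for distinct points, transcribing Eq. (45)."""
    x1, y1, z1 = P1
    x2, y2, z2 = P2
    x3 = (x1*z1**2*(x2**2 + f*z2**4) + x2*z2**2*(x1**2 + f*z1**4)
          - 2*z1*z2*(y1*y2 - g*z1**3*z2**3))
    y3 = (-y1**2*y2*z2**3 - 3*x2*x1**2*y2*z2*z1**2 + 3*x1*x2**2*y1*z1*z2**2
          + y2**2*y1*z1**3 - 3*g*z1**3*z2**3*(y2*z1**3 - y1*z2**3)
          - f*z1*z2*(x2*y2*z1**5 + 2*x1*y2*z1**3*z2**2
                     - 2*x2*y1*z2**3*z1**2 - x1*y1*z2**5))
    z3 = x2*z1**2 - x1*z2**2
    return (x3, y3, z3)

def double(P, f, g):
    """2P = P [+] P, transcribing Eq. (46)."""
    x, y, z = P
    psi = 3*x**2 + f*z**4
    return (psi**2 - 8*x*y**2, -psi**3 + 12*x*y**2*psi - 8*y**4, 2*y*z)

f, g = 0, 1            # the curve y^2 = x^3 + z^6
P = (2, 3, 1)          # 3^2 = 2^3 + 1, so P lies on the curve
P2 = double(P, f, g)   # (0, 216, 6), i.e. (0, 1) in affine coordinates
P3 = add(P, P2, f, g)  # (-5184, 0, -72), i.e. (-1, 0), a point with y = 0
assert all(on_curve(Q, f, g) for Q in (P, P2, P3))
```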
## 4 Gauge group centers and allowed charges
As a first step towards relating $\mathfrak{u}(1)$ charges to the orders of
vanishing of section components, we should examine the types of
$\mathfrak{u}(1)$ charges that can occur in F-theory models. The allowed
$\mathfrak{u}(1)$ charges of singlet matter are quantized. In line with the
definition of the Shioda map in Eq. 9, it is natural to normalize the charges
such that the singlet charges are integral and that the lattice of singlet
charges has a spacing of 1 CveticLinU1 . However, matter that is also charged
under a nonabelian gauge factor may have fractional charges in this
normalization. The allowed fractional charges in an F-theory model depend on
the global structure of the gauge group, the description of which involves the
center of the gauge group’s universal cover.
Since it plays a key role in this paper, let us review the relation between
fractional charges and elements of the center first described in CveticLinU1 .
Suppose we have a model with a gauge algebra
$\mathfrak{g}\oplus\mathfrak{u}(1)$, where $\mathfrak{g}$ is a semisimple Lie
algebra given by
$\mathfrak{g}=\bigoplus_{\kappa=1}^{K}\mathfrak{g}_{\kappa}\,.$ (47)
Recall from Section 3 that matter in a representation $\bm{R}_{q}$ of
$\mathfrak{g}\oplus\mathfrak{u}(1)$ comes from M2 branes wrapping curves in
the resolved fiber of a codimension-two locus in the base. Specifically, a
weight $\vec{w}$ of the representation $\bm{R}_{q}$ comes from wrapping an M2
brane on a curve $c$ satisfying,
$\mathcal{T}_{\kappa,I}\cdot c=w_{\kappa,I}\,,$ (48)
where $\mathcal{T}_{\kappa,I}$ is the fibral divisor corresponding to the
$I$th simple root of $\mathfrak{g}_{\kappa}$ and $w_{\kappa,I}$ is the element
of $\vec{w}$ corresponding to the $I$th Cartan generator of
$\mathfrak{g}_{\kappa}$. The charge of this matter under the $\mathfrak{u}(1)$
algebra is
$q=\sigma(\hat{s})\cdot c\,.$ (49)
Equation 9 states that the Shioda map is given by
$\sigma(\hat{s})=\mathcal{S}-\mathcal{Z}-\pi^{*}\mathopen{}\mathclose{{}\left(D_{B}}\right)+\sum_{\kappa,I}l_{\kappa,I}\mathcal{T}_{\kappa,I}\,,$
(50)
where
$l_{\kappa,I}=\sum_{J}(\mathcal{S}\cdot\alpha_{\kappa,J})\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\kappa}}\right)_{IJ}\,.$
(51)
In rough terms, the numbers $l_{\kappa,I}$ describe which of the components in
the ADE fibers along codimension-one loci are hit by the section. They
therefore depend on the behavior of the section at codimension one. However,
these numbers are allowed to be fractional, as the inverse Cartan matrix
$\mathcal{C}^{-1}_{\kappa}$ can have fractional entries. In fact, the only
fractional contributions to the charge $q$ come from the $l_{\kappa,I}$ term
in the Shioda map, implying that
$q-\sum_{\kappa,I}l_{\kappa,I}\mathcal{T}_{\kappa,I}\cdot c\in\mathbb{Z}\,.$
(52)
This statement is true for any weight of any $\bm{R}_{q}$ representation
realized in the F-theory model. Note that
$\sum_{\kappa,I}l_{\kappa,I}\mathcal{T}_{\kappa,I}\cdot c$ may differ by
integers for different weights in an irreducible representation $\bm{R}_{q}$.
If $\mathcal{Q}$ is the generator of the $\mathfrak{u}(1)$ algebra and
$E_{\kappa,I}$ are the elements of the Cartan sub-algebra of
$\mathfrak{g}_{\kappa}$, then
$\Upsilon=\mathcal{Q}-\sum_{\kappa,I}l_{\kappa,I}E_{\kappa,I}$ is an element
of the Cartan subalgebra of $\mathfrak{g}\oplus\mathfrak{u}(1)$.
Exponentiating this gives us an element $C=\exp[2\pi i\Upsilon]$ of
$G\times\operatorname{U}(1)$, the universal covering group for the algebra
$\mathfrak{g}\oplus\mathfrak{u}(1)$. $G$ is the simply connected Lie group
associated with $\mathfrak{g}$. For any representation $\bm{R}_{q}$ of
$\mathfrak{g}\oplus\mathfrak{u}(1)$, $C$ acts on a weight $\vec{w}$ of the
representation as
$C\vec{w}=\exp\mathopen{}\mathclose{{}\left[2\pi
i\mathopen{}\mathclose{{}\left(q-\sum_{\kappa,I}l_{\kappa,I}w_{\kappa,I}}\right)}\right]\vec{w}\,.$
(53)
The group element $C$ essentially multiplies each weight by a complex number
of magnitude 1. This complex number is the same for all weights in an
irreducible representation $\bm{R}_{q}$, as
$\sum_{\kappa,I}l_{\kappa,I}w_{\kappa,I}$ differs only by integers for the
different weights of this irreducible representation. (One way of seeing this is to note that two different weights for the same irreducible representation differ by a root. The term $\sum_{\kappa,I}l_{\kappa,I}w_{\kappa,I}$ essentially takes the form $\sum_{\kappa,I,J}n_{J}\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\kappa}}\right)_{IJ}w_{\kappa,I}$, where the $n_{J}$ are integers. The inverse Cartan matrices convert weight and root vectors from the Dynkin basis to the basis of simple roots, and root vectors have integral elements in the latter basis. Therefore, the difference in $\sum_{\kappa,I,J}n_{J}\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\kappa}}\right)_{IJ}w_{\kappa,I}$ for two different weights, which differ by a root, should be integral.) Thus,
$C$ is proportional to the identity element and commutes with every element of
$G\times\operatorname{U}(1)$. In other words, $C$ is in the center of
$G\times\operatorname{U}(1)$. Moreover, Eqs. 48 and 52 imply that $C$ acts
trivially on any representation $\bm{R}_{q}$ realized in the F-theory model.
The gauge group of the F-theory model is therefore
$(G\times\operatorname{U}(1))/\Xi$, where $\Xi$ is a subgroup of the center of
$G\times\operatorname{U}(1)$ that includes $C$.
In summary, the $l_{i}$ coefficients in $\sigma(\hat{s})$, which control the
allowed fractional charges in the F-theory model and affect the global
structure of the gauge group, can be associated with an element of the gauge
group’s center. It is therefore worth describing the elements of the center in
more detail. If the gauge algebra is $\mathfrak{g}\oplus\mathfrak{u}(1)$ as
before, we are interested in the center $Z(G\times\operatorname{U}(1))$, where
$G$ is the simply connected Lie group associated with $\mathfrak{g}$. Since
$\mathfrak{g}$ is a sum of simple Lie algebras
$\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}\oplus\ldots\oplus\mathfrak{g}_{K}$,
the group $G$ is a product $G_{1}\times G_{2}\times\ldots\times G_{K}$ of the
universal covers of the $\mathfrak{g}_{\kappa}$. The center of
$G\times\operatorname{U}(1)$ is then
$Z\mathopen{}\mathclose{{}\left(G\times\operatorname{U}(1)}\right)=Z(G)\times
Z\mathopen{}\mathclose{{}\left(\operatorname{U}(1)}\right)=\mathopen{}\mathclose{{}\left(\prod_{\kappa}Z(G_{\kappa})}\right)\times\operatorname{U}(1)\,.$
(54)
An element of
$Z\mathopen{}\mathclose{{}\left(G\times\operatorname{U}(1)}\right)$ can be
written as $(\nu_{1},\nu_{2},\ldots,\nu_{K},\gamma)$, where $\nu_{\kappa}$ is
an element of $Z(G_{\kappa})$ and $\gamma$ is an element of
$\operatorname{U}(1)$. We want elements of
$Z\mathopen{}\mathclose{{}\left(G\times\operatorname{U}(1)}\right)$ analogous
to $C$ above, so we let $\gamma$ be $\exp[2\pi i\mathcal{Q}]$. The
$l_{\kappa,I}$ are then encoded in a choice of the $\nu_{\kappa}$. The centers
of some of the compact simple Lie groups are given in Table 5. Many of these
centers are $\mathbb{Z}_{n}$ groups, so it is convenient to let the
$\nu_{\kappa}$ be integers corresponding to elements of the $\mathbb{Z}_{n}$.
Even for the $\mathbb{Z}_{2}\times\mathbb{Z}_{2}$ center of
$\operatorname{Spin}(4k)$, we label the elements of the center with integers
ranging from 0 to 3. The notations for each group are discussed in more detail
in the following subsections.
Algebra | Group | Center
---|---|---
$\mathfrak{su}(n)$ | $\operatorname{SU}(n)$ | $\mathbb{Z}_{n}$
$\mathfrak{so}(2n+1)$ | $\operatorname{Spin}(2n+1)$ | $\mathbb{Z}_{2}$
$\mathfrak{sp}(n)$ | $\operatorname{Sp}(n)$ | $\mathbb{Z}_{2}$
$\mathfrak{so}(4k)$ | $\operatorname{Spin}(4k)$ | $\mathbb{Z}_{2}\times\mathbb{Z}_{2}$
$\mathfrak{so}(4k+2)$ | $\operatorname{Spin}(4k+2)$ | $\mathbb{Z}_{4}$
$\mathfrak{e}_{6}$ | $\operatorname{E}_{6}$ | $\mathbb{Z}_{3}$
$\mathfrak{e}_{7}$ | $\operatorname{E}_{7}$ | $\mathbb{Z}_{2}$
$\mathfrak{e}_{8}$ | $\operatorname{E}_{8}$ | —
$\mathfrak{f}_{4}$ | $\operatorname{F}_{4}$ | —
$\mathfrak{g}_{2}$ | $\operatorname{G}_{2}$ | —
Table 5: Centers of various groups. If a center is not specified, the center
of the group is trivial. $n$ and $k$ are integers.
We can describe the relation between the $\nu_{\kappa}$ and the $l_{\kappa,I}$
more explicitly: because
$l_{\kappa,I}=\sum_{J}\mathopen{}\mathclose{{}\left(\mathcal{S}\cdot\alpha_{J}}\right)\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\kappa}}\right)_{IJ}\,,$
the $l_{\kappa,I}$ are determined by the behavior of the section along the
codimension-one loci supporting the $\mathfrak{g}_{\kappa}$ gauge algebras.
Let us first focus on the situation where $K=1$, such that the gauge algebra
is $\mathfrak{g}_{1}\oplus\mathfrak{u}(1)$. We additionally assume that
$\mathfrak{g}_{1}$ is simply-laced, as we primarily consider only simply-laced
algebras in this paper. At the codimension-one locus supporting
$\mathfrak{g}_{1}$, the fiber consists of irreducible curves forming an affine
ADE diagram corresponding to $\mathfrak{g}_{1}$; again, the curve representing
the affine node is the one hit by the zero section. Fig. 1 illustrates this
fiber for an example where $\mathfrak{g}_{1}=\mathfrak{so}(10)$. The
generating section for the $\mathfrak{u}(1)$ should intersect one of these
curves transversally at generic points along the codimension-one
locus. (We should not expect the section to exhibit more extreme behaviors, such as wrapping a curve, at generic points along the codimension-one locus. Such behavior would require all the section components to vanish simultaneously, at least prior to resolution. The section components would then be proportional to appropriate powers of the codimension-one locus, and these factors can be scaled away to remove the wrapping behavior.) The numbers $l_{\kappa,I}$ essentially indicate which of these curves is intersected by the section. Each element of the center should in turn correspond to a curve in the codimension-one fiber that may be hit by the section, and vice versa. (Along these lines, note that the different values of $\nu$ distinguish between, for instance, the $\text{I}_{n}^{(01)}$, $\text{I}_{n}^{(0|1)}$, $\text{I}_{n}^{(0||1)}$, … models in the language of KuntzlerTateTrees ; LawrieEtAlRational .) This statement naively seems to be
problematic in the $\mathfrak{so}(10)$ example: while the codimension-one
fiber has six irreducible curves, there are only four elements of
$Z(\operatorname{Spin}(10))=\mathbb{Z}_{4}$. However, the irreducible curves
in the fiber may have multiplicities greater than one; these multiplicities
are given by the dual Kac labels (or comarks) of the highest root. Because a
section should intersect the fiber only once, it can only intersect one of the
irreducible curves with multiplicity $1$ KuntzlerTateTrees ;
LawrieEtAlRational . For the $\mathfrak{so}(10)\oplus\mathfrak{u}(1)$ example,
only the four curves at the ends of the fiber have multiplicity $1$. These
four curves exactly match the four elements of
$Z(\operatorname{Spin}(10))=\mathbb{Z}_{4}$, in line with the expectation that
the curve hit by the section at codimension one should specify an element of
the center. If the section hits the affine component, we say that $\nu=0$, the
identity element of the center. If the section hits the upper-left component
in Fig. 1, then we say that $\nu=2$, and if the section hits the upper-right
or lower-right curves, then we say that $\nu$ is $1$ or $3$, respectively.
Figure 1: Fiber at generic points along a codimension-one locus supporting an
$\mathfrak{so}(10)$ algebra. The shaded curve corresponds to the affine node
of the $\hat{D}_{5}$ Dynkin diagram. The numbers within each circle denote the
multiplicity of the curve.
The $l_{\kappa,I}$ numbers also provide a way to determine the allowed
fractional charges for an F-theory model. Again, suppose that the gauge
algebra is $\mathfrak{g}_{1}\oplus\mathfrak{u}(1)$. If the section hits the
affine node of the fiber at codimension one, the $l_{1,I}$ are $0$ and the
$\mathfrak{u}(1)$ charges for matter in any representation of
$\mathfrak{g}_{1}$ must be integral. If the section hits a curve corresponding
to the simple root $J$ of $\mathfrak{g}_{1}$, the $l_{1,I}$ are given by
$\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\kappa}}\right)_{IJ}$. To
find the allowed $\mathfrak{u}(1)$ charges for matter in an irreducible
representation $\bm{R}_{q}$ of $\mathfrak{g}_{1}\oplus\mathfrak{u}(1)$, we
consider any weight $\vec{w}$ of $\bm{R}_{q}$ in the Dynkin basis. The allowed
$\mathfrak{u}(1)$ charges are then given by
$\mathopen{}\mathclose{{}\left(\sum_{I}l_{1,I}w_{I}}\right)+j\,,\quad
j\in\mathbb{Z}\,.$ (55)
Of course, we can relate the curve hit by the section to an element of
$Z(G_{1})$, providing us with a way to determine the $\mathfrak{u}(1)$ charges
consistent with the global structure of the gauge group. Eq. 55 shows that the
allowed $\mathfrak{u}(1)$ charges for $\overline{\bm{R}}$ are the negative of
those for $\bm{R}$, since the weights of $\overline{\bm{R}}$ are the negative
of those of $\bm{R}$. This ensures that we can form $\bm{R}_{q}$
hypermultiplets by combining fields in the $\bm{R}_{q}$ and
$\overline{\bm{R}}_{-q}$ representations.
To illustrate these ideas, consider the
$\mathfrak{so}(10)\oplus\mathfrak{u}(1)$ example above and assume that, at
codimension one, the section hits the upper-right curve in Fig. 1
corresponding to $\nu=1$. The Cartan and inverse Cartan matrices for
$\mathfrak{so}(10)$ are
$\mathcal{C}_{\mathfrak{so}(10)}=\begin{pmatrix}2&-1&0&0&0\\\ -1&2&-1&0&0\\\
0&-1&2&-1&-1\\\ 0&0&-1&2&0\\\
0&0&-1&0&2\end{pmatrix}\,,\qquad\mathcal{C}^{-1}_{\mathfrak{so}(10)}=\begin{pmatrix}1&1&1&\frac{1}{2}&\frac{1}{2}\\\
1&2&2&1&1\\\ 1&2&3&\frac{3}{2}&\frac{3}{2}\\\
\frac{1}{2}&1&\frac{3}{2}&\frac{5}{4}&\frac{3}{4}\\\
\frac{1}{2}&1&\frac{3}{2}&\frac{3}{4}&\frac{5}{4}\end{pmatrix}\,,$ (56)
and the upper-right curve corresponds to $J=4$. Therefore,
$l_{1,I}=\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\mathfrak{so}(10)}}\right)_{I4}\,.$
(57)
For the $\bm{10}$ representation of $\mathfrak{so}(10)$, the highest weight is
$[1,0,0,0,0]$, and
$\sum_{I=1}^{5}\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\mathfrak{so}(10)}}\right)_{I4}w_{I}=\frac{1}{2}\,.$
(58)
The allowed $\bm{10}$ charges are therefore $j+\frac{1}{2}$ for integer $j$.
The $\bm{16}$ representation, meanwhile, has a highest weight of
$[0,0,0,0,1]$, and
$\sum_{I=1}^{5}\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\mathfrak{so}(10)}}\right)_{I4}w_{I}=\frac{3}{4}\,.$
(59)
The allowed $\bm{16}$ charges are therefore $j-\frac{1}{4}$ for integer $j$.
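These fractional parts can also be checked by computer. The following short Python sketch (our own illustration, assuming the sympy library is available) reconstructs $\mathcal{C}^{-1}_{\mathfrak{so}(10)}$ from the Cartan matrix in Eq. 56 and evaluates $\sum_{I}l_{1,I}w_{I}$ for the $\bm{10}$ and $\bm{16}$ highest weights:

```python
from sympy import Matrix

# Cartan matrix of so(10) in the node ordering of Eq. (56)
C = Matrix([
    [ 2, -1,  0,  0,  0],
    [-1,  2, -1,  0,  0],
    [ 0, -1,  2, -1, -1],
    [ 0,  0, -1,  2,  0],
    [ 0,  0, -1,  0,  2],
])
Cinv = C.inv()  # reproduces the inverse Cartan matrix of Eq. (56)

J = 4  # the section hits the curve for simple root J = 4 (nu = 1)
l = [Cinv[I, J - 1] for I in range(5)]  # l_{1,I} = (C^{-1})_{I4}

for name, w in (("10", [1, 0, 0, 0, 0]), ("16", [0, 0, 0, 0, 1])):
    offset = sum(lI * wI for lI, wI in zip(l, w))
    print(name, offset)  # prints 1/2 for the 10 and 3/4 for the 16
```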
It is useful to define a few functions of $\nu$. While the specific form of
these functions depends on the simple gauge algebra in question, as described
in detail below, we can state the general idea behind these functions without
reference to a particular gauge algebra. As mentioned above, each possible
value of $\nu$ is identified with a curve hit by the section at generic points
along the codimension-one locus supporting a simple gauge algebra
$\mathfrak{g}$. We define
$T_{G}(\nu)=\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\mathfrak{g}}}\right)_{II}\,,$
(60)
where $I$ denotes the curve identified with $\nu$. If $\nu$ is 0, we take
$T_{G}(\nu)$ to be 0 as well. In the $\mathfrak{so}(10)$ example, $\nu=1$
corresponds to the upper-right curve in Fig. 1, which is specified by $I=4$.
Since
$\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\mathfrak{so}(10)}}\right)_{44}=\frac{5}{4}\,,$
(61)
we have
$T_{\operatorname{Spin}(10)}(1)=\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\mathfrak{so}(10)}}\right)_{44}=\frac{5}{4}\,.$
(62)
Along similar lines,
$\displaystyle T_{\operatorname{Spin}(10)}(0)$ $\displaystyle=0\,,$ (63)
$\displaystyle T_{\operatorname{Spin}(10)}(2)$
$\displaystyle=\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\mathfrak{so}(10)}}\right)_{11}=\frac{1}{2}\,,$
$\displaystyle T_{\operatorname{Spin}(10)}(3)$
$\displaystyle=\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\mathfrak{so}(10)}}\right)_{55}=\frac{5}{4}\,.$
We also define a function $\vec{\tau}_{G}(\nu)$ whose output is a triplet of
integers. Roughly, it gives the values of
$\mathopen{}\mathclose{{}\left(\operatorname{ord}_{1}(\hat{x}),\operatorname{ord}_{1}(\hat{y}),\operatorname{ord}_{1}(\hat{w})}\right)$
such that the section hits the fibral curve associated with $\nu$, at least at
generic points along the codimension-one locus supporting
$\mathfrak{g}$. (Intriguingly, the values of $\vec{\tau}_{G}(\nu)$ for various $G$ and $\nu$ also appear in the Tate's algorithm table BershadskyEtAlSingularities as the orders of vanishing of $a_{2}+\frac{1}{4}a_{1}^{2}$, $a_{3}$, and $a_{4}$ for various singularity types. This observation may be related to the shifts in $x$ and $y$ performed when converting the elliptic fibration from Tate form to Weierstrass form.) We
when converting the elliptic fibration from Tate form to Weierstrass form. We
will describe $\vec{\tau}_{G}(\nu)$ in more detail when we individually
discuss the different gauge algebras. Even though we have developed these
ideas based on the codimension-one behavior of the sections, it will be useful
to use these same functions at codimension two as well. When these expressions
are used at codimension two, $\nu$ is replaced with $\mu$.
These ideas generalize naturally for models where $\mathfrak{g}$ consists of
more than one simple algebra. In particular, let $\mathfrak{g}$ be the direct
sum of simple algebras $\mathfrak{g}_{\kappa}$. Each $\mathfrak{g}_{\kappa}$
simple algebra is supported on a codimension-one locus in the base. The fiber
at this locus consists of a set of curves forming the affine Dynkin diagram of
$\mathfrak{g}_{\kappa}$. We have a set of numbers $l_{\kappa,I}$ for each
$\mathfrak{g}_{\kappa}$ that describe which of the fibral curves is hit by the
section, at least at generic points along the codimension-one locus for
$\mathfrak{g}_{\kappa}$. If the section hits the affine node, then
$l_{\kappa,I}=0$. If the section hits one of the other curves, then
$l_{\kappa,I}=\sum_{J}\mathopen{}\mathclose{{}\left(\mathcal{S}\cdot\alpha_{\kappa,J}}\right)\mathopen{}\mathclose{{}\left(\mathcal{C}^{-1}_{\kappa}}\right)_{IJ}\,,$
(64)
where $\mathcal{C}^{-1}_{\kappa}$ is the inverse Cartan matrix for
$\mathfrak{g}_{\kappa}$. Of course, the section can only hit those curves with
multiplicity 1 in the fiber. As before, the $l_{\kappa,I}$ can alternatively
be described using a specific element of $Z(G\times\operatorname{U}(1))$. For
each $\kappa$, we can associate $l_{\kappa,I}$ with an element $\nu_{\kappa}$
of the center of $G_{\kappa}$, the universal covering group for
$\mathfrak{g}_{\kappa}$. Then, the gauge group has a quotient by a group
containing the element $(\nu_{1},\ldots,\nu_{K},\gamma)$ of
$Z(G\times\operatorname{U}(1))$, where $K$ is the total number of simple,
nonabelian gauge algebras. We can then find $T_{G_{\kappa}}(\nu_{\kappa})$ and
$\vec{\tau}_{G_{\kappa}}(\nu_{\kappa})$ for each $\mathfrak{g}_{\kappa}$.
We now describe how the ideas above are applied for the types of gauge
algebras discussed here. We focus only on the simply-laced algebras. For each
gauge algebra, we describe how the possible values of $\nu$ match with curves
in the resolved fibers and give explicit expressions for $T_{G}(\nu)$ and
$\vec{\tau}_{G}(\nu)$. We also discuss the allowed $\mathfrak{u}(1)$ charges
for matter in representations of these gauge algebras.
### 4.1 $\mathfrak{su}(n)$
The affine Dynkin diagram for $\mathfrak{su}(n)$, shown in Fig. 2, has $n$
nodes. The curves are labeled by $I$, which runs from 0 to $n-1$ with the
$I=0$ curve serving as the affine node of the diagram. All of the curves in
the resolved $\mathfrak{su}(n)$ fiber have multiplicity $1$, so any of them
can be hit by a section. This agrees with the fact that the center of
$\operatorname{SU}(n)$ is $\mathbb{Z}_{n}$, and each curve should correspond
to an element of the center. We take $\nu$ to be $n-I$, where $I$ refers to
the curve hit by the section, or 0 if the section hits the $I=0$ affine node
curve. In the language of KuntzlerTateTrees ; LawrieEtAlRational , $\nu=0$
corresponds to the $\text{I}_{n}^{(01)}$ fiber, $\nu=1$ corresponds to the
$\text{I}_{n}^{(0|1)}$ fiber, and in general $\nu$ corresponds to an
$\text{I}_{n}^{(0|\ldots|1)}$ fiber with $u_{n}(\nu)$ bars between the 0 and
the 1.
Figure 2: Dynkin diagram for $\mathfrak{su}(n)$. The numbers give the value of $I$ corresponding to each node; the affine node has $I=0$.
The inverse Cartan matrix for $\operatorname{SU}(n)$ takes the form
YamatsuGroupTheory
$\left(\mathcal{C}^{-1}_{\mathfrak{su}(n)}\right)_{IJ}=\frac{1}{n}\min(I,J)\left(n-\max(I,J)\right)\,.$
(65)
Therefore, we define
$T_{\operatorname{SU}(n)}(\nu)=\left(\mathcal{C}^{-1}_{\mathfrak{su}(n)}\right)_{n-\nu,n-\nu}=\frac{\nu(n-\nu)}{n}\,.$
(66)
Since the highest weight of the fundamental representation is
$[1,0,\ldots,0]$, Eq. 55 implies that the allowed $\mathfrak{u}(1)$ charges
for the fundamental representation are
$\frac{\nu}{n}+j\text{ for }j\in\mathbb{Z}\,.$ (67)
The two-index antisymmetric representation has a highest weight of
$[0,1,0,\ldots,0]$, and its allowed $\mathfrak{u}(1)$ charges are
$\frac{2\nu}{n}+j\text{ for }j\in\mathbb{Z}\,.$ (68)
Note that the allowed charges for $\nu^{\prime}=n-\nu$ are the negative of
those for $\nu$. This fact agrees with the group structure of the center, as
the inverse of $\nu$ in $\mathbb{Z}_{n}$ is $n-\nu$.
For the antisymmetric representation of $\mathfrak{su}(n)$, the same charge
can occur for different values of $\nu$ when $n$ is even. When the gauge
algebra is $\mathfrak{su}(6)\oplus\mathfrak{u}(1)$, for instance,
$\bm{15}_{1}$ matter can occur when $\nu=0$ or when $\nu=3$. Because $\nu$
describes the global structure of the gauge group, a situation involving
$\bm{15}_{1}$ with $\nu=0$ is distinct from a situation involving
$\bm{15}_{1}$ with $\nu=3$. We should therefore expect the orders of
vanishing for $\bm{15}_{1}$ to differ between $\nu=0$ and $\nu=3$. This
does not pose any issue if one is reading off the charges from a model, as one
can determine $\nu$ from the codimension-one orders of vanishing. However, when
trying to predict the orders of vanishing for a certain charge in a spectrum,
one may also need to choose a value of $\nu$. The need for this extra
information is not a shortcoming of our approach; rather, it reflects the
unavoidable fact that models with different global gauge groups are distinct.
We also define
$\vec{\tau}_{\operatorname{SU}(n)}(\nu)=\left(0,1,1\right)\times u_{n}(\nu)$ (69)
for $n\geq 2$, where $u_{n}(\nu)$ is defined in Section 2.1. These numbers
describe the orders of vanishing for $\hat{x}$, $\hat{y}$, and $\hat{w}$ such
that the section hits the appropriate curve in the $\text{I}_{n}$ fiber.
However, $\operatorname{SU}(2)$ and $\operatorname{SU}(3)$ can also be
realized by type III and type IV fibers. To account for these special cases,
we also define
$\vec{\tau}_{\text{III}}(\nu)=\left(1,1,1\right)\times u_{2}(\nu)\,,\quad\vec{\tau}_{\text{IV}}(\nu)=\left(1,1,2\right)\times u_{3}(\nu)\,.$ (70)
### 4.2 $\mathfrak{so}(2n)$
While the affine Dynkin diagram for $\mathfrak{so}(2n)$, shown in Fig. 3, has
$n+1$ nodes, only four of them occur with multiplicity 1 in the resolved fiber.
These are the nodes at the ends of the diagram, namely the affine node and the
three nodes labeled by $I=1$, $n-1$, and $n$. The center
$Z(\operatorname{Spin}(2n))$ is $\mathbb{Z}_{4}$ for odd $n$ and
$\mathbb{Z}_{2}\times\mathbb{Z}_{2}$ for even $n$. In either case,
$Z(\operatorname{Spin}(2n))$ has four elements, in agreement with the number
of multiplicity-one curves in the resolved fiber. For odd $n$, we label the
elements of $\mathbb{Z}_{4}$ by $\nu=0,1,2,3$. We then identify $\nu=0$ with
the affine node, $\nu=1$ with the $I=n-1$ node, $\nu=2$ with the $I=1$ node,
and $\nu=3$ with the $I=n$ node. For even $n$, we still label the elements of
$\mathbb{Z}_{2}\times\mathbb{Z}_{2}$ by $\nu=0,1,2,3$. We also identify
$\nu=0$ with the affine node, $\nu=1$ with the $I=n-1$ node, $\nu=2$ with the
$I=1$ node, and $\nu=3$ with the $I=n$ node. In the language of
KuntzlerTateTrees ; LawrieEtAlRational , $\nu=0$ corresponds to the
${\text{I}^{*}_{n-4}}^{(01)}$ fiber, $\nu=2$ corresponds to the
${\text{I}^{*}_{n-4}}^{(0|1)}$ fiber, and $\nu=1,3$ correspond to the
${\text{I}^{*}_{n-4}}^{(0||1)}$ fiber.
The inverse Cartan matrix for $\mathfrak{so}(2n)$ takes the form
YamatsuGroupTheory
$\mathcal{C}^{-1}_{\mathfrak{so}(2n)}=\frac{1}{2}\begin{pmatrix}2&2&2&\cdots&2&1&1\\ 2&4&4&\cdots&4&2&2\\ 2&4&6&\cdots&6&3&3\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 2&4&6&\cdots&2(n-2)&(n-2)&(n-2)\\ 1&2&3&\cdots&(n-2)&n/2&(n-2)/2\\ 1&2&3&\cdots&(n-2)&(n-2)/2&n/2\end{pmatrix}\,.$ (71)
Therefore,
$T_{\operatorname{Spin}(2n)}(\nu)=\begin{cases}0&\nu=0\\ 1&\nu=2\\ \frac{n}{4}&\nu=1,3\end{cases}\,.$ (72)
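As a sanity check (ours), the sketch below rebuilds the inverse Cartan matrix of Eq. 71 entrywise over exact rationals and confirms the diagonal entries behind Eq. 72: the entry $1$ at $I=1$ (i.e. $\nu=2$) and $n/4$ at $I=n-1,n$ (i.e. $\nu=1,3$):

```python
from fractions import Fraction

def inv_cartan_so2n(n):
    # Inverse Cartan matrix of so(2n) as in Eq. (71), indices I, J = 1..n.
    C = [[Fraction(0)] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i <= n - 2 and j <= n - 2:
                C[i - 1][j - 1] = Fraction(min(i, j))
            elif i <= n - 2:                      # j is a spinor node (n-1 or n)
                C[i - 1][j - 1] = Fraction(i, 2)
            elif j <= n - 2:                      # i is a spinor node
                C[i - 1][j - 1] = Fraction(j, 2)
            elif i == j:                          # spinor-node diagonal
                C[i - 1][j - 1] = Fraction(n, 4)
            else:                                 # the (n-1, n) off-diagonal entry
                C[i - 1][j - 1] = Fraction(n - 2, 4)
    return C

for n in range(4, 12):
    C = inv_cartan_so2n(n)
    assert C[0][0] == 1                                          # T(nu = 2)
    assert C[n - 2][n - 2] == C[n - 1][n - 1] == Fraction(n, 4)  # T(nu = 1, 3)
```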
The allowed charges for various representations depend on the value of $n$
modulo 4. The vector representation of $\mathfrak{so}(2n)$ has a highest
weight of $[1,0,\ldots,0]$. There are also two spinor representations for all
$n$, which have highest weights $[0,0,\ldots,0,1]$ and $[0,0,\ldots,1,0]$. For
odd $n$, these representations are conjugates of each other, and spinor
hypermultiplets contain fields in both of these representations. However, for
even $n$, these two spinors are self-conjugate and can therefore be supported
at distinct loci (and can belong to distinct hypermultiplets in 6D contexts).
The allowed $\mathfrak{u}(1)$ charges for these representations are summarized
in Table 6. As with the antisymmetric representation of $\mathfrak{su}(n)$,
matter in particular representations can occur with the same charge for
different values of $\nu$. Note that, for odd $n$, the allowed charges for
$\nu=1$ and $\nu=3$ are negatives of each other, in line with the fact that
$\nu=1$ and $\nu=3$ are each other’s inverses in $\mathbb{Z}_{4}$. Finally, we
define
$\vec{\tau}_{\operatorname{Spin}(2n)}(\nu)=\begin{cases}(0,0,0)&\nu=0\\ (1,2,2)&\nu=2\\ \left(1,\left\lfloor\frac{n}{2}\right\rfloor,\left\lceil\frac{n}{2}\right\rceil\right)&\nu=1,3\end{cases}$ (73)
Figure 3: Dynkin diagram for $\mathfrak{so}(2n)$. The numbers give the value of $I$ corresponding to each node, with the affine node having $I=0$. Nodes marked with two concentric circles occur with multiplicity 2 in the fiber.
| Group | Irrep | $\nu=0$ | $\nu=1$ | $\nu=2$ | $\nu=3$ |
|---|---|---|---|---|---|
| $\mathfrak{so}(8\ell)$ | V | 0 | $\frac{1}{2}$ | 0 | $\frac{1}{2}$ |
| | S | 0 | $\frac{1}{2}$ | $\frac{1}{2}$ | 0 |
| | C | 0 | 0 | $\frac{1}{2}$ | $\frac{1}{2}$ |
| $\mathfrak{so}(8\ell+2)$ | V | 0 | $\frac{1}{2}$ | 0 | $\frac{1}{2}$ |
| | S | 0 | $\frac{3}{4}$ | $\frac{1}{2}$ | $\frac{1}{4}$ |
| | C | 0 | $\frac{1}{4}$ | $\frac{1}{2}$ | $\frac{3}{4}$ |
| $\mathfrak{so}(8\ell+4)$ | V | 0 | $\frac{1}{2}$ | 0 | $\frac{1}{2}$ |
| | S | 0 | 0 | $\frac{1}{2}$ | $\frac{1}{2}$ |
| | C | 0 | $\frac{1}{2}$ | $\frac{1}{2}$ | 0 |
| $\mathfrak{so}(8\ell+6)$ | V | 0 | $\frac{1}{2}$ | 0 | $\frac{1}{2}$ |
| | S | 0 | $\frac{1}{4}$ | $\frac{1}{2}$ | $\frac{3}{4}$ |
| | C | 0 | $\frac{3}{4}$ | $\frac{1}{2}$ | $\frac{1}{4}$ |
Table 6: Fractional parts of the allowed $\mathfrak{u}(1)$ charges for various
representations of $\mathfrak{so}(2n)$. V refers to the vector representation
with highest weight $[1,0,\ldots,0]$. S refers to the spinor representation
with highest weight $[0,\ldots,0,1]$. C refers to the spinor representation
with highest weight $[0,\ldots,1,0]$.
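Table 6 can be regenerated mechanically. Assuming (in line with the $\mathfrak{su}(n)$ pattern implied by Eq. 55) that the fractional part of the allowed charge for a representation whose highest weight sits on node $I_{R}$ is the inverse-Cartan entry $(\mathcal{C}^{-1})_{I_{R},I(\nu)}$ modulo 1, where $I(\nu)$ is the node associated with $\nu$, the following sketch (helper names ours) reproduces the four rows of the table:

```python
from fractions import Fraction

def frac(x):
    # Fractional part of a nonnegative rational.
    return x - x.numerator // x.denominator

def charge_fracs(n):
    # Relevant entries of Eq. (71): V sits on node 1, S on node n, C on node n-1;
    # nu = 1, 2, 3 correspond to the nodes I = n-1, 1, n respectively.
    Cinv = {
        ('V', 1): Fraction(1, 2),     ('V', 2): Fraction(1),    ('V', 3): Fraction(1, 2),
        ('S', 1): Fraction(n - 2, 4), ('S', 2): Fraction(1, 2), ('S', 3): Fraction(n, 4),
        ('C', 1): Fraction(n, 4),     ('C', 2): Fraction(1, 2), ('C', 3): Fraction(n - 2, 4),
    }
    return {R: [Fraction(0)] + [frac(Cinv[R, nu]) for nu in (1, 2, 3)]
            for R in ('V', 'S', 'C')}

# so(16), so(18), so(20), so(22): one representative of each row of Table 6.
for n in (8, 9, 10, 11):
    print(f"so({2 * n}):", charge_fracs(n))
```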
### 4.3 $\mathfrak{e}_{6}$, $\mathfrak{e}_{7}$, and $\mathfrak{e}_{8}$
Figure 4: Dynkin diagrams for (a) $\mathfrak{e}_{6}$, (b) $\mathfrak{e}_{7}$ and (c) $\mathfrak{e}_{8}$. The numbers within each circle give the value of $I$ corresponding to each node, with the affine node having $I=0$. Nodes marked with two concentric circles occur with multiplicity greater than 1 in the fiber. If the multiplicity is greater than 2, the multiplicity is indicated just outside the node. Nodes with concentric circles without a given multiplicity have multiplicity 2.
The affine Dynkin diagram for $\mathfrak{e}_{6}$, shown in Fig. 4(a), has
seven nodes. Three of these nodes—the $I=0$, $1$, and $5$ nodes—occur with
multiplicity 1 in the resolved fiber, in agreement with the three elements of
$Z(\operatorname{E}_{6})=\mathbb{Z}_{3}$. Labeling the elements of
$\mathbb{Z}_{3}$ with $\nu=0,1,2$, we associate $\nu=0$ with the $I=0$ affine
node, $\nu=1$ with the $I=1$ curve, and $\nu=2$ with the $I=5$ curve. In the
language of KuntzlerTateTrees ; LawrieEtAlRational , $\nu=0$ corresponds to
the ${\text{IV}^{*}}^{(01)}$ fiber, and $\nu=1,2$ correspond to the
${\text{IV}^{*}}^{(0|1)}$ fiber. The inverse Cartan matrix of
$\mathfrak{e}_{6}$ is
$\mathcal{C}^{-1}_{\mathfrak{e}_{6}}=\frac{1}{3}\begin{pmatrix}4&5&6&4&2&3\\ 5&10&12&8&4&6\\ 6&12&18&12&6&9\\ 4&8&12&10&5&6\\ 2&4&6&5&4&3\\ 3&6&9&6&3&6\end{pmatrix}\,.$ (74)
Based on its diagonal entries for $I=1$ and $I=5$, we define
$T_{\operatorname{E}_{6}}(\nu)=\frac{2}{3}\nu(3-\nu)\,.$ (75)
We also define
$\vec{\tau}_{\operatorname{E}_{6}}(\nu)=\left(2,2,3\right)\times u_{3}(\nu)\,.$ (76)
In this paper, we are primarily interested in matter in the $\bm{27}$
representation, with highest weight $[1,0,0,0,0,0]$, and its conjugate. From
the form of the inverse Cartan matrix, the allowed $\mathfrak{u}(1)$ charges
for the $\bm{27}$ representation are
$\frac{\nu}{3}+j\text{ for }j\in\mathbb{Z}\,.$ (77)
The allowed charges for the $\overline{\bm{27}}$ representation are the
negatives of those for the $\bm{27}$ representation.
The affine Dynkin diagram for $\mathfrak{e}_{7}$, shown in Fig. 4(b), has two
nodes that occur with multiplicity $1$: the $I=0$ affine node and the $I=6$
node. The center of $\operatorname{E}_{7}$ is $\mathbb{Z}_{2}$, which has two
elements labeled by $\nu=0$ and $\nu=1$. We associate $\nu=0$ with the affine
$I=0$ node and $\nu=1$ with the $I=6$ node. In the language of
KuntzlerTateTrees ; LawrieEtAlRational , $\nu=0$ corresponds to the
${\text{III}^{*}}^{(01)}$ fiber, and $\nu=1$ corresponds to the
${\text{III}^{*}}^{(0|1)}$ fiber. Since the inverse Cartan matrix of
$\mathfrak{e}_{7}$ has
$\left(\mathcal{C}^{-1}_{\mathfrak{e}_{7}}\right)_{I6}=\frac{1}{2}\left(2,4,6,5,4,3,3\right)\,,$ (78)
we define
$T_{\operatorname{E}_{7}}(\nu)=\frac{3}{2}\nu\,.$ (79)
We also define
$\vec{\tau}_{\operatorname{E}_{7}}(\nu)=\left(2,3,3\right)\times u_{2}(\nu)=\left(2,3,3\right)\times\nu\,.$ (80)
We are primarily interested in matter in the pseudoreal $\bm{56}$
representation of $\operatorname{E}_{7}$, which has highest weight
$[0,0,0,0,0,1,0]$. The allowed $\mathfrak{u}(1)$ charges for $\bm{56}$ matter
are therefore
$\frac{\nu}{2}+j\text{ for }j\in\mathbb{Z}\,.$ (81)
Finally, the affine Dynkin diagram for $\mathfrak{e}_{8}$, shown in Fig. 4(c),
has only one node that occurs with multiplicity 1: the affine node. The center
of $\operatorname{E}_{8}$ is trivial, so we identify the affine node with
$\nu=0$. We will not consider matter charged under an $\mathfrak{e}_{8}$ gauge
algebra, as it is difficult if not impossible to find localized
$\mathfrak{e}_{8}$ matter without $f$ and $g$ vanishing to orders $(4,6)$ at
codimension two (such loci do not admit an easy interpretation in terms of
charged matter, and they are in fact associated with tensionless non-critical
strings and superconformal field theories; see the review article
HeckmanRudeliusSCFT and references contained therein for more information).
However, we will encounter situations where the singularity type enhances to
$\operatorname{E}_{8}$ at codimension two. As evidenced by Eqs. 18 and 19, the
formulas we develop here use $T(\nu)$ and $\vec{\tau}(\nu)$ for the group
corresponding to the codimension-two singularity type. Therefore, we define
$T_{\operatorname{E}_{8}}(\nu)=0\,,\quad\vec{\tau}_{\operatorname{E}_{8}}(\nu)=\left(0,0,0\right)\,.$ (82)
## 5 General strategy
The ultimate goal of this work is to argue that information about the
$\mathfrak{u}(1)$ charge spectrum of an F-theory model is encoded in the
orders of vanishing of the section components. In particular, we would like to
establish explicit relations between the orders of vanishing at a codimension-
two locus and the charge supported at that locus. Ideally, one would probe
this problem by examining the generating sections in F-theory models that
realize various types of charges. However, the currently known F-theory
constructions realize only a small subset of the possible charges. As an
example, consider F-theory models with just a $\operatorname{U}(1)$ gauge
group. Explicit constructions of this type have realized charges up to $q=\pm
6$ MorrisonParkU1 ; KleversEtAlToric ; Raghuram34 ; CollinucciEtAlHighCharge ;
OehlmannSchimannek ; KnappScheideggerSchimannek , even though indirect
arguments suggest F-theory $\operatorname{U}(1)$ models should be able to
admit charges at least as large as $q=\pm 21$ RaghuramTaylorLargeCharge .
Since only a limited number of charges have actually been observed in F-theory
models, one might imagine it would be difficult to make statements about
arbitrary charges.
Fortunately, as first pointed out in MorrisonParkU1 , there is a way to at
least conjecture about charges not seen in the currently known F-theory
constructions. To simplify the discussion, we assume that we are working with
a 6D F-theory model, although the arguments should carry over to lower-
dimensional models as well. Suppose our F-theory model has a $\mathfrak{u}(1)$
gauge algebra and that there is a codimension-two locus in the base supporting
matter with $\mathfrak{u}(1)$ charge $q$. At the geometric level, the
singularity type of the elliptic fiber enhances at this locus. After
resolution, there is at least one exceptional curve at this locus supporting
charged matter, which we refer to as $c$. Since the locus supports matter with
charge $q$, the generating section $\hat{s}$ behaves at this locus in a way
such that
$\sigma(\hat{s})\cdot c=q\,.$ (83)
The elliptic fibration also admits sections that are multiples of the
generating section. Since the Shioda map is a homomorphism, a section
$m\hat{s}$ satisfies
$\sigma(m\hat{s})\cdot c=m\sigma(\hat{s})\cdot c=mq$ (84)
at this same codimension-two locus in the base. It naively appears as if the
matter has “charge” $mq$ according to the section $m\hat{s}$. In the rest of
this paper, we will refer to the quantity $\sigma(\hat{s}^{\prime})\cdot c$ as
the _pseudo-charge_ when $\hat{s}^{\prime}$ is not a generating section. More
specifically, we will say that a section $\hat{s}^{\prime}$, which may not
necessarily be a generating section, “realizes a pseudo-charge $q^{\prime}$”
at a particular locus when $\sigma(\hat{s}^{\prime})\cdot c=q^{\prime}$. Of
course, since $m\hat{s}$ is not a generating section, it does not truly
correspond to a $\mathfrak{u}(1)$ gauge symmetry, and the model does not
genuinely have charge $mq$ matter supported at this locus. Nevertheless, we
expect that the section $m\hat{s}$ behaves as a generating section would in a
model that genuinely supported charge $mq$ matter, at least near the
codimension-two locus.
Thus, our strategy is to use non-generating sections realizing pseudo-charge
$q$ to glean information about the generating sections in models genuinely
admitting charge $q$ matter. Specifically, we start with a model admitting
some set of relatively small charges, which we refer to as the “seed” model.
Then, we consider multiples of the generating section, allowing us to find the
orders of vanishing for the section components for non-generating sections
admitting higher pseudo-charges. These data can then be used to establish and
test the expressions in Section 2.2 relating $\mathfrak{u}(1)$ charges to the
orders of vanishing. While we described this strategy in the context of a
model with just a $\mathfrak{u}(1)$ gauge algebra, it is equally applicable
whenever the gauge algebra includes a $\mathfrak{u}(1)$ factor, even if there
are other nonabelian or abelian gauge factors. In addition to admitting larger
pseudo-charges than the generating section, a non-generating section can be
associated with a different element $\nu$ of the center. If a generating
section $\hat{s}$ is associated with a center element $\nu$, then
$\hat{s}^{\prime}=m\hat{s}$ is associated with the center element
$\nu^{\prime}=m\nu$; in other words, the center element for $m\hat{s}$ is
found by adding together $m$ copies of $\nu$ according to the group law of the
center. This ensures that the pseudo-charges realized by $m\hat{s}$ are
allowed for $\nu^{\prime}=m\nu$ according to the rules in Section 4.
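For instance (an illustrative computation of ours): with an $\mathfrak{su}(6)\oplus\mathfrak{u}(1)$ gauge algebra and a generating section $\hat{s}$ associated with $\nu=1$, the multiple $m\hat{s}$ carries
$\nu^{\prime}=\underbrace{1+1+\cdots+1}_{m{\text{ times}}}=m\bmod 6\,,$
and, by Eq. 67, the fundamental pseudo-charges realized by $m\hat{s}$ then lie in $\frac{m}{6}+\mathbb{Z}$, consistently with the pseudo-charges $mq$ obtained by multiplying the allowed charges $q\in\frac{1}{6}+\mathbb{Z}$ of the generating section.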
Key to this strategy is the assertion that a non-generating section realizing
pseudo-charge $q$ and a generating section admitting charge $q$ behave the
same way at the loci in question. This is a reasonable claim, since the
behavior near a codimension-two locus is a local property of the section. The
question of whether a section is a generating section, on the other hand,
involves global properties of the model; the local behavior of the section
near a particular locus would presumably not depend on such global properties.
We will also see more concrete, albeit circumstantial, evidence in favor of
this assertion. First, we can verify this claim in cases where we have an
explicit F-theory model genuinely admitting charge $q$. Additionally, there
are often multiple ways of obtaining a pseudo-charge $q$. For instance, a
pseudo-charge $q=4$ can be obtained from the section $4\hat{s}$ in a model
admitting $q=1$ matter or from the section $2\hat{s}$ in a model admitting
$q=2$ matter. If our assertion is true, then we should see the same local
behavior for a particular pseudo-charge regardless of how it is obtained. In
fact, our analysis gives the same orders of vanishing for each given
pseudo-charge even when that pseudo-charge is produced in independent ways
(more precisely, we obtain the same orders of vanishing whenever we produce a
pseudo-charge with the same value of $\nu$; see Section 4, particularly
Section 4.1, for more details). This suggests that
our expressions relating the charge to the orders of vanishing hold more
broadly.
### 5.1 Signs
At this point, we should discuss how the sign of the charge fits into our
proposal. Using only the orders of vanishing of the $\hat{x}$, $\hat{y}$,
$\hat{z}$, and $\hat{w}$ section components, one should not be able to
determine the sign of the charge. When an elliptic fibration has a generating
section $\hat{s}$, we are equally free to choose the inverse section
$-\hat{s}$ as the generating section. If we take the generating section to be
$-\hat{s}$, the $\mathfrak{u}(1)$ charges should be the negative of those when
$\hat{s}$ is the generating section. However, for an elliptic fibration in
Weierstrass form, one can find the inverse of a section by flipping the sign
of the $\hat{y}$ component while leaving the other section components
unchanged. As a result, the orders of vanishing of the section components at a
particular locus should be the same for $\hat{s}$ and $-\hat{s}$. A formula
relating the orders of vanishing to the charge should therefore be insensitive
to the sign of the charge. Additionally, the question of whether to use
$-\hat{s}$ instead of $\hat{s}$ as the generating section is equivalent to the
question of whether to use $\nu$ or its inverse in the center. Therefore,
formulas for the orders of vanishing should not distinguish between $\nu$ (or
$\mu$) and its inverse. These ideas are demonstrated by the proposed formulas
in Section 2.2: they only depend on the square of the charge $q$, and the
$T_{G}(\nu)$ and $\vec{\tau}_{G}(\nu)$ are the same for $\nu$ and its inverse
element.
This ambiguity between $\hat{s}$ and $-\hat{s}$ reflects the idea that the
overall sign of the charges is not important physically. We should be able to
flip the sign of all the $\mathfrak{u}(1)$ charges without changing the
physics. The more relevant property is the sign of a charge relative to other
charges in the theory. There are a few different types of relative signs we
should consider. First, if the gauge algebra has multiple $\mathfrak{u}(1)$
factors, matter in a representation of the gauge algebra has a (possibly zero)
charge for each $\mathfrak{u}(1)$ factor. The relative sign of the charges in
this representation can be important. In a 6D model with a
$\mathfrak{u}(1)^{2}$ gauge algebra, for example, one might wish to determine
if the representation of some hypermultiplet is $\bm{1}_{1,1}$ or
$\bm{1}_{1,-1}$. These representations are not conjugates of each other, and
$\bm{1}_{1,1}$ hypermultiplets should be distinguished from $\bm{1}_{1,-1}$
hypermultiplets. One can determine these sorts of relative signs by applying the
proposed rules for the orders of vanishing to linear combinations of the
generating sections. For the $\mathfrak{u}(1)^{2}$ example, suppose the
generating sections are $\hat{s}_{1}$ and $\hat{s}_{2}$. Then, $\bm{1}_{1,-1}$
hypermultiplets would appear to be uncharged under the section
$\hat{s}_{1}+\hat{s}_{2}$, whereas $\bm{1}_{1,1}$ hypermultiplets would have
pseudo-charge $2$ according to $\hat{s}_{1}+\hat{s}_{2}$. One could therefore
distinguish between $\bm{1}_{1,1}$ and $\bm{1}_{1,-1}$ hypermultiplets by
applying the formulas in Section 2.2 to the section components of
$\hat{s}_{1}+\hat{s}_{2}$.
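Spelled out (our arithmetic), since the Shioda map is a homomorphism,
$\sigma(\hat{s}_{1}+\hat{s}_{2})\cdot c=\sigma(\hat{s}_{1})\cdot c+\sigma(\hat{s}_{2})\cdot c=q_{1}+q_{2}\,,$
which evaluates to $1+(-1)=0$ for $\bm{1}_{1,-1}$ matter and to $1+1=2$ for $\bm{1}_{1,1}$ matter, so the two cases lead to different orders of vanishing for the components of $\hat{s}_{1}+\hat{s}_{2}$.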
Second, one may want to determine the sign of one matter multiplet’s charge
relative to that of another matter multiplet. For example, let us consider a
model with a $\mathfrak{g}\oplus\mathfrak{u}(1)$ algebra, where $\mathfrak{g}$
is a nonabelian Lie algebra. Suppose that the model has two matter loci
supporting matter in the representations $\bm{R}_{q_{1}}$ and $\bm{R}_{q_{2}}$
for some irreducible representation $\bm{R}$ of $\mathfrak{g}$. The overall
sign of $q_{1}$ and $q_{2}$ is unimportant: we are free to flip the overall
sign of the $\mathfrak{u}(1)$ charges, after which the matter representations
would be $\bm{R}_{-q_{1}}$ and $\bm{R}_{-q_{2}}$. However, the relative sign
between $q_{1}$ and $q_{2}$ is unchanged by this flip and may therefore be a
meaningful property of the model. The equations in Section 2.2, particularly
Eq. 18, are insensitive to this relative sign.
While the inability to distinguish this relative sign is admittedly a
shortcoming of the proposals, this information can, in many cases, be obtained
relatively easily by alternative means. It is first important to recall that
matter fields in a representation $\bm{R}_{q}$ are typically accompanied by
matter fields in the representation $\overline{\bm{R}}_{-q}$. In 6D, for
example, full hypermultiplets in a representation $\bm{R}_{q}$ have fields
transforming in both representations $\bm{R}_{q}$ and
$\overline{\bm{R}}_{-q}$. If $\bm{R}$ is either a real or pseudoreal
representation of $\mathfrak{g}$, the $\bm{R}$ representation is isomorphic to
the $\overline{\bm{R}}$ representation, and a hypermultiplet of $\bm{R}_{q}$
matter can equivalently be viewed as a hypermultiplet of $\bm{R}_{-q}$ matter.
Even the relative signs of the charges are therefore unimportant when $\bm{R}$
is either real or pseudoreal. An important example is singlet matter in the
$\bm{1}_{q}$ representation, which is uncharged under the nonabelian gauge
algebra. The singlet representation is real, and thus even the relative sign
of the $\mathfrak{u}(1)$ charge is unimportant.
This leaves us with the situations where $\bm{R}$ is not self-conjugate. For
the charge normalization conventions used here, the allowed $\mathfrak{u}(1)$
charges for matter in the $\bm{R}_{q}$ representation, which may be
fractional, are separated by integers. Therefore, if one knows $\lvert
q_{1}\rvert$ and $\lvert q_{2}\rvert$, one can often use this condition to
find the relative sign difference between $q_{1}$ and $q_{2}$. As an example,
suppose that $\mathfrak{g}$ is $\mathfrak{su}(5)$ and $\bm{R}$ is the
fundamental ($\bm{5}$) representation. If $\lvert q_{1}\rvert$ is
$\frac{1}{5}$ and $\lvert q_{2}\rvert$ is $\frac{4}{5}$, one can automatically
deduce that $q_{1}$ and $q_{2}$ must have opposite signs.
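Explicitly (our arithmetic): the allowed charges of the $\bm{5}$ lie in $\frac{\nu}{5}+\mathbb{Z}$, so any two of them differ by an integer. If $q_{1}$ and $q_{2}$ had the same sign, then
$|q_{2}-q_{1}|=\tfrac{4}{5}-\tfrac{1}{5}=\tfrac{3}{5}\notin\mathbb{Z}\,,$
a contradiction, whereas opposite signs give $|q_{2}-q_{1}|=\tfrac{4}{5}+\tfrac{1}{5}=1\in\mathbb{Z}$.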
This trick fails, however, when the allowed charges are either integral or
half-integral. Depending on the context, there are other strategies that may
be used to determine the relative signs. In 6D models with
$\mathfrak{g}\oplus\mathfrak{u}(1)$ gauge algebras, the massless spectrum must
satisfy the anomaly condition ErlerAnomaly ; ParkTaylor ; ParkIntersection
$\sum_{i}x_{\bm{R},q_{i}}E_{\bm{R}}q_{i}=0\,,$ (85)
where $x_{\bm{R},q_{i}}$ is the number of hypermultiplets in the
representation $\bm{R}_{q_{i}}$. The anomaly coefficient $E_{\bm{R}}$ is
defined by
$\operatorname{tr}_{\bm{R}}F^{3}=E_{\bm{R}}\operatorname{tr}F^{3}\,,$ (86)
where $F$ is the field strength for $\mathfrak{g}$,
$\operatorname{tr}_{\bm{R}}$ is the trace in the $\bm{R}$ representation, and
$\operatorname{tr}$ is the trace in the fundamental representation (for
$\mathfrak{su}(n)$, $E_{\bm{R}}$ is $1$ for the fundamental representation and
$n-4$ for the two-index antisymmetric representation; for all the other
representations considered in this paper, $E_{\bm{R}}$ is 0). Since this
formula is clearly sensitive to the sign of $q_{i}$, one can sometimes glean
information about the relative signs of the charges from this condition. In 4D
models, Yukawa couplings correspond to codimension-three loci in the base
where the singularity type enhances, which can often be viewed as the
intersection of codimension-two loci supporting the matter participating in
the Yukawa interaction. The Yukawa term in the Lagrangian must be invariant
under gauge transformations, so the $\mathfrak{u}(1)$ charges of matter fields
participating in the interaction must sum to 0. Thus, one can determine some
information about the relative signs by knowing that the matter at certain
loci admit a Yukawa interaction. In fact, one can often construct 6D and 4D
models with the same Weierstrass tuning, potentially allowing both of these
strategies to be used.
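As a toy illustration of the anomaly strategy (a sketch of ours with a made-up spectrum fragment, not a complete anomaly-free model), one can watch Eq. 85 react to a relative sign flip:

```python
from fractions import Fraction

def cubic_abelian_anomaly(spectrum, E):
    # Left-hand side of Eq. (85): sum_i x_{R, q_i} * E_R * q_i.
    return sum(x * E[R] * q for (R, q), x in spectrum.items())

E = {'5': 1}  # E_R = 1 for the fundamental of su(5), as noted above

good = {('5', Fraction(1, 5)): 4, ('5', Fraction(-4, 5)): 1}
bad  = {('5', Fraction(1, 5)): 4, ('5', Fraction(4, 5)): 1}   # relative sign flipped

print(cubic_abelian_anomaly(good, E))  # 0: these relative signs can cancel the anomaly
print(cubic_abelian_anomaly(bad, E))   # 8/5: the flipped relative sign is inconsistent
```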
To summarize, the proposed formulas in Section 2.2 are not sensitive to the
sign of the charge, but the physically relevant signs can often be determined
without too much difficulty. There are some cases where certain signs are both
meaningful and difficult to determine, however. While one may need to perform
resolutions to fully determine the charges in such cases, the proposed
formulas still provide an easy method of determining the absolute value of the
charge, making them an invaluable tool. Of course, when attempting to find the
orders of vanishing corresponding to a given charge, one can use the formulas
in Section 2.2 for either choice of sign and still obtain the same result.
## 6 Charged singlets
The primary focus for the rest of this paper is applying the strategy outlined
in Section 5 to models admitting various types of matter. For each example, we
generate order-of-vanishing data and show that they satisfy the proposals. Let
us first consider loci supporting charged singlets. We can narrow our focus to
F-theory models where the total gauge algebra is simply $\mathfrak{u}(1)$,
although our results appear to be valid for singlets in more general contexts.
The matter in such models is supported at codimension-two loci in the base
where the elliptic curve singularity type enhances to $\text{I}_{2}$; in 6D
models, each such irreducible locus supports a single hypermultiplet of
matter. The corresponding $\mathfrak{u}(1)$ charges for this matter are
integral, as implied by the form of the Shioda map in Eq. 9. Typically, one
would determine the charge of the matter supported at a particular locus by
resolving the $\text{I}_{2}$ singularities and examining the behavior of the
section in the smooth model. In line with the proposals of Section 2.2,
however, we now ask whether one can determine these $\mathfrak{u}(1)$ charges
from orders of vanishing of the section components without performing a
resolution.
This question was already discussed in Raghuram34 , and the analysis there
suggested a particular relation between the charge and the orders of vanishing
in Weierstrass form. The work employed the strategy in Section 5 for an
F-theory construction described by the Weierstrass model MorrisonParkU1
$y^{2}=x^{3}+\left(\hat{f}_{12}-3f_{6}^{2}\right)xz^{4}+\left(f_{9}^{2}+2f_{6}^{3}-\hat{f}_{12}f_{6}\right)z^{6}\,.$ (87)
This elliptic fibration has Mordell–Weil rank $1$, and the generating section
$\hat{s}$ can be written as
$\hat{s}\colon[\hat{x}:\hat{y}:\hat{z}]=\left[f_{6}:f_{9}:1\right]\,.$ (88)
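One can verify symbolically (a quick check of ours) that the section (88) lies on the curve (87):

```python
import sympy as sp

f6, f9, fhat12 = sp.symbols('f6 f9 fhat12')
x, y, z = f6, f9, 1  # section components from Eq. (88)
rhs = x**3 + (fhat12 - 3*f6**2)*x*z**4 + (f9**2 + 2*f6**3 - fhat12*f6)*z**6
print(sp.expand(rhs - y**2))  # 0, so y^2 = rhs holds identically along the section
```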
There are no additional nonabelian gauge symmetries in this model, and the
gauge algebra for this model is therefore $\mathfrak{u}(1)$. The only matter
in this model, supported at $\{f_{9}=\hat{f}_{12}=0\}$, has charge $q=1$, as
# The stickiness property
for antisymmetric nonlocal minimal graphs
Benjamin Baronowitz, Serena Dipierro and Enrico Valdinoci
Department of Mathematics and Statistics
University of Western Australia
35 Stirling Highway, WA 6009 Crawley, Australia
###### Abstract.
We show that arbitrarily small antisymmetric perturbations of the zero
function are sufficient to produce the stickiness phenomenon for planar
nonlocal minimal graphs (with the same quantitative bounds obtained for the
case of even symmetric perturbations, up to multiplicative constants).
In proving this result, one also establishes an odd symmetric version of the
maximum principle for nonlocal minimal graphs, according to which the odd
symmetric minimizer is positive in the direction of the positive bump and
negative in the direction of the negative bump.
###### Key words and phrases:
Nonlocal minimal surfaces, stickiness, qualitative and quantitative behavior.
###### 2020 Mathematics Subject Classification:
35R11, 49Q05
S. Dipierro and E. Valdinoci are members of AustMS. S. Dipierro is supported
by the Australian Research Council DECRA DE180100957 “PDEs, free boundaries
and applications”. E. Valdinoci is supported by the Australian Laureate
Fellowship FL190100081 “Minimal surfaces, free boundaries and partial
differential equations”.
## 1. Introduction
Nonlocal minimal surfaces were introduced in [MR2675483] as minimizers of a
fractional perimeter functional with respect to some given external datum. As
established in [MR3516886], when the external datum is a graph in some
direction, so is the whole minimizer, hence it is also common to consider the
case of nonlocal minimal graphs, i.e. of nonlocal minimal surfaces which
possess a graphical structure.
Nonlocal minimal graphs exhibit a quite peculiar phenomenon, discovered in
[MR3596708], called “stickiness”. Roughly speaking, differently from the
classical minimal surfaces, in the fractional setting remote interactions are
capable of producing boundary discontinuities, which are in turn complemented
by the boundary divergence of the derivative of the nonlocal minimal graph.
We focus here on the case in which the ambient space is of dimension $2$: this
is indeed the simplest possible stage to detect interesting geometric patterns
and, differently from the classical case, it already provides a number of
difficulties since in the nonlocal framework segments are in general not the
boundary of nonlocal minimal objects. Also, in the plane the stickiness
phenomenon is known to be essentially “generic” with respect to the external
datum, see [MR4104542], in the sense that boundary discontinuities and
boundary singularities for the derivatives can be produced by arbitrarily
small perturbations of a given external datum. In particular, these behaviors
can be obtained by arbitrarily small perturbations of the flat case in which
the external datum is, say, the sublevel of the zero function.
See also [MR4178752] for an analysis of nonlocal minimal graphs in dimension
$3$, and [MR3926519, MR4184583, 2020arXiv201000798D] for other examples of
stickiness.
Up to now, all the examples of stickiness in the setting of graphs were
constructed by using “one side bumps” in the perturbation (for instance,
adding suitable positive bumps to the zero function, or to a given function
outside a vertical slab). In this paper, we construct examples of stickiness
in which the datum is antisymmetric. In a nutshell, we will consider
arbitrarily small perturbations of the zero function by bumps possessing odd
symmetry: in this case, in principle, one may fear that the effects of equal
and opposite bumps would cancel each other and prevent the stickiness
phenomenon to occur, but we will instead establish that the stickiness
phenomenon is persistent also in this class of antisymmetric perturbations of
the flat case.
Also, we provide quantitative bounds on the resulting boundary discontinuity
which turn out to be as good as the ones available for positive bumps (up to
multiplicative constants).
The mathematical notation used in this paper goes as follows. Given
$s\in(0,1)$, $a<b$ and $u_{0}\in C(\mathbb{R})$, we say that $u\in
L^{\infty}_{\rm loc}(\mathbb{R})$ is an $s$-minimal graph in $(a,b)$ if:
* $u(x)=u_{0}(x)$ for every $x\in\mathbb{R}\setminus(a,b)$,
* for every $L>\|u\|_{L^{\infty}((a,b))}$ and every measurable function $v:\mathbb{R}\to\mathbb{R}$ such that $|v(x)|\leqslant L$ for all $x\in(a,b)$ we have that
$\mbox{\rm{Per}}_{s}(E_{u},\Omega_{L})\leqslant\mbox{\rm{Per}}_{s}(E_{v},\Omega_{L}),$
where
$E_{u}:=\big\{(x,y)\in\mathbb{R}^{2}{\mbox{ s.t. }}y<u(x)\big\},\qquad E_{v}:=\big\{(x,y)\in\mathbb{R}^{2}{\mbox{ s.t. }}y<v(x)\big\},\qquad\Omega_{L}:=(a,b)\times(-L,L),$
$\mbox{\rm{Per}}_{s}(E,\Omega):={\mathcal{L}}_{s}(E\cap\Omega,E^{c}\cap\Omega)+{\mathcal{L}}_{s}(E\cap\Omega,E^{c}\cap\Omega^{c})+{\mathcal{L}}_{s}(E\cap\Omega^{c},E^{c}\cap\Omega),$
$\Omega^{c}:=\mathbb{R}^{2}\setminus\Omega\quad{\mbox{and}}\quad{\mathcal{L}}_{s}(A,B):=\iint_{A\times B}\frac{dX\,dY}{|X-Y|^{2+s}}.$
See Figure 1 for a sketch of the interactions detected by the definition
above. See also [MR4279395] for further information about nonlocal minimal
graphs and in particular Theorem 1.11 for existence and uniqueness results for
nonlocal minimal graphs.
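To fix ideas, here is a minimal numerical sketch (ours, not from the references) of the interaction functional ${\mathcal{L}}_{s}$ for two disjoint axis-aligned rectangles, estimated by plain Monte Carlo; disjointness keeps the integrand bounded:

```python
import numpy as np

rng = np.random.default_rng(0)

def L_s(A, B, s=0.5, n=200_000):
    # Monte Carlo estimate of L_s(A, B) = iint_{A x B} |X - Y|^{-(2+s)} dX dY,
    # with A = (x0, x1, y0, y1) and B likewise, assumed disjoint.
    X = rng.uniform([A[0], A[2]], [A[1], A[3]], size=(n, 2))
    Y = rng.uniform([B[0], B[2]], [B[1], B[3]], size=(n, 2))
    area = (A[1] - A[0]) * (A[3] - A[2]) * (B[1] - B[0]) * (B[3] - B[2])
    r = np.linalg.norm(X - Y, axis=1)
    return area * np.mean(r ** (-(2 + s)))

# Interaction between two unit squares separated by a horizontal gap of width 1:
print(L_s((0, 1, 0, 1), (2, 3, 0, 1)))
```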
We also recall that, while the minimizers of the classical perimeter have
vanishing mean curvature, a minimizer $E$ in $\Omega$ of the fractional
perimeter has vanishing fractional mean curvature, in the sense that, for
every $P\in\Omega\cap(\partial E)$,
$\int_{\mathbb{R}^{2}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX=0.$
Here above and in the sequel, the singular integral is understood in the Cauchy
principal value sense and the above equation holds in the viscosity sense; see
[MR2675483, Theorem 5.1] for full details.
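For instance (a standard observation, spelled out here for convenience): if $E=\big\{(x,y)\in\mathbb{R}^{2}{\mbox{ s.t. }}y<0\big\}$ and $P\in\partial E$, the point reflection $X\mapsto 2P-X$ maps $E$ onto $\mathbb{R}^{2}\setminus E$ and preserves $|X-P|$, whence
$\int_{\mathbb{R}^{2}}\frac{\chi_{\mathbb{R}^{2}\setminus E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX=0$
in the principal value sense. Halfplanes thus have vanishing fractional mean curvature, consistently with the fact that the identically zero datum produces the identically zero $s$-minimal graph.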
Numerical examples of the stickiness properties have recently been provided by
[MR3982031, MR4294645]. The theory of nonlocal minimal surfaces also presents
a number of interesting offshoots, such as regularity theory [MR3090533,
MR3107529, MR3331523, MR3680376, MR3798717, MR3934589, MR3981295, MR4050198,
MR4116635], isoperimetric problems and the study of constant mean curvature
surfaces [MR2799577, MR3322379, MR3412379, MR3485130, MR3744919, MR3836150,
MR3881478], geometric evolution problems [MR2487027, MR3156889, MR3713894,
MR3778164, MR3951024], front propagation problems [MR2564467], phase
transition problems [MR3133422], etc.
Figure 1. The interactions contributing to the fractional perimeter.
Having formalized the setting in which we work, we can now state our result
about the stickiness phenomenon in the antisymmetric setting as an arbitrarily
small perturbation of the flat case:
###### Theorem 1.1.
Let $\epsilon_{0}>0$, $\bar{C}>0$, $d>d_{0}>0$ and $h>0$, with
(1.1)
$\frac{h^{2+s}}{(h^{2}+2)^{\frac{2+s}{2}}}>\frac{2\bar{C}}{(1+s){\left(d-d_{0}\right)^{1+s}}}.$
Then, there exist a constant $C>0$, depending only on $s$, $\epsilon_{0}$,
$\bar{C}$, $d$ and $h$, and an infinitesimal sequence of $\eta$’s for which
the following claim holds true.
Let $u_{0}\in C(\mathbb{R})$. Assume that
(1.2) $u_{0}(x)=0\quad{\mbox{ for all }}x\in(d,d+h),$
(1.3) $u_{0}(x)\leqslant 0\quad{\mbox{ for all }}x\in(d,+\infty),$
(1.4) $\|u_{0}\|_{L^{\infty}(\mathbb{R})}\leqslant\bar{C}\eta,$
(1.5) $u_{0}(x)=-u_{0}(-x)\quad{\mbox{ for all }}x\in(d,+\infty)$
and
(1.6) $\int_{-\infty}^{-d-h}\frac{u_{0}(x)}{|d_{0}-d-x|^{2+s}}\,dx\geqslant\eta.$
Let $u$ be the $s$-minimal graph with $u=u_{0}$ in
$\mathbb{R}\setminus(-d,d)$.
Then,
(1.7) $u(x)=-u(-x)\quad{\mbox{ for all }}x\in(-\infty,+\infty),$
(1.8) $u(x)\geqslant 0\quad{\mbox{ for all }}x\in(-\infty,0],$
(1.9) $u(x)\leqslant 0\quad{\mbox{ for all }}x\in[0,+\infty)$
and, for every $x\in(-d,0)$,
(1.10) $u(x)\geqslant\frac{\eta^{\frac{2+\epsilon_{0}}{1-s}}}{C}.$
Figure 2. The antisymmetric stickiness phenomenon, as given in Theorem 1.1.
See Figure 2 for a sketch of the geometric scenario described in Theorem 1.1.
Interestingly, the antisymmetric stickiness power $\frac{2+\epsilon_{0}}{1-s}$
in (1.10) is the same as the one detected in [MR3596708, Theorem 1.4] for the
even symmetric case (hence, the antisymmetric case appears to be also
quantitative in agreement with the even symmetric configurations, up to
multiplicative constants).
Notice also that condition (1.1) simply states that $d$ is sufficiently large
with respect to other structural parameters. For instance, one can take
$\bar{C}:=2^{2+s}(1+s),\qquad d:=3^{\frac{2+s}{2(1+s)}}2^{\frac{3+s}{1+s}}+2,\qquad d_{0}:=1\quad{\mbox{ and }}\quad h:=1.$
Notice that with this choice of parameters, condition (1.1) is satisfied. We
also define
$d_{1}:=2\left(\left(\frac{10}{9}\right)^{\frac{1}{1+s}}-1\right)\quad{\mbox{
and }}\quad d_{2}:=2\left(10^{\frac{1}{1+s}}-1\right),$
and we take $u_{0}\in C(\mathbb{R})$ satisfying (1.2), (1.3), (1.4) and (1.5),
and such that
$u_{0}\geqslant\bar{C}\eta\chi_{(-d-h-d_{2},-d-h-d_{1})}\quad{\mbox{ in
}}(-\infty,-d).$
With this choice of $u_{0}$ we have that (1.6) is also satisfied. Indeed,
using the change of variable $z:=d_{0}-d-x$,
$\displaystyle\int_{-\infty}^{-d-h}\frac{u_{0}(x)}{|d_{0}-d-x|^{2+s}}\,dx\geqslant\int_{-d-h-d_{2}}^{-d-h-d_{1}}\frac{u_{0}(x)}{|d_{0}-d-x|^{2+s}}\,dx\geqslant\int_{-d-h-d_{2}}^{-d-h-d_{1}}\frac{\bar{C}\eta}{|d_{0}-d-x|^{2+s}}\,dx$
$\displaystyle\qquad=\bar{C}\eta\int_{d_{0}+h+d_{1}}^{d_{0}+h+d_{2}}\frac{dz}{z^{2+s}}=\frac{\bar{C}\eta}{1+s}\left[\frac{1}{(d_{0}+h+d_{1})^{1+s}}-\frac{1}{(d_{0}+h+d_{2})^{1+s}}\right]$
$\displaystyle\qquad=2^{2+s}\eta\left(\frac{1}{2^{1+s}\cdot\frac{10}{9}}-\frac{1}{2^{1+s}\cdot 10}\right)=2\eta\cdot\frac{8}{10}>\eta,$
as desired.
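These elementary manipulations can also be confirmed numerically (a sketch of ours):

```python
import numpy as np

for s in np.linspace(0.05, 0.95, 19):
    Cbar = 2 ** (2 + s) * (1 + s)
    d = 3 ** ((2 + s) / (2 * (1 + s))) * 2 ** ((3 + s) / (1 + s)) + 2
    d0, h = 1.0, 1.0
    # Condition (1.1):
    lhs = h ** (2 + s) / (h ** 2 + 2) ** ((2 + s) / 2)
    rhs = 2 * Cbar / ((1 + s) * (d - d0) ** (1 + s))
    assert lhs > rhs
    # The chain of inequalities above: the lower bound equals 1.6 (times eta) > eta.
    d1 = 2 * ((10 / 9) ** (1 / (1 + s)) - 1)
    d2 = 2 * (10 ** (1 / (1 + s)) - 1)
    bound = Cbar / (1 + s) * ((d0 + h + d1) ** -(1 + s) - (d0 + h + d2) ** -(1 + s))
    assert np.isclose(bound, 1.6)
print("checks pass")
```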
The gist of the proof of Theorem 1.1 is to employ the auxiliary function
constructed in [MR3596708, Corollary 7.2], which, by suitably scaling the
picture, provides a localized barrier, whose nonlocal mean curvature possibly
has a “wrong sign” somewhere, but this sign discrepancy is controlled by a
constant times $\delta$. However, since in our framework the solution will
cross the horizontal axis, the previous barrier cannot be exploited as it is
and needs to be modified to stay below the “expected” negative regions of the
true solution.
To this end, we first need to localize a “safe region near the boundary” for
the true solution, that is an interval of well-determined length in which one
can be sure that the solution is positive. The detection of this safe region
will be also obtained by a barrier argument, with a different barrier based on
the analysis of smooth functions with small oscillations.
The objective then becomes to slide from below an appropriate barrier whose
positive portion occurs precisely in the above safe region. For this one has
to scale and modify the barrier in [MR3596708, Corollary 7.2] by accounting
for the external data’s bumps: roughly speaking, the positive bump on the left
will produce an advantageous term, while the negative bump on the right and
the modification needed to lower the barrier outside the safe region will
produce a disadvantageous term of the same order. Thus, playing around with
constants, one detects a natural structure assuring that the advantageous
term is greater than the sum of the initial error on the nonlocal mean
curvature, the contribution of the negative bump and the error coming from the
barrier modification.
In view of these comments, to localize the safe region near the boundary and
perform the proof of Theorem 1.1, we establish a result of general interest,
which can be seen as a “maximum principle for antisymmetric $s$-minimal
graphs”. Roughly speaking, the classical maximum principle for $s$-minimal
graphs (see e.g. [MR2675483, MR3516886]) states that if the external data of
an $s$-minimal graph are positive (or negative) then so is the $s$-minimal
graph. Here we deal with an antisymmetric configuration, hence it is not
possible for the external data of an $s$-minimal graph to have a sign (except
in the trivial case of identically zero datum, which produces the identically
zero $s$-minimizer). Hence, the natural counterpart of the maximum principle
in the antisymmetric framework is to assume that the external datum “on one
side” has a sign, thus forcing the external datum on the other side to have
the opposite sign. Under this assumption, we show that the corresponding
$s$-minimal graph maintains the antisymmetry and sign properties of the datum,
according to the following result:
###### Theorem 1.2.
Let $d$, $h>0$. Let $u:\mathbb{R}\to\mathbb{R}$ be an $s$-minimal graph in
$(-d,d)$, with $u\in C(-\infty,-d)\cap C^{1,\frac{1+s}{2}}(-d-h,-d)$. Assume
that
(1.11) $u(x)=-u(-x)\quad{\mbox{ for all }}x\in(d,+\infty)$
and that
(1.12) $u(x)\leqslant 0\quad{\mbox{ for all }}x\in(d,+\infty).$
Then,
(1.13) $u(x)=-u(-x)\quad{\mbox{ for all }}x\in(-\infty,+\infty),$
(1.14) $u(x)\geqslant 0\quad{\mbox{ for all }}x\in(-\infty,0]$
and
(1.15) $u(x)\leqslant 0\quad{\mbox{ for all }}x\in[0,+\infty).$
Figure 3. The maximum principle for antisymmetric nonlocal minimal graphs, as
given in Theorem 1.2.
See Figure 3 for a sketch of the configuration detected in Theorem 1.2.
The proofs of Theorems 1.1 and 1.2 are contained in Sections 2 and 3,
respectively.
## 2. Proof of Theorem 1.1
We give here the proof of Theorem 1.1. At this stage, we freely use Theorem
1.2, whose proof is postponed to Section 3.
We also use the notation $X=(x,y)$ to denote points in $\mathbb{R}^{2}$, with
$x$, $y\in\mathbb{R}$.
Let $\epsilon_{0}>0$. Given $\delta\in(0,1)$ sufficiently small, we apply
[MR3596708, Corollary 7.2] and we find a set $H_{\delta}\subset\mathbb{R}^{2}$
that contains the halfplane $\mathbb{R}\times(-\infty,0)$ and is contained in
the halfplane $\mathbb{R}\times(-\infty,\delta)$, such that
$\big\{(x,y)\in H_{\delta}{\mbox{ s.t. }}x\in(-\infty,-d)\cup(-d+d_{0},+\infty)\big\}=\big\{(x,y)\in\mathbb{R}^{2}{\mbox{ s.t. }}x\in(-\infty,-d)\cup(-d+d_{0},+\infty){\mbox{ and }}y\in(-\infty,0)\big\},$
with
(2.1) $\big\{(x,y)\in H_{\delta}{\mbox{ s.t. }}x\in(-d,-d+d_{0})\big\}\supseteq\big\{(x,y)\in\mathbb{R}^{2}{\mbox{ s.t. }}x\in(-d,-d+d_{0}){\mbox{ and }}y<\delta^{\frac{2+\epsilon_{0}}{1-s}}\big\}$
and satisfying, for each $P=(p,q)\in\partial H_{\delta}$ with
$p\in\left(-d,-d+d_{0}\right)$,
(2.2) $\int_{\mathbb{R}^{2}}\frac{\chi_{\mathbb{R}^{2}\setminus
H_{\delta}}(X)-\chi_{H_{\delta}}(X)}{|X-P|^{2+s}}\,dX\leqslant C\delta,$
for some constant $C>0$ depending only on $s$, $\epsilon_{0}$ and $d$.
The strategy will be to take
(2.3) $\eta:=C_{\star}\delta$
in the statement of Theorem 1.1, with $C_{\star}>0$ to be sufficiently large,
according to the following computation. We let
${\mathcal{A}}:=\big\{(x,y)\in\mathbb{R}^{2}{\mbox{ s.t. }}x\in(-\infty,-d){\mbox{ and }}0<y<u_{0}(x)\big\},\qquad{\mathcal{B}}:=\big\{(x,y)\in\mathbb{R}^{2}{\mbox{ s.t. }}x\in(0,+\infty){\mbox{ and }}-\bar{C}\eta<y<0\big\}$
and
$U_{\delta}:=(H_{\delta}\cup{\mathcal{A}})\setminus{\mathcal{B}},$
see Figure 4.
Figure 4. The set $U_{\delta}$ used in the proof of Theorem 1.1.
We observe that ${\mathcal{A}}\cap H_{\delta}=\varnothing$ and
${\mathcal{B}}\subset H_{\delta}$. Consequently,
$\chi_{U_{\delta}}=\chi_{H_{\delta}}+\chi_{{\mathcal{A}}}-\chi_{{\mathcal{B}}}\qquad{\mbox{and}}\qquad\chi_{\mathbb{R}^{2}\setminus U_{\delta}}=\chi_{\mathbb{R}^{2}\setminus(H_{\delta}\cup{\mathcal{A}})}+\chi_{{\mathcal{B}}}\leqslant\chi_{\mathbb{R}^{2}\setminus H_{\delta}}+\chi_{{\mathcal{B}}}.$
From this and (2.2), for all $P=(p,q)\in\partial H_{\delta}$ with
$p\in\left(-d,-d+d_{0}\right)$,
(2.4) $\begin{split}&\int_{\mathbb{R}^{2}}\frac{\chi_{\mathbb{R}^{2}\setminus U_{\delta}}(X)-\chi_{U_{\delta}}(X)}{|X-P|^{2+s}}\,dX\\ &\qquad\leqslant\int_{\mathbb{R}^{2}}\frac{\chi_{\mathbb{R}^{2}\setminus H_{\delta}}(X)+\chi_{{\mathcal{B}}}(X)-\big(\chi_{H_{\delta}}(X)+\chi_{{\mathcal{A}}}(X)-\chi_{{\mathcal{B}}}(X)\big)}{|X-P|^{2+s}}\,dX\\ &\qquad\leqslant C\delta+2\int_{{\mathcal{B}}}\frac{dX}{|X-P|^{2+s}}-\int_{{\mathcal{A}}}\frac{dX}{|X-P|^{2+s}}.\end{split}$
We note that, for all $P=(p,q)\in\partial H_{\delta}$ with
$p\in\left(-d,-d+d_{0}\right)$,
$\int_{{\mathcal{B}}}\frac{dX}{|X-P|^{2+s}}\leqslant\int_{{\mathcal{B}}}\frac{dX}{|x-p|^{2+s}}=\bar{C}\eta\int_{0}^{+\infty}\frac{dx}{|x-p|^{2+s}}=\bar{C}\eta\int_{0}^{+\infty}\frac{dx}{(x-p)^{2+s}}\leqslant\bar{C}\eta\int_{0}^{+\infty}\frac{dx}{\left(x+d-d_{0}\right)^{2+s}}=\frac{\bar{C}\eta}{(1+s)\left(d-d_{0}\right)^{1+s}}=\frac{\bar{C}C_{\star}\delta}{(1+s)\left(d-d_{0}\right)^{1+s}}.$
This and (2.4) yield that
(2.5) $\int_{\mathbb{R}^{2}}\frac{\chi_{\mathbb{R}^{2}\setminus
U_{\delta}}(X)-\chi_{U_{\delta}}(X)}{|X-P|^{2+s}}\,dX\leqslant
C\delta+\frac{2\bar{C}C_{\star}\delta}{(1+s){\left(d-d_{0}\right)^{1+s}}}-\int_{{\mathcal{A}}}\frac{dX}{|X-P|^{2+s}}.$
Also, by (1.4), if $X=(x,y)\in{{\mathcal{A}}}$ and $P=(p,q)\in\partial
H_{\delta}$ with $p\in\left(-d,-d+d_{0}\right)$,
$|y-q|\leqslant|y|+|q|\leqslant 1+\delta\leqslant\sqrt{2},$ since $\delta$ is sufficiently small.
Moreover, owing to (1.2) and (1.5), for all $x\in(-d-h,-d)$ we have that
$u_{0}(x)=-u_{0}(-x)=0$.
Accordingly, if $X=(x,y)\in{{\mathcal{A}}}$ then $x\leqslant-d-h$. As a
result, if $X=(x,y)\in{{\mathcal{A}}}$ and $P=(p,q)\in\partial H_{\delta}$
with $p\in\left(-d,-d+d_{0}\right)$,
$|x-p|\geqslant\min\big\{|x-p|,\,|x+p|\big\}\geqslant\min\big\{p-x,\,-p-x\big\}\geqslant\min\big\{-d-(-d-h),\,d-d_{0}-(-d-h)\big\}=\min\big\{h,2d-d_{0}+h\big\}=h.$
From these remarks we obtain that
$\displaystyle|X-P|=|x-p|\sqrt{1+\frac{|y-q|^{2}}{|x-p|^{2}}}\leqslant|x-p|\sqrt{1+\frac{2}{h^{2}}}=|x-p|\sqrt{\frac{h^{2}+2}{h^{2}}}.$
Hence,
(2.6)
$\begin{split}&\int_{{\mathcal{A}}}\frac{dX}{|X-P|^{2+s}}\geqslant\frac{h^{2+s}}{(h^{2}+2)^{\frac{2+s}{2}}}\int_{{\mathcal{A}}}\frac{dX}{|x-p|^{2+s}}\\\
&\qquad\qquad=\frac{h^{2+s}}{(h^{2}+2)^{\frac{2+s}{2}}}\int_{-\infty}^{-d}\frac{u_{0}(x)}{|x-p|^{2+s}}\,dx=\frac{h^{2+s}}{(h^{2}+2)^{\frac{2+s}{2}}}\int_{-\infty}^{-d-h}\frac{u_{0}(x)}{|x-p|^{2+s}}\,dx.\end{split}$
Since, if $p\in\left(-d,-d+d_{0}\right)$ and $x\in(-\infty,-d-h)$,
$|x-p|=p-x\leqslant-d+d_{0}-x=|d_{0}-d-x|,$
we deduce from (2.6) that
$\int_{{\mathcal{A}}}\frac{dX}{|X-P|^{2+s}}\geqslant\frac{h^{2+s}}{(h^{2}+2)^{\frac{2+s}{2}}}\int_{-\infty}^{-d-h}\frac{u_{0}(x)}{|d_{0}-d-x|^{2+s}}\,dx.$
This, (1.6) and (2.3) give that
$\int_{{\mathcal{A}}}\frac{dX}{|X-P|^{2+s}}\geqslant\frac{h^{2+s}\eta}{(h^{2}+2)^{\frac{2+s}{2}}}=\frac{C_{\star}h^{2+s}\delta}{(h^{2}+2)^{\frac{2+s}{2}}}.$
Plugging this information into (2.5) we infer that
(2.7) $\int_{\mathbb{R}^{2}}\frac{\chi_{\mathbb{R}^{2}\setminus
U_{\delta}}(X)-\chi_{U_{\delta}}(X)}{|X-P|^{2+s}}\,dX\leqslant
C\delta+\frac{2\bar{C}C_{\star}\delta}{(1+s){\left(d-d_{0}\right)^{1+s}}}-\frac{C_{\star}h^{2+s}\delta}{(h^{2}+2)^{\frac{2+s}{2}}}.$
Now we define
$\vartheta:=\frac{h^{2+s}}{(h^{2}+2)^{\frac{2+s}{2}}}-\frac{2\bar{C}}{(1+s){\left(d-d_{0}\right)^{1+s}}}$
and we observe that $\vartheta>0$, owing to (1.1). Then, we deduce from (2.7)
that
(2.8) $\int_{\mathbb{R}^{2}}\frac{\chi_{\mathbb{R}^{2}\setminus
U_{\delta}}(X)-\chi_{U_{\delta}}(X)}{|X-P|^{2+s}}\,dX\leqslant C\delta-
C_{\star}\vartheta\delta\leqslant-\frac{C_{\star}\vartheta\delta}{2}<0,$
as long as $C_{\star}$ is large enough.
We can now use $U_{\delta}$ as a barrier for the sliding method. Specifically,
by (1.4) and the maximum principle in [MR2675483], we know that
$|u(x)|\leqslant\bar{C}\eta$ for all $x\in\mathbb{R}$ and therefore a
downwards translation of $U_{\delta}$ with magnitude $1$, that we denote by
$U_{\delta}-(0,1)$, is completely contained in
$E_{u}:=\big{\\{}(x,y)\in\mathbb{R}^{2}{\mbox{ s.t. }}y<u(x)\big{\\}}$. We
then slide up till we reach a touching point: given $\tau\in(0,1]$, we notice
that by construction no touching can occur between $E_{u}$ and
$U_{\delta}-(0,\tau)$ at points with abscissa in $\mathbb{R}\setminus[-d,0]$.
But these touching points cannot occur in $\left(-d+d_{0},0\right]$ either,
thanks to (1.14). And they cannot occur in $\left[-d,-d+d_{0}\right]$, thanks
to (2.8) and the maximum principle in [MR2675483] (see also [MR4104542,
Theorem 1.4]).
As a consequence, we have that $U_{\delta}\subseteq E_{u}$. By virtue of (2.1)
and (2.3), this establishes (1.10).
Also, the claims in (1.7), (1.8) and (1.9) follow from (1.13), (1.14) and
(1.15). $\Box$
## 3. Proof of Theorem 1.2
The claim in (1.13) about the odd symmetry of $u$ is a consequence of (1.11)
and [MR3596708, Lemma A.1].
Thus, to complete the proof of Theorem 1.2 it suffices to establish (1.14),
since (1.15) would then follow from (1.13) and (1.14).
To prove (1.14) we argue by contradiction and assume that (1.14) is violated.
We recall that $u$ is uniformly continuous in $(-d,d)$, owing to [MR3516886,
Theorem 1.1]. As a consequence, we can redefine $u$ at the extrema of $(-d,d)$
by setting
(3.1) $u(-d):=\lim_{x\searrow-d}u(x)\quad{\mbox{and}}\quad
u(d):=\lim_{x\nearrow d}u(x)$
and we have that
(3.2) $u\in C([-d,d]).$
Furthermore,
(3.3) if $x\in(-\infty,-d)$ then $u(x)=-u(-x)\geqslant 0$,
thanks to (1.11) and (1.13).
Figure 5. The geometry involved in the proof of Theorem 1.2 (Part 1:
detachment from the boundary).
This, (3.2) and our contradictory assumption give that there exists
$p\in[-d,0]$ such that
(3.4) $u(p)=\min_{[-d,0]}u<0.$
We claim that
(3.5) $p\in(-d,0).$
Indeed, since $u(0)=-u(0)$ by (1.13), we have that $u(0)=0$ and accordingly
$p\neq 0$. Consequently, to prove (3.5) it remains to show that $p\neq-d$. For
this, suppose that $p=-d$. This, together with (3.3), gives that
(3.6) $\lambda:=\lim_{x\nearrow-d}u(x)\geqslant 0>u(-d),$
see Figure 5.
Hence, by [MR4104542, Corollary 1.3(ii)] (see also [MR3532394] for related
results), we infer that there exists $\mu>0$ and a continuous function $v$ in
$\big{(}u(-d)-\mu,u(-d)+\mu\big{)}$ such that $v$ is the inverse function of
$u$ (in the above domain of definition, with the understanding that $v=-d$
along the jump discontinuity of $u$ detected in (3.6)).
Now we pick a sequence of points $q_{k}\in(-d,0)$ with $q_{k}\searrow-d$ as
$k\to+\infty$; in particular, by (3.1), we can assume that
$u(q_{k})\in\big{(}u(-d)-\mu,u(-d)+\mu\big{)}$ and we thus obtain that
$v(u(q_{k}))=q_{k}$. We have that
(3.7) $u(q_{k})<u(-d),$
otherwise, since $v(y)=-d$ for all
$y\in\big{[}u(-d),\min\\{\lambda,u(-d)+\mu\\}\big{]}$, we would have that
$-d=v(u(q_{k}))=q_{k}>-d$, which is a contradiction.
Having proved (3.7), we find that
$u(q_{k})<u(-d)=u(p)=\min_{[-d,0]}u\leqslant u(q_{k})$
and this is a contradiction that completes the proof of (3.5).
We now recall that $u\in C^{\infty}(-d,d)$, thanks to [MR3090533, MR3331523].
For this reason, we can write the Euler-Lagrange equation for nonlocal minimal
surfaces (see [MR2675483, Theorem 5.1]) in a pointwise sense in $(-d,d)$. In
particular, by (3.5),
(3.8) $\int_{\mathbb{R}^{2}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX=0,$
where $P:=(p,u(p))$ and
$E:=\big{\\{}(x,y)\in\mathbb{R}^{2}{\mbox{ s.t. }}y<u(x)\big{\\}}.$
We will now reach a contradiction by showing that (3.8) is violated, since
“the set $E$ is too large”. To this end, we consider the sets
(3.9) $F_{-}:=\big\{x\in\mathbb{R}{\mbox{ s.t. }}u(x)<u(p)\big\}\quad{\mbox{and}}\quad F_{+}:=\big\{x\in\mathbb{R}{\mbox{ s.t. }}u(x)>-u(p)\big\},$
see Figure 6 (notice in particular that the sets $F_{-}$ and $F_{+}$ are given
by the bold intervals along the horizontal axis in Figure 6).
Figure 6. The geometry involved in the proof of Theorem 1.2 (Part 2: detection
of cancellations via isometric regions). In particular, the sets $F_{-}$ and
$F_{+}$ in (3.9) are given by the bold intervals along the horizontal axis.
We notice that, in light of (1.12), (3.3) and (3.4),
$F_{-}\subseteq(0,+\infty)\quad{\mbox{ and }}\quad F_{+}\subseteq(-\infty,0).$
Furthermore, by (1.13), we have that
(3.10) $x\in F_{-}$ if and only if $-x\in F_{+}$.
The strategy now is based on the detection of suitable cancellations via
isometric regions that correspond to the sets $F_{-}$ and $F_{+}$. The gist is
to get rid of the contributions arising from the complement of $E$ below the
line $\\{y=u(p)\\}$. For this, one has to use suitable transformations
inherited from the geometry of the problem, such as an odd reflection through
the origin, a vertical translation of magnitude $2u(p)$ and the reflection
along the line $\\{y=u(p)\\}$. The use of these transformations will aim, on
the one hand, at detecting isometric regions in $E$ which cancel the ones in
the complement of $E$ in the integral computations, and, on the other hand, at
maintaining a favorable control of the distance from the point $P$, since this
quantity appears in the denominator of the integrands involved.
To employ this strategy, we use the notation $X=(x,y)$ and the transformation
$\widetilde{X}=(\widetilde{x},\widetilde{y})=\widetilde{X}(X):=\big{(}-x,2u(p)-y\big{)}$
and we claim that
(3.11) if $x\geqslant 0$, then $|X-P|\geqslant|\widetilde{X}-P|$.
To check this, we observe that when $x\geqslant 0$ it follows that
$xp\leqslant 0$ and therefore
$|\widetilde{X}-P|^{2}-|X-P|^{2}=\Big|\big(-x,2u(p)-y\big)-\big(p,u(p)\big)\Big|^{2}-\Big|(x,y)-\big(p,u(p)\big)\Big|^{2}=\Big|\big(-x-p,u(p)-y\big)\Big|^{2}-\Big|\big(x-p,y-u(p)\big)\Big|^{2}=(-x-p)^{2}-(x-p)^{2}=4xp\leqslant 0,$
which establishes (3.11).
Besides, we define
$\displaystyle G:=\Big{\\{}X=(x,y)\in\mathbb{R}^{2}\setminus E{\mbox{ s.t.
$x\in F_{-}$ and $y<2u(p)-u(x)$}}\Big{\\}}$
and we claim that
(3.12) if $X\in G$, then $\widetilde{X}\in E\cap\\{\widetilde{x}\in
F_{+}\\}\cap\\{-u(\widetilde{x})<\widetilde{y}<2u(p)+u(\widetilde{x})\\}$.
Indeed, if $X\in G$, by (1.13) we have that $y>u(x)=-u(-x)$ and accordingly
(3.13) $\widetilde{y}=2u(p)-y<2u(p)+u(-x)=2u(p)+u(\widetilde{x}).$
In particular, $\widetilde{y}<u(\widetilde{x})$. From this, (3.10) and (3.13)
we obtain that
(3.14) $\widetilde{X}\in E\cap\\{\widetilde{x}\in
F_{+}\\}\cap\\{\widetilde{y}<2u(p)+u(\widetilde{x})\\}.$
Additionally,
$\widetilde{y}=2u(p)-y>2u(p)-\big{(}2u(p)-u(x)\big{)}=u(x)=u(-\widetilde{x})=-u(\widetilde{x}).$
Combining this with (3.14) we obtain (3.12), as desired.
Now, in light of (3.11) and (3.12), and observing that $d\widetilde{X}=dX$,
(3.15) $\begin{split}&\int_{\mathbb{R}^{2}\cap\\{x\in
F_{-}\\}\cap\\{y<2u(p)-u(x)\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)}{|X-P|^{2+s}}\,dX\\\
&\qquad=\int_{G}\frac{dX}{|X-P|^{2+s}}\leqslant\int_{G}\frac{dX}{|\widetilde{X}(X)-P|^{2+s}}\\\
&\qquad\leqslant\int_{E\cap\\{\widetilde{x}\in
F_{+}\\}\cap\\{-u(\widetilde{x})<\widetilde{y}<2u(p)+u(\widetilde{x})\\}}\frac{d\widetilde{X}}{|\widetilde{X}-P|^{2+s}}\\\
&\qquad=\int_{\mathbb{R}^{2}\cap\\{\widetilde{x}\in
F_{+}\\}\cap\\{-u(\widetilde{x})<\widetilde{y}<2u(p)+u(\widetilde{x})\\}}\frac{\chi_{E}(\widetilde{X})}{|\widetilde{X}-P|^{2+s}}\,d\widetilde{X}.\end{split}$
Now we use the even reflection across the line $\\{y=u(p)\\}$, that is we
define
(3.16) $X_{*}=(x_{*},y_{*})=X_{*}(X):=\big{(}x,2u(p)-y\big{)}.$
We notice that
(3.17)
$|X_{*}-P|=\Big{|}\big{(}x,2u(p)-y\big{)}-\big{(}p,u(p)\big{)}\Big{|}=\Big{|}\big{(}x-p,u(p)-y\big{)}\Big{|}=|X-P|.$
As a result,
$\displaystyle\int_{\mathbb{R}^{2}\cap\\{x\in
F_{-}\\}\cap\\{y>2u(p)-u(x)\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)}{|X-P|^{2+s}}\,dX\leqslant\int_{\mathbb{R}^{2}\cap\\{x\in
F_{-}\\}\cap\\{y>2u(p)-u(x)\\}}\frac{dX}{|X-P|^{2+s}}$
$\displaystyle\quad=\int_{\mathbb{R}^{2}\cap\\{x\in
F_{-}\\}\cap\\{y>2u(p)-u(x)\\}}\frac{dX}{|X_{*}(X)-P|^{2+s}}=\int_{\mathbb{R}^{2}\cap\\{x\in
F_{-}\\}\cap\\{y_{*}<u(x_{*})\\}}\frac{dX_{*}}{|X_{*}-P|^{2+s}}$
$\displaystyle\qquad=\int_{\mathbb{R}^{2}\cap\\{x\in
F_{-}\\}}\frac{\chi_{E}(X_{*})}{|X_{*}-P|^{2+s}}\,dX_{*}.$
From this and (3.15), changing the names of the integration variables, we
arrive at
$\displaystyle\int_{\mathbb{R}^{2}\cap\\{x\in
F_{-}\\}}\frac{\chi_{\mathbb{R}^{2}\setminus E}(X)}{|X-P|^{2+s}}\,dX$
$\displaystyle\qquad\leqslant\int_{\mathbb{R}^{2}\cap\\{x\in
F_{+}\\}\cap\\{-u(x)<y<2u(p)+u(x)\\}}\frac{\chi_{E}(X)}{|X-P|^{2+s}}\,dX+\int_{\mathbb{R}^{2}\cap\\{x\in
F_{-}\\}}\frac{\chi_{E}(X)}{|X-P|^{2+s}}\,dX,$
that is
$\int_{\mathbb{R}^{2}\cap\\{x\in F_{-}\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX\leqslant\int_{\mathbb{R}^{2}\cap\\{x\in
F_{+}\\}\cap\\{-u(x)<y<2u(p)+u(x)\\}}\frac{\chi_{E}(X)}{|X-P|^{2+s}}\,dX.$
Therefore,
(3.18) $\begin{split}&\int_{\mathbb{R}^{2}\cap\\{x\in F_{-}\cup
F_{+}\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX\\\
\leqslant&\int_{\mathbb{R}^{2}\cap\\{x\in
F_{+}\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX+\int_{\mathbb{R}^{2}\cap\\{x\in
F_{+}\\}\cap\\{-u(x)<y<2u(p)+u(x)\\}}\frac{\chi_{E}(X)}{|X-P|^{2+s}}\,dX\\\
=&\int_{\mathbb{R}^{2}\cap\\{x\in F_{+}\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)}{|X-P|^{2+s}}\,dX-\int_{\mathbb{R}^{2}\cap\\{x\in
F_{+}\\}\cap\\{y\in(-\infty,-u(x))\cup(2u(p)+u(x),+\infty)\\}}\frac{\chi_{E}(X)}{|X-P|^{2+s}}\,dX.\end{split}$
Now, recalling the notation in (3.16), we note that
(3.19) if $X\in(\mathbb{R}^{2}\setminus E)\cap\\{x\in F_{+}\\}$, then
$X_{*}\in E\cap\\{x_{*}\in F_{+}\\}\cap\\{y_{*}\in(-\infty,-u(x_{*}))\\}$.
Indeed, if $X$ is as above then, since $u(p)<0$,
(3.20) $y_{*}=2u(p)-y<2u(p)-u(x)<-u(x)=-u(x_{*}).$
Hence, since $u(x_{*})=u(x)>-u(p)>0$,
$y_{*}<-u(x_{*})<u(x_{*}).$
By gathering this and (3.20) we obtain the desired result in (3.19).
Thus, by (3.17) and (3.19) we deduce that
$\displaystyle\int_{\mathbb{R}^{2}\cap\\{x\in
F_{+}\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)}{|X-P|^{2+s}}\,dX=\int_{(\mathbb{R}^{2}\setminus E)\cap\\{x\in
F_{+}\\}}\frac{dX}{|X_{*}(X)-P|^{2+s}}$
$\displaystyle\qquad\leqslant\int_{E\cap\\{x_{*}\in
F_{+}\\}\cap\\{y_{*}\in(-\infty,-u(x_{*}))\\}}\frac{dX_{*}}{|X_{*}-P|^{2+s}}.$
We write this inequality in the form
$\displaystyle\int_{\mathbb{R}^{2}\cap\\{x\in
F_{+}\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)}{|X-P|^{2+s}}\,dX\leqslant\int_{\mathbb{R}^{2}\cap\\{x\in
F_{+}\\}\cap\\{y\in(-\infty,-u(x))\\}}\frac{\chi_{E}(X)}{|X-P|^{2+s}}\,dX.$
Comparing with (3.18) we thereby deduce that
(3.21) $\int_{\mathbb{R}^{2}\cap\\{x\in F_{-}\cup
F_{+}\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX\leqslant 0.$
We also remark that
$\mathbb{R}\setminus(F_{+}\cup F_{-})=\big{\\{}x\in\mathbb{R}{\mbox{ s.t.
}}|u(x)|\leqslant-u(p)\big{\\}}.$
From this observation, (3.8) and (3.21) we infer that
(3.22)
$0\leqslant\int_{\mathbb{R}^{2}\cap\\{|u(x)|\leqslant-u(p)\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX.$
It is now useful to remark that
if $|u(x)|\leqslant-u(p)$ and $y<u(p)$, then $X=(x,y)\in E$,
since in this setting $u(x)\geqslant u(p)>y$.
Consequently, the inequality in (3.22) gives that
$\displaystyle 0$ $\displaystyle\leqslant$
$\displaystyle\int_{\mathbb{R}^{2}\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y>u(p)\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX$
$\displaystyle\qquad+\int_{\mathbb{R}^{2}\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y<u(p)\\}}\frac{\chi_{\mathbb{R}^{2}\setminus
E}(X)-\chi_{E}(X)}{|X-P|^{2+s}}\,dX$ $\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{2}\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y>u(p)\\}}\frac{dX}{|X-P|^{2+s}}-2\int_{E\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y>u(p)\\}}\frac{dX}{|X-P|^{2+s}}$
$\displaystyle\qquad-\int_{\mathbb{R}^{2}\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y<u(p)\\}}\frac{dX}{|X-P|^{2+s}}.$
Hence, recalling (3.17) and noticing that, if $y\geqslant u(p)$, then
$y_{*}=2u(p)-y\leqslant u(p)$, we find that
$\displaystyle 0$ $\displaystyle\leqslant$
$\displaystyle\int_{\mathbb{R}^{2}\cap\\{|u(x_{*})|\leqslant-u(p)\\}\cap\\{y_{*}<u(p)\\}}\frac{dX_{*}}{|X_{*}-P|^{2+s}}-2\int_{E\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y>u(p)\\}}\frac{dX}{|X-P|^{2+s}}$
$\displaystyle\qquad-\int_{\mathbb{R}^{2}\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y<u(p)\\}}\frac{dX}{|X-P|^{2+s}}$
$\displaystyle=$
$\displaystyle-2\int_{E\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y>u(p)\\}}\frac{dX}{|X-P|^{2+s}}.$
This entails that the set
(3.23) $E\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y>u(p)\\}$ is of null measure.
On the other hand, we set
$\eta:=\min\left\\{\frac{|u(p)|}{1+8\|u\|_{C^{1}(-d/2,d/2)}},\frac{d}{2}\right\\}$
and we claim that
(3.24)
$\left[-\eta,\eta\right]\times\left[\frac{u(p)}{2},\frac{u(p)}{4}\right]\subseteq
E\cap\\{|u(x)|\leqslant-u(p)\\}\cap\\{y>u(p)\\}.$
Indeed, if
$(x,y)\in\left[-\eta,\eta\right]\times\left[\frac{u(p)}{2},\frac{u(p)}{4}\right]$,
then
$\displaystyle
y-u(x)<\frac{u(p)}{4}-u(x)+u(0)\leqslant\frac{u(p)}{4}+\|u\|_{C^{1}(-d/2,d/2)}|x|$
$\displaystyle\qquad\leqslant\frac{u(p)}{4}+\frac{|u(p)|\,\|u\|_{C^{1}(-d/2,d/2)}}{1+8\|u\|_{C^{1}(-d/2,d/2)}}\leqslant\frac{u(p)}{4}+\frac{|u(p)|}{8}\leqslant\frac{u(p)}{8}<0,$
which gives that $(x,y)\in E$.
Moreover,
$\displaystyle|u(x)|=|u(x)-u(0)|\leqslant\|u\|_{C^{1}(-d/2,d/2)}|x|\leqslant\frac{|u(p)|\,\|u\|_{C^{1}(-d/2,d/2)}}{1+8\|u\|_{C^{1}(-d/2,d/2)}}\leqslant|u(p)|$
and $y\geqslant\frac{u(p)}{2}>u(p)$. These considerations prove (3.24).
From (3.23) and (3.24) we obtain the desired contradiction. The proof of
(1.14) is thereby complete. $\Box$
# Feeder Microgrid Management on an Active Distribution System during a Severe
Outage
Valliappan Muthukaruppan, Ashwin Shirsat, Rongxing Hu, Victor Paduani, Bei Xu,
Yiyan Li, Mesut Baran, Ning Lu, David Lubkeman, and Wenyuan Tang Corresponding
author: Mesut Baran (baran@ncsu.edu)The authors are with Department of
Electrical and Computer Engineering, North Carolina State University, Raleigh,
NC 27695 USA (email: vmuthuk2, ashirsa, rhu5, vdaldeg, bxu8, yli257, nlu2,
dllubkem, wtang8, baran@ncsu.edu). This material is based upon work supported
by U.S. Department of Energy’s Office of Energy Efficiency and Renewable
Energy (EERE) under Solar Energy Technologies Office Award Number DE-
EE0008770.
###### Abstract
Forming a microgrid on a distribution system with a large-scale outage after a
severe weather event is emerging as a viable solution to improve resiliency at
the distribution level. This option becomes more attractive when the
distribution system has high levels of distributed PV. The management of such
a feeder-level microgrid, however, poses many challenges, such as the limited
resources that can be deployed on the feeder quickly and the limited real-time
monitoring and control on the distribution system. Effective use of the
distributed PV is also challenging, as these units are neither monitored nor
controlled. To handle these challenges, the paper proposes a 2-stage
hierarchical energy management scheme to securely operate these feeder-level
microgrids. The first
stage of the scheme solves a sequential rolling optimization problem to
optimally schedule the main resources (such as a mobile diesel generator and
battery storage unit). The second stage adopts a dispatching scheme for the
main resources to adjust the stage-1 set-points closer to real-time. The
proposed scheme has unique features to assure that the scheme is robust under
highly varying operating conditions with limited system observability: (i) an
innovative PV forecast error adjustment and a dynamic reserve adjustment
scheme to handle the extreme uncertainty on PV power output, and (ii) an
intelligent fuel management scheme to assure that the resources are utilized
optimally over the multiple days of the restoration period. The proposed
algorithm is tested on sample system with real-time data. The results show
that the proposed scheme performs well in maximizing service to loads by
effective use of all the resources and by properly taking into account the
challenging operating conditions.
###### Index Terms:
distribution system restoration, feeder-level microgrid, energy management,
reserve management, forecast error correction.
## Nomenclature
$\overline{E}^{ES}_{i}$
kWh rating of ES $i\in\mathcal{N}^{ES}$
$\left\\{\underline{F},\overline{F}\right\\}_{i}$
Min/Max fuel limits of DG $i\in\mathcal{N}^{DG}$
$\overline{S}^{ES/DG}_{i}$
kVA rating of ES/DG $i\in\mathcal{N}^{ES/DG}$
$\left\\{\underline{SoC},\overline{SoC}\right\\}_{i}$
Min/Max SoC limits of ES $i\in\mathcal{N}^{ES}$
$\alpha_{i},\beta_{i}$
Fuel consumption coefficients of DG
$\gamma_{t}$
Reserve factor for GFM ES unit
$\alpha^{MSD}$
Minimum service duration of load groups
$\alpha^{up}$
Minimum up time for DG
$w_{i}$
Priority weight for load groups
$\theta_{i}$
Power factor angle for DG $i\in\mathcal{N}^{DG}$
$C^{up}$
Start cost for DG
$\lambda_{n}$
Forecast error normalization factor
$F_{f}$
Final desired fuel reserve for DG
$\\{P_{i,t,\phi},Q_{i,t,\phi}\\}^{D}$
Real and reactive power load demand (stage-1)
$\\{P_{i,t,\phi},Q_{i,t,\phi}\\}^{ES}$
Real and reactive power output of ES (stage-1)
$\\{P_{i,t},Q_{i,t}\\}^{DG}$
Real and reactive power output of DG (stage-1)
$P_{i,t,\phi}^{PV}$
Real power output of BTM PV (stage-1)
$x_{n,t}$
Status of switch connecting $n^{th}$ LG (stage-1)
$y_{i,t}$
Status of switch controlling $i^{th}$ DG
$C_{i,t}$
Start-up cost of $i^{th}$ DG
$\\{P_{i,k,\phi},Q_{i,k,\phi}\\}^{D}$
Real and reactive power load demand (stage-2)
$\\{P_{i,k,\phi},Q_{i,k,\phi}\\}^{ES}$
Real and reactive power output of ES (stage-2)
$\\{P_{i,k},Q_{i,k}\\}^{DG}$
Real and reactive power output of DG (stage-2)
$P_{i,k,\phi}^{PV}$
Real power output of BTM PV (stage-2)
$x_{n,k}$
Status of switch connecting $n^{th}$ LG (stage-2)
$\hat{x}_{n,t}$
Stage-1 schedule of $n^{th}$ switch in $\Delta t$
$\hat{P}_{i,t}^{DG}$
Stage-1 DG schedule in $\Delta t$
$\hat{\epsilon}_{t}$
Forecast error correction factor in $\Delta t$
## I Introduction
The recent increase in extreme weather events encourages utilities to take
measures to improve resiliency, especially at the distribution level, in order
to provide service during such extreme events [1]. One promising technology is
the microgrid, as it facilitates the operation of the healthy part of the
system during an extended outage caused by an extreme event [2, 3]. Formation
of a microgrid becomes especially attractive on a distribution system with a
high penetration of renewables, mainly distributed photovoltaics (PV) [4].
Another promising technology for forming a microgrid on a distribution feeder
on demand is the mobile energy storage (MES) device, as it provides the
flexibility to change location depending on changing grid or customer
conditions [5, 6, 7]. Hence, it becomes quite feasible to form a feeder-level
microgrid during an extended outage (a day or more) [8, 9, 10]. If the feeder
has a large penetration of distributed PV, then forming such a microgrid
becomes even more attractive [5].
Forming a microgrid on a feeder has unique challenges [11, 12]. The main
resources are the mobile generation units that can be brought to the location;
a typical approach is to bring a MES together with a diesel generator (DG).
Both of these resources have limited capacity and energy that need to be
carefully managed [13]. Another issue is that current distribution feeders
have only a limited number of circuit switches for control, which makes it
challenging to ration the load when needed. Having a large amount of PV on the
feeder can help considerably with the microgrid operation; the challenge in
utilizing these resources is that most of them are not visible (i.e., not
monitored) and they are quite intermittent. These conditions make it
challenging to operate a feeder-level microgrid during an extended multi-day
outage. This paper focuses on this problem.
The existing literature addresses only some of the challenges identified
above. The authors in [14, 15] focus on black start of a feeder-level
microgrid by picking up outage load with limited resources, with cold load
pickup considered in [16, 17]. In [18, 19], load restoration over a limited
duration is considered using direct load control to selectively restore
critical loads. In [20], a similar problem is considered with direct load
control, but behind-the-meter (BTM) PV is not included.
In [21, 22], the load restoration problem over a limited duration is
considered, and a multi-microgrid formation and reconfiguration approach is
adopted by leveraging Distributed Energy Resource (DER) flexibility. The DER
flexibility is evaluated and implemented using an aggregator, which
coordinates with the upstream distribution system operator for restoration. To
realize the flexibility of DERs and loads, the authors assume that energy
storage (ES) devices collocated with the PV and thermostatically controllable
loads such as HVAC can be fully leveraged by the Distribution System Operator
(DSO) at each house. Such coordination, demand-side flexibility, and the
concept of DER aggregators are yet to be implemented in many utilities; this
framework also requires significant communication between the aggregators and
the DSO. In [11, 12], realistic existing feeder infrastructure is considered
in restoration, but DER uncertainty is not included in the restoration
process.
This paper aims at the development of a comprehensive microgrid management
scheme for the emerging case of operating a feeder-level microgrid on an
active distribution system to provide service during a severe outage. The main
contribution of the proposed method is that all the main issues and challenges
associated with such a microgrid are considered, and a robust management
scheme is developed to address them. The contributions include:
* •
Realistic distribution system operating conditions are considered: limited
load and PV visibility and controllability, and a limited number of
controllable switches on the feeder.
* •
To make the best use of the main resources for the microgrid - a DG and a
MES - a novel DG fuel management scheme is adopted to ensure service to loads
during peak load conditions over multiple days.
* •
To address the main challenge associated with estimating BTM PV variability, a
new forecast error estimation and adjustment strategy is introduced to correct
the high forecast error, which significantly increases the amount of PV
utilized during restoration.
* •
A dynamic reserve adjustment strategy for the MES is also introduced in order
to securely operate the microgrid under severe cloud cover events and to
minimize unintentional load shedding events.
The rest of the paper is organized as follows: sec-II introduces the feeder
microgrid management problem and the proposed scheme; sec-III outlines the new
robustness enhancement strategies proposed to handle the issues associated
with the BTM PV and multi-day restoration using limited resources; and sec-IV
illustrates the performance of the proposed scheme with a case study, which
includes the IEEE 123 node system and operating conditions based on field
data.
## II Feeder Microgrid Management Scheme
### II-A Operating Conditions
To ensure that realistic operating conditions for a feeder-level microgrid are
included in the management scheme, the following conditions are considered:
* •
Utility-owned mobile devices, a MES and a DG, are quickly deployed at a proper
location on the distribution feeder that has lost power. The MES and DG are
the main resources and are designed to be connected to the feeder at the
substation or another proper location with the necessary infrastructure. The
microgrid controller with the energy management scheme can monitor and control
these resources.
* •
The feeder has a high level of BTM PV, which provides supplemental power for
the microgrid. These resources are not visible to the microgrid controller, as
BTM PV is typically not monitored. Hence, there is a need to forecast at least
the net load (actual load minus the power from PV) for the management of the
microgrid.
* •
The distribution feeder has only a limited number of remotely controllable
sectionalizing switches. The microgrid controller can use these switches to
adjust the load on the feeder, as they divide the loads into load groups that
can be disconnected as needed. The feeder also has some critical loads, which
should have higher priority in the restoration.
### II-B Energy Management Scheme
We propose a 2-stage hierarchical energy management framework, as presented in
figure-1. Stage-1 is the scheduler, which schedules the microgrid resources
for the next period (such as half or one hour) by taking into account the
future load and PV forecasts. Stage-2 is the short-term dispatching stage,
which determines the proper dispatch levels for the MES and the DG for the
next dispatch period (of 1 to 5 minutes). After every dispatch cycle ($\Delta
k$) and scheduling cycle ($\Delta t$), the real-time (RT) measurements are
used to update the initial condition of the next dispatch or rolling horizon
cycle, respectively. These two stages are outlined below.
Figure 1: Proposed Energy Management Framework
#### II-B1 Stage-1: Scheduling Problem
The stage-1 problem solves a receding horizon optimization on the last day of
restoration and a rolling horizon problem on the other days [23], to optimally
allocate the available resources and to control the amount of load picked up
in each interval through the load groups. The horizon is typically 24 hours,
to ensure adequate availability of resources over the entire restoration
period. The important decision variables in this stage are the switch status
$x_{n,t}$ and the scheduled DG output $P^{DG}_{i,t}$. The load and PV at each
node, given by $P^{D/PV}_{i,t,\phi}$, are the stochastic variables obtained
from forecast information. The objective function is shown in (1), where the
first term maximizes the total expected load served, denoted by $x_{n,t}$,
with higher priority to load groups with critical loads (which have higher
weights $w_{n}$), and the second term minimizes the start-up cost of the
diesel generator.
$\max_{x}\quad\sum_{t\in\mathcal{T}}\left[\sum_{n\in\mathcal{N}^{LG}}\sum_{i\in\mathcal{N}_{n}}x_{n,t}w_{n}\sum_{\phi\in\Phi}P_{i,t,\phi}^{D}\Delta
t-\sum_{i\in\mathcal{N}^{DG}}C_{i,t}\right]$ (1)
The constraints are defined for $t\in\mathcal{T}$, $\phi\in\Phi$, and
$n\in\mathcal{N}^{LG}$ unless explicitly stated. Equation (2a) highlights the
real power balance in the network over the entire time horizon $\mathcal{T}$.
The circuit switches $x_{n,t}$ are the binary decision variables. The reactive
power balance in the network is shown in (2b). It is assumed that all BTM PVs
operate at unity power factor with no reactive power injection.
$\displaystyle\sum_{i\in\mathcal{N}^{ES}}P_{i,t,\phi}^{ES}+\sum_{i\in\mathcal{N}^{DG}}\frac{P_{i,t}^{DG}}{3}$
$\displaystyle=\sum_{n\in\mathcal{N}^{LG}}\sum_{i\in\mathcal{N}_{n}}x_{n,t}\left(P_{i,t,\phi}^{D}-P_{i,t,\phi}^{PV}\right)$
(2a)
$\displaystyle\sum_{i\in\mathcal{N}^{ES}}Q_{i,t,\phi}^{ES}+\sum_{i\in\mathcal{N}^{DG}}\frac{Q_{i,t}^{DG}}{3}$
$\displaystyle=\sum_{n\in\mathcal{N}^{LG}}\sum_{i\in\mathcal{N}_{n}}x_{n,t}Q_{i,t,\phi}^{D}$ (2b)
Equations (3a)-(3c) represent the real power, reactive power, and inverter
limits of the energy storage devices $\forall i\in\mathcal{N}^{ES}$. In
equation (3c), $\gamma_{t}\leq 1$ is the reserve factor imposed on the
inverter, which reduces the actual limits of the inverter to account for
forecast and other modeling errors. Further, (3c) is a quadratic constraint,
which is linearized using the $m$-sided polygon technique with $m=6$, as
proposed in [24].
$\displaystyle-
x_{n,t}\underline{P}_{i}^{ES}\leq\sum_{\phi\in\Phi}P_{i,t,\phi}^{ES}$
$\displaystyle\leq x_{n,t}\overline{P}_{i}^{ES}$ (3a) $\displaystyle
0\leq\sum_{\phi\in\Phi}Q_{i,t,\phi}^{ES}$ $\displaystyle\leq
x_{n,t}\overline{Q}_{i}^{ES}$ (3b)
$\displaystyle\left(\sum_{\phi\in\Phi}P_{i,t,\phi}^{ES}\right)^{2}+\left(\sum_{\phi\in\Phi}Q_{i,t,\phi}^{ES}\right)^{2}$
$\displaystyle\leq x_{n,t}\left(\gamma_{t}\overline{S}^{ES}_{i}\right)^{2}$
(3c)
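To make the linearization concrete, the sketch below generates tangent-line cuts that replace the quadratic constraint (3c) with $m$ linear inequalities; it is an illustrative variant of the polygon idea, not necessarily the exact construction of [24], and all numbers are placeholders.

```python
# Generate m linear cuts a*P + b*Q <= c whose intersection is the m-sided
# polygon circumscribing the disk P^2 + Q^2 <= radius^2 (an outer relaxation).
import numpy as np

def polygon_cuts(radius, m=6):
    return [(np.cos(2 * np.pi * k / m), np.sin(2 * np.pi * k / m), radius)
            for k in range(m)]

# e.g., cuts for a 2000 kVA MES with reserve factor gamma_t = 0.8; in the MILP
# each cut becomes a constraint of the form a*P_ES + b*Q_ES <= c * x_n
for a, b, c in polygon_cuts(0.8 * 2000.0, m=6):
    print(f"{a:+.3f}*P {b:+.3f}*Q <= {c:.1f}")
```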
Equations (4a)-(4b) represent the SoC constraints of the battery $\forall
i\in\mathcal{N}^{ES}$. The temporal change in SoC based on the expected output
of the ES is shown in (4a), and the absolute SoC limits are defined in (4b),
where $\eta_{i}$ is the efficiency of the $i^{th}$ ES unit.
$\displaystyle SoC_{i,t}$
$\displaystyle=SoC_{i,t-1}-\frac{\sum_{\phi\in\Phi}P_{i,t,\phi}^{ES}}{\overline{E}_{i}^{ES}\eta_{i}^{ES}}\Delta
t$ (4a) $\displaystyle\underline{SoC}_{i}$ $\displaystyle\leq
SoC_{i,t}\leq\overline{SoC}_{i}$ (4b)
Equations (5a)-(5d) represent the diesel generator constraints $\forall
i\in\mathcal{N}^{DG}$. Equation (5a) represents the real power limits of the
DG, which is dispatched in fixed power factor mode with power factor angle
$\theta_{i}$, as indicated in (5b). The temporal change of fuel based on the
expected dispatch of the diesel generator is shown in (5c), and the fuel
limits are imposed using (5d).
$\displaystyle y_{i,t}\underline{S}_{i}^{DG}\cos\theta_{i}$ $\displaystyle\leq
P_{i,t}^{DG}\leq y_{i,t}\overline{S}_{i}^{DG}\cos\theta_{i}$ (5a)
$\displaystyle Q_{i,t}^{DG}$ $\displaystyle=P_{i,t}^{DG}\tan\theta_{i}$ (5b)
$\displaystyle F_{i,t}$
$\displaystyle=F_{i,t-1}-\left(y_{i,t}\alpha_{i}+\beta_{i}P_{i,t}^{DG}\right)\Delta
t$ (5c) $\displaystyle\underline{F}_{i}$ $\displaystyle\leq
F_{i,t}\leq\overline{F}_{i}$ (5d)
Equation (6a) represents the minimum service duration (MSD) constraint
$\forall n\in\mathcal{N}^{LG}$, $t\in\mathcal{T}\setminus\\{1\\}$. When a load
group is picked up by the microgrid, it has to be supplied for a minimum
duration of $\alpha^{MSD}$. A constraint ensuring the load group connection
sequence maintains radiality is also included.
$\displaystyle\sum_{t^{\prime}=t}^{t+\alpha^{MSD}-1}x_{n,t^{\prime}}$
$\displaystyle\geq\alpha^{MSD}(x_{n,t}-x_{n,t-1})$ (6a)
Equation (7a) represents the minimum up time ($\alpha^{up}$) constraint, and
(7b), (7c) together compute the start-up cost of the diesel generators
$\forall i\in\mathcal{N}^{DG}$. Every time the diesel generator is switched
on, a start-up cost of $C^{up}$ is incurred.
$\displaystyle\sum_{t^{\prime}=t}^{t+\alpha^{up}-1}y_{i,t^{\prime}}$
$\displaystyle\geq\alpha^{up}(y_{i,t}-y_{i,t-1})$ (7a) $\displaystyle C_{i,t}$
$\displaystyle\geq C^{up}\left(y_{i,t}-y_{i,t-1}\right)$ (7b) $\displaystyle
C_{i,t}$ $\displaystyle\geq 0$ (7c)
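To illustrate the overall structure of the stage-1 scheduler, the sketch below encodes a lumped, single-phase version of objective (1) with the power balance (2a), SoC dynamics (4a)-(4b), and fuel dynamics (5c)-(5d) in Python/PuLP (solved with PuLP's bundled CBC solver). The start-up-cost, reactive-power, minimum-service, and minimum-up-time constraints are omitted for brevity, and all data are illustrative placeholders; the paper's own implementation uses Matlab/YALMIP with GUROBI.

```python
# Minimal single-phase sketch of the stage-1 scheduling MILP (illustrative).
import pulp

T = range(24)                    # hourly horizon, i.e. Delta t = 1 h
LG = range(5)                    # load groups
w = [0.01, 0.4, 0.3, 0.2, 0.01]  # priority weights w_n
P_D = {(n, t): 100.0 + 20 * n for n in LG for t in T}            # kW demand
P_PV = {(n, t): 50.0 if 6 <= t <= 18 else 0.0 for n in LG for t in T}

m = pulp.LpProblem("stage1", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (LG, T), cat="Binary")            # switches
y = pulp.LpVariable.dicts("y", T, cat="Binary")                  # DG status
Pdg = pulp.LpVariable.dicts("Pdg", T, lowBound=0, upBound=3200)  # kW
Pes = pulp.LpVariable.dicts("Pes", T, lowBound=-2000, upBound=2000)
SoC = pulp.LpVariable.dicts("SoC", T, lowBound=0.2, upBound=0.9)
F = pulp.LpVariable.dicts("F", T, lowBound=0, upBound=10000)     # fuel (L)

# objective (1) without the start-up cost term
m += pulp.lpSum(w[n] * P_D[n, t] * x[n][t] for n in LG for t in T)

alpha, beta, eta, E = 84.87, 0.20, 0.95, 8000.0
for t in T:
    # power balance (2a): MES + DG serve the switched-in net load
    m += Pes[t] + Pdg[t] == pulp.lpSum(
        x[n][t] * (P_D[n, t] - P_PV[n, t]) for n in LG)
    m += Pdg[t] <= 3200 * y[t]                      # DG runs only if committed
    soc_prev = 0.8 if t == 0 else SoC[t - 1]
    m += SoC[t] == soc_prev - Pes[t] / (E * eta)    # SoC dynamics (4a)
    f_prev = 10000.0 if t == 0 else F[t - 1]
    m += F[t] == f_prev - (alpha * y[t] + beta * Pdg[t])  # fuel dynamics (5c)

m.solve()
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```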
#### II-B2 Stage-2: Dispatching Problem
This stage utilizes a short-term forecast, which is generally more accurate
than the stage-1 forecast, for the actual dispatching of the resources and
switch control closer to real-time operation. The objective function is shown
in (8), where the first term maximizes the amount of load served, with higher
priority to critical loads. Since the switch control is re-evaluated, the
second term minimizes the deviation of the load group status from the stage-1
results $\hat{x}_{n,t}$. The third term minimizes the deviation of the diesel
generator dispatch from the stage-1 scheduled value $\hat{P}^{DG}_{i,t}$.
Here, $k\in\mathcal{K}_{t}$ is the dispatch interval, as indicated in
figure-1.
$\max_{x}\quad\sum_{n\in\mathcal{N}^{LG}}\left[w_{n}x_{n,k}\sum_{i\in\mathcal{N}_{n}}\sum_{\phi\in\Phi}P_{i,k,\phi}^{D}-w_{n}^{sw}\left[\hat{x}_{n,t}-x_{n,k}\right]^{2}\right]\\\
-\sum_{i\in\mathcal{N}^{DG}}\left[\hat{P}_{i,t}^{DG}-P_{i,k}^{DG}\right]^{2}$
(8)
The real power balance constraint is similar to the stage-1 equation (2a) but
defined over $k\in\mathcal{K}_{t}$. The additional term involving
$\lambda_{n}$ and $\hat{\epsilon}_{t}$ is the forecast correction term, which
is explained in section-III-B.
$\sum_{i\in\mathcal{N}^{ES}}P_{i,k,\phi}^{ES}+\sum_{i\in\mathcal{N}^{DG}}\frac{P_{i,k}^{DG}}{3}\\\
=\sum_{n\in\mathcal{N}^{LG}}\left[\sum_{i\in\mathcal{N}_{n}}x_{n,k}\left(P_{i,k,\phi}^{D}-P_{i,k,\phi}^{PV}\right)-\lambda_{n}\frac{\hat{\epsilon}_{t}}{3}\right]$
(9)
The reactive power constraint in (10) is similar to the real power constraint.
Note that there is no forecast error correction term here, as the major
forecast issue comes from BTM PV (as we will see in section-III-B) and there
is no reactive power contribution from PV.
$\sum_{i\in\mathcal{N}^{ES}}Q_{i,k,\phi}^{ES}+\sum_{i\in\mathcal{N}^{DG}}\frac{Q_{i,k}^{DG}}{3}=\sum_{n\in\mathcal{N}^{LG}}\sum_{i\in\mathcal{N}_{n}}x_{n,k}Q_{i,k,\phi}^{D}$
(10)
The constraints (3), (4), and (5) can be imported directly from the stage-1
problem by replacing the subscripts $t,\Delta t,\mathcal{T}$ with $k,\Delta
k,\mathcal{K}_{t}$ to represent the MES and DG constraints.
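A corresponding single-snapshot sketch of the stage-2 dispatch (8)-(10) is given below, again single-phase and with the forecast-correction term omitted; since the tracking terms are quadratic and the switches binary, a MIQP-capable solver is assumed (e.g., GUROBI, which the paper itself uses). All values, including the switch-deviation weight, are illustrative.

```python
# Minimal sketch of the stage-2 tracking dispatch (illustrative).
import cvxpy as cp
import numpy as np

n_lg = 5
w = np.array([0.01, 0.4, 0.3, 0.2, 0.01])          # load-group weights w_n
w_sw = 10.0                                        # switch-deviation weight (assumed)
P_D = np.array([400.0, 300.0, 250.0, 200.0, 150.0])  # kW net demand per group
x_hat = np.ones(n_lg)                              # stage-1 switch schedule
P_dg_hat = 500.0                                   # stage-1 DG set-point (kW)

x = cp.Variable(n_lg, boolean=True)                # switch status x_{n,k}
P_dg = cp.Variable(nonneg=True)                    # DG dispatch
P_es = cp.Variable()                               # MES dispatch (slack, +/-)

objective = cp.Maximize(w @ cp.multiply(x, P_D)        # load served
                        - w_sw * cp.sum_squares(x_hat - x)   # switch tracking
                        - cp.square(P_dg_hat - P_dg))        # DG tracking
constraints = [
    P_es + P_dg == x @ P_D,                        # balance (9), no correction
    P_dg <= 1000.0,                                # DG real-power limit
    cp.abs(P_es) <= 0.8 * 600.0,                   # MES limit, gamma_t = 0.8
]
cp.Problem(objective, constraints).solve(solver=cp.GUROBI)  # any MIQP solver
print(np.round(x.value), float(P_dg.value), float(P_es.value))
```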
## III Enhancements for Robustness
The performance of the proposed scheme depends heavily on the accuracy of the
net load forecast, and as indicated earlier this forecast can have significant
error. The other challenge is the effective scheduling of the limited fuel of
the DG to ensure service to loads over multiple days. To address these
challenges, and thus enhance the robustness of the proposed EMS, we propose
the following strategies:
* •
Multi-Day Fuel Rationing
* •
Learning-based Forecast Correction
* •
Dynamic Reserve Management
### III-A Multi-Day Fuel Rationing
For the generation mix of this feeder microgrid we consider a MES, a DG, and
many distributed BTM PVs. MES serves as the grid forming (GFM) resource due to
its fast dynamic response, and DG serves as the main energy source. As stated
earlier in sec-II-B1, we solve a receding horizon problem on final day of
restoration and rolling horizon problem on all other days with a horizon of 24
hours primarily because smart meter data is available real-time at 15-min
interval. Hence, we can forecast the load in a rolling fashion updated every
30-min to 1hr but PV forecast requires weather information which is impossible
to obtain in a rolling fashion by the utility. Also it is very challenging to
accurately forecast PV irradiance beyond 24 hour period. For this reason we
can only solve a rolling horizon problem with fixed horizon of 24 hours
initially. Since the resources need to be rationed over multiple-days we
propose a multi-day fuel rationing framework that ensures availability of DG
over all days of a restoration process even under significant uncertainty in
load and PV. We develop a fuel management algorithm that can efficiently
ration the fuel over multiple days.
$F_{i,r}=\left.\begin{cases}F_{i,t-1}-\frac{(F_{i,t-1}-F_{f})(\mid\mathcal{T}\mid
d+t-1)}{\mid\mathcal{T}\mid\cdot\mid\mathcal{D}\mid},&\text{for
}d<\mid\mathcal{D}\mid\\\ F_{f}&\text{for }d=\mid\mathcal{D}\mid\\\
\end{cases}\right.$ (11) $F_{i,\mid\mathcal{T}\mid}\leq F_{i,r}$ (12)
Equation (11) defines a piecewise-linear function for all
$i\in\mathcal{N}^{DG}$, $t\in\mathcal{T}$. $F_{i,r}$ is the minimum fuel
reserve target at the end of each control cycle in the stage-1 scheduling
problem, which is enforced as the constraint (12). $F_{i,t-1}$ is the initial
fuel at the start of each control cycle in the stage-1 problem, and $F_{f}$ is
the final reserve desired at the end of the multi-day restoration, which can
be set to a non-zero value to ration some emergency fuel at the end of
restoration, either to supply only critical loads or to resynchronize the
distribution system with the main feeder by picking up all the loads in the
system. This framework can be extended to energy storage devices acting as
grid-following (GFL) resources by applying the same reserve function to the
SoC of the batteries.
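A direct transcription of the reserve target (11), enforced as the end-of-cycle bound (12), is sketched below; the day-index convention ($d$ starting at 1) and the variable names are assumptions for illustration.

```python
# Minimum fuel that DG i must still hold at the end of the current stage-1
# cycle: a piecewise-linear ramp from the current fuel F_prev down to F_f.
def fuel_reserve_target(F_prev, F_f, t, d, n_T, n_D):
    if d < n_D:  # intermediate restoration days
        return F_prev - (F_prev - F_f) * (n_T * d + t - 1) / (n_T * n_D)
    return F_f   # last day: ration exactly down to the final reserve

# e.g. 10000 L initially, 500 L final reserve, 48 half-hour periods, 2 days:
print(fuel_reserve_target(F_prev=10000.0, F_f=500.0, t=1, d=1, n_T=48, n_D=2))
# -> 5250.0, used as the end-of-horizon bound F[i, n_T] <= 5250.0 in (12)
```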
### III-B Learning-based Forecast Correction
The major issue with BTM PV is the lack of real-time data. When individual
houses are net metered, it is easier to forecast the net load, but
disaggregating PV from load and forecasting just the PV component becomes
challenging. For the EMS scheme, an accurate forecast of the net load
variation for each load group is needed. The actual load patterns are usually
quite predictable; it is the PV variability that is challenging to forecast
for each load group. We have adopted an effective PV forecast method proposed
in [25] for this purpose. The method uses the irradiance forecast and the PV
rating at each house to estimate the BTM PV output. The errors of the
individual-house forecasts can become large when aggregated to the feeder
level [26]. Handling such large errors becomes very challenging since the MES
has limited capacity.
Figure 2: High forecast error in short-term and day-ahead forecasts of
aggregated BTM PV. (top) Real-time net load measurement, stage-1, and stage-2
forecasts. (bottom) Total net load forecast error in stage-1 and stage-2
forecasts. The zoomed plot shows that the stage-2 forecast is more accurate
than the stage-1 forecast during no-PV periods, with error close to zero.
Figure 2 shows the error in the aggregated net load forecast, which increases
significantly during the presence of PV (6:00 - 18:00 on day-1 and day-2).
There are two characteristics to the forecast error introduced by the PV. One
is the average forecast error, due to errors in the assumed PV rating,
unaccounted rooftop PV systems, and modeling errors. The other is the
instantaneous random errors introduced by cloud cover events. The errors can
be as high as 2 MW on a 3.5 MW circuit.
To address the average forecast error issue we introduce the learning-based
forecast correction scheme. The scheme uses the battery SoC to estimate the
forecast error in PV, based on the following relationship between the two
quantities:
$\Delta P^{D}_{t}\approx\kappa\Delta SoC_{j,t+1}$ (13)
###### Claim III.1.
In an islanded microgrid with the battery as the slack generator, when the
proposed rolling optimization framework is used for energy management, the
difference between the SoC computed at stage-1 and the real-time SoC
measurement at each dispatch point, given by $\Delta SoC_{j,t+1}$, is
approximately equal to the sum of the average forecast error, average power
loss, and average modeling error, given by $\Delta P^{D}_{t}$, in the dispatch
interval $\Delta t$, with the constant
$\kappa=\frac{\overline{E}^{ES}\eta}{\Delta t}$.
This relationship is derived in the appendix. Using it, the forecast error
estimated over previous iterations is used to adjust the stage-2 forecast in
the next control cycle, as shown in (14), where $K$ is the historical window
length over which the average is taken.
$\hat{\epsilon}_{t}=\frac{\sum_{i=t-K}^{t-1}\Delta P^{D}_{i}}{K}$ (14)
This average value $\hat{\epsilon}_{t}$ is then allocated to each load group
in proportion to its connected load (given by $\lambda_{n}$) and subtracted
from the stage-2 forecast, as shown in (9).
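The correction loop can be summarized in a few lines of Python; the sketch below follows (13)-(14) and the sign convention of the appendix (forecast minus actual), with illustrative numbers and names.

```python
# Estimate the average net-load forecast error from the GFM battery's SoC
# drift (Claim III.1), average it over K cycles, and allocate it to the
# load groups via lambda_n, to be subtracted in the balance equation (9).
import numpy as np

def estimate_error(soc_meas, soc_sched, E_es, eta, dt):
    kappa = E_es * eta / dt                  # kappa from Claim III.1
    return kappa * (soc_meas - soc_sched)    # Delta P^D = forecast - actual

def correction(err_history, K):
    return float(np.mean(err_history[-K:]))  # epsilon_hat_t, eq. (14)

err_history = [120.0, 150.0, 90.0, 200.0]    # kW, from past cycles
eps_t = correction(err_history, K=4)
lambdas = np.array([0.3, 0.25, 0.2, 0.15, 0.1])   # load-group shares
print(lambdas * eps_t)                       # per-group correction used in (9)
```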
### III-C Dynamic Reserve Management
Apart from the large average forecast error during PV periods in fig-2, the
other issue is the large, fast variations caused by cloud cover events, which
are difficult to forecast even with accurate weather information. Since these
are random events, we cannot modify the forecast values to account for them as
we did in sec III-B; instead, we can prepare the system to be robust against
such events.
A common method to account for unknown errors while dispatching is to keep a
power reserve on the generation resource, which is the MES in this case. The
reserve provides head room to supply unknown forecast errors and modeling
errors that are not accounted for in the dispatch. This is incorporated as
$\gamma_{t}$ in (3c), where $\gamma_{t}\in[0,1]$ is the reserve factor for the
MES, which is responsible for the power balance in the microgrid. Even though
reserves help operate the microgrid securely, a large reserve can
significantly reduce the number of customers served.
To handle the forecast errors introduced by cloud cover events optimally, we
propose a dynamic reserve management strategy. The first step is to estimate
the random errors from available data. We use the historical forecast error
estimates $\Delta P^{D}_{[t-q,t]}$ to predict the random spikes using a moving
average (MA) model, as shown in (15). The parameters of the MA model can be
estimated from the forecast error estimates obtained over the previous few
hours.
$\hat{\Delta P^{D}_{t+1}}=\mu+\sum_{i=1}^{q}\theta_{i}\Delta
P_{t-i}^{D}+\Delta P_{t}^{D}$ (15)
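As an illustration, an MA($q$) predictor of the form (15) can be fitted to the recent error estimates with statsmodels (an ARIMA of order $(0,0,q)$ is a moving-average model); the window length, $q=3$, and the synthetic history are assumptions.

```python
# Fit an MA(3) model to recent forecast-error estimates and predict the next
# value, \hat{\Delta P}^D_{t+1}, as in (15).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
err = 50.0 * rng.standard_normal(48) + 20.0  # synthetic Delta P^D history (kW)
fit = ARIMA(err, order=(0, 0, 3)).fit()      # mu + 3 MA terms
dp_next = float(fit.forecast(1)[0])          # predicted spike for t+1
print(dp_next)
```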
The algorithm to estimate the required reserve, given in alg-1, is based on
comparing the previous interval's net load forecast $\hat{P}^{net}_{t}$, given
in (16), to the actual power measurement at the PCC, given by $P^{net}_{t}$,
which is the actual total net load in the circuit.
$\hat{P}^{net}_{t}=\sum_{i\in\mathcal{N}}\left(P^{D}_{i,t}-P^{PV}_{i,t}\right)$
(16)
High reserves are required under two conditions: when the actual load is
higher than expected and when the actual PV is lower than expected. In both
conditions the net load of the microgrid is higher than expected. When the net
load is lower than expected, a high reserve hinders the optimality of the
solution; this is further validated in our case study in sec IV-A. It is
therefore critical to dynamically adjust the reserve to securely and optimally
restore the loads. By comparing the forecasted net load with the actual
measured total net load from the previous control cycle, we can decide the
level of reserve required, as shown in algorithm 1. This is based on the
assumption that the net load behavior in the next control cycle $\Delta t$
will be approximately the same as in the previous control cycle $\Delta t$,
following a trend. When the trend indicates a higher net load than expected,
the reserve is dynamically modified depending on the expected forecast error;
otherwise it is left at the minimum value $\alpha$.
Even though this method addresses the dynamic forecast errors to some extent,
we still need backup protection, such as an under-frequency load trip or
another load-shedding algorithm, to securely operate the microgrid, since it
is very difficult to predict instantaneous cloud cover events from average
power measurements.
Algorithm 1 Dynamic Reserve Management
Input: Net load forecast $\hat{P}^{net}_{t}$, net load measurements $P^{net}_{t}$, and forecast error estimate $\Delta\hat{P}^{D}_{t+1}$
Output: Power reserve factor $\gamma_{t}$
1: for $t=k$ to $\mid\mathcal{H}\mid$ do
2: if $P^{net}_{t}>\hat{P}^{net}_{t}$ then
3: $\gamma_{t}=1-\Delta\hat{P}^{D}_{t+1}$
4: else
5: $\gamma_{t}=1-\alpha$, where $\alpha$ is the minimum reserve desired
6: end if
7: $\gamma_{t}=\mathbb{P}[\gamma_{t}]$, where the operator $\mathbb{P}$ thresholds its input to the set $[\underline{\gamma_{t}},\overline{\gamma_{t}}]$
8: return $\gamma_{t}$
9: end for
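For reference, a direct Python transcription of algorithm 1 follows; the normalization of the error estimate into a dimensionless reserve factor (here, division by the MES kVA rating) is an assumption, since the algorithm statement leaves it implicit.

```python
# Dynamic reserve factor gamma_t per Algorithm 1 (illustrative sketch).
def reserve_factor(p_net_meas, p_net_fcst, dp_hat, s_es,
                   alpha=0.05, g_min=0.6, g_max=1.0):
    if p_net_meas > p_net_fcst:        # trend: net load higher than expected
        gamma = 1.0 - dp_hat / s_es    # head-room for the expected error
    else:
        gamma = 1.0 - alpha            # keep only the minimum reserve
    return min(max(gamma, g_min), g_max)   # threshold to [g_min, g_max]

# e.g. measured 2400 kW vs forecast 2200 kW, 300 kW expected error, 2 MVA MES:
print(reserve_factor(2400.0, 2200.0, 300.0, 2000.0))   # -> 0.85
```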
## IV Case Study and Performance Evaluation
To illustrate the performance of the proposed energy management scheme, a case
study is conducted. The sample system is the IEEE 123 node system shown in
fig-3. The power to the system is assumed to be unavailable due to the outage;
to supply the loads during the outage, the utility brings in a mobile MES and
a DG to form the microgrid. The distribution feeder has 5 controllable
switches, which divide the feeder into 5 load groups as indicated in the
figure. A very high PV penetration of 80-100% is considered in each load zone,
as shown in fig-3. The system resources are summarized in table-I and the
simulation parameters are given in table-II.
TABLE I: Location of Resources
Resource | Location | Power Rating | Energy/Fuel Rating
---|---|---|---
Critical Load | Nodes 48, 65, 76 | 210, 140, 245 kW | -
Battery | Substation | 2000 kW | 8000 kWh
Diesel Generator | Substation | 4000 kW | 10000 L
Rooftop PV | All nodes | 19 - 111 kW | -
Figure 3: Modified IEEE 123 Node Test Feeder
TABLE II: Simulation Parameters
Parameter | Value | Parameter | Value
---|---|---|---
$\Delta t$, $\Delta k$, $\Delta h$ | 30, 5, 1 min | $\mathcal{T}$, $\mathcal{K}_{t}$ | 24 hr, 30 min
$w_{1}$, $w_{5}$ | 0.01 | $\alpha^{up}$ | 1 hr
$w_{2}$, $w_{3}$, $w_{4}$ | 0.4, 0.3, 0.2 | $\alpha^{MSD}$ | 2 hr
$\alpha_{i}$, $\beta_{i}$ $\forall i\in\mathcal{N}^{DG}$ | 84.87, 0.20 | $C^{up}$ | 6
The outage is assumed to last for a 48-hour period starting at midnight. The
load and PV forecasts for the 4-day period are shown in fig. 4. The forecasts
are obtained using the algorithm proposed in [25], trained on Pecan Street
summer data for load, with the corresponding weather data from a nearby
weather station in Texas for the irradiance forecast. As indicated in the
figure, the stage-1 load forecasts are obtained in a rolling manner every 30
min using smart meter data, while the stage-1 irradiance forecast is obtained
day-ahead, since there is no real-time data to update it; the day-ahead
forecast is repeated to obtain a rolling forecast. The stage-2 load and
irradiance are short-term forecasts obtained closer to real time. The total
installed PV capacity at each node is used to obtain the final stage-1 and
stage-2 PV forecasts from the irradiance information. The following inferences
can be made from the load and PV forecast results in fig. 4:
* •
The forecast error in load is small due to the availability of real-time smart
meter data.
* •
The stage-2 load forecast is much closer to the real-time data than the
stage-1 forecast, while the short-term stage-2 PV forecast provides no
improvement over stage-1 due to the unavailability of real-time information to
update the forecast.
* •
The PV is consistently overestimated: the assumed PV rating may not match the
actual installed capacity, which leads to significant error when multiplied
with the irradiance forecast. Also, a single irradiance profile is used to
obtain the output of all PV units, which introduces further forecast error.
* •
Cloud cover events are difficult to forecast from weather data and eventually
show up as instantaneous random forecast errors in the net load.
Figure 4: Total Load Forecast (top), and PV Forecast (bottom) against real-
time data for 4 days.
The simulations are carried out on a PC with an Intel Core i7-11700F CPU @ 4.8
GHz and 64 GB RAM. The proposed schemes are implemented in Matlab using the
YALMIP environment with the GUROBI 9.5 solver. The microgrid is simulated
using OpenDSS, which is interfaced with Matlab through the COM interface. The
simulations are run for 48 hours starting at Day-1 in fig. 4.
To assess the performance of the proposed scheme, the following performance
metrics are used: $P^{CL}_{\%}$ and $P^{NCL}_{\%}$ are the percentages of
critical loads (CLs) and non-critical loads (NCLs) served. $P^{PV}_{\%}$ is
the percentage of available PV utilized during restoration. $T^{CL}$ and
$T^{NCL}$ are the average served durations of CLs and NCLs. $N^{CL}$ is the
number of interruptions in serving critical loads and $N^{\mu G}$ is the
number of times the microgrid is shut down. $I^{\mu G}$ is the total duration
the microgrid was shut down due to unavailability of resources.
### IV-A Performance Evaluation
To demonstrate the effectiveness of the forecast error adjustment and dynamic
reserve management strategies, we considered 4 different cases:
* •
Base Case: Fixed power reserve of 400 kW ($\gamma_{t}=0.8$), no forecast error
correction to adjust the stage-2 forecast, and a fixed target fuel reserve
throughout the two-day restoration, $F_{i,r}=500$ L.
* •
Case-1: Fixed power reserve of 400 kW ($\gamma_{t}=0.8$) throughout the
two-day restoration period, but the stage-2 forecast is corrected in every
stage-1 control cycle based on historical estimation using a window length of
5 hrs. Fuel reserve management as per eqn-(11).
* •
Case-2: 20% of the current stage-1 net load forecast is kept as reserve in
stage-2, no forecast error correction is applied to the stage-2 forecast, and
fuel reserve management as per eqn-(11).
* •
Case-3: The reserve is adjusted dynamically based on algorithm-1, the stage-2
forecast is corrected as in Case-1, and fuel reserve management as per
eqn-(11).
The 48-hour restoration is simulated using the proposed management scheme with
the load and PV profiles shown in fig-4. The results are summarized in
table-IV. To evaluate the performance we introduce additional metrics:
$N^{\mu G}_{UnSch}$ is the number of unscheduled shutdowns of the microgrid
due to under-frequency load shedding, each assumed to last 30 min, and the
total duration of such unscheduled shutdowns is given by $T^{\mu G}_{UnSch}$.
$T^{\mu G}_{Total}$ is the sum of the total scheduled $T^{\mu G}_{Sch}$ and
unscheduled $T^{\mu G}_{UnSch}$ shutdown durations of the microgrid.
TABLE III: Different cases to evaluate the forecast error correction and dynamic reserve management modules
Cases | Error Correction | Fuel Management | Reserve ($1-\gamma_{t}$)
---|---|---|---
Base Case | No | Fixed $F_{i,r}=500$ L | 400 kW
Case-1 | Yes | As per eqn (11) | 400 kW
Case-2 | No | As per eqn (11) | $0.2\sum(\hat{\boldsymbol{P}}^{D}-\hat{\boldsymbol{P}}^{PV})$
Case-3 | Yes | As per eqn (11) | As per algorithm-1
### IV-B Fuel Management
Figure-5 compares, for the day-1 simulation, the fuel usage over the two-day
period and the corresponding DG output between the Base Case with a fixed fuel
reserve and Case-3 with the proposed fuel management. The DG is predominantly
used either to charge the battery while serving low load or to completely
serve the peak load in the circuit. From the figure it is evident that the
proposed fuel scheme ensures a higher fuel reserve for the second day of
restoration compared to the fixed fuel reserve scheme. This directly leads to
about a 7% increase in critical load served, as highlighted in table-IV. The
duration of critical loads served is also considerably increased. This shows
the importance of multi-day fuel rationing when only a day-ahead forecast is
available.
Figure 5: (top) Comparison of DG dispatch between different fuel management
schemes. (bottom) Comparison of DG Fuel usage over two day restoration with
different fuel management schemes.
From the table we see that critical loads are not served 100% of the time;
this does not mean the loads are under outage during the restoration process.
Since this EMS is from a utility perspective, the utility will communicate the
hours during which critical loads will not be served as scheduled outages from
the scheduling framework, and the critical loads will use local generation
during these hours. Nevertheless, we want to minimize this duration, since the
local generation at critical loads can be limited or unavailable.
### IV-C Forecast Error Adjustment and Reserve Management
In table-IV, on both day-1 and day-3 we see that Case-1, with forecast error
adjustment, performs very well across all metrics compared to the Base Case
without any error adjustment. There is a significant increase in both the load
served and the PV utilized during restoration, which highlights the importance
of the forecast error adjustment. Both cases suffer from unscheduled outages
due to violations of the battery limits caused by high forecast error, which
leads to about 2 to 3 hours of unscheduled shutdown on day-1. Unscheduled
outages are an inconvenience to the customer, especially to critical loads,
and hence it is desirable to reduce such unscheduled outages during
restoration.
Figure 6: (top) Average net load forecast error estimation from battery SoC over the day-1 48-hour restoration. (bottom) Comparison of stage-2 net load forecast error before and after correction using the SoC-based average forecast error estimation.
TABLE IV: Case study of energy management with different robustness enhancement modules over multiple days
Metric | Base (Day-1) | Case-1 (Day-1) | Case-2 (Day-1) | Case-3 (Day-1) | Base (Day-3) | Case-1 (Day-3) | Case-2 (Day-3) | Case-3 (Day-3)
---|---|---|---|---|---|---|---|---
$P^{CL}_{\%}$ | 72.73% | 78.13% | 76.78% | 79.2% | 71.69% | 75.59% | 72.59% | 76.44%
$P^{NCL}_{\%}$ | 71.1% | 74.63% | 72.88% | 75.244% | 67.76% | 70.55% | 68.45% | 71.38%
$P^{PV}_{\%}$ | 78.33% | 86.06% | 79.63% | 86.92% | 81.88% | 88.27% | 83.68% | 90.35%
$T^{\mu G}_{sch}$ | 2h 40m | 1h 30m | 2h 10m | 2h 30m | 4h 5m | 4h 5m | 3h 30m | 3h 30m
$N^{\mu G}_{sch}$ | 4 | 4 | 4 | 3 | 6 | 5 | 5 | 2
$T^{CL}$ | 37h 15m | 38h 40m | 38h 15m | 39h 20m | 34h 45m | 36h 5m | 35h 25m | 37h 5m
$N^{CL}$ | 8 | 8 | 8 | 9 | 10 | 8 | 9 | 7
$T^{\mu G}_{UnSch}$ | 1h | 30m | 2h | 0 | 1h 30m | 1h | 1h 30m | 30m
$N^{\mu G}_{UnSch}$ | 2 | 1 | 4 | 0 | 3 | 2 | 3 | 1
$T^{\mu G}_{Total}$ | 3h 40m | 2h 10m | 4h 10m | 2h 30m | 5h 35m | 5h 5m | 5h | 4h
The reason for the better performance of Case-1 is the forecast error
adjustment strategy. Figure-6 shows the forecast error estimated from the SoC
measurements compared against the actual forecast error. The figure highlights
that the estimation strategy is good at capturing the average forecast error
but cannot account for the instantaneous errors. Also, after adjustment the
stage-2 forecast error is close to zero compared to before the correction. The
high average error during PV periods is captured in this case, and by
adjusting the forecast we are able to absorb about 8% more PV on day-1, which
eventually leads to an increase in load served and better overall performance.
The percentage increase in the day-3 results is lower because day-4 is a
cloudy day and the actual PV available is low.
Case-2 is a simple reserve management strategy where a percentage of the
net-load forecast is kept as reserve, with the intuition that a high reserve
is required during high-PV and high-load periods, which are mutually
exclusive. However, Case-2 performs considerably worse than Case-1, which
simply keeps a fixed reserve throughout the restoration. A high reserve
hinders optimality, which is visible in the percentage of load served and PV
absorbed; such a high reserve in Case-2 should at least reduce the number of
unscheduled outages, but it fails to do so compared to Case-1. For this
purpose we use the proposed dynamic reserve management strategy (Case-3),
which retains the benefits of Case-1 (day-1) and sometimes performs better
(day-3) in terms of percentage of load served and PV absorbed; the main
benefit of Case-3, however, is the reduction in the number and duration of
unscheduled outages. It is important to note that the proposed strategy cannot
completely eliminate unscheduled outages, because in some instances the
forecast error is so high that the limited resources cannot cover it through
reserves, and we rely on under-frequency load shedding during these events.
Figure 7: Dynamic reserve adjustment (top) compared against actual forecast
error (bottom)
The reserves of Case-2 and Case-3 are shown in fig-7. Case-2 keeps a high
reserve even during the low forecast error region between 18:00-6:00, which
hinders the performance of the algorithm. Also, during cloud cover events with
high spikes, the reserve is not high enough, which leads to unscheduled
outages. In contrast, Case-3 keeps the reserve at the minimum during low
forecast error regions (no PV) and a high reserve only during significant
cloud cover events that happen in the zero-net-load region, which is the main
cause of battery violations, since the diesel generator is not dispatched
during these low net load conditions. Hence, the algorithm intelligently keeps
a high reserve around 9:00 and 16:00, where the net load is zero. The other
cloud cover events, around 12:00 noon, are not significant, since the actual
PV forecast is higher than the load and keeping a high reserve would only
hinder optimality.
## V Conclusion
This paper proposes a two-stage intelligent energy management strategy for a
feeder microgrid deployed on a distribution feeder to provide service during a
long outage. To assure robust performance, three special schemes have been
introduced to address the unique challenges associated with the management of
these microgrids. The multi-day fuel rationing scheme assures that the load in
the circuit is served over the multiple days of the restoration, which
increases the overall service to critical loads and improves PV utilization.
The novel state-of-charge based forecast error estimation strategy, which
learns the forecast error online and adjusts the near real-time forecast,
significantly increases the BTM PV utilization during the restoration. The
dynamic reserve management strategy is introduced to handle sudden cloud cover
events that can lead to unscheduled outages of the microgrid. A case study on
a sample system illustrates the effectiveness of the proposed scheme: the
proposed error-correction based reserve adjustment is quite effective even
when there is high variability in PV, and it considerably reduces the number
of unscheduled outages during the restoration process.
## References
* [1] “Resilience of the U.S. Electricity System: A Multi-Hazard Perspective,” https://www.energy.gov/policy/downloads/resilience-us-electricity-system-multi-hazard-perspective.
* [2] A. Hussain, V.-H. Bui, and H.-M. Kim, “Microgrids as a resilience resource and strategies used by microgrids for enhancing resilience,” _Applied Energy_ , vol. 240, pp. 56–72, Apr. 2019.
* [3] A. Hirsch, Y. Parag, and J. Guerrero, “Microgrids: A review of technologies, key drivers, and outstanding issues,” _Renewable and Sustainable Energy Reviews_ , vol. 90, pp. 402–411, Jul. 2018.
* [4] S. Booth, “Microgrid-Ready Solar PV - Planning for Resiliency,” _NREL/FS-7A40-70122_ , p. 2, 2017.
* [5] “Mobile Energy Storage Study — Mass.gov,” https://www.mass.gov/service-details/mobile-energy-storage-study.
* [6] H. Abdeltawab and Y. A. I. Mohamed, “Mobile energy storage sizing and allocation for multi-services in power distribution systems,” _IEEE Access_ , vol. 7, pp. 176 613–176 623, 2019.
* [7] J. Kim and Y. Dvorkin, “Enhancing distribution system resilience with mobile energy storage and microgrids,” _IEEE Transactions on Smart Grid_ , 2019.
* [8] L. Che and M. Shahidehpour, “Adaptive formation of microgrids with mobile emergency resources for critical service restoration in extreme conditions,” _IEEE Transactions on Power Systems_ , 2019.
* [9] S. Lei, J. Wang, C. Chen, and Y. Hou, “Mobile emergency generator pre-positioning and real-time allocation for resilient response to natural disasters,” _IEEE Transactions on Smart Grid_ , 2018.
* [10] S. Lei, C. Chen, H. Zhou, and Y. Hou, “Routing and scheduling of mobile power sources for distribution system resilience enhancement,” _IEEE Transactions on Smart Grid_ , 2019.
* [11] N. C. Koutsoukis, P. S. Georgilakis, and N. D. Hatziargyriou, “Service restoration of active distribution systems with increasing penetration of renewable distributed generation,” _IET Generation, Transmission & Distribution_, vol. 13, no. 14, pp. 3177–3187, 2019.
* [12] M.-G. Choi, J.-H. Choi, S.-Y. Yun, and S.-J. Ahn, “MILP-Based Service Restoration Method Utilizing Both Existing Infrastructure and DERs in Active Distribution Networks,” _IEEE Access_ , vol. 10, pp. 36 477–36 489, 2022.
* [13] J. Dugan, S. Mohagheghi, and B. Kroposki, “Application of Mobile Energy Storage for Enhancing Power Grid Resilience: A Review,” _Energies_ , vol. 14, no. 20, p. 6476, Oct. 2021.
* [14] C. Chen, J. Wang, F. Qiu, and D. Zhao, “Resilient Distribution System by Microgrids Formation After Natural Disasters,” _IEEE Transactions on Smart Grid_ , vol. 7, no. 2, pp. 958–966, Mar. 2016.
* [15] S. Meng, R. Roofegari nejad, and W. Sun, “Robust Distribution System Load Restoration with Time-Dependent Cold Load Pickup,” _IEEE Transactions on Power Systems_ , pp. 1–1, 2020.
* [16] C. Chen, J. Wang, and D. Ton, “Modernizing Distribution System Restoration to Achieve Grid Resiliency Against Extreme Weather Events: An Integrated Solution,” _Proceedings of the IEEE_ , vol. 105, no. 7, pp. 1267–1288, Jul. 2017.
* [17] W. Yuan, J. Wang, F. Qiu, C. Chen, C. Kang, and B. Zeng, “Robust Optimization-Based Resilient Distribution Network Planning Against Natural Disasters,” _IEEE Transactions on Smart Grid_ , vol. 7, no. 6, pp. 2817–2826, Nov. 2016.
* [18] H. Momen, A. Abessi, S. Jadid, M. Shafie-khah, and J. P. S. Catalão, “Load restoration and energy management of a microgrid with distributed energy resources and electric vehicles participation under a two-stage stochastic framework,” _International Journal of Electrical Power & Energy Systems_, vol. 133, p. 107320, Dec. 2021.
* [19] S. Poudel and A. Dubey, “Critical Load Restoration Using Distributed Energy Resources for Resilient Power Distribution System,” _IEEE Transactions on Power Systems_ , vol. 34, no. 1, pp. 52–63, Jan. 2019.
* [20] M. R. Kleinberg, K. Miu, and H.-D. Chiang, “Improving Service Restoration of Power Distribution Systems Through Load Curtailment of In-Service Customers,” _IEEE Transactions on Power Systems_ , vol. 26, no. 3, pp. 1110–1117, Aug. 2011.
* [21] W. Liu and F. Ding, “Collaborative Distribution System Restoration Planning and Real-Time Dispatch Considering Behind-the-Meter DERS,” _IEEE Transactions on Power Systems_ , vol. 36, no. 4, pp. 3629–3644, Jul. 2021.
* [22] ——, “Hierarchical Distribution System Adaptive Restoration With Diverse Distributed Energy Resources,” _IEEE Transactions on Sustainable Energy_ , vol. 12, no. 2, pp. 1347–1359, Apr. 2021.
* [23] L. Glomb, F. Liers, and F. Rösel, “A rolling-horizon approach for multi-period optimization,” _European Journal of Operational Research_ , vol. 300, no. 1, pp. 189–206, Jul. 2022.
* [24] H. Ahmadi and J. R. Martí, “Linear Current Flow Equations With Application to Distribution Systems Reconfiguration,” _IEEE Transactions on Power Systems_ , vol. 30, no. 4, pp. 2073–2080, Jul. 2015.
* [25] Y. Li, S. Zhang, R. Hu, and N. Lu, “A meta-learning based distribution system load forecasting model selection framework,” _Applied Energy_ , vol. 294, p. 116991, Jul. 2021.
* [26] B. C. Erdener, C. Feng, K. Doubleday, A. Florita, and B.-M. Hodge, “A review of behind-the-meter solar forecasting,” _Renewable and Sustainable Energy Reviews_ , vol. 160, p. 112224, May 2022.
## Appendix
### V-A Proof of Claim-III.1
###### Proof.
The estimated SoC from the stage-1 problem, $\forall
j\in\mathcal{N}^{ES}_{GFM}$, is given by (17), where $SoC_{j,t}$ is the actual
SoC of the battery obtained from the previous feedback from the system and
$\hat{P}^{D}_{t}$ is the forecasted net load demand in the interval
$[t,t+\Delta t]$.
$\hat{SoC}_{j,t+\Delta t}=SoC_{j,t}-\\\
\frac{\hat{P}^{D}_{t}-\sum_{i\in\mathcal{N}^{DG}}P_{i,t}^{DG}-\sum_{i\in\mathcal{N}^{ES}_{GFL}}P_{i,t}^{ES}}{\overline{E}_{j}^{ES}\eta_{j}}\Delta
t$ (17)
The actual measured SoC of the battery, $\forall
j\in\mathcal{N}^{ES}_{GFM}$, can be formulated as shown in (18).
$SoC_{j,t+\Delta t}=SoC_{j,t}-\\\ \frac{\int_{t}^{t+\Delta
t}\left[P^{total}(t)-\sum_{i\in\mathcal{N}^{DG}}P_{i}^{DG}(t)-\sum_{i\in\mathcal{N}^{ES}_{GFL}}P_{i}^{ES}(t)\right]dt}{\overline{E}_{j}^{ES}\eta_{j}}$
(18)
where $P^{total}(t)$ is the total net demand in the system including power
losses. The difference between the measured SoC and the stage-1 estimate,
given by $\Delta SoC_{j,t+\Delta t}$, can be simplified as shown in eqn-(19).
$\Delta SoC_{j,t+\Delta t}=SoC_{j,t+\Delta t}-\hat{SoC}_{j,t+\Delta t}$
$\Delta SoC_{j,t+\Delta t}=\left[\frac{\hat{P}^{D}_{t}\Delta t-\int_{t}^{t+\Delta
t}P^{total}(t)dt}{\overline{E}_{j}^{ES}\eta_{j}}\right]\\\
-\left[\frac{\sum_{i\in\mathcal{N}^{DG}\cup\mathcal{N}^{ES}_{GFL}}\left(\int_{t}^{t+\Delta t}P_{i}(t)dt-\hat{P}_{i,t}\Delta
t\right)}{\overline{E}_{j}^{ES}\eta_{j}}\right]$ (19)
Now, $\int_{t}^{t+\Delta t}P^{total}(t)dt$ can be approximated by
$\overline{P}^{total}\Delta t$, where $\overline{P}^{total}$ is the average
demand in the interval $[t,t+\Delta t]$. Further, the stage-2 dispatch of the
grid-following resources over the horizon $\mathcal{K}$, $P^{G}_{i,k}$, is
kept close to the scheduled value $\hat{P}^{G}_{i,t}$, $\forall
i\in\mathcal{N}^{G}_{GFL},k\in\mathcal{K},t\in\mathcal{T}$, through the
stage-2 objective (8). This ensures that the stage-2 dispatch is close to the
scheduled value ($P_{i}(t)\approx\hat{P}_{i,t}$), which cancels the second
term in (19).
$\Delta SoC_{j,t+\Delta t}\,\overline{E}^{ES}_{j}\eta_{j}\approx(\hat{P}^{D}_{t}-\overline{P}^{total})\Delta t\quad\text{since }P_{i}(t)\approx\hat{P}_{i,t}.$ (20)
Letting $\Delta P^{D}_{t}:=\hat{P}^{D}_{t}-\overline{P}^{total}$, we obtain
$\Delta P^{D}_{t}\approx\frac{\Delta SoC_{j,t+\Delta t}\,\overline{E}^{ES}_{j}\eta_{j}}{\Delta t},\qquad\text{that is,}\qquad\Delta P^{D}_{t}\approx\kappa\,\Delta SoC_{j,t+\Delta t}.$
Therefore, the average forecast and modeling error in the interval
$[t,t+\Delta t]$ is approximately equal to the difference between the SoC
computed from the feedback measurements and the stage-1 result, scaled by the
constant factor $\kappa=\frac{\overline{E}^{ES}_{j}\eta_{j}}{\Delta t}$. ∎
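A small numerical illustration of this claim on synthetic data (all values assumed) is given below: the SoC drift of the slack battery over one cycle recovers the average net-load forecast error up to the factor $\kappa$.

```python
# Numeric sanity check of eq. (20) on synthetic data.
import numpy as np

E_es, eta, dt = 8000.0, 0.95, 0.5          # kWh, efficiency, hours
kappa = E_es * eta / dt
p_hat = 2000.0                             # forecast net demand (kW)
rng = np.random.default_rng(1)
p_actual = p_hat - 250.0 + 40.0 * rng.standard_normal(600)  # true error ~250 kW

soc0 = 0.8
soc_sched = soc0 - p_hat * dt / (E_es * eta)            # stage-1 estimate (17)
soc_meas = soc0 - p_actual.mean() * dt / (E_es * eta)   # measured SoC (18)
print(kappa * (soc_meas - soc_sched), p_hat - p_actual.mean())  # both ~250 kW
```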
In addition to reporting the quality guarantees of the model along training steps through local losses (cf. Figure <ref>), our experiments revealed that the absolute value difference $\norm{V_{\latentpolicy_{\decoderparameter}}}$ between the original and latent models operating under the latent policy quickly decreases and tends to converge to values in the same range (Figure <ref>).
Absolute value difference $\norm{V_{\latentpolicy_{\decoderparameter}}}$ reported along training steps.
This is consistent with the fact that minimizing the local losses leads to close behaviors (cf. Eq. <ref>) and that the value function is Lipschitz-continuous w.r.t. $\bidistance_{\latentpolicy_{\decoderparameter}}$ (cf. Section <ref>).
§.§ Remark on formal verification
Recall that our bisimulation guarantees come by construction of the latent space.
Essentially, our learning algorithm produces a distilled policy and a latent state space that already come with a guaranteed bisimulation distance between the original MDP and the latent MDP.
This is the crux of how we enable verification techniques like model checking.
In particular, the bisimulation guarantees mean that reachability probabilities in the latent MDP are provably close to those in the original one.
Furthermore, the value difference of (omega-regular) properties (formulated through the mu-calculus) obtained in the two models is bounded by this distance (cf. Sect. <ref> and [Chatterjee et al., 2010]).
Reachability is the key ingredient to model-check MDPs.
Model-checking properties is in most cases performed by reduction to the reachability of components or regions of the MDP: it either consists of (i) iteratively checking the reachability of the parts of the state space satisfying the path formulae that comprise the specification, through a tree-like decomposition of the latter (e.g., for (P,R-)CTL properties, cf. [Baier & Katoen, 2008]), or (ii) checking the reachability of the part of the state space of a product of the MDP with a memory structure or an automaton that embeds the omega-regular property, e.g., for LTL [Baier et al., 2016; Sickert et al., 2016], LTLf [Wells et al., 2020], or GLTL [Littman et al., 2017], among other specification formalisms.
The choice of specification formalism is up to the user and depends on the case study. The scope of this work is learning to distill RL policies with bisimulation guarantees so that model checking can be applied to reason about the behaviors of the agent; for that purpose, supporting reachability is all we need.
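To make the reduction concrete, the following minimal sketch (ours, independent of any model checker) computes a constrained reachability probability (reach $T$ while avoiding $C$) in a toy latent MDP under a fixed latent policy, by fixpoint iteration on the induced Markov chain; the transition matrix is illustrative:

```python
import numpy as np

# P[s, s'] is the transition matrix of the chain induced by the latent
# policy; states 1 and 2 are the (absorbing) target and avoid regions.
P = np.array([[0.8, 0.1, 0.1],
              [0.0, 1.0, 0.0],    # state 1: target set T
              [0.0, 0.0, 1.0]])   # state 2: avoid set C
target, avoid = {1}, {2}

v = np.zeros(len(P))
for s in target:
    v[s] = 1.0
for _ in range(1000):              # fixpoint iteration
    new_v = P @ v
    for s in target:
        new_v[s] = 1.0
    for s in avoid:
        new_v[s] = 0.0
    if np.max(np.abs(new_v - v)) < 1e-12:
        break
    v = new_v
print(v[0])   # probability of reaching T while avoiding C from state 0
```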
§.§ Hyperparameters
WAE-MDP parameters. All components (e.g., functions or distribution locations and scales, see Fig. <ref>) are represented and inferred by neural networks (multilayer perceptrons).
All the networks share the same architecture (i.e., number of layers and neurons per layer).
We use a simple uniform experience replay of size $10^{6}$ to store the transitions and sample them.
The training starts when the agent has collected $10^{4}$ transitions in $\mdp$.
We used minibatches of size $128$ to optimize the objective and applied a minibatch update every time the agent executing $\policy$ had performed $16$ steps in $\mdp$.
We use the recursive $\epsilon$-perturbation trick of [Huang, 2020] with $\epsilon = \nicefrac{3}{4}$: when an episode ends, it restarts from the initial state with probability $\nicefrac{1}{4}$; before re-starting an episode, the time spent in the reset state labeled with $\labelset{reset}$ follows then the geometric distribution with expectation $\nicefrac{\epsilon}{1 - \epsilon} = 3$.
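For illustration, a small sketch (ours) of sampling the reset-state sojourn time induced by this $\epsilon$-perturbation trick:

```python
import numpy as np

rng = np.random.default_rng(0)

# With ε = 3/4: an episode restarts immediately with probability 1/4;
# otherwise the agent lingers in the labeled reset state for a
# geometrically distributed number of steps with expectation ε/(1-ε) = 3.
def reset_duration(eps: float = 0.75) -> int:
    steps = 0
    while rng.random() < eps:   # stay in the reset state w.p. ε
        steps += 1
    return steps                # 0 means immediate restart

durations = [reset_duration() for _ in range(100_000)]
print(np.mean(durations))       # ≈ 3.0
```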
We chose the same latent state-action space size as [Delgrange et al., 2022], except for LunarLander, which we decreased to $\log_2 \left| \latentstates \right| = 14$ and $\left|\latentactions\right| = 3$ to improve the scalability of the verification.
VAE-MDP parameters. For the comparison of Sect. <ref>, we used the exact same VAE-MDP hyperparameter set as prescribed by [Delgrange et al., 2022], except for the state-action space of LunarLander, which we also changed for scalability and fair comparison purposes.
[The code for conducting the VAE-MDPs experiments is available at <https://github.com/florentdelgrange/vae_mdp> (GNU General Public License v3.0).]
Hyperparameter search. To evaluate our WAE-MDPs, we performed a search over the parameter space defined in Table <ref>.
The best parameters found (in terms of trade-off between performance and latent quality) are reported in Table <ref>.
We used two different optimizers for minimizing the loss (referred to as the minimizer) and for computing the Wasserstein terms (referred to as the maximizer).
We used Adam [Kingma & Ba, 2014] for both, but allowed different learning rates $\textsc{Adam}_{\alpha}$ and exponential decays $\textsc{Adam}_{\beta_1}, \textsc{Adam}_{\beta_2}$.
We also found that a polynomial decay of $\textsc{Adam}_{\alpha}$ (e.g., to $10^{-5}$ over $4\cdot 10^{5}$ steps) is a good practice to stabilize the learning curves, but it is not necessary to obtain high-quality, well-performing distillations.
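A minimal sketch of this two-optimizer setup, assuming PyTorch and toy stand-in objectives (the real losses are the WAE-MDP terms; all concrete values are illustrative):

```python
import torch

theta = [torch.nn.Parameter(torch.randn(8))]   # model parameters
phi = [torch.nn.Parameter(torch.randn(8))]     # Lipschitz-network parameters

# Separate Adam instances with their own learning rates and betas.
minimizer = torch.optim.Adam(theta, lr=3e-4, betas=(0.5, 0.999))
maximizer = torch.optim.Adam(phi, lr=1e-4, betas=(0.0, 0.999))

def poly_decay(step: int, total: int = 400_000,
               lr0: float = 3e-4, lr1: float = 1e-5) -> float:
    frac = min(step / total, 1.0)
    return lr1 + (lr0 - lr1) * (1.0 - frac)

for step in range(1000):
    for group in minimizer.param_groups:       # decay the minimizer's rate
        group["lr"] = poly_decay(step)
    critic = (phi[0] * theta[0].detach()).sum()  # toy Wasserstein term
    maximizer.zero_grad()
    (-critic).backward()                         # gradient ascent on the critic
    maximizer.step()
    loss = (theta[0] ** 2).sum() + (phi[0].detach() * theta[0]).sum()
    minimizer.zero_grad()
    loss.backward()                              # gradient descent on the loss
    minimizer.step()
```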
Concerning the continuous relaxation of discrete distributions,
we used a different temperature for each distribution, as [Maddison et al., 2017] pointed out that doing so is valuable to improve the results.
We further followed the guidelines of [Maddison et al., 2017] to choose the interval of temperatures and did not schedule any annealing scheme (in contrast to VAE-MDPs).
Essentially, the search reveals that the regularizer scale factors $\beta_{\cdot}$ (defining the optimization direction) as well as the encoder and latent transition temperatures are important for improving the performance of distilled policies.
For the encoder temperature, we found a sweet spot at $\temperature_{\scriptscriptstyle \embed_{\encoderparameter}} = \nicefrac{2}{3}$, which provides the best performance in general, whereas the choices of $\temperature_{\scriptscriptstyle \latentprobtransitions_{\decoderparameter}}$ and $\beta_{\cdot}$ are (latent-)environment-dependent.
The importance of the temperature parameters for the continuous relaxation of discrete distributions is consistent with the results of [Maddison et al., 2017], revealing that the success of the relaxation depends on the choice of the temperature for the different latent space sizes.
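For reference, a short sketch (ours) of sampling from the temperature-controlled continuous relaxation of a categorical distribution [Maddison et al., 2017]:

```python
import torch
import torch.nn.functional as F

def concrete_sample(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    # Gumbel perturbation followed by a tempered softmax.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    return F.softmax((logits + gumbel) / temperature, dim=-1)

logits = torch.tensor([1.0, 0.0, -1.0])
sharp = concrete_sample(logits, temperature=2/3)  # sharper: closer to one-hot
flat = concrete_sample(logits, temperature=5.0)   # flatter: closer to uniform
print(sharp, flat)
```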
Hyperparameter search space ($\temperature_X$ refers to the temperature used for component $X$):
* $\textsc{Adam}_{\alpha}$ (minimizer): $\set{0.0001, 0.0002, 0.0003, 0.001}$
* $\textsc{Adam}_{\alpha}$ (maximizer): $\set{0.0001, 0.0002, 0.0003, 0.001}$
* $\textsc{Adam}_{\beta_1}$: $\set{0, 0.5, 0.9}$
* $\textsc{Adam}_{\beta_2}$: $\set{0.9, 0.999}$
* neurons per layer: $\set{64, 128, 256, 512}$
* number of hidden layers: $\set{1, 2, 3}$
* activation: $\set{\text{ReLU}, \text{Leaky ReLU}, \text{tanh}, \frac{\text{softplus}\fun{2x + 2}}{2} - 1 \, \textit{(smooth ELU)}}$
* $\beta_{\localtransitionloss{\stationary{\policy}}}$: $\set{10, 25, 50, 75, 100}$
* $\beta_{\steadystateregularizer{\policy}}$: $\set{10, 25, 50, 75, 100}$
* $\ncritic$: $\set{5, 10, 15, 20}$
* $\delta$: $\set{10, 20}$
* use $\varepsilon$-mimic (cf. [Delgrange et al., 2022]): $\set{\text{True}, \text{False}}$ (if True, a decay rate of $10^{-5}$ is used)
* $\temperature_{\scriptscriptstyle \latentprobtransitions_{\decoderparameter}}$: $\set{0.1, \nicefrac{1}{3}, \nicefrac{1}{2}, \nicefrac{2}{3}, \nicefrac{3}{5}, 0.99}$
* $\temperature_{\scriptscriptstyle \embed_{\encoderparameter}}$: $\set{0.1, \nicefrac{1}{3}, \nicefrac{1}{2}, \nicefrac{2}{3}, \nicefrac{3}{5}, 0.99}$
* $\temperature_{\scriptscriptstyle \latentpolicy_{\decoderparameter}}$: $\set{\nicefrac{1}{\left| \latentactions \right| - 1}, \nicefrac{1}{\fun{\left| \latentactions \right| - 1} \cdot 1.5}}$
* $\temperature_{\scriptscriptstyle \actionencoder}$: $\set{\nicefrac{1}{\left| \latentactions \right| - 1}, \nicefrac{1}{\fun{\left| \latentactions \right| - 1} \cdot 1.5}}$
Final hyperparameters used to evaluate our WAE-MDPs in Sect. <ref>:
CartPole MountainCar Acrobot LunarLander Pendulum
$\log_2 \left| \latentstates \right|$ 9 10 13 14 13
$\left| \latentactions \right|$ $2=\left|\actions\right|$ $2=\left|\actions\right|$ $3=\left|\actions\right|$ $3$ $3$
activation tanh ReLU Leaky ReLU ReLU ReLU
layers $[64, 64, 64]$ $[512, 512]$ $[512, 512]$ $[256]$ $[256, 256, 256]$
$\textsc{Adam}_{\alpha}$ (minimizer) $0.0002$ $0.0001$ $0.0002$ $0.0003$ $0.0003$
$\textsc{Adam}_{\alpha}$ (maximizer) $0.0002$ $0.0001$ $0.0001$ $0.0003$ $0.0003$
$\textsc{Adam}_{\beta_1}$ $0.5$ $0$ $0$ $0$ $0.5$
$\textsc{Adam}_{\beta_2}$ $0.999$ $0.999$ $0.999$ $0.999$ $0.999$
$\beta_{\localtransitionloss{\stationary{\policy}}}$ $10$ $25$ $10$ $50$ $25$
$\beta_{\steadystateregularizer{\policy}}$ $75$ $100$ $10$ $100$ $25$
$\ncritic$ 5 20 20 15 5
$\delta$ $20$ $10$ $20$ $20$ $10$
$\varepsilon$ $0$ $0$ $0$ $0$ $0.5$
$\temperature_{\scriptscriptstyle\latentprobtransitions_{\decoderparameter}}$ $\nicefrac{1}{3}$ $\nicefrac{1}{3}$ $0.1$ $0.75$ $\nicefrac{2}{3}$
$\temperature_{\scriptscriptstyle\embed_{\encoderparameter}}$ $\nicefrac{1}{3}$ $\nicefrac{2}{3}$ $\nicefrac{2}{3}$ $\nicefrac{2}{3}$ $\nicefrac{2}{3}$
$\temperature_{\scriptscriptstyle\latentpolicy_{\decoderparameter}}$ $\nicefrac{2}{3}$ $\nicefrac{1}{3}$ $0.5$ $0.5$ $0.5$
$\temperature_{\scriptscriptstyle \actionencoder}$ / / / $\nicefrac{1}{3}$ $\nicefrac{1}{3}$
Labeling functions. We used the same labeling functions as those described by [Delgrange et al., 2022].
For completeness, we recall the labeling function used for each environment in Table <ref>.
Time to failure properties.
Based on the labeling described in Table <ref>, we formally detail the time to failure properties checked in Sect. <ref> whose results are listed in Table <ref> for each environment.
Let $\labelset{Reset} = \set{\mathsf{reset}} = \tuple{0, \dots, 1}$ (we assume here that the last bit indicates whether the current state is a reset state or not) and define $\state \models \labelset{L}_1 \wedge \labelset{L}_2$ iff $\state \models \labelset{L}_1$ and $\state \models \labelset{L}_2$ for any $\state \in \states$; then
* CartPole: $\varphi = \until{\neg \labelset{Reset}}{\labelset{Unsafe}}$, where $\labelset{Unsafe} = \tuple{1, 1, 0}$
* MountainCar: $\varphi = \until{\neg \labelset{Goal}}{\labelset{Reset}}$, where $\labelset{Goal} = \tuple{1, 0, 0, 0}$
* Acrobot: $\varphi = \until{\neg \labelset{Goal}}{\labelset{Reset}}$, where $\labelset{Goal} = \tuple{1, 0, \dots, 0}$
* LunarLander: $\varphi = \until{\neg \labelset{SafeLanding}}{\labelset{Reset}}$, where $\labelset{SafeLanding} = \labelset{GroundContact} \wedge \labelset{MotorsOff}$, $\labelset{GroundContact} = \tuple{0, 1, 0, 0, 0, 0, 0}$, and $ \labelset{MotorsOff} = \tuple{0, 0, 0, 0, 0, 1, 0}$
* Pendulum: $\varphi = \eventually\fun{\neg\mathsf{Safe} \wedge \ltlnext \mathsf{Reset}}$,
where $\labelset{Safe} = \tuple{1, 0, 0, 0, 0}$,
$\eventually \labelset{T} = \neg \until{\emptyset}{\labelset{T}}$, and
$\state_{i} \models \ltlnext \labelset{T}$ iff $\state_{i + 1} \models \labelset{T}$,
for any $\labelset{T} \subseteq \atomicprops, {{\state}_{i : \infty}, {\action}_{i : \infty} }\in \inftrajectories{\mdp}$.
Intuitively, $\varphi$ denotes the event of ending an episode in an unsafe state, just before resetting the environment, which means that either the agent never reached the safe region or it reached and left it at some point.
Formally, $\varphi =
\set{\seq{\state}{\infty}, \seq{\action}{\infty} \, | \, \exists i \in \N, \state_i \not\models \labelset{Safe}\, \wedge \, \state_{i + 1} \models \labelset{Reset} } \subseteq \inftrajectories{\mdp}$.
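The following sketch (ours, independent of any model-checking backend) evaluates such a constrained reachability event on a finite trace of binary labels, using the CartPole property as an example:

```python
from typing import Callable, Sequence, Tuple

Label = Tuple[int, ...]

def sat_until(trace: Sequence[Label],
              avoid: Callable[[Label], bool],
              target: Callable[[Label], bool]) -> bool:
    """Constrained reachability (not avoid) U target on a finite trace."""
    for lab in trace:
        if target(lab):
            return True
        if avoid(lab):
            return False
    return False

# CartPole: φ = (¬Reset) U Unsafe, with Unsafe = <1, 1, 0> and the last
# bit flagging the reset state.
unsafe = lambda l: l[0] == 1 and l[1] == 1
reset = lambda l: l[-1] == 1

trace = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 0, 1)]
print(sat_until(trace, avoid=reset, target=unsafe))  # True: failure before reset
```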
Labeling functions for the OpenAI environments considered in our experiments [Delgrange et al., 2022]: for each environment, we give a short description of the state space $\states$ and the meaning of each atomic proposition in $\labels\fun{\vect{\state}} = \tuple{p_1, \dots, p_n, p_{\mathsf{reset}}}$, for $\vect{\state} \in \states$. Recall that labels are binary encoded, for $n = |\atomicprops| - 1$ (one bit is reserved for $\mathsf{reset}$) and $p_{\mathsf{reset}} = 1$ iff $\vect{\state}$ is a reset state (cf. Appendix <ref>).
* CartPole. State: $\vect{\state}_1$: cart position; $\vect{\state}_2$: cart velocity; $\vect{\state}_3$: pole angle (rad); $\vect{\state}_4$: pole velocity at tip. Propositions: $p_1 = \condition{\vect{\state}_1 \geq 1.5}$: unsafe cart position; $p_2 = \condition{\vect{\state}_3 \geq 0.15}$: unsafe pole angle.
* MountainCar. State: $\vect{\state}_1$: position; $\vect{\state}_2$: velocity. Propositions: $p_1 = \condition{\vect{\state}_1 > 1.5}$: target position; $p_2 = \condition{\vect{\state}_1 \geq \nicefrac{-1}{2}}$: right-hand side of the mountain; $p_3 = \condition{\vect{\state}_2 \geq 0}$: car going forward.
* Acrobot. State: let $\theta_1, \theta_2 \in \mathopen[0, 2\pi \mathclose]$ be the angles of the two rotational joints; $\vect{\state}_1 = \cos\fun{\theta_1}$, $\vect{\state}_2 = \sin\fun{\theta_1}$, $\vect{\state}_3 = \cos\fun{\theta_2}$, $\vect{\state}_4 = \sin\fun{\theta_2}$, $\vect{\state}_5$: angular velocity 1, $\vect{\state}_6$: angular velocity 2. Propositions: $p_1= \condition{-\vect{\state}_1 -\vect{\state}_3 \cdot \vect{\state}_1 + \vect{\state}_4 \cdot \vect{\state}_2 > 1}$: RL agent target; $p_2 = \condition{\vect{\state}_1 \geq 0}$: $\theta_1 \in [0, \nicefrac{\pi}{2}] \cup [\nicefrac{3\pi}{2}, 2\pi]$; $p_3 = \condition{\vect{\state}_2 \geq 0}$: $\theta_1 \in [0, \pi]$; $p_4 = \condition{\vect{\state}_3 \geq 0}$: $\theta_2 \in [0, \nicefrac{\pi}{2}] \cup [\nicefrac{3\pi}{2}, 2\pi]$; $p_5 = \condition{\vect{\state}_4 \geq 0}$: $\theta_2 \in [0, \pi]$; $p_6 = \condition{\vect{\state}_5 \geq 0}$: positive angular velocity (1); $p_7 = \condition{\vect{\state}_6 \geq 0}$: positive angular velocity (2).
* Pendulum. State: let $\theta \in \mathopen[0, 2\pi \mathclose]$ be the joint angle; $\vect{\state}_1 = \cos\fun{\theta}$, $\vect{\state}_2 = \sin\fun{\theta}$, $\vect{\state}_3$: angular velocity. Propositions: $p_1 = \condition{\vect{\state}_1 \geq \cos\fun{\nicefrac{\pi}{3}}}$: safe joint angle; $p_2 = \condition{\vect{\state}_1 \geq 0}$: $\theta \in [0, \nicefrac{\pi}{2}] \cup [\nicefrac{3\pi}{2}, 2\pi]$; $p_3 = \condition{\vect{\state}_2 \geq 0}$: $\theta \in [0, \pi]$; $p_4 = \condition{\vect{\state}_3 \geq 0}$: positive angular velocity.
* LunarLander. State: $\vect{\state}_1$: horizontal coordinates; $\vect{\state}_2$: vertical coordinates; $\vect{\state}_3$: horizontal speed; $\vect{\state}_4$: vertical speed; $\vect{\state}_5$: ship angle; $\vect{\state}_6$: angular speed; $\vect{\state}_7$: left leg contact; $\vect{\state}_8$: right leg contact. Propositions: $p_1$: unsafe angle; $p_2$: leg ground contact; $p_3$: lands rapidly; $p_4$: left inclination; $p_5$: right inclination; $p_6$: motors shut down.
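As an illustration of the table above, a labeling function for CartPole could be sketched as follows (ours; the reset flag is assumed to come from the environment wrapper):

```python
import numpy as np

def label_cartpole(state: np.ndarray, is_reset: bool) -> tuple:
    # Map a state vector to the binary label <p1, p2, p_reset>.
    p1 = int(state[0] >= 1.5)    # unsafe cart position
    p2 = int(state[2] >= 0.15)   # unsafe pole angle (rad)
    return (p1, p2, int(is_reset))

print(label_cartpole(np.array([1.7, 0.0, 0.2, 0.0]), is_reset=False))  # (1, 1, 0)
```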
§ ON THE CURSE OF VARIATIONAL MODELING
Posterior collapse is a well-known issue occurring in variational models (see, e.g., [Alemi et al., 2018; Tolstikhin et al., 2018; He et al., 2019; Dong et al., 2020]), which intuitively results in a degenerate local optimum where the model learns to ignore the latent space and uses only the reconstruction functions (i.e., the decoding distribution) to optimize the objective.
VAE-MDPs are no exception, as pointed out in the original paper [Delgrange et al., 2022].
Formally, VAE- and WAE-MDPs optimize their objective by minimizing two losses: a reconstruction cost plus a regularizer term which penalizes a discrepancy between the encoding distribution and the dynamics of the latent space model.
In VAE-MDPs, the former corresponds to the distortion and the latter to the rate of the variational model (further details are given in [Alemi et al., 2018; Delgrange et al., 2022]), while in our WAE-MDPs, the former corresponds to the raw transition distance and the latter to both the steady-state and transition regularizers.
Notably, the rate minimization of VAE-MDPs involves regularizing a stochastic embedding function $\embed_{\encoderparameter}\fun{\sampledot \mid \state}$ point-wise, i.e., for all different input states $\state \in \states$ drawn from the interaction with the original environment.
In contrast, the latent space regularization of the WAE-MDP involves the marginal embedding distribution $\encodersymbol_{\encoderparameter}$ where the embedding function $\embed_{\encoderparameter}$ is not required to be stochastic.
[Alemi et al., 2018] showed that posterior collapse occurs in VAEs when the rate of the variational model is close to zero, leading to low-quality representation.
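To make the rate/distortion terminology concrete, a small sketch (ours) of the rate for independent Bernoulli latent bits; posterior collapse manifests as a rate near zero:

```python
import torch

def rate_nats(q_probs: torch.Tensor, p_probs: torch.Tensor) -> torch.Tensor:
    """KL(Bernoulli(q) || Bernoulli(p)), summed over independent bits."""
    q = q_probs.clamp(1e-6, 1 - 1e-6)
    p = p_probs.clamp(1e-6, 1 - 1e-6)
    return (q * (q / p).log() + (1 - q) * ((1 - q) / (1 - p)).log()).sum(-1)

q = torch.full((6,), 0.5)   # collapsed encoder: ignores its input entirely
p = torch.full((6,), 0.5)   # latent prior/dynamics
print(rate_nats(q, p))      # 0.0: a zero rate, the collapse signature
```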
Figure: comparison of the VAE-MDP in the CartPole environment (i) when the distortion and the rate are minimized as is (vanilla model) and (ii) when it makes use of annealing schemes, entropy regularization, and prioritized experience replay to avoid posterior collapse (cf. [Delgrange et al., 2022]). Panels: latent space distribution along training steps, where the intensity of the blue hue corresponds to the frequency of latent states produced from $\embed_{\encoderparameter}$ during training (the vanilla model collapses to a single state); rate of the variational model; distortion of the variational model; average point-wise entropy of $\embed_{\encoderparameter}\fun{\sampledot \mid \state}$, for $\state \in \states$ drawn from the interaction with the original environment; and performance of the resulting distilled policy $\latentpolicy_{\decoderparameter}$ when deployed in the original environment (averaged over 30 episodes). While the former model clearly fails to learn a useful latent representation, the latter does so meticulously and smoothly in two distinguishable phases: first, $\embed_{\encoderparameter}$ focuses on fairly distributing the latent space, setting the stage for the concrete optimization occurring from step $4 \cdot 10^5$, where the entropy of $\embed_{\encoderparameter}$ is lowered, which allows the rate of the variational model to move away from zero. Five instances of the models are trained with different random seeds, with the same hyperparameters as in Sect. <ref>.
Posterior collapse in VAE-MDPs. We illustrate the sensitivity of VAE-MDPs to the posterior collapse problem in Fig. <ref>, through the CartPole environment (in fact, the phenomenon of collapsing to a few states occurs for all the environments considered in this paper when their prioritized experience replay is not used): minimizing the distortion and the rate as is yields an embedding function which deterministically maps every input state to the same sink latent state (cf. Fig. <ref>).
Precisely, there is a latent state $\latentstate \in \latentstates$ so that $\embed_{\encoderparameter}\fun{\latentstate \mid \state} \approx 1$ and $\latentprobtransitions_{\decoderparameter}\fun{\latentstate \mid \latentstate, \latentaction} \approx 1$, regardless of the state $\state \in \states$ and action $\latentaction \in \latentactions$.
This is a form of posterior collapse: the resulting rate quickly drops to zero (cf. Fig. <ref>), and the resulting latent representation carries no information at all.
This phenomenon is handled in VAE-MDPs by (i) using prioritized replay buffers that allow focusing on inputs that led to a bad representation, and (ii) modifying the objective function for learning the latent space model (the so-called evidence lower bound [Hoffman et al., 2013; Kingma & Welling, 2013], or ELBO for short) and setting up annealing schemes to eventually recover the ELBO at the end of the training process.
Consequently, the resulting learning procedure focuses primarily on fairly distributing the latent space, to prevent it from collapsing to a single latent state, to the detriment of learning the dynamics of the environment and the distillation of the RL policy.
The annealing scheme then lets the model smoothly learn to use the latent space to maximize the ELBO, and consequently achieve a lower distortion at the "price" of a higher rate.
Impact of the resulting learning procedure. The aforementioned annealing process, used to prevent every state from collapsing to the same representation, possibly induces a high-entropy embedding function (Fig. <ref>), which further complicates the learning of the model dynamics and the distillation in the first stage of the training process.
In fact, in this particular case, one can observe that the entropy reaches its maximal value, which yields a fully random state embedding function.
Recall that the VAE-MDP latent space is learned through independent Bernoulli distributions.
Fig. <ref> reports values centered around $4.188$ in the first training phase, which corresponds to the entropy of the state embedding function when $\embed_{\encoderparameter}\fun{\sampledot \mid \state}$ is uniformly distributed over $\latentstates$ for any state $\state \in \states$: $H\fun{\embed_{\encoderparameter}\fun{\sampledot \mid \state}} = \sum_{i=0}^{k}\left[- p_i\log p_i - \fun{1 - p_i} \log\fun{1 - p_i}\right] = 4.188$, where $p_i = \nicefrac{1}{2}$ for all $i$ and $k = \log_2 \left| \latentstates \right| - \left| \atomicprops \right| = 6$.
The rate (Fig. <ref>) drops to zero since the divergence pulls the latent dynamics towards this high entropy (yet another form of posterior collapse), which hinders the latent space model to learn a useful representation.
However, the annealing scheme increases the rate importance along training steps, which enables the optimization to eventually leave this local optimum (here around $4\cdot10^5$ training steps).
This allows the learning procedure to leave the zero-rate spot, reduce the distortion (Fig. <ref>), and finally distill the original policy (Fig. <ref>).
As a result, the whole engineering required to mitigate posterior collapse slows down the training procedure.
This phenomenon is reflected in Fig. <ref>: VAE-MDPs need several steps to stabilize and set the stage for the concrete optimization, whereas WAE-MDPs have no such requirements since they naturally do not suffer from collapsing issues (cf. Fig. <ref>), and are consequently faster to train.
Lack of representation guarantees. On the theoretical side, since VAE-MDPs are optimized via the ELBO and the local losses via the related variational proxies, VAE-MDPs do not leverage the representation quality guarantees induced by local losses (Eq. <ref>) during the learning procedure, as explicitly pointed out by [Delgrange et al., 2022]. In contrast to WAE-MDPs, when two original states are embedded into the same latent, abstract state, the former are not guaranteed to be bisimilarly close (i.e., the agent is not guaranteed to behave the same way from those two states by executing the policy); these proxies thus do not prevent original states with distant values from collapsing together to the same latent representation.
Notation.
* $\mdp = \mdptuple$: MDP $\mdp$ with state space $\states$, action space $\actions$, transition function $\probtransitions$, labeling function $\labels$, atomic proposition space $\atomicprops$, and initial state $\sinit$
* $\mdp_\state$: MDP obtained by replacing the initial state of $\mdp$ by $\state \in \states$
* $\trajectory = \trajectorytuple{\state}{\action}{T}$: trajectory
* $\inftrajectories{\mdp}$: set of infinite trajectories of $\mdp$
* $\policy$: memoryless policy $\policy \colon \states \to \distributions{\actions}$
* $\distributions{\measurableset}$: set of measures over a complete, separable metric space $\measurableset$
* $\condition{[\textit{cond}]}$: indicator function: $1$ if the statement [cond] is true, and $0$ otherwise
* $f_{\decoderparameter}$: a function $f_{\decoderparameter} \colon \measurableset \to \R$ modeled by a neural network, parameterized by $\decoderparameter$, where $\measurableset$ is any measurable set
* $\Prob_{\policy}^{\mdp}$: unique probability measure induced by the policy $\policy$ in $\mdp$ on the Borel $\sigma$-algebra over measurable subsets of $\inftrajectories{\mdp}$
* $\mpolicies{\mdp}$: set of memoryless policies of $\mdp$
* $\stationary{\policy}$: stationary distribution of $\mdp$ induced by the policy $\policy$
* $\discount$: discount factor in $\mathopen[0, 1\mathclose]$
* $\state$: state in $\states$
* $\action$: action in $\actions$
* $\stationary{\policy}^{t}$: limiting distribution of the MDP defined as $\stationary{\policy}^{t}\fun{\state' \mid \state} = \Prob_{\policy}^{\mdp_{\state}}\fun{\set{\seq{\state}{\infty}, \seq{\action}{\infty} \mid \state_t = \state'}}$, for any source state $\state \in \states$
* $\valuessymbol{\policy}{\sampledot}$: value function for the policy $\policy$
* $\until{\labelset{C}}{\labelset{T}}$: constrained reachability event
* $\latentmdp = \latentmdptuple$: latent MDP with state space $\latentstates$, action space $\latentactions$, reward function $\latentrewards$, labeling function $\latentlabels$, atomic proposition space $\atomicprops$, and initial state $\zinit$
* $\latentstate$: latent state in $\latentstates$
* $\latentaction$: latent action in $\latentactions$
* $\embed$: state embedding function, from $\states$ to $\latentstates$
* $\embeda$: action embedding function, from $\latentstates \times \latentactions$ to $\actions$
* $\tuple{\latentmdp, \embed, \embeda}$: latent space model of $\mdp$
* $\distance_{\latentstates}$: distance metric over $\latentstates$
* $\latentpolicy$: latent policy $\latentpolicy\colon \latentstates \to \actions$; can be executed in $\mdp$ via $\embed$: $\latentpolicy\fun{\sampledot \mid \embed\fun{\state}}$
* $\localtransitionloss{\stationary{}}$: local transition loss under distribution $\stationary{}$
* $\localrewardloss{\stationary{}}$: local reward loss under distribution $\stationary{}$
* $\embed\probtransitions$: distribution of drawing $\state' \sim \probtransitions\fun{\sampledot \mid \state, \action}$, then embedding $\latentstate' = \embed\fun{\state'}$, for any state $\state \in \states$ and action $\action \in \actions$
* $\bidistance_{\policy}$: bisimulation pseudometric
* $D$: discrepancy measure; $D\fun{P, Q}$ is the discrepancy between distributions $P, Q \in \distributions{\measurableset}$
* $\wassersteinsymbol{\distance}$: Wasserstein distance w.r.t. the metric $\distance$; $\wassersteindist{\distance}{P}{Q}$ is the Wasserstein distance between distributions $P, Q \in \distributions{\measurableset}$
* $\Lipschf{\distance}$: set of $1$-Lipschitz functions w.r.t. the distance metric $\distance$
* $\stationary{\decoderparameter}$: behavioral model: distribution over $\states \times \actions \times \images{\rewards} \times \states$
* $\tracedistance$: raw transition distance, i.e., metric over $\states \times \actions \times \images{\rewards} \times \states$
* $\latentstationaryprior$: stationary distribution of the latent model $\latentmdp_{\decoderparameter}$, parameterized by $\decoderparameter$
* $Q_{\encoderparameter}$: marginal encoding distribution over $\latentstates \times \latentactions \times \latentstates$: $\expectedsymbol{\state, \action, \state' \sim \stationary{\policy}} \embed_{\encoderparameter}\fun{\sampledot \mid \state, \action, \state'}$
* $\embed^{\actions}_{\encoderparameter}$: action encoder mapping $\latentstates \times \actions$ to $\distributions{\latentactions}$
* $\originaltolatentstationary{}$: distribution of drawing state-action pairs from interacting with $\mdp$, embedding them to the latent spaces, and finally letting them transition to their successor state in $\latentmdp_{\decoderparameter}$, in $\distributions{\latentstates \times \latentactions \times \latentstates}$
* $\distance_{\states}$: metric over the state space
* $\distance_{\actions}$: metric over the action space
* $\distance_{\rewards}$: metric over $\images{\rewards}$
* $\generative_{\decoderparameter}$: state-wise decoder, from $\latentstates$ to $\states$
* $G_{\decoderparameter}$: mapping $\tuple{\latentstate, \latentaction, \latentstate'} \mapsto \tuple{\generative_{\decoderparameter}\fun{\latentstate}, \embeda_{\decoderparameter}\fun{\latentstate, \latentaction}, \latentrewards_{\decoderparameter}\fun{\latentstate, \latentaction}, \generative_{\decoderparameter}\fun{\latentstate'}}$
* Steady-state regularizer
* Transition Lipschitz network
* Steady-state Lipschitz network
* $\sigmoid$: sigmoid function, with $\sigmoid\fun{x} = \nicefrac{1}{1 + \exp\fun{-x}}$
* $\latentvaluessymbol{\latentpolicy}{\sampledot}$: latent value function
* $\latentpolicies$: set of (memoryless) latent policies
* $\temperature$: temperature parameter
* $\logistic{\mu}{s}$: logistic distribution with location parameter $\mu$ and scale parameter $s$
one, at least for our data generating process.
Figure C.3 displays the estimated profiles of unobserved heterogeneity for $N=10,000$ observations and a varying number of groups. Figures C.3(a) and C.3(b) show that the estimated time profiles do not exhibit a sufficient amount of unobserved heterogeneity, which results in biased coefficient estimates (see Figure C.2). By contrast, when more groups than necessary are added, the estimator does not simply leave the superfluous groups empty. Rather, the GFE estimator splits existing groups, which leads to an "overfitting" of the time profiles (see Figures C.3(d) and C.3(e)). This
behavior is similar to what we observe in our empirical, real data
application. Adding more groups splits up the existing groups and the
generated trajectories of unobserved mental health profiles for the additional
groups are similar to the group that was split up.
Finally, we investigate the finite sample behavior of the BIC criterion with
two different penalty terms (Figures C.4 and C.5) in a setting with large $N$
and fixed $T$. As discussed in Section 5.2, the BIC preferred by Bonhomme and Manresa (2015) (BIC standard) does not discriminate sufficiently for all $G\geq 2$ in our application. Our simulation exercise clearly shows that the
number of groups selected by the BIC standard depends on the number of
observations $N$ relative to the number of time periods $T$. As shown in
Figure C.5, the BIC selects the correct number of groups, $G=3$, for $1,000$
observations. However, when we increase $N$, the number of groups selected by
this BIC increases, indicating that the penalization used in this BIC is not
steep enough. As in our application, the BIC standard remains practically
unchanged when increasing the number of groups once $G>1$.
By contrast, the BIC with a steeper penalty term (in $G$) always chooses two
groups regardless of the number of observations. As indicated by the steep
increase in the value of this BIC, the penalization with respect to the number
of groups is too strong (Figure C.4). We observe a similar behavior in our
application.
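For concreteness, the following sketch (ours) illustrates group selection by a BIC with a tunable penalty; the exact criterion of Bonhomme and Manresa (2015) may differ in its penalty term, so the functional form below is an assumption for illustration only.

```python
import numpy as np

def bic(ssr: float, N: int, T: int, G: int, K: int, scale: float = 1.0) -> float:
    # Generic BIC with a parameter count of G*T group time effects plus
    # K common coefficients; "steeper" penalties correspond to scale > 1.
    n_params = scale * (G * T + K)
    return np.log(ssr / (N * T)) + n_params * np.log(N * T) / (N * T)

# ssr_by_G would come from re-estimating the GFE model for each G;
# the numbers below are made up for the example.
ssr_by_G = {1: 5200.0, 2: 3100.0, 3: 2400.0, 4: 2390.0, 5: 2385.0}
N, T, K = 1000, 10, 3
scores = {G: bic(s, N, T, G, K) for G, s in ssr_by_G.items()}
print(min(scores, key=scores.get))   # G minimizing the criterion (here G = 3)
```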
Figure C.4: Results for the BIC with the steeper penalty in terms of $G$. Panels: (a) $N=1000$, (b) $N=1500$, (c) $N=2000$, (d) $N=10000$.
Figure C.5: Results for the BIC with the less steep penalty used in Bonhomme and Manresa (2015). Panels: (a) $N=1000$, (b) $N=1500$, (c) $N=2000$, (d) $N=10000$.
Figure C.6: Estimated coefficient for $\xi$ with lags; panels: (a) $N=2500$, (b) $N=3000$. Model specifications from left to right: (1) OLS, (2) OLS with individual-specific fixed effects (OLS FE), (3) GFE estimator with two groups, (4) GFE estimator with three groups, (5) GFE estimator with four groups, (6) GFE estimator with five groups. Confidence intervals are depicted in red and are calculated using analytical standard errors. The true contemporaneous effect is $\xi_{0}=0.2$ and the true data generating process contains lags of the form $\xi_{1},\xi_{2},\ldots,\xi_{5}=0.1$, which are ignored in estimation.
### D Model solution
Numerically, the decision function $\rho^{*}_{t}$ can be computed by backward induction over $t$, starting with a guess in the far future which does not affect behavior in initial periods. The backward recursion is a variation of classical dynamic programming that takes into account the time-inconsistency introduced through $\beta$. To facilitate formulating the dynamic program, we denote by $F_{t}(M_{t})$ the (classically, without $\beta$) discounted sum of future flow utilities given the decisions $\rho^{*}$ at ages $\tau>t$:
$F_{t}(M_{t})=u(\rho^{*}_{t}(M_{t}),M_{t})+\operatorname{\mathbb{E}}_{t}\left[\sum_{\tau=t+1}^{\infty}\delta^{\tau-t}u(\rho^{*}_{\tau}(M^{*}_{\tau}),M^{*}_{\tau})\right].$ (D.1)
Every woman's optimization problem of computing $\rho^{*}_{t}(M_{t})$, specified by Equation (14), can be written compactly in terms of $F$ as
$\rho^{*}_{t}(M_{t})=\arg\max_{\rho_{t}}\left\{u(\rho_{t},M_{t})+\beta\delta\operatorname{\mathbb{E}}_{t}[F_{t+1}(M^{*}_{t+1})]\right\}.$ (D.2)
Moreover, the functions $F_{t}$ satisfy the recursion
$F_{t}(M_{t})=u(\rho^{*}_{t}(M_{t}),M_{t})+\delta\operatorname{\mathbb{E}}_{t}[F_{t+1}(M^{*}_{t+1})].$ (D.3)
Our numerical approach via backward induction thus looks as follows. We initialize by guessing the terminal condition $F_{T}(M_{T})=\bar{u}(M_{T})=0$ for $T=100$. (This guess is incorrect but can be expected not to affect behavior in early time periods $t=1,\ldots,8$; we verify this by checking that results do not change if we initialize instead at $T=200$.) In order to sequentially compute the decision function $\rho^{*}_{t}$ and the functions $F_{t}$, we alternate two steps backwards in time. Assume that $F_{t+1}$ is already known. Then, we can compute $\rho^{*}_{t}(M_{t})$ by solving the problem in Equation (D.2) for every value of $M_{t}$. Once $\rho_{t}^{*}(M_{t})$ is known, we can compute $F_{t}$ from Equation (D.3) and go back one more step in time. When solving this problem computationally, we first discretize the state variables $M$ and $\rho$ over a suitable grid. (We assume that the choice of $\rho$ is discrete and the possible values are $0,0.05,0.1,\ldots,0.95,1$. For $M$, we simulate 1,000 trajectories for the two most extreme values of $\rho$, $\rho_{t}\equiv 0$ and $\rho_{t}\equiv 1$, use the maximum and minimum of the resulting mental health trajectories to determine the boundaries of the grid, and choose 200 equidistant levels in between.) Thus, the maximizations do not have to be performed for every possible value of $M$ and $\rho$, but only for every value on the grid. The resulting discrete functions $F_{t}$ are interpolated using monotone Hermite splines. To compute the conditional expectations at each age, we take a Monte Carlo average over 1,000 possible scenarios for $M_{t+1}$ given each combination $(M_{t},\rho_{t})$. In this way, the functions $F_{t}$ and $\rho^{*}_{t}$ can be computed backward in time one by one. With the resulting decision functions $\rho^{*}_{t}$, we then simulate 10,000 optimal mental health trajectories $M^{*}_{t}$ and the associated risky behavior $\rho^{*}_{t}(M^{*}_{t})$.
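A schematic sketch of this backward induction (ours; the utility function, mental-health dynamics, and all parameter values are hypothetical stand-ins for the model primitives):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator  # monotone Hermite splines

rng = np.random.default_rng(0)
beta, delta, T = 0.8, 0.95, 100                  # illustrative preferences
rho_grid = np.arange(0.0, 1.0001, 0.05)          # discrete choices for rho
M_grid = np.linspace(-5.0, 5.0, 200)             # 200 equidistant levels
n_mc = 1000                                      # Monte Carlo scenarios

def u(rho, M):                                   # hypothetical flow utility
    return M + rho - 0.5 * rho**2

def step_M(M, rho, shocks):                      # hypothetical M dynamics
    return 0.9 * M - 0.5 * rho + shocks

F_next = np.zeros_like(M_grid)                   # terminal guess F_T = 0
for t in reversed(range(T)):
    F_interp = PchipInterpolator(M_grid, F_next)
    shocks = rng.normal(0.0, 1.0, n_mc)
    # E_t[F_{t+1}(M_{t+1})] for every grid combination (M, rho)
    EF = np.array([[F_interp(step_M(M, r, shocks)).mean() for r in rho_grid]
                   for M in M_grid])
    values = u(rho_grid[None, :], M_grid[:, None]) + beta * delta * EF
    best = values.argmax(axis=1)                 # eq. (D.2) on the grid
    rho_star = rho_grid[best]
    # eq. (D.3): the recursion discounts with delta alone, not beta*delta
    F_next = u(rho_star, M_grid) + delta * EF[np.arange(M_grid.size), best]
```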
# Synthesizing Monolingual Data for Neural Machine Translation
Benjamin Marie Atsushi Fujita
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan
{bmarie<EMAIL_ADDRESS>
###### Abstract
In neural machine translation (NMT), monolingual data in the target language are usually exploited through the so-called "back-translation" method to synthesize additional parallel training data. The synthetic data have been shown to be helpful for training better NMT systems, especially for low-resource language pairs and domains. Nonetheless, large monolingual data in the target domains or languages are not always available to generate large synthetic parallel data. In this work, we propose a new method to generate large synthetic parallel data leveraging very small monolingual data in a specific domain. We fine-tune a pre-trained GPT-2 model on such small in-domain monolingual data and use the resulting model to generate a large amount of synthetic in-domain monolingual data. Then, we perform back-translation, or forward translation, to generate synthetic in-domain parallel data. Our preliminary experiments on three language pairs and five domains show the effectiveness of our method in generating fully synthetic but useful in-domain parallel data for improving NMT in all configurations. We also show promising results in extreme adaptation for personalized NMT.
## 1 Introduction
Neural machine translation (NMT) systems usually require a large quantity of
parallel data for training. For most language pairs and domains, we do not
have such resources, or only in very small quantities, mainly because they are
costly to produce (Germann, 2001). Unlike parallel data, monolingual data are
readily available in large quantity for many languages. Previous work has
proposed various strategies to integrate monolingual data into NMT systems and
has confirmed their usefulness to improve NMT systems, especially in low-
resource configurations. The so-called _back-translation_ of monolingual data
(Sennrich et al., 2016a) is undoubtedly the most prevalent one. This approach
uses a target-to-source MT system to translate monolingual data in the target
language into the source language. The generated synthetic parallel data can
be used together with the original parallel data to increase the size of the
training data, and eventually to obtain better NMT systems.
Nonetheless, generating synthetic parallel data in large quantity with this
approach also requires a large quantity of monolingual data. For most domains
in most languages, however, a large quantity of monolingual data is
unavailable and thus generating large synthetic parallel data through back-
translation is impracticable.
In this preliminary work, we propose a new approach that leverages small in-
domain monolingual data to generate large synthetic in-domain parallel data.
We demonstrate that synthetic in-domain monolingual data generated by a GPT-2
model (Radford et al., 2019), fine-tuned on our very small in-domain
monolingual data, can be successfully translated by NMT to generate synthetic
in-domain parallel data. Our results on three language pairs and five domains
show improvements in BLEU for all configurations when using our synthetic data
to train NMT. We also show that this approach can be used in extreme
adaptation for personalized NMT.
(a) Generated by GPT-2 fine-tuned on Medical data
> $\ldots$ Because of methodological differences we could not obtain a
> comparable result for 17S viral nucleic acids or 16S viral nucleic acid
> using different methods.
> The SARI statement: A measure of the quality of health services, including
> the availability of drugs, is a basic criterion for measuring the quality of
> health services system.
> The 12 patients seen at the DCP + IC applied for six of these six HDCP
> methods (75%) successfully completed their pharmacy duties as per the
> guidelines.$\ldots$
(b) Generated by GPT-2 fine-tuned on IT data
> $\ldots$You can use the Page Colors application that you can find on Google+
> The maps of Portugal are free but you can acquire maps for other countries
> You can use the program Ringtone Maker which you can find on Google$\ldots$
(c) Generated by GPT-2 fine-tuned on tweets (natural disaster domain)
> $\ldots$A volcanic eruption in 1815 set off a massive effluence that sowed
> #wildfire on the west coast!Thanks NSW #NSWgovernors for treating these
> #shills
> 4.4 earthquake occurred near Negros Region, Chile at 22: 10 UTC! #earthquake
> #NegrosRegion
> Day: Malta - Black cloud surrounded by rain. 16: 35 JST 14 / 16 / 19 - 21:
> 45 JST 15 / 16 / 19 - 17: 00 JST$\ldots$
(d) Generated by GPT-2 not fine-tuned
> $\ldots$On Thursday, fossil fuels minister Emmanuel Ponting said the year
> 2000 was the year that the entire human genome was analyzed and ”explored
> for new genes.”
> Consider the mercenary work that Columbia University puts in.
> Such coins have been suggested by Buzzfeed tech reporter Alex Seitz, who
> wrote a very thorough investigation into the issue.$\ldots$
Figure 1: Examples of three raw consecutive lines inside one sequence
generated by different GPT-2 models. We manually added “$\ldots$” to show the
reader that they are extracts from a sequence. GPT-2 models, fine-tuning data,
and hyper-parameters used to obtain these examples are presented in Section 4.
## 2 Motivation
This work relies on three assumptions:
* •
GPT models generate mostly correct sentences.
* •
Sentences generated by a GPT model exhibit some of the characteristics of the
in-domain data on which the model has been fine-tuned, even if the data is
small.
* •
NMT training is robust, to some extent, to the noise in the texts generated by
GPT models.
For our first two assumptions, we can obtain some hints on their validity by
manually checking sentences generated by fine-tuned GPT-2 models. Examples of
such sentences are presented in Figure 1. We can see with these examples that
GPT-2 models successfully generate sentences that are mostly correct and
present characteristics of the domain on which they have been fine-tuned.
For our third assumption, we rely on previous work that shows that back-
translations in which artificial noise has been injected can improve
translation quality when used for training NMT (Edunov et al., 2018).
## 3 Synthesizing Large Parallel Data Leveraging Small Monolingual Data
### 3.1 Requirements
Our method has few requirements in terms of data that make it applicable in
most MT scenarios. Precisely, we need the following three types of data:
* •
a GPT model or large (general-domain) monolingual data: in this preliminary
work, we only exploit the smallest GPT-2 model released by OpenAI. For the
future, we plan to experiment with in-house GPT models, trained on large
general-domain monolingual data.
* •
small in-domain monolingual data: most of our experiments use 50k sentences
for each target domain, but experiments in extreme adaptation for personalized
NMT shows that our method is useful even when only hundreds of sentences are
available.
* •
some parallel data: all our experiments use at least 156k sentence pairs.
### 3.2 Synthetic Monolingual Data
We use GPT-2 (Radford et al., 2019) (https://github.com/openai/gpt-2) to generate synthetic monolingual data. GPT models are auto-regressive Transformer (Vaswani et al., 2017) decoders. Given some context, or no context at all if this is the first token of the sequence, the model predicts the next token.
To generate texts in a particular domain, we fine-tuned a given GPT-2 model on
a small amount of texts in the target domain and language. Since GPT-2 is
efficient for text generation, we can generate millions of in-domain
monolingual sentences.
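A minimal sketch of this fine-tune-then-generate pipeline with the gpt-2-simple framework used in Section 4.2 (file names and step counts are illustrative, not the paper's exact settings):

```python
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")        # smallest released GPT-2

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="in_domain_50k.txt",   # small in-domain monolingual data
              model_name="124M",
              steps=1000)

# Sample synthetic in-domain sentences; repeated calls (or nsamples > 1)
# scale this up to millions of lines.
texts = gpt2.generate(sess, length=256, temperature=0.7,
                      nsamples=10, return_as_list=True)
```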
### 3.3 Synthetic Parallel Data
Once the synthetic monolingual data are generated, it can be used in NMT as
any other monolingual data. In this work, we demonstrate its usefulness
through back-translation (Sennrich et al., 2016a) and forward translation to
generate in-domain synthetic parallel data.
For back-translation, we adopted the tagged approach (Caswell et al., 2019)
that has been shown to provide better results, especially for translating
texts that are not translationese (Marie et al., 2020). In this configuration,
the target side of the synthetic parallel data was generated by GPT-2, in
English, and the source side by NMT.
For forward translation, we did not use tags. In this configuration the source
side was generated by GPT-2, in English, while the target side was obtained
through NMT. Forward translation is known to underperform back-translation
(Bogoychev and Sennrich, 2019). Nonetheless, since we do not have GPT-2 models
in other languages than English, we could only exploit synthetic monolingual
data for translation directions with English on the source side through
forward translation.
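A sketch (ours) of how the resulting synthetic parallel corpus could be assembled for tagged back-translation; the tag token and file names are illustrative:

```python
# A reserved tag is prepended to the machine-translated source side so
# that NMT can tell synthetic pairs from genuine ones.
def make_tagged_bt(bt_source_lines, gpt2_target_lines, tag="<BT>"):
    return [(f"{tag} {src}", tgt)
            for src, tgt in zip(bt_source_lines, gpt2_target_lines)]

# Target side: English sentences generated by the fine-tuned GPT-2.
# Source side: their back-translations by a target-to-source NMT system.
with open("gpt2.en") as f_en, open("gpt2.bt.de") as f_de:
    pairs = make_tagged_bt(f_de.read().splitlines(),
                           f_en.read().splitlines())
```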
## 4 Experiments
### 4.1 Data
#### 4.1.1 Training
We trained NMT systems for English–German (En-De), English–French (En-Fr), and
English–Japanese (En-Ja) on the following parallel data (numbers of sentence
pairs are given after pre-processing described in Section 4.2):
* •
En-De: WMT17 parallel data (http://statmt.org/wmt17/translation-task.html) (5.1M sentence pairs)
* •
En-Fr: WMT14 parallel data (http://statmt.org/wmt14/translation-task.html) (32.7M sentence pairs)
* •
En-Ja: Training parallel data provided in the MTNT dataset (Michel and Neubig, 2018b) (http://www.cs.cmu.edu/~pmichel1/mtnt/), which is a concatenation of three different datasets: TED talks (https://wit3.fbk.eu/), the Kyoto Free Translation Task (KFTT, http://www.phontron.com/kftt/), and JESC (https://nlp.stanford.edu/projects/jesc/) (3.9M sentence pairs)
#### 4.1.2 Validation
We used one validation dataset, for each language pair, to select the best
model after training NMT (see Section 4.2):
* •
En-De: WMT16 newstest (http://data.statmt.org/wmt20/translation-task/dev.tgz) (2,999 sentence pairs)
* •
En-Fr: WMT13 newstest (http://data.statmt.org/wmt20/translation-task/dev.tgz) (3,000 sentence pairs)
* •
En-Ja: Validation data provided in the MTNT dataset (http://www.cs.cmu.edu/~pmichel1/mtnt/), which is a concatenation of data from the TED Talks, KFTT, and JESC corpora (4,451 sentence pairs)
#### 4.1.3 Test
We used several datasets from different domains for evaluating the translation
quality of our NMT systems for each language pair:
* •
En-De:
* –
News domain: WMT17 news translation task (http://data.statmt.org/wmt20/translation-task/dev.tgz) (3,004 sentence pairs)
* –
Medical domain: WMT14 medical translation task, khresmoi summary (http://www.statmt.org/wmt14/medical-task/khresmoi-summary-test-set.tgz) (1,000 sentence pairs)
* –
IT domain: WMT16 IT translation task, batch 3 (http://data.statmt.org/wmt16/it-translation-task/wmt16-it-task-references.tgz) (1,000 sentence pairs)
* •
En-Fr:
* –
News domain: WMT14 news translation task (http://data.statmt.org/wmt20/translation-task/dev.tgz) (3,003 sentence pairs)
* –
Medical domain: WMT14 medical translation task, khresmoi summary (http://www.statmt.org/wmt14/medical-task/khresmoi-summary-test-set.tgz) (1,000 sentence pairs)
* –
Reddit domain: MTNT test sets (http://www.cs.cmu.edu/~pmichel1/mtnt/), one for each translation direction (for En$\rightarrow$Fr: 1,020 sentence pairs, for Fr$\rightarrow$En: 1,022 sentence pairs)
* •
En-Ja:
* –
News domain: ALT test set (https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/) (1,018 sentence pairs)
* –
Reddit domain: MTNT test sets (http://www.cs.cmu.edu/~pmichel1/mtnt/), one for each translation direction (for En$\rightarrow$Ja: 1,002 sentence pairs, for Ja$\rightarrow$En: 1,001 sentence pairs)
* –
Twitter natural disaster domain: Tweets test set compiled and translated by
ourselves (not publicly available) (1,400 sentence pairs)
#### 4.1.4 English Monolingual Data
English monolingual data are used as a source for back/forward translation and
for fine-tuning GPT-2. There is one dataset for each domain:
* •
News domain: News Crawl 2019 (http://data.statmt.org/news-crawl/en/news.2019.en.shuffled.deduped.gz) (1M lines for backward/forward translation, 50k lines for GPT-2 fine-tuning)
* •
IT domain: English side of the training parallel data provided for the WMT16 IT translation task, batch 1 and batch 2 (http://ufallab.ms.mff.cuni.cz/~popel/batch1and2.zip) (2k lines for backward/forward translation and GPT-2 fine-tuning)
* •
Medical domain: English side of the En-Fr EMEA parallel data (http://opus.lingfil.uu.se/download.php?f=EMEA/en-fr.txt.zip) provided for the WMT14 medical translation task (100k lines for backward/forward translation, 50k lines for GPT-2 fine-tuning)
* •
Reddit domain: English data crawled with the Reddit API (1M lines for
backward/forward translation, 50k lines for GPT-2 fine-tuning)
* •
Twitter natural disaster domain: English tweets crawled with the Twitter API with the same keywords used to crawl the English tweets of the test set (not publicly released) (148k lines for backward/forward translation, 50k lines for GPT-2 fine-tuning)
### 4.2 Framework and Settings
We exploited GPT-2 through the gpt-2-simple framework (https://github.com/minimaxir/gpt-2-simple). We did not perform any pre-processing on the monolingual data used for fine-tuning GPT-2. For NMT, we tokenized and truecased all the data in English, French, and German with the Moses toolkit (Koehn et al., 2007) (https://github.com/moses-smt/mosesdecoder). The truecaser was trained on 1M lines randomly sampled from the News Crawl corpora in each language.
For NMT, the training data, validation data, and the source side of the test sets are all segmented into subword units. We used byte-pair encoding (BPE) (Sennrich et al., 2016b) (https://github.com/rsennrich/subword-nmt) for English, German, and French, separately trained on 10M lines from the News Crawl 2019 corpora (http://data.statmt.org/news-crawl/) for each language to learn 32k BPE operations. For Japanese, we used SentencePiece (Kudo and Richardson, 2018) (https://github.com/google/sentencepiece) to learn 16k sentence pieces, also from the News Crawl 2019 corpus.
We used Marian (Junczys-Dowmunt et al., 2018) (https://marian-nmt.github.io/), version v1.7.6 1d4ba73 2019-05-11 17:16:31 +0100, for NMT with standard hyper-parameters for training (see Table 1).
--type transformer --max-length 120 --mini-batch-fit --valid-freq 5000 --save-freq 5000 --workspace 10000 --disp-freq 500 --beam-size 12 --normalize=1 --valid-mini-batch 16 --overwrite --early-stopping 5 --cost-type=ce-mean-words --valid-metrics ce-mean-words bleu --keep-best --enc-depth 6 --dec-depth 6 --transformer-dropout 0.1 --learn-rate 0.0003 --lr-warmup 16000 --lr-decay-inv-sqrt 16000 --lr-report --label-smoothing 0.1 --devices 0 1 2 3 4 5 6 7 --optimizer-params 0.9 0.98 1e-09 --clip-norm 5 --sync-sgd --exponential-smoothing
Table 1: Hyper-parameters of Marian used for training our NMT systems.
We performed decoding with a beam size of 12 and a length normalization at
1.0.
For evaluation, we used SacreBLEU (Post, 2018) (https://github.com/mjpost/sacrebleu) and report BLEU scores (Papineni et al., 2002) for English, French, and German, and chrF scores (Popović, 2015) for Japanese. Before evaluation, we post-processed the NMT output by undoing the BPE and SentencePiece subword segmentations. Then, except for Japanese, we detokenized and detruecased the output with Moses.
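A sketch of this scoring step using SacreBLEU's Python API (file names are illustrative; the Moses detokenization/detruecasing is assumed to have been applied already where relevant):

```python
import sacrebleu

def undo_bpe(line: str) -> str:
    # Reverse subword-nmt's "@@ " BPE continuation marks.
    return line.replace("@@ ", "")

hyps = [undo_bpe(l) for l in open("output.detok.txt").read().splitlines()]
refs = [open("reference.txt").read().splitlines()]

print(sacrebleu.corpus_bleu(hyps, refs).score)   # BLEU (Post, 2018)
print(sacrebleu.corpus_chrf(hyps, refs).score)   # chrF (Popović, 2015)
```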
### 4.3 Results with Back-translation
The performance of our NMT systems trained on several different sets of back-
translations is shown in Table 2.
First, we assessed to what extent the human-made in-domain monolingual data
used for fine-tuning GPT-2 are useful for back-translation. As we can see,
despite the small size of the data, it improves BLEU compared to the baseline
systems for all configurations. When using all the human-made in-domain
monolingual data, or up to 1M sentences, BLEU improvements are even larger for
almost all configurations (except for Ja$\rightarrow$En, Reddit). This result
confirms the usefulness of exploiting more in-domain monolingual data through
back-translation when available.
Using 1M sentences generated by a GPT-2 model that is not fine-tuned leads to
lower BLEU scores than using all the human-made in-domain monolingual data
(except for Ja$\rightarrow$En, Reddit).
The two last rows give the results of our approach: they use human-made in-
domain monolingual data only up to 50k sentences for fine-tuning the GPT-2
model, but millions of synthetic monolingual data. They show that the back-
translations of the monolingual data generated by the fine-tuned GPT-2 model
are useful. We obtained better, or comparable, BLEU scores when using the
back-translations of our synthetic monolingual data to train NMT systems than
when using the back-translations of human-made monolingual data. Using more
synthetic monolingual data (last row) also tends to lead to better BLEU scores
(except for Ja$\rightarrow$En, Reddit and Twitter).
System | Back-translated data | De$\rightarrow$En News | De$\rightarrow$En Medical | De$\rightarrow$En IT | Fr$\rightarrow$En News | Fr$\rightarrow$En Medical | Fr$\rightarrow$En Reddit | Ja$\rightarrow$En News | Ja$\rightarrow$En Reddit | Ja$\rightarrow$En Twitter
---|---|---|---|---|---|---|---|---|---|---
Baseline | none | 32.9 | 36.4 | 42.0 | 36.6 | 48.4 | 34.5 | 14.5 | 7.8 | 5.5
\+ H-TBT | fine-tuning | 34.2 | 40.2 | 42.7 | 37.1 | 48.9 | 34.7 | 17.2 | 8.3 | 16.6
\+ H-TBT | all | 35.8 | 40.7 | 43.4 | 37.4 | 49.6 | 35.9 | 22.1 | 8.0 | 17.1
\+ GPT_notft-TBT | 1M sentences | 34.6 | 37.3 | 41.9 | 37.1 | 48.5 | 34.7 | 20.0 | 8.6 | 9.8
\+ GPT-TBT | 1M sentences | 35.5 | 42.6 | 42.6 | 37.4 | 49.3 | 35.7 | 20.9 | 9.3 | 17.7
\+ GPT-TBT | 10M sentences | 35.5 | 42.9 | 44.6 | 37.8 | 50.3 | 36.9 | 22.3 | 8.7 | 15.9
Table 2: BLEU scores of our NMT systems translating into English, for each
domain. “H-TBT” denotes systems trained on the back-translated human-made
monolingual data (the data used for fine-tuning GPT or all the monolingual
data described in Section 4.1.4). “GPT-TBT” denotes systems trained on the
back-translation of either 1M or 10M monolingual sentences generated by a
GPT-2 model fine-tuned on the in-domain monolingual data. “GPT_notft-TBT”
denotes a configuration in which GPT-2 has not been fine-tuned.
### 4.4 Results with Forward Translation
We performed similar experiments as in Section 4.3 but with forward
translation instead of back-translation. Our results are shown in Table 3.
We did not observe consistent improvements of BLEU and chrF scores when
exploiting human-made monolingual data (H-FT configurations). Increasing the
amount of monolingual data can also decrease or increase BLEU and chrF scores.
Our approach (GPT-FT) leads to better, or similar, scores than the H-FT
configurations that use all the human-made monolingual data.
We conclude that forward translation performs reasonably well, but not consistently, in these configurations, and that GPT-2 models in languages other than English would be necessary to properly evaluate to what extent our approach can improve BLEU and chrF scores when English is not the target language.
System | Translated data | En$\rightarrow$De News (BLEU) | En$\rightarrow$De Medical (BLEU) | En$\rightarrow$De IT (BLEU) | En$\rightarrow$Fr News (BLEU) | En$\rightarrow$Fr Medical (BLEU) | En$\rightarrow$Fr Reddit (BLEU) | En$\rightarrow$Ja News (chrF) | En$\rightarrow$Ja Reddit (chrF) | En$\rightarrow$Ja Twitter (chrF)
---|---|---|---|---|---|---|---|---|---|---
Baseline | none | 27.3 | 28.8 | 37.4 | 36.3 | 40.9 | 25.5 | 0.2436 | 0.1419 | 0.0987
\+ H-FT | fine-tuning | 27.9 | 29.6 | 38.6 | 36.5 | 40.9 | 23.3 | 0.2643 | 0.1400 | 0.0839
\+ H-FT | all | 27.9 | 29.7 | 38.6 | 36.2 | 41.6 | 23.4 | 0.2847 | 0.1348 | 0.0845
\+ GPT_notft-FT | 1M sentences | 27.4 | 28.7 | 36.7 | 36.0 | 40.5 | 22.5 | 0.2479 | 0.1301 | 0.0799
\+ GPT-FT | 1M sentences | 27.9 | 29.6 | 39.1 | 36.2 | 42.0 | 23.1 | 0.2513 | 0.1324 | 0.0832
\+ GPT-FT | 10M sentences | 28.0 | 30.1 | 38.9 | 36.3 | 42.3 | 23.3 | 0.2749 | 0.1321 | 0.0810
Table 3: BLEU and chrF scores of our NMT systems translating from English, for
each domain. “H-FT” denotes systems trained on forward-translated human-made
monolingual data (the data used for fine-tuning GPT or all the monolingual
data described in Section 4.1.4). “GPT-FT” denotes systems trained on the
forward-translation of either 1M or 10M monolingual sentences generated by a
GPT-2 model fine-tuned on the in-domain monolingual data. “GPT_notft-FT”
denotes a configuration in which GPT-2 has not been fine-tuned.
## 5 Extreme Adaptation for Personalized NMT
The objective of extreme adaptation for personalized NMT is to adapt a given
NMT system so that it can better translate texts written or spoken by a
specific person. Ideally, for such a task, we would require as much parallel data as possible in which one side consists of texts written or spoken by the target person in order to personalize our NMT system. Obviously, such large data do not exist and would be too costly to create. Thus, we propose to synthesize such data with our approach. The main difference from the
domain adaptation scenarios presented in Section 4 is that we cannot even
expect to obtain thousands of sentences of texts written by the target person
to fine-tune GPT-2.
For our extreme adaptation experiments, we used the Speaker Annotated TED Talks (SATED) corpus (Michel and Neubig, 2018a) (http://www.cs.cmu.edu/~pmichel1/sated/), available for:
* •
English–German (En-De): 156k sentence pairs, 1,670 speakers
* •
English–Spanish (En-Es): 183k sentence pairs, 1,922 speakers
* •
English–French (En-Fr): 178k sentence pairs, 1,887 speakers
Each sentence pair is provided with a tag that identifies the speaker. Note
that this corpus is already pre-processed: tokenized and lower-cased.
Validation data and test data contain two sentence pairs per speaker. In order
to generate synthetic monolingual data for each specific speaker, we exploit
the speaker tag by concatenating it to the English side of the parallel data
and then use the resulting data to fine-tune the GPT-2 model. Through fine-
tuning, we assume that GPT-2 learns the characteristics of each individual
speaker by relying on the speaker tag. At decoding time, we then expect GPT-2
to generate texts for a particular speaker when prompting it with its speaker
tag.
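A small sketch (ours) of this speaker-tagged data preparation; the tag format is illustrative:

```python
# Each English line is prefixed with its speaker tag before fine-tuning;
# at generation time, the fine-tuned GPT-2 is prompted with a tag to
# obtain synthetic text for that speaker.
def tag_lines(english_lines, speaker_ids):
    return [f"<spk_{spk}> {line}"
            for line, spk in zip(english_lines, speaker_ids)]

tagged = tag_lines(["thank you so much", "let me show you a graph"],
                   ["talk_0001", "talk_0042"])
# After fine-tuning on `tagged`, prompting with "<spk_talk_0042>" is
# expected to yield text in that speaker's style.
```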
System | De$\rightarrow$En | Es$\rightarrow$En | Fr$\rightarrow$En
---|---|---|---
Baseline | 24.6 | 32.2 | 29.5
Speaker Tags | 24.7 | 32.2 | 29.9
Speaker Tags + GPT-TBT | 27.6 | 34.6 | 32.2
Speaker Tags + GPT_speaker-TBT | 28.5 | 35.6 | 32.4
Table 4: BLEU scores of our NMT systems for the SATED translation task.
“Speaker Tags” denotes the use of the speaker tags to tag each sentence pair
in the given parallel data and each source sentence in the test data. “GPT-
TBT” denotes systems trained on back-translation of 500k sentences generated
by a GPT-2 model fine-tuned on the English side of the SATED parallel data.
“GPT_speaker-TBT” is similar to “GPT-TBT,” except (1) that the data used for
fine-tuning the GPT-2 model are also tagged with the speaker tag and (2) that
each sentence generated by the fine-tuned GPT-2 model is tagged with a speaker
tag.
The results of our experiments are presented in Table 4. In addition to a
vanilla baseline NMT system, we evaluated an adaptation approach (second
row) that uses the parallel data with the speaker tag concatenated to the
source sentence to train NMT systems (Michel and Neubig, 2018a). BLEU scores
with this approach are close to those of the baseline system. Then, we
applied our approach as in the experiments of Section 4.3 (third row). We
fine-tuned GPT-2 on the English side of the SATED parallel data and generated
500k sentences with the fine-tuned model. Then, we back-translated the
generated data with En$\rightarrow\ast$ baseline systems to obtain synthetic
parallel data for the three language pairs and concatenated it to the speaker
tagged SATED parallel data exploited in our experiments of the second row. The
NMT systems trained on the resulting data improve the scores by several BLEU
points for all translation directions. In the last row, we finally
report on the results exploiting the speaker tags also when fine-tuning the
GPT-2 model. We generated 500k sentences, randomly prompting GPT-2 with one of
the speaker tags, and exploited the resulting speaker-tagged monolingual data
as for the other models. BLEU scores are further improved, implying that GPT-2
successfully exploits the speaker tag to generate better synthetic data for
each speaker.
## 6 Conclusion and Future Work
In this preliminary work, we showed that our approach can leverage small in-
domain monolingual data produced by humans to generate large synthetic in-
domain parallel data. Even though the parallel data are entirely synthetic,
as opposed to standard backward/forward translation, we obtained
improvements in BLEU scores in all our configurations when using the generated
data to train NMT systems. We also reported on successful experiments in
extreme adaptation for personalized NMT.
In our future work, we would like to perform an in-depth analysis to better
understand our results. We will also conduct more experiments exploiting in-
house GPT models for other languages.
## References
* Bogoychev and Sennrich (2019) Nikolay Bogoychev and Rico Sennrich. 2019. Domain, translationese and noise in synthetic data for neural machine translation. _arXiv preprint arXiv:1911.03362_.
* Caswell et al. (2019) Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In _Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)_ , pages 53–63, Florence, Italy. Association for Computational Linguistics.
* Edunov et al. (2018) Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 489–500, Brussels, Belgium. Association for Computational Linguistics.
* Germann (2001) Ulrich Germann. 2001. Building a statistical machine translation system from scratch: How much bang for the buck can we expect? In _Proceedings of the ACL Workshop on Data-Driven Methods in Machine Translation_.
* Junczys-Dowmunt et al. (2018) Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In _Proceedings of ACL 2018, System Demonstrations_ , pages 116–121, Melbourne, Australia. Association for Computational Linguistics.
* Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In _Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions_ , pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
* Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 66–71, Brussels, Belgium. Association for Computational Linguistics.
* Marie et al. (2020) Benjamin Marie, Raphael Rubino, and Atsushi Fujita. 2020. Tagged back-translation revisited: Why does it really work? In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5990–5997, Online. Association for Computational Linguistics.
* Michel and Neubig (2018a) Paul Michel and Graham Neubig. 2018a. Extreme adaptation for personalized neural machine translation. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 312–318, Melbourne, Australia. Association for Computational Linguistics.
* Michel and Neubig (2018b) Paul Michel and Graham Neubig. 2018b. MTNT: A testbed for machine translation of noisy text. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 543–553, Brussels, Belgium. Association for Computational Linguistics.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_ , pages 311–318, Philadelphia, USA. Association for Computational Linguistics.
* Popović (2015) Maja Popović. 2015. chrF: character n-gram f-score for automatic MT evaluation. In _Proceedings of the Tenth Workshop on Statistical Machine Translation_ , pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
* Post (2018) Matt Post. 2018. A call for clarity in reporting BLEU scores. In _Proceedings of the Third Conference on Machine Translation: Research Papers_ , pages 186–191, Brussels, Belgium. Association for Computational Linguistics.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
* Sennrich et al. (2016a) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 86–96, Berlin, Germany. Association for Computational Linguistics.
* Sennrich et al. (2016b) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, _Advances in Neural Information Processing Systems 30_ , pages 5998–6008. Curran Associates, Inc.
# Comparing Weak- and Unsupervised Methods for Resonant Anomaly Detection
Jack H. Collins, Pablo Martín-Ramiro, Benjamin Nachman, and David Shih

SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94309, USA
Instituto de Física Teórica, IFT-UAM/CSIC, Universidad Autónoma de Madrid, 28049 Madrid, Spain
Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Berkeley Institute for Data Science, University of California, Berkeley, CA 94720, USA
NHETC, Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854, USA
###### Abstract
Anomaly detection techniques are growing in importance at the Large Hadron
Collider (LHC), motivated by the increasing need to search for new physics in
a model-agnostic way. In this work, we provide a detailed comparative study
between a well-studied unsupervised method called the autoencoder (AE) and a
weakly-supervised approach based on the Classification Without Labels (CWoLa)
technique. We examine the ability of the two methods to identify a new physics
signal at different cross sections in a fully hadronic resonance search. By
construction, the AE classification performance is independent of the amount
of injected signal. In contrast, the CWoLa performance improves with
increasing signal abundance. When integrating these approaches with a complete
background estimate, we find that the two methods have complementary
sensitivity. In particular, CWoLa is effective at finding diverse and
moderately rare signals while the AE can provide sensitivity to very rare
signals, but only with certain topologies. We therefore demonstrate that both
techniques are complementary and can be used together for anomaly detection at
the LHC.
Preprint: SLAC–PUB–17558, IFT-UAM/CSIC-21-24
## 1 Introduction
The LHC has the potential to address many of the most fundamental questions in
physics. Despite all the searches for physics beyond the Standard Model (BSM)
conducted by ATLAS atlasexoticstwiki ; atlassusytwiki and CMS cmsexoticstwiki
; cmssusytwiki ; cmsb2gtwiki , no significant evidence of new physics has been
found so far. These searches are designed to target specific new physics
signals that would be produced by particular, well-motivated theoretical
models. However, it is not feasible to perform a dedicated analysis for every
possible topology and therefore some potential signals may be missed. This
motivates the introduction of new methods that are less reliant on model
assumptions and that are sensitive to a broad spectrum of new physics
signatures.
A variety of machine-learning assisted anomaly detection techniques have been
proposed that span the spectrum from completely supervised to completely
unsupervised (citation block taken from the Living Review 2102.02770;
background-model-dependent, non-machine-learning models have also been studied
experimentally, see Refs. sleuth ; Abbott:2000fb ; Abbott:2000gx ;
Abbott:2001ke ; Aaron:2008aa ; Aktas:2004pz ; Cranmer:2005zn ; Aaltonen:2007dg
; Aaltonen:2007ab ; Aaltonen:2008vt ; CMS-PAS-EXO-14-016 ; CMS-PAS-EXO-10-021
; CMS:2020ohc ; Aaboud:2018ufy ; ATLAS-CONF-2014-006 ; ATLAS-CONF-2012-107.)
DAgnolo:2018cun ; Collins:2018epr ; Collins:2019jip ; DAgnolo:2019vbw ;
Farina:2018fyg ; Heimel:2018mkt ; Roy:2019jae ; Cerri:2018anq ; Blance:2019ibf
; Hajer:2018kqm ; DeSimone:2018efk ; Mullin:2019mmh ; 1809.02977 ;
Dillon:2019cqt ; Andreassen:2020nkr ; Nachman:2020lpy ; Aguilar-
Saavedra:2017rzt ; Romao:2019dvs ; Romao:2020ojy ; knapp2020adversarially ;
collaboration2020dijet ; 1797846 ; 1800445 ; Amram:2020ykb ; Cheng:2020dal ;
Khosa:2020qrz ; Thaprasop:2020mzp ; Alexander:2020mbx ;
aguilarsaavedra2020mass ; 1815227 ; pol2020anomaly ; Mikuni:2020qds ;
vanBeekveld:2020txa ; Park:2020pak ; Faroughy:2020gas ; Stein:2020rou ;
Kasieczka:2021xcg ; Chakravarti:2021svb ; Batson:2021agz ; Blance:2021gcs ;
Bortolato:2021zic (see Refs. Nachman:2020ccu ; Kasieczka:2021xcg for an
overview). Two promising approaches are CWoLa Hunting Collins:2018epr ;
Collins:2019jip and deep autoencoders (AE) Farina:2018fyg ; Heimel:2018mkt ;
Cerri:2018anq ; Roy:2019jae ; Blance:2019ibf :
* •
CWoLa Hunting is a weakly-supervised anomaly detection technique that uses the
idea of Classification Without Labels (CWoLa) Metodiev:2017vrx and trains a
classifier to distinguish two statistically mixed samples (typically a signal
region and a sideband region when used to search for new physics
Collins:2018epr ; Collins:2019jip ) with different amounts of (potential)
signal. The output of this classifier can then be used to select signal-like
events. This method has already been tested in a real search by the ATLAS
collaboration collaboration2020dijet .
* •
Autoencoders are the basis for a fully-unsupervised anomaly detection
technique that has been widely explored and used in many real-world scenarios.
A deep autoencoder is a neural network that learns to compress data into a
small latent representation and then reconstruct the original input from the
compressed version. The AE can be trained directly on a background-rich sample
to learn the features of background events and reconstruct them well. By
contrast, it will struggle to reconstruct anomalous (e.g. signal) events. The
reconstruction loss, defined by some chosen distance measure between the
original and reconstructed event, can then be used as a classification score
that selects anomalous events.
To date, there has not been a direct and detailed comparison between these two
methods. (Recently, the authors of the Tag N’ Train method Amram:2020ykb
also made comparisons between these approaches with the aim of combining them;
our study has the orthogonal goal of directly comparing the two approaches in
detail as distinct methods to understand their complementarity.) The goal of
this paper will be to provide such a comparison, describe the strengths and
weaknesses of the two approaches, and highlight their areas of
complementarity.
We will focus on the new physics scenario where a signal is localized in one
known dimension of phase space (in this case, the dijet invariant mass) on top
of a smooth background. While CWoLa Hunting explicitly requires a setup like
this to generate mixed samples, AEs technically do not, as they can function
as anomaly detectors in a fully unsupervised setting. However, even for AEs
one generally needs to assume something about the signal and the background in
order to enable robust, data-driven background estimation.
In this scenario, both models can be trained to exploit the information in the
substructure of the two jets to gain discriminating power between the signal
and background events. CWoLa Hunting, being able to take advantage of the weak
labels, should excel in the limit of moderately high signal rate in the sample
because it is able to take advantage of learnt features of the signal. It
should fail however in the limit of no signal. On the other hand, an
unsupervised approach like the AE is fully agnostic to the specific features
of the signal, and thus should be robust in the limit of low signal
statistics. While the behaviour of these strategies in the high and low signal
statistics limits can be understood on general grounds, it is the intermediate
regime in which the two strategies might have a ‘cross-over’ in performance
that is of most interest for realistic searches. It is therefore worth
analyzing in detail for some case studies the nature of this crossover and the
degree of complementarity of the strategies.
In this work, we provide a detailed comparative analysis of the performance of
CWoLa Hunting and AEs at anomaly detection on a fully hadronic resonance
search. After evaluating the ability of both methods to identify the signal
events for different cross sections, we test whether they are able to increase
the significance of the signal region excess. Here we emphasize the importance
of going beyond the AUC metric and consider more meaningful performance
metrics such as the Significance Improvement Characteristic (SIC).
Furthermore, a realistic fit procedure based on ATLAS and CMS hadronic diboson
searches is implemented. We will confirm the general behavior of AE and CWoLa
Hunting approaches at large and small signal strengths described in the
previous paragraph, and we will demonstrate quantitatively the existence of a
cross-over region in a part of parameter space that could be of practical
relevance. We conclude that the approaches have complementary sensitivity to
different amounts or types of signals.
This paper is organized as follows. In Section 2, we describe the resonant
hadronic new physics signal that we consider and the simulation details for
the generated events. In Section 3, we introduce the details of CWoLa Hunting
and the AE and explain how they can be successfully implemented in this type
of new physics search. We present results for the two models in Section 4
and discuss their performance at anomaly detection. Finally, the conclusions
are presented in Section 5.
## 2 Simulation
In order to investigate the performance of CWoLa Hunting and AEs in a generic
hadronic resonance search, we consider a benchmark new physics signal
$pp\rightarrow Z^{\prime}\rightarrow XY$, with $X\rightarrow jjj$ and
$Y\rightarrow jjj$. There is currently no dedicated search for this event
topology. The mass of the new heavy particle is set to
$m_{Z^{\prime}}=3.5\;{\mathrm{TeV}}$, and we consider two scenarios for the
masses of the new lighter particles: $m_{X}$, $m_{Y}=500\;{\mathrm{GeV}}$ and
$m_{X}$, $m_{Y}=300\;{\mathrm{GeV}}$. These signals typically produce a pair
of large-radius jets $J$ with invariant mass $m_{\text{JJ}}\simeq
3.5\;{\mathrm{TeV}}$, with masses of $m_{J}=500,300\;{\mathrm{GeV}}$ and a
three-prong substructure. These signals are generated in the LHC Olympics
framework Kasieczka:2021xcg .
For both signal models, we generated $10^{4}$ events. One million QCD dijet
events serve as the background and are from the LHC Olympics Kasieczka:2021xcg
dataset. All the events were produced and showered using Pythia 8.219
Sjostrand:2007gs and the detector simulation was performed using Delphes
3.4.1 deFavereau:2013fsa , with no pileup or multiparton interactions
included. All jets are clustered with FastJet 3.3.2 Cacciari:2011ma using the
anti-$k_{t}$ algorithm Cacciari:2008gp with radius parameter $R=1$. We
require events to have at least one large-radius jet with
$p_{T}>1.2\;{\mathrm{TeV}}$ and pseudo-rapidity $|\eta|<2.5$. The two hardest
jets are selected as the candidate dijet and a set of substructure variables
are calculated for these two jets as shown in Fig. 1. In particular, the
$N$-subjettiness variables $\tau_{i}^{\beta}$ were first proposed in Refs.
Thaler:2011gf ; Thaler:2010tr and probe the extent to which a jet has $N$
subjets. All $N$-subjettiness variables are computed using FastJet 3.3.2 and
angular exponent $\beta=1$ unless otherwise specified in the superscript. The
observable $n_{\text{trk}}$ denotes the number of constituents in a given jet.
Jets are ordered by mass in descending order.
Figure 1: A reduced set of the input features that we use for training the
models are shown for Jet $1$ (first and second rows) and Jet $2$ (third and
fourth rows) for the signals with
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$ (red) and
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ (blue), and the background
(green). We plot the same number of signal and background events for
visualization purposes.
## 3 Machine Learning Setup
In this section, we describe the machine learning setup and the strategies
that we follow to train CWoLa Hunting and the AE approaches.
### 3.1 Classification Without Labels (CWoLa)
The strategy closely follows Ref. Collins:2018epr ; Collins:2019jip . To
begin, we use a set of high-level observables computed from the two leading
jets. In particular, we consider the following set of input features for each
jet:
$Y_{i}=\left\\{m_{J},\,\sqrt{\tau_{1}^{(2)}}/\tau_{1}^{(1)},\,\tau_{21},\,\tau_{32},\,\tau_{43},\,n_{\text{trk}}\right\\}\,.$
(1)
A reduced set of input features is shown in Fig. 1.
We select all of the events in the range
$m_{JJ}\in[2800,5200]\;{\mathrm{GeV}}$ and split them uniformly in
$\log(m_{JJ})$ in $30$ bins. After selecting this range, $537304$ background
events remain in our sample. In order to test for a signal hypothesis with
mass $m_{JJ}=m_{\text{peak}}$, where $m_{\text{peak}}$ is the mean mass of the
injected signal, we build a signal region and a sideband region. The former
contains all of the events in the four bins centered around $m_{\text{peak}}$,
while the latter is built using the three bins below and above the signal
region. By doing this, we obtain a signal region in the range
$m_{JJ}\in(3371,3661)\;{\mathrm{GeV}}$ with a width of $290\;{\mathrm{GeV}}$,
and a lower and upper sidebands that are $202\;{\mathrm{GeV}}$ and
$234\;{\mathrm{GeV}}$ wide, respectively. The size of the signal region window
depends on the signal width333This is dominated by detector effects; for
models with a non-trivial off-shell width, this may not be optimal. and can be
scanned for optimal performance. In Fig. 2, we show the binned distribution of
a fraction of signal and all background events, with a signal-to-background
ratio of $S/B=6\cdot 10^{-3}$ and a naive expected significance
$S/\sqrt{B}=1.8\sigma$ in the signal region. Note that if a signal is present
in data, the signal region will have a larger density of signal events than
the mass sidebands, which are mainly populated by background events by
construction. In a real search the location of the mass peak of any potential
signal would be unknown, and thus the mass hypothesis must be scanned, as
described in Ref. Collins:2019jip .
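As an illustration, a minimal numpy sketch of this binning and region construction could look as follows. The exact bin-centering convention is our reading of the text, so the resulting window may differ slightly from the quoted $(3371,3661)\;{\mathrm{GeV}}$ range.

```python
# Sketch of the signal-region/sideband construction: 30 bins uniform in
# log(m_JJ) over [2800, 5200] GeV, a 4-bin signal region around m_peak,
# and 3-bin sidebands on each side.
import numpy as np

bins = np.logspace(np.log10(2800.0), np.log10(5200.0), 31)  # 30 bins -> 31 edges

def region_indices(m_peak, bins):
    # Pick the bin whose center is closest to the signal mass hypothesis.
    centers = 0.5 * (bins[:-1] + bins[1:])
    c = int(np.argmin(np.abs(centers - m_peak)))
    sr = list(range(c - 2, c + 2))                               # 4 signal-region bins
    sb = list(range(c - 5, c - 2)) + list(range(c + 2, c + 5))   # 3 sideband bins each side
    return sr, sb

sr, sb = region_indices(3500.0, bins)
print("signal region:", bins[sr[0]], "-", bins[sr[-1] + 1], "GeV")
```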
After defining the signal and sideband regions, a CWoLa classifier is trained
to distinguish the events of the signal region from the events of the sideband
using the set of twelve input features that describe the jet substructure of
each event, presented in Eq. (1). In this way, the CWoLa classifier will
ideally learn the signal features that are useful to discriminate between both
regions. It is important to remark that the classifier performance should be
very poor when no signal is present in the signal region, but if a signal is
present with anomalous jet substructure then the classifier should learn the
information that is useful to distinguish the signal and sideband regions.
Figure 2: Distribution of a fraction of signal and all background events on
the $m_{JJ}$ plane. Events are divided in $30$ bins and a signal region and a
sideband region are defined, as described in the text in Section 3.1. The
amount of signal that has been injected corresponds to $S/B=6\cdot 10^{-3}$
and $S/\sqrt{B}=1.8\sigma$ in the signal region.
In this work, the classifiers that we use are fully connected neural networks
with four hidden layers. The first layer has $64$ nodes and a leaky Rectified
Linear Unit (ReLU) maasrectifier activation ReLu (with an inactive gradient
of $0.1$), and the second through fourth layers have $32$, $16$ and $4$ nodes
respectively, with Exponential Linear Unit (ELU) activation clevert2015fast .
The output layer has a sigmoid activation. The first three hidden layers are
followed by dropout layers with a $20\,\%$ dropout rate JMLR:v15:srivastava14a
. We use the binary cross-entropy loss function and the Adam optimizer adam
with learning rate of $0.001$ and learning rate decay of $5\cdot 10^{-4}$,
batch size of $20480$ and first and second moment decay rates of $0.8$ and
$0.99$, respectively. The training data is reweighted such that the low and
high sidebands have equal total weight, the signal region has the same total
weight as the sum of the sidebands, and the sum of all events weights in the
training data is equal to the total number of training events. This
reweighting procedure ensures that the two sideband regions have the same
contribution to the training process in spite of their different event rates,
and results in a classifier output peaked around $0.5$ in the absence of any
signal. All classifiers are implemented and trained using Keras keras with
TensorFlow tensorflow backend.
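A minimal Keras sketch consistent with this description is given below; the event reweighting is applied at fit time via `sample_weight` and is not shown, and the exact placement of the dropout and activation layers is our reading of the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cwola_classifier(n_features=12):
    # Four hidden layers (64, 32, 16, 4): leaky ReLU on the first,
    # ELU on the rest, dropout after the first three, sigmoid output.
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64),
        layers.LeakyReLU(0.1),
        layers.Dropout(0.2),
        layers.Dense(32, activation="elu"),
        layers.Dropout(0.2),
        layers.Dense(16, activation="elu"),
        layers.Dropout(0.2),
        layers.Dense(4, activation="elu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    # Adam with lr 0.001, per-step decay 5e-4, and moment decays (0.8, 0.99).
    lr = keras.optimizers.schedules.InverseTimeDecay(
        initial_learning_rate=0.001, decay_steps=1, decay_rate=5e-4)
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=lr,
                                        beta_1=0.8, beta_2=0.99),
        loss="binary_crossentropy",
    )
    return model
```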
We implement a nested cross-validation procedure with five $k$-folds and
therefore all data are used for training, validation and testing. We
standardize all the input features from the training and validation sets using
training information, and those from the test set using training and
validation information. The full dataset is divided randomly, bin by bin, in
five event samples of identical size. We set one of the samples aside for
testing and perform four rounds of training and validation with the other
four, using one of the subsets for validation each time. For each round, we
train ten neural networks for $700$ epochs on the same training and validation
data, using a different initialization each time. We measure the performance
of each classifier on validation data using the metric
$\epsilon_{\text{val}}$, defined as the true positive rate for the correct
classification of signal region events, evaluated at a threshold with a false
positive rate $f=1\,\%$ for incorrectly classifying events from the sideband
region. Only the best out of the ten models is saved. We use an early stopping
criterion to stop training if the validation performance has not improved for
300 epochs. At the end of the four rounds, we use the mean of the outputs of
the four selected models to build an ensemble model which is more robust on
average than any individual model. This ensemble model is used to classify the
events in the test set, and the $x\,\%$ most signal-like events are selected
by applying a cut on the classifier output. This procedure is repeated for all
five choices of test set, and the selected most signal-like events from each
are combined into a signal-like sample. If a signal is present in data and
CWoLa Hunting is able to find it, it will show as a bump in the signal region
of the signal-like sample on the $m_{JJ}$ plane, and standard bump-hunting
techniques can be used to locate the excess.
It is worth mentioning that using an averaged model ensemble is important to
reduce any potential overfitting. The cross-validation procedure ensures that
even if an individual classifier learns any statistical fluctuations in the
training data, each model will tend to overfit different regions of the phase
space. As a result, the models will disagree in regions where overfitting has
occurred, but will tend to agree in any region where a consistent excess is
found.
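Schematically, the nested cross-validation and ensembling procedure could be implemented as in the following sketch, assuming the `build_cwola_classifier` helper from the previous snippet, arrays `X` (features), `y` (1 for signal region, 0 for sideband), and `w` (event weights), and omitting the early-stopping callback for brevity.

```python
import numpy as np

def eps_val(model, X_val, y_val, f=0.01):
    # True-positive rate on signal-region events at the threshold giving
    # a false-positive rate f on sideband events.
    p = model.predict(X_val, verbose=0).ravel()
    thr = np.quantile(p[y_val == 0], 1.0 - f)
    return float(np.mean(p[y_val == 1] > thr))

def nested_cv_scores(X, y, w, n_folds=5, n_inits=10, epochs=700):
    rng = np.random.default_rng(0)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    scores = np.zeros(len(X))
    for t in range(n_folds):                      # held-out test fold
        ensemble = []
        for v in range(n_folds):                  # validation fold
            if v == t:
                continue
            tr = np.concatenate([folds[k] for k in range(n_folds)
                                 if k not in (t, v)])
            best_model, best_eps = None, -1.0
            for _ in range(n_inits):              # keep the best of ten inits
                m = build_cwola_classifier()
                m.fit(X[tr], y[tr], sample_weight=w[tr],
                      epochs=epochs, batch_size=20480, verbose=0)
                e = eps_val(m, X[folds[v]], y[folds[v]])
                if e > best_eps:
                    best_model, best_eps = m, e
            ensemble.append(best_model)
        # Average the four selected models on the held-out fold.
        preds = [m.predict(X[folds[t]], verbose=0).ravel() for m in ensemble]
        scores[folds[t]] = np.mean(preds, axis=0)
    return scores
```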
### 3.2 Autoencoder
In this subsection we describe the strategy followed for the AE
implementation. First, we take the two leading jets in each
event, ordered by mass, and consider the following set of input features for
each jet:
$Y_{i}=\left\\{m_{J},\,\tau_{21},\,\tau_{32},\,n_{\text{trk}},\,p_{T}\right\\}\,.$
(2)
After analyzing different sets of input features, we found that the collection
of $10$ features presented in Eq. (2) led to optimal performance. All the
input features are standardized for the analysis.
Unlike the CWoLa method, the AE is trained on all the available background
events in the full $m_{JJ}$ range. The AE only requires a signal region and a
background region for the purposes of background estimation through sideband
interpolation. For the anomaly score itself (the reconstruction error), the AE
is completely agnostic as to the $m_{JJ}$ range of the signal.
In this work, the AE that we consider is a fully connected neural network with
five hidden layers. The AE has an input layer with $10$ nodes. The encoder has
two hidden layers of $512$ nodes, and is followed by a bottleneck layer with
$2$ nodes and linear activation. The decoder has two hidden layers of $512$
nodes, and is followed by an output layer with $10$ nodes and linear
activation. All of the hidden layers have ReLU activation, and the first
hidden layer in the encoder is followed by a Batch Normalization layer. We use
the Minimum Squared Error (MSE) loss function and the Adam optimizer with
learning rate of $10^{-4}$, first and second moment decay rates of $0.9$ and
$0.999$, respectively, and a mini-batch size of $128$. In Appendix C we
describe our quasi-unsupervised model-selection procedure. We use Pytorch
NEURIPS2019_9015 for implementing and training the AE.
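A minimal PyTorch sketch of this architecture is given below; whether batch normalization is applied before or after the first activation is our reading of the text.

```python
import torch
import torch.nn as nn

class DijetAE(nn.Module):
    # 10 -> 512 -> 512 -> 2 -> 512 -> 512 -> 10, with ReLU hidden layers
    # and a linear bottleneck and output.
    def __init__(self, n_features=10, latent_dim=2, width=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, width), nn.ReLU(),
            nn.BatchNorm1d(width),            # after the first hidden layer
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DijetAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
loss_fn = nn.MSELoss()

def anomaly_score(model, x):
    # Per-event MSE reconstruction loss, used as the anomaly score.
    model.eval()
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)
```

The ensemble described below would then train fifty such models, each for one epoch on a random subsample of $50000$ background events, and average their per-event reconstruction losses.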
In order to achieve a satisfactory generalization power, we decided to build
an AE ensemble. For this purpose, we train fifty different models (i.e. the
ensemble components) with random initialization on random subsamples of
$50000$ background events. Each model is trained for only $1$ epoch. It is
important to note that the training sample size and number of training epochs
had a significant impact in the AE performance. When these are too large, the
AE learns too much information and losses both generalization power and its
ability to discriminate between signal and background events. For this reason,
our training strategy gives the AE more generalization power and makes it more
robust against overfitting.
The autoencoder ensemble is evaluated on the full dataset. The final MSE
reconstruction loss of an event is obtained by computing the mean over the
fifty different ensemble components. The optimal anomaly score is derived from
the SIC curve as described in Appendix C. The results presented in this paper
are for an AE trained on $S=0$. We have verified that including relevant
amounts of signal $S$ does not significantly change the results. Therefore, for
the sake of computational efficiency, we choose to present the AE trained with
$S=0$ everywhere.
## 4 Results
### 4.1 Signal benchmarks
Now we are ready to test the performance of CWoLa Hunting and the AE for
different amounts of injected signal. Importantly, we will quantify the
performance of CWoLa Hunting and the AE not using the full $m_{JJ}$ range, but
using a narrower slice $m_{JJ}\in(3371,3661)$ GeV, the signal region defined
in Sec. 3.1. This way, all performance gains from the two methods will be
measured relative to the naive significance obtained from a simple dijet
resonance bump hunt.
We define a set of eight benchmarks with a different number of injected signal
events. For this purpose, to the current sample of $537304$ background events
in the range $m_{JJ}\in[2800,5200]\;{\mathrm{GeV}}$, we add from $175$ to
$730$ signal events. This results in a set of benchmarks distributed over the
range $S/B\in[1.5\cdot 10^{-3},7\cdot 10^{-3}]$ in the signal region,
corresponding to an expected naive significance in the range
$S/\sqrt{B}\in[0.4,2.1]$. To test the consistency of both models when no
signal is present in data, we add a final benchmark with no signal events
which allows us to evaluate any possible biases. For each $S/B$ benchmark, the
performance of CWoLa Hunting is evaluated across ten independent runs, each
using a random subset of signal events, to reduce the statistical error.
After exploring a large range of cross sections, we decided to examine this
range in $S/B$ because it is sufficient to observe an intersection in the
performance of the two methods. The observed trends continue beyond the limits
presented here.
Figure 3: Performance of CWoLa Hunting (blue) and the AE (orange) as measured
by the AUC metric on the signal with
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$ (left plot) and
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ (right plot). The error bars
denote the standard deviation on the AUC metric from statistical
uncertainties.
### 4.2 Supervised metrics
The performance of CWoLa Hunting and the AE in the signal region for different
$S/B$ ratios as measured by the Area Under the Curve (AUC) metric is presented
in Fig. 3 for the two signal hypotheses considered in this work. Even though
only a small fraction of signal events is used for training, the AUC metric is
computed using all the available signal to reduce any potential overfitting.
The results in both cases show that CWoLa Hunting achieves excellent
discrimination power between signal and background events in the large $S/B$
region, reaching AUC scores above $0.90$ and approaching the $0.98$ score from
the fully supervised case. As the number of signal events in the signal region
decreases, the amount of information that is available to distinguish the
signal and sideband regions in the training phase becomes more limited. As a
result, learning the signal features becomes more challenging and performance
drops in testing. When the $S/B$ ratio in the signal region is close to zero,
the signal and sideband regions become nearly identical and the classifier
should not be able to discriminate between both regions. For the benchmark
with no signal events, the AUC scores are only $0.43$ and $0.59$ for the
signals with larger and smaller jet masses, respectively (for visualization
purposes, this benchmark is not shown in the plot). It is interesting to note
that, in the absence of signal, the AUC should converge to $0.5$. However, we
will see that the presence of background events (from a statistical
fluctuation) with a feature distribution that partially overlaps with the one
from signal events, located in a region of the phase space with low
statistics, allows the classifier to learn some information that turns out to
be useful to discriminate between signal and background. Importantly, this
does not imply that the information learnt by the classifier will be useful
for enhancing the signal excess, as we discuss in detail below. By contrast,
the AE performance is solid and stable across the whole $S/B$ range. The
reason is that, once the AE learns to reconstruct background events, its
performance is independent of the number of signal events used for training as
long as the contamination ratio is not too large. Interestingly, the AUC
curves from CWoLa Hunting and the AE cross at $S/B\sim 3\cdot 10^{-3}$.
The most standard way of measuring the performance of a given model is through
the Receiver Operating Characteristic (ROC) curve, and the area under this
curve, the AUC metric. These two metrics are useful to compare the overall
performance of different models in many classification tasks. However, the
goal of a resonant anomaly detection search is to find a localized signal over
a large background. For this purpose, the most important variables to consider
are the signal-to-background ratio ($S/B$) and the naive expected significance
($S/\sqrt{B}$). With this in mind, we will consider the Significance
Improvement Characteristic (SIC) Gallicchio:2010dq to measure the performance
of CWoLa Hunting and the AE at enhancing the significance of the signal
excess. The SIC metric measures the significance improvement after applying a
cut in the classifier output. In particular, any given cut will keep a
fraction $\epsilon_{S}$ of signal events and a fraction $\epsilon_{B}$ of
background events, which are defined as the signal and background efficiencies
of the cut. The significance improvement for this cut is thus given by
$\text{SIC}=\epsilon_{S}/\sqrt{\epsilon_{B}}$.
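Concretely, a SIC curve can be computed from the classifier scores of signal and background events with a minimal numpy sketch such as the following.

```python
# Sketch of a SIC curve: SIC = eps_S / sqrt(eps_B) as a function of
# the classifier threshold.
import numpy as np

def sic_curve(scores_sig, scores_bkg, n_points=200):
    # Scan thresholds over the observed score range.
    thresholds = np.quantile(np.concatenate([scores_sig, scores_bkg]),
                             np.linspace(0.0, 1.0, n_points))
    eps_s = np.array([(scores_sig > t).mean() for t in thresholds])
    eps_b = np.array([(scores_bkg > t).mean() for t in thresholds])
    keep = eps_b > 0          # avoid division by zero at the tightest cuts
    return eps_s[keep], eps_s[keep] / np.sqrt(eps_b[keep])
```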
Figure 4: The SIC curves for CWoLa Hunting (top row) and the AE (bottom row)
are shown for the signals with
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$ and
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ in the left and right plots,
respectively. For CWoLa Hunting, a SIC curve is shown for each of the
classifiers that were trained on mixed samples with different amounts of
injected signal.
Figure 5: Top row: The SIC value as a function of $S/\sqrt{B}$ for a set of
fixed signal efficiencies is shown for the signals with
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$ (left plot) and
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ (right plot). The
$\epsilon_{S}=17\,\%$ and $\epsilon_{S}=13\,\%$ signal efficiencies,
respectively, maximize the overall significance improvement for all $S/B$
benchmarks. Bottom row: The signal efficiencies are chosen such that the SIC
values are maximized for CWoLa Hunting and the AE. The SIC values associated
to the $1\,\%$ and $0.1\,\%$ are also shown for comparison. These values are
calculated using only the fraction of signal that defines each $S/B$
benchmark.
In order to find the localized signal over the large background, which we
presented in Fig. 2, we will use the SIC metric to find the optimal cut in the
classifiers output that leads to the maximal enhancement in $S/\sqrt{B}$ in
the signal region. The SIC curves for CWoLa Hunting and the AE are shown in
Fig. 4. The SIC curves are calculated using all the available signal and
background events in the signal region. For CWoLa Hunting, the results show
that the shape and the location of the peak of the SIC curve depend on the
amount of injected signal used during training. In order to find the signal
efficiency that leads to a maximal overall significance improvement for all
$S/B$ benchmarks, we analyze how the SIC value changes as a function of
$S/\sqrt{B}$ for a set of fixed signal efficiencies in the top row of Fig. 5.
We find that the signal efficiencies that yield the maximum overall
significance improvement for CWoLa Hunting are $\epsilon_{S}=17\,\%$ and
$\epsilon_{S}=13\,\%$ for the high and low jet mass signals, respectively. For
the AE, the optimal signal efficiencies are $\epsilon_{S}=16\,\%$ and
$\epsilon_{S}=18\,\%$, respectively. Now we will use these optimal signal
efficiencies to set an anomaly score threshold that maximizes the significant
improvement in the signal region for each model. In practice, model
independence would prevent picking a particular value and so we will later
compare these optimized values with fixed values at round logarithmically
spaced efficiencies.
### 4.3 Sideband fit and $p$-values
After evaluating the quality of the two methods at identifying the signal
events among the background, we compare how they perform at increasing the
significance of the signal region excess. For this purpose, we performed a
parametrized fit to the $m_{JJ}$ distribution in the sideband region. We then
interpolate the fitted background distribution into the signal region and
evaluate the $p$-value of the signal region excess.
For the CWoLa method, we used the following $4$-parameter function to fit the
background:
$\frac{d\sigma}{dm_{JJ}}=\frac{p_{0}(1-x)^{p_{1}}}{x^{p_{2}+p_{3}\ln(x)}}\,,$
(3)
where $x=m_{JJ}/\sqrt{s}$. We use this function to estimate the
background in the range $m_{JJ}\in[2800,5200]\;{\mathrm{GeV}}$. This function
has been previously used by both ATLAS Aad:2019hjw and CMS Sirunyan:2018xlo
collaborations in hadronic heavy resonance searches.
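As a sketch, fitting Eq. (3) to a binned sideband spectrum with scipy could look as follows; the collision energy $\sqrt{s}=13\;{\mathrm{TeV}}$ and the toy counts are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

SQRT_S = 13000.0  # GeV; assumed collision energy

def dijet_fn(mjj, p0, p1, p2, p3):
    # Smoothly falling 4-parameter dijet shape of Eq. (3), x = m_JJ / sqrt(s).
    x = mjj / SQRT_S
    return p0 * (1.0 - x) ** p1 / x ** (p2 + p3 * np.log(x))

# Toy example: fit binned sideband counts (signal-region bins excluded).
rng = np.random.default_rng(0)
centers = np.linspace(2900.0, 5100.0, 23)
counts = dijet_fn(centers, 100.0, 10.0, 5.0, 0.0) * rng.normal(1.0, 0.02, 23)
popt, pcov = curve_fit(dijet_fn, centers, counts,
                       p0=[100.0, 10.0, 5.0, 0.0],
                       sigma=np.sqrt(counts), absolute_sigma=True)
```

The fitted function is then interpolated into the signal region to obtain the background prediction, as described above.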
For the AE, we find that this function does not describe the $m_{JJ}$
distribution of surviving events well after applying a cut on the reconstruction error.
Instead, we found that a simple linear fit (on a narrower sideband region) is
able to describe the background distribution on the sideband with good
accuracy and it is sensitive to an excess on the signal region for the cuts
that we considered. For the cut based on the SIC curve and the $1\%$ cut, the
fit is implemented on the range $m_{JJ}\in[3000,4000]\;{\mathrm{GeV}}$. For
the $0.1\%$ cut, we need to extend this range to
$m_{JJ}\in[2800,4400]\;{\mathrm{GeV}}$. This range extension produces a better
fit $\chi^{2}$ in the sideband and mitigates a small bias in the predicted
signal at $S=0$.
The validity of sideband interpolation relies on the assumption that the
$m_{JJ}$ distribution for background events surviving a cut can still be well
modelled by the chosen functional forms. This is likely to be the case so long
as the selection efficiency of the tagger on background events is smooth and
monotonic in $m_{JJ}$, and most simply if it is constant in
$m_{JJ}$ (which would require signal features uncorrelated with $m_{JJ}$;
complete decorrelation is sufficient, but not necessary, to prevent
bump-sculpting 2010.09745).
Figure 6: Significance of the signal region excess after applying different
cuts using the classifier output for CWoLa Hunting (left plots) and the AE
(right plots), for one of the runs corresponding to the benchmarks with
$S/B\simeq 4\cdot 10^{-3}$ (top row) and $S/B\simeq 2.4\cdot 10^{-3}$ (bottom
row) on the signal with $(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$. For
CWoLa, we show the $100\,\%,10\,\%,1\,\%,0.04\,\%$ most signal-like events.
For the AE, we show the $100\,\%$ and $0.6\,\%$ event selections. In both
cases, the smallest cut corresponds to the optimal cut according to the SIC
curve. The blue crosses denote the event selection in each signal region bin,
while the blue circles represent the event selection in each bin outside of
the signal region. The dashed red lines indicate the fit to the events outside
of the signal region, the grey band indicates the fit uncertainty and the
injected signal is represented by the green histogram.
In Fig. 6, we show the fit results for CWoLa Hunting and the AE for one of the
runs corresponding to the benchmarks with $S/B\simeq 4\cdot 10^{-3}$ and
$S/B\simeq 2.4\cdot 10^{-3}$ on the signal with
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$. After applying different
cuts using the classifiers outputs, the significance of signal region bump is
significantly increased. For the benchmark with more injected signal, CWoLa
Hunting yields a substantial significance increase of up to $7.6\sigma$, while
the AE is able to increase the bump significance by up to $4.1\sigma$. When
the amount of injected signal is reduced, the results show that CWoLa Hunting
becomes weaker and raises the excess significance to only $2.6\sigma$.
However, in this case the AE performs better than CWoLa Hunting, increasing
the bump significance up to $3.1\sigma$. This is an important finding because
it suggests that CWoLa Hunting and the AE may be complementary techniques
depending on the cross section. Note that the event distribution from the AE
is clearly shaped due to some correlations between the input features and
$m_{JJ}$. In particular, since the jet $p_{T}$ is very correlated with
$m_{JJ}$. However, the average jet $p_{T}$ scales monotonically (and roughly
linearly) with $m_{JJ}$, which means that no artificial bumps are created and
the distribution post-selection is still well modelled by the chosen fit
function. Finally, note that the significance from the fit to the raw
distribution (i.e. no cut applied) is lower than the naive expected significance $S/\sqrt{B}$ due to a
downward fluctuation in the number of background events in the signal region,
as discussed in Appendix A.
Figure 7: The significance of the signal region excess after applying
different cuts for CWoLa Hunting and the AE, for the signals with
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$ and
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$, is shown in the left and
right plots, respectively. The plots in the top row show the cuts that
maximize the overall significance improvement for all benchmarks according to
the SIC curve, while the bottom row plots show results for fixed,
predetermined cuts. The best cuts for CWoLa Hunting (blue) correspond to the
$17\%$ and $13\%$ signal efficiencies for the signals with high and low jet
masses, respectively. For the AE (orange), the best cuts correspond to the
$16\%$ and $18\%$ signal efficiencies, respectively. The dotted lines denote
the naive expected significance, $S/\sqrt{B}$. The round cuts from the bottom
plots show the $1\%$ and $0.1\%$ event selections for CWoLa Hunting and the
AE. The initial significance of the bump ($100\,\%$ selection) is shown in
green.
In order to systematically study if CWoLa Hunting and the AE could be
complementary techniques depending on the cross section, we analyze their
performance at increasing the significance of the signal region excess for
different $S/B$ benchmarks and the two signal hypotheses in Fig. 7. The top
two plots show the cuts on the classifier output that lead to the largest
overall significance improvement according to the SIC curve. For CWoLa
Hunting, we show the median $p$-values from the ten independent runs for every
benchmark corresponding to the $17\,\%$ (top left) and $13\,\%$ (top right)
signal efficiencies, which correspond to fractions of signal-like events
between $0.04\,\%$ and $1.7\,\%$ depending on the benchmark. The error bars
represent the Median Absolute Deviation. Note that the fit result does not
always agree with the naive expected significance, $S/\sqrt{B}$, due to the
high uncertainties among the ten independent classifiers and the small
fractions of events considered in some cases. For the AE, we show the
$p$-values associated to the $16\,\%$ (top left) and $18\,\%$ (top right)
signal efficiencies, which correspond to the $0.36\,\%$ and $0.63\,\%$ most
signal-like events, respectively.
Importantly, there are other cuts that enhance the significance of the signal
region excess, as shown in the bottom plots of Fig. 7. In a real experimental
search, with no previous knowledge about any potential new physics signal, the
two models would be able to find the signal for fixed round cuts of $1\%$ and
$0.1\%$. For the AE, these cuts are applied in the signal region to derive an
anomaly score above which all the events in the full $m_{JJ}$ range are
selected. However, note that for these cuts the AE seems to sculpt small bumps
in the $m_{JJ}$ distribution even when no signal is present in the data. We find
that the excess significance at $S/B=0$ is $0.89\sigma$, $0.56\sigma$ and
$1.06\sigma$ for the SIC-based, $1\%$ and $0.1\%$ cuts, respectively. We
checked that this is caused by the shaping of the $m_{JJ}$ distribution and
the small statistical fluctuations that appear for such tight cuts. We remark
that this effect is not produced by the signal.
The statistical analysis demonstrates two things. First, CWoLa Hunting is able
to increase the significance of the signal region excess up to
$3\sigma-8\sigma$ for $S/B$ ratios above $\sim 3\cdot 10^{-3}$ for both signal
hypotheses, even when the original fit shows no deviation from the background-
only hypothesis. By contrast, the AE shows a superior performance below this
range for the signal with $(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$,
boosting the significance of the excess up to $2\sigma-3\sigma$ in the low
$S/B$ region where CWoLa Hunting is not sensitive to the signal. Importantly,
there is again a crossing point in the performance of the two methods as
measured by their ability to increase the significance of the excess.
Therefore, our results show that the two methods are complementary for less-
than-supervised anomaly detection. Second, it is clear that the AE is not able
to increase the bump excess for the signal with
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ below $S/B\sim 3\cdot
10^{-3}$, even when it reaches a fairly solid AUC score, as shown in Fig. 3.
This means that even though the AE is able to classify a sizeable fraction of
signal events correctly, there is a significant fraction of background events
that yield a larger reconstruction error than the signal events. In other
words, the AE does not consider the signal events as sufficiently anomalous
and instead finds it more difficult to reconstruct part of the background.
Therefore, cutting on the reconstruction error does not result in a larger
fraction of signal in the selected events. By construction, this is the main
limitation of the AE: it focuses its attention on anything that seems
anomalous, whether it is an exciting new physics signal or something that we
consider less exotic.
Finally, it is important to analyze the performance of CWoLa Hunting and the
AE when training on no signal. For consistency, both models should not sculpt
any bumps in the $m_{JJ}$ distribution when no signal is present in the data. For
CWoLa Hunting, the AUC scores for the benchmark with $S/B=0$ are $0.43$ and
$0.59$ for the signal hypotheses with larger and smaller jet masses,
respectively. These numbers are slightly different from the expected value of
$0.5$ due to the presence in the background of a real, low-significance
statistical excess in the high mass region of phase space in the signal
region, which we have checked does not appear in repeated background
simulations. Indeed, even after this selection the signal region shows an
overall deficit, leading to an overall significance of 0.
Figure 8: Density of events on the $(m_{j_{1}},m_{j_{2}})$ plane for the most
signal-like events selected by CWoLa Hunting for the signal hypothesis
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$ (top row) and
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ (bottom row). From left to
right, we show results for three benchmarks with $S/B\simeq 4\cdot
10^{-3},2.8\cdot 10^{-3},0$. The location of the injected signal is indicated
by a green cross. Note that the upper right plot shows a small statistical
fluctuation that disappears when averaging over a larger number of
simulations.
Figure 9: Density of events on the $(m_{j_{1}},m_{j_{2}})$ plane for the most
signal-like events selected by the AE for the signal hypothesis
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$ (top row) and
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ (bottom row). From left to
right, we show results for three benchmarks with $S/B\simeq 4\cdot
10^{-3},2.8\cdot 10^{-3},0$. The location of the injected signal is indicated
by a green cross.
### 4.4 What did the machine learn?
In order to illustrate this point, we can examine what the classifiers have
learnt by looking at the properties of the events which have been classified
as signal-like for three benchmarks with $S/B\simeq 4\cdot 10^{-3},2.8\cdot
10^{-3},0$. In Fig. 8 and Fig. 9 we show the density of events on the
$(m_{j_{1}},m_{j_{2}})$ plane for the most signal-like events selected by
CWoLa Hunting and the AE, respectively. The cuts applied in each case
correspond to the $0.1\,\%$ cut. For CWoLa Hunting, it is clear that the
classifier is able to locate the signal for the two mass hypotheses. In
addition, note that the upper and lower right plots show a small statistical
fluctuation that is produced by the different fractions of signal-like events
represented in each plot, which disappears when averaging over a larger number
of simulations.
The AE similarly identifies the high mass signal point, but fails to identify
the low mass one. This can be most easily understood by observing the
selection efficiency as a function of the two jet masses for the trained AE,
shown in Fig. 10. In the left plot, we show the total number of events on the
$(m_{j_{1}},m_{j_{2}})$ plane. In the middle and right plots, we show the
selection efficiencies for the $1\,\%$ and $0.1\,\%$ cuts. These results
illustrate that the AE has learnt to treat high mass jets as anomalous (since
these are rare in the training sample), and so the
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ signal is more easily
reconstructed than high mass QCD events. In other words, high mass QCD events
are regarded as more anomalous than signal events, and a sufficiently high
selection cut on the AE reconstruction error will eliminate the signal. We
remark again that this is one of the main limitations of the AE. Therefore, it
is crucial to find the cut that maximizes the fraction of signal within the
most anomalous events. As shown in Fig. 13 in Appendix B, that cut corresponds
to the anomaly score that maximizes the SIC curve in the signal region. In
contrast, the bottom row of Fig. 10 shows that CWoLa is able to learn the
signal features.
Figure 10: The total density of events on the $(m_{j_{1}},m_{j_{2}})$ plane is
plotted on the left. The $1\,\%$ and $0.1\,\%$ selection efficiencies are
plotted in the middle and right images, respectively. The top
row shows results for the AE, while the bottom row shows results for CWoLa.
The selection efficiency in a given bin is defined as the number of events
passing the $x\,\%$ cut divided by the total number of events in that bin.
These results correspond to the signal with
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ and $S/B\simeq 4\cdot
10^{-3}$.
## 5 Conclusions
In this article, we have compared weakly-supervised and unsupervised anomaly
detection methods, using Classification without Labels (CWoLa) Hunting and
deep autoencoders (AE) as representative of the two classes. The key
difference between these two methods is that the weak labels of CWoLa Hunting
allow it to utilize the specific features of the signal overdensity, making it
ideal in the limit of large signal rate, while the unsupervised AE does not
rely on any information about the signal and is therefore robust to small
signal rates.
We have quantitatively explored this complementarity in a concrete case study
of a search for anomalous events in fully hadronic dijet resonance searches,
using as the target a physics model of a heavy resonance decaying into a pair
of three-prong jets. CWoLa Hunting was able to dramatically raise the
significance of the signal in our benchmark points, enough to breach the
$5\sigma$ discovery threshold, but only if a sizeable fraction of signal is present
($S/B\gtrsim 4\times 10^{-3}$). The AE maintained classification performance
at low signal rates and had the potential to raise the significance of one
of our benchmark signals to the level of $3\sigma$ in a region where CWoLa
Hunting lacked sensitivity.
Crucially, our results demonstrate that CWoLa Hunting is effective at finding
diverse and moderately rare signals and the AE can provide sensitivity to rare
signals, but only with certain topologies. Therefore, both techniques are
complementary and can be used together for anomaly detection. A variety of
unsupervised, weakly supervised, and semi-supervised anomaly detection
approaches have been recently proposed (see e.g. Ref. Kasieczka:2021xcg ),
including variations of the methods we have studied. It will be important to
explore the universality of our conclusions across a range of models for
anomaly detection at the LHC and beyond.
## Acknowledgments
BN and JC were supported by the U.S. Department of Energy, Office of Science
under contracts DE-AC02-05CH11231 and DE-AC02-76SF00515, respectively. DS is
supported by DOE grant DOE-SC0010008. PMR acknowledges Berkeley LBNL, where
part of this work has been developed. PMR further acknowledges support from
the Spanish Research Agency (Agencia Estatal de Investigación) through the
contract FPA2016-78022-P and IFT Centro de Excelencia Severo Ochoa under grant
SEV-2016-0597. This project has received funding/support from the European
Union’s Horizon 2020 research and innovation programme under the Marie
Skłodowska-Curie grant agreement No 690575 (RISE InvisiblesPlus).
## Appendix A Background fit
In this appendix, we briefly describe the details about the fit procedure and
discuss results from the fit to the background events. In order to evaluate
the significance of any potential excess in the signal region, the total
number of predicted signal region events is calculated by summing the
individual predictions from each signal region bin. The systematic uncertainty
of the fit in the signal region prediction is estimated by propagating the
uncertainties in the fit parameters. We test the validity of the fit using a
Kolmogorov–Smirnov test.
In Fig. 11 we show the fit to the background distribution using the
4-parameter function presented in Eq. (3). First, the Kolmogorov–Smirnov test
yields a $p$-value of $0.99$, which means that the fit describes the
background distribution well outside of the signal region. In addition, the
fit result produces a $p$-value of $0.5$. However, the residuals indicate that
the number of predicted events in the signal region is overestimated due to a
local negative fluctuation of size $n=123$ events (this has been validated as
a fluctuation with an independent sample). As a result, the fit will always
underestimate the excess significance when a signal is injected in the signal
region. For example, if we introduce a number $n$ of signal events in the
signal region, the fit prediction will match the number of observed events and
therefore the excess significance will be exactly zero, even when a signal has
been injected.
Figure 11: Fit to the background distribution of dijet events and residuals
from the fit. The signal region events are indicated by blue crosses.
## Appendix B Density of events for the optimal cut
Figure 12: Density of events on the $(m_{j_{1}},m_{j_{2}})$ plane for the most
signal-like events selected by CWoLa for the signal hypothesis
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$ (top row) and
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ (bottom row). The optimal
cut is derived from the signal efficiency that maximizes the SIC curve. From
left to right, we show results for three benchmarks with $S/B\simeq 4\cdot
10^{-3},2.8\cdot 10^{-3},0$. The location of the injected signal is indicated
by a green cross. Note that the upper right plot shows a small statistical
fluctuation that disappears when averaging over a larger number of
simulations.
Figure 13: Density of events on the $(m_{j_{1}},m_{j_{2}})$ plane for the most
signal-like events selected by the AE for the signal hypothesis
$(m_{j_{1}},m_{j_{2}})=(500,500)\;{\mathrm{GeV}}$ (top row) and
$(m_{j_{1}},m_{j_{2}})=(300,300)\;{\mathrm{GeV}}$ (bottom row). The optimal
cut is derived from the signal efficiency that maximizes the SIC curve. From
left to right, we show results for three benchmarks with $S/B\simeq 4\cdot
10^{-3},2.8\cdot 10^{-3},0$. The location of the injected signal is indicated
by a green cross.
## Appendix C Autoencoder model selection
Here we will motivate the selection of the AE model used in the main body of
the paper. In general, the central tension of AE model selection
for anomaly detection is to choose a model that strikes a good balance between
compression and expressivity: describing the bulk of the data just
well enough (i.e. with the right size latent space) without also swallowing up
the anomalies. Here we will put forth some guidelines that could be
used to find this balance in an unsupervised way. While a complete study is
well beyond the scope of this work, the two signals provide some evidence for
the usefulness of these guidelines.
To begin, it is useful to consider the AE as consisting of three components:
1. Choice of input features.
2. Latent space dimension.
3. Rest of the architecture.
Our philosophy is that item 1 defines the type of anomaly we are interested
in, and so cannot be chosen in a fully unsupervised way. In this paper, we
chose the input features to be $(m_{J},p_{T},\tau_{21},\tau_{32},n_{trk})$
because we observed they did well in finding the 3-prong $qqq$ signals. In
contrast, item 2 and item 3 can be optimized to some extent independent of the
anomaly (i.e. just from considerations of the background).
Our main handle for model selection will be the concept of FVU: fraction of
variance unexplained. This is a commonly used statistical measure of how well
a regression task is performing at describing the data. Let the input data be
$\vec{x}_{i}$, $i=1,\dots,N$, and let the (vector-valued) regression output be
$\vec{y}_{i}=f(\vec{x}_{i})\,.$ (4)
Let the data to be described be $\vec{Y}_{i}$. (So, for an AE,
$\vec{x}_{i}=\vec{Y}_{i}$.) Then the FVU $F$ is
$F={{1\over N}\sum_{i=1}^{N}(\vec{Y}_{i}-\vec{y}_{i})^{2}\over{1\over
N}\sum_{i=1}^{N}(\vec{Y}_{i}-\langle\vec{Y}\rangle)^{2}}\,,$ (5)
i.e. it is the MSE of the regression divided by the sample variance of the
data. In the following, we will be working with features standardized to zero
mean and unit variance, in which case the denominator (the sample variance) is
just $n$, the number of input features, and $F$ becomes
$F={1\over N}\sum_{i=1}^{N}{1\over n}\sum_{a=1}^{n}(Y_{ia}-y_{ia})^{2}\,,$ (6)
i.e. it is the MSE of the regression normalized to the number of input
features.
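In code, Eqs. (5) and (6) amount to a few lines of numpy (a minimal sketch; for an AE the targets are the standardized inputs themselves):

```python
import numpy as np

def fvu(Y, y):
    """Fraction of variance unexplained, Eqs. (5)-(6).

    Y: (N, n) array of targets; y: (N, n) array of regression outputs.
    For an autoencoder, Y is the (standardized) input and y its reconstruction.
    """
    mse = np.mean(np.sum((Y - y) ** 2, axis=1))
    var = np.mean(np.sum((Y - Y.mean(axis=0)) ** 2, axis=1))
    # For features standardized to zero mean and unit variance, var ~ n,
    # so F reduces to the per-feature MSE of Eq. (6).
    return mse / var
```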
Figure 14: FVU vs number of latent dimensions, for 10 input features and
different AE architectures. The diagonal line is
$1-n_{latent}/n=1-n_{latent}/10$ indicating the nominal case of a latent
dimension just memorizing one of the input features.
Our criterion for whether it is worth adding another latent space dimension to
the AE is whether doing so substantially reduces the FVU. Here the measure of
“substantially reduces” is whether it decreases the FVU by significantly more
than $1/n$. A decrease of $1/n$ (or less) suggests that the AE is merely
memorizing one of the input features via the extra latent space dimension. In
that case, adding the latent space dimension should not help with anomaly
detection. Meanwhile, a decrease in FVU of significantly more than $1/n$
suggests that the latent space dimension is learning something nontrivial
about the inner workings of the data, capturing one of the underlying
correlations. In this case adding the latent space dimension may help with
anomaly detection performance.
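In code, the criterion is a one-line comparison of the marginal FVU drop against $1/n$ (a sketch; `fvu_vs_latent` is a hypothetical array of measured FVU values for $n_{latent}=1,2,\dots$, and the factor `margin` quantifying "significantly more" is our assumption):

```python
import numpy as np

def useful_latent_dims(fvu_vs_latent, n_features, margin=2.0):
    # fvu_vs_latent[k] is the measured FVU for n_latent = k + 1.
    # With standardized features, FVU = 1 at n_latent = 0 (predicting the mean).
    drops = -np.diff(fvu_vs_latent, prepend=1.0)
    # Keep dimensions whose FVU drop clearly exceeds the "memorization"
    # benchmark of 1/n.
    return [k + 1 for k, d in enumerate(drops) if d > margin / n_features]
```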
We will demonstrate the effectiveness of these model selection criteria using
the two signals considered in this paper, $Z^{\prime}(3500)\rightarrow
X(m)X(m)$, $X\to qqq$ events with $m=500\;{\mathrm{GeV}}$ and
$m=300\;{\mathrm{GeV}}$.
We scan over the size of the latent space and hidden layers,
$n_{latent}=1,2,3,4,\dots$ and $n_{hidden}=128,256,512$, respectively. For
each architecture and choice of input features we train 10 models with random
initializations on a random subset of $50000$ background jets.
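One Keras implementation consistent with this scan is sketched below. It is not the authors' exact code: the layer layout, the ELU activation, and the training settings are our assumptions (chosen from the ingredients cited in the references), and the file background_jets.npy is a hypothetical stand-in for the standardized jet features.

```python
import numpy as np
import tensorflow as tf

def build_ae(n_features, n_hidden, n_latent):
    # Symmetric dense autoencoder with an n_latent bottleneck.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(n_hidden, activation="elu"),
        tf.keras.layers.Dense(n_latent, activation="elu"),   # bottleneck
        tf.keras.layers.Dense(n_hidden, activation="elu"),
        tf.keras.layers.Dense(n_features, activation=None),
    ])

x_bg = np.load("background_jets.npy")  # hypothetical standardized features
for n_hidden in (128, 256, 512):
    for n_latent in range(1, 9):
        for seed in range(10):         # 10 random initializations per point
            tf.keras.utils.set_random_seed(seed)
            idx = np.random.default_rng(seed).choice(
                len(x_bg), 50000, replace=False)
            ae = build_ae(x_bg.shape[1], n_hidden, n_latent)
            ae.compile(optimizer="adam", loss="mse")
            ae.fit(x_bg[idx], x_bg[idx], epochs=50, batch_size=256, verbose=0)
```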
For evaluation, we feed all 1M QCD events and all the signal events to the
trained models. We compute the following metrics for each model:
$\langle\text{MSE}\rangle_{bg}$, $\sigma(\text{MSE})_{bg}$, $\max$(SIC) where
the SIC is computed by cutting on the MSE distribution. For all three metrics,
we only compute them using the MSE distribution in a window
$(3300,3700)\;{\mathrm{GeV}}$ in $m_{JJ}$.
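Given the per-event MSE values inside the $m_{JJ}$ window, $\max$(SIC) can be computed by scanning cuts on the reconstruction error (a sketch; the threshold grid and the minimum background count guarding the low-statistics tail are our assumptions):

```python
import numpy as np

def max_sic(mse_sig, mse_bg, min_bg_events=20):
    # Anomalous events have large reconstruction error, so each cut keeps
    # events with MSE above a threshold; SIC = eps_S / sqrt(eps_B).
    thresholds = np.quantile(mse_bg, np.linspace(0.0, 0.999, 1000))
    eps_s = np.array([(mse_sig > t).mean() for t in thresholds])
    eps_b = np.array([(mse_bg > t).mean() for t in thresholds])
    ok = eps_b * len(mse_bg) >= min_bg_events   # require enough background
    return np.max(eps_s[ok] / np.sqrt(eps_b[ok]))
```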
Figure 15: Decrease in FVU from adding one more latent dimension vs number of
latent dimensions, for 10 input features and different AE architectures. The
horizontal line is $1/n=1/10$ indicating the nominal case of a latent
dimension just memorizing one of the input features.
Shown in Fig. 14 is the FVU versus the number of latent dimensions, for 5
input features and different AE architectures. Each point represents the
average MSE obtained from 10 independent trainings. We see that the FVU versus
$n_{latent}$ plot has a characteristic shape, with faster-than-nominal
decrease for small $n_{latent}$ (the AE is learning nontrivial correlations in
the data) and then leveling out for larger $n_{latent}$ (the AE is not
learning as much and is just starting to memorize input features).
In Fig. 15 we show the decrease in FVU with each added latent dimension,
versus the number of latent dimensions. From this we see that
$n_{latent}=1,2,3$ add useful information to the AE but beyond that the AE may
not be learning anything useful.
We also see from these plots that the FVU decreases with larger $n_{hidden}$, as
expected, although it seems to be levelling off by the time we reach
$n_{hidden}=512$. This makes sense: for fixed $n_{latent}$ the bottleneck is
fixed, so increasing $n_{hidden}$ just increases the complexity of the
correlations that the AE can learn, with no danger of becoming the identity
map. This suggests that the best AE anomaly detector will be the largest
$n_{hidden}$ that we can take for fixed $n_{latent}$, although the gains may
level off for $n_{hidden}$ sufficiently large.
Figure 16: $\max$(SIC) vs. $n_{latent}$ for the $500\;{\mathrm{GeV}}$ signal
(left) and $300\;{\mathrm{GeV}}$ signal (right), 5 input features, and
$n_{hidden}=512$. The blue dots are the maxSICs for each of the 10 independent
trainings, while the orange dot is the max(SIC) obtained from the average of
the 10 MSE distributions.
Now we examine the performance of the various AE models on anomaly detection
of the $300\;{\mathrm{GeV}}$ and $500\;{\mathrm{GeV}}$ 3-prong signals. The
$\max$(SIC) versus $n_{latent}$ is shown in Fig. 16. We see that there is
decent performance on both signals for $n_{latent}=2,3,4,5$, with
$n_{latent}=2$ being especially good for both (we also see a rise in
performance for very large $n_{latent}$, which remains puzzling).
This is roughly in line with the expectations from the FVU plots. Importantly,
if we restricted to $n_{latent}=2,3$ which have the larger decreases in FVU,
we would not miss out on a better anomaly detector.
Finally in Fig. 17, we show the $\max$(SIC) for the MSE distributions averaged
over 10 trainings vs $n_{hidden}$, for $n_{latent}=2,3,4$. We see that
generally the trend is rising or flat with increasing $n_{hidden}$, which is
more or less consistent with expectations.
Figure 17: $\max$(SIC) of the averaged MSE distributions vs. $n_{hidden}$ for
the $500\;{\mathrm{GeV}}$ signal (left) and $300\;{\mathrm{GeV}}$ signal
(right), 5 input features, and $n_{latent}=2,3,4$.
To summarize, we believe we have a fairly model-independent set of criteria
for AE model selection, based on the FVU, which empirically works well on our
two signals. Admittedly, this is too small a sample size to conclude that
this method really works; it would be interesting to continue to study this in
future work. Based on these criteria, we fix the AE model in this paper to
have $n_{latent}=2$ and $n_{hidden}=512$.
## References
* (1) ATLAS Collaboration, “Exotic physics searches,” 2018. https://twiki.cern.ch/twiki/bin/view/AtlasPublic/ExoticsPublicResults.
* (2) ATLAS Collaboration, “Supersymmetry searches,” 2018. https://twiki.cern.ch/twiki/bin/view/AtlasPublic/SupersymmetryPublicResults.
* (3) CMS Collaboration, “CMS exotica public physics results,” 2018. https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsEXO.
* (4) CMS Collaboration, “CMS supersymmetry physics results,” 2018. https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsSUS.
* (5) CMS Collaboration, “CMS beyond-two-generations (B2G) public physics results,” 2018. https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsB2G.
* (6) M. Feickert and B. Nachman, “A Living Review of Machine Learning for Particle Physics,” arXiv:2102.02770 [hep-ph].
* (7) B. Knuteson, “A Quasi-Model-Independent Search for New High $p_{T}$ Physics at D0.” https://www-d0.fnal.gov/results/publications_talks/thesis/knuteson/thesis.ps. Ph.D. thesis, University of California at Berkeley (2000).
* (8) D0 Collaboration, B. Abbott et al., “Search for new physics in $e\mu X$ data at DØ using Sherlock: A quasi model independent search strategy for new physics,” Phys. Rev. D62 (2000) 092004, arXiv:hep-ex/0006011 [hep-ex].
* (9) D0 Collaboration, V. M. Abazov et al., “A Quasi model independent search for new physics at large transverse momentum,” Phys. Rev. D64 (2001) 012004, arXiv:hep-ex/0011067 [hep-ex].
* (10) D0 Collaboration, B. Abbott et al., “A quasi-model-independent search for new high $p_{T}$ physics at DØ,” Phys. Rev. Lett. 86 (2001) 3712–3717, arXiv:hep-ex/0011071 [hep-ex].
* (11) H1 Collaboration, F. D. Aaron et al., “A General Search for New Phenomena at HERA,” Phys. Lett. B674 (2009) 257–268, arXiv:0901.0507 [hep-ex].
* (12) H1 Collaboration, A. Aktas et al., “A General search for new phenomena in ep scattering at HERA,” Phys. Lett. B602 (2004) 14–30, arXiv:hep-ex/0408044 [hep-ex].
* (13) K. S. Cranmer, Searching for new physics: Contributions to LEP and the LHC. PhD thesis, Wisconsin U., Madison, 2005. http://weblib.cern.ch/abstract?CERN-THESIS-2005-011.
* (14) CDF Collaboration, T. Aaltonen et al., “Model-Independent and Quasi-Model-Independent Search for New Physics at CDF,” Phys. Rev. D78 (2008) 012002, arXiv:0712.1311 [hep-ex].
* (15) CDF Collaboration, T. Aaltonen et al., “Model-Independent Global Search for New High-p(T) Physics at CDF,” arXiv:0712.2534 [hep-ex].
* (16) CDF Collaboration, T. Aaltonen et al., “Global Search for New Physics with 2.0 fb-1 at CDF,” Phys. Rev. D79 (2009) 011101, arXiv:0809.3781 [hep-ex].
* (17) CMS Collaboration, “MUSiC, a Model Unspecific Search for New Physics, in pp Collisions at $\sqrt{s}=8$ TeV.”
* (18) CMS Collaboration Collaboration, “Model Unspecific Search for New Physics in pp Collisions at sqrt(s) = 7 TeV,” Tech. Rep. CMS-PAS-EXO-10-021, CERN, Geneva, 2011. http://cds.cern.ch/record/1360173.
* (19) CMS Collaboration, “MUSiC, a model unspecific search for new physics, in pp collisions at $\sqrt{s}=13$ TeV.”
* (20) ATLAS Collaboration, M. Aaboud et al., “A strategy for a general search for new phenomena using data-derived signal regions and its application within the ATLAS experiment,” Eur. Phys. J. C79 (2019) 120, arXiv:1807.07447 [hep-ex].
* (21) ATLAS Collaboration, “A general search for new phenomena with the ATLAS detector in pp collisions at $\sqrt{s}=8$ TeV,” ATLAS-CONF-2014-006 (Mar, 2014). https://cds.cern.ch/record/1666536.
* (22) ATLAS Collaboration, “A general search for new phenomena with the ATLAS detector in pp collisions at $\sqrt{s}=7$ TeV,” ATLAS-CONF-2012-107 (Aug, 2012). https://cds.cern.ch/record/1472686.
* (23) R. T. D’Agnolo and A. Wulzer, “Learning New Physics from a Machine,” Phys. Rev. D99 no. 1, (2019) 015014, arXiv:1806.02350 [hep-ph].
* (24) J. H. Collins, K. Howe, and B. Nachman, “Anomaly Detection for Resonant New Physics with Machine Learning,” Phys. Rev. Lett. 121 no. 24, (2018) 241803, arXiv:1805.02664 [hep-ph].
* (25) J. H. Collins, K. Howe, and B. Nachman, “Extending the search for new resonances with machine learning,” Phys. Rev. D99 no. 1, (2019) 014038, arXiv:1902.02634 [hep-ph].
* (26) R. T. D’Agnolo, G. Grosso, M. Pierini, A. Wulzer, and M. Zanetti, “Learning Multivariate New Physics,” arXiv:1912.12155 [hep-ph].
* (27) M. Farina, Y. Nakai, and D. Shih, “Searching for New Physics with Deep Autoencoders,” arXiv:1808.08992 [hep-ph].
* (28) T. Heimel, G. Kasieczka, T. Plehn, and J. M. Thompson, “QCD or What?,” SciPost Phys. 6 no. 3, (2019) 030, arXiv:1808.08979 [hep-ph].
* (29) T. S. Roy and A. H. Vijay, “A robust anomaly finder based on autoencoder,” arXiv:1903.02032 [hep-ph].
* (30) O. Cerri, T. Q. Nguyen, M. Pierini, M. Spiropulu, and J.-R. Vlimant, “Variational Autoencoders for New Physics Mining at the Large Hadron Collider,” JHEP 05 (2019) 036, arXiv:1811.10276 [hep-ex].
* (31) A. Blance, M. Spannowsky, and P. Waite, “Adversarially-trained autoencoders for robust unsupervised new physics searches,” JHEP 10 (2019) 047, arXiv:1905.10384 [hep-ph].
* (32) J. Hajer, Y.-Y. Li, T. Liu, and H. Wang, “Novelty Detection Meets Collider Physics,” arXiv:1807.10261 [hep-ph].
* (33) A. De Simone and T. Jacques, “Guiding New Physics Searches with Unsupervised Learning,” Eur. Phys. J. C79 no. 4, (2019) 289, arXiv:1807.06038 [hep-ph].
* (34) A. Mullin, H. Pacey, M. Parker, M. White, and S. Williams, “Does SUSY have friends? A new approach for LHC event analysis,” arXiv:1912.10625 [hep-ph].
* (35) A. Casa and G. Menardi, “Nonparametric semisupervised classification for signal detection in high energy physics,” arXiv:1809.02977 [hep-ex].
* (36) B. M. Dillon, D. A. Faroughy, and J. F. Kamenik, “Uncovering latent jet substructure,” Phys. Rev. D100 no. 5, (2019) 056002, arXiv:1904.04200 [hep-ph].
* (37) A. Andreassen, B. Nachman, and D. Shih, “Simulation Assisted Likelihood-free Anomaly Detection,” Phys. Rev. D 101 no. 9, (2020) 095004, arXiv:2001.05001 [hep-ph].
* (38) B. Nachman and D. Shih, “Anomaly Detection with Density Estimation,” Phys. Rev. D 101 (2020) 075042, arXiv:2001.04990 [hep-ph].
* (39) J. A. Aguilar-Saavedra, J. H. Collins, and R. K. Mishra, “A generic anti-QCD jet tagger,” JHEP 11 (2017) 163, arXiv:1709.01087 [hep-ph].
* (40) M. Romão Crispim, N. Castro, R. Pedro, and T. Vale, “Transferability of Deep Learning Models in Searches for New Physics at Colliders,” Phys. Rev. D 101 no. 3, (2020) 035042, arXiv:1912.04220 [hep-ph].
* (41) M. C. Romao, N. Castro, J. Milhano, R. Pedro, and T. Vale, “Use of a Generalized Energy Mover’s Distance in the Search for Rare Phenomena at Colliders,” arXiv:2004.09360 [hep-ph].
* (42) O. Knapp, G. Dissertori, O. Cerri, T. Q. Nguyen, J.-R. Vlimant, and M. Pierini, “Adversarially Learned Anomaly Detection on CMS Open Data: re-discovering the top quark,” arXiv:2005.01598 [hep-ex].
* (43) ATLAS Collaboration, “Dijet resonance search with weak supervision using 13 TeV pp collisions in the ATLAS detector,” arXiv:2005.02983 [hep-ex].
* (44) B. M. Dillon, D. A. Faroughy, J. F. Kamenik, and M. Szewc, “Learning the latent structure of collider events,” arXiv:2005.12319 [hep-ph].
* (45) M. C. Romao, N. Castro, and R. Pedro, “Finding New Physics without learning about it: Anomaly Detection as a tool for Searches at Colliders,” arXiv:2006.05432 [hep-ph].
* (46) O. Amram and C. M. Suarez, “Tag N’ Train: A Technique to Train Improved Classifiers on Unlabeled Data,” arXiv:2002.12376 [hep-ph].
* (47) T. Cheng, J.-F. Arguin, J. Leissner-Martin, J. Pilette, and T. Golling, “Variational Autoencoders for Anomalous Jet Tagging,” arXiv:2007.01850 [hep-ph].
* (48) C. K. Khosa and V. Sanz, “Anomaly Awareness,” arXiv:2007.14462 [cs.LG].
* (49) P. Thaprasop, K. Zhou, J. Steinheimer, and C. Herold, “Unsupervised Outlier Detection in Heavy-Ion Collisions,” arXiv:2007.15830 [hep-ex].
* (50) S. Alexander, S. Gleyzer, H. Parul, P. Reddy, M. W. Toomey, E. Usai, and R. Von Klar, “Decoding Dark Matter Substructure without Supervision,” arXiv:2008.12731 [astro-ph.CO].
* (51) J. A. Aguilar-Saavedra, F. R. Joaquim, and J. F. Seabra, “Mass Unspecific Supervised Tagging (MUST) for boosted jets,” arXiv:2008.12792 [hep-ph].
* (52) K. Benkendorfer, L. L. Pottier, and B. Nachman, “Simulation-Assisted Decorrelation for Resonant Anomaly Detection,” arXiv:2009.02205 [hep-ph].
* (53) A. A. Pol, V. Berger, G. Cerminara, C. Germain, and M. Pierini, “Anomaly Detection With Conditional Variational Autoencoders,” arXiv:2010.05531 [cs.LG].
* (54) V. Mikuni and F. Canelli, “Unsupervised clustering for collider physics,” arXiv:2010.07106 [physics.data-an].
* (55) M. van Beekveld, S. Caron, L. Hendriks, P. Jackson, A. Leinweber, S. Otten, R. Patrick, R. Ruiz de Austri, M. Santoni, and M. White, “Combining outlier analysis algorithms to identify new physics at the LHC,” arXiv:2010.07940 [hep-ph].
* (56) S. E. Park, D. Rankin, S.-M. Udrescu, M. Yunus, and P. Harris, “Quasi Anomalous Knowledge: Searching for new physics with embedded knowledge,” arXiv:2011.03550 [hep-ph].
* (57) D. A. Faroughy, “Uncovering hidden patterns in collider events with Bayesian probabilistic models,” arXiv:2012.08579 [hep-ph].
* (58) G. Stein, U. Seljak, and B. Dai, “Unsupervised in-distribution anomaly detection of new physics through conditional density estimation,” arXiv:2012.11638 [cs.LG].
* (59) G. Kasieczka et al., “The LHC Olympics 2020: A Community Challenge for Anomaly Detection in High Energy Physics,” arXiv:2101.08320 [hep-ph].
* (60) P. Chakravarti, M. Kuusela, J. Lei, and L. Wasserman, “Model-Independent Detection of New Physics Signals Using Interpretable Semi-Supervised Classifier Tests,” arXiv:2102.07679 [stat.AP].
* (61) J. Batson, C. G. Haaf, Y. Kahn, and D. A. Roberts, “Topological Obstructions to Autoencoding,” arXiv:2102.08380 [hep-ph].
* (62) A. Blance and M. Spannowsky, “Unsupervised Event Classification with Graphs on Classical and Photonic Quantum Computers,” arXiv:2103.03897 [hep-ph].
* (63) B. Bortolato, B. M. Dillon, J. F. Kamenik, and A. Smolkovič, “Bump Hunting in Latent Space,” arXiv:2103.06595 [hep-ph].
* (64) B. Nachman, “Anomaly Detection for Physics Analysis and Less than Supervised Learning,” arXiv:2010.14554 [hep-ph].
* (65) E. M. Metodiev, B. Nachman, and J. Thaler, “Classification without labels: Learning from mixed samples in high energy physics,” arXiv:1708.02949 [hep-ph].
* (66) T. Sjostrand, S. Mrenna, and P. Z. Skands, “A Brief Introduction to PYTHIA 8.1,” Comput. Phys. Commun. 178 (2008) 852–867, arXiv:0710.3820 [hep-ph].
* (67) DELPHES 3 Collaboration, J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi, “DELPHES 3, A modular framework for fast simulation of a generic collider experiment,” JHEP 02 (2014) 057, arXiv:1307.6346 [hep-ex].
* (68) M. Cacciari, G. P. Salam, and G. Soyez, “FastJet User Manual,” Eur. Phys. J. C72 (2012) 1896, arXiv:1111.6097 [hep-ph].
* (69) M. Cacciari, G. P. Salam, and G. Soyez, “The Anti-k(t) jet clustering algorithm,” JHEP 04 (2008) 063, arXiv:0802.1189 [hep-ph].
* (70) J. Thaler and K. Van Tilburg, “Maximizing Boosted Top Identification by Minimizing N-subjettiness,” JHEP 02 (2012) 093, arXiv:1108.2701 [hep-ph].
* (71) J. Thaler and K. Van Tilburg, “Identifying Boosted Objects with N-subjettiness,” JHEP 03 (2011) 015, arXiv:1011.2268 [hep-ph].
* (72) A. Maas, A. Hannun, and A. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proceedings of the International Conference on Machine Learning. Atlanta, Georgia, 2013.
* (73) V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning, pp. 807–814, 2010.
* (74) D.-A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” arXiv:1511.07289 [cs.LG].
* (75) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research 15 no. 56, (2014) 1929–1958. http://jmlr.org/papers/v15/srivastava14a.html.
* (76) D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv:1412.6980 [cs].
* (77) F. Chollet, “Keras.” https://github.com/fchollet/keras, 2017.
* (78) M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “Tensorflow: A system for large-scale machine learning,” in OSDI, vol. 16, pp. 265–283, 2016.
* (79) A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds., pp. 8024–8035. Curran Associates, Inc., 2019. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
* (80) J. Gallicchio, J. Huth, M. Kagan, M. D. Schwartz, K. Black, and B. Tweedie, “Multivariate discrimination and the Higgs + W/Z search,” JHEP 04 (2011) 069, arXiv:1010.3698 [hep-ph].
* (81) ATLAS Collaboration, G. Aad et al., “Search for new resonances in mass distributions of jet pairs using 139 fb-1 of $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector,” JHEP 03 (2020) 145, arXiv:1910.08447 [hep-ex].
* (82) CMS Collaboration, A. M. Sirunyan et al., “Search for narrow and broad dijet resonances in proton-proton collisions at $\sqrt{s}=13$ TeV and constraints on dark matter mediators and other new particles,” JHEP 08 (2018) 130, arXiv:1806.00843 [hep-ex].
* (83) O. Kitouni, B. Nachman, C. Weisser, and M. Williams, “Enhancing searches for resonances with machine learning and moment decomposition,” arXiv:2010.09745 [hep-ph].
Oscar Higgott
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom
Matthew Wilson
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom
Department of Computer Science, University of Oxford, Oxford OX1 3QD, United Kingdom
James Hefford
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom
Department of Computer Science, University of Oxford, Oxford OX1 3QD, United Kingdom
James Dborin
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom
London Centre for Nanotechnology, University College London,
Gordon St., London WC1H 0AH, United Kingdom
Farhan Hanif
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom
Simon Burton
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom
Dan E. Browne
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom
Optimal local unitary encoding circuits for the surface code
The surface code is a leading candidate quantum error correcting code, owing to its high threshold and compatibility with existing experimental architectures. Bravyi et al. [Bravyi et al., 2006] showed that encoding a state in the surface code using local unitary operations requires time at least linear in the lattice size $L$; however, the most efficient known method for encoding an unknown state, introduced by Dennis et al. [Dennis et al., 2002], has $O(L^2)$ time complexity. Here, we present an optimal local unitary encoding circuit for the planar surface code that uses exactly $2L$ time steps to encode an unknown state in a distance $L$ planar code. We further show how an $O(L)$ complexity local unitary encoder for the toric code can be found by enforcing locality in the $O(\log L)$-depth non-local renormalisation encoder. We relate these techniques by providing an $O(L)$ local unitary circuit to convert between a toric code and a planar code, and also provide optimal encoders for the rectangular, rotated and 3D surface codes.
Furthermore, we show how our encoding circuit for the planar code can be used to prepare fermionic states in the compact mapping, a recently introduced fermion to qubit mapping that has a stabiliser structure similar to that of the surface code and is particularly efficient for simulating the Fermi-Hubbard model.
§ INTRODUCTION
One of the most promising error correcting codes for achieving fault-tolerant quantum computing is the surface code, owing to its high threshold and low weight check operators that are local in two dimensions [Kitaev, 2003, Dennis et al., 2002]. The stabilisers of the surface code are defined on the faces and sites of an $L \times L$ square lattice embedded on either a torus (the toric code) or a plane (the planar code). The toric code encodes two logical qubits, while the planar code encodes a single logical qubit.
An important component of any quantum error correction (QEC) code is its encoding circuit, which maps an initial product state of $k$ qubits in arbitrary unknown states (along with $n-k$ ancillas) to the same state on $k$ logical qubits encoded in a quantum code with $n$ physical qubits. The encoding of logical states has been realised experimentally for the demonstration of small-scale QEC protocols using various codes [Chiaverini et al., 2004, Lu et al., 2008, Schindler et al., 2011, Taminiau et al., 2014, Waldherr et al., 2014, Nigg et al., 2014, Kelly et al., 2015, Ofek et al., 2016, Cramer et al., 2016, Linke et al., 2017, Vuillot, 2017, Roffe et al., 2018, Gong et al., 2021]. However, one of the challenges of realising larger-scale experimental demonstrations of QEC protocols is the increasing complexity of the encoding circuits with larger system sizes, which has motivated the recent development of compiling techniques that reduce the number of noisy gates in unitary encoding circuits [Xu et al., 2021].
Encoding circuits can also be useful for implementing fermion-to-qubit mappings [Seeley et al., 2012], an important component of quantum simulation algorithms, since some mappings introduce stabilisers in order to mitigate errors [Jiang et al., 2019] or enforce locality in the transformed fermionic operators [Bravyi and Kitaev, 2002, Verstraete and Cirac, 2005, Steudtner and Wehner, 2019, Havlíček et al., 2017]. Local unitary encoding circuits provide a method to initialise and switch between mappings without the need for ancilla-based stabiliser measurements and feedback.
The best known local unitary circuits for encoding an unknown state in the surface code are far from optimal. Bravyi et al. [Bravyi et al., 2006] showed that any local unitary encoding circuit for the surface code must take time that is at least linear in the distance $L$; however, the most efficient known local unitary circuit for encoding an unknown state in the surface code was introduced by Dennis et al. [Dennis et al., 2002], and requires $\Omega(L^2)$ time to encode an unknown state in a distance $L$ planar code. Aguado and Vidal [Aguado and Vidal, 2008] introduced a Renormalisation Group (RG) unitary encoding circuit for preparing an unknown state in the toric code with $O(\log L)$ circuit depth; however, their method requires non-local gates. More recently, Aharonov and Touati provided an $\Omega(\log L)$ lower bound on the circuit depth of preparing toric code states with non-local gates, demonstrating that the RG encoder is optimal in this setting [Aharonov and Touati, 2018], and an alternative approach for preparing a specific state in the toric code with non-local gates and depth $O(\log L)$ was recently introduced in Ref. [Liao and Feder, 2021]. Dropping the requirement of unitarity, encoders have been found that use stabiliser measurements [Łodyga et al., 2015, Horsman et al., 2012, Li, 2015] or local dissipative evolution [König and Pastawski, 2014], and it has been shown that local dissipative evolution cannot be used to beat the $\Omega(L)$ lower bound for local unitary encoders [König and Pastawski, 2014]. If only the logical $\bar{\ket{0}}$ state is to be prepared, then stabiliser measurements [Dennis et al., 2002] can be used, as well as optimal local unitaries that either use adiabatic evolution [Hamma and Lidar, 2008] or a mapping from a cluster state [Brown et al., 2011]. However, encoding circuits by definition should be capable of encoding an arbitrary unknown input state.
In this work, we present local unitary encoding circuits for both the planar and toric code that take time linear in the lattice size to encode an unknown state, achieving the $\Omega(L)$ lower bound given by Bravyi et al. [Bravyi et al., 2006]. Furthermore, we provide encoding circuits for rectangular, rotated and 3D surface codes, as well as a circuit that encodes a toric code from a planar code. Our circuits also imply optimal encoders for the 2D color code [Kubica et al., 2015], some 2D subsystem codes [Bombin et al., 2012, Bravyi et al., 2012] and any 2D translationally invariant topological code [Bombin et al., 2012]. On many Noisy Intermediate-Scale Quantum (NISQ) [Preskill, 2018] devices, which are often restricted to local unitary operations, our techniques therefore provide an optimal method for experimentally realising topological quantum order.
Another advantage of using a unitary encoding circuit is that it does not require the use of ancillas to measure stabilisers, therefore providing a more qubit efficient method of preparing topologically ordered states ($2\times$ fewer qubits are required to prepare a surface code state of a given lattice size).
Finally, we show how our unitary encoding circuits for the planar code can be used to construct $O(L)$ depth circuits to encode a Slater determinant state in the compact mapping [Derby et al., 2021], which can be used for the simulation of fermionic systems on quantum computers.
§ STABILISER CODES
An $n$-qubit Pauli operator $P=\alpha P_n$ where $P_n\in\{I,X,Y,Z\}^{\otimes n}$ is an $n$-fold tensor product of single qubit Pauli operators with the coefficient $\alpha\in\{\pm 1, \pm i\}$. The set of all $n$-qubit Pauli operators forms the $n$-qubit Pauli group $\mathcal{P}_n$. The weight $\mathrm{wt}(P)$ of a Pauli operator $P\in\mathcal{P}_n$ is the number of qubits on which it acts non-trivially. Any two Pauli operators commute if an even number of their tensor factors commute, and anti-commute otherwise.
Stabiliser codes [Gottesman, 1997] are defined in terms of a stabiliser group $\mathcal{S}$, which is an abelian subgroup of $\mathcal{P}_n$ that does not contain the element $-I$. Elements of a stabiliser group are called stabilisers. Since every stabiliser group is abelian and Pauli operators have the eigenvalues $\pm 1$, there is a joint $+1$-eigenspace of every stabiliser group, which defines the stabiliser code.
The check operators of a stabiliser code are a set of generators of $\mathcal{S}$ and hence all measure $+1$ if the state is uncorrupted. Any check operator $M$ that anticommutes with an error $E$ will measure -1 (since $ME\ket{\psi}=-EM\ket{\psi}=-E\ket{\psi}$). The centraliser $C(\mathcal{S})$ of $\mathcal{S}$ in $\mathcal{P}_n$ is the set of Pauli operators which commute with every stabiliser. If an error $E\in C(\mathcal{S})$ occurs, it will be undetectable. If $E\in\mathcal{S}$, then it acts trivially on the codespace, and no correction is required. However if $E\in C(\mathcal{S})\setminus \mathcal{S}$, then an undetectable logical error has occurred. The distance $d$ of a stabiliser code is the smallest weight of any logical operator.
A stabiliser code is a Calderbank-Shor-Steane (CSS) code if there exists a generating set for the stabiliser group such that every generator is in $\{I,X\}^n\cup \{I,Z\}^n$.
§ THE SURFACE CODE
The surface code is a CSS code introduced by Kitaev [Kitaev, 2003, Dennis et al., 2002], which has check operators defined on a square lattice embedded in a two-dimensional surface. Each $\textit{site}$ check operator is a Pauli operator in $\{I,X\}^n$ which acts non-trivially only on the edges adjacent to a vertex of the lattice. Each $\textit{plaquette}$ check operator is a Pauli operator in $\{I,Z\}^n$ which acts non-trivially only on the edges adjacent to a face of the lattice. In the toric code, the square lattice is embedded in a torus, whereas in the planar code the lattice is embedded in a plane, without periodic boundary conditions (see <Ref>). These site and plaquette operators together generate the stabiliser group of the code. While the toric code encodes two logical qubits, the planar code encodes a single logical qubit.
The check operators for (a) the toric code and (b) the planar code. Opposite edges in (a) are identified and each edge corresponds to a qubit.
§ ENCODING AN UNKNOWN STATE
We are interested in finding a unitary encoding circuit that maps a product state $\ket{\phi_0}\otimes\ldots\otimes\ket{\phi_{k-1}}\otimes \ket{0}^{\otimes (n-k)}$ of $k$ physical qubits in unknown states (along with ancillas) to the state of $k$ logical qubits encoded in a stabiliser code with $n$ physical qubits. Labelling the ancillas in the initial state $k, k+1,\ldots ,n-1$, we note that the initial product state is a $+1$-eigenstate of the stabilisers $Z_k,Z_{k+1},\ldots,Z_{n-1}$. Thus, we wish to find a unitary encoding circuit that maps the stabilisers $Z_k,Z_{k+1},\ldots,Z_{n-1}$ of the product state to a generating set for the stabiliser group $\mathcal{S}$ of the code. The circuit must also map the logical operators $Z_0,Z_1,\ldots,Z_{k-1}$ and $X_0,X_1,\ldots,X_{k-1}$ of the physical qubits to the corresponding logical operators $\bar{Z}_0,\bar{Z}_1,\ldots,\bar{Z}_{k-1}$ and $\bar{X}_0,\bar{X}_1,\ldots,\bar{X}_{k-1}$ of the encoded qubits (up to stabilisers).
Applying a unitary $U$ to an eigenstate $\ket{\psi}$ of an operator $S$ (with eigenvalue $s$) gives $US\ket{\psi}=sU\ket{\psi}=USU^\dagger U\ket{\psi}$: an eigenstate of $S$ becomes an eigenstate of $USU^\dagger$. Therefore, we wish to find a unitary encoding circuit that, acting under conjugation, transforms the stabilisers and logicals of the initial product state into the stabilisers and logicals of the encoded state.
The CNOT gate, acting by conjugation, transforms Pauli $X$ and $Z$ operators as follows:
\begin{align}\label{eq:cnotstabilisers}
XI \leftrightarrow XX, \quad IZ \leftrightarrow ZZ,
\end{align}
and leaves $ZI$ and $IX$ invariant. Here $\sigma \sigma^\prime$ for $\sigma,\sigma^\prime\in \{I, Z, X\}$ denotes $\sigma_C\otimes \sigma_T$ with $C$ and $T$ the control and target qubit of the CNOT respectively. Since $Z=HXH$ and $X=HZH$, a Hadamard gate $H$ transforms an eigenstate of $Z$ into an eigenstate of $X$ and vice versa. We will show how these relations can be used to generate unitary encoding circuits for the surface code using only CNOT and Hadamard gates.
As an example, consider the problem of generating the encoding circuit for the repetition code, which has stabilisers $Z_0Z_1$ and $Z_1Z_2$. We start in the product state $\ket{\phi}\ket{0}\ket{0}$ which has stabilisers $Z_1$ and $Z_2$. We first apply CNOT$_{01}$ which transforms the stabiliser $Z_1\rightarrow Z_0Z_1$ and leaves $Z_2$ invariant. Then applying CNOT$_{12}$ transforms $Z_2\rightarrow Z_1Z_2$ and leaves $Z_0Z_1$ invariant. We can also verify that the logical $X$ undergoes the required transformation $X_0\rightarrow \bar{X}_0\coloneqq X_0X_1X_2$.
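These conjugation rules are straightforward to mechanise: representing an $n$-qubit Pauli by its $X$ and $Z$ bit-vectors (and ignoring overall phases), a CNOT adds the control's $X$ bit to the target and the target's $Z$ bit to the control, while $H$ swaps the two bits on its qubit. A minimal sketch verifying the repetition-code encoder above:

```python
import numpy as np

class Pauli:
    """n-qubit Pauli tracked as X/Z bit-vectors; the overall phase is ignored."""
    def __init__(self, n, xs=(), zs=()):
        self.x = np.zeros(n, dtype=int)
        self.z = np.zeros(n, dtype=int)
        self.x[list(xs)] = 1
        self.z[list(zs)] = 1

    def cnot(self, c, t):
        # Conjugation rules above: X_c -> X_c X_t and Z_t -> Z_c Z_t,
        # with Z_c and X_t left invariant.
        self.x[t] ^= self.x[c]
        self.z[c] ^= self.z[t]

    def h(self, q):
        # Hadamard swaps the X and Z components on qubit q.
        self.x[q], self.z[q] = self.z[q], self.x[q]

    def __repr__(self):
        chars = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
        return "".join(chars[(a, b)] for a, b in zip(self.x, self.z))

# Repetition-code encoder: CNOT_{01} then CNOT_{12}.
for p in (Pauli(3, zs=[1]), Pauli(3, zs=[2]), Pauli(3, xs=[0])):
    before = repr(p)
    p.cnot(0, 1)
    p.cnot(1, 2)
    print(before, "->", p)   # IZI -> ZZI, IIZ -> IZZ, XII -> XXX
```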
§ GENERAL ENCODING METHODS FOR STABILISER CODES
There exists a general method for generating an encoding circuit for any stabiliser code [Gottesman, 1997, Cleve and Gottesman, 1997], which we review in Appendix A. The specific structure of the output of this method means it can immediately be rearranged to depth $O(n)$. Using general routing procedures presented in [Cheung et al., 2007, Beals et al., 2013, Brierley, 2015] the output circuit could be adapted to a surface architecture with overhead $O(\sqrt{n})$, giving a circuit with depth $O(n\sqrt{n})$. This matches the scaling $O(\min(2n^2,4nD\Delta))$ in depth for stabiliser circuits achieved in [Wu et al., 2019], where $D$ and $\Delta$ are the diameter and degree respectively of the underlying architecture graph. Any stabiliser circuit has an equivalent skeleton circuit [Maslov, 2007], and so can be implemented on a surface architecture with depth $O(n) = O(L^2)$, matching the previously best known scaling [Dennis et al., 2002] for encoding the planar code. $O(n)$ is an optimal bound on the depth of the set of all stabiliser circuits [Maslov, 2007], so we look beyond general methods and work with the specifics of the planar encoding circuit to improve on [Dennis et al., 2002].
§ OPTIMAL ENCODER FOR THE PLANAR CODE
Dennis et al. [Dennis et al., 2002] showed how the methods outlined in section <ref> can be used to generate an encoding circuit for the planar surface code. The inductive step in their method requires $\Omega(L)$ time steps and encodes a distance $L+1$ planar code from a distance $L$ code by turning smooth edges into rough edges and vice versa. As a result, encoding a distance $L$ planar code from an unencoded qubit requires $\Omega(L^2)$ time steps, which is quadratically slower than the lower bound given by Bravyi et al. [Bravyi et al., 2006].
Circuit to encode a distance 6 planar code from a distance 4 planar code. Each edge corresponds to a qubit. Each arrow denotes a CNOT gate, pointing from control to target. Filled black circles (centred on edges) denote Hadamard gates, which are applied at the beginning of the circuit. The colour of each CNOT gate (arrow) denotes the time step in which it is applied. The first, second, third and fourth time steps correspond to the blue, green, red and black CNOT gates respectively. Solid edges correspond to qubits originally encoded in the L=4 planar code, whereas dotted edges correspond to additional qubits that are encoded in the L=6 planar code.
However, here we present a local unitary encoding circuit for the planar code that requires only $2L$ time steps to encode a distance $L$ planar code. The inductive step in our method, shown in <Ref> for $L=4$, encodes a distance $L+2$ planar code from a distance $L$ planar code using 4 time steps, and does not rotate the code. This inductive step can then be used recursively to encode an unencoded qubit into a distance $L$ planar code using $2L$ time steps. If $L$ is odd, the base case used is the distance 3 planar code, which can be encoded in 6 time steps. If $L$ is even, a distance 4 planar code is used as a base case, which can be encoded in 8 time steps. Encoding circuits for the distance 3 and 4 planar codes are given in Appendix <ref>. Our encoding circuit therefore matches the $\Omega(L)$ lower bound provided by Bravyi et al. [Bravyi et al., 2006].
The transformation of the stabiliser generators of the $L=4$ planar surface code when the circuit in <Ref> is applied. Top: the four main types of site stabilisers acted on nontrivially by the encoding circuit (labelled a-d) are shown in red before (left) and after (right) the encoding circuit is applied. On the left we assume that the ancillas have already been initialised in the $\ket{+}$ state ($H$ applied). Bottom: the four main types of plaquette stabilisers (also labelled a-d) are shown in blue before (left) and after (right) the encoding circuit is applied. Plaquette c has two connected components after the circuit is applied (right), and is enclosed by a green dashed line for clarity.
Since the circuit for the inductive step in <Ref> uses only CNOT and $H$ gates, we can verify its correctness by checking that stabiliser generators and logicals of the distance $L$ surface code are mapped to stabiliser generators and logicals of the distance $L+2$ surface code using the conjugation rules explained in <Ref>.
We show how each type of site and plaquette stabiliser generator is mapped by the inductive step of the encoding circuit in <Ref>.
Note that the site stabiliser generator labelled c (red) is mapped to a weight 7 stabiliser in the $L=6$ planar code: this is still a valid generator of the stabiliser group, and the standard weight four generator can be obtained by multiplication with a site of type b.
Similarly, the plaquette stabiliser generator labelled c becomes weight 7, but a weight four generator is recovered by multiplication with a plaquette of type a.
Therefore, the stabiliser group of the $L=4$ planar code is mapped correctly to that of the $L=6$ planar code, even though minimum-weight generators are not mapped explicitly to minimum-weight generators.
Using <Ref> it is straightforward to verify that the $X$ and $Z$ logical operators of the $L=4$ planar code are also mapped to the $X$ and $Z$ logicals of the $L=6$ planar code by the inductive step.
We can also encode rectangular planar codes with height $H$ and width $W$ by first encoding a distance $\min(H,W)$ square planar code and then using a subset of the gates in <Ref> (given explicitly in Appendix <ref>) to either increase the width or the height as required. Increasing either the width or height by two requires three time steps, therefore encoding a $H\times W$ rectangular planar code from an unencoded qubit requires $2\min(H,W)+3\left\lceil\frac{|H-W|}{2}\right\rceil$ time steps.
In Appendix <ref> we also provide an optimal encoder for the rotated surface code, which uses fewer physical qubits for a given distance $L$ [Bombin and Martin-Delgado, 2007]. Our encoding circuit also uses an inductive step that increases the distance by two using four time steps, and therefore uses $2L + O(1)$ time steps to encode a distance $L$ rotated surface code.
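For bookkeeping, the circuit depths quoted above follow directly from the inductive structure; below is a minimal sketch of the counting only (the function names are illustrative, not from the paper):

```python
from math import ceil

def square_planar_depth(L):
    # Base case: depth 6 for L = 3 (odd) or 8 for L = 4 (even), plus
    # 4 time steps per inductive increase of the distance by 2.
    base = 3 if L % 2 else 4
    return 2 * base + 4 * (L - base) // 2

def rectangular_planar_depth(H, W):
    # 2 min(H, W) steps for the square core, then 3 steps per unit-2
    # extension of the longer side.
    return 2 * min(H, W) + 3 * ceil(abs(H - W) / 2)

# The inductive construction gives exactly 2L steps for the square code.
assert all(square_planar_depth(L) == 2 * L for L in range(3, 21))
```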
§ LOCAL RENORMALISATION ENCODER FOR THE TORIC CODE
In this section we will describe an $O(L)$ encoder for the toric code based on the multi-scale entanglement renormalisation ansatz (MERA). The core of this method is to enforce locality in the Renormalisation Group (RG) encoder given by Aguado and Vidal [Aguado and Vidal, 2008]. The RG encoder starts from an $L=2$ toric code and then uses an $O(1)$ depth inductive step which enlarges a distance $2^k$ code to a distance $2^{k+1}$ code, as shown in <Ref> for the first step ($k=1$) (and reviewed in more detail in Appendix <ref>). The $L=2$ base case toric code can be encoded using the method given by Gottesman in Ref. [Gottesman, 1997], as shown in Appendix <ref>. While the RG encoder takes $O(\log L)$ time, it is non-local in its original form.
Encoding a distance 4 toric code from a distance 2 toric code using the Renormalisation Group encoder of Aguado and Vidal [Aguado and Vidal, 2008]. Dashed edges, dashed edges with a node and solid edges correspond to decoupled ancillae in $\ket{0}$, in $\ket{+}$, and to qubits entangled with the original code respectively. Opposite edges are identified. Arrows denote CNOT operations from control to target qubits, and monochromatic gates in stages (b) and (c) may be executed in a single timestep.
In order to enforce locality in the RG encoder, we wish to find an equivalent circuit that implements an identical operation on the same input state, using quantum gates that act locally on the physical architecture corresponding to the final distance $L$ toric code (here a gate is local if it acts only on qubits that belong to either the same site or plaquette). One approach to enforce locality in a quantum circuit is to insert SWAP gates into the circuit to move qubits adjacent to each other where necessary. Any time step of a quantum circuit can be made local on a $L\times L$ 2D nearest-neighbour (2DNN) grid architecture using at most $O(L)$ time steps, leading to at most a multiplicative $O(L)$ overhead from enforcing locality [Cheung et al., 2007, Beals et al., 2013, Brierley, 2015]. Placing an ancilla in the centre of each site and plaquette, we see that the connectivity graph of our physical architecture has a 2DNN grid as a subgraph. Therefore, using SWAP gates to enforce locality in the RG encoder immediately gives us a $O(L\log L)$ local unitary encoding circuit for the toric code which, while an improvement on the $O(L^2)$ encoder in Ref. [Dennis et al., 2002], does not match the $\Omega(L)$ lower bound.
However, we can achieve $O(L)$ complexity by first noticing that all `quantum circuit' qubits which are acted on non-trivially in the first $k$ steps of the RG encoder can be mapped to physical qubits in a $2^{k+1}\times 2^{k+1}$ square region of the physical architecture. Therefore, the required operations in iteration $k$ can all be applied within a $2^{k+1}\times 2^{k+1}$ region that also encloses the regions used in the previous steps. In Appendix <ref> we use this property to provide circuits for routing quantum information using SWAP gates (and no ancillas) that enforce locality in each of the $O(1)$ time steps in iteration $k$ using $O(2^{k+1})$ time steps. This leads to a total complexity of $\sum_{k=1}^{\log_2(L)-1} O(2^{k+1}) = O(L)$ for encoding a distance $L$ code, also achieving the lower bound given by Bravyi et al. [Bravyi et al., 2006]. In Appendix <ref> we provide a more detailed analysis to show that the total time complexity is $15L/2 - 6\log_2 L + 7 = O(L)$. Unlike the other encoders in this paper (which work for all $L$), the RG encoder can clearly only be applied when $L$ is a power of 2.
Circuit to encode a distance 5 toric code from a distance 5 planar code. Solid edges correspond to qubits in the original planar code and dotted edges correspond to qubits added for the toric code. Opposite edges are identified. Arrows denote CNOT gates, and filled black circles denote Hadamard gates applied at the beginning of the circuit. Blue and green CNOT gates correspond to those applied in the first and second time step respectively. Red CNOTs are applied in the time step that they are numbered with. The hollow circles denote the unencoded qubit that is to be encoded into the toric code.
§ ENCODING A TORIC CODE FROM A PLANAR CODE
While the method in section <ref> is only suitable for encoding planar codes, we will now show how we can encode a distance $L$ toric code from a distance $L$ planar code using only local unitary operations. Starting with a distance $L$ planar code, $2(L-1)$ ancillas each in a $\ket{0}$ state, and an additional unencoded logical qubit, the circuit in <Ref> encodes a distance $L$ toric code using $L+2$ time steps.
The correctness of this step can be verified using <Ref>: each ancilla initialised as $\ket{0}$ (stabilised by $Z$) is mapped to a plaquette present in the toric code but not the planar code.
Likewise, each ancilla initialised in $\ket{+}$ using an $H$ gate (stabilised by $X$) is mapped to a site generator in the toric code but not the planar code.
The weight-three site and plaquette stabilisers on the boundary of the planar code are also mapped to weight four stabilisers in the toric code.
Finally, we see that $X$ and $Z$ operators for the unencoded qubit (the hollow circle in <Ref>) are mapped to the second pair of $X$ and $Z$ logicals in the toric code by the circuit, leaving the other pair of $X$ and $Z$ logicals already present from the planar code unaffected.
Therefore, encoding two unencoded qubits in a toric code can be achieved using $3L+2$ time steps using the circuits given in this section and in section <ref>. Similarly, we can encode a planar code using the local RG encoder for the toric code, before applying the inverse of the circuit in <Ref>.
§ ENCODING A 3D SURFACE CODE
(a) Circuit to encode a $4\times 2$ planar code from a four qubit repetition code (where adjacent qubits in the repetition code are stabilised by $XX$). Applied to a column of qubits corresponding to a surface code $\bar{Z}$, this encodes a layer in the $yz$-plane of a 3D surface code. (b) Circuit to encode the $xz$-plane of a 3D surface code once the $yz$-plane layers and a layer in the $xy$-plane have been encoded. Arrows denote CNOT gates pointing from control to target, and blue, green, red and black CNOT gates correspond to the first, second, third and fourth time steps respectively. Solid and dotted edges correspond to qubits that are initially entangled and in a product state respectively.
We will now show how the techniques developed to encode a 2D planar code can be used to encode a distance $L$ 3D surface code using $O(L)$ time steps. We first encode a distance $L$ planar code using the method given in section <ref>. This planar code now forms a single layer in the $xy$-plane of a 3D surface code (where the $y$-axis is defined to be aligned with a $Z$-logical in the original planar code). Using the circuit given in <Ref>(a), we encode each column of qubits corresponding to a $Z$ logical in the planar code into a layer of the 3D surface code in the $yz$-plane (which has the same stabiliser structure as a planar code if the rest of the $x$-axis is excluded). Since each layer in the $yz$-plane can be encoded in parallel, this stage can also be done in $O(L)$ time steps. If we encode each layer in the $yz$-plane such that the original planar code intersects the middle of each layer in the $yz$-plane, then each layer in the $xz$-plane now has the stabiliser structure shown in <Ref>(b). Using the circuit in <Ref>(b) repeatedly, all layers in the $xz$-plane can be encoded in parallel in $O(L)$ time steps. Therefore, a single unknown qubit can be encoded into a distance $L$ 3D surface code in $O(L)$ time steps.
§ ENCODING CIRCUIT FOR THE COMPACT MAPPING
Fermion to qubit mappings are essential for simulating fermionic systems using quantum computers, and an encoding circuit for such a mapping is an important subroutine in many quantum simulation algorithms.
We now show how we can use our encoding circuits for the surface code to construct encoding circuits that prepare fermionic states in the compact mapping [Derby et al., 2021], a fermion to qubit mapping that is especially efficient for simulating the Fermi-Hubbard model.
A fermion to qubit mapping defines a representation of fermionic states in qubits, as well as a representation of each fermionic operator in terms of Pauli operators.
Using such a mapping, we can represent a fermionic Hamiltonian as a linear combination $H=\sum_i \alpha_i P_i$ of tensor products of Pauli operators $P_i$, where $\alpha_i$ are real coefficients.
We can then simulate time evolution $e^{-iHt}$ of $H$ (e.g. using a Trotter decomposition), which can be used in the quantum phase estimation algorithm to determine the eigenvalues of $H$.
The mapped Hamiltonian $H$ can also be used in the variational quantum eigensolver algorithm (VQE), where we can estimate the energy $\bra{\psi}H\ket{\psi}$ of a trial state $\ket{\psi}$ by measuring each Pauli term $\bra{\psi}P_i\ket{\psi}$ individually.
The Jordan-Wigner (JW) transformation maps fermionic creation ($a_i^\dagger$) and annihilation ($a_i$) operators to qubit operators in such a way that the canonical fermionic anti-commutation relations
\begin{equation}
\{a_{i}^{\dagger}, a_{j}^{\dagger}\}=0,\left\{a_{i}, a_{j}\right\}=0,\{a_{i}^{\dagger}, a_{j}\}=\delta_{i j}
\end{equation}
are satisfied by the encoded qubit operators.
The qubit operators used to represent $a_i^\dagger$ and $a_i$ are
\begin{align}
a_i^\dagger &\rightarrow Z_1\ldots Z_{i-1}\sigma_i^+ \\
a_i &\rightarrow Z_1\ldots Z_{i-1}\sigma_i^-
\end{align}
where $\sigma_i^+\coloneqq (X_i-iY_i)/2$ and $\sigma_i^-\coloneqq (X_i+iY_i)/2$.
Each electronic basis state (with $m$ modes) in the JW transformation is represented by $m$ qubits simply as a computational basis state $\ket{\omega_1,\omega_2,\ldots,\omega_m}$, where $\omega_i=1$ or $\omega_i=0$ indicates that mode $i$ is occupied or unoccupied by a fermion, respectively.
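As a quick numerical sanity check of these definitions, the sketch below builds the JW-mapped annihilation operators as dense matrices (practical only for a handful of modes; qubit indices are zero-based) and verifies the canonical anti-commutation relations:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def jw_annihilation(i, m):
    # a_i -> (Z on all earlier modes) sigma_i^-, identity on later modes.
    sigma_minus = (X + 1j * Y) / 2
    return reduce(np.kron, [Z] * i + [sigma_minus] + [I2] * (m - i - 1))

m = 4
a = [jw_annihilation(i, m) for i in range(m)]
for i in range(m):
    for j in range(m):
        # Check {a_i, a_j^dagger} = delta_ij and {a_i, a_j} = 0.
        assert np.allclose(a[i] @ a[j].conj().T + a[j].conj().T @ a[i],
                           np.eye(2 ** m) * (i == j))
        assert np.allclose(a[i] @ a[j] + a[j] @ a[i], 0.0)
```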
A drawback of the Jordan-Wigner transformation is that, even if a fermionic operator acts on $O(1)$ modes, the corresponding JW-mapped qubit operator can still act on up to $O(m)$ qubits.
When mapped qubit operators have larger weight, the depth and number of gates required to simulate time evolution of a mapped Hamiltonian also tend to increase, motivating the design of fermion-to-qubit mappings that map fermionic operators to qubit operators that are both low weight and geometrically local.
Several methods have been proposed for mapping geometrically local fermionic operators to geometrically local qubit operators [Verstraete and Cirac, 2005, Whitfield et al., 2016, Steudtner and Wehner, 2019, Bravyi and Kitaev, 2002, Setia et al., 2019, Jiang et al., 2019, Derby et al., 2021], all of which introduce auxiliary qubits and encode fermionic Fock space into a subspace of the full $n$-qubit system, defined as the $+1$-eigenspace of elements of a stabiliser group $\mathcal{S}$.
Mappings with this property are referred to as local.
We will now focus our attention on a specific local mapping, the compact mapping [Derby et al., 2021], since its stabiliser group is very similar to that of the surface code.
As we will see, this close connection to the surface code allows us to use the encoding circuits we have constructed for the surface code to encode fermionic states in the compact mapping.
The compact mapping maps nearest-neighbour hopping ($a_i^\dagger a_j + a_j^\dagger a_i$) and Coulomb ($a_i^\dagger a_i a_j^\dagger a_j$) terms to Pauli operators with weight at most 3 and 2, respectively, and requires 1.5 qubits for each fermionic mode [Derby et al., 2021].
Rather than mapping individual fermionic creation and annihilation operators, the compact mapping instead defines a representation of the fermionic edge ($E_{j k}$) and vertex ($V_{j}$) operators, defined as
\begin{equation}
E_{j k}\coloneqq-i \gamma_{j} \gamma_{k}, \quad V_{j}\coloneqq-i \gamma_{j} \bar{\gamma}_{j},
\end{equation}
where $\gamma_j\coloneqq a_j+a_j^\dagger$ and $\bar{\gamma}_j\coloneqq (a_j-a_j^\dagger)/i$ are Majorana operators.
The vertex and edge operators must satisfy the relations
\begin{equation}
\left[E_{i j}, V_{l}\right]=0, \quad\left[V_{i}, V_{j}\right]=0, \quad\left[E_{i j}, E_{l n}\right]=0.
\end{equation}
for all $i\neq j \neq l \neq n$, and
\begin{equation}
\left\{E_{i j}, E_{j k}\right\}=0, \quad\left\{E_{j k}, V_{j}\right\}=0.
\end{equation}
In the compact mapping, there is a “primary” qubit associated with each of the $m$ fermionic modes, and there are also $m/2$ “auxiliary” qubits.
Each vertex operator $V_j$ is mapped to the Pauli operator $Z_j$ on the corresponding primary qubit.
We denote the mapped vertex and edge operators by $\tilde{V}_j$ and $\tilde{E}_{ij}$, respectively, and so we have $\tilde{V}_j\coloneqq Z_j$.
Each edge operator $E_{ij}$ is mapped (up to a phase factor) to a three-qubit Pauli operator of the form $XYX$ or $XYY$, with support on two vertex qubits and a neighbouring “face” qubit.
The precise definition of the edge operators is not important for our purposes, and we refer the reader to Ref. [Derby et al., 2021] for details.
The vertex and edge operators define a graph (in which they correspond to vertices and edges, respectively), and an additional relation that must be satisfied in the mapping is that the product of any loop of edge operators must equal the identity:
\begin{equation}\label{eq:identity_loop}
i^{(|p|-1)}\prod_{i=1}^{(|p|-1)}\tilde{E}_{p_i p_{i+1}}=1,
\end{equation}
where here $p=\{p_1,p_2,\ldots\}$ is a sequence of vertices along any cycle in the graph.
The relation of <Ref> can be satisfied by requiring that the qubit operator corresponding to any mapped loop of edge operators is a stabiliser, if it is not already trivial, so that the relation holds within the $+1$-eigenspace of the stabilisers.
The stabilisers of the compact mapping. A primary qubit is associated with each black circle, and an auxiliary qubit is associated with each edge of the surface code lattice. There is a plaquette stabiliser (blue) associated with each face of the surface code lattice, acting as $YXXY$ on the edges adjacent to the face, and as $Z$ on each of the four closest primary qubits. There is also a site stabiliser (red) associated with each vertex of the surface code lattice, also acting as $YXXY$ on the edges adjacent to the vertex, and as $Z$ on each of the four closest primary qubits.
The stabiliser group $\mathcal{S}$ of the compact mapping is therefore defined by <Ref> and the definition of each $\tilde{E}_{ij}$.
The $+1$-eigenspace of $\mathcal{S}$ has dimension $2^{m+\Delta}$, where $m$ is the number of modes and $\Delta\in\{-1,0,1\}$ is the disparity, which depends on the boundary conditions chosen for the square lattice geometry.
We will only consider the case where $\Delta=1$, since this choice results in a stabiliser structure most similar to the surface code.
In this $\Delta=1$ case the full Fock space is encoded, along with a topologically protected logical qubit.
The stabilisers of the compact mapping (for the case $\Delta=1$) are shown in <Ref>, from which it is clear that the stabiliser group is very similar to that of the planar surface code, a connection which was first discussed in Ref. [Derby et al., 2021].
Indeed, if we consider the support of the stabilisers on only the auxiliary qubits (associated with the edges of the surface code lattice shown in <Ref>), we recover the stabiliser group of the planar surface code up to single-qubit Clifford gates acting on each qubit.
Using this insight, we can use our surface code encoding circuit to construct a local unitary encoding circuit that prepares a Slater determinant state in the compact mapping, which is often required for its use in quantum simulation algorithms.
Note that we can write each fermionic occupation operator $a_j^\dagger a_j$ for mode $j$ in terms of the corresponding vertex operator $V_j$ as $a_j^\dagger a_j=(I-V_j)/2$, where $I$ is the identity operator.
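This identity follows directly from the Majorana definitions above: using $a_j^2=(a_j^{\dagger})^2=0$ and $\{a_j,a_j^{\dagger}\}=I$,
\begin{equation}
\gamma_j\bar{\gamma}_j=\frac{1}{i}(a_j+a_j^{\dagger})(a_j-a_j^{\dagger})=\frac{1}{i}\left(2a_j^{\dagger}a_j-I\right),
\end{equation}
so that $V_j=-i\gamma_j\bar{\gamma}_j=I-2a_j^{\dagger}a_j$, and rearranging gives $a_j^\dagger a_j=(I-V_j)/2$.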
A Slater determinant state $\ket{\phi_{\mathrm{det}}}$ is then a joint eigenstate of the stabilisers and vertex operators:
\begin{align}
S_i\ket{\phi_{\mathrm{det}}}&=\ket{\phi_{\mathrm{det}}},\quad\forall S_i\in\mathcal{S}, \label{eq:determinant_stabilisers} \\
\tilde{V}_j\ket{\phi_{\mathrm{det}}}&=v_j\ket{\phi_{\mathrm{det}}},\quad\forall \tilde{V}_j\in \tilde{V}, \label{eq:vertex_op_eigenvalues}
\end{align}
where $\mathcal{S}$ is the stabiliser group of the mapping, $\tilde{V}$ is the set of mapped vertex operators, and $v_j\in\{+1,-1\}$ indicates whether mode $j$ is occupied (-1) or unoccupied (+1) [Jiang et al., 2019].
Let us denote the set of generators of $\mathcal{S}$ defined by the sites and plaquettes in <Ref> by $\{s_1,s_2,\ldots,s_r\}$ (i.e. $\mathcal{S}=\langle s_1,s_2,\ldots,s_r \rangle$).
For any Pauli operator $c$, we denote its component acting only on the primary qubits as $c^p$, and its component acting only on auxiliary qubits is denoted $c^a$.
With this notation we can decompose each stabiliser generator as $s_i=s_i^p\otimes s_i^a$, where $|s_i^p|=|s_i^a|=4$ in the bulk of the lattice.
For the compact mapping, where $\tilde{V}_j\coloneqq Z_j$, from <Ref> we see that the primary qubits are in a product state for all Slater determinant states, and so we can write the state of the system on all qubits as $\ket{\phi}=\ket{\phi}_{p}\otimes\ket{\phi}_{a}$, where $\ket{\phi}_{p}$ is the state of the primary qubits and $\ket{\phi}_{a}$ is the state of the auxiliary qubits.
Our circuit to prepare a Slater determinant state in the compact mapping then proceeds in three steps.
In step 1, we prepare each primary qubit in state $\ket{0}$ or $\ket{1}$ if the corresponding fermionic mode is unoccupied or occupied, respectively.
This ensures that the state satisfies <Ref> as required, and we denote the resultant state on the primary qubits by $\ket{\phi_{\mathrm{det}}}_{p}$.
It now remains to show how we can prepare the state on the auxiliary qubits such that <Ref> is also satisfied.
In step 2, we prepare a state $\ket{\phi_{\mathrm{surf}}}_{a}$ on the auxiliary qubits that is in the $+1$-eigenspace of each stabiliser generator restricted to its support only on the auxiliary qubits.
In other words, we prepare the state $\ket{\phi_{\mathrm{surf}}}_{a}$ satisfying
\begin{equation}
S_i\ket{\phi_{\mathrm{surf}}}_{a}=\ket{\phi_{\mathrm{surf}}}_{a}\quad\forall S_i\in\mathcal{S}',
\end{equation}
where $\mathcal{S}'\coloneqq \langle s_1^a, s_2^a,\ldots,s_r^a\rangle$.
The generators of $\mathcal{S}'$ are the same as those of the planar surface code up to local Clifford gates, and so we can prepare $\ket{\phi_{\mathrm{surf}}}_{a}$ by encoding the planar surface code on the auxiliary qubits using the circuit from <Ref> and applying $U_V$ ($U_H$) to each vertical (horizontal) edge of the lattice in <Ref>, where
\begin{align}
U_V &\coloneqq XHS=\frac{1}{\sqrt{2}}\left(\begin{array}{rr}
1 & -i \\
1 & i
\end{array}\right), \\
U_H &\coloneqq XHSH=\frac{1}{2}\left(\begin{array}{rr}
1-i & 1+i \\
1+i & 1-i
\end{array}\right).
\end{align}
This step can be verified by noticing that, under conjugation, $U_V$ maps $X\rightarrow Z$ and $Y\rightarrow X$, and $U_H$ maps $Y\rightarrow Z$ and $X\rightarrow X$, and so the generators of the surface code (<Ref>) are mapped to generators of $\mathcal{S}'$.
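These conjugation relations are straightforward to check numerically. A minimal sketch in Python (not part of the original text), verifying the maps up to a global phase:
\begin{verbatim}
import numpy as np

# Single-qubit gates
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

U_V = X @ H @ S        # maps X -> Z, Y -> X under conjugation
U_H = X @ H @ S @ H    # maps Y -> Z, X -> X under conjugation

def conj(U, P):
    """Heisenberg conjugation U^dagger P U."""
    return U.conj().T @ P @ U

def equal_up_to_phase(A, B):
    # extract the relative phase from the first nonzero entry of B
    idx = np.flatnonzero(np.abs(B) > 1e-9)[0]
    phase = A.flat[idx] / B.flat[idx]
    return np.allclose(A, phase * B)

assert equal_up_to_phase(conj(U_V, X), Z)
assert equal_up_to_phase(conj(U_V, Y), X)
assert equal_up_to_phase(conj(U_H, Y), Z)
assert equal_up_to_phase(conj(U_H, X), X)
print("U_V: X->Z, Y->X ; U_H: Y->Z, X->X (up to phase)")
\end{verbatim}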
A $-1$ syndrome on any individual plaquette stabiliser (blue) can be generated by a string of Pauli $Y$ operators (labelled in blue) on qubits on vertical edges joining it to the left (or right) boundary. Similarly, a $-1$ syndrome on any site stabiliser (red) can be generated by a string of Pauli $X$ operators (labelled in red) on qubits on vertical edges joining it to the top (or bottom) boundary.
Note that after step 2, the combined state of the primary and auxiliary qubits satisfies
\begin{align}
s_i^p\otimes s_i^a \ket{\phi_{\mathrm{det}}}_{p}\otimes\ket{\phi_{\mathrm{surf}}}_{a}&=b_i\ket{\phi_{\mathrm{det}}}_{p}\otimes\ket{\phi_{\mathrm{surf}}}_{a}
\end{align}
for each generator $s_i=s_i^p\otimes s_i^a$ of $\mathcal{S}$, where the eigenvalue $b_i\in\{-1,1\}$ is determined by the parity of the occupied primary qubits acted on non-trivially by $s_i^p$, satisfying $s_i^p\ket{\phi_{\mathrm{det}}}_{p}=b_i\ket{\phi_{\mathrm{det}}}_{p}$.
We say that $b_i$ is the syndrome of generator $s_i$.
In step 3, we apply a circuit that returns the state to the $+1$-eigenspace of every element of $\mathcal{S}$.
This can be done by applying a Pauli operator $R$, with support only on the auxiliary qubits, that commutes with each generator $s_i$ if its syndrome $b_i$ is 1 and anti-commutes otherwise.
Such a Pauli operator can always be found for any assignment of each $b_i\in\{1,-1\}$, as shown in <Ref>: for each stabiliser generator $s_i$, we can find a Pauli operator that we denote $V(s_i)$ which, acting only on the auxiliary qubits, anti-commutes with $s_i$ while commuting with all other generators (note that the choice of $V(s_i)$ is not unique).
Taking the product of operators $V(s_i)$ for all $s_i$ with syndrome $b_i=-1$, we obtain a single Pauli operator
\begin{equation}\label{eq:compact_correction}
R=\prod_{i\in\{i: b_i=-1\}}V(s_i)
\end{equation}
that returns the state of our combined system to the $+1$-eigenspace of elements of $\mathcal{S}$, such that it satisfies <Ref>.
Furthermore, since steps 2 and 3 have acted trivially on the primary qubits, <Ref> is still satisfied from step 1.
Therefore, a Slater determinant in the compact mapping can be encoded using the $O(L)$ depth unitary encoding circuit for the planar code as well as $O(1)$ layers of single qubit Clifford gates.
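As a sketch of how the step-3 correction can be assembled classically (helper names are ours, not from the original; Paulis are represented as binary symplectic $(x|z)$ vectors, so that products reduce to XOR, and the dropped phase is global and does not affect stabiliser eigenvalues):
\begin{verbatim}
import numpy as np

def correction(V_ops, syndromes):
    """Compose R as the product of V(s_i) over generators with b_i = -1.
    V_ops: list of (x|z) binary vectors for the V(s_i).
    syndromes: list of +1/-1 eigenvalues b_i."""
    R = np.zeros_like(V_ops[0])
    for v, b in zip(V_ops, syndromes):
        if b == -1:
            R ^= v   # Pauli product = XOR in the symplectic picture
    return R

# toy usage with three hypothetical generators on three auxiliary qubits
V_ops = [np.array([1, 0, 0, 0, 0, 0]),  # X on aux qubit 0
         np.array([0, 1, 0, 0, 0, 0]),  # X on aux qubit 1
         np.array([0, 0, 0, 0, 0, 1])]  # Z on aux qubit 2
print(correction(V_ops, [-1, +1, -1]))  # -> [1 0 0 0 0 1]
\end{verbatim}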
Note that the topologically protected logical qubit in the compact mapping is not used to store quantum information.
As a result, we can prepare any state in the codespace of the surface code in step 2, and it does not matter if the Pauli correction $R$ in step 3 acts non-trivially on the logical qubit.
The problem of finding a suitable correction $R$ in step 3 given the syndrome of each generator is essentially the same problem as decoding the XZZX surface code [Wen, 2003, Ataides et al., 2021] under the quantum erasure channel (and where every qubit is erased).
Therefore, any other suitable decoder could be used instead of using <Ref>, such as the variant of minimum-weight perfect matching used in Ref. [Ataides et al., 2021], or an adaptation of the peeling decoder [Delfosse and Zémor, 2020].
The encoding step for the surface code could instead be done using stabiliser measurements. However, since it is not otherwise necessary to measure the stabilisers of the mapping, the additional complexity of using ancillas, mid-circuit measurements and real-time classical logic might make such a measurement-based approach more challenging to implement on either NISQ or fault-tolerant hardware than the simple $O(L)$ depth local unitary encoding circuit we present. Furthermore, the $O(L)$ complexity of our encoding circuit is likely negligible compared to the overall complexity of most quantum simulation algorithms within which it could be used.
Our encoding circuits for the surface code may also be useful for preparing states encoded in other fermion-to-qubit mappings.
As an example, it has previously been observed that the Verstraete-Cirac transform also has a similar stabiliser structure to the surface code [Verstraete and Cirac, 2005, Steudtner and Wehner, 2019].
§ DISCUSSION
We have presented local unitary circuits for encoding an unknown state in the surface code that take time linear in the lattice size $L$. Our results demonstrate that the $\Omega(L)$ lower bound given by Bravyi et al. [Bravyi et al., 2006] for this problem is tight, and reduce the resource requirements for experimentally realising topological quantum order and implementing some QEC protocols, especially using NISQ systems restricted to local unitary operations. We have provided a new technique to encode the planar code in $O(L)$ time, as well as showing how an $O(L)$ local unitary encoding circuit for the toric code can be found by enforcing locality in the non-local RG encoder. We unify these two approaches by demonstrating how local $O(L)$-depth circuits can be used to convert between the planar and toric code, and generalise our method to rectangular, rotated and 3D surface codes.
We also show that our unitary encoding circuit for the planar code can be used to encode a Slater determinant state in the compact mapping [Derby et al., 2021], which has a similar stabiliser structure to the surface code.
This encoding circuit is therefore a useful subroutine for the simulation of fermionic systems on quantum computers, and it may be that similar techniques can be used to encode fermionic states in the Verstraete-Cirac transform, which has a similar stabiliser structure [Verstraete and Cirac, 2005].
Using known local unitary mappings from one or more copies of the surface code, our results also imply the existence of optimal encoders for any 2D translationally invariant topological code, some 2D subsystem codes [Yoshida, 2011, Bombin et al., 2012], as well as the 2D color code with and without boundaries [Kubica et al., 2015]. As an explicit example, the subsystem surface code with three-qubit check operators can be encoded from the toric code using the four time step quantum circuit given in Ref. [Bravyi et al., 2012].
The circuits we have provided in this work are not fault-tolerant for use in error correction: a single qubit fault at the beginning of the circuit can lead to a logical error on the encoded qubit.
Nevertheless, since our circuits have a lower depth than local unitary circuits given in prior work, we expect our circuits also to be more resilient to circuit noise (for example, our circuits have fewer locations for an idle qubit error to occur).
Fault-tolerance of the encoding circuit itself is also not required when using it to prepare fermionic states or to study topological quantum order: for these applications, our circuits could be implemented using either physical qubits (on a NISQ device) or logical qubits on a fault-tolerant quantum computer.
It would be interesting to investigate if our circuits could be adapted to be made fault-tolerant, perhaps for the preparation of a known state (e.g. logical $\ket{0}$ or $\ket{+}$).
Further work could also investigate optimal local unitary encoding circuits for surface codes based on different lattice geometries (such as the hexagonal lattice [Fujii and Tokunaga, 2012]), or for punctured [Raussendorf and Harrington, 2007, Fowler et al., 2009] or hyperbolic surface codes [Breuckmann and Terhal, 2016].
The authors would like to thank Mike Vasmer for informing us of the method for encoding stabiliser codes in Ref. [Gottesman, 1997], as well as Charlie Derby and Joel Klassen for insightful discussions on fermion-to-qubit mappings. We are also grateful for helpful discussions with Austin Fowler and Benjamin Brown, and would like to thank Selwyn Simsek and Adam Callison for pointing out a formatting error in an earlier version of this manuscript. We thank Engineering and Physical Sciences Research Council (EPSRC) for funding this work. Dan Browne and Simon Burton were funded by EPSRC grant EP/R043647/1 and the remaining authors by EPSRC grant number EP/L015242/1. In addition, Farhan Hanif and James Dborin gratefully acknowledge funding from University College London.
Note added: After the first preprint of this article, Ref. [Satzinger et al., 2021] introduced an alternative $O(L)$ unitary encoding circuit for the rotated surface code, using it to experimentally realise topological quantum order.
[Aguado and Vidal, 2008]
Miguel Aguado and Guifré Vidal. Entanglement renormalization and topological order. Phys. Rev. Lett., 100: 070404, Feb 2008. URL <https://link.aps.org/doi/10.1103/PhysRevLett.100.070404>.
[Aharonov and Touati, 2018]
Dorit Aharonov and Yonathan Touati. Quantum circuit depth lower bounds for homological codes. arXiv preprint arXiv:1810.03912, 2018.
[Ataides et al., 2021]
J Pablo Bonilla Ataides, David K Tuckett, Stephen D Bartlett, Steven T Flammia, and Benjamin J Brown. The XZZX surface code. Nature Communications, 12 (1): 1–12, 2021.
[Beals et al., 2013]
Robert Beals, Stephen Brierley, Oliver Gray, Aram W Harrow, Samuel Kutin, Noah Linden, Dan Shepherd, and Mark Stather. Efficient distributed quantum computing. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 469 (2153): 20120686, 2013.
[Bombin and Martin-Delgado, 2007]
H Bombin and Miguel A Martin-Delgado. Optimal resources for topological two-dimensional stabilizer codes: Comparative study. Physical Review A, 76 (1): 012305, 2007.
[Bombin et al., 2012]
Hector Bombin, Guillaume Duclos-Cianci, and David Poulin. Universal topological phase of two-dimensional stabilizer codes. New Journal of Physics, 14 (7): 073048, 2012.
[Bravyi et al., 2006]
Sergey Bravyi, Matthew B Hastings, and Frank Verstraete. Lieb-Robinson bounds and the generation of correlations and topological quantum order. Physical Review Letters, 97 (5): 050401, 2006.
[Bravyi et al., 2012]
Sergey Bravyi, Guillaume Duclos-Cianci, David Poulin, and Martin Suchara. Subsystem surface codes with three-qubit check operators. arXiv preprint arXiv:1207.1443, 2012.
[Bravyi and Kitaev, 2002]
Sergey B. Bravyi and Alexei Yu. Kitaev. Fermionic quantum computation. Annals of Physics, 298 (1): 210–226, 2002. ISSN 0003-4916.
[Breuckmann and Terhal, 2016]
Nikolas P Breuckmann and Barbara M Terhal. Constructions and noise threshold of hyperbolic surface codes. IEEE Transactions on Information Theory, 62 (6): 3731–3744, 2016.
[Brierley, 2015]
Stephen Brierley. Efficient implementation of quantum circuits with limited qubit interactions. arXiv preprint arXiv:1507.04263, 2015.
[Brown et al., 2011]
Benjamin J Brown, Wonmin Son, Christina V Kraus, Rosario Fazio, and Vlatko Vedral. Generating topological order from a two-dimensional cluster state using a duality mapping. New Journal of Physics, 13 (6): 065010, 2011.
[Cheung et al., 2007]
Donny Cheung, Dmitri Maslov, and Simone Severini. Translation techniques between quantum circuit architectures. In Workshop on Quantum Information Processing, 2007.
[Chiaverini et al., 2004]
John Chiaverini, Dietrich Leibfried, Tobias Schaetz, Murray D Barrett, RB Blakestad, J Britton, Wayne M Itano, John D Jost, Emanuel Knill, Christopher Langer, et al. Realization of quantum error correction. Nature, 432 (7017): 602, 2004.
[Cleve and Gottesman, 1997]
Richard Cleve and Daniel Gottesman. Efficient computations of encodings for quantum error correction. Physical Review A, 56 (1): 76–82, Jul 1997. ISSN 1094-1622. URL <http://dx.doi.org/10.1103/PhysRevA.56.76>.
[Cramer et al., 2016]
Julia Cramer, Norbert Kalb, M Adriaan Rol, Bas Hensen, Machiel S Blok, Matthew Markham, Daniel J Twitchen, Ronald Hanson, and Tim H Taminiau. Repeated quantum error correction on a continuously encoded qubit by real-time feedback. Nature Communications, 7: 11526, 2016.
[Delfosse and Zémor, 2020]
Nicolas Delfosse and Gilles Zémor. Linear-time maximum likelihood decoding of surface codes over the quantum erasure channel. Physical Review Research, 2 (3): 033042, 2020.
[Dennis et al., 2002]
Eric Dennis, Alexei Kitaev, Andrew Landahl, and John Preskill. Topological quantum memory. Journal of Mathematical Physics, 43 (9): 4452–4505, 2002.
[Derby et al., 2021]
Charles Derby, Joel Klassen, Johannes Bausch, and Toby Cubitt. Compact fermion to qubit mappings. Phys. Rev. B, 104: 035118, Jul 2021. URL <https://link.aps.org/doi/10.1103/PhysRevB.104.035118>.
[Fowler et al., 2009]
Austin G. Fowler, Ashley M. Stephens, and Peter Groszkowski. High-threshold universal quantum computation on the surface code. Phys. Rev. A, 80: 052312, Nov 2009. URL <https://link.aps.org/doi/10.1103/PhysRevA.80.052312>.
[Fujii and Tokunaga, 2012]
Keisuke Fujii and Yuuki Tokunaga. Error and loss tolerances of surface codes with general lattice structures. Phys. Rev. A, 86: 020303, Aug 2012. URL <https://link.aps.org/doi/10.1103/PhysRevA.86.020303>.
[Gong et al., 2021]
Ming Gong, Xiao Yuan, Shiyu Wang, Yulin Wu, Youwei Zhao, Chen Zha, Shaowei Li, Zhen Zhang, Qi Zhao, Yunchao Liu, Futian Liang, Jin Lin, Yu Xu, Hui Deng, Hao Rong, He Lu, Simon C Benjamin, Cheng-Zhi Peng, Xiongfeng Ma, Yu-Ao Chen, Xiaobo Zhu, and Jian-Wei Pan. Experimental exploration of five-qubit quantum error correcting code with superconducting qubits. National Science Review, 01 2021. ISSN 2095-5138. URL <https://doi.org/10.1093/nsr/nwab011>.
[Gottesman, 1997]
Daniel Gottesman. Stabilizer codes and quantum error correction. arXiv preprint quant-ph/9705052, 1997.
[Hamma and Lidar, 2008]
Alioscia Hamma and Daniel A Lidar. Adiabatic preparation of topological order. Physical Review Letters, 100 (3): 030502, 2008.
[Havlíček et al., 2017]
Vojtěch Havlíček, Matthias Troyer, and James D. Whitfield. Operator locality in the quantum simulation of fermionic models. Phys. Rev. A, 95: 032332, Mar 2017. URL <https://link.aps.org/doi/10.1103/PhysRevA.95.032332>.
[Horsman et al., 2012]
Clare Horsman, Austin G Fowler, Simon Devitt, and Rodney Van Meter. Surface code quantum computing by lattice surgery. New Journal of Physics, 14 (12): 123011, Dec 2012. URL <https://doi.org/10.1088/1367-2630/14/12/123011>.
[Jiang et al., 2019]
Zhang Jiang, Jarrod McClean, Ryan Babbush, and Hartmut Neven. Majorana loop stabilizer codes for error mitigation in fermionic quantum simulations. Phys. Rev. Applied, 12: 064041, Dec 2019. URL <https://link.aps.org/doi/10.1103/PhysRevApplied.12.064041>.
[Kelly et al., 2015]
Julian Kelly, Rami Barends, Austin G Fowler, Anthony Megrant, Evan Jeffrey, Theodore C White, Daniel Sank, Josh Y Mutus, Brooks Campbell, Yu Chen, et al. State preservation by repetitive error detection in a superconducting quantum circuit. Nature, 519 (7541): 66, 2015.
[Kitaev, 2003]
A Yu Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303 (1): 2–30, 2003.
[König and Pastawski, 2014]
Robert König and Fernando Pastawski. Generating topological order: No speedup by dissipation. Phys. Rev. B, 90: 045101, Jul 2014. URL <https://link.aps.org/doi/10.1103/PhysRevB.90.045101>.
[Kubica et al., 2015]
Aleksander Kubica, Beni Yoshida, and Fernando Pastawski. Unfolding the color code. New Journal of Physics, 17 (8): 083026, 2015.
[Li, 2015]
Ying Li. A magic state’s fidelity can be superior to the operations that created it. New Journal of Physics, 17 (2): 023037, 2015.
[Liao and Feder, 2021]
Pengcheng Liao and David L Feder. Quantum circuit for toric code state preparation via graph states. arXiv preprint arXiv:2103.12268, 2021.
[Linke et al., 2017]
Norbert M Linke, Mauricio Gutierrez, Kevin A Landsman, Caroline Figgatt, Shantanu Debnath, Kenneth R Brown, and Christopher Monroe. Fault-tolerant quantum error detection. Science Advances, 3 (10): e1701074, 2017.
[Łodyga et al., 2015]
Justyna Łodyga, Paweł Mazurek, Andrzej Grudka, and Michał Horodecki. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes. Scientific Reports, 5: 8975, 2015.
[Lu et al., 2008]
Chao-Yang Lu, Wei-Bo Gao, Jin Zhang, Xiao-Qi Zhou, Tao Yang, and Jian-Wei Pan. Experimental quantum coding against qubit loss error. Proceedings of the National Academy of Sciences, 105 (32): 11050–11054, 2008.
[Maslov, 2007]
Dmitri Maslov. Linear depth stabilizer and quantum Fourier transformation circuits with no auxiliary qubits in finite-neighbor quantum architectures. Physical Review A, 76 (5): 052310, Nov 2007. ISSN 1094-1622. URL <http://dx.doi.org/10.1103/PhysRevA.76.052310>.
[Nigg et al., 2014]
D. Nigg, M. Müller, E. A. Martinez, P. Schindler, M. Hennrich, T. Monz, M. A. Martin-Delgado, and R. Blatt. Quantum computations on a topologically encoded qubit. Science, 345 (6194): 302–305, 2014. ISSN 0036-8075. URL <https://science.sciencemag.org/content/345/6194/302>.
[Ofek et al., 2016]
Nissim Ofek, Andrei Petrenko, Reinier Heeres, Philip Reinhold, Zaki Leghtas, Brian Vlastakis, Yehan Liu, Luigi Frunzio, SM Girvin, Liang Jiang, et al. Extending the lifetime of a quantum bit with error correction in superconducting circuits. Nature, 536 (7617): 441–445, 2016.
[Preskill, 2018]
John Preskill. Quantum Computing in the NISQ era and beyond. Quantum, 2: 79, August 2018. ISSN 2521-327X. URL <https://doi.org/10.22331/q-2018-08-06-79>.
[Raussendorf and Harrington, 2007]
Robert Raussendorf and Jim Harrington. Fault-tolerant quantum computation with high threshold in two dimensions. Phys. Rev. Lett., 98: 190504, May 2007. URL <https://link.aps.org/doi/10.1103/PhysRevLett.98.190504>.
[Roffe et al., 2018]
Joschka Roffe, David Headley, Nicholas Chancellor, Dominic Horsman, and Viv Kendon. Protecting quantum memories using coherent parity check codes. Quantum Science and Technology, 3 (3): 035010, Jun 2018. URL <https://doi.org/10.1088/2058-9565/aac64e>.
[Satzinger et al., 2021]
KJ Satzinger, Y Liu, A Smith, C Knapp, M Newman, C Jones, Z Chen, C Quintana, X Mi, A Dunsworth, et al. Realizing topologically ordered states on a quantum processor. arXiv preprint arXiv:2104.01180, 2021.
[Schindler et al., 2011]
Philipp Schindler, Julio T. Barreiro, Thomas Monz, Volckmar Nebendahl, Daniel Nigg, Michael Chwalla, Markus Hennrich, and Rainer Blatt. Experimental repetitive quantum error correction. Science, 332 (6033): 1059–1061, 2011. ISSN 0036-8075. URL <https://science.sciencemag.org/content/332/6033/1059>.
[Seeley et al., 2012]
Jacob T. Seeley, Martin J. Richard, and Peter J. Love. The Bravyi-Kitaev transformation for quantum computation of electronic structure. The Journal of Chemical Physics, 137 (22): 224109, 2012.
[Setia et al., 2019]
Kanav Setia, Sergey Bravyi, Antonio Mezzacapo, and James D. Whitfield. Superfast encodings for fermionic quantum simulation. Phys. Rev. Research, 1: 033033, Oct 2019. URL <https://link.aps.org/doi/10.1103/PhysRevResearch.1.033033>.
[Steudtner and Wehner, 2019]
Mark Steudtner and Stephanie Wehner. Quantum codes for quantum simulation of fermions on a square lattice of qubits. Phys. Rev. A, 99: 022308, Feb 2019. URL <https://link.aps.org/doi/10.1103/PhysRevA.99.022308>.
[Taminiau et al., 2014]
Tim Hugo Taminiau, Julia Cramer, Toeno van der Sar, Viatcheslav V Dobrovitski, and Ronald Hanson. Universal control and error correction in multi-qubit spin registers in diamond. Nature Nanotechnology, 9 (3): 171, 2014.
[Verstraete and Cirac, 2005]
Frank Verstraete and J Ignacio Cirac. Mapping local Hamiltonians of fermions to local Hamiltonians of spins. Journal of Statistical Mechanics: Theory and Experiment, 2005 (09): P09012, 2005.
[Vuillot, 2017]
Christophe Vuillot. Is error detection helpful on IBM 5Q chips? arXiv preprint arXiv:1705.08957, 2017.
[Waldherr et al., 2014]
Gerald Waldherr, Y Wang, S Zaiser, M Jamali, T Schulte-Herbrüggen, H Abe, T Ohshima, J Isoya, JF Du, P Neumann, et al. Quantum error correction in a solid-state hybrid spin register. Nature, 506 (7487): 204, 2014.
[Wen, 2003]
Xiao-Gang Wen. Quantum orders in an exact soluble model. Phys. Rev. Lett., 90: 016803, Jan 2003. URL <https://link.aps.org/doi/10.1103/PhysRevLett.90.016803>.
[Whitfield et al., 2016]
James D. Whitfield, Vojtěch Havlíček, and Matthias Troyer. Local spin operators for fermion simulations. Phys. Rev. A, 94: 030301, Sep 2016. URL <https://link.aps.org/doi/10.1103/PhysRevA.94.030301>.
[Wu et al., 2019]
Bujiao Wu, Xiaoyu He, Shuai Yang, Lifu Shou, Guojing Tian, Jialin Zhang, and Xiaoming Sun. Optimization of CNOT circuits on topological superconducting processors. arXiv preprint arXiv:1910.14478, 2019.
[Xu et al., 2021]
Xiaosi Xu, Simon C. Benjamin, and Xiao Yuan. Variational circuit compiler for quantum error correction. Phys. Rev. Applied, 15: 034068, Mar 2021. URL <https://link.aps.org/doi/10.1103/PhysRevApplied.15.034068>.
[Yoshida, 2011]
Beni Yoshida. Classification of quantum phases and topology of logical operators in an exactly solved model of quantum codes. Annals of Physics, 326 (1): 15–95, 2011. ISSN 0003-4916. January 2011 Special Issue.
§ PROCEDURE FOR ENCODING A STABILISER CODE
§.§ Review of the General Method
In this section we review the general method for constructing an encoding circuit for arbitrary stabiliser codes given in [Gottesman, 1997, Cleve and Gottesman, 1997], and show how it can be used to find an encoding circuit for an $L=2$ toric code as an example.
We present the method here for completeness, giving the procedure in full and in the simplified case for which the code is CSS.
From a set of check operators one can produce a corresponding bimatrix
\[
\begin{pmatrix}[c|c]
L & R\\
\end{pmatrix}
\]
Rows and columns represent check operators and qubits respectively. $L_{ij} = 1$ indicates that check operator $i$ applies $X$ to qubit $j$ (as opposed to the identity); similarly, on the right-hand side, $R_{ij} = 1$ implies check operator $i$ applies $Z$ to qubit $j$. If both $L_{ij} = 1$ and $R_{ij} = 1$, then check operator $i$ applies $Y$ on qubit $j$.
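As an illustration, a minimal sketch (hypothetical helper, not from the original text) that builds this bimatrix from a list of Pauli strings:
\begin{verbatim}
import numpy as np

def bimatrix(checks):
    """Binary symplectic (L|R) representation of a list of Pauli strings.
    L[i,j]=1 iff check i has X or Y on qubit j; R[i,j]=1 iff it has Z or Y."""
    L = np.array([[p in "XY" for p in row] for row in checks], dtype=int)
    R = np.array([[p in "ZY" for p in row] for row in checks], dtype=int)
    return np.hstack([L, R])

# e.g. one X-type and one Z-type check on three qubits
print(bimatrix(["XXI", "IZZ"]))
\end{verbatim}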
A CSS code has check operators $P_n \in \{I,X\}^{\otimes n } \cup \{I,Z\}^{\otimes n }$; its corresponding bimatrix takes the form
\[
\begin{pmatrix}[c|c]
A & 0\\
0 & B
\end{pmatrix}
\]
$A$ and $B$ have full row rank since they each represent an independent subset of the check operators. Labelling the rank of $A$ as $r$, the rank of $B$ is $n-k-r$.
Via row addition, row swaps and column swaps, the left and right matrices of this simplified form can be taken to standard form [Gottesman, 1997] without changing the stabiliser group of the code. The standard form of the bimatrix is then
\[
\begin{pmatrix}[ccc|ccc]
I & A_1 & A_2 & B & C_1 & C_2\\
0 & 0 & 0 & D & I & E
\end{pmatrix}
\]
where $I$, $A_1$ and $A_2$ (and likewise $D$, $I$ and $E$) have $r$, $n-k-r$ and $k$ columns respectively. We may also represent the set of logical $X$ operators as a bimatrix, with each row representing the logical $X$ for a particular encoded qubit,
\[ \bar{X} =
\begin{pmatrix}[ccc|ccc]
U_1 & U_2 & U_3 & V_1 & V_2 & V_3\\
\end{pmatrix}
\]
It is shown in [Gottesman, 1997] that the logical $\bar{X}$ operator can be taken to the form
\[ \bar{X} =
\begin{pmatrix}[ccc|ccc]
0 & U_2 & I & V_1 & 0 & 0\\
\end{pmatrix}
\]
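Bringing a bimatrix to standard form is Gaussian elimination over GF(2); a minimal sketch of the row-reduction step (a hypothetical helper; the column swaps, also needed for the full standard form, must additionally be tracked so that qubits can be relabelled afterwards):
\begin{verbatim}
import numpy as np

def rref_gf2(M):
    """Row-reduce a binary matrix over GF(2) using row swaps and additions."""
    M = (M % 2).astype(int)
    r = 0
    for c in range(M.shape[1]):
        pivots = np.nonzero(M[r:, c])[0]
        if len(pivots) == 0:
            continue
        M[[r, r + pivots[0]]] = M[[r + pivots[0], r]]   # row swap
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                            # row addition mod 2
        r += 1
        if r == M.shape[0]:
            break
    return M
\end{verbatim}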
In the CSS case the check operator bimatrix reduces to
\[
\begin{pmatrix}[ccc|ccc]
I & A_1 & A_2 & 0 & 0 & 0\\
0 & 0 & 0 & D & I & E
\end{pmatrix}
\]
and the logical $X$ bimatrix to
\[ \bar{X} =
\begin{pmatrix}[ccc|ccc]
0 & E^T & I & 0 & 0 & 0\\
\end{pmatrix}
\]
To produce a circuit which can encode state $\ket{c_1 \dots c_k}$ for any values of the $c_i$ one should find a circuit which applies logical operators $\bar{X}_1^{c_1} \dots \bar{X}_{k}^{c_k}$ to the encoded $\ket{0}$ state $\ket{\bar{0}} \equiv \sum_{S \in \mathcal{S}}S \ket{0 \dots 0 }$.
Let $F_{c}$ be the operator corresponding to row $c$ of bimatrix $F$.
We denote by $F_{c(m)}$ the operator corresponding to $F_c$, with the operator on the $m^{th}$ qubit replaced with identity, and then controlled by the $m^{th}$ qubit.
Since
\[ \bar{X}_1^{c_1} \dots \bar{X}_k^{c_k} \sum_{S \in \mathcal{S}}S \ket{0 \dots 0 } = \sum_{S \in \mathcal{S}}S \bar{X}_1^{c_1} \dots \bar{X}_k^{c_k} \ket{0 \dots 0 }, \]
the application of the $X$ gates can be considered before applying the sum of stabiliser operations. Due to the $I$ in the form of $\bar{X}$,
\[\bar{X}_{k(n)} \ket{0_1 \dots 0_{n-k}}\ket{0_{n-k+1} \dots 0_{n-1} c_k} = \bar{X}^{c_k}_{k}\ket{0_1 \dots 0_n},\]
and so we see that, independently of $\ket{c_1 \dots c_k}$, we can implement
\begin{align*}
\bar{X}_{1(n-k)} \dots \bar{X}_{k(n)} & \ket{0_1 \dots 0_{n-k}} \ket{c_1 \dots c_k} \\
& = \bar{X}^{c_1}_{1} \dots \bar{X}^{c_k}_{k} \ket{0_1 \dots 0_n} \\
& \equiv \ket{0_1 \dots 0_r}\ket{Xc}
\end{align*}
where the last line emphasises that, since $U_1 = 0$, $\bar{X}_{i(j)}$ acts trivially on the first $r$ qubits. We next consider $\sum_{S \in \mathcal{S}}S = (I + M_{n-k}) \dots (I + M_r) \dots (I + M_1)$.
We denote the right matrix of bimatrix $M$ as $R$. In standard form $M_i$ always performs $X$ on qubit $i$ and it performs $Z$ on qubit $i$ when $R_{ii} = 1$, giving
\begin{align*}
M_{i}\ket{0 \dots 0_{i} \dots 0} = Z_{i}^{R_{ii}}M_{i(i)}\ket{0 \dots 1_{i} \dots 0}
\end{align*}
and so
\begin{align*}
(I+M_{i})\ket{0 \dots 0} & = \ket{0 \dots 0_{i} \dots 0} + M_{i}\ket{0 \dots 0_{i} \dots 0} \\ & = Z_{i}^{R_{ii}}M_{i(i)}H_{i}\ket{0 \dots 0}
\end{align*}
or generally
\begin{align*}
& \prod_{i = 1}^{r}(I + M_{i}) (\ket{0_1 \dots 0_r}\ket{Xc}) \\
= & \prod_{i = 1}^{r} Z_{i}^{R_{ii}}M_{i(i)}H_{i} (\ket{0_1 \dots 0_r}\ket{Xc})
\end{align*}
The remaining products
\begin{equation}
\prod_{i = r+1}^{n-k}(I + M_{i})
\end{equation}
can be ignored since they consist only of $Z$ operations and may be commuted to the front to act on $\ket{0}$ states.
Given initially some $k$ qubits we wish to encode, and some additional $n-k$ auxiliary qubits, initialised in $\ket{0}$, a choice of generators for the stabiliser group is
\[
\begin{pmatrix}[cccccccc|cccccccc]
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
\end{pmatrix}
\]
The general circuit which transforms the initial generator set to the standard form bimatrix is given by,
\[
\prod_{i = 1}^{r}Z_{i}^{R_{ii}}M_{i(i)}H_{i} \prod_{j=1}^{k}\bar{X}_{j(n-k+j)}
\]
For CSS codes this reduces to
\[
\prod_{i = 1}^{r}M_{i(i)}H_{i} \prod_{j=1}^{k}\bar{X}_{j(n-k+j)}
\]
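A sketch of how the CSS-case formula above translates into a gate list (helper names are ours, not from the original; qubits $n-k,\ldots,n-1$ hold the input state and the first $r$ qubits receive the Hadamards):
\begin{verbatim}
import numpy as np

def gottesman_css_circuit(M_x, Xbar_x, r, n, k):
    """Gate list for the CSS case of the encoder (a sketch).
    M_x: X-parts of the first r standard-form generators (r x n over GF(2)).
    Xbar_x: X-parts of the k logical X operators (k x n over GF(2))."""
    gates = [("H", i) for i in range(r)]          # initial Hadamards
    for j in range(k):                            # controlled logical-X gates
        ctrl = n - k + j
        gates += [("CNOT", ctrl, int(t))
                  for t in np.flatnonzero(Xbar_x[j]) if t != ctrl]
    for i in range(r):                            # controlled gates M_{i(i)}
        gates += [("CNOT", i, int(t))
                  for t in np.flatnonzero(M_x[i]) if t != i]
    return gates
\end{verbatim}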
In the simplified case, all gates are either initial $H$ gates or CNOTs. We may write the circuit in two stages, performing first the $H$ gates and controlled $\bar{X}$ gates:
\begin{equation*}
\Qcircuit @C=1em @R=.7em {
& \gate{H} & \qw & \qw & \qw & \qw & \qw & \qw & \qw\\
& \cdots & & & & & & & \\
& \gate{H} & \qw & \qw & \qw & \qw & \qw & \qw & \qw\\
& \qw & \multigate{2}{{E}_{1(1)}} & \qw & \qw & \qw & \multigate{2}{{E}_{k(k)}} & \qw & \qw \\
& \cdots & \nghost{{E}_{1(1)}} & \cdots & & \cdots & \nghost{{E}_{k(k)}} & \cdots & \\
& \qw & \ghost{{E}_{1(1)}} & \qw & \qw & \qw & \ghost{{E}_{k(k)}} & \qw & \qw\\
& \qw & \qw & \qw & \cdots& & \ctrl{-1} & \qw & \qw\\
& \cdots & & & & & & & \\
& \qw & \ctrl{-3} & \qw & \cdots & & \qw & \qw & \qw \\
}
\end{equation*}
and in stage 2 the controlled $X$ gates.
\begin{equation*}
\Qcircuit @C=1em @R=.7em {
& \qw & \qw & \ctrl{3} & \qw & \cdots & & \qw & \qw\\
& \cdots & & & & & & & \\
& \qw & \qw & \qw & \qw & \cdots & & \ctrl{1} & \qw\\
& \qw & \qw & \multigate{5}{M_{1(1)}} & \qw & \cdots & & \multigate{5}{M_{r(r)}} & \qw \\
& \cdots & & \nghost{M_{1(1)}} & & \cdots & & & \\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw\\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw\\
& \cdots & & \nghost{M_{1(1)}} & & & & \nghost{M_{r(r)}} & \\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw \\
}
\end{equation*}
In the general case stage 1 is identical but stage 2 takes the form
\begin{equation*}
\Qcircuit @C=1em @R=.7em {
& \qw & \multigate{2}{\Omega_{z}} & \ctrl{3} & \qw & \cdots & & \multigate{1}{M_{r(r)}} & \\
& \cdots & \nghost{\Omega_{z}} & \qw & \qw & & & \ghost{M_{r(r)}} & \\
& \qw & \ghost{\Omega_{z}} & \qw & \qw & \cdots & & \ctrl{1} \qwx & \qw\\
& \qw & \qw & \multigate{5}{M_{1(1)}} & \qw & \cdots & & \multigate{5}{M_{r(r)}} & \qw \\
& \cdots & & \nghost{M_{1(1)}} & & \cdots & & & \\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw\\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw\\
& \cdots & & \nghost{M_{1(1)}} & & & & \nghost{M_{r(r)}} & \\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw \\
}
\end{equation*}
where $\Omega_{z}$ consists of $Z$ operations on some of the first $r$ qubits, and each $M_{i(i)}$ consists of controlled-$Z$ gates on some of the first $r$ qubits and controlled Pauli gates on some of the following $n-r$ qubits. In the case of the $L=2$ toric code, with qubits labelled left to right and top to bottom, the bimatrix is
\[
\begin{pmatrix}[cccccccc|cccccccc]
1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1\\
\end{pmatrix}
\]
The standard form of this bimatrix is
\[
\begin{pmatrix}[cccccccc|cccccccc]
1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0\\
\end{pmatrix}
\]
The circuit which encodes the above stabiliser set is
\begin{equation*}
\Qcircuit @C=1em @R=.7em {
& \gate{H} & \qw & \ctrl{3}& \ctrl{5} & \ctrl{7} & \qw &\qw &\qw &\qw &\qw &\qw \\
& \gate{H} & \qw & \qw & \qw & \qw &\ctrl{3} &\ctrl{4}&\ctrl{6} &\qw &\qw&\qw \\
& \gate{H} & \qw & \qw & \qw& \qw & \qw &\qw &\qw &\ctrl{3} &\ctrl{4}&\ctrl{5} \\
& \qw & \targ & \targ & \qw & \qw &\qw &\qw &\qw &\qw &\qw&\qw \\
& \targ & \qw & \qw & \qw &\qw &\targ &\qw &\qw &\qw &\qw&\qw \\
& \qw & \qw & \qw & \targ &\qw &\qw &\targ &\qw &\targ &\qw&\qw \\
& \ctrl{-2} & \qw & \qw & \qw &\qw &\qw &\qw &\qw &\qw &\targ&\qw \\
& \qw & \ctrl{-4} & \qw & \qw &\targ &\qw &\qw &\targ &\qw &\qw&\targ }
\end{equation*}
It is important to have kept track of which column represents which qubit, since column swaps are performed in bringing the matrix to standard form. Taking this into account gives the $L=2$ circuit on the toric architecture.
§.§ Depth of the General Method
Any stabiliser circuit has an equivalent skeleton circuit [Maslov, 2007] (a circuit containing only generic two-qubit gates, with single-qubit gates ignored), which after routing on a surface architecture will have at worst $O(n)$ depth. The output of the general method for encoding a stabiliser code in fact already splits into layers of skeleton circuits. Stage 2 of the method applied to a CSS code has at worst $r(n-r)$ controlled Pauli gates $CP_{ij}$, with $i$, $j$ in $\{1 \dots r\}$ and $\{r+1 \dots n\}$ respectively; $CP_{ij}$ is implemented before $CP_{i'j'}$ so long as $i<i'$. Stage 2 then takes the form of a skeleton circuit, and as such the number of time steps needed is $O(n)$ for surface or linear nearest-neighbour architectures. Stage 1 has at most $k(n-k-r)$ gates and also takes the form of a skeleton circuit. In the worst case, stage 2 includes, in addition to the $CP$ gates, controlled-$Z$ gates $CZ$ with targets on the first $r$ qubits. As noted in the errata for [Gottesman, 1997], $i>j$ for any of the additional $CZ_{ij}$ in stage 2. All $CZ_{ij}$ can then be commuted to time steps following all $CP$ gates, since each $CP$ in a time step following $CZ_{ij}$ takes the form $CP_{mn}$ with $n>m>i>j$. The circuit then splits into a layer of $CP$ gates and a layer of $CZ$ gates, each of which is a skeleton circuit, and so can be implemented in $O(n)$ time steps on surface and linear nearest-neighbour architectures.
§ ADDITIONAL PLANAR ENCODING CIRCUITS
§.§ Planar base cases and rectangular code
In <Ref> we provide encoding circuits for the $L=2$, $L=3$ and $L=4$ planar codes, requiring 4, 6 and 8 time steps respectively. These encoding circuits are used as base cases for the planar encoding circuits described in <Ref>. In <Ref> we provide encoding circuits that either increase the width or height of a planar code by two, using three time steps.
Encoding circuits for the $L=2$, $L=3$ and $L=4$ planar codes. Each edge corresponds to a qubit, each arrow denotes a CNOT gate pointing from control to target, and each filled black circle denotes a Hadamard gate applied at the beginning of the circuit. The colour of each CNOT gate corresponds to the time step it is implemented in, with blue, green, red, black, cyan and yellow CNOT gates corresponding to the first, second, third, fourth, fifth and sixth time steps respectively. The hollow circle in each of (a) and (b) denotes the initial unencoded qubit. The circuit in (c) encodes an $L=4$ planar code from an $L=2$ planar code, with solid edges denoting qubits initially encoded in the $L=2$ code.
§.§ Rotated Surface Code
In <Ref> we demonstrate a circuit that encodes an $L=7$ rotated surface code from a distance $L=5$ rotated code. For a given distance $L$, the rotated surface code uses fewer physical qubits than the standard surface code to encode a logical qubit [Bombin and Martin-Delgado, 2007]. Considering a standard square lattice with qubits along the edges, a rotated code can be produced by removing qubits along the corners of the lattice boundary, leaving a diamond of qubits from the centre of the original lattice. The diagram in <Ref> shows the resultant code, rotated $45^{\circ}$ compared to the original planar code, and with each qubit now denoted by a vertex rather than an edge. For a distance $L$ code the rotated surface code requires $L^2$ qubits compared to $L^2 + (L-1)^2$ for the planar code.
The encoding circuit in <Ref> takes 4 steps to grow a rotated code from a distance $L=5$ to $L=7$. This is a fixed cost for any distance $L$ to $L+2$. To produce a distance $L=2m$ code this circuit would be applied repeatedly $m+O(1)$ times to an $L=2$ or $L=3$ base case, requiring a circuit of total depth $2L + O(1)$.
The circuit in <Ref> can be verified by using <Ref> to see that a set of generators for the $L=5$ rotated code (along with the single qubit $Z$ and $X$ stabilisers of the ancillas) is mapped to a set of generators of the $L=7$ rotated code, as well as seeing that the $X$ and $Z$ logicals of the $L=5$ code map to the $X$ and $Z$ logicals of the $L=7$ rotated code.
(a) Circuit to increase the width of a planar code by two. (b) Circuit to increase the height of a planar code by two. Notation is the same as in <Ref>.
Encoding circuit for the $L=7$ rotated code from an $L=5$ rotated surface code (shown as a red outline). The colour of each arrow denotes the time step the gate is applied in. The gates are applied in the order: blue, red, black, purple.
The additional qubits are initialised in the $\ket{+}$ (red) or $\ket{0}$ (green) state.
The yellow squares denote a $Z$ stabiliser on the four corner qubits, and the brown squares represent an $X$ operator on the four corner qubits.
The rotated code has additional two-qubit stabilisers between qubits along the boundary. In the $L=5$ code these are shown as a red arch (with $Z$ and $X$ stabilisers on the vertical and horizontal edges respectively), and the yellow and brown arches on the boundary of the $L=7$ code denote $Z$ and $X$ stabilisers between the two adjacent edge qubits.
§ RENORMALISATION GROUP ENCODER
§.§ Toric Code Encoder
Applying the Gottesman encoder to the toric code, as shown in Appendix <ref>, and then enforcing locality using SWAP gates, gives the following encoding circuit for the $L=2$ toric code that requires 10 time steps:
\begin{equation*}
\Qcircuit @C=0.8em @R=.7em {
\lstick{\ket{0}}& \gate{H} & \qw & \qw& \ctrl{1} & \qw & \ctrl{4} &\ctrl{3} &\targ &\ctrl{3} &\qw &\ctrl{3}&\targ&\ctrl{3}&\qw \\
\lstick{\ket{0}}& \qw & \qw & \targ & \targ & \qw &\qw &\qw&\qw &\qw &\qw&\qw&\qw&\qw&\qw \\
\lstick{\ket{\psi_0}}& \qw & \ctrl{1} & \qw & \qw& \targ & \qw &\qw &\qw &\qw &\qw&\qw&\qw&\qw&\qw \\
\lstick{\ket{0}}& \qw & \targ & \qw & \targ & \qw &\qw &\targ &\ctrl{-3} &\targ &\ctrl{2}&\targ&\ctrl{-3}&\targ&\qw \\
\lstick{\ket{0}}& \qw & \qw & \qw & \qw &\qw &\targ &\qw &\qw &\qw &\qw&\targ&\targ&\qw&\qw \\
\lstick{\ket{\psi_1}}& \qw & \qw & \ctrl{-4} & \qw &\qw &\targ &\qw &\qw &\qw &\targ&\qw&\qw&\targ&\qw \\
\lstick{\ket{0}}& \gate{H} & \qw & \qw & \qw &\ctrl{-4} &\qw &\qw &\qw &\qw &\qw&\qw&\ctrl{-2}&\ctrl{-1}&\qw\\
\lstick{\ket{0}}& \gate{H} & \qw & \qw & \ctrl{-4} &\qw &\ctrl{-2} &\qw &\qw &\qw &\qw&\ctrl{-3}&\qw&\qw&\qw }
\end{equation*}
where the qubits are numbered $0\ldots 7$ from top to bottom. This circuit encodes the initial unknown qubit states $\ket{\psi_0}$ and $\ket{\psi_1}$ into logical states $\ket{\bar{\psi_0}}$ and $\ket{\bar{\psi_1}}$ of an $L=2$ toric code with stabiliser group generators $X_0X_1X_2X_6$, $X_0X_1X_3X_7$, $X_2X_4X_5X_6$, $Z_0Z_2Z_3Z_4$, $Z_1Z_2Z_3Z_5$ and $Z_0Z_4Z_6Z_7$.
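As a quick sanity check of these generators (a sketch, not from the original text), X-type and Z-type Pauli operators commute iff their supports overlap on an even number of qubits:
\begin{verbatim}
x_gens = [{0, 1, 2, 6}, {0, 1, 3, 7}, {2, 4, 5, 6}]  # X-type supports
z_gens = [{0, 2, 3, 4}, {1, 2, 3, 5}, {0, 4, 6, 7}]  # Z-type supports

# symplectic inner product over GF(2): even overlap <=> commuting
for xs in x_gens:
    for zs in z_gens:
        assert len(xs & zs) % 2 == 0
print("all six generators mutually commute")
\end{verbatim}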
Equipped with an $L=2$ base code emulated as the central core of a $4\times 4$ planar grid, where the surrounding qubits are initially decoupled $+1$ $Z$ eigenstates, one can apply the local routing methods of Appendix <ref> to obtain the initial configuration as depicted in <Ref>. The ancilla qubits are then initialised as $\ket{0}$ or $\ket{+}$ eigenstates as depicted in <Ref>(a) by means of Hadamard operations where necessary, before the circuit is implemented through the sequence of CNOT gates as depicted in <Ref>(a)-(c).
By recursive application of <Ref>, it is seen that the circuit forms the stabiliser structure of an $L=4$ toric code on the planar architecture. Proceeding inductively, one can exploit the symmetry of a distance $L=2^k$ toric code to embed it in the centre of a $2L\times 2L$ planar grid, “spread out” the core qubits in time linear in the distance, and ultimately perform the $L=2 \mapsto L=4$ circuit on each $4\times 4$ square-tessellated sub-grid.
Initial outwards spreading of qubits in a distance 4 toric code to prepare for the encoding of a distance 8 toric code. Solid black and unfilled nodes represent the routed qubits of the distance 4 code and the ancillae, respectively. One then executes the subroutine of <Ref> in each of the four $4\times 4$ quadrants. This procedure generalises inductively for any targeted distance $2^k$ toric code.
§.§ Routing circuits for enforcing locality
To enforce locality in the Renormalisation Group encoder, which encodes a distance $L$ toric code, one can use SWAP gates to “spread out” the qubits between iteration $k$ and $k+1$, such that all of the $O(1)$ time steps in iteration $k+1$ are almost local on a $2^{k+1}\times 2^{k+1}$ region of the $L\times L$ torus. By almost local, we mean that the time step would be local if the $2^{k+1}\times 2^{k+1}$ region had periodic boundary conditions. Since at each iteration (until the final one) we use a region that is a subset of the torus, we in fact have a planar architecture (no periodic boundaries), and so it is not possible to simultaneously enforce locality in all of the $O(1)$ time steps in an iteration $k<\log L-2$ of the RG encoder, which are collectively local on a toric architecture. Thus it is necessary to emulate a toric architecture on a planar one. In a time step in iteration $k$, this can be achieved by using $3(2^{k}-1)$ time steps to move the top and bottom boundaries together (using SWAP gates) before applying any necessary gates which are now local (where the factor of three comes from the decomposition of a SWAP gate into 3 CNOT gates). Then $3(2^{k}-1)$ time steps are required to move the boundaries back to their original positions. The identical procedure can be applied simultaneously to the left and right boundaries. Thus there is an overhead of $3(2^{k+1}-2)$ to emulate a toric architecture with a planar architecture.
Starting from $L=2$ and ending on a size $L$ code gives an overall overhead for emulating the torus of $6\left(\sum_{i=1}^{\log_2(L)-2} (2^{i+1}-2)\right) = 6L-12\log_2 L$, since from <Ref> it can be seen that opposite edges need to be made adjacent twice per iteration to enforce locality. Additionally, the time steps within each iteration must be implemented. Noticing that the red CNOT gates in <Ref>(b) can be applied simultaneously with the gates in <Ref>(a), this can be done in 6 time steps, leading to an additional $6\log_2(L) - 6$ time steps in total in the RG encoder.
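Explicitly, the torus-emulation overhead evaluates as
\begin{equation*}
6\sum_{i=1}^{\log_2(L)-2}\left(2^{i+1}-2\right)=6\left[\left(2^{\log_2 L}-4\right)-2\left(\log_2 L-2\right)\right]=6L-12\log_2 L.
\end{equation*}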
It is key to our routine to be able to “spread out” the qubits between each MERA step. We now show that this can be achieved in linear time by routing qubits through the planar grid. We firstly consider a single step of moving from a $2^k$ to a $2^{k+1}$ sized grid.
Our first observation is that while the qubits lie on the edges of our $2^k \times 2^k$ grid, one can subdivide this grid into one of dimensions $(2^{k+1}+1) \times (2^{k+1}+1)$, such that the qubits lie on corners of this new grid, labelled by their positions $(i,j)$ with the centre of the grid identified with $(0,0)$. Under the taxicab metric we can measure the distance of qubits from the centre as $M_{i,j}:=|(i,j)| = |i|+|j|$, and one can check that qubits only ever lie at odd values of this metric, essentially forming a series of concentric circles with $M_{i,j} = 2n+1$, $n\in\mathbb{N}$. See <Ref>.
$L=4$ code showing the circles of the $M_{i,j}$ metric. Qubits from the previous iteration are in black and new ones in white.
A general routing step requires enlarging these circles such that the initial radii $R_I$ are mapped to final radii $R_F$ in the following fashion:
\begin{equation}\label{sorting matrix}
\begin{matrix}
R_I & & R_F & \hspace{1cm} STEPS \\
2^k-1 & \rightarrow & 2^{k+1}-1 & \hspace{1cm} 2^{k-1}\\
2^k-3 & \rightarrow & 2^{k+1}-7 & \hspace{1cm} 2^{k-1}-2\\
2^k-5 & \rightarrow & 2^{k+1}-9 & \hspace{1cm} 2^{k-1}-2 \\
2^k-7 & \rightarrow & 2^{k+1}-15 & \hspace{1cm} 2^{k-1}-4\\
& \vdots & &\hspace{1cm} \vdots\\
3 & \rightarrow & 7 & \hspace{1cm} 2\\
\end{matrix}
\end{equation}
Routing the qubits requires a series of SWAP gates to iteratively make the circles larger, e.g. $3\rightarrow 5\rightarrow 7$; the number of steps this requires is shown in <Ref>. At the initial time step, it is only possible to move the outermost circle ($R_I=2^k-1$), since all smaller circles are adjacent. One can check, though, that the number of steps required to move these smaller circles is sufficiently small that it is possible to start moving them at a later time step. We provide a schedule for the required steps in <Ref>.
Thus all the qubits can be moved in $2^{k-1}$ steps. Each step requires (possibly) simultaneous SWAP gates, each of which can be decomposed into three CNOT gates. Thus the overall run time of each iteration is $3\cdot 2^{k-1}$. To start from the $L=2$ base code and enlarge to a desired $L=2^m$ requires $\log_2(L)-1$ iterations and thus the overall run time for the routing routine is given by the geometric series $\sum_{k=1}^{\log_2(L)-1} 3\cdot 2^{k-1} = \frac{3}{2}(L-2)$.
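For completeness, the geometric series evaluates as
\begin{equation*}
\sum_{k=1}^{\log_2(L)-1}3\cdot 2^{k-1}=3\left(2^{\log_2(L)-1}-1\right)=3\left(\frac{L}{2}-1\right)=\frac{3}{2}(L-2).
\end{equation*}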
Combining this with the time to emulate a toric architecture with a planar architecture, and the 10 time steps required to encode the $L=2$ base case using the Gottesman encoder (see Appendix <ref>), the total number of time steps required for the local RG encoder is $15L/2 - 6\log_2 L + 7 \sim O(L)$, where $L$ must be a power of 2.
\begin{equation}\label{routingroutine}
\begin{matrix*}[r]
Time step & &&&&\\
1. &\hspace{0.95cm} 2^k-1\rightarrow 2^k+1 & WAIT & WAIT & \dots & WAIT \\
2. & 2^k+1\rightarrow 2^k+3 &\hspace{0.95cm} 2^k-3\rightarrow 2^k-1 & WAIT & \dots & WAIT\\
3. & 2^k+3\rightarrow 2^k+5 & 2^k-1\rightarrow 2^k+1 &\hspace{1.3cm} 2^k-5\rightarrow 2^k-3& \hspace{0.5cm}\dots & WAIT\\
\vdots & \vdots & \vdots & \vdots & \vdots & \\
2^{k-1}-1. & 2^{k+1}-5\rightarrow 2^{k+1}-3 & 2^{k+1}-9\rightarrow 2^{k+1}-7 & 2^{k+1}-13\rightarrow 2^{k+1}-11 & \dots & \hspace{0.5cm}3\rightarrow 5\\
2^{k-1}. & 2^{k+1}-3\rightarrow 2^{k+1}-1 & DONE & 2^{k+1}-11\rightarrow 2^{k+1}-9 & \dots & 5\rightarrow 7\\
\end{matrix*}
\end{equation}
# Probing black hole charge with gravitational microlensing of gravitational
waves
Uddeepta Deka<EMAIL_ADDRESS>International Centre for Theoretical
Sciences, Tata Institute of Fundamental Research, Bangalore 560089, India
Sumanta Chakraborty<EMAIL_ADDRESS>Indian Association for the Cultivation
of Science, 2A & 2B Raja S C Mullick Road, Kolkata 700032, India Shasvath J.
Kapadia<EMAIL_ADDRESS>The Inter-University Centre for Astronomy
and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411007, India International
Centre for Theoretical Sciences, Tata Institute of Fundamental Research,
Bangalore 560089, India Md Arif Shaikh<EMAIL_ADDRESS>Department
of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
International Centre for Theoretical Sciences, Tata Institute of Fundamental
Research, Bangalore 560089, India Parameswaran Ajith<EMAIL_ADDRESS>International Centre for Theoretical Sciences, Tata Institute of Fundamental
Research, Bangalore 560089, India
(August 28, 2024)
###### Abstract
Gravitational microlensing of gravitational waves (GWs) opens up the exciting
possibility of studying the spacetime geometry around the lens. In this work,
we investigate the prospects of constraining the ‘charged’ hair of a black
hole (BH) from the observation of a GW signal microlensed by the BH. The
charge can have electromagnetic or modified gravity origin. We compute the
analytic form of the lensing potential with charge and construct the lensed
waveforms for a range of BH mass, charge and impact parameters, assuming non-
spinning BHs. Using an approximate likelihood function, we explore how future
observations of microlensed GWs can constrain the charge of the BH lens. We
find that positive values of the charge parameter (that can be of
electromagnetic or modified gravity origin) can be tightly constrained using
lensed GW signals, while the constraints on negative values of the charge
parameter (modified gravity origin) are modest.
## I Introduction
The LIGO and Virgo gravitational-wave (GW) detectors have observed $\sim 90$
compact binary coalescence (CBC) events during their first three observing runs
(O1, O2, O3) Abbott _et al._ (2021a). Most of them are binary black hole
(BBH) mergers. The remaining are mergers of binary neutron stars (BNSs) Abbott
_et al._ (2017a, 2020a) and neutron star-black hole (NSBH) binaries Abbott
_et al._ (2021b).
These detections have enabled some of the most unique and stringent tests of
general relativity (GR) in the strong-field regime Abbott _et al._ (2021c).
These include a model-agnostic residual test that studies the statistical
properties of the strain data after subtracting out the expected GW signal
buried therein to ascertain if the residual is consistent with noise
Scientific _et al._ (2016); Abbott _et al._ (2019a, 2020b); an inspiral-
merger-ringdown consistency test that checks if the GW waveform is consistent
with GR’s prediction of the same by comparing the binary’s intrinsic
parameters inferred from the low- and high-frequency portions of the waveform
Ghosh _et al._ (2016, 2018); a test that compares the speed of GWs with
respect to the speed of light Abbott _et al._ (2019b), as well as one that
probes signatures of velocity dispersion as a consequence of a finite graviton
mass Will (1998); and a test of GW polarizations that searches for
polarizations of GWs beyond the two predicted by GR Abbott _et al._ (2017b,
2021d); Takeda _et al._ (2021); Wong _et al._ (2021).
The anticipated observations of gravitationally lensed GWs in future observing runs [1] promise to enable additional novel tests of GR (see, e.g., Goyal _et al._ (2021); Fan _et al._ (2017); Hernandez (2022); Finke _et al._ (2021); Narola _et al._ (2023); Chung and Li (2021); Goyal _et al._ (2023b); Mukherjee _et al._ (2020); Collett and Bacon (2017)), while also shedding light on a number of questions in astrophysics (see, e.g., Basak _et al._ (2023); Singh _et al._ (2023); Magare _et al._ (2023)) and cosmology (see, e.g., Basak _et al._ (2022); Jana _et al._ (2022); Hannuksela _et al._ (2020)).
[1] No confirmed detection of GW lensing has thus far been reported Hannuksela _et al._ (2019); Dai _et al._ (2020); McIsaac _et al._ (2020); Abbott _et al._ (2021e, 2023); Janquart _et al._ (2023); Goyal _et al._ (2023a). Some arguments claiming the observation of lensed GWs – based on the larger BH masses uncovered by GW observations relative to those inferred from galactic X-ray binaries – have been made Broadhurst _et al._ (2018, 2022).
As with
electromagnetic (EM) waves, lensing of GWs results when large agglomerations
of matter lead to deviations in the trajectories of these waves Ohanian
(1974); Deguchi and Watson (1986); Wang _et al._ (1996); Nakamura (1998);
Bartelmann (2010).
Unlike the case involving EM waves, GW lensing is typically studied in two
different regimes. Lensing by galaxies or clusters can be analyzed using the
geometric optics (ray-optics) approximation since the wavelengths of the GWs
(detectable by LIGO-Virgo) are much smaller than the gravitational radii of
such lenses Ng _et al._ (2018); Dai _et al._ (2017); Smith _et al._ (2018).
Strong lensing in this regime results in the production of multiple copies of
the GWs Kormann _et al._ (1994); Koopmans _et al._ (2009) separated by time
delays spanning minutes to months Haris _et al._ (2018); More and More
(2022); Magare _et al._ (2023). On the other hand, lensing of GWs by massive
compact objects with gravitational radii comparable to the GW wavelength will
incur wave optics effects Takahashi and Nakamura (2003); Jung and Shin (2019);
Diego _et al._ (2019); Basak _et al._ (2022); Urrutia and Vaskonen (2022).
The result is the production of a single image with a modulated GW shape
Nakamura (1998). This shape carries with it imprints of the properties of the
lens, and can possibly be used to probe the nature of the compact lens.
Wave-optics effects modulating the GW waveform have been studied extensively
for simple lensing potential models, in particular the one corresponding to
the point mass lens Nakamura (1998); Nakamura and Deguchi (1999); Takahashi and
Nakamura (2003). (Indeed, most searches for microlensing signatures
incurred by wave-optics effects in detected GW events assume a point-mass lens
model Basak _et al._ (2022); Hannuksela _et al._ (2019); Abbott _et al._
(2023).) In this model, the frequency-dependent amplitude modulations of the GW
waveform are exclusively determined by the (redshifted) mass of the lens and
the impact parameter (i.e., the location of the source in the lens plane)
Takahashi and Nakamura (2003). However, if the compact object has additional
hairs, these will also be imprinted on the lensed GW waveform, and can
possibly be detected as and when microlensed GW signals are observed. Such
additional hairs could point to the specific nature of the compact object or
to a deviation from GR.
Besides the mass, the next obvious hair that a static and spherically
symmetric compact object can have is the electric charge $q$, which modifies
the radial and temporal components of the spacetime metric by
$\mathcal{O}(Q/r^{2})$, where $Q\equiv q^{2}$. Indeed, it is widely believed
that astrophysical objects should not have any net electric charge; even
if some electric charge is present, it should quickly get shielded Feng _et
al._ (2023); Pina _et al._ (2022); Bozzola and Paschalidis (2021) or
neutralised Eardley and Press (1975). Intriguingly, such a ‘charge-like’ hair
can also arise in several other contexts and can take non-negligible
values. For example, in the presence of an extra spatial dimension, where we
live in a four-dimensional universe embedded in a five-dimensional spacetime
(the braneworld scenario), the solution of the effective gravitational
field equations on the four-dimensional brane has exactly the same structure,
but with $Q$ being _negative_ Shiromizu _et al._ (2000); Dadhich _et al._
(2000); Harko and Mak (2004); Aliev and Gumrukcuoglu (2005); Maeda and Dadhich
(2007). Similarly, a positive value of $Q$ can arise in scalar-coupled Maxwell
theories, described by Horndeski theories Babichev _et al._ (2015);
Barrientos _et al._ (2017); Babichev and Charmousis (2014). There have
already been extensive searches for both positive and negative values
of the ‘charge’ in various contexts: (a) in the weak field regime, e.g.,
through solar system tests Iorio and Saridakis (2012); Capozziello _et al._
(2013); Chakraborty and SenGupta (2014); Bhattacharya and Chakraborty (2017);
Mukherjee and Chakraborty (2018), (b) using EM waves from accretion on
supermassive BHs Maselli _et al._ (2015); Stuchlík and Kotrlová (2009);
Banerjee _et al._ (2021, 2017), (c) using GWs from BBH and BNS coalescence
Barausse and Yagi (2015); Toshmatov _et al._ (2016); Andriot and Lucena Gómez
(2017); Chakraborty _et al._ (2018); Chakravarti _et al._ (2020); Mishra
_et al._ (2022, 2023a); Gupta _et al._ (2021); Carullo _et al._ (2022); Gu
_et al._ (2023), and (d) with strong lensing of EM waves and measurement of BH
shadow Chakraborty and SenGupta (2017); Banerjee _et al._ (2022); Vagnozzi
_et al._ (2023); Banerjee _et al._ (2020).
Here we explore the possibility of constraining the charge hair of a BH from
the observation of GWs microlensed by it. We study the lensing-induced
modulations of the GW signal by introducing the charge in the lensing
potential. In particular, we model the spacetime surrounding the lens by a
metric that is analogous to the Reissner-Nordström spacetime, but with a
$\pm(Q/r^{2})$ term in the temporal and radial metric elements. If a lensed GW
signal suggests a positive value of $Q$, it is to be interpreted either as an
EM-charged BH or as a BH in modified theories of gravity. If, instead, the
lensed GW signal prefers a negative value of $Q$, it will hint at the existence
of extra dimensions. In what follows, we evaluate the lensing potential
associated with the new metric and numerically calculate the corresponding
frequency-dependent magnification function that modulates the GW, for a range
of lens masses, impact parameters (i.e., the location of the source in the
lens plane), and charges $Q$.
To assess our ability to constrain the charge $Q$, we quantify the extent to
which a GW signal lensed by a BH of charge $Q$ deviates from one lensed by a BH
of charge $Q_{\mathrm{true}}$. To that end, we compute matches, defined as the
inner product between two waveforms weighted by the noise power spectral
density (PSD) of the GW detector. We then use these to construct an approximate
likelihood on the lens parameters, which we sample to assess how well we can
recover the true value of the charge. We find that positive charges can be well
constrained; for example, a lens with charge $Q/M_{\rm L}=0.1$ can be
identified as a charged lens at $>90\%$ confidence. On the other hand,
negatively charged lenses are only weakly constrained: only large values of
$|Q|$ can be distinguished from $Q=0$ at $>90\%$ confidence.
The paper is organized as follows: In Section II, we present the analytical
expression for the lensing potential in the presence of the charge $Q$. Using
the expression for the lensing potential, in Section III we numerically
compute the magnification function, as well as the GW waveform lensed by a
charged BH. In Section IV we explore the possibility of constraining the
charge parameter of a BH from the observation of GWs microlensed by it. We
conclude in Section V. Some of the detailed calculations are presented in
Appendix A.
_Notations and Conventions:_ We will set the fundamental constants $G$ and $c$
to unity and will use the mostly positive signature convention, such that the
Minkowski metric in Cartesian coordinate takes the form
$\eta_{\mu\nu}=\textrm{diag}(-1,+1,+1,+1)$. Greek indices $\mu,\nu,\cdots$
denote four-dimensional spacetime coordinates, while uppercase Roman indices
$A,B,\cdots$ denote five-dimensional spacetime indices.
We will refer to $Q$ as the ‘charge’, or ‘charge hair’ of a BH. Note that,
even a negative electric charge $q$ will correspond to a positive value of
$Q$, since $Q\equiv q^{2}$. Also, we will refer to lensing in the wave optics
regime ($GM_{\mathrm{L}}/c^{2}\sim\lambda_{\mathrm{GW}}$) as microlensing.
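For readers converting between geometric and physical units: with $G=c=1$, a time quoted in units of $M_{\rm L}$ corresponds to $GM_{\rm L}/c^{3}$ seconds. A minimal helper illustrating this conversion is given below; the constant values are the standard IAU nominal solar mass parameter and the speed of light, and are not taken from this paper.

```python
GM_SUN = 1.32712440018e20   # m^3 s^-2, nominal solar mass parameter G*M_sun
C = 2.99792458e8            # m/s, speed of light

def mass_to_seconds(mass_in_msun):
    """Geometric time G M / c^3 of a mass given in solar masses."""
    return mass_in_msun * GM_SUN / C**3

print(mass_to_seconds(100.0))   # ~4.93e-4 s for a 100 M_sun lens
```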
## II Lensing magnification by a charged black hole
In this section, we will determine the magnification function due to the
simplest possible hair of a BH, namely a charge that can either have an EM
origin or arise from theories of gravity beyond GR. Of course, any EM
charge is expected to dissipate away with time or be heavily shielded by
surrounding matter. But it would still be interesting to ask whether any
residual electric charge present in an astrophysical BH could be tested using
gravitational lensing Feng _et al._ (2023); Pina _et al._ (2022); Bozzola and
Paschalidis (2021). More interesting is the case when this charge arises from
an alternative theory of gravity. We will provide some examples of such
modified theories below.
First of all, the presence of an extra spatial dimension can induce such a
charge in the four-dimensional spacetime metric outside of a BH. This is
because the effective gravitational field equations on the four-dimensional
vacuum spacetime (known as the brane) take the following form Shiromizu _et
al._ (2000); Dadhich _et al._ (2000)
$\displaystyle G_{\mu\nu}+E_{\mu\nu}=0\leavevmode\nobreak\ .$ (1)
Here $G_{\mu\nu}$ is the four-dimensional Einstein tensor and
$E_{\mu\nu}=W_{ABCD}e^{A}_{\mu}n^{B}e^{C}_{\nu}n^{D}$ is the projected five
dimensional Weyl tensor on the brane, with $e^{A}_{\mu}$ being the projector
and $n_{A}$ being the normal to the brane. Owing to the symmetries of the Weyl
tensor, it follows that $E^{\mu}_{\mu}=0$, which is akin to the traceless
property of the energy-momentum tensor of the EM field. Thus the above field
equation for the metric admits the following static and spherically symmetric
solution, $-g_{tt}=g^{rr}=1-(2M/r)+(Q/r^{2})$, with $Q<0$ and thus differing
from the Reissner-Nordström solution by the sign of the $(1/r^{2})$ term. The
same solution for the metric components was also obtained in $f(T)$ gravity
Capozziello _et al._ (2013), however with a positive contribution from the
charge term $Q$. The Einstein-Gauss-Bonnet theory in higher dimensions also
admits the same solution on the brane with $Q<0$, albeit with the charge term
originating from the coupling constant of the Gauss-Bonnet invariant Maeda and
Dadhich (2007). Finally, the same solution also arises in the context of a
sub-class of Horndeski theories of gravity, which involves the following terms:
$\beta G^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi$, as well as
$-(\gamma/2)T^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi$ in the
gravitational action, besides the Ricci scalar and the Maxwell term. In this
case, the charge $Q$ depends on the ratio $(\gamma/\beta)$ and is a positive
quantity Babichev _et al._ (2015). Thus, we observe that various cases yield
the same metric, with different origins and signs for the charge term. In
summary, there are several possibilities for a positive charge to appear in the
metric: (a) an EM charge, (b) a charge arising from the $f(T)$ theory of
gravity, and (c) a charge depending on the non-trivial coupling of gravity and
electromagnetism with a scalar, as in Horndeski theories. The negative charge,
on the other hand, predominantly arises in the presence of higher dimensions,
either from the bulk Weyl tensor or from the Gauss-Bonnet coupling. This makes
the charged metric an ideal arena to probe: since it arises in so many
different contexts, it is worthwhile to study the effect of such a charge on
the microlensing of GWs. In particular, when the wavelength of the GWs is
comparable to the radius of the event horizon of these charged BHs, wave
effects must be taken into account. In what follows, we compute the effect of
this charge term on the amplification of GWs due to microlensing.
Figure 1: The time delay contours (in units of $M_{\rm L}$, with $t_{\rm
d}=0$ at the global minimum) as a function of the lens plane coordinates
$(x_{1},x_{2})$. The source position is shown by a star ($y=0.1$). The left,
center, and right panels correspond to the cases with $Q/M_{\rm
L}=-0.5,\,0\text{ and }0.5$, respectively. For each of these cases, the
locations of the images and their types are also indicated.
We will work within the thin lens approximation, i.e., the lens is treated not
as a three-dimensional object but as a two-dimensional one. This is justified
because the size of the lens is much smaller than the distances from the lens
to the source and to the observer. Thus the magnification function of the GW,
as measured by an observer, reads Takahashi and Nakamura (2003),
$\displaystyle\mathcal{F}(f)=\frac{D_{\rm S}\xi_{0}^{2}}{D_{\rm L}D_{\rm
LS}}\frac{f}{i}\int d^{2}\vec{x}\exp\left[2\pi ift_{\rm
d}(\vec{x},\vec{y})\right]\leavevmode\nobreak\ ,$ (2)
where $\vec{x}\equiv\vec{\xi}/\xi_{0}$ is the normalized vector on the lens
plane and $\vec{y}=(D_{\rm L}/\xi_{0}D_{\rm S})\vec{\eta}$ is the normalized
vector on the source plane indicating the location of the source (impact
parameter). Here, $D_{\rm L}$ is the angular diameter distance to the lens,
$D_{\rm S}$ is the angular diameter distance to the source, and $D_{\rm LS}$ is
the angular diameter distance between the source and the lens. Further, the
quantity $t_{\rm d}$ is the time taken by the GW to reach the observer from
the source in the presence of the lens, while $f$ is the frequency of the GW
emitted from the source. Since the source and the observer are, in general,
separated by a cosmological distance, the effect of the cosmological expansion
must be included in the above analysis. This modifies the above magnification
function to,
$\displaystyle\mathcal{F}(f)$ $\displaystyle=\frac{D_{\rm
S}\xi_{0}^{2}\left(1+z_{\rm L}\right)}{D_{\rm L}D_{\rm LS}}\frac{f}{i}\int
d^{2}\vec{x}\exp\left[2\pi ift_{\rm
d}(\vec{x},\vec{y})\right]\leavevmode\nobreak\ ,$ (3)
where $z_{\rm L}$ is the redshift of the lens. Note that the magnification
function $\mathcal{F}(f)$ is normalized such that $|\mathcal{F}(f)|=1$ in the
absence of the lens. The time delay of the GW arriving at the observer from
the source is given by,
$\displaystyle t_{\rm d}(\vec{x},\vec{y})=\frac{D_{\rm
S}\xi_{0}^{2}\left(1+z_{\rm L}\right)}{D_{\rm L}D_{\rm
LS}}\left[\frac{|\vec{x}-\vec{y}|^{2}}{2}-\psi(\vec{x})\right]\leavevmode\nobreak\
.$ (4)
Here $\psi(\vec{x})$ is the two-dimensional deflection potential, which depends
on the background geometry of the lens through which the GW propagates, and it
satisfies the following differential equation,
$\displaystyle\nabla_{x}^{2}\psi=\frac{2\Sigma}{\Sigma_{\rm
cr}}\leavevmode\nobreak\ ,$ (5)
where $\Sigma$ is the surface energy density of the lens on the lens plane
and $\Sigma_{\rm cr}$ is a critical energy density, to be fixed later. For a
point mass lens, one can simply take $\Sigma=M_{\rm
L}\delta^{2}(\vec{\xi})$, i.e., the point mass is placed at the origin
of the lens plane. In this case, choosing $\Sigma_{\rm cr}=(M_{\rm
L}/\pi\xi_{0}^{2})$, one obtains $\psi(\vec{x})=\ln|\vec{x}|$. In the present
context, we are interested in a charged lens, either sourced by an EM
field or arising in gravity theories beyond GR. The effects of these modified
theories of gravity enter through additional contributions to Einstein's
equations and effectively behave as a ‘charged’ field. In each of
these cases, the energy density arising from the extra contribution is given
by $\rho_{\rm Q}=(Q/r^{4})$, i.e., the associated field falls off as
$\mathcal{O}(r^{-2})$ in three space dimensions T.Padmanabhan (2010). Thus on
the lens plane, which is a two-dimensional hypersurface, the falloff of the
field will simply become $\mathcal{O}(r^{-1})$ and hence the energy density
becomes $(Q/|\vec{\xi}|^{2})$ Hendi (2011); Hendi _et al._ (2020); Tang _et
al._ (2017); Singha _et al._ (2022). Thus the total energy density on the
two-dimensional surface becomes,
$\displaystyle\Sigma=M_{\rm
L}\delta^{2}(\vec{\xi})+\frac{Q}{|\vec{\xi}|^{2}}\leavevmode\nobreak\ .$ (6)
Given the above surface energy density, which involves both the contribution
from the point-mass lens and from the charge $Q$, the two-dimensional
gravitational potential can be obtained by solving Eq. (5), and that yields
(see Appendix A),
$\displaystyle\psi(\vec{x})=\ln|\vec{x}|\left(1+\frac{\pi Q}{M_{\rm
L}}\ln|\vec{x}|\right)\leavevmode\nobreak\ .$ (7)
Here, we have chosen the critical surface energy density $\Sigma_{\rm cr}$,
such that in the limit $Q\to 0$, we obtain an identical expression to that of
the point mass lens. This is one of the main results of this paper.
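As a quick consistency check of Eq. (7), the following minimal sympy sketch verifies that, away from the origin, the Laplacian of the potential reproduces the $2\pi Q/(M_{\rm L}|\vec{x}|^{2})$ source term implied by Eqs. (5) and (6) with $\Sigma_{\rm cr}=M_{\rm L}/\pi\xi_{0}^{2}$ (see also Appendix A):

```python
# Symbolic check (away from the origin) that psi of Eq. (7) satisfies the
# Poisson equation (5), whose charged part equals 2 pi Q / (M_L |x|^2).
import sympy as sp

x1, x2 = sp.symbols("x1 x2", positive=True)
Q, ML = sp.symbols("Q M_L", real=True, nonzero=True)
r = sp.sqrt(x1**2 + x2**2)
psi = sp.log(r) * (1 + sp.pi * Q / ML * sp.log(r))        # Eq. (7)
lap = sp.diff(psi, x1, 2) + sp.diff(psi, x2, 2)
print(sp.simplify(lap - 2 * sp.pi * Q / (ML * r**2)))     # prints 0
```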
Given this gravitational potential, in the geometric optics regime, the
location of the images on the lens plane can be determined by invoking
Fermat’s principle of gravitational lensing Blandford and Narayan (1986).
Fermat’s principle states that the images are formed at the extrema of the
time delay surface, that is, at points on the lens plane satisfying the
condition:
$\displaystyle\frac{\partial t_{\rm d}(\vec{x},\vec{y})}{\partial\vec{x}}=0.$
(8)
Thus, the image locations can be obtained by solving the following algebraic
equation
$\displaystyle\frac{D_{\rm S}\xi_{0}^{2}}{D_{\rm L}D_{\rm LS}}\left(1+z_{\rm
L}\right)\left[\left(\vec{x}-\vec{y}\right)-\frac{1}{|\vec{x}|}\left(1+\frac{2\pi
Q}{M_{\rm
L}}\ln|\vec{x}|\right)\frac{\vec{x}}{|\vec{x}|}\right]=0\leavevmode\nobreak\
.$ (9)
Note that GW microlensing happens in the wave optics regime, where the
geometric-optics approximation is not valid. However, an approximate
understanding of the image locations is useful in interpreting the lensing
magnification that we compute in the wave optics regime.
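Since the images are collinear with the source (see Appendix A), the image positions follow from the one-dimensional version of Eq. (9) along the symmetry axis. A minimal sketch using scipy root-bracketing is given below; the scan range and resolution are illustrative choices:

```python
import numpy as np
from scipy.optimize import brentq

def lens_eq(x, y, q):
    """One-dimensional lens equation along the symmetry axis:
    x - y - sign(x) * psi'(|x|), with psi'(u) = (1 + 2 pi q ln u) / u
    and q = Q / M_L (cf. Eqs. (7) and (9))."""
    u = abs(x)
    return x - y - np.sign(x) * (1.0 + 2.0 * np.pi * q * np.log(u)) / u

def image_positions(y, q, x_max=10.0, n_scan=200001):
    xs = np.linspace(-x_max, x_max, n_scan)
    xs = xs[np.abs(xs) > 1e-3]                  # exclude the lens position
    g = lens_eq(xs, y, q)
    roots = []
    for a, b, ga, gb in zip(xs[:-1], xs[1:], g[:-1], g[1:]):
        # bracket a sign change, staying on one side of the lens
        if ga * gb < 0 and a * b > 0:
            roots.append(brentq(lens_eq, a, b, args=(y, q)))
    return roots

print(len(image_positions(0.5, -0.5)))   # two images for Q <= 0
print(len(image_positions(0.1, 0.5)))    # four images (cf. Fig. 1, right)
```

Scanning such root counts over $(|\vec{y}|,\,Q/M_{\rm L})$ reproduces the structure of Fig. 2.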
Figure 2: Number of images shown as a function of the impact parameter
$|\vec{y}|$, and the charge $Q/M_{\rm L}$. There exist two images for all
$Q/M_{\rm L}\leq 0$, irrespective of the impact parameter. However, for
$Q/M_{\rm L}>0$, there exists a region in the parameter space in which four
images are formed. See text for discussion.
It turns out that for $Q\leq 0$, the above algebraic equation has two
solutions for $\vec{x}$, the normalized vector on the lens plane. These are
the image locations in the geometric optics limit. One of these solutions
corresponds to a saddle point, while the other is a minimum, as depicted in
the left and center panels of Fig. 1. Besides the locations of these two
points, Fig. 1 shows the contours of constant $t_{\rm d}(\vec{x},\vec{y})$ for
a fixed source location $\vec{y}$. Interestingly, for certain choices of
$Q>0$ and impact parameter $|\vec{y}|$, there can even be four solutions
of Eq. (9) for $\vec{x}$, leading to the formation of four images. This can be
seen in Fig. 2 and in the right panel of Fig. 1.
The formation of multiple images can be understood using caustics associated
with the lensing potential. For $Q\leq 0$, there are no caustics (except for
the lens location) and hence there are always two images. For any $Q>0$, a
caustic appears at a finite value of $|\vec{y}|$, which decreases as the
charge increases. Therefore, for $Q\lesssim 0.1$, the source will appear
within the caustic, leading to four images. As the charge increases, the
caustic appears at smaller values of $|\vec{y}|$, and for $Q\sim 0.3$, the
caustic reaches $|\vec{y}|\sim 0$. Thus the source goes out of the caustic,
leading to two images. Subsequent increase of $Q$, results in the caustic
forming at a larger value of $|\vec{y}|$, and hence the source again goes
within the caustic, leading to four images. This is precisely what Fig. 2
demonstrates.
Note that the above situation cannot be compared with the scenario involving
the BH shadow, which involves the lensing of EM waves in the strong gravity
regime. There, the source (emission from the accreting gas) is close to the
lens and the light rays experience the full three-dimensional potential of the
lens. In contrast, typical gravitational lensing (as considered here)
operates in the weak field regime. Also, the distance between the source and
the lens is so large that the lens can be approximated by a projected two-
dimensional potential (thin lens approximation). Hence the details of the
images are quantitatively different in the two scenarios.
Another interesting feature of Fig. 1 is that the images and the source lie
along a line in the plane spanned by $\vec{x}$, which arises from the axially
symmetric nature of the lens. This is also reflected in the fact that the
lensing potential $\psi(\vec{x})$ depends on $|\vec{x}|$ alone. It follows
that the solution of Eq. (9) depends only on $|\vec{y}|$, and hence throughout
the rest of the analysis we will write $y\equiv|\vec{y}|$, which is the only
input required for determining the image locations and the associated time
delay contours.
## III Numerical computation of the lensing magnification
Figure 3: The time domain magnification function $\widetilde{\mathcal{F}}(t)$
as a function of time $t$ (in units of $M_{\rm L}$, with $t=0$ at the global
minimum). The panels from left to right correspond to cases with $Q/M_{\rm
L}=-0.5,\,0\text{ and }0.5$, respectively. The plots in each of these panels
correspond to impact parameter values $y=0.1,\,0.5\text{ and }1.0$. In the
left and the center panels, there are always two images: a minimum image and a
saddle image. The minimum image lies at $t=0$, while the saddle image can be
identified by the logarithmically diverging peak. In the right panel, for
$y=0.1$, there are four images: one minimum image, two saddle images, and one
maximum image. The maximum image and one of the saddle images have similar time
delays, $t\sim 0.3$. These two images can be identified by the logarithmic
divergence followed by a sharp drop in the zoomed inset figure on the left.
For $y=0.5\leavevmode\nobreak\ \text{and}\leavevmode\nobreak\ 1.0$, there is a
minimum and a saddle image. However, there are features in addition to the
images, as seen in the inset figure on the right.
In this section, we will determine the magnification function due to a charged
lens and find out how the GW waveform is modulated by lensing. For an
isolated point mass lens, the lensing magnification function $\mathcal{F}(f)$
can be derived analytically Nakamura and Deguchi (1999). Unfortunately, for
the present case of a charged lens, an analytical expression for the
magnification function cannot be obtained, and one must resort to numerical
schemes. Implementing such schemes is also difficult in the present
context owing to the highly oscillatory behaviour of the integrand, which
renders traditional numerical integration methods computationally
ineffective. Therefore, we developed and implemented a new numerical scheme to
compute the magnification function, which we outline below. This is an
extension of the methods used in Diego _et al._ (2019), based on the original
work by Ulmer and Goodman (1995). The new ingredient is to use a histogram for
an efficient computation of the diffraction integral, Eq. (13), in the time
domain.
1.
Fixing the length scale $\xi_{0}$: The time delay contours, as well as the
lensing potential, depend on the length scale $\xi_{0}$, which we fix to be of
the following form:
$\xi_{0}^{2}=4M_{0}\frac{D_{\rm L}D_{\rm{LS}}}{D_{\rm S}}\leavevmode\nobreak\
,$ (10)
where $M_{0}$ is an arbitrary mass scale. In the present problem, the only
relevant mass scale is set by the lens mass and hence we choose $M_{0}=M_{\rm
L}$. With this choice of the length scale $\xi_{0}$, Eq. (4) becomes,
$\displaystyle t_{\rm d}(\vec{x},\vec{y})$ $\displaystyle=4M_{\rm L}(1+z_{\rm
L})\times\left[\frac{|\vec{x}-\vec{y}|^{2}}{2}-\psi(\vec{x})\right]\leavevmode\nobreak\
.$ (11)
In the subsequent analysis, we will use the above expression for the time
delay.
Figure 4: The frequency domain magnification function $\mathcal{F}(f)$ as a
function of the GW frequency (in Hz) for a lens with mass $M_{\rm
L}=100M_{\odot}$, at redshift $z_{\rm L}=0.5$. The left panel corresponds to
the amplitude of $\mathcal{F}(f)$, while the right panel corresponds to its
phase. The plots correspond to cases with $Q/M_{\rm L}=-0.5,\,0\text{ and
}0.5$ (lines with different thicknesses), for impact parameter values
$y=0.1,0.5\text{ and }1.0$ (lines with different colors). We restrict the
frequencies to the sensitivity band of the ground-based GW detectors.
2.
Time domain magnification function: As we have already discussed, determining
the magnification function $\mathcal{F}(f)$ in the frequency domain requires
performing a highly oscillatory integral over the time delay function $t_{\rm
d}(\vec{x},\vec{y})$, which is numerically challenging. Thus, we will first
compute the magnification function in the time domain, and then obtain
$\mathcal{F}(f)$ via a Fourier transform. For this purpose we rewrite Eq. (3)
in the following form:
$\mathcal{F}(f)=C(f)\int d^{2}\vec{x}\,\exp[2\pi ift_{\rm
d}(\vec{x},\vec{y})]\leavevmode\nobreak\ ,$ (12)
where $C(f)\equiv\frac{D_{\rm S}\xi_{0}^{2}\left(1+z_{\rm L}\right)}{D_{\rm
L}D_{\rm LS}}\frac{f}{i}$ collects the frequency-dependent prefactor of
Eq. (3). We define the time domain magnification function
$\widetilde{\mathcal{F}}(t)$ as the inverse Fourier transform of the ratio
$\{\mathcal{F}(f)/C(f)\}$, which yields:
$\displaystyle\widetilde{\mathcal{F}}(t)$ $\displaystyle\equiv\int
df\,\frac{\mathcal{F}(f)}{C(f)}\,\exp[-2\pi ift]$ $\displaystyle=\int
d^{2}\vec{x}\,\delta[t_{\rm d}(\vec{x},\vec{y})-t]\leavevmode\nobreak\ .$ (13)
As evident from the above expression, the time-domain magnification function
is related to the area between contours of constant time delay on the lens
plane. In particular, $\widetilde{\mathcal{F}}(t)dt$ is the area $dS$ between
the contours of constant time delay $t$ and $t+dt$, i.e.,
$\widetilde{\mathcal{F}}(t)=\frac{dS}{dt}\leavevmode\nobreak\ .$ (14)
Thus by computing the area between two infinitesimal time domain contours, we
can determine the magnification function $\widetilde{\mathcal{F}}(t)$ in the
time domain.
3.
Histogram of the time delay map: To find the area between the contours of
constant time delay, we place a uniform grid on the lens plane and determine
the time delay at each of the grid points (see, e.g., Fig. 1). Counting the
number of grid points with time delays between $t$ and $t+dt$ yields a
histogram over the time delay, with bin size $dt$. This count is a proxy for
the area between the time delay contours $t$ and $t+dt$, thereby allowing us
to compute the time-domain magnification function
$\widetilde{\mathcal{F}}(t)$ (see the code sketch following this list).
For the case of a charged lens, the above method of determining the area
between time delay contours leads to the desired time-domain magnification
function (Fig. 3). As evident, the amplification is lowest for $Q<0$ and
highest for $Q>0$. Also, the magnification function increases as the
impact parameter $y$ is decreased, since the GW passes closer to the lens and
is more strongly affected. Another intriguing feature, already noted earlier,
can also be seen in Fig. 3, namely, the existence of four images for certain
positive values of $Q$ and specific impact parameters $y$. For $Q/M_{\rm
L}=0.5$ and $y=0.1$, these four images can be seen explicitly. The minimum
image at $t=0$ is common to all the plots; two additional images are
clearly visible, while the fourth one (the maximum image) lies very close to
one of the saddle images. For larger values of $y$, there are two images for
positive $Q$, but some additional features are present. These can be seen in
the inset figures of the right panel of Fig. 3.
4.
Frequency domain magnification function: Having computed the time-domain
magnification function $\widetilde{\mathcal{F}}(t)$, it is straightforward to
determine the frequency domain magnification $\mathcal{F}(f)$. For this
purpose, we simply perform a fast Fourier transform (FFT) of
$\widetilde{\mathcal{F}}(t)$ and multiply by the overall factor $C(f)$,
yielding:
$\mathcal{F}(f)=C(f)\times\mathrm{FFT}\left[\widetilde{\mathcal{F}}(t)\right]\leavevmode\nobreak\
.$ (15)
While the time-domain magnification function is real, the frequency-domain
magnification is complex. We therefore plot the magnitude
$|\mathcal{F}(f)|$ and the phase $\textrm{arg}[\mathcal{F}(f)]$ in Fig. 4.
As expected, the frequency-domain magnification is also largest for $Q>0$
and smallest for $Q<0$, and it decreases as the impact parameter increases.
Figure 5: GW signals lensed by charged BHs of mass $M_{\rm L}=100M_{\odot}$,
at redshift $z_{\rm L}=0.5$. The panels from left to right correspond
to cases with $Q/M_{\rm L}=-0.5,\,0\text{ and }0.5$, respectively, for various
impact parameters (shown in legends). The top and bottom panels show the
amplitude of the lensed waveforms in the frequency and time domains,
respectively. The dashed grey lines indicate the unlensed waveforms.
5.
Lensed waveform: Finally, the GW strain is amplified by the
frequency-dependent magnification factor. In the frequency domain, the
unlensed GW strain $h^{U}(f;\vec{\theta})$ is modified as:
$h^{L}(f;\vec{\theta},\vec{\lambda})=\mathcal{F}(f;\vec{\lambda})\leavevmode\nobreak\
h^{U}(f;\vec{\theta})\leavevmode\nobreak\ .$ (16)
Here $h^{L}(f;\vec{\theta},\vec{\lambda})$ is the lensed waveform,
$\vec{\theta}$ are the source parameters of the GW signal and
$\vec{\lambda}=(M_{\rm L}^{z},y,Q)$ are the lens parameters, where $M_{\rm
L}^{z}\equiv M_{\rm L}(1+z_{\rm L})$ is the redshifted lens mass.
The GW signal lensed by a charged lens is shown in both the frequency
and the time domain in Fig. 5, along with the corresponding unlensed waveform.
It is evident (in particular, from the time domain plots) that the lensed
waveform departs least from the unlensed one for $Q<0$ and most for $Q>0$.
For $Q<0$, the lensed waveform remains mostly in phase with the unlensed
one, while for $Q>0$ the two are out of phase. As we will see, these
properties make constraining positive values of $Q$ much easier than
constraining negative values.
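To make the pipeline of steps 1-5 concrete, the following self-contained sketch implements the histogram-based computation of $\widetilde{\mathcal{F}}(t)$ and its Fourier transform to $\mathcal{F}(f)$ for the potential of Eq. (7). This is an illustration, not the production code: the grid extent, the bin count, and the crude regularization of the central region for $Q>0$ (where the time delay diverges logarithmically) are choices of this sketch, and its reading of the prefactor, $C(f)=-4ifM_{\rm L}^{z}$ for the scale choice of Eq. (10), should be validated against the analytic point-mass result before use.

```python
import numpy as np

MSUN_SEC = 4.92549e-6                    # G M_sun / c^3 in seconds

def magnification(q=0.5, y=0.1, ML_z_msun=100.0,
                  half_width=10.0, n_grid=2001, n_bins=4096):
    """Frequency-domain magnification F(f) for a charged lens, q = Q/M_L."""
    ML_z = ML_z_msun * MSUN_SEC
    ax = np.linspace(-half_width, half_width, n_grid)
    x1, x2 = np.meshgrid(ax, ax, indexing="ij")
    r = np.maximum(np.hypot(x1, x2), 1e-6)           # avoid log(0)

    # Steps 1-2: time delay map, Eqs. (7) and (11).
    psi = np.log(r) * (1.0 + np.pi * q * np.log(r))
    td = 4.0 * ML_z * (0.5 * ((x1 - y) ** 2 + x2 ** 2) - psi)
    td -= td[r > 0.1].min()      # t = 0 at the minimum outside a small core

    # Step 3: histogram of the time delay map; counts approximate the area
    # dS between contours t and t + dt, so F~(t) = dS/dt (Eq. (14)).
    pix = (ax[1] - ax[0]) ** 2
    counts, edges = np.histogram(td, bins=n_bins, range=(0.0, td.max()))
    dt = edges[1] - edges[0]
    Ft = counts * pix / dt

    # Step 4: Fourier transform back to the frequency domain, Eq. (15).
    # np.fft.rfft uses exp(-2 pi i f t); since F~(t) is real, conjugating
    # gives the exp(+2 pi i f t) convention implied by Eq. (13).
    f = np.fft.rfftfreq(n_bins, d=dt)
    Ff = np.conj(np.fft.rfft(Ft)) * dt
    return f, -4j * f * ML_z * Ff                    # C(f) * FFT[F~(t)]

# Step 5 amounts to multiplying an unlensed strain by F(f) on the same grid.
f, Ff = magnification(q=0.0, y=0.5)                  # uncharged check case
print(f[1], abs(Ff[1]))
```

Boundary truncation of the lens-plane grid induces the usual apodization artifacts in $\mathcal{F}(f)$, which is one reason the point-mass validation described below is essential.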
As a verification of the numerical scheme described above, we have used it to
determine the lensed waveform for a point mass (uncharged) lens. Since the
magnification function for a point mass lens can be computed analytically
Nakamura (1998), we can compare it with our numerical results. Accordingly, we
have evaluated the mismatch between lensed waveforms computed using the
analytical magnification function for an isolated point mass lens and those
computed using our method with $Q=0$. We find that the mismatch values are
within permissible limits ($\lesssim 10^{-5}$). This validates the numerical
method described above, which we then use to determine the magnification
function for a charged lens.
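For reference, the analytic point-mass magnification used in this validation has the standard closed form of Takahashi and Nakamura (2003). A sketch of its evaluation is given below; mpmath is used because the confluent hypergeometric function must be evaluated at complex arguments:

```python
import mpmath as mp

def F_point_mass(f, ML_z_sec, y):
    """Analytic F(f) for an uncharged point-mass lens
    (Takahashi & Nakamura 2003); ML_z_sec is the redshifted
    lens mass in seconds, and w = 8 pi M_L^z f."""
    w = 8 * mp.pi * ML_z_sec * f
    xm = (y + mp.sqrt(y**2 + 4)) / 2                # minimum-image position
    phim = (xm - y)**2 / 2 - mp.log(xm)
    pref = mp.exp(mp.pi * w / 4 + 0.5j * w * (mp.log(w / 2) - 2 * phim))
    return pref * mp.gamma(1 - 0.5j * w) * mp.hyp1f1(0.5j * w, 1,
                                                     0.5j * w * y**2)

# e.g. a 100 M_sun (redshifted) lens, y = 0.5, f = 100 Hz:
print(abs(F_point_mass(100.0, 100 * 4.92549e-6, 0.5)))
```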
## IV Prospective constraints on the black hole charge from lensing
observations
In this section, we will derive prospective constraints on the BH charge from
future observations of microlensed GW signals. We have already seen that
positive values of the charge $Q$ significantly modify the GW waveform through
lensing, and the effect is present in the amplitude as well as in the phase of
the GW signal (Fig. 5). Thus, we expect that a BH lens without charge can be
efficiently distinguished from a charged BH lens for $Q>0$, while such a
distinction will be less efficient for $Q<0$.
To get a better understanding of the prospective constraints on the BH charge
from future observations of microlensed GWs, we estimate the (approximate)
Bayesian posteriors of $Q$ from simulated observations of microlensed GWs. The
GW signal microlensed by a BH can be parameterised by a set of source
parameters $\vec{\theta}$ and lens parameters $\vec{\lambda}$. Their posterior
distributions can be estimated from the observed data $d$ using Bayes'
theorem:
$p(\vec{\theta},\vec{\lambda}|d)=\frac{p(\vec{\theta},\vec{\lambda})p(d|\vec{\theta},\vec{\lambda},\mathcal{H}_{\mathrm{ML}})}{p(d|\mathcal{H}_{\mathrm{ML}})},$
(17)
where $p(\vec{\theta},\vec{\lambda})$ is the prior distribution of the source
and lens parameters, and
$p(d|\vec{\theta},\vec{\lambda},\mathcal{H}_{\mathrm{ML}})$ is the likelihood
of obtaining the data $d$ given the parameters $\vec{\theta},\vec{\lambda}$
and the hypothesis $\mathcal{H}_{\mathrm{ML}}$ that the data contain a
microlensed GW signal. The denominator, $p(d|\mathcal{H}_{\mathrm{ML}})$, is
the likelihood marginalised over all the parameters (called the evidence of
the hypothesis $\mathcal{H}_{\mathrm{ML}}$).
Figure 6: The top panel shows the 2D likelihood in the impact parameter $y$
and charge $Q/M_{\rm L}$ computed from a GW signal with $\rm{SNR}=25$
microlensed by a charged BH of mass $M_{\rm L}=500M_{\odot}$. The true values
of the impact parameter $(y=0.5)$ and charge ($Q/M_{\rm L}=-0.5,0,0.5$) are
indicated by the dashed red lines in each panel from left to right. The $95$
percentile levels are indicated by white lines. The bottom panel shows the 2D
likelihood in $M_{\rm L}$ and $Q/M_{\rm L}$ for a microlensed GW signal with
$y=0.5$. It can be seen that $Q/M_{\rm L}$ is largely uncorrelated with other
parameters $M_{\rm L}$ and $y$.
The posterior $p(Q|d)$ of the charge can be computed by marginalising
$p(\vec{\theta},\vec{\lambda}|d)$ over all parameters except $Q$. For
simplicity, we will assume that the lens parameters $\vec{\lambda}$ are
largely uncorrelated with the source parameters $\vec{\theta}$. Hence, to
estimate prospective constraints on $Q$, we need to compute the likelihood
only over $\vec{\lambda}$. This is a reasonable assumption, although recent
work has identified possible correlations between microlensing modulations and
modulations induced by spin-induced precession Mishra _et al._ (2023b). An
uncharged point mass lens in the background of a macro lens (e.g., a galaxy)
could also introduce more complex modulations in the GW signal Meena and Bagla
(2020); Diego _et al._ (2019); Cheung _et al._ (2021); Mishra _et al._
(2021), potentially mimicking some of the effects of a charged lens. Also,
note that we currently assume non-spinning BH lenses; there could be some
correlations between the charge and the spin of a BH (see, e.g., Carullo _et
al._ (2022)). For the time being, we ignore these additional complexities.
It turns out that the lens parameters $M_{\mathrm{L}}$ and $y$ are also largely
uncorrelated with the charge $Q$ (see Fig. 6 for an illustration). Thus, to
compute the expected bounds on $Q$, it suffices, to a good approximation, to
compute the likelihood in $Q$ only. We employ the following approximation for
the expectation value of the likelihood:
$\mathcal{L}(Q)\equiv\langle
p(d|Q,\mathcal{H}_{\mathrm{ML}})\rangle\simeq\exp{\left[-\rho^{2}\,\mathcal{M}(Q_{\mathrm{tr}},Q)\right]}.$
(18)
Above, $\rho$ is the signal-to-noise ratio (SNR) of the signal in the data and
$\mathcal{M}(Q_{\mathrm{tr}},Q)$ is the _mismatch_ between the injected (true)
waveform $h^{L}(f,Q_{\mathrm{tr}})$ and the waveform $h^{L}(f,Q)$ with charge
$Q$:
$\mathcal{M}(Q_{\mathrm{tr}},Q)\equiv
1-4\,\Re\int_{f_{\mathrm{low}}}^{f_{\mathrm{upp}}}\frac{h^{L}(f,Q_{\mathrm{tr}})\,h^{L*}(f,Q)}{S_{n}(f)}\,df.$
(19)
Here the waveforms are normalized to unit norm with respect to this inner
product, ${f_{\mathrm{low}}}$ and ${f_{\mathrm{upp}}}$ denote the lower and
upper frequency cutoffs of the detector band, and $S_{n}(f)$ is the one-sided
power spectral density of the detector noise.
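A minimal sketch of this match-based likelihood is given below, assuming frequency-domain lensed waveforms and a PSD are available as arrays on a common frequency grid (waveform generation itself, e.g. with standard GW software, is outside the scope of this sketch):

```python
import numpy as np

def trapz(yv, xv):
    """Simple trapezoidal rule (kept explicit for NumPy-version safety)."""
    return np.sum(0.5 * (yv[1:] + yv[:-1]) * np.diff(xv))

def inner(a, b, f, Sn):
    """PSD-weighted inner product 4 Re int a(f) b*(f) / Sn(f) df."""
    return 4.0 * trapz(np.real(a * np.conj(b)) / Sn, f)

def mismatch(h1, h2, f, Sn):
    """Mismatch of Eq. (19), with the waveforms normalized to unit norm."""
    norm = np.sqrt(inner(h1, h1, f, Sn) * inner(h2, h2, f, Sn))
    return 1.0 - inner(h1, h2, f, Sn) / norm

def likelihood_on_grid(h_true, h_of_Q, Q_grid, f, Sn, snr):
    """Approximate likelihood of Eq. (18): L(Q) = exp(-rho^2 * mismatch).
    h_of_Q is a callable returning the lensed waveform for charge Q."""
    logL = np.array([-snr**2 * mismatch(h_true, h_of_Q(Q), f, Sn)
                     for Q in Q_grid])
    L = np.exp(logL - logL.max())        # rescale for numerical stability
    return L / trapz(L, Q_grid)          # normalize over the flat prior range
```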
Assuming a flat prior in $Q$, the expectation value of the posterior
distribution $P(Q|d)$ is, up to normalization, the same as the likelihood
$\mathcal{L}(Q)$. Hence we present $\mathcal{L}(Q)$ and its $90\%$ credible
upper limits in Fig. 7 and Fig. 8. In these figures, we have assumed the PSD
of the advanced LIGO (aLIGO) detectors. Additionally, the unlensed signal is
assumed to be from a non-spinning, equal-mass BH binary with component masses
of $20M_{\odot}$ each. We compute the likelihood assuming true values of
$(Q_{\mathrm{tr}}/M_{\rm L})=0,-0.1\text{ and }-0.5$ in Fig. 7 and true values
of $(Q_{\mathrm{tr}}/M_{\rm L})=0,0.1\text{ and }0.5$ in Fig. 8.
It is evident from Eq. (18) that the likelihood drops when the mismatch
between the true value $Q_{\mathrm{tr}}$ and the chosen value $Q$ is large
(i.e., when the two waveforms are very different) or when the SNR of the
signal is large. These expected features can be seen in these figures.
Negative values of $Q$ are least constrained for smaller values of the impact
parameter $y$ and are slightly better constrained for larger impact parameters
(Fig. 7). Note that for smaller SNRs and smaller impact parameters, negative
values of $Q$ are unconstrained. For larger impact parameters, as well as for
higher lens masses $M_{\rm L}$, the constraints on $Q$ improve significantly;
increasing the SNR improves the situation further.
Figure 7: Likelihood plots with $90\%$ credible bounds on $Q/M_{\rm L}$ using
aLIGO PSD, for various values of impact parameter $y$ (shown in the legend).
Panels from left to right correspond to SNR values of $25,50\text{ and }100$
respectively. In the top, middle and bottom panels, the GW signal is lensed by
a BH of charge $Q_{\mathrm{tr}}/M_{\rm L}=-0.5,-0.1$ and $0$ respectively
(indicated by horizontal dashed black lines). Here, we use the prior $-1\leq
Q/M_{\rm L}\leq 0$, which corresponds to situations such as the braneworld
scenario. Figure 8: Same as Fig. 7, except that the GWs are lensed by BHs of
charge $Q_{\mathrm{tr}}/M_{\rm L}=0.5,0.1$ and $0$ respectively (indicated by
horizontal dashed black lines). Here, we use the prior $0\leq Q/M_{\rm L}\leq
1$, which can correspond to a charge of EM or alternative gravity origin.
On the other hand, and quite contrary to the negative-$Q$ case, Fig. 8
demonstrates that even for an SNR of 25 and an impact parameter $y\sim 0.1$,
positive values of $Q$ can be measured with a precision of $\Delta
Q/Q\lesssim 0.1$ for larger lens masses. The situation improves drastically
for higher SNRs. For example, with an SNR of 100, values of $Q\gtrsim 0.02$
can be ruled out at $90\%$ confidence.
Thus, using gravitational lensing of GWs, it is possible to constrain positive
values of the charge (of EM or modified-gravity origin) with percent-level
accuracy. In contrast, for a negatively charged lens (expected in the presence
of extra dimensions), the accuracy is at least an order of magnitude weaker.
## V Conclusion
Upcoming observations are expected to detect gravitationally lensed GWs. Among
the possible lenses are compact objects such as BHs. If the gravitational
radii of these lenses are comparable to the wavelength of the GWs, lensing will
produce wave optics effects, resulting in characteristic deformations of the
observed signals. The exact nature of these deformations depends on the
precise spacetime geometry (lensing potential) of the lens. Thus, lensing
observations can potentially probe the detailed nature of the lensing
objects.
In this paper, we derived an exact expression for the lensing potential of a
‘charged’ BH lens. The charge $Q$ can have an EM origin, in which case it is a
positive definite quantity ($Q$ is the square of the electric charge $q$), but
it can also arise in (at least) four different situations beyond GR. These
include: (a) the braneworld scenario, where the presence of an extra spatial
dimension modifies Einstein's equations and introduces a charge term $Q$ that
is negative; (b) the Gauss-Bonnet theory in higher dimensions, which also
leads to an effective four-dimensional spacetime with a negative charge; and
(c) $f(T)$ theories of gravity, as well as (d) a certain class of Horndeski
theories, both of which bring in a charge term in the spacetime metric that is
a positive quantity. Thus, negative values of the charge definitely hint at
the presence of extra dimensions, while positive values of the charge can have
an EM origin or can arise from modified theories of gravity. If one can detect
the charge hair of a BH spacetime, it is therefore possible to address
fundamental questions, e.g., the existence of extra dimensions, as well as
signatures of gravity theories beyond GR.
Using the lensing potential of charged BHs that we derived, we computed the
deformation of lensed GW signals including wave optics effects. This was done
using a numerical scheme that we recently developed, which can be used to
compute the frequency-dependent lensing magnification for arbitrary lensing
potentials. We noticed interesting new observables in the case of charged
lenses. Owing to the axial symmetry of the lensing potential, the images and
the source always lie on a line on the lens plane (as in the uncharged case).
For a negatively charged lens there are always two images (as for an
uncharged lens), while for a positively charged lens there can be two or four
images, depending on the value of the charge and the impact parameter. This
new feature can be understood in terms of the (numerically computed) structure
of the caustics of charged lenses, and it introduces rich and complex effects
in the lensed GW signals that are absent in those lensed by uncharged BHs.
These additional features will help identify the presence of positively
charged BH lenses, if they exist. On the other hand, negatively charged lenses
produce features very similar to those of uncharged BHs, making it difficult
to distinguish between the two.
We then explored the ability of future lensing observations to constrain the
charge of the lens. We considered lensing observations by a single LIGO
detector with sensitivity described by the aLIGO PSD. We showed that the
charge can be constrained using observations of lensed GWs. In particular,
positive values of the charge can be constrained to an accuracy of
$10\%\,(1\%)$ if the lensed GW signal is detected with an SNR of $\sim
25\,(100)$. On the other hand, negative values of the charge are constrained
to a much lower precision, $15\%$ at best (with SNR $\sim 100$). For smaller
SNRs, practically all possible negative values of $Q$ are allowed. In short,
the presence of an extra dimension is much harder to detect using lensing of
GWs, while there is an excellent chance of detecting or constraining the
presence of an EM charge or of theories beyond GR.
Note that the expected constraints that we present here employ an approximate
likelihood, and neglect the possible correlations between the lensing-induced
modulations in the GW signals and modulations induced by other physical
effects such as orbital eccentricity of the binary source. We also ignored the
possible degeneracy between a GW signal lensed by a charged BH and a GW signal
lensed by an uncharged point mass lens in the presence of a macro lens (e.g.,
a galaxy), which can introduce more complex features. Additionally, we also
neglected the effect of the spin of the BH lens. While we expect these broad
conclusions to hold, the precise forecasts of the prospective constraints need
to be revisited in the future considering these additional complexities.
The numerical scheme that we employ to compute the lensing magnification for
arbitrary lensing potentials is too expensive for use in actual GW parameter
estimation, which requires a large number of likelihood evaluations. To
employ it in GW parameter estimation, we will need to develop surrogate or
semi-analytical models that interpolate the numerically computed lensing
magnifications over the parameter space of interest.
In this paper, we have worked with static and spherically symmetric
spacetimes; it would be interesting to generalize the analysis to rotating
spacetimes, since astrophysical objects are, in general, rotating. One could
then explore the possibility of measuring the spin of the compact object from
lensing observations. One could also explore more general spacetimes: here we
have considered the case $-g_{tt}=g^{rr}$, and it will be useful to
understand how to derive the lensing potential for spacetimes with
$-g_{tt}\neq g^{rr}$. Another possibility is probing the astrophysical
environment of BHs using lensing observations. We hope to come back to these
issues in future works.
Note that some of the modifications to GR that induce an effective charge on
BHs could also cause other effects in the generation and propagation of GWs,
which we neglect here. Our proposal should be seen as a way of checking the
consistency of the observed GW signal with lensing by a BH in GR (in this
paper, the Schwarzschild metric). Any observed inconsistency with the
Schwarzschild lens will need to be investigated further in order to ascertain
the nature of the charge. This is similar in spirit to various other tests of
GR using GW observations. In any case, future observations of lensed GWs are
very likely to offer new ways of probing the nature of compact objects.
###### Acknowledgements.
We are grateful to Otto Hannuksela for the careful review of the manuscript
and useful comments. We thank the members of the astrophysical relativity
group at ICTS for their valuable input. We acknowledge the support of the
Department of Atomic Energy, Government of India, under project no. RTI4001.
M.A.S.’s research was, in addition, supported by the National Research
Foundation of Korea under grant No. NRF-2021R1A2C2012473. S.C. thanks the
Albert Einstein Institute for its warm hospitality, where a part of this work
was performed. The visit to the Albert Einstein Institute is funded by the
Max-Planck Society through its Max-Planck-India mobility grant. Computations
were performed using the Alice computing cluster at the International Centre
for Theoretical Sciences.
## Appendix A The lensing potential and its properties
In this appendix, we provide the basic steps in deriving the lensing
potential for a charged lens. For this purpose, we start by evaluating the
two-dimensional Laplacian acting on $(\ln|\vec{x}|)^{2}$. Taking two
derivatives with respect to $x_{1}$ yields,
$\displaystyle\partial_{x_{1}}^{2}(\ln|\vec{x}|)^{2}$
$\displaystyle=\partial_{x_{1}}\left(2\ln|\vec{x}|\frac{x_{1}}{|\vec{x}|^{2}}\right)$
$\displaystyle=2\ln|\vec{x}|\frac{1}{|\vec{x}|^{2}}+2\frac{x_{1}^{2}}{|\vec{x}|^{4}}-4\ln|\vec{x}|\frac{x_{1}^{2}}{|\vec{x}|^{4}}\leavevmode\nobreak\
.$ (20)
Along similar lines the derivative with respect to $x_{2}$ becomes,
$\displaystyle\partial_{x_{2}}^{2}(\ln|\vec{x}|)^{2}=2\ln|\vec{x}|\frac{1}{|\vec{x}|^{2}}+2\frac{x_{2}^{2}}{|\vec{x}|^{4}}-4\ln|\vec{x}|\frac{x_{2}^{2}}{|\vec{x}|^{4}}\leavevmode\nobreak\
.$ (21)
Adding these two double derivatives, the two-dimensional Laplacian acting on
$(\ln|\vec{x}|)^{2}$ gives us,
$\displaystyle\nabla_{x}^{2}(\ln|\vec{x}|)^{2}$
$\displaystyle=\partial_{x_{1}}^{2}(\ln|\vec{x}|)^{2}+\partial_{x_{2}}^{2}(\ln|\vec{x}|)^{2}$
$\displaystyle=4\ln|\vec{x}|\frac{1}{|\vec{x}|^{2}}+2\frac{x_{1}^{2}+x_{2}^{2}}{|\vec{x}|^{4}}-4\ln|\vec{x}|\frac{x_{1}^{2}+x_{2}^{2}}{|\vec{x}|^{4}}$
$\displaystyle=\frac{2}{|\vec{x}|^{2}}\leavevmode\nobreak\ .$ (22)
Thus the Laplacian acting on $(\ln|\vec{x}|)^{2}$ indeed yields the energy
density arising from the charged nature of the gravitational lens. This
establishes the expression for the lensing potential associated with the
charged lens quoted in the main text.
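The central identity of Eq. (22) can also be verified symbolically; a one-off sympy check:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", positive=True)
r = sp.sqrt(x1**2 + x2**2)
lap = sp.diff(sp.log(r)**2, x1, 2) + sp.diff(sp.log(r)**2, x2, 2)
print(sp.simplify(lap - 2 / r**2))   # prints 0, confirming Eq. (22)
```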
Let us now describe an interesting property of any axially symmetric lens
system, namely the collinearity of the source and the images on the lens
plane, which also applies in the present context. Since the spacetime metric
is axially symmetric, the lensing potential satisfies the same symmetry,
i.e., $\psi(\vec{x})=\psi(|\vec{x}|)$. The associated time delay is then given
by,
$\displaystyle t_{\rm d}(\vec{x},\vec{y})$
$\displaystyle=\frac{|\vec{x}-\vec{y}|^{2}}{2}-\psi(|\vec{x}|)$
$\displaystyle=\frac{|\vec{x}|^{2}}{2}-|\vec{x}||\vec{y}|\cos\theta+\frac{|\vec{y}|^{2}}{2}-\psi(|\vec{x}|)\leavevmode\nobreak\
,$ (23)
where $\vec{x}=(x_{1},x_{2})=(|\vec{x}|\cos\theta,|\vec{x}|\sin\theta)$, with
$\theta$ being the angle between the vectors $\vec{x}$ and $\vec{y}$. The
location of the images can be found by solving $\nabla_{x}t_{\rm
d}(\vec{x},\vec{y})=0$ as discussed in section II. This boils down to taking
the derivative of the time delay function with respect to $|\vec{x}|$ and
$\theta$. Extremizing the time delay with respect to the angle $\theta$
yields,
$\frac{\partial t_{\rm
d}}{\partial\theta}=|\vec{x}||\vec{y}|\sin\theta=0\leavevmode\nobreak\ ,$ (24)
which has only two solutions, $\theta=0,\pi$. This implies that the images
always lie along the direction of $\vec{y}$. In other words, for any axially
symmetric lensing potential, the lens, the source, and the images are always
collinear on the lens plane. This feature can also be seen in Fig. 1.
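The extremization in Eq. (24) can likewise be checked symbolically for a generic axially symmetric potential:

```python
import sympy as sp

X, Y, theta = sp.symbols("X Y theta", positive=True)
psi = sp.Function("psi")
t_d = X**2 / 2 - X * Y * sp.cos(theta) + Y**2 / 2 - psi(X)   # Eq. (23)
print(sp.diff(t_d, theta))   # X*Y*sin(theta), vanishing at theta = 0, pi
```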
## References
* Abbott _et al._ (2021a) R. Abbott _et al._ (LIGO Scientific, VIRGO, KAGRA), “GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During the Second Part of the Third Observing Run,” (2021a), arXiv:2111.03606 [gr-qc] .
* Abbott _et al._ (2017a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral,” Phys. Rev. Lett. 119, 161101 (2017a), arXiv:1710.05832 [gr-qc] .
* Abbott _et al._ (2020a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “GW190425: Observation of a Compact Binary Coalescence with Total Mass $\sim 3.4M_{\odot}$,” Astrophys. J. Lett. 892, L3 (2020a), arXiv:2001.01761 [astro-ph.HE] .
* Abbott _et al._ (2021b) R. Abbott _et al._ (LIGO Scientific, KAGRA, VIRGO), “Observation of Gravitational Waves from Two Neutron Star–Black Hole Coalescences,” Astrophys. J. Lett. 915, L5 (2021b), arXiv:2106.15163 [astro-ph.HE] .
* Abbott _et al._ (2021c) R. Abbott _et al._ (LIGO Scientific, VIRGO, KAGRA), “Tests of General Relativity with GWTC-3,” (2021c), arXiv:2112.06861 [gr-qc] .
* Scientific _et al._ (2016) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “Tests of general relativity with GW150914,” Phys. Rev. Lett. 116, 221101 (2016).
* Abbott _et al._ (2019a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “Tests of General Relativity with the Binary Black Hole Signals from the LIGO-Virgo Catalog GWTC-1,” Phys. Rev. D 100, 104036 (2019a), arXiv:1903.04467 [gr-qc] .
* Abbott _et al._ (2020b) R. Abbott _et al._ (LIGO Scientific, Virgo), “Properties and Astrophysical Implications of the 150 M⊙ Binary Black Hole Merger GW190521,” Astrophys. J. Lett. 900, L13 (2020b), arXiv:2009.01190 [astro-ph.HE] .
* Ghosh _et al._ (2016) Abhirup Ghosh _et al._ , “Testing general relativity using golden black-hole binaries,” Phys. Rev. D 94, 021101 (2016), arXiv:1602.02453 [gr-qc] .
* Ghosh _et al._ (2018) Abhirup Ghosh, Nathan K. Johnson-Mcdaniel, Archisman Ghosh, Chandra Kant Mishra, Parameswaran Ajith, Walter Del Pozzo, Christopher P. L. Berry, Alex B. Nielsen, and Lionel London, “Testing general relativity using gravitational wave signals from the inspiral, merger and ringdown of binary black holes,” Class. Quant. Grav. 35, 014002 (2018), arXiv:1704.06784 [gr-qc] .
* Abbott _et al._ (2019b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “Tests of General Relativity with GW170817,” Phys. Rev. Lett. 123, 011102 (2019b), arXiv:1811.00364 [gr-qc] .
* Will (1998) Clifford M Will, “Bounding the mass of the graviton using gravitational-wave observations of inspiralling compact binaries,” Physical Review D 57, 2061 (1998).
* Abbott _et al._ (2017b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “GW170814: A Three-Detector Observation of Gravitational Waves from a Binary Black Hole Coalescence,” Phys. Rev. Lett. 119, 141101 (2017b), arXiv:1709.09660 [gr-qc] .
* Abbott _et al._ (2021d) R. Abbott _et al._ (LIGO Scientific, Virgo), “Tests of general relativity with binary black holes from the second LIGO-Virgo gravitational-wave transient catalog,” Phys. Rev. D 103, 122002 (2021d), arXiv:2010.14529 [gr-qc] .
* Takeda _et al._ (2021) Hiroki Takeda, Soichiro Morisaki, and Atsushi Nishizawa, “Pure polarization test of GW170814 and GW170817 using waveforms consistent with modified theories of gravity,” Phys. Rev. D 103, 064037 (2021), arXiv:2010.14538 [gr-qc] .
* Wong _et al._ (2021) Isaac C. F. Wong, Peter T. H. Pang, Rico K. L. Lo, Tjonnie G. F. Li, and Chris Van Den Broeck, “Null-stream-based Bayesian Unmodeled Framework to Probe Generic Gravitational-wave Polarizations,” (2021), arXiv:2105.09485 [gr-qc] .
* Hannuksela _et al._ (2019) O. A. Hannuksela, K. Haris, K. K. Y. Ng, S. Kumar, A. K. Mehta, D. Keitel, T. G. F. Li, and P. Ajith, “Search for gravitational lensing signatures in LIGO-Virgo binary black hole events,” Astrophys. J. Lett. 874, L2 (2019).
* Dai _et al._ (2020) Liang Dai, Barak Zackay, Tejaswi Venumadhav, Javier Roulet, and Matias Zaldarriaga, “Search for Lensed Gravitational Waves Including Morse Phase Information: An Intriguing Candidate in O2,” (2020), arXiv:2007.12709 [astro-ph.HE] .
* McIsaac _et al._ (2020) Connor McIsaac, David Keitel, Thomas Collett, Ian Harry, Simone Mozzon, Oliver Edy, and David Bacon, “Search for strongly lensed counterpart images of binary black hole mergers in the first two LIGO observing runs,” Phys. Rev. D 102, 084031 (2020).
* Abbott _et al._ (2021e) R. Abbott _et al._ (LIGO Scientific, VIRGO), “Search for Lensing Signatures in the Gravitational-Wave Observations from the First Half of LIGO–Virgo’s Third Observing Run,” Astrophys. J. 923, 14 (2021e), arXiv:2105.06384 [gr-qc] .
* Abbott _et al._ (2023) R. Abbott _et al._ (LIGO Scientific, VIRGO, KAGRA), “Search for gravitational-lensing signatures in the full third observing run of the LIGO-Virgo network,” (2023), arXiv:2304.08393 [gr-qc] .
* Janquart _et al._ (2023) Justin Janquart _et al._ , “Follow-up Analyses to the O3 LIGO-Virgo-KAGRA Lensing Searches,” (2023), 10.1093/mnras/stad2909, arXiv:2306.03827 [gr-qc] .
* Goyal _et al._ (2023a) Srashti Goyal, Shasvath Kapadia, Jean-Rene Cudell, Alvin K. Y. Li, and Juno C. L. Chan, “A rapid method for preliminary identification of subthreshold strongly lensed counterparts to superthreshold gravitational-wave events,” (2023a), arXiv:2306.04397 [gr-qc] .
* Broadhurst _et al._ (2018) Tom Broadhurst, Jose M. Diego, and George Smoot, “Reinterpreting Low Frequency LIGO/Virgo Events as Magnified Stellar-Mass Black Holes at Cosmological Distances,” (2018), arXiv:1802.05273 [astro-ph.CO] .
* Broadhurst _et al._ (2022) T. Broadhurst, J. M. Diego, and G. F. Smoot, “A uniform stellar origin for binary black holes revealed by lensing,” (2022), arXiv:2202.05861 [astro-ph.GA] .
# Implicit differentiation for fast hyperparameter selection in non-smooth convex learning

Quentin Bertrand∗ <EMAIL_ADDRESS> (Université Paris-Saclay, Inria, CEA, Palaiseau, France)
Quentin Klopfenstein∗ <EMAIL_ADDRESS> (Institut Mathématique de Bourgogne, Université de Bourgogne, Dijon, France)
Mathurin Massias <EMAIL_ADDRESS> (MaLGa, DIBRIS, Università degli Studi di Genova, Genova, Italy)
Mathieu Blondel <EMAIL_ADDRESS> (Google Research, Brain team, Paris, France)
Samuel Vaiter <EMAIL_ADDRESS> (CNRS and Institut Mathématique de Bourgogne, Université de Bourgogne, Dijon, France)
Alexandre Gramfort <EMAIL_ADDRESS> (Université Paris-Saclay, Inria, CEA, Palaiseau, France)
Joseph Salmon <EMAIL_ADDRESS> (IMAG, Université de Montpellier, CNRS, Montpellier, France)
###### Abstract
Finding the optimal hyperparameters of a model can be cast as a bilevel
optimization problem, typically solved using zero-order techniques. In this
work we study first-order methods when the inner optimization problem is
convex but non-smooth. We show that the forward-mode differentiation of
proximal gradient descent and proximal coordinate descent yield sequences of
Jacobians converging toward the exact Jacobian. Using implicit
differentiation, we show it is possible to leverage the non-smoothness of the
inner problem to speed up the computation. Finally, we provide a bound on the
error made on the hypergradient when the inner optimization problem is solved
approximately. Results on regression and classification problems reveal
computational benefits for hyperparameter optimization, especially when
multiple hyperparameters are required.
## 1 Introduction
Almost all models in machine learning require at least one hyperparameter, the
tuning of which drastically affects accuracy. This is the case for many
popular estimators, where the regularization hyperparameter controls the
trade-off between a data fidelity term and a regularization term. Such
estimators, including Ridge regression (Hoerl and Kennard, 1970), Lasso
(Tibshirani, 1996; Chen et al., 1998), elastic net (Zou and Hastie, 2005),
sparse logistic regression (Koh et al., 2007), support-vector machine/SVM
(Boser et al., 1992; Platt, 1999) are often cast as an optimization problem
(Table 1)
Table 1: Examples of non-smooth inner problems as in (1). Inner problem, $\Phi$ | $f(\beta)$ | $g_{j}(\beta_{j},\lambda)$ | $e^{\lambda_{\max}}$
---|---|---|---
Lasso | $\frac{1}{2n}\|y-X\beta\|^{2}$ | $e^{\lambda}|\beta_{j}|$ | $\tfrac{1}{n}\lVert X^{\top}y\rVert_{\infty}$
elastic net | $\frac{1}{2n}\|y-X\beta\|^{2}$ | $e^{\lambda_{1}}|\beta_{j}|+\tfrac{1}{2}e^{\lambda_{2}}\beta_{j}^{2}$ | $\tfrac{1}{n}\lVert X^{\top}y\rVert_{\infty}$
sparse log. reg. | $\frac{1}{n}\sum_{i=1}^{n}\ln(1+e^{-y_{i}X_{i:}\beta})$ | $e^{\lambda}|\beta_{j}|$ | $\tfrac{1}{2n}\lVert X^{\top}y\rVert_{\infty}$
dual SVM | $\frac{1}{2}\lVert(y\odot X)^{\top}\beta\rVert^{2}-\sum_{j=1}^{p}\beta_{j}$ | $\iota_{[0,e^{\lambda}]}(\beta_{j})$ | $-$
$\displaystyle\hat{\beta}^{(\lambda)}\in\operatorname*{\mathrm{arg\,min}}_{\beta\in\mathbb{R}^{p}}\Phi(\beta,\lambda)\triangleq f(\beta)+\underbrace{\sum_{j=1}^{p}g_{j}(\beta_{j},\lambda)}_{\triangleq g(\beta,\lambda)}\enspace,$ (1)
with smooth $f:\mathbb{R}^{p}\rightarrow\mathbb{R}$ (i.e., with Lipschitz
gradient), proper closed convex (possibly non-smooth) functions
$g_{j}(\cdot,\lambda)$, and a regularization hyperparameter
$\lambda\in\mathbb{R}^{r}$. In the examples of Table 1, the computation of $f$
involves a design matrix $X\in\mathbb{R}^{n\times p}$; and the cost of
computing $\nabla f(\beta)$ is $\mathcal{O}(np)$. In the SVM example, since we
consider the dual problem, we chose to reverse the roles of $n$ and $p$ to
enforce $\beta\in\mathbb{R}^{p}$. We often drop the $\lambda$ dependency and
write $\hat{\beta}$ instead of $\hat{\beta}^{(\lambda)}$ when it is clear from
context.
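To make these quantities concrete, here is a minimal NumPy sketch that evaluates the Lasso data-fidelity gradient in $\mathcal{O}(np)$ and the critical value $e^{\lambda_{\max}}$ from Table 1 (the smallest penalty level for which $\hat{\beta}^{(\lambda)}=0$); the function names are our own illustrative choices, not the paper's package.

```python
import numpy as np

def lasso_grad_f(X, y, beta):
    # gradient of f(beta) = ||y - X beta||^2 / (2n), computed in O(np)
    n = X.shape[0]
    return X.T @ (X @ beta - y) / n

def lasso_lambda_max(X, y):
    # e^{lambda_max} = ||X^T y||_inf / n (Table 1): above this value,
    # the Lasso solution is exactly zero
    n = X.shape[0]
    return np.linalg.norm(X.T @ y, ord=np.inf) / n
```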
For a fixed $\lambda$, the question of efficiently solving Equation 1 has been widely explored. If the functions $g_{j}$ are smooth, one can use solvers
such as L-BFGS (Liu and Nocedal, 1989), SVRG (Johnson and Zhang, 2013; Zhang
et al., 2013), or SAGA (Defazio et al., 2014). When the functions $g_{j}$ are
non-smooth, Equation 1 can be tackled efficiently with stochastic algorithms
(Pedregosa et al., 2017) or using working set methods (Fan and Lv, 2008;
Tibshirani et al., 2012) combined with coordinate descent (Tseng and Yun,
2009), see overview by Massias et al. (2020). The question of _model
selection_ , i.e., how to select the hyperparameter $\lambda\in\mathbb{R}^{r}$
(potentially multidimensional), is more open, especially when the dimension
$r$ of the regularization hyperparameter $\lambda$ is large.
For the Lasso, a broad literature has been devoted to parameter tuning. Under strong hypotheses on the design matrix $X$, it is possible to derive guidelines for setting the regularization parameter $\lambda$ (Lounici, 2008; Bickel et al., 2009; Belloni et al., 2011). Unfortunately, these
guidelines rely on quantities which are typically unknown in practice, and
Lasso users still have to resort to other techniques to select the
hyperparameter $\lambda$.
A popular approach for hyperparameter selection is _hyperparameter
optimization_ (Kohavi and John, 1995; Hutter et al., 2015; Feurer and Hutter,
2019): one selects the hyperparameter $\lambda$ such that the regression
coefficients $\hat{\beta}^{(\lambda)}$ minimize a given criterion
$\mathcal{C}:\mathbb{R}^{p}\rightarrow\mathbb{R}$. Here $\mathcal{C}$ should
ensure good generalization, or avoid overcomplex models. Common examples (see
Table 2) include the hold-out loss (Devroye and Wagner, 1979), the cross-
validation loss (CV, Stone and Ramer 1965, see Arlot and Celisse 2010 for a
survey), the AIC (Akaike, 1974), BIC (Schwarz, 1978) or SURE (Stein, 1981)
criteria. Formally, the hyperparameter optimization problem is a bilevel
optimization problem (Colson et al., 2007):
$\displaystyle\begin{aligned}
&\operatorname*{\mathrm{arg\,min}}_{\lambda\in\mathbb{R}^{r}}\left\\{\mathcal{L}(\lambda)\triangleq\mathcal{C}\left(\hat{\beta}^{(\lambda)}\right)\right\\}\\\
&{s.t.\leavevmode\nobreak\
}\hat{\beta}^{(\lambda)}\in\operatorname*{\mathrm{arg\,min}}_{\beta\in\mathbb{R}^{p}}\Phi(\beta,\lambda)\enspace.\end{aligned}$
(2)
Table 2: Examples of outer criteria used for hyperparameter selection.
Criterion | Problem type | Criterion $\mathcal{C}(\beta)$
---|---|---
Hold-out mean squared error | Regression | $\frac{1}{n}\|y^{\text{val}}-X^{\text{val}}\beta\|^{2}$
Stein unbiased risk estimate (SURE)111 For a linear model $y=X\beta+\varepsilon$, with $\varepsilon\sim~\mathcal{N}(0,\sigma^{2})$, the degree of freedom (dof, Efron 1986) is defined as $\text{dof}(\beta)=\sum_{i=1}^{n}\text{cov}(y_{i},(X\beta)_{i})/\sigma^{2}$. | Regression | $\|y-X\beta\|^{2}-n\sigma^{2}+2\sigma^{2}\text{dof}(\beta)$
Hold-out logistic loss | Classification | $\frac{1}{n}\sum_{i=1}^{n}\ln(1+e^{-y^{\text{val}}_{i}X_{i:}^{\text{val}}\beta})$
Hold-out smoothed Hinge loss222The smoothed Hinge loss is given by $\ell(x)=\frac{1}{2}-x$ if $x\leq 0$, $\frac{1}{2}(1-x)^{2}$ if $0\leq x\leq 1$ and 0 else. | Classification | $\frac{1}{n}\sum_{i=1}^{n}\ell(y^{\text{val}}_{i},X_{i:}^{\text{val}}\beta)$
Popular approaches to solve (the generally non-convex) Equation 2 include
zero-order optimization (gradient-free) techniques such as grid-search,
random-search (Rastrigin, 1963; Bergstra and Bengio, 2012; Bergstra et al.,
2013) or Sequential Model-Based Global Optimization (SMBO), often referred to
as Bayesian optimization (Mockus, 1989; Jones et al., 1998; Forrester et al.,
2008; Brochu et al., 2010; Snoek et al., 2012). Grid-search is a naive
discretization of Equation 2. It consists in evaluating the outer function
$\mathcal{L}$ on a grid of hyperparameters, solving one inner optimization
Equation 1 for each $\lambda$ in the grid (see Figure 1). For each inner
problem solution $\hat{\beta}^{(\lambda)}$, the criterion
$\mathcal{C}(\hat{\beta}^{(\lambda)})$ is evaluated, and the model achieving
the lowest value is selected. Random-search has a similar flavor, but one
randomly selects where the criterion must be evaluated. Finally, SMBO models
the objective function $\mathcal{L}$ via a function amenable to uncertainty
estimates on its predictions such as a Gaussian process. Hyperparameter values
are chosen iteratively to maximize a function such as the expected improvement
as described, e.g., by Bergstra et al. (2011). However, these zero-order
methods share a common drawback: they scale exponentially with the dimension
of the search space (Nesterov, 2004, Sec. 1.1.2).
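As a reference point for the first-order methods studied below, a minimal grid-search loop looks as follows; `solve_inner` stands for any solver of Equation 1 and `criterion` for an outer criterion from Table 2, both hypothetical placeholders of our own.

```python
import numpy as np

def grid_search(lambdas, solve_inner, criterion):
    """Zero-order baseline: one full inner solve per grid point."""
    best_lam, best_val = None, np.inf
    for lam in lambdas:
        beta_hat = solve_inner(lam)      # solve Equation 1 at this lambda
        val = criterion(beta_hat)        # evaluate C(beta_hat)
        if val < best_val:
            best_lam, best_val = lam, val
    return best_lam, best_val
```

The exponential cost in the number of hyperparameters comes from the grid itself: with $r$ hyperparameters and $m$ values each, the loop runs $m^{r}$ inner solves.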
When the hyperparameter space is continuous and the regularization path
$\lambda\mapsto\hat{\beta}^{(\lambda)}$ is well-defined and almost everywhere
differentiable, first-order optimization methods are well suited to solve the
bilevel optimization Equation 2. Using the chain rule, the gradient of
$\mathcal{L}$ with respect to $\lambda$, also referred to as the
_hypergradient_ , evaluates to
$\displaystyle\nabla_{\lambda}\mathcal{L}(\lambda)=\hat{\mathcal{J}}^{\top}_{(\lambda)}\nabla\mathcal{C}(\hat{\beta}^{(\lambda)})\enspace,$ (3)
with $\hat{\mathcal{J}}_{(\lambda)}\in\mathbb{R}^{p\times r}$ the _Jacobian_
of the function $\lambda\mapsto\hat{\beta}^{(\lambda)}$,
$\displaystyle\hat{\mathcal{J}}_{(\lambda)}\triangleq\begin{pmatrix}\tfrac{\partial\hat{\beta}^{(\lambda)}_{1}}{\partial\lambda_{1}}&\ldots&\tfrac{\partial\hat{\beta}^{(\lambda)}_{1}}{\partial\lambda_{r}}\\\vdots&\ddots&\vdots\\\tfrac{\partial\hat{\beta}^{(\lambda)}_{p}}{\partial\lambda_{1}}&\ldots&\tfrac{\partial\hat{\beta}^{(\lambda)}_{p}}{\partial\lambda_{r}}\end{pmatrix}\enspace.$ (4)
An important challenge of applying first-order methods to solve Equation 2 is
evaluating the hypergradient in Equation 3. There are three main algorithms to
compute the hypergradient $\nabla_{\lambda}\mathcal{L}(\lambda)$: implicit
differentiation (Larsen et al., 1996; Bengio, 2000) and automatic
differentiation using the reverse-mode (Linnainmaa, 1970; LeCun et al., 1998)
or the forward-mode (Wengert, 1964; Deledalle et al., 2014; Franceschi et al.,
2017). As illustrated in Figure 1, once the hypergradient in Equation 3 has
been computed, one can solve Equation 2 with first-order schemes, e.g.,
gradient descent.
Figure 1: 5-fold cross-validation error $\mathcal{C}(\beta^{(\lambda)})$:
(top) Lasso CV error with respect to $\lambda$ for multiple hyperparameter
optimization methods on the _real-sim_ dataset, and (bottom) elastic net CV
error with respect to $\lambda_{1}$ and $\lambda_{2}$ on the _rcv1_ dataset.
Crosses represent the first $10$ (top) or $25$ (bottom) error evaluations for each method.
#### Contributions.
We are interested in tackling the bilevel optimization Equation 2, with a non-
smooth inner optimization Equation 1. More precisely,
* We show that classical algorithms used to compute hypergradients for smooth inner problems have theoretically grounded non-smooth counterparts. We provide in Theorem 9 an implicit differentiation formula for non-smooth optimization problems. We obtain in Theorem 13, for the first time in the non-smooth case, error bounds on the hypergradient when the inner problem and the linear system involved are only solved approximately. We obtain in Theorem 12 convergence rates on the hypergradient for iterative differentiation of non-smooth optimization problems.
* Based on the former contributions, we propose an algorithm to tackle Equation 2. We develop an efficient implicit differentiation algorithm to compute the hypergradient in Equation 3, leveraging the sparsity of the Jacobian and enabling the use of state-of-the-art solvers (Algorithm 5). We combine in Algorithm 6 this fast hypergradient computation with a gradient descent scheme to solve Equation 2.
* We provide extensive experiments on diverse datasets and estimators (Section 4). We first show that implicit differentiation significantly outperforms other hypergradient methods (Section 4.1). Then, leveraging sparsity, we illustrate the computational benefits of first-order optimization over zero-order techniques for solving Equation 2 on the Lasso, elastic net and multiclass logistic regression (Section 4.2).
* We release our implementation as a high-quality, documented and tested Python package: https://github.com/qb3/sparse-ho.
#### General notation.
We write $\lVert\cdot\rVert$ the Euclidean norm on vectors. For a set $S$, we
denote by $S^{c}$ its complement. We denote $[p]=\{1,\dots,p\}$. We denote
by $(e_{j})_{j=1}^{p}$ the vectors of the canonical basis of $\mathbb{R}^{p}$.
We denote the coordinate-wise multiplication of two vectors $u$ and $v$ by
$u\odot v$, and by $u\odot M$ the row-wise multiplication between a vector and
a matrix. The $i$-th line of the matrix $M$ is $M_{i:}$ and its $j$-th column
is $M_{:j}$. The spectral radius of a matrix $M\in\mathbb{R}^{n\times n}$ is
denoted $\rho(M)=\max_{i}|s_{i}|$ where $s_{1},\ldots,s_{n}$ are the
eigenvalues of $M$. For a matrix $M$, we write that $M\succ 0$ if $M$ is
positive definite. The regularization parameter, possibly multivariate, is
denoted by $\lambda=(\lambda_{1},\dots,\lambda_{r})^{\top}\in\mathbb{R}^{r}$.
We denote $\hat{\mathcal{J}}_{(\lambda)}\triangleq(\nabla_{\lambda}\hat{\beta}_{1}^{(\lambda)},\dots,\nabla_{\lambda}\hat{\beta}_{p}^{(\lambda)})^{\top}\in\mathbb{R}^{p\times r}$ the weak Jacobian (Evans and Gariepy, 1992) of $\hat{\beta}^{(\lambda)}$ with respect to $\lambda$.
#### Convex analysis.
For a convex function $h:\mathbb{R}^{p}\to\mathbb{R}$, the proximal operator
of $h$ is defined, for any $x\in\mathbb{R}^{p}$, as:
$\operatorname{prox}_{h}(x)=\operatorname*{\mathrm{arg\,min}}_{y\in\mathbb{R}^{p}}\frac{1}{2}\|x-y\|^{2}+h(y)$.
The subdifferential of $h$ at $x$ is denoted $\partial h(x)=\left\{u\in\mathbb{R}^{p}\,:\,\forall z\in\mathbb{R}^{p},h(z)\geq h(x)+u^{\top}(z-x)\right\}$. A function is said to be _smooth_ if it has a Lipschitz gradient. Let $f$ be an $L$-smooth function. Lipschitz constants of
the functions $\nabla_{j}f$ are denoted by $L_{j}$; hence for all
$x\in\mathbb{R}^{p}$, $h\in\mathbb{R}$:
$\displaystyle|\nabla_{j}f(x+he_{j})-\nabla_{j}f(x)|\leq L_{j}|h|\enspace.$
For a function $f$, its gradient restricted to the indices in a set $S$ is
denoted $\nabla_{S}f$. For a set $\Xi\subset\mathbb{R}^{p}$, its relative
interior is noted $\operatorname{ri}(\Xi)$, and its indicator function is
defined for any $x\in\mathbb{R}^{p}$ by $\iota_{\Xi}(x)=0$ if $x\in\Xi$ and
$+\infty$ otherwise. A function
$h:\mathbb{R}\rightarrow\mathbb{R}\cup\{+\infty\}$ is said to be proper if $\operatorname{dom}(h)=\{x\in\mathbb{R}:h(x)<+\infty\}\neq\emptyset$, and closed if for any $\alpha\in\mathbb{R}$, the sublevel set $\{x\in\operatorname{dom}(h):h(x)\leq\alpha\}$ is a closed set.
For a function $\psi:\mathbb{R}^{p}\times\mathbb{R}^{r}\mapsto\mathbb{R}^{p}$,
we denote $\partial_{z}\psi$ the weak Jacobian with respect to the first
variable and $\partial_{\lambda}\psi$ the weak Jacobian with respect to the
second variable. The proximal operator of $g(\cdot,\lambda)$ can be seen as
such a function $\psi$ of $\beta$ and $\lambda$ (see Table 1 for examples):
$\displaystyle\psi:\mathbb{R}^{p}\times\mathbb{R}^{r}\rightarrow\mathbb{R}^{p},\qquad(z,\lambda)\mapsto\psi(z,\lambda)=\operatorname{prox}_{g(\cdot,\lambda)}(z)\enspace.$
In this case we denote
$\partial_{z}\operatorname{prox}_{g(\cdot,\lambda)}\triangleq\partial_{z}\psi$
and
$\partial_{\lambda}\operatorname{prox}_{g(\cdot,\lambda)}\triangleq\partial_{\lambda}\psi$.
Since we consider only separable penalties $g(\cdot,\lambda)$,
$\partial_{z}\operatorname{prox}_{g(\cdot,\lambda)}$ is a diagonal matrix, so
to make notation lighter, we write
$\partial_{z}\operatorname{prox}_{g(\cdot,\lambda)}$ for its diagonal. We thus
have
$\displaystyle\partial_{z}\operatorname{prox}_{g(\cdot,\lambda)}=(\partial_{z}\operatorname{prox}_{g_{j}(\cdot,\lambda)})_{j\in[p]}\in\mathbb{R}^{p}\quad(\text{by separability of }g)\enspace,\qquad\partial_{\lambda}\operatorname{prox}_{g(\cdot,\lambda)}\in\mathbb{R}^{p\times r}\enspace.$
Explicit partial derivatives formulas for usual proximal operators can be
found in Table 3.
Table 3: Partial derivatives of the proximal operators used. Here $\operatorname{ST}(z,\tau)\triangleq\operatorname{sign}(z)\max(|z|-\tau,0)$ denotes the soft-thresholding operator.

$g_{j}(\beta_{j},\lambda)$ | $\operatorname{prox}_{g_{j}(\cdot,\lambda)}(z_{j})$ | $\partial_{z}\operatorname{prox}_{g_{j}(\cdot,\lambda)}(z_{j})$ | $\partial_{\lambda}\operatorname{prox}_{g_{j}(\cdot,\lambda)}(z_{j})$
---|---|---|---
$e^{\lambda}\beta_{j}^{2}/2$ | $z_{j}/(1+e^{\lambda})$ | $1/(1+e^{\lambda})$ | $-z_{j}e^{\lambda}/(1+e^{\lambda})^{2}$
$e^{\lambda}|\beta_{j}|$ | $\operatorname{ST}(z_{j},e^{\lambda})$ | $|\operatorname{sign}(\operatorname{ST}(z_{j},e^{\lambda}))|$ | $-e^{\lambda}\operatorname{sign}(\operatorname{ST}(z_{j},e^{\lambda}))$
$e^{\lambda_{1}}|\beta_{j}|+\tfrac{1}{2}e^{\lambda_{2}}\beta_{j}^{2}$ | $\frac{\operatorname{ST}(z_{j},e^{\lambda_{1}})}{1+e^{\lambda_{2}}}$ | $\frac{|\operatorname{sign}(\operatorname{ST}(z_{j},e^{\lambda_{1}}))|}{1+e^{\lambda_{2}}}$ | $\left(\frac{-e^{\lambda_{1}}\operatorname{sign}(\operatorname{ST}(z_{j},e^{\lambda_{1}}))}{1+e^{\lambda_{2}}},\frac{-\operatorname{ST}(z_{j},e^{\lambda_{1}})e^{\lambda_{2}}}{(1+e^{\lambda_{2}})^{2}}\right)$
$\iota_{[0,e^{\lambda}]}(\beta_{j})$ | $\max(0,\min(z_{j},e^{\lambda}))$ | $\mathds{1}_{]0,e^{\lambda}[}(z_{j})$ | $e^{\lambda}\mathds{1}_{z_{j}>e^{\lambda}}$
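For instance, the elastic net row of Table 3 translates directly into NumPy; this is a minimal sketch with a function name of our own choosing.

```python
import numpy as np

def prox_enet_partials(z, lam1, lam2):
    # prox and partial derivatives of e^lam1 |.| + e^lam2 (.)^2 / 2 (Table 3)
    st = np.sign(z) * np.maximum(np.abs(z) - np.exp(lam1), 0.0)
    prox = st / (1.0 + np.exp(lam2))
    d_z = (st != 0).astype(float) / (1.0 + np.exp(lam2))
    d_lam1 = -np.exp(lam1) * np.sign(st) / (1.0 + np.exp(lam2))
    d_lam2 = -st * np.exp(lam2) / (1.0 + np.exp(lam2)) ** 2
    return prox, d_z, (d_lam1, d_lam2)
```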
## 2 Related work
The main challenge in evaluating the hypergradient $\nabla_{\lambda}\mathcal{L}(\lambda)$ is the computation of the Jacobian $\hat{\mathcal{J}}_{(\lambda)}$. We first focus on the case where $\Phi(\cdot,\lambda)$ is convex and smooth for any $\lambda$.
#### Implicit differentiation.
We recall how the _implicit differentiation_ formula of the gradient $\nabla_{\lambda}\mathcal{L}(\lambda)$ is obtained for smooth inner optimization problems (note that _implicit_ refers to the implicit function theorem, yet leads to an _explicit_ formula for the gradient). We will provide a generalization to non-smooth optimization problems in Section 3.2.
###### Theorem 1 (Bengio 2000)
Let
$\hat{\beta}^{(\lambda)}\in\operatorname*{\mathrm{arg\,min}}_{\beta\in\mathbb{R}^{p}}\Phi(\beta,\lambda)$
be a solution of Equation 1. Assume that for all $\lambda>0$,
$\Phi(\cdot,\lambda)$ is a convex smooth function,
$\nabla_{\beta}^{2}\Phi(\hat{\beta}^{(\lambda)},\lambda)\succ 0$, and that for
all $\beta\in\mathbb{R}^{p}$, $\Phi(\beta,\cdot)$ is differentiable over
$]0,+\infty[$. Then the hypergradient $\nabla_{\lambda}\mathcal{L}(\lambda)$
reads:
$\underbrace{\nabla_{\lambda}\mathcal{L}(\lambda)}_{\in\mathbb{R}^{r}}=\underbrace{-\nabla_{\beta,\lambda}^{2}\Phi(\hat{\beta}^{(\lambda)},\lambda)}_{\in\mathbb{R}^{r\times p}}{\underbrace{\left(\nabla_{\beta}^{2}\Phi(\hat{\beta}^{(\lambda)},\lambda)\right)}_{\in\mathbb{R}^{p\times p}}}^{-1}\underbrace{\nabla\mathcal{C}(\hat{\beta}^{(\lambda)})}_{\in\mathbb{R}^{p}}\enspace.$ (5)
Proof. For a smooth convex function $\beta\mapsto\Phi(\beta,\lambda)$, the first-order optimality condition reads:
$\displaystyle\nabla_{\beta}\Phi(\hat{\beta}^{(\lambda)},\lambda)=0\enspace,$
(6)
for any $\hat{\beta}^{(\lambda)}$ solution of the inner problem. Moreover, if
$\lambda\mapsto\nabla_{\beta}\Phi(\hat{\beta}^{(\lambda)},\lambda)$ is
differentiable, differentiating Equation 6 with respect to $\lambda$ leads to:
$\displaystyle\nabla_{\beta,\lambda}^{2}\Phi(\hat{\beta}^{(\lambda)},\lambda)+\hat{\mathcal{J}}^{\top}_{(\lambda)}\nabla_{\beta}^{2}\Phi(\hat{\beta}^{(\lambda)},\lambda)=0\enspace.$
(7)
The Jacobian $\hat{\mathcal{J}}^{\top}_{(\lambda)}$ is computed by solving the
following linear system:
$\displaystyle\hat{\mathcal{J}}_{(\lambda)}^{\top}=-\underbrace{\nabla_{\beta,\lambda}^{2}\Phi(\hat{\beta}^{(\lambda)},\lambda)}_{\in\mathbb{R}^{r\times p}}{\underbrace{\left(\nabla_{\beta}^{2}\Phi(\hat{\beta}^{(\lambda)},\lambda)\right)}_{\in\mathbb{R}^{p\times p}}}^{-1}\enspace.$ (8)
Plugging Equation 8 into Equation 3 yields the desired result.
The computation of the gradient via implicit differentiation (Equation 5)
involves the resolution of a $p\times p$ linear system (Bengio, 2000, Sec. 4).
This potentially large linear system can be solved using different algorithms
such as conjugate gradient (Hestenes and Stiefel 1952, as in Pedregosa 2016)
or fixed point methods (Lions and Mercier 1979; Tseng and Yun 2009, as in
Grazzi et al. 2020). Implicit differentiation has been used for model
selection of multiple estimators with smooth regularization term: kernel-based
models (Chapelle et al., 2002; Seeger, 2008), weighted Ridge estimator (Foo et
al., 2008), neural networks (Lorraine et al., 2019) or meta-learning
(Franceschi et al., 2018; Rajeswaran et al., 2019). In addition to
hyperparameter selection, it has been applied successfully in natural language
processing (Bai et al., 2019) and computer vision (Bai et al., 2020).
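To fix ideas in the smooth case, the following sketch instantiates Equation 5 for the weighted Ridge parametrization $g_{j}(\beta_{j},\lambda)=e^{\lambda}\beta_{j}^{2}/2$ of Table 1, solving the $p\times p$ system with conjugate gradient; the function name and the dense Hessian are our own illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.sparse.linalg import cg

def ridge_hypergradient(X, y, X_val, y_val, lam):
    """Hypergradient of the hold-out MSE for Ridge via Equation 5,
    with the p x p linear system solved by conjugate gradient."""
    n, p = X.shape
    # Hessian of Phi w.r.t. beta: X^T X / n + e^lam Id (positive definite)
    H = X.T @ X / n + np.exp(lam) * np.eye(p)
    beta_hat = np.linalg.solve(H, X.T @ y / n)     # inner solution
    grad_C = X_val.T @ (X_val @ beta_hat - y_val) / len(y_val)
    u, _ = cg(H, grad_C)                           # solve H u = grad_C
    # cross derivative: d/dlam of grad_beta Phi is e^lam * beta_hat
    return -np.exp(lam) * beta_hat @ u
```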
Equation 1 is typically solved using iterative solvers. In practice, the
number of iterations is limited to reduce computation time, and also since
very precise solutions are generally not necessary for machine learning tasks.
Thus, Equation 6 is not exactly satisfied at machine precision, and
consequently the linear system to solve Equation 5 does not lead to the exact
gradient $\nabla_{\lambda}\mathcal{L}(\lambda)$, see Ablin et al. (2020) for
quantitative convergence results. However, Pedregosa (2016) showed that one
can resort to _approximate gradients_ when the inner problem is smooth,
justifying that implicit differentiation can be applied using an approximation
of $\hat{\beta}$. Interestingly, this approximation scheme was shown to yield
significant practical speedups when solving Equation 2, while preserving
theoretical properties of convergence toward the optimum.
#### Iterative differentiation.
Iterative differentiation computes the gradient
$\nabla_{\lambda}\mathcal{L}(\lambda)$ by differentiating through the iterates
of the algorithm used to solve Equation 1. Iterative differentiation can be
applied using the forward-mode (Wengert 1964; Deledalle et al. 2014;
Franceschi et al. 2017) or the reverse-mode (Linnainmaa 1970; LeCun et al.
1998; Domke 2012). Both rely on the chain rule, the gradient being decomposed
as a large product of matrices, computed either in a forward or backward way.
Note that forward and reverse modes are algorithm-dependent: in this section
we illustrate iterative differentiation for proximal gradient descent (PGD,
Lions and Mercier 1979; Combettes and Wajs 2005), using the forward-mode
(Algorithm 1), and the reverse-mode (Algorithm 2).
The most popular method in automatic differentiation is the reverse-mode, a
cornerstone of deep learning (Goodfellow et al., 2016, Chap. 8). Iterative
differentiation for hyperparameter optimization can be traced back to Domke
(2012), who derived (for smooth loss functions) a reverse-mode with gradient
descent, heavy ball and L-BFGS algorithms. It first computes the solution of
the optimization Equation 1 using an iterative solver, but requires storing
the iterates along the computation for a backward evaluation of the
hypergradient (Algorithm 2). Alternatively, the forward-mode computes jointly
the solution along with the gradient $\nabla_{\lambda}\mathcal{L}(\lambda)$.
It is memory efficient (no iterates storage) but more computationally
expensive when the number of hyperparameters ($r$) is large; see Baydin et al.
(2018) for a survey.
#### Resolution of the bilevel Equation 2.
From a theoretical point of view, solving Equation 2 using gradient-based
methods is also challenging, and results in the literature are quite scarce.
Kunisch and Pock (2013) studied the convergence of a semi-Newton algorithm
where both the outer and inner problems are smooth. Franceschi et al. (2018)
gave similar results with weaker assumptions to unify hyperparameter
optimization and meta-learning with a bilevel point of view. They required the
inner problem to have a unique solution for all $\lambda>0$ but do not have
second-order assumptions on $\Phi$. Recent results (Ghadimi and Wang, 2018; Ji
et al., 2020; Mehmood and Ochs, 2021) have provided quantitative convergence
toward a global solution of Equation 2, but under global joint convexity
assumption and exact knowledge of the gradient Lipschitz constant.
input : $\lambda\in\mathbb{R}^{r},\gamma>0,n_{\mathrm{iter}}\in\mathbb{N}$, $\beta^{(0)}\in\mathbb{R}^{p}$, $\mathcal{J}^{(0)}\in\mathbb{R}^{p\times r}$
// jointly compute coef. & Jacobian
for $k=1,\dots,n_{\mathrm{iter}}$ do
    // update the regression coefficients
    $z^{(k)}=\beta^{(k-1)}-\gamma\nabla f(\beta^{(k-1)})$ ;  // GD step
    $\mathrm{d}z^{(k)}=\mathcal{J}^{(k-1)}-\gamma\nabla^{2}f(\beta^{(k-1)})\mathcal{J}^{(k-1)}$
    $\beta^{(k)}=\operatorname{prox}_{\gamma g(\cdot,\lambda)}(z^{(k)})$ ;  // prox. step
    // update the Jacobian
    $\mathcal{J}^{(k)}=\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(z^{(k)})\odot\mathrm{d}z^{(k)}$
    $\mathcal{J}^{(k)}\mathrel{+}=\partial_{\lambda}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(z^{(k)})$ ;  // $\mathcal{O}(pr)$
$v=\nabla\mathcal{C}(\beta^{(n_{\mathrm{iter}})})$
return $\beta^{(n_{\mathrm{iter}})}$, $\mathcal{J}^{(n_{\mathrm{iter}})\top}v$

Algorithm 1 Forward-mode PGD
input : $\lambda\in\mathbb{R}^{r},\gamma>0,n_{\mathrm{iter}}\in\mathbb{N}$, $\beta^{(0)}\in\mathbb{R}^{p}$
// computation of $\hat{\beta}$
for $k=1,\dots,n_{\mathrm{iter}}$ do
    $z^{(k)}=\beta^{(k-1)}-\gamma\nabla f(\beta^{(k-1)})$ ;  // GD step
    $\beta^{(k)}=\operatorname{prox}_{\gamma g(\cdot,\lambda)}(z^{(k)})$ ;  // prox. step
// backward computation of the hypergradient
$v=\nabla\mathcal{C}(\beta^{(n_{\mathrm{iter}})})$, $h=0_{\mathbb{R}^{r}}$
for $k=n_{\mathrm{iter}},n_{\mathrm{iter}}-1,\dots,1$ do
    $h\mathrel{+}=v^{\top}\partial_{\lambda}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(z^{(k)})$ ;  // $\mathcal{O}(pr)$
    $v\leftarrow\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(z^{(k)})\odot v$ ;  // $\mathcal{O}(p)$
    $v\leftarrow(\operatorname{Id}-\gamma\nabla^{2}f(\beta^{(k)}))v$ ;  // $\mathcal{O}(np)$
return $\beta^{(n_{\mathrm{iter}})}$, $h$

Algorithm 2 Reverse-mode PGD
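As a concrete instance of Algorithm 1, here is a minimal NumPy sketch of forward-mode proximal gradient descent for the Lasso ($r=1$), using the partial derivatives of soft-thresholding from Table 3; the helper names are ours, and this is an illustrative sketch rather than the paper's sparse-ho implementation.

```python
import numpy as np

def soft_threshold(z, tau):
    # prox of tau * |.|
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def forward_pgd_lasso(X, y, lam, n_iter=500):
    """Jointly compute the Lasso solution and d beta / d lambda by
    differentiating each proximal gradient step (Algorithm 1)."""
    n, p = X.shape
    gamma = n / np.linalg.norm(X, ord=2) ** 2    # step size 1 / L
    beta, jac = np.zeros(p), np.zeros(p)         # r = 1: Jacobian is a vector
    for _ in range(n_iter):
        z = beta - gamma * X.T @ (X @ beta - y) / n
        dz = jac - gamma * X.T @ (X @ jac) / n   # Hessian-vector product
        beta = soft_threshold(z, gamma * np.exp(lam))
        # d prox/dz is 1 on the support, 0 elsewhere (Table 3);
        # d prox/d lambda is -gamma * e^lam * sign(ST(z, gamma * e^lam))
        jac = (beta != 0) * dz - gamma * np.exp(lam) * np.sign(beta)
    return beta, jac
```

The hypergradient then follows from Equation 3 as `jac @ grad_C`, where `grad_C` is the gradient of the chosen outer criterion at `beta`.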
## 3 Bilevel optimization with non-smooth inner problems
We recalled above how to compute hypergradients when the inner optimization
problem is smooth. In this section we tackle the bilevel optimization Equation
2 with non-smooth inner optimization Equation 1. Handling non-smooth inner
problems requires specific tools detailed in Section 3.1. We then show how to
compute gradients with non-smooth inner problems using implicit
differentiation (Section 3.2) or iterative differentiation (Section 3.3). In
Section 3.4 we tackle the problem of approximate gradients for a non-smooth inner optimization problem. Finally, we propose in Section 3.6 an algorithm to
solve the bilevel optimization Equation 2.
### 3.1 Theoretical framework
#### Differentiability of the regularization path.
Before applying first-order methods to tackle Equation 2, one must ensure that the regularization path $\lambda\mapsto\hat{\beta}^{(\lambda)}$ is almost everywhere differentiable (as in Figure 2). This is the case for the Lasso (Mairal and Yu, 2012) and the SVM (Hastie et al., 2004; Rosset and Zhu, 2007), since their solution paths are piecewise differentiable (see Figure 2). Results for non-quadratic data-fitting terms are scarcer: Friedman et al. (2010) address the practical resolution of sparse logistic regression, but remain evasive regarding the differentiability of the regularization path. In the general case, for problems of the form of Equation 1, we believe it is an open question and leave it for future work.
Figure 2: Regularization paths (coefficient values as a function of
$\lambda$), on the _diabetes_ and _breast cancer_ datasets for the Lasso, the
elastic net and sparse logistic regression. This illustrates the weak
differentiability of the paths. We used _diabetes_ for the Lasso and the
elastic net, and the $10$ first features of _breast cancer_ for the sparse
logistic regression.
#### Differentiability of proximal operators.
The key point to obtain an implicit differentiation formula for non-smooth
inner problems is to differentiate the fixed point equation of proximal
gradient descent. From a theoretical point of view, ensuring this
differentiability at the optimum is non-trivial: Poliquin and Rockafellar
(1996, Thm. 3.8) showed that under a _twice epi-differentiability_ condition
the proximal operator is differentiable at optimum. For the convergence of
forward and reverse modes in the non-smooth case, one has to ensure that,
after enough iterations, the updates of the algorithms become differentiable.
Deledalle et al. (2014) justified (weak) differentiability of proximal
operators as they are non-expansive. However this may not be a sufficient
condition, see Bolte and Pauwels (2020a, b). In our case, we show
differentiability after _support identification_ of the algorithms: active
constraints are identified after a finite number of iterations by proximal
gradient descent (Liang et al., 2014; Vaiter et al., 2018) and proximal
coordinate descent, see Nutini (2018, Sec. 6.2) or Klopfenstein et al. (2020). Once these constraints have been identified, the convergence toward the Jacobian is linear (see Theorems 12, 10, 11 and 3).
For the rest of this paper, we consider the bilevel optimization Equation 2
with the following assumptions on the inner Equation 1.
###### Assumption 2 (Smoothness)
The function $f:\mathbb{R}^{p}\rightarrow\mathbb{R}$ is a convex,
differentiable function, with a $L$-Lipschitz gradient.
###### Assumption 3 (Proper, closed, convex)
For all $\lambda\in\mathbb{R}^{r}$, for any $j\in[p]$, the function
$g_{j}(\cdot,\lambda):\mathbb{R}\rightarrow\mathbb{R}$ is proper, closed and
convex.
###### Assumption 4 (Non-degeneracy)
The problem admits at least one solution:
$\operatorname*{\mathrm{arg\,min}}_{\beta\in\mathbb{R}^{p}}\Phi(\beta,\lambda)\neq\emptyset\enspace,$
and, for any $\hat{\beta}$ solution of Equation 1, we have
$-\nabla f(\hat{\beta})\in\operatorname{ri}\left(\partial_{\beta}g(\hat{\beta},\lambda)\right)\enspace.$
To be able to extend iterative and implicit differentiation to the non-smooth
case, we need to introduce the notion of generalized support.
###### Definition 5 (Generalized support, Nutini et al. 2019, Def. 1)
For a solution
$\hat{\beta}\in\operatorname*{\mathrm{arg\,min}}_{\beta\in\mathbb{R}^{p}}\Phi(\beta,\lambda)$,
its _generalized support_ $\hat{S}\subseteq[p]$ is the set of indices
$j\in[p]$ such that $g_{j}$ is differentiable at $\hat{\beta}_{j}$:
$\displaystyle\hat{S}\triangleq\{j\in[p]:\partial_{\beta}g_{j}(\hat{\beta}_{j},\lambda)\text{ is a singleton}\}\enspace.$
An iterative algorithm is said to achieve finite support identification if its
iterates $\beta^{(k)}$ converge to $\hat{\beta}$, and there exists $K\geq 0$
such that for all $j\notin\hat{S}$, for all $k\geq
K,\beta_{j}^{(k)}=\hat{\beta}_{j}$.
Examples. For the $\ell_{1}$ norm (promoting sparsity), $g_{j}(\hat{\beta}_{j},\lambda)=e^{\lambda}|\hat{\beta}_{j}|$, the generalized support is $\hat{S}\triangleq\{j\in[p]:\hat{\beta}_{j}\neq 0\}$. This set corresponds to the indices of the non-zero coefficients, which is the usual support definition. For the SVM estimator, $g_{j}(\hat{\beta}_{j},\lambda)=\iota_{[0,e^{\lambda}]}(\hat{\beta}_{j})$. This function is non-differentiable at $0$ and at $e^{\lambda}$. The generalized support for the SVM estimator then corresponds to the set of indices such that $\hat{\beta}_{j}\in\,]0,e^{\lambda}[$.
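In code, these two generalized supports are one-liners; this is a sketch with helper names of our own.

```python
import numpy as np

def support_lasso(beta):
    # g_j = e^lam |.| is differentiable wherever beta_j != 0
    return np.flatnonzero(beta != 0)

def support_svm(beta, lam):
    # the box indicator iota_[0, e^lam] is differentiable on the open interval
    return np.flatnonzero((beta > 0) & (beta < np.exp(lam)))
```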
Finally, to prove local linear convergence of the Jacobian we assume
regularity and strong convexity on the generalized support.
###### Assumption 6 (Locally $\mathcal{C}^{2}$ and $\mathcal{C}^{3}$)
The map $\beta\mapsto f(\beta)$ is locally $\mathcal{C}^{3}$ around
$\hat{\beta}$. For all $\lambda\in\mathbb{R}^{r}$, for all $j\in\hat{S}$ the
map $g_{j}(\cdot,\lambda)$ is locally $\mathcal{C}^{2}$ around
$\hat{\beta}_{j}$.
###### Assumption 7 (Restricted injectivity)
Let $\hat{\beta}$ be a solution of Equation 1 and $\hat{S}$ its generalized
support. The solution $\hat{\beta}$ satisfies the following restricted
injectivity condition:
$\displaystyle\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta})\succ 0\enspace.$
Assumptions 2 and 3 are classical to ensure inner problems can be solved using
proximal algorithms. Assumption 4 can be seen as a generalization of
constraint qualifications (Hare and Lewis, 2007, Sec. 1) and is crucial to
ensure _support identification_. Assumptions 6 and 7 are classical for the
analysis (Liang et al., 2017) and sufficient to derive rates of convergence
for the Jacobian of the inner problem once the generalized support has been
identified.
The next lemma guarantees uniqueness of the solution of Equation 1 under Assumptions 4 and 7.
###### Lemma 8 (Liang et al. 2017, Prop. 4.1)
Assume that there exists a neighborhood $\Lambda$ of $\lambda$ such that
Assumptions 4 and 7 are satisfied for every $\lambda\in\Lambda$. Then for
every $\lambda\in\Lambda$, Equation 1 has a unique solution, and the map
$\lambda\mapsto\hat{\beta}^{(\lambda)}$ is well-defined on $\Lambda$.
We first show how implicit and iterative differentiation can be used with a
non-smooth inner problem. Peyré and Fadili (2011) proposed to smooth the inner optimization problem, while Ochs et al. (2015) and Frecon et al. (2018) relied on the forward-mode combined with Bregman iterations to get differentiable steps. For
non-smooth optimization problems, implicit differentiation has been considered
for (constrained) convex optimization problems (Gould et al., 2016; Amos and
Kolter, 2017; Agrawal et al., 2019), Lasso-type problems (Mairal et al., 2012;
Bertrand et al., 2020), total variation penalties (Cherkaoui et al., 2020) and
generalized to strongly monotone operators (Winston and Kolter, 2020).
### 3.2 Hypergradient computation: implicit differentiation
The exact proof of Theorem 1 cannot be applied when
$\beta\mapsto\Phi(\beta,\lambda)$ is non-smooth, as Equations 7 and 6 no
longer hold. Nevertheless, instead of the optimality condition of smooth
optimization, Equation 6, one can leverage the fixed point iteration of
proximal gradient descent, which we will see in Equation 11. The main
theoretical challenge is to show the differentiability of the function
$\beta\mapsto\operatorname{prox}_{\gamma g}(\beta-\gamma\nabla f(\beta))$.
Besides, taking advantage of the generalized sparsity of the regression coefficients $\hat{\beta}^{(\lambda)}$, one can show that the Jacobian $\hat{\mathcal{J}}$ is row-sparse, leading to substantial computational benefits when computing the hypergradient $\nabla_{\lambda}\mathcal{L}(\lambda)$ of Equation 1.
###### Theorem 9 (Non-smooth implicit formula)
Suppose Assumptions 2, 3 and 6 hold. Let $0<\gamma\leq 1/L$, where $L$ is the Lipschitz constant of $\nabla f$. Let $\lambda\in\mathbb{R}^{r}$, $\Lambda$ be a neighborhood of $\lambda$, and $\Gamma^{\Lambda}\triangleq\left\{\hat{\beta}^{(\lambda)}-\gamma\nabla f(\hat{\beta}^{(\lambda)})\,:\,\lambda\in\Lambda\right\}$. In addition,

1. (H1) Suppose Assumptions 4 and 7 hold on $\Lambda$.
2. (H2) Suppose $\lambda\mapsto\hat{\beta}^{(\lambda)}$ is continuously differentiable on $\Lambda$.
3. (H3) Suppose for all $z\in\Gamma^{\Lambda}$, $\lambda\mapsto\operatorname{prox}_{\gamma g(\cdot,\lambda)}(z)$ is continuously differentiable on $\Lambda$.
4. (H4) Suppose $\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}$ and $\partial_{\lambda}\operatorname{prox}_{\gamma g(\cdot,\lambda)}$ are Lipschitz continuous on $\Gamma^{\Lambda}\times\Lambda$.

Let $\hat{\beta}\triangleq\hat{\beta}^{(\lambda)}$ be the solution of Equation 1 and $\hat{S}$ its generalized support, of cardinality $\hat{s}$. Then, with $\hat{z}=\hat{\beta}-\gamma\nabla f(\hat{\beta})$ and $A\triangleq\operatorname{Id}_{\hat{s}}-\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\odot\left(\operatorname{Id}_{\hat{s}}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta})\right)$, the Jacobian $\hat{\mathcal{J}}$ of the inner problem (Equation 1) is given by:
$\displaystyle\hat{\mathcal{J}}_{\hat{S}^{c}:}=\partial_{\lambda}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}^{c}}\enspace,$ (9)
$\displaystyle\hat{\mathcal{J}}_{\hat{S}:}=A^{-1}\left(\partial_{\lambda}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}-\gamma\,\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\odot\nabla^{2}_{\hat{S},\hat{S}^{c}}f(\hat{\beta})\,\hat{\mathcal{J}}_{\hat{S}^{c}:}\right)\enspace.$ (10)
Proof. According to Lemma 8, Assumptions 7 and 4 ensure Equation 1 has a unique minimizer and $\lambda\mapsto\hat{\beta}^{(\lambda)}$ is well-defined on $\Lambda$. We consider the proximal gradient descent fixed point equation:
$\displaystyle\hat{\beta}^{(\lambda)}=\operatorname{prox}_{\gamma g(\cdot,\lambda)}\left(\hat{\beta}^{(\lambda)}-\gamma\nabla f(\hat{\beta}^{(\lambda)})\right)\enspace.$ (11)
Together with the conclusion of Lemma 8, Assumptions 2 and 6, and given (H2), (H3) and (H4), we have that $\lambda\mapsto\psi\left(\hat{\beta}^{(\lambda)}-\gamma\nabla f(\hat{\beta}^{(\lambda)}),\lambda\right)\triangleq\operatorname{prox}_{\gamma g(\cdot,\lambda)}\left(\hat{\beta}^{(\lambda)}-\gamma\nabla f(\hat{\beta}^{(\lambda)})\right)$ is differentiable at $\lambda$. One can thus differentiate Equation 11 with respect to $\lambda$, which leads to:
$\displaystyle\hat{\mathcal{J}}=\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})\odot\left(\operatorname{Id}-\gamma\nabla^{2}f(\hat{\beta})\right)\hat{\mathcal{J}}+\partial_{\lambda}\operatorname{prox}_{\gamma g(\cdot,\lambda)}\left(\hat{z}\right)\enspace,$ (12)
with $\hat{z}=\hat{\beta}-\gamma\nabla f(\hat{\beta})$. In addition to $0<\gamma<1/L\leq 1/L_{j}$, the separability of $g$ and Assumptions 2, 3, 6 and 4 ensure (see the lemma on the differentiability of proximal operators in the appendix) that for any $j\in\hat{S}^{c}$,
$\displaystyle\partial_{z}\operatorname{prox}_{\gamma g_{j}(\cdot,\lambda)}\left(\hat{\beta}_{j}-\gamma\nabla_{j}f(\hat{\beta})\right)=0\enspace.$ (13)
Plugging Equation 13 into Equation 12 ensures Equation 9 for all $j\in\hat{S}^{c}$:
$\displaystyle\hat{\mathcal{J}}_{j:}=\partial_{\lambda}\operatorname{prox}_{\gamma g_{j}(\cdot,\lambda)}\left(\hat{\beta}_{j}-\gamma\nabla_{j}f(\hat{\beta})\right)\enspace.$ (14)
Plugging Equations 13 and 14 into Equation 12 shows that the Jacobian restricted to the generalized support $\hat{S}$ satisfies the following linear system:
$\displaystyle\left(\operatorname{Id}_{\hat{s}}-\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\odot\big{(}\operatorname{Id}_{\hat{s}}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta})\big{)}\right)\hat{\mathcal{J}}_{\hat{S}:}=-\gamma\,\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\odot\nabla^{2}_{\hat{S},\hat{S}^{c}}f(\hat{\beta})\,\hat{\mathcal{J}}_{\hat{S}^{c}:}+\partial_{\lambda}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\enspace.$
Since $0<\gamma\leq 1/L$,
$\displaystyle\lVert\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\odot(\operatorname{Id}_{\hat{s}}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta}))\rVert_{2}\leq\lVert\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\rVert_{\infty}\cdot\lVert\operatorname{Id}_{\hat{s}}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta})\rVert_{2}<1\enspace.$ (15)
Since Equation 15 holds, $A\triangleq\operatorname{Id}_{\hat{s}}-\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\odot(\operatorname{Id}_{\hat{s}}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta}))$ is invertible, which leads to Equation 10.
###### Remark 10
In the smooth case a $p\times p$ linear system is needed to compute the
Jacobian in Equation 8. For non-smooth problems this is reduced to an
$\hat{s}\times\hat{s}$ linear system ($\hat{s}\leq p$ being the size of the
generalized support, e.g., the number of non-zero coefficients for the Lasso).
This leads to significant speedups in practice, especially for very sparse vectors $\hat{\beta}^{(\lambda)}$.
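For illustration, a minimal sketch of this reduced system for the Lasso: on the support $\partial_{z}\operatorname{prox}=1$, so $A=\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta})=\gamma X_{\hat{S}}^{\top}X_{\hat{S}}/n$, and the off-support rows of $\hat{\mathcal{J}}$ vanish since $\operatorname{sign}(\operatorname{ST}(\hat{z}_{j},\gamma e^{\lambda}))=0$ there. The helper name and the dense solve are our illustrative choices; the paper's Algorithm 5 is more general and efficient.

```python
import numpy as np

def lasso_implicit_jacobian(X, lam, beta_hat):
    """d beta_hat / d lambda for the Lasso via Theorem 9: only an s x s
    system on the generalized support is solved; invertibility of
    X_S^T X_S is Assumption 7 (restricted injectivity)."""
    n = X.shape[0]
    supp = beta_hat != 0                   # generalized support S
    jac = np.zeros_like(beta_hat)          # rows outside S vanish (Eq. 9)
    X_S = X[:, supp]
    # Eq. 10 reduces to J_S = -n e^lam (X_S^T X_S)^{-1} sign(beta_S)
    jac[supp] = -n * np.exp(lam) * np.linalg.solve(
        X_S.T @ X_S, np.sign(beta_hat[supp]))
    return jac
```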
###### Remark 11
To obtain Theorem 9 we differentiated the fixed point equation of proximal
gradient descent, though one could differentiate other fixed point equations
(such as the one from proximal coordinate descent). The value of the Jacobian
$\hat{\mathcal{J}}$ obtained with different fixed point equations would be the
same, yet the associated systems could have different numerical stability
properties. We leave this analysis to future work.
### 3.3 Hypergradient computation: iterative differentiation
Instead of implicit differentiation, it is also possible to use iterative differentiation on proximal solvers. In Section 2 we presented forward- and reverse-mode differentiation of proximal gradient descent (Algorithms 1 and 2). In this section we study the iterative differentiation of proximal coordinate descent (Algorithms 3 and 4). To instantiate the algorithms easily on problems such as the Lasso, partial derivatives of usual proximal operators can be found in Table 3.
For coordinate descent, the computation of the iterative Jacobian in a forward way involves differentiating the following update:
$\displaystyle z_{j}\leftarrow\beta_{j}-\gamma_{j}\nabla_{j}f(\beta)\enspace,$
$\displaystyle\beta_{j}\leftarrow\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\left(z_{j}\right)\enspace,$
$\displaystyle\mathcal{J}_{j:}\leftarrow\underbrace{\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j})}_{\in\mathbb{R}}\underbrace{\left(\mathcal{J}_{j:}-\gamma_{j}\nabla_{j:}^{2}f(\beta)\mathcal{J}\right)}_{\in\mathbb{R}^{r}}+\underbrace{\partial_{\lambda}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j})}_{\in\mathbb{R}^{r}}\enspace.$
We now address the convergence of the iterative Jacobian scheme, a question which remained open in Deledalle et al. (2014, Section 4.1). We show next that the forward-mode converges to the Jacobian in the non-smooth separable setting of this paper. Moreover, we prove that the convergence of the iterative Jacobian is locally linear after support identification.
input : $X\in\mathbb{R}^{n\times p},y\in\mathbb{R}^{n},\lambda\in\mathbb{R}^{r},n_{\mathrm{iter}}\in\mathbb{N}$, $\beta\in\mathbb{R}^{p}$, $\mathcal{J}\in\mathbb{R}^{p\times r},\gamma_{1},\dots,\gamma_{p}$
// jointly compute coef. & Jacobian
for $k=1,\dots,n_{\mathrm{iter}}$ do
    for $j=1,\dots,p$ do
        // update the regression coefficients
        $z_{j}\leftarrow\beta_{j}-\gamma_{j}\nabla_{j}f(\beta)$ ;  // CD step
        $\mathrm{d}z_{j}\leftarrow\mathcal{J}_{j:}-\gamma_{j}\nabla^{2}_{j:}f(\beta)\mathcal{J}$
        $\beta_{j}\leftarrow\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j})$ ;  // prox. step
        // update the Jacobian (diff. with respect to $\lambda$)
        $\mathcal{J}_{j:}\leftarrow\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j})\,\mathrm{d}z_{j}$
        $\mathcal{J}_{j:}\mathrel{+}=\partial_{\lambda}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j})$
    $\beta^{(k)}=\beta$ ; $\mathcal{J}^{(k)}=\mathcal{J}$
$v=\nabla\mathcal{C}(\beta)$
return $\beta^{(n_{\mathrm{iter}})}$, $\mathcal{J}^{\top}v$

Algorithm 3 Forward-mode PCD
input : $X\in\mathbb{R}^{n\times
p},y\in\mathbb{R}^{n},\lambda\in\mathbb{R}^{r},n_{\mathrm{iter}}\in\mathbb{N}$,
$\beta\in\mathbb{R}^{p},\gamma_{1},\dots,\gamma_{p}$
// compute coef.
for _$k=1,\dots,n_{\mathrm{iter}}$_ do
for _$j=1,\ldots,p$_ do
// update the regression coefficients
$z_{j}\leftarrow\beta_{j}-\gamma_{j}\nabla_{j}f(\beta)$ ;
// CD step
$\beta_{j}\leftarrow\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j})$
;
// prox. step
$\beta^{(k,j)}=\beta;z_{j}^{(k)}=z_{j}$ ;
// store iterates
// compute the hypergradient $h$ in a backward way
$v=\nabla C(\beta^{n_{\mathrm{iter}}})$, $h=0_{\mathbb{R}^{r}}$
for _$k=n_{\mathrm{iter}},n_{\mathrm{iter}}-1,\dots,1$_ do
for _$j=p,\dots,1$_ do
$h\mathrel{-}=\gamma_{j}v_{j}\partial_{\lambda}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\big{(}z_{j}^{(k)}\big{)}$
$v_{j}\mathrel{*}=\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\big{(}z_{j}^{(k)}\big{)}$
$v\mathrel{-}=\gamma_{j}v_{j}\nabla_{j:}^{2}f(\beta^{(k,j)})$ ;
// $\mathcal{O}(np)$
return _ $\beta^{n_{\mathrm{iter}}},h$ _
Algorithm 4 Reverse-mode PCD
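To make these updates concrete, here is a minimal NumPy sketch of Algorithm 3 specialized to the Lasso with a single regularization parameter $e^{\lambda}$ (so $r=1$), using the partial derivatives of soft-thresholding referred to in Table 3. The names `soft_threshold` and `forward_pcd_lasso` are ours, and this is an illustrative sketch, not the sparse-ho implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_pcd_lasso(X, y, lam, n_iter=100):
    """Forward-mode PCD for the Lasso: jointly track beta and J = d beta / d lam."""
    n, p = X.shape
    beta, jac = np.zeros(p), np.zeros(p)
    resid = y - X @ beta            # residuals, kept up to date
    X_jac = X @ jac                 # X @ J, kept up to date
    lipschitz = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            if lipschitz[j] == 0.0:
                continue
            gamma = 1.0 / lipschitz[j]
            z_j = beta[j] + gamma * X[:, j] @ resid / n    # CD step
            dz_j = jac[j] - gamma * X[:, j] @ X_jac / n    # its derivative
            thresh = gamma * np.exp(lam)
            new_beta_j = soft_threshold(z_j, thresh)       # prox step
            # derivatives of soft-thresholding:
            # d/dz = 1_{|z| > thresh}, d/dlam = -thresh * sign(z) on that set
            new_jac_j = dz_j - thresh * np.sign(z_j) if abs(z_j) > thresh else 0.0
            resid -= X[:, j] * (new_beta_j - beta[j])
            X_jac += X[:, j] * (new_jac_j - jac[j])
            beta[j], jac[j] = new_beta_j, new_jac_j
    return beta, jac
```

For an outer criterion $\mathcal{C}$, the hypergradient is then obtained as `jac @ grad_C(beta)`, matching the $\mathcal{J}^{\top}v$ returned by Algorithm 3.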
###### Theorem 12 (Local linear convergence of the Jacobian)
Let $0<\gamma\leq 1/L$. Suppose Assumptions 2, 3 and 6 hold. Let
$\lambda\in\mathbb{R}^{r}$, $\Lambda$ be a neighborhood of $\lambda$, and
$\Gamma^{\Lambda}\triangleq\left\\{\hat{\beta}^{(\lambda)}-\gamma\nabla
f(\hat{\beta}^{(\lambda)})\,:\,\lambda\in\Lambda\right\\}$. In addition,
suppose hypotheses (H1) to (H4) from Theorem 9 are satisfied and the sequence
$(\beta^{(k)})_{k\in\mathbb{N}}$ generated by Algorithm 1 (respectively by
Algorithm 3) converges toward $\hat{\beta}$.
Then, the sequence of Jacobians $(\mathcal{J}^{(k)})_{k\geq 0}$ generated by
the forward-mode differentiation of proximal gradient descent (Algorithm 1)
(respectively by forward-mode differentiation of proximal coordinate descent,
Algorithm 3) converges locally linearly towards $\hat{\mathcal{J}}$.
Proof of Theorem 12 can be found in Section B.
Figure 3: Local linear convergence of the Jacobian for the SVM. Distance to
optimum for the coefficients $\beta$ (top) and the Jacobian $\mathcal{J}$
(bottom) of the forward-mode differentiation of proximal coordinate descent
(Algorithm 3) on multiple datasets. One epoch corresponds to one pass over the
data, i.e., one iteration with proximal gradient descent.
#### Comments on Figure 3.
We illustrate the results of Theorem 12 on the SVM (for the Lasso and sparse
logistic regression, see Figures 10 and 11 in Section C) for multiple datasets
(_leukemia_, _rcv1_, _news20_ and _real-sim_; data available on the
_libsvm_ website: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/).
The values of the hyperparameters $\lambda$ are summarized in Table 6.
Regression coefficients $\hat{\beta}^{(\lambda)}$ were computed to machine
precision (up to duality gap smaller than $10^{-16}$) using a state-of-the-art
coordinate descent solver implemented in Lightning (Blondel and Pedregosa,
2016). The exact Jacobian was computed via implicit differentiation (Equation
10). Once these quantities were obtained, we used the forward-mode
differentiation of proximal coordinate descent (Algorithm 3) and monitored the
distance between the iterates of the regression coefficients $\beta^{(k)}$ and
the exact solution $\hat{\beta}$. We also monitored the distance between the
iterates of the Jacobian $\mathcal{J}^{(k)}$ and the exact Jacobian
$\hat{\mathcal{J}}$. The red vertical dashed line represents the iteration
number at which support identification happens. Once the support is identified,
Figures 3, 10 and 11 illustrate the linear convergence of the Jacobian.
However, the behavior of the iterative Jacobian before support identification
is more erratic and not even monotone.
### 3.4 Hypergradient computation with approximate gradients
As mentioned in Section 2, when relying on iterative algorithms to solve Equation
1, one only has access to an approximation of $\hat{\beta}^{(\lambda)}$; this
may lead to numerical errors when computing the gradient in Theorem 9.
Extending the result of Pedregosa (2016, Thm. 1), which states that
hypergradients can be computed approximately, we give a stability result for
the computation of approximate hypergradients in the case of non-smooth inner
problems. For this purpose we need to add several assumptions to the previous
framework.
###### Theorem 13 (Bound on the error of approximate hypergradient)
For $\lambda\in\mathbb{R}^{r}$, let $\hat{\beta}^{(\lambda)}\in\mathbb{R}^{p}$
be the exact solution of the inner Equation 1, and $\hat{S}$ its generalized
support. Suppose Assumptions 2, 3 and 6 hold. Let $\Lambda$ be a neighborhood
of $\lambda$, and
$\Gamma^{\Lambda}\triangleq\left\\{\hat{\beta}^{(\lambda)}-\gamma\nabla
f(\hat{\beta}^{(\lambda)})\,:\,\lambda\in\Lambda\right\\}$. Suppose hypotheses
(H1) to (H4) from Theorem 9 are satisfied. In addition suppose
1. (H5)
The application $\beta\mapsto\nabla^{2}f(\beta)$ is Lipschitz continuous.
2. (H6)
The criterion $\beta\mapsto\nabla\mathcal{C}(\beta)$ is Lipschitz continuous.
3. (H7)
Both optimization problems in Algorithm 5 are solved up to precision
$\epsilon$ with support identification:
$\lVert\beta^{(\lambda)}-\hat{\beta}^{(\lambda)}\rVert\leq\epsilon$,
$A^{\top}$ is invertible, and $\lVert
A^{-1\top}\nabla_{\hat{S}}\mathcal{C}(\beta^{(\lambda)})-v\rVert\leq\epsilon$.
Then the error on the approximate hypergradient $h$ returned by Algorithm 5 is
of the order of magnitude of the error $\epsilon$ on $\beta^{(\lambda)}$ and
$v$:
$\lVert\nabla\mathcal{L}(\lambda)-h\rVert=\mathcal{O}(\epsilon)\enspace.$
Proof of Theorem 13 can be found in Section B.1. Following the analysis of
Pedregosa (2016), two sources of approximation errors arise when computing the
hypergradient: one from the inexact computation of $\hat{\beta}$, and another
from the approximate resolution of the linear system. Theorem 13 states that
if the inner optimization problem and the linear system are solved up to
precision $\epsilon$, i.e.,
$\lVert\hat{\beta}^{(\lambda)}-\beta^{(\lambda)}\rVert\leq\epsilon$ and
$\lVert
A^{-1\top}\nabla_{S}\mathcal{C}(\beta^{(\lambda)})-v\rVert\leq\epsilon$, then
the approximation on the hypergradient is also of the order of $\epsilon$.
###### Remark 14
The Lipschitz continuity of the proximity operator with respect to $\lambda$
(H4) is satisfied for usual proximal operators, in particular all the
operators in Table 3. The Lipschitz continuity of the Hessian and the
criterion, hypotheses (H5) and (H6), are satisfied for usual machine learning
loss functions and criteria, such as the least squares and the logistic loss.
###### Remark 15
To simplify the analysis, we used the same tolerance for the resolution of the
inner Equation 1 and the resolution of the linear system. Theorem 13 gives
intuition on the fact that the inner problem does not need to be solved at
high precision to yield a good hypergradient estimate. Note that in
practice one does not easily control the distance between the approximate
solution and the exact one, $\lVert\beta^{(k)}-\hat{\beta}\rVert$: most
software packages provide a solution up to a given duality gap (sometimes even
other criteria), not up to a given $\lVert\beta^{(k)}-\hat{\beta}\rVert$.
### 3.5 Proposed method for hypergradient computation
We now describe our proposed method to compute the hypergradient of Equation
2. In order to take advantage of the sparsity induced by the generalized
support, we propose an implicit differentiation algorithm for non-smooth inner
problems, summarized in Algorithm 5. First, we compute a solution of the
inner Equation 1 using a solver identifying the generalized support (Liang et
al., 2014; Klopfenstein et al., 2020). Then, the hypergradient is computed by
solving the linear system in Equation 10. This linear system, as mentioned in
Section 2, can be solved using multiple algorithms, including conjugate
gradient or fixed point methods. Table 4 summarizes the computational
complexity in space and time of the described algorithms.
Table 4: Cost in time and space for each method: $p$ is the number of features, $n$ the number of samples, $r$ the number of hyperparameters, and $\hat{s}$ is the size of the generalized support (Definition 5, $\hat{s}\leq p$ and usually $\hat{s}\ll p$). The number of iterations of the inner solver is noted $n_{\mathrm{iter}}$, the number of iterations of the solver of the linear system is noted $n_{\mathrm{sys}}$.
Differentiation | Algorithm | Space | Time
---|---|---|---
Forward-mode PGD | Algorithm 1 | $\mathcal{O}(p\,r)$ | $\mathcal{O}(n\,p\,r\,n_{\mathrm{iter}})$
Reverse-mode PGD | Algorithm 2 | $\mathcal{O}(p\,n_{\mathrm{iter}})$ | $\mathcal{O}(n\,p\,n_{\mathrm{iter}}+n\,p\,n_{\mathrm{iter}})$
Forward-mode PCD | Algorithm 3 | $\mathcal{O}(p\,r)$ | $\mathcal{O}(n\,p\,r\,n_{\mathrm{iter}})$
Reverse-mode PCD | Algorithm 4 | $\mathcal{O}(p\,n_{\mathrm{iter}})$ | $\mathcal{O}(n\,p\,n_{\mathrm{iter}}+n\,p^{2}\,n_{\mathrm{iter}})$
Implicit differentiation | Algorithm 5 | $\mathcal{O}(p+\hat{s})$ | $\mathcal{O}(n\,p\,n_{\mathrm{iter}}+n\,\hat{s}\,n_{\mathrm{sys}})$
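As an illustration of Algorithm 5, the sketch below computes the hypergradient of a hold-out MSE criterion for the Lasso. In this case $\partial_{z}\operatorname{prox}$ equals $1$ on the support and $\hat{\mathcal{J}}_{\hat{S}^{c}:}=0$, so Equation 10 reduces to the $\hat{s}\times\hat{s}$ system $(X_{\hat{S}}^{\top}X_{\hat{S}}/n)\,\hat{\mathcal{J}}_{\hat{S}}=-e^{\lambda}\operatorname{sign}(\hat{\beta}_{\hat{S}})$, which is the speedup of Remark 10. The helper names are ours, and the inner solver is a plain coordinate descent rather than a support-identifying state-of-the-art solver.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Inner solver: plain proximal coordinate descent for the Lasso."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.copy()
    lipschitz = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            if lipschitz[j] == 0.0:
                continue
            z_j = beta[j] + X[:, j] @ resid / (n * lipschitz[j])
            new = np.sign(z_j) * max(abs(z_j) - np.exp(lam) / lipschitz[j], 0.0)
            resid -= X[:, j] * (new - beta[j])
            beta[j] = new
    return beta

def lasso_implicit_hypergrad(X, y, X_val, y_val, lam):
    """Sketch of Algorithm 5 for the Lasso with a hold-out MSE criterion."""
    n = X.shape[0]
    beta = lasso_cd(X, y, lam)
    support = np.flatnonzero(beta)      # generalized support S-hat
    jac = np.zeros_like(beta)           # J restricted to S^c is 0 for the Lasso
    if support.size:
        H_SS = X[:, support].T @ X[:, support] / n
        jac[support] = -np.exp(lam) * np.linalg.solve(H_SS, np.sign(beta[support]))
    grad_C = -2 * X_val.T @ (y_val - X_val @ beta) / len(y_val)
    return np.mean((y_val - X_val @ beta) ** 2), jac @ grad_C   # L, grad L
```

Only an $\hat{s}\times\hat{s}$ system is solved, so the cost after the inner solve is negligible when $\hat{\beta}^{(\lambda)}$ is very sparse.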
### 3.6 Resolution of the bilevel optimization Equation 2
From a practical point of view, once the hypergradient has been computed,
first-order methods require the definition of a step size to solve the non-
convex Equation 2. As the Lipschitz constant is not available for the outer
problem, first-order methods need to rely on other strategies, such as:
* •
Gradient descent with manually adjusted fixed step sizes (Frecon et al., 2018;
Ji et al., 2020). The main disadvantage of this technique is that it requires
a careful tuning of the step size for each experiment. In addition to being
potentially tedious, it does not lead to an automatic procedure.
* •
L-BFGS (as in Deledalle et al. 2014). L-BFGS is a quasi-Newton algorithm that
exploits past iterates to approximate the Hessian and propose a better descent
direction, which is combined with some line search (Nocedal and Wright, 2006).
Yet, due to the approximate gradient computation, we observed that L-BFGS did
not always converge.
* •
ADAM (Kingma and Ba, 2014). It turned out to be inappropriate for the present
setting: ADAM was very sensitive to the initial step size and required careful
tuning for each experiment.
* •
Iteration specific step sizes obtained by line search (Pedregosa, 2016). While
the approach from Pedregosa (2016) requires no tuning, we observed that it
could diverge when close to the optimum. The adaptive step size strategy
proposed in Algorithm 6, used in all the experiments, turned out to be robust
and efficient across problems and datasets.
###### Remark 16 (Uniqueness)
The solution of Equation 1 may be non-unique, leading to a multi-valued
regularization path $\lambda\mapsto\hat{\beta}^{(\lambda)}$ (Liu et al., 2020)
and requiring tools such as _optimistic gradient_ (Dempe et al., 2015, Chap.
3.8). Though it is not possible to ensure uniqueness in practice, we did not
face experimental issues due to potential non-uniqueness. For the Lasso, this
experimental observation can be theoretically justified (Tibshirani, 2013):
when the design matrix is sampled from a continuous distribution, the solution
of the Lasso is almost surely unique.
###### Remark 17 (Initialization)
One advantage of the non-smooth case with the $\ell_{1}$ norm is that one can
find a good initialization point: there exists a value $\lambda_{\max}$ (see
Table 1) such that the solution of Equation 1 vanishes for
$\lambda\geq\lambda_{\max}$. Hence, a convenient and robust initialization
value can be chosen as $e^{\lambda}=e^{\lambda_{\max}}/100$. This is in
contrast with the smooth case, where finding a good initialization heuristic
is hard: starting in flat zones can lead to poor performance for gradient-
based methods (Pedregosa, 2016).
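For the Lasso of Table 1 this initialization is a one-liner. A sketch under the formulation $\tfrac{1}{2n}\lVert y-X\beta\rVert_{2}^{2}+e^{\lambda}\lVert\beta\rVert_{1}$, for which $\beta=0$ is optimal as soon as $e^{\lambda}\geq\lVert X^{\top}y\rVert_{\infty}/n$ (the function name is ours):

```python
import numpy as np

def lasso_lam_max(X, y):
    # smallest lam for which the Lasso solution is identically zero
    return np.log(np.abs(X.T @ y).max() / X.shape[0])

# robust starting point suggested above: e^lam = e^{lam_max} / 100
# lam_init = lasso_lam_max(X, y) - np.log(100)
```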
input : $\lambda\in\mathbb{R}^{r},\epsilon>0$
init : $\gamma>0$
;
// compute the solution of inner problem
Find $\beta$ such that:
$\Phi(\beta,\lambda)-\Phi(\hat{\beta},\lambda)\leq\epsilon$
;
// compute the gradient
Compute the generalized support $S$ of $\beta$,
$z={\beta}-\gamma\nabla f(\beta)$
$\mathcal{J}_{S^{c}:}=\partial_{\lambda}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(z)_{S^{c}}$
$s=|S|$
$A=\operatorname{Id}_{s}-\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(z)_{S}\odot(\operatorname{Id}_{s}-\gamma\nabla^{2}_{S,S}f(\beta))$
Find $v\in\mathbb{R}^{s}$ s.t. $\lVert
A^{-1\top}\nabla_{S}\mathcal{C}(\beta)-v\rVert\leq\epsilon$
$B=-\gamma\,\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(z)_{S}\odot\nabla^{2}_{S,S^{c}}f(\beta)\,\mathcal{J}_{S^{c}:}$
$\nabla\mathcal{L}(\lambda)=\mathcal{J}_{S^{c}:}^{\top}\nabla_{S^{c}}\mathcal{C}(\beta)+v^{\top}B$
return _
$\mathcal{L}(\lambda)\triangleq\mathcal{C}(\beta),\nabla\mathcal{L}(\lambda)$
_
Algorithm 5 Implicit differentiation
input : $\lambda\in\mathbb{R}^{r},(\epsilon_{i})$
init : $\textrm{use\\_adaptive\\_step\\_size}=\textrm{True}$
for _$i=1,\dots,n_{\mathrm{iter}}$_ do
$\lambda^{\mathrm{old}}\leftarrow\lambda$
// compute the value and the gradient
$\mathcal{L}(\lambda),\nabla\mathcal{L}(\lambda)\leftarrow\text{Algorithm 5}(X,y,\lambda,\epsilon_{i})$
if _$\mathrm{use\\_adaptive\\_step\\_size}$_ then
$\alpha=1/\lVert\nabla\mathcal{L}(\lambda)\rVert$
$\lambda\mathrel{-}=\alpha\nabla\mathcal{L}(\lambda)$ ;
// gradient step
if _$\mathcal{L}(\lambda) >\mathcal{L}(\lambda^{\mathrm{old}})$_ then
$\mathrm{use\\_adaptive\\_step\\_size}=\mathrm{False}$
$\alpha\mathrel{/}=10$
return _ $\lambda$ _
Algorithm 6 Gradient descent with approximate gradient
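A possible NumPy reading of Algorithm 6 is sketched below; `hypergrad_fn` stands for any routine returning $(\mathcal{L}(\lambda),\nabla\mathcal{L}(\lambda))$, such as Algorithm 5. The pseudocode leaves the order of the loss comparison implicit; comparing successive outer losses is one reasonable choice.

```python
import numpy as np

def outer_gradient_descent(hypergrad_fn, lam0, n_iter=50):
    """Gradient descent on lambda with the adaptive step size of Algorithm 6."""
    lam = np.atleast_1d(np.asarray(lam0, dtype=float)).copy()
    use_adaptive, alpha, loss_old = True, 1.0, np.inf
    for _ in range(n_iter):
        loss, grad = hypergrad_fn(lam)
        if loss > loss_old:           # outer loss increased:
            use_adaptive = False      # give up the adaptive rule
            alpha /= 10.0             # and shrink the step size
        if use_adaptive:
            alpha = 1.0 / np.linalg.norm(grad)   # normalized gradient step
        lam -= alpha * grad
        loss_old = loss
    return lam
```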
## 4 Experiments
In this section, we illustrate the benefits of our proposed Algorithm 5 to
compute hypergradients and Algorithm 6 to solve Equation 2. Our package,
sparse-ho, is implemented in Python. It relies on Numpy (Harris et al., 2020),
Numba (Lam et al., 2015) and SciPy (Virtanen et al., 2020). Figures were
plotted using matplotlib (Hunter, 2007). The package is available under BSD3
license at https://github.com/qb3/sparse-ho, with documentation and examples
available at https://qb3.github.io/sparse-ho/. Online code includes scripts to
reproduce all figures and experiments of the paper.
Table 5: Characteristics of the datasets used for the experiments.
name | $\\#\text{ samples }n$ | $\\#\text{ features }p$ | $\\#\text{ classes }q$ | density
---|---|---|---|---
_breast cancer_ | $569$ | $30$ | $-$ | $1$
_diabetes_ | $442$ | $10$ | $-$ | $1$
_leukemia_ | $72$ | $7129$ | $-$ | $1$
_gina agnostic_ | $3468$ | $970$ | $-$ | $1$
_rcv1_ | $20242$ | $19960$ | $-$ | $3.7\times 10^{-3}$
_real-sim_ | $72309$ | $20958$ | $-$ | $2.4\times 10^{-3}$
_news20_ | $19996$ | $632983$ | $-$ | $6.1\times 10^{-4}$
_mnist_ | $60000$ | $683$ | $10$ | $2.2\times 10^{-1}$
_usps_ | $7291$ | $256$ | $10$ | $1$
_rcv1 (multiclass)_ | $15564$ | $16245$ | $53$ | $4.0\times 10^{-3}$
_aloi_ | $108000$ | $128$ | $1000$ | $2.4\times 10^{-1}$
### 4.1 Hypergradient computation
#### Comparison with alternative approaches (Figure 4).
Figure 4: Lasso with hold-out criterion: time comparison on the gina dataset
to compute a single hypergradient as a function of the number of features, for
two values of $\lambda$, $e^{\lambda}=e^{\lambda_{\max}}/10$ (left) and
$e^{\lambda}=e^{\lambda_{\max}}/100$ (right).
First, we compare different methods to compute the hypergradient:
* •
Forward-mode differentiation of proximal coordinate descent (Algorithm 3).
* •
Reverse-mode differentiation of proximal coordinate descent (Algorithm 4).
* •
cvxpylayers (Agrawal et al., 2019), a software based on cvxpy (Diamond and
Boyd, 2016), solving _disciplined parametrized programming_ and providing
derivatives with respect to the parameters of the program. It is thus possible
to use cvxpylayers to compute gradients with respect to the regularization
parameters.
Figure 4 compares the time taken by multiple methods to compute a single
hypergradient $\nabla\mathcal{L}(\lambda)$ for the Lasso (see Table 1), for
multiple values of $\lambda$. It shows the time taken to compute the
regression coefficients and the hypergradient as a function of the number of
columns sampled from the design matrix of the _gina_ dataset. The columns
were selected at random and $10$ repetitions were performed for each point of
the curves. To ensure good numerical precision, problems were solved
up to a duality gap of $10^{-6}$ for the forward-mode and the reverse-mode.
cvxpylayers relies on cvxpy, solving Equation 1 using a splitting conic solver
(O’Donoghue et al., 2019). Since the termination criterion of the splitting
conic solver is not exactly the duality gap (O’Donoghue et al., 2016, Sec.
3.5), we used the default tolerance of $10^{-4}$. The hypergradient
$\nabla\mathcal{L}(\lambda)$ was computed for hold-out mean squared error (see
Table 2).
The forward-mode differentiation of proximal coordinate descent is one order
of magnitude faster than cvxpylayers and two orders of magnitude faster than
the reverse-mode differentiation of proximal coordinate descent. The larger
the value of $\lambda$, the sparser the coefficients $\beta$ are, leading to
significant speedups in this regime. This performance is in accordance with
the lower time cost of the forward mode in Table 4.
#### Combining implicit differentiation with state-of-the-art solvers (Figures
5 and 6).
We now compare the different approaches described in Section 3:
* •
Forward-mode differentiation of proximal coordinate descent (Algorithm 3).
* •
Implicit differentiation (Algorithm 5) with proximal coordinate descent to
solve the inner problem. For efficiency, this solver was coded in Numba (Lam
et al., 2015).
* •
Implicit differentiation (Algorithm 5) with state-of-the-art algorithms to
solve the inner problem: we used Celer (Massias et al., 2020) for the Lasso,
and Lightning (Blondel and Pedregosa, 2016) for the SVM.
Figure 5: Lasso with hold-out criterion: absolute difference between the
exact hypergradient (using $\hat{\beta}$) and the iterate hypergradient (using
$\beta^{(k)}$) of the Lasso as a function of time. Results are for three
datasets and two different regularization parameters. “Implicit diff. +
Celer” uses Celer (Massias et al., 2020) instead of our proximal coordinate
descent implementation.
Figure 6: SVM with hold-out criterion: absolute difference between the exact
hypergradient (using $\hat{\beta}$) and the iterate hypergradient (using
$\beta^{(k)}$) of the SVM as a function of time. “Implicit diff. + Lightning”
uses Lightning (Blondel and Pedregosa, 2016), instead of our proximal
coordinate descent implementation.
Figure 5 shows for three datasets and two values of regularization parameters
the absolute difference between the exact hypergradient and the approximate
hypergradient obtained via multiple algorithms as a function of time. Figure 6
reports similar results for the SVM, on the same datasets, except _news20_,
which is not well suited for the SVM due to its limited number of samples.
First, it demonstrates that implicit differentiation methods are faster than
the forward-mode of proximal coordinate descent (pink). This illustrates the
benefits of restricting the gradient computation to the support of the
Jacobian, as described in Section 3.5. Second, thanks to the flexibility of
our approach, we obtain additional speed-ups by combining implicit
differentiation with a state-of-the-art solver, Celer. The resulting method
(orange) significantly improves over implicit differentiation using a vanilla
proximal coordinate descent (green).
### 4.2 Resolution of the bilevel optimization problem
In this section we compare multiple methods to find the optimal
hyperparameters for the Lasso, elastic net and multiclass sparse logistic
regression. The following methods are compared:
* •
Grid-search: for the Lasso and the elastic net, the number of hyperparameters
is small, and grid-search is tractable. For the Lasso we chose a grid of $100$
hyperparameters $\lambda$, uniformly spaced between
$\lambda_{\max}-\ln(10^{4})$ and $\lambda_{\max}$. For the elastic net we
chose for each of the two hyperparameters a grid of 10 values uniformly spaced
between $\lambda_{\max}$ and $\lambda_{\max}-\ln(10^{4})$. The product grid
thus has $10^{2}$ points.
* •
Random-search: we chose $30$ values of $\lambda$ sampled uniformly between
$\lambda_{\max}$ and $\lambda_{\max}-\ln(10^{4})$ for each hyperparameter. For
the elastic net we chose $30$ points sampled uniformly in
$[\lambda_{\max}-\ln(10^{4}),\lambda_{\max}]\times[\lambda_{\max}-\ln(10^{4}),\lambda_{\max}]$.
* •
SMBO: sequential model-based optimization (SMBO), using expected improvement
(EI) as criterion and the Tree-structured Parzen Estimator (TPE) as model.
First it evaluates
$\mathcal{L}$ using $5$ values of $\lambda$, chosen uniformly at random
between $\lambda_{\max}$ and $\lambda_{\max}-\ln(10^{4})$. Then a TPE model is
fitted on the data points
$(\lambda^{(1)},\mathcal{L}(\lambda^{(1)})),\dots,(\lambda^{(5)},\mathcal{L}(\lambda^{(5)}))$.
Iteratively, the EI is used to choose the next point to evaluate $\mathcal{L}$
at, and this value is used to update the model. We used the hyperopt
implementation (Bergstra et al., 2013).
* •
1st order: first-order method with exact gradient (Algorithm 6 with constant
tolerances $\epsilon_{i}=10^{-6}$), with $\lambda_{\max}-\ln(10^{2})$ as a
starting point.
* •
1st order approx: a first-order method using approximate gradient (Algorithm 6
with tolerances $\epsilon_{i}$, geometrically decreasing from $10^{-2}$ to
$10^{-6}$), with $\lambda_{\max}-\ln(10^{2})$ as a starting point.
Outer criterion. In the Lasso and elastic net experiments, we pick a $K$-fold
CV loss as outer criterion (in our experiments the default choice is $K=5$).
Hence, the dataset $(X,y)$ is partitioned into $K$ hold-out datasets
$(X^{\text{train}_{k}},y^{\text{train}_{k}}),(X^{\text{val}_{k}},y^{\text{val}_{k}})$.
The bilevel optimization problems then write:
Figure 7: Lasso with cross-validation criterion: cross-validation loss as a
function of $\lambda$ (black line, top) and as a function of time (bottom).
Lighter markers correspond to earlier iterations of the algorithm.
$\displaystyle\operatorname*{\mathrm{arg\,min}}_{\lambda=(\lambda_{1},\lambda_{2})\in\mathbb{R}^{2}}\mathcal{L}(\lambda)=\frac{1}{K}\sum_{k=1}^{K}\lVert y^{\text{val}_{k}}-X^{\text{val}_{k}}\hat{\beta}^{(\lambda,k)}\rVert^{2}_{2}$ (16)
$\displaystyle\text{s.t.}\enspace\hat{\beta}^{(\lambda,k)}\in\operatorname*{\mathrm{arg\,min}}_{\beta\in\mathbb{R}^{p}}\tfrac{1}{2n}\left\lVert y^{\text{train}_{k}}-X^{\text{train}_{k}}\beta\right\rVert^{2}_{2}+e^{\lambda_{1}}\lVert\beta\rVert_{1}+\frac{e^{\lambda_{2}}}{2}\lVert\beta\rVert_{2}^{2},\quad\forall k\in[K]\enspace,$
while Lasso CV is obtained by taking $\lambda_{2}\to-\infty$ in the former. By
considering an extended variable $\beta\in\mathbb{R}^{K\times p}$, cross-
validation can be cast as an instance of Equation 2.
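Since the outer criterion in Equation 16 averages hold-out losses over the folds, its hypergradient is simply the average of per-fold hypergradients, each obtained from the corresponding train/validation split. A sketch, assuming the `lasso_implicit_hypergrad` routine sketched in Section 3.5 (which uses the per-sample-normalized MSE) and scikit-learn's `KFold`:

```python
import numpy as np
from sklearn.model_selection import KFold

def cv_loss_and_hypergrad(X, y, lam, n_splits=5):
    """K-fold CV outer loss and its hypergradient, averaged over folds."""
    loss, grad = 0.0, 0.0
    for train, val in KFold(n_splits=n_splits).split(X):
        fold_loss, fold_grad = lasso_implicit_hypergrad(
            X[train], y[train], X[val], y[val], lam)
        loss += fold_loss / n_splits
        grad += fold_grad / n_splits
    return loss, grad
```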
Figure 7 represents the cross-validation loss in Lasso CV as a function of the
regularization parameter $\lambda$ (black curve, three top rows) and as a
function of time (bottom). Each point corresponds to the evaluation of the
cross-validation criterion for one $\lambda$ value. The top rows show cross-
validation loss as a function of $\lambda$, for the grid-search, the SMBO
optimizer and the first-order method. The lightest crosses correspond to the
first iterations of the algorithm and the darkest, to the last ones. For
instance, Lasso grid-search starts to evaluate the cross-validation function
with $\lambda=\lambda_{\max}$ and then decreases to
$\lambda=\lambda_{\max}-\ln(10^{4})$. On all the datasets, first-order methods
are faster to find the optimal regularization parameter, requiring only $5$
iterations.
Figure 8: Elastic net cross-validation, time comparison ($2$
hyperparameters). Level sets of the cross-validation loss (black lines, top)
and cross-validation loss as a function of time (bottom) on _rcv1_ , _real-
sim_ and _news20_ datasets.
Figure 8 represents the level sets of the cross-validation loss for the
elastic net (three top rows) and the cross-validation loss as a function of
time (bottom). One can see that after $5$ iterations the SMBO algorithm (blue
crosses) suddenly slows down (bottom) as the hyperparameter suggested by the
algorithm leads to a costly optimization problem to solve, while first-order
methods converge quickly as for Lasso CV. In the present context, inner
problems are slower to solve for low values of the regularization parameters.
#### Multiclass sparse logistic regression (one hyperparameter per class,
Figure 9).
We consider a multiclass classification problem with $q$ classes. The design
matrix is noted $X\in\mathbb{R}^{n\times p}$, and the target variable
$y\in\\{1,\dots,q\\}^{n}$. We chose to use a one-versus-all model with $q$
regularization parameters. We use a binary cross-entropy for the inner loss:
$\displaystyle\psi^{k}(\beta,\lambda_{k};X,y)\triangleq-\frac{1}{n}\sum_{i=1}^{n}\left(\mathbbm{1}_{y_{i}=k}\ln(\sigma(X_{i:}\beta))+(1-\mathbbm{1}_{y_{i}=k})\ln(1-\sigma(X_{i:}\beta))\right)+e^{\lambda_{k}}\lVert\beta\rVert_{1}\enspace,$
and a multiclass cross-entropy for the outer criterion:
$\displaystyle\mathcal{C}\left(\hat{\beta}^{(\lambda_{1})},\dots,\hat{\beta}^{(\lambda_{q})};X,y\right)\triangleq-\sum_{i=1}^{n}\sum_{k=1}^{q}\ln\left(\frac{e^{X_{i:}\hat{\beta}^{(\lambda_{k})}}}{\sum_{l=1}^{q}e^{X_{i:}\hat{\beta}^{(\lambda_{l})}}}\right)\mathbbm{1}_{y_{i}=k}\enspace.$
(17)
With a single train/test split, the bilevel problem to solve writes:
$\displaystyle\operatorname*{\mathrm{arg\,min}}_{\lambda\triangleq(\lambda_{1},\dots,\lambda_{q})\in\mathbb{R}^{q}}$
$\displaystyle\mathcal{C}\left(\hat{\beta}^{(\lambda_{1})},\dots,\hat{\beta}^{(\lambda_{q})};X^{\mathrm{test}},y^{\mathrm{test}}\right)$
(18) $\displaystyle\text{s.t.}$
$\displaystyle\hat{\beta}^{(\lambda_{k})}\in\operatorname*{\mathrm{arg\,min}}_{\beta\in\mathbb{R}^{p}}\psi^{k}(\beta,\lambda_{k};X^{\mathrm{train}},y^{\mathrm{train}})\quad\forall
k\in[q]\enspace.$
Figure 9: Multiclass sparse logistic regression hold-out, time comparison
(one hyperparameter per class). Multiclass cross-entropy (top), accuracy on
the validation set (middle), and accuracy on the test set (bottom) as a
function of time on _mnist_ , _usps_ ($q=10$ classes), _rcv1_ ($q=53$
classes), _aloi_ ($q=1000$ classes).
Figure 9 represents the multiclass cross-entropy (top), the accuracy on the
validation set (middle) and the accuracy on the test set (unseen data,
bottom). When the number of hyperparameters is moderate ($q=10$, on _mnist_ and
_usps_), the multiclass cross-entropy reached by SMBO and random techniques is
as good as that of first-order techniques. This is expected and follows the same
conclusion as Bergstra and Bengio (2012); Frazier (2018): when the number of
hyperparameters is moderate, SMBO and random techniques can be used
efficiently. However, when the number of hyperparameters increases (_rcv1_ ,
$q=53$ and _aloi_ , $q=1000$), the hyperparameter space is too large: zero-
order solvers simply fail. On the contrary, first-order techniques manage to
find hyperparameters leading to significantly better accuracy.
## 5 Conclusion
In this work we considered the problem of hyperparameter optimization to
select the regularization parameter of linear models with non-smooth
objective. Casting this problem as a bilevel optimization problem, we proposed
to use first-order methods. We showed that the usual automatic differentiation
techniques, implicit differentiation, forward and reverse modes, can be used
to compute the hypergradient, despite the non-smoothness of the inner problem.
Experimentally, we demonstrated the benefits of first-order techniques for solving
bilevel optimization problems on a wide range of estimators ($\ell_{1}$-penalized
methods, SVM, etc.) and datasets. The presented techniques could also be
extended to more general bilevel optimization problems, in particular implicit
differentiation could be well suited for meta-learning problems, with a
potentially large number of hyperparameters.
## Acknowledgements
This work was partially funded by the ERC Starting Grant SLAB ERC-StG-676943,
the ANR CaMeLOt ANR-20-CHIA-0001-01, and the ANR grant GraVa ANR-18-CE40-0005.
Part of this work has been carried out at the Machine Learning Genoa (MaLGa)
center, Università di Genova (IT). M. M. acknowledges the financial support of
the European Research Council (grant SLING 819789).
## References
* Ablin et al. (2020) P. Ablin, G. Peyré, and T. Moreau. Super-efficiency of automatic differentiation for functions defined as a minimum. In _International Conference on Machine Learning_ , pages 32–41. PMLR, 2020.
* Agrawal et al. (2019) A. Agrawal, B. Amos, S. Barratt, S. Boyd, S. Diamond, and J. Z. Kolter. Differentiable convex optimization layers. In _NeurIPS_ , pages 9558–9570, 2019.
* Akaike (1974) H. Akaike. A new look at the statistical model identification. _IEEE Trans. Automat. Control_ , AC-19:716–723, 1974.
* Amos and Kolter (2017) B. Amos and J. Z. Kolter. Optnet: Differentiable optimization as a layer in neural networks. In _ICML_ , volume 70, pages 136–145, 2017.
* Arlot and Celisse (2010) S. Arlot and A. Celisse. A survey of cross-validation procedures for model selection. _Statistics surveys_ , 4:40–79, 2010.
* Bai et al. (2019) S. Bai, J. Z. Kolter, and V. Koltun. Deep equilibrium models. _NeurIPS_ , 2019.
* Bai et al. (2020) S. Bai, V. Koltun, and J. Z. Kolter. Multiscale deep equilibrium models. _NeurIPS_ , 2020.
* Baydin et al. (2018) A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind. Automatic differentiation in machine learning: a survey. _Journal of Machine Learning Research_ , 18(153):1–43, 2018.
* Belloni et al. (2011) A. Belloni, V. Chernozhukov, and L. Wang. Square-root Lasso: pivotal recovery of sparse signals via conic programming. _Biometrika_ , 98(4):791–806, 2011.
* Bengio (2000) Y. Bengio. Gradient-based optimization of hyperparameters. _Neural computation_ , 12(8):1889–1900, 2000.
* Bergstra and Bengio (2012) J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. _Journal of Machine Learning Research_ , 13(2), 2012.
* Bergstra et al. (2011) J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In _NeurIPS_ , 2011.
* Bergstra et al. (2013) J. Bergstra, D. Yamins, and D. D. Cox. Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms. In _Proceedings of the 12th Python in science conference_ , pages 13–20, 2013.
* Bertrand et al. (2020) Q. Bertrand, Q. Klopfenstein, M. Blondel, S. Vaiter, A. Gramfort, and J. Salmon. Implicit differentiation of Lasso-type models for hyperparameter optimization. _ICML_ , 2020.
* Bickel et al. (2009) P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. _Ann. Statist._ , 37(4):1705–1732, 2009.
* Blondel and Pedregosa (2016) M. Blondel and F. Pedregosa. Lightning: large-scale linear classification, regression and ranking in python, 2016.
* Bolte and Pauwels (2020a) J. Bolte and E. Pauwels. Conservative set valued fields, automatic differentiation, stochastic gradient methods and deep learning. _Mathematical Programming_ , pages 1–33, 2020a.
* Bolte and Pauwels (2020b) J. Bolte and E. Pauwels. A mathematical model for automatic differentiation in machine learning. _arXiv preprint arXiv:2006.02080_ , 2020b.
* Boser et al. (1992) B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In _Proceedings of the fifth annual workshop on Computational learning theory_ , pages 144–152. ACM, 1992.
* Brochu et al. (2010) E. Brochu, V. M. Cora, and N. De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. 2010.
* Chapelle et al. (2002) O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. _Machine learning_ , 46(1-3):131–159, 2002.
* Chen et al. (1998) S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. _SIAM J. Sci. Comput._ , 20(1):33–61, 1998.
* Cherkaoui et al. (2020) H. Cherkaoui, J. Sulam, and T. Moreau. Learning to solve TV regularised problems with unrolled algorithms. _NeurIPS_ , 33, 2020.
* Colson et al. (2007) B. Colson, P. Marcotte, and G. Savard. An overview of bilevel optimization. _Annals of operations research_ , 153(1):235–256, 2007.
* Combettes and Wajs (2005) P. L. Combettes and V. R. Wajs. Signal recovery by proximal forward-backward splitting. _Multiscale Modeling & Simulation_, 4(4):1168–1200, 2005.
* Defazio et al. (2014) A. Defazio, F. Bach, and S. Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In _NeurIPS_ , pages 1646–1654, 2014.
* Deledalle et al. (2014) C.-A. Deledalle, S. Vaiter, J. Fadili, and G. Peyré. Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection. _SIAM J. Imaging Sci._ , 7(4):2448–2487, 2014\.
* Dempe et al. (2015) S. Dempe, V. Kalashnikov, G. A. Pérez-Valdés, and N. Kalashnykova. Bilevel programming problems. _Energy Systems. Springer, Berlin_ , 2015.
* Devroye and Wagner (1979) L. Devroye and T. Wagner. Distribution-free performance bounds for potential function rules. _IEEE Transactions on Information Theory_ , 25(5):601–604, 1979.
* Diamond and Boyd (2016) S. Diamond and S. Boyd. CVXPY: A Python-embedded modeling language for convex optimization. _Journal of Machine Learning Research_ , 17(83):1–5, 2016.
* Domke (2012) J. Domke. Generic methods for optimization-based modeling. In _AISTATS_ , volume 22, pages 318–326, 2012.
* Efron (1986) B. Efron. How biased is the apparent error rate of a prediction rule? _J. Amer. Statist. Assoc._ , 81(394):461–470, 1986.
* Evans and Gariepy (1992) L. C. Evans and R. F. Gariepy. _Measure theory and fine properties of functions_. CRC Press, 1992.
* Fan and Lv (2008) J. Fan and J. Lv. Sure independence screening for ultrahigh dimensional feature space. _J. R. Stat. Soc. Ser. B Stat. Methodol._ , 70(5):849–911, 2008.
* Feurer and Hutter (2019) M. Feurer and F. Hutter. Hyperparameter optimization. In _Automated Machine Learning_ , pages 3–33. Springer, Cham, 2019\.
* Foo et al. (2008) C. S. Foo, C. B. Do, and A. Y. Ng. Efficient multiple hyperparameter learning for log-linear models. In _NeurIPS_ , pages 377–384, 2008.
* Forrester et al. (2008) A. Forrester, A. Sobester, and A. Keane. _Engineering design via surrogate modelling: a practical guide_. John Wiley & Sons, 2008.
* Franceschi et al. (2017) L. Franceschi, M. Donini, P. Frasconi, and M. Pontil. Forward and reverse gradient-based hyperparameter optimization. In _ICML_ , pages 1165–1173, 2017.
* Franceschi et al. (2018) L. Franceschi, P. Frasconi, S. Salzo, and M. Pontil. Bilevel programming for hyperparameter optimization and meta-learning. In _ICML_ , pages 1563–1572, 2018.
* Frazier (2018) P.I. Frazier. A tutorial on Bayesian optimization. _arXiv preprint arXiv:1807.02811_ , 2018.
* Frecon et al. (2018) J. Frecon, S. Salzo, and M. Pontil. Bilevel learning of the group lasso structure. In _NeurIPS_ , pages 8301–8311, 2018.
* Friedman et al. (2010) J. Friedman, T. J. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. _J. Stat. Softw._ , 33(1):1–22, 2010.
* Ghadimi and Wang (2018) S. Ghadimi and M. Wang. Approximation methods for bilevel programming. _arXiv preprint arXiv:1802.02246_ , 2018.
* Goodfellow et al. (2016) I. Goodfellow, A. Courville, and Y. Bengio. _Deep learning_ , volume 1. MIT press Cambridge, 2016.
* Gould et al. (2016) S. Gould, B. Fernando, A. Cherian, P. Anderson, R. S. Cruz, and E. Guo. On differentiating parameterized argmin and argmax problems with application to bi-level optimization. _arXiv preprint arXiv:1607.05447._ , 2016.
* Grazzi et al. (2020) R. Grazzi, L. Franceschi, M. Pontil, and S. Salzo. On the iteration complexity of hypergradient computation. _ICML_ , 2020.
* Hare and Lewis (2007) W. L. Hare and A. S. Lewis. Identifying active manifolds. _Algorithmic Operations Research_ , 2(2):75–75, 2007.
* Harris et al. (2020) C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk, M. Brett, A. Haldane, J. Fern’andez del R’ıo, M. Wiebe, P. Peterson, P. G’erard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, and T. E. Oliphant. Array programming with NumPy. _Nature_ , 585(7825):357–362, 2020.
* Hastie et al. (2004) T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu. The entire regularization path for the support vector machine. _Journal of Machine Learning Research_ , 5(Oct):1391–1415, 2004.
* Hestenes and Stiefel (1952) M. R. Hestenes and E. Stiefel. _Methods of conjugate gradients for solving linear systems_ , volume 49. NBS Washington, DC, 1952.
* Higham (2002) N. J. Higham. _Accuracy and stability of numerical algorithms_. SIAM, 2002.
* Hoerl and Kennard (1970) A. E. Hoerl and R. W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. _Technometrics_ , 12(1):55–67, 1970.
* Hunter (2007) J. D. Hunter. Matplotlib: A 2d graphics environment. _Computing in Science & Engineering_ , 9(3):90–95, 2007.
* Hutter et al. (2015) F. Hutter, J. Lücke, and L. Schmidt-Thieme. Beyond manual tuning of hyperparameters. _KI-Künstliche Intelligenz_ , 29(4):329–337, 2015.
* Ji et al. (2020) K. Ji, J. Yang, and Y. Liang. Provably faster algorithms for bilevel optimization and applications to meta-learning. _arXiv preprint arXiv:2010.07962_ , 2020.
* Johnson and Zhang (2013) R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In _NeurIPS_ , pages 315–323, 2013.
* Johnson and Guestrin (2015) T. B. Johnson and C. Guestrin. Blitz: A principled meta-algorithm for scaling sparse optimization. In _ICML_ , pages 1171–1179, 2015.
* Jones et al. (1998) D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. _Journal of Global optimization_ , 13(4):455–492, 1998.
* Kingma and Ba (2014) D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Klopfenstein et al. (2020) Q. Klopfenstein, Q. Bertrand, A. Gramfort, J. Salmon, and S. Vaiter. Model identification and local linear convergence of coordinate descent. _arXiv preprint arXiv:2010.11825_ , 2020.
* Koh et al. (2007) K. Koh, S.-J. Kim, and S. Boyd. An interior-point method for large-scale l1-regularized logistic regression. _Journal of Machine Learning Research_ , 8(8):1519–1555, 2007.
* Kohavi and John (1995) R. Kohavi and G. H. John. Automatic parameter selection by minimizing estimated error. In _Machine Learning Proceedings 1995_ , pages 304–312. Elsevier, 1995.
* Kunisch and Pock (2013) K. Kunisch and T. Pock. A bilevel optimization approach for parameter learning in variational models. _SIAM J. Imaging Sci._ , 6(2):938–983, 2013.
* Lam et al. (2015) S. K. Lam, A. Pitrou, and S. Seibert. Numba: A LLVM-based Python JIT Compiler. In _Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC_ , pages 1–6. ACM, 2015.
* Larsen et al. (1996) J. Larsen, L. K. Hansen, C. Svarer, and M. Ohlsson. Design and regularization of neural networks: the optimal use of a validation set. In _Neural Networks for Signal Processing VI. Proceedings of the 1996 IEEE Signal Processing Society Workshop_ , 1996.
* LeCun et al. (1998) Y. A. LeCun, L. Bottou, G. B. Orr, and K-R. Müller. Efficient backprop. In _Neural networks: Tricks of the trade_ , pages 9–48. Springer, 1998.
* Liang et al. (2014) J. Liang, J. Fadili, and G. Peyré. Local linear convergence of forward–backward under partial smoothness. In _NeurIPS_ , pages 1970–1978, 2014.
* Liang et al. (2017) J. Liang, J. Fadili, and G. Peyré. Activity identification and local linear convergence of Forward–Backward-type Methods. _SIAM J. Optim._ , 27(1):408–437, 2017.
* Linnainmaa (1970) S. Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. _Master’s Thesis (in Finnish), Univ. Helsinki_ , pages 6–7, 1970.
* Lions and Mercier (1979) P-L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. _SIAM Journal on Numerical Analysis_ , 16(6):964–979, 1979.
* Liu and Nocedal (1989) D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. _Mathematical programming_ , 45(1-3):503–528, 1989.
* Liu et al. (2020) R. Liu, P. Mu, X. Yuan, S. Zeng, and J. Zhang. A generic first-order algorithmic framework for bi-level programming beyond lower-level singleton. _ICML_ , 2020.
* Lorraine et al. (2019) J. Lorraine, P. Vicol, and D. Duvenaud. Optimizing millions of hyperparameters by implicit differentiation. _arXiv preprint arXiv:1911.02590_ , 2019.
* Lounici (2008) K. Lounici. Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. _Electron. J. Stat._ , 2:90–102, 2008.
* Mairal and Yu (2012) J. Mairal and B. Yu. Complexity analysis of the lasso regularization path. In _ICML_ , pages 353–360, 2012.
* Mairal et al. (2012) J. Mairal, F. Bach, and J. Ponce. Task-driven dictionary learning. _IEEE Trans. Pattern Anal. Mach. Intell._ , 34(4):791–804, 2012.
* Massias et al. (2018) M. Massias, A. Gramfort, and J. Salmon. Celer: a fast solver for the lasso with dual extrapolation. In _ICML_ , volume 80, pages 3315–3324, 2018.
* Massias et al. (2020) M. Massias, S. Vaiter, A. Gramfort, and J. Salmon. Dual extrapolation for sparse generalized linear models. _Journal of Machine Learning Research_ , 21(234):1–33, 2020.
* Mehmood and Ochs (2021) S. Mehmood and P. Ochs. Differentiating the value function by using convex duality. In _AISTATS_ , pages 3871–3879. PMLR, 2021.
* Mockus (1989) J. Mockus. The bayesian approach to local optimization. In _Bayesian Approach to Global Optimization_ , pages 125–156. Springer, 1989.
* Nesterov (2004) Y. Nesterov. _Introductory lectures on convex optimization_ , volume 87 of _Applied Optimization_. Kluwer Academic Publishers, Boston, MA, 2004.
* Nocedal and Wright (2006) J. Nocedal and S. J. Wright. _Numerical optimization_. Springer Series in Operations Research and Financial Engineering. Springer, New York, second edition, 2006.
* Nutini (2018) J. Nutini. _Greed is good: greedy optimization methods for large-scale structured problems_. PhD thesis, University of British Columbia, 2018.
* Nutini et al. (2019) J. Nutini, M. Schmidt, and W. Hare. “active-set complexity” of proximal gradient: How long does it take to find the sparsity pattern? _Optimization Letters_ , 13(4):645–655, 2019.
* Ochs et al. (2015) P. Ochs, R. Ranftl, T. Brox, and T. Pock. Bilevel optimization with nonsmooth lower level problems. In _SSVM_ , pages 654–665, 2015.
* O’Donoghue et al. (2016) B. O’Donoghue, E. Chu, N. Parikh, and S. Boyd. Conic optimization via operator splitting and homogeneous self-dual embedding. _Journal of Optimization Theory and Applications_ , 169(3):1042–1068, 2016.
* O’Donoghue et al. (2019) B. O’Donoghue, E. Chu, N. Parikh, and S. Boyd. SCS: Splitting conic solver, version 2.1.2, 2019.
* Pedregosa (2016) F. Pedregosa. Hyperparameter optimization with approximate gradient. In _ICML_ , volume 48, pages 737–746, 2016.
* Pedregosa et al. (2017) F. Pedregosa, R. Leblond, and S. Lacoste-Julien. Breaking the nonsmooth barrier: A scalable parallel method for composite optimization. _NeurIPS_ , pages 56–65, 2017.
* Peyré and Fadili (2011) G. Peyré and J. M. Fadili. Learning analysis sparsity priors. In _Sampta_ , 2011.
* Platt (1999) J. C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In _Advances in large margin classifiers_ , pages 61–74. MIT Press, 1999.
* Poliquin and Rockafellar (1996) R. A. Poliquin and R. T. Rockafellar. Generalized hessian properties of regularized nonsmooth functions. _SIAM Journal on Optimization_ , 6(4):1121–1137, 1996.
* Polyak (1987) B. T. Polyak. Introduction to optimization. _Inc., Publications Division, New York_ , 1, 1987.
* Rajeswaran et al. (2019) A. Rajeswaran, C. Finn, S. M. Kakade, and S. Levine. Meta-learning with implicit gradients. In _NeurIPS_ , pages 113–124, 2019.
* Rastrigin (1963) L. A. Rastrigin. The convergence of the random search method in the extremal control of a many parameter system. _Automation and Remote Control_ , 24:1337–1342, 1963.
* Rosset and Zhu (2007) S. Rosset and J. Zhu. Piecewise linear regularized solution paths. _Ann. Statist._ , 35(3):1012–1030, 2007.
* Schwarz (1978) G. Schwarz. Estimating the dimension of a model. _Ann. Statist._ , 6(2):461–464, 1978.
* Seeger (2008) M. W. Seeger. Cross-validation optimization for large scale structured classification kernel methods. _Journal of Machine Learning Research_ , 9:1147–1178, 2008.
* Snoek et al. (2012) J. Snoek, H. Larochelle, and R. P. Adams. Practical bayesian optimization of machine learning algorithms. In _NeurIPS_ , pages 2960–2968, 2012.
* Stein (1981) C. M. Stein. Estimation of the mean of a multivariate normal distribution. _Ann. Statist._ , 9(6):1135–1151, 1981.
* Stone and Ramer (1965) L. R. A. Stone and J.C. Ramer. Estimating WAIS IQ from Shipley Scale scores: Another cross-validation. _Journal of clinical psychology_ , 21(3):297–297, 1965.
* Tibshirani (1996) R. Tibshirani. Regression shrinkage and selection via the lasso. _J. R. Stat. Soc. Ser. B Stat. Methodol._ , 58(1):267–288, 1996.
* Tibshirani et al. (2012) R. Tibshirani, J. Bien, J. Friedman, T. J. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani. Strong rules for discarding predictors in lasso-type problems. _J. R. Stat. Soc. Ser. B Stat. Methodol._ , 74(2):245–266, 2012.
* Tibshirani (2013) R. J. Tibshirani. The lasso problem and uniqueness. _Electron. J. Stat._ , 7:1456–1490, 2013.
* Tseng and Yun (2009) P. Tseng and S. Yun. Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization. _J. Optim. Theory Appl._ , 140(3):513, 2009.
* Vaiter et al. (2018) S. Vaiter, G. Peyré, and J. Fadili. Model consistency of partly smooth regularizers. _IEEE Trans. Inf. Theory_ , 64(3):1725–1737, 2018.
* Virtanen et al. (2020) P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, et al. Scipy 1.0: fundamental algorithms for scientific computing in python. _Nature methods_ , 17(3):261–272, 2020.
* Wengert (1964) R. E. Wengert. A simple automatic derivative evaluation program. _Communications of the ACM_ , 7(8):463–464, 1964.
* Winston and Kolter (2020) E. Winston and Z. Kolter. Neural monotone operator equilibrium networks. _NeurIPS_ , 2020.
* Zhang et al. (2013) L. Zhang, M. Mahdavi, and R. Jin. Linear convergence with condition number independent access of full gradients. _NeurIPS_ , 26:980–988, 2013.
* Zou and Hastie (2005) H. Zou and T. J. Hastie. Regularization and variable selection via the elastic net. _J. R. Stat. Soc. Ser. B Stat. Methodol._ , 67(2):301–320, 2005.
## A Additional lemmas
### A.1 Differentiability of the proximal operator
Here we recall results on the differentiability of the proximal operator at
the optimum.
###### Lemma 18 (Klopfenstein et al. 2020, Lemmas 2 and 3)
Let $0<\gamma_{j}\leq 1/L_{j}$. Let $\lambda\in\mathbb{R}^{r}$ and $\Lambda$ a
neighborhood of $\lambda$. Consider a solution
$\hat{\beta}\in\operatorname*{\mathrm{arg\,min}}_{\beta\in\mathbb{R}^{p}}\Phi(\beta,\lambda)$
and $\hat{S}$ its generalized support. Suppose
1. 1.
Assumptions 2, 3 and 6 hold.
2. 2.
Assumption 4 holds on $\Lambda$.
Then, for all $j\in\hat{S}$, the map
$z\mapsto\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z)$ is
differentiable at $\hat{z}_{j}=\hat{\beta}_{j}-\gamma_{j}\nabla_{j}f(\hat{\beta})$. Moreover, for all
$j\in\hat{S}^{c}$, $\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}$ is
constant around $\hat{\beta}_{j}-\gamma_{j}\nabla_{j}f(\hat{\beta})$. Thus,
$\beta\mapsto\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(\beta_{j}-\gamma_{j}\nabla_{j}f(\beta))$
is differentiable at $\hat{\beta}$ with gradient $0$.
### A.2 Linear convergence
We now detail the following result: an asymptotically vector autoregressive
sequence, with an error term vanishing linearly to $0$, converges linearly to
its limit. More formally:
###### Lemma 19
Let $A\in\mathbb{R}^{p\times p}$ and $b\in\mathbb{R}^{p}$ with $\rho(A)<1$. Let
$(\mathcal{J}^{(k)})_{k\in\mathbb{N}}$ be a sequence of $\mathbb{R}^{p}$ such
that:
$\mathcal{J}^{(k+1)}=A\mathcal{J}^{(k)}+b+\epsilon^{(k)}\enspace,$ (19)
with $(\epsilon^{(k)})_{k\in\mathbb{N}}$ a sequence which converges linearly
to $0$, then $(\mathcal{J}^{(k)})_{k\in\mathbb{N}}$ converges linearly to its
limit $\hat{\mathcal{J}}\triangleq(\operatorname{Id}-A)^{-1}b$.
Proof Assume $(\epsilon^{(k)})_{k\in\mathbb{N}}$ converges linearly. Then,
there exists $c_{1}>0,0<\nu<1$ such that:
$\lVert\epsilon^{(k)}\rVert\leq c_{1}\nu^{k}\enspace.$
Applying a standard result on spectral norms (see (Polyak, 1987, Chapter 2,
Lemma 1)) yields a bound on $\lVert A^{k}\rVert_{2}$. More precisely, for
every $\delta>0$ there is a constant $c_{2}(\delta)=c_{2}$ such that
$\displaystyle\lVert A^{k}\rVert_{2}\leq c_{2}(\rho(A)+\delta)^{k}\enspace.$
Without loss of generality, we consider from now on a choice of $\delta$ such
that $\rho(A)+\delta<1$. Since $\hat{\mathcal{J}}=(\operatorname{Id}-A)^{-1}b$
the limit $\hat{\mathcal{J}}$ of the sequence satisfies:
$\hat{\mathcal{J}}=A\hat{\mathcal{J}}+b\enspace.$ (20)
Taking the difference between Equations 19 and 20 yields:
$\mathcal{J}^{(k+1)}-\hat{\mathcal{J}}=A(\mathcal{J}^{(k)}-\hat{\mathcal{J}})+\epsilon^{(k)}\enspace.$
(21)
Unrolling Equation 21 yields
$\mathcal{J}^{(k+1)}-\hat{\mathcal{J}}=A^{k+1}(\mathcal{J}^{(0)}-\hat{\mathcal{J}})+\sum_{k^{\prime}=0}^{k}A^{k^{\prime}}\epsilon^{(k-k^{\prime})}$.
Taking the norm on both sides and using the triangle inequality leads to
$\displaystyle\lVert\mathcal{J}^{(k+1)}-\hat{\mathcal{J}}\rVert_{2}\leq$
$\displaystyle\leavevmode\nobreak\ \lVert
A^{k+1}(\mathcal{J}^{(0)}-\hat{\mathcal{J}})\rVert_{2}+\sum_{k^{\prime}=0}^{k}\lVert
A^{k^{\prime}}\rVert_{2}\lVert\epsilon^{(k-k^{\prime})}\rVert$
$\displaystyle\leq$ $\displaystyle\leavevmode\nobreak\ \lVert
A^{k+1}\rVert_{2}\cdot\lVert\mathcal{J}^{(0)}-\hat{\mathcal{J}}\rVert_{2}+c_{1}\sum_{k^{\prime}=0}^{k}\lVert
A^{k^{\prime}}\rVert_{2}\cdot\nu^{k-k^{\prime}}$ $\displaystyle\leq$
$\displaystyle\leavevmode\nobreak\
c_{2}(\rho(A)+\delta)^{k+1}\cdot\lVert\mathcal{J}^{(0)}-\hat{\mathcal{J}}\rVert_{2}+c_{1}\sum_{k^{\prime}=0}^{k}c_{2}(\rho(A)+\delta)^{k^{\prime}}\nu^{k-k^{\prime}}$
We can now split the last sum into two parts and obtain the following bound,
recalling that $\rho(A)+\delta<1$:
$\displaystyle\lVert\mathcal{J}^{(k+1)}-\hat{\mathcal{J}}\rVert_{2}\leq$
$\displaystyle\leavevmode\nobreak\
c_{2}(\rho(A)+\delta)^{k+1}\cdot\lVert\mathcal{J}^{(0)}-\hat{\mathcal{J}}\rVert_{2}$
$\displaystyle+c_{1}c_{2}\left(\sum_{k^{\prime}=0}^{k/2}(\rho(A)+\delta)^{k^{\prime}}\nu^{k-k^{\prime}}+\sum_{k^{\prime}=k/2}^{k}(\rho(A)+\delta)^{k^{\prime}}\nu^{k-k^{\prime}}\right)$
$\displaystyle\leq$ $\displaystyle\leavevmode\nobreak\
c_{2}(\rho(A)+\delta)^{k+1}\cdot\lVert\mathcal{J}^{(0)}-\hat{\mathcal{J}}\rVert_{2}+\frac{c_{1}c_{2}(\rho(A)+\delta)}{1-\rho(A)-\delta}\sqrt{\nu}^{k}$
$\displaystyle+\frac{c_{1}c_{2}\nu}{1-\nu}\sqrt{(\rho(A)+\delta)}^{k}\enspace.$
Thus, $(\mathcal{J}^{(k)})_{k\in\mathbb{N}}$ converges linearly towards its
limit $\hat{\mathcal{J}}$.
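Lemma 19 is easy to check numerically. A minimal sketch with a random matrix rescaled so that $\rho(A)=0.9<1$ and a geometrically vanishing perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
M = rng.standard_normal((p, p))
A = 0.9 * M / np.abs(np.linalg.eigvals(M)).max()  # enforce rho(A) = 0.9 < 1
b = rng.standard_normal(p)
J_hat = np.linalg.solve(np.eye(p) - A, b)         # fixed point (Id - A)^{-1} b

J = np.zeros(p)
for k in range(60):
    eps = 0.8 ** k * rng.standard_normal(p)       # linearly vanishing error
    J = A @ J + b + eps                           # the recursion of Equation 19
    if k % 10 == 0:
        print(k, np.linalg.norm(J - J_hat))       # distance decays geometrically
```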
## B Proof of Theorem 12
See Theorem 12.
Proof We first prove Theorem 12 for proximal gradient descent.
Proximal gradient descent case. Solving Equation 1 with proximal gradient
descent leads to the following updates:
$\displaystyle\beta^{(k+1)}=\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(\underbrace{\beta^{(k)}-\gamma\nabla
f(\beta^{(k)})}_{z^{(k)}})\enspace.$ (22)
Consider the following sequence $(\mathcal{J}^{(k)})_{k\in\mathbb{N}}$ defined
by:
$\displaystyle\mathcal{J}^{(k+1)}=\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(z^{(k)})\odot\left(\operatorname{Id}-\gamma\nabla^{2}f(\beta^{(k)})\right)\mathcal{J}^{(k)}+\partial_{\lambda}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(z^{(k)})\enspace.$ (23)
Note that if $\operatorname{prox}_{\gamma g(\cdot,\lambda)}$ is not
differentiable with respect to the first variable at $z^{(k)}$ (respectively
with respect to the second variable $\lambda$), any weak Jacobian can be used.
When (H3) holds, differentiating Equation 22 with respect to $\lambda$ yields
exactly Equation 23.
Assumptions 2, 3, 4 and 6 and the convergence of $(\beta^{(k)})$ toward
$\hat{\beta}$ ensure that the proximal gradient descent algorithm has the finite
identification property (Liang et al., 2014, Thm. 3.1): we note $K$ the
iteration at which identification is achieved. As before, the separability of $g$
and Assumptions 2, 3, 4 and 6 ensure (see Lemma 18) that
$\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(z^{k})_{\hat{S}^{c}}=0$, for all $k\geq K$. Thus, for all
$k\geq K$,
$\displaystyle\mathcal{J}^{(k)}_{\hat{S}^{c}:}=\hat{\mathcal{J}}_{\hat{S}^{c}:}=\partial_{\lambda}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(z^{(k)})_{\hat{S}^{c}:}\enspace.$
The updates of the Jacobian then become:
$\displaystyle\mathcal{J}^{(k+1)}_{\hat{S}:}=\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(z^{(k)})_{\hat{S}}\odot\left(\operatorname{Id}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\beta^{(k)})\right)\mathcal{J}_{\hat{S}:}^{(k)}+\partial_{\lambda}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(z^{(k)})_{\hat{S}:}\enspace.$
From Assumption 6, we have that $f$ is locally $\mathcal{C}^{3}$ at
$\hat{\beta}$ and $g(\cdot,\lambda)$ is locally $\mathcal{C}^{2}$ at
$\hat{\beta}$; hence $\operatorname{prox}_{g(\cdot,\lambda)}$ is locally
$\mathcal{C}^{2}$. The function
$\beta\mapsto\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(\beta-\gamma\nabla
f(\beta))_{\hat{S}}\odot(\operatorname{Id}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\beta))$
is differentiable at $\hat{\beta}$. Using (H4) we have that
$\beta\mapsto\partial_{\lambda}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(\beta-\gamma\nabla f(\beta))_{\hat{S}:}$ is also
differentiable at $\hat{\beta}$. Using the Taylor expansion of the previous
functions yields:
$\displaystyle\mathcal{J}^{(k+1)}_{\hat{S}:}=\underbrace{\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\odot\left(\operatorname{Id}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta})\right)}_{A}\mathcal{J}_{\hat{S}:}^{(k)}+\underbrace{\partial_{\lambda}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}(\hat{z})_{\hat{S}:}}_{b}+\underbrace{o(\lVert\beta^{(k)}-\hat{\beta}\rVert)}_{\epsilon^{(k)}}\enspace.$
(24)
Thus, for $0<\gamma\leq 1/L$,
$\displaystyle\rho(A)\leq\lVert A\rVert_{2}$
$\displaystyle\leq\underbrace{\lVert\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{z})_{\hat{S}}\rVert}_{\leq 1\text{ (non-expansiveness)}}\,\cdot\underbrace{\lVert\operatorname{Id}-\gamma\nabla_{\hat{S},\hat{S}}^{2}f(\hat{\beta})\rVert_{2}}_{<1\text{ (Assumption 7 and $0<\gamma\leq 1/L$)}}<1\enspace.$ (25)
The inequality on the derivative of the proximal operator comes from the non-
expansiveness of proximal operators. The second inequality comes from
Assumption 7 and $0<\gamma\leq 1/L$.
Assumptions 2, 3, 4, 6 and 7 and the convergence of $(\beta^{(k)})$ toward
$\hat{\beta}$ ensure $(\beta^{(k)})_{k\in\mathbb{N}}$ converges locally
linearly (Liang et al., 2014, Thm. 3.1). The asymptotic autoregressive
sequence in Equation 24, $\rho(A)<1$, and the local linear convergence of
$(\epsilon^{(k)})_{k\in\mathbb{N}}$, yield our result using Lemma 19.
We now prove Theorem 12 for proximal coordinate descent.
#### Proximal coordinate descent.
Compared to proximal gradient descent, the analysis of coordinate descent
requires studying functions defined as the composition of $p$ applications,
each of them only modifying one coordinate.
Coordinate descent updates read as follows:
$\displaystyle\beta_{j}^{(k,j)}=\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\underbrace{\left(\beta_{j}^{(k,j-1)}-\gamma_{j}\nabla_{j}f(\beta^{(k,j-1)})\right)}_{\triangleq
z_{j}^{(k,j-1)}}\enspace.$ (26)
We consider the following sequence:
$\displaystyle\mathcal{J}^{(k,j)}_{j:}=\leavevmode\nobreak\ $
$\displaystyle\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j}^{(k,j-1)})\left(\mathcal{J}^{(k,j-1)}_{j:}-\gamma_{j}\nabla^{2}_{j:}f(\beta^{(k,j-1)})\mathcal{J}^{(k,j-1)}\right)$
$\displaystyle+\partial_{\lambda}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j}^{(k,j-1)})\enspace.$
(27)
Note that if $\operatorname{prox}_{\gamma g(\cdot,\lambda)}$ is not
differentiable with respect to the first variable at $z^{(k)}$ (respectively
with respect to the second variable $\lambda$), any weak Jacobian can be used.
When (H3) holds, differentiating Equation 26 with respect to $\lambda$ yields
exactly the recursion in Equation 27.
Assumptions 2, 3, 4 and 6 and the convergence of
$(\beta^{(k)})_{k\in\mathbb{N}}$ toward $\hat{\beta}$ ensure proximal
coordinate descent has the finite identification property (Klopfenstein et al.,
2020, Thm. 1): we denote by $K$ the iteration at which identification is
achieved. Once the generalized support $\hat{S}$ (of cardinality $\hat{s}$)
has been identified, we have, for all $k\geq K$,
$\beta_{\hat{S}^{c}}^{(k)}=\hat{\beta}_{\hat{S}^{c}}$, and for any
$j\in\hat{S}^{c}$,
$\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j}^{(k,j-1)})=0$.
Thus
$\mathcal{J}_{j:}^{(k,j)}=\partial_{\lambda}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j}^{(k,j-1)})$.
Then, we have that for any $j\in\hat{S}$ and for all $k\geq K$:
$\displaystyle\mathcal{J}^{(k,j)}_{j:}$
$\displaystyle=\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j}^{(k,j-1)})\left(\mathcal{J}^{(k,j-1)}_{j:}-\gamma_{j}\nabla^{2}_{j,\hat{S}}f(\beta^{(k,j-1)})\mathcal{J}^{(k,j-1)}_{\hat{S}:}\right)$
$\displaystyle\hskip
10.00002pt+\partial_{\lambda}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j}^{(k,j-1)})-\gamma_{j}\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}(z_{j}^{(k,j-1)})\nabla^{2}_{j,\hat{S}^{c}}f(\beta^{(k,j-1)})\mathcal{J}^{(k,j-1)}_{\hat{S}^{c}:}\enspace.$
Let $e_{1},\dots,e_{\hat{s}}$ be the vectors of the canonical basis of
$\mathbb{R}^{\hat{s}}$. We can consider the applications
$\displaystyle\mathbb{R}^{p}$ $\displaystyle\rightarrow\mathbb{R}^{\hat{s}}$
$\displaystyle\beta$
$\displaystyle\mapsto\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\left(\beta_{j}-\gamma_{j}\nabla_{j}f(\beta)\right)\left(e_{j}-\gamma_{j}\nabla^{2}_{j,\hat{S}}f(\beta)\right)\enspace,$
and
$\displaystyle\mathbb{R}^{p}$
$\displaystyle\rightarrow\mathbb{R}^{\hat{s}\times r}$ $\displaystyle\beta$
$\displaystyle\mapsto\partial_{\lambda}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\left(\beta_{j}-\gamma_{j}\nabla_{j}f(\beta)\right)-\gamma_{j}\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\left(\beta_{j}-\gamma_{j}\nabla_{j}f(\beta)\right)\nabla^{2}_{j,\hat{S}^{c}}f(\beta)\hat{\mathcal{J}}_{\hat{S}^{c}:}\enspace,$
which are both differentiable at $\hat{\beta}$ using Assumption 6 and (H4).
The Taylor expansion of the previous functions yields:
$\displaystyle\mathcal{J}^{(k,j)}_{j:}$
$\displaystyle=\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\left(\hat{z}_{j}\right)\left(e_{j}-\gamma_{j}\nabla^{2}_{j,\hat{S}}f(\hat{\beta})\right)\mathcal{J}^{(k,j-1)}_{\hat{S}:}$
$\displaystyle\hskip
10.00002pt+\partial_{\lambda}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\left(\hat{z}_{j}\right)-\gamma_{j}\partial_{z}\operatorname{prox}_{\gamma_{j}g_{j}(\cdot,\lambda)}\left(\hat{z}_{j}\right)\nabla^{2}_{j,\hat{S}^{c}}f(\hat{\beta})\mathcal{J}^{(k,j-1)}_{\hat{S}^{c}:}$
$\displaystyle\hskip 10.00002pt+o(||\beta^{(k,j-1)}-\hat{\beta}||)\enspace.$
Let $j_{1},\dots,j_{\hat{s}}$ be the indices of the generalized support of
$\hat{\beta}$. When considering a full epoch of coordinate descent, the
Jacobian is obtained as the product of matrices of the form
$\displaystyle
A_{s}^{\top}=\left(\begin{array}[]{c|c|c|c|c|c|c}e_{1}&\ldots&e_{s-1}&v_{j_{s}}&e_{s+1}&\ldots&e_{\hat{s}}\end{array}\right)\in\mathbb{R}^{\hat{s}\times\hat{s}}\enspace,$
(29)
where
$v_{j_{s}}=\partial_{z}\operatorname{prox}_{\gamma_{j_{s}}g_{j_{s}}}\left(\hat{z}_{j_{s}}\right)\left(e_{s}-\gamma_{j_{s}}\nabla^{2}_{j_{s},\hat{S}}f(\hat{\beta})\right)\in\mathbb{R}^{\hat{s}}.$
A full epoch can then be written
$\displaystyle\mathcal{J}^{(k+1)}_{\hat{S}:}=\underbrace{A_{\hat{s}}A_{\hat{s}-1}\ldots
A_{1}}_{A}\mathcal{J}^{(k)}_{\hat{S}:}+b+\epsilon^{(k)}\enspace,$
for a certain $b\in\mathbb{R}^{\hat{s}\times r}$.
The spectral radius of $A$ is strictly bounded by $1$ (Klopfenstein et al.,
2020, Lemma 8): $\rho(A)<1$. Assumptions 2, 3, 4 and 6 and the convergence of
$(\beta^{(k)})_{k\in\mathbb{N}}$ toward $\hat{\beta}$ ensure local linear
convergence of $(\beta^{(k)})_{k\in\mathbb{N}}$ (Klopfenstein et al., 2020,
Thm. 2). Hence, we can write the update for the Jacobian after an update of
the coordinates from $1$ to $p$:
$\mathcal{J}_{\hat{S}:}^{(k+1)}=A\mathcal{J}_{\hat{S}:}^{(k)}+b+\epsilon^{(k)}\enspace,$
(30)
with $(\epsilon^{(k)})_{k\in\mathbb{N}}$ converging linearly to 0.
Recalling $\rho(A)<1$, Lemma 19 and the last display yield our result.
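The contraction property of the epoch matrix $A=A_{\hat{s}}\ldots A_{1}$ can be checked numerically. Below is a small NumPy sketch, assuming the Lasso (so that $\partial_{z}\operatorname{prox}=1$ on the support and the rows $v_{j_{s}}$ of Equation 29 simplify accordingly); the toy data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 40
X = rng.standard_normal((n, p))
y = X @ (rng.standard_normal(p) * (rng.random(p) < 0.15)) + 0.1 * rng.standard_normal(n)
alpha = 0.1 * np.max(np.abs(X.T @ y)) / n       # regularization e^lambda

# Proximal coordinate descent on the Lasso until (numerical) support identification.
beta = np.zeros(p)
lips = (X ** 2).sum(axis=0) / n                 # coordinate-wise Lipschitz constants 1/gamma_j
for _ in range(2000):
    for j in range(p):
        zj = beta[j] - X[:, j] @ (X @ beta - y) / (n * lips[j])
        beta[j] = np.sign(zj) * max(abs(zj) - alpha / lips[j], 0.0)

S = np.flatnonzero(beta)                        # generalized support S_hat
H = X[:, S].T @ X[:, S] / n                     # restricted Hessian of f
A = np.eye(len(S))
for s in range(len(S)):                         # one epoch: A = A_{s_hat} ... A_1
    A_s = np.eye(len(S))
    A_s[s, :] -= H[s, :] / lips[S[s]]           # row s of A_s becomes v_{j_s} (Equation 29)
    A = A_s @ A
print(np.max(np.abs(np.linalg.eigvals(A))))     # spectral radius, strictly below 1
```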
### B.1 Proof of Theorem 13 (approximate hypergradients)
Proof
#### Overview of the proof.
Our goal is to bound the error between the approximate hypergradient $h$
returned by Algorithm 5 and the true hypergradient
$\nabla\mathcal{L}(\lambda)$. Following the analysis of Pedregosa (2016), two
sources of approximation errors arise when computing the hypergradient:
* •
Approximation errors from the inexact computation of $\hat{\beta}$. Dropping
the dependency on $\lambda$, we denote by $\beta$ the approximate
solution and suppose the problem is solved to precision $\epsilon$ with
support identification (H7):
$\displaystyle\begin{cases}&\beta_{\hat{S}^{c}}=\hat{\beta}_{\hat{S}^{c}}\\\
&\lVert\beta_{\hat{S}}-\hat{\beta}_{\hat{S}}\rVert\leq\epsilon\enspace.\end{cases}$
* •
Approximation errors from the approximate resolution of the linear system;
using (H7), the approximate solution $v$ satisfies:
$\displaystyle\lVert A^{\top}v-\nabla_{\hat{S}}\mathcal{C}(\beta)\rVert\leq\epsilon\enspace.$
The exact solution of the exact linear system $\hat{v}$ satisfies:
$\displaystyle\hat{v}=\hat{A}^{-1\top}\nabla_{\hat{S}}\mathcal{C}(\hat{\beta})\enspace,$
with
$\displaystyle A$
$\displaystyle\triangleq\operatorname{Id}_{|\hat{S}|}-\underbrace{\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}\left(\beta-\gamma\nabla
f(\beta)\right)_{\hat{S}}}_{\triangleq
C}\underbrace{\left(\operatorname{Id}_{|\hat{S}|}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\beta)\right)}_{\triangleq
D}\enspace,$ $\displaystyle\hat{A}$
$\displaystyle\triangleq\operatorname{Id}_{|\hat{S}|}-\underbrace{\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}\left(\hat{\beta}-\gamma\nabla
f(\hat{\beta})\right)_{\hat{S}}}_{\triangleq\hat{C}}\underbrace{\left(\operatorname{Id}_{|\hat{S}|}-\gamma\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta})\right)}_{\triangleq\hat{D}}\enspace.$
* •
Using the last two points, the goal is to bound the difference between the
exact hypergradient and the approximate hypergradient,
$\lVert\nabla\mathcal{L}(\lambda)-h\rVert$. Following Algorithm 5, the exact
hypergradient reads
$\displaystyle\nabla\mathcal{L}(\lambda)=\hat{B}\hat{v}+\hat{\mathcal{J}}^{\top}_{\hat{S}^{c}:}\nabla_{\hat{S}^{c}}\mathcal{C}(\hat{\beta})\enspace,$
and similarly for the approximate versions:
$\displaystyle
h=Bv+\mathcal{J}^{\top}_{\hat{S}^{c}:}\nabla_{\hat{S}^{c}}\mathcal{C}(\beta)\enspace,$
with
$\displaystyle B$
$\displaystyle\triangleq\partial_{\lambda}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}\left(\beta-\gamma\nabla
f(\beta)\right)_{\hat{S}:}-\gamma\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}\left(\beta-\gamma\nabla
f(\beta)\right)_{\hat{S}}\odot\left(\nabla^{2}_{\hat{S},\hat{S}^{c}}f(\beta)\right)\hat{\mathcal{J}}_{\hat{S}^{c}:}$
$\displaystyle\hat{B}$
$\displaystyle\triangleq\partial_{\lambda}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}\left(\hat{\beta}-\gamma\nabla
f(\hat{\beta})\right)_{\hat{S}:}-\gamma\partial_{z}\operatorname{prox}_{\gamma
g(\cdot,\lambda)}\left(\hat{\beta}-\gamma\nabla
f(\hat{\beta})\right)_{\hat{S}}\odot\left(\nabla^{2}_{\hat{S},\hat{S}^{c}}f(\hat{\beta})\right)\hat{\mathcal{J}}_{\hat{S}^{c}:}\enspace.$
We can exploit these decompositions to bound the difference between the exact
hypergradient and the approximate hypergradient:
$\displaystyle\lVert\nabla\mathcal{L}(\lambda)-h\rVert$
$\displaystyle=\lVert\hat{B}\hat{v}-Bv+\hat{\mathcal{J}}^{\top}_{\hat{S}^{c}:}\nabla_{\hat{S}^{c}}\mathcal{C}(\hat{\beta})-\hat{\mathcal{J}}^{\top}_{\hat{S}^{c}:}\nabla_{\hat{S}^{c}}\mathcal{C}(\beta)\rVert$
$\displaystyle\leq\lVert\hat{B}\hat{v}-Bv\rVert+\lVert\hat{\mathcal{J}}^{\top}_{\hat{S}^{c}:}\nabla_{\hat{S}^{c}}\mathcal{C}(\hat{\beta})-\hat{\mathcal{J}}^{\top}_{\hat{S}^{c}:}\nabla_{\hat{S}^{c}}\mathcal{C}(\beta)\rVert$
$\displaystyle\leq\lVert\hat{B}\hat{v}-B\hat{v}+B\hat{v}-Bv\rVert+\lVert\hat{\mathcal{J}}^{\top}_{\hat{S}^{c}:}(\nabla_{\hat{S}^{c}}\mathcal{C}(\hat{\beta})-\nabla_{\hat{S}^{c}}\mathcal{C}(\beta))\rVert$
$\displaystyle\leq\lVert\hat{v}\rVert\cdot\lVert\hat{B}-B\rVert+\lVert
B\rVert\cdot\lVert\hat{v}-v\rVert+L_{\mathcal{C}}\lVert\hat{\mathcal{J}}^{\top}_{\hat{S}^{c}:}\rVert\cdot\lVert\beta-\hat{\beta}\rVert\enspace.$
(31)
Bounding $\lVert\hat{v}-v\rVert$ and $\lVert\hat{B}-B\rVert$ in Equation 31
then yields the desired bound on
$\lVert\nabla\mathcal{L}(\lambda)-h\rVert$.
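For intuition, the quantities above can be assembled explicitly in the Lasso case, where $\hat{\mathcal{J}}_{\hat{S}^{c}:}=0$ and hence $h=B^{\top}v$. A minimal sketch with illustrative names (in Algorithm 5 the linear system would only be solved to precision $\epsilon$, e.g. by conjugate gradient):

```python
import numpy as np

def lasso_hypergradient(X_tr, y_tr, X_val, y_val, lam, beta):
    """Hypergradient of C(beta) = ||X_val beta - y_val||^2 / (2 n_val) w.r.t. lam
    (regularization exp(lam)) via the restricted linear system A^T v = grad_S C."""
    n = X_tr.shape[0]
    S = np.flatnonzero(beta)                        # identified generalized support
    gamma = n / np.linalg.norm(X_tr, ord=2) ** 2    # step size 1/L
    alpha = np.exp(lam)
    H = X_tr[:, S].T @ X_tr[:, S] / n               # restricted Hessian of f
    A = gamma * H                                   # Id - dprox*(Id - gamma*H), dprox = Id on S
    grad_C = X_val[:, S].T @ (X_val @ beta - y_val) / X_val.shape[0]
    v = np.linalg.solve(A.T, grad_C)                # exact here; approximate in Algorithm 5
    B = -gamma * alpha * np.sign(beta[S])           # d_lam prox on the support
    return B @ v                                    # h = B^T v (the Jacobian vanishes on S^c)
```

At $\beta=\hat{\beta}$ this matches the closed form $\nabla\mathcal{L}(\lambda)=-e^{\lambda}\operatorname{sign}(\hat{\beta}_{\hat{S}})^{\top}\nabla^{2}_{\hat{S},\hat{S}}f(\hat{\beta})^{-1}\nabla_{\hat{S}}\mathcal{C}(\hat{\beta})$.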
#### Bound on $\lVert\hat{v}-v\rVert$.
We first prove that $\lVert A-\hat{A}\rVert=\mathcal{O}(\epsilon)$. Let
$L_{H}$ be the Lipschitz constant of the application
$\beta\mapsto\nabla^{2}f(\beta)$, then we have:
$\displaystyle\lVert A-\hat{A}\rVert_{2}$ $\displaystyle=\lVert CD-\hat{C}\hat{D}\rVert_{2}$
$\displaystyle\leq\lVert CD-C\hat{D}\rVert_{2}+\lVert C\hat{D}-\hat{C}\hat{D}\rVert_{2}$
$\displaystyle\leq\underbrace{\left\lVert C\right\rVert_{2}}_{\leq 1\text{ (non-expansiveness)}}\underbrace{\lVert D-\hat{D}\rVert_{2}}_{\leq L_{H}\lVert\beta-\hat{\beta}\rVert\text{ (Lipschitz Hessian)}}+\underbrace{\lVert\hat{D}\rVert_{2}}_{\leq 1}\underbrace{\lVert C-\hat{C}\rVert_{2}}_{\mathcal{O}(\lVert\beta-\hat{\beta}\rVert)\text{ (Lipschitz prox)}}$
$\displaystyle\leq L_{H}\lVert\beta-\hat{\beta}\rVert+\mathcal{O}(\lVert\beta-\hat{\beta}\rVert)$
$\displaystyle=\mathcal{O}(\lVert\beta-\hat{\beta}\rVert)\enspace.$ (32)
Let $\tilde{v}$ be the exact solution of the approximate system
$A^{\top}\tilde{v}\triangleq\nabla_{\hat{S}}\mathcal{C}(\beta)$. The following
conditions are met:
* •
$\hat{v}$ is the exact solution of the exact linear system and $\tilde{v}$ is
the exact solution of the approximate linear system
$\displaystyle\hat{A}^{\top}\hat{v}$
$\displaystyle\triangleq\nabla_{\hat{S}}\mathcal{C}(\hat{\beta})$
$\displaystyle A^{\top}\tilde{v}$
$\displaystyle\triangleq\nabla_{\hat{S}}\mathcal{C}(\beta)\enspace.$
* •
One can control the difference between the exact matrix in the linear system
$\hat{A}$ and the approximate matrix $A$.
$\displaystyle\lVert
A-\hat{A}\rVert_{2}\leq\delta\lVert\beta-\hat{\beta}\rVert\enspace,$
for a certain $\delta>0$ (Equation 32).
* •
One can control the difference between the right-hand sides of the two linear
systems
$\displaystyle\lVert\nabla_{\hat{S}}\mathcal{C}(\beta)-\nabla_{\hat{S}}\mathcal{C}(\hat{\beta})\rVert\leq
L_{\mathcal{C}}\lVert\beta-\hat{\beta}\rVert\enspace,$
since $\beta\mapsto\nabla\mathcal{C}(\beta)$ is $L_{\mathcal{C}}$-Lipschitz
continuous (H6).
* •
One can control the product of the perturbations
$\displaystyle\delta\cdot\lVert\beta-\hat{\beta}\rVert\cdot\lVert\hat{A}^{-1}\rVert_{2}\leq\rho<1\enspace.$
Conditions are met to apply the result by Higham (2002, Thm 7.2), which leads
to
$\displaystyle\lVert\tilde{v}-\hat{v}\rVert$
$\displaystyle\leq\frac{\epsilon}{1-\epsilon\lVert\hat{A}^{-1}\rVert\delta}\left(L_{\mathcal{C}}\lVert\hat{A}^{-1}\rVert+\lVert\hat{v}\rVert\cdot\lVert\hat{A}^{-1}\rVert\delta\right)$
$\displaystyle\leq\frac{\epsilon}{1-\rho}\left(L_{\mathcal{C}}\lVert\hat{A}^{-1}\rVert+\lVert\hat{v}\rVert\cdot\lVert\hat{A}^{-1}\rVert\delta\right)$
$\displaystyle=\mathcal{O}(\epsilon)\enspace.$ (33)
The bound on $\lVert\tilde{v}-\hat{v}\rVert$ finally yields a bound on the
first quantity in Equation 31, $\lVert v-\hat{v}\rVert$:
$\displaystyle\lVert v-\hat{v}\rVert$ $\displaystyle=\lVert v-\tilde{v}+\tilde{v}-\hat{v}\rVert$
$\displaystyle\leq\lVert v-\tilde{v}\rVert+\lVert\tilde{v}-\hat{v}\rVert$
$\displaystyle\leq\lVert A^{-1}A(v-\tilde{v})\rVert+\lVert\tilde{v}-\hat{v}\rVert$
$\displaystyle\leq\lVert A^{-1}\rVert_{2}\times\underbrace{\left\lVert A(v-\tilde{v})\right\rVert}_{\leq\epsilon\text{ (H7)}}+\underbrace{\lVert\tilde{v}-\hat{v}\rVert}_{\mathcal{O}(\epsilon)\text{ (Equation 33)}}$
$\displaystyle=\mathcal{O}(\epsilon)\enspace.$ (34)
#### Bound on $\lVert B-\hat{B}\rVert_{2}$.
We now bound the second quantity in Equation 31, $\lVert B-\hat{B}\rVert_{2}$:
$\displaystyle\lVert B-\hat{B}\rVert_{2}$
$\displaystyle\leq\lVert\partial_{\lambda}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\beta-\gamma\nabla f(\beta))_{\hat{S}:}-\partial_{\lambda}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{\beta}-\gamma\nabla f(\hat{\beta}))_{\hat{S}:}\rVert_{2}$
$\displaystyle\quad+\gamma\lVert\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\hat{\beta}-\gamma\nabla f(\hat{\beta}))_{\hat{S}}\nabla^{2}_{\hat{S},\hat{S}^{c}}f(\hat{\beta})\hat{\mathcal{J}}_{\hat{S}^{c}:}-\partial_{z}\operatorname{prox}_{\gamma g(\cdot,\lambda)}(\beta-\gamma\nabla f(\beta))_{\hat{S}}\nabla^{2}_{\hat{S},\hat{S}^{c}}f(\beta)\hat{\mathcal{J}}_{\hat{S}^{c}:}\rVert_{2}$
$\displaystyle\leq L_{1}\lVert(\beta-\gamma\nabla f(\beta))_{\hat{S}}-(\hat{\beta}-\gamma\nabla f(\hat{\beta}))_{\hat{S}}\rVert\quad\text{(Lipschitz prox)}$
$\displaystyle\quad+L_{2}\lVert\hat{\beta}-\beta\rVert\cdot\lVert\hat{\mathcal{J}}_{\hat{S}^{c}:}\rVert\quad\text{(Lipschitz prox and Assumption 6)}$
$\displaystyle=\mathcal{O}(\lVert\hat{\beta}-\beta\rVert)\enspace.$ (35)
Plugging Equations 34 and 35 into Equation 31 yields the desired result:
$\lVert\nabla\mathcal{L}(\lambda)-h\rVert=\mathcal{O}(\epsilon)$.
## C Additional experiments
Figures 10 and 11 are the counterparts of Figure 3 for the Lasso and sparse
logistic regression. They show the local linear convergence of the Jacobian,
obtained by forward-mode differentiation of proximal coordinate descent.
The solvers used to determine the exact solution up to machine precision are
Celer (Massias et al., 2018, 2020) for the Lasso and Blitz (Johnson and
Guestrin, 2015) for the sparse logistic regression. Table 6 summarizes the
values of the hyperparameters $\lambda$ used in Figures 3, 10 and 11.
Figure 10: Local linear convergence of the Jacobian for the Lasso. Distance
to optimum for the coefficients $\beta$ (top) and the Jacobian $\mathcal{J}$
(bottom) of the forward-mode differentiation of proximal coordinate descent
(Algorithm 3) on multiple datasets.
Figure 11: Local linear convergence of the Jacobian for sparse logistic regression. Distance to optimum for the coefficients $\beta$ (top) and the Jacobian $\mathcal{J}$ (bottom) of the forward-mode differentiation of proximal coordinate descent (Algorithm 3) on multiple datasets.
Table 6: Dataset characteristics and regularization parameters used in Figures 3, 10 and 11.
Datasets | _leukemia_ | _rcv1_ | _news20_ | _real-sim_
---|---|---|---|---
# samples | $n=38$ | $n=20242$ | $n=19996$ | $n=72309$
# features | $p=7129$ | $p=19959$ | $p=632982$ | $p=20958$
Lasso | $e^{\lambda}=0.01\,e^{\lambda_{\text{max}}}$ | $e^{\lambda}=0.075\,e^{\lambda_{\text{max}}}$ | $e^{\lambda}=0.3\,e^{\lambda_{\text{max}}}$ | $e^{\lambda}=0.1\,e^{\lambda_{\text{max}}}$
Logistic regression | $e^{\lambda}=0.1\,e^{\lambda_{\text{max}}}$ | $e^{\lambda}=0.25\,e^{\lambda_{\text{max}}}$ | $e^{\lambda}=0.8\,e^{\lambda_{\text{max}}}$ | $e^{\lambda}=0.15\,e^{\lambda_{\text{max}}}$
SVM | $e^{\lambda}=10^{-5}$ | $e^{\lambda}=3\times 10^{-2}$ | $e^{\lambda}=10^{-3}$ | $e^{\lambda}=5\times 10^{-2}$
# Voice Spoofing Countermeasures: Taxonomy, State-of-the-art, experimental
analysis of generalizability, open challenges, and the way forward
Awais Khan Department of Computer Science and Engineering, Oakland
University, Rochester, MI 48309, USA Khalid Mahmood Malik Department of
Computer Science and Engineering, Oakland University, Rochester, MI 48309, USA
James Ryan Department of Computer Science and Engineering, Oakland
University, Rochester, MI 48309, USA Mikul Saravanan Department of Computer
Science and Engineering, Oakland University, Rochester, MI 48309, USA
###### Abstract
Malicious actors may seek to use different voice spoofing attacks to fool ASV
systems and even use them for spreading misinformation. Various
countermeasures have been proposed to detect these spoofing attacks. Due to
the extensive work done on spoofing detection in automated speaker
verification (ASV) systems in the last 6-7 years, there is a need to classify
the research and perform qualitative and quantitative comparisons on state-of-
the-art countermeasures. Additionally, no existing survey paper has reviewed
integrated solutions to voice spoofing evaluation and speaker verification,
adversarial/anti-forensics attacks on spoofing countermeasures and ASV itself,
or unified solutions to detect multiple attacks using a single model. Further,
no work has been done to provide an apples-to-apples comparison of published
countermeasures in order to assess their generalizability by evaluating them
across corpora. In this work, we conduct a review of the literature on
spoofing detection using hand-crafted features, deep learning, end-to-end, and
universal spoofing countermeasure solutions to detect speech synthesis (SS),
voice conversion (VC), and replay attacks. Additionally, we also review
integrated solutions to voice spoofing evaluation and speaker verification,
adversarial and anti-forensics attacks on voice countermeasures, and ASV. The
limitations and challenges of the existing spoofing countermeasures are also
presented. We report the performance of these countermeasures on several
datasets and evaluate them across corpora. For the experiments, we employ the
ASVspoof2019 and VSDC datasets along with GMM, SVM, CNN, and CNN-GRU
classifiers. (For reproducibility of the results, the code of the testbed
can be found in our GitHub repository:
https://github.com/smileslab/Comparative-Analysis-Voice-Spoofing.)
###### Index Terms:
Voice Spoofing System, Anti-Spoofing, Voice Spoofing Countermeasures,
Comparative Analysis, Speech Spoofing Detection.
## Introduction
Automatic speaker verification systems (ASVs) are now in wide use for
authentication, such as for over-the-phone banking, and are becoming an
increasingly important biometric solution in the current climate of the
pandemic to limit the spread of disease. Although ASVs can be used to validate
an identity, they are prone to spoofing attacks, which may allow an
unauthorized user to gain access to privileged information such as a bank
account. Some of these attacks, such as speech synthesis and voice conversion,
are also being used in cyberspace to spread misinformation and disinformation.
In addition, ASV systems are more prevalent in IoT devices, such as smart
speakers. A spoofing attack on IoT devices connected in a chain to control
someone’s home or other devices operated via the ASV system has the potential
to allow unauthorized access to security devices, e.g., IoT door locks [54].
Spoofing attacks on ASVs can be grouped into Physical Access (PA) attacks and
Logical Access (LA) attacks [4]. An example of a PA attack is a replay attack,
where the hacker records the original speaker’s voice using any recording
device without consent. Later, this prerecorded voice can be replayed onto the
ASV system in order to compromise the voice bio-metric based access control in
financial institutions and smart homes [54]. LA attacks, conversely, are
comprised of artificial, or machine-generated, cloned samples. LA attacks
consist of text-to-speech (TTS) synthesis and voice conversion (VC). For TTS
synthesis, cloned samples are generated using original speech and transcripts,
while VC attacks use only voice samples of the original speaker to train
different deep learning models, i.e., a vocoder, waveform, etc., to generate
the cloned samples. Again, the objective is to generate realistic voice
samples of a target speaker in order to compromise the security of an ASV
system, and thus gain access to someone’s home, bank account, or other voice-
controlled application. Due to a massive escalation in the availability of
high-quality recording equipment and the ease of their creation, replay
attacks are being generated more easily and rapidly, even by less tech-savvy
people, and can be classified into single-hop or multi-hop attacks [6]. More
specifically, a single-hop replay attack utilizes a single recording device to
play the audio sample of a verified user. In contrast, a multi-hop replay
attack utilizes multiple recording devices (i.e., a verified user’s voice is
recorded, then replayed to another recording device, and then replayed to the
ASV system) in a chained scenario. In our prior work [28], we reported a new
hybrid voice spoofing attack, the cloned replay, in which the cloned voice of
a target speaker is replayed to an ASV system. A more recent threat to ASVs
and spoofing countermeasures is adversarial machine learning [75], where machine learning
algorithms may be deceived by injecting minor distortions into the audio
sample. According to Szegedy et al. [75], deep learning model predictions can
be easily modified by extremely small perturbations.
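To make the threat concrete, the sketch below applies the Fast Gradient Sign Method (FGSM), the canonical one-step attack of this kind, to a toy logistic-regression spoofing detector; the model, features, and budget are illustrative, not taken from any specific countermeasure in the literature.

```python
import numpy as np

def fgsm_attack(x, w, b, label, epsilon):
    """One FGSM step on a logistic-regression detector: move the feature
    vector x in the signed-gradient direction of the cross-entropy loss,
    within an L-infinity budget epsilon."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # P(spoof | x)
    grad = (p - label) * w                   # gradient of the loss w.r.t. x
    return x + epsilon * np.sign(grad)

rng = np.random.default_rng(0)
w, b = rng.standard_normal(40), 0.0                      # toy detector weights
x_spoof = 0.5 * w / np.linalg.norm(w)                    # sample scored as spoof
x_adv = fgsm_attack(x_spoof, w, b, label=1, epsilon=0.05)
# Even this tiny perturbation lowers the detector's spoof score.
```

Stronger iterative variants (PGD, BIM) repeat this step with projection, which is why they dominate the evaluations discussed later in this section.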
Several countermeasures have been proposed to defeat voice spoofing attacks,
and these are often comprised of two parts: the first one (front end) is the
feature representation scheme for the input speech signal, and the second one
(back end) is a classifier to distinguish between bonafide and spoofed
samples. The feature descriptor (front end) should be capable of effectively
capturing the traits of the dynamic vocal tracts of a bonafide speaker.
Similarly, the back-end classifier should be able to better learn the distinct
traits of bonafide and spoofed speech samples in order to accurately
discriminate against the spoofed speech samples. In contrast to the
traditional approach of front-end and back-end solutions, in the past few
years, the research community has focused on deep learning and end-to-end
solutions to combat voice spoofing attacks. Existing countermeasures have
mostly tackled single-hop spoofing attacks, while chained (multi-hop) replay
attacks and their countermeasures have been largely overlooked [55]. Recent
efforts toward finding a unified solution to the problem of speaker
verification, where a single countermeasure may be applied to several attacks,
have begun to appear. Although these countermeasure solutions are only
beginning to be explored, there is a marked need for a unified solution as the
way forward in ASV anti-spoofing techniques [111, 35].
TABLE I: Comparison of the existing survey and review papers
Paper | Presentation Attacks | Countermeasures | Integrated ASV | Experimental Analysis | Cross Corpus
---|---|---|---|---|---
 | Replay | VC/Synthesis | Adversarial | Hand Crafted | Deep Learned | End-to-End | | |
[99] | ✓ | ✓ | $\times$ | ✓ | ✓ | ✓ | $\times$ | $\times$ | $\times$
[70] | ✓ | $\times$ | $\times$ | ✓ | $\times$ | $\times$ | $\times$ | ✓ | $\times$
[19] | ✓ | $\times$ | $\times$ | ✓ | $\times$ | $\times$ | $\times$ | ✓ | ✓
[64] | ✓ | $\times$ | $\times$ | ✓ | $\times$ | $\times$ | $\times$ | $\times$ | $\times$
[69] | ✓ | ✓ | $\times$ | ✓ | ✓ | ✓ | $\times$ | $\times$ | $\times$
[79] | ✓ | ✓ | $\times$ | ✓ | ✓ | ✓ | $\times$ | $\times$ | $\times$
[56] | ✓ | ✓ | $\times$ | ✓ | ✓ | ✓ | $\times$ | $\times$ | $\times$
[36] | ✓ | ✓ | $\times$ | ✓ | ✓ | ✓ | $\times$ | $\times$ | $\times$
[Ours] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
The surveys on ASV systems and spoof detection techniques conducted to date
have primarily been focused on specific spoofing attacks and the
countermeasures for them. To the best of our knowledge, six articles [99, 64,
69, 36, 79, 56] have been presented as surveys, and two studies [70] and [19]
conducted a comparative analysis of voice spoofing countermeasures.
In a survey of voice spoofing countermeasures, Wu et al. (2015) [99, 101,
102] provided a good classification of the voice spoofing attacks known at the
time, along with ASV system vulnerabilities. However, given the age of this
review, the advanced spoofing attacks that have since grown prevalent against
ASV systems were not addressed. In addition, it focused on a dedicated
countermeasure for each sort of attack, rather than exploring a more unified
solution.
this, Sahidullah et al., [70] and Font et al., [19] published comparative
analyses of the spoofing countermeasures that focused solely on replay
attacks. According to their work, published during 2017–2018, replay attacks
attracted the most attention in the wake of the ASVspoof 2017 challenge. The
results of [70], using the ASVspoof2015 dataset, demonstrated that an ensemble
of acoustic features, together with machine learning models, produced more
accurate results than individual classifiers. Kamble et al. [36] also
published a work that covered a subset of the specific speech corpora along
with the evaluation measures in this field.
Font et al. [19] performed a comparative analysis of nine different
countermeasures with cross-corpus evaluation of the countermeasures. However,
the authors only discussed the performance of the countermeasures against
replay attacks due to the nature of the dataset used for the study. Although
Sahidullah’s (2016) [70] study and Font’s (2017) [19] paper each provided a
comprehensive analysis of the replay attack, other types of spoofing attacks
that could severely disrupt an ASV system were ignored. The following year, in
2019, Sahidullah et al. [69] surveyed four different types of spoofing
attacks, spoofing procedures, and countermeasures, summarized some spoofing
challenges, and presented countermeasures. However, a thorough examination of
all aspects of the ASV system was still wanting. For instance, [70] and [19]
focused only on front-end countermeasure design and concentrated on the SS,
VC, replay, and mimicking attack types alone, whereas speech corpora,
protocols, classifiers, and evaluation metrics all contribute equally to the
construction of an ASV system; these aspects were absent from [69] as well.
Aside from [79], most of the articles did not present a detailed taxonomy of
modern voice presentation attack detection (PAD) methods in which the work was
structured around identified elements. That survey, along with [56],
broadened the classification of relevant work and built on the taxonomy from
the most recent work on voice PAD. The significant contribution of these
articles was the presentation of trends and analyses of voice PAD that were
absent in the other survey articles.
computation mechanisms of traditional and modern speech feature extraction, as
well as datasets and a combination of various front- and back-end techniques.
None of these articles, however, performed a fair experimental analysis of the
existing state-of-the-art countermeasures. In addition, existing reviews and
survey articles also lack cross-corpus evaluation in order to show the
generalizability of existing solutions. The comparison of the existing reviews
and surveys is presented in Table I.
Existing spoofing countermeasures have employed diverse features and
classifiers and, typically, have had their performance evaluated on only one
of several datasets, i.e., ASVspoof 2017, ASVspoof 2019, and ASVspoof 2021,
using different metrics. With no standardization, it is difficult to declare a
single countermeasure as the unified method that works best. Thus, there
exists a need to provide a thorough analysis of existing countermeasures in
order to show which method is the best fit for a certain scenario, i.e.,
spoofing type, dataset, single or multi-hop attack, etc. To the best of our
knowledge, no comparative analysis work on multiple voice spoofing attacks,
including single and multi-order attacks, has ever been presented. Moreover,
existing studies have ignored the important aspect of cross-corpora
evaluation, which is crucial to the evaluation of the generalized nature of
the countermeasure. To address these challenges for the existing
countermeasures, we present a comparative analysis of various spoofing
countermeasures. Motivated by discussions, the survey proposed in this paper
investigates significant contributions to the ASV system development chain.
More specifically, the main contributions of our work are:
* •
We present a baseline survey of voice spoofing countermeasures, involving
diverse factors, for the benefit of upcoming anti-spoofing systems.
* •
We present a comprehensive analysis of existing state-of-the-art voice
spoofing attacks, countermeasures (hand crafted features, along with deep
learning and end to end solutions), publicly available datasets, and the
performance evaluation parameters used in voice spoofing.
* •
We present an experimental analysis of different state-of-the-art
countermeasures, on several classifiers, measuring spoofing detection
performance on the VSDC, ASVspoof 2019, and ASVspoof 2021 datasets, and
evaluate the results across corpora.
* •
We address a key limitation raised by researchers in the field, cross-corpus
evaluation and generalization, by evaluating the performance of the featured
countermeasures on three large-scale, publicly available, and diverse
datasets, i.e., ASVspoof 2021, ASVspoof 2019, and VSDC, in terms of min-tDCF
and EER. The countermeasures are tested against four different machine
learning and deep learning classifiers.
TABLE II: Literature collection protocol. Preparation Protocol | Description
---|---
Purpose | • To identify current state-of-art voice spoofing attacks and countermeasures. • To critically compare both popular approaches and the features used in voice spoofing countermeasures. • To investigate open challenges that still need investigation in the domain of voice spoofing countermeasures, and paths forward.
Sources | Google Scholar, IEEE Xplore, Springer Link
Query | Queries were used on the data sources above for collection of sources: Audio Spoof Countermeasure/ Spoofed Audio Detection/ Audio Synthesis Detection/ Audio Replay Detection/ Audio Spoofing Countermeasures/ Automatic Speaker Verification/ Secure ASV System/ ASV spoofing/ Presentation Attacks/ Voice Conversion/ Adversarial Attacks on ASV/ Voice Replay Attacks/ Anti Spoofing Countermeasures/ Voice Spoofing Detection.
Method | Literature was categorized as follows: • Audio spoofing detection and countermeasures based on traditional and deep-learned methods, as well as handcrafted features. • Existing voice spoofing attacks, along with adversarial attacks and their taxonomy. • An extensive examination, testing and comparison of existing voice spoofing countermeasures using single and cross corpus evaluation. • Detailed discussion of the research gap, issues, limitation, and future trends in voice spoofing detection and countermeasures.
Size | A total of 150 papers were retrieved from the sources above using these queries, until the search was discontinued on 04-20-2022. We selected literature that was relevant to the subject of audio spoof detection and excluded works that were not relevant or were white papers/articles.
Inclusions and Exclusions | Preference was given to peer-reviewed journal papers and conference proceedings articles. In addition, articles from the archive literature were also taken into account.
## Literature collection and selection criteria
In this survey, we review existing research papers that focus on techniques
for detecting and countering audio spoofing attacks. A detailed description of
the approach and protocols employed for the review is given in Table II. We
also display trends in spoofing attacks and countermeasures year over year by
examining Google Scholar papers released in the previous 6 years (2015–2022).
Figure 1 depicts the course of the published study. The rest of the paper is
structured as follows: Section 3 provides current voice spoofing
countermeasures. Section 4 discusses voice spoofing attacks as well as
adversarial attacks, and Sections 5 and 6 discuss publicly available data
sources and performance evaluation strategies. Section 7 describes the
acoustic features and classifiers used in a comparative analysis of the
countermeasures. Sections 8 and 9 present the experimental setup and analysis,
as well as a discussion of limitations and future trends. Finally, Section 10
provides our conclusions and motivation for future work.
Figure 1: Number of papers in the area of voice spoofing attacks and
countermeasures, by year of publication.
## Voice Spoofing Attacks
ASV systems are vulnerable to a variety of voice spoofing attacks, i.e.,
physical-access (PA) and logical-access (LA) attacks. These and other possible
attacks on ASV systems are graphically illustrated in Figure 2. PA attacks are
easier to conduct, compared to LA attacks, due to the availability of modern
high-quality recording devices. A replay attack can be generated using a
device as simple and commonly available as a mobile phone, compared to the
robust and sophisticated AI voice cloning algorithms required for LA attacks.
Hence, voice replay attacks are more commonly employed to spoof ASV systems in
order to accomplish an attacker's objectives, e.g., to gain entry to
someone's home. Moreover, we have demonstrated in our earlier work [6] that
voice replays can be generated not only in a single order but also in multi-
order attacks, by using different smart speakers such as Amazon Alexa and
Google Home.
Voice spoofing countermeasures, also known as presentation attack detection
(PAD) systems, aim to detect the following attacks (see also Figure 3):
* •
Direct attacks can further be broken down into device artifacts and
algorithmic artifacts, which are A) Physical access (PA) and B) Logical
Access (LA) scenarios, respectively.
* –
PA attacks occur when the samples are applied as an input to the ASV system
through the sensor, such as a microphone.
* *
Replay attacks are one example of ASV PAs. Replay attacks occur when the audio
of an authorized user is recorded and played back in order to deceive an
authentication system. Replay attacks are the easiest to attempt, as they do
not require any special technical knowledge, and can easily deceive ASV
systems. Because replayed audio includes background and/or other noise,
however, it can be detected.
* *
An impersonation attack occurs at the microphone level, where a person changes
how they talk to mimic the speech characteristics of a legitimate user. An
attacker can fool an ASV system if the impostor’s natural voice has similar
features.
* –
LA attacks occur when the audio samples bypass the sensor and are injected
directly into the model. Spoofing attacks based on logical access include C)
voice conversion, D) speech synthesis, and E) Adversarial attacks.
* *
Voice conversion uses an imposter’s natural voice to generate artificial
speech in order to match the targeted speaker’s voice. Attacks are usually
created by other models to fool a specific ASV system. Machine learning has
allowed the mapping of speech features between speakers to be accurate, and is
now computationally efficient enough to make ASV systems vulnerable to these
types of attacks. However, these attacks can still be detected because they
are not a perfect match to genuine audio.
* *
Speech synthesis attacks, also known as deepfake attacks, are similar to voice
conversion but use text as an input to the model in order to generate a voice
clip similar to a targeted speaker in an effort to fool an ASV system. Models
that generate accurate speech features can be trained using a small data set
of recorded audio.
* •
Indirect attacks include adversarial attacks, where the audio signal remains
unchanged while the attacker modifies the signal’s properties during ASV
processing. The recent work on adversarial attacks is discussed later in the
paper.
Figure 2: Existing Threats to Automatic Speaker Verification Systems.
### Proposed countermeasures to Adversarial Attacks
Adversarial attacks, such as the addition of Gaussian noise, can create small
perturbations in audio samples and pose a threat to ASV anti-spoofing models,
affecting accuracy and safety. The method described in [96] uses self-
supervised learning to leverage the knowledge of unlabeled data, which
significantly improves performance. It creates high-level representations,
extracted by the self-supervised model, and a layer-wise noise-to-signal ratio
(LNSR) to quantify the effectiveness of deep models in countering adversarial
noise. During training, different masking methods are applied to 15% of the
audio samples to create a model that reconstructs audio.
This defense is passive, i.e., not a proactive defense. Their proposed method,
called Mockingjay, filters the audio. This model was only tested on the
Projected Gradient Descent (PGD) and Fast Gradient Sign Method (FGSM) attacks
but not on other attacks. Also, as epsilon (the strength of an attack)
increases, the defense starts to weaken and eventually fails. Both white-box
and black-box scenarios were tested.
Another defensive method, [95], uses spatial smoothing, filtering, and
adversarial training. This method has only been tested against PGD attacks and
may perform poorly under other, perhaps stronger, attacks because different
attacks leave different patterns on the sample. These defense methods improved
the robustness of spoofing countermeasures against PGD attacks, but they are
still limited in their overall defensive capability because adversarial
training becomes weaker against unfamiliar attacks. The work of [98] builds a
model that fits the distribution of genuine speech: given genuine speech as
input, it produces output with the same distribution, whereas for spoofed
speech it generates a very different output, amplifying the distributional
difference from genuine speech. Therefore, they propose a
genuinization transformer that uses genuine speech features with a
convolutional neural network (CNN). The genuinization transformer is then used
together with a Light Convolutional Neural Network (LCNN) system for the
detection of synthetic speech attacks. It was trained on the ASVspoof2019
dataset with Constant Q Cepstral Coefficients (CQCC) and Linear Frequency
Cepstral Coefficients (LFCC) as features, and achieved an EER of 4.07% and a
min-TDCF of 0.102. However, replay attacks were not done, and this model was
not tested across corpora. This paper, [105], introduces an inversion module
that derives four features: inverted constant-Q coefficients (ICQC), inverted
constant-Q cepstral coefficients (ICQCC), constant-Q block coefficients
(ICBC), inverted constant-Q linear block coefficient (ICLBC). The first two
use conventional Discrete Cosine Transform (DCT) and the latter two use
overlapped block transform. These features for synthetic speech detection are
evaluated using the ASVspoof 2015, noisy ASVspoof 2015, and ASVspoof 2019
logical access databases, and achieve an EER of 7.77% and a min-TDCF of 0.187.
Suthokumar et al. [74] compare how genuine and spoofed speech vary
across different phonemes, and identify certain phonemes (fricatives,
nasals, stops, and pauses) which are more informative in the detection of
replay attacks. The paper created four different fusion scoring methods to
incorporate phonetic information using phoneme-specific models, and achieved
an EER of 6.18% on the ASVspoof2017 V2 dataset. However, this model has to be
fused with a phoneme-independent model for better results. For logical
scenarios, this paper uses feature extraction, a densely connected network, a
squeeze and excitation block, and a feature fusion strategy. For physical
scenarios, their method consists of feature extraction, a multi-scale residual
network, an SE block and weighted average strategy. Both methods for logical
and physical scenarios combine physical and physiological characteristics to
resist spoofing attacks.
Adversarial attacks may also pose a threat to ASV systems. According to [27],
ASV systems are prone to adversarial attacks, which may reduce the accuracy of
such systems by up to 94%. Attacks can range from simple Gaussian noise to
more advanced attacks created in a targeted white-box setting. These
perturbations, imperceptible to humans, may cause audio classification and ASV
systems to fail completely.
Figure 3: Taxonomy of the Voice Spoofing Attacks.
TABLE III: A Comparison of Adversarial Defensive Methods for Anti-Spoofing Models
Paper | Technique | Dataset | Adversarial Testing | Key Performance | Limitations
---|---|---|---|---|---
[96] | Mockingjay with SSL | ASVspoof 2019 LA and LibriSpeech | FGSM and PGD | 90% accurate | Limited to 2 attack types
[95] | Spatial Smoothing and Adversarial Training | ASVspoof 2019 LA | PGD | 92.40% (SENet), 98.60% (VGG) | Limited to 1 attack only
TABLE IV: Comparison of Adversarial Attack Defense for ASV Systems
Author | Technique | Dataset | Tested Adversarial Attacks | Best Evaluation Performance | Limitations
---|---|---|---|---|---
[27] | Adversarial Lipschitz Regularization | Librispeech | PGD, FGSM, Carlini-Wagner | 73% adversarial accuracy | Fails with stronger attacks
[92] | Neural Vocoders | Voxceleb1 and Voxceleb2 | BIM | AUC: 99.94 | Single attack testing
[93] | TERA models | Voxceleb1 | BIM | 22.94% EER | Only tested on 1 type of attack
[94] | SSLMs | Voxceleb1 | BIM, FGSM, JSMA | R-Vector 14.59% GenEER, X-Vector 11.29% GenEER | Degrades performance
Adversarial training is another way to reduce the effectiveness of an
adversarial attack, as done by [27]. In that paper, adversarial training
(including adversarial samples in the training dataset) and Adversarial
Lipschitz Regularization (ALR) are used to reduce the impact of adversarial
attacks. ALR is based on a function that ignores small changes in the input.
Adversarial training is beneficial for defense when the attacks are similar to
each other, but not all attacks will be similar. This method starts to fail
when introduced to different types of attacks, some stronger than others, and
this failure is evident when they are evaluated. The work of [92] uses neural
Vocoders to re-synthesize audio and finds differences between the ASV scores
of an original audio sample and a re-synthesized audio sample. This method
detects adversarial samples by first purifying the sample, i.e., removing
adversarial noise while generating the genuine waveform with reduced
distortion. This method is beneficial because it does not need to know the
attack algorithm used. This model is tested purely on a Basic Iterative Method
(BIM) adversarial attack, which extends an FGSM attack where FGSM is performed
multiple times with a small step size. Adversarial attack types exhibit
different effects, so judging performance on many types of attacks is
difficult. Further, only one type of neural vocoder was used. Vocoders, when
used to reconstruct phase information, inevitably introduce noise and
distortion. The authors claim that this noise is actually beneficial, but this
may not be entirely accurate. In addition, self-supervised learning-based
models are more common when it comes to detecting and purifying adversarial
samples. Self-supervised learning may be used to reconstruct audio, which is
beneficial when it comes to adversarial attacks on audio.
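A sketch of the score-difference test behind this purification idea follows; `asv_score` and `resynthesize` are hypothetical placeholders for an ASV back end and a pretrained neural vocoder, since [92] does not reduce to a single library call.

```python
def is_adversarial(waveform, enroll_emb, asv_score, resynthesize, tau):
    """Detection-by-purification in the spirit of [92]: score the trial
    before and after vocoder resynthesis and flag large score shifts.
    Genuine audio survives resynthesis with a similar ASV score, while
    adversarial perturbations are largely destroyed by it."""
    s_orig = asv_score(waveform, enroll_emb)
    s_resyn = asv_score(resynthesize(waveform), enroll_emb)
    return abs(s_orig - s_resyn) > tau   # tau tuned on held-out data
```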
The work of [93] uses self-supervised learning-based models for adversarial
defense on ASV systems by using Transformer Encoder Representations from
Alteration (TERA) models. The authors use self-supervised learning in a
similar way to Mockingjay, which was a considerable influence on this paper.
This method is beneficial because it does not require knowledge about how the
adversarial samples were generated, as it purifies the samples. The authors
use Gaussian, mean, and median filters for audio samples. Similarly, the work
of [94] uses Self Supervised Learning Models (SSLMs) to remove superficial
noise from the inputs, and reconstructs clean samples from the interrupted
ones. They have two modules; one module purifies audio, while the other
compares the original and purified samples in order to detect adversarial
samples.
## Taxonomy of Voice Spoofing Countermeasures
This section provides a detailed analysis of existing state-of-the-art voice
spoofing techniques adopted to detect spoofing attacks as well as
countermeasures developed to combat those attacks. The countermeasure taxonomy
illustrated in Figure 4 includes a comprehensive classification of the
countermeasures.
### Conventional Handcrafted Spoofing Countermeasures
The countermeasures for audio spoof detection are classified into two
categories: conventional handcrafted features and enhanced deep learning
solutions. For voice spoofing detection, handcrafted features are commonly
employed, i.e., MFCC [112], CQCC [83], and others, with conventional machine
learning classifiers, e.g., Gaussian Mixture Models (GMMs) [7], Support Vector
Machine (SVM) [87], etc.
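As a concrete illustration of this front-end/back-end split, the sketch below trains one GMM per class on MFCC frames and scores a trial by its average log-likelihood ratio; file paths, model sizes, and the decision threshold are illustrative rather than any published system's configuration.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_mfcc(path, sr=16000, n_mfcc=20):
    """Front end: one row of MFCCs per frame."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_gmm(feature_list, n_components=64):
    """Back end: fit a diagonal-covariance GMM on pooled frames of one class."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(np.vstack(feature_list))

def llr_score(gmm_bona, gmm_spoof, feats):
    """Average per-frame log-likelihood ratio; positive leans bonafide."""
    return gmm_bona.score(feats) - gmm_spoof.score(feats)

# gmm_bona = train_gmm([extract_mfcc(p) for p in bonafide_paths])
# gmm_spoof = train_gmm([extract_mfcc(p) for p in spoof_paths])
# accept = llr_score(gmm_bona, gmm_spoof, extract_mfcc("trial.flac")) > 0.0
```

Swapping `extract_mfcc` for a CQCC, LFCC, or IMFCC front end reproduces the design pattern of most systems reviewed in this section.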
In [65], the author’s novel acoustic features were derived from the frequency-
warping and block transformation of filter bank log energies. A two-level Mel
and speech-based wrapping was used with an Overlapped Block Transformation
(OBT) to extract the conventional and inverted classification features.
Although the proposed features achieved a 0.99 classification accuracy on the
ASVspoof 2015 corpus development section, they were only applicable when the
spoofing attacks were known in advance. Aside from this, the performance of
the proposed approach was not evaluated and reported in the presence of
unknown adversarial attacks. Further, the approach was not validated across
corpora due to a lack of open-source datasets. In the ASVspoof2015 challenge,
the unified articles concentrate on countermeasures for speech synthesis and
voice conversion spoofing attacks. Conversely, replay attacks pose the
greatest threat to ASV systems. These attacks involve the playback of
recordings, acquired from registered speakers, in order to generate false
authentication tokens, and may be efficiently carried out using common
consumer devices.
Figure 4: A detailed taxonomy of existing voice spoofing countermeasures.
ASVspoof2017, the second in the series of challenges, focused on the
development of replay attack countermeasures. In [39], Kinnunen et al. mainly
followed this line of thinking, utilizing front-end features, i.e., CQCC,
MFCC, PLP, RFCC, Inverted Mel Frequency Cepstrum Coefficients (IMFCC), LPCC,
SCMC, SSFC, and HPCC, with a baseline GMM-UBM classifier. The performance of
these countermeasures was evaluated on the ASVspoof2017 dataset, where the
proposed systems achieved an EER of 31.5% for genuine vs. replay and 1.8% EER
for genuine vs. zero-effort imposters. In addition, the average detection EER
of all main submissions was 25.91%, whereas the best single system result had
an average detection EER of 6.73%. When these results were compared to those
from the prior challenge (ASVspoof2015), the detection of replay attacks was
found to be more complex than detecting speech synthesis and voice conversion
spoofing attacks. In [70], Sahidullah et al. present the first comparative
evaluation of six countermeasures (CMs) and their integration with automatic
speaker verification using the ASVspoof 2015 dataset. The CMs contain MFCCs,
IMFCCs, LFCCs, Cochlear Filter Cepstral Coefficients (CFCCs), CQCCs, and
Gammatone Frequency Cepstral Coefficients (GFCCs) as front-end cepstral
features, which are then coupled with GMM-UBM and i-Vector classifiers for
back-end speaker classification. According to the results presented in this
paper, the CM using CQCC features and the fusion of all six CMs has the
greatest potential to detect spoofing attacks in the context of spoofing
attacks. More significantly, the fusion of CM and CQCCs achieves the lowest
EER of 0.02%.
Cascading integration of ASV and CMs significantly decreases the False
Acceptance Rate (FAR). Although the significance of CQCC features is
demonstrated in this study, the presented system is not evaluated against
other forms of spoofing attacks, e.g., replay, deepfake, adversarial, or
others. Further, countermeasures are not tested cross-corpus. The results of
[70] demonstrate that an ensemble of acoustic features, together with machine
learning models, can produce more accurate results than individual
classifiers. Consequently, in [30], the author presents an ensemble of the
acoustic features of CQCCs, along with other classical features, i.e., MFCCs
and Perceptual Linear Predictive (PLP) features. The author proposes an
ensemble classifier set that contains numerous GMMs, i.e., Gradient Super
Vector-Boosting Decision Tree (GSV-GBDT) and GSV-Random Forest (GSV-RF)
classifiers. The experimental results show that the presented ensemble system
significantly outperforms the baseline GSV-SVM system. Specifically, using a
baseline GSV-SVM classifier, this approach achieves an EER of 10.4% on CQCC
features, 27.4% on MFCC features, and 37.0% on PLP features. In contrast, a
minimal EER of 9.5% is achieved by employing the ensemble model. Similarly to
[65], the biggest limitation of [30] is that it has only been tested against a
single replay attack. Other types of spoofing attacks, i.e., speech synthesis,
voice conversion, and the most current deepfake and adversarial attacks, are
not reviewed, and there is no cross-corpus examination of the techniques. In
[4], the authors examine the robust audio features, comprised of handcrafted
and auto-encoder-based learned features, to identify replay spoofing attacks.
The handcrafted features employed in this study are CQCCs, LPCCs, IMFCCs,
Rectangular Filter Cepstral Coefficients (RFCCs), Sub-band Centroid Frequency
Coefficients (SCFCs), and Sub-band Centroid Magnitude Coefficients (SCMC), as
well as spectrogram features. The paper uses an auto-encoder to learn a dense
representation of all of the features. A conventional GMM, along with a
Universal Background Model (UBM), is used as the baseline system to examine
the performance of the handcrafted and encoder-based features. The integrated
fused models, based on existing audio and machine-learned features, achieve
comparable results, with an equal error rate (EER) of 12.0%. In
particular, the handcrafted CQCC features outperform all other features, with
an EER of 17.5%. The best encoder-based feature observed was a spectrogram
with a minimum EER of 20.2%. Although the coupling of handcrafted features
with an auto-encoder-based system surpasses the state-of-the-art, the
presented system is only tested against replay attacks. In the presence of
other types of spoofing attacks, e.g., voice conversion and adversarial
attacks, the system’s performance tends to vary.
Rather than using standard standalone short-term power spectrum coefficients,
i.e., MFCCs, LFCCs, CQCCs, etc., to identify the spoofing attacks, the authors
of [81] introduced phase-based Teager Energy Operator (TEO) features. It was
discovered that the TEO phase features gave information that is complementary
to the information provided by the more commonly used CQCC, MFCC, and LFCC
feature sets. Although the TEO phase features were unable to perform well
alone, fusion with traditional features enhanced the accuracy of the spoofing
detection system. The results demonstrated that the standalone spoof detection
systems, developed with the TEO phase, MFCC, and LFCC, achieved an EER of
31.34%, 34.02%, and 16.80%, respectively, whereas, when the TEO phase feature
set was fused with the CQCC, MFCC, and LFCC feature sets, the EER was lowered
by 0.18%, 2.74%, and 1.41%, respectively. This improvement in system
performance showed the influence of TEO phase information on the spoof
detection system. However, the provided solution was only stated to be
resistant to replay spoofing attacks, and had no cross-corpus validation.
In accordance with [81], the authors of [63] present phase spectrum and multi-
resolution wavelet features, in addition to the commonly used front-end MFCC
features. This study combines MFCCs with Mel-Frequency Principal Coefficients
(MFPCs), CosPhase Principal Coefficients (CosPhasePCs), and Mel Wavelet Packet
Coefficients (MWPCs) to provide a reliable and robust defense against spoofing
attacks. The experimental results indicate that applying principal component
analysis (PCA) to MFCCs results in a considerable EER improvement over MFPC
features for all spoofing methods. However, the MFCC features prove inferior
in comparison with other front end features. In contrast, the implementation
of CosPhasePC modestly decreases EER in comparison to the MFPC features.
Although the multi-resolution wavelet transform features outperform the state-
of-the-art, achieving 0.05% EER for all known attacks, the proposed framework
has not been tested and reported cross-corpus and against unknown attacks.
Accordingly, following the significant performance of phase-oriented features,
the author sheds light on the importance of acoustic front-end features and
introduces a novel detection mechanism by modeling replayed speech as a
convolution of original speech [80]. Also in [80], the author proposes a novel
feature set, Magnitude-based Spectral Root Cepstral Coefficients (MSRCC) and
Phase-based Spectral Root Cepstral Coefficients (PSRCC), which outperforms the
baseline system (CQCC) and provides a 29.18% EER on the evaluation set of the
ASVspoof 2017 challenge database. With the GMM back-end classifier, the front-
end features MSRCC and PSRCC respectively produce 18.61% and 24.35% EER.
Conversely, with convolutional neural network (CNN) back-end classifiers,
MSRCC and PSRCC obtain 24.50% and 26.81% EER, respectively. In addition, the
score-level fusion of MSRCC and PSRCC results in 10.65% and 17.76% EER using
GMM and CNN classifiers, respectively. These findings suggest that the
proposed feature sets of MSRCC and PSRCC capture complementary information.
However, no cross-corpus examination was reported.
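Score-level fusion of this kind reduces to combining normalized subsystem scores and re-reading the error rate off the fused scores. A minimal sketch (fusion weights would in practice be tuned on a development set):

```python
import numpy as np

def eer(scores, labels):
    """Equal error rate from per-trial scores (higher = more bonafide);
    labels: 1 for bonafide, 0 for spoof."""
    order = np.argsort(scores)[::-1]
    labels = np.asarray(labels)[order]
    far = np.cumsum(labels == 0) / max((labels == 0).sum(), 1)        # spoof accepted
    frr = 1.0 - np.cumsum(labels == 1) / max((labels == 1).sum(), 1)  # bonafide rejected
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

def fuse(scores_a, scores_b, w=0.5):
    """Score-level fusion: convex combination of z-normalized subsystem scores."""
    za = (scores_a - scores_a.mean()) / scores_a.std()
    zb = (scores_b - scores_b.mean()) / scores_b.std()
    return w * za + (1.0 - w) * zb
```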
The research in [8] shows the effectiveness of augmenting genuine training
data in the simulation of replay spoofing attacks. The author presents replay
spoofing countermeasure systems that improve the CQCC-GMM baseline with score
level fusion. Instead of using CQCC, the author used spectrograms as inputs to
analyze end-to-end feature representations. Finally, the author replaced the
subsequent GMM classifier with a Fully-connected Deep Neural Network (FDNN)
and a Bidirectional Long-Short-Term Memory neural network (Bi-LSTM). The
results of the experiments show that this data augmentation technique can
significantly increase the system’s performance. In particular, the baseline
CQCC-GMM model achieved an EER of 22.29%, while the DA-CQCC-GMM model obtained
an EER of 19.18% and the fused system achieved an EER of 16.39%. Although the
end-to-end FDNN and Bi-LSTM-based systems perform well for replay detection,
other sorts of current deepfake and adversarial attacks may have led the
system to perform inadequately. Furthermore, the provided system was not
evaluated in the presence of unknown attacks and audio recordings across
corpora.
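A minimal PyTorch sketch of such a Bi-LSTM back end over spectrogram frames follows; the layer sizes and time pooling are illustrative assumptions, not the exact architecture of [8].

```python
import torch
import torch.nn as nn

class BiLSTMSpoofDetector(nn.Module):
    """Bidirectional LSTM over spectrogram frames with average pooling,
    producing bonafide-vs-spoof logits."""
    def __init__(self, n_bins=257, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_bins, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, spec):               # spec: (batch, frames, n_bins)
        out, _ = self.lstm(spec)           # (batch, frames, 2*hidden)
        return self.head(out.mean(dim=1))  # pool over time, then classify

model = BiLSTMSpoofDetector()
logits = model(torch.randn(4, 300, 257))   # e.g. a batch of 300-frame spectrograms
```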
Yang et al. [106] developed a Low-Frequency Frame-wise Normalization (LFFN)
technique to capture replay distortions. LFFN was combined with Constant-Q
Transforms (CQT) to extract two features: Constant-Q Normalization
Segmentation Coefficients (CQNSC) and Constant-Q Normalization Cepstral
Coefficients (CQNCC). This approach performed well on the ASVspoof 2017
version 2.0 dataset, with an EER of 10.63% on CQNSC and 10.31% on CQNCC
features. The novel Acoustic Ternary Patterns-Gammatone Cepstral Coefficient
(ATP-GTCC) feature was proposed in [53] to help develop a lightweight model
for single and multi-order replay attack detection. An SVM classifier was used
to better capture the harmonic distortions found in multi-order replay
samples, while ATP-GTCC was employed as a front end feature. This model
achieved an EER of 0.6% and 1% on the Voice Spoofing Detection Corpus (VSDC)
and the ASVspoof 2019 dataset, respectively. This system exceeded earlier
state-of-the-art techniques in terms of replay detection performance and
efficiency. However, the given approach is limited to single-order and multi-
order replay attacks, and has not been tested against other types of attacks,
including voice conversion, deepfake, and adversarial attacks.
In [100], Wu et al. use a common benchmarking database to analyze the
vulnerability of text-dependent speaker verification systems to the replay
attack and propose an anti-spoofing technique to protect the ASV systems. The
main idea behind this spoofing detection approach is to use a similarity score
to determine whether the provided sample is identical to any previously stored
voice samples. A back-end HMM classifier is trained using the front-end MFCC,
LPCC, and spectro-temporal features. Experiments on the RSR2015 database
reveal that in the presence of a replay attack, the EER and FAR, which were
identical at 2.92%, increased to 25.56% and 78.36%, respectively, proving the
speaker verification system’s vulnerability to replay attacks. In the presence
of replay spoofing attacks, the proposed spoofing countermeasure lowers the
FARs from 78.36% and 73.14% to 0.06% and 0.01% for male and female samples,
respectively. In [24], the authors use a feature fusion of GTCC, MFCC,
Spectral Flux, and Spectral Centroid for input audio presentation. This
countermeasure successfully detects multiple types of logical access attacks
and classifies the cloning algorithms used to produce the synthetic speeches.
It achieves an EER of 3.05%, compared to baseline models that achieve an EER
ranging between 5.06% and 9.57%. The authors of [3] present a voice spoofing
countermeasure using ELTP-LFCC features and a Deep Bidirectional LSTM to
combat TTS synthesis and converted voice samples in LA attacks. In this paper,
ELTP is further fused with LFCC to better capture the characteristics of the
vocal tract speech dynamics of both bonafide voice and cloning algorithm
artifacts. On the diverse ASVspoof 2019-LA dataset, performance evaluation
yields an EER of 0.74% and a min-tDCF of 0.008. However, the presented system
is only tested on one dataset.
Rigorous experimentation was performed to illustrate the significance of the
proposed countermeasure for detecting LA-based voice spoofing attacks. In the
literature, frequency or amplitude modulation-based features were also
explored for voice spoofing detection. Gunendradasan et al. [23] used Spectral
Centroid Deviation (SCD) features to develop a replay attack detection system.
The Spectral Centroid Frequency (SCF) and Spectral Centroid Magnitude
Coefficient (SCMC) features were extracted from the same front-end as SCD and
used as complementary features. These features were used to train a GMM
classifier for replay attack detection. This method was evaluated on the
ASVspoof2017 dataset and provided a 60% improvement in EER as compared to the
ASVspoof CQCC baseline model.
Existing methods have also used various handcrafted features to address both
physical (replay) and logical access (clone, voice conversion) spoofing.
Monteiro et al. [57] proposed a system that can detect both logical and
physical access audio attacks against ASVs via an ensemble of three different
models. The baseline systems of LFCC-GMM and CQCC-GMM obtained an EER of 8.09%
and 9.57%, respectively, on the evaluation set. Although this system achieved
an EER of 9.87% for voice cloning attacks, for replay attacks, the baseline
systems achieved an EER of 11.04% and 13.54%, whereas the proposed system
achieved an EER of 1.74%. Wu et al. [104] proposed an ASV spoofing and
countermeasures initiative with more focus on speech synthesis and voice
conversion spoofing attacks (presentation attacks). The authors also described
the post-evaluation results of the ASV systems achieved by the fusion of
several spectral features, i.e., MFCC, CFCCIF, and MFPC, as well as the back-
end classifiers, such as GMM, SVM, SVM-Fusion, and SVM with i-Vectors.
Some methods also exploit high-frequency components in order to capture the
traits of bonafide and spoofed signals. The work in [90] captured the high-
frequency content by using the inverted-MFCC (IMFCC), LPCC, LPCCres, CQCC,
MFCC, and Cepstrum features. This hybrid feature representation was then used
to train a GMM for the classification of bonafide and spoofed samples. Due to
the increased feature computation cost, this method was not suitable for local
deployment on resource-constrained voice-controlled systems. The work in [22]
also highlighted the idea of analyzing the high-frequency band for replay
spoof detection. For this purpose, transmission line cochlea amplitude
modulation and transmission line cochlea frequency modulation features were
employed to train a GMM for replay attack detection. This method was evaluated
on the ASVspoof2017 dataset and achieved an EER of 7.59%. Although this method
[22] performed only slightly better than the baseline model, amplitude
modulation-based features have a high computational cost, since modulation
requires more than twice the amplitude frequency of the signal.
In the literature, non-voice segments have also been explored to capture the
distortions of playback speech. Saranya et al., in [72], analyzed the channel
and reverberation information from non-voiced segments of the input audio for
replay spoofing detection. A voice activity detector was used to determine the
non-voiced segments, then a hybrid feature vector comprised of CQCC, MFCC, and
Mel-Filter bank-Slope was employed to capture the remnant vocal tract
information in the non-voiced segments of the input audio. These features were
later used to train a GMM for the classification of bonafide and replayed
audio samples. The performance of this method was evaluated on the
ASVspoof2017 dataset, and showed an improvement of 37% in EER over the
baseline method. In the work of [105], the authors introduced an inversion
module to derive four features: ICQC, ICQCC, ICBC, and ICLBC. Two of them used
conventional DCT, and the other two used overlapped block transforms. These
features for synthetic speech detection were evaluated using the ASVspoof
2015, noisy ASVspoof 2015, and ASVspoof 2019 logical access datasets, and
achieved an EER of 7.77% and a min-tDCF of 0.187. Despite being tested on
multiple datasets, given advances in spoofing attacks, e.g., adversarial or
unknown attacks, these features may not perform well in a real-world scenario.
### Discussion
We performed a survey to determine the most frequently used features for
spoofing audio and speech countermeasures. Each handcrafted feature was
carefully designed and has its own dependencies to differentiate between
genuine and spoofed voice samples. Despite the fact that each spoofing attack
adds a distinct distortion to the speech sample in order to mislead an ASV
system, some recently published crafted features worked very well in the
classification of physical and logical attacks. Although the reported hand-
crafted features have been evaluated on several datasets, including
ASVspoof2015, ASVspoof2017, ASVspoof2019, and others, they are dependent on
the data source used to train and classify the spoofed and bonafide speech
samples. There is a significant need for a standardized, unified dataset to
train and validate the accuracy of the published feature sets in order to
choose the best among them. Furthermore, the existing features have only been
tested on training datasets so far, with no cross-corpus assessment.
Consequently, the list of features we have chosen covers the bulk of the
power, frequency, duration, amplitude, and phase spectrum in order to
distinguish between bonafide and spoofed voice samples. Moreover, we offer a
cross-corpus examination of the features and test performance on standard
datasets. The experiment and results section provides the details of the
performance analysis on the selected speech samples.
### Deep learning based Countermeasures
Recent years have witnessed a rise in the use of deep learning approaches to
prevent audio spoofing attacks. However, despite their benefits, handcrafted-
feature approaches were computationally complex and required powerful
computing resources. Inspired by the great success of deep learning in
automatic speech recognition, deep neural network (DNN) based systems were
developed for spoofing detection for the first time in 2015. In [10], a novel,
simple model for detecting spoofing attacks on a speaker verification system
was developed. The presented model was used to extract key features from audio
samples and construct compact, abstract, and resilient deep data
representations. A spoofing-discriminant network was used in the training of
spoofing algorithms. The proposed network then computed the s-vector, which is
the utterance level average of the final hidden layers. Finally, Mahalanobis
distance, along with normalization, was used, in conjunction with computed
s-vectors, to detect spoofing attacks. This model achieved the 3rd position in
the first spoofing detection challenge, ASVspoof 2015. In particular, the
proposed model attained an overall minimum EER of 2.281%. Specifically, the
presented system obtained an EER of 0.046% for known and 4.516% for unknown
attacks. Although the proposed system performed well in the ASVspoof 2015
competition, it has not been examined across corpora, and its performance
against spoofing techniques beyond those in the challenge remains untested.
Existing methods also employed handcrafted features to train deep neural
networks for spoofing detection. In [60], high-frequency regions were analyzed
by proposing the High Frequency Cepstral Coefficients (HFCC) feature and using
a back-end DNN model to classify bonafide and spoofed (replay) audio. When
comparing HFCC features to CQCC features using a baseline GMM back-end, HFCC
outperformed CQCC in the development and evaluation sets of ASVspoof2017. HFCC
received an EER of 5.9% in the development set and 23.9% in the evaluation
set, while CQCC received 11.0% EER in the development set and 24.7% in the
evaluation set. An attentive filtering network system was proposed in [45] to
detect replay attacks against ASV systems, and the proposed method enhanced
feature representations in both the frequency and time domains. This system
obtained an EER of 6.09% and 8.54% on the development and evaluation sets of
ASVspoof 2017, respectively, on a system comprised of two Attentive Filtering
models (one using sigmoid and the other using softmax as their activation
functions). This method achieved better results than the CQCC-GMM baseline
model, which obtained an EER of 12.08% and 29.35%. However, the combination of
feature extraction and the model’s architecture required significant
computational resources.
In [25], segment-based linear filter bank features, along with an attention-
enhanced DenseNet-BiLSTM model, were proposed for replay attack detection.
These features were extracted from the silent segments of the audio signal.
Triangular filters were used to examine noise in the high-frequency band of
3–8 kHz. The baseline system of CQCC-GMM obtained an EER of 2.36% on the
development set and an EER of 8.42% on the evaluation set of ASVspoof2017,
while features of this method, when used with a GMM, obtained an EER of 1.8%
and 7.92% on the development and evaluation sets, respectively. One limitation
of this method was the sensitivity of the signal-to-noise ratio during
segmentation of the silence and voiced components, where low SNR resulted in
false segmentation.
The work of [98] also uses LPS features, but enhances them
before passing them to an LCNN. The authors create a model that fits the
distribution of genuine speech, i.e., one that takes genuine speech as the
input and generates genuine speech as the output. However, if the speech is
spoofed, the output will be very different. They propose a genuinization
transformer that uses genuine speech features with a convolutional neural
network (CNN). The genuinization transformer is then used with an LCNN system
for the detection of synthetic speech attacks. The model achieves an EER of
4.07% and a min-tDCF of 0.102 on the ASVspoof2019 dataset, using CQCC and LFCC
features. However, this method has not been tested across corpora. The work of
[52] proposes a Conv1D Resblock with a residual connection, which allows the
model to learn a better feature representation from raw waveforms. They find
that feeding a raw waveform directly into a neural network is adequate. Only
the ASVspoof2019 dataset is used, and it achieves an EER of 2.98% and a min-
tDCF of 0.082. The work of [76] is an extension of a previously proposed GAT-
ST and uses a raw waveform. This captures discriminative cues in both the
spectral and temporal domains. The proposed RawGAT-ST model uses a one-
dimensional convolution layer to ingest raw audio. This end-to-end
architecture uses feature representation learning and a GAT that learns the
relationships between cues at different sub-band and temporal intervals. This
model uses the raw waveforms from the ASVspoof 2019 dataset and achieves an
EER of 1.06% and a min-tDCF of 0.0335. However, this model is only tested on
one dataset. Yang and Das, in [107], propose a new feature that can capture
discriminative information between natural and spoofed speech. The proposed
feature, eCQCC-STSSI, uses CQT to transform speech from the time domain to the
frequency domain. A DCT is used to decorrelate the feature dimensions and
concentrate the energy of the logarithmic octave power spectrum and
logarithmic linear power spectrum, respectively. The two DCT outputs are
concatenated to form the eCQCC feature vectors. Even though this feature was
tested on both ASVspoof2015 and ASVspoof2017, it has not been tested on other
datasets and may fail to properly extract features. The authors of [50]
concatenate four sub-features on the ASVspoof2019 dataset. These features,
Short-Term Spectral Statistics Information (STSSI), Octave-band Principal
Information (OPI), Full-band Principal Information (FPI), and Magnitude-Phase
Energy Information (MPEI), are fused to generate the delta acceleration
coefficients as features for spoofing detection such as CQSPIC, CQEPIC, and
CESPIC. The fusion of the features achieves an EER of 7.63% and a min-tDCF of
0.178, but has not been tested in many scenarios. In [108], the authors propose a
heuristic feature extraction method based on Multi Level Transform (MLT),
which extracts valuable information from the octave power spectrum for
spoofing attack detection. It relies on MLT to extract relevant information
from previous DCT results. The authors apply it to the ASVspoof2015 and
ASVspoof2017v2 datasets, and achieve an EER of 14.45%. However, it is only
tested on two domains. This paper, [74], compares genuine and spoofed speech
across different phonemes and shows that specific phonemes (fricatives,
nasals, stops, and pauses) are more informative in the detection of replay
attacks. The paper creates four different fusion scoring methods to
incorporate phonetic information using phoneme-specific models. This method is
tested on the ASVspoof2017 V2 dataset, and achieves an EER of 6.18%. However,
it has to be fused with a phoneme-independent model for the best results. The
work of [88], which also looks at sound characteristics, creates Voice-Pop,
which identifies a live user by detecting the pop noise naturally incurred by
a user breathing while speaking close to the microphone. The authors use their
own dataset with GFCC features and achieve an EER of 5.4%. However, this
method was only tested in one domain. A few solutions use lightweight deep
learning systems to detect replay spoofing. A lightweight CNN [47], based on
Max-Feature-Map (MFM) activation, is used to detect replay attacks in
[97]. MFM is able to minimize the dimensionality by using the most relevant
features for classification. This method, [97], is extended in [5] to
investigate the efficacy of angular margin-based softmax activation in
training a light CNN for cloning and replay spoof detection.
### End-to-End Countermeasures
End-to-end techniques have been shown to be effective in a variety of domains.
However, relatively little work has been done in the domain of voice spoofing
countermeasures to date. In [17], an end-to-end system was proposed to detect
replay attacks against ASVs. This system used a combination of a CNN, LSTM,
and DNN to take in raw audio as input and classify the audio into genuine,
synthetic/cloned, or replay. This model was evaluated on the ASVspoof 2015 and
BTAS 2016 datasets and achieved a half total error rate (HTER) of 1.56%,
compared to 2.96% for the GMM-PLP-39 baseline model. This system may be
enhanced in the feature extraction stage by employing better time and
frequency filtering networks. Recently, in [32], the author proposes a novel
heterogeneous stacking graph attention layer that models artifacts spanning
heterogeneous temporal and spectral domains with a heterogeneous attention
mechanism and a stack node. In concert with a new max graph operation that
involves a competitive mechanism and an extended readout scheme, their
approach, named AASIST, achieves an EER of 0.83% and a min-tDCF of 0.0275 on
ASVspoof2021. However, the proposed method is only tested on one dataset.
In [77], a graph attention network (GAT) is proposed that works by applying a
self-attention mechanism to GNNs and modeling graph-structured data. Each node
in the graph is weighted according to its relevance to other nodes. GATs can
be used to model a specific sub-band or temporal segment using high-level
representations extracted from deep residual networks. It achieves a min-tDCF
of 0.0089 on the ASVspoof2019 dataset. However, GAT operates on filter bank
outputs, not on the waveform. It is only tested on one dataset.
### Discussion
The majority of these papers do not test across multiple datasets to establish
the generalizability of the solution, perhaps due to the lack of publicly
available datasets at the time of publication. However, there are numerous
datasets available for testing right now, and a detailed analysis of cross-
corpus evaluation of the solutions is presented in Table V. This analysis
shows that there are only two articles [83, 6] that have performed the cross-
corpus evaluation to date. However, there are some research gaps in the work
of these two papers, and they do not perform a thorough analysis of cutting-
edge techniques. Some of these papers use raw waveforms, which can save time
in feature extraction but need to be verified under controlled conditions.
Also, the depth of each model varies, which can increase accuracy on test sets
but take longer to train. In terms of deployment, ideally models need to be
lightweight enough to be run on smart speakers. New machine learning
techniques for the audio domain can be adapted from the video domain, which is
generally more developed. In a rapidly changing world, models need to be
adaptable to new challenges, yet some of these models are
designed for a specific task. New models need to be versatile enough for
various conditions. Additionally, these models need to be unbiased, an aspect
that has not been tested.
TABLE V: Cross-corpus evaluations of existing methods. Author | Corpus | Cross-Corpus
---|---|---
[3] | ASVspoof 2019 LA | No
[24] | ASVspoof 2019 LA | No
[31] | chsc2011 & GTZAN | No
[44] | NIST 2001 & 2006 & SRE | No
[73] | In-house dataset | No
[23] | ASVspoof 2017 v1.0 | No
[39] | ASVspoof 2017 | No
[60] | ASVspoof 2017 | No
[104] | ASVspoof 2015 | No
[59] | SPINE2000 | No
[67] | NIST 2010 evaluation | No
[72] | ASVspoof 2017 dataset | No
[22] | ASVspoof 2017 | No
[100] | Part I-RSR2015 corpus | No
[106] | ASVspoof 2017 v2.0 | No
[8] | ASVspoof 2017 | No
[55] | VSDC & ASVspoof 2019 | No
[13] | ASVspoof 2017, 2019 PA | No
[12] | ASVspoof 2017 | No
[83] | ASVspoof 2015, 2017 & RedDots | Yes
[109] | ASVspoof 2017 V2.0 | No
[95] | ASVspoof 2019 LA | No
[96] | ASVspoof 2019 LA | No
[37] | ASVspoof 2015,2017 | No
[90] | ASVspoof 2017 | No
[89] | ASVspoof 2019 | No
[6] | ASVspoof 2017,2019 & ReMASC | Yes
[17] | BTAS2016 dataset | No
[57] | ASVspoof 2019 | No
[81] | ASVspoof 2017 | No
[30] | ASVspoof 2017 | No
[71] | ASVspoof 2015 | No
[62] | VoxForge | No
[80] | ASVspoof 2017 | No
[25] | BTAS2016 & ASVspoof2017 | No
[45] | ASVspoof 2017 | No
### Unified voice spoofing countermeasures
By and large, existing countermeasures for voice spoofing attacks exclusively
target one type of attack (e.g., voice replay or speech synthesis), and only
recently few works have been reported that utilize a unified approach, capable
of dealing with multiple types of voice spoofing attack. In [29], the authors
introduce a novel ATCoP feature descriptor for the detection of voice
presentation attacks, capable of recognizing both LA and PA attacks. Although
a good all-around solution, this unified anti-spoofing technique is the most
effective at detecting single- and multi-order replay attacks. The experimental
results show that the presented approach is effective when tested on four
distinct datasets, with either replay or clone forgeries. Although this method
is evaluated against several datasets, the success of the given ATCoP
descriptors is not reported for DF attacks. In another work [28], the authors
recognized a novel hybrid voice spoofing attack, a cloned replay, which may
also be used to spoof an ASV system. A cloned replay attack duplicates the
audio of a target speaker and transmits it to an ASV system. The author
establishes the basis for a spoofing countermeasure capable of detecting
multi-order replay, cloning, and cloned-replay attacks through the use of the
proposed ATP-GTCC features. This approach is capable of identifying state-of-
the-art (SOTA) voice spoofing attacks with a unified solution.
Rostami and Homayounpour [68] proposed an effective attention branch
network (EABN) for detecting LA and PA attacks. The provided technique [68]
achieves EERs of 0.86% and 1.89%, and min-tDCFs of 0.02 and 0.50, against PA
and LA attacks, respectively. Despite outperforming SOTA approaches on LA and
PA attacks, [68] was not tested on deepfake attacks. In contrast, in [46],
SENet and ResNet were combined with statistical pooling to handle anti-
spoofing with deeper and faster-trained DNNs. It consisted of a SENet, a
median-standard ResNet, a dilated ResNet, and an attentive filtering network.
A GNN back-end classifier was implemented using CQCC and LFCC features. It
obtained an EER of 0.59% for PA and 6.7% for LA attacks. Because the
data was obtained directly from the ASVspoof2019 dataset with no data
augmentation or cross-corpus dataset, the proposed method’s performance was
not checked against DF attacks and may decline in a real-world scenario. In
[11], the authors presented Emphasized Channel Attention, Propagation, and
Aggregation Time-Delayed Neural Networks (ECAPA-TDNN) as their primary model.
The authors’ intention was to tackle the issue of channel variability by
employing an acoustic simulator in order to enhance the original datasets with
transmission codecs, compression codecs, and convolutional impulse responses.
The presented method attained an EER of 5.46% and a min-tDCF of 0.3094 in the
2021 LA task and an EER of 20.33% on the DF task. Although the presented work
prevents several types of attacks, it needs testing and reporting against PA
attacks. To the best of our knowledge, no countermeasures have been reported
for all four types of voice spoofing attacks, i.e., LA, PA, DF, and cloned
replay attacks. However, the community’s attention has lately switched to
spoofing-aware ASV systems, as mentioned in the section below. New integrated
systems have been presented that perform spoofing detection and automatic
speaker verification simultaneously, in addition to unified techniques for
cutting-edge spoofing attacks.
### Integrated spoofing-aware ASV systems
The research community for secure voice-enabled systems is currently focused
on integrating research efforts on speaker verification and anti-spoofing.
Unlike existing anti-spoofing systems, which focus on spoofing detection in
isolation, speaker verification may also be embedded in order to build an
integrated system. A solution is considered an integrated spoofing-aware ASV
system if it is jointly optimized to protect against spoofing attacks while
also performing speaker verification.
The first paper in this regard is [35], in which the authors propose the
spoofing-aware speaker verification (SASV) challenge, which integrates speaker
verification and anti-spoofing. In this challenge, the organizers encourage
the development of integrated SASV systems that use new metrics to evaluate
joint model performance by releasing official protocols and baseline models.
The authors extend speaker verification by including spoofed trials in
addition to the standard set of target and impostor trials. Unlike the
existing ASVspoof challenge, which focuses on separate spoofing detection and
speaker verification systems, SASV aims to develop jointly optimized secure
ASV solutions. Open-source, pre-trained spoofing detection and speaker
verification models are used in two baseline SASV solutions. Participants have
free access to both models and baselines, which can be used to develop back
end fusion approaches or end-to-end solutions. The top performing system
reduces the equal error rate of a conventional speaker verification system
from 23.83% to 0.13% when tested with target, bonafide non-target, and spoofed
non-target trials. The SASV challenge results demonstrate the dependability of
today’s cutting-edge approaches to spoofing detection and speaker
verification. In [111], Zhang et al. develop a probabilistic
framework for combining the ASV and CM subsystem scores. In addition to the
probabilistic framework, the authors propose direct inference and fine-tuning
strategies, based on the framework, to predict the SASV score. Surprisingly,
these strategies reduced the SASV EER of the baseline to 1.53% in the official
SASV challenge evaluation trials. The author validates the efficacy of the
proposed modules through ablation studies and provides insights through score
distribution analysis.
The authors of [51] address the improvement of spoofing robustness in
automatic speaker verification (ASV) systems in the absence of a separate
countermeasure module. To address this issue, the ASVspoof 2019 baseline
model is used with a probabilistic linear discriminant analysis (PLDA)
back-end classifier. Three
unsupervised domain adaptation techniques are used to optimize the back-end
using audio data from the ASVspoof 2019 dataset training partition. The
results show significant improvement in both the logical and physical access
scenarios, particularly in the latter, where the system is attacked by
replayed audio, with maximum relative improvements of 36.1% and 5.3% in
bonafide and spoofed cases, respectively. However, it is observed that
absolute error rates on spoofed trials remain too high. This
demonstrates the challenge of making a conventional speaker embedding
extractor with a PLDA back-end work on a mix of bonafide and spoofed data.
Over the last few years, research has improved the performance of ASVs and
countermeasure systems, resulting in low EERs for each system. However,
research on the joint optimization of both systems is still in its early
stages. This paper, [82], proposes a Spoof-Aggregated-SASV (SA-SASV), an
ensemble-free, end-to-end solution for developing an SASV system with multi-
task classifiers. An SA-SASV system is further optimized by multiple losses
and more flexible training set requirements. The proposed system is trained
using the ASVspoof2019-LA dataset. SA-SASV EER results show that training on
complete automatic speaker verification and countermeasure datasets can
improve model performance even further. The results show that the SA-SASV
feature space outperforms previously published approaches in terms of
distinguishing spoof attacks and speaker identification. Furthermore, the SA-
SASV EER is reduced from 6.05%, produced by previous state-of-the-art
approaches, to 4.86% when no ensemble strategy is used. The article argues
that a larger set of data and distinct encoders will further improve the EER.
## Countermeasures
This section provides a detailed discussion of the selected countermeasures
used for the comparative analysis of voice spoofing detection methods. For this
comparative analysis, we have selected a variety of audio features, including
the baseline features of the ASVspoof challenges, which are employed in the
voice spoofing detection literature. However, it should be noted that the
individual features are compared to one another, excluding feature fusion, to
investigate the effectiveness of the features themselves and not their
complementary interactions. Each feature was carefully crafted to offer as
much coverage as possible while maintaining a relatively small comparative
list, and each has its own distinct approach to spoofing detection. To fairly
choose this list of auditory features, we conducted a survey of the
literature from 2015 to the present to determine the 14 most effective and
commonly used feature descriptors. We employ the frequently used machine
learning-based classifiers, i.e., GMM and SVM, and also evaluate the
performance of the features on deep learning CNN and CNN-GRU
classifiers. The next section discusses the extraction of the SOTA
countermeasures.
### Constant Q Cepstral Coefficient (CQCC) [83]
The ASVspoof 2019 challenge defined two baseline systems; one used the
Constant Q cepstral coefficient (CQCC) while the second employed linear
frequency cepstral coefficients (LFCC) [83], each with a GMM as the back-end
classifier. CQCC features use the constant Q transform (CQT), which provides
frequency-dependent resolution: greater temporal resolution at higher
frequencies and greater spectral resolution at lower frequencies. This
variable-resolution analysis allows speech characteristics to be better
captured at differing frequencies. CQCC extraction is performed by first
applying the CQT to the audio sample and computing the power spectrum of the
resulting coefficients, which is then log-scaled. After log scaling, uniform
re-sampling is performed, and finally a discrete cosine transform (DCT) is
applied to obtain the final cepstral coefficients. The extraction of CQCC
features is performed as follows:
$X^{CQ}(k,n)=\sum_{j=n-\left\lfloor N_{k}/2\right\rfloor}^{n+\left\lfloor N_{k}/2\right\rfloor}x(j)a^{*}_{k}(j-n+N_{k}/2)$ (1)
$a_{k}(n)=\frac{1}{C}w\left(\frac{n}{N_{k}}\right)\exp\left[i\left(2\pi n\frac{f_{k}}{f_{s}}+\Phi_{k}\right)\right]$ (2)
$C=\sum_{l=-\left\lfloor N_{k}/2\right\rfloor}^{\left\lfloor N_{k}/2\right\rfloor}w\left(\frac{l+N_{k}/2}{N_{k}}\right)$ (3)
$f_{k}=f_{1}2^{\frac{k-1}{B}}$ (4)
$Q=\frac{f_{k}}{f_{k+1}-f_{k}}=(2^{1/B}-1)^{-1}$ (5)
$N_{k}=\frac{f_{s}}{f_{k}}Q$ (6)
$CQCC(p)=\sum_{l=1}^{L}log|X^{CQ}(l)|^{2}cos[\frac{p(l-\frac{1}{2})\pi}{L}]$
(7)
where $a^{*}_{k}(n)$ denotes the complex conjugate of $a_{k}(n)$, $N_{k}$
denotes the variable window length, $w(t)$ is a window function (e.g., the
Hann function), $f_{k}$ is the center frequency of bin $k$, $f_{1}$ refers to
the center frequency of the lowest bin, and $B$ determines the number of bins
per octave.
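To make this pipeline concrete, the following is a minimal Python sketch of the CQT–log–resample–DCT chain, assuming the librosa and SciPy libraries are available; the bin counts, resampling length, and number of coefficients are illustrative choices rather than the settings of [83].

```python
import numpy as np
import librosa
from scipy.fftpack import dct
from scipy.signal import resample

def cqcc(y, sr=16000, bins_per_octave=96, n_octaves=7, n_uniform=512, n_coeff=20):
    """Sketch of CQCC extraction: CQT -> log power spectrum -> resampling -> DCT."""
    # Constant Q transform: geometrically spaced bins give the variable resolution.
    cq = librosa.cqt(y, sr=sr, n_bins=bins_per_octave * n_octaves,
                     bins_per_octave=bins_per_octave)
    log_power = np.log(np.abs(cq) ** 2 + 1e-10)
    # Uniform resampling maps the geometric frequency scale onto a linear one.
    uniform = resample(log_power, n_uniform, axis=0)
    # DCT yields the cepstral coefficients (equation 7).
    return dct(uniform, type=2, axis=0, norm='ortho')[:n_coeff]
```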
### Mel-Frequency cepstral coefficients (MFCC) [58]
The Mel-Frequency Cepstral Coefficients (MFCC) [58] feature is well known and
has been used successfully in various forms of audio recognition. To obtain
MFCC features, the audio sample is divided into a number of equal windows.
Following that, an FFT is performed to acquire the spectrum, which is then
passed through a Mel-scaled filterbank made of triangular bandpass filters. Log-
scaling is then performed, and the cepstral coefficients are obtained using a
DCT.
MFCCs are computed as follows:
$MFCC(q)=\sum_{m=1}^{M}log[MF(m)]cos[\frac{q(m-\frac{1}{2})\pi}{M}]$ (8)
$MF(m)=\sum_{k=1}^{K}|X^{DFT}(k)|^{2}H_{m}(k)$ (9)
where $k$ refers to the discrete Fourier transform (DFT) index and $H_{m}(k)$
denotes the triangular weighting function for the m-th Mel-scaled band-pass
filter.
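MFCC extraction is widely available off the shelf; a minimal sketch using librosa, where the file name and parameter values are illustrative:

```python
import librosa

# Load an utterance (hypothetical file name) at a 16 kHz sampling rate.
y, sr = librosa.load("utterance.wav", sr=16000)
# 20 MFCCs computed from a 26-filter Mel filterbank (illustrative values).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20, n_mels=26)
```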
### Inverse Mel-Frequency cepstral coefficient (IMFCC) [9]
One variation of MFCC, the inverse Mel-Frequency cepstral coefficients [9],
has also been employed for voice spoofing detection. These features capture
important traits of the audio signal by analyzing the high-frequency
components using an inverted Mel-scaled filter bank made up of triangular
band-pass filters. IMFCC is obtained as follows:
$IMF(m)=\sum_{k=1}^{K}|X^{DFT}(k)|^{2}H_{m}(k)$ (10)
$IMFCC(q)=\sum_{m=1}^{M}log[IMF(m)]cos[\frac{q(m-\frac{1}{2})\pi}{M}]$ (11)
where $k$ is the DFT index and $H_{m}(k)$ denotes the triangular weighting
function for the m-th inverted Mel-scaled band-pass filter.
### Linear Frequency Cepstral Coefficient (LFCC) [113]
Another common variant of MFCC is the Linear Frequency Cepstral Coefficient
(LFCC) [113], which is derived via the same process as MFCC but uses a linear
frequency filter bank in place of the Mel filter bank. In the linear frequency
filter bank, the bandwidth of each triangular filter remains constant, in
contrast to the Mel filter bank, whose filter bandwidth increases with
frequency. Because of this, LFCC has been proven to be more robust than MFCC
in detecting artifacts at higher frequencies [31].
$f(m)=(\frac{N}{F_{s}})(f_{l}+m\frac{f_{h}-f_{l}}{M+1})$ (12)
where $f(m)$ are the frontier points of the linear filterbank, $f_{l}$ and
$f_{h}$ are the lowest and highest frequencies of the band, $N$ is the DFT
size, $F_{s}$ is the sampling frequency, and $M$ is the number of filters.
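A minimal numpy sketch of this construction, assuming a precomputed one-sided power spectrogram as input; the filter and coefficient counts are illustrative:

```python
import numpy as np
from scipy.fftpack import dct

def linear_filterbank(n_filters, n_fft, fs, f_low=0.0, f_high=None):
    """Triangular filters of equal bandwidth on a linear frequency scale (eq. 12)."""
    f_high = f_high or fs / 2
    # Frontier points f(m): equally spaced edges between f_low and f_high, in bins.
    edges = np.linspace(f_low, f_high, n_filters + 2) * n_fft / fs
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        lo, c, hi = edges[m - 1], edges[m], edges[m + 1]
        for k in range(int(np.ceil(lo)), int(np.floor(hi)) + 1):
            # Rising slope up to the center bin, falling slope after it.
            fb[m - 1, k] = (k - lo) / (c - lo) if k <= c else (hi - k) / (hi - c)
    return fb

def lfcc(power_spec, fs, n_fft, n_filters=20, n_coeff=20):
    """LFCC: linear filterbank energies -> log -> DCT."""
    fb = linear_filterbank(n_filters, n_fft, fs)
    energies = power_spec @ fb.T  # shape: (frames, n_filters)
    return dct(np.log(energies + 1e-10), type=2, axis=1, norm='ortho')[:, :n_coeff]
```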
### Magnitude-based Spectral Root Cepstral Coefficients (MSRCC) and Phase-
based Spectral Root Cepstral Coefficients (PSRCC) [80]
Magnitude-based Spectral Root Cepstral Coefficients (MSRCC) and Phase-based
Spectral Root Cepstral Coefficients (PSRCC) are spectral root features [80]
that capture the magnitude and phase information, respectively, producing
distinct traits for bonafide and spoofed audio. These features are
computed as follows:
$MSRCC(q)=\sum_{m=1}^{M}(MFM(m))^{\gamma}cos[\frac{q(m-\frac{1}{2})\pi}{M}]$
(13)
$MFM(m)=\sum_{k=1}^{K}|S(k)|^{2}H_{m}(k)$ (14)
$PSRCC(q)=\sum_{m=1}^{M}(MFP(m))^{\gamma}cos[\frac{q(m-\frac{1}{2})\pi}{M}]$ (15)
$MFP(m)=\sum_{k=1}^{K}\measuredangle{S(k)}H_{m}(k)$ (16)
where $S(k)$ is the k-point of the DFT of signal $s(n)$ and $H_{m}(k)$ is the
triangular weighting-shaped function for the m-th Mel-scaled bandpass filter.
### Spectral Centroid Magnitude Coefficients (SCMC) and Spectral Centroid
Frequency Coefficients (SCFC) [44]
Spectral Centroid Magnitude Coefficients (SCMC) and Spectral Centroid
Frequency Coefficients (SCFC) are two complementary representations of sub-
band energy in the speech spectrum [44], capturing the centroids of the
magnitude and frequency content of each sub-band. Spectral centroid features
were previously employed for
speaker recognition, and sub-band spectral features have demonstrated
significant efficacy in noisy speech recognition. Centroid features carry
formant-related information, which has been shown to be robust to the presence
of noise in the vocal sample [44]. Spectral centroid frequency (SCF) is the
weighted average frequency for a specific sub-band, where the weights are the
normalized energy of each frequency component of the sub-band. Spectral
centroid magnitude (SCM) is the weighted average magnitude for a given sub-
band. These coefficients are derived as follows:
$SCF_{t}^{i}=\frac{\sum_{k=1}^{M/2+1}f(k)S_{t}(k)w_{i}(k)}{\sum_{k=1}^{M/2+1}S_{t}(k)w_{i}(k)}$ (17)
$SCM_{t}^{i}=\frac{\sum_{k=1}^{M/2+1}f(k)S_{t}(k)w_{i}(k)}{\sum_{k=1}^{M/2+1}f(k)w_{i}(k)}$ (18)
where $S_{t}(k)$ and $f(k)$ are the power spectral magnitude of the t-th
frame, and normalized frequency corresponding to the k-th frequency component,
respectively. SCFCs are extracted directly using linear rectangular filter
banks, whereas SCM is passed through log scaling and DCT to obtain SCMC
features.
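The two centroids are straightforward to compute once the sub-band windows are fixed; a minimal numpy sketch of equations (17) and (18), where the rectangular windows $w_{i}(k)$ are assumed to be supplied as a matrix:

```python
import numpy as np

def spectral_centroids(S, f, W):
    """Per-sub-band SCF and SCM (equations 17 and 18).

    S: power spectral magnitudes of one frame, shape (K,).
    f: normalized frequency of each bin, shape (K,).
    W: rectangular sub-band windows w_i(k), shape (n_subbands, K).
    """
    num = W @ (f * S)        # sum_k f(k) S_t(k) w_i(k)
    scf = num / (W @ S)      # average frequency, weighted by normalized energy
    scm = num / (W @ f)      # average magnitude, weighted by frequency
    return scf, scm
```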
Figure 5: Extraction of GTCC, MFCC, IMFCC, LFCC and RFCC Features.
### All-Pole Group Delay Function (APGDF) [67]
Unlike the features derived from the magnitude spectrum, e.g., MFCCs, the All-
Pole Group Delay Function (APGDF) is extracted from the phase spectrum, which
is largely ignored by the commonly used cepstral coefficients, e.g., GTCC,
LPCC, and CQCC [67]. APGDF provides an efficient way to utilize the
information from the voice signal’s phase spectrum. Group delay functions
employ parametric all-pole models instead of spectral transforms. APGDFs
enhance recognition accuracy and give comparable results to traditional
magnitude-based MFCC features by integrating all-pole analysis, filters, and
group delay functions. Moreover, APGDF features are computationally less
complex than the modified group delay function (MGDF) features [59]. APGDF features are
extracted as follows:
$APGDF(i)=\frac{A_{R}(G)D_{R}(G)+A_{I}(G)D_{I}(G)}{|A(G)|^{2}}$ (19)
$A(G)=|A(G)|e^{i\vartheta(G)}$ (20)
where $A(G)$ and $D(G)$ are Fourier transforms, with the subscripts $R$ and
$I$ denoting their real and imaginary parts, $|A(G)|$ refers to the magnitude
spectrum, and $\vartheta(G)$ represents the phase spectrum. Finally, a DCT is
used to convert the group delay function into cepstral coefficients [67].
### Rectangular Filter Cepstral Coefficient (RFCC)
Another tested feature is the Rectangular Filter Cepstral Coefficient (RFCC).
Similar to MFCC features, RFCCs are extracted using rectangular filters
spaced on a linear scale instead of triangular filters on the Mel scale. RFCC
features were initially proposed for use in automatic speaker recognition
systems operating under noisy conditions.
Figure 6: Extraction of CQCC, RPS, SCMC, SCFC, MSRCC, PSRCC and APGDF
Features.
### Linear prediction cepstral coefficients (LPCC) [91]
Linear Prediction Cepstral Coefficient (LPCC) features have been historically
used to capture the emotion information found in speech. Unlike other cepstral
features, LPCCs use linear prediction analysis prior to cepstral analysis. The
audio signal first undergoes band limiting, then a high-emphasis filter is
applied. After high-emphasis, a 30ms hamming window is applied to the windows
at a 10ms interval. The next step is to calculate the first through tenth
linear predictor coefficients through the use of auto-correlation. Next, these
linear predictor coefficients are transformed into the cepstral coefficients
by using the following equations:
$\begin{split}&c_{1}=a_{1}\\ &c_{n}=\sum_{k=1}^{n-1}(1-k/n)a_{k}c_{n-k}+a_{n}\ \ \ \ 1<n\leq p\end{split}$ (21)
where $c_{i}$ and $a_{i}$ are the i-th cepstral and linear predictor
coefficients, respectively.
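A minimal sketch of this recursion, using librosa's LPC routine (which uses Burg's method rather than the autocorrelation analysis described above) to obtain the predictor coefficients:

```python
import numpy as np
import librosa

def lpcc(y, order=10):
    """LPC predictor coefficients, then the cepstral recursion of equation (21)."""
    # librosa.lpc returns [1, a_1, ..., a_p] of the prediction-error filter;
    # the predictor coefficients used below are their negatives.
    a = -librosa.lpc(y, order=order)[1:]
    c = np.zeros(order)
    c[0] = a[0]                    # c_1 = a_1
    for n in range(2, order + 1):  # c_n for 1 < n <= p
        c[n - 1] = a[n - 1] + sum(
            (1 - k / n) * a[k - 1] * c[n - k - 1] for k in range(1, n))
    return c
```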
### Gammatone Cepstral Coefficients (GTCC) [85]
Gammatone Cepstral Coefficients (GTCCs) [85] are similar to MFCC features, but
are much more robust to noise than MFCC, offering superior performance in
classification tasks. A gammatone filter provides more frequency components in
the low-frequency range with narrow bandwidth while providing fewer frequency
components in the high-frequency range with wider bandwidth, effectively
revealing the spectral information. GTCC features are computed by applying an
FFT to each frame in order to get the spectrum. The spectra of each window are
then passed through a gammatone filter bank to obtain the energy at each
subband. A logarithmic function is then applied to these energies, and
finally, a DCT is applied to obtain the GTCCs. Typically, 13 to 20
coefficients are extracted, which is considered optimal for audio signal
analysis. GTCCs are
extracted using the following equation:
$GTCC_{m}=\sqrt{\frac{2}{N}}\sum_{n=1}^{N}log(X_{n})cos[\frac{\pi n}{N}(m-\frac{1}{2})]\ \ \ \ 1\leq m\leq M$ (22)
where $X_{n}$ is the signal energy of the nth spectral band, N is the number
of gammatone filters used, and M is the number of GTCCs to be extracted.
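A minimal sketch of this chain, assuming SciPy's gammatone filter design; the center-frequency spacing, frame sizes, and filter count are illustrative, and the orthonormal DCT absorbs the $\sqrt{2/N}$ scaling of equation (22):

```python
import numpy as np
from scipy.signal import gammatone, lfilter
from scipy.fftpack import dct

def gtcc(y, fs=16000, n_filters=32, n_coeff=13, frame=400, hop=160):
    """GTCC sketch: gammatone filterbank energies -> log -> DCT."""
    # Center frequencies on a logarithmic scale (illustrative ERB-like spacing).
    cfs = np.geomspace(100, 0.9 * fs / 2, n_filters)
    n_frames = 1 + (len(y) - frame) // hop
    energies = np.zeros((n_frames, n_filters))
    for i, cf in enumerate(cfs):
        b, a = gammatone(cf, 'iir', fs=fs)  # 4th-order gammatone (IIR approximation)
        out = lfilter(b, a, y)
        for t in range(n_frames):
            seg = out[t * hop: t * hop + frame]
            energies[t, i] = np.sum(seg ** 2)
    # Log energies, then a DCT across the filter axis gives the GTCCs.
    return dct(np.log(energies + 1e-10), type=2, axis=1, norm='ortho')[:, :n_coeff]
```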
### Relative Phase Shift (RPS) [73]
Relative Phase Shift (RPS) features [73] are a representation of harmonic
phase information. A harmonic analysis model represents each frame of a signal
as a sum of sinusoids harmonically related to the fundamental frequency. The
RPS representation calculates the instantaneous phase of every harmonic
relative to the phase of the fundamental component at a fixed point over the
period of the signal. RPS features reveal a structured pattern in the phase
information of voiced segments, and can be derived from the following model:
$h(t)=\sum_{k=1}^{N}A_{k}cos(\phi_{k}(t))\ \ \ \ \phi_{k}=2\pi kf_{0}t+\theta_{k}$ (23)
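Given the instantaneous harmonic phases $\phi_{k}$ estimated from such a model, the relative phase shift of harmonic $k$ is its phase minus $k$ times the phase of the fundamental, wrapped to $(-\pi,\pi]$; a minimal numpy sketch, assuming the phases have already been estimated:

```python
import numpy as np

def relative_phase_shift(phases):
    """RPS of each harmonic: phi_k minus k * phi_1, wrapped into (-pi, pi].

    phases: instantaneous phases phi_k of harmonics k = 1..N at one instant.
    """
    k = np.arange(1, len(phases) + 1)
    return np.angle(np.exp(1j * (phases - k * phases[0])))
```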
A graphical representation of the feature extraction pipelines is presented in
Figures 5 and 6. The next section contains the experimental details, data
sources, and system information used to perform the experimentation and the
comparative analysis of the features.
## Publicly Available Datasets
This section of the paper discusses and identifies the various datasets used
in cutting-edge ASV systems. Early stages of voice spoofing detection research
involved speech and speaker recognition databases, e.g., YOHO [43], NIST [1],
and WSJ [18]. However, to accurately account for research progress, there was
a dire need for a common dataset as well as a performance metric to evaluate
spoofing countermeasures. Consequently, this need was discussed and addressed
at the INTERSPEECH 2013 special session on spoofing and ASV countermeasures.
This special session motivated the research community to organize the first
Automatic Speaker Verification Spoofing and Countermeasures Challenge,
ASVspoof, in 2015, which took place at INTERSPEECH that year. The dataset
released for this challenge included two types of spoofing attacks: synthetic
speech (SS) and voice conversion (VC). In the years following, three
additional ASVspoof challenges were organized: ASVspoof 2017, ASVspoof 2019,
and ASVspoof 2021, each with publicly available datasets for download. There
are several common publicly available datasets which are used by voice PAD
researchers, e.g., ASVspoof, AVspoof, ReMASC, the Spoofing and Anti-spoofing
(SAS) corpus, RedDots, VoxCeleb, voicePA, and BioCPqD-PA, among others. We made a
concerted effort to cover all of the existing datasets (2015–2022) used for
spoof detection and countermeasure development. Details of the publicly
available datasets which address spoofing attacks are presented in Figure 7.
### Spoofing and Antispoofing (SAS) corpus [101]
The Spoofing and Antispoofing (SAS) corpus [101] contains a wide range of
spoofed speech samples generated using nine different approaches, two of which
are speech synthesis, and the other seven are voice conversion. This database
[101] contains two protocols: one for testing the ASV system and another for
generating spoofed speech sounds. Consequently, they enable the speech
synthesis community to create spoofing materials attentively while remaining
unaware of speaker verification spoofing and anti-spoofing. Periods of silence
not found in natural speech were removed from the samples, resulting in a more
realistic Speech Synthesis (SS) and Voice Conversion (VC) spoof corpus (Wu et
al., 2015a, 2015b, 2015c). The SAS corpus contains speech produced using
various spoofing methods, represented in 300,000 samples of each type. Without
using a countermeasure, ASV systems were extremely vulnerable to spoofing
attacks in SAS [101].
### RedDots [48, 40]
The RedDots project [48] was launched on January 29, 2015, as a follow-up to a
special session at INTERSPEECH 2014, with collaboration from multiple sites.
Its goal is to collect speech data through mobile crowdsourcing, which allows
for a larger population and greater diversity. At the time of our
investigation, the project had 89 speakers from 21 countries, 72 men and 17
women, for a total of 875 complete sessions. The purpose of this effort is to
bring together research efforts to investigate potential approaches and gain
a better understanding of speaker, channel, and phonetic variability. The
RedDots dataset [48] is intended to have a higher number of recording
sessions per speaker, with less time spent on each.
for a year. Because of this, each session is limited to two minutes, with a
total of 24 sentences (10 common, 10 unique, 2 free choices, and 2 free texts)
for each session. This dataset [48] includes a diverse set of inter- and
intra-speaker types. Following that, the replayed RedDots database [40] was
created by re-recording the original corpus utterances under different
environmental conditions. Both of these databases assist in the development of
replay-resistant ASV systems because the original RedDots provided genuine
utterances. An overview of the RedDots database [48] is presented in Table
VII.
### AVspoof [42]
The AVspoof dataset is designed to assist ASV systems in the development of
anti-spoofing techniques. This database was used in the BTAS 2016 [42, 18]
challenge. The AVspoof database includes replay spoofing attacks in addition
to synthetic speech and VC spoofing attacks. The replay attacks are generated
by various recording devices. The SS attacks were generated using Hidden
Markov Model (HMM) based synthesis, while Festvox accounts for the vast
majority of the VC attacks. These sessions are recorded by participants in a
variety of environments and with a variety of recording devices. Speakers are
instructed to read sentences and phrases, and to speak freely about any topic,
for 3 to 10 minutes. To make the competition more difficult, "unknown"
attacks are included in the test set [42]. The organizers of the challenge
provide a baseline evaluation system based on the open source Bob toolbox
[42]. In the baseline system, simple spectrogram-based ratios serve as
features, and logistic regression is used as a pattern classifier [42]. The
statistics of the database are summarized in Table VI.
TABLE VI: A summary of AVspoof Database. Subset | Genuine | Spoof (PA) | Spoof (LA)
---|---|---|---
Training | 4973 | 38580 | 17890
Development | 4995 | 38580 | 17890
Evaluation | 5576 | 43320 | 20060
### VoxCeleb [61]
VoxCeleb [61] is an audio-visual dataset comprised of short clips of human
voices extracted from YouTube interview videos. Each segment lasts at least 3
seconds. VoxCeleb features speech from people of various ethnicities, accents,
professional backgrounds, and ages. 61% of speakers are male and 39% are
female. The data was collected randomly, with ambient noise, laughter,
overlapping speech, pose deviation, and a variety of lighting conditions. This
dataset is available in two versions: VoxCeleb1 [61] and VoxCeleb2 [14]. Each
has audio files, face clips, metadata about speakers, and so on, in the
training and testing sets. The Finnish-language sets assist ASV systems in the
detection of mimicry attacks [86]. The details of the dataset [61] are shown
in Table VII.
### voicePA [41]
The voicePA dataset was created with the assistance of the AVspoof dataset.
Its bonafide data is a subset of the AVspoof dataset’s genuine data, uttered
by 44 speakers in four recording sessions held in various settings [89]. These
sessions were recorded using high-quality microphones from a laptop, a Samsung
S3 phone, and an Apple iPhone 3GS. Spoofed data consists of 24 different types
of presentation attacks that are captured using five devices in three
different environments. These spoof utterances are based on real-world data.
The original dataset’s SS and VC spoofed audio samples are also replayed in
the voicePA dataset [41]. Table VII contains detailed information about the
voicePA [41] dataset.
### BioCPqD‑PA [41]
The Portuguese language BioCPqD‑PA dataset [41] was collected by recording 222
people in a variety of environmental conditions. The dataset was comprised of
27,253 authentic recordings and 391,687 samples, which had been subjected to a
presentation attack. In the presentation audio data, one laptop was used with
24 different setups consisting of 8 loudspeakers and 3 microphones, while
another single laptop was used to capture real data. This dataset [41] was
divided into three parts: training, development, and evaluation. Each part was
collected using a different set of microphones and loudspeakers, and each set
was comprised of the audio recording from all of the participating speakers.
An overview of the BioCPqD‑PA [41] dataset is presented in Table VII.
TABLE VII: A summary of the existing databases. Dataset | Language | Attacks | Speakers | Utterances
---|---|---|---|---
RedDots | 5 | Replay | 3750 | 5000
VoxCeleb | 6+ | Mimicry | 1251 (V1), 7000+ (V2) | 100,000 (V1), 1,000,000 (V2)
VoicePA | - | SS, VC | 44 | -
BioCPqD-PA | 1 | Replay | 222 | 418,940
TABLE VIII: A summary of ASVspoof Challenge 2015 database. | Speaker | Utterances
---|---|---
Subset | Male | Female | Genuine | Spoofed
Training | 10 | 15 | 3750 | 12625
Development | 15 | 20 | 3497 | 49875
Evaluation | 20 | 26 | 9404 | 193404
### ASVspoof 2015 [102]
The ASVspoof 2015 Challenge database is the first significant release for
research into spoofing and countermeasures [102]. For LA attacks, the database
contains natural and spoofed speech generated by speech synthesis and VC.
There are no noticeable effects from the channel or background noise. The
database is split into three sections: training, development, and evaluation.
The evaluation subset is made up of both known and unknown attacks. The known
attacks contain the very same five algorithms that were used to generate the
development dataset and are thus referred to as known (S1-S5) attacks. Other
spoofing algorithms are also included in unknown (S6-S10) attacks that are
directly used in the test data. Table VIII contains a detailed description of
the challenge database.
### ASVspoof 2017 [38]
The ASVspoof 2017 Challenge dataset [38] was built upon the RedDots
dataset [48]. The dataset [38] contains replayed data samples with text-
dependent speech. It included the voices of 42 different speakers, recorded
using 61 combinations of distinct recording devices, replay devices, and
environmental conditions. It was collected over the course of 179 sessions.
The original ASVspoof 2017 database [38] contained some inconsistencies, but
the issue was resolved in ASVspoof 2017 Version 2.0 [16]. In addition to the
corrected data, a more detailed description of recording and playback devices,
as well as acoustic environments, was provided [16]. The details of the
ASVspoof 2017 [38] dataset are presented in Table IX.
TABLE IX: A summary of ASVspoof Challenge 2017 database version 2.0 Subset | Speaker | Genuine | Spoofed
---|---|---|---
Training | 10 | 1507 | 1507
Development | 8 | 760 | 950
Evaluation | 24 | 1298 | 12008
### ReMASC [21]
The ReMASC (Realistic Replay Attack Microphone Array Speech Corpus) is a
database of speech recordings developed to support research into and the
security of voice-controlled systems. The ReMASC database is comprised of
authentic and replayed recordings of speech samples, captured in actual
circumstances, and utilizes cutting-edge voice assistant development kits. In
particular, the ReMASC dataset contains recordings from four systems, each
with a different transmitter and receiver, under a range of acoustic
conditions, with varying levels of background noise and relative speaker-to-
device locations. This is the first database that was specifically developed
to safeguard voice-controlled systems (VCS) from various types of replay
attacks in varied contexts. Table X contains the sampling details of the
ReMASC [21] dataset.
TABLE X: A summary of ReMASC Database. Environment | Subject | Genuine | Replayed
---|---|---|---
Outdoor | 12 | 960 | 6900
Indoor1 | 23 | 2760 | 23104
Indoor2 | 10 | 1600 | 7824
Vehicle | 10 | 3920 | 7644
Total | 55 | 9240 | 45472
Figure 7: Existing Datasets used for Automatic Speaker Verification Systems.
### ASVspoof 2019 [89]
The ASVspoof 2019 challenge [89] is an extension of the previously held
ASVspoof challenges and focuses on countermeasures for all three major attack
types, namely, SS, VC, and replay. It is divided into the logical access (LA)
and physical access (PA) subsets. LA contains TTS and VC spoof speech samples,
and PA has replay-spoof speech. Both of these subsets are further partitioned
into training, development, and evaluation subsets. The training subset is
generated by 8 males and 12 females; the development subset by 4 males and 6
females, and the evaluation subset by 21 males and 27 females. The spoofed
speech signals are generated using one of the two VC and four speech synthesis
algorithms. Details of the ASVspoof 2019 [89] dataset are shown in Table XI.
### ASVspoof 2021 [15]
ASVspoof 2021 [15] is the fourth offering in the series of spoofing challenges
designed to promote spoofing research and the development of countermeasures
to protect automatic speaker verification systems from manipulation. ASVspoof
2021 [15] introduces a new task involving deepfake speech detection, in
addition to a continued focus on logical and physical access tasks, and has a
number of advancements over previous editions. ASVspoof 2021, in particular,
is divided into three sub-challenges. The first is a logical access sub-
challenge that builds on the 2019 challenge by emphasizing robustness to
channel variation; the second is a physical access sub-challenge, similar to
the 2019 setup, but with recordings made in real-world physical environments.
The third is a speech deepfake detection sub-challenge (no ASV). ASVspoof 2021
contains technically difficult data to encourage broad generalization of
countermeasures.
TABLE XI: A summary of ASVspoof Challenge 2019 database. | Speaker | LA Attacks | PA Attacks
---|---|---|---
Subset | Male | Female | Genuine | Spoofed | Genuine | Spoofed
Training | 8 | 12 | 2580 | 22800 | 5400 | 48600
Development | 8 | 12 | 2548 | 22296 | 5400 | 24300
Evaluation | - | - | 71747 | 137457
The logical access (LA) task is similar to that of ASVspoof 2015 and 2019, but
this version adds coding and transmission effects to the text-to-speech (TTS)
and voice conversion (VC) attacks. In comparison to ASVspoof 2017, the
physical access (PA) task includes genuine and replayed samples but with a
more tightly controlled setup. The new speech deepfake task (DF), similar to
the LA task, includes compressed audio. The protocols used in the training and
development sections are the same as they were in ASVspoof 2019, and therefore
ASVspoof 2021 does not include new development or training subsets. There are
new metrics for the evaluation partition, including a slightly revised t-DCF
metric for the LA and PA tasks, whereas the EER is used for the evaluation of
the DF task.
### Voice Spoofing Detection Corpus (VSDC) [6]
The VSDC dataset [6] was designed to be a standard dataset comprising both
real audio recordings from various surroundings and microphones, as well as
spoofed audio files generated through various controlled environments,
spoofing realistic scenarios. This data can be analyzed and utilized to
develop countermeasures to avoid audio spoofing attacks in the replay
category. Since an Internet of Things (IoT) device may be exploited as a point
of replay to another device, audio replay attacks have become more
significant. This offers a perfect atmosphere for carrying out repeat attacks.
This data set was generated with the intention of simulating these attacks in
a controlled environment. While the primary goal of this dataset is to detect
multi-hop replay attacks, it is not restricted to that. It could be used to
investigate typical replay attacks, the influence of different microphones and
surroundings on an audio file, or how an individual’s vocal range affects the
accuracy of a voice control system. Details of the number of samples and the
development architecture can be found in [6].
## Performance Evaluation Parameters
This section describes the performance evaluation criteria used in this
survey, and the specific metrics are defined in the following subsections.
After an extensive review of published work, we observed that Equal Error Rate
(EER) was the primary criterion used for the performance evaluation of voice
spoofing and ASV systems. Of the articles surveyed for this paper, more than
three-quarters evaluated their presented work using EER, for instance [26]. A
small number of the investigated papers picked accuracy as the sole
performance rating criterion; however, the majority of recent work uses
multiple rating criteria to evaluate the performance of the presented method,
e.g., EER with min-tDCF, EER with Half Total Error Rate (HTER), and False
Match Rate (FMR) with False Non-match Rate (FNMR). As not all research works
were examined using the same criteria, comparative analysis of the results
became problematic.
### Equal error rate (EER)
An ASV system either accepts or rejects a claimed identity. A classification can be correct or erroneous in four ways: True Acceptance (TA), True Rejection (TR), False Acceptance (FA), and False Rejection (FR). TA and TR are favorable outcomes, whereas FA and FR are damaging to the system. These outcomes are determined by a preset threshold [83]. In the case of FA, a spoofed speech sample with a score higher than or equal to the preset threshold is accepted. Conversely, in the case of FR, a genuine speech sample with a score less than the predefined threshold is rejected. To evaluate ASV performance, the Equal Error Rate (EER), the value at which the False Acceptance Rate (FAR) and False Rejection Rate (FRR) become equal, is employed.
$\displaystyle FAR=\frac{FA}{N_{spoof}}$ (25)
$\displaystyle FRR=\frac{FR}{N_{gen}}$ (26)
where FAR and FRR denote the false acceptance and false rejection rates, FA and FR represent the numbers of false acceptances and false rejections, and $N_{spoof}$ and $N_{gen}$ refer to the total numbers of spoofed and genuine speech samples, respectively.
### Detection Error Tradeoff (DET) curve
A Detection Error Tradeoff (DET) graph is a graphical representation of the errors of a binary classification system, plotting the false rejection rate against the false acceptance rate. The x- and y-axes are scaled non-linearly by the standard normal deviate (or simply by a logarithmic transformation), resulting in tradeoff curves that are more linear than ROC curves and that use most of the plotting area to highlight differences in the critical operating region. Because of its effectiveness for binary classification, the DET curve is commonly used for the initial evaluation of ASV systems; the EER corresponds to the point where the curve, with FAR on the x-axis and FRR on the y-axis, crosses the diagonal.
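For example, a DET curve can be drawn from the same FAR/FRR arrays by applying the probit (inverse standard normal) transform to both axes; this is an illustrative sketch, not code from the surveyed works:

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

def plot_det(far, frr):
    """Plot a DET curve: FAR vs. FRR on normal-deviate (probit) axes."""
    eps = 1e-6  # clip to avoid infinities at rates of exactly 0 or 1
    x = norm.ppf(np.clip(far, eps, 1 - eps))
    y = norm.ppf(np.clip(frr, eps, 1 - eps))
    plt.plot(x, y)
    plt.xlabel("False Acceptance Rate (probit scale)")
    plt.ylabel("False Rejection Rate (probit scale)")
    plt.show()
```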
### Half Total Error Rate (HTER)
Spoofed speech with a score higher than the predefined threshold will be misidentified as bonafide, whereas bonafide samples with a lower score will be incorrectly classified as spoofed. Since the two errors, FAR and FRR, are inversely related, it is essential to depict performance as a function of the threshold. One such metric is the Half Total Error Rate (HTER), computed as:
$\displaystyle HTER=\frac{FAR+FRR}{2}$ (27)
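In code, the metric is a one-liner once FAR and FRR have been measured at the chosen operating threshold (illustrative sketch):

```python
def hter(far: float, frr: float) -> float:
    """Half Total Error Rate at a fixed operating threshold."""
    return (far + frr) / 2.0
```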
### Tandem Detection Cost Function (t‑DCF)
The ASVspoof challenge series was created to lead anti-spoofing research for automatic speaker verification (ASV). The 2015 and 2017 challenge editions evaluated spoofing countermeasures (CMs) in isolation from ASV using an equal error rate (EER) metric. While this was a strategic method of evaluation at the time, it had certain flaws. First, when ASV and CMs are coupled, the CM EER is not always a trustworthy predictor of tandem performance. Second, the EER operating point is inappropriate for user authentication systems with a high target-user prior but a low spoofing-attack prior, such as telephone banking. As a consequence, the community has shifted from CM-centric to ASV-centric evaluation using a new tandem detection cost function (t-DCF) measure. It extends the conventional DCF used in ASV research to spoofing-attack scenarios. The t-DCF metric is made up of six components in two parts: (i) the false alarm (1-2) and miss (3-4) costs for both systems; and (ii) the prior probabilities of target (5) and spoof (6) trials (with an implied third, non-target prior). The results of t-DCF in [37] justify the inclusion of the DCF-based metric in the roadmap for future ASVspoof challenges, as well as in additional biometric anti-spoofing evaluations.
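The exact constants are fixed by the challenge evaluation plan, since they fold in the ASV error rates, priors, and costs; the sketch below therefore takes them as given inputs and only illustrates the affine form of the revised t-DCF and its normalization. The constant names C0, C1, C2 are placeholders here, not an official API:

```python
import numpy as np

def min_normalized_tdcf(genuine_cm, spoof_cm, C0, C1, C2):
    """Minimum normalized t-DCF of a countermeasure (CM), assuming the
    revised affine form t-DCF(tau) = C0 + C1*P_miss(tau) + C2*P_fa(tau),
    where C0, C1, C2 encode the ASV operating point, priors, and costs."""
    thresholds = np.sort(np.concatenate([genuine_cm, spoof_cm]))
    p_miss = np.array([(genuine_cm < t).mean() for t in thresholds])
    p_fa = np.array([(spoof_cm >= t).mean() for t in thresholds])
    tdcf = C0 + C1 * p_miss + C2 * p_fa
    # Normalize by the better of the two trivial CMs (accept-all or
    # reject-all), so an uninformative countermeasure scores 1.0.
    return tdcf.min() / min(C0 + C1, C0 + C2)
```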
## Comparative analysis of voice spoofing countermeasures
In this section of the paper, we discuss the datasets and classifiers used to
differentiate between spoofed and real speech samples. The dataset subsection
contains information on speech samples and corpus details that were utilized
in training, development, and testing, as well as cross-corpus evaluation of
countermeasures. The subsection on feature and classifier configuration describes the back-end classification approach used to evaluate the effectiveness of the countermeasures. Finally, the experimental setup is described.
### Dataset
For the comparative analysis, we used two datasets to perform the experimentation: VSDC [6] and ASVspoof 2019 [89]. The VSDC dataset covers a wide variety of microphones and speakers. Although the VSDC dataset contains bonafide, first-order (1PR), and second-order (2PR) replay recordings, it is designed to provide a diverse set of recording and playback environments. Specifically, the microphones range from professional grade to cellphones, and the speakers used for replay range from professional grade to consumer grade. In the VSDC, a total of fifteen (15) speakers, ages 18-60 years old, participated in data collection. Of the fifteen (15) speakers, ten (10) are male and five (5) are female. Some of the speakers are not native English speakers. Each speaker recorded the original file by repeating a given set of phrases typical of commands given to VCDs. Some of the volunteers recorded these original phrase sets multiple times using different microphones in diverse environments. In total, 198 different 0PR source sets, consisting of at least 9 phrases each, were recorded, resulting in 1,926 spoken 0PR source phrases. 1PR and 2PR samples were obtained from the 0PR samples. There were 22 different microphones used, with 14 different microphone-speaker configurations for 1PR and 8 different configurations for 2PR. 1PR and 2PR each include 4,923 samples, giving VSDC a total of 11,772 samples.
In contrast, ASVspoof 2019 is the larger dataset, consisting of 107 speakers.
The ASVspoof 2019 dataset includes bonafide and spoof recordings, with LA and
PA attacks. The recording environments are more uniform and less diverse than
in VSDC, consisting of only hemi-anechoic chambers of varying room sizes. The
ASVspoof2019 dataset brings additional diversity in the length of the audio
recordings, both bonafide and spoofed, that VSDC does not provide. ASVspoof 2019's PA dataset consists of 54,000 samples in the training set, 29,700 samples
in the validation set, and 137,457 samples in the evaluation set. The samples
were recorded from 20 individuals, consisting of 12 female and 8 male
participants. The recordings utilized a 16-kHz sampling rate and 16-bit
resolution, with 27 different acoustic configurations. The configurations varied across 3 room sizes, 3 levels of reverberation, and 3 speaker-to-ASV distances. For the replay samples, 9 different configurations were used, combining 3 attacker-to-talker distances with 3 replay-device qualities. It should also be noted that the configurations for the testing set differed from those of the training and validation sets. A more detailed description of the dataset is provided in Section VI, Subsection J.
TABLE XII: Configuration details of the comparative countermeasures.

| Feature | Type | Configuration |
|---|---|---|
| CQCC [83] | Cepstral | No. of FFT bins (per octave) = 96 |
| LFCC [113] | Cepstral | No. of filters = 20 |
| MFCC [58] | Cepstral | No. of filters = 14 |
| IMFCC [9] | Cepstral | No. of filters = 20 |
| SSFC [80] | Cepstral | No. of coeffs. = 20 |
| RFCC [73] | Cepstral | No. of filters = 20 |
| PSRCC [80] | Cepstral | No. of coeffs. = 13 |
| MSRCC [80] | Cepstral | No. of filters = 13 |
| LPCC [91] | Cepstral | LP order = 20 |
| SCMC [44] | Magnitude | No. of sub-bands = 20 |
| SCFC [44] | Frequency | No. of sub-bands = 20 |
| GTCC [85] | Cepstral | No. of coeffs. = 14 |
| RPS [73] | Cepstral | No. of coeffs. = 20 |
| APGDF [67] | Phase-based | LP order = 20 |
### Feature and Classifier Configuration
To test the effectiveness of the state-of-the-art (SOTA) spoofing countermeasures, four different classifiers (two machine learning and two deep learning) were used: a Gaussian Mixture Model (GMM) [84], a Support Vector Machine with an RBF kernel (SVM-rbf) [55], a Convolutional Neural Network (CNN) [34], and a CNN with Gated Recurrent Units (CNN-GRU) [34]. The baseline GMM model provided by ASVspoof2019 for the CQCC and LFCC features was extended to compute the performance of all front-end features. The GMM was trained for ten iterations, and the score of each speech sample was computed as a log-likelihood ratio. EER and min-tDCF were used to assess the effectiveness of each countermeasure with the GMM baseline classifier. Next, an SVM classifier with an RBF kernel was used to evaluate the performance of the SOTA countermeasures. The GMM and SVM classifiers were chosen because they had the best reported results on the respective datasets: the SVM-rbf classifier [55] achieved the best performance on the VSDC dataset, whereas CQCC-GMM and LFCC-GMM were the baseline classifiers for the ASVspoof2019 competition. In addition, CNN and CNN-GRU [34] were used to determine the performance of the SOTA countermeasures with deep learning-based classifiers. The effectiveness of each countermeasure is described below in the experiment section, and Section V fully describes the extraction of the countermeasures chosen for the comparative study. To ensure reproducibility, the code is made available; the two-dimensional feature matrices were converted to one dimension by taking the mean of the extracted features over time, as sketched below. Details of the features are provided in the code and the ReadMe.txt file. Further details of the extraction mechanism used for the front-end features are given in Table XII.
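A minimal sketch of this dimensionality reduction, assuming each extracted feature is a (frames x coefficients) matrix:

```python
import numpy as np

def pool_feature_matrix(feature_matrix):
    """Collapse a (n_frames, n_coeffs) cepstral feature matrix into a
    single n_coeffs-dimensional vector by averaging over time."""
    return np.asarray(feature_matrix).mean(axis=0)
```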
### Experimental Setup
The experimental setup consisted of a pipeline of feature extraction followed by evaluation of the extracted features with the ML- and DL-based classifiers. Feature extraction and the classifiers were run on Oakland University's Matilda High Performance Cluster (HPC). Standard compute nodes were used for feature extraction and model training, each of which provides 192 GB of RAM and 40 CPU cores at 2.50 GHz. The models (GMM, SVM, CNN, and CNN-GRU) were trained and tested on the HPC's GPU nodes, made up of four NVIDIA Tesla V100 16G GPUs, 192 GB of RAM, and 48 CPU cores running at 2.10 GHz. The VLFeat MATLAB API was used for the GMMs, which consisted of 512 Gaussian components and were trained for 10 iterations.
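The original experiments used the VLFeat MATLAB API; the following is a minimal Python re-sketch of the same two-class GMM pipeline (512 components, 10 iterations, log-likelihood-ratio scoring) using scikit-learn, with the training feature matrices assumed to be precomputed:

```python
from sklearn.mixture import GaussianMixture

# One GMM per class, mirroring the CQCC-GMM / LFCC-GMM baselines.
gmm_genuine = GaussianMixture(n_components=512, max_iter=10)
gmm_spoof = GaussianMixture(n_components=512, max_iter=10)

gmm_genuine.fit(genuine_train_features)  # (n_frames, n_coeffs) matrices
gmm_spoof.fit(spoof_train_features)

def llr_score(utterance_features):
    """Per-utterance log-likelihood ratio: genuine vs. spoofed.
    score() returns the mean per-frame log-likelihood."""
    return (gmm_genuine.score(utterance_features)
            - gmm_spoof.score(utterance_features))
```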
## Experimental Results
In this section, we report the results of the experiments performed to
determine the effectiveness of the SOTA countermeasures. The effectiveness of
each countermeasure was tested against standalone as well as cross-corpus
evaluation. Standalone performance analysis measures a countermeasure's ability to detect spoofing artifacts present in the training samples and similar artifacts in the testing samples. In contrast, cross-corpus evaluation determines the performance of each countermeasure against unknown and differently configured spoofing artifacts. For a fair comparison of the countermeasures, we test their effectiveness against four different ML and DL models (GMM, SVM, CNN, and CNN-GRU) and two diverse datasets, VSDC and ASVspoof2019. The VSDC dataset is used to evaluate the performance of countermeasures on smart-home ASV systems, where microphonic discrepancies are more frequent, while ASVspoof2019 provides industry-standard spoofing conditions. Lastly, we comprehensively discuss the overall performance, both standalone and cross-corpus, of the countermeasures on all datasets and classifiers.
### Comparative Analysis of SOTA countermeasures on GMM based classifier
In this experiment, we evaluate the performance of the SOTA countermeasures with the baseline GMM classifier provided by the ASVspoof [103] challenge community. Although the classifier is originally provided with LFCC and CQCC features, we extended it to evaluate all of the SOTA countermeasures. The experimental results show that each countermeasure performs significantly
experimental results show that each countermeasure performs significantly
differently for speech synthesis (SS), VC, and replay attacks. The results
demonstrate that the samples with replay attacks are much more difficult to
detect in comparison with the SS and VC samples. During the evaluation, it is
observed that the performance of the SOTA countermeasures declines drastically
in the presence of replay artifacts in the speech samples.
The countermeasures were considered and analyzed for LA and PA attacks in the
ASVspoof19 dataset examination. The countermeasures were tested against the
development and evaluation speech samples, and the results are reported in
Table XIII. The results show that each countermeasure achieved a good EER and min-tDCF on the development set of each spoofing attack (LA and PA). In particular, the SCMC [44] countermeasure achieved the lowest EER of 0.01, while the lowest min-tDCF was obtained by the IMFCC [9] and APGDF [67] countermeasures in the development-set testing. In the evaluation-set testing, however, the APGDF [67] countermeasure achieved the lowest EER and min-tDCF of 5.75 and 0.14, respectively. Overall, the APGDF [67] and CQCC [83] countermeasures outperformed the alternatives with lower EER and min-tDCF in both development and evaluation testing (Table XIII). In addition, some countermeasures performed optimally against the LA attacks comprising SS and VC but failed to perform optimally when replay artifacts existed in the speech sample. For instance, the SCMC [44] countermeasure obtained the best EER of 0.01 and the second-best EER of 5.91 on the development and evaluation sets, respectively; however, SCMC failed to provide optimal performance in the case of PA attack testing. Similarly, the EER of the APGDF [67] countermeasure was higher than that of SCMC on the development set, but APGDF [67] achieved the overall best results in the rest of the testing. The detailed results for the SOTA countermeasures are reported in Table XIII.
TABLE XIII: Experimental performance of the countermeasures with ASVspoof2019 and the GMM-based classifier.

| Feature | Dataset | Dev. EER | Dev. min-tDCF | Eval. EER | Eval. min-tDCF |
|---|---|---|---|---|---|
| LFCC [113] | ASVspoof-LA | 2.70 | 0.06 | 8.08 | 0.21 |
| | ASVspoof-PA | 11.9 | 0.25 | 13.5 | 0.30 |
| CQCC [83] | ASVspoof-LA | 0.43 | 0.02 | 9.57 | 0.23 |
| | ASVspoof-PA | 4.87 | 0.19 | 11.0 | 0.24 |
| LPCC [91] | ASVspoof-LA | 2.31 | 0.06 | 10.17 | 0.28 |
| | ASVspoof-PA | 38.5 | 0.83 | 46.6 | 0.98 |
| MSRCC [80] | ASVspoof-LA | 9.45 | 0.19 | 10.9 | 0.28 |
| | ASVspoof-PA | 12.7 | 0.28 | 15.7 | 0.38 |
| PSRCC [80] | ASVspoof-LA | 9.13 | 0.14 | 10.7 | 0.21 |
| | ASVspoof-PA | 12.7 | 0.28 | 15.7 | 0.38 |
| SCFC [44] | ASVspoof-LA | 14.7 | 0.39 | 20.6 | 0.54 |
| | ASVspoof-PA | 17.6 | 0.35 | 21.6 | 0.47 |
| SCMC [44] | ASVspoof-LA | 0.01 | 0.54 | 5.91 | 0.15 |
| | ASVspoof-PA | 12.5 | 0.27 | 13.9 | 0.33 |
| MFCC [58] | ASVspoof-LA | 7.06 | 0.16 | 10.56 | 0.25 |
| | ASVspoof-PA | 11.5 | 0.23 | 13.7 | 0.32 |
| IMFCC [9] | ASVspoof-LA | 0.04 | 0.01 | 10.9 | 0.24 |
| | ASVspoof-PA | 12.8 | 0.29 | 13.7 | 0.32 |
| RFCC [73] | ASVspoof-LA | 2.71 | 0.07 | 8.06 | 0.22 |
| | ASVspoof-PA | 11.8 | 0.26 | 13.9 | 0.33 |
| RPS [73] | ASVspoof-LA | 9.28 | 0.19 | 11.9 | 0.29 |
| | ASVspoof-PA | 15.3 | 0.32 | 14.0 | 0.36 |
| SSFC [80] | ASVspoof-LA | 8.10 | 0.16 | 10.3 | 0.27 |
| | ASVspoof-PA | 13.8 | 0.29 | 13.9 | 0.31 |
| GTCC [85] | ASVspoof-LA | 9.25 | 0.17 | 10.8 | 0.24 |
| | ASVspoof-PA | 12.7 | 0.28 | 15.7 | 0.38 |
| APGDF [67] | ASVspoof-LA | 0.22 | 0.01 | 5.75 | 0.14 |
| | ASVspoof-PA | 8.66 | 0.17 | 10.6 | 0.25 |
TABLE XIV: Experimental performance of the countermeasures with VSDC and the GMM-based classifier.

| Feature | EER | Feature | EER |
|---|---|---|---|
| APGDF [67] | 40.29 | LPCC [91] | 5.86 |
| IMFCC [9] | 50.00 | MFCC [58] | 49.40 |
| GTCC [85] | 56.00 | RFCC [73] | 48.87 |
| CQCC [83] | 20.0 | SCFC [44] | 45.49 |
| LFCC [113] | 36.33 | SCMC [44] | 49.91 |
| RPS [73] | 55.21 | SSFC [80] | 53.11 |
| PSRCC [80] | 55.11 | MSRCC [80] | 54.73 |
Along with ASVspoof2019, we also examined the performance of the SOTA countermeasures on the VSDC dataset, and the results are shown in Table XIV. For the VSDC dataset, we computed the EER to assess the effectiveness of the countermeasures. The results show that the LPCC [91] countermeasure achieved the best EER of 5.86 and CQCC [83] the second-best EER of 20.0. Although the VSDC dataset contains both 1PR and 2PR replay samples, this experiment only included 1PR testing because the goal was to distinguish between spoofed and genuine samples. It was also observed that the GTCC [85] features proved to be an ineffective countermeasure, obtaining the highest EER of 56.0. Overall, the results demonstrate the inadequacy of the countermeasures when testing speech samples exhibiting distinct microphonic discrepancies: except for the LPCC [91], CQCC [83], and LFCC [113] countermeasures, none achieved an EER lower than 40.0. The detailed results of this experiment are presented in Table XIV.
From this experiment, we can infer that the countermeasures are strongly dependent on the dataset characteristics. On the ASVspoof2019 dataset, for instance, the APGDF [67] features produced the best results; on the VSDC dataset, however, they failed to generate effective outcomes. Similarly, LPCC [91] did not generate effective results on the ASVspoof datasets, particularly on the ASVspoof-PA evaluation set, but produced the best results in VSDC testing. Furthermore, we can see that the front-end features that worked well against logical attacks did not perform well against physical attacks. The VSDC dataset also includes PA (replay) attacks, which caused performance degradation for every countermeasure. Nevertheless, despite the datasets' wide variability, the APGDF [67], CQCC [83], and LPCC [91] features produced comparatively strong results regardless of the dataset's specific configuration.
### Performance analysis of SOTA countermeasures with an SVM classifier
In this experiment, we evaluate the effectiveness of the SOTA countermeasures with the SVM classifier, which has been shown to outperform the GMM classifier [55]. An SVM with an RBF kernel was utilized; the RBF kernel was selected after a thorough analysis of all feasible kernels. The scikit-learn [66] machine learning library was used to train and test the SVM classifier, and its classification report [66] was used to compute the precision, recall, F1-score, and accuracy of the countermeasures. The results of this experiment on the VSDC dataset are given in Table XV and demonstrate that the CQCC [83] countermeasure provides the most robust front-end attributes among the SOTA countermeasures. In particular, CQCC [83] features achieved the lowest EER of 0.27; the highest precision, recall, and F1-score of 0.92, 0.85, and 0.88, respectively; and the highest accuracy of 0.93. Spectral features such as SCFC [44], on the other hand, performed poorly on the VSDC dataset, which contains audio samples with microphonic inconsistencies. For instance, the SCFC [44] features had the lowest precision (0.39), recall (0.50), and F1-score (0.44), and achieved an accuracy of 0.78. Similarly, the SCMC [44] features were found to be the second-worst performing, after SCFC [44], with an EER of 0.53. This indicates that the spectral centroid magnitude- and frequency-based coefficients fail to perform effectively when microphonic distinctions (i.e., VSDC) exist in the speech sample. In contrast, the constant-Q cepstral-based coefficients successfully overcame the microphone variation.
In the case of the ASVspoof2019 dataset, each of the SOTA countermeasures was tested against the SVM classifier on the development and evaluation sets, and the results are reported in Table XVI. The results demonstrate that, for the LA attacks in the development set, the CQCC [83] and IMFCC [9] countermeasures achieved comparable performance, with EERs of 0.30 and 0.29, respectively. Similarly, on the evaluation set, the CQCC [83] and IMFCC [9] countermeasures achieved the best EERs of 0.69 and 0.76, respectively. Although EERs of 0.69 and 0.76 are not good enough for a deployed anti-spoofing system, none of the other countermeasures achieved a lower EER. From this experiment, it was concluded that the performance of the majority of the spectral and cepstral countermeasures declines when tested with different classifiers under different configurations. However, the CQCC [83] countermeasure performed far better than the other countermeasures. In addition, in the presence of microphonic distinctions, CQCC [83] features work effectively, while the rest of the spectral features fail to perform adequately. Furthermore, the phase-based APGDF [67] features were the third-best features among the state of the art. Detailed results of the SOTA countermeasures are presented in Table XVI.
Based on the experiments involving the SOTA countermeasures with the machine learning-based GMM and SVM classifiers, we determine that CQCC [83] is the best countermeasure against voice spoofing attacks. More specifically, in the testing of replay attacks, where the performance of the state-of-the-art classifiers drops significantly, the CQCC [83] features scored optimal EERs of 0.27, 0.43, and 0.30 across the GMM and SVM tests, and the CQCC [83] countermeasure achieved the lowest EER and min-tDCF on the development sets. The results of both ML-based classifiers show that the phase-based APGDF [67], linear LFCC [113], and inverted Mel-based IMFCC [9] countermeasures performed better than the comparative methods in most experiments and received strong scores across the performance measurements. APGDF [67] also had the lowest EER and min-tDCF in the ASVspoof2019-PA evaluation, and similar results were obtained with VSDC and the SVM classifier. In comparison, the SCFC [44] features were shown to be the weakest: SCFC [44] failed to perform well in any experiment and had the highest EER and min-tDCF. Although the CQCC [83] and APGDF [67] front-end features clearly outperformed all comparable features on the ASVspoof2019-LA, ASVspoof2019-PA, and VSDC datasets, the question of performance when the ASV encounters unfamiliar samples from unknown surroundings and devices persists. Although the ML-based GMM and SVM classifiers demonstrated the capabilities of the countermeasures, they have become the conventional classifiers for spoof detection; in contrast, several deep learning and end-to-end solutions have been published recently to differentiate between spoofed and bonafide speech samples. Therefore, we tested the countermeasures against the recent CNN- and CNN-GRU-based classifiers and also performed cross-corpus experiments in order to validate the generalization performance of the highest-performing countermeasures. The results are discussed in the next subsection.
TABLE XV: Experimental performance of the countermeasures with the VSDC dataset and the SVM-based classifier.

| Feature | EER | Precision | Recall | F1-score | Accuracy |
|---|---|---|---|---|---|
| LFCC [113] | 0.49 | 0.88 | 0.74 | 0.79 | 0.88 |
| CQCC [83] | 0.27 | 0.92 | 0.85 | 0.88 | 0.93 |
| LPCC [91] | 0.44 | 0.88 | 0.77 | 0.81 | 0.89 |
| PSRCC [80] | 0.40 | 0.79 | 0.76 | 0.77 | 0.86 |
| MSRCC [80] | 0.46 | 0.84 | 0.76 | 0.79 | 0.86 |
| SCFC [44] | 0.80 | 0.39 | 0.50 | 0.44 | 0.78 |
| SCMC [44] | 0.53 | 0.87 | 0.72 | 0.76 | 0.87 |
| MFCC [58] | 0.46 | 0.82 | 0.75 | 0.77 | 0.86 |
| IMFCC [9] | 0.71 | 0.88 | 0.64 | 0.67 | 0.84 |
| RPS [73] | 0.40 | 0.76 | 0.78 | 0.77 | 0.85 |
| RFCC [73] | 0.51 | 0.87 | 0.73 | 0.76 | 0.87 |
| GTCC [85] | 0.42 | 0.82 | 0.76 | 0.78 | 0.86 |
| APGDF [67] | 0.43 | 0.89 | 0.77 | 0.81 | 0.89 |
| SSFC [80] | 0.44 | 0.87 | 0.77 | 0.80 | 0.88 |
TABLE XVI: Experimental performance of the countermeasures with ASVspoof2019 and an SVM-based classifier.

| Feature | Dataset | Dev. EER | Dev. Acc. | Eval. EER | Eval. Acc. |
|---|---|---|---|---|---|
| LFCC [113] | ASVspoof19-LA | 0.58 | 0.90 | 0.91 | 0.12 |
| | ASVspoof19-PA | 0.72 | 0.75 | 0.83 | 0.11 |
| CQCC [83] | ASVspoof19-LA | 0.30 | 0.94 | 0.69 | 0.29 |
| | ASVspoof19-PA | 0.42 | 0.29 | 0.80 | 0.19 |
| LPCC [91] | ASVspoof19-LA | 0.50 | 0.88 | 0.80 | 0.15 |
| | ASVspoof19-PA | 0.61 | 0.20 | 0.81 | 0.19 |
| SCFC [44] | ASVspoof19-LA | 0.50 | 0.89 | 0.90 | 0.16 |
| | ASVspoof19-PA | 0.75 | 0.19 | 0.89 | 0.14 |
| SCMC [44] | ASVspoof19-LA | 0.49 | 0.93 | 0.88 | 0.12 |
| | ASVspoof19-PA | 0.75 | 0.20 | 0.86 | 0.15 |
| PSRCC [80] | ASVspoof19-LA | 0.49 | 0.89 | 0.89 | 0.11 |
| | ASVspoof19-PA | 0.84 | 0.25 | 0.89 | 0.12 |
| MSRCC [80] | ASVspoof19-LA | 0.51 | 0.90 | 0.91 | 0.11 |
| | ASVspoof19-PA | 0.84 | 0.25 | 0.89 | 0.12 |
| MFCC [58] | ASVspoof19-LA | 0.50 | 0.89 | 0.80 | 0.17 |
| | ASVspoof19-PA | 0.76 | 0.22 | 0.86 | 0.16 |
| IMFCC [9] | ASVspoof19-LA | 0.29 | 0.94 | 0.76 | 0.24 |
| | ASVspoof19-PA | 0.51 | 0.26 | 0.82 | 0.18 |
| RFCC [73] | ASVspoof19-LA | 0.46 | 0.93 | 0.87 | 0.14 |
| | ASVspoof19-PA | 0.84 | 0.26 | 0.88 | 0.14 |
| RPS [73] | ASVspoof19-LA | 0.52 | 0.90 | 0.91 | 0.10 |
| | ASVspoof19-PA | 0.86 | 0.25 | 0.89 | 0.11 |
| SSFC [80] | ASVspoof19-LA | 0.50 | 0.90 | 0.89 | 0.12 |
| | ASVspoof19-PA | 0.87 | 0.29 | 0.90 | 0.10 |
| GTCC [85] | ASVspoof19-LA | 0.50 | 0.90 | 0.90 | 0.10 |
| | ASVspoof19-PA | 0.84 | 0.25 | 0.89 | 0.12 |
| APGDF [67] | ASVspoof19-LA | 0.34 | 0.94 | 0.82 | 0.17 |
| | ASVspoof19-PA | 0.60 | 0.17 | 0.83 | 0.15 |
### Performance analysis of the SOTA countermeasures with a CNN-based classifier
In this subsection, we show the effectiveness of the SOTA countermeasures with the DL-based CNN and CNN-GRU classifiers. The CNN classifier is one of the most recent and most advanced neural network-based classifiers for the detection of voice spoofing in ASV systems. The SOTA countermeasures were tested against the CNN classifier, and the results are shown in Table XVII. The experiments used the same CNN architecture as adopted in [34], and the open-source code implements the configuration described in Section VIII, Subsection C.
TABLE XVII: Experimental performance analysis of the countermeasures on the training and testing subsets of the same dataset with a CNN-based classifier.

| Feature | Training | Testing | EER | min-tDCF |
|---|---|---|---|---|
| IMFCC [9] | VSDC | VSDC | 0.06 | 0.17 |
| | ASVspoof19 | ASVspoof19 | 0.19 | 0.50 |
| SSFC [80] | VSDC | VSDC | 0.03 | 0.07 |
| | ASVspoof19 | ASVspoof19 | 0.18 | 0.46 |
| RFCC [73] | VSDC | VSDC | 0.13 | 0.25 |
| | ASVspoof19 | ASVspoof19 | 0.05 | 0.13 |
| SCFC [44] | VSDC | VSDC | 0.10 | 0.30 |
| | ASVspoof19 | ASVspoof19 | 0.14 | 0.36 |
| SCMC [44] | VSDC | VSDC | 0.14 | 0.37 |
| | ASVspoof19 | ASVspoof19 | 0.44 | 0.94 |
| PSRCC [80] | VSDC | VSDC | 0.08 | 0.10 |
| | ASVspoof19 | ASVspoof19 | 0.43 | 0.75 |
| MSRCC [80] | VSDC | VSDC | 0.13 | 0.17 |
| | ASVspoof19 | ASVspoof19 | 0.43 | 0.75 |
| RPS [73] | VSDC | VSDC | 0.04 | 0.11 |
| | ASVspoof19 | ASVspoof19 | 0.21 | 0.65 |
| CQCC [83] | VSDC | VSDC | 0.04 | 0.13 |
| | ASVspoof19 | ASVspoof19 | 0.17 | 0.45 |
| LFCC [113] | VSDC | VSDC | 0.03 | 0.03 |
| | ASVspoof19 | ASVspoof19 | 0.16 | 0.43 |
| MFCC [58] | VSDC | VSDC | 0.06 | 0.16 |
| | ASVspoof19 | ASVspoof19 | 0.26 | 0.62 |
| LPCC [91] | VSDC | VSDC | 0.02 | 0.05 |
| | ASVspoof19 | ASVspoof19 | 0.19 | 0.50 |
| GTCC [85] | VSDC | VSDC | 0.03 | 0.07 |
| | ASVspoof19 | ASVspoof19 | 0.43 | 0.75 |
| APGDF [67] | VSDC | VSDC | 0.01 | 0.04 |
| | ASVspoof19 | ASVspoof19 | 0.17 | 0.45 |
The results for the VSDC dataset demonstrate that the APGDF [67] front-end features outperform the other SOTA countermeasures, with an EER of 0.01 and a min-tDCF of 0.04. Among all of the countermeasures, the linear-scale features also perform well with the CNN-based classifier for detecting voice spoofing attacks: according to the EER analysis, LPCC [91] and LFCC [113] rank second and third, with EERs of 0.026 and 0.032, respectively. Similarly, in the case of ASVspoof19, the rectangular filter-based RFCC [73] features outperform all evaluated countermeasures, with an EER of 0.054 and a min-tDCF of 0.139. The magnitude-based features were found to be significantly deficient in the ASVspoof19 evaluation; for instance, the SCMC [44] features had the highest EER and min-tDCF values, at 0.443 and 0.947, respectively. In contrast, the highest EER of 0.21 and the highest min-tDCF score of 0.43 were seen in the SCFC [44] features against VSDC. Moreover, when compared to the GMM classifier, the CNN classifier performed significantly better across the board: the highest EER recorded with the GMM classifier was 0.80 on the LFCC features, while the highest EER achieved with the CNN classifier was just 0.265 with the MFCC [58] features. Using the same feature set but replacing the classifier, the EER was reduced by nearly 65%. This experiment clearly shows the significance of the back-end classifier.
# Duality for Knizhnik–Zamolodchikov and Dynamical Operators

*This paper is a contribution to the Special Issue on Representation Theory and Integrable Systems in honor of Vitaly Tarasov on his 60th birthday and Alexander Varchenko on his 70th birthday. The full collection is available at https://www.emis.de/journals/SIGMA/Tarasov-Varchenko.html*
Vitaly TARASOV †‡ and Filipp UVAROV †

† Department of Mathematical Sciences, Indiana University – Purdue University Indianapolis, 402 North Blackford St, Indianapolis, IN 46202-3216, USA
E-mail:<EMAIL_ADDRESS><EMAIL_ADDRESS>

‡ St. Petersburg Branch of Steklov Mathematical Institute, Fontanka 27, St. Petersburg, 191023, Russia
E-mail:<EMAIL_ADDRESS>
Received February 25, 2020, in final form April 10, 2020; Published online
April 25, 2020
We consider the Knizhnik–Zamolodchikov and dynamical operators, both
differential and difference, in the context of the
$(\mathfrak{gl}_{k},\mathfrak{gl}_{n})$-duality for the space of polynomials
in $kn$ anticommuting variables. We show that the Knizhnik–Zamolodchikov and
dynamical operators naturally exchange under the duality.
Key words: Knizhnik–Zamolodchikov operators; dynamical operators; the $(\mathfrak{gl}_{k},\mathfrak{gl}_{n})$-duality

Mathematics Subject Classification: 17B37; 81R10; 81R50; 39A12
## 1 Introduction
The Knizhnik–Zamolodchikov (KZ) operators are a family of pairwise commuting differential operators acting on $U(\mathfrak{gl}_{k})^{\otimes n}$-valued functions. They play an important role in conformal field theory and representation theory, and they are closely related to the famous Gaudin Hamiltonians. The difference analogue of the KZ operators are the quantum Knizhnik–Zamolodchikov (qKZ) operators. There are rational, trigonometric, and elliptic versions of the KZ and qKZ operators. For a review and references see, for example, [2].
There exist other families of commuting differential or difference operators
called the dynamical differential (DD) or dynamical difference (qDD)
operators, respectively. There are rational and trigonometric versions of the
DD and qDD operators as well. It is known that the rational DD operators
commute with the rational KZ operators, the trigonometric DD operators commute
with the rational qKZ operators, and the rational qDD operators commute with
the trigonometric KZ operators, see [4, 5, 7]. The qDD operators appear as the
action of the dynamical Weyl groups [3]. The DD operators are also known as
the Casimir connection, see [8, 9].
Together with the KZ, DD, qKZ, and qDD operators associated with
$U(\mathfrak{gl}_{k})^{\otimes n}$, we will simultaneously consider similar
operators associated with $U(\mathfrak{gl}_{n})^{\otimes k}$ interchanging $k$
and $n$. Let $P_{kn}$ be the space of polynomials in $kn$ commuting variables.
One can define actions of $U(\mathfrak{gl}_{k})^{\otimes n}$ and
$U(\mathfrak{gl}_{n})^{\otimes k}$ on the space $P_{kn}$. It is well known
that the $U(\mathfrak{gl}_{k})^{\otimes n}$- and
$U(\mathfrak{gl}_{n})^{\otimes k}$-actions on $P_{kn}$ commute, which
manifests the $(\mathfrak{gl}_{k},\mathfrak{gl}_{n})$-duality, see, for
example, [1]. Consider the images of the KZ, DD, qKZ, qDD operators associated
with $U(\mathfrak{gl}_{k})^{\otimes n}$ and $U(\mathfrak{gl}_{n})^{\otimes k}$
under the corresponding actions. It was proved in [6] that the images of the
rational (trigonometric) KZ operators associated with
$U(\mathfrak{gl}_{n})^{\otimes k}$ coincide with the images of the rational
(trigonometric) DD operators associated with $U(\mathfrak{gl}_{k})^{\otimes
n}$. Similarly, the images of the rational qKZ operators associated with
$U(\mathfrak{gl}_{n})^{\otimes k}$, up to an action of a central element in
$\mathfrak{gl}_{n}$, coincide with the images of the rational qDD operators
associated with $U(\mathfrak{gl}_{k})^{\otimes n}$. In this paper we obtain a
similar duality for the case of $U(\mathfrak{gl}_{k})^{\otimes n}$- and
$U(\mathfrak{gl}_{n})^{\otimes k}$-actions on the space of polynomials in $kn$
anticommuting variables, see Theorem 4.4.
The duality for the rational and trigonometric KZ and DD operators is proved
in a straightforward way. To prove the duality for the rational qKZ and qDD
operators, we study the eigenvalues of the rational $R$-matrix and compare
them to the eigenvalues of the operator $B^{\langle 2\rangle}_{12}(t)$, which
is used in the construction of the qDD operators.
The $(\mathfrak{gl}_{k},\mathfrak{gl}_{n})$-duality for classical integrable
models related to Gaudin Hamiltonians and the actions of $\mathfrak{gl}_{k}$
and $\mathfrak{gl}_{n}$ on the space of polynomials in anticommuting variables
was studied in [10, Section 3.3]. The result of [10] resembles what one can
expect for Bethe algebras of Gaudin models discussed in our work. We will
study those Bethe algebras in an upcoming paper.
The paper is organized as follows. In Section 2, we introduce necessary
notations. In Section 3, we define the KZ, DD, qKZ, and qDD operators. In
Section 4, we formulate and prove the main result.
## 2 Basic notation
Let $e_{ab}$, $a,b=1,\dots,k$, be the standard basis of the Lie algebra
$\mathfrak{gl}_{k}$: $[e_{ab},e_{cd}]=\delta_{bc}e_{ad}-\delta_{ad}e_{cb}$. We
take the Cartan subalgebra $\mathfrak{h}\subset\mathfrak{gl}_{k}$ spanned by
$e_{11},\dots,e_{kk}$, and the nilpotent subalgebras $\mathfrak{n}_{+}$ and
$\mathfrak{n}_{-}$ spanned by the elements $e_{ab}$ for $a<b$ and $a>b$
respectively. We have standard Gauss decomposition
$\mathfrak{gl}_{k}=\mathfrak{n}_{+}\oplus\mathfrak{h}\oplus\mathfrak{n}_{-}$.
Let $\varepsilon_{1},\dots,\varepsilon_{k}$ be the basis of $\mathfrak{h}^{*}$
dual to $e_{11},\dots,e_{kk}$:
$\langle\varepsilon_{a},e_{bb}\rangle=\delta_{ab}$. We identify
$\mathfrak{h}^{*}$ with $\mathbb{C}^{k}$ mapping
$l_{1}\varepsilon_{1}+\dots+l_{k}\varepsilon_{k}$ to $(l_{1},\dots,l_{k})$.
The root vectors of $\mathfrak{gl}_{k}$ are $e_{ab}$ for $a\neq b$, the
corresponding root being equal to
$\alpha_{ab}=\varepsilon_{a}-\varepsilon_{b}$. The roots $\alpha_{ab}$ for
$a<b$ are positive. The simple roots are $\alpha_{1},\dots,\alpha_{k-1}$:
$\alpha_{a}=\varepsilon_{a}-\varepsilon_{a+1}$. Denote by $\rho$ the half-sum
of positive roots.
We choose the standard invariant bilinear form $(\,,\,)$ on
$\mathfrak{gl}_{k}$: $(e_{ab},e_{cd})=\delta_{ad}\delta_{bc}$. It defines an
isomorphism $\mathfrak{h}\rightarrow\mathfrak{h}^{*}$. The induced bilinear form on $\mathfrak{h}^{*}$ is $(\varepsilon_{a},\varepsilon_{b})=\delta_{ab}$.
For a $\mathfrak{gl}_{k}$-module $W$ and a weight
$\boldsymbol{l}\in\mathfrak{h}^{*}$, let $W[\boldsymbol{l}]$ be the weight
subspace of $W$ of weight $\boldsymbol{l}$.
For any $\boldsymbol{l}=(l_{1},\dots,l_{k})$ with $l_{a}\geq l_{a+1}$, we
denote by $V_{\boldsymbol{l}}$ the irreducible $\mathfrak{gl}_{k}$-module with
highest weight $\boldsymbol{l}$. Also, for any $m\in\mathbb{Z}_{\geq 0}$ we
write $V_{m}$ instead of $V_{(1^{m})}$, where
$(1^{m})=(\underbrace{1,\dots,1}_{m},0,\dots,0)$. Thus, $V_{0}=\mathbb{C}$ is
the trivial $\mathfrak{gl}_{k}$-module, $V_{1}=\mathbb{C}^{k}$ with the
natural action of $\mathfrak{gl}_{k}$, and $V_{m}$ is the $m$-th antisymmetric
power of $V_{1}$.
The element $I=\underset{a,b=1}{\overset{k}{\sum}}e_{ab}e_{ba}$ is central in
$U(\mathfrak{gl}_{k})$. It acts as multiplication by
$(\boldsymbol{l},\boldsymbol{l}+2\rho)$ in the irreducible
$\mathfrak{gl}_{k}$-module $V_{\boldsymbol{l}}$.
Consider the algebra $\mathfrak{X}_{k}=\bigwedge^{\bullet}\mathbb{C}^{k}$,
which will be identified with the ring of all polynomials in anticommuting
variables $x_{1},\dots,x_{k}$. In particular, $x_{i}^{2}=0$ for all
$i=1,\dots,k$.
We define the left derivation $\partial_{a}$ as follows: if $g(x)=x_{b_{1}}\dots
x_{b_{m}}$ for some $m$, where $b_{s}\neq a$ for any $s$, then
$\displaystyle\partial_{a}g(x)=0,\qquad\partial_{a}(x_{a}g(x))=g(x).$
The operators of left multiplication by $x_{1},\dots,x_{k}$ and left
derivations $\partial_{1},\dots,\partial_{k}$ make the space
$\mathfrak{X}_{k}$ into the irreducible representation of the Clifford algebra
$\operatorname{Cliff}_{k}$.
The Lie algebra $\mathfrak{gl}_{k}$ acts on the space $\mathfrak{X}_{k}$ by
the rule: $e_{ab}\cdot p=x_{a}\partial_{b}p$ for any $p\in\mathfrak{X}_{k}$.
Denote the obtained $\mathfrak{gl}_{k}$-module by $V_{\bullet}$, then
$\displaystyle V_{\bullet}=\bigoplus_{l=0}^{k}V_{l},$ (2.1)
the submodule $V_{l}$ being spanned by homogeneous polynomials of degree $l$.
A highest weight vector of the submodule $V_{l}$ is $x_{1}x_{2}\cdots x_{l}$.
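For instance, for $k=2$, decomposition (2.1) reads
$\displaystyle\mathfrak{X}_{2}=\underbrace{\mathbb{C}\cdot 1}_{V_{0}}\oplus\underbrace{\operatorname{span}\{x_{1},x_{2}\}}_{V_{1}}\oplus\underbrace{\mathbb{C}\cdot x_{1}x_{2}}_{V_{2}},\qquad e_{12}\cdot x_{2}=x_{1}\partial_{2}x_{2}=x_{1},\qquad e_{12}\cdot x_{1}x_{2}=0,$
so $x_{1}$ and $x_{1}x_{2}$ are highest weight vectors of weights $(1,0)$ and $(1,1)$, respectively.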
The $\mathfrak{gl}_{k}$-action on $\mathfrak{X}_{k}$ naturally extends to a
$U(\mathfrak{gl}_{k})^{\otimes n}$-action on $(\mathfrak{X}_{k})^{\otimes n}$.
For any $g\in U(\mathfrak{gl}_{k})$, set
$g_{(i)}=1\otimes\cdots\otimes\underset{i\text{-th}}{g}\otimes\dots\otimes
1\in U(\mathfrak{gl}_{k})^{\otimes n}$. We consider $U(\mathfrak{gl}_{k})$ as
the diagonal subalgebra of $U(\mathfrak{gl}_{k})^{\otimes n}$, that is, the
embedding $U(\mathfrak{gl}_{k})\hookrightarrow U(\mathfrak{gl}_{k})^{\otimes
n}$ is given by the $n$-fold coproduct: $x\mapsto x_{(1)}+\dots+x_{(n)}$ for any
$x\in\mathfrak{gl}_{k}$. This corresponds to the standard
$\mathfrak{gl}_{k}$-module structure on $(\mathfrak{X}_{k})^{\otimes n}$ as
the tensor product of $\mathfrak{gl}_{k}$-modules.
Let $\Omega=\underset{a,b=1}{\overset{k}{\sum}}e_{ab}\otimes e_{ba}$ be the
Casimir tensor, and let
$\displaystyle\Omega^{+}=\frac{1}{2}\underset{a=1}{\overset{k}{\sum}}e_{aa}\otimes
e_{aa}+\underset{1\leq a<b\leq k}{\sum}e_{ab}\otimes e_{ba},$
$\displaystyle\Omega^{-}=\frac{1}{2}\,\underset{a=1}{\overset{k}{\sum}}e_{aa}\otimes
e_{aa}+\underset{1\leq a<b\leq k}{\sum}e_{ba}\otimes e_{ab},$
so that $\Omega=\Omega^{+}+\Omega^{-}$.
## 3 The KZ, qKZ, DD and qDD operators
Fix a nonzero complex number $\kappa$. Consider differential operators
$\nabla_{z_{1}},\dots,\nabla_{z_{n}}$, and
$\widehat{\nabla}_{z_{1}},\dots,\widehat{\nabla}_{z_{n}}$ with coefficients in
$U(\mathfrak{gl}_{k})^{\otimes n}$ depending on complex variables
$z_{1},\dots,z_{n}$, and $\lambda_{1},\dots,\lambda_{k}$:
$\displaystyle\nabla_{z_{i}}(z;\lambda)=\kappa\frac{\partial}{\partial
z_{i}}-\sum_{a=1}^{k}\lambda_{a}(e_{aa})_{(i)}-\sum_{j=1,\,j\neq
i}^{n}\frac{\Omega_{(ij)}}{z_{i}-z_{j}},$
$\displaystyle\widehat{\nabla}_{z_{i}}(z;\lambda)=\kappa
z_{i}\frac{\partial}{\partial
z_{i}}-\sum_{a=1}^{k}\left(\lambda_{a}-\frac{e_{aa}}{2}\right)(e_{aa})_{(i)}-\sum_{j=1,\,j\neq
i}^{n}\frac{z_{i}\Omega^{+}_{(ij)}+z_{j}\Omega^{-}_{(ij)}}{z_{i}-z_{j}}.$
The differential operators $\nabla_{z_{1}},\dots,\nabla_{z_{n}}$ (resp.,
$\widehat{\nabla}_{z_{1}},\dots,\widehat{\nabla}_{z_{n}}$) are called the
rational (resp., trigonometric) Knizhnik–Zamolodchikov (KZ) operators.
Introduce differential operators $D_{\lambda_{1}},\dots,D_{\lambda_{k}}$, and
$\widehat{D}_{\lambda_{1}},\dots,\widehat{D}_{\lambda_{k}}$ with coefficients
in $U(\mathfrak{gl}_{k})^{\otimes n}$ depending on complex variables
$z_{1},\dots,z_{n}$, and $\lambda_{1},\dots,\lambda_{k}$:
$\displaystyle
D_{\lambda_{a}}(z;\lambda)=\kappa\frac{\partial}{\partial\lambda_{a}}-\sum_{i=1}^{n}z_{i}(e_{aa})_{(i)}-\sum_{b=1,\,b\neq
a}^{k}\frac{e_{ab}e_{ba}-e_{aa}}{\lambda_{a}-\lambda_{b}},$
$\displaystyle\widehat{D}_{\lambda_{a}}(z;\lambda)=\kappa\lambda_{a}\frac{\partial}{\partial\lambda_{a}}+\frac{e_{aa}^{2}}{2}-\sum_{i=1}^{n}z_{i}(e_{aa})_{(i)}$
$\displaystyle\hphantom{\widehat{D}_{\lambda_{a}}(z;\lambda)=}{}-\sum_{b=1}^{k}\sum_{1\leq
i<j\leq n}(e_{ab})_{(i)}(e_{ba})_{(j)}-\sum_{b=1,\,b\neq
a}^{k}\frac{\lambda_{b}}{\lambda_{a}-\lambda_{b}}(e_{ab}e_{ba}-e_{aa}).$
The differential operators $D_{\lambda_{1}},\dots,D_{\lambda_{k}}$ (resp.
$\widehat{D}_{\lambda_{1}},\dots,\widehat{D}_{\lambda_{k}}$) are called the
rational (resp., trigonometric) differential dynamical (DD) operators, see [4,
7].
For any $a,b=1,\dots,k$, $a\neq b$, introduce the series $B_{ab}(t)$ depending
on a complex variable $t$:
$\displaystyle B_{ab}(t)=1+\sum_{s=1}^{\infty}\frac{e_{ba}^{s}e_{ab}^{s}}{s!}\prod_{j=1}^{s}(t-e_{aa}+e_{bb}-j)^{-1}.$
The action of this series is well defined in any finite-dimensional $\mathfrak{gl}_{k}$-module $W$, giving an $\operatorname{End}(W)$-valued rational function of $t$.
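As a simple illustration, take $k=2$ and $W=V_{1}=\mathbb{C}^{2}$ with standard basis $v_{1}$, $v_{2}$. Then $B_{12}(t)\,v_{1}=v_{1}$ since $e_{12}v_{1}=0$, while on $v_{2}$ only the $s=1$ term survives (because $e_{12}^{2}=0$ on $\mathbb{C}^{2}$), and
$\displaystyle B_{12}(t)\,v_{2}=\big{(}1+e_{21}e_{12}(t-e_{11}+e_{22}-1)^{-1}\big{)}v_{2}=\Big{(}1+\frac{1}{t}\Big{)}v_{2}=\frac{t+1}{t}\,v_{2}.$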
Denote $\lambda_{bc}=\lambda_{b}-\lambda_{c}$. Consider the products
$X_{1},\dots,X_{k}$ depending on the complex variables $z_{1},\dots,z_{n}$,
and $\lambda_{1},\dots,\lambda_{k}$:
$\displaystyle X_{a}(z;\lambda)=(B_{ak}(\lambda_{ak})\cdots
B_{a,a+1}(\lambda_{a,a+1}))^{-1}$
$\displaystyle\hphantom{X_{a}(z;\lambda)=}{}\times\prod_{i=1}^{n}\big{(}z_{i}^{-e_{aa}}\big{)}_{(i)}B_{1a}(\lambda_{1a}-\kappa)\cdots
B_{a-1,a}(\lambda_{a-1,a}-\kappa).$
The products $X_{1},\dots,X_{k}$ act in any $n$-fold tensor product
$W_{1}\otimes\dots\otimes W_{n}$ of finite-dimensional
$\mathfrak{gl}_{k}$-modules.
Denote by $T_{u}$ a difference operator acting on a function $f(u)$ by
$(T_{u}f)(u)=f(u+\kappa).$
Introduce difference operators $Q_{\lambda_{1}},\dots,Q_{\lambda_{k}}$:
$\displaystyle Q_{\lambda_{a}}(z;\lambda)=X_{a}(z;\lambda)T_{\lambda_{a}}.$
The operators $Q_{\lambda_{1}},\dots,Q_{\lambda_{k}}$ are called the
(rational) difference dynamical (qDD) operators [5].
For any finite-dimensional irreducible $\mathfrak{gl}_{k}$-modules $V$ and
$W$, there is a distinguished rational function $R_{VW}(t)$ of $t$ with values
in $\operatorname{End}(V\otimes W)$ called the rational $R$-matrix. It is
uniquely determined by the $\mathfrak{gl}_{k}$-invariance
$\displaystyle[R_{VW}(t),g\otimes 1+1\otimes g]=0\qquad\text{for any}\quad
g\in\mathfrak{gl}_{k},$ (3.1)
the commutation relations
$\displaystyle R_{VW}(t)\left(te_{ab}\otimes 1+\sum_{c=1}^{k}e_{ac}\otimes
e_{cb}\right)=\left(te_{ab}\otimes 1+\sum_{c=1}^{k}e_{cb}\otimes
e_{ac}\right)R_{VW}(t),$ (3.2)
and the normalization condition
$\displaystyle R_{VW}(t)v\otimes w=v\otimes w,$ (3.3)
where $v$ and $w$ are the highest weight vectors of $V$ and $W$, respectively.
Denote $z_{ij}=z_{i}-z_{j}$ and $R_{ij}(t)=(R_{W_{i}W_{j}}(t))_{(ij)}$.
Consider the products $K_{1},\dots,K_{n}$ depending on the complex variables
$z_{1},\dots,z_{n}$, and $\lambda_{1},\dots,\lambda_{k}$:
$\displaystyle K_{i}(z;\lambda)=(R_{in}(z_{in})\cdots
R_{i,i+1}(z_{i,i+1}))^{-1}\prod_{a=1}^{k}\big{(}\lambda_{a}^{-e_{aa}}\big{)}_{(i)}R_{1i}(z_{1i}-\kappa)\cdots
R_{i-1,i}(z_{i-1,i}-\kappa).$
The products $K_{1},\dots,K_{n}$ act in any $n$-fold tensor product
$W_{1}\otimes\cdots\otimes W_{n}$ of $\mathfrak{gl}_{k}$-modules.
Introduce difference operators $Z_{z_{1}},\dots,Z_{z_{n}}$:
$\displaystyle Z_{z_{i}}(z;\lambda)=K_{i}(z;\lambda)T_{z_{i}}.$
The operators $Z_{z_{1}},\dots,Z_{z_{n}}$ are called (rational) quantized
Knizhnik–Zamolodchikov (qKZ) operators.
It is known that the introduced operators combine into three commutative
families, see [4, 5, 7] for more references.
###### Theorem 3.1.
The operators $\nabla_{z_{1}},\dots,\nabla_{z_{n}}$, $D_{\lambda_{1}},\dots,D_{\lambda_{k}}$ pairwise commute.
###### Theorem 3.2.
The operators $\widehat{\nabla}_{z_{1}},\dots,\widehat{\nabla}_{z_{n}}$,
$Q_{\lambda_{1}},\dots,Q_{\lambda_{k}}$ pairwise commute.
###### Theorem 3.3.
The operators $\widehat{D}_{\lambda_{1}},\dots,\widehat{D}_{\lambda_{k}}$,
$Z_{z_{1}},\dots,Z_{z_{n}}$ pairwise commute.
## 4 The $\boldsymbol{(\mathfrak{gl}_{k},\mathfrak{gl}_{n})}$-duality
Consider the ring $\mathfrak{P}_{kn}$ of polynomials in $kn$ anticommuting
variables $x_{ai}$, $a=1,\dots,k$, $i=1,\dots,n$. As a vector space,
$\mathfrak{P}_{kn}$ is isomorphic to $(\mathfrak{X}_{k})^{\otimes n}$, the
isomorphism $\phi_{1}\colon(\mathfrak{X}_{k})^{\otimes
n}\rightarrow\mathfrak{P}_{kn}$ being given by
$\displaystyle\phi_{1}(p_{1}\\!\otimes\dots\otimes
p_{n})(x_{11},\dots,x_{kn})=p_{1}(x_{11},\dots,x_{k1})p_{2}(x_{12},\dots,x_{k2})\cdots
p_{n}(x_{1n},\dots,x_{kn}),\\!\\!\\!$ (4.1)
and to $(\mathfrak{X}_{n})^{\otimes k}$, the isomorphism
$\phi_{2}:(\mathfrak{X}_{n})^{\otimes k}\rightarrow\mathfrak{P}_{kn}$ being
given by
$\displaystyle\phi_{2}(p_{1}\\!\otimes\dots\otimes
p_{k})(x_{11},\dots,x_{kn})=p_{1}(x_{11},\dots,x_{1n})p_{2}(x_{21},\dots,x_{2n})\cdots
p_{k}(x_{k1},\dots,x_{kn}).\\!\\!\\!$ (4.2)
We transfer the $\mathfrak{gl}_{k}$-action on $(\mathfrak{X}_{k})^{\otimes n}$
to $\mathfrak{P}_{kn}$ using isomorphism $\phi_{1}$. Similarly, we transfer
the $\mathfrak{gl}_{n}$-action on $(\mathfrak{X}_{n})^{\otimes k}$ to
$\mathfrak{P}_{kn}$ using isomorphism $\phi_{2}$.
We will write superscripts $\langle k\rangle$ and $\langle n\rangle$ to
distinguish objects associated with Lie algebras $\mathfrak{gl}_{k}$ and
$\mathfrak{gl}_{n}$, respectively. For example, $e_{ab}^{\langle k\rangle}$,
$a,b=1,\dots,k$ denote the generators of $\mathfrak{gl}_{k}$, and
$e_{ij}^{\langle n\rangle}$, $i,j=1,\dots,n$ denote the generators of
$\mathfrak{gl}_{n}$. Then $\mathfrak{P}_{kn}$ is isomorphic to
$\big{(}V_{\bullet}^{\langle k\rangle}\big{)}^{\otimes n}$ as a
$\mathfrak{gl}_{k}$-module by (4.1), and it is isomorphic to
$\big{(}V_{\bullet}^{\langle n\rangle}\big{)}^{\otimes k}$ as a
$\mathfrak{gl}_{n}$-module by (4.2).
It is easy to check that $\mathfrak{gl}_{k}$\- and $\mathfrak{gl}_{n}$-actions
on $\mathfrak{P}_{kn}$ commute, therefore $\mathfrak{P}_{kn}$ is a
$\mathfrak{gl}_{k}\oplus\mathfrak{gl}_{n}$-module. We have the following
theorem, see for example [1]:
###### Theorem 4.1.
For any partition $\boldsymbol{l}$, denote its transpose by
$\boldsymbol{l}^{\prime}$. The
$\mathfrak{gl}_{k}\oplus\mathfrak{gl}_{n}$-module $\mathfrak{P}_{kn}$ has the
decomposition:
$\displaystyle\mathfrak{P}_{kn}=\bigoplus_{\begin{subarray}{c}\boldsymbol{l}=(l_{1},\dots,l_{k}),\\\
l_{1}\leq n.\end{subarray}}V^{\langle k\rangle}_{\boldsymbol{l}}\otimes
V^{\langle n\rangle}_{\boldsymbol{l}^{\prime}}.$
Fix vectors $\boldsymbol{l}=(l_{1},\dots,l_{n})\in\mathbb{Z}_{\geq 0}^{n}$ and
$\boldsymbol{m}=(m_{1},\dots,m_{k})\in\mathbb{Z}_{\geq 0}^{k}$ such that
$\sum\limits_{i=1}^{n}l_{i}=\sum\limits_{a=1}^{k}m_{a}$. Let
$\displaystyle\mathcal{Z}_{kn}[\boldsymbol{l},\boldsymbol{m}]=\left\\{(d_{ai})_{\substack{a=1,\dots,k\\\ i=1,\dots,n}}\in\\{0,1\\}^{kn}\,\bigg{|}\,\sum_{a=1}^{k}d_{ai}=l_{i},\,\sum_{i=1}^{n}d_{ai}=m_{a}\right\\}.$
Denote by
$\mathfrak{P}_{kn}[\boldsymbol{l},\boldsymbol{m}]\subset\mathfrak{P}_{kn}$ the
span of all monomials $x^{\boldsymbol{d}}=x_{11}^{d_{11}}\cdots
x_{k1}^{d_{k1}}\cdots x_{1n}^{d_{1n}}\cdots x_{kn}^{d_{kn}}$ such that
$\boldsymbol{d}=(d_{ai})\in\mathcal{Z}_{kn}[\boldsymbol{l},\boldsymbol{m}]$.
Then by (2.1), the maps $\phi_{1}$ and $\phi_{2}$ induce the isomorphisms of
the respective weight subspaces $\big{(}V_{l_{1}}^{\langle
k\rangle}\otimes\cdots\otimes V_{l_{n}}^{\langle
k\rangle}\big{)}[m_{1},\dots,m_{k}]$ and $\big{(}V_{m_{1}}^{\langle
n\rangle}\otimes\cdots\otimes V_{m_{k}}^{\langle
n\rangle}\big{)}[l_{1},\dots,l_{n}]$ with the space
$\mathfrak{P}_{kn}[\boldsymbol{l},\boldsymbol{m}]$.
There is another description of the isomorphisms $\phi_{1}$ and $\phi_{2}$.
For any $\boldsymbol{a}=(a_{1},\dots,a_{r})$,
$\boldsymbol{i}=(i_{1},\dots,i_{s})$, such that $1\leq a_{1}<\dots<a_{r}\leq
k$, $1\leq i_{1}<\dots<i_{s}\leq n$, define: $e_{\boldsymbol{a}}^{\langle
k\rangle}=e^{\langle k\rangle}_{a_{1}1}e^{\langle k\rangle}_{a_{2}2}\cdots
e^{\langle k\rangle}_{a_{r}r}$, $e_{\boldsymbol{i}}^{\langle
n\rangle}=e^{\langle n\rangle}_{i_{1}1}e^{\langle n\rangle}_{i_{2}2}\cdots
e^{\langle n\rangle}_{i_{s}s}$.
Fix
$\boldsymbol{d}=(d_{ai})\in\mathcal{Z}_{kn}[\boldsymbol{l},\boldsymbol{m}]$.
Let $\boldsymbol{a}_{j}=\big{(}a_{1}^{j},\dots,a_{l_{j}}^{j}\big{)}$ be such
that $a_{1}^{j}<a_{2}^{j}<\dots<a_{l_{j}}^{j}$ and $d_{a_{p}^{j},j}=1$ for all
$j=1,\dots,n$, $p=1,\dots,l_{j}$. Similarly, let
$\boldsymbol{i}_{s}=\big{(}i_{1}^{s},\dots,i_{m_{s}}^{s}\big{)}$ be such that $i_{1}^{s}<i_{2}^{s}<\dots<i_{m_{s}}^{s}$ and $d_{s,i_{p}^{s}}=1$ for all $s=1,\dots,k$, $p=1,\dots,m_{s}$. Introduce the following vectors:
$\displaystyle v_{\boldsymbol{d}}^{\langle k\rangle}=e^{\langle
k\rangle}_{\boldsymbol{a}_{1}}v_{l_{1}}^{\langle k\rangle}\otimes\dots\otimes
e^{\langle k\rangle}_{\boldsymbol{a}_{n}}v_{l_{n}}^{\langle k\rangle},\qquad
v_{\boldsymbol{d}}^{\langle n\rangle}=e^{\langle
n\rangle}_{\boldsymbol{i}_{1}}v_{m_{1}}^{\langle n\rangle}\otimes\dots\otimes
e^{\langle n\rangle}_{\boldsymbol{i}_{k}}v_{m_{k}}^{\langle n\rangle},$
where $v_{l_{j}}^{\langle k\rangle}=x_{1}x_{2}\cdots x_{l_{j}}$ and $v_{m_{s}}^{\langle n\rangle}=x_{1}x_{2}\cdots x_{m_{s}}$ are highest weight vectors for the modules $V_{l_{j}}^{\langle k\rangle}$ and $V_{m_{s}}^{\langle n\rangle}$, respectively.
###### Lemma 4.2.
The vectors $v_{\boldsymbol{d}}^{\langle k\rangle}$,
$\boldsymbol{d}\in\mathcal{Z}_{kn}[\boldsymbol{l},\boldsymbol{m}]$ form a
basis of the weight subspace $\big{(}V_{l_{1}}^{\langle
k\rangle}\otimes\dots\otimes V_{l_{n}}^{\langle
k\rangle}\big{)}[m_{1},\dots,m_{k}]$. Similarly, the vectors
$v_{\boldsymbol{d}}^{\langle n\rangle}$,
$\boldsymbol{d}\in\mathcal{Z}_{kn}[\boldsymbol{l},\boldsymbol{m}]$ form a
basis of the weight subspace $\big{(}V_{m_{1}}^{\langle
n\rangle}\otimes\cdots\otimes V_{m_{k}}^{\langle
n\rangle}\big{)}[l_{1},\dots,l_{n}]$.
Let $\varepsilon(\boldsymbol{d})$ be a sign function such that
$x_{11}^{d_{11}}\cdots x_{1n}^{d_{1n}}\cdots x_{k1}^{d_{k1}}\cdots
x_{kn}^{d_{kn}}=\varepsilon(\boldsymbol{d})x^{\boldsymbol{d}}$.
###### Lemma 4.3.
We have $\phi_{1}\big{(}v^{\langle
k\rangle}_{\boldsymbol{d}}\big{)}=x^{\boldsymbol{d}}$ and
$\phi_{2}\big{(}v^{\langle
n\rangle}_{\boldsymbol{d}}\big{)}=\varepsilon(\boldsymbol{d})x^{\boldsymbol{d}}$.
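For example, let $k=n=2$ and take $\boldsymbol{d}$ with $d_{11}=d_{21}=d_{12}=1$, $d_{22}=0$, so that $x^{\boldsymbol{d}}=x_{11}x_{21}x_{12}$. Then
$\displaystyle\phi_{1}\big{(}x_{1}x_{2}\otimes x_{1}\big{)}=x_{11}x_{21}\cdot x_{12}=x^{\boldsymbol{d}},\qquad\phi_{2}\big{(}x_{1}x_{2}\otimes x_{1}\big{)}=x_{11}x_{12}\cdot x_{21}=-x^{\boldsymbol{d}},$
in agreement with Lemma 4.3, with $\varepsilon(\boldsymbol{d})=-1$.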
Consider the action of KZ, qKZ, DD and qDD operators for the Lie algebras
$\mathfrak{gl}_{k}$ and $\mathfrak{gl}_{n}$ on $\mathfrak{P}_{kn}$-valued
functions of $z_{1},\dots,z_{n}$, and $\lambda_{1},\dots,\lambda_{k}$,
treating the space $\mathfrak{P}_{kn}$ as a tensor product
$\big{(}V_{\bullet}^{\langle k\rangle}\big{)}^{\otimes n}$ of
$\mathfrak{gl}_{k}$-modules and as a tensor product
$\big{(}V_{\bullet}^{\langle n\rangle}\big{)}^{\otimes k}$ of
$\mathfrak{gl}_{n}$-modules. We will write $F\backsimeq G$ if the operators
$F$ and $G$ act on the $\mathfrak{P}_{kn}$-valued functions in the same way.
Denote $\boldsymbol{1}^{\langle k\rangle}=(\underset{k}{\underbrace{1,1,\dots,1}})$ and $\boldsymbol{1}^{\langle n\rangle}=(\underset{n}{\underbrace{1,1,\dots,1}})$.
###### Theorem 4.4.
For any $i=1,\dots,n$ and $a=1,\dots,k$ the following relations hold
$\displaystyle\nabla_{z_{i}}^{\langle k\rangle}(z,\lambda,\kappa)\backsimeq
D_{z_{i}}^{\langle n\rangle}(\lambda,-z,-\kappa),$ (4.3)
$\displaystyle\nabla_{\lambda_{a}}^{\langle
n\rangle}(\lambda,z,\kappa)\backsimeq D_{\lambda_{a}}^{\langle
k\rangle}(z,-\lambda,-\kappa),$ (4.4)
$\displaystyle\widehat{\nabla}_{z_{i}}^{\langle
k\rangle}(z,\lambda,\kappa)\backsimeq-\widehat{D}_{z_{i}}^{\langle
n\rangle}\big{(}{-}\lambda+\boldsymbol{1}^{\langle
k\rangle},z,-\kappa\big{)},$ (4.5)
$\displaystyle\widehat{\nabla}_{\lambda_{a}}^{\langle
n\rangle}(\lambda,z,\kappa)\backsimeq-\widehat{D}_{\lambda_{a}}^{\langle
k\rangle}\big{(}{-}z+\boldsymbol{1}^{\langle
n\rangle},\lambda,-\kappa\big{)},$ (4.6) $\displaystyle Z_{z_{i}}^{\langle
k\rangle}(z,\lambda,\kappa)\backsimeq N_{i}^{\langle
n\rangle}(z)Q_{z_{i}}^{\langle n\rangle}(\lambda,-z,-\kappa),$ (4.7)
$\displaystyle Z_{\lambda_{a}}^{\langle n\rangle}(\lambda,z,\kappa)\backsimeq
N_{a}^{\langle k\rangle}(\lambda)Q_{\lambda_{a}}^{\langle
k\rangle}(z,-\lambda,-\kappa),$ (4.8)
where
$\displaystyle N_{i}^{\langle n\rangle}(z)=\frac{\prod\limits_{1\leq j<i}C_{ji}^{\langle n\rangle}(z_{ji}-\kappa)}{\prod\limits_{i<j\leq n}C_{ij}^{\langle n\rangle}(z_{ij})},\qquad N_{a}^{\langle k\rangle}(\lambda)=\frac{\prod\limits_{1\leq b<a}C_{ba}^{\langle k\rangle}(\lambda_{ba}-\kappa)}{\prod\limits_{a<b\leq k}C_{ab}^{\langle k\rangle}(\lambda_{ab})},$
and
$\displaystyle C_{ab}^{\langle
k\rangle}(t)=\frac{\Gamma\big{(}t+e_{aa}^{\langle
k\rangle}+1\big{)}\Gamma\big{(}t-e_{bb}^{\langle
k\rangle}\big{)}}{\Gamma\big{(}t+e_{aa}^{\langle k\rangle}-e_{bb}^{\langle
k\rangle}+1\big{)}\Gamma(t)},\qquad C_{ij}^{\langle
n\rangle}(t)=\frac{\Gamma\big{(}t+e_{ii}^{\langle
n\rangle}+1\big{)}\Gamma\big{(}t-e_{jj}^{\langle
n\rangle}\big{)}}{\Gamma\big{(}t+e_{ii}^{\langle n\rangle}-e_{jj}^{\langle
n\rangle}+1\big{)}\Gamma(t)}.$ (4.9)
###### Proof.
Verification of relations (4.3), (4.4), (4.5), (4.6) is straightforward. For
(4.7) and (4.8), we have to show that
$\displaystyle R_{ij}^{\langle k\rangle}(t)\backsimeq C_{ij}^{\langle
n\rangle}(t)B_{ij}^{\langle n\rangle}(-t),$ (4.10) $\displaystyle
R_{ab}^{\langle n\rangle}(t)\backsimeq C_{ab}^{\langle
k\rangle}(t)B_{ab}^{\langle k\rangle}(-t).$ (4.11)
We will prove relation (4.11). Relation (4.10) can be proved similarly.
Note that both the action of $R_{ab}^{\langle n\rangle}(t)$ on $\mathfrak{P}_{kn}$ and the action of $C_{ab}^{\langle k\rangle}(t)B_{ab}^{\langle k\rangle}(-t)$ on $\mathfrak{P}_{kn}$ involve only the variables $x_{a1},\dots,x_{an}$, $x_{b1},\dots,x_{bn}$. Therefore, it is sufficient to prove (4.11) for the case $k=2$, $a=1$, $b=2$.
The $\mathfrak{gl}_{n}$-module $\mathfrak{P}_{2,n}$ is isomorphic to
$V_{\bullet}^{\langle n\rangle}\otimes V_{\bullet}^{\langle n\rangle}$.
Consider the submodule $V_{m_{1}}^{\langle n\rangle}\otimes V_{m_{2}}^{\langle
n\rangle}\subset\mathfrak{P}_{2,n}$. We have the following decomposition of
the $\mathfrak{gl}_{n}$-module:
$\displaystyle V_{m_{1}}^{\langle n\rangle}\otimes V_{m_{2}}^{\langle
n\rangle}=\bigoplus_{m=\max(0,m_{1}+m_{2}-n)}^{\min(m_{1},m_{2})}V_{\boldsymbol{l}(m)}^{\langle
n\rangle}.$ (4.12)
Here $\boldsymbol{l}(m)=(2,2,\dots,2,1,\dots,1,0,\dots,0)$, where $2$ repeats
$m$ times and $1$ repeats $m_{1}+m_{2}-2m$ times. Denote by $v_{m}$ a highest
weight vector of the summand $V_{\boldsymbol{l}(m)}^{\langle n\rangle}$ given
by formula (A.1).
Define the scalar product on $\mathfrak{P}_{2,n}$ by the rule: $\langle f,f\rangle=1$ if $f\in\mathfrak{P}_{2,n}$ is a nonzero monomial, and $\langle f,h\rangle=0$ if $f,h\in\mathfrak{P}_{2,n}$ are two non-proportional monomials.
###### Lemma 4.5.
We have $\langle v_{m},v_{m}\rangle\neq 0$ for every $m$.
The proof is straightforward by formula (A.1).
###### Lemma 4.6.
$\big{\langle}w_{1},e_{ij}^{\langle
n\rangle}w_{2}\big{\rangle}=\big{\langle}e_{ji}^{\langle
n\rangle}w_{1},w_{2}\big{\rangle}$ for any $w_{1},w_{2}\in\mathfrak{P}_{2,n}$,
and $i,j=1,\dots,n$.
The proof is straightforward.
###### Corollary 4.7.
If vectors $w$ and $\tilde{w}$ belong to distinct summands of decomposition
(4.12), then $\langle w,\tilde{w}\rangle=0$.
###### Proof.
The summands of decomposition (4.12) are eigenspaces of the operator
$I^{\langle n\rangle}$, and the corresponding eigenvalues are distinct. Lemma
4.6 implies that the operator $I^{\langle n\rangle}$ is symmetric with respect
to the scalar product $\langle\cdot,\cdot\rangle$, which implies the
statement. ∎
Denote
$\displaystyle L_{ij}(t)=t\big{(}e_{ij}^{\langle n\rangle}\big{)}_{(1)}+\sum_{c=1}^{n}\big{(}e_{ic}^{\langle n\rangle}\big{)}_{(1)}\big{(}e_{cj}^{\langle n\rangle}\big{)}_{(2)},\qquad$
$\displaystyle M_{ij}(t)=t\big{(}e_{ij}^{\langle n\rangle}\big{)}_{(1)}+\sum_{c=1}^{n}\big{(}e_{cj}^{\langle n\rangle}\big{)}_{(1)}\big{(}e_{ic}^{\langle n\rangle}\big{)}_{(2)},$
$\displaystyle\alpha_{m}(t)=\langle L_{m_{1}+m_{2}-m+1,m}(t)\cdot
v_{m},v_{m-1}\rangle,\qquad$ $\displaystyle\beta_{m}(t)=\langle
M_{m_{1}+m_{2}-m+1,m}(t)\cdot v_{m},v_{m-1}\rangle.$
###### Lemma 4.8.
The functions $\alpha_{m}(t)$ and $\beta_{m}(t)$ are nonzero, and
$\displaystyle\frac{\alpha_{m}(t)}{\beta_{m}(t)}=\frac{t+1+m_{1}-m}{t-1+m-m_{2}}.$
(4.13)
The proof is given in Appendix A.
Due to relation (3.1), for any $m$, there exists a scalar function
$\rho_{m}(t)$ such that $R_{12}^{\langle n\rangle}(t)w=\rho_{m}(t)w$ for any
$w\in V_{\boldsymbol{l}(m)}^{\langle n\rangle}$.
###### Lemma 4.9.
It holds that
$\frac{\rho_{m}(t)}{\rho_{m-1}(t)}=\frac{\alpha_{m}(t)}{\beta_{m}(t)}.$ (4.14)
###### Proof.
Let us single out the term $V_{\boldsymbol{l}(m-1)}$ in the decomposition
(4.12):
$V_{m_{1}}^{\langle n\rangle}\otimes V_{m_{2}}^{\langle
n\rangle}=V_{\boldsymbol{l}(m-1)}\oplus\tilde{V}.$
Then we can write $L_{m_{1}+m_{2}-m+1,m}(t)\cdot v_{m}=w+\tilde{w}$, where
$w\in V_{\boldsymbol{l}(m-1)}$ and $\tilde{w}\in\tilde{V}$. By the definition
of $L_{m_{1}+m_{2}-m+1,m}(t)$, the vector $w$ has weight
$\boldsymbol{l}(m-1)$. Therefore, $w=av_{m-1}$ for some scalar $a$. By
Corollary 4.7, we have
$\alpha_{m}(t)=\langle L_{m_{1}+m_{2}-m+1,m}(t)\cdot
v_{m},v_{m-1}\rangle=a\langle v_{m-1},v_{m-1}\rangle.$
Notice that $R_{12}^{\langle n\rangle}(t)\tilde{w}\in\tilde{V}$, because the
$R$-matrix $R_{12}^{\langle n\rangle}(t)$ acts as a multiplication by a scalar
function on each summand of the decomposition (4.12). Then, by Corollary 4.7,
$\big{\langle}R_{12}^{\langle
n\rangle}(t)\tilde{w},v_{m-1}\big{\rangle}\allowbreak=0$, and
$\displaystyle\begin{split}&\big{\langle}R_{12}^{\langle
n\rangle}(t)L_{m_{1}+m_{2}-m+1,m}(t)\cdot v_{m},v_{m-1}\big{\rangle}\\\
&\qquad{}=\big{\langle}R_{12}^{\langle
n\rangle}(t)w,v_{m-1}\big{\rangle}=\rho_{m-1}(t)a\langle
v_{m-1},v_{m-1}\rangle=\rho_{m-1}(t)\alpha_{m}(t).\end{split}$
On the other hand, relation (3.2) gives
$\displaystyle\big{\langle}R_{12}^{\langle
n\rangle}(t)L_{m_{1}+m_{2}-m+1,m}(t)\cdot v_{m},v_{m-1}\big{\rangle}$
$\displaystyle\qquad{}=\big{\langle}M_{m_{1}+m_{2}-m+1,m}(t)R_{12}^{\langle
n\rangle}(t)\cdot v_{m},v_{m-1}\big{\rangle}=\rho_{m}(t)\beta_{m}(t).$
Thus we get $\alpha_{m}(t)\rho_{m-1}(t)=\rho_{m}(t)\beta_{m}(t)$, which is
relation (4.14). ∎
By formulae (4.14), (4.13),
$\displaystyle\rho_{m}(t)=\prod_{s=1}^{m}\frac{\rho_{s}(t)}{\rho_{s-1}(t)}=\prod_{s=0}^{m-1}\frac{t+m_{1}-s}{t-m_{2}+s},$
(4.15)
where we used that $\rho_{0}=1$ by the normalization condition (3.3).
Consider a decomposition of the $\mathfrak{gl}_{2}$-module:
$\displaystyle V_{l_{1}}^{\langle 2\rangle}\otimes\dots\otimes
V_{l_{n}}^{\langle 2\rangle}=\bigoplus_{0\leq
m\leq|\boldsymbol{l}|/2}V_{(|\boldsymbol{l}|-m,m)}^{\langle 2\rangle}\otimes
W_{m}^{\langle 2\rangle},$
where $|\boldsymbol{l}|=\sum_{i}l_{i}$ and $W_{m}^{\langle 2\rangle}$ are
multiplicity spaces. Let
$\big{(}V_{l_{1}}^{\langle 2\rangle}\otimes\dots\otimes V_{l_{n}}^{\langle
2\rangle}\big{)}[m_{1},m_{2}]_{m}=\big{(}V_{l_{1}}^{\langle
2\rangle}\otimes\dots\otimes V_{l_{n}}^{\langle
2\rangle}\big{)}[m_{1},m_{2}]\cap\big{(}V_{(|\boldsymbol{l}|-m,m)}^{\langle
2\rangle}\otimes W_{m}^{\langle 2\rangle}\big{)}.$
###### Lemma 4.10.
It holds that
$\displaystyle B_{12}^{\langle 2\rangle}(t)\big{|}_{\big{(}V_{l_{1}}^{\langle
2\rangle}\otimes\dots\otimes V_{l_{n}}^{\langle
2\rangle}\big{)}[m_{1},m_{2}]_{m}}=\prod_{s=m}^{m_{2}-1}\frac{t+m_{2}-s}{t-m_{1}+s}.$
(4.16)
###### Proof.
The modules $V_{l_{i}}^{\langle 2\rangle}$ can have only the following highest
weights: $(1,1)$, $(1,0)$, $(0,0)$. Moreover, the modules $V_{(0,0)}^{\langle
2\rangle}$ and $V_{(1,1)}^{\langle 2\rangle}$ are one-dimensional and the
elements $e_{12}$, $e_{21}$, and $e_{11}-e_{22}$ act there by zero. Thus, it
is enough to consider the case when $V_{l_{i}}^{\langle
2\rangle}=V_{(1,0)}^{\langle 2\rangle}$ for all $i$, so formula (4.16) follows
from [6, formula (5.13)]. ∎
Comparing formulas (4.15), (4.16), and (4.9), we conclude
$\displaystyle\rho_{m}(t)=\big{(}B_{12}^{\langle 2\rangle}(-t)C_{12}^{\langle
2\rangle}(t)\big{)}\big{|}_{\big{(}V_{l_{1}}^{\langle
2\rangle}\otimes\dots\otimes V_{l_{n}}^{\langle
2\rangle}\big{)}[m_{1},m_{2}]_{m}}.$ (4.17)
###### Lemma 4.11.
For the Casimir elements $I^{\langle 2\rangle}$ and $I^{\langle n\rangle}$, we
have
$I^{\langle 2\rangle}-2\sum_{a=1}^{2}e_{aa}^{\langle
2\rangle}\backsimeq-I^{\langle n\rangle}+n\sum_{i=1}^{n}e_{ii}^{\langle
n\rangle}.$ (4.18)
The proof is straightforward.
Recall that in the irreducible $\mathfrak{gl}_{k}$-module $V_{\boldsymbol{l}}$
the element $I^{\langle k\rangle}$ acts as a multiplication by
$(\boldsymbol{l},\boldsymbol{l}+2\rho)$. Then it is easy to verify that
$\displaystyle\left(I^{\langle 2\rangle}-2\sum_{a=1}^{2}e_{aa}^{\langle
2\rangle}\right)\bigg{|}_{V_{(|\boldsymbol{l}|-m,m)}^{\langle 2\rangle}\otimes
W_{m}^{\langle 2\rangle}}=\left(-I^{\langle
n\rangle}+n\sum_{i=1}^{n}e_{ii}^{\langle
n\rangle}\right)\bigg{|}_{V_{\boldsymbol{l}(m)}^{\langle n\rangle}}.$ (4.19)
Comparing formulae (4.18) and (4.19), and using the fact that the Casimir
elements act on distinct irreducible modules as a multiplication by distinct
scalar functions, we get that under isomorphisms (4.1) and (4.2) the
respective images of $V_{(|\boldsymbol{l}|-m,m)}^{\langle 2\rangle}\otimes
W_{m}^{\langle 2\rangle}$ and $V_{\boldsymbol{l}(m)}^{\langle n\rangle}$ in
$\mathfrak{P}_{2,n}$ coincide. To indicate that, we will write
$V_{(|\boldsymbol{l}|-m,m)}^{\langle 2\rangle}\otimes W_{m}^{\langle
2\rangle}\backsimeq V_{\boldsymbol{l}(m)}^{\langle n\rangle}$.
We also have $\big{(}V_{l_{1}}^{\langle 2\rangle}\otimes\dots\otimes
V_{l_{n}}^{\langle
2\rangle}\big{)}[m_{1},m_{2}]\backsimeq\big{(}V_{m_{1}}^{\langle
n\rangle}\otimes V_{m_{2}}^{\langle n\rangle}\big{)}[l_{1},\dots,l_{n}]$.
Therefore $\big{(}V_{m_{1}}^{\langle n\rangle}\otimes V_{m_{2}}^{\langle
n\rangle}\big{)}[l_{1},\dots,l_{n}]\cap V_{\boldsymbol{l}(m)}^{\langle
n\rangle}\backsimeq\big{(}V_{l_{1}}^{\langle 2\rangle}\otimes\dots\otimes
V_{l_{n}}^{\langle 2\rangle}\big{)}[m_{1},m_{2}]_{m}$. Now we see that (4.17)
gives us a relation between the actions of the operators $B_{12}^{\langle 2\rangle}(t)$,
$C_{12}^{\langle 2\rangle}(t)$, and $R_{12}^{\langle n\rangle}(t)$ on one
particular submodule of $\mathfrak{P}_{2,n}$.
Theorem 4.4 is proved. ∎
## Appendix A Proof of Lemma 4.8
Let
$v_{m}=\prod_{i=1}^{m}x_{1i}x_{2i}\sum_{\varepsilon}x_{\varepsilon_{1},m+1}x_{\varepsilon_{2},m+2}\cdots x_{\varepsilon_{m_{1}+m_{2}-2m},m_{1}+m_{2}-m}$
(A.1)
with
$\\{\varepsilon\\}=\big{\\{}(\varepsilon_{1},\varepsilon_{2},\dots,\varepsilon_{m_{1}+m_{2}-2m})\colon\varepsilon_{i}=1\text{
or }2\text{, }\sum_{i}\varepsilon_{i}=m_{1}-m+2(m_{2}-m)\big{\\}}$. One can
easily prove that $v_{m}$ is a highest weight vector of weight
$\boldsymbol{l}(m)$.
It follows from the construction of the scalar product
$\langle\cdot,\cdot\rangle$ that $\alpha_{m}(t)$ (resp. $\beta_{m}(t)$) equals
the sum of the coefficients of those monomials present in
$L_{m_{1}+m_{2}-m+1,m}(t)\cdot v_{m}$ (resp. $M_{m_{1}+m_{2}-m+1,m}(t)\cdot v_{m}$)
that also appear in $v_{m-1}$. In fact, all monomials in either
$L_{m_{1}+m_{2}-m+1,m}(t)\cdot v_{m}$ or $M_{m_{1}+m_{2}-m+1,m}(t)\cdot v_{m}$
appear in $v_{m-1}$ as well.
We will start with $\alpha_{m}$. Denote $s=m_{1}+m_{2}-m+1$. Let us inspect
what happens when we apply various terms of $L_{sm}(t)$ to $v_{m}$. For the
sum $\sum_{k=1}^{n}(e_{sk})_{(1)}(e_{km})_{(2)}$, we can
assume that $m\leq k<s$. If $k>m$, then the operator
$(e_{sk})_{(1)}(e_{km})_{(2)}$ will send a monomial in $v_{m}$ to zero if and
only if this monomial does not depend on $x_{1k}$. That is, we look at all
terms in $v_{m}$ corresponding to $\varepsilon_{k-m}=1$. There are
$C_{m_{1}+m_{2}-2m-1}^{m_{2}-m}$ such terms with the same contribution
$(-1)^{m_{1}+m_{2}+m}$. We leave the details of this calculation to the reader.
Under the assumption $m<k<s$, there are $m_{1}+m_{2}-2m$ different values of
$k$, which yield the overall contribution
$(-1)^{m_{1}+m_{2}+m}(m_{1}+m_{2}-2m)C_{m_{1}+m_{2}-2m-1}^{m_{2}-m}$ to
$\alpha_{m}(t)$.
If $k=m$, then we have $(e_{sk})_{(1)}(e_{km})_{(2)}\cdot
v_{m}=(e_{sm})_{(1)}\cdot v_{m}$. Therefore all $C_{m_{1}+m_{2}-2m}^{m_{1}-m}$
terms in $v_{m}$ equally contribute $(-1)^{m_{1}+m_{2}+m}$.
Finally, the term $t(e_{sm})_{(1)}$ in $L_{sm}(t)$ generates the contribution
$t(-1)^{m_{1}+m_{2}+m}C_{m_{1}+m_{2}-2m}^{m_{1}-m}$ to $\alpha_{m}(t)$, which
can be seen similarly to the case $k=m$ considered above.
Thus we obtain
$\displaystyle(-1)^{m_{1}+m_{2}+m}\alpha_{m}=(t+1)C_{m_{1}+m_{2}-2m}^{m_{1}-m}+(m_{1}+m_{2}-2m)C_{m_{1}+m_{2}-2m-1}^{m_{2}-m}.$
Similar arguments give us
$\displaystyle(-1)^{m_{1}+m_{2}+m}\beta_{m}=(t-1)C_{m_{1}+m_{2}-2m}^{m_{1}-m}-(m_{1}+m_{2}-2m)C_{m_{1}+m_{2}-2m-1}^{m_{1}-m}.$
Since
$(m_{1}+m_{2}-2m)C_{m_{1}+m_{2}-2m-1}^{m_{1}-m}=(m_{2}-m)C_{m_{1}+m_{2}-2m}^{m_{1}-m}$
and
$(m_{1}+m_{2}-2m)C_{m_{1}+m_{2}-2m-1}^{m_{2}-m}=(m_{1}-m)C_{m_{1}+m_{2}-2m}^{m_{1}-m},$
the ratio $\alpha_{m}(t)/\beta_{m}(t)$ equals $(t+1+m_{1}-m)/(t-1+m-m_{2})$, as claimed.
Lemma 4.8 is proved.
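These identities, and hence the ratio (4.13), can also be checked numerically. The following short sympy script is an illustrative sanity check only, not part of the proof; here `binomial(n, k)` plays the role of $C_{n}^{k}$.

```python
# Illustrative numerical check (not part of the proof) of the binomial
# identities above and of the ratio (4.13).
from sympy import binomial, simplify, symbols

t = symbols('t')

for m1 in range(1, 6):
    for m2 in range(1, 6):
        for m in range(1, min(m1, m2) + 1):
            N = m1 + m2 - 2 * m
            # the two identities used in the proof of Lemma 4.8
            assert N * binomial(N - 1, m1 - m) == (m2 - m) * binomial(N, m1 - m)
            assert N * binomial(N - 1, m2 - m) == (m1 - m) * binomial(N, m1 - m)
            # the ratio alpha_m / beta_m must reproduce (4.13)
            alpha = (t + 1) * binomial(N, m1 - m) + N * binomial(N - 1, m2 - m)
            beta = (t - 1) * binomial(N, m1 - m) - N * binomial(N - 1, m1 - m)
            assert simplify(alpha / beta - (t + 1 + m1 - m) / (t - 1 + m - m2)) == 0

print("all identities verified")
```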
### Acknowledgements
V. Tarasov was supported in part by Simons Foundation grant 430235.
## References
* [1] Cheng S.-J., Wang W., Dualities and representations of Lie superalgebras, Graduate Studies in Mathematics, Vol. 144, Amer. Math. Soc., Providence, RI, 2012.
* [2] Etingof P., Frenkel I., Kirillov A., Lectures on representation theory and Knizhnik–Zamolodchikov equations, Mathematical Surveys and Monographs, Vol. 58, Amer. Math. Soc., Providence, RI, 1998.
* [3] Etingof P., Varchenko A., Dynamical Weyl groups and applications, Adv. Math. 167 (2002), 74–127, arXiv:math.QA/0011001.
* [4] Felder G., Markov Y., Tarasov V., Varchenko A., Differential equations compatible with KZ equations, Math. Phys. Anal. Geom. 3 (2000), 139–177, arXiv:math.QA/0001184.
* [5] Tarasov V., Varchenko A., Difference equations compatible with trigonometric KZ differential equations, Int. Math. Res. Not. 2000 (2000), 801–829, arXiv:math.QA/0002132.
* [6] Tarasov V., Varchenko A., Duality for Knizhnik–Zamolodchikov and dynamical equations, Acta Appl. Math. 73 (2002), 141–154, arXiv:math.QA/0112005.
* [7] Tarasov V., Varchenko A., Dynamical differential equations compatible with rational qKZ equations, Lett. Math. Phys. 71 (2005), 101–108, arXiv:math.QA/0403416.
* [8] Toledano-Laredo V., The trigonometric Casimir connection of a simple Lie algebra, J. Algebra 329 (2011), 286–327, arXiv:1003.2017.
* [9] Toledano-Laredo V., Yang Y., The elliptic Casimir connection of a simple Lie algebra, arXiv:1805.12261.
* [10] Vicedo B., Young C., $(\mathfrak{gl}_{M},\mathfrak{gl}_{N})$-dualities in Gaudin models with irregular singularities, SIGMA 14 (2018), 040, 28 pages, arXiv:1710.08672.
# Simplify-then-Translate: Automatic Preprocessing for Black-Box Translation
Sneha Mehta1, Bahareh Azarnoush2, Boris Chen2, Avneesh Saluja2,
Vinith Misra2, Ballav Bihani2, Ritwik Kumar2
1Department of Computer Science, Virginia Tech, VA
2Netflix Inc., CA
<EMAIL_ADDRESS>{bazarnoush, bchen, asaluja,
vmisra, bbihani<EMAIL_ADDRESS>
###### Abstract
Black-box machine translation systems have proven incredibly useful for a
variety of applications yet by design are hard to adapt, tune to a specific
domain, or build on top of. In this work, we introduce a method to improve
such systems via automatic pre-processing (APP) using sentence simplification.
We first propose a method to automatically generate a large in-domain
paraphrase corpus through back-translation with a black-box MT system, which
is used to train a paraphrase model that “simplifies” the original sentence to
be more conducive for translation. The model is used to preprocess source
sentences of multiple low-resource language pairs. We show that this
preprocessing leads to better translation performance as compared to non-
preprocessed source sentences. We further perform side-by-side human
evaluation to verify that translations of the simplified sentences are better
than the original ones. Finally, we provide some guidance on recommended
language pairs for generating the simplification model corpora by
investigating the relationship between ease of translation of a language pair
(as measured by BLEU) and quality of the resulting simplification model from
back-translations of this language pair (as measured by SARI), and tie this
into the downstream task of low-resource translation.
## 1 Introduction
Modern translation systems built on top of a sequence transduction approach
(Sutskever, Vinyals, and Le, 2014; Bahdanau, Cho, and Bengio, 2014) have
greatly advanced the state and quality of machine translation (MT). These
systems generally rely on the availability of large-scale parallel corpora,
and while unsupervised (Lample et al., 2017) or semi-supervised (Saluja et
al., 2014) approaches are a popular area of research, production-grade
translation systems still primarily leverage bitexts when training. Efforts
such as WMT (http://www.statmt.org/wmt18/) provide such corpora for select
language pairs, which has enabled neural MT systems to achieve state-of-the-art
performance on those pairs. However, low-resource MT (language pairs for
which parallel data is scarce) remains a challenge.
In this work, we focus on improving translation quality for low-resource
translation (i.e., from English _into_ a low-resource language) in the _black-
box MT_ (BBMT) setting - namely a system which has been trained and tuned a
priori and for which we cannot access the model parameters or training data
for fine-tuning or improvements. Examples of such systems include those
provided by commercial vendors, e.g., Google Translate
(https://translate.google.com/) or Microsoft Translator
(https://www.bing.com/translator). While some provide the option of
fine-tuning on domain-specific data under certain conditions, how to improve
the performance of such black-box systems on domain-specific translation tasks
remains an open question. We investigate methods to leverage the BBMT system
to preprocess input source sentences in a way that preserves meaning and
improves translation in the target language. Specifically, a large-scale
parallel corpus for English simplification is obtained by back-translating
(Sennrich, Haddow, and Birch, 2016a) the reference translations of several
high-resource target languages. The resulting parallel corpus is used to train
an Automatic Preprocessing model (APP) (§ 2), which transforms a source sentence
into a form that preserves meaning while also being easier to translate into a
low-resource language. In effect, the APP model attacks the longstanding
problem of handling complex idiomatic and non-compositional phrases (Lin,
1999; Sag et al., 2002), and simplifies these expressions to more literal,
compositional ones that we hypothesize are easier to translate.
We use the APP model to simplify the source sentences of a variety of low-
resource language pairs and compare the performance of the black-box MT system
on the original and simplified sentences. Note that only one APP model needs
to be trained per source language and this model can be applied to a variety
of low-resource language pairs as long as the source language is the same. In
our study we focus on the domain of conversational language as used in
dialogues of TV shows. We picked this domain since its language tends to be
colloquial and idiomatic. We empirically show improvement in translation
quality in this domain across a variety of low resource target languages (§3).
This improvement is further corroborated with side-by-side human evaluations
(§4.2) and evaluating on post-editing efficiency (§4.1). Lastly, we perform an
empirical analysis to probe further into which high-resource language pairs
should be selected to obtain a good quality simplification corpus for a given
language, before discussing connections with related work (§5) and concluding.
Figure 1: Machine vs. Human Translation
## 2 APP with Back Translations
Consider the example in Fig. 1. The source sentence “The vice president should
feel free to jump in” has been translated by Google Translate (this specific
translation was observed on translate.google.com as of September 5, 2019)
incorrectly into Hindi as “Vice President should feel free to jump inside”. The
system was unable to correctly translate the idiomatic and non-compositional
phrase “jump in”, which in this context means “intervene” or “get involved”,
and instead translated it literally. An expert human Hindi translator would
take these idiomatic expressions into account when generating the Hindi
translation. Indeed, when back-translating the reference translation into
English we obtain “The Vice President should feel free to take part” (in the
conversation). Such instances where MT systems incorrectly translate sentences
containing phrases, idioms, or complex words for low-resource (i.e., with
smaller training sets) language pairs are fairly common.
In other words, the back-translation is different in meaning than the natural
source sentence. This problem is prevalent even when these BBMT systems are
fine-tuned on domain-specific data and is exacerbated when dealing with low-
resource language pairs, simply because the paucity of data does not allow the
MT models to infer the translations of the myriad of phrases and complex
words. For instance, in the example above ‘jump in’ was interpreted
compositionally as ‘jump’ $+$ ‘in’. To generate better quality or even
acceptable translations, it is imperative to simplify such complex sentences
into simpler forms while still preserving meaning.
Using automated models to simplify such sentences is a well-studied problem.
However, when it comes to the ultimate task of domain-specific translation, it
is not entirely clear what data is best suited to train such simplification
models. Open source datasets like WikiLarge (Zhang and Lapata, 2017) or Simple
PPDB (Zhao et al., 2018) are good options to explore, but the domain mismatch
and dataset size may pose a challenge. In particular, WikiLarge dataset
contains 296K sentence pairs of descriptive text while our domain of interest
in this study is conversational dialogues from TV Shows and movies. Collecting
a large amount of domain-specific simplification data could be prohibitive,
forcing one to consider alternatives when constructing their simplification
models.
To address this problem we make use of the observation that translating back-
translations is easier than translating naturally occurring source sentences,
which has been corroborated by numerous studies (Graham, Haddow, and Koehn,
2019; Zhang and Toral, 2019). Consider a set of around 30K uniformly-sampled
sentence pairs from the English-Bulgarian (En-Bg) subtitles corpus appearing
on TV shows and movies from a subscription video-on-demand
provider (www.netflix.com). The BLEU score in the natural or direct direction
(En $\rightarrow$ Bg) is only 10.20, whereas when following the reverse
direction (Bg $\rightarrow$ En $\rightarrow$ Bg) and translating back-
translations instead of original source sentences, the BLEU score dramatically
improves to 33.39. Probing a little deeper, Fig. 2 shows the distributions of
sentence-level GLEU scores (Wu and others, 2016) for two language pairs:
English-Bulgarian (right) and English-Hindi (left). We observe that
GLEU scores generally improve when using back-translations. The area
where the blue curve dominates the red curve can be considered the ‘scope-
of-simplification’.
(a) English-Hindi
(b) English-Bulgarian
Figure 2: Sentence-level GLEU for direct translations and back-translations.
The red density curve represents the distribution of GLEU scores obtained by
the direction translation direction and the blue density curve represents the
distribution of GLEU scores obtained by using back-translations.
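The direct vs. round-trip BLEU comparison above can be sketched in a few lines (our illustration; `translate` stands for any black-box MT callable and is hypothetical; only sacrebleu's standard `corpus_bleu` API is assumed):

```python
# Sketch of the direct vs. back-translation BLEU comparison described above.
import sacrebleu

def compare_directions(en_src, tgt_refs, translate):
    """Compare En->tgt BLEU against tgt->En->tgt (round-trip) BLEU."""
    direct = translate(en_src, src="en", tgt="bg")        # En -> Bg
    bt_en = translate(tgt_refs, src="bg", tgt="en")       # Bg -> En (back-translation)
    round_trip = translate(bt_en, src="en", tgt="bg")     # En -> Bg again
    return (sacrebleu.corpus_bleu(direct, [tgt_refs]).score,
            sacrebleu.corpus_bleu(round_trip, [tgt_refs]).score)
```

In the En-Bg experiment above, the first score was 10.20 and the second 33.39 on the ~30K uniformly-sampled subtitle pairs.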
Thus it seems that human reference translations when back-translated to the
original language (in this case English) are a rich source of simplifications
(e.g. “jump in” is simplified to “take part”). This observation leads to two
immediate corollaries - 1) by back-translating the ground truth human
translations to the source language we obtain a (perhaps noisy) simplified
version of the original source, and 2) we can learn a function to map the
source sentences to their simplified forms by training a sequence-to-sequence
(S2S) model from the aforementioned generated parallel corpus. We term the
resulting simplification model an Automated Preprocessing (or APP) model.
We formalize our APP model as an S2S model from source sentences in one
language to back-translations of ground truth sentences i.e., translationese
targets in the same language. The translationese targets can be obtained from
multiple high-resource language pairs using black-box MT systems and the
trained APP model can be applied to a variety of low resource language pairs.
Let $(X^{i},Y^{i})$ be the $i^{\textrm{th}}$ training bitext corpus ($X$ is
source, $Y$ is target) with source language $s_{i}$ and target language
$t_{i}$ for $i\in\\{1,\dots,M\\}$, where $M$ is the number of training
language pairs, and let $j\in\\{M+1,\dots,M+N\\}$ refer to the $N$ test bitext
corpora. Note that for the experiments in this paper, $s_{i}$ is fixed to
English $\forall i\in\\{1,\dots,M+N\\}$ so we simply refer to it as $s$. The
APP procedure is as follows (a minimal code sketch is given after the list):
* •
Obtain back-translations $T^{1},T^{2},\dots,T^{M}$ of the target training sets
$Y^{i}$, $i=1,\dots,M$, into the source language $s$, using BBMT models
$MT_{t_{i}\rightarrow s}$.
* •
Train an APP simplification model $f_{APP}$ on the combined parallel corpus
$\bigcup_{i=1}^{M}\\{(X^{i},T^{i})\\}$.
* •
At test time, preprocess the source $X^{j}$ for each test language pair
$j$ using the trained APP model to obtain the simplified source $X^{j*}$,
where
$X^{j*}=f_{APP}(X^{j})$ (1)
* •
Translate the simplified source using the BBMT model for the $j^{th}$ test
language pair.
$\hat{Y}^{j*}=MT_{s\rightarrow t_{j}}(X^{j*})$ (2)
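The minimal Python sketch below summarizes the four steps (our illustration; `bbmt_translate` and `train_seq2seq` are hypothetical stand-ins for the black-box MT API and a seq2seq trainer, not functions from the paper):

```python
# Minimal sketch of the APP pipeline described above. All functions are
# hypothetical stand-ins for a black-box MT API and a seq2seq trainer.

def bbmt_translate(sentences, src, tgt):
    """Black-box MT call (e.g., a commercial translation API)."""
    raise NotImplementedError

def train_seq2seq(pairs):
    """Train a sequence-to-sequence model on (source, target) pairs."""
    raise NotImplementedError

def build_app_model(train_corpora, src="en"):
    # Steps 1-2: back-translate the high-resource targets Y^i, then train
    # the simplification model f_APP on the union of (X^i, T^i) pairs.
    pairs = []
    for X, Y, tgt_lang in train_corpora:               # (source, target, language)
        T = bbmt_translate(Y, src=tgt_lang, tgt=src)   # translationese targets T^i
        pairs.extend(zip(X, T))
    return train_seq2seq(pairs)                        # f_APP

def translate_low_resource(f_app, X_test, tgt_lang, src="en"):
    # Steps 3-4: simplify the source, then translate with the same BBMT.
    X_simple = [f_app(x) for x in X_test]              # X^{j*} = f_APP(X^j)
    return bbmt_translate(X_simple, src=src, tgt=tgt_lang)  # \hat{Y}^{j*}
```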
APP provides in-domain simplification bitext at scale and from the same BBMT
system that we eventually use to translate into low-resource languages, thus
providing a more flexible solution than using precompiled simplification
corpora. In the next sections, we compare the performance of BBMT system
outputs with and without APP simplifications, i.e., $\hat{Y}^{j*}$ and
$\hat{Y}^{j}$ respectively. Further, we compare the APP models trained on
in-domain vs. out-of-domain corpora.
## 3 Evaluation
We first compare the in-domain APP model to an S2S model trained on the
WikiLarge corpus, by evaluating downstream translation performance on low-
resource languages, followed by an evaluation based on human judgements and
post-editing efficiency. These experiments are conducted on a subtitles dataset
from a subscription video-on-demand provider. We also validate the approach on
another subtitles dataset and verify that the improvements we see in the first
set of experiments generalize to other corpora in the same domain. In all of
our experiments we used the Google Translate BBMT system.
### 3.1 Datasets and Metrics
#### FIGS
This dataset comes from subtitles appearing on 12,301 TV shows and movies from
a subscription video-on-demand provider. Titles include but are not limited
to: “How to Get Away with Murder”, “Star Trek: Deep Space Nine”, and “Full
Metal Alchemist”. We take four high-resource language pairs namely: English-
French (1.3 million parallel subtitle events i.e., sentence pairs), English-
Italian (1.0M), English-German (1.2M) and English-Spanish (6.5M). We
collectively refer to this dataset as FIGS. We use the APP simplification
procedure to obtain English simplification parallel corpora resulting in 9.5M
subtitle events i.e., sentence pairs. This dataset contains short sentences
with an average length of 7. For evaluation, we pick 7 low-resource language
pairs namely: English-Hungarian (En-Hu), English-Ukrainian (En-Uk), English-
Czech (En-Cs), English-Romanian (En-Ro), English-Bulgarian (En-Bg), English-
Hindi (En-Hi), and English-Malay (En-Ms). The test set statistics for each
language pair are given in Table 1. We refer to the APP model trained on this dataset
as ‘FigsAPP’.
Table 1: FIGS test set statistics

Language | #sentences
---|---
En-Hu | 27,393
En-Uk | 30,761
En-Cs | 35,505
En-Ro | 47,054
En-Bg | 30,714
En-Hi | 23,496
En-Ms | 11,713
#### WikiLarge
The WikiLarge dataset (Zhang and Lapata, 2017) was compiled by using sentence
alignments from other Wikipedia-based datasets (Zhu, Bernhard, and Gurevych,
2010; Woodsend and Lapata, 2011; Kauchak, 2013), and contains 296K instances
of 1-to-1 and 1-to-many alignments. This is a widely used benchmark for text
simplification tasks. The train split contains 296K sentence pairs and the
validation split contains 992 sentence pairs. We call the simplification model
trained on this dataset ‘WikiAPP’.
#### Open Subtitles
The Open Subtitles dataset (Lison and Tiedemann, 2016) is a collection of
translated movie subtitles obtained from
opensubtitles.org (http://www.opensubtitles.org/). It contains 62 languages
and 1,782 bitexts. We first train two MT models on two high-resource
language pairs (Es-En and Ru-En) using the Transformer architecture (Vaswani
et al., 2017). Then, using the MT models above, we train two APP models obtained
from the same language pairs English-Spanish (En-Es) and English-Russian (En-
Ru). We sample 5M sentence pairs each for training MT and APP models from the
corresponding Open Subtitles corpora and filter out short ($length<3$) and
long ($length>50$) sentence pairs. Note that training sets for both MT and APP
models are disjoint. For evaluation, we pick the following six language pairs:
three randomly picked pairs, English-Armenian (En-Hy), English-Ukrainian (En-Uk),
and English-Bulgarian (En-Bg), and three pairs in which the target language is
similar to Spanish including English-Catalan (En-Ca), English-Portuguese (En-
Pt) and English-Romanian (En-Ro). We sample 50,000 pairs from the low-resource
test bitexts and filter out pairs with length less than 3 or greater than 50.
We call the APP model obtained from the En-Es dataset ‘OSEsAPP’ and the one
from the En-Ru dataset ‘OSRuAPP’.
#### Turk and PWKP
To test the performance of APP models obtained from different language pairs
(§ 4.3), we pick two open-source simplification datasets. The first dataset is
the Turk (Xu et al., 2016) dataset which contains 1-to-1 alignments focused on
paraphrasing transformations, and multiple (8) simplification references per
original sentence (collected through Amazon Mechanical Turk). To evaluate the
performance on this dataset we use the SARI metric they introduced. The next
dataset we use is the test set of the PWKP dataset (Zhu, Bernhard, and
Gurevych, 2010). This dataset contains only 1-to-1 mappings between source and
reference, and hence we use the BLEU metric to evaluate the simplification
performance.
#### Metrics
We evaluate the translation performance of the BBMT system using the commonly
used BLEU metric (Papineni et al., 2002). For subtitle generation, expert
linguists post-edit subtitles at the event or dialog level, hence it is useful
to look at the impact that simplification brings at the sentence-level,
motivating the choice of sentence-level GLEU (Wu and others, 2016), which has
been shown to be better correlated with human judgements at the sentence-level
as compared to sentence-level BLEU. Furthermore, we compute the normalized
edit (Levenshtein) distance between a translation output and the human reference
translation, also known as the translation error rate (TER), which has been shown
to correlate well with the amount of post-editing effort required by a human
(Snover et al., 2006). This metric provides yet another way to evaluate the
quality of translation and completes our comprehensive suite of metrics.
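To make the sentence-level metrics concrete (this sketch is ours, not the paper's evaluation script), sentence-level GLEU is available in NLTK, and a simple word-level normalized edit distance approximates the TER computation (the paper uses the tercom implementation, which additionally handles shift operations):

```python
# Sentence-level GLEU via NLTK, plus a simple word-level normalized edit
# distance in the spirit of TER (shift operations are ignored here).
from nltk.translate.gleu_score import sentence_gleu

def edit_distance(a, b):
    # Standard Levenshtein distance over token lists, one-row DP.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def normalized_edit_rate(hyp, ref):
    hyp, ref = hyp.split(), ref.split()
    return edit_distance(hyp, ref) / max(len(ref), 1)

ref = "the vice president should feel free to take part"
hyp = "the vice president should feel free to jump inside"

print("GLEU:", sentence_gleu([ref.split()], hyp.split()))
print("edit rate:", normalized_edit_rate(hyp, ref))
```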
### 3.2 Implementation
For training the APP simplification model we use the Transformer architecture
(Vaswani et al., 2017) through the tensor2tensor library
(https://github.com/tensorflow/tensor2tensor). We also
evaluate BLEU using the implementation in that library and report the uncased
version of the metric. For computing the TER score we use the implementation
provided by Snover et al. (2006) (http://www.cs.umd.edu/~snover/tercom/).
All experiments are based on the transformer base architecture with 6 blocks
in the encoder and decoder. We use the same hyper-parameters for all
experiments, i.e., word representations of size 512 and feed-forward layers
with inner dimension 4096. Dropout is set to 0.2 and we use 8 attention heads.
Models are optimized with Adam (Kingma and Ba, 2014) using $\beta_{1}=0.9$,
$\beta_{2}=0.98$, and $\epsilon=10^{-9}$, with the same learning rate schedule as
Vaswani et al. (2017). We use 50,000 warmup steps. All models use label
smoothing of 0.1 with a uniform prior distribution over the vocabulary. We run
all experiments using machines with 4 Nvidia V100 GPUs. We use a sub-word
vocabulary of size 32K implemented using the word-piece algorithm (Sennrich,
Haddow, and Birch, 2016b) to deal with out-of-vocabulary words and the open
vocabulary problem in S2S language models.
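For reference, the learning-rate schedule of Vaswani et al. (2017), with the warmup value quoted above, can be written as follows (our sketch of the standard formula, not code from the paper):

```python
def transformer_lr(step, d_model=512, warmup_steps=50000):
    # "Noam" schedule from Vaswani et al. (2017): linear warmup followed
    # by inverse-square-root decay, scaled by d_model^{-0.5}.
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The learning rate peaks at the end of warmup, then decays.
print(transformer_lr(50000))   # peak value
print(transformer_lr(200000))  # decayed value
```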
## 4 Results
Table 2 compares the performance of APP models trained on the FIGS (in-domain)
and the WikiLarge (out-of-domain) datasets on the FIGS test set using
the BBMT system. The values in columns 1-3 indicate the BLEU score after
translating the original sentence and after simplifying using the FigsAPP and
WikiAPP models respectively. There is uniform improvement across all languages
when using the FigsAPP model, ranging from 3.5% (relative) for En-Uk to 11.6%
for En-Bg. On the other hand, performance degrades significantly on all
languages when simplified using a model trained on the WikiLarge dataset. Fig.
3 shows the distribution of sentence-level GLEU for each target language in
the FIGS test set. Mean GLEU increases for En-Hu, En-Ro, En-Ms and En-Bg.
Table 3 shows the results of APP on the Open Subtitles dataset. It can be
noted that the performance improves for Catalan (ca) and Portuguese (pt),
which are similar to Spanish, the language used to generate the simplification
corpus. Additionally, for Bulgarian an improvement of 4.1% can be observed.
Moreover, the performance of OSEsAPP is better than that of OSRuAPP. This can
be attributed to the fact that En-Ru is a harder language pair to translate
than En-Es, and hence the simplification model obtained from En-Ru is of worse
quality. We elaborate further on this point in § 4.3.
Table 2: In-domain vs. out-of-domain simplification performance. An APP model trained on an out-of-domain simplification corpus (WikiLarge) degrades performance, whereas an in-domain simplification corpus (FIGS) boosts performance.

language pair | original | FigsAPP | WikiAPP
---|---|---|---
En-Hu | 17.69 | 18.92 | 11.86
En-Uk | 17.57 | 18.18 | 12.51
En-Cs | 21.62 | 22.35 | 16.71
En-Ro | 26.08 | 27.98 | 21.56
En-Bg | 14.9 | 16.63 | 12.39
En-Hi | 14.45 | 15.53 | 11.39
En-Ms | 19.14 | 20.37 | 13.00
Table 3: Translation performance (BLEU) before and after applying the OSEsAPP and OSRuAPP models on six low-resource target language pairs from the Open Subtitles corpus.

language pair | original | OSEsAPP | OSRuAPP
---|---|---|---
En-Ca | 27.25 | 27.84 | 23.36
En-Hu | 7.05 | 6.28 | 5.79
En-Pt | 25.11 | 25.5 | 22.28
En-Ro | 26.18 | 25.03 | 22.40
En-Uk | 11.73 | 11.77 | 11.61
En-Bg | 23.71 | 24.68 | 23.90
### 4.1 Post-Editing Efficiency
Figure 3: Sentence-level GLEU scores.

Table 4: Translation Edit Rate (TER) of translations before and after applying the APP simplification for test language pairs from the FIGS dataset. The ‘original’ and ‘simple’ columns show the TER before and after APP; the last column indicates the percentage decrease in TER.

language | original | simple | %$\Delta$
---|---|---|---
En-Hu | 0.86 | 0.80 | -7%
En-Uk | 0.74 | 0.70 | -5.4%
En-Cs | 0.78 | 0.77 | -1.3%
En-Ro | 0.72 | 0.67 | -6.9%
En-Bg | 0.83 | 0.77 | -7.3%
En-Hi | 0.62 | 0.60 | -3.3%
En-Ms | 0.76 | 0.72 | -5.3%
We also observe that TER decreases for all languages, which is intuitive
because the APP simplification brings the sentences closer to the literal form
of the human reference translations. Table 4 shows the TER for the FIGS test
corpora when translating into seven low-resource languages, before and after
applying the APP simplification. As can be seen, TER decreases for all
languages after simplification, with reductions of approximately 7.0%, 6.9%,
and 7.2% for the target languages Hungarian, Romanian, and Bulgarian
respectively. The reduction in TER correspondingly translates to a reduction
in the post-editing effort required by translators using the BBMT system as an
assistive tool.
### 4.2 Human Evaluation
Simplified sentences with worse GLEU than their baseline non-simplified
counterparts might not necessarily be of worse quality; rather, they may just
be phrased differently than the reference sentence.

Figure 4: Human Evaluation Results

We thus perform a side-by-side human evaluation to verify if APP-simplified translations improve
MT quality, which allows us to assess via human evaluation these supposedly
worse translations and validate translations with GLEU improvements at the
same time. For this purpose, we restricted evaluation to five languages from
the FIGS test set (hu, uk, cs, ro, and bg) and sampled 100 sentences from the
fraction of sentences for which $\Delta GLEU>0.4$ and 100 sentences from the
sentences for which $\Delta GLEU<0$, where $\Delta GLEU$ for one sentence pair
$(x,y)$ is defined as:
$\Delta GLEU=GLEU(MT(x^{*}),y)-GLEU(MT(x),y)$ (3)
and $x^{*}$ is the simplified source sentence. For each language we show the
source sentence, the BBMT output of the original sentence, the BBMT output of
the simplified sentence, and the ground truth human translation side-by-side
to expert linguists. We ask them to rate the quality of the three
translations (original BBMT, simplified BBMT and human translation) according
to the scale used by Wu and others (2016) described in Table 5. Since
translation is generally easier for shorter sentences, in order to get a
representative sample of challenging sentences we only selected sentences with
more than 4 tokens for this study.
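The selection procedure above can be sketched as follows (our illustration; `mt` and `simplify` are hypothetical callables wrapping the BBMT system and the trained APP model; thresholds and the token filter follow the text):

```python
# Sketch of the Delta-GLEU bucketing used to select sentences for human
# evaluation; Eq. (3): GLEU(MT(x*), y) - GLEU(MT(x), y), with x* = f_APP(x).
import random
from nltk.translate.gleu_score import sentence_gleu

def delta_gleu(x, y, mt, simplify):
    score = lambda src: sentence_gleu([y.split()], mt(src).split())
    return score(simplify(x)) - score(x)

def sample_for_human_eval(pairs, mt, simplify, n=100, min_tokens=5):
    improved, worsened = [], []
    for x, y in pairs:
        if len(x.split()) < min_tokens:     # keep sentences with > 4 tokens
            continue
        d = delta_gleu(x, y, mt, simplify)
        if d > 0.4:
            improved.append((x, y))
        elif d < 0:
            worsened.append((x, y))
    return random.sample(improved, n), random.sample(worsened, n)
```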
Table 5: Human evaluation ratings and their description

Rating | Description
---|---
0 | completely nonsense translation
2 | the sentence preserves some of the meaning of the source sentence but misses significant parts
4 | the sentence retains most of the meaning of the source sentence, but may have some grammar mistakes
6 | perfect translation: the meaning of the translation is completely consistent with the source, and the grammar is correct
Fig. 4 displays results from the human evaluation, with corresponding values
stated in Table 6. The green box represents scores from translations of
original sentences, the orange box represents scores from translations of
simplified sentences, and the blue box represents the ground truth human
translation. As expected, the human translation has the highest score. Also
worth noting is the jump in score intervals between original and simplified
translations. For Hungarian, Czech, and Bulgarian median scores jump from
$2\rightarrow 4$, with improvements for Ukrainian ($3\rightarrow 4$) and
Romanian ($2\rightarrow 3$) as well. Thus at least for these languages we can
conclude that APP simplification results in improved translation output, as
determined by expert human translators. This concurs with the initial
observation e.g., for Bulgarian in Fig. 2, where the number of sentences with
score of 0.0 are squashed and their scores shifted higher. This means
simplification can improve upon erroneous or bad-quality translations. Table 6
shows the mean human scores for original, simplified and reference
translations as well as percentage of sentences for which human score
improved, worsened and remained the same after simplification for all 200
sentences per language.
Table 6: Human evaluation statistics for the FIGS test language pairs. ‘Original Mean’, ‘Simple Mean’ and ‘Human Mean’ represent the mean human scores before APP, after APP, and for the ground truth translations respectively. The last three columns indicate the percentage of samples with improved (% +ve), worse (% -ve), and same (% same) performance after simplification.

lang | Original Mean | Simple Mean | Human Mean | % +ve | % -ve | % same
---|---|---|---|---|---|---
Hu | 2.52 | 3.11 | 4.45 | 38.5% | 18.5% | 43%
Uk | 3.02 | 3.61 | 5.8 | 43% | 15% | 42%
Cs | 2.43 | 3.12 | 4.77 | 49% | 16.5% | 34.5%
Ro | 2.56 | 3.0 | 3.91 | 41.5% | 25.5% | 33%
Bg | 2.34 | 3.33 | 5.365 | 51% | 22% | 27%
### 4.3 Ease of Translation vs Simplification Quality
We further seek to investigate the relationship between the translation
language pair used to generate the simplification corpus and the quality of
the resulting simplification model. To this end, we train translation systems
on two language pairs: English-Spanish (En-Es) and English-Russian (En-Ru).
The En-Es pair is an easier language pair to translate than En-Ru as reflected
by the BLEU scores of the SOTA MT systems on these pairs. We pick an En-Es
system trained to reach a BLEU of 42.7 and an En-Ru system trained to a BLEU
of 34.23. We hypothesize that the APP models resulting from an easier language
pair will be of better quality because it is easier to generate a good quality
parallel simplification corpus. To test this hypothesis, we train
simplification models obtained from the En-Es and En-Ru translation pairs and
test the simplification performance on two standard simplification test sets.
Table 7: Simplification performance of APP models trained on corpora generated from the En-Es and En-Ru datasets, evaluated on the Turk and PWKP open-source datasets.

(Metric) Dataset | Wiki | En-Es | En-Ru
---|---|---|---
(SARI) Turk | 31.9 | 26.7 | 23.2
(BLEU) PWKP | 53.5 | 32.7 | 23.9
Table 7 presents the results of this experiment, specifically the performance
of the simplification models trained via automatically-generated
simplification corpora obtained from the En-Es and En-Ru OpenSubtitles models.
We also test the performance of an in-domain simplification model trained on
the WikiLarge dataset. As expected, the performance of the in-domain model
exceeds the performance of the models trained on En-Es and En-Ru datasets. It
is also interesting to note that the En-Es model outperforms the En-Ru model
on both datasets, supporting our hypothesis that easier language pairs result
in better APP models. This idea can be used to inform which high-resource
language pair(s) to pick for training an APP model for a target language pair.
For instance, to simplify English it would be better to pick a high-resource
pair which is easier to translate from English, e.g., English-Spanish. When
simplifying Catalan (ca), it would be good to pick a translation pair like
Catalan-Spanish, which is easier to translate than Catalan-English.
### 4.4 Qualitative Analysis
Table 8: Positive and negative qualitative examples of simplifications brought about by APP.

Original: I still, think **you’re nuts**, but not as nuts as I thought
Simple: I still think **you’re crazy**, but not as crazy as I thought

Original: in a town only **five miles** from Kabul.
Simple: in a city **eight kilometers** from Kabul.

Original: This case is **so far over your head**, it’d make your nose bleed.
Simple: This case is **so complicated** that it would bleed your nose.

Original: When I was **marooned** here, my first meal was a pheasant.
Simple: When I was **stranded** here, my first meal was a pheasant.

Original: **She** jumped from the window of Room 180.
Simple: **He** jumped out of the window of Room 180.
Here we provide qualitative examples of simplifications generated by the APP
approach and how it helps in improving BBMT performance. Table 8 shows some
example simplifications from the FIGS test datasets. Phrases highlighted in
bold were converted to ‘simpler’ phrases. In the first example “you’re nuts”
was replaced by a non-idiomatic/simpler phrase “you’re crazy”. In the second
example, a distance of five miles was almost accurately converted from the
imperial to the metric system, whereas in the third example the
non-compositional phrase “so
far over your head” was translated into the compositional phrase “so
complicated”. In the next example, an infrequent word “marooned” was replaced
by its more frequent counterpart “stranded”. Finally, the last example shows
the kinds of errors introduced by the APP model where it occasionally replaces
pronouns like ‘it’, ‘she’ by ‘he’.
Table 9 gives examples of how APP simplifications can help the BBMT systems
make fewer errors. The first example shows a sample from English-Romanian
translation pair. Direct translation of the source makes the BBMT system
incorrectly translate “fixating” as “fixing” (as observed in the
backtranslation of the BBMT output) where as simplifying “fixating on” as
“thinking about” produces a more meaningful translation. Similarly in the
second example from the English-Catalan pair of the Open Subtitles corpus, APP
replaces the colloquial word ‘swell’ by its meaning ‘great’ and hence results
in a translation that is identical to the reference.
Table 9: Qualitative examples of how APP simplification can help mitigate BBMT errors on the FIGS and Open Subtitles datasets. Here x is the input source.

x: If only we can stop fixating on the days
BBMT(x): Dacă numai putem opri fixarea pe zile
APP(x): If only we could stop thinking about the days
BBMT(APP(x)): Dacă am putea să nu ne mai gândim la zile
Reference: Trebuie să nu ne mai gândim la zile

x: Another swell party, Jay.
BBMT(x): Una altra festa de l’onatge, Jay.
APP(x): Another great party, Jay.
BBMT(APP(x)): Una altra gran festa, Jay.
Reference: Una altra gran festa, Jay.
## 5 Related Work
Automatic text simplification (ATS) systems aim to transform original texts
into their lexically and syntactically simpler variants. In theory, they could
also simplify texts at the discourse level, but most systems still operate
only on the sentence level. The motivation for building the first ATS systems
was to improve the performance of machine translation systems and other text
processing tasks, e.g. parsing, information retrieval, and summarization
(Chandrasekar, Doran, and Srinivas, 1996). It was argued that simplified
sentences (which have simpler sentential structures and reduced ambiguity)
could lead to improvements in the quality of machine translation
(Chandrasekar, Doran, and Srinivas, 1996). A large body of work since then has
investigated text simplification for machine translation and found that this
approach can improve fluency of the translation output and reduce technical
post-editing effort (Štajner and Popovic, 2016).
Researchers have attempted to build simplification systems for different
languages such as English, Spanish (Saggion et al., 2015), and Portuguese
(Aluísio et al., 2008). Wubben, van den Bosch, and Krahmer (2012) use phrase-
based machine translation for sentence simplification based on the PWKP
dataset (Zhu, Bernhard, and Gurevych, 2010). However, these systems are
modular, rule-based (Poornima, Dhanalakshmi, and Soman, 2011), limited by data,
or language-specific.
End-to-end simplification, which is more similar to our work, has also been
studied, either by applying RNN-based sequence-to-sequence models to the PWKP
dataset, or via Transformer-based models integrated with paraphrase rules
(Zhao et al., 2018) and trained on English-to-English parallel simplification
corpora, which constitute the current state of the art. These methods are
limited by the availability of parallel simplification corpora, especially
corpora that can adapt to new domains. We propose a general framework that can
be used to collect large-scale data for any language to train in-domain,
end-to-end, data-driven lexical simplification systems.
Our work capitalizes on the observation that synthetically-generated source
sentences resulting from reversing the translation direction on a parallel
corpus yield better translations. These “back-translations” (Sennrich, Haddow,
and Birch, 2016a; Poncelas et al., 2018) can augment relatively scarce
parallel data by translating the plentiful target monolingual data into the
source language.
Various methods have been explored to improve low-resource translation. Zoph
et al. (2016) transfer parameters from an MT model trained on a high-resource
language pair to low-resource language pairs and observe performance
improvements. To improve performance on the spoken-language domain, researchers
have fine-tuned state-of-the-art models trained on domains in which data is
more abundant (Luong and Manning), whereas others have used data augmentation
techniques (Fadaee, Bisazza, and Monz, 2017) to bring improvements in
low-resource translation. The above approaches assume access to the underlying
MT system, whereas we assume a black-box scenario.
## 6 Conclusion
In this work we introduced a framework for generating a large-scale parallel
corpus for sentence simplification, and demonstrated how the corpus can be
used to improve the performance of black-box MT systems (especially on low-
resource language pairs) and increase the post-editing efficiency at the
subtitle-event i.e., sentence level. Moreover, we perform thorough empirical
analysis to give insights into which language pairs to select for simplifying
a given language. Our results suggest that the easier a language pair is to
translate, the better the resulting simplification model.
It should be noted that even though this work mainly focuses on simplification
of English, our method is general and can be used to automatically generate
simplification parallel corpora and thus data-driven simplification models
using state-of-the-art architectures for any given language. Moreover, it
accommodates collecting multiple reference simplifications for a given source
sentence by leveraging open-source multilingual corpora. Using the insight
that translating multiword expressions and non-compositional phrases is hard
and simplifying these expressions before translating helps, our work merges
two important sub-fields of NLP (machine translation and sentence
simplification) and paves the path for future research in both of these
fields.
## References
* Aluísio et al. (2008) Aluísio, S. M.; Specia, L.; Pardo, T. A.; Maziero, E. G.; and Fortes, R. P. 2008\. Towards brazilian portuguese automatic text simplification systems. In Proceedings of the Eighth ACM Symposium on Document Engineering, DocEng ’08, 240–248. New York, NY, USA: ACM.
* Bahdanau, Cho, and Bengio (2014) Bahdanau, D.; Cho, K.; and Bengio, Y. 2014\. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473.
* Chandrasekar, Doran, and Srinivas (1996) Chandrasekar, R.; Doran, C.; and Srinivas, B. 1996\. Motivations and methods for text simplification. In Proceedings of the 16th conference on Computational linguistics-Volume 2, 1041–1044. Association for Computational Linguistics.
* Fadaee, Bisazza, and Monz (2017) Fadaee, M.; Bisazza, A.; and Monz, C. 2017\. Data augmentation for low-resource neural machine translation. In ACL, 567–573. Vancouver, Canada: Association for Computational Linguistics.
* Graham, Haddow, and Koehn (2019) Graham, Y.; Haddow, B.; and Koehn, P. 2019\. Translationese in machine translation evaluation. CoRR abs/1906.09833.
* Kauchak (2013) Kauchak, D. 2013\. Improving text simplification language modeling using unsimplified text data. In ACL, 1537–1546.
* Kingma and Ba (2014) Kingma, D. P., and Ba, J. 2014\. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
* Lample et al. (2017) Lample, G.; Conneau, A.; Denoyer, L.; and Ranzato, M. 2017\. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.
* Lin (1999) Lin, D. 1999\. Automatic identification of non-compositional phrases. In ACL, 317–324.
* Lison and Tiedemann (2016) Lison, P., and Tiedemann, J. 2016\. Opensubtitles2016: Extracting large parallel corpora from movie and tv subtitles. European Language Resources Association.
* (11) Luong, M.-T., and Manning, C. D. Stanford neural machine translation systems for spoken language domains.
* Papineni et al. (2002) Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002\. Bleu: a method for automatic evaluation of machine translation. In ACL, 311–318. Association for Computational Linguistics.
* Poncelas et al. (2018) Poncelas, A.; Shterionov, D.; Way, A.; Wenniger, G. M. d. B.; and Passban, P. 2018\. Investigating backtranslation in neural machine translation. arXiv preprint arXiv:1804.06189.
* Poornima, Dhanalakshmi, and Soman (2011) Poornima, C.; Dhanalakshmi, V.; and Soman, K. 2011\. Rule based sentence simplification for english to tamil machine translation system. International Journal of Computer Applications 25(8):38–42.
* Sag et al. (2002) Sag, I. A.; Baldwin, T.; Bond, F.; Copestake, A. A.; and Flickinger, D. 2002\. Multiword expressions: A pain in the neck for nlp. In Proceedings of the Third International Conference on Computational Linguistics and Intelligent Text Processing, CICLing ’02, 1–15.
* Saggion et al. (2015) Saggion, H.; Štajner, S.; Bott, S.; Mille, S.; Rello, L.; and Drndarevic, B. 2015\. Making it simplext: Implementation and evaluation of a text simplification system for spanish. ACM Trans. Access. Comput. 6(4):14:1–14:36.
* Saluja et al. (2014) Saluja, A.; Hassan, H.; Toutanova, K.; and Quirk, C. 2014\. Graph-based semi-supervised learning of translation models from monolingual data. In ACL. Baltimore, Maryland: Association for Computational Linguistics.
* Sennrich, Haddow, and Birch (2016a) Sennrich, R.; Haddow, B.; and Birch, A. 2016a. Improving neural machine translation models with monolingual data. In ACL, 86–96. Berlin, Germany: Association for Computational Linguistics.
* Sennrich, Haddow, and Birch (2016b) Sennrich, R.; Haddow, B.; and Birch, A. 2016b. Neural machine translation of rare words with subword units. In ACL, 1715–1725. Berlin, Germany: Association for Computational Linguistics.
* Snover et al. (2006) Snover, M.; Dorr, B. J.; Schwartz, R.; Micciulla, L.; and Makhoul, J. 2006\. A study of translation edit rate with targeted human annotation. Proceedings of Association for Machine Translation in the Americas 223 – 231.
* Štajner and Popovic (2016) Štajner, S., and Popovic, M. 2016\. Can text simplification help machine translation? In Proceedings of the 19th Annual Conference of the European Association for Machine Translation, 230–242.
* Sutskever, Vinyals, and Le (2014) Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014\. Sequence to sequence learning with neural networks. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., NeurIPS. Curran Associates, Inc. 3104–3112.
* Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017\. Attention is all you need. In NeurIPS.
* Woodsend and Lapata (2011) Woodsend, K., and Lapata, M. 2011\. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In EMNLP, 409–420. Stroudsburg, PA, USA: Association for Computational Linguistics.
* Wu and others (2016) Wu, Y., et al. 2016\. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
* Wubben, van den Bosch, and Krahmer (2012) Wubben, S.; van den Bosch, A.; and Krahmer, E. 2012\. Sentence simplification by monolingual machine translation. In ACL, ACL ’12, 1015–1024. Stroudsburg, PA, USA: Association for Computational Linguistics.
* Xu et al. (2016) Xu, W.; Napoles, C.; Pavlick, E.; Chen, Q.; and Callison-Burch, C. 2016\. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics 4:401–415.
* Zhang and Lapata (2017) Zhang, X., and Lapata, M. 2017\. Sentence simplification with deep reinforcement learning. In EMNLP, 584–594. Copenhagen, Denmark: Association for Computational Linguistics.
* Zhang and Toral (2019) Zhang, M., and Toral, A. 2019\. The effect of translationese in machine translation test sets. CoRR abs/1906.08069.
* Zhao et al. (2018) Zhao, S.; Meng, R.; He, D.; Saptono, A.; and Parmanto, B. 2018\. Integrating transformer and paraphrase rules for sentence simplification. In EMNLP, 3164–3173. Brussels, Belgium: Association for Computational Linguistics.
* Zhu, Bernhard, and Gurevych (2010) Zhu, Z.; Bernhard, D.; and Gurevych, I. 2010\. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, 1353–1361. Stroudsburg, PA, USA: Association for Computational Linguistics.
* Zoph et al. (2016) Zoph, B.; Yuret, D.; May, J.; and Knight, K. 2016\. Transfer learning for low-resource neural machine translation. In EMNLP, 1568–1575. Austin, Texas: Association for Computational Linguistics.
# Massless Preheating and Electroweak Vacuum Metastability
Jeff Kost<EMAIL_ADDRESS>Department of Physics & Astronomy, University
of Sussex, Brighton BN1 9QH, United Kingdom Center for Theoretical Physics of
the Universe, Institute for Basic Science, Daejeon 34126 Korea Chang Sub Shin
<EMAIL_ADDRESS>Center for Theoretical Physics of the Universe, Institute for
Basic Science, Daejeon 34126 Korea Takahiro Terada<EMAIL_ADDRESS>Center
for Theoretical Physics of the Universe, Institute for Basic Science, Daejeon
34126 Korea
###### Abstract
Current measurements of Standard-Model parameters suggest that the electroweak
vacuum is metastable. This metastability has important cosmological
implications because large fluctuations in the Higgs field could trigger
vacuum decay in the early universe. For the false vacuum to survive,
interactions which stabilize the Higgs during inflation—e.g., inflaton-Higgs
interactions or non-minimal couplings to gravity—are typically necessary.
However, the post-inflationary preheating dynamics of these same interactions
could also trigger vacuum decay, thereby recreating the problem we sought to
avoid. This dynamics is often assumed catastrophic for models exhibiting scale
invariance since these generically allow for unimpeded growth of fluctuations.
In this paper, we examine the dynamics of such “massless preheating” scenarios
and show that the competing threats to metastability can nonetheless be
balanced to ensure viability. We find that fully accounting for both the
backreaction from particle production and the effects of perturbative decays
reveals a large number of disjoint “islands of (meta)stability” over the
parameter space of couplings. Ultimately, the interplay among Higgs-
stabilizing interactions plays a significant role, leading to a sequence of
dynamical phases that effectively extend the metastable regions to large
Higgs-curvature couplings.
Preprint: CTPU-PTC-21-20
## I Introduction
A remarkable implication of the currently measured Standard-Model (SM)
parameters is that the electroweak vacuum is metastable. Specifically, given
the measured Higgs boson and top quark masses Zyla _et al._ (2020), one finds
that at energy scales exceeding $\mu\approx 10^{10}\,\text{GeV}$ the Higgs
four-point coupling runs to negative values $\lambda_{h}(\mu)<0$, signifying
the existence of a lower-energy vacuum Degrassi _et al._ (2012); Bezrukov
_et al._ (2012); Alekhin _et al._ (2012); Buttazzo _et al._ (2013);
Bednyakov _et al._ (2015). Although today the timescale for vacuum decay is
much longer than the age of the universe Andreassen _et al._ (2018); Chigusa
_et al._ (2018), dynamics earlier in the cosmological history could have
significantly threatened destabilization. That the false vacuum has persisted
until present day may thus provide a window into early-universe dynamics
involving the Higgs Markkanen _et al._ (2018).
In this respect, the evolution of the Higgs field during inflation is
especially relevant. During inflation, light scalar fields develop
fluctuations proportional to the Hubble scale $H$. Without some additional
stabilizing interactions, the fluctuations of the Higgs would likewise grow
and inevitably trigger decay of the false vacuum, unless the energy scale of
inflation is sufficiently small.
Metastability thus provides a strong motivation for investigating non-SM
interactions that could stabilize the Higgs during inflation—e.g., non-minimal
gravitational couplings Espinosa _et al._ (2008), direct Higgs-inflaton
interactions Lebedev and Westphal (2013), etc. However, the situation is
actually more delicate, as one must also ensure metastability throughout the
remaining cosmological history. While interactions such as those listed above
may stabilize the vacuum during inflation, they often proceed to _destabilize_
it during the post-inflationary preheating epoch, thereby recreating the
problem we sought to avoid. Indeed, a balancing is typically necessary between
the destabilizing effects of inflationary and post-inflationary dynamics.
To this end, a detailed understanding of the field dynamics after inflation is
essential in determining which interactions and couplings are well-motivated
overall. A decisive component of this analysis is the form of the inflaton
potential $V(\phi)$ during preheating. After inflation, the inflaton field
$\phi$ oscillates coherently about the minimum of its potential. These
oscillations determine the properties of the background cosmology, but they
also furnish the quantum fluctuations of the Higgs field with time-dependent,
oscillatory effective masses. These modulations are the underlying mechanism
for Higgs particle production, as they ultimately give rise to non-
perturbative processes such as tachyonic instabilities Felder _et al._
(2001a, b); Dufaux _et al._ (2006) and parametric resonances Kofman _et al._
(1994, 1997); Greene _et al._ (1997). These processes can enormously amplify
the field fluctuations over the course of merely a few inflaton oscillations.
For most inflaton potentials—such as those which are quadratic after
inflation—particles are produced within bands of comoving momentum and these
bands evolve non-trivially as the universe expands. The rates of particle
production for these bands also evolve and generally weaken as preheating
unfolds. As a rule, the particle production terminates after some relatively
short time, and one can classify a model that remains metastable over the full
duration as phenomenologically viable Herranen _et al._ (2015); Ema _et al._
(2016); Enqvist _et al._ (2016); Ema _et al._ (2017).
However, there is a notable exception to this rule: models which exhibit
scale invariance. Under a minimal set of assumptions, in which the inflaton is
the only non-SM field, the scalar potential during preheating is restricted to
the following interactions:
$V(\phi,h) = \frac{1}{4}\lambda_{\phi}\phi^{4}+\frac{1}{2}g^{2}\phi^{2}h^{2}+\frac{1}{4}\lambda_{h}h^{4}\ ,$ (1)
and the epoch is termed “massless preheating.” Note that even if the inflaton-
Higgs interaction does not appear as a direct coupling, it should be generated
radiatively, since inflaton-SM interactions are generally necessary to reheat
the universe Gross _et al._ (2016). Additionally, we emphasize that no
assumptions have been made regarding the potential in the inflationary regime,
except that it smoothly interpolates to Eq. (1) at the end of inflation.
Above all, the scale invariance implies that the dynamical properties of the
system are independent of the cosmological expansion, in stark contrast to
other preheating scenarios. As a result, the salient features for vacuum
metastability in general do not evolve: particles are produced within momentum
bands that are fixed with time, and the production rates for the modes do not
change. The field fluctuations grow steadily and _unimpeded_ Greene _et al._
(1997).
At first glance, the unimpeded growth in massless preheating appears
catastrophic for electroweak vacuum metastability. That said, several
considerations should be evaluated more carefully before reaching this
conclusion. First, the Higgs particles produced during preheating have an
effective mass proportional to the inflaton background value $m_{h}\simeq
g|\phi|$. This dependence can significantly enhance the perturbative decay
rate of the Higgs to SM particles. (The interplay between non-perturbative
production and perturbative decays has been studied in a variety of contexts:
Kasuya and Kawasaki (1996); Felder _et al._ (1999); Bezrukov _et al._ (2009);
Garcia-Bellido _et al._ (2009); Mukaida and Nakayama (2013); Repond and Rubio
(2016); Fan _et al._ (2021).) For large enough coupling, the decay
rate and the production rate could be comparable, thereby checking the growth
of Higgs fluctuations and effectively stabilizing the vacuum. Secondly, as
particles are produced, their backreaction modifies the effective masses of
the field fluctuations. If the vacuum does not decay first, the energy density
of the produced particles inevitably grows enough to disrupt the inflaton
oscillations, terminating the parametric resonance and ushering in the non-
linear phase of dynamics that follows.
In this paper, we assess the viability of massless preheating in the context
of electroweak vacuum metastability and address each of the above questions.
But our analysis includes an important generalization: we allow non-minimal
gravitational couplings for both the inflaton and the Higgs. In some sense,
these couplings are unavoidable since if we ignore them in the tree-level
action, they are generated at loop-level Buchbinder _et al._ (1992); even so,
we are inclined to include them for several other reasons. First, a non-
minimal Higgs-curvature coupling provides an additional stabilizing
interaction for the Higgs during inflation without introducing additional non-
SM field content. Secondly, a non-minimal inflaton-curvature coupling allows
us to present a complete and self-contained model. The effect of the curvature
coupling is to flatten the inflaton potential in the large-field region,
restoring viability to the quartic inflaton potential in the inflationary
regime, which is otherwise excluded by observations of the cosmic microwave
background (CMB) Akrami _et al._ (2018). Furthermore, during inflation the
non-minimal gravitational interaction is effectively scale invariant, so such
couplings fit naturally within the purview of scale-invariant theories of
inflation Csaki _et al._ (2014). In this way, our study provides an analysis
of the preheating dynamics that emerges from this well-motivated class of
inflationary models.
Along with the generalization of non-minimal gravitational couplings, we shall
consider a generalization of the gravity formulation as well. That is, in
addition to the conventional metric formulation, we consider the so-called
Palatini formulation, in which the connection and metric are independent
degrees of freedom Palatini (1919); Einstein (1925). In a minimally coupled
theory, such a distinction is not pertinent, but with non-minimal curvature
couplings there is a physical distinction between these formulations Bauer and
Demir (2008); we shall consider certain facets of both in this paper.
Undoubtedly, the presence of non-minimal curvature couplings can have a
substantial impact on the preheating dynamics. Because the curvature
interactions break scale invariance during preheating, particle production
from these terms dissipates over time and terminates, unassisted by
backreaction effects. Consequently, if the initial curvature contribution is
at least comparable to the quartic contribution, the system experiences a
_sequence of dynamical phases_ : particle production is dominated by the
former immediately after inflation and transitions to the latter after some
relatively short duration. Moreover, even if the curvature coupling is small,
it can play a significant role. The curvature interaction contributes either
constructively or destructively to the effective mass of the Higgs modes,
depending on the sign of the coupling. As a result, the Higgs fluctuations
that are generated can have an orders-of-magnitude sensitivity to this sign,
which is ultimately reflected in the range of curvature couplings that are
most vacuum-stable.
By broadly considering the dynamics of such massless preheating models, we
place constraints on the space of Higgs couplings for which the metastability
of the electroweak vacuum survives. And contrary to constraints that arise in
other scenarios, we do not find a simply connected region. Instead, we find a
large number of disjoint “islands of (meta)stability” over the parameter space
which merge into a contiguous metastable region at large quartic coupling.
Accordingly, unlike other preheating scenarios—which typically lead to an
upper bound on the Higgs-inflaton coupling—we find that massless preheating
requires a more complex constraint, and a _lower bound_ ultimately describes
the most favorable region of metastability. In this way, the constraints
necessary to stabilize the Higgs during inflation and to stabilize it during
preheating work in concert rather than opposition.
This paper is organized as follows. We begin Sec. II with an overview of the
class of models we consider for our study of massless preheating and our
assumptions therein. We show how the inclusion of non-minimal gravitational
couplings allows for a self-contained model with a viable inflationary regime
in the large-field region. The post-inflationary evolution of the inflaton and
other background quantities is also discussed. In Sec. III, we examine the
production of Higgs and inflaton particles in this background, without yet
considering the backreaction from these processes. At the close of the
section, we analyze the effect of perturbative decays on the growth of Higgs
fluctuations. In the penultimate Sec. IV, we investigate backreaction and the
destabilization of the electroweak vacuum. We delineate the resulting
metastability constraints on models in which massless preheating emerges.
Finally, in Sec. V we provide a summary of our main results and possible
directions for future work.
This paper also includes three appendices. In Appendix A, we provide an
overview of vacuum metastability in the context of “massive preheating,” where
the inflaton potential is approximately quadratic after inflation and the
scale invariance of the inflaton potential is broken. These scenarios have
been studied in the literature Herranen _et al._ (2015); Ema _et al._
(2016); Enqvist _et al._ (2016); Ema _et al._ (2017) and we reproduce their
findings for the purpose of comparing and contrasting to our results.
Meanwhile, in Appendix B we include the analytical calculations for the
tachyonic resonance which are not given in the main body of the paper, and in
Appendix C we provide the relevant details of our numerical methods.
## II The Model
Let us consider a model in which the Higgs doublet $H$ is coupled to a real
scalar inflaton $\phi$, and both of these fields have a non-minimal coupling
to the Ricci scalar curvature $R$. The model is described most succinctly in
the Jordan frame by the Lagrangian (we employ a system of units in which
$c\equiv\hbar\equiv 8\pi G\equiv 1$, adopt the $(+,+,+)$ sign convention of
Misner et al. Misner _et al._ (1973), and follow a sign convention for the
non-minimal couplings $\xi_{X}$ ($X=\phi,h$) such that the conformal value is
$\xi_{X}=1/6$)
$\mathcal{L}_{\rm J} = -\frac{1}{2}(\partial^{\mu}\phi)^{2}-\frac{1}{2}(\partial^{\mu}h)^{2}-V_{\rm J}(\phi,h)+\frac{1}{2}\big(1-\xi_{\phi}\phi^{2}-\xi_{h}h^{2}\big)R_{\rm J}\ ,$ (2)
where we have used that $H=\begin{pmatrix}0&h/\sqrt{2}\end{pmatrix}^{\rm T}$
in the unitary gauge. The potential is restricted to the scale-invariant
interactions motivated in Sec. I:
$V_{\rm J}(\phi,h) = \frac{1}{4}\lambda_{\phi}\phi^{4}+\frac{1}{2}g^{2}\phi^{2}h^{2}+\frac{1}{4}\lambda_{h}h^{4}$ (3)
and now interpreted in the Jordan frame. The first term in Eq. (3) determines
the evolution of the cosmological background, while the second term is meant
to stabilize the electroweak vacuum during inflation. The last term is the
source of electroweak instability, with $\lambda_{h}$ running to negative
values at energy scales exceeding $\sim 10^{10}\,\text{GeV}$.
Let us make some ancillary remarks on the gravitational formulation used in
what follows. In principle, one can formulate general relativity in two ways:
(i) using the metric $g_{\mu\nu}$ and taking the connection
$\Gamma^{\alpha}_{\beta\mu}$ to be given by the Christoffel symbols (the
“metric formulation”) or (ii) using the metric and the connection as
independent degrees of freedom (the “Palatini formulation”). Of course, these
formulations are physically equivalent for a minimally coupled theory, but
given the non-minimal couplings in our theory this distinction carries weight.
Not favoring one formulation over the other, we address both of these
possibilities by introducing a parameter
$\theta \equiv \begin{cases}0\ ,&\text{Palatini formulation}\\ 1\ ,&\text{metric formulation}\end{cases}\ ,$ (4)
and when relevant we shall state our results as functions of $\theta$.
Although the distinctions between these formulations can be realized in both
the preheating and inflationary dynamics, they figure most prominently in the
large-field region of the potential, including the inflationary regime; let us
now focus our discussion on this regime.
### II.1 The Inflationary Regime
In principle, the model we have stated in Eq. (2) is agnostic to the specific
form of the inflaton potential in the inflationary regime. As long as basic
criteria are satisfied—e.g., the vacuum remains stable during inflation,
isocurvature perturbations are negligible, etc.—our study of the preheating
epoch is insensitive to details of the inflation model. Even so, we are
motivated to examine whether Eq. (2) could accommodate a phenomenologically
viable inflationary potential in the large-field region, as this would furnish
a complete and self-contained model.
On the one hand, a minimal coupling $\xi_{\phi}=0$ would be the simplest
possible scenario. Unfortunately, this yields the standard quartic inflation,
which is known to predict too large a tensor-to-scalar ratio to satisfy
observations of the CMB Akrami _et al._ (2018). On the other hand, a finite
$\xi_{\phi}<0$ allows for a range of other possibilities Spokoiny (1984);
Futamase and Maeda (1989); Salopek _et al._ (1989); Fakir and Unruh (1990);
Bezrukov and Shaposhnikov (2008). In considering these, it is beneficial to
transform to the Einstein frame, defined by the Weyl transformation
$g_{\mu\nu}\longrightarrow\Omega g_{\mu\nu}$, in which
$\Omega \equiv 1-\xi_{\phi}\phi^{2}-\xi_{h}h^{2}\ .$ (5)
(A covariant framework in which the equivalence of the Jordan and Einstein
frames is manifest is discussed in Ref. Karamitsos and Pilaftsis (2018), where
the extension beyond tree-level equivalence is delineated.)
While this restores the canonical graviton normalization, it also transforms
the potential to $V_{\rm E}(\phi,h)=V_{\rm J}(\phi,h)/\Omega^{2}$ and
generates non-canonical kinetic terms for our scalar fields $X=\\{\phi,h\\}$
such that the Lagrangian becomes
$\mathcal{L}_{\rm E} = \frac{1}{2}R_{\rm E}-\frac{1}{2}\sum_{ij}(\partial^{\mu}X_{i})\mathbf{K}_{ij}(\partial_{\mu}X_{j})-V_{\rm E}(\phi,h)$ (6)
with a kinetic mixing matrix
$\mathbf{K} \equiv \frac{1}{\Omega^{2}}\begin{bmatrix}\Omega+\frac{3}{2}\theta(\partial_{\phi}\Omega)^{2}&\frac{3}{2}\theta\,\partial_{\phi}\Omega\,\partial_{h}\Omega\\ \frac{3}{2}\theta\,\partial_{\phi}\Omega\,\partial_{h}\Omega&\Omega+\frac{3}{2}\theta(\partial_{h}\Omega)^{2}\end{bmatrix}\ .$ (7)
Figure 1: The inflaton potential for different curvature couplings
$\xi_{\phi}$ and gravity formulations $\theta$, all of which are approximately
quartic during preheating but vary substantially in the large-field region.
The thick segment of each curve shows the inflationary regime of the
potential, with each point marker indicating the end of inflation [coincident
with Eq. (12)]. The quartic potential in the absence of a non-minimal
gravitational coupling is shown (black-dashed curve) for reference.
In the large-field region $\phi\gg 1/\sqrt{-\xi_{\phi}}$, the inflaton
potential then takes the form
$V_{\rm E}(\widetilde{\phi}) = \frac{\lambda_{\phi}}{4\xi_{\phi}^{2}}\begin{cases}\tanh^{4}\!\big(\sqrt{-\xi_{\phi}}\,\widetilde{\phi}\big)&\text{for }\theta=0\\ \big(1-e^{-\sqrt{2/3}\,\widetilde{\phi}}\big)^{2}&\text{for }\theta=1\end{cases}\ ,$ (8)
where $\widetilde{\phi}$ is the canonically normalized inflaton field. In this
region, the theory acquires an approximate scale invariance and effectively
approaches so-called “attractor models” which are phenomenologically viable
Kallosh _et al._ (2013); Galante _et al._ (2015); Carrasco _et al._ (2015).
In particular, for an inflationary epoch of $N$ $e$-folds one finds a spectral
index $n_{\text{s}}=1-2/N$ and tensor-to-scalar ratio $r=12\alpha/N^{2}$,
where we have defined $\alpha\equiv\theta-1/(6\xi_{\phi})$.
The inflaton potential $V_{\rm E}(\widetilde{\phi})$ is plotted in Fig. 1 for
a number of possible curvature couplings $\xi_{\phi}$ and gravity formulations
$\theta$, all of which lead to massless preheating after inflation. The
inflationary trajectory is shown by the thick part of each curve, and
inflation ends at the location of the point marker.
There are several other constraints on our model parameters necessary to
achieve viability. First, matching the observed amplitude of primordial
curvature perturbations Akrami _et al._ (2018) fixes the relationship between
$\lambda_{\phi}$ and $\xi_{\phi}$:
$\lambda_{\phi} = 4.9\times 10^{-10}\left(\frac{55}{N}\right)^{2}\alpha\,\xi_{\phi}^{2}\ .$ (9)
Secondly, we must ensure that the electroweak vacuum remains stable throughout
inflation. This requirement bounds the effective mass-squared of the Higgs as
$\left.\frac{\partial^{2}V_{\rm E}}{\partial\widetilde{h}^{2}}\right|_{\widetilde{h}=0} \simeq \frac{g^{2}\phi^{2}}{1-\xi_{\phi}\phi^{2}}+\frac{\xi_{h}\lambda_{\phi}\phi^{4}}{(1-\xi_{\phi}\phi^{2})^{2}} > 0$ (10)
under the slow-roll approximation, where $\widetilde{h}$ is the canonically
normalized Higgs field. This constraint is simplified if we consider that
during inflation $1\ll-\xi_{\phi}\phi^{2}$ and thus Eq. (10) reduces to
$\lambda_{\phi}\xi_{h}-g^{2}\xi_{\phi}>0$. We shall assume that this
constraint is satisfied to the extent that the Higgs is stabilized strongly at
the origin during inflation. In other words, we assume that the Higgs mass is
larger than the Hubble scale $H$ during inflation, and imposing this
assumption we obtain the constraint
$\xi_{h}-\frac{g^{2}}{\lambda_{\phi}}\xi_{\phi} \gg \frac{1}{12}\ .$ (11)
Thirdly, we must also ensure that quantum corrections to the scalar potential
from Higgs loops are controlled and do not ruin the above predictions for
inflationary observables. To this end, if we require that
$g^{2}\ll\sqrt{\lambda_{\phi}}$, then the Higgs-loop contribution is
subdominant. Taking a value $\lambda_{\phi}=10^{-10}$ [based on Eq. (9)] gives
the constraint explicitly as $g^{2}/\lambda_{\phi}\ll 10^{5}$, which is well
within the parameter space that we shall consider.
Finally, the accelerated expansion of the universe ends once the kinetic
energy of the inflaton background grows sufficiently that
$\rho_{\phi}\geq 3V(\phi)/2$, i.e., when the field falls below
$\phi_{\rm end} \equiv \frac{4}{\sqrt{1+\sqrt{1-32\xi_{\phi}(1-6\theta\xi_{\phi})}}}\ .$ (12)
Afterward, $\phi$ begins to oscillate about the minimum of the potential and
we identify this point in time as the beginning of the preheating era; we
focus our discussion throughout the rest of the paper on this epoch.
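As a quick numerical illustration (a minimal sketch of our own, not part of the original analysis), the normalization condition of Eq. (9) and the end-of-inflation field value of Eq. (12) can be evaluated directly; the parameter choices below are arbitrary examples consistent with the assumptions of this section.

```python
# Illustrative evaluation of Eqs. (9) and (12); parameter choices are ours.
import numpy as np

def lambda_phi(xi_phi, theta, N=55.0):
    """Quartic coupling fixed by the curvature-perturbation amplitude, Eq. (9)."""
    alpha = theta - 1.0 / (6.0 * xi_phi)
    return 4.9e-10 * (55.0 / N)**2 * alpha * xi_phi**2

def phi_end(xi_phi, theta):
    """Field value at the end of accelerated expansion, Eq. (12) (units 8*pi*G = 1)."""
    return 4.0 / np.sqrt(1.0 + np.sqrt(1.0 - 32.0 * xi_phi * (1.0 - 6.0 * theta * xi_phi)))

xi = -1.0  # example coupling with |xi_phi| <~ 1, as assumed in Sec. II.2
for theta, name in [(0, "Palatini"), (1, "metric")]:
    print(f"{name:8s}: lambda_phi = {lambda_phi(xi, theta):.2e}, phi_end = {phi_end(xi, theta):.3f}")
```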
### II.2 Cosmological Background After Inflation
Let us now discuss the evolution of the cosmological background after the end
of inflation. Assuming the vacuum has been sufficiently stabilized, the Higgs
field is negligible (the two-field evolution that one finds in breaking from
this assumption is non-trivial and has been investigated in Ref. Bond _et
al._ (2009)), and the cosmological background is determined solely by the
dynamics of the inflaton and its potential $V_{\rm E}(\phi)\simeq V(\phi)$ in
the region $\phi<\phi_{\rm end}$, where the notation $V(\phi)$ is introduced
for the inflaton potential approximated in the small-field region.
The form of the inflaton potential in this region may still be sensitive to
$\xi_{\phi}$. Notably, for $|\xi_{\phi}|\gg 1$ an interesting distinction
appears between metric and Palatini gravity. For metric gravity, the potential
is quartic at sufficiently small $\phi\ll 1/\smash{|\xi_{\phi}|}$ but becomes
_quadratic_ in the intermediate region $1/|\xi_{\phi}|\ll\phi\ll
1/\smash{\sqrt{|\xi_{\phi}|}}$. It follows that Higgs fluctuations are
amplified by two different types of parametric resonance depending on the
field region. (In this context, we refer the reader to Ref. Rusak (2020), in
which the electroweak instability was studied, with $g\neq 0$ and
$\xi_{h}=0$, in the large non-minimal inflaton coupling regime
$-\xi_{\phi}\gg 1$.) By
contrast, in Palatini gravity the potential is purely quartic and has no
quadratic region Fu _et al._ (2017); Karam _et al._ (2021). Our model
therefore offers an explicit example of _purely massless preheating that is
consistent with inflation_ , even for a large non-minimal coupling
$\xi_{\phi}$. Furthermore, we can achieve the same effect with
$\xi_{\phi}=\mathcal{O}(1)$, as the intermediate quadratic region vanishes.
Given that the study of massless preheating is our main interest and that this
occurs in the small-field region, we shall henceforth assume that
$|\xi_{\phi}|\lesssim 1$ (with $\xi_{\phi}<0$).
After the end of inflation, the evolution of the background inflaton field is
then well-approximated by
$\ddot{\phi}+3H\dot{\phi}+\lambda_{\phi}\phi^{3}~{}=~{}0\ ,$ (13)
in which the dots correspond to time-derivatives and $H\equiv\dot{a}/a$ is the
Hubble parameter given in terms of the scale factor $a=a(t)$. As discussed in
Sec. I, the approximate scale invariance of the system makes its field
dynamics and resonance structure rather unique. These features have been
studied extensively Greene _et al._ (1997) and here we briefly review them.
The scale invariance is made transparent by writing Eq. (13) in terms of the
conformal time $\eta\equiv\int^{t}dt^{\prime}/a(t^{\prime})$ and conformal
inflaton field $\varphi\equiv a\phi$:
$\varphi^{\prime\prime}+\lambda_{\phi}\varphi^{3}~{}=~{}0\ ,$ (14)
where the prime notation corresponds to $\eta$-derivatives. Note that we have
ignored a term proportional to $\phi^{2}R$, as this term negligibly impacts
the inflaton evolution.
The approximate scale invariance of the theory is manifested by the fact that
the inflaton equation of motion in Eq. (14) is independent of the cosmological
expansion. The solutions carry a constant amplitude $\overline{\varphi}$ of
oscillations and are given in terms of a Jacobi elliptic function. Here we
define the Jacobi elliptic sine $\operatorname{sn}(X,Y)=\sin Z$ and cosine
$\operatorname{cn}(X,Y)=\cos Z$ functions through the relation
$X = \int_{0}^{Z}\frac{d\theta}{\sqrt{1-Y^{2}\sin^{2}\theta}}\ .$ (15)
The solution is
$\varphi(x) = \overline{\varphi}\operatorname{cn}\left(x-x_{0},\frac{1}{\sqrt{2}}\right)\ ,$ (16)
in which $x\equiv\smash{\sqrt{\lambda_{\phi}}}\overline{\varphi}\eta$ is the
conformal time measured in units of the effective inflaton mass
$\sqrt{\lambda_{\phi}}\overline{\varphi}$. The constant $x_{0}$ is used to
match to the inflaton configuration at the end of inflation, and the period of
oscillations in $x$ is
$T \equiv 4K\left(\frac{1}{\sqrt{2}}\right) \approx 7.42\ ,$ (17)
with $K(X)\equiv\int_{0}^{\pi/2}d\theta\,(1-X^{2}\sin^{2}\theta)^{-1/2}$
defined as the complete elliptic integral of the first kind.
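Since this background solution enters nearly every expression that follows, it may help to note how Eqs. (16) and (17) can be evaluated with standard special-function libraries. The following is a minimal Python sketch of our own, assuming SciPy's convention that `ellipj` takes the parameter $m=k^{2}$, so that the modulus $1/\sqrt{2}$ corresponds to $m=1/2$.

```python
# Minimal sketch: the conformal background of Eq. (16) and the period of Eq. (17).
# SciPy's ellipj takes the parameter m = k^2, so modulus 1/sqrt(2) means m = 1/2.
import numpy as np
from scipy.special import ellipj, ellipk

def varphi(x, x0=0.0, amp=1.0):
    """varphi(x) = amp * cn(x - x0, 1/sqrt(2)), Eq. (16)."""
    _, cn, _, _ = ellipj(x - x0, 0.5)
    return amp * cn

T = 4.0 * ellipk(0.5)                      # Eq. (17)
print(f"T = {T:.3f}")                      # ~7.416

x = np.linspace(0.0, 2.0 * T, 9)
print(np.allclose(varphi(x), varphi(x + T), atol=1e-9))  # periodicity check
```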
A cosmological energy density which is dominated by the coherent oscillations
of a scalar field may behave in a variety of ways depending on the scalar
potential. For example, the class of potentials $V(\phi)\propto\phi^{2n}$ (for
integer $n>0$) yield a cosmological background that behaves as a fluid with
the equation-of-state parameter Turner (1983)
$w~{}\equiv~{}\frac{P}{\rho}~{}=~{}\frac{n-1}{n+1}\ ,$ (18)
in which $P$ is the pressure and $\rho$ is the energy density, averaged over
several oscillations. For the quadratic ($n=1$) and quartic ($n=2$) potentials
this demonstrates the well-known result that scalar-field oscillations in
these potentials correspond to perfect fluids with matter-like ($w=0$) and
radiation-like ($w=1/3$) equations of state, respectively. This behavior
reflects our observation in Eq. (14) that $\varphi$ evolves independently of
the cosmological expansion. Namely, since the inflaton energy density scales
like radiation, $\rho_{\phi}\propto 1/a^{4}$, the amplitude of inflaton
oscillations scales as $\overline{\phi}\propto 1/a$. Therefore, the
corresponding conformal amplitude $\overline{\varphi}$ is fixed.
It also follows that in a radiation-like background the scale factor is
proportional to $x$ and grows according to
$a(x)~{}=~{}\frac{\overline{\varphi}\,x}{\sqrt{12}}\ .$ (19)
Inflation ends once the kinetic energy grows sufficiently to have
$3V(\phi)/2\leq\rho_{\phi}$, which corresponds to the time $x_{\rm
end}\equiv\sqrt{12}/\phi_{\rm end}$. Note that chronologically one always has
$x_{0}<x_{\rm end}$ and these are related explicitly by
$x_{\rm end}-x_{0} = \text{arccn}\!\left(\left(\frac{2}{3}\right)^{1/4},\frac{1}{2}\right) \approx 0.45\ ;$ (20)
for simplicity we have used $\xi_{\phi}=0$ in this expression.
Finally, in addition to the inflaton background, it is important that we
examine the scalar curvature after inflation. In general, the curvature is a
frame-dependent quantity with $\Omega R_{\text{J}}=4V_{\rm
J}(\phi)-\dot{\phi}^{2}$ in the Jordan frame. Nevertheless, at sufficiently
small $\phi$ we have $\Omega\approx 1$ and
$a^{4}R = \lambda_{\phi}\varphi^{4}-\varphi^{\prime 2} = \frac{\lambda_{\phi}}{2}\overline{\varphi}^{4}\left[3\left(\frac{\varphi}{\overline{\varphi}}\right)^{4}-1\right]\ ,$ (21)
where we have employed the solution in Eq. (16). The scalar curvature thus
oscillates about zero and over several oscillations averages to $\langle
R\rangle=3H^{2}(1-3w)=0$. However, as we shall find upon examining particle
production, neither the curvature terms nor their time-dependence can be
neglected, as they can impart a significant contribution to the preheating
dynamics.
## III Production of Higgs Particles
Having established the evolution of the classical inflaton field in Sec. II.2,
we can now discuss Higgs particle production in this background. We write the
quantized Higgs field $\widehat{h}$ in the Heisenberg picture as a function of
the fluctuations $h_{k}(t)$ of comoving momenta $k$:
$\widehat{h}(\bm{x},t)=\int\frac{d^{3}k}{(2\pi)^{3/2}}\Big[\widehat{a}_{k}h_{k}(t)e^{+i\mathbf{k}\cdot\mathbf{x}}+\widehat{a}_{k}^{\dagger}h_{k}^{*}(t)e^{-i\mathbf{k}\cdot\mathbf{x}}\Big]\ ,$ (22)
where $\widehat{a}^{\dagger}_{k}$ and $\widehat{a}_{k}$ are creation and
annihilation operators, respectively. For a given comoving momentum, these
fluctuations follow equations of motion
$\ddot{h}_{k}+(3H+\Gamma_{h_{k}})\dot{h}_{k}+\omega^{2}_{h_{k}}h_{k}~{}=~{}0$
(23)
where $\Gamma_{h_{k}}$ is a phenomenological term accounting for perturbative
decays of the Higgs Kofman _et al._ (1997); Garcia-Bellido _et al._ (2009);
Repond and Rubio (2016) and $\omega_{h_{k}}$ is the energy of the mode. These
modes are coupled both to the oscillating inflaton background and the scalar
curvature $R$, and therefore their effective masses carry an implicit time-
dependence. In the Jordan frame,
$\omega_{h_{k}}^{2}~{}=~{}\frac{k^{2}}{a^{2}}+g^{2}\phi^{2}+\xi_{h}R\ .$ (24)
However, in the Einstein frame $\xi_{\phi}\neq 0$ generates a kinetic mixing
between the inflaton field and the Higgs modes, thereby producing an inflaton-
dependent friction term. We can absorb this friction into the effective mass
term by the field redefinition $\mathcal{H}_{k}\equiv
a\Omega^{-1/2}|_{h=0}h_{k}$ Rusak (2020), yielding the equations of motion
$\mathcal{H}^{\prime\prime}_{k}+a\Gamma_{h_{k}}\mathcal{H}^{\prime}_{k}+\omega^{2}_{\mathcal{H}_{k}}\mathcal{H}_{k} = 0\ ,$ (25)
in which the frequencies of the transformed modes are given by
$\omega^{2}_{\mathcal{H}_{k}} = k^{2}+g^{2}\varphi^{2}\left(1+\xi_{\phi}\frac{\varphi^{2}}{a^{2}}\right)+\xi a^{2}R-a^{2}\Gamma_{h_{k}}H$ (26)
and we have defined the effective non-minimal coupling
$\xi~{}\equiv~{}\xi_{h}+\xi_{\phi}-6\theta\xi_{h}\xi_{\phi}-\frac{1}{6}\ .$
(27)
Note that we have neglected terms which are higher order in $a^{-1}$, such
that the scalar curvature is given by
$a^{4}R=\lambda_{\phi}\varphi^{4}-(\varphi^{\prime})^{2}$. In this way, we
have absorbed most $\xi_{\phi}$ effects and dependence on the gravity
formulation into the single effective parameter $\xi$. Finally, we confirm
that if we take $\xi_{\phi}=0$, the scale invariance is restored for the
conformal value $\xi_{h}=1/6$, as one would expect.
Solving the equations of motion in Eq. (25), we can track the production of
Higgs particles. In particular, the comoving phase-space density of particles
associated with a mode of comoving momentum $k$ is given by
$n_{h_{k}} = \frac{\omega_{\mathcal{H}_{k}}}{2}\left(\frac{|\mathcal{H}_{k}^{\prime}|^{2}}{\omega_{\mathcal{H}_{k}}^{2}}+\left|\mathcal{H}_{k}\right|^{2}\right)-\frac{1}{2}\ .$ (28)
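To illustrate how Eqs. (25)-(28) can be integrated in practice (a minimal sketch of our own, not the numerical method detailed in Appendix C), the following Python code evolves a single mode with vacuum initial conditions in the scale-invariant limit $\xi=\xi_{\phi}=\Gamma_{h_{k}}=0$, in units $\sqrt{\lambda_{\phi}}\,\overline{\varphi}=1$, and with $x_{0}=0$ for simplicity.

```python
# Minimal sketch (our construction): evolve one Higgs mode via Eq. (25) with
# Gamma = xi = xi_phi = 0, in units sqrt(lambda_phi)*varphi_bar = 1 and x0 = 0,
# and read off the occupation number from Eq. (28). Vacuum initial conditions
# H_k = 1/sqrt(2 w), H_k' = -i w H_k are assumed at x = 0.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

q = 2.0 * 12**2     # g^2/lambda_phi = 2 n^2 with n = 12 (broad regime); illustrative
kappa = 0.0         # comoving momentum in units of sqrt(lambda_phi)*varphi_bar

def omega2(x):
    _, cn, _, _ = ellipj(x, 0.5)
    return kappa**2 + q * cn**2

def rhs(x, y):                     # y = (Re H, Im H, Re H', Im H')
    re, im, dre, dim = y
    w2 = omega2(x)
    return [dre, dim, -w2 * re, -w2 * im]

w0 = np.sqrt(omega2(0.0))
y0 = [1.0 / np.sqrt(2.0 * w0), 0.0, 0.0, -np.sqrt(w0 / 2.0)]
sol = solve_ivp(rhs, (0.0, 40.0), y0, rtol=1e-9, atol=1e-12, dense_output=True)

def n_k(x):                        # Eq. (28); evaluate away from zeros of omega
    re, im, dre, dim = sol.sol(x)
    w = np.sqrt(omega2(x))
    return 0.5 * w * ((dre**2 + dim**2) / w**2 + re**2 + im**2) - 0.5

for x in (10.0, 20.0, 40.0):
    print(f"x = {x:5.1f}   n_k = {n_k(x):.3e}")   # grows ~ exp(2*mu*x), mu ~ 0.24
```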
The physical mechanism that drives this production differs considerably
between the $\xi a^{2}R$ and $g^{2}\varphi^{2}$ terms. For the former, when
the inflaton field passes through the minimum of its potential, one may find
that a range of Higgs modes become tachyonic $\omega_{\mathcal{H}_{k}}^{2}<0$.
The tachyonic instability is strongest for the smaller-momentum modes, and
these modes produce particles for longer durations. By contrast, particle
production from the latter is driven by the oscillations in $\varphi$. The
resulting time-dependent modulations of $\omega_{\mathcal{H}_{k}}$ give rise
to parametric resonances for Higgs modes within certain momentum bands.
Another crucial distinction between these mechanisms is found by evaluating
their overall scaling under the cosmological expansion. The main term in Eq.
(26) responsible for tachyonic production redshifts as $\xi a^{2}R\propto
1/a^{2}$, while the term responsible for the parametric resonance
$g^{2}\varphi^{2}$ does not dissipate at all. Tachyonic production therefore
always terminates after some finite time, even for the zero-momentum mode. On
the other hand, production from the parametric instability continues unimpeded
and only ceases once the evolution is disrupted by backreaction effects, as we
cover in Sec. IV. Indeed, this distinction is ultimately traced to the Higgs-
curvature interaction breaking the scale-invariance that is preserved by all
of the other relevant terms.
We first break our analysis into two limiting cases: one in which the quartic
inflaton-Higgs interaction is dominant and the other in which the curvature
interaction is dominant. Then, we explore the interplay between these
interactions and finally begin to analyze the impact of perturbative Higgs
decays on the field dynamics.
### III.1 Production from Parametric Instability
Let us first consider the case that the curvature coupling $\xi$ is negligible
and thus ignore the tachyonic production. Then, Higgs particles are produced
purely from the parametric resonance associated with the $g^{2}\varphi^{2}$
term in Eq. (26). The dominant production in this case arises from the fact
that as the inflaton passes through the origin, the effective masses may
evolve non-adiabatically:
$\frac{|\dot{\omega}_{\mathcal{H}_{k}}|}{\omega_{\mathcal{H}_{k}}^{2}} \gtrsim 1\ ,$ (29)
which triggers a burst of particle production. (A number of the results we
present in this subsection are the subject of Ref. Greene _et al._ (1997); we
summarize only the most relevant aspects.)
The growth of the number density for a given mode is exponential and follows
$\log n_{k}\simeq 2\mu_{k}x$ over several oscillations, where $\mu_{k}$ is the
characteristic exponent. Note the distinction between this regular exponential
growth and the stochastic growth one finds for theories without an approximate
scale invariance, e.g., those with a quadratic inflaton potential Kofman _et
al._ (1997); Greene _et al._ (1997). The stochastic nature of the resonance
appears in these scenarios because the accumulated phase of each mode evolves
with the cosmic expansion, destroying the phase coherence. In the scale-
invariant theory, no such time-dependence may arise and phase coherence is
maintained among the modes. A more in-depth comparison of the quadratic and
quartic theories, with an emphasis on the results of this paper, is provided
in Appendix A.
Figure 2: The instability bands of the parametric resonance arising from the
inflaton-Higgs interaction (bottom panel) and the corresponding maximum
characteristic exponents $\mu_{\rm max}\equiv\max_{k}\mu_{k}$ for each
coupling (top panel). The contours in the bottom panel show the value of the
exponent $\mu_{k}$ such that a given occupation number grows as
$n_{h_{k}}\propto e^{2\mu_{k}x}$. The couplings which lead to the strongest
growth are found at the center of bands which have an unstable zero mode,
i.e., for $g^{2}/\lambda_{\phi}=2,8,\ldots,2n^{2}$ for $n\in\mathbb{N}$, while
the weakest are found at the edge of these bands
$g^{2}/\lambda_{\phi}=3,10,\ldots,2n^{2}+n$—we refer to these as the “broad”
and “narrow” regimes, respectively. There is a universal $\mu_{\rm max}\approx
0.24$ for the former, while for the latter $\mu_{\rm max}$ is a non-trivial
function of $g^{2}/\lambda_{\phi}$, given in Eq. (30) and plotted over an
extended range in Fig. 3.
Figure 3: The maximum exponents $\mu_{\rm
max}\equiv\max_{k}\mu_{k}$ for the narrow-resonance couplings
$g^{2}/\lambda_{\phi}=2n^{2}+n$ (for $n\in\mathbb{N}$). These are computed
numerically (point markers) and compared to the analytical fitting formula in
Eq. (30) (solid curve). Note that the analytical function is evaluated only at
the same discrete values $g^{2}/\lambda_{\phi}=2n^{2}+n$ in this figure, i.e.,
the minima of the top panel of Fig. 2. In the
$g^{2}/\lambda_{\phi}\rightarrow\infty$ limit the broad/narrow regimes become
degenerate with $\mu_{\rm max}\approx 0.24$, but this asymptote is approached
slowly.
The size of a particular growth exponent $\mu_{k}$ is determined by a
combination of the comoving momentum $k$ and the quotient
$g^{2}/\lambda_{\phi}$. In Fig. 2, we have numerically solved the mode
equations and plotted contours of $\mu_{k}$ in the space of these two
quantities, rescaling the momentum as $\kappa\equiv
k/(\sqrt{\lambda_{\phi}}\overline{\varphi})$. It is natural to separate the
resonances into two different classes. The couplings with instability bands
that include the zero-momentum mode, i.e., those between
$2n^{2}-n<g^{2}/\lambda_{\phi}<2n^{2}+n$, for $n\in\mathbb{N}$, contain the
broadest resonances and generally give the most copious particle production—we
refer to these collectively as the “broad regime.” Meanwhile, those couplings
with only finite-momentum bands contain the narrowest resonances and give
generally weaker production, and we refer to these as the “narrow regime.” The
weakest and narrowest bands occur at the boundaries
$g^{2}/\lambda_{\phi}=2n^{2}+n$. The distinctions between these two coupling
regimes have important dynamical implications and play a major role in this
paper.
Figure 4: The evolution of background quantities $\varphi$, $a^{2}\\!R$ (top
panel) and Higgs phase-space density $n_{h_{k}}$ (center/bottom panels) during
the early stage of preheating. The latter two panels show the broad/narrow
regimes: taking $n=12$, the center panel uses a coupling
$g^{2}/\lambda_{\phi}=2n^{2}$ and has a broad range of resonant modes, while
the bottom panel uses the adjacent $g^{2}/\lambda_{\phi}=2n^{2}+n$ and has
only a narrow range of resonant modes; the mode with the largest growth
exponent is shown in each case. Additionally, these two panels show the effect
of allowing for perturbative Higgs decays (first discussed in Sec. III.4,
shown in blue) and non-minimal gravitational couplings $\xi_{h}$, evenly
spaced over the range $0\leq\xi_{h}\leq 50$.
Although in principle each mode with $\mu_{k}>0$ contributes to particle
production, in practice the maximum exponent of the band $\mu_{\rm
max}\equiv\max_{k}\mu_{k}$ dominates. In the broad regime, the maximum
exponent is $\mu_{\rm max}\approx 0.24$, found at the central values
$g^{2}/\lambda_{\phi}=2n^{2}$, and this value is universal over the entire
range of couplings. Conversely, the $\mu_{\rm max}$ values in the narrow
regime are _not_ universal. These features are most easily observed in the top
panel of Fig. 2, where $\mu_{\rm max}$ is seen to oscillate back-and-forth
between the broad and narrow regimes. Indeed, the growth rate of particle
production is _highly non-monotonic_ as one dials the inflaton-Higgs coupling.
In fact, the narrow resonances grow to join the broad resonance at $\mu_{\rm
max}\approx 0.24$ in the formal limit $g^{2}/\lambda_{\phi}\rightarrow\infty$.
The non-monotonicity of $\mu_{\rm max}$ thus diminishes as a function of
$g^{2}/\lambda_{\phi}$, albeit at an extremely slow pace. In what follows it
proves useful to quantify this observation, so we have numerically computed
$\mu_{\rm max}$ for the discrete minima of $\mu_{\rm max}$ in the narrow
regime $g^{2}/\lambda_{\phi}=2n^{2}+n$ and found that they are well-
approximated by the function
$\frac{\mu_{\rm max}}{0.24} \approx \frac{\log(A\,g^{2}/\lambda_{\phi})}{1+\log(B\,g^{2}/\lambda_{\phi})}\ .$ (30)
The constants $A=2.82$ and $B=3.34\times 10^{5}$ reproduce the numerical
results to better than $1\%$ accuracy over the range $3\lesssim
g^{2}/\lambda_{\phi}\lesssim 10^{4}$, which spans the full range of narrow
resonances relevant to this paper. We display the function in Eq. (30) and the
numerical results together in Fig. 3 for comparison. Note that the weakest
resonances globally are found in the limit $g^{2}/\lambda_{\phi}\rightarrow
0$, where the bands become increasingly narrow and follow $\mu_{\rm
max}\approx 0.15g^{2}/\lambda_{\phi}$ Greene _et al._ (1997); we shall
discuss this small-coupling limit further in Sec. III.3.3.
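The characteristic exponents themselves can be extracted from a monodromy (Floquet) matrix built over one period of $\operatorname{cn}^{2}$. The sketch below is our own construction (not the code behind Fig. 2): it scans $\kappa$ at two broad-regime couplings and should recover the universal $\mu_{\rm max}\approx 0.24$.

```python
# Minimal sketch (our construction): Floquet exponents for
# H'' + [kappa^2 + (g^2/lambda_phi) cn^2(x, 1/sqrt(2))] H = 0.
# cn^2 has period 2K(m=1/2); we build the monodromy matrix over that period,
# so that n_k grows as exp(2 mu_k x) with mu_k = ln(max |multiplier|)/P.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj, ellipk

P = 2.0 * ellipk(0.5)                            # period of cn^2

def mu_k(kappa, q):
    def rhs(x, y):
        _, cn, _, _ = ellipj(x, 0.5)
        return [y[1], -(kappa**2 + q * cn**2) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):          # two independent solutions
        s = solve_ivp(rhs, (0.0, P), y0, rtol=1e-10, atol=1e-12)
        cols.append(s.y[:, -1])
    monodromy = np.array(cols).T
    return np.log(np.max(np.abs(np.linalg.eigvals(monodromy)))) / P

for n in (2, 12):
    q = 2.0 * n**2                               # center of a broad band
    mu_max = max(mu_k(k, q) for k in np.linspace(0.0, 2.5, 101))
    print(f"g^2/lambda_phi = {q:4.0f}:  mu_max ~ {mu_max:.3f}")   # ~0.24
```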
We have plotted numerical solutions of the phase-space density $n_{h_{k}}$ in
the broad and narrow regimes, for adjacent coupling bands, shown by the black
curves in the center and bottom panels of Fig. 4, respectively. The mode
corresponding to the most rapid growth is shown in both cases: for the broad
regime this is the $\kappa=0$ mode, but for the narrow regime $\mu_{\rm max}$
corresponds to a finite momentum. We neglect the blue curves for the moment,
as these first enter our discussion in Sec. III.4.
For our purposes, the non-monotonic nature of $\mu_{\rm max}$ as a function of
$g^{2}/\lambda_{\phi}$ has extensive implications. In contrast to many other
preheating scenarios, the magnitude of our coupling has no bearing on the
growth rate of a given fluctuation—only the associated band within the
repeating resonance structure is important. That said, since the width of the
momentum bands increases with $g^{2}/\lambda_{\phi}$, the total number density
$n_{h}$ does in fact depend on this coupling. We obtain the total comoving
number density of the produced Higgs particles by using the saddlepoint
approximation to integrate over each band:
$n_{h} \simeq \frac{1}{2}\left(\frac{\sqrt{\lambda_{\phi}}\,\overline{\varphi}}{2\pi}\right)^{3}\left(\frac{g^{2}}{2\lambda_{\phi}}\right)^{3/4}\frac{e^{2\mu_{\rm max}x}}{\sqrt{\mu_{\rm max}x}}\ .$ (31)
As this density partly determines the variance of the Higgs fluctuations, it
plays a major role in assessing the metastability of the electroweak vacuum.
Accordingly, we shall continue to calculate $n_{h}$ in all of the regimes.
### III.2 Production from Tachyonic Instability
Let us now consider the opposite coupling limit $g\rightarrow 0$, i.e., the
limit in which particle production is driven not by parametric resonance but
by a tachyonic instability. The effective masses $\omega_{\mathcal{H}_{k}}$
are given by
$\frac{\omega_{\mathcal{H}_{k}}^{2}}{\lambda_{\phi}\overline{\varphi}^{2}} = \kappa^{2}+r_{h}\left[3\left(\frac{\varphi}{\overline{\varphi}}\right)^{4}-1\right]\ ,$ (32)
where we have defined a quantity
$r_{h}\equiv\xi\overline{\varphi}{}^{2}/(2a^{2})$ that indicates the strength
of the curvature term at a given time and utilized Eq. (21). We recall from
Sec. II.1 that vacuum stability during inflation requires
$\lambda_{\phi}\xi_{h}>\xi_{\phi}g^{2}$, and therefore we only consider
$\xi_{h}>0$ for the moment. Indeed, for sufficiently small momenta one finds
modes that cross the tachyonic threshold $\omega_{\mathcal{H}_{k}}^{2}<0$ when
the inflaton is near the minimum of its potential, triggering an exponential
growth in the corresponding Higgs fluctuations. Although this growth is
typically short-lived due to the redshifting of the curvature term $a^{2}\xi
R\propto 1/a^{2}$, these may still be a source of copious particle production
soon after inflation and thus serve as a legitimate threat to destabilize the
electroweak vacuum.
There have been a number of studies devoted to calculating the rate of
tachyonic particle production in different settings Felder _et al._ (2001b);
Dufaux _et al._ (2006), and we employ several of those techniques in this
section. Similar to Sec. III.1, near the turning points
$\omega_{\mathcal{H}_{k}}^{2}=0$ the masses may change non-adiabatically and
one must then compute the Bogoliubov coefficients to proceed. However, unlike
in the previous section, the $\omega_{\mathcal{H}_{k}}^{2}$ may experience two
distinct adiabatic segments of evolution. As long as $r_{h}\gtrsim 1$, the
effective masses change adiabatically away from the turning points in both the
tachyonic and non-tachyonic segments. Under this assumption, we can apply the
WKB approximation by calculating the phases accumulated by the modes during
both the tachyonic segments, $X_{k}\equiv\int dt\,\Omega_{h_{k}}$ (for
$\Omega^{2}_{h_{k}}\equiv-\omega_{h_{k}}^{2}>0$), and the non-tachyonic
segments, $\Theta_{k}\equiv\int dt\,\omega_{h_{k}}$.
Applying these methods, one finds that after passing through a tachyonic
region $j$ times, the phase-space density for a given mode is written
generally as Dufaux _et al._ (2006)
$n_{h_{k}}~{}=~{}e^{2jX_{k}}\left(2\cos\Theta_{k}\right)^{2(j-1)}\ .$ (33)
Hence, employing Eq. (32) we can compute the accumulated quantities $X_{k}$
and $\Theta_{k}$ for $g=0$. The details of this calculation appear in Appendix
B and give
$X_{k} \simeq \frac{\sqrt{2\pi}\,\Gamma(\frac{5}{4})\,(r_{h}-\kappa^{2})^{3/4}}{\Gamma(\frac{7}{4})\,(3r_{h})^{1/4}}\ ,\qquad \Theta_{k} \simeq \frac{4\pi^{3/2}\sqrt{2r_{h}+\kappa^{2}}}{\Gamma(\frac{1}{4})^{2}}\ .$ (34)
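As an illustrative cross-check of $X_{k}$ (our own sketch; the derivation itself appears in Appendix B), one can compare Eq. (34) against a direct quadrature of $\Omega_{h_{k}}$ over one tachyonic interval of the exact background, holding $r_{h}$ fixed across the crossing. Since the analytic result involves approximations detailed in Appendix B, only approximate agreement should be expected.

```python
# Minimal sketch (our construction): compare the WKB exponent X_k of Eq. (34)
# to a direct quadrature of Omega_{h_k} over one tachyonic interval, for g = 0
# and r_h held fixed across the crossing (expansion neglected).
import numpy as np
from scipy.special import ellipj, ellipk, gamma
from scipy.integrate import quad
from scipy.optimize import brentq

def omega2(x, kappa, r_h):                 # Eq. (32)
    _, cn, _, _ = ellipj(x, 0.5)
    return kappa**2 + r_h * (3.0 * cn**4 - 1.0)

def X_numeric(kappa, r_h):
    K = ellipk(0.5)                        # cn first vanishes at x = K
    a = brentq(omega2, 1e-6, K, args=(kappa, r_h))          # turning points
    b = brentq(omega2, K, 2.0 * K - 1e-6, args=(kappa, r_h))
    val, _ = quad(lambda x: np.sqrt(max(0.0, -omega2(x, kappa, r_h))), a, b, limit=200)
    return val

def X_analytic(kappa, r_h):                # Eq. (34)
    return np.sqrt(2.0 * np.pi) * gamma(1.25) * (r_h - kappa**2)**0.75 \
           / (gamma(1.75) * (3.0 * r_h)**0.25)

for r_h in (5.0, 20.0):
    print(f"r_h = {r_h:4.1f}:  numeric {X_numeric(0.0, r_h):.3f}  vs  Eq. (34) {X_analytic(0.0, r_h):.3f}")
```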
After successive bursts of tachyonic production, the growth exponent in Eq.
(33) accumulates a value $\sum_{j}2jX_{k}\simeq 4\int dxX_{k}/T$, with the
zero-mode receiving the greatest share of the number density. The accumulated
phases $\Theta_{k}$ only supply oscillatory behavior or modify the
distribution over momenta. Given that our primary concern is the overall
growth of the Higgs number density, we shall neglect these quantities. Then,
we can estimate that
$n_{h_{k}} \simeq \left(\frac{x}{x_{0}}\right)^{4\sqrt{\frac{2\xi}{3\sqrt{3}}}}\exp\left[-\frac{2(x^{2}-x_{0}^{2})\kappa^{2}}{3^{9/4}\sqrt{\xi}}\right]\ ,$ (35)
which holds for the duration of the tachyonic instability.
Let us focus on the $\kappa=0$ mode, which experiences the strongest growth in
the tachyonic regime. At first glance, the growth may actually appear weak in
comparison to the parametric resonance [in Eq. (31)] since it is not
exponential in time—it merely obeys a power law. The difference is that the
power scales with $\sqrt{\xi}$ and has no upper bound, in line with studies of
the tachyonic instability in other settings Dufaux _et al._ (2006); Ema _et
al._ (2016). This feature sharply contrasts with the parametric resonance, in
which the $\mu_{\rm max}$ growth rate is bounded _universally_ from above (as
we observed in Fig. 2), regardless of the coupling.
Of course, as the universe expands the tachyonic production soon terminates.
In particular, as $r_{h}$ redshifts to values below unity, the tachyonic
masses $\Omega_{h_{k}}$ are suppressed and the adiabatic assumption breaks
down. Using $|\dot{\Omega}_{h_{k}}|\gtrsim\Omega_{h_{k}}^{2}$ as the threshold
for where this breakdown occurs, we find
$r_{h}^{2}\gtrsim(r_{h}-\kappa^{2})^{3}$. This condition implies that the span
of modes exposed to the instability is bounded above by
$\kappa\lesssim\sqrt{r_{h}}$ and that a given mode is active for
$x\lesssim\sqrt{6\xi}/\\!\max(1,\kappa)$. The modes shut down successively,
starting with the largest-momentum modes, such that tachyonic production ends
at the time
$x_{\xi}~{}\simeq~{}\sqrt{6\xi}\ .$ (36)
The total number density of Higgs particles is given by integrating Eq. (35)
over the phase space, but a cutoff $\kappa\lesssim\sqrt{r_{h}}$ should be
imposed on each momentum band per the discussion above.
saddlepoint approximation to perform the integration and obtain a comoving
number density
$n_{h} \simeq \frac{1}{8}\Big(\sqrt{\lambda_{\phi}}\,\overline{\varphi}\Big)^{3}\left(\frac{3^{9/4}\sqrt{\xi}}{2\pi x^{2}}\right)^{3/2}\left(\frac{x}{x_{0}}\right)^{4\sqrt{\frac{2\xi}{3\sqrt{3}}}}\ ,$ (37)
which is applicable for $x\lesssim x_{\xi}$. Although there is a brief
transient phase of non-adiabatic production for $x\gtrsim x_{\xi}$, particle
production from the curvature term proceeds only through the substantially
weaker narrow resonance, which shuts down entirely soon thereafter. We cover
the details of this regime in Sec. III.3.3 below.
### III.3 Production in the Mixed Case
Let us now promote our discussion to the mixed case, in which both couplings
$g^{2}/\lambda_{\phi}$ and $\xi$ are non-zero. As such, the Higgs modes evolve
with the effective masses
$\frac{\omega_{\mathcal{H}_{k}}^{2}}{\lambda_{\phi}\overline{\varphi}^{2}} = \kappa^{2}+\frac{g^{2}}{\lambda_{\phi}}\left(\frac{\varphi}{\overline{\varphi}}\right)^{2}\left[1-2r_{\phi}\left(\frac{\varphi}{\overline{\varphi}}\right)^{2}\right]+r_{h}\left[3\left(\frac{\varphi}{\overline{\varphi}}\right)^{4}-1\right]$ (38)
that follow directly from Eq. (26), and by analogy to $r_{h}$ we have defined
$r_{\phi}\equiv-\xi_{\phi}\overline{\varphi}{}^{2}/(2a^{2})$.
There are several immediate implications; let us discuss these with a
focus on the effect of the Higgs-curvature coupling. First, since only the
curvature term dissipates with the cosmological expansion, the dynamics may
progress through several distinct phases, most noticeably if $r_{h}\gg
g^{2}/\lambda_{\phi}$ at early times. Secondly, the reintroduction of the
$g^{2}\varphi^{2}$ term lifts the effective masses and thereby opens the
$\xi<0$ region to viability. Indeed, the $\xi<0$ region was only excluded in
Sec. III.2 because the inflationary constraints [in Eq. (10) and Eq. (11)]
would have been violated, but for mixed couplings this half of parameter space
can be reincorporated. Thirdly, the small-coupling regime
($g^{2}/\lambda_{\phi}\lesssim 1$ and $r_{h}\lesssim 1$) stands apart from
most of our discussion thus far. The particle production in this regime is
driven entirely by narrow parametric resonances arising from two different
types of modulations $(\varphi/\overline{\varphi})^{2}$ and
$(\varphi/\overline{\varphi})^{4}$. If these modulations can be approximated
as sinusoidal then a perturbative treatment is likely effective. We
investigate all of these possible scenarios below.
#### III.3.1 Dominant $g^{2}\phi^{2}$ (with $g^{2}/\lambda_{\phi}\gg 1$)
The simplest possibility is an initially small curvature term $|r_{h}|\lesssim
g^{2}/\lambda_{\phi}$, since this does not allow a tachyonic instability to
develop during preheating. (We confine our discussion to
$g^{2}/\lambda_{\phi}\gg 1$ for the moment and cover the small-coupling regime
separately.)
Nonetheless, the curvature term in this case has an impact on the Higgs
dynamics. Depending on the sign of $\xi$, these terms may add constructively
or destructively, enhancing or suppressing the effective masses
$\omega_{\mathcal{H}_{k}}$, respectively. For $\xi>0$ the effect is
constructive and the resonance bands are widened, e.g., the broad instability
bands are extended to
$\kappa^{2}\leq\sqrt{2g^{2}/(\pi^{2}\lambda_{\phi})+r_{h}}$. We have seen that
Fig. 4 falls within this category, and this extension explains the variation
among different $\xi$. Conversely, for $\xi<0$ the effect is _destructive_ :
not only do the bands narrow but the non-adiabaticity that drives the
parametric resonance is weakened. Explicitly, the maximum characteristic
exponent is reduced to
$\max_{g^{2}/\lambda_{\phi}}\mu_{\rm max} = \frac{2}{T}\log\left(e^{-\frac{\pi r_{h}^{2}}{g^{2}/\lambda_{\phi}}}+\sqrt{1+e^{-\frac{2\pi r_{h}^{2}}{g^{2}/\lambda_{\phi}}}}\right)\ .$ (39)
Eventually, after a sufficient duration, one has
$r_{h}\lesssim\sqrt{g^{2}/\lambda_{\phi}}$ and the exponent $\mu_{\rm
max}\approx 0.24$ is restored. In other words, the full capacity of the
parametric resonance is _delayed_ until a time $x_{g}$, which is given
approximately by
$x_{g} \simeq \sqrt{-6\xi}\left(2\pi\frac{\lambda_{\phi}}{g^{2}}\right)^{1/4}\ .$ (40)
After this time the curvature term continues to dissipate and has an
increasingly negligible influence. The growth of fluctuations then proceeds
according to Sec. III.1.
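To make the delay quantitative, Eqs. (39) and (40) are simple to evaluate numerically. The following sketch (our own, with arbitrary sample couplings) shows the suppression of the exponent at early times and its recovery toward $\mu_{\rm max}\approx 0.24$ as $r_{h}\rightarrow 0$.

```python
# Minimal sketch (our construction): the delayed resonance for xi < 0,
# via the suppressed exponent of Eq. (39) and the delay time of Eq. (40).
import numpy as np

T = 7.416                                # inflaton oscillation period, Eq. (17)

def mu_suppressed(r_h, q):               # Eq. (39), with q = g^2/lambda_phi
    e1 = np.exp(-np.pi * r_h**2 / q)
    return (2.0 / T) * np.log(e1 + np.sqrt(1.0 + e1**2))

def x_g(xi, q):                          # Eq. (40), for xi < 0
    return np.sqrt(-6.0 * xi) * (2.0 * np.pi / q)**0.25

q = 2.0 * 12**2                          # illustrative broad-regime coupling
for r_h in (10.0, 5.0, 1.0, 0.0):
    print(f"r_h = {r_h:4.1f}   mu = {mu_suppressed(r_h, q):.3f}")   # -> 0.238 as r_h -> 0
print(f"x_g = {x_g(-100.0, q):.1f}")     # example: xi = -100
```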
#### III.3.2 Dominant $\xi Rh^{2}$ (with $g^{2}/\lambda_{\phi}\gg 1$)
An initially large $r_{h}\gg g^{2}/\lambda_{\phi}\gg 1$ ensures the Higgs
dynamics are dominated by tachyonic production until a time $x_{\xi}$. Unlike
the converse situation above, where we could ignore the curvature interaction
after some duration, there is no regime in which we can ignore the
$g^{2}\phi^{2}$ interaction entirely. Aside from the fact that the parametric
resonance from this term will always play a role, its presence also alters the
size of the tachyonic masses. We can incorporate this effect by performing an
analysis along the lines of Sec. III.2 with $g^{2}/\lambda_{\phi}\neq 0$.
Then, for a given mode, we find that when $\sqrt{g^{2}/\lambda_{\phi}}\lesssim
r_{h}\lesssim g^{2}/\lambda_{\phi}+\kappa^{2}$ is satisfied the tachyonic
instability is active and proceeds adiabatically. In order to compute the
integral analytically we assume that
$(g^{2}/\lambda_{\phi})^{2}\gg(r_{h}-\kappa^{2})(3r_{h}-2r_{\phi}g^{2}/\lambda_{\phi})$.
Then, the accumulated quantities [from Eq. (33)] are found to be
$X_{k} \simeq \frac{\pi(r_{h}-\kappa^{2})}{\sqrt{2g^{2}/\lambda_{\phi}}}\ ,\qquad \Theta_{k} \simeq \frac{\pi\sqrt{g^{2}/\lambda_{\phi}-r_{h}+\kappa^{2}}}{\sqrt{2}}\ .$ (41)
As expected, Eq. (41) shows that tachyonic production is still concentrated in
the small-momentum modes. However, the evolution of $n_{h_{k}}$ in the
presence of $g^{2}/\lambda_{\phi}\neq 0$ is markedly different. Rather than a
power law, the phase-space density rapidly asymptotes to a constant as
$\log n_{h_{k}} \simeq \frac{12\sqrt{2}\,\pi\xi}{T\sqrt{g^{2}/\lambda_{\phi}}}\left[\frac{1}{x_{0}}-\frac{1}{x}-\frac{(x-x_{0})\kappa^{2}}{6\xi}\right]\ ,$ (42)
again neglecting the oscillatory component $\cos\Theta_{k}$. The termination
of the tachyonic resonance is pushed to an earlier time depending on the
quartic couplings:
$x_{\xi} \simeq \sqrt{\frac{6\xi}{\sqrt{g^{2}/\lambda_{\phi}}}}\ .$ (43)
Afterward, the modes grow as $n_{h_{k}}\propto e^{2\mu_{k}x}$, driven by the
parametric resonance. The phase-space density can be found at these times by
matching to Eq. (42) at $x_{\xi}$.
Finally, the number density of Higgs particles produced during this tachyonic
phase is found by integrating Eq. (42). Using the saddlepoint approximation we
find
$n_{h} \simeq \frac{(\sqrt{\lambda_{\phi}}\,\overline{\varphi})^{3}}{8\pi^{3}}\left(\frac{T}{2x}\sqrt{\frac{g^{2}}{2\lambda_{\phi}}}\right)^{3/2}e^{\frac{12\sqrt{2}\pi\xi}{T\sqrt{g^{2}/\lambda_{\phi}}}\left(\frac{1}{x_{0}}-\frac{1}{x}\right)}\ .$ (44)
Figure 5: The phase-space density of Higgs particles $n_{h_{k}}$ produced
during preheating, computed numerically and evaluated at the time $x=400$. The
point markers along the vertical axis show our analytical result for
$n_{h_{0}}$, found by evaluating Eq. (42) at zero momentum. The Higgs-inflaton
coupling chosen $g^{2}/\lambda_{\phi}=21$ corresponds to a narrow resonance,
and the different curves show various choices for the curvature coupling
$\xi$. The only production from the quartic interaction occurs at finite
momenta (grey region) and the curvature interaction dominates the production
at small momenta.
In Fig. 5 we have plotted the phase-space density $n_{h_{k}}$, calculated
numerically over the momentum space of the Higgs modes. The quartic coupling
chosen $g^{2}/\lambda_{\phi}=21$ sits on a narrow resonance and therefore only
produces particles at finite momenta (highlighted by the gray region).
Meanwhile, the different curves show various non-minimal couplings $\xi$.
Because the parametric resonance yields unimpeded particle production, the
finite-momentum peak in $n_{h_{k}}$ continues to grow with time, while
production for the other modes ceases once $x\gtrsim x_{\xi}$. The large point
markers along the vertical axis show agreement with our analytical result for
$n_{h_{0}}$ where the tachyonic production is dominant, found by evaluating
Eq. (42) at zero momentum.
The same logic as above may be applied to the $\xi<0$ region of parameter
space. As this region is dominated by tachyonic particle production, we may
again employ Eq. (33) to calculate the generated Higgs spectrum. While this
calculation is straightforward in principle, it requires some additional
technical details that we include in Appendix B, but we summarize the results
here. The phase-space density for the Higgs is given by
$\log n_{h_{k}} \simeq \frac{4H_{1}(3,0)}{T}\int dx\,\frac{\kappa^{2}-2r_{h}+\frac{g^{2}}{\lambda_{\phi}}\left(2r_{\phi}-1\right)}{\sqrt{\kappa^{2}-r_{h}}}\ ,$ (45)
where $H_{1}(3,0)\approx 0.76$ corresponds to the function defined in Eq. (91)
of Appendix B. The small-momentum modes provide the largest contribution, and
we can approximate
$\log n_{h_{0}} \simeq \frac{8H_{1}(3,0)}{T}\left(1+\frac{g^{2}}{\lambda_{\phi}}\frac{\xi_{\phi}}{\xi}\right)\sqrt{6|\xi|}\,\log\left(\frac{x}{x_{0}}\right)-\frac{H_{1}(3,0)}{T}\sqrt{\frac{2}{-3\xi}}\,\frac{g^{2}}{\lambda_{\phi}}\,(x^{2}-x_{0}^{2})$ (46)
in this limit. Compared to $\xi>0$, the power indices are of a similar
magnitude if we neglect the effect of $\xi_{\phi}$.
Figure 6: The growth of the zero-mode phase-space density in the mixed-
coupling case, exploring both signs $\xi=\pm 100$ and the broad and narrow
regimes of the parametric resonance. The thin/thick curves show our
numerical/analytical results, respectively. For $\xi>0$ the tachyonic
instability drives particle production early on [following Eq. (42)], but once
$x\gtrsim x_{\xi}$ it is driven by the parametric resonance $n_{k}\propto
e^{2\mu_{k}x}$. The quartic and curvature contributions add constructively and
enhance the instability. Meanwhile, for $\xi<0$ these contributions add
destructively and the growth exponent $\mu_{k}$ is effectively shut off
[following Eq. (39)] until the time $x_{g}$.
In Fig. 6 we demonstrate the growth of the zero-mode $n_{h_{0}}$ in the mixed-
coupling case, showing both our numerical (thin curves) and analytical (thick
curves) results. For comparison, we have included both the positive (blue
curves) and negative (red curves) $\xi$ regimes, as well as the narrow and
broad resonance regimes. The destructive effect of $\xi<0$ is evident, as
_changing the sign of $\xi$ induces a gap of many orders of magnitude_ in the
zero-mode density. As a result, we should expect that electroweak vacuum
metastability shows a preference for the $\xi<0$ half of parameter space. We
shall return to this topic when we examine Higgs destabilization in Sec. IV.
#### III.3.3 Small-Coupling Regime
Let us now focus on the small-coupling regime, i.e., where both
$g^{2}/\lambda_{\phi}\lesssim 1$ and $|r_{h}|\lesssim 1$. The latter
inequality is inevitably satisfied at late times, so we are free to also interpret this regime as
the late-time behavior of a model with $g^{2}/\lambda_{\phi}\lesssim 1$ and an
arbitrary curvature coupling. Indeed, as soon as $|r_{h}|\lesssim 1$ the
tachyonic instability transitions into a narrow resonance; in fact, both of
the instabilities in this regime experience a narrow parametric resonance and
our analysis should follow a different approach. Namely, the oscillatory terms
in Eq. (38) act as small modulations to $\omega_{\mathcal{H}_{k}}^{2}$ and we
can treat these terms perturbatively Greene _et al._ (1997). We expand the
elliptic functions as
$\left(\frac{\varphi}{\overline{\varphi}}\right)^{2}=\sum_{\ell=0}^{\infty}F_{\ell}\cos\frac{4\pi\ell(x-x_{0})}{T}\,,\qquad\left(\frac{\varphi}{\overline{\varphi}}\right)^{4}=\sum_{\ell=0}^{\infty}G_{\ell}\cos\frac{4\pi\ell(x-x_{0})}{T}\ ,$ (47)
with the leading five coefficients given in Table 1. The series converges
quickly, so truncating at order $\ell=2$ already provides a good approximation.
Moreover, given that $r_{h}$ is slowly varying compared to the timescale of
inflaton oscillations, the Higgs mode equations take the form of the
Whittaker-Hill equation:
$\frac{d^{2}\mathcal{H}_{k}}{dz^{2}}+\left(A_{k}+2p\cos 2z+2q\cos 4z\right)\mathcal{H}_{k}=0\ ,$ (48)
where we have defined $z\equiv 2\pi(x-x_{0})/T$. The coefficients of the
frequency terms are given by
$A_{k}=\left(\frac{T}{2\pi}\right)^{2}\Big[\kappa^{2}+\frac{g^{2}}{\lambda_{\phi}}(F_{0}-2r_{\phi}G_{0})+r_{h}(3G_{0}-1)\Big]$ (49)
in which we have defined the quantities
$p=-\frac{1}{2}\left(\frac{T}{2\pi}\right)^{2}\left[\frac{g^{2}}{\lambda_{\phi}}(F_{1}-2r_{\phi}G_{1})+3r_{h}G_{1}\right]\,,\qquad q=+\frac{1}{2}\left(\frac{T}{2\pi}\right)^{2}\left[\frac{g^{2}}{\lambda_{\phi}}(F_{2}-2r_{\phi}G_{2})+3r_{h}G_{2}\right]\ .$ (50)
Using Floquet theory it is relatively straightforward to calculate the
characteristic exponents $\mu_{k}$ for the unstable modes Whittaker and Watson
(1996); Lachapelle and Brandenberger (2009). We can therefore find the maximum
exponents $\mu_{\rm max}\equiv\max_{k}\mu_{k}$ over the small-coupling
parameter space. These results are displayed in Fig. 7.
Table 1: Coefficients for the expansions in Eq. (47).

$\ell$ | $0$ | $1$ | $2$ | $3$ | $4$
---|---|---|---|---|---
$F_{\ell}$ | $0.457$ | $0.497$ | $4.29\times 10^{-2}$ | $2.78\times 10^{-3}$ | $1.60\times 10^{-4}$
$G_{\ell}$ | $0.333$ | $0.476$ | $0.164$ | $2.39\times 10^{-2}$ | $2.45\times 10^{-3}$
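These coefficients follow directly from a numerical Fourier decomposition of the elliptic background. As a hedged illustration, the following Python sketch assumes the standard massless-preheating background $\varphi/\overline{\varphi}=\mathrm{cn}(x-x_{0},m)$ with elliptic parameter $m=1/2$ [consistent with the elliptic function of Eq. (16)]; with this assumption it reproduces the entries of Table 1, e.g., $F_{0}=(E/K-1/2)/(1/2)\approx 0.457$.

```python
import numpy as np
from scipy.special import ellipj, ellipk

m = 0.5                          # elliptic parameter (our assumption; see text)
T = 4.0 * ellipk(m)              # period of the cn oscillation
N = 1 << 14
x = np.linspace(0.0, T, N, endpoint=False)
_, cn, _, _ = ellipj(x, m)

def cosine_coeffs(f, lmax):
    """Coefficients c_l of f = sum_l c_l cos(4*pi*l*x/T).  Since f has period
    T/2, only the even harmonics of the base period T contribute."""
    c = np.fft.rfft(f) / N
    return np.array([c[0].real] + [2.0 * c[2*l].real for l in range(1, lmax+1)])

F = cosine_coeffs(cn**2, 4)      # expect ~ [0.457, 0.497, 0.0429, ...]
G = cosine_coeffs(cn**4, 4)      # expect ~ [0.333, 0.476, 0.164, ...]
print(np.round(F, 4), np.round(G, 4))
```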
An interesting feature of Fig. 7 is that, given the scale-factor dependence of
$r_{h}$, we can interpret the figure as showing the flow of $\mu_{\rm max}$ as
a function of time. For some initial $r_{h}\neq 0$ value, $\mu_{\rm max}$
flows along contours of constant $g^{2}/\lambda_{\phi}$ (e.g., the gray
arrows) until it reaches the $r_{h}=0$ axis. Notably, for $\xi<0$ the
evolution of $\mu_{\rm max}$ over this time is not necessarily monotonic. For
larger quartic couplings, one may enter the $\mu_{\rm max}=0$ region (enclosed
by red curves) and then exit it while approaching $r_{h}=0$.
Note that in the regime of negligible curvature couplings, Eq. (48) reduces to
the form of the Mathieu equation. The Mathieu equation is familiar from models
of massive preheating, but unlike in such models, the approximate scale
invariance of the theory here prevents the parametric instability from the
$g^{2}\phi^{2}$ interaction from terminating. The fact that the curvature contributions are
neither long-lived nor tachyonic means that the quartic contribution becomes
the most salient in the small-coupling regime. Even for couplings as large as
$r_{h}=\mathcal{O}(g^{2}/\lambda_{\phi})$, the resonance bands are widened,
but their influence only induces a logarithmic correction to the growth rates.
Figure 7: Contours of the maximum characteristic exponent $\mu_{\rm
max}\equiv\max_{k}\mu_{k}$ shown over the space of small couplings
$\\{g^{2}/\lambda_{\phi},r_{h}\\}$ with $\xi_{\phi}=0$ for simplicity. Because
the curvature parameter redshifts as $|r_{h}|\propto 1/a^{2}$, this figure may
be interpreted in a time-dependent way. From some initial point, the $\mu_{\rm
max}$ value flows along lines of fixed $g^{2}/\lambda_{\phi}$ toward
$|r_{h}|\rightarrow 0$ (illustrated by gray arrows) before finally landing
along the narrow quartic resonance $\mu_{\rm max}\approx
0.15g^{2}/\lambda_{\phi}$ (gray-dashed line).
For these reasons, the $r_{h}\rightarrow 0$ limit is usually the most
pertinent in the context of vacuum metastability. Here, the modes grow as
$n_{h_{k}}\approx e^{2\mu_{k}x}/2$ for momenta around the narrow bands. The
primary resonance is centered at $\kappa^{2}=2\pi/T$ and one finds the maximum
exponent $\mu_{\rm max}=F_{1}Tg^{2}/(8\pi\lambda_{\phi})\approx 0.15g^{2}/\lambda_{\phi}$ Greene _et al._ (1997). Using the saddle-point approximation, we integrate to find
$n_{h}\simeq\frac{\pi\lambda_{\phi}\overline{\varphi}^{\,3}}{T^{2}}\sqrt{\frac{2\lambda_{\phi}\mu_{\rm max}}{Tx}}\,e^{2\mu_{\rm max}x}\ .$ (51)
### III.4 Perturbative Higgs Decays
Thus far, for simplicity we have neglected the perturbative decay of Higgs
particles after their production. These decays may not appear important when
considering the substantial rates of steady non-perturbative particle
production. After all, perturbative decays are known to have only a minor
effect in preheating with massive inflaton potentials (see Appendix A).
However, given the unique properties that appear for massless preheating, it
is worth examining the Higgs decays in closer detail.
In the early stages of preheating, we expect that perturbative decay channels
are kinematically accessible since the homogeneous background value for $h$ is
negligible. Even if a background value is generated, the masses of SM
particles are lighter than the Higgs during preheating as long as $h\ll h_{\rm
crit}$, in which
$h_{\rm crit}=\sqrt{\frac{g^{2}\phi^{2}+\xi_{h}R}{\left|\lambda_{h}\right|}}$ (52)
is the Higgs value at the barrier separating the false and true vacua, i.e.,
the local maximum of the potential.
We do not expect substantial resonant production of $W$ and $Z$ gauge bosons
from the Higgs field in the early stages of preheating. Therefore, the
dominant decay channel of the Higgs is into top quarks at a rate
$\Gamma_{h}\simeq\frac{3y_{t}^{2}m_{h}}{16\pi}$ (53)
in the rest-frame, where $y_{t}$ is the top Yukawa coupling, evaluated at the
Higgs mass scale, and we have denoted
$m_{h}=\sqrt{g^{2}\phi^{2}+\xi_{h}R}$ (54)
as the effective Higgs mass. In general, we should include a Lorentz factor
$\gamma=\omega_{h_{k}}/m_{h}$ which suppresses the decay rate for relativistic
particles. The tachyonic instability produces particles with physical momenta
$k/a\lesssim\overline{\phi}{}^{2}\sqrt{\xi\lambda_{\phi}/2}$, which translates
to $\gamma\simeq\mathcal{O}(1)$ and does not appreciably suppress the rate.
Likewise, the parametric instability with $g^{2}/\lambda_{\phi}\gg 1$ produces
particles with
$k/a\lesssim[\lambda_{\phi}\overline{\phi}{}^{2}\sqrt{g^{2}/\lambda_{\phi}}]^{1/2}$,
which corresponds to non-relativistic particles. The exception is small couplings
$g^{2}/\lambda_{\phi}\ll 1$, for which Higgs production is dominated by
particles with a Lorentz factor $\gamma\simeq
k/(am_{h})\simeq\sqrt{\lambda_{\phi}/g^{2}}$. While we include the full effect
of this time-dilation on the decay rate in our numerical computations, our
analytical calculations are performed assuming that
$\gamma\simeq\max(1,\sqrt{\lambda_{\phi}/g^{2}})$.
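For concreteness, the rest-frame rate of Eq. (53), the effective mass of Eq. (54), and the Lorentz suppression can be packaged together as in the minimal Python sketch below; the function and its argument names are ours, not part of the original analysis.

```python
import numpy as np

def higgs_decay_rate(phi, R, g2, xi_h, y_t, gamma=1.0):
    """Perturbative Higgs decay rate into top quarks.

    Implements Gamma_h = 3 y_t^2 m_h / (16 pi) from Eq. (53), with the
    effective mass m_h = sqrt(g^2 phi^2 + xi_h R) of Eq. (54).  The optional
    Lorentz factor gamma ~ max(1, sqrt(lambda_phi/g^2)) dilutes the rate for
    relativistic particles, as described in the text.
    """
    m_h = np.sqrt(g2 * phi**2 + xi_h * R)
    return 3.0 * y_t**2 * m_h / (16.0 * np.pi * gamma)
```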
Naturally, as perturbative decays exponentially suppress the number density,
they work against the non-perturbative production processes above. The field-
dependence of the effective Higgs mass $m_{h}$ implies an interesting
chronology of events. If the parametric resonance is the dominant production
mechanism, then a burst of particles is produced as $\phi$ passes through the
origin. Meanwhile, decays are strongest when $\phi$ is maximally displaced.
The result is that rather than $n_{h_{k}}$ maintaining a constant value during
its adiabatic evolution, it is exponentially suppressed at the rate
$\Gamma_{h}$. However, due to its dependence on evolving background
quantities, the decay rate is a non-trivial function of time. The effect on
the number density is to dissipate it as
$\log n_{h_{k}}\propto-\int dt\,\frac{\Gamma_{h}}{\gamma}=-\int dx\,\frac{a\Gamma_{h}}{\gamma\sqrt{\lambda_{\phi}}\,\overline{\varphi}}\ ,$ (55)
where the integration is performed over times when decays are kinematically
possible.
An important limit is that in which the non-minimal couplings
$\xi_{h},\xi_{\phi}$ are negligible, as this also represents the system for
times $x\gtrsim x_{\xi}$. The pivotal observation is that the effective decay
rate scales as $a\Gamma_{h}\propto|\varphi|$ and thus has _no overall scale-
factor dependence, allowing a direct competition between the production and
decay rates_ to determine the fate of the electroweak vacuum; this contrasts
with massive preheating, in which the decay exponent carries a logarithmic
time-dependence. (We refer to Appendix A for a more thorough comparison.)
Explicitly, the time-averaged conformal decay rate $\langle
a\Gamma_{h}\rangle$ is given by
$\frac{\left\langle a\Gamma_{h}\right\rangle}{\sqrt{\lambda_{\phi}}\,\overline{\varphi}}=\frac{3y_{t}^{2}}{16\pi}\sqrt{\frac{g^{2}}{\lambda_{\phi}}}\left\langle\frac{\left|\varphi\right|}{\overline{\varphi}}\right\rangle\approx 0.036\,y_{t}^{2}\sqrt{\frac{g^{2}}{\lambda_{\phi}}}\ ,$ (56)
where we have assumed the regime $g^{2}/\lambda_{\phi}\gtrsim 1$. Indeed, we
find that the number density of Higgs particles does not grow for couplings
exceeding
$\frac{g^{2}}{\lambda_{\phi}}\gtrsim 2.8\times 10^{3}\left(\frac{\mu_{\rm max}}{0.24}\right)^{2}\left(\frac{y_{t}}{0.5}\right)^{-4}\ .$ (57)
Remarkably, this result suggests that electroweak vacuum metastability can be
achieved in massless preheating. Moreover, this is given not by an upper bound
but by a _lower bound_ on the quartic coupling (see also Ref. Lebedev and Yoon
(2021)).
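As a quick numerical cross-check of this threshold, one can set the growth exponent $2\mu_{\rm max}$ of $n_{h}\propto e^{2\mu_{\rm max}x}$ equal to the averaged decay exponent of Eq. (56) and solve for the coupling. The short Python sketch below (with the benchmark values $\mu_{\rm max}=0.24$ and $y_{t}=0.5$ quoted in Eq. (57); the script itself is our illustration, not part of the original analysis) reproduces the quoted number:

```python
import numpy as np

mu_max = 0.24     # broad-resonance growth exponent
y_t = 0.5         # top Yukawa benchmark from Eq. (57)

# Growth stalls when 2*mu_max = <a Gamma_h>/(sqrt(lambda_phi)*varphi_bar)
#                             = 0.036 * y_t**2 * sqrt(g^2/lambda_phi)  [Eq. (56)]
ratio = (2.0 * mu_max / (0.036 * y_t**2))**2
print(f"g^2/lambda_phi threshold ~ {ratio:.2e}")   # ~ 2.8e3
```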
In the opposite limit, where $g$ is negligible and the curvature term
dominates, neither the production nor the perturbative decays evolve exponentially.
In particular, we have $\left\langle a\Gamma_{h}\right\rangle\propto\log a$
and a growth rate that is dominant at early times. The result is that
perturbative decays do not quell the early phase of tachyonic production, but
if the vacuum survives then decays increasingly dissipate the fluctuations as
the system evolves.
Looking back to Fig. 4, we have demonstrated the effect of the perturbative
decays on particle production by plotting $n_{h_{k}}$ both with (blue curves)
and without (grayscale curves) the decays incorporated. Clearly, in the
absence of decays the phase-space density remains constant when the inflaton
is away from $\phi\approx 0$, as the $n_{h_{k}}$ correspond to an adiabatic
invariant. However, when the Higgs decays are turned on, the curves show a
dissipation during the adiabatic evolution. The dissipation does not always
appear to be at a constant rate, reflecting that in our numerical computations
the decay rates are not averaged as in Eq. (56) but are given instantaneously
as a function of the effective Higgs mass $m_{h}$.
Before concluding this section, we put aside the Higgs for a moment and make
some remarks regarding the quantum fluctuations of the inflaton field. Much of
the discussion in this section applies, by analogy, to the production of
inflaton particles. These particles are generated through the
inflaton self-coupling, so that the effective masses [analogous to Eq. (24)]
are
$\omega_{\phi_{k}}^{2}=\frac{k^{2}}{a^{2}}+3\lambda_{\phi}\phi^{2}\ ,$ (58)
in the Jordan frame, or equivalently in the Einstein frame after neglecting
higher-order terms of $\mathcal{O}(a^{-2})$. In fact, production via this inflaton
“self-resonance” is understood by the simple replacement
$g^{2}/\lambda_{\phi}=3$ in the analysis of Sec. III.1 Greene _et al._
(1997). For example, from our result for the Higgs number density $n_{h}$ in
Eq. (31), we can produce an estimate of the inflaton number density $n_{\phi}$
with appropriate replacements. Interestingly, the characteristic growth
exponent $\mu_{\rm max}\approx 0.036$ for inflaton production is the weakest
across the $g^{2}/\lambda_{\phi}\geq 1$ regime. In Sec. IV below, we shall find
that this self-resonance and its properties are highly relevant to the
discussion of vacuum destabilization and the end of preheating.
## IV Backreaction and Vacuum Destabilization
While Sec. III provides the groundwork for our study of electroweak vacuum
metastability, an essential ingredient is still absent from our analysis. In
particular, recalling our assumption that the Higgs field has a negligible
background value $h\approx 0$, the term in the potential which threatens to
destabilize the vacuum (the self-interaction $\lambda_{h}h^{4}/4$) has not yet
entered our analysis. In order to incorporate the unstable vacuum, we must not
only consider the growth of Higgs fluctuations but also the _backreaction_
that these fluctuations have on the system. As discussed in Sec. I, although
the fluctuations in massless preheating appear to grow unimpeded, they will
inevitably grow large enough to either trigger vacuum decay or disrupt the
background inflaton evolution through backreaction.
A complete treatment of the backreaction and non-linearities that appear,
especially in the later stages of preheating, requires numerical lattice
simulations. However, we find that in order to probe the Higgs stability, or
estimate the onset of the non-linear stage, the one-loop Hartree approximation
Boyanovsky _et al._ (1995) is sufficient. That is, we assume the
factorization $h^{4}\rightarrow 6\langle h^{2}\rangle h^{2}-3\langle
h^{2}\rangle^{2}$ for the quartic term in the Lagrangian (and an analogous
expression for the inflaton), where the expectation values are given by
$\langle h^{2}\rangle=\frac{1}{(2\pi)^{3}}\int d^{3}k\left|h_{k}\right|^{2}$ (59)
$\langle\phi^{2}\rangle=\frac{1}{(2\pi)^{3}}\int d^{3}k\left|\phi_{k}\right|^{2}\ .$ (60)
In turn, the effective masses for the Higgs and inflaton modes—from Eq. (24)
and Eq. (58), respectively—are modified as the variance of the fluctuations
grows:
$\omega_{h_{k}}^{2}=\frac{k^{2}}{a^{2}}+g^{2}\phi^{2}+\xi_{h}R+g^{2}\langle\phi^{2}\rangle+3\lambda_{h}\langle h^{2}\rangle$ (61)
$\omega_{\phi_{k}}^{2}=\frac{k^{2}}{a^{2}}+3\lambda_{\phi}\phi^{2}+g^{2}\langle h^{2}\rangle+3\lambda_{\phi}\langle\phi^{2}\rangle\ .$ (62)
The fluctuations also couple to the inflaton background, modifying the
equation of motion as
$\ddot{\phi}+3H\dot{\phi}+\lambda_{\phi}\phi^{3}+\left(3\lambda_{\phi}\langle\phi^{2}\rangle+g^{2}\langle h^{2}\rangle\right)\phi=0\ .$ (63)
Given that the motion of $\phi$ drives the particle-production processes, once
these oscillations are disrupted the dynamics of the type discussed in Sec.
III is shut down. The termination of these processes generally coincides with
the energy density of the fluctuations growing comparable to that of the
inflaton background and thus corresponds to the end of the linear stage of
preheating.
We shall include all of the Hartree terms in our numerical calculations (as
detailed in Appendix C), but the corrections from the self-interactions
$3\lambda_{h}\langle h^{2}\rangle$ and $3\lambda_{\phi}\langle\phi^{2}\rangle$
will tend to carry the most weight for our analytical calculations. We find
that from simple analytic estimates we can reproduce the salient lattice
simulation results in the literature. For further validation, in Appendix A we
also compare our numerical analyses against the literature on models with
quadratic inflaton potentials. For further discussion on the non-linear
dynamics and backreaction in preheating, we refer the reader to Refs. Kofman
_et al._ (1997); Greene _et al._ (1997).
The main results of Sec. III consist of the produced number density $n_{h}$,
so it is necessary to translate between these quantities and the variance
$\langle h^{2}\rangle$. While one can fully express $\langle h^{2}\rangle$ in
terms of Bogoliubov coefficients, the result is a sum of two terms, one of
which is rapidly oscillating and not important for our purpose of producing
order-of-magnitude estimates Kofman _et al._ (1997). As long as the produced
Higgs particles are non-relativistic one can write $\langle h^{2}\rangle\simeq
n_{h}/(a^{3}\omega_{h})$, with a similar expression applying to the inflaton
or any other relevant fields. Note that the number density is only well-
defined when evolving adiabatically, so the points at which we evaluate
$\omega_{h_{k}}$ must remain consistent with this assumption.
### IV.1 Onset of Non-Linear Stage
In the context of vacuum metastability, a natural concern is that particle
production processes arising from scale-invariant terms do not terminate in
the linear stage of preheating. That is, these processes only terminate once
field fluctuations become so large that they significantly backreact on the
system at some time $x_{\rm NL}$. The longer the linear stage lasts, the
greater the concern for vacuum destabilization, since the Higgs fluctuations
must remain sufficiently controlled over this entire duration.
In our model, either the Higgs or inflaton fluctuations may bring about an end
to the linear stage, but a spectator field could potentially play this role as
well (see Sec. V for further discussion along these lines). Let us first
examine the necessary conditions for the Higgs to play this role. There are
two competing effects that influence the background inflaton field Greene _et
al._ (1997). On the one hand, as the inflaton loses energy to particle
production, $\overline{\varphi}$ falls and thereby decreases the effective
frequency of oscillations $\sqrt{\lambda_{\phi}}\,\overline{\phi}$. On the
other hand, as the Higgs variance grows the inflaton obtains an effective mass
$\sqrt{g^{2}\langle h^{2}\rangle}$, increasing the oscillation frequency. The
inflaton oscillations are disrupted once these two quantities are similar in
magnitude, i.e., once $g^{2}\\!\langle
h^{2}\rangle\\!/(\lambda_{\phi}\overline{\phi}{}^{2})$ approaches unity. Of
course, throughout this process the size of $\langle h^{2}\rangle$ must remain
controlled, $\langle h^{2}\rangle\lesssim h_{\rm crit}^{2}$, so that we do not
destabilize the Higgs. According to Eq. (52), this requirement leads to a
rather tight constraint on the couplings:
$\frac{g^{2}}{\lambda_{\phi}}\gtrsim\sqrt{\frac{\left|\lambda_{h}\right|}{\lambda_{\phi}}}\ ,$ (64)
which is approximately $g^{2}/\lambda_{\phi}\gtrsim 10^{4}$ for our benchmark
parameter choice $\lambda_{\phi}=10^{-10}$. However, we have already found
that in this regime the rate of perturbative decays dominates over that of
particle production [see previous discussion regarding Eq. (57)]. We can thus
conclude that, in the absence of backreaction sourced by any other fields, the
linear stage of _preheating must be terminated by the inflaton fluctuations_
and not the Higgs.
Unfortunately, the inflaton fluctuations grow slowly with the weak
characteristic exponent $\mu_{\rm max}\approx 0.036$ (refer to the end of Sec.
III.4), so this can take a significant amount of time. The analysis above
applies just as well to the inflaton variance $\langle\phi^{2}\rangle$ with
the replacements $g^{2}\to 3\lambda_{\phi}$ and
$\Gamma_{h}\rightarrow\Gamma_{\phi}$, where $\Gamma_{\phi}$ is the model-
dependent inflaton decay rate. Then, the onset of the non-linear (NL) stage
occurs at
$x_{\rm NL}\simeq-\frac{1}{4\mu_{\rm max}}W_{-1}\left[-\sqrt{\frac{3}{8}}\frac{9\lambda_{\phi}^{2}}{(2\pi)^{6}}\right]\ ,$ (65)
where $W_{-1}$ denotes the negative branch of the Lambert $W$-function and we
have neglected $\Gamma_{\phi}$. Using the parameters above we find $x_{\rm
NL}\approx 413$. Remarkably, this figure agrees to within $2\%$ with
lattice-simulation results Khlebnikov and Tkachev (1996), which give
$x_{\rm NL}=76-14.3\log\lambda_{\phi}\approx 405$.
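Equation (65) is straightforward to evaluate numerically; the following Python sketch (using SciPy's `lambertw` and the benchmark $\lambda_{\phi}=10^{-10}$ and $\mu_{\rm max}\approx 0.036$ quoted above) reproduces both our value and the lattice fit:

```python
import numpy as np
from scipy.special import lambertw

lam_phi, mu_max = 1e-10, 0.036
arg = -np.sqrt(3.0/8.0) * 9.0 * lam_phi**2 / (2.0*np.pi)**6
x_NL = -lambertw(arg, k=-1).real / (4.0 * mu_max)  # W_{-1} branch of Eq. (65)
print(x_NL)                                        # ~ 413
print(76.0 - 14.3*np.log(lam_phi))                 # lattice fit, ~ 405
```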
### IV.2 Vacuum Destabilization
Figure 8: Evolution of the conformal variance for the Higgs
$\langle\mathcal{H}^{2}\rangle$ and inflaton $\langle\varphi^{2}\rangle$, with
their backreaction on the system included through the one-loop Hartree
approximation. If the variance grows to the extent that
$\omega_{\mathcal{H}_{k}}^{2}$ is dominated by the (negative)
$3\lambda_{h}\langle\mathcal{H}^{2}\rangle$ term, the Higgs experiences a
runaway tachyonic growth that rapidly destabilizes the electroweak vacuum,
causing the sharp divergence in the curves. A point marker at the end of a
curve indicates destabilization, and the gray curves show
$\langle\mathcal{H}^{2}\rangle$ in the absence of decays. Identically to Fig.
4, the top/bottom panels correspond to broad/narrow parametric resonance,
respectively. In the latter, $\langle\varphi^{2}\rangle$ grows sufficiently to
end the linear preheating stage at $x_{\rm NL}\approx 400$, consistent with
lattice results in the literature Khlebnikov and Tkachev (1996).
In the above, we have established a clear criterion for model viability: for a
given choice of couplings, if the electroweak vacuum survives for a duration
longer than $x_{\rm NL}$ then it survives the linear stage of preheating. The
transient non-linear stage that follows tends toward thermal equilibrium:
resonant particle production stops and the energy density is redistributed
among the modes.
We expect that if metastability is maintained until $x_{\rm NL}$ then it also
survives the non-linear stage; we briefly discuss these issues and how
massless preheating contrasts with other scenarios in the Conclusions of Sec.
V.
Our task then shifts to calculating the vacuum decay times $x_{\rm dec}$ as a
function of the model parameters. A robust method to conservatively estimate
$x_{\rm dec}$ is as follows Ema _et al._ (2016); Enqvist _et al._ (2016). As
the Higgs fluctuations grow, the effective mass term associated with the Higgs
self-interaction $3\lambda_{h}\langle h^{2}\rangle$ becomes negative and large
enough to make some Higgs modes tachyonic over a time interval $\Delta t$.
These modes grow exponentially in this interval at a rate
$\sqrt{3|\lambda_{h}|\langle h^{2}\rangle}$; in turn, this amplifies $\langle
h^{2}\rangle$, which then makes the tachyonic masses even larger—i.e., the
system enters a positive feedback loop which rapidly destabilizes the vacuum.
We can estimate $x_{\rm dec}$ as the time at which the growth exponent becomes
larger than $\mathcal{O}(1)$:
$\sqrt{\frac{3\langle h^{2}\rangle}{\overline{\phi}^{\,2}}\frac{|\lambda_{h}|}{\lambda_{\phi}}}\,\Delta x\gtrsim 1\ ,$ (66)
and we have used that $\Delta t=\Delta
x/(\sqrt{\lambda_{\phi}}\,\overline{\phi})$.
Alternatively, in situations where the Higgs modes oscillate slowly relative
to the background inflaton field, the simpler condition $3\lambda_{h}\langle
h^{2}\rangle\gtrsim m_{h}^{2}$ is appropriate, in which we evaluate the
effective Higgs mass $m_{h}$ [of Eq. (54)] at the amplitude of the inflaton
oscillations $\phi=\overline{\phi}$.
The process of vacuum destabilization as it unfolds is illustrated in Fig. 8
by including backreaction in our numerical calculations. The panels reflect
the same parameter choices made for Fig. 4 in the previous section, where we
focus on the broad $g^{2}/\lambda_{\phi}=2n^{2}$ and narrow
$g^{2}/\lambda_{\phi}=2n^{2}+n$ parametric resonances within the same coupling
band (with $n=12$). The curvature coupling $\xi=30$ is subdominant but non-
negligible. Both the Higgs and inflaton variances are shown, and the former is
plotted both with (blue) and without (gray) Higgs decays. In the top panel, the
perturbative decays delay destabilization, but it ultimately occurs at
$x_{\rm dec}\approx 35$. The runaway growth of the Higgs fluctuations is
observed toward the endpoint of each curve, as they finally cross the vacuum
barrier $\langle\mathcal{H}^{2}\rangle\geq(ah_{\rm crit})^{2}$. By contrast,
in the lower panel we see that the narrow resonance would lead to
destabilization in the absence of decays, only slightly later at $x_{\rm
dec}\approx 55$. However, we see that _accounting for perturbative decays
prevents vacuum destabilization_. Moreover, we observe the onset of the non-
linear stage at $x_{\rm NL}$ induced by the growth of inflaton fluctuations,
as $\langle\varphi^{2}\rangle$ grows sufficiently large. As expected, this
stage occurs at $x_{\rm NL}\approx 400$.
Let us now survey $x_{\rm dec}$ over the parameter space of couplings and
produce analytical estimates in each region. We shall work largely in parallel
to Sec. III and employ the results from that section. That is, we shall first
cover the $\xi=0$ and $g=0$ limits separately and then progress to the mixed
case, providing a general picture of vacuum metastability at the end of the
section.
Figure 9: The decay time $x_{\rm dec}$ of the electroweak vacuum as a function
of $g^{2}/\lambda_{\phi}$ (left panel) and $\xi$ (right panel), normalized by
the onset time $x_{\rm NL}$ of the non-linear stage. The vacuum survives
preheating for $x_{\rm dec}\geq x_{\rm NL}$. In the left panel, the results
are shown both with (black curve) and without (gray curve) perturbative Higgs
decays. Our analytical estimates in this panel (green curves) correspond to
the different regimes of Eq. (69). In particular, the two green curves for
$g^{2}/\lambda_{\phi}\gtrsim 1$ show the broad and narrow regimes, evaluated
using $\mu_{\rm max}\approx 0.24$ and Eq. (30), respectively. As these
$\mu_{\rm max}$ correspond to the minimum and maximum growth exponents, they
estimate the envelope of the highly non-monotonic $x_{\rm dec}$. In the right
panel, the analytical estimates are also shown by green curves and correspond
to Eq. (71) and Eq. (72), plotted over their regions of validity.
#### IV.2.1 From Parametric Instability
Let us first consider destabilization of the vacuum from the parametric
resonance. The effective mass of a Higgs mode $\omega^{2}_{h_{k}}\approx
k^{2}/a^{2}+g^{2}\phi^{2}+3\lambda_{h}\langle h^{2}\rangle$ may first become
tachyonic once the inflaton field passes through $\phi=0$. We can estimate
$x_{\rm dec}$ by following the discussion surrounding Eq. (66), but we must
first calculate the Higgs variance. Using that $\langle h^{2}\rangle\simeq
n_{h}/(a^{3}\omega_{h})$, with Eq. (31) and Eq. (51) for the different
coupling regimes, we find
$\langle h^{2}\rangle\simeq\begin{cases}\displaystyle{\frac{\lambda_{\phi}\overline{\phi}^{\,2}}{T^{3/2}}\sqrt{\frac{\mu_{\rm max}}{2x}}\,e^{2\mu_{h}x}}&\text{for }g^{2}/\lambda_{\phi}\lesssim 1\\[6pt]
\displaystyle{\frac{\lambda_{\phi}\overline{\phi}^{\,3}}{16\pi^{3}\left|\phi\right|}\left(\frac{g^{2}}{8\lambda_{\phi}}\right)^{1/4}\frac{e^{2\mu_{h}x}}{\sqrt{\mu_{\rm max}x}}}&\text{for }g^{2}/\lambda_{\phi}\gtrsim 1\end{cases}\ ,$ (67)
where we have defined the effective rate
$\mu_{h}\equiv\mu_{\rm max}-\frac{1}{2\gamma}\frac{\left\langle a\Gamma_{h}\right\rangle}{\sqrt{\lambda_{\phi}}\,\overline{\varphi}}$ (68)
in terms of the conformal decay rate of Eq. (56). The Lorentz factor
$\gamma\simeq\max(1,\sqrt{\lambda_{\phi}/g^{2}})$ accounts for the dilation of
relativistic Higgs decays, which is negligible in the
$g^{2}/\lambda_{\phi}\gtrsim 1$ regime but important for
$g^{2}/\lambda_{\phi}\lesssim 1$. Finally, the vacuum decay time is found in
both regimes:
$x_{\rm dec}\simeq\begin{cases}\displaystyle{\frac{-1}{4\mu_{h}}W_{-1}\left[\frac{-9\lambda_{h}^{2}}{32\pi^{2}}\frac{F_{1}^{2}\mu_{h}}{T\mu_{\rm max}}\right]}&\text{for }g^{2}/\lambda_{\phi}\lesssim 1\\[9pt]
\displaystyle{\frac{-1}{4\mu_{h}}W_{-1}\left[\frac{-9\lambda_{h}^{2}}{(2\pi)^{6}}\frac{\mu_{h}}{\mu_{\rm max}}\right]}&\text{for }g^{2}/\lambda_{\phi}\gtrsim 1\end{cases}\ .$ (69)
The $g^{2}/\lambda_{\phi}\gtrsim 1$ result is based on the destabilization
condition in Eq. (66). Meanwhile, for $g^{2}/\lambda_{\phi}\lesssim 1$ the
Higgs modes oscillate slowly relative to the inflaton background, so that
$3\lambda_{h}\langle h^{2}\rangle\gtrsim g^{2}\overline{\phi}{}^{2}$ is an
appropriate condition. Additionally, note that $\mu_{\rm max}$ depends on the
regime of the resonance, as given explicitly in Sec. III.1.
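Both branches of Eq. (69) involve the same Lambert-$W$ structure as Eq. (65) and are equally simple to evaluate numerically. A minimal Python sketch for the $g^{2}/\lambda_{\phi}\gtrsim 1$ branch follows; the inputs below are placeholders of our own (in particular, the value of $\lambda_{h}$ stands in for the running Higgs quartic coupling and is not fixed by the text).

```python
import numpy as np
from scipy.special import lambertw

def x_dec_broad(mu_h, mu_max, lam_h):
    """Eq. (69), g^2/lambda_phi >~ 1 branch, via the W_{-1} Lambert branch."""
    arg = -9.0 * lam_h**2 / (2.0*np.pi)**6 * mu_h / mu_max
    return -lambertw(arg, k=-1).real / (4.0 * mu_h)

# Placeholder inputs: no decays (mu_h = mu_max = 0.24) and |lambda_h| = 0.01
print(x_dec_broad(0.24, 0.24, -0.01))
```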
Manifestly, the effective rate $\mu_{h}$ shows the competition between
particle production and decays. For large enough coupling the decays can
eventually dominate because $\mu_{\rm max}$ is globally bounded from above;
the decay rate of produced Higgs particles scales with the coupling but the
rate of particle production does not.
Let us compute the critical value of the coupling for which $x_{\rm dec}\geq
x_{\rm NL}$, i.e., the smallest coupling for which the metastable vacuum
survives preheating. We shall separately consider the couplings corresponding
to the broad and narrow resonances. In the broad regime we find
$g^{2}/\lambda_{\phi}\gtrsim 2.4\times 10^{3}$. Likewise, using Eq. (30) for
$\mu_{\rm max}$, for the narrow resonances we find
$g^{2}/\lambda_{\phi}\gtrsim 213$. The dramatic gap between these thresholds
reflects the difference in the strength of the resonance between these two
regimes. Indeed, confirming our earlier observations in Eq. (57), we find that
perturbative decays of the Higgs stabilize the electroweak vacuum for
sufficiently large quartic couplings, _and these couplings significantly
differ based on the regime of the parametric resonance._
To paint a more complete picture and confirm our analytical results, we solve
the mode equations numerically within the one-loop Hartree approximation and
scan continuously over the range of couplings $g^{2}/\lambda_{\phi}$; this is
shown in the left-hand panel of Fig. 9. The black and gray curves show the
numerical results with and without perturbative decays, respectively. In line
with our calculations in Eq. (69), we find that $x_{\rm dec}$ is highly _non-
monotonic_ with respect to the quartic coupling. Indeed, this behavior
originates from the structure of the resonance bands (in Fig. 2), in which the
minima/maxima of the oscillations correspond to the broad/narrow resonance
regimes, respectively. Therefore, evaluating our analytical result in Eq. (69)
at the broad and narrow $\mu_{\rm max}$ values estimates the envelope of the
$x_{\rm dec}$ oscillations. This envelope is shown by the thick green curves
at $g^{2}/\lambda_{\phi}\geq 1$ in Fig. 9. We also show our analytical
estimate for the $g^{2}/\lambda_{\phi}\leq 1$ region. Meanwhile, the pivotal
role of the perturbative decays is highlighted by the gray curve, which shows
the numerical $x_{\rm dec}$ with perturbative decays turned off. Indeed, in
the absence of decays, the vacuum is only stabilized for very small coupling
$g^{2}/\lambda_{\phi}\lesssim 0.25$.
#### IV.2.2 From Tachyonic Instability
Next, let us consider the pure curvature-coupling limit, in which $g=0$.
We once again confine our discussion to $\xi>0$ since otherwise the vacuum
decays. From Sec. III, we know that the growth of fluctuations from this
tachyonic source is very different from the parametric instability. Most
notably, the tachyonic production ceases once $x\gtrsim x_{\xi}$, even for the
zero-mode. We estimate the variance again using $\langle h^{2}\rangle\simeq
n_{h}/(a^{3}\omega_{h})$, and since the effective mass is curvature dominated
we have $\omega_{h_{k}}^{2}\simeq\xi
R\simeq\xi\lambda_{\phi}\overline{\phi}{}^{4}$. Using Eq. (37) then yields
$\langle h^{2}\rangle\simeq\frac{3^{15/8}}{8}\frac{\xi^{1/4}\lambda_{\phi}\overline{\phi}^{\,4}}{(8\pi)^{3/2}}\left(\frac{x}{x_{0}}\right)^{4\sqrt{\frac{2\xi}{3\sqrt{3}}}}$ (70)
for $x<x_{\xi}$. Afterward, there is no particle production, so $n_{h}$ is
fixed and $\omega_{h}\propto\overline{\phi}{}^{2}\propto 1/a^{2}$. The
variance then redshifts as $\langle h^{2}\rangle\approx
n_{h}/(a^{3}\omega_{h})\propto 1/a$, which is slower than the scaling one
would find in the quartic-dominated case.
The fact that $\langle h^{2}\rangle$ redshifts in this way is a crucial
observation. If the ratio $\langle h^{2}\rangle/h_{\rm crit}^{2}$ grows at
times $x\gtrsim x_{\xi}$ due to its redshifting behavior, the vacuum may decay
even after particle production shuts down. For the quartic interaction, this
ratio would be fixed. However, for the curvature interaction, the variance
redshifts more slowly $\langle h^{2}\rangle\propto 1/a$ and the barrier
redshifts more rapidly $h_{\rm crit}^{2}\simeq\xi_{h}R/|\lambda_{h}|\propto
1/a^{4}$. Consequently, the ratio redshifts as $\langle h^{2}\rangle/h_{\rm
crit}^{2}\propto a^{3}$ and grows. Even if the Higgs remains stable throughout
the tachyonic phase, it destabilizes once the fluctuations grow beyond the
extent of the barrier $\langle h^{2}\rangle\gtrsim h_{\rm crit}^{2}$; this
leads to a vacuum decay at
$x_{\rm dec}\simeq\frac{8\sqrt{\pi}\,\xi^{3/4}}{3^{1/8}|\lambda_{h}|^{1/3}}\left(\frac{x_{0}}{x_{\xi}}\right)^{\frac{4}{3}\sqrt{\frac{2\xi}{3\sqrt{3}}}}\ ,$ (71)
which is valid for $x_{\rm dec}>x_{\xi}$, or equivalently $\xi\lesssim 16$.
Note that the presence of even a small quartic coupling may shield the vacuum
from this effect, since the quartic term will eventually dominate
$g^{2}\phi^{2}\gtrsim\xi R$ and halt the growth of $\langle
h^{2}\rangle/h_{\rm crit}^{2}$, which happens once
$x\gtrsim\sqrt{12\xi\lambda_{\phi}/g^{2}}$.
Otherwise, destabilization occurs during the tachyonic phase and the condition
in Eq. (66) can be used again to produce an estimate. This task is now more
complicated since backreaction is not the only source of tachyonicity. The
vacuum could decay in one of two ways: (i) by the $3\lambda_{h}\langle
h^{2}\rangle$ term or (ii) directly by the tachyonic mass of the curvature
term. A coupling of order $\xi\gtrsim\mathcal{O}(10)$ is sufficient to produce
the latter and in this case the vacuum decays rapidly. For smaller $\xi$,
however, the false vacuum can survive much longer. Using Eq. (66) with the
approximation that $\Delta x\simeq T/2$ leads to
$x_{\rm dec}\simeq\frac{256\sqrt{2}\,\pi^{3/2}\xi^{3/4}}{9\cdot 3^{7/8}|\lambda_{h}|T^{2}}\left(\frac{x_{\xi}}{x_{0}}\right)^{1-4\sqrt{\frac{2\xi}{3\sqrt{3}}}}\ .$ (72)
We consider this result consistent with our assumptions only if Eq. (72) gives
an earlier decay time than Eq. (71); in terms of the curvature coupling, this
implies a region of validity $\xi\gtrsim 4$ for Eq. (72).
Our numerical computations of the vacuum decay time $x_{\rm dec}$ are shown in
the right panel of Fig. 9. The pure curvature coupling case corresponds to the
black curve and the analytical estimates constitute the thick green curve. We
have also included the numerical results for several small quartic couplings
in the range $10^{-3}\lesssim g^{2}/\lambda_{\phi}\lesssim 10^{-1}$. These
results are in line with our rough estimates and confirm that the presence of
even a small quartic coupling can prevent decay of the false vacuum. We
have neglected $\xi\gtrsim\mathcal{O}(10)$ in our analytical estimates since
these correspond to rapid destabilization, for which the precise values of
$x_{\rm dec}$ are not essential.
#### IV.2.3 The Mixed Case
Let us finally consider the general mixed case in which both couplings $\xi$
and $g^{2}/\lambda_{\phi}$ are non-zero. As we have observed in Sec. III.3, if
the strength of the curvature coupling is at least comparable to
$g^{2}/\lambda_{\phi}$ the system undergoes a sequence of distinct phases of
particle production. In the context of vacuum stability, this means that the
electroweak vacuum must initially survive a phase of tachyonic production
until $x_{\xi}\simeq[6\xi/\sqrt{g^{2}/\lambda_{\phi}}]^{1/2}$ and then survive
the parametric instability until non-linear dynamics set in at $x_{\rm NL}$.
In what follows below, we shall assume such a scenario for our analytical
calculations.
Rather than concern ourselves directly with $x_{\rm dec}$, we produce an
estimate of the constraint for vacuum metastability by examining where
$\langle h^{2}\rangle\gtrsim h_{\rm crit}^{2}$. In the $\xi>0$ region we can
write $\langle h^{2}\rangle$ using Eq. (44) together with $\langle
h^{2}\rangle\simeq n_{h}/(a^{3}\omega_{h})$. We arrive at the expression
$\langle h^{2}\rangle\simeq\frac{\lambda_{\phi}\overline{\varphi}}{8\pi^{3}\sqrt{\xi}}\left(\frac{T}{2x}\sqrt{\frac{g^{2}}{2\lambda_{\phi}}}\right)^{3/2}e^{\frac{12\sqrt{2}\pi\xi}{T\sqrt{g^{2}/\lambda_{\phi}}}\left(\frac{1}{x_{0}}-\frac{1}{x}\right)}\ .$ (73)
Owing to the exponential growth of $\langle h^{2}\rangle$, the constraint on
our parameter space has only a logarithmic sensitivity to $h_{\rm crit}^{2}$
and the coefficient in Eq. (73). That said, the constraint is sensitive to the
rate of perturbative decays, and we should reintroduce this rate as we
calculate our result. Along these lines, we find
$\xi\lesssim\frac{x_{0}T}{12\sqrt{2}\pi}\sqrt{\frac{g^{2}}{\lambda_{\phi}}}\left[\log\mathcal{C}_{+}+\frac{\left\langle a\Gamma_{h}\right\rangle}{\sqrt{\lambda_{\phi}}\,\overline{\varphi}}\,x_{\rm dec}\right]\ ,$ (74)
where all of the logarithmic terms have been collected into the quantity
$\log\mathcal{C}_{+}$. The dependence on $x_{\rm dec}$ and $\langle
a\Gamma_{h}\rangle$ are important. To manage $x_{\rm dec}$, a natural
assumption is that $x_{\rm dec}\lesssim x_{\xi}$ in the region we are
evaluating, since otherwise the tachyonic effect is not strong enough to
destabilize the vacuum. We therefore have at most $x_{\rm
dec}\simeq(6\xi/\sqrt{g^{2}/\lambda_{\phi}})^{1/2}$. We can solve the more
complicated inequality that results for $\xi$ (and we shall do this
numerically below), but based on our observations of the tachyonic effect in
this section $x_{\rm dec}=\mathcal{O}(10)$ serves as an appropriate
simplifying approximation. As discussed in Sec. III.4, the perturbative decays
arising from the quartic interaction are generally more relevant, and these
scale parametrically as $\Gamma_{h}\propto\sqrt{g^{2}/\lambda_{\phi}}$. These
considerations lead to a constraint of roughly
$\xi\lesssim\mathcal{O}(0.1)g^{2}/\lambda_{\phi}$ to ensure metastability.
An interesting feature of the mixed-coupling scenario is that it opens the
possibility of metastability in the $\xi<0$ region, which is ruled out if
$g^{2}/\lambda_{\phi}=0$. A similar procedure to the above is followed to
produce a bound on the $\xi<0$ region, the details of which are included in
Appendix B. The constraint is expressed as
$-\xi-\xi_{\phi}\frac{g^{2}}{\lambda_{\phi}}\lesssim\frac{\log\mathcal{C}_{-}+\frac{(x_{\rm dec}^{2}-x_{0}^{2})}{2\sqrt{-6\xi}}\frac{g^{2}}{\lambda_{\phi}}+(x_{\rm dec}-x_{0})\frac{\langle a\Gamma_{h}\rangle}{\sqrt{\lambda_{\phi}}\,\overline{\varphi}}}{\frac{8}{T}\sqrt{\frac{6}{-\xi}}\,H_{1}(3,0)\log\big(\frac{x_{\rm dec}}{x_{0}}\big)}$ (75)
and we have again taken $x_{\rm dec}=\mathcal{O}(10)$ and packaged the
logarithmic terms in a quantity $\mathcal{C}_{-}$. The function $H_{1}$, with
$H_{1}(3,0)\approx 0.76$, is defined in Appendix B. Note that for $|\xi|\gg 1$
the term proportional to $g^{2}/\lambda_{\phi}$ on the left-hand side is
subdominant and the remaining terms balance for $|\xi|\sim
g^{2}/\lambda_{\phi}$. Therefore, neglecting numerical coefficients, the
constraint becomes $\xi\lesssim\mathcal{O}(g^{2}/\lambda_{\phi})$. The
negative curvature coupling thus yields a weaker bound, which is expected
based on our observations from Sec. III.3. Note that the inflation constraint
$\xi_{h}-\xi_{\phi}g^{2}/\lambda_{\phi}\gg 1/12$ in Eq. (11) yields a bound
that is roughly coincident.
A more complete picture of the constraints is achieved by numerical
calculations, and we display these in Fig. 10. For this figure, we have
calculated the time of vacuum decay $x_{\rm dec}$ over the
$\\{g^{2}/\lambda_{\phi},\xi\\}$ parameter space, making the simplifying
assumption that $\xi_{\phi}=0$. Note that the subset of couplings which
destabilize the vacuum during inflation have been excluded by the gray region.
Additionally, note that the curvature coupling is shown on a log scale over
the negative and positive axes, with the exception of the range
$-1\leq\xi\leq+1$ where the scale is linear. The numerics run until a time
$x_{\rm NL}$, so that the white regions effectively show where $x_{\rm
dec}/x_{\rm NL}\geq 1$ and thus where the vacuum remains metastable throughout
preheating.
Figure 10: The decay time $x_{\rm dec}$ of the electroweak vacuum computed
numerically and normalized by $x_{\rm NL}$. The gray region is excluded due to
violating Eq. (11). The white regions show “islands of (meta)stability” in
which the false vacuum survives preheating. Along the $g^{2}/\lambda_{\phi}$
direction, the pattern reflects the band structure in Sec. III.1, with the
least stable regions centered around $g^{2}/\lambda_{\phi}=2n^{2}$ (for
$n\in\mathbb{N}$). For larger $g^{2}/\lambda_{\phi}$, perturbative Higgs
decays enlarge the metastable regions until they become contiguous at
$g^{2}/\lambda_{\phi}\gtrsim 2\times 10^{3}$ [in agreement with Eq. (69)]. The
effect of the curvature interaction is to form an envelope over the metastable
regions, and the yellow curve shows our estimate [from Eq. (74)] for $\xi>0$.
Echoing the behavior we observed in Fig. 9, over the vast majority of
parameter space the metastable regions are not connected. Instead, we find a
large number of disjoint “islands of (meta)stability,” scattered along the
$g^{2}/\lambda_{\phi}$ direction. Indeed, _the constraint for metastability
cannot be fully expressed as a simple bound on the couplings_ —the fate of the
electroweak vacuum depends on $g^{2}/\lambda_{\phi}$ in a highly non-monotonic
way. Naturally, the most unstable regions appear where the characteristic
exponent $\mu_{\rm max}$ is maximized—i.e., around the broad resonances
$g^{2}/\lambda_{\phi}=2n^{2}$ (for $n\in\mathbb{N}$). Conversely, the most
stable regions appear around the narrow resonances, where $\mu_{\rm max}$ is
minimized [see Eq. (30)]. As $g^{2}/\lambda_{\phi}$ is taken to larger values,
the metastable regions grow and start to merge together, as shown by the
magnified inset panel. The integers written over the metastable regions
correspond to the couplings of the narrow resonances written in the form
$g^{2}/\lambda_{\phi}=2n^{2}+n$. The couplings eventually become large enough
for perturbative decays to stabilize the Higgs, regardless of the resonance
regime, forming a contiguous stable region for $g^{2}/\lambda_{\phi}\gtrsim
2\times 10^{3}$. This observation agrees with our analytical estimates in Eq.
(57) and Eq. (69) for the broad regime. In principle, the only limitation in
taking larger couplings is that we do not ruin the flatness of the inflaton
potential, which (as briefly discussed in Sec. II.1) roughly requires
$g^{2}/\lambda_{\phi}\ll 10^{5}$.
Beyond these observations, we notice that the interplay of the couplings is
also clarified in Fig. 10. In particular, the figure shows that the Higgs-
curvature interaction has the effect of cutting off the metastable regions and
imposing an envelope that scales with the couplings. This envelope is what we
have analytically estimated in Eq. (74) and Eq. (75). We have plotted the
numerical solution to the inequality in Eq. (74) using a yellow-dashed curve.
This curve is indeed consistent with the approximate bound that we estimated
$\xi\lesssim\mathcal{O}(0.1)g^{2}/\lambda_{\phi}$. We have not included the
analogous curve for the $\xi<0$ region since this closely coincides with the
constraint (the gray region) on stability of the vacuum during inflation. On
the whole, the interplay between the two interactions effectively extends the
range of metastability for the Higgs-curvature coupling, shielded by the
presence of a similarly large Higgs-inflaton coupling.
## V Conclusions and Discussion
Some amount of degeneracy often exists in inflationary models in that
observational constraints may be satisfied over a degenerate subspace of the
model parameters. Undoubtedly, one reason that studies of post-inflationary
dynamics are essential is that these dynamics may break such degeneracies by
leading to qualitatively different preheating histories, subject to a
qualitatively different set of possible constraints. For example, as we have
encountered in this paper, the stability of the Higgs during inflation can be
ensured either by a direct coupling to the inflaton or by a non-minimal
coupling to gravity. The dynamical roles played by these interactions are
similar during inflation but differ dramatically during preheating, in which
the non-trivial interplay between these interactions implies a rich structure
of metastable regions and associated constraints on the Higgs couplings.
Indeed, the broader motivation for this work is to explore dynamics which may
reveal independent probes for models of early-universe cosmology.
In this paper, we have examined the massless preheating dynamics that arises
in models composed of scale-invariant interactions, in which the inflaton
potential is effectively quartic after inflation. Most notably, we have
focused on the implications these models have for electroweak vacuum
metastability. We have provided constraints on the couplings for which the
Higgs remains stabilized during and after inflation, and among these couplings
we have included the possibility that the Higgs and the inflaton have non-
minimal couplings to gravity. While our study is motivated by addressing the
metastability of the vacuum, our results are readily generalized to other
approximately scale-invariant models.
Several comments are in order. First, while we have considered non-minimal
gravitational couplings for both the Higgs and inflaton fields, the Higgs
coupling $\xi_{h}$ has received the bulk of our attention. And while the
effects of $\xi_{\phi}$ have been included in our analysis through the
effective curvature coupling $\xi$ [in Eq. (27)], our treatment can be
extended in several ways. For example, we have not taken into account the
higher-order effects on the field fluctuations that appear as a result of
$\xi_{\phi}\neq 0$ corrections to the background-field solutions. For this
reason, our results are applicable only for $|\xi_{\phi}|\lesssim 1$, which is
the scope assumed for this paper. These higher-order effects are challenging
to incorporate analytically in massless preheating. Notably, even the zeroth-
order background solution [the elliptic function in Eq. (16)] is considerably
more complicated than the sinusoidal form found in massive preheating. It
would be worthwhile to study the effects of $\xi_{\phi}\approx-\mathcal{O}(1)$
without resorting to such approximations for the background equations of
motion.
Secondly, our study is performed under the assumption that there is no new
physics beyond the SM which significantly alters the renormalization-group
evolution of the Higgs quartic coupling. This assumption could be reasonable,
as there are no hints of new physics in collider or WIMP dark-matter searches.
Either way, the cosmological history following the preheating stage—i.e., the
reheating epoch and more precisely the inflaton decay channels—should be
consistent with this assumption. The reheating epoch could then be realized in
a number of ways. For instance, the inflaton could couple to the right-handed
neutrino, which is responsible for neutrino masses and leptogenesis Fukugita
and Yanagida (1986). In this case, reheating could be completed by the decay
of the inflaton to right-handed neutrinos and their subsequent decay to SM
particles. Alternatively, a stable inflaton opens the possibility that it
serves as a candidate for dark matter Almeida _et al._
(2019); Lebedev and Yoon (2021) or dark radiation Babichev _et al._ (2020),
provided that preheating converts most of the inflaton energy density into SM
degrees of freedom. Indeed, the details and variations on these possibilities
ultimately depend on the sign and size of the inflaton mass-squared term.
Along another direction, some comments are in order regarding the stage of
preheating that occurs after $x_{\rm NL}$ in our study. After entering the
non-linear stage of preheating, the energy density held in fluctuations is
redistributed among the modes by re-scattering processes. In general, the
Higgs can destabilize during this time and this possibility has been addressed
for massive preheating in Ref. Ema _et al._ (2016), based on the results of
Refs. Harigaya and Mukaida (2014); Mukaida and Yamada (2016). In massive
preheating, this destabilization arises because the effective Higgs mass
induced by the inflaton-Higgs coupling redshifts as $g^{2}\phi^{2}\propto
1/a^{3}$, while that induced by the Higgs self-coupling redshifts as
$3\lambda_{h}\langle h^{2}\rangle\propto 1/a^{2}$. In other words, the
tachyonic contribution continues to grow relative to the stabilizing mass term
and can eventually trigger decay of the vacuum, even though it was stabilized
at the end of the linear preheating stage. The details of the thermal Higgs
mass after preheating and the evolution of the background temperature thus
become important to ensure metastability as thermalization begins. That said,
in massless preheating these two mass contributions redshift at the same rate
$g^{2}\phi^{2}\sim 3\lambda_{h}\langle h^{2}\rangle\sim 1/a^{2}$, so these
same concerns over destabilization are not as relevant. Nevertheless, the
stages of evolution leading to thermalization are of course interesting in the
context of massless preheating for a host of other reasons. Furthermore, other
considerations become important during the reheating epoch, such as the
thermal corrections to the vacuum tunneling probability studied in Ref. Delle
Rose _et al._ (2016).
As we recall from Sec. IV.1, the time $x_{\rm NL}$ at which the non-linear
stage of preheating begins is central to our results. We concluded, in our
minimal model realization, that the inflaton fluctuations, growing through the
quartic self-resonance, are responsible for the onset of the non-linear stage.
A natural extension, however, is to consider the presence of a
spectator field that does not couple to the Higgs. If such a spectator field
is likewise subject to particle production, then $x_{\rm NL}$ could be
triggered by the growth of the spectator fluctuations instead; this is not
difficult to arrange, considering that the characteristic exponent for the
inflaton self-resonance is rather small at $\mu_{\rm max}\approx 0.036$. A
reduction in $x_{\rm NL}$ would imply that our metastable regions, such as
those that appear in Fig. 10, grow in extent. This enhancement could be
substantial. For example, taking a quartic coupling between the spectator and
the inflaton, with the maximal exponent $\mu_{\rm max}\approx 0.24$, we find
that $x_{\rm NL}\approx 63$, roughly a factor of six smaller than for the
inflaton self-resonance.
Moreover, the possibility of spectator fields opens other avenues of
exploration. For instance, such fields could generate important cosmological
observables such as non-Gaussianities in the density perturbations Chambers
and Rajantie (2008). Additionally, spectator fields could naturally serve as
dark-matter candidates. The corresponding relic abundance generated from
preheating would depend non-trivially on the couplings and spins of the
spectator fields. It is worth noting that such dark-matter production from the
thermal bath with a generic equation of state—including the radiation-like
equation of state relevant to our study—has been a topic of recent interest
Garcia _et al._ (2020, 2021).
There are also a number of possible extensions that would be interesting to
explore in the context of multi-field inflation. For instance, in models which
allow for a sizeable angular inflaton velocity at the end of inflation Cline
_et al._ (2020); Kawasaki and Ueda (2021), the inflaton-dependent modulation
of the effective Higgs masses could be suppressed; as a result, the rate of
particle production could be suppressed as well. On the other hand, the onset
of the non-linear stage of preheating would also occur at a later time $x_{\rm
NL}$. The end result of the competition between these two effects, and the
more complicated preheating dynamics altogether, require a dedicated study
Kost _et al._ . An alternative multi-field extension could involve hybrid
inflation Linde (1994). In this case, stability of the vacuum during inflation
is ensured by the inflaton-Higgs coupling as usual, but this coupling will not
oscillate much during the preheating phase since most of the energy is
transferred to the so-called waterfall fields Garcia-Bellido and Linde (1998).
These features imply that the electroweak vacuum would be relatively stable
both during inflation and preheating.
It would also be interesting to consider the presence of additional terms that
break the scale invariance of the theory, as these can have rich dynamical
implications. While the breaking of the scale invariance due to the
non-minimal coupling terms is relevant only around the end of inflation, the
effects of lower-dimensional terms become increasingly important at later times. For
instance, a non-vanishing inflaton mass $m_{\phi}$ might alter the parametric
resonance once the inflaton oscillations reach a sufficiently small amplitude
Greene _et al._ (1997). In particular, for couplings
$g^{2}/\lambda_{\phi}\gtrsim\lambda_{\phi}/m_{\phi}^{2}$ the resonance becomes
stochastic once $\overline{\phi}\lesssim m_{\phi}/\\!\sqrt{\lambda_{\phi}}$.
The parametric resonance then increasingly behaves as in massive preheating
(see Appendix A), with the stochastic resonance giving way to the narrow
resonance once $\bar{\phi}\lesssim m_{\phi}/g$. Not only does this evolution
change the spectrum of produced particles and the production rates, but the
introduction of $m_{\phi}\neq 0$ also allows the resonance to terminate at
some time before $x_{\rm end}$. A small inflaton mass can thus modify our
constraints for electroweak metastability in a non-trivial way.
Yet another possible extension of our work concerns the gravity formulation.
In this paper, given the non-minimal couplings of our scalar fields to
gravity, we have observed some of the physical distinctions that may appear
between the metric and Palatini formulations. Still, a more general extension
could be explored along the lines of the Einstein-Cartan formulation in the
presence of Holst and Nieh-Yan terms Shaposhnikov _et al._ (2021a, 2020, b).
The Einstein-Cartan theory generalizes our treatment, reproducing the metric
and Palatini formulations in certain limits and allowing for a continuous
interpolation between them.
All in all, if the metastability of the electroweak vacuum can be ensured in
models from which massless preheating emerges, then renewed interest is
warranted in observables that are sensitive to the details of preheating. A well-suited
example consists of gravitational waves (GWs) produced during this epoch
Figueroa and Torrenti (2017). Massless preheating is distinctive in this sense
because the amplitude of the produced gravitational radiation does not
dissipate and the frequency of the waves does not redshift. While these
properties may present challenges for the observational prospects of GWs from
such scenarios, a number of methods show promise Aggarwal _et al._ (2020).
For example, radio telescopes may be used to examine the distortion in the CMB
from byproducts of high-frequency GWs interacting with background magnetic
fields Domcke and Garcia-Cely (2021). Indeed, such observations could even be
used to probe broader early-universe properties, such as the energy-scale for
inflation Cai _et al._ (2021).
###### Acknowledgements.
TT thanks Kazunori Nakayama for useful discussions regarding Ref. Ema _et
al._ (2016). The research activities of CSS, JK, and TT were supported in part
by the IBS under project code IBS-R018-D1 and those of JK were supported in
part by the Science and Technology Facilities Council (STFC) under the
Consolidated Grant ST/T00102X/1. This work was completed in part at the Aspen
Center for Physics, which is supported by the National Science Foundation
under Grant PHY-1607611. The opinions and conclusions expressed herein are
those of the authors, and do not represent any funding agencies.
## Appendix A Comparison to Massive Preheating
In this appendix, we provide a brief overview of the “massive preheating”
dynamics, in which our Lagrangian takes the same form as Eq. (II), but the
potential is instead given in the Jordan frame by
$V_{\rm J}(\phi,h)=\frac{1}{2}m_{\phi}^{2}\phi^{2}+\frac{1}{2}g^{2}\phi^{2}h^{2}+\frac{1}{4}\lambda_{h}h^{4}\ ,$ (76)
where $m_{\phi}$ is the inflaton mass. Note that previous studies in the
literature have examined electroweak vacuum metastability for this model in
the $\xi_{\phi}=0$ limit Ema _et al._ (2016, 2017), and we refer the reader to
those papers for details beyond the scope of our summary.
All other features of the model beyond the mass term are assumed to remain the
same. After the end of inflation, when $|\xi_{\phi}|\phi^{2}\ll 1$ the
inflaton potential is approximately quadratic and the field evolves according
to
$\ddot{\phi}+3H\dot{\phi}+m_{\phi}^{2}\phi = 0\,.$ (77)
The inflaton amplitude then redshifts as $\overline{\phi}\propto 1/a^{3/2}$
and [in line with Eq. (18)] the cosmological equation of state is matter-like.
The inflaton field evolution is sinusoidal:
$\varphi(x) = \overline{\varphi}\cos(x-x_{0})\,,$ (78)
where in analogy to Sec. II.2 we have defined $\varphi=a^{3/2}\phi$ and the
dimensionless time $x\equiv m_{\phi}t$. The constant $x_{0}$ is fixed by the
conditions at the end of inflation and we shall assume $\phi_{\rm
end}=\mathcal{O}(1)$. The scale factor evolves as
$a(x) = \frac{1}{2}\big(\sqrt{3}\,\overline{\varphi}\,x\big)^{2/3}$ (79)
and thus for consistency $x_{\rm end}=2\sqrt{2/3}\phi_{\rm end}^{-3/2}$.
To compute the growth of fluctuations we write the equations of motion for the
Higgs modes in the Einstein frame. By defining $\mathcal{H}_{k}\equiv
a^{3/2}\Omega^{-1/2}|_{h=0}h_{k}$, we can remove the damping terms and up to
$\mathcal{O}(a^{-3})$ we have
$\mathcal{H}_{k}^{\prime\prime}+\omega_{\mathcal{H}_{k}}^{2}\mathcal{H}_{k} = 0\,,$ (80)
in which the effective masses are
$\omega_{\mathcal{H}_{k}}^{2} = \frac{\kappa^{2}}{a^{2}}+\Big(\frac{g^{2}}{m_{\phi}^{2}}+\xi_{h}\Big)\frac{\varphi^{2}}{a^{3}}+\xi\Big(\frac{\varphi^{2}}{a^{3}}-\frac{\varphi^{\prime 2}}{a^{3}}\Big) = \frac{\kappa^{2}}{a^{2}}+\frac{\overline{\varphi}{}^{2}}{a^{3}}\Big[\xi\sin^{2}(x-x_{0})+\Big(\frac{g^{2}}{m_{\phi}^{2}}+\xi+\xi_{h}\Big)\cos^{2}(x-x_{0})\Big]\,.$ (81)
We have defined the rescaled momenta $\kappa\equiv k/m_{\phi}$, and in the second
line we have employed Eq. (78). Additionally, in analogy to Eq. (27) we have
defined the effective curvature coupling
$\xi\equiv\xi_{h}+\xi_{\phi}-6\theta\xi_{h}\xi_{\phi}-\frac{3}{8}\ .$ (82)
The rate of cosmological expansion is much smaller than that of the inflaton
oscillations, so that the mode equations in Eq. (80) approximately take the
Mathieu form
$\mathcal{H}_{k}^{\prime\prime}+\{A_{k}-2q\cos[2(x-x_{0})]\}\mathcal{H}_{k}=0$,
with the slowly varying parameters given by
$A_{k} \equiv \frac{\kappa^{2}}{a^{2}}+\frac{\overline{\phi}{}^{2}}{2}\left(\frac{g^{2}}{m_{\phi}^{2}}+\xi_{h}\right)\,,\qquad q \equiv -\frac{\overline{\phi}{}^{2}}{4}\left(\frac{g^{2}}{m_{\phi}^{2}}+2\xi+\xi_{h}\right)\,.$ (83)
An important observation is that the contributions to the effective Higgs mass
from the quartic interaction $g^{2}\phi^{2}\propto 1/a^{3}$ and non-minimal
coupling $\xi_{h}R\propto 1/a^{3}$ _redshift identically_ in the massive
preheating scenario. Because much of the complexity we found in our study of
massless preheating arose from the mismatch between these two terms, the
massive preheating scenario is simpler in this respect. For example, whether
particle production for a given mode occurs through the tachyonic or
parametric instability is determined by the sign of $A_{k}-2|q|$. Namely,
$A_{k}-2|q|>0$ yields the parametric instability and $A_{k}-2|q|<0$ yields the
tachyonic instability. We can write these explicitly as
$A_{k}-2|q| = \frac{\kappa^{2}}{2}+\overline{\phi}{}^{2}\begin{cases}-\xi&\text{for }q<0\\ \frac{g^{2}}{m_{\phi}^{2}}+\xi+\xi_{h}&\text{for }q>0\end{cases}\,.$ (84)
Therefore, unlike in the massless preheating scenario, the type of particle
production that drives the zero-mode is a fixed time-independent property
based on the couplings. Notably, in the $g,\xi_{\phi}\rightarrow 0$ limit we
recover the well-known result that $3/16<\xi_{h}<3/8$ is necessary for the
complete absence of tachyonic production Ema _et al._ (2016).
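Since the instability type is a fixed property of the couplings, it can be read off without solving any mode equations. The snippet below is a minimal sketch (ours) of Eq. (84) under the conventions of Eqs. (82) and (83), with illustrative numerical inputs; in the $g,\xi_{\phi}\to 0$ limit it reproduces the $3/16<\xi_{h}<3/8$ window quoted above:

```python
def instability_type(kappa, phi_bar, g_over_m, xi, xi_h):
    """Classify the driving instability of a mode via Eq. (84):
    A_k - 2|q| > 0 gives the parametric instability, < 0 the
    tachyonic one. Sketch only; all inputs are dimensionless."""
    q = -(phi_bar**2 / 4.0) * (g_over_m**2 + 2.0 * xi + xi_h)
    bracket = -xi if q < 0 else g_over_m**2 + xi + xi_h
    gap = kappa**2 / 2.0 + phi_bar**2 * bracket
    return "parametric" if gap > 0 else "tachyonic"

# Zero mode in the g, xi_phi -> 0 limit, where Eq. (82) reduces to
# xi = xi_h - 3/8: tachyonic production is absent only in the window
# 3/16 < xi_h < 3/8.
for xi_h in (0.10, 0.25, 0.50):
    xi = xi_h - 3.0 / 8.0
    print(xi_h, instability_type(0.0, 1.0, 0.0, xi, xi_h))
```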
Another crucial distinction between the massless and massive preheating
dynamics is found by re-examining the perturbative Higgs decays. In
particular, using the rest-frame decay rate $\Gamma_{h}$ in Eq. (53), the
decays introduce a dissipative exponent $-\int dx\,\Gamma_{h}/\gamma$ that
suppresses the growth of the Higgs number density. In terms of the Mathieu
parameters the effective Higgs mass is given by
$m_{h}^{2}=A_{0}-2q\cos[2(x-x_{0})]$ so that
$-\int dx\,\Gamma_{h} \propto \int dx\,\sqrt{A_{0}-2q\cos[2(x-x_{0})]}\,,$ (85)
where the integration region is over the preheating times $x\geq x_{\rm end}$
for which $m_{h}^{2}\geq 0$ and we have taken $\gamma=1$. Because this
integral has an identical form to the accumulated phase of the zero-mode
$\Theta_{0}$ [refer to the general quantity defined in Eq. (33)], we can
employ previous calculations for $\Theta_{k}$ available in the literature
Dufaux _et al._ (2006). The decay exponent amounts to a sum over each non-
tachyonic oscillation period $-\int dx\Gamma_{h}\propto\int dx\,\Theta_{0}$,
and we can extract the time-dependence by noting that the integrand scales
approximately as $\Theta_{0}\propto\smash{\sqrt{|q|}}\propto 1/x$ so that
$-\int dx\,\Gamma_{h} \propto \sqrt{2|q|_{\rm end}}\,\log\left(\frac{x}{x_{\rm end}}\right)\,.$ (86)
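As a quick consistency check of this scaling, the following short derivation (ours, using only the background relations quoted above) makes the logarithm explicit:

```latex
% Since a \propto x^{2/3} with \bar{\varphi} constant, we have
% |q| \propto \bar{\phi}^{\,2} \propto a^{-3} \propto x^{-2}, so that
% \Theta_0 \propto \sqrt{|q|} \propto 1/x and
\begin{align*}
-\int \mathrm{d}x\,\Gamma_h
  \;\propto\; \int_{x_{\rm end}}^{x}\mathrm{d}x'\,\sqrt{2|q(x')|}
  \;=\; \sqrt{2|q|_{\rm end}}\; x_{\rm end}
        \int_{x_{\rm end}}^{x}\frac{\mathrm{d}x'}{x'}
  \;=\; \sqrt{2|q|_{\rm end}}\; x_{\rm end}
        \log\!\left(\frac{x}{x_{\rm end}}\right),
\end{align*}
% with the constant x_{\rm end} absorbed into the proportionality
% sign of Eq. (86).
```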
Figure 11: A comparison of the Higgs variance $\langle\mathcal{H}^{2}\rangle$
between massless and massive preheating, i.e., between the quartic and
quadratic inflaton potentials, respectively, taking $m_{\phi}=10^{-5}$ and
$\lambda_{\phi}=10^{-10}$. The backreaction effect is not included in this
figure. The top and bottom panels show a difference in sign for the curvature
coupling $\xi_{h}$. We have also included curves that show the result if
perturbative decays are turned off. Notably, this effect is minimal in massive
preheating but significant in the massless case. The contrast between the
preheating scenarios is also evident in the short-lived growth of fluctuations
for the massive case which ceases for $|q|\ll\frac{1}{4}$. Note that although
$x$ and $\mathcal{H}$ have the same qualitative meaning in both preheating
scenarios their explicit definitions differ due to the difference in
background cosmology.
The logarithmic time-dependence of the decay exponent in Eq. (86) demonstrates
a critical distinction between the massive and massless preheating scenarios.
For the latter, the time-dependence of the decay exponent was found [in Eq.
(55) and Eq. (56)] to be linear, making it possible for the perturbative
decays to efficiently counter the growth of Higgs fluctuations, even shutting
them down for sufficiently large quartic coupling. By contrast, the
logarithmic time-dependence in massive preheating shows that _the capacity for
perturbative decays to stabilize the electroweak vacuum is relatively
negligible_. This distinction makes the features of the massless preheating
dynamics somewhat unique.
In order to further illustrate this comparison, we have plotted the evolution
of the Higgs variance $\langle\mathcal{H}^{2}\rangle$ in Fig. 11 for both the
quadratic and quartic scenarios, using the quartic coupling
$g^{2}/m_{\phi}^{2}=g^{2}/\lambda_{\phi}=800$ and curvature couplings
$\xi_{h}=\pm 200$. The inflaton mass $m_{\phi}=10^{-5}$ and coupling
$\lambda_{\phi}=10^{-10}$ are taken in the respective cases. In contrast to
the unimpeded growth that arises from the quartic potential, we observe the
end of particle production that occurs in the quadratic case once $|q|\lesssim
1/4$. And comparing the curves, which show the results with perturbative
decays both included and not included, we see manifestly the relative
importance of perturbative decays in these scenarios. Indeed, although the
couplings are the same for each of the comparisons in Fig. 11, the results for
vacuum stability can greatly differ. For instance, upon including backreaction
one finds that, for the couplings in the bottom panel, the false vacuum
survives in the quadratic case but it decays in the quartic case.
Notwithstanding, owing to the complex pattern of metastability regions for the
quartic case, we can change this outcome with only a small change in the
coupling.
An analytical study of the preheating dynamics and vacuum decay times is found
in the literature for the unmixed case Ema _et al._ (2016), while the mixed
case is studied mostly numerically in Ref. Ema _et al._ (2017). For the
purpose of our comparison we shall focus on numerical results for $x_{\rm
dec}$. Along these lines, we provide an analog to Fig. 10 for the massive
preheating scenario in Fig. 12. Note that in order to make a clear comparison
between this result and the massless preheating result in Fig. 10 we have
constructed Fig. 12 using the same axes and axis scales. The gray exclusion
regions correspond either to the Higgs destabilizing during inflation
$\xi_{h}\lesssim -g^{2}/(2m_{\phi}^{2})$ or the flatness of the inflaton
potential being ruined by quantum corrections
$\xi_{h}\gtrsim\frac{1}{2}(10^{-6}-g^{2})/m_{\phi}^{2}$. The line showing
$q=0$ is significant since along it the effective masses of the Higgs modes do
not oscillate in time and the resonant particle production vanishes.
Figure 12: The decay time $x_{\rm dec}$ of the electroweak vacuum in the
massive preheating scenario (for comparison with Fig. 10). The results are
given relative to a fiducial time $x_{*}\approx 2\times 10^{3}$ for which the
resonance is deep in the narrow regime over the parameter space shown. Our
most immediate observation is that the metastable region is connected, which
# Adaptive Nonparametric Image Parsing
Tam V. Nguyen, Canyi Lu, Jose Sepulveda, and Shuicheng Yan
Manuscript received XX XX, XXXX; revised XX XX, XXXX, and XX XX, XXXX; accepted XX XX, XXXX. This work was supported by the Singapore Ministry of Education under Grants MOE2012-TIF-2-G-016 and MOE2014-TIF-1-G-007. This paper was recommended by Associate Editor C. Shan.
T. Nguyen and J. Sepulveda are with the Department for Technology, Innovation and Enterprise, Singapore Polytechnic, Singapore 139651 (e-mail: {nguyen_van_tam<EMAIL_ADDRESS>C. Lu and S. Yan are with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore 119077 (e-mail: {canyilu, <EMAIL_ADDRESS>
###### Abstract
In this paper, we present an adaptive nonparametric solution to the image
parsing task, namely annotating each image pixel with its corresponding
category label. For a given test image, first, a locality-aware retrieval set
is extracted from the training data based on super-pixel matching
similarities, which are augmented with feature extraction for better
differentiation of local super-pixels. Then, the category of each super-pixel
is initialized by the majority vote of the $k$-nearest-neighbor super-pixels
in the retrieval set. Instead of fixing $k$ as in traditional non-parametric
approaches, here we propose a novel adaptive nonparametric approach which
determines the sample-specific $k$ for each test image. In particular, $k$ is
adaptively set to be the number of the fewest nearest super-pixels which the
images in the retrieval set can use to get the best category prediction.
Finally, the initial super-pixel labels are further refined by contextual
smoothing. Extensive experiments on challenging datasets demonstrate the
superiority of the new solution over other state-of-the-art nonparametric
solutions.
###### Index Terms:
image parsing, scene understanding, adaptive nonparametric method.
## I Introduction
Image parsing, also called scene understanding or scene labeling, is a
fundamental task in computer vision literature [1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12]. However, image parsing is very challenging since it implicitly
integrates the tasks of object detection, segmentation, and multi-label
recognition into one single process. Most current solutions to this problem
follow the two-step pipeline. First, the category label of each pixel is
initially assigned by using a certain classification algorithm. Then,
contextual smoothing is applied to enforce the contextual constraints among
the neighboring pixels. The algorithms in the classification step can be
roughly divided into two categories, namely parametric methods and
nonparametric methods.
Figure 1: The flowchart of our proposed nonparametric image parsing. Given a
test image, we segment the image into super-pixels. Then the locality-aware
retrieval set is extracted by using super-pixel matching, and the initial
category label of each super-pixel is assigned by adaptive nonparametric
super-pixel classification. The initial labels, in combination with contextual
smoothing, give a dense labeling of the test image. The red rectangle
highlights the new contributions of this work, and removing the keywords of
locality-aware and adaptive in red then leads to the traditional nonparametric
image parsing pipeline.
Parametric methods. Fulkerson et al. [13] constructed an SVM classifier on the
bag-of-words histogram of local features around each super-pixel. Tighe et al.
[14] combined super-pixel level features with per-exemplar sliding window
detectors to improve the performance. Socher et al. [15] proposed a method to
aggregate super-pixels in a greedy fashion using a trained scoring function.
The originality of this approach is that the feature vector of the combination
of two adjacent super-pixels is computed from the feature vectors of the
individual super-pixels through a trainable function. Farabet et al. [16]
later proposed to use a multiscale convolutional network trained from raw
pixels to extract dense feature vectors that encode regions of multiple sizes
centered at each pixel.
Nonparametric methods. Different from parametric methods, nonparametric or data-driven methods rely on $k$-nearest-neighbor classifiers [4, 5]. Liu
et al. [4] proposed a nonparametric image parsing method based on estimating
SIFT Flow, a dense deformation field between images. Given a test and a
training image, the annotated category labels of the training pixels are
transferred to the test ones via pixel correspondences. However, inference via
pixel-wise SIFT Flow is currently very complex and computationally expensive.
Therefore, Tighe et al. [5] further transferred labels at the level of super-
pixels, or coherent image regions produced by a bottom-up segmentation method.
In this scheme, given a test image, the system searches for the top similar
training images based on global features. The super-pixels of the most similar
images are obtained as a retrieval set. Then the label of each super-pixel in
the test image is assigned based on the corresponding $k$ most similar super-
pixels in the retrieval set. Eigen et al. [17] further improved [5] by
learning per-descriptor weights that minimize classification error. In order
to improve the retrieval set, Singh et al. [18] used adaptive feature
relevance and semantic context. They adopted a locally adaptive distance
metric which is learned at query time to compute the relevance of individual
feature channels. Using the initial labelling as a contextual cue for presence
or absence of objects in the scene, they proposed a semantic context
descriptor which helped refine the quality of the retrieval set. In a
different work, Yang et al. [19] looked into the long-tailed nature of the
label distribution. They expanded the retrieval set by rare class exemplars
and thus achieved more balanced super-pixel classification results. Meanwhile,
Zhang et al. [20] proposed a method which exploits partial similarity between
images. Namely, instead of retrieving global similar images from the training
database, they retrieved some partially similar images so that for each region
in the test image, a similar region exists in one of the retrieved training
images.
Due to the limited discriminating power of classification algorithms, the
output initial labels of pixels may be noisy. To further enhance the label
accuracy, contextual smoothing is generally used to exploit global contexts
among the pixels. Rabinovich et al. [9] incorporated co-occurrence statistics
of category labels of super-pixels into the fully connected Conditional Random
Field (CRF). Galleguillos et al. [10] proposed to exploit the information of
relative location such as above, beside, or enclosed between super-pixel
categories. Meanwhile, Myeong et al. [6] introduced a context link view of
contextual knowledge, where the relationship between a pair of annotated
super-pixels is represented as a context link on a similarity graph of
regions, and link analysis techniques are used to estimate the pairwise
context scores of all pairs of unlabeled regions in the input image. Later,
[11] proposed a method to transfer high-order semantic relations of objects
from annotated images to unlabeled images. Zhu et al. [21] proposed the
hierarchical image model composed of rectangular regions with parent-child
dependencies. This model captures large-distance dependencies and is solved
efficiently using dynamic programming. However, it supports neither multiple
hierarchies, nor dependencies between variables at the same level. In another
work, Tu et al. [22] introduced a unified framework to pool the information from segmentation, detection, and recognition for image parsing. However, much effort must be spent designing such complex models, and due to this complexity the proposed model might not scale well across different datasets.
In this work, our focus is placed on nonparametric solutions to the image
parsing problem. However, there are several shortcomings in existing
nonparametric methods. First, it is often quite difficult to get globally
similar images to form the retrieval set. Also by only considering global
features, some important local components or objects may be ignored. Second,
$k$ is fixed empirically in advance in such a nonparametric image parsing
scheme. Tighe et al. [5] reported the best results by varying $k$ on the test
set. However, this strategy is impractical since the ground-truth labels are
not provided in the testing phase. Therefore, the main issues in the context
of the nonparametric image parsing are 1) how to get a good retrieval set, and
2) how to choose a good $k$ for initial label transfer. In this work, we aim
to improve both aspects, and the main contributions of this work are two-fold.
1. 1.
Unlike the traditional retrieval set which consists of globally similar
images, we propose the locality-aware retrieval set. The locality-aware
retrieval set is extracted from the training data based on super-pixel
matching similarities, which are augmented with feature extraction for better
differentiation of local super-pixels.
2. 2.
Instead of fixing $k$ as in traditional nonparametric methods, we propose an
adaptive method to set the sample-specific $k$ as the number of the fewest
nearest neighbors which similar training super-pixels can use to get their
best category label predictions.
## II Adaptive Nonparametric Image Parsing
### II-A Overview
Generally, for nonparametric solutions to the image parsing task, the goal is
to label the test image at the pixel level based on the content of the
retrieval set, but assigning labels on a per-pixel basis as in [4, 16] would
be too inefficient. In this work, we choose to assign labels to super-pixels
produced by bottom-up segmentation as in [5]. This not only reduces the
complexity of the problem, but also gives better spatial support for
aggregating features belonging to a single object than, say, fixed-size square
patches centered at each pixel in an image.
The training images are first over-segmented into super-pixels by using the
fast graph-based segmentation algorithm of [23] and their appearances are
described using 20 different features similar to those of [5]. The complete
list of super-pixel’s features is summarized in Table I. Each training super-
pixel is assigned a category label if 50% or more of the super-pixel overlaps
with a ground truth segment mask of that label. For each super-pixel, we
perform feature extraction and then reduce the dimension of the extracted
feature.
TABLE I: The list of all super-pixel features.

Type | Dim | Type | Dim
---|---|---|---
Centered mask | 64 | SIFT histogram top | 100
Bounding box | 2 | SIFT histogram right | 100
Super-pixel area | 1 | SIFT histogram left | 100
Absolute mask | 64 | Mean color | 3
Top height | 1 | Color standard deviation | 3
Texton histogram | 100 | Color histogram | 33
Dilated texton histogram | 100 | Dilated color histogram | 33
SIFT histogram | 100 | Color thumbnail | 192
Dilated SIFT histogram | 100 | Masked color thumbnail | 192
SIFT histogram bottom | 100 | GIST | 320
For the test image, as illustrated in Figure 1, over-segmentation and super-
pixel feature extraction are also conducted. Next, we perform the super-pixel
matching process to obtain the locality-aware retrieval set. The adaptive
nonparametric super-pixel classification is proposed to determine the initial
label of each super-pixel. Finally, the graphical model inference is performed
to preserve the semantic consistency between adjacent pixels. More details of
the proposed framework, namely the locality-aware retrieval set, adaptive
nonparametric super-pixel classification, and contextual smoothing, are
elaborated as follows.
Figure 2: The process to extract the retrieval set by super-pixel matching.
The test image is first oversegmented into super-pixels. Then, we compute the
similarity between the test image and each training image as described in
Algorithm 1. (Best viewed at 400% magnification.)
### II-B Locality-aware Retrieval Set
For nonparametric image parsing, one important step of parsing a test image is
to find a retrieval set of training images that will serve as the reference of
candidate super-pixel level annotations. This is done not only for
computational efficiency, but also to provide scene-level context for the
subsequent processing steps. A good retrieval set should contain images of a
similar scene type as that of the test image, along with similar objects and
spatial layouts. Unlike [5] where global features are used to obtain the
retrieval set, we utilize the super-pixel matching as illustrated in Figure 2.
The motivation is that sometimes it may be difficult to get globally similar
images, especially when the training set is not big enough, yet locally
similar ones are easier to obtain; also sometimes if only global features are
considered for retrieval set selection, some important local components or
objects may be ignored. In this work, the retrieval set is selected based on
local similarity measured over super-pixels. To enhance the discriminating
power of super-pixels, we utilize Linear Discriminant Analysis (LDA) [24] for
feature reduction to a lower feature dimension. Then we use the augmented
super-pixel similarity instead of global similarity to extract the retrieval
set.
Denote $x\in\mathbb{R}^{n_{x}\times 1}$ as the original feature vector of the
super-pixel, where $n_{x}$ is the dimension of the feature vector. The
corresponding feature vector $\hat{x}$ after the feature reduction is computed
as,
$\hat{x}=\bm{W}x,$ (1)
where $\bm{W}$ is the transformation matrix. In particular, LDA looks for the
directions that are most effective for discrimination by minimizing the ratio
between the intra-category ($\bm{S_{w}}$) and inter-category ($\bm{S_{b}}$)
scatters:
$\bm{W}^{*}=\arg\min_{\bm{W}}\frac{|\bm{W}^{T}\bm{S_{w}}\bm{W}|}{|\bm{W}^{T}\bm{S_{b}}\bm{W}|},$
(2)
$\bm{S_{w}}=\sum_{i=1}^{N}(\bm{x_{i}-\bar{x}^{c_{i}}})(\bm{x_{i}-\bar{x}^{c_{i}}})^{T},$
(3)
$\bm{S_{b}}=\sum_{c=1}^{N_{c}}n_{c}(\bm{\bar{x}^{c}-\bar{x}})(\bm{\bar{x}^{c}-\bar{x}})^{T},$
(4)
where $N$ is the number of super-pixels in all training images, $N_{c}$ is the
number of categories, $n_{c}$ is the number of super-pixels for the $c$-th
category, $\bm{x}_{i}$, $\forall i\in\\{1,\cdots,N\\}$, is the feature vector
of one training super-pixel, $c_{i}$ is the category label of the $i$-th
super-pixel in the training images, $\bm{\bar{x}}$ is the mean of feature
vector of training super-pixels, and $\bm{\bar{x}^{c}}$ is the mean of the
$c$-th category. Note that the category label of each super-pixel is obtained
from the ground-truth object segment with the largest overlapping with the
super-pixel. As shown in [24], the projection matrix $\bm{W}^{*}$ is composed
of the eigenvectors of $\bm{S_{w}^{-1}}\bm{S_{b}}$. Note that there are at most $N_{c}-1$ eigenvectors with non-zero corresponding eigenvalues, since only $N_{c}$ points are used to compute $\bm{S_{b}}$. In other words, the dimensionality of $\bm{W}$ is $(N_{c}-1)\times n_{x}$. Therefore, LDA naturally reduces the feature dimension to $N_{c}-1$ in the image parsing task.
Since the category number is much smaller than the feature number, the
benefits of the reduced dimension include the shrinkage of memory storage and
the removal of those less informative features for consequent super-pixel
matching. Obviously the reduction of feature dimension is also beneficial to
the nearest super-pixel search in the super-pixel classification stage.
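As a concrete illustration of this reduction step, the following Python sketch (our own, not the authors' code; the feature dimension and counts are placeholders, and scikit-learn's LDA is assumed as the solver) projects super-pixel features onto the $N_{c}-1$ discriminant directions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Sketch of the LDA feature-reduction step (not the authors' code):
# project n_x-dimensional super-pixel features onto at most N_c - 1
# discriminant directions. X and y are random placeholders for the
# real super-pixel features and category labels.
rng = np.random.default_rng(0)
N, n_x, N_c = 5000, 1708, 33  # illustrative counts only
X = rng.normal(size=(N, n_x))
y = rng.integers(0, N_c, size=N)

lda = LinearDiscriminantAnalysis(n_components=N_c - 1)
X_hat = lda.fit_transform(X, y)  # \hat{x} = W x for each super-pixel
print(X_hat.shape)               # (N, N_c - 1)
```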
Algorithm 1 Locality-aware Retrieval Set Algorithm
1:parameters: $n_{q}$, $\bm{n_{t}}$, $N_{I}$, $k_{m}$, $\tau$, $\bm{Q}$, $\bm{T}$, $\bm{m}$.
2:The unique index set $S=\emptyset$.
3:$\bm{v}=\bm{0}\in\mathbb{R}^{N_{I}}$.
4:for i = 1:$n_{q}$ do
5: [$\bm{\eta}_{i}$, $\bm{\Delta}_{i}$]$\leftarrow$Knn($\bm{Q}_{i}$, $\bm{T}$,
$k_{m}$);
6: $\bm{\eta}_{i}\leftarrow\bm{\eta}_{i}\setminus S$;
7: if $\bm{\eta}_{i}\neq\emptyset$ then
8: $\bm{\eta}_{i}$$\leftarrow$RefineIndexSet($\bm{\eta}_{i}$, $\bm{\Delta}_{i}$);
9: $\bm{I}_{i}\leftarrow$FindImageIndex($\bm{\eta}_{i}$);
10:
$\bm{v}(\bm{I}_{i})\leftarrow\bm{v}(\bm{I}_{i})+1./\bm{\Delta}_{i}(\bm{\eta}_{i})$;
11: $S\leftarrow S\bigcup\bm{\eta}_{i}$;
12: end if
13:end for
14:$\bm{v}=$NormalizeAndSort($\bm{v}$).
15:$k_{r}=\arg\min_{u}\frac{\sum_{j=1}^{u}v_{j}}{\sum_{j=1}^{N_{I}}v_{j}}\geq\tau$.
16:return top $k_{r}$ training images.
17:
18:function RefineIndexSet($\eta$, $\Delta$)
19: $\bm{d}=\bm{\infty}\in\mathbb{R}^{N_{I}}$.
20: $\bm{\Gamma}=\emptyset$.
21: for i = 1:$|\eta|$ do
22: if $\bm{d}(\bm{m}(\eta_{i}))>\Delta_{i}$ then
23: $\bm{d}(\bm{m}(\eta_{i}))=\Delta_{i}$;
24: else
25: $\bm{\Gamma}=\bm{\Gamma}\bigcup\texttt{i}$;
26: end if
27: end for
28: return $\bm{\Gamma}$.
29:end function
30:
31:function FindImageIndex($\eta$)
32: $\bm{\Gamma}=\bm{\infty}\in\mathbb{R}^{|\eta|}$.
33: for i = 1:$|\eta|$ do
34: $\Gamma_{i}$ = $\bm{m}(\eta_{i})$;
35: end for
36: return $\Gamma$.
37:end function
38:
39:function NormalizeAndSort($\bm{v}$)
40: $\bm{\Gamma}=\bm{\infty}\in\mathbb{R}^{|\bm{v}|}$.
41: for i = 1:$|\bm{v}|$ do
42: $\Gamma_{i}$ = $\bm{v}_{i}/\min{(n^{t}_{i},n_{q})}$;
43: end for
44: $\Gamma=\mathrm{sort}(\Gamma)$.
45: return $\Gamma$.
46:end function
Figure 3: The distribution of best $k$s for all training images in the
SIFTFlow dataset. It can be observed that there is no dominant $k$ from $1$ to
$50$.
The procedure to obtain the retrieval set is summarized in Algorithm 1. Denote
$n_{q}$ as the number of super-pixels in the test image,
${n^{t}_{j}}\in\mathbb{R}$ as the number of super-pixels for the $j$-th
training image, and $N_{I}$ as the number of training images. We impose the
natural constraint that one super-pixel in a training image is matched with
only one super-pixel of the test image. We denote $S$ as the unique index set
which stores the indices of the already matched super-pixels, $\bm{v}$ as the
similarity vector between the test image and all training images,
$\bm{Q}\in\mathbb{R}^{(N_{c}-1)\times n_{q}}$ as the feature matrix for all
the super-pixels in the test image,
$\bm{T}\in\mathbb{R}^{(N_{c}-1)\times(\sum_{j}n^{t}_{j})}$ as the feature
matrix for all the super-pixels in the training set, and
$\bm{m}\in\mathbb{R}^{\sum_{j}n^{t}_{j}}$ as the mapping index between the
super-pixel and the corresponding training image. As mentioned above, the over-segmentation of the image is performed using [23]. Then we extract the
corresponding features similarly as [5] for each super-pixel and use LDA to
reduce the feature dimension.
We match each super-pixel in the test image with all super-pixels in the
training set. In order to reduce the complexity, we perform Knn to find the
nearest $k_{m}$ super-pixels in the training images for the $i$-th super-pixel
in the test image. The Euclidean distance is used to calculate the
dissimilarity between two super-pixels. As a result, we have
$\bm{\eta}_{i}\in\mathbb{R}^{k_{m}}$ as the indices of the returned nearest
super-pixels of the $i$-th test super-pixel, and
$\bm{\Delta}_{i}\in\mathbb{R}^{k_{m}}$ as the corresponding distances of the
returned nearest super-pixels to the $i$-th test super-pixel. We remove the
super-pixels in $S$ from $\bm{\eta}_{i}$, where $S$ includes the training
super-pixels matched by the first $i-1$ test super-pixels. There may be
more than one super-pixel from one training image, thus RefineIndexSet is
performed to keep the nearest one. Note that $|\cdot|$ denotes the number of
the elements in an array. Then, the index set $S$ is updated by adding
$\eta_{i}$.
The function FindImageIndex is invoked to retrieve the corresponding image
index of $\bm{\eta}_{i}$. Then we update the similarity vector $\bm{v}$ since
the number of super-pixels is not the same for every image. For example, the
number of super-pixels in the SIFTFlow training set varies from $5$ to $193$.
Therefore we perform NormalizeAndSort to obtain the final similarity vector.
Namely, for each training image $j$, $v_{j}$ is divided by
$\min(n_{q},{n^{t}_{j}})$. The retrieval set then includes the top $k_{r}$
training images by
$\frac{\sum_{j=1}^{k_{r}}v_{j}}{\sum_{j=1}^{N_{I}}v_{j}}\geq\tau$, where the
parameters $k_{m}$ and $\tau$ are selected by the grid search over the
training set based on the leave-one-out strategy. Namely, we choose a pair of $\tau\in\{0.1,\ldots,0.5\}$ with step size $0.1$ and $k_{m}\in\{500,\ldots,2500\}$ with step size $500$, and perform the following adaptive nonparametric super-pixel classification for all images in the training set. The leave-one-out strategy means that when one training image is selected as a test image, the rest of the training images are used as the corresponding training set.
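For readers who prefer code to pseudocode, the following Python sketch reimplements the core of Algorithm 1 under simplifying assumptions (our own illustration, not the authors' code; scikit-learn's NearestNeighbors stands in for the Knn routine, and all array names are ours):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def locality_aware_retrieval(Q, T, sp2img, n_t, k_m=1000, tau=0.3):
    """Simplified sketch of Algorithm 1 (ours, not the authors' code).
    Q: (n_q, d) test super-pixel features; T: (n_sp, d) training
    super-pixel features; sp2img: (n_sp,) training-image index of each
    super-pixel; n_t: (N_I,) super-pixel counts per training image."""
    N_I = len(n_t)
    v = np.zeros(N_I)
    used = set()  # the unique-match index set S
    dist, idx = NearestNeighbors(n_neighbors=k_m).fit(T).kneighbors(Q)
    for di, ii in zip(dist, idx):
        best = {}  # per training image, keep only the nearest match
        for d, j in zip(di, ii):
            if j in used:
                continue
            img = sp2img[j]
            if img not in best or d < best[img][0]:
                best[img] = (d, j)
        for img, (d, j) in best.items():
            v[img] += 1.0 / max(d, 1e-12)  # similarity vote
            used.add(j)
    v /= np.minimum(n_t, len(Q))  # NormalizeAndSort
    order = np.argsort(v)[::-1]
    total = v.sum()
    if total == 0:
        return order[:0]
    k_r = int(np.searchsorted(np.cumsum(v[order]) / total, tau) + 1)
    return order[:k_r]  # indices of the top k_r training images
```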
### II-C Adaptive Nonparametric Super-pixel Classification
Adaptive nonparametric super-pixel classification aims to overcome the
limitation of the traditional $k$-nearest neighbor ($k$-NN) algorithm which
usually assigns the same number of nearest neighbors for each test sample. For
nonparametric algorithms, the label of each super-pixel in the test image is
assigned based on the corresponding similar $k$ super-pixels in the retrieval
set. Our improved $k$-NN algorithm focuses on looking for the suitable $k$ for
each test sample.
Basically, the sample-specific $k$ of each test image is propagated from its similar training images. In particular, each training image $t$ retrieved by the super-pixel matching process is considered as one test image, while the remaining $N_{I}-1$ images in the training set serve as the corresponding training set. Then we perform super-pixel matching to obtain the
retrieval set for $t$ and assign the label $l^{k}_{i}$ of the $i$-th super-
pixel by the majority vote of the $k$ nearest super-pixels in the retrieval
set,
$l_{i}^{*}=\arg\max_{l_{i}}L(k,l_{i}),$ (5)
where $L$ is the likelihood ratio for the $i$-th super-pixel to have the
category $l_{i}$ based on the $k$ nearest super-pixels and defined as below,
$L(k,l_{i})=\frac{P(i|l_{i},k)}{P(i|\bar{l}_{i},k)}=\frac{n(l_{i},NN(i,k))/n(l_{i},D)}{n(\bar{l}_{i},NN(i,k))/n(\bar{l}_{i},D)}.$ (6)
Here $n(l_{i},NN(i,k))$ is the number of super-pixels with class label $l_{i}$
in the $k$ nearest super-pixels of the $i$-th super-pixel in the retrieval
set, $\bar{l}_{i}$ is the set of all labels excluding $l_{i}$, and $D$ is the
set of all super-pixels in the whole training set. $NN(i,k)$ consists of $k$
nearest super-pixels of the $i$-th super-pixel from the retrieval set. Then we
compute the per-pixel accuracy of each retrieved training image $t$ for
different $k$s. We denote $A_{tk}$ as the per-pixel performance (the
percentage of all ground-truth pixels that are correctly labeled) of the
training image $t$ with the parameter value $k$. We vary $k$ from $1$ to $50$ with step size $1$, i.e., $k\in\{1,2,3,\ldots,50\}$. As can be
observed in Figure 3, there is no dominant $k$ from $1$ to $50$ in the overall
SIFTFlow training set. It motivates the necessity of adaptive $k$ nearest
neighbors for the nonparametric super-pixel classification process. Thus, for
each test image, we assign its $k$ by transferring $k$s of the similar images
returned by the super-pixel matching process,
$k^{*}=\arg\max_{k}\sum_{t=1}^{k_{r}}A_{tk},$ (7)
where $k_{r}$ is the number of images in the retrieval set for the test image.
Then, based on the selected $k^{*}$, the initial label of a super-pixel in the test image is obtained in the same way as in Eqn. (5).
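The two selection rules above can be sketched compactly in Python; the snippet below (an illustration with placeholder inputs, not the authors' implementation) implements the likelihood ratio of Eq. (6) and the adaptive-$k$ rule of Eq. (7):

```python
import numpy as np

def likelihood_ratio(labels_knn, label_counts_D, l):
    """L(k, l) of Eq. (6): class frequency among the k nearest
    super-pixels, normalized by the global frequency in D (sketch)."""
    n_l = int(np.sum(labels_knn == l))
    n_not = labels_knn.size - n_l
    D_l = max(int(label_counts_D[l]), 1)
    D_not = max(int(label_counts_D.sum()) - int(label_counts_D[l]), 1)
    return (n_l / D_l) / max(n_not / D_not, 1e-12)

def adaptive_k(A, k_grid):
    """Eq. (7): choose the k that maximizes the summed per-pixel
    accuracies A[t, k] of the k_r retrieved training images."""
    return k_grid[int(np.argmax(A.sum(axis=0)))]

# Illustrative use with random placeholder accuracies:
rng = np.random.default_rng(0)
A = rng.random((10, 50))  # 10 retrieved images, k = 1, ..., 50
print(adaptive_k(A, np.arange(1, 51)))
```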
Figure 4: (Top) Label frequencies for the pixels in the SIFTFlow training set.
(Bottom) The per-category classification rates of different $k$s and our
adaptive nonparametric method on the SIFTFlow dataset. The categories ‘bird’, ‘cow’, ‘desert’, and ‘moon’ are dropped from the figure since they are not present in the test split. (Best viewed at 400% magnification.)
### II-D Contextual Smoothing
Generally, the initial labels for the super-pixels may still be noisy, and these labels need to be further refined with global context information. The contextual constraints are very important for parsing images. For example, a pixel assigned with “car” is likely connected with “road”. Therefore, the initial labels are smoothed with an MRF energy function defined over the field of pixels:
$E(l)=\sum_{i\in V}E_{d}(i,l_{i})+\lambda\sum_{e_{ij}\in E}E_{s}(l_{i},l_{j}),$ (8)
where $V$ is the set of all pixels in the image, $E$ is the set of edges
connecting adjacent pixels, and $\lambda$ is a smoothing constant. The data
term is defined as follows
$E_{d}(i,l_{i})=-\log L(k^{*},l_{sp(i)}),$ (9)
where $sp(i)$ denotes the super-pixel containing the $i$-th pixel and the $L$ function is defined in Eqn. (6). The MRF model also includes the
smoothness constraint reflecting the spatial consistency (pixels or super-
pixels close to each other are most likely to have similar labels). Therefore,
the smoothing term $E_{s}(l_{i},l_{j})$ imposes a penalty when two adjacent
pixels ($p_{i}$, $p_{j}$) are similar but are assigned with different labels
($l_{i}$, $l_{j}$). $E_{s}$ is defined based on probabilities of label co-
occurrence and biases the neighboring pixels to have the same label in the
case that no other information is available, and the probability depends on
the edge of the image:
$E_{s}(l_{i},l_{j})=-\xi_{ij}\times\log\left(\frac{P(l_{i}|l_{j})+P(l_{j}|l_{i})}{2}\right)\times\delta[l_{i}\neq l_{j}],$ (10)
where $P(l_{i}|l_{j})$ is the conditional probability of one pixel having
label $l_{i}$ given that its neighbor has label $l_{j}$, estimated by counts
from the training set. $\xi_{ij}$ is defined based on the normalized gradient
value of the neighboring pixels:
$\xi_{ij}=\frac{\nabla_{ij}}{\sum_{e_{pq}\in E}\nabla_{pq}},$ (11)
where $\nabla_{ij}=||I(i)-I(j)||^{2}$ is the squared $\ell_{2}$ norm of the gradient of the test image $I$ between a pixel $i$ and its neighbor pixel $j$. The stronger
the luminance edge is, the more likely the neighboring pixels may have
different labels. Multiplication with the constant Potts penalty
$\delta[l_{i}\neq l_{j}]$ is necessary to ensure that this energy term is
semi-metric as required by graph cut inference [25]. We perform the inference using the $\alpha$-$\beta$ swap algorithm [25, 26, 27].
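To make the energy of Eqs. (8)-(11) concrete, the following Python sketch evaluates it for a candidate labeling on a 4-connected grid (our own simplified illustration; the actual minimization would be delegated to a graph-cut solver implementing the $\alpha$-$\beta$ swap):

```python
import numpy as np

def mrf_energy(labels, unary, P_cond, img, lam=16.0):
    """Evaluate the smoothing energy of Eqs. (8)-(11) on a 4-connected
    grid (sketch only). labels: (H, W) label map; unary: (H, W, L)
    data costs E_d = -log L(k*, l); P_cond: (L, L) conditional label
    co-occurrence probabilities; img: (H, W) float grayscale image."""
    H, W = labels.shape
    E = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    gx = (img[:, 1:] - img[:, :-1]) ** 2   # horizontal edge gradients
    gy = (img[1:, :] - img[:-1, :]) ** 2   # vertical edge gradients
    Z = gx.sum() + gy.sum()                # normalizer of Eq. (11)
    for grad, li, lj in ((gx, labels[:, 1:], labels[:, :-1]),
                         (gy, labels[1:, :], labels[:-1, :])):
        xi = grad / Z
        pen = -np.log(np.maximum(0.5 * (P_cond[li, lj] + P_cond[lj, li]),
                                 1e-12))
        E += lam * (xi * pen * (li != lj)).sum()
    return E
```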
## III Experiments
### III-A Datasets and Evaluation Metrics
In this section, our approach is validated on two challenging datasets:
SIFTFlow [4] and 19-Category LabelMe [28].
SIFTFlow
dataset111http://people.csail.mit.edu/celiu/LabelTransfer/LabelTransfer.rar is
composed of 2,688 images that have been thoroughly labeled by LabelMe users.
The image size is 256 $\times$ 256 pixels. Liu et al. [4] split this dataset
into 2,488 training images and 200 test images and used synonym correction to
obtain 33 semantic labels (sky, building, tree, mountain, road, sea, field,
car, sand, river, plant, grass, window, sidewalk, rock, bridge, door, fence,
person, staircase, awning, sign, boat, crosswalk, pole, bus, balcony,
streetlight, sun, bird, cow, desert, and moon).
19-Category LabelMe
dataset222http://www.umiacs.umd.edu/~ajain/dataset/LabelMesubsetdataset.zip
Jain et al. [28] randomly collected 350 images from LabelMe [8] with 19
categories (grass, tree, field, building, rock, water, road, sky, person, car,
sign, mountain, ground, sand, bison, snow, boat, airplane, and sidewalk). This
dataset is split into 250 training images and 100 test images.
We evaluate our approach on both sets, but perform additional analysis on the
SIFTFlow dataset since it has a larger number of categories and images. In
evaluating image parsing algorithms, there are two metrics that are commonly
used: per-pixel and per-category classification rate. The former rates the
total proportion of correctly labeled pixels, while the latter indicates the
average proportion of correctly labeled pixels in each object category. If the
category distribution is uniform, then the two would be the same, but this is
not the case for real-world scenes. Note that for all experiments, $\lambda$ is empirically set to $16$ in the contextual smoothing process, and $k_{m}$ and $\tau$ are set to $1000$ and $0.3$, respectively. In all of our experiments, we use the Euclidean distance metric to find the nearest neighbors.
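For reference, the two metrics can be computed as in the following sketch (ours; the `ignore` marker for unlabeled pixels is an assumption on our part):

```python
import numpy as np

def parsing_rates(pred, gt, num_classes, ignore=-1):
    """Per-pixel and mean per-category classification rates over
    label maps (sketch; `ignore` marks unlabeled pixels)."""
    valid = gt != ignore
    per_pixel = float((pred[valid] == gt[valid]).mean())
    per_cat = []
    for c in range(num_classes):
        mask = valid & (gt == c)
        if mask.any():  # skip categories absent from the ground truth
            per_cat.append(float((pred[mask] == c).mean()))
    return per_pixel, float(np.mean(per_cat))
```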
TABLE II: Performance comparison of our algorithm with other algorithms on the SIFTFlow dataset [4]. Per-pixel rates and average per-category rates are presented. The best performance values are marked in bold.

Algorithm | Per-Pixel (%) | Per-Category (%)
---|---|---
_Parametric Baselines_ | |
Tighe et al. [14] | 78.6 | **39.2**
Farabet et al. [16] | 78.5 | 29.6
_Nonparametric Baselines_ | |
Liu et al. [4] | 74.8 | –
Tighe et al. [5] | 76.3 | 28.8
Tighe et al. [5] (adding geometric information) | 76.9 | 29.4
Myeong et al. [11] | 76.2 | 29.6
Eigen et al. [17] | 77.1 | 32.5
_Our Proposed Adaptive Nonparametric Algorithm_ | |
Super-pixel Classification | 77.2 | 34.9
Contextual Smoothing | **78.9** | 34.0
TABLE III: Performance comparison of different $k$s and our algorithm on the SIFTFlow dataset [4]. Per-pixel rates and average per-category rates are presented.

Parameter | Per-Pixel (%) | Per-Category (%)
---|---|---
$k$ = 1 | 70.2 | 31.9
$k$ = 5 | 76.6 | 34.8
$k$ = 10 | 77.5 | 34.6
$k$ = 20 | 77.8 | 33.5
$k$ = 30 | 77.9 | 33.3
$k$ = 40 | 77.9 | 30.6
$k$ = 50 | 77.8 | 29.5
$k$ = 60 | 77.5 | 28.6
$k$ = 70 | 77.8 | 28.5
$k$ = 80 | 77.5 | 28.2
$k$ = 90 | 77.1 | 27.2
$k$ = 100 | 76.9 | 26.8
Adaptive $k$ in Our Algorithm | 78.9 | 34.0
### III-B Performance on the SIFTFlow Dataset
Comparison of our algorithm with the state of the art. Table II reports per-pixel and average per-category rates for image parsing on the SIFTFlow dataset. Even
though the nonparametric methods are our main baselines, we still list
parametric methods for reference. Our proposed method outperforms the
baselines by a remarkable margin. We did not compare our work with [18] and
[19] since [18] uses a different set of super-pixel’s features whereas [19]
utilizes the extra data to balance the distribution of the categories in the
retrieval set. Compared with our initial super-pixel classification result,
the final contextual smoothing improves overall per-pixel rates on the
SIFTFlow dataset by about 1.7%. Average per-category rates drop slightly due
to the contextual smoothing on some of the smaller classes. Note that Tighe et al. [14] improved [5] by extensively adding multiple object detectors (their performance reaches $78.6\%$). The addition of many object detectors brings better per-category performance but also increases the processing time, since running object detection is very time-consuming. Note that, to train the object detectors, [14] must use extra data. Also, [14] utilizes SVM instead of $k$-NN as in our work, which may bring better classification results, especially for some rare categories. Meanwhile, our proposed method improves [5] with a simpler solution and even achieves a better performance in terms of per-pixel rate ($78.9\%$). Also, our method performs better than [16], which deploys deep learning features heavily.
Figure 5: Top 4 exemplar retrieval results of super-pixel matching, global matching [5], and GIST-based matching [4]. (a) Global matching returns “tall building” and “open country” scenes, and GIST-based matching obtains “inside city” and “mountain”. Meanwhile, our method obtains the reasonable images of “urban street”. (b) The “open country” images are retrieved in GIST-based matching and the “sunset coastal” scenes are returned in global matching instead of “highway” as in our method.
Figure 6: Exemplar results from the SIFTFlow dataset. In (a), the adaptive nonparametric method successfully parses the test image. In (b), the “rock” is classified instead of “river” or “mountain” as in the other two methods. In (c), our method recovers the “sun” and removes the spurious classification of the sun’s reflection in the water as “sky”. In (d), the labeled “sea” regions in the two other methods are recovered as “road”. In (e), some of the trees are recovered in the adaptive nonparametric method. In (f), our method recovers “window” from “door”. Best viewed in color.
TABLE IV: Performance comparison of different settings on the SIFTFlow dataset [4]. Per-pixel classification rates (with per-category rates in parentheses) are presented.

Algorithm | Performance
---|---
_Baseline_ |
SuperParsing [5] | 76.3 (28.8)
_Our Improvements_ |
SuperParsing + LDA + Global Matching + fixed $k$ = 20 | 76.4 (31.2)
SuperParsing + LDA + Super-pixel Matching + fixed $k$ = 20 | 77.8 (33.5)
SuperParsing + LDA + Super-pixel Matching + Adaptive $k$ | 78.9 (34.0)
Performance of different $k$s. The impact of different $k$s is further investigated on the SIFTFlow dataset. In this experiment, the parameter $k$
varies from $1$ to $100$. LDA and super-pixel matching are utilized in order
to keep fair comparison with our adaptive nonparametric method. Table III
summarizes the performance of different $k$s on both per-pixel and per-
category criteria. The relationship between the per-pixel and per-category rates for different $k$s is inconsistent. The smaller $k$s ($\leq 20$) tend to achieve a higher per-category rate, whereas the larger $k$s lean toward a higher per-pixel rate. A lower $k$ responds well to rare categories (e.g., boat, pole, and bus, as illustrated in Figure 4), and thus leads to improved per-category classification. Meanwhile, a higher $k$ leads to better per-pixel accuracy since it works well for more common categories such as sky, building, and tree. $k=5$ yields the largest per-category rate, but its per-pixel performance is much lower than that of $k=40$. Taking a closer look, Figure 4 also shows the details of the per-category classification rates for different $k$s. The smaller $k$s yield better results on categories with a small number of samples, while the larger $k$s are sensitive to categories with a large number of samples, such as sky and sea. As observed in the same figure, our adaptive nonparametric approach exhibits advantages over both smaller and larger $k$s.
Figure 7: Exemplar results on the 19-Category LabelMe dataset [28]. The test
images, ground truth, and results from our proposed adaptive nonparametric
method are shown in triple batches. Best viewed in color.
How each new component affects SuperParsing [5]. In order to study the impact of each newly proposed component, another experiment is conducted with different configuration settings. Namely, we report the results of incrementally adding LDA, super-pixel matching, and adaptive nonparametric super-pixel classification to the traditional nonparametric image parsing pipeline [5]. Keeping $k$ fixed at 20 and the number of similar images in the retrieval set at 200, as recommended in [5], adding LDA increases the performance of [5] by a small margin. We observe a large gain, i.e., 1.4% in per-pixel rate, by adding super-pixel matching. Further adding adaptive nonparametric super-pixel classification increases the combination ([5], LDA, super-pixel matching, and fixed $k=20$) by another 1.1% in per-pixel rate. Overall, our work improves [5] by 2.6% in terms of per-pixel rate and 5.2% in terms of per-category rate. The results clearly show the effectiveness of our proposed super-pixel matching and adaptive nonparametric super-pixel classification. Figure 6 shows exemplar results of different
experimental settings on the SIFTFlow dataset.
TABLE V: The evaluation of the relevance of a retrieval set with respect to a query.

Retrieval Set Algorithm | NDCG
---|---
GIST-based matching [4] | 0.83
Global matching [5] | 0.85
Super-pixel matching | 0.88
How good is the locality-aware retrieval set? We evaluate the performance of
our retrieval set via Normalized Discounted Cumulative Gain (NDCG) [29] which
is commonly used to evaluate ranking systems. NDCG is defined as follows,
$NDCG@k_{r}=\frac{1}{Z}\sum_{i=1}^{k_{r}}\frac{2^{rel(i)}-1}{\log(i+1)},$ (12)
where $rel(\cdot)$ is a binary value indicating whether the scene of the
returned image is relevant (with value 1) or irrelevant (with value 0) to the
one of the query image, and $Z$ is a constant to normalize the calculated
score. Recall that $k_{r}$ is the number of images returned from the locality-aware retrieval set; the same $k_{r}$ is used for all methods to ensure a fair comparison. As shown in Table V, our super-pixel matching outperforms the other baselines, namely GIST-based matching
and global matching in terms of NDCG. Figure 5 also demonstrates the good
results of our locality-aware retrieval set.
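For completeness, Eq. (12) can be evaluated as in the following sketch (ours; we assume the common base-2 logarithm convention, since Eq. (12) leaves the base unspecified):

```python
import numpy as np

def ndcg_at_k(rel, k_r):
    """NDCG@k_r of Eq. (12) for binary relevances rel (1 = same scene
    as the query, 0 = different); Z normalizes by the ideal ranking."""
    rel = np.asarray(rel[:k_r], dtype=float)
    discounts = np.log2(np.arange(2, rel.size + 2))  # log(i + 1)
    dcg = ((2.0 ** rel - 1.0) / discounts).sum()
    ideal = np.sort(rel)[::-1]
    idcg = ((2.0 ** ideal - 1.0) / discounts).sum()
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([1, 0, 1, 1, 0], 5))  # toy ranking
```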
Adaptive $k$ on different scene classes. Based on our hypothesis that similar images should share the same $k$, we would like to study how the
adaptive $k$ selection works for different types (scene classes) of similar
images. To this end, we divide images in the SIFTFlow dataset into scene
classes based on their filenames. For example, the test image
“coast_arnat59.jpg” is classified into coast scene class. In total, there are
8 scene classes, namely, coast, forest, highway, inside city, mountain, open
country, street, and tall building. We compute the mean number of categories (car, building, road, etc.) per scene class in the testing set of the SIFTFlow dataset.
Next, we compute the selected $k$ for each scene class by selecting the $k$
that has the highest confidence over all of the images in the same scene. The
mean number of categories and the selected $k$ of each scene class are
reported in Table VI. As we can observe, the scene images with more object
categories, i.e., highway, inside city and street, have lower $k$s. In
contrast, the scene images with fewer object categories have larger $k$s. Note
that our method is unaware of the scene class of the test image. This means
our method adapts well to different scene classes and brings the remarkable
improvement to image parsing. In preliminary experiments, we randomized the order of the test super-pixels, and the performance is similar to that obtained with the fixed order from $1$ to $n_{q}$. Therefore, the order of the test super-pixels does not affect the performance.
TABLE VI: The mean number of categories and the correspondingly selected $k$ of each scene class on the SIFTFlow dataset.

Scene Class | Mean No. of Categories | Selected $k$
---|---|---
Coast | 3.8 | 12
Forest | 2.5 | 36
Highway | 6.5 | 6
Inside City | 7.2 | 12
Mountain | 2.6 | 22
Open Country | 3.9 | 14
Street | 7.5 | 6
Tall Building | 3.3 | 43
TABLE VII: Performance comparison of our algorithm with other algorithms on the 19-Category LabelMe dataset [28]. Per-pixel rates and average per-category rates are presented. The best performance values are marked in bold.

Algorithm | Per-Pixel (%) | Per-Category (%)
---|---|---
_Parametric Baselines_ | |
Jain et al. [28] | 59.0 | –
Chen et al. [30] | 75.6 | 45.0
_Nonparametric Baselines_ | |
Myeong et al. [6] | 80.1 | 53.3
_Our Adaptive Nonparametric Algorithm_ | |
Super-pixel Classification | 80.3 | 53.3
Contextual Smoothing | **82.7** | **55.1**
### III-C Performance on 19-Category LabelMe Dataset
Table VII shows the performance of our work compared with other baselines on
the 19-Category LabelMe dataset. Our final adaptive nonparametric method achieves 82.7% on this dataset, surpassing all state-of-the-art results. Among nonparametric methods, our result surpasses that of Myeong et al. [6] by a large margin. Compared with the parametric method [30], our work improves by 7.1%. Some exemplar results on this dataset are shown in
Figure 7.
## IV Conclusions and Future Work
This paper has presented a novel approach to image parsing that can take
advantage of adaptive nonparametric super-pixel classification. To the best of our knowledge, we are the first to exploit a locality-aware retrieval set and adaptive nonparametric super-pixel classification in image parsing.
Extensive experimental results have clearly demonstrated the proposed method
can achieve the state-of-the-art performance on diverse and challenging image
parsing datasets.
For future work, we are interested in exploring possible extensions to improve
the performance. For example, the combination weight of different types of
features can be learned. Another possible extension is to elegantly transfer
other parameters apart from $k$, for example, the $\lambda$ of the contextual
smoothing process, from the retrieved training images to the test image. Since the current solution is specific to image parsing, we are also interested in generalizing the proposed method to other recognition tasks, such as image retrieval and general $k$-NN classification applications. We also plan to extend our work to the video domain, e.g., action recognition [31] and human fixation prediction [32].
Last but not least, to speed up the super-pixel matching process, we can embed locality-sensitive hashing (LSH) [33] or the recently introduced Set Compression Tree (SCT) [34] to encode the feature representation in a few bits (instead of bytes) for large-scale matching. These coding methods, together with the small number of super-pixels per image, make our super-pixel matching process feasible at scale. In this paper, we only investigate the impact of the adaptive nonparametric method on scene parsing. The utilization of LSH or SCT, which are suitable for large-scale datasets, will be considered for building a practical system in the future.
## References
* [1] S. Gould, R. Fulton, and D. Koller, “Decomposing a scene into geometric and semantically consistent regions,” in _IEEE International Conference on Computer Vision_ , 2009, pp. 1–8.
* [2] M. P. Kumar and D. Koller, “Efficiently selecting regions for scene understanding,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2010, pp. 3217–3224.
* [3] V. S. Lempitsky, A. Vedaldi, and A. Zisserman, “Pylon model for semantic segmentation,” in _Neural Information Processing Systems Conference_ , 2011, pp. 1485–1493.
* [4] C. Liu, J. Yuen, and A. Torralba, “Nonparametric scene parsing via label transfer,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 33, no. 12, pp. 2368–2382, 2011.
* [5] J. Tighe and S. Lazebnik, “Superparsing: Scalable nonparametric image parsing with superpixels,” in _European Conference on Computer Vision_ , 2010, pp. 352–365.
* [6] H. Myeong, J. Y. Chang, and K. M. Lee, “Learning object relationships via graph-based context model,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2012, pp. 2727–2734.
* [7] X. He and R. S. Zemel, “Learning hybrid models for image annotation with partially labeled data,” in _Neural Information Processing Systems Conference_ , 2008, pp. 625–632.
* [8] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman, “Labelme: A database and web-based tool for image annotation,” _International Journal of Computer Vision_ , vol. 77, no. 1-3, pp. 157–173, 2008.
* [9] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie, “Objects in context,” in _IEEE International Conference on Computer Vision_ , 2007, pp. 1–8.
* [10] C. Galleguillos, B. McFee, S. J. Belongie, and G. R. G. Lanckriet, “Multi-class object localization by combining local contextual interactions,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2010, pp. 113–120.
* [11] H. Myeong and K. M. Lee, “Tensor-based high-order semantic relation transfer for semantic scene segmentation,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2013, pp. 3073–3080.
* [12] D. Munoz, J. A. Bagnell, and M. Hebert, “Stacked hierarchical labeling,” in _European Conference on Computer Vision_ , 2010, pp. 57–70.
* [13] B. Fulkerson, A. Vedaldi, and S. Soatto, “Class segmentation and object localization with superpixel neighborhoods,” in _IEEE International Conference on Computer Vision_ , 2009, pp. 670–677.
* [14] J. Tighe and S. Lazebnik, “Finding things: Image parsing with regions and per-exemplar detectors,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2013, pp. 3001–3008.
* [15] R. Socher, C. C.-Y. Lin, A. Y. Ng, and C. D. Manning, “Parsing natural scenes and natural language with recursive neural networks,” in _International Conference in Machine Learning_ , 2011, pp. 129–136.
* [16] C. Farabet, C. Couprie, L. Najman, and Y. LeCun, “Scene parsing with multiscale feature learning, purity trees, and optimal covers,” in _International Conference in Machine Learning_ , 2012, pp. 1–8.
* [17] D. Eigen and R. Fergus, “Nonparametric image parsing using adaptive neighbor sets,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2012, pp. 2799–2806.
* [18] G. Singh and J. Kosecka, “Nonparametric scene parsing with adaptive feature relevance and semantic context,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2013, pp. 3151–3157.
* [19] J. Yang, B. Price, S. Cohen, and M.-H. Yang, “Context driven scene parsing with attention to rare classes,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2014.
* [20] H. Zhang, T. Fang, X. Chen, Q. Zhao, and L. Quan, “Partial similarity based nonparametric scene parsing in certain environment,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2011, pp. 2241–2248.
* [21] L. Zhu, Y. Chen, Y. Lin, C. Lin, and A. L. Yuille, “Recursive segmentation and recognition templates for image parsing,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 34, no. 2, pp. 359–371, 2012.
* [22] Z. Tu, X. Chen, A. L. Yuille, and S. C. Zhu, “Image parsing: Unifying segmentation, detection, and recognition,” _International Journal of Computer Vision_ , vol. 63, no. 2, pp. 113–140, 2005.
* [23] P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient graph-based image segmentation,” _International Journal of Computer Vision_ , vol. 59, no. 2, pp. 167–181, 2004.
* [24] A. M. Martínez and A. C. Kak, “PCA versus LDA,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 23, no. 2, pp. 228–233, 2001.
* [25] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 23, no. 11, pp. 1222–1239, 2001.
* [26] V. Kolmogorov and R. Zabih, “What energy functions can be minimized via graph cuts?” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 26, no. 2, pp. 147–159, 2004.
* [27] Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 26, no. 9, pp. 1124–1137, 2004.
* [28] A. Jain, A. Gupta, and L. S. Davis, “Learning what and how of contextual models for scene labeling,” in _European Conference on Computer Vision_ , 2010, pp. 199–212.
* [29] B. Siddiquie, R. S. Feris, and L. S. Davis, “Image ranking and retrieval based on multi-attribute queries,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2011, pp. 801–808.
* [30] X. Chen, A. Jain, A. Gupta, and L. S. Davis, “Piecing together the segmentation jigsaw using context,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2011, pp. 2001–2008.
* [31] T. Nguyen, Z. Song, and S. Yan, “STAP: Spatial-temporal attention-aware pooling for action recognition,” _IEEE Transactions on Circuits and Systems for Video Technology_ , 2015.
* [32] T. Nguyen, M. Xu, G. Gao, M. S. Kankanhalli, Q. Tian, and S. Yan, “Static saliency vs. dynamic saliency: a comparative study,” in _ACM Multimedia Conference_ , 2013, pp. 987–996.
* [33] A. Andoni and P. Indyk, “Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions,” _Communications of the ACM_ , vol. 51, no. 1, pp. 117–122, 2008.
* [34] R. Arandjelovic and A. Zisserman, “Extremely low bit-rate nearest neighbor search using a set compression tree,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2014.
Dr. Tam V. Nguyen obtained his Ph.D. degree from the Department of Electrical and Computer Engineering, National University of Singapore, in 2013. Prior to that, he obtained his Master's degree from Chonnam National University, South Korea, in 2009. He is currently a research scientist and principal investigator at the Applied Research and Technologies for Infocomm Centre at Singapore Polytechnic. His research interests include computer vision, machine learning, and multimedia content analysis. He is the recipient of numerous awards, including Young Vietnamese of the Year 2005, the second prize in the ICPR 2012 contest on action recognition, and the best technical demonstration award at ACM Multimedia 2012.
Canyi Lu received the bachelor's degree in mathematics from Fuzhou University in 2009 and the master's degree in pattern recognition and intelligent systems from the University of Science and Technology of China in 2012. He is currently a Ph.D. student in the Department of Electrical and Computer Engineering at the National University of Singapore. His current interests include block-diagonal affinity matrix learning and convex and nonconvex optimization. He was the winner of the 2014 Microsoft Research Asia Fellowship.
Dr. Jose Sepulveda is the Director of the Applied Research and Technologies for Infocomm Centre at Singapore Polytechnic. Dr. Sepulveda's areas of interest include data analysis and technology in education. His training background includes a PhD in Physics and an MBA. His postdoctoral research focused on bioinformatics at the Center for Genomics and Bioinformatics at Karolinska Institutet (Sweden) and at the Biochemistry Department at Baylor College of Medicine (USA). He then worked in technology in education and video game design (edutainment) at the Center for Technology in Teaching and Learning at Rice University in Houston. His interests focus on the use of technology to empower creativity and innovation (agile methods), to bridge the gap between ideas and commercial products, and on the way it can be used to support knowledge management and communities of practice.
Dr. Shuicheng Yan (M'06, SM'09) is currently an Associate Professor in the Department of Electrical and Computer Engineering at the National University of Singapore and the founding lead of the Learning and Vision Research Group (http://www.lv-nus.org). Dr. Yan's research areas include machine learning, computer vision, and multimedia. He has authored or co-authored hundreds of technical papers over a wide range of research topics, with a Google Scholar citation count above 14,000 and an H-index of 52. He is an ISI Highly Cited Researcher (2014) and an IAPR Fellow (2014). He has been serving as an associate editor of IEEE TKDE, IEEE TCSVT, and ACM Transactions on Intelligent Systems and Technology (ACM TIST). He received the Best Paper Awards from ACM MM'13 (Best Paper and Best Student Paper), ACM MM'12 (Best Demo), PCM'11, ACM MM'10, ICME'10, and ICIMCS'09; the runner-up prize of ILSVRC'13; the winner prize of the ILSVRC'14 detection task; the winner prizes of the classification task in PASCAL VOC 2010-2012; the winner prize of the segmentation task in PASCAL VOC 2012; the honourable mention prize of the detection task in PASCAL VOC'10; the 2010 TCSVT Best Associate Editor (BAE) Award; the 2010 Young Faculty Research Award; the 2011 Singapore Young Scientist Award; and the 2012 NUS Young Researcher Award.
# Improving Surrogate Gradient Learning in Spiking Neural Networks via Regularization and Normalization

Undergraduate Thesis (BITS F421T), 2017B4A70657G
Dr. Timothée Masquelier, Senior Research Scientist, CERCO, CNRS, France
Dr. FirstName SecondName, Asst. Professor, BITS-Pilani Hyderabad Campus
Bachelor of Engineering in Computer Science, Master of Science in Mathematics
Computer Science & Information Systems, Birla Institute of Technology and Science Pilani, Goa Campus
###### Abstract
Spiking neural networks (SNNs) are different from the classical networks used
in deep learning: the neurons communicate using electrical impulses called
spikes, just like biological neurons. SNNs are appealing for AI technology
because they could be implemented on low-power neuromorphic chips. However,
SNNs generally remain less accurate than their analog counterparts. In this
report, we examine various regularization and normalization techniques with
the goal of improving surrogate gradient learning in SNNs.
###### Acknowledgements.
I would like to express my sincere gratitude for the support of my supervisor,
Dr. Timothée Masquelier, and for granting me the opportunity to pursue my
undergraduate thesis under his guidance. I would also like to acknowledge my
university, BITS Pilani, Goa, for their support and help during my thesis.
# Certified Lattice reduction
###### Abstract.
Quadratic form reduction and lattice reduction are fundamental tools in
computational number theory and in computer science, especially in
cryptography. The celebrated Lenstra–Lenstra–Lovász reduction algorithm (so-
called lll) has been improved in many ways through the past decades and
remains one of the central methods used for reducing _integral_ lattice bases.
In particular, its floating-point variants—where the rational arithmetic
required by Gram–Schmidt orthogonalization is replaced by floating-point
arithmetic—are now the fastest known. However, the systematic study of the
reduction theory of _real_ quadratic forms or, more generally, of real
lattices is not widely represented in the literature. When the problem arises,
the lattice is usually replaced by an integral approximation of (a multiple
of) the original lattice, which is then reduced. While practically useful and
proven in some special cases, this method doesn’t offer any guarantee of
success in general. In this work, we present an adaptive-precision version of
a generalized lll algorithm that covers this case in all generality. In
particular, we replace floating-point arithmetic by Interval Arithmetic to
certify the behavior of the algorithm. We conclude by giving a typical
application of the result in algebraic number theory for the reduction of
ideal lattices in number fields.
###### Key words and phrases:
Lattice Reduction, Quadratic forms reduction, Algorithmic number theory
###### 1991 Mathematics Subject Classification:
11H06, 11H55, 11R04
This work has been supported in part by the European Union as H2020 Programme
under grant agreement number ERC-669891.
Thomas Espitau
Sorbonne Université
LIP 6, CNRS UMR 7606
Paris, France
Antoine Joux
Chaire de Cryptologie de la Fondation SU
Sorbonne Université, Institut de Mathématiques de Jussieu–Paris Rive Gauche
CNRS, INRIA, Univ Paris Diderot.
Campus Pierre et Marie Curie, F-75005, Paris, France
## 1\. Introduction
In a general setting, a _lattice_ $\Lambda$ is a free $\mathbf{Z}$-module of
finite rank, endowed with a positive-definite bilinear form on its ambient
space $\Lambda\otimes_{\mathbf{Z}}\mathbf{R}$, as presented for instance in
[16]. In particular, this definition implies that $\Lambda$ is discrete in its
ambient space for the topology induced by the scalar product. This formalism
encompasses the well-known _Euclidean lattices_ when taking the canonical
scalar product of $\mathbf{R}^{d}$, but also lattices arising from ideals in
rings of integers of number fields. The rank of the lattice $\Lambda$ is
defined as the dimension of the vector space
$\Lambda\otimes_{\mathbf{Z}}\mathbf{R}$. By definition of a finitely-generated
free module, there exists a finite set of vectors
$b_{1},\ldots,b_{\operatorname{rk}\Lambda}\in\Lambda$ such that
$\Lambda=\bigoplus_{i=1}^{\operatorname{rk}\Lambda}b_{i}\mathbf{Z}$. Such a
family is called a basis of the lattice and is not unique. In fact, as soon as
$\operatorname{rk}\Lambda\geq 2$ there are infinitely many bases of $\Lambda$.
Some among those have interesting properties, such as having reasonably small
vectors and low orthogonality defects. They are informally called _reduced
bases_ and finding them is the goal of _lattice reduction_.
Numerous algorithms arising in algebraic number theory heavily rely on lattice
reduction, for example, the computation of normal forms of integral matrices
(see [10] for the Hermite Normal Form and [9] for the Smith Normal Form),
class group computations in a number field [7, 2], or even the enumeration of
points of small height near algebraic curves [6].
Even for lattices that use the canonical scalar product, there is a deep link
with bilinear forms that clearly appears when considering the _Gram matrix_ of
a basis $\mathcal{B}=\\{{b_{1}},\dotsc,{b_{d}}\\}$, that is, the real
symmetric matrix
$\mathcal{G}=\mathopen{}\mathclose{{}\left(\langle{b_{i}},{b_{j}}\rangle}\right)_{i,j}$.
The study of these reduction problems is not recent and goes back to the works
of Lagrange and Gauss. These early works were expressed in terms of reduction
of quadratic form, more precisely integral binary quadratic forms111This can
be viewed as the reduction of integral dimension-two lattices. and led to a
method often called Gauss’ algorithm. This method can be seen as a
2-dimensional extension of the Euclid algorithm for computing the greatest
common divisor of two integers. In 1850, Hermite proved a general upper bound
on the length of the shortest vector in a lattice, given as a function of the
dimension and of a very important invariant called the determinant, which is
defined in Section 2.1. This bound involves the so-called Hermite constant and
has recently been rephrased in algorithmic terms [20, Hermite’s Algorithms]. A
century later, in 1982, Lenstra, Lenstra and Lovász designed the lll
_algorithm_ [14], with the polynomial factorization problem as an application,
following the work of Lenstra on integer programming [15]. This algorithm
constitutes a breakthrough in the history of lattice reduction algorithms,
since it was the first with a runtime polynomial in the dimension.
It was followed by many improvements lowering its complexity or improving the
output’s quality.
Current implementations of lll often work with low-precision approximations in
order to greatly speed up the computations. Indeed, the algorithm works
surprisingly well even with such reduced precision, even if some care needs
to be taken to avoid infinite loops. Moreover, once the result is obtained, it
can be verified efficiently as shown in [30].
We propose here an alternative strategy where we not only certify that the
end-result is a reduced basis but also that the algorithm followed a valid
computation path to reach it. This strongly deviates from other approaches
that have been taken to obtain guaranteed reduced lattice bases. At first,
this may seem irrelevant. After all, one might claim that a basis satisfying
the end conditions of lll is what is desired and that the computation path
doesn't matter. However, as shown in [13] for Siegel-reduced bases, a reduced
basis chosen uniformly at random behaves like the worst case allowed by the
final inequalities. By contrast, bases produced by the lll algorithm are
usually much better than this worst case. This argues in favor of trying to
follow the algorithm's definition exactly to better understand the phenomenon. In
particular, this option might be invaluable for experiments performed toward
analyzing this gap.
The present article also relies on Interval Arithmetic, a representation of
reals by intervals—whose endpoints are floating-point numbers—that contain
them. Arithmetic operations, in particular the basic operations
$+,-,\times,\div$ can be redefined in this context. The main interest of this
representation lies in its _certification_ property: if real numbers are
represented by intervals, the interval resulting from the evaluation of an
algebraic expression contains the exact value of the evaluated expression.
For some authors, Interval Arithmetic was introduced by R. Moore in 1962 in
his Ph.D. thesis [18]. For others, it can be dated back to 1958, in an article
of T. Sunaga [28] which describes an algebraic interpretation of the lattice
of real intervals, or even earlier, in 1931, as a proposal in the Ph.D. thesis
[31] of R.C. Young at Cambridge. Its main asset—calculating directly on
sets—is nowadays used to deterministically determine the global extrema of a
continuous function [24] or to localize the zeroes of a function and
(dis)prove their existence [11]. Another application of Interval Arithmetic
is to be able to detect lack of precision at run-time of numerical algorithms,
thanks to the guarantees it provides on computations. This can, in particular,
be used to design adaptive-precision numerical algorithms.
In the present paper, we propose to transform and generalize the lll algorithm
into an adaptive-precision version, which can reduce arbitrary lattices and
follows a certified flow of execution. More precisely, it uses Interval
Arithmetic to validate the size-reduction and exchange steps that occur within
lll.
The interested reader may download an implementation of the algorithm from the
webpage http://almacrypt.eu/outputs.php.
### Organisation of the paper
In Section 2, we briefly introduce reduction theory and present the l2 variant
of the lll algorithm. Section 3 aims at describing the basics of Interval
Arithmetic used in Section 4 to handle the problem of representation of real
lattices. The framework of this latter section is then used in Section 5 to
derive a certified reduction algorithm for real lattices. Section 6 presents
an application to algorithmic number theory.
### Notations and conventions
#### General notations
As usual, the bold capitals $\mathbf{Z}$, $\mathbf{Q}$, $\mathbf{R}$ and
$\mathbf{C}$ refer respectively to the ring of integers and the fields of
rational, real and complex numbers. Given a real number $x$, the integral
roundings _floor_ , _ceil_ and _round to nearest integer_ are denoted
respectively by $\lfloor x\rfloor,\lceil x\rceil,\lfloor x\rceil$. Note that
the rounding operator is ambiguous when operating on half-integers. However,
either choice when rounding is acceptable in lattice reduction algorithms. In
fact, in this context, it is often enough to return an integer close to $x$,
not necessarily the closest.
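For illustration, here is a minimal Python sketch of these rounding operators; note that Python's built-in round breaks ties on half-integers toward the even integer, which is one of the acceptable choices just mentioned.

```python
import math

x = 2.5  # a half-integer: both 2 and 3 are acceptable roundings in this context

print(math.floor(x))  # floor: 2
print(math.ceil(x))   # ceil: 3
print(round(x))       # round to nearest, ties to even: 2
print(round(3.5))     # ties to even again: 4
```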
These operators are extended to operate on vectors and matrices by point-wise
composition. The complex conjugation of $z\in\mathbf{C}$ is denoted by the
usual bar $\bar{z}$ whereas the real and imaginary parts of a complex $z$ are
indicated by respectively $\mathfrak{R}(z)$ and $\mathfrak{I}(z)$. All
logarithms are taken in base $2$.
#### Matrices and norms
For a field $\mathbf{K}$, let us denote by $\mathbf{K}^{d\times d}$ the space
of square matrices of dimension $d$ over $\mathbf{K}$,
$\textrm{Gl}_{d}(\mathbf{K})$ its group of invertible elements and
$\mathcal{S}_{d}(\mathbf{K})$ its subspace of symmetric matrices. For a
complex matrix $A$, we write $A^{\dagger}$ for its conjugate transpose. For a
vector $v$, we denote by $\|v\|_{\infty}$ its absolute (or infinity) norm,
that is the maximum of the absolute value of its entries. We similarly define
the matrix _max norm_
$\|B\|_{\textrm{max}}=\max_{(i,j)\in[1\,\cdots\,d]^{2}}|B_{i,j}|$, for any
matrix $B$.
#### Computational setting
The generic complexity model used in this work is the random-access machine
(RAM) model and the computational cost is measured in bit operations.
$\mathcal{M}\mathopen{}\mathclose{{}\left({k}}\right)$ denotes the complexity
of the multiplication of two integers of bit length at most $k$. It is also
the cost of the multiplication of two floating-point numbers at precision $k$,
since the cost of arithmetic over the exponents is negligible with regards to
the cost of arithmetic over the mantissae.
## 2\. Basics of Lattice Reduction
### 2.1. Orthogonalization
Let us fix a Euclidean space $(E,\langle{\cdot},{\cdot}\rangle)$, i.e. a real
vector space $E$ together with a positive-definite bilinear form
$\langle{\cdot},{\cdot}\rangle:E\times E\to\mathbf{R}$. As usual, two vectors
$x,y\in E$ are said to be orthogonal—with respect to the form
$\langle{\cdot},{\cdot}\rangle$—if $\langle{x},{y}\rangle=0$. More generally a
family of vectors is orthogonal if its elements are pairwise orthogonal.
Now consider $S=(b_{1},\dots,b_{r}),$ a family of linearly independent vectors
of $E$. The _flag_ $\mathcal{F}_{S}$ associated to $S$ is the finite
increasing chain of subspaces:
$b_{1}\mathbf{R}\subset b_{1}\mathbf{R}\oplus
b_{2}\mathbf{R}\subset\cdots\subset\bigoplus_{i=1}^{r}b_{i}\mathbf{R}.$
The orthogonal complement $S^{\bot}$ is defined as the subspace $\{x\in E\mid\forall i,\ \langle{x},{b_{i}}\rangle=0\}$. Denote by $\pi_{i}$ the orthogonal projection
on $(b_{1},\dotsc,b_{i-1})^{\bot}$, with the convention that $\pi_{1}$ is the
identity map. The Gram–Schmidt orthogonalization process—shorthanded as gso—is
an algorithmic method for orthogonalizing $S$ while preserving its flag. It
constructs the orthogonal set
$S^{*}=\mathopen{}\mathclose{{}\left(\pi_{1}(b_{1}),\ldots,\pi_{r}(b_{r})}\right)$.
The computation of $S^{*}$ can be done inductively as follows:
$\pi_{1}(b_{1})=b_{1},\qquad\forall\,1<i\leq r:\quad\pi_{i}(b_{i})=b_{i}-\sum_{j=1}^{i-1}\frac{\langle b_{i},\pi_{j}(b_{j})\rangle}{\langle\pi_{j}(b_{j}),\pi_{j}(b_{j})\rangle}\,\pi_{j}(b_{j}).$
Define the _Gram matrix_ , associated to a family of vectors
$S=(b_{1},\ldots,b_{r})$, as the symmetric matrix of scalar products:
$\mathcal{G}_{S}=\mathopen{}\mathclose{{}\left(\langle{b_{i}},{b_{j}}\rangle}\right)_{(i,j)\in[1\,\cdots\,r]^{2}}$.
The (co)volume of $S$, also called its determinant, is defined as the square
root of the Gram determinant $\det\mathcal{G}_{S}$. It can be easily computed
from the Gram-Schmidt vectors $S^{*}$ as:
$\operatorname{covol}(S)=\prod_{i=1}^{r}\|\pi_{i}(b_{i})\|.$
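For concreteness, the following is a minimal numpy sketch of this inductive process and of the covolume formula; the example basis is ours and the canonical scalar product is assumed.

```python
import numpy as np

def gram_schmidt(B):
    """Orthogonalize the rows of B while preserving its flag.

    Returns B_star whose i-th row is pi_{i+1}(b_{i+1})."""
    B_star = np.array(B, dtype=float)
    for i in range(len(B)):
        for j in range(i):
            # mu = <b_i, pi_j(b_j)> / <pi_j(b_j), pi_j(b_j)>
            mu = B[i] @ B_star[j] / (B_star[j] @ B_star[j])
            B_star[i] -= mu * B_star[j]
    return B_star

B = np.array([[3.0, 1.0], [2.0, 2.0]])  # an illustrative basis
B_star = gram_schmidt(B)
covol = np.prod(np.linalg.norm(B_star, axis=1))
print(covol)  # 4.0, which equals sqrt(det G_S) since det(B @ B.T) = 16
```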
### 2.2. Lattices and reduction
###### Definition 2.1.
A (real) _lattice_ $\Lambda$ is a finitely generated free $\mathbf{Z}$-module,
endowed with a positive-definite bilinear form $\langle{\cdot},{\cdot}\rangle$
on its ambient space $\Lambda\otimes_{\mathbf{Z}}\mathbf{R}$.
By definition of the tensor product, there is a canonical injection that sends
a vector $v$ to $v\otimes 1$ in the ambient space and preserves linear
independence. Thus, the rank of $\Lambda$ as a $\mathbf{Z}$-module, is equal
to the dimension of the vector space $\Lambda\otimes_{\mathbf{Z}}\mathbf{R}$.
Denoting by $d$ the rank of the lattice, a _basis_ of $\Lambda$ is a family
$b_{1},\ldots,b_{d}$ of elements of $\Lambda$ such that
$\Lambda=\bigoplus_{i=1}^{d}b_{i}\mathbf{Z}$.
In the sequel, we identify $\Lambda$ with its canonical image $\Lambda\otimes
1$ and thus view the lattice as an additive subgroup of its ambient space
$\Lambda\otimes_{\mathbf{Z}}\mathbf{R}$. When the context makes it clear, we
may omit to write down the bilinear form associated to a lattice $\Lambda$.
Throughout this section, $\|\cdot\|$ stands for the Euclidean norm induced by
$\langle{\cdot},{\cdot}\rangle$, unless stated otherwise. As usual, any two
bases $(b_{1},\ldots,b_{d})$ and $(b^{\prime}_{1},\ldots,b^{\prime}_{d})$ of
$\Lambda$ are related by a unimodular transformation, i.e., a linear
transformation represented by a $d\times d$ integer matrix of determinant $\pm
1$.
###### Lemma 2.2.
A lattice $\Lambda$ is discrete for the topology induced by the given norm on
its ambient space. I.e., there exists a real $\epsilon_{\Lambda}>0$ such that
for any pair $(x,y)$ of elements of $\Lambda$ with $x\neq y$ we have:
$\|x-y\|\geq\epsilon_{\Lambda}.$
The largest possible value for $\epsilon_{\Lambda}$ in the above inequality is
equal to the norm of the shortest non-zero vector of $\Lambda$, which is
traditionally called the _first minimum_ or the _minimum distance_ of the
lattice and denoted by $\lambda_{1}(\Lambda)$.
###### Proof.
Let $\mathcal{B}=(b_{1},\ldots,b_{d})$ be a basis of $\Lambda$. Let
$\mathcal{B}^{*}=(\pi_{1}(b_{1}),\ldots,\pi_{d}(b_{d}))$ be the orthogonal
basis obtained by applying Gram-Schmidt orthogonalization to the canonical
image of $\mathcal{B}$ in $\Lambda\otimes_{\mathbf{Z}}\mathbf{R}$. This
orthogonalization is taken using as scalar product the given bilinear form.
Assume by contradiction that there exist pairs of distinct vectors with the
norm of their difference arbitrarily small. Since the difference is also an
element of $\Lambda$, there are non-zero elements of arbitrarily small norm in
$\Lambda$. For any integer $i>0$, choose a vector $x_{i}$ in $\Lambda$ with
$\|x_{i}\|^{2}\leq 2^{-i}.$ Decompose $x_{i}$ in the basis $\mathcal{B}^{*}$
as $x_{i}=\sum_{j=1}^{d}\chi_{i}^{(j)}\pi_{j}(b_{j}).$ For any pair of
integers $i$, $j$ we see that
$|\chi_{i}^{(j)}|^{2}\,\|\pi_{j}(b_{j})\|^{2}\leq\|x_{i}\|^{2}\leq 2^{-i}.$ As
a consequence, each sequence $\chi^{(j)}$ converges to zero. Multiplying by
the basis-change matrix, we see that the coordinates of $x_{i}$ in the basis
$b_{1},\ldots,b_{d}$ also converge to zero. Since these coordinates are
integral, the sequences are ultimately constant and $x_{i}$ is also ultimately
constant (and null). This contradicts the choice of each $x_{i}$ as a non-zero
element. ∎
### 2.3. The LLL reduction algorithm
In 1982, Lenstra, Lenstra and Lovász [14] proposed a notion called _lll
reduction_ and a polynomial-time algorithm that computes an lll-reduced basis
from arbitrary basis of the same lattice. Their reduction notion is formally
defined as follows:
###### Definition 2.3 (lll reduction).
A basis $\mathcal{B}=(b_{1},\ldots,b_{d})$ of a lattice is said to be
$\delta$-lll-reduced for a parameter $1/4<\delta<1$, if the following
conditions are satisfied:
(1) $\forall i<j,\quad\left|\langle{b_{j}},{\pi_{i}(b_{i})}\rangle\right|\leq\frac{1}{2}\|\pi_{i}(b_{i})\|^{2}\quad\textrm{(size-reduction condition)}$

(2) $\forall i,\quad\delta\|\pi_{i}(b_{i})\|^{2}\leq\|\pi_{i+1}(b_{i+1})\|^{2}+\frac{\langle{b_{i+1}},{\pi_{i}(b_{i})}\rangle^{2}}{\|\pi_{i}(b_{i})\|^{2}}\quad\textrm{(Lovász condition)}$
In order to find a basis satisfying these conditions, it suffices to
iteratively modify the current basis at any point where one of these
conditions is violated. This yields the simplest version of the lll algorithm.
As in [14], it is only defined for full-rank sublattices of $\mathbf{Z}^{d}$.
It was remarked by Lovász and Scarf in [17] that the same algorithm also works
with an arbitrary integral-valued scalar product. The method can be extended
to deal with lattices described by a generating family rather than by a basis
[23].
Parameters: $\delta\in(1/4,1)$
Input: Initial basis $\mathcal{B}=({b_{1}},\ldots,{b_{d}})$
Result: A $\delta$-lll-reduced basis

$k\leftarrow 2$; compute the $\pi_{i}(b_{i})$'s with the gso process (Paragraph 2.1);
while $k\leq d$ do
  for $j=k-1$ downto $1$ do $b_{k}\leftarrow b_{k}-\left\lceil\frac{\langle{b_{k}},{\pi_{j}(b_{j})}\rangle}{\|{\pi_{j}(b_{j})}\|^{2}}\right\rfloor\cdot b_{j}$;
  if $\delta\|\pi_{k-1}(b_{k-1})\|^{2}\leq\|\pi_{k}(b_{k})\|^{2}+{\langle{b_{k}},{\pi_{k-1}(b_{k-1})}\rangle^{2}}/{\|\pi_{k-1}(b_{k-1})\|^{2}}$ then
    $k\leftarrow k+1$;
  else
    swap $b_{k}$ and $b_{k-1}$; update $\pi_{k}(b_{k})$ and $\pi_{k-1}(b_{k-1})$;
    $k\leftarrow\max(k-1,2)$;
end while
return $({b_{1}},\ldots,{b_{d}})$

Algorithm 1. The original lll algorithm.
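To make the loop structure concrete, here is a minimal, unoptimized Python sketch of Algorithm 1 over the canonical scalar product, using exact rational arithmetic (the fractions module) rather than the floating-point variants discussed below; the example basis is ours.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gso(B):
    # Orthogonalized vectors pi_i(b_i), recomputed from scratch for simplicity.
    Bs = [[Fraction(x) for x in row] for row in B]
    for i in range(len(B)):
        for j in range(i):
            mu = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [a - mu * b for a, b in zip(Bs[i], Bs[j])]
    return Bs

def lll(B, delta=Fraction(3, 4)):
    B = [[Fraction(x) for x in row] for row in B]
    d, k = len(B), 1
    Bs = gso(B)
    while k < d:
        for j in range(k - 1, -1, -1):  # size-reduction pass
            mu = dot(B[k], Bs[j]) / dot(Bs[j], Bs[j])
            B[k] = [a - round(mu) * b for a, b in zip(B[k], B[j])]
        Bs = gso(B)
        # Lovász condition between positions k-1 and k
        rhs = dot(Bs[k], Bs[k]) + dot(B[k], Bs[k - 1]) ** 2 / dot(Bs[k - 1], Bs[k - 1])
        if delta * dot(Bs[k - 1], Bs[k - 1]) <= rhs:
            k += 1
        else:
            B[k], B[k - 1] = B[k - 1], B[k]  # swap and backtrack
            Bs = gso(B)
            k = max(k - 1, 1)
    return B

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

Note that round() on a Fraction breaks half-integer ties to the even integer, which is acceptable here since, as noted above, any nearby integer suffices.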
#### 2.3.1. Decrease of the potential and complexity.
The algorithm can only terminate when the current lattice basis is lll-
reduced. Moreover, as shown in [14], it terminates in polynomial time when
$\delta<1$. Indeed, consider the (square of the) product of the covolumes of
the flag associated with a basis: $\prod_{i=1}^{d}\|\pi_{i}(b_{i})\|^{2(d-i+1)},$
which is often called its _potential_. This value decreases by a factor at
least $\delta^{-1}$ in each exchange step and is left unchanged by other
operations. Indeed:
* •
The flag is not modified by any operation other than swaps.
* •
A swap between ${b_{k}}$ and ${b_{k-1}}$ only changes the sublattice spanned
by the first $k-1$ vectors. The corresponding covolume
$\prod_{i=1}^{k-1}\|\pi_{i}(b_{i})\|^{2}$ decreases by a factor at least
$\delta^{-1}$ and so does the potential.
Since the total number of iterations can be bounded by twice the number of
swaps plus the dimension of the lattice, this suffices to conclude that it is
bounded by
$\textrm{O}\mathopen{}\mathclose{{}\left(d^{2}\log\|B\|_{\textrm{max}}}\right)$
where $B$ is the matrix of the initial basis.
As the cost of a loop iteration is of
$\textrm{O}\mathopen{}\mathclose{{}\left(d^{2}}\right)$ arithmetic operations
on _rational_ coefficients of length at most
$\textrm{O}\mathopen{}\mathclose{{}\left(d\log\|B\|_{\textrm{max}}}\right)$,
the total cost in terms of arithmetic operations is loosely bounded by
$\textrm{O}\mathopen{}\mathclose{{}\left(d^{6}\log^{3}\|B\|_{\textrm{max}}}\right)$.
By bounding more precisely the bit lengths of the integers
appearing in lll, this analysis can be improved. Kaltofen in [12] bounds the
complexity by
$\textrm{O}\mathopen{}\mathclose{{}\left(\frac{d^{5}\log^{2}\|B\|_{\textrm{max}}}{d+\log\|B\|_{\textrm{max}}}\mathcal{M}(d+\log\|B\|_{\textrm{max}})}\right).$
#### 2.3.2. A bound on the norm of reduced elements
###### Proposition 1.
Let $1/4<\delta<1$ be an admissible lll parameter. Let $(b_{1},\ldots,b_{d})$
be a $\delta$-lll reduced basis of a rank-$d$ lattice
$(\Lambda,\langle{\cdot},{\cdot}\rangle)$. Then for any $1\leq k\leq d$:
$\operatorname{covol}\mathopen{}\mathclose{{}\left({b_{1},\ldots,b_{k}}}\right)\leq\mathopen{}\mathclose{{}\left(\delta-\frac{1}{4}}\right)^{-\frac{(d-k)k}{4}}\operatorname{covol}\mathopen{}\mathclose{{}\left({\Lambda}}\right)^{\frac{k}{d}}.$
Note that this is an easy generalization of the bound on the norm of $b_{1}$
which is given in most texts. It appears among other related inequalities in
[22]. For completeness, a proof is given in the appendix.
#### 2.3.3. Floating point representation
The total cost of the lll algorithm is dominated by the cost of the
arithmetic on rational values. A first idea of De Weger [5] to overcome this
issue is to avoid the use of denominators by multiplying all the quantities by
their common denominator. This is slightly more efficient in practice but
doesn’t improve the asymptotics. Another idea is to remark that the norms of
the rational values remain small and to try to use approximations instead of
exact values. However, directly replacing rationals in the lll algorithm by
floating-point approximations leads to severe drawbacks. The algorithm might
not even terminate, and the output basis is not guaranteed to be lll-reduced.
The first _provable_ floating-point version of the algorithm is due to Schnorr
in [26], with complexity
$\textrm{O}\mathopen{}\mathclose{{}\left(d^{4}\log(\|B\|_{\textrm{max}})\mathcal{M}(d+\log\|B\|_{\textrm{max}})}\right)$.
One of the key ingredients to achieve this reduction is to slightly relax the
definition of the size-reduction, in order to compensate for the approximation
errors introduced by the use of floating-point arithmetic. We call
_admissible_ any parameters $(\delta,\eta)$ satisfying $1/4<\delta<1$, and
$1/2<\eta<\sqrt{\delta}$ and define:
###### Definition 2.4 ($(\delta,\eta)$-lll reduction).
Let $(\delta,\eta)$ be admissible parameters. A basis $\mathcal{B}$ of a
lattice is said to be $(\delta,\eta)$-LLL-reduced if the following condition
is satisfied:
(3) $\forall
i<j,\quad\mathopen{}\mathclose{{}\left|\langle{{b_{j}}},{\pi_{i}(b_{i})}\rangle}\right|\leq\eta{\|\pi_{i}(b_{i})\|^{2}}\quad\textrm{(Approximate
size-reduction condition)}$
together with the Lovász condition, which is kept unchanged from Definition
2.3.
Using naive multiplication, the cost of Schnorr’s algorithm is cubic in the
size of the numbers, i.e. in $\log(\|B\|_{\textrm{max}})$. The introduction of
approximate size reduction removes the need to know with extreme precision
values close to half-integers. Instead, approximate size reduction of such
values can be achieved by rounding either up or down in an arbitrary (possibly
randomized) manner. In our pseudo-code, we use a function called
$\eta$-Closest-Integer to achieve this rounding, returning an integer at
distance at most $\eta$ from the function's argument.
### 2.4. The $L^{2}$ algorithm
The l2 algorithm is a variant of the Schnorr–Euchner version [27] of lll. By
contrast with the original algorithm, l2 computes the gso coefficients on the
fly as they are needed instead of doing a full orthogonalization at the start.
It also uses a lazy size reduction inspired by the Cholesky factorization
algorithm. These optimizations yield an improved lattice reduction with
running time
$\textrm{O}\mathopen{}\mathclose{{}\left(d^{5}(d+\log(\|B\|_{\textrm{max}}))\log(\|B\|_{\textrm{max}})}\right).$
As usual in lattice reduction, while performing the Gram-Schmidt
orthogonalization of $\mathcal{B}$, we also compute the _QR_-decomposition of $B$
into $B^{*}\cdot M$, where $B^{*}$ is the matrix representing
$\left(\pi_{i}(b_{i})\right)_{1\leq i\leq d}$ and $M$ is the upper
unitriangular matrix whose coefficients for $j\geq i$ are
$M_{i,j}=\frac{\langle{b_{j}},{\pi_{i}(b_{i})}\rangle}{\|\pi_{i}(b_{i})\|^{2}}$.
Thus, the Gram matrix associated to the basis, i.e., $G=B^{T}B$ satisfies:
$G=M^{T}\cdot{B^{*}}^{T}\cdot B^{*}\cdot M=M^{T}\cdot D\cdot M$
where $D$ is a diagonal matrix whose entries are ${\|\pi_{i}(b_{i})\|^{2}}$.
We denote by $R$ the matrix $D\cdot M$, and thus have $G=R^{T}\cdot
M=M^{T}\cdot R.$
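As a sanity check on these relations, here is a small numpy sketch (on an illustrative basis of ours) verifying that $G=M^{T}DM=R^{T}M=M^{T}R$ with $R=DM$.

```python
import numpy as np

B = np.array([[3., 1., 0., 2.],
              [2., 2., 1., 0.],
              [1., 0., 4., 1.],
              [0., 3., 1., 5.]])   # rows are the basis vectors b_i

d = len(B)
Bs = B.copy()                      # rows will become the pi_i(b_i)
M = np.eye(d)                      # upper unitriangular
for i in range(d):
    for j in range(i):
        # M_{j,i} = <b_i, pi_j(b_j)> / ||pi_j(b_j)||^2
        M[j, i] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
        Bs[i] -= M[j, i] * Bs[j]

D = np.diag(np.sum(Bs * Bs, axis=1))  # D_{i,i} = ||pi_i(b_i)||^2
R = D @ M
G = B @ B.T                           # Gram matrix of the basis

assert np.allclose(G, M.T @ D @ M)
assert np.allclose(G, R.T @ M) and np.allclose(G, M.T @ R)
```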
We give the pseudo-code of the Lazy Size-Reduction procedure as Algorithm 2
and of the l2 algorithm as Algorithm 3. Both use classical formulas relating
$R$, $M$ and $B^{*}$ to perform the computations.
Input: Initial basis $\mathcal{B}=({b_{1}},\ldots,{b_{d}})$, with $G$, $R$ and $M$. An integer $1\leq k\leq d$.
Result: Size-reduces $b_{k}$, updates $G$, $R$, $M$ and returns $s^{(k)}$

done $\leftarrow$ false;
while done $=$ false do
  for $j=1$ to $k-1$ do
    ${R_{k,j}}\leftarrow G_{k,j}$; for $i=1$ to $j-1$ do $R_{k,j}\leftarrow R_{k,j}-M_{j,i}R_{k,i}$;
    ${M_{k,j}}\leftarrow{R_{k,j}}/{R_{j,j}}$;
  end for
  ${s}_{1}^{(k)}\leftarrow G_{k,k}$; for $j=2$ to $k$ do ${s}_{j}^{(k)}\leftarrow{s_{j-1}^{(k)}}-{M}_{k,j-1}\cdot{R_{k,j-1}}$;
  ${R_{k,k}}\leftarrow{s_{k}^{(k)}}$;
  if $(\max_{j<k}|M_{k,j}|)\leq\eta$ then done $\leftarrow$ true;
  else
    for $i=k-1$ downto $1$ do
      $X_{i}\leftarrow$ $\eta$-Closest-Integer$(M_{k,i})$;
      for $j=1$ to $i-1$ do $M_{k,j}\leftarrow M_{k,j}-X_{i}M_{i,j}$;
    end for
    $b_{k}\leftarrow b_{k}-\sum_{i=1}^{k-1}X_{i}b_{i}$; update $G$ accordingly;
  end if
end while

Algorithm 2. The lazy size reduction algorithm, $\eta$-LazyRed.
Parameters: $\delta\in(1/4,1)$, $\eta\in(1/2,\sqrt{\delta})$.
Input: Initial basis $\mathcal{B}=({b_{1}},\ldots,{b_{d}})$
Result: A $(\delta,\eta)$-lll-reduced basis

Compute $G=G({b_{1}},\ldots,{b_{d}})$ in exact integer arithmetic;
${R}_{1,1}\leftarrow G_{1,1}$;
$k\leftarrow 2$;
while $k\leq d$ do
  Apply size reduction $\eta$-LazyRed$(k)$;
  $k^{\prime}\leftarrow k$;
  while ($k\geq 2$ and $\delta{R}_{k-1,k-1}>{s}_{k-1}^{(k^{\prime})}$) do $k\leftarrow k-1$;
  $R_{k,k}\leftarrow s_{k}^{(k^{\prime})}$;
  if $k\neq k^{\prime}$ then
    for $i=1$ to $k-1$ do $M_{k,i}\leftarrow M_{k^{\prime},i}$; $R_{k,i}\leftarrow R_{k^{\prime},i}$;
    $R_{k,k}\leftarrow s_{k}^{(k^{\prime})}$;
    insert ${b_{k^{\prime}}}$ at position $k$ (before $b_{k}$) and update the matrix $G$ accordingly;
  end if
  $k\leftarrow k+1$;
end while
return $({b_{1}},\ldots,{b_{d}})$

Algorithm 3. The $L^{2}$ Algorithm.
#### 2.4.1. Precision required.
The precision required by the l2 algorithm is
$d\log\left(\frac{(1+\eta)^{2}}{\delta-\eta^{2}}+\epsilon\right)+o(d)$
bits for any $\epsilon>0$, i.e., almost linear in the dimension of the
lattice. Moreover, as discussed in [21], it appears that, even though this
bound can be shown to be sharp by specific examples, experiments indicate that
the number of bits required _on average_ is, in fact, lower.
This phenomenon is well known and is often exploited in existing algorithms
and software in the form of a compute-and-verify paradigm. For example, this is
the default strategy of the well-known FPLLL [29]. It relies on the fact that
verifying that a lattice basis is indeed reduced is much less costly than the
reduction itself, as shown in [30]. In addition, it is necessary to take
several conservative measures in order to prevent the implementation from
entering potentially infinite loops.
The approach we propose deviates from this paradigm. Instead of guaranteeing
the end-result, we want to make sure that the whole computation follows the
mathematical definition of the algorithm. With low-precision approximations,
it is unclear how this could be done. However, interval-arithmetic offers a
neat solution to achieve this goal.
## 3\. Interval Arithmetic and its certification property
Interval arithmetic is a representation of reals by intervals that contain
them. For instance, one can specify a value $x$ with an error $\epsilon$ by
giving an interval of length $\epsilon$ containing $x$. For example, the
constant $\pi$ can be represented with an error of $10^{-2}$ by the interval
$[3.14,3.15]$. Interval arithmetic is crucial in the context of _certified_
numerical computations, where reals can only be represented with finite
precision. For more details, the interested reader can consult an extensive
reference, such as [19].
In the following, we denote by $\underline{x}$ a closed interval
$[\underline{x}^{-},\underline{x}^{+}]$. We define its _diameter_ as the
positive real $\underline{x}^{+}-\underline{x}^{-}$ and its _center_ as the
real $\frac{1}{2}(\underline{x}^{+}+\underline{x}^{-})$.
Given a real-valued function $f(x_{1},\ldots,x_{n})$, an interval-arithmetic
realization of $f$ is an interval-valued function $F$ such that the interval
$F(\underline{x_{1}},\ldots,\underline{x_{n}})$ contains all the values
$f(x_{1},\ldots,x_{n})$ for $(x_{1},\ldots,x_{n})$ in
$\underline{x_{1}}\times\cdots\times\underline{x_{n}}$.
If $F$ always returns the smallest possible interval, it is called a _tight_
realization, otherwise it is called _loose_. In practice, tight realizations
can only be achieved in very simple specific cases. However, even a loose
realization can suffice to certify the correctness of a computation.
Another important property of interval arithmetic is that it can be used to
compare numbers in a certified way, as long as the intervals that represent
them are disjoint.
### 3.1. Some useful interval-arithmetic realizations
#### 3.1.1. Integral representation of fixed length
A first convenient way to represent reals at finite precision is to use
integers as an approximate representation.
###### Definition 3.1 (Integral representation of reals).
Let $x\in\mathbf{R}$ be an arbitrary real number and $n\geq 0$ a non-negative
integer. Define an _integral representation at accuracy $n$_ (we use
"accuracy" rather than "precision" here to avoid confusion with the
floating-point precision defined in Paragraph 3.1.3) as an interval of
diameter $2$:
$\underline{x}_{n}=\left[X_{n}-1,X_{n}+1\right]$
together with a guarantee that $2^{n}x$ belongs to $\underline{x}_{n}$.
This representation is very compact, since it requires storing only the
center $X_{n}$ of the interval, using $n+\lceil\log x\rceil$ bits. However,
computing with this form of representation is not convenient. As a
consequence, we only use it to represent immutable values and we convert to a
different representation for computations. The reason for using the interval
$\mathopen{}\mathclose{{}\left[X_{n}-1,X_{n}+1}\right]$ of diameter $2$ rather
than $\mathopen{}\mathclose{{}\left[X_{n}-1/2,X_{n}+1/2}\right]$ (of diameter
$1$) is that when $2^{n}x$ is very close to a half-integer, it remains
possible to easily provide a valid value for $X_{n}$ without computing
extraneous bits of the representation of $x$.
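A minimal Python sketch of this representation (the helper name is ours): any integer within distance $1$ of $2^{n}x$ is a valid center, which is precisely why half-integer ties never need to be resolved.

```python
from fractions import Fraction

def integral_center(x: Fraction, n: int) -> int:
    """Return X_n such that 2^n * x lies in [X_n - 1, X_n + 1].

    Truncation is within distance 1 of 2^n * x, so it is always valid."""
    return int(x * 2**n)

x = Fraction(355, 113)              # a rational stand-in for a real number
X = integral_center(x, 10)
assert abs(Fraction(X) - x * 2**10) <= 1
print(X, (X - 1, X + 1))            # the interval representing 2^10 * x
```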
#### 3.1.2. Fixed-point representations
In the context of lattice reduction, it is useful to compute linear
combinations with exact integral coefficients. In order to do that with
approximate values initially given by centered integral representation, it is
possible to use a fixed-point representation.
###### Definition 3.2 (Fixed point representation of reals).
Let $x\in\mathbf{R}$ be an arbitrary real number and $n\geq 0$ a non-negative
integer. Define a _fixed-point representation at accuracy $n$_ of radius
$\delta$ as an interval:
$\underline{x}_{n}=\mathopen{}\mathclose{{}\left[X_{n}-\delta,X_{n}+\delta}\right]$
together with a guarantee that $2^{n}x$ belongs to $\underline{x}_{n}$.
It is easy to add or subtract such intervals by doing the computation on the
center and by adding the two radii. It is also easy to multiply by an exact
integer by multiplying the center by the integer and the radius by its
absolute value. Integral representations are a special case of fixed-point
representations, with radius equal to $1$.
#### 3.1.3. Floating-point representation.
Another way to handle real values is to use floating point representations of
the two bounds of each interval. For example, if we denote by $\lfloor
x\rfloor_{n}$ and $\lceil x\rceil_{n}$ respectively the largest floating-point
number below $x$ and the lowest floating-point number above $x$ written with
$n$ bits, the tightest floating-point representation of $x$ with $n$ bits of
precision is the interval $I_{n}(x)=\mathopen{}\mathclose{{}\left[\lfloor
x\rfloor_{n},\lceil x\rceil_{n}}\right]$.
With such a representation, it becomes possible to create a realization of the
elementary operations by using careful rounding when computing approximations
of the bounds of the resulting interval, as shown in Figure 1. When speaking
of the precision of such a representation, we simply refer to the common
floating-point precision of the upper and lower bounds.
Once the elementary operations are available, they can be used to implement
certified versions of any function that can classically be computed with
floating point arithmetic.
$[\underline{x}^{-},\underline{x}^{+}]+[\underline{y}^{-},\underline{y}^{+}]=[\underline{x}^{-}+^{-}\underline{y}^{-},\ \underline{x}^{+}+^{+}\underline{y}^{+}]$

$[\underline{x}^{-},\underline{x}^{+}]-[\underline{y}^{-},\underline{y}^{+}]=[\underline{x}^{-}-^{-}\underline{y}^{+},\ \underline{x}^{+}-^{+}\underline{y}^{-}]$

$[\underline{x}^{-},\underline{x}^{+}]\times[\underline{y}^{-},\underline{y}^{+}]=[\min^{-}(\rho),\ \max^{+}(\rho)]\quad\text{where}\quad\rho=\{\underline{x}^{-}\underline{y}^{-},\ \underline{x}^{+}\underline{y}^{-},\ \underline{x}^{-}\underline{y}^{+},\ \underline{x}^{+}\underline{y}^{+}\}$

$[\underline{x}^{-},\underline{x}^{+}]^{-1}=\left[\min^{-}\left(\frac{1}{\underline{x}^{+}},\frac{1}{\underline{x}^{-}}\right),\ \max^{+}\left(\frac{1}{\underline{x}^{+}},\frac{1}{\underline{x}^{-}}\right)\right]\quad\text{(assuming $0\notin[\underline{x}^{-},\underline{x}^{+}]$)}$

_Here $+^{+}$ and $+^{-}$ denote the $+$ operator with rounding up and down respectively; the same convention applies to the $-^{+},-^{-},\min^{-},\max^{+}$ operators._

Figure 1. Basic arithmetic operators in Interval Arithmetic
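A minimal Python sketch of these operators (class and method names are ours), using math.nextafter to widen each result by one ulp on each side as a conservative stand-in for directed rounding; it also illustrates the certified comparison mentioned above, returning None when the intervals overlap.

```python
import math

class Interval:
    """Closed interval [lo, hi]; results are widened outward by one ulp."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    @staticmethod
    def _out(lo, hi):
        # Loose but certified: one extra ulp on each side covers the rounding
        # error of a single IEEE operation performed in round-to-nearest.
        return Interval(math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

    def __add__(self, o):
        return Interval._out(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval._out(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval._out(min(p), max(p))

    def inverse(self):
        assert self.lo > 0 or self.hi < 0, "interval must not contain 0"
        return Interval._out(min(1 / self.lo, 1 / self.hi),
                             max(1 / self.lo, 1 / self.hi))

    def leq(self, o):
        # Certified comparison: a definite answer only if intervals are disjoint.
        if self.hi <= o.lo: return True
        if o.hi < self.lo: return False
        return None

pi = Interval(3.14, 3.15)
sq = pi * pi
print(sq.lo, sq.hi)  # a certified enclosure of pi^2
```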
## 4\. Approximate lattices
The need to reduce lattices given by approximations, especially for
number-theoretic applications, has long been recognized. In particular,
Buchmann gives in [3] a bound on the required precision to achieve this goal
by using a direct approximation of the input basis. However, this bound is computed in
terms of a quantity called the defect that can be very large and also involves
the first minimum of the lattice.
Using interval arithmetic, it becomes possible to get finer control on the
precision required to perform the lattice reduction, even with approximate
lattices.
### 4.1. Approximate representation of a positive-definite matrix
A matrix with real entries can easily be represented with the integral
representation from Definition 3.1, using the same accuracy for all of its
entries.
###### Definition 4.1 (Matrix integral representation).
Let $A=(a_{i,j})_{i,j}\in\mathbf{R}^{d\times d}$ be an arbitrary real matrix
of dimension $d$ and $n>0$ be a fixed positive integer. A matrix of intervals
$\underline{A}_{n}=(\underline{a_{i,j}}_{n})_{(i,j)\in[1\,\cdots\,d]^{2}},$
where each $\underline{a_{i,j}}_{n}$ is an integral representation of
$a_{i,j}$ is said to _integrally represent $A$ at accuracy $n$_.
We may omit the subscript $n$ when the accuracy is clear from the context.
Given a matrix $A$, and a matrix $B\in\underline{A}_{n}$, there exists a
unique $d\times d$ matrix $\Delta$ with entries in $[-2,2]$ such that
$B=2^{n}A+\Delta$.
In particular, we may apply this representation to symmetric matrices. In that
case, we obtain the following useful lemma:
###### Lemma 4.2.
Let $S=(s_{i,j})_{i,j}\in\mathcal{S}_{d}(\mathbf{R})$ be a symmetric matrix of
dimension $d$ and $\underline{S}_{n}$ an integral representation of $S$ at
accuracy $n$. Then, for any symmetric matrix $S^{\prime}$ in
$\underline{S}_{n}$, we have:
$2^{n}\lambda_{d}(S)-2d\leq\lambda_{d}(S^{\prime})\leq
2^{n}\lambda_{d}(S)+2d,$
where $\lambda_{d}(T)$ denotes the smallest eigenvalue of a $d$-dimensional
symmetric matrix $T$.
###### Proof.
This is a direct consequence of Weyl’s inequalities for Hermitian matrices and
of the relation $S^{\prime}=2^{n}S+\Delta$, where $\Delta$ is real symmetric
with entries in $[-2,2]$. Note that the eigenvalues of $\Delta$ all belong to
$[-2d,2d]$. ∎
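A quick numerical illustration of this lemma (the matrices are illustrative and ours): a symmetric perturbation with entries in $[-2,2]$ moves the smallest eigenvalue of $2^{n}S$ by at most $2d$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 8
A = rng.standard_normal((d, d))
S = A @ A.T + np.eye(d)            # an illustrative symmetric positive-definite S

Delta = rng.uniform(-2, 2, (d, d))
Delta = (Delta + Delta.T) / 2      # symmetric, entries still in [-2, 2]
S_prime = 2**n * S + Delta         # a symmetric matrix inside the representation

lam = np.linalg.eigvalsh(2**n * S)[0]   # smallest eigenvalue, i.e. 2^n * lambda_d(S)
lam_prime = np.linalg.eigvalsh(S_prime)[0]
assert abs(lam_prime - lam) <= 2 * d    # Weyl's inequality bound
print(lam, lam_prime)
```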
### 4.2. Representation of lattices
In order to represent arbitrary lattices, we first need a description of their
ambient space. We simply describe the ambient space $V$ of dimension $d$ by
providing a basis $\gamma=(\gamma_{1},\ldots,\gamma_{d})$. Then, the scalar
product $\langle{\cdot},{\cdot}\rangle$ on $V$ can be encoded by a Gram matrix
$\mathcal{G}_{\gamma}=\mathopen{}\mathclose{{}\left(\langle{\gamma_{i}},{\gamma_{j}}\rangle}\right)_{(i,j)\in[1\,\cdots\,d]^{2}}$.
When the Gram matrix $\mathcal{G}_{\gamma}$ is integral, this already is a
standard description of the lattice $\Gamma$ spanned by $\gamma$. This
representation appears in particular in [4, Proposition 2.5.3]. We now extend
this in order to represent bases and generating families of arbitrary
sublattices of $\Gamma$. Let $\Lambda$ be a rank $r\leq d$ sublattice of
$\Gamma$ given by a generating family
$\ell=\mathopen{}\mathclose{{}\left(\ell_{1},\ldots,\ell_{p}}\right)$. Since
any vector in $\ell$ belongs to $\Gamma$, it can be expressed with integral
coordinates in the basis $\gamma$. As a consequence, we can represent $\ell$
by a $p\times d$ integral matrix $L$. Moreover, the knowledge of
$\mathcal{G}_{\gamma}$ allows us to easily compute the scalar product of any
pair of vectors in $\Lambda$.
All this leads to the following definition:
###### Definition 4.3 (Approximate representation of a lattice).
Let $\mathcal{G}_{\gamma}$ and $L$ be as above and $n$ be a non-negative
integer. Denote by $G$ the matrix of centers of an integral representation
$\underline{\mathcal{G}_{\gamma}}_{n}$ at accuracy $n$ of the Gram matrix
$\mathcal{G}_{\gamma}$. Then the pair $(G,L)\in\mathbf{Z}^{d\times
d}\times\mathbf{Z}^{p\times d}$ of integral matrices is said to _represent at
accuracy $n$_ the lattice $\Lambda$ in the basis $\gamma$ of $\Gamma$.
#### 4.2.1. Computation of the inner product in Interval Arithmetic.
Let $a$ and $b$ be two vectors of $\Lambda$ described by their vectors $A$ and
$B$ of coordinates in the basis $\gamma$. We know that:
$\langle{a},{b}\rangle=A^{T}\cdot\mathcal{G}_{\gamma}\cdot B.$
Thus:
$2^{n}\langle{a},{b}\rangle=A^{T}\cdot G\cdot B+A^{T}\cdot\Delta\cdot B,\quad\text{where}\quad|A^{T}\cdot\Delta\cdot B|\leq\left(\sum_{i}|A_{i}|\right)\left(\sum_{i}|B_{i}|\right).$
This directly gives an interval representation of $\langle{a},{b}\rangle$.
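A short Python sketch of this certified inner product (the function name and data are ours): the center is computed exactly in integer arithmetic, and the radius comes from the bound above.

```python
def interval_inner_product(A, B, G):
    """Certified enclosure of 2^n * <a, b> from integer coordinate vectors A, B
    and the integer matrix of centers G (exact integer arithmetic throughout)."""
    d = len(A)
    center = sum(A[i] * G[i][j] * B[j] for i in range(d) for j in range(d))
    radius = sum(abs(a) for a in A) * sum(abs(b) for b in B)
    return center - radius, center + radius

# Illustrative coordinates in the basis gamma and a 2x2 matrix of centers.
lo, hi = interval_inner_product([1, -2], [3, 1], [[256, 10], [10, 300]])
print(lo, hi)
```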
### 4.3. Lattice reduction of approximate lattices
Suppose now that the Gram matrix
$\mathcal{G}_{\gamma}=\mathopen{}\mathclose{{}\left(\langle{\gamma_{i}},{\gamma_{j}}\rangle}\right)_{(i,j)\in[1\,\cdots\,d]^{2}}$
representing the inner product of the ambient space
$\Gamma\otimes_{\mathbf{Z}}\mathbf{R}$ in the basis $\gamma$ is given
indirectly by an algorithm or an oracle $\mathcal{O}_{\gamma}$ that can
compute each entry at any desired accuracy. We can restate the definition of a
reduced basis in this framework as:
###### Definition 4.4 ($(\delta,\eta)$-lll reduction).
Let $(\delta,\eta)$ be admissible lll parameters. Given an integral matrix
$L\in\mathbf{Z}^{p\times d}$ which describes the vectors of a basis of a
lattice $\Lambda$ in the basis $\gamma$, we say that
$(\mathcal{G}_{\gamma},L)$ is a $(\delta,\eta)$-lll reduced basis of $\Lambda$
if and only if there exists an $n_{0}>0$ such that for any $n\geq n_{0}$ there
exists a pair $({\underline{G}}_{n},L)$, where ${\underline{G}}_{n}$ is an
integral representation of $\mathcal{G}_{\gamma}$ at accuracy $n$, which is a
$(\delta,\eta)$-lll reduced basis.
The computational problem associated with reduction theory can then be written
as:
###### Problem (Lattice Reduction for approximate representation).
Let $\delta,\eta$ be admissible lll parameters. Given as input an algorithm or
oracle to compute $\mathcal{G}_{\gamma}$ at arbitrary precision and an
integral matrix $L\in\mathbf{Z}^{p\times d}$ that describes the vectors of a
generating family of a lattice $\Lambda$ in the basis $\gamma$: find a basis
$L^{\prime}$ of $\Lambda$ such that $(\mathcal{G}_{\gamma},L^{\prime})$ is a
$(\delta,\eta)$-lll reduced basis in the sense of Definition 4.4.
Note that using interval arithmetic it suffices to check the
$(\delta,\eta)$-lll reduction condition at accuracy $n_{0}$ to be sure it
holds at any larger accuracy. Indeed, an integral representation that
satisfies the condition can be refined into a more precise integral
representation by scaling up the integer representing the center by an
adequate power of two. This refined representation continues to satisfy the
condition.
#### 4.3.1. Accuracy of representation and space complexity.
Let $({\underline{G}}_{n},L)$ be an integral representation of $\Lambda$, at
accuracy $n$. Then, the magnitude of the entries of $G$ is $2^{n}$ times the
magnitude of the entries of $\mathcal{G}_{\gamma}$. Thus,
${\underline{G}}_{n}$ can be encoded using
$\textrm{O}\mathopen{}\mathclose{{}\left(d^{2}(n+\log\|\mathcal{G}_{\gamma}\|_{\textrm{max}})}\right)$
bits.
## 5\. Generalized LLL reduction with Interval Arithmetic
In this Section, we adapt lattice reduction algorithms to our setting. More
precisely, we represent the information related to Gram-Schmidt vectors by
interval arithmetic using a floating-point representation as described in
Section 3.1.3. For the representation of the lattice itself, we consider two
cases: either the underlying Gram matrix is integral, or it is given by an
approximate integral representation as in Section 4.1. In the latter case, our
algorithm also asks for representations with higher accuracy until it is
sufficient to yield a reduced basis for the given lattice. The canonical case
with the standard Euclidean scalar product is achieved by setting the Gram
matrix to the (exact) identity matrix.
### 5.1. Interval Arithmetic $L^{2}$ reduction with fixed precision.
We first consider the simplified case where the lattice representation is
fixed. It can be either exact or approximate with a given accuracy. In both
cases, we fix a basis $\gamma=(\gamma_{1},\ldots,\gamma_{d})$ and a
representation of a lattice $\Lambda$ in this basis. It is respectively an
exact integral representation $(G,L)$ or an approximate representation
$({\underline{G}}_{n},L)$ at accuracy $n$ of $(\mathcal{G}_{\gamma},L)$.
#### 5.1.1. Using Interval Arithmetic in lll.
We now modify the l2 algorithm of [21] in a few relevant places to make use of
interval arithmetic instead of floating-point arithmetic for the Gram-Schmidt-
related values. Since the description of the lattice $\Lambda$ is already
using intervals, it seems natural to use interval arithmetic in the lattice
reduction algorithm. For completeness, when the input Gram matrix is exact, we
make the updates to the Gram-Schmidt orthogonalized matrix used by lll
explicit in the algorithm (except the simple displacements). This also
emphasizes a subtle difference with the case of an approximate input Gram
matrix. Indeed, in that case, we update the gso-values but recompute the
errors rather than relying on the interval arithmetic to do it. This is
important to gain fine control over the error growth during updates.
In addition, when using the technique from [23] to be able to deal with
lattices given by a generating family instead of a basis, we make a slightly
different choice than in [21]. Instead of moving the zero vectors encountered
during the reduction to the start of the
basis, we simply remove them. Note that with an approximate matrix, if we
discover a non-zero vector whose length is given by an interval containing
$0$, it is not possible to continue the computation. This means that the
accuracy of the input is insufficient and we abort. The core modification with
interval arithmetic appears while testing the Lovász condition. If it is not
possible to decide whether the test is true or false because of interval
overlap, we also abort due to lack of precision. To be more precise, when
testing the Lovász condition, we also need to check that the corresponding
$\mu$ coefficient is indeed smaller than $\eta$. The reason for this is that,
when called with insufficient precision, the Lazy reduction routine may fail
to ensure that property.
In addition, if a negative number occurs when computing the norm of a vector,
it means that the given Gram matrix is not positive-definite and the algorithm
returns an error accordingly.
Input: Initial basis $L=({L_{1}},\ldots,{L_{d}})$, precomputed (internal) Gram matrix $Gram$, interval matrices $\underline{R}$ and $\underline{M}$, an integer $1\leq k\leq d$.
Result: Size-reduces the $k$-th vector of $L$ and updates the Gram matrix $Gram$.

done $\leftarrow$ false;
while done $=$ false do
  for $j=1$ to $k-1$ do
    $\underline{R_{k,j}}\leftarrow\textsc{ConvertToFPinterval}({Gram_{k,j}})$;
    for $i=1$ to $j-1$ do $\underline{R_{k,j}}\leftarrow\underline{R_{k,j}}-\underline{M_{j,i}}\,\underline{R_{k,i}}$;
    $\underline{M_{k,j}}\leftarrow\underline{R_{k,j}}/\underline{R_{j,j}}$;
  end for
  $\underline{{s}_{1}^{(k)}}\leftarrow\textsc{ConvertToFPinterval}({Gram_{k,k}})$;
  for $j=2$ to $k$ do $\underline{{s}_{j}^{(k)}}\leftarrow\underline{s_{j-1}^{(k)}}-\underline{{M}_{k,j-1}}\cdot\underline{R_{k,j-1}}$;
  $\underline{R_{k,k}}\leftarrow\underline{s_{k}^{(k)}}$;
  $\underline{\tau}\leftarrow\max_{j<k}|\underline{M_{k,j}}|$;
  ret $\leftarrow(\underline{\tau}\leq\eta)$;
  if ret $\neq$ false then done $\leftarrow$ true;
  else
    for $i=k-1$ downto $1$ do
      $X_{i}\leftarrow$ $\eta$-IntervalClosestInteger$(\underline{M_{k,i}})$;
      for $j=1$ to $i-1$ do $\underline{M_{k,j}}\leftarrow\underline{M_{k,j}}-X_{i}\underline{M_{i,j}}$;
      $L_{k}\leftarrow L_{k}-X_{i}L_{i}$;
      // Update the Gram matrix accordingly
      ${Gram_{k,k}}\leftarrow{Gram_{k,k}}-2X_{i}{Gram_{k,i}}+X_{i}^{2}{Gram_{i,i}}$;
      for $j=1$ to $i$ do ${Gram_{k,j}}\leftarrow{Gram_{k,j}}-X_{i}{Gram_{i,j}}$;
      for $j=i+1$ to $k-1$ do ${Gram_{k,j}}\leftarrow{Gram_{k,j}}-X_{i}{Gram_{j,i}}$;
      for $j=k+1$ to $d$ do ${Gram_{j,k}}\leftarrow{Gram_{j,k}}-X_{i}{Gram_{j,i}}$;
    end for
  end if
end while

Algorithm 4. The (interval) lazy size reduction algorithm, $\eta$-ILazyRed.
Parameters: $\delta\in(1/4,1)$, $\eta\in(1/2,\sqrt{\delta})$ admissible lll parameters, $\ell\in\mathbf{N}$ the internal precision used for the floating-point representation.
Input: Exact representation $(G,L)$ or approximate representation $(\underline{G_{n}},L)$ of a lattice given by $p$ generating vectors in dimension $d$.
Result: A $(\delta,\eta)$-lll-reduced basis $L^{\prime}$ (with $\dim(L)$ vectors).

$k\leftarrow 2$;
// Compute the Gram matrix of the basis represented by $L$
for $i=1$ to $p$, for $j=1$ to $i$ do
  if Exact then $GramL_{i,j}\leftarrow L_{i}^{T}GL_{j}$;
  else $GramL_{i,j}\leftarrow$ interval of center $L_{i}^{T}G_{n}L_{j}$ and radius $\|L_{i}\|_{1}\|L_{j}\|_{1}$;
end for
$\underline{R_{1,1}}\leftarrow\textsc{ConvertToFPinterval}(GramL_{1,1})$;
while $k\leq p$ do
  // Size-reduce $L_{k}$ with intervals on the family $(L_{1},\ldots,L_{k-1})$
  $\eta$-ILazyRed($k$, Exact);
  if Exact = false then for $j=1$ to $k$ do update the radius of $GramL_{k,j}$ to $\|L_{k}\|_{1}\|L_{j}\|_{1}$ (rounded up with $\ell$ significant bits);
  $k^{\prime}\leftarrow k$;
  while $k\geq 2$ do
    ret $\leftarrow\left(\underline{M_{k^{\prime},k-1}}\leq\eta\right)$ and $\left(\underline{\delta}\cdot\underline{R_{k-1,k-1}}>\underline{s_{k-1}^{(k^{\prime})}}\right)$;
    if ret = true then $k\leftarrow k-1$;
    else if ret = false then break;
    else return ErrorPrecision;
  end while
  if $k\neq k^{\prime}$ then
    for $i=1$ to $k-1$ do $\underline{M_{k,i}}\leftarrow\underline{M_{k^{\prime},i}}$; $\underline{R_{k,i}}\leftarrow\underline{R_{k^{\prime},i}}$;
    $\underline{R_{k,k}}\leftarrow\underline{s_{k}^{(k^{\prime})}}$;
    $L_{tmp}\leftarrow L_{k^{\prime}}$; for $i=k^{\prime}$ downto $k+1$ do $L_{i}\leftarrow L_{i-1}$;
    $L_{k}\leftarrow L_{tmp}$; move values in ${GramL}$ accordingly;
  else
    $\underline{R_{k,k}}\leftarrow\underline{s_{k}^{(k^{\prime})}}$;
    if $0\in\underline{R_{k,k}}$ and $L_{k}\neq 0$ then return ErrorAccuracy;
    if $\underline{R_{k,k}}<0$ then return ErrorNonPosDefinite;
  end if
  if $L_{k}=0$ then
    for $i=k$ to $p-1$ do $L_{i}\leftarrow L_{i+1}$;
    $p\leftarrow p-1$; $k\leftarrow k-1$; move values in ${GramL}$ accordingly;
  end if
  $k\leftarrow\max(k+1,2)$;
end while
return $L$

Algorithm 5. The $\overline{\textrm{lll}}$ Algorithm.
#### 5.1.2. Internal precision in the exact-input case.
For the classical l2 algorithm, Section 2.4.1 states that the precision that
is needed for the computations only depends on the dimension of the lattice.
It is natural to ask a similar question about the algorithm
$\overline{\textrm{lll}}$: can the required internal precision be bounded
independently of the entries appearing in the matrices $G$ and $L$? When $G$
is exact, i.e., integral, the adaptation is straightforward and we obtain the
following result.
###### Theorem 5.1.
Let $(\delta,\eta)$ be admissible lll parameters. Let
$c>\log\frac{(1+\eta)^{2}}{\delta-\eta^{2}}$ and let
$(\Lambda,\langle{\cdot},{\cdot}\rangle)$ denote a rank-$d$ lattice, exactly
described by the pair $(G,L)$. Let $B$ denote the maximum entry in absolute
value of $L^{T}GL$. Then, the $\overline{\textrm{lll}}$ algorithm (Algorithm 5) used with
$\ell=cd+o\mathopen{}\mathclose{{}\left(d}\right)$ outputs a
$(\delta,\eta)$-lll-reduced basis in time
$\textrm{O}\mathopen{}\mathclose{{}\left(d^{3}\log{B}(d+\log{B})\mathcal{M}\mathopen{}\mathclose{{}\left({d}}\right)}\right).$
Furthermore, if $\tau$ denotes the number of main loop iterations, the running
time is
$\textrm{O}\mathopen{}\mathclose{{}\left(d(\tau+d\log{dB})(d+\log{B})\mathcal{M}\mathopen{}\mathclose{{}\left({d}}\right)}\right).$
In fact, the bound on $\ell$ is made explicit in [21]. More precisely, it states that for an arbitrary $C>0$ and an $\epsilon\in(0,1/2]$, it suffices to have:
$\ell\geq 10+2\log_{2}{d}-\log_{2}{\min(\epsilon,\eta-1/2)}+d\,(C+\log_{2}\rho)\quad\mbox{where }\rho=\frac{(1+\eta)^{2}+\epsilon}{\delta-\eta^{2}}.$
For example, choosing $C=\epsilon=\eta-1/2$, it suffices to have:
$\ell\geq T(d,\delta,\eta)=10+2\log_{2}{d}-\log_{2}{(\eta-1/2)}+(\eta-1/2+\log_{2}\rho)\,d.$
When $\delta$ is close to $1$ and $\eta$ is close to $1/2$, the constant in front of $d$ becomes smaller than $1.6$.
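This explicit bound is easy to evaluate numerically; the sketch below is a direct transcription of $T(d,\delta,\eta)$ with the choice $C=\epsilon=\eta-1/2$ (Python is used purely for illustration).

```python
import math

def precision_bound_T(d, delta, eta):
    """T(d, delta, eta): explicit internal-precision bound, a direct
    transcription of the displayed formula with C = eps = eta - 1/2.
    Assumes admissible parameters: 1/4 < delta < 1, 1/2 < eta < sqrt(delta)."""
    eps = eta - 0.5
    rho = ((1 + eta) ** 2 + eps) / (delta - eta ** 2)
    return 10 + 2 * math.log2(d) - math.log2(eps) + (eps + math.log2(rho)) * d

# For delta close to 1 and eta close to 1/2 the coefficient of d is below 1.6:
print(precision_bound_T(100, 0.999, 0.501))  # about 192 bits
```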
#### 5.1.3. Dealing with approximate inputs.
When dealing with lattices given in an approximate form, i.e., by a
representation $({\underline{G}}_{n},L)$ at accuracy $n$ of
$(\mathcal{G}_{\gamma},L)$, the analysis of the algorithms differs in three
main places:
* •
When bounding the number of rounds $\tau$, we can no longer assume that the
potential is an integer. As a consequence, in order to keep a polynomial bound
on $\tau$, we need to provide a lower bound on the possible values of the
potential, rather than rely on the trivial lower bound of $1$ for an integral-
valued potential.
* •
Since the notion of lll-reduction is only well-defined for a positive definite $G$, we need to make sure that ${\underline{G}}_{n}$ remains positive definite during the algorithm. Otherwise, an error should be output; Algorithm 5 returns an error indicating that ${\underline{G}}_{n}$ is incorrect whenever it encounters a vector with a negative norm.
* •
When ${\underline{G}}_{n}$ is approximate, the scalar products between lattice vectors can no longer be computed exactly. Thus, we need to be able to make sure that the errors are small enough to be compatible with the inner precision used for Gram-Schmidt values. At first glance, this might seem easy. However, when using update formulas to avoid recomputation of scalar products, the error estimates provided by interval arithmetic can grow quite quickly. In fact, this would prevent the update strategy from working. The key insight is to remark that since the centers of the intervals are represented by integers, any computation on them is exact and we can use update formulas to compute them. However, it is essential to recompute the radii of the intervals, i.e., the errors, to prevent them from growing too quickly, as in the sketch below.
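The following minimal Python sketch illustrates this mechanism on a single Gram entry, in the spirit of line 4 of Algorithm 5 and the radius-update step: the center is an exact integer computation, while the radius is recomputed from the $\ell_1$ norms instead of being propagated through update formulas.

```python
def l1_norm(v):
    return sum(abs(x) for x in v)

def gram_entry_interval(Li, Lj, Gn):
    """Interval [center - radius, center + radius] for the scalar product of
    lattice vectors L_i, L_j under the approximate Gram matrix G_n.
    Li, Lj are integer coordinate vectors and Gn an integer matrix, so the
    center is computed exactly; the radius ||L_i||_1 * ||L_j||_1 is
    recomputed from scratch, which keeps the error bounds from blowing up."""
    d = len(Li)
    center = sum(Li[a] * Gn[a][b] * Lj[b] for a in range(d) for b in range(d))
    radius = l1_norm(Li) * l1_norm(Lj)
    return center, radius
```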
#### Number of rounds
Since interval arithmetic allows us to emulate exact computations as long as no failures are detected, we can analyze the number of rounds by assuming that all computations on non-integral values are done using an exact arithmetic oracle. In this context, the number of rounds can be studied by considering the potential as usual. Recall that in the initial setting, where lll operates on a basis, the potential is defined as
$D(B)=\prod_{i=1}^{d}\operatorname{covol}\left({B_{[1\ldots i]}}\right).$
The key argument is that it decreases by a multiplicative factor whenever an
exchange is performed.
However, in our context, the starting upper bound and the ending lower bound are different from the integer lattice setting. The initial upper bound needs to account for the presence of the positive definite matrix. So if the lattice is described by a pair $(\mathcal{G}_{\gamma},L)$, the upper bound becomes:
$D(B)^{2}\leq\left(d^{2}\|\mathcal{G}_{\gamma}\|_{\textrm{max}}\|L\|_{\textrm{max}}^{2}\right)^{d(d+1)/2}.$
More importantly, it is no longer possible to claim that the potential is an integer. Instead, we derive a lower bound by considering the smallest eigenvalue of $\mathcal{G}_{\gamma}$ and find:
$D(B)^{2}\geq\lambda_{d}(\mathcal{G}_{\gamma})^{d(d+1)/2}.$
As a consequence, if we let $\tau$ denote the number of rounds of the algorithm, we can conclude that:
$\tau\leq\textrm{O}\left(d^{2}\left(\log\|L\|_{\textrm{max}}+\log\left(\|\mathcal{G}_{\gamma}\|_{\textrm{max}}/\lambda_{d}(\mathcal{G}_{\gamma})\right)+\log(d)\right)\right).$
When the lattice is given by a generating family $L$ rather than a basis $B$,
we need a slightly different invariant. Following [21], we define $d_{i}$ to
be the product of the first $i$ non-zero values $\|b^{*}_{j}\|$. Note that
they are not necessarily consecutive, since zeroes may occur anywhere. We then
let:
$D^{\prime}(L)=\left(\prod_{i=1}^{\dim{L}}d_{i}\right)\cdot\left(\prod_{i,\,b^{*}_{i}=0}2^{i}\right).$
This generalized potential is needed for the proof of Theorem 5.2. Note that,
for lattices given by a basis, the two definitions coincide.
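As an illustration, the generalized potential can be evaluated by a floating-point Gram-Schmidt pass; the sketch below is for illustration only (the actual algorithm works on the Gram matrix with interval arithmetic) and takes the generators as columns of an integer matrix together with a positive definite Gram matrix for the ambient inner product.

```python
import numpy as np

def generalized_potential(L, G, tol=1e-9):
    """D'(L) for a generating family: columns of L hold the generators'
    coordinates, G is the positive definite Gram matrix of the ambient basis."""
    C = np.linalg.cholesky(np.asarray(G, dtype=float))  # G = C C^T
    B = C.T @ np.asarray(L, dtype=float)   # Euclidean embeddings as columns
    ortho, norms = [], []
    for i in range(B.shape[1]):            # classical Gram-Schmidt
        b = B[:, i].copy()
        for bs in ortho:
            b -= (b @ bs) / (bs @ bs) * bs
        n = float(np.linalg.norm(b))
        if n > tol:
            ortho.append(b)
        norms.append(n)
    Dp, d_i = 1.0, 1.0
    for i, n in enumerate(norms, start=1):
        if n > tol:
            d_i *= n        # d_i: product of the first i nonzero ||b_j*||
            Dp *= d_i
        else:
            Dp *= 2.0 ** i  # factor 2^i for each zero b_i*
    return Dp
```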
#### Necessary accuracy for the scalar products
In order to preserve the correctness of the algorithm when computing with
internal precision $\ell$, we need to check that all conversions of scalar
product values, using the calls to ConvertToFPinterval in Algorithms 4 and 5,
have sufficient precision. For a pair of lattice elements, described by
vectors $L_{i}$ and $L_{j}$, the relative precision on the value of their
scalar product is:
$\frac{\|L_{i}\|_{1}\|L_{j}\|_{1}}{|L_{i}^{T}G_{n}L_{j}|}.$
When the vectors are close to orthogonal with respect to the scalar product given by $G_{n}$, the error can be arbitrarily large. However, by carefully following the analysis of Theorem 3 in [21, Section 4.1], we can show that this theorem remains true in our context. This suffices to ensure the correctness part of Theorem 5 of [21]. The first check is to verify that the quantity called $err_{1}$ in the proof of that theorem remains upper bounded by $2^{-\ell}$. Since this value is defined as the error on the scalar product of vectors number $i$ and $1$ divided by the norm of the first vector, we have:
$err_{1}\leq\frac{\|L_{i}\|_{1}\|L_{1}\|_{1}}{|L_{1}^{T}G_{n}L_{1}|}\leq\frac{\max_{i}{\|L_{i}\|_{1}^{2}}}{\lambda_{d}(G_{n})}\leq\frac{d\max_{i}{\|L_{i}\|^{2}}}{\lambda_{d}(G_{n})}\leq\frac{d\max_{i}{\|b_{i}\|^{2}}}{\lambda_{d}(G_{n})^{2}}.$
Thus:
$err_{1}\leq\frac{d^{3}\|G_{n}\|_{\textrm{max}}\|L\|^{2}_{\textrm{max}}}{\lambda_{d}(G_{n})^{2}}\leq\frac{d^{3}\left(2^{n}\|\mathcal{G}_{\gamma}\|_{\textrm{max}}+1\right)\|L\|^{2}_{\textrm{max}}}{\left(2^{n}\lambda_{d}(\mathcal{G}_{\gamma})-2d\right)^{2}}.$
As a consequence, it suffices to have:
$n\geq\ell+\textrm{O}\left(\log(\|L\|_{\textrm{max}})+\log\left(\|\mathcal{G}_{\gamma}\|_{\textrm{max}}/\lambda_{d}(\mathcal{G}_{\gamma})\right)+\log(d)\right).$
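Concretely, one may find the smallest sufficient accuracy by evaluating the bound on $err_{1}$ directly; the sketch below transcribes the displayed inequality and scans for the first $n$ with $err_{1}\leq 2^{-\ell}$, thereby avoiding the hidden constants of the O-bound.

```python
def err1_bound(d, G_max, L_max, lam_d, n):
    """Right-hand side of the displayed bound on err_1 (direct transcription);
    requires 2^n * lam_d > 2d so that the denominator is positive."""
    return d**3 * (2**n * G_max + 1) * L_max**2 / (2**n * lam_d - 2 * d) ** 2

def sufficient_accuracy(d, G_max, L_max, lam_d, ell, n_max=10_000):
    """Smallest accuracy n with err1_bound <= 2^-ell, found by direct scan."""
    for n in range(1, n_max):
        if 2**n * lam_d > 2 * d and \
                err1_bound(d, G_max, L_max, lam_d, n) <= 2.0 ** -ell:
            return n
    raise ValueError("no sufficient accuracy found below n_max")
```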
#### l2 with approximate inputs.
To complete the above properties on the number of rounds and necessary accuracy, it suffices to remark that the only additional code in the approximate l2 is the recomputation of interval radii on lines 11–12 of Algorithm 5. Since it suffices to know the $\ell$ high-order bits of the values, this recomputation can be done entirely using arithmetic at precision $\ell$. Indeed, no cancellation occurs during the computation of $\|L_{i}\|_{1}$. As a consequence, we get the following adaptation of Theorem 5.1. For completeness, we give here the case where the lattice is initially given by a generating family of $p$ vectors, has rank $d$, and lives in an ambient space of dimension $D$.
###### Theorem 5.2.
Let $(\delta,\eta)$ be such that $1/4<\delta<1$ and $1/2<\eta<\sqrt{\delta}$. Let $c>\log\frac{(1+\eta)^{2}}{\delta-\eta^{2}}$. Assume that we are given as input a rank-$d$ lattice $(\Lambda,\langle{\cdot},{\cdot}\rangle)$, described by a pair $(\mathcal{G},L)$ with $p\geq d$ generating vectors in an ambient space of dimension $D\geq d$. Further assume that it is approximately represented at accuracy $N$ by the pair $(\underline{G_{N}},L)$ and let $B$ denote the maximum entry in absolute value in $L^{T}\mathcal{G}L$. Let $\ell=cd+o(d)$ and
$N\geq\ell+\log\left(B/\lambda_{D}(\mathcal{G})\right)+\log(d).$
Then, Algorithm 5 ($\overline{\textrm{ll}}$) outputs a $(\delta,\eta)$-lll-reduced basis in time
$\textrm{O}\left(DN\left(d^{2}N+p(p-d)\right)\mathcal{M}(d)\right).$
Furthermore, if $\tau$ denotes the number of main loop iterations, the running time is
$\textrm{O}\left(DN\left(dN+\tau\right)\mathcal{M}(d)\right).$
### 5.2. $L^{2}$ reduction with adaptive precision and accuracy.
#### 5.2.1. Adaptive precision.
Since, by construction, the $\overline{\textrm{ll}}$ algorithm can detect that the chosen internal precision $\ell$ is insufficient to correctly reduce the lattice $\Lambda$, the procedure can be wrapped in a loop that geometrically increases the precision $\ell$ after each unsuccessful iteration. This yields an _adaptive precision_ reduction algorithm, adaptive-lll. Since the complexity of floating-point multiplication is superlinear, the use of a geometric precision growth guarantees that the total complexity of this lattice reduction is asymptotically dominated by its final iteration. (In practice, for lattices of rank a few hundred, it appears nonetheless that the computational cost of the previous iterations lies between $20\%$ and $40\%$ of the total cost.)
Moreover, the cost of operations in the floating-point realization of interval
arithmetic is at most four times the cost of floating-point arithmetic at the
same precision. Depending on the internal representation used, this constant
can even be improved. As a consequence, for lattices that can be reduced with
a low-enough precision, it can be faster to use interval arithmetic than
floating-point arithmetic with the precision required by the bound from
Section 2.4.1.
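A minimal sketch of this wrapper follows, with a hypothetical callable `try_reduce` standing in for one run of Algorithm 5 at a given internal precision.

```python
import math

def reduce_with_adaptive_precision(try_reduce, ell0=53, g=2.0, ell_max=1 << 20):
    """Geometric precision growth around a reduction attempt.
    try_reduce(ell) is assumed to return a reduced basis, or None when the
    run fails with ErrorPrecision. Since floating-point multiplication is
    superlinear in the precision, the total cost of this loop is dominated
    by the final, successful iteration."""
    ell = ell0
    while ell <= ell_max:
        basis = try_reduce(ell)
        if basis is not None:
            return basis
        ell = math.ceil(g * ell)
    raise RuntimeError("internal precision budget exhausted")
```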
#### 5.2.2. Adaptive accuracy.
We now turn to the setting of Section 4.3, where an algorithm or oracle
$\mathcal{O}_{\gamma}$ can output an integral representation of the Gram
matrix
$\mathcal{G}_{\gamma}=\left(\langle{\gamma_{i}},{\gamma_{j}}\rangle\right)_{(i,j)\in[1\,\cdots\,r]^{2}}$
at arbitrary accuracy $n$. In that context, we need to determine both the
necessary accuracy and internal precision. When running Algorithm 5 with some
given accuracy and precision, three outcomes are possible:
* •
Either the reduction terminates in which case the lattice is lll-reduced,
which implies that both accuracy and precision are sufficient.
* •
The Lovász condition fails to be tested correctly, which indicates an insufficient precision. In that case, we need to test whether the precision is still lower than the theoretical bound $T(d,\delta,\eta)$ given after Theorem 5.1. If it is not, we know that the accuracy needs to be increased instead.
* •
The algorithm detects a non-zero vector whose norm is given by an interval
containing 0. This directly indicates insufficient accuracy.
Depending on the result of Algorithm 5, we increase the precision or the
accuracy and restart. The corresponding pseudo-code is given in Algorithm 6.
Since the precision and accuracy both follow a geometric growth, the
computation is dominated by its final iteration. In particular, we may use the
complexity bound given by Theorem 5.2.
Note that when we increase the accuracy in Algorithm 6, we also reset the
precision to its minimal value. This is a matter of preference that doesn’t
affect the asymptotic complexity. In practice, it seems to be preferable.
It is important to note that we do not need to precompute the eigenvalues of the Gram matrix, since Algorithm 6 automatically detects the needed accuracy.
Parameters: $\delta\in(1/4,1)$, $\eta\in(1/2,\sqrt{\delta})$, $\ell_{0}\in\mathbf{N}$ initial precision of the algorithm for floating-point representation, $n_{0}$ initial accuracy for representing the scalar product, $g>1$ geometric growth factor.
Input: $\gamma$ a basis of a lattice $(\Gamma,\langle{\cdot},{\cdot}\rangle)$, and $\mathcal{O}_{\gamma}(n)$ an oracle that computes the integral representation of the inner product $\langle{\cdot},{\cdot}\rangle$ at accuracy $n$.
Input: A generating family, represented by $L$ in $\gamma$, of a sublattice $\Lambda\subset\Gamma$.
Result: A $(\delta,\eta)$ lll-reduced basis of $\Lambda$ represented as $L^{\prime}\in\mathbf{Z}^{\operatorname{rk}(\Lambda)\times\operatorname{rk}(\Lambda)}$.
// Set initial values for accuracy and precision
// $T(d,\delta,\eta)$ is the theoretical bound given after Theorem 5.1
1 $\ell\leftarrow\ell_{0}$;
2 $n\leftarrow n_{0}$;
3 $G\leftarrow\mathcal{O}_{\gamma}(n)$;
4 succeed $\leftarrow$ false;
5 repeat
6   retcode $\leftarrow$ $\overline{\textrm{ll}}$$(G,L)$;
7   if retcode = ErrorNonPosDefinite then return ErrorNonPosDefinite;
9   if retcode = OK then succeed $\leftarrow$ true;
10  else if retcode = ErrorPrecision then
11    $\ell^{\prime}\leftarrow\ell$;
12    $\ell\leftarrow\min\left(\lceil g\,\ell\rceil,T(d,\delta,\eta),n\right)$;
13    if $\ell^{\prime}=\ell$ then retcode $\leftarrow$ ErrorAccuracy;
15  end if
16  if retcode = ErrorAccuracy then
17    $\ell\leftarrow\ell_{0}$;
18    $n\leftarrow\lceil g\,n\rceil$;
19    $G\leftarrow\mathcal{O}_{\gamma}(n)$;
21  end if
23 until succeed = true;
return $L$
Algorithm 6 The adaptive-lll algorithm.
### 5.3. Possible generalizations
The adaptive strategy we describe for lll can be generalized to other lattice reduction algorithms. In particular, enumeration algorithms are possible within our framework, which allows the implementation of the BKZ algorithm of [25]. It would be interesting to study a generalization to sieving techniques in order to adapt them to approximate lattices.
## 6\. Application to Algebraic Number Theory
We now present a direct application of our lattice reduction strategy in
algorithmic number theory. Namely, we consider some interesting lattices
sitting inside number fields: ideal lattices.
### 6.1. Number fields, integers and ideal lattices
#### Number fields
A number field $\mathbf{K}$ is a finite-dimensional algebraic extension of
$\mathbf{Q}$. It can be described as:
$\mathbf{K}\cong\mathbf{Q}[X]/(P)=\mathbf{Q}(\alpha),$
where $P$ is a monic irreducible polynomial of degree $d$ in $\mathbf{Z}[X]$
and where $\alpha$ denotes the image of $X$ in the quotient.
Let
$\mathopen{}\mathclose{{}\left(\alpha_{1},\dotsc,\alpha_{d}}\right)\in\mathbf{C}^{d}$
denote the distinct complex roots of $P$. Then, there are $d$ distinct ring-
embeddings of $\mathbf{K}$ in $\mathbf{C}$. We define the $i$-th embedding
$\sigma_{i}:\mathbf{K}\to\mathbf{C}$ as the field homomorphism sending
$\alpha$ to $\alpha_{i}$.
It is classical to distinguish the embeddings induced by real roots, called real embeddings, from the embeddings coming from (pairs of conjugate) complex roots, called complex embeddings.
Assume that $P$ has $r_{1}$ real roots and $r_{2}$ pairs of conjugate complex
roots, with $d=r_{1}+2r_{2}$. Since the embeddings corresponding to conjugate
roots are related by conjugation on $\mathbf{C}$, we can either keep a single
complex root in each pair or replace each pair by the real and imaginary part
of the chosen root. This leads to the _Archimedean embedding_ $\sigma$ defined
as:
$\begin{array}[]{cccl}\sigma:&\mathbf{K}&\longrightarrow&\mathbf{R}^{d}\\\ &x&\longmapsto&\left(\sigma_{1}(x),\ldots,\sigma_{r_{1}}(x),\sqrt{2}\,\mathfrak{R}(\sigma_{r_{1}+1}(x)),\sqrt{2}\,\mathfrak{I}(\sigma_{r_{1}+1}(x)),\ldots\right)^{T}\end{array}$
This embedding allows us to define a real symmetric bilinear form on
$\mathbf{K}$:
$\langle{{a}},{b}\rangle_{\sigma}=\sigma(a)\cdot\sigma(b)=\sum_{i=1}^{d}\sigma_{i}({a})\overline{\sigma_{i}(b)}.$
The second equality explains the presence of the normalization factors $\sqrt{2}$ in the definition of $\sigma$. Note that the form is positive definite, thus endowing $\mathbf{K}$ with a Euclidean structure.
#### Integers
Any element $\gamma$ of $\mathbf{K}$ has a minimal polynomial, defined as the
unique monic polynomial of least degree among all polynomials of
$\mathbf{Q}[X]$ vanishing at $\gamma$. The algebraic number $\gamma$ is said
to be _integral_ if its minimal polynomial lies in $\mathbf{Z}[X]$. The set of
all integers in $\mathbf{K}$ forms a ring, called the ring of integers of
$\mathbf{K}$ and denoted $\mathfrak{o}_{\mathbf{K}}$. It is also a free
$\mathbf{Z}$-module of rank $d$. A basis $(w_{1},\ldots,w_{d})$ of
$\mathfrak{o}_{\mathbf{K}}$ (as a $\mathbf{Z}$-module) is called an integral
basis of $\mathbf{K}$.
As a consequence, using the bilinear form
$\langle{\cdot},{\cdot}\rangle_{\sigma}$, we can view
$\mathfrak{o}_{\mathbf{K}}$ as a lattice.
#### Ideals
An ideal of $\mathfrak{o}_{\mathbf{K}}$ is defined as an
$\mathfrak{o}_{\mathbf{K}}$-submodule of $\mathfrak{o}_{\mathbf{K}}$. In
particular, it is a $\mathbf{Z}$-submodule of rank $d$. Every ideal $I$ can be
described by a two-element representation, i.e. expressed as
$I=\alpha\mathfrak{o}_{\mathbf{K}}+\beta\mathfrak{o}_{\mathbf{K}},$ with
$\alpha$ and $\beta$ in $\mathfrak{o}_{\mathbf{K}}$. Alternatively, every
ideal can also be described by a $\mathbf{Z}$-basis formed of $d$ elements.
### 6.2. Lattice reduction for ideals
With the above notations, we can directly use our lattice reduction algorithm
to reduce an ideal lattice. More precisely, given an integral basis
$(w_{1},\ldots,w_{d})$ and a two-element representation of $I$ by $\alpha$ and
$\beta$, we proceed as follows:
1. (1)
Define the Gram matrix $\mathcal{G}_{w}$ with entries
$\langle{w_{i}},{w_{j}}\rangle_{\sigma}$. It can be computed to any desired
precision from approximations of the roots of $P$. The roots themselves can be
computed, using, for example, the Gourdon-Schönhage algorithm [8].
2. (2)
Let $L$ be the matrix formed of the (integral) coordinates of $(\alpha
w_{1},\ldots,\alpha w_{d})$ and $(\beta w_{1},\ldots,\beta w_{d})$ in the
basis $(w_{1},\ldots,w_{d}).$
3. (3)
Directly apply Algorithm 6 to $(\mathcal{G}_{w},L).$
The same thing can be done, mutatis mutandis, for an ideal described by a $\mathbf{Z}$-basis.
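A minimal sketch of step (1) using mpmath is given below. The encoding of the inputs — coefficients of $P$ listed highest-degree first, and rows of `W` giving the coordinates of the $w_{i}$ in the power basis $(1,\alpha,\ldots,\alpha^{d-1})$ — is our assumption for illustration, not fixed by the text.

```python
import mpmath as mp

def gram_matrix_sigma(P_coeffs, W, prec_bits=200):
    """Compute G_w = (<w_i, w_j>_sigma) from approximations of the roots of P,
    using <a, b>_sigma = sum_r sigma_r(a) * conj(sigma_r(b)) over all d
    complex embeddings (which equals the real Archimedean form thanks to the
    sqrt(2) normalization). Rounding 2^n * G_w entrywise then gives the
    integral representation at accuracy n consumed by Algorithm 6."""
    mp.mp.prec = prec_bits
    roots = mp.polyroots([mp.mpf(c) for c in P_coeffs], maxsteps=200,
                         extraprec=prec_bits)
    d = len(roots)
    # sigma_r(w_i) = sum_k W[i][k] * alpha_r^k
    emb = [[sum(W[i][k] * roots[r] ** k for k in range(d)) for i in range(d)]
           for r in range(d)]
    return [[mp.re(sum(emb[r][i] * mp.conj(emb[r][j]) for r in range(d)))
             for j in range(d)] for i in range(d)]
```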
#### A well-known special case
For some number fields, the Gram matrix $\mathcal{G}_{w}$ is integral. In that case, the use of Algorithm 6 isn't necessary and one can directly work with an exact lattice. This is described, for the special case of reducing the full lattice corresponding to the ring of integers of a totally real field, in [1, Section 4.2]. It can be generalized to CM-fields, since they satisfy the same essential property of having an integral Gram matrix. The same application is also discussed in [4, Section 4.4.2].
#### Non integral case
For the general case where the Gram matrix is real, [1] proposes to multiply by $2^{e}$ and round to the closest integer. It also gives a bound on the necessary accuracy $e$ as the logarithm of (the inverse of) the smallest diagonal entry in the Cholesky decomposition of the Gram matrix. In some sense, this is similar to our approach. However, without any auxiliary information on this coefficient, it is proposed to keep increasing $e$ as long as the result is deemed unsatisfactory.
By contrast, termination of our algorithm guarantees that lattice reduction is
completed and that the output basis is lll-reduced.
## References
* [1] K. Belabas, Topics in computational algebraic number theory, _J. Théor. Nombres Bordeaux_ , 16 (2004), 19–63.
* [2] J.-F. Biasse and C. Fieker, Improved techniques for computing the ideal class group and a system of fundamental units in number fields, _The Open Book Series_ , 1 (2013), 113–133.
* [3] J. A. Buchmann, Reducing lattice bases by means of approximations, in _Algorithmic Number Theory, First International Symposium, ANTS-I, Ithaca, NY, USA, May 6-9, 1994, Proceedings_ , 1994, 160–168.
* [4] H. Cohen, _A Course in Computational Algebraic Number Theory_ , Springer-Verlag New York, Inc., New York, NY, USA, 1993.
* [5] B. M. M. De Weger, Solving exponential Diophantine equations using lattice basis reduction algorithms, _J. Number theory_ , 26 (1987), 325–367.
* [6] N. D. Elkies, Rational points near curves and small nonzero $|x^{3}-y^{2}|$ via lattice reduction, _Algorithmic Number Theory: 4th International Symposium, ANTS-IV Leiden, The Netherlands, July 2-7, 2000. Proceedings_ , 33–63.
* [7] A. Gélin and A. Joux, Reducing number field defining polynomials: an application to class group computations, in _Algorithmic Number Theory Symposium XII_ , vol. 19 of LMS Journal of Computation and Mathematics, 2016, 315–331.
* [8] X. Gourdon, Combinatoire, algorithmique et géométrie des polynômes, PhD thesis, 27–49.
* [9] G. Havas, B. S. Majewski and K. R. Matthews, Extended GCD and Hermite normal form algorithms via lattice basis reduction, _Experimental Mathematics_ , 7 (1998), 125–136.
* [10] G. Jäger, Reduction of Smith normal form transformation matrices, _Computing_ , 74 (2005), 377–388.
* [11] L. Jaulin, M. Kieffer, O. Didrit and E. Walter, _Applied interval analysis: with examples in parameter and state estimation, robust control and robotics_ , Springer Verlag, 2001.
* [12] E. Kaltofen, On the complexity of finding short vectors in integer lattices, in _Computer Algebra, EUROCAL ’83, European Computer Algebra Conference_ (ed. J. A. van Hulzen), vol. 162 of Lecture Notes in Computer Science, Springer, 1983, 236–244.
* [13] S. Kim and A. Venkatesh, The behavior of random reduced bases, 2016.
* [14] A. K. Lenstra, H. W. Lenstra Jr. and L. Lovász, Factoring polynomials with rational coefficients, _Math. Ann._ , 261 (1982), 515–534.
* [15] H. W. Lenstra Jr., Integer programming with a fixed number of variables, _Math. Oper. Res._ , 8 (1983), 538–548.
* [16] H. W. Lenstra Jr. and A. Silverberg, Lattices with symmetry, _Journal of Cryptology_ , 30 (2017), 760–804.
* [17] L. Lovász and H. E. Scarf, The generalized basis reduction algorithm, _Math. Oper. Res._ , 17 (1992), 751–764.
* [18] R. E. Moore, _Interval arithmetic and automatic error analysis in digital computing_ , PhD thesis, Stanford, 1962.
* [19] R. E. Moore, _Methods and applications of interval analysis_ , 1977.
* [20] P. Q. Nguyen, Hermite’s constant and lattice algorithms, in _The LLL algorithm_ (eds. P. Q. Nguyen and B. Vallée), Springer, 2010, chapter 2.
* [21] P. Q. Nguyen and D. Stehlé, An LLL algorithm with quadratic complexity, _SIAM J. of Computing_ , 39 (2009), 874––903.
* [22] G. Pataki and M. Tural, On sublattice determinants in reduced bases, 2008.
* [23] M. E. Pohst, A modification of the LLL reduction algorithm, _Journal of Symbolic Computation_ , 4 (1987), 123–127.
* [24] H. Ratschek and J. Rokne, _New computer methods for global optimization_ , Halsted Press, New York, NY, USA, 1988.
* [25] C. Schnorr, A hierarchy of polynomial time lattice basis reduction algorithms, _Theor. Comput. Sci._ , 53 (1987), 201–224.
* [26] C. Schnorr, A more efficient algorithm for lattice basis reduction, _J. Algorithms_ , 9 (1988), 47–62.
* [27] C. Schnorr and M. Euchner, Lattice basis reduction: Improved practical algorithms and solving subset sum problems, _Math. Program._ , 66 (1994), 181–199.
* [28] T. Sunaga, Theory of an interval algebra and its application to numerical analysis, _Japan J. Indust. Appl. Math._ , 26.
* [29] FPLLL development team, Fplll, a lattice reduction library, 2016, Available at https://github.com/fplll/fplll.
* [30] G. Villard, Certification of the QR factor R, and of lattice basis reducedness, 2007.
* [31] R. C. Young, _The algebra of many-values quantities_ , PhD thesis, Cambridge, 1931.
## Appendix A Proof of Proposition 1
We now show the more general statement for a $(\delta,\eta)$-lll reduced basis
$(b_{1},\ldots,b_{d})$ of $(\Lambda,\langle{\cdot},{\cdot}\rangle)$. Namely
that for any $1\leq k\leq d$ we have:
$\operatorname{covol}\left({b_{1},\ldots,b_{k}}\right)\leq\left(\delta-\eta^{2}\right)^{-\frac{(d-k)k}{4}}\operatorname{covol}\left({\Lambda}\right)^{\frac{k}{d}}.$
###### Proof.
Using the Lovász condition at index $1\leq i<d$, we write:
$\delta\|\pi_{i}(b_{i})\|^{2}\leq\|\pi_{i}(b_{i+1})\|^{2}=\|\pi_{i+1}(b_{i+1})\|^{2}+\mu_{i,i+1}^{2}\|\pi_{i}(b_{i})\|^{2}.$
Thanks to the size-reduction condition, this implies:
(4) $\forall i\in\{1,\ldots,d-1\},\quad\|\pi_{i}(b_{i})\|^{2}\leq\left(\delta-\eta^{2}\right)^{-1}\|\pi_{i+1}(b_{i+1})\|^{2}.$
Let $K$ denote $\mathopen{}\mathclose{{}\left(\delta-\eta^{2}}\right)^{-1/2}$
and $\ell_{i}$ be the norm of the vector $\pi_{i}(b_{i})$. Then, Equation (4)
becomes:
$\forall i\in\\{1,\ldots,d-1\\},\quad\ell_{i}\leq K\ell_{i+1}.$
Recall that
$\operatorname{covol}\mathopen{}\mathclose{{}\left({b_{1},\ldots,b_{k}}}\right)=\prod_{i=1}^{k}\ell_{i}$.
This implies that for any $j>k$:
$\operatorname{covol}\left({b_{1},\ldots,b_{k}}\right)\leq\prod_{i=1}^{k}K^{j-i}\ell_{j}=K^{k(2j-k-1)/2}\cdot\ell_{j}^{k}.$
Thus:
$\displaystyle\operatorname{covol}\left({b_{1},\ldots,b_{k}}\right)^{d}=\left(\prod_{i=1}^{k}\ell_{i}\right)^{d}$ $\displaystyle\leq\left(\prod_{i=1}^{k}\ell_{i}\right)^{k}\,\prod_{j=k+1}^{d}K^{k(2j-k-1)/2}\cdot\ell_{j}^{k}$ $\displaystyle\leq\left(\prod_{i=1}^{d}\ell_{i}\right)^{k}\,K^{\sum_{j=k+1}^{d}{k(2j-k-1)/2}}$ $\displaystyle\leq\operatorname{covol}\left({\Lambda}\right)^{k}\,K^{\frac{d(d-k)k}{2}}.$
∎
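The bound can also be sanity-checked numerically: the sketch below draws Gram-Schmidt norms satisfying $\ell_{i}\leq K\ell_{i+1}$ at random, as guaranteed by Eq. (4), and verifies the stated inequality for every $k$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, delta, eta = 8, 0.99, 0.51
K = (delta - eta**2) ** -0.5

# Gram-Schmidt norms with l_i <= K * l_{i+1}, as in Eq. (4)
l = np.empty(d)
l[-1] = 1.0
for i in range(d - 2, -1, -1):
    l[i] = K * l[i + 1] * rng.uniform(0.5, 1.0)

covol = np.cumprod(l)                  # covol(b_1, ..., b_k) for k = 1..d
for k in range(1, d + 1):
    rhs = (delta - eta**2) ** (-(d - k) * k / 4) * covol[-1] ** (k / d)
    assert covol[k - 1] <= rhs * (1 + 1e-12)
print("Proposition 1 bound verified for all k")
```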
|
# On the positive mass theorem in general relativity and Lorentz covariance
of the Dirac wave equation in quantum mechanics
Changbiao Wang<EMAIL_ADDRESS>ShangGang Group, 10 Dover Road, North
Haven, CT 06473, USA
###### Abstract
The positive mass theorem in general relativity states that in an
asymptotically flat spacetime, if the momentum–energy tensor is divergence-
free and satisfies a dominant energy condition, then a total momentum–energy
four-vector can be formed, of which the energy component is nonnegative. In
this paper, we take the wave four-tensor of a plane light wave in free space
as a counterexample to show that there is no guarantee that a total four-
vector can be formed. Thus the theoretical framework for the positive mass theorem is flawed. In addition, it is shown that the Lorentz covariance of the Dirac wave equation is not compatible with Einstein mass–energy equivalence.
###### pacs:
03.50.De, 04.20.-q, 04.20.Cv
## I Introduction
In the theory of relativity, the momentum and energy of a closed physical
system can be described by a four-tensor $T^{\mu\nu}$, usually called
momentum–energy tensor. It is well established in traditional textbooks (r1, p. 443) (r2, p. 46) (r3, p. 166) (r4, p. 756) that if the tensor is
divergence-free, namely $T^{\mu\nu}_{~{}~{}\,,\nu}=0$, then the total momentum
and energy of the system can be obtained by carrying out integration of the
time-column elements of the tensor to form a constant four-vector
$P^{\mu}=\int T^{\mu 4}\textrm{d}^{3}x$, which is usually called _conservation
law_. However recently, it has been shown by enumerating specific
counterexamples that this conclusion is not correct in general r5 ; r6 ; r7 ;
r8 .
In an asymptotically flat spacetime, if the momentum–energy tensor
$T^{\mu\nu}$ satisfies the conservation law $T^{\mu\nu}_{~{}~{}\,,\nu}=0$, and
also satisfies an additional _dominant energy condition_ , namely the energy
component $T^{44}\geq|T^{\mu\nu}|$ r9 , then a total momentum–energy four-
vector $P^{\mu}=\int T^{\mu 4}\textrm{d}^{3}x$ can be formed, of which the
total energy component $P^{4}=\int T^{44}\textrm{d}^{3}x$ is nonnegative,
which is called the _positive mass theorem_ in general relativity r10 ; r11 ; r12 ; r13 ; r14 . (Note that Rindler's convention for the tensor indices $\mu,\nu=1,2,3,4$ (r15, p. 138) is used throughout the paper except in Sec. II.)
The best way to disprove a mathematical conjecture is to give a counterexample. In this paper, we take the wave four-tensor
$T^{\mu\nu}=K^{\mu}K^{\nu}$ as a counterexample to show that there is no
guarantee that a total four-vector can be formed although it satisfies the
traditional conservation law $T^{\mu\nu}_{~{}~{}\,,\nu}=0$ and the dominant
energy condition $T^{44}\geq|T^{\mu\nu}|$, where $K^{\mu}$ is the wave four-
vector of a plane light wave in free space, set up by Einstein r16 . From this
we conclude that the theoretical framework for the positive mass theorem is
fundamentally flawed.
Finally, it is also shown that the Lorentz covariance of the Dirac wave equation for an electron is not compatible with Einstein mass–energy equivalence.
## II Positive mass theorem
In this section, the theoretical framework and conclusions for the positive
mass theorem are examined, including the origin of the total energy
definition, which is usually omitted in the original research works for
various proofs of the theorem r10 ; r11 ; r12 ; r13 ; r14 .
In general relativity, the metric $g_{\mu\nu}$ is the _solution_ of the Einstein field equation, while the energy–momentum tensor $T^{\mu\nu}$, which causes space to curve (r1, p. 5), is the _source_ of the field equation; just as the EM fields are the solutions of the Maxwell equations, while the charge and current are their source (r4, p. 238). (Note that the Arnowitt–Deser–Misner convention for the tensor indices $\mu,\nu=0,1,2,3$ is used in this section so that readers can conveniently check against the source papers r17 ; r18 .)
According to Arnowitt, Deser and Misner r17 , the definition of the total
energy–momentum four-vector $P^{\mu}$ is given by the _volume_ integral of the
components of $T^{0\mu}$, namely
$\displaystyle P^{\mu}=\int T^{0\mu}\textrm{d}^{3}x,$ (1)
which can be expressed as a surface integral through the Einstein field equations and the Gauss theorem (r1, p. 462). $P^{0}=E$ in Eq. (1) is the total energy r18 .
In the proofs of the positive mass theorem r11 ; r12 ; r14 , the total energy follows the definition in r17 ; r18 (r1, p. 462), given by
$\displaystyle E=\int T^{00}\textrm{d}^{3}x=\frac{1}{16\pi}\int(g_{jk,k}-g_{kk,j})\,\mathrm{d}^{2}S^{j},$ (2)
where the volume integral is evaluated over the source, and the surface
integral is done over a closed surface completely surrounding the source in
the asymptotically flat region. $T^{00}$ satisfies the dominant energy condition $T^{00}\geq|T^{\mu\nu}|$ r9 ; r11 ; r14 and belongs to the source term of the Einstein field equation, while $(g_{jk,k}-g_{kk,j})$ is the solution-related term.
It should be emphasized that $\int(...)\,\textrm{d}^{3}x$ in Eq. (1) denotes a _volume_ integral, so that the Gauss theorem can be used to convert $\int(...)\,\textrm{d}^{3}x$ into $\int(...)\,\mathrm{d}^{2}S^{j}$ in Eq. (2); cf. Ref. (r1, p. 462).
Arnowitt, Deser and Misner claim that because of the conservation law $T^{\mu\nu}_{~{}~{}~{},\,\mu}=0$, “$P^{\mu}$ should transform as a four-vector” r18 , which is clearly endorsed by Nester r19 . In the textbook by Misner, Thorne and Wheeler (r1, p. 443, p. 462), it is also emphasized that the conservation law $T^{\mu\nu}_{~{}~{}\,,\nu}=0$ ($T^{\mu\nu}_{~{}~{}~{},\,\mu}=0$) makes $P^{\mu}=\int T^{\mu 0}\textrm{d}^{3}x$ $(P^{\mu}=\int T^{0\mu}\textrm{d}^{3}x)$ a constant four-vector.
Thus the positive mass theorem actually states that if the tensor $T^{\mu\nu}$
satisfies the conservation law and dominant energy condition, namely
$T^{\mu\nu}_{~{}~{}~{},\,\mu}=0$ and $T^{00}\geq|T^{\mu\nu}|$ hold, then two
conclusions can be drawn: Conclusion (a) $P^{\mu}=\int
T^{0\mu}\textrm{d}^{3}x$ is a constant four-vector, and Conclusion (b)
$E=P^{0}=\int T^{00}\textrm{d}^{3}x$ is nonnegative.
It is worthwhile to point out that in Eq. (2), no matter in what
asymptotically flat region the solution-related surface integral
$(1/16\pi)\int(g_{jk,k}-g_{kk,j})\mathrm{d}^{2}S^{j}$ is evaluated, it is
always equal to the source-related volume integral $\int
T^{00}\textrm{d}^{3}x$; in other words, asymptotically flat solutions $g_{\mu\nu}$ of the Einstein field equation always satisfy Eq. (2). Thus if
$\int T^{00}\textrm{d}^{3}x$ is not a component of four-vector, then
$(1/16\pi)\int(g_{jk,k}-g_{kk,j})\mathrm{d}^{2}S^{j}$ is not either.
In the following section, we will prove by exhibiting a counterexample that $E=P^{0}=\int T^{00}\textrm{d}^{3}x$ is not a component of a four-vector, and
thus the above Conclusion (a) in the positive mass theorem is not true in
general.
## III Counterexample
In this section, a counterexample of the positive mass theorem is provided,
which is constructed from the wave four-vector of a plane wave of light in
free space. This wave four-vector, first formulated by Einstein in 1905,
predicts the relativistic Doppler effect r16 , which is the physical basis of
the two successive frequency upshifts of a free-electron laser r20 and has
been widely demonstrated by experiments r21 ; r22 ; r23 ; r24 .
Suppose that observed in an inertial frame $XYZ$ in free space, the wave four-
vector for a plane light wave is given by r16
$\displaystyle K^{\mu}=\left(\mathbf{k}_{\textrm{w}},\frac{\omega}{c}\right),$
(3)
where $K^{1,2,3}=(\mathbf{k}_{\textrm{w}})_{x,y,z}$, $K^{4}=\omega/c$,
$\mathbf{k}_{\textrm{w}}$ is the wave vector, $\omega~{}(>0)$ is the angular
frequency, and $c$ is the speed of light in free space.
The wave four-vector satisfies $K^{\mu}K_{\mu}=0$, and we have
$(\omega/c)^{2}-\mathbf{k}_{\textrm{w}}^{2}=0\Rightarrow|\mathbf{k}_{\textrm{w}}|=\omega/c=K^{4}$.
With
$K^{4}=|\mathbf{k}_{\textrm{w}}|=[(\mathbf{k}_{\textrm{w}})_{x}^{2}+(\mathbf{k}_{\textrm{w}})_{y}^{2}+(\mathbf{k}_{\textrm{w}})_{z}^{2}]^{1/2}$
and $(\mathbf{k}_{\textrm{w}})_{x,y,z}=K^{1,2,3}$ taken into account, we have
$K^{4}\geq|K^{1,2,3}|.$ (4)
As a counterexample of the positive mass theorem, the wave four-tensor is
defined as
$T^{\mu\nu}=K^{\mu}K^{\nu}~{}\textrm{for}~{}\mathbf{x}\in V;\qquad T^{\mu\nu}=0~{}\textrm{for}~{}\mathbf{x}\not\in V,$ (5)
where the finite volume $V$ $(\neq 0)$ is fixed in $XYZ$. According to above
Eq. (5), the wave four-tensor $T^{\mu\nu}$ is a finite distribution source of
Einstein field equation. Because $K^{\mu}$ is independent of space and time
variables $X^{\mu}=(\mathbf{x},ct)$, $T^{\mu\nu}$ is divergence-free within
the volume $V$, namely
$T^{\mu\nu}_{~{}~{}\,,\nu}=\frac{\partial T^{\mu\nu}}{\partial X^{\nu}}=\frac{\partial(K^{\mu}K^{\nu})}{\partial X^{\nu}}=0.$ (6)
From Eqs. (4) and (5) we find that $T^{\mu\nu}$ satisfies the dominant energy
condition in $V$,
$T^{44}=(K^{4})^{2}\geq|K^{\mu}K^{\nu}|=|T^{\mu\nu}|.$ (7)
Consider the volume integral of the components of $T^{\mu 4}$ over $V$, given
by
$\displaystyle P^{\mu}$ $\displaystyle=\int_{V}T^{\mu 4}\textrm{d}^{3}x$
$\displaystyle=\int_{V}(K^{\mu}K^{4})\textrm{d}^{3}x=(K^{4}V)K^{\mu},$ (8)
where $\int_{V}(K^{\mu}K^{4})\,\textrm{d}^{3}x=(K^{4}V)K^{\mu}$ is employed
because $K^{\mu}$ is independent of $X^{\mu}=(\mathbf{x},ct)$.
From above Eqs. (6) and (7), we know that the wave four-tensor $T^{\mu\nu}$
satisfies the conservation law $T^{\mu\nu}_{~{}~{}\,,\nu}=0$ and the dominant
energy condition $T^{44}\geq|T^{\mu\nu}|$ over $V$. Thus according to the
positive mass theorem, the quantity $P^{\mu}=(K^{4}V)K^{\mu}$, given by Eq.
(8), must be a Lorentz covariant four-vector.
Thus if we can prove that $P^{\mu}=(K^{4}V)K^{\mu}$ is not a Lorentz four-
vector, then $T^{\mu\nu}$ becomes a counterexample of the positive mass
theorem. As shown below, $P^{\mu}=(K^{4}V)K^{\mu}$ indeed cannot be a four-
vector.
Suppose that the inertial frame $X^{\prime}Y^{\prime}Z^{\prime}$ moves with
respect to $XYZ$ at an arbitrary velocity of $\mathbf{v}=\boldsymbol{\beta}c$,
with $X^{\prime}Y^{\prime}Z^{\prime}$ and $XYZ$ overlapping at
$t^{\prime}=t=0$ r6 . Observed in $X^{\prime}Y^{\prime}Z^{\prime}$, the volume
integral of the components of $T^{\prime\mu 4}$ is given by
$\displaystyle P^{\prime\mu}$ $\displaystyle=\int_{V^{\prime}}T^{\prime\mu
4}\textrm{d}^{3}x^{\prime}$
$\displaystyle=\int_{V^{\prime}}(K^{\prime\mu}K^{\prime
4})\textrm{d}^{3}x^{\prime}=(K^{\prime 4}V^{\prime})K^{\prime\mu},$ (9)
where the volume $V^{\prime}$ is moving with respect to
$X^{\prime}Y^{\prime}Z^{\prime}$ at the velocity
$\mathbf{v}^{\prime}=\boldsymbol{\beta}^{\prime}c=-\boldsymbol{\beta}c$.
Comparing Eq. (9) with Eq. (8), we find that if the relation between $P^{\mu}=(K^{4}V)K^{\mu}$ and $P^{\prime\mu}=(K^{\prime 4}V^{\prime})K^{\prime\mu}$ follows the four-vector Lorentz transformation, then $(K^{4}V)=(K^{\prime 4}V^{\prime})$ must hold, i.e., $(K^{4}V)$ must be a _Lorentz covariant scalar_, because $K^{\mu}$ is a four-vector. However, as shown below, $(K^{4}V)=(K^{\prime 4}V^{\prime})$ cannot hold for an _arbitrary_ $\mathbf{v}=\boldsymbol{\beta}c$; namely, $(K^{4}V)$ is not a covariant scalar, and so $P^{\mu}=(K^{4}V)K^{\mu}$ cannot be a four-vector.
According to the _change of variables theorem_ of classical mathematical analysis (r25, p. 252), the transformation of the differential elements appearing in Eqs. (8) and (9) is given by
$\textrm{d}^{3}x^{\prime}=\left|\frac{\partial(x^{\prime},y^{\prime},z^{\prime})}{\partial(x,y,z)}\right|\textrm{d}^{3}x=\frac{1}{\gamma}\textrm{d}^{3}x,$ (10)
where $\gamma=(1-\boldsymbol{\beta}^{2})^{-1/2}$ is the Lorentz factor, and the Jacobian determinant $\partial(x^{\prime},y^{\prime},z^{\prime})/\partial(x,y,z)=1/\gamma$ is employed. It should be noted that the transformation of Eq. (10) meets the physical requirement of the Lorentz contraction of the length of a moving rigid rod, argued by Einstein r16 and further formulated in Ref. (r26, Footnote 2).
From above Eq. (10), we have
$\int_{V^{\prime}}\textrm{d}^{3}x^{\prime}=\int_{V}\left(\frac{1}{\gamma}\right)\textrm{d}^{3}x\quad\quad\textrm{or}\quad\quad
V^{\prime}=\frac{V}{\gamma}.$ (11)
According to Einstein r16 , the Lorentz transformation between $K^{\prime
4}=\omega^{\prime}/c$ and $K^{4}=\omega/c$ is given by
$K^{\prime 4}=\gamma(K^{4}-\mathbf{k}_{\textrm{w}}\cdot\boldsymbol{\beta}).$
(12)
Combining Eqs. (11) and (12), we have
$K^{\prime
4}V^{\prime}=K^{4}V-(\mathbf{k}_{\textrm{w}}\cdot\boldsymbol{\beta})V,$ (13)
which is valid for any $|\boldsymbol{\beta}|<1$.
Obviously, $(\mathbf{k}_{\textrm{w}}\cdot\boldsymbol{\beta})V=0$ cannot hold
for arbitrary $|\boldsymbol{\beta}|<1$; for example,
$(\mathbf{k}_{\textrm{w}}\cdot\boldsymbol{\beta})V=(\omega/c)|\boldsymbol{\beta}|V\neq
0$ holds for $\mathbf{k}_{\textrm{w}}\|\,\boldsymbol{\beta}$ and
$\boldsymbol{\beta}\neq 0$. Thus from Eq. (13) we conclude that
$(K^{4}V)=(K^{\prime 4}V^{\prime})$ cannot hold for arbitrary
$|\boldsymbol{\beta}|<1$, namely $(K^{4}V)$ is not a Lorentz covariant scalar.
Since $(K^{4}V)$ is not a covariant scalar, the quantity
$P^{\mu}=(K^{4}V)K^{\mu}$, given by Eq. (8), cannot be a four-vector.
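The frame dependence claimed in Eq. (13) can be checked numerically; the following sketch simply evaluates Eqs. (11)–(13) as stated, for boosts along the wave vector.

```python
import numpy as np

c = 1.0                                    # natural units
k_w = np.array([2.0, 0.0, 0.0])            # wave vector along x
omega = c * np.linalg.norm(k_w)            # plane wave in free space
K4 = omega / c
V = 1.0                                    # volume fixed in XYZ

for beta_x in (0.0, 0.3, 0.6, 0.9):
    beta = np.array([beta_x, 0.0, 0.0])
    gamma = 1.0 / np.sqrt(1.0 - beta @ beta)
    K4_prime = gamma * (K4 - k_w @ beta)   # Eq. (12)
    V_prime = V / gamma                    # Eq. (11)
    # K4*V - K4'*V' equals (k_w . beta) V, Eq. (13): zero only for beta = 0
    print(f"beta={beta_x:.1f}:  K4*V={K4 * V:.3f}  "
          f"K4'*V'={K4_prime * V_prime:.3f}")
```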
So far, we have finished the proof that the wave four-tensor $T^{\mu\nu}$
satisfies $T^{\mu\nu}_{~{}~{}\,,\nu}=0$ and $T^{44}\geq|T^{\mu\nu}|$ over the
volume $V$, but $P^{\mu}=\int_{V}T^{\mu 4}\textrm{d}^{3}x$ is not a four-
vector ($\Leftrightarrow$ $P^{4}=\int_{V}T^{44}\textrm{d}^{3}x$ is not a
component of a four-vector); namely the wave four-tensor $T^{\mu\nu}$, defined
in Eq. (5), is a counterexample of the positive mass theorem.
## IV Conclusions and remarks
By examining the background, we have found that the positive mass theorem has:
* •
_two requirements_ : the momentum–energy tensor $T^{\mu\nu}$, as the source of
Einstein field equation, satisfies (a) the conservation law
$T^{\mu\nu}_{~{}~{}\,,\nu}=0$ and (b) the dominant energy condition
$T^{44}\geq|T^{\mu\nu}|$;
* •
_two conclusions_ : (a) the time-column volume integral $P^{\mu}=\int T^{\mu 4}\textrm{d}^{3}x$ constitutes a total four-momentum that is conserved (r1, p. 443), and (b) the total energy $E=P^{4}=\int T^{44}\textrm{d}^{3}x$ is nonnegative.
We have proved that the wave four-tensor, given in Eq. (5), is a
counterexample of the positive mass theorem because it satisfies the above two
requirements, but Conclusion (a) does not hold. Since Conclusion (a) is not
valid, Conclusion (b) naturally loses its physical significance. Thus the
positive mass theorem is fundamentally flawed.
From the counterexample provided in the present paper, one can see that the
problem with the positive mass theorem comes from the traditional conservation law $T^{\mu\nu}_{~{}~{}\,,\nu}=0$, which is actually an incorrect conjecture r5 ; r6 ; r7 ; r8 , although some proofs in support of it are provided in textbooks, such as (r1, p. 142) (r27, p. 318).
In addition to the conservation law $T^{\mu\nu}_{~{}~{}\,,\nu}=0$, there are
some other basic concepts and definitions which have not yet reached a
consensus, such as the covariance of physical laws r28 , electromagnetic power
flow in an anisotropic medium r29 , and Fermat principle for a plane light
wave r30 . Probably one of the best known and most controversial examples concerns how to define a particle's mass. In respected textbooks r15 ; r31 , two different mass definitions are advocated, which leads to a surprising result: the Lorentz covariance of the Dirac wave equation is not compatible with Einstein mass–energy equivalence, as shown below.
In the first mass definition for a particle, the four-momentum is given by (r15, p. 110)
$\displaystyle P^{\mu}=\frac{m}{\gamma}U^{\mu}$
$\displaystyle=\left(m\mathbf{v},~{}\frac{mc^{2}}{c}\right)$
$\displaystyle=\left(\mathbf{p},~{}\frac{E}{c}\right),\quad(\textrm{First
definition})$ (14)
where $m$ is defined as the particle’s (relativistic inertial) mass;
$(m/\gamma)$ is a _covariant_ scalar (invariant) r28 ;
$U^{\mu}=\gamma(\mathbf{v},c)$ is the four-velocity, with $\mathbf{v}$ the
particle's velocity; $\mathbf{p}$ is the momentum; and $E$ is the energy. In this definition, (i) $(m/\gamma)=m_{0}$ holds, where $m_{0}$ is the rest mass, which is _not_ a covariant invariant r28 ; (ii) the Einstein mass–energy equivalence equation $E=mc^{2}$ holds in all inertial frames.
In the second mass definition, the particle's four-momentum is given by (r31, p. 289)
$\displaystyle P^{\mu}=mU^{\mu}$ $\displaystyle=\left(\gamma
m\mathbf{v},~{}\frac{\gamma mc^{2}}{c}\right)$
$\displaystyle=\left(\mathbf{p},~{}\frac{E}{c}\right),\quad(\textrm{Second
definition})$ (15)
where $m$ is defined as the particle’s (invariant) mass and it is a covariant
invariant; $\mathbf{p}=\gamma m\mathbf{v}$ is the momentum; and
$E=\gamma(mc^{2})$ is the energy. In this definition, (i) $m=m_{0}$ always
holds in all inertial frames (namely $m_{0}$ and $m$ are the same), and
$m_{0}$ is a covariant invariant; (ii) $E=\gamma(mc^{2})$ holds, but the
Einstein mass–energy equivalence equation $E=mc^{2}$ does not hold except in the particle's rest frame.
According to the first mass definition, shown in Eq. (14), the particle's rest mass is not a covariant invariant; thus the Dirac wave equation for an electron is not covariant, because the electron's rest mass appearing in the Dirac equation is not a covariant invariant, although Dirac took it to be one in his proof (r32, p. 258). However, the covariance of the Dirac equation is consistent with the second definition, shown in Eq. (15), which contradicts the Einstein mass–energy equivalence equation. From this one can conclude that the Lorentz covariance of the Dirac wave equation is not compatible with Einstein mass–energy equivalence. In other words, if Einstein mass–energy equivalence is valid, then the Dirac wave equation is not Lorentz covariant, and vice versa.
It is worthwhile to point out that in the first mass definition, the photon has a (kinetic) mass r28 because “the mass of a body is a measure of its energy content” r33 , while in the second mass definition, the photon has no mass at all (it is equal to zero) (r34, p. 99). However, Einstein's thought experiment for mass–energy equivalence does support the view that the photon has a non-zero mass, as shown below.
In his thought experiment r33 , Einstein assumes that a stationary body with a rest energy of $E_{0}$ emits two identical photons (“plane waves of light” in Einstein's words) in free space at the same time in opposite directions, so that the body remains at rest after the emissions. According to the energy conservation law, Einstein argues
$E_{0}=(E_{0}-\Delta E_{0})+\frac{1}{2}\Delta E_{0}+\frac{1}{2}\Delta E_{0},$
(16)
where $(E_{0}-\Delta E_{0})$ is the body’s energy after emissions, and the
last two terms of $(\Delta E_{0}/2)$ are the (kinetic) energies of the two
photons respectively. Dividing above Eq. (16) by $c^{2}$ yields
$\frac{E_{0}}{c^{2}}=\left(\frac{E_{0}}{c^{2}}-\frac{\Delta
E_{0}}{c^{2}}\right)+\frac{1}{2}\frac{\Delta
E_{0}}{c^{2}}+\frac{1}{2}\frac{\Delta E_{0}}{c^{2}}.$ (17)
Using the principle of relativity, Einstein proved that the body's rest mass is reduced by $(\Delta E_{0}/c^{2})$ after the emissions. Thus it is legitimate to define $(E_{0}/c^{2})$ and $(E_{0}-\Delta E_{0})/c^{2}$ as the masses of the stationary body before and after the emissions. From this it follows that the last two terms of $\Delta E_{0}/(2c^{2})$ in Eq. (17) legitimately denote the masses of the two photons, respectively.
Thus Einstein's thought experiment r33 supports the conclusion that the photon has a non-zero kinetic energy and a non-zero mass; namely, mass and energy are equivalent, as Einstein argued.
## References
* (1) C.W. Misner, K.S. Thorne, and J.A. Wheeler, Gravitation, W.H. Freeman and Company, San Francisco, 1973.
* (2) S. Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, John Wiley & Sons, New York, 1972.
* (3) C. Møller, The Theory of Relativity, Oxford University Press, London, 1955.
* (4) J.D. Jackson, Classical Electrodynamics, 3rd Edition, John Wiley & Sons, NJ, 1999.
* (5) C. Wang, von Laue’s theorem and its applications, Can. J. Phys. 93 (2015) 1470–1476.
* (6) C. Wang, Self-consistent theory for a plane wave in a moving medium and light-momentum criterion, Can. J. Phys. 93 (2015) 1510–1522.
* (7) C. Wang, Disproof of a widely-accepted mathematical conjecture, Optik 140 (2017) 1110–1113; arXiv:1612.03201.
* (8) C. Wang, Resolution of two fundamental issues in the dynamics of relativity and exposure of a real version of the emperor’s new clothes, arXiv:1702.03200.
* (9) S. Hawking, The conservation of matter in general relativity, Commun. Math. Phys. 18 (1970) 301–306.
* (10) R. Schoen and S.T. Yau, On the proof of the positive mass conjecture in general relativity, Commun. Math. Phys. 65 (1979) 45–76.
* (11) R. Schoen and S.T. Yau, Proof of the positive mass theorem. II, Commun. Math. Phys. 79 (1981) 231–260.
* (12) E. Witten, A new proof of the positive energy theorem, Commun. Math. Phys. 80 (1981) 381–402.
* (13) R. Schoen and S.T. Yau, Positivity of the total mass of a general space-time, Phys. Rev. Lett. 43 (1979) 1457-1459.
* (14) T. Parker and C.H. Taubes, On Witten’s proof of the positive energy theorem, Commun. Math. Phys. 84 (1982) 223–238.
* (15) W. Rindler, Relativity: Special, General, and Cosmological, 2nd Edition, Oxford, NY, 2006.
* (16) A. Einstein, Zur Elektrodynamik bewegter Körper, Ann. Phys. 322 (1905) 891–921.
* (17) R. Arnowitt, S. Deser, and C.W. Misner, Energy and the criteria for radiation in general relativity, Phys. Rev. 118 (1960) 1100–1104.
* (18) R. Arnowitt, S. Deser, and C. W. Misner, Coordinate invariance and energy expressions in general relativity, Phys. Rev. 122 (1961) 997–1006.
* (19) J.M. Nester, A new gravitational energy expression with a simple positivity proof, Phys. Lett. A 83 (1981) 241–242.
* (20) C. Wang, The relativistic Doppler effect: when a zero-frequency shift or a red shift exists for sources approaching the observer, Ann. Phys. (Berlin) 523 (2011) 239–246. As shown in this reference, after two successive frequency upshifts in a magnetic wiggler free-electron laser (FEL), the radiation wavelength is given by $\lambda=\lambda_{\mathrm{w}}(1-\beta_{\|})/\beta_{\|}$, where $\lambda_{\mathrm{w}}$ is the wiggler period, and $\beta_{\|}$ is the electron’s axial drift velocity normalized to the vacuum light speed $c$. When a high-energy and small wiggler-velocity electron beam is employed so that $(1+a_{\textrm{w}}^{2})/\gamma^{2}<<1$ holds, the radiation wavelength can be well approximated as $\lambda=\lambda_{\mathrm{w}}(1+a^{2}_{\mathrm{w}})/(2\gamma^{2})$ for a helical wiggler FEL r21 ; r22 , and $\lambda=\lambda_{\mathrm{w}}(1+a^{2}_{\mathrm{w}}/2)/(2\gamma^{2})$ for a planar wiggler FEL r23 ; r24 , where $\gamma$ is the electron’s relativistic factor; $a_{\mathrm{w}}=(|e|B_{\mathrm{w}}/m)(\lambda_{\mathrm{w}}/2\pi c)=0.934\,B_{\mathrm{w}}[\mathrm{T}]\,\lambda_{\mathrm{w}}[\mathrm{cm}]$ is the wiggler parameter, with $e$ and $m$, respectively, the electron’s charge and rest mass, and $B_{\mathrm{w}}$ the amplitude of wiggler magnetic field. (1) For a helical wiggler FEL with $\gamma=47.97$ (24 MeV), $\lambda_{\mathrm{w}}=3.2$ cm, and $B_{\mathrm{w}}=0.24$ T, we have $a_{\mathrm{w}}=0.717$ and the theoretical radiation wavelength is given by $\lambda=3.2\,\mathrm{cm}\times(1+0.717^{2})/(2\times 47.97^{2})=10.53$ $\upmu$m, compared with the experimental observation of 10.6 $\upmu$m in Ref. r21 . (2) For a planar wiggler FEL with $\gamma=130.2$ (66 MeV), $\lambda_{\mathrm{w}}=3.6$ cm, and $B_{\mathrm{w}}=0.29$ T, we have $a_{\mathrm{w}}=0.975$ and the theoretical radiation wavelength is given by $\lambda=3.6\,\mathrm{cm}\times(1+0.975^{2}/2)/(2\times 130.2^{2})=1.567$ $\upmu$m, compared with the experimental observation of 1.57 $\upmu$m in Ref. r23 . Thus the free-electron laser has provided strong and convincing experimental support for the wave four-vector, which is one of the basic results of Einstein’s special relativity r16 .
* (21) L.R. Elias, W.M. Fairbank, J.M.J. Madey, H.A. Schwettman, and T.I. Smith, Observation of stimulated emission of radiation by relativistic electrons in a spatially periodic transverse magnetic field, Phys. Rev. Lett. 36 (1976) 717–720.
* (22) D.A.G. Deacon, L.R. Elias, J.M.J. Madey, G.J. Ramian, H.A. Schwettman, and T.I. Smith, First operation of a free-electron laser, Phys. Rev. Lett. 38 (1977) 892–894.
* (23) J.A. Edighoffer, G.R. Neil, C.E. Hess, T.I. Smith, S.W. Fornaca, and H.A. Schwettman, Variable-wiggler free-electron-laser oscillation, Phys. Rev. Lett. 52 (1984) 344–347.
* (24) J. Andruszkow, B. Aune, V. Ayvazyan, N. Baboi, R. Bakker, V. Balakin, D. Barni, A. Bazhan, M. Bernard, A. Bosotti _et al._ , First observation of self-amplified spontaneous emission in a free-electron laser at 109 nm wavelength, Phys. Rev. Lett. 85 (2000) 3825–3829.
* (25) W. Rudin, Principles of Mathematical Analysis, 3rd Edition, McGraw-Hill, New York, 1976.
* (26) C. Wang, Minkowski tensor in electrodynamics of moving media and three rules for construction of the physical tensors in Einstein’s special relativity, Optik 223 (2020) 165469.
* (27) W. Thirring, Classical Mathematical Physics, 3rd Edition, Springer-Verlag, New York, 1997.
* (28) C. Wang, Criterion for testing the covariance of physical laws and Gordon optical metric, Optik 239 (2021) 166775.
* (29) C. Wang, Electromagnetic power flow, Fermat’s principle, and special theory of relativity, Optik 126 (2015) 2703–2705.
* (30) C. Wang, New insight into light propagation and light-matter interactions with applications to experimental observations, Optik 204 (2020) 163954.
* (31) H. Goldstein, C. Poole, and J. Safko, Classical Mechanics, 3rd Edition, Addison-Wesley, NY, 2000.
* (32) P.A.M. Dirac, The Principles of Quantum Mechanics, 4th Edition, Oxford, London, 1958.
* (33) A. Einstein, Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig? Ann. Phys. 323 (1905) 639–641.
* (34) D. Griffiths, Introduction to Elementary Particles, 2nd Edition, Wiley-VCH, Weinheim, 2008.
|
# Modeling and Control of Diesel Engine Emissions using Multi-layer Neural
Networks and Economic Model Predictive Control
Jiadi Zhang, Xiao Li, Mohammad Reza Amini, Ilya Kolmanovsky, Munechika Tsutsumi, Hayato Nakada
Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109, USA (e-mails: <EMAIL_ADDRESS>)
Department of Naval Architecture and Marine Engineering, University of Michigan, Ann Arbor, MI 48109, USA (e-mail: <EMAIL_ADDRESS>)
Hino Motors, Ltd., Tokyo 191-8660, Japan (e-mails: <EMAIL_ADDRESS>)
###### Abstract
This paper presents the results of developing a multi-layer Neural Network
(NN) to represent diesel engine emissions and integrating this NN into control
design. Firstly, an NN is trained and validated to simultaneously predict oxides of nitrogen ($NOx$) and $Soot$ using both transient and steady-state data. Based on an input-output correlation analysis, the inputs with the highest influence on the emissions are selected while keeping the NN structure simple. Secondly, a co-simulation framework is implemented to integrate the NN emissions model with a model of a diesel engine airpath system built in GT-Power, which is used to identify a low-order linear parameter-varying (LPV) model
for emissions prediction. Finally, an economic supervisory model predictive
controller (MPC) is developed using the LPV emissions model to adjust
setpoints to an inner-loop airpath tracking MPC. Simulation results are
reported illustrating the capability of the resulting controller to reduce
$NOx$, meet the target $Soot$ limit, and track the adjusted intake manifold
pressure and exhaust gas recirculation (EGR) rate targets.
###### keywords:
Model predictive control, Linear parameter-varying control, Neural networks,
Diesel engine control
## 1 Introduction
To achieve emission reduction in diesel engines, advanced engine control
methods can be leveraged to coordinate engine fueling, exhaust gas
recirculation (EGR) valve, and variable geometry turbocharger (VGT). In
particular, Model Predictive Control (MPC) has shown significant promise for
improving engine transient control while satisfying state and control
constraints (Norouzi et al. (2021)). Accurate and low-complexity engine-out
emission models can support control design and verification before physical
engine testing (Li et al. (2017)).
Different emission modeling methods have been proposed in the literature
including map-based quasi-steady models (Hagena et al. (2006)), physics-based
grey-box models (Heywood (2018); Arrègle et al. (2008)), and machine learning
based models (Winkler-Ebner et al. (2010); Prokhorov (2008); De Cesare and
Covassin (2011); Li et al. (2017)). As shown in Hagena et al. (2006), map-
based models may not accurately predict transient effects such as smoke spikes
during fuel tip-ins. On the other hand, physics-based grey-box models often
exploit in-cylinder pressure as an input, which is not available in commercial
vehicles in production. Hence, in this paper, a data-driven machine learning-
based model is developed using limited training data to support the emissions-
oriented MPC design.
The existing literature on the application of MPC to diesel engines has
addressed intake manifold pressure and airflow/EGR rate setpoint tracking in
the airpath system (Huang et al. (2013, 2018); Ortner and Del Re (2007)) as
well as higher-level objectives such as enhanced fuel economy or reduced
emissions (Huang et al. (2020)). Liao-McPherson et al. (2020); Liu et al.
(2021) have developed solutions along the lines of nonlinear economic MPC
(EMPC) to control $NOx$ and $Soot$ emissions. Broomhead et al. (2016) have
used EMPC for power tracking while limiting emissions in diesel generator
applications.
In this paper we focus on a hierarchical MPC architecture for emissions
reduction with an outer-loop (supervisory) economic MPC which adjusts fueling
rate and intake manifold pressure and EGR rate targets (set-points) to an
inner-loop tracking MPC for the airpath system. For the inner-loop MPC, we
adopt the airpath controller design from Liao-McPherson et al. (2020); Zhang
et al. (2022a), which exploits feedforward MPC and feedback rate-based MPC.
Note that in automotive applications, a feedback tracking controller is often
coupled with a feedforward controller to speed up the transient response
(Norouzi et al. (2021)). One of the most common feedforward controllers is
based on look-up tables for set-points and actuator positions, however, an
MPC-based feedforward (Liao-McPherson et al. (2020); Zhang et al. (2022a)) has
been shown to improve transient response.
To support controller implementation, we first develop a multi-layered Neural
Network (NN) model for engine-out $NOx$ and $Soot$ emissions based on
experimental dynamometer data. Then, we integrate this NN-based emissions
model with a high-fidelity engine model in GT-Power that predicts flows,
pressures, and temperatures in the engine to obtain a high-fidelity model for
closed-loop simulations. We then use this high-fidelity model to identify a
low-order linear parameter-varying (LPV) model for emissions prediction, and
we use this LPV model as a prediction model to implement supervisory economic
MPC. Closed-loop simulation results are finally reported followed by
concluding remarks.
## 2 Diesel Engine Modeling
The schematic of the diesel engine considered in this paper is shown in Figure
1. The EGR system dilutes the cylinder charge and lowers the peak combustion
temperature, thereby reducing oxides of nitrogen $NOx$ emissions. The flow
from the exhaust manifold of the engine to the intake manifold is controlled
by the EGR valve. As the EGR flow also depends on the exhaust pressure, this
flow is also affected by the VGT. Both VGT and EGR valve affect the intake
manifold pressure and air-to-fuel ratio. High levels of recirculated exhaust gas and a low air-to-fuel ratio can lead to increased smoke/soot emissions. The
engine operating point is typically defined by the engine speed ($N_{\tt e}$)
and total fuel injection rate ($w_{\tt inj}$) inferred from the engine torque
request; these are treated as the exogenous inputs.
Figure 1: Diesel engine with exhaust gas recirculation (EGR) and a variable
geometry turbocharger (VGT).
A typical diesel engine airpath controller tracks target values (set-points)
for the intake manifold pressure ($p_{\tt im}$) and EGR rate ($\chi_{\tt
egr}$). The EGR rate is defined by
$\chi_{\tt egr}=\frac{w_{\tt egr}}{w_{\tt egr}+w_{\tt c}},$ (1)
where $w_{\tt c}$ is the mass flow into the intake manifold through the
compressor and intercooler, and $w_{\tt egr}$ is the mass flow through the EGR
valve from the exhaust manifold into the intake manifold.
In this paper, the objective is to coordinate the EGR valve and VGT actuators
to control the intake manifold pressure ($p_{\tt im}$) and EGR rate
($\chi_{\tt egr}$) to target values that are computed and provided to the
airpath controller by a supervisory controller to satisfy emissions and fuel
consumption requirements.
As in our previous work on diesel engine airpath control (Zhang et al. (2022b,
a)), our controller development relies on a high-fidelity diesel engine model
in GT-Power. This model represents the responses of flows and pressures in different parts of the engine at crank-angle resolution and has been
validated against experimental data from engine dynamometer testing. This
model, however, does not represent engine feedgas emissions; hence, to enable
emissions-oriented controller development in this paper, we first augment this
GT-Power engine model with data-driven emissions models. We then develop
control-oriented models for feedgas emissions and airpath, and use them for
EMPC design.
### 2.1 Data-driven Engine Feedgas Emissions Modeling
Neural Networks, as universal function approximators, are capable of learning
emission models from data. Given experimental dynamometer data sets with both
transient and steady-state measurements, we model the diesel engine emissions
using a multi-layered NN (Figure 2), where the output of each layer is defined
by
$y_{i}=\sigma_{ReLU}\left(W_{i}y_{i-1}+b_{i}\right),~{}~{}i=1,\dots,4,$ (2)
and where $W_{i}$ and $b_{i}$ are the network parameters of the $i$th layer
while $y_{0}\in\mathbb{R}^{10}$ and $y_{4}\in\mathbb{R}^{2}$ are the inputs
and predicted emissions outputs, respectively. The NN is trained on an
experimental dataset consisting of both steady-state ($\tt ss$)
$\\{y_{0,i}^{\tt ss},y_{4,i}^{\tt ss}\\}_{i=1,\dots,306}$ and transient ($\tt
ts$) measurements $\\{y_{0,i}^{\tt ts},y_{4,i}^{\tt ts}\\}_{i=1,\dots,12001}$.
The objective of training the NN model is to minimize the sum of squares of the errors between the outputs $y_{4,i}$, $i=1,2$, and the actual emissions, namely, oxides of nitrogen ($NOx$) and $Soot$.
Figure 2: Schematic of the developed multi-layer neural network diesel
emissions model.
The three hidden layers of our NN model have sizes $1024$, $512$, and $32$, respectively; these are chosen consistent with an encoder-decoder architecture while keeping the number of trainable parameters relatively small. The first hidden layer, functioning as an encoder, blends the input measurements from $\mathbb{R}^{10}$ into features in the higher-dimensional space $\mathbb{R}^{1024}$. The two subsequent hidden layers have $512$ and $32$ neurons, respectively; they decode the high-dimensional features and project the vector layer-by-layer into $\mathbb{R}^{2}$.
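To make the architecture concrete, the following is a minimal PyTorch sketch of the network in Eq. (2); the layer sizes follow the text, while the class and variable names are illustrative. Note that Eq. (2) applies the ReLU to every layer, including the output, which is consistent with non-negative emissions targets.

```python
import torch
import torch.nn as nn

# Minimal sketch of the emissions NN in Eq. (2): four affine layers, each
# followed by a ReLU, mapping the 10 measured inputs to [NOx, Soot].
class EmissionsNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(10, 1024), nn.ReLU(),   # encoder: R^10 -> R^1024
            nn.Linear(1024, 512), nn.ReLU(),  # decoder stage 1
            nn.Linear(512, 32), nn.ReLU(),    # decoder stage 2
            nn.Linear(32, 2), nn.ReLU(),      # outputs: [NOx, Soot]
        )

    def forward(self, y0):
        return self.layers(y0)
```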
Table 1: Normalized cross-covariance with zero lag between measured input variable candidates and emissions.
Measured Variables | Transient $NOx$ $\rm[ppm]$ | Transient $Soot$ $\rm[\%]$ | Steady-State $NOx$ $\rm[ppm]$ | Steady-State $Soot$ $\rm[\%]$
---|---|---|---|---
Injection Pressure $\rm[MPa]$ | 0.45 | -0.32 | -0.09 | -0.18
Main injection timing $\rm[BTDC]$ | 0.27 | -0.39 | -0.03 | -0.19
Main injection fuel flow rate $\rm[mm^{3}/st]$ | 0.68 | -0.12 | 0.38 | -3.0e-3
Pre-injection fuel flow rate $\rm[mm^{3}/st]$ | 0.22 | 0.07 | -8.2e-17 | -1.7e-16
Engine torque output $\rm[Nm]$ | 0.67 | -0.10 | 0.38 | -0.02
Engine speed $\rm[rpm]$ | 0.28 | -0.40 | -0.25 | -0.07
Intake manifold pressure $\rm[kPa]$ | 0.47 | -0.21 | 0.15 | 0.15
Exhaust manifold pressure $\rm[kPa]$ | 0.49 | -0.22 | 0.11 | -0.04
Mass air flow $\rm[G/s]$ | 0.39 | -0.27 | 0.04 | -0.12
EGR position $\rm[\%]$ | -0.23 | -0.33 | 0.10 | 0.22
VGT position $\rm[\%]$ | -0.32 | 0.42 | 0.08 | 0.13
To enable our NN model to exploit two datasets (i.e., a steady-state dataset and a transient dataset) of distinct physical nature and sizes, we used the following procedure. Firstly, the input variables used in training were
selected based on correlation analysis. For instance, Table 1 indicates low
normalized cross-covariance for the pre-injection fuel flow rate; hence we
remove it from the set of model inputs while other signals in Table 1 are kept
in $y_{0}\in\mathbb{R}^{10}$. Secondly, we exclude outliers from the transient
training data set to facilitate the integration of the transient and steady
state data. For this, we empirically choose a threshold, $\epsilon>0$, and
exclude a transient data point $(y_{0,i}^{\tt ts},y_{4,i}^{\tt ts})$ if
$r(y_{0,i}^{\tt ts})=\left((y_{0,i}^{\tt ts}-\mu_{ss})^{T}\Sigma_{ss}^{-1}(y_{0,i}^{\tt ts}-\mu_{ss})\right)^{1/2}>\epsilon,$ (3)
where $r(y_{0,i}^{\tt ts})$ designates the Mahalanobis distance, while
$\mu_{ss}$ and $\Sigma_{ss}$ stand for the mean and the covariance matrix of
$\\{y_{0,i}^{\tt ss}\\}_{i=1,\dots,306}$. Thirdly, note that the transient
dataset of $12001$ data points is significantly larger than the steady-state
dataset of $306$ data points. To avoid overemphasizing the transient response when training the NN model, the final training dataset comprises the transient dataset after outlier removal and the steady-state dataset duplicated sevenfold.
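As a concrete illustration of the outlier screen in Eq. (3), the following NumPy sketch computes the Mahalanobis distance of each transient input to the steady-state input distribution and drops samples beyond a threshold; the threshold value shown is a placeholder, as the paper only states that $\epsilon$ was chosen empirically.

```python
import numpy as np

# Sketch of the outlier screen in Eq. (3): drop transient samples whose
# Mahalanobis distance to the steady-state input distribution exceeds a
# threshold. The default value of eps is a placeholder; the paper states
# only that epsilon was chosen empirically.
def filter_transient(y0_ts, y0_ss, eps=3.0):
    mu = y0_ss.mean(axis=0)                    # steady-state mean mu_ss
    cov_inv = np.linalg.inv(np.cov(y0_ss.T))   # inverse covariance Sigma_ss^{-1}
    d = y0_ts - mu
    r = np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))  # Mahalanobis distances
    return y0_ts[r <= eps]                     # keep samples with r <= eps
```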
Using the Pytorch package developed by Paszke et al. (2019), the NN model was trained for $1000$ epochs with an MSE loss function and a batch size of $40$. The stochastic gradient descent algorithm with momentum $\rho=0.9$ was used for training. The initial learning rate was set to $lr=10^{-4}$ with a decay rate of $\gamma=0.5$ applied every $100$ epochs. The numbers of data points used for training, validation, and testing were chosen in the ratio 70%:15%:15%.
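This training configuration can be reproduced with a few lines of PyTorch; the sketch below reuses the `EmissionsNN` sketch above, and the random tensors stand in for the proprietary dynamometer data.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Sketch of the training setup in the text: SGD with momentum 0.9, initial
# learning rate 1e-4 halved every 100 epochs, MSE loss, batch size 40,
# 1000 epochs. EmissionsNN is the sketch above; the random tensors stand in
# for the proprietary dynamometer dataset.
model = EmissionsNN()
data = TensorDataset(torch.randn(1000, 10), torch.rand(1000, 2))
train_loader = DataLoader(data, batch_size=40, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)
loss_fn = torch.nn.MSELoss()

for epoch in range(1000):
    for y0, y4_true in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(y0), y4_true)
        loss.backward()
        optimizer.step()
    scheduler.step()  # lr decay: gamma = 0.5 every 100 epochs
```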
Figure 3 compares the prediction results using the NN model versus the actual
measured emissions data from the testing fraction of the dynamometer dataset.
In Figure 3 and subsequent figures, we are not able to report the $y$-axis
values in order to protect OEM proprietary data. Compared to the measured
$NOx$ data, the predicted $NOx$ has mean errors of 49.8 $\rm ppm$ during
transients and 129.8 $\rm ppm$ in steady state. The mean errors in $Soot$
prediction are 0.45% in transients and 5.7% in steady state. As shown in
Figure 3, our predictions for both transient data (in blue dashed lines) and steady-state data (in blue crosses) match those from the dynamometer data with
relatively small errors on average.
Figure 3: The NN-based diesel emissions prediction performance against the
actual dynamometer data.
### 2.2 Emissions Control-oriented Modeling
To reduce diesel engine emissions, we employ an economic MPC (EMPC). To
implement EMPC, a control-oriented model is needed for predicting the
emissions over the MPC optimization horizon. To that end, in this section, we
develop and validate an LPV model for emissions. To keep the prediction model
as simple as possible, only $NOx$ and $Soot$ are selected as the LPV model
states ($x$). Both emission outputs are assumed to be measured or accurately
estimated. The input ($u$) for our model includes the intake manifold pressure
($p_{\tt im}$), EGR rate ($\chi_{\tt egr}$) and fuel injection rate ($w_{\tt
inj}$).
Both the states and the input variables have been normalized by the steady-
state values ($x^{\tt ss}(\rho)$, $u^{\tt ss}(\rho)$) and their corresponding
standard deviation ($\sigma_{x}(\rho)$, $\sigma_{u}(\rho)$) as follows
$\tilde{x}(\rho)=\frac{x-x^{\tt ss}(\rho)}{\sigma_{x}(\rho)},\quad\tilde{u}(\rho)=\frac{u-u^{\tt ss}(\rho)}{\sigma_{u}(\rho)}.$ (4)
The prediction model has an LPV form,
$\tilde{x}_{k+1}(\rho_{k})=\tilde{A}(\rho_{k})\tilde{x}_{k}(\rho_{k})+\tilde{B}(\rho_{k})\tilde{u}_{k}(\rho_{k}),$ (5)
where $k$ denotes the discrete time, $\rho_{k}$ is the vector of engine speed
and fuel injection rate at time instant $k$,
$\tilde{A}:\mathbb{R}^{2}\to\mathbb{R}^{2\times 2}$,
$\tilde{B}:\mathbb{R}^{2}\to\mathbb{R}^{2\times 3}$ are operating condition
($\rho$) dependent matrices.
We linearly interpolate the matrices $\tilde{A}(\rho)$ and $\tilde{B}(\rho)$
from $99$ models identified at pre-selected operating points ($\rho$) defined
by 9 values of the engine speed and 11 values of fuel injection rate that
cover the engine operating range. A co-simulation framework is implemented to
integrate the NN emissions model with a model of the diesel engine airpath
system built in GT-Power in order to generate input-output response data
corresponding to small EGR and VGT position perturbations; these data are then
used for local model identification at each of the $99$ operating points. The
data generation procedure is summarized in Figure 4. Using the function ${\tt n4sid}$ in MATLAB with the prediction horizon set to 50 steps, we obtain, for each operating point, optimized $\tilde{A}$ and $\tilde{B}$ matrices that minimize the one-step-ahead prediction error between measured and predicted outputs.
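A possible reading of the interpolation step is the following bilinear scheme over the $9\times 11$ grid of engine speeds and fuel injection rates; the paper states only that the matrices are linearly interpolated, so the exact scheme and all names here are illustrative.

```python
import numpy as np

# Illustrative bilinear interpolation of the LPV matrices A(rho), B(rho) over
# the 9 x 11 grid of identified local models. `speeds` (9,) and `fuels` (11,)
# are the grid axes; `A_grid` has shape (9, 11, 2, 2), `B_grid` (9, 11, 2, 3).
def interp_lpv(rho, speeds, fuels, A_grid, B_grid):
    Ne, winj = rho
    i = int(np.clip(np.searchsorted(speeds, Ne) - 1, 0, len(speeds) - 2))
    j = int(np.clip(np.searchsorted(fuels, winj) - 1, 0, len(fuels) - 2))
    a = (Ne - speeds[i]) / (speeds[i + 1] - speeds[i])    # weight along speed
    b = (winj - fuels[j]) / (fuels[j + 1] - fuels[j])     # weight along fuel
    def blend(G):
        return ((1 - a) * (1 - b) * G[i, j] + a * (1 - b) * G[i + 1, j]
                + (1 - a) * b * G[i, j + 1] + a * b * G[i + 1, j + 1])
    return blend(A_grid), blend(B_grid)
```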
Figure 4: Data generation procedure for identification of control-oriented LPV
model of diesel emissions, where $p_{\tt im}$ is intake manifold pressure,
$\chi_{\tt egr}$ is EGR rate, and $w_{\tt inj}$ is fuel injection quantity.
Figure 5 compares the results of simulating the LPV model (5) and the GT-Power
model with NN-based emissions model over the first $600s$ of the World
Harmonized Transient Cycle (WHTC) driving cycle, where both models respond to
the same input trajectories of engine speed, fuel injection rate, target
intake manifold pressure, and target EGR rate. The trajectories of the target
intake manifold pressure and target EGR rate for these simulations were
generated from look-up tables that specify desired $p_{\tt im}$ and $\chi_{\tt
egr}$ in steady-state as functions of the engine speed and fuel injection
rate.
The average errors of the $NOx$ and $Soot$ LPV models, as compared to the higher-fidelity NN model, are less than $100~{}ppm$ and 2%, respectively, even though larger errors are observed in transients. While the transient errors could potentially be reduced with higher-order LPV and/or NN models, as we will demonstrate in the following section, the proposed simple LPV model can successfully support an EMPC design.
Figure 5: Comparison of the control-oriented LPV emissions model against the GT-Power co-simulation with the NN-based emissions model over the first 600 $s$ of the WHTC driving cycle: (a) $NOx$, and (b) $Soot$.
### 2.3 Diesel Airpath Control-Oriented Model
A control-oriented model of the diesel airpath system is adopted to design the
inner-loop tracking MPC. The airpath model estimates the response of the
intake manifold pressure and EGR rate to EGR and VGT actuators. The intake
manifold pressure ($p_{\tt im}$) and EGR rate ($\chi_{\tt egr}$) are two of
the airpath model states ($z$). The model inputs ($v$) are the EGR valve
position (percent open) and VGT position (percent closed). The control-oriented
airpath model also has an LPV form,
$z_{k+1}-z_{k+1}^{\tt ss}(\rho_{k})=A(\rho_{k})\left[z_{k}-z_{k}^{\tt ss}(\rho_{k})\right]+B(\rho_{k})\left[v_{k}-v_{k}^{\tt ss}(\rho_{k})\right]+B_{f}(\rho_{k})\left[w_{{\tt inj},k}-w_{{\tt inj}}^{\tt ss}(\rho_{k})\right],$ (6)
where $A,B:\mathbb{R}^{2}\to\mathbb{R}^{2\times 2}$, $z_{k}^{\tt
ss},v_{k}^{\tt ss}:\mathbb{R}^{2}\to\mathbb{R}^{2}$ are mappings that
determine equilibrium values of $z$ and $v$ corresponding to a given $\rho$,
$B_{f}:~{}\mathbb{R}^{2}\to\mathbb{R}^{2\times 1}$, and $w_{\tt inj}^{\tt
ss}(\rho)=\left[\begin{array}[]{cc}0&1\end{array}\right]\rho$. Note that the model (6) includes $w_{{\tt inj},k}$ as an extra additive input. This input has been added as it improves the model match in transients. Details
of the airpath LPV model identification and validation can be found in Zhang
et al. (2022a).
## 3 Integrated Emissions and Airpath MPC Design
The overall architecture of the proposed integrated emissions and airpath
controller is shown in Figure 6. For each engine speed ($N_{\tt e}$) and fuel
injection rate target ($w_{\tt inj}$), the outer-loop EMPC controller
generates adjusted target values for intake manifold pressure ($p_{\tt
im}^{adj}$), EGR rate ($\chi_{\tt egr}^{adj}$), as well as the fuel injection
command ($w_{\tt inj}^{adj}$) to reduce $NOx$ and enforce the constraint on
$Soot$. The fuel injection rate is applied directly to the engine while the
adjusted intake manifold pressure and EGR rate target values are passed to the
airpath MPC which tracks these targets by adjusting the EGR valve and VGT
positions, see Zhang et al. (2022a) for more details.
Figure 6: The block diagram of the integrated emissions and airpath control
system.
In the baseline engine control strategy, the intake manifold pressure and EGR
rate targets are static functions of the operating condition, i.e., $p_{\tt
im}^{\tt trg}(\rho):\mathbb{R}^{2}\to\mathbb{R}$ and $\chi_{\tt egr}^{\tt
trg}(\rho):\mathbb{R}^{2}\to\mathbb{R}$ that are implemented using look-up
tables. These targets are obtained during engine development and are chosen to
be optimal in steady-state operating conditions. In this paper, we focus on
the optimization of transient engine response, i.e., using MPC to shape the
transient response of the engine as it transitions between operating points.
### 3.1 EMPC for Emissions Control
Our economic MPC (EMPC) exploits a rate-based prediction model to enhance
closed-loop robustness and disturbance rejection. Rate-based MPC was applied
to aircraft turbofan engines by DeCastro (2007) and to diesel engine airpath
control by Huang et al. (2013, 2014, 2016).
First, in reference to the model (5), we define rate variables $\Delta x_{k}$
and $\Delta u_{k}$ as
$\Delta x_{k}=\tilde{x}_{k}-\tilde{x}_{k-1},\quad\Delta u_{k}=\tilde{u}_{k}-\tilde{u}_{k-1},$
that correspond to the increments in the state and control variables,
respectively. Then, assuming $\rho_{k}$ remains constant over the prediction
horizon, the prediction model (5) implies that
$\Delta x_{k+1}=\tilde{A}(\rho_{k})\Delta x_{k}+\tilde{B}(\rho_{k})\Delta u_{k}.$ (7)
The constant $\rho_{k}$ assumption is adopted widely in the literature on
engine and powertrain control with MPC (Norouzi et al. (2021)).
The rate-based reformulation as given in (7) is particularly effective when applied to an LPV model: under the assumption that $\rho_{k}$ remains constant over the prediction horizon, the steady-state values of the states and controls in (4), i.e., $x_{k}^{ss}(\rho_{k})$ and $u_{k}^{ss}(\rho_{k})$, do not need to be known.
We let $\tilde{x}_{j|k}$ and $\tilde{u}_{j|k}$ denote the predicted state and
control values at time step $j,$ $0\leq j\leq N$, over the prediction horizon
when the prediction is made at the time step $k$. Then to be able to impose
constraints on $\tilde{x}_{j|k}$ and $\tilde{u}_{j|k}$ and compute the MPC
cost that penalizes the rate of change of input, $\Delta u_{k}$, we define the
augmented state vector,
$x_{j|k}^{\tt ext}=\left[\begin{array}[]{ccc}\Delta x_{j|k}^{\sf T},&\tilde{x}_{j-1|k}^{\sf T},&\tilde{u}_{j-1|k}^{\sf T}\end{array}\right]^{\sf T}.$
The rate-based prediction model (7) then implies
$\displaystyle x_{j+1|k}^{\tt ext}=\begin{bmatrix}\tilde{A}(\rho_{k})&0&0\\\ \mathbb{I}_{n_{x}\times n_{x}}&\mathbb{I}_{n_{x}\times n_{x}}&0\\\ 0&0&\mathbb{I}_{n_{u}\times n_{u}}\end{bmatrix}x_{j|k}^{\tt ext}+\begin{bmatrix}\tilde{B}(\rho_{k})\\\ 0\\\ \mathbb{I}_{n_{u}\times n_{u}}\end{bmatrix}\Delta u_{j|k},$ (8)
where $\Delta u_{j|k}$ is the control input in the extended system.
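For reference, the block structure of Eq. (8) can be assembled directly, as in this NumPy sketch with $n_{x}=2$ states and $n_{u}=3$ inputs; the function name is illustrative.

```python
import numpy as np

# Sketch of assembling the extended model in Eq. (8) from the rate-based LPV
# matrices, with n_x = 2 states ([NOx, Soot]) and n_u = 3 inputs.
def extended_model(A_t, B_t, n_x=2, n_u=3):
    A_ext = np.block([
        [A_t,                  np.zeros((n_x, n_x)), np.zeros((n_x, n_u))],
        [np.eye(n_x),          np.eye(n_x),          np.zeros((n_x, n_u))],
        [np.zeros((n_u, n_x)), np.zeros((n_u, n_x)), np.eye(n_u)],
    ])
    B_ext = np.vstack([B_t, np.zeros((n_x, n_u)), np.eye(n_u)])
    return A_ext, B_ext
```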
The EMPC is designed based on considerations of safety and drivability, so that the fuel injection rate satisfies the specified upper and lower bounds, as well as on emission requirements such as $Soot$ limits. The following optimal control problem is solved at each sampling
instant with the intake manifold pressure, EGR rate, and fuel injection rate
being optimized:
$\min_{\Delta u_{0|k},\ldots,\Delta u_{N-1|k},\epsilon_{k}}J(\Delta u,\epsilon,\rho)=\sum_{j=0}^{N}l(\Delta u_{j|k},\epsilon_{j|k},\rho_{k})$ (9a)
subject to
$\Delta x_{j+1|k}=\tilde{A}(\rho_{k})\Delta x_{j|k}+\tilde{B}(\rho_{k})\Delta u_{j|k},$ (9b)
$u_{j|k}=u_{j-1|k}+\Delta u_{j|k},\quad j=0,\ldots,N-1,$ (9c)
${Soot}_{j|k}\leq{Soot}_{max}+\epsilon_{j|k},\quad j=1,\ldots,N,$ (9d)
$0.9w_{\tt inj}^{trg}\leq{w_{\tt inj}^{adj}}_{j|k}\leq w_{\tt inj}^{trg},\quad j=0,\ldots,N-1,$ (9e)
$\epsilon_{j|k}\geq 0,\quad j=0,\ldots,N-1,$ (9f)
where $N$ is the prediction horizon and $\epsilon_{k}$ is the slack variable
introduced to avoid infeasibility of the $Soot$ constraints. The stage cost
function $l$ is defined as
$l(\Delta u,\epsilon,\rho)=\alpha(p_{\tt im}^{trg}(\rho)-p_{\tt im}^{adj})^{2}+\beta(\chi_{\tt egr}^{trg}(\rho)-\chi_{\tt egr}^{adj})^{2}+\gamma(w_{\tt inj}^{trg}-w_{\tt inj}^{adj})+\eta{NOx}+\zeta\epsilon+\Delta u^{\sf T}R\Delta u$
where $\alpha,\beta,\gamma,\eta,\zeta>0$ and $R>0$ are tuning parameters that reflect, respectively, the tracking objectives for the intake manifold pressure target, EGR rate target, and fuel injection rate; a penalty on the $NOx$ value; a penalty to soften the $Soot$ constraint to guarantee feasibility; and a damping term. The fuel tracking, $NOx$, and slack variable terms use 1-norm penalties since they are more robust to ill-conditioning than quadratic penalties, facilitating the numerical solution of (9).
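To make the structure of problem (9) concrete, the following is a schematic cvxpy formulation; the weights, horizon, and scalar damping weight are placeholders (the paper does not report its tuning values), the normalization of Eq. (4) is omitted, and the initial state increment is taken as zero for brevity.

```python
import cvxpy as cp
import numpy as np

# Schematic sketch of problem (9). A_t, B_t: rate-based LPV matrices at the
# current rho; x0 = [NOx, Soot] and u0 = [p_im, chi_egr, w_inj] are the
# current state and input. All tuning values below are placeholders.
def empc_step(A_t, B_t, x0, u0, p_trg, chi_trg, w_trg, soot_max,
              N=10, alpha=1.0, beta=1.0, gamma=1.0, eta=1.0, zeta=100.0, R=0.1):
    du = cp.Variable((N, 3))            # input increments Delta u_{j|k}
    eps = cp.Variable(N, nonneg=True)   # Soot slack, enforces (9f)
    cost, cons = 0, []
    x, u, dx = x0, u0, np.zeros(2)      # Delta x_k taken as zero for brevity
    for j in range(N):
        dx = A_t @ dx + B_t @ du[j]     # rate dynamics (9b)
        x = x + dx                      # accumulate state increments
        u = u + du[j]                   # input update (9c)
        cons += [x[1] <= soot_max + eps[j],              # soft Soot limit (9d)
                 0.9 * w_trg <= u[2], u[2] <= w_trg]     # fuel bounds (9e)
        cost += (alpha * cp.square(p_trg - u[0])         # p_im tracking
                 + beta * cp.square(chi_trg - u[1])      # EGR-rate tracking
                 + gamma * (w_trg - u[2])                # fuel term (>= 0 by (9e))
                 + eta * x[0]                            # NOx penalty
                 + zeta * eps[j]                         # slack penalty
                 + R * cp.sum_squares(du[j]))            # scalar damping
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u0 + du.value[0]             # apply the first adjusted input
```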
### 3.2 Tracking MPC for Airpath System
Our airpath MPC includes a feedforward (FF) MPC, to provide fast transient
response, and a feedback (FB) MPC that provides integral action and ensures
offset-free tracking. The FB MPC uses a rate-based formulation to eliminate
the steady-state error while the FF MPC only closes the loop around the model
of the plant. Details of design and tuning of the airpath MPC are presented in
Zhang et al. (2022a).
## 4 Simulation Results and Discussions
Four different tuning scenarios are considered to investigate how the
emissions can be altered according to different control requirements:
* •
EMPC-A: Low $NOx$ penalty without $Soot$ limit
* •
EMPC-B: High $NOx$ penalty without $Soot$ limit
* •
EMPC-C: Low $NOx$ penalty with $Soot$ limit
* •
EMPC-D: High $NOx$ penalty with $Soot$ limit
A baseline scenario without EMPC (w/o EMPC) was also generated, in which the adjusted targets for $p_{\tt im}$, $\chi_{\tt egr}$ are the same as the values from the look-up table, while $w_{\tt inj}$ is the same as defined by the operating condition.
### 4.1 Case Study with Steps/Ramps
Figure 7 shows the simulation results at one operating point with different $NOx$ penalties in the cost function during, first, a fuel tip-in and tip-out and, then, a ramp change of engine speed. Here no $Soot$ limit was imposed. The simulations are run for 30 $s$ first to achieve close to steady-state temperature conditions before the steps are applied. According to the results
summarized in Table 2, the EMPC reduces both cumulative $NOx$ and peak $NOx$
values. Figure 8 shows the corresponding intake manifold pressure, EGR rate,
and fuel injection rate values from EMPC-B simulation. The EMPC adjusts the
target $p_{\tt im}$ and $\chi_{\tt egr}$ to help reduce $NOx$ while the
airpath MPC is able to track these adjusted target values without steady-state
errors.
Figure 7: Comparison of (a) $NOx$ and (b) $Soot$ with different $NOx$ penalties during, first, a fuel ($w_{\tt inj}$) tip-in and tip-out and, then, a ramp change of $N_{\tt e}$.
Figure 8: Comparison of actual, target from look-up table, and adjusted target by EMPC (a) intake manifold pressure, (b) EGR rate, and (c) fuel injection rate with EMPC-B.
After adding the $Soot$ constraint, the EMPC is able to keep $Soot$ below a
predefined limit. Figure 9 shows the simulation results at the same operating
point and with the $Soot$ limit, and with different $NOx$ penalties in the
cost function. Table 3 indicates that both EMPC-C and EMPC-D have reduced
$Soot$ peak values. However, EMPC-D has higher average $Soot$ compared with
the baseline due to its large penalty on $NOx$ in the cost function. According
to Figure 10, $Soot$ values of both EMPCs are below the limit for most of the
time. Oscillations in the adjusted target values of $p_{\tt im}$ and $\chi_{\tt egr}$ are observed in Figure 10, attributed to the model mismatch between the LPV and NN emissions models. The range of the oscillation in $p_{\tt im}$ and $\chi_{\tt egr}$ is less than 0.01 bar and 1%, respectively, and is
relatively small. Such oscillation can also be reduced by using a larger
penalty $R$ on the damping term in the MPC cost function. At the same time,
the airpath MPC successfully tracks the adjusted target values.
Figure 9: Comparison of (a) $NOx$ and (b) $Soot$ with the same $NOx$ penalty and operating conditions as in Figure 7, with the addition of the $Soot$ limit.
Figure 10: Comparison of actual, target from look-up table, and adjusted target by EMPC (a) intake manifold pressure, (b) EGR rate, and (c) fuel injection rate with EMPC-D.
Table 2: Comparison of the $NOx$ results for different EMPC scenarios.
MPC | Cumulative $NOx$ [%] | Peak $NOx$ [%]
---|---|---
w/o EMPC (baseline) | reference | reference
EMPC-A | $\downarrow$ 0.780% | $\downarrow$ 5.700%
EMPC-B | $\downarrow$ 6.695% | $\downarrow$ 9.065%
EMPC-C | $\uparrow$ 0.395% | $\downarrow$ 5.341%
EMPC-D | $\downarrow$ 3.033% | $\downarrow$ 8.301%
Table 3: Comparison of the $Soot$ results for different EMPC scenarios.
MPC | Average $Soot$ [%] | Peak $Soot$ [%]
---|---|---
w/o EMPC (baseline) | reference | reference
EMPC-A | $\uparrow$ 1.067% | $\uparrow$ 0.090%
EMPC-B | $\uparrow$ 14.071% | $\uparrow$ 14.946%
EMPC-C | $\downarrow$ 1.300% | $\downarrow$ 6.183%
EMPC-D | $\uparrow$ 5.003% | $\downarrow$ 4.765%
### 4.2 Transient Drive Cycle Simulations
We now evaluate the integrated emissions and airpath control strategy in more
complex simulation scenarios over the Federal Test Procedure (FTP) and WHTC.
The results are summarized in Tables 4 and 5. Both cycles run for 100 $s$
first to achieve close to steady-state temperature conditions before cycle
inputs are applied. Figures 11 and 12 show the comparison between EMPC-B and
EMPC-C against the baseline case over the FTP and WHTC cycles, respectively. With EMPC-B, $NOx$ is considerably reduced over both drive cycles at the cost of an increase of around 20% in average $Soot$. If the $Soot$ limit is imposed,
EMPC-C could reduce both average and peak $Soot$ in both drive cycles compared
with the baseline case. Moreover, according to Figures 11-(b) and 12-(b), most of the peaks above the $Soot$ constraint are eliminated, illustrating that EMPC-C is able to reduce visible smoke. Results of EMPC-D are only shown in
Tables 4 and 5, from which it can be seen that both cumulative $NOx$ and
average $Soot$ are reduced if the EMPC is made very aggressive on both $NOx$
reduction and $Soot$ limits.
Figure 11: Comparison of (a) $NOx$ and (b) $Soot$ control results over the FTP cycle.
Figure 12: Comparison of (a) $NOx$ and (b) $Soot$ control results over the WHTC cycle.
Table 4: Comparison of $NOx$ control results over FTP and WHTC cycles.
MPC | Cumulative $NOx$ [%] (FTP/WHTC) | Peak $NOx$ [%] (FTP/WHTC)
---|---|---
w/o EMPC (baseline) | reference | reference
EMPC-B | $\downarrow$ 9.527% / $\downarrow$ 38.644% | $\uparrow$ 8.374% / $\downarrow$ 33.530%
EMPC-C | $\uparrow$ 4.056% / $\uparrow$ 2.963% | $\uparrow$ 6.6059% / $\downarrow$ 1.649%
EMPC-D | $\downarrow$ 0.451% / $\downarrow$ 15.378% | $\uparrow$ 10.822% / $\downarrow$ 11.615%
Table 5: Comparison of $Soot$ control results over FTP and WHTC cycles.
MPC | Average $Soot$ [%] (FTP/WHTC) | Peak $Soot$ [%] (FTP/WHTC)
---|---|---
w/o EMPC (baseline) | reference | reference
EMPC-B | $\uparrow$ 18.789% / $\uparrow$ 21.329% | $\uparrow$ 17.800% / $\downarrow$ 7.238%
EMPC-C | $\downarrow$ 14.268% / $\downarrow$ 9.521% | $\downarrow$ 9.485% / $\downarrow$ 33.786%
EMPC-D | $\downarrow$ 3.262% / $\downarrow$ 5.656% | $\downarrow$ 9.147% / $\downarrow$ 34.741%
## 5 Conclusions
In this paper, a multi-layer Neural Network (NN) was trained and validated to
simultaneously predict $NOx$ and $Soot$ emissions of a diesel engine for both
transient and steady-state operating conditions. A co-simulation framework was
then implemented to integrate the NN model with a high-fidelity model of the
diesel engine airpath system built in GT-Power. An integrated control strategy
for emissions regulation and airpath system setpoint tracking was developed
based on model predictive control (MPC). The emission controller is based on
an economic MPC formulation that leverages a linear parameter varying (LPV)
model of $NOx$ and $Soot$ to minimize $NOx$ and enforce the $Soot$ limit. The
airpath controller is a rate-based MPC consisting of feedback and feedforward
loops, aiming to track adjusted intake manifold pressure and EGR rate
setpoints computed by the economic MPC.
The performance of our integrated emissions and airpath MPCs was assessed
through simulations over different steady-state and transient drive cycles in
terms of (i) reducing $NOx$, (ii) enforcing $Soot$ limit, and (iii) tracking
the adjusted intake manifold pressure and EGR rate setpoints. The results
highlight the potential of economic supervisory MPC to achieve improved transient control and reduced engine feedgas emissions. Future work includes using region-based EMPC calibration to improve transient performance and replacing the LPV model used for emission prediction with a dynamic neural network model to reduce model mismatch and improve the overall performance.
## References
* Arrègle et al. (2008) Arrègle, J., López, J.J., Guardiola, C., and Monin, C. (2008). Sensitivity study of a NOx estimation model for on-board applications. Technical report, SAE Technical Paper.
* Broomhead et al. (2016) Broomhead, T., Manzie, C., Hield, P., Shekhar, R., and Brear, M. (2016). Economic model predictive control and applications for diesel generators. _IEEE Transactions on Control Systems Technology_ , 25(2), 388–400.
* De Cesare and Covassin (2011) De Cesare, M. and Covassin, F. (2011). Neural network based models for virtual NOx sensing of compression ignition engines. Technical report, SAE Technical Paper.
* DeCastro (2007) DeCastro, J.A. (2007). Rate-based model predictive control of turbofan engine clearance. _Journal of Propulsion and Power_ , 23(4), 804–813.
* Hagena et al. (2006) Hagena, J.R., Filipi, Z.S., and Assanis, D.N. (2006). _Transient diesel emissions: analysis of engine operation during a tip-in_. SAE International New York, NY, USA.
* Heywood (2018) Heywood, J.B. (2018). _Internal combustion engine fundamentals_. McGraw-Hill Education.
* Huang et al. (2020) Huang, C., Salehi, R., Ersal, T., and Stefanopoulou, A.G. (2020). An energy and emission conscious adaptive cruise controller for a connected automated diesel truck. _Vehicle System Dynamics_ , 58(5), 805–825.
* Huang et al. (2018) Huang, M., Liao-McPherson, D., Kim, S., Butts, K., and Kolmanovsky, I. (2018). Toward real-time automotive model predictive control: A perspective from a diesel air path control development. In _2018 Annual American Control Conference_ , 2425–2430. IEEE.
* Huang et al. (2014) Huang, M., Nakada, H., Butts, K., and Kolmanovsky, I. (2014). Robust rate-based model predictive control of diesel engine air path. In _American Control Conference_ , 1505–1510.
* Huang et al. (2013) Huang, M., Nakada, H., Polavarapu, S., Butts, K., and Kolmanovsky, I. (2013). Rate-based model predictive control of diesel engines. _IFAC Proceedings Volumes_ , 46(21), 177–182.
* Huang et al. (2016) Huang, M., Zaseck, K., Butts, K., and Kolmanovsky, I. (2016). Rate-based model predictive controller for diesel engine air path: Design and experimental evaluation. _IEEE Transactions on Control Systems Technology_ , 24(6), 1922–1935.
* Li et al. (2017) Li, H., Butts, K., Zaseck, K., Liao-McPherson, D., and Kolmanovsky, I. (2017). Emissions modeling of a light-duty diesel engine for model-based control design using multi-layer perceptron neural networks. Technical report, SAE Technical Paper.
* Liao-McPherson et al. (2020) Liao-McPherson, D., Huang, M., Kim, S., Shimada, M., Butts, K., and Kolmanovsky, I. (2020). Model predictive emissions control of a diesel engine airpath: Design and experimental evaluation. _International Journal of Robust and Nonlinear Control_ , 30(17), 7446–7477.
* Liu et al. (2021) Liu, Z., Dizqah, A.M., Herreros, J.M., Schaub, J., and Haas, O. (2021). Simultaneous control of NOx, soot and fuel economy of a diesel engine with dual-loop EGR and VNT using economic MPC. _Control Engineering Practice_ , 108, 104701.
* Norouzi et al. (2021) Norouzi, A., Heidarifar, H., Shahbakhti, M., Koch, C.R., and Borhan, H. (2021). Model predictive control of internal combustion engines: A review and future directions. _Energies_ , 14(19), 6251.
* Ortner and Del Re (2007) Ortner, P. and Del Re, L. (2007). Predictive control of a diesel engine air path. _IEEE Transactions on Control Systems Technology_ , 15(3), 449–456.
* Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. (2019). Pytorch: An imperative style, high-performance deep learning library. _Advances in Neural Information Processing Systems_ , 32.
* Prokhorov (2008) Prokhorov, D. (2008). Neural networks in automotive applications. In _Computational intelligence in automotive applications_ , 101–123. Springer.
* Winkler-Ebner et al. (2010) Winkler-Ebner, B., Hirsch, M., Del Re, L., Klinger, H., and Mistelberger, W. (2010). Comparison of virtual and physical nox-sensors for heavy duty diesel engine application. _SAE International Journal of Engines_ , 3(1), 1124–1139.
* Zhang et al. (2022a) Zhang, J., Amini, M.R., Kolmanovsky, I., Tsutsumi, M., and Nakada, H. (2022a). Benefits of feedforward for model predictive airpath control of diesel engines. In _10th IFAC Symposium on Robust Control Design_ , 181–186.
* Zhang et al. (2022b) Zhang, J., Amini, M.R., Kolmanovsky, I., Tsutsumi, M., and Nakada, H. (2022b). Development of a model predictive airpath controller for a diesel engine on a high-fidelity engine model with transient thermal dynamics. In _American Control Conference_ , 3058–3063.
# Collectives: Towards Cloud-aware Collectives for ML Workloads with Rank
Reordering
Liang Luo∗‡, Jacob Nelson†, Arvind Krishnamurthy∗, Luis Ceze∗
∗University of Washington, †Microsoft Research, ‡Facebook AI
###### Abstract
ML workloads are becoming increasingly popular in the cloud. Good cloud
training performance is contingent on efficient parameter exchange among VMs.
We find that collectives, the widely used distributed communication algorithms, cannot perform optimally out of the box due to the hierarchical topology of datacenter networks and the multi-tenant nature of the cloud environment.
In this paper, we present Collectives (Cloud-aware collectives), a prototype
that accelerates collectives by reordering the ranks of participating VMs such
that the communication pattern dictated by the selected collectives operation
best exploits the locality in the network. Collectives is non-intrusive,
requires no code changes nor rebuild of an existing application, and runs
without support from cloud providers. Our preliminary application of
Collectives on allreduce operations in public clouds results in a speedup of
up to 3.7x in multiple microbenchmarks and 1.3x in real-world workloads of
distributed training of deep neural networks and gradient boosted decision
trees using state-of-the-art frameworks.
## I Introduction
Collective communication operations are an essential component of many data-
parallel computation frameworks. Originally developed for high-performance
computing frameworks such as MPI [49], they are now widely used for cloud-
based distributed machine learning [48, 9, 1, 33, 27, 8, 43] workloads. With
increasingly complex models [37, 35, 42] calling for larger data exchanges, and the rapid deployment of faster accelerators [11, 21, 26] demanding more frequent exchanges, the efficient execution of these workloads is contingent on efficient collectives.
Unfortunately, achieving good collectives performance in a cloud environment
is fundamentally more challenging than in an HPC world, because the user has no control over node placement or topology and has to share the infrastructure with other tenants. These constraints have strong implications for the performance of collectives. As a result, the bottleneck of running these
workloads on the cloud has shifted from computation to communication [50, 30,
29].
Figure 1: Performance distribution of allreduce task of 100MB data with ring
algorithm varies widely with 500 random rank orders on Azure. Figure 2:
Pairwise RTT probe on 64 Azure Standard F64 v2 VMs shows non-uniform latency
in point to point VM communication.
Consider the common practice of applying ring allreduce, a popular collective algorithm, in the cloud context, where a randomly-ordered IP list of VMs (obtained through the provider) is used to form a virtual ring along which data is passed, with the $i$-th VM sending data to the $(i+1)$-th VM. But do different ways of forming the ring (through permutations of VMs in the list) exhibit the same performance? The answer is most likely no, as the ring that corresponds to a shorter total hop cost will likely perform better (Figure 1). Moreover, not all ways of forming rings achieve the same cost, because the point-to-point communication cost (bandwidth, latency, or collectively referred to as locality in this work) differs across VMs (Figure 2), due to the hierarchical structure of the datacenter network and the dynamic nature of traffic from other tenants. Consequently, running collectives with a randomly-ordered list of VMs results in unpredictable and subpar performance.
Our work focuses on discovering a permutation of the IP list that exploits network locality for efficient communication, in a completely transparent way, by minimizing the cost model of a given collective parameterized with the actual hop costs. To do so, we need to (1) efficiently identify the underlying network constraints (or, collectively, locality); (2) accurately build a cost model for the collective at hand; and (3) effectively approximate the minimum of complex cost functions.
This paper proposes Collectives, a tool that uses network probes to discover
locality within the underlying datacenter network, and uses it to solve a
communication cost minimization problem with constraints, with the rank of
each VM as the unknowns. We use the reordered ranks as input to unmodified communication backends in microbenchmarks including OMB [17], Nvidia NCCL [6], and Facebook Gloo [23], and in real-world workloads of training deep neural networks with Pytorch/Caffe2 and gradient boosted decision trees using LightGBM [33, 27]; our preliminary results show a speedup of up to 3.7x in various allreduce operations and 1.3x in end-to-end performance across EC2 and Azure.
## II Background
We provide an overview of typical structures and performance implications for
datacenter networks and a brief introduction to the various popular
collectives operations.
### II-A Locality in Datacenter Network
Modern datacenters are hierarchical, with machines connecting to a top of rack
switch, which are in turn connected to upper-level devices [36, 24, 44, 28].
This particular topology induces locality [30], as the communication performance between two physical hosts is not the same for all pairs. For example, VMs
within the same rack have the best and stable performance, as the physical
link is not shared. On the other hand, links between hosts residing in
different racks are shared, and the communication performance depends on
factors like hop count, link congestion, oversubscription ratio [15], and
dynamic load. Topology information is crucial for achieving optimal
performance as many collectives implementations generate routines based on
this information [19, 38, 39, 20, 29]. But in a cloud-environment, this
information is hidden. Various attempts are made to reconstruct the physical
affinity, e.g., PLink [30] uses DPDK-based latency probes and K-Means
clustering to find hosts with high physical affinity.
### II-B Collectives
Collectives work by decomposing an operation into a series of point-to-point operations between two nodes according to a predefined communication schedule. Collectives most often appear in MPI contexts [45, 47, 41, 16, 13]. Typically, all nodes participate in the communication, usually running symmetric tasks. Collectives can be used in many tasks such as (all)reduce, broadcast, (all)gather, and (reduce)scatter [7], and it is thus impractical to individually optimize each task. Fortunately, many of these tasks can be decomposed into multiple stages of collective primitives (e.g., allreduce can be decomposed into reducescatter followed by allgather). Therefore, we only need to focus on accelerating such primitives. We now introduce these algorithms, using $N$ for the number of participating nodes and $S$ for the amount of data to process per node.
Figure 3: Rounds of communications (color-coded) in various popular algorithms
used in collectives with 4 nodes.
Ring [40]. As shown in Figure 3(a), ring algorithms work by connecting nodes
to form a virtual ring. Data is then passed along the ring sequentially. Ring
algorithms require $O(N)$ steps to complete, sending $O(NS)$ amount of data.
Halving Doubling [47]. As shown in Figure 3(b), halving doubling works by
recursively doubling the distance (in terms of rank ID) while halving the
total amount of data sent in each round, requiring $O(log_{2}{N})$ steps to
finish while sending $O(NS)$ amount of data on the wire.
Tree. In one form, a single tree is built where data is transferred from
leaves to the root and vice-versa [25]; in a more optimized setting, a pair of
complementary binary trees are built to fully utilize the bisection bandwidth [46], each sending and receiving $S/2$. For binary trees,
$O(log_{2}N)$ rounds of communication are required, sending $O(NS)$ bytes.
BCube [3]. BCube is structurally similar to halving doubling, in the sense that nodes are organized into groups of $B$ peers. BCube operates in $O(log_{B}N)$ rounds; in each round, each node peers with a unique node in each of $B-1$ other groups. Each node communicates $\frac{S}{B^{i}}$ amount of data in round $i$. BCube achieves a total bytes-on-wire of $O(\sum_{i=0}^{log_{B}N-1}\frac{S}{B^{i+1}})$.
## III Motivation
This section motivates Collectives by demonstrating the implication of rank
order on the performance of cloud-based collectives. We start by highlighting
the asymmetric, non-uniform link cost in the cloud environment, by launching
64 Standard F64 VMs on the Azure cloud. We then run an in-house hybrid DPDK
and ping-based [4] latency probe (§IV-B) between each pair of VM node, using
the same technique in PLink. The result is summarized in Figure 2 as a
heatmap. We observe the pairwise round-trip latency can range from sub-10 to
hundreds of microseconds.
We proceed to examine the performance of the allreduce operation using Gloo [2],
running the Ring chunked algorithm with 512 Standard F16 VMs. To derive a
performance distribution, we use 500 randomly generated rank orders to
generate 500 samples, and each is the average runtime of a 10-iteration
reduction of 100MB of data. The result is summarized in the yellow
distribution in Figure 1. The performance of different rank orderings of VM
varies drastically, ranging from 330ms to 3400ms, with a mean of 1012ms and a
standard deviation of 418ms. Now we have established the profound influence of
rank ordering of VM nodes on the performance of collectives algorithms, the
goal of this paper is to derive an approximately optimal rank-ordering given a
selected collectives algorithm such that it maximizes the performance.
## IV Design and Implementation
We now describe Collectives, a tool that takes in a list of VM nodes and a
target algorithm, accurately and efficiently probes their pairwise distance,
and uses that information to construct a rank order of VMs that attempts to
minimize the total cost of communication.
### IV-A Cost Models for Collective Algorithms
Collectives builds a cost model $\mathbb{C}_{\mathbb{O}}$ for each popular
algorithm used in collectives $\mathbb{O}$, parameterized with the number of
participating nodes $N$ and size $S$. This section details the cost models for
popular algorithms. We use $c_{i,j}(S)$ to refer to the cost for transferring
$S$ amount of data from node $i$ to $j$. We further define
$MAX_{i=0}^{j}(f(i))=MAX(f(0),...,f(j))$. We assume $N$ is a power of 2 to simplify the explanation, and allow an arbitrary rank $r$ to alias to the canonical rank $(r+N)\text{ mod }N$.
Ring. The cost model of the ring algorithm is the sum of the cost of each hop
when traversing the ring:
$\mathbb{C}_{r}(N,c,S)=\sum_{i=0}^{N-1}c_{i,i-1}(S)$
Halving Doubling. The cost of halving doubling is the sum of costs for each
round of communication, which in turn is the max cost of all communications in
that round.
$\mathbb{C}_{hd}(N,c,S)=\sum_{i=0}^{log_{2}N-1}MAX_{j=0}^{\frac{N}{2}-1}c_{j,j+2^{i}}(\frac{S}{2^{i+1}})$
Tree. The cost of running tree algorithms depends on the number of trees and
how trees are constructed. The total cost is the maximum cost of all trees,
which is in turn determined by the maximum cost of each subtree. We provide a
cost model for a popular variant of tree algorithm: double binary tree as used
in [5].
$\mathbb{C}_{dbt}(N,c,S)=T(0,N-1,S)$
where $T(i,j,S)$ is expressed recursively:
$\displaystyle T(i,j,S)=\left\\{\begin{array}[]{ll}0&\textbf{if $i\geq j$}\\\ MAX\big(c_{\frac{i+j}{2},\frac{3i+j}{2}-1}(\frac{S}{2})+T(i,\frac{i+j}{2}-1,S),\;c_{\frac{i+j}{2},\frac{i+3j}{2}+1}(\frac{S}{2})+T(\frac{i+j}{2}+1,j,S)\big)&\textbf{otherwise}\end{array}\right.$
Similarly a mirrored tree is built by decrementing each node’s rank in the
tree without changing the tree structure.
BCube. The cost of running the BCube algorithm is similar to halving doubling,
except in each round, each node communicates with $B-1$ peers, instead of 1.
$\mathbb{C}_{b}(N,c,S,B)=\sum_{i=0}^{log_{B}N-1}MAX_{j=0}^{\frac{N}{B}-1}MAX_{k=1}^{B}c_{j,j+kB^{i}}(\frac{S}{B^{i+1}})$
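As an illustration, the ring and halving-doubling cost models above can be evaluated for a candidate rank order with a few lines of Python; since Collectives probes latency (§IV-B), this sketch treats $c_{i,j}(S)$ as independent of the message size $S$, and the function names are illustrative.

```python
import numpy as np

# Illustrative evaluation of the Sec. IV-A cost models for a candidate rank
# order `perm`, given the probed pairwise cost matrix c. Because the probes
# measure latency, c_{i,j}(S) is treated here as size-independent.
def ring_cost(c, perm):
    n = len(perm)
    return sum(c[perm[i], perm[i - 1]] for i in range(n))  # i-1 wraps around

def halving_doubling_cost(c, perm):
    n = len(perm)  # assumed a power of 2, as in the text
    total, step = 0.0, 1
    while step < n:
        # each round is dominated by its slowest pairwise exchange
        total += max(c[perm[j], perm[(j + step) % n]] for j in range(n // 2))
        step *= 2
    return total
```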
### IV-B Probing for Pairwise Distance
We need to determine values for $c_{i,j}(S)$ with end-to-end measurements. In
this work, we use a latency-centric view for the cost component. The rationale
behind this stems from the well-known theoretical TCP bandwidth model of
$BW=O(\frac{MSS}{RTT\sqrt{p}})$ [32]: given constant drop rate $p$ and window
$MSS$, higher latency induces lower bandwidth in TCP streams. This
conveniently lets us approximate costs by only probing for latency. We adopt
the probing pipeline used in PLink, which focuses on discovering physical
locality with an in-house DPDK-based echo tool, leveraging the network enhancements provided by the clouds [34, 12]. Each pair of nodes receives a total of $10k$ probes, sent sequentially and bidirectionally. To derive an accurate reading, we take the 10th-percentile RTT to filter out interference during probing.
Each probe is a UDP packet with a 32-bit payload that encodes sequence number
and round id for fault tolerance. When DPDK cannot be used, we use fping, an ICMP Echo-based latency probing tool. For each entry in $c$, we update
$c_{i,j}\leftarrow MAX(c_{i,j},c_{j,i})$ to make it symmetric.
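A minimal sketch of this reduction step, assuming `rtt_samples[i][j]` holds the raw probe RTTs from node $i$ to node $j$ (an assumed data layout):

```python
import numpy as np

# Sketch of reducing raw probe samples to the cost matrix c: take the
# 10th-percentile RTT per pair to filter interference, then symmetrize
# with an elementwise max, as described in the text.
def build_cost_matrix(rtt_samples):
    n = len(rtt_samples)
    c = np.array([[np.percentile(rtt_samples[i][j], 10) if i != j else 0.0
                   for j in range(n)] for i in range(n)])
    return np.maximum(c, c.T)  # c_{i,j} <- MAX(c_{i,j}, c_{j,i})
```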
### IV-C Minimizing the Cost Model
We parameterize the cost model with values of probed $c$. To derive a rank
ordering that minimizes $\mathbb{C_{O}}$, we perform the following
transformation: let set of variables $\mathbb{R}$ defined as
$r_{i},i\in[0,N-1]$ be a permutation of $[0,N-1]$ to be solved, and we replace
each $c_{i,j}$ with $c_{r_{i},r_{j}}$. We can then establish a bijection from
the original rank ordering to the desired order $r_{i}\leftrightarrow i$ once
$r_{i}$s are solved. We flatten $c_{i,j}\leftrightarrow c^{\prime}_{iN+j}$ to
use theory of arrays to allow direct solving with conventional optimizing SMT
solvers such as Z3 [18, 22].
Unfortunately, we find solvers inefficient, perhaps due to the non-convex, non-linear nature of the objective function and the large search space ($N!$). Thus, we adopt a two-stage solving process. The first step employs a range of
stochastic search techniques such as simulated annealing [14], with a few
standard heuristics (e.g., permuting a random sub-array, permuting random
pairs) for obtaining neighboring states and a timeout. When the search returns
with an initial result $C_{0}$, we generate an additional SMT constraint
$\mathbb{C_{O}}<C_{0}$ to better guide pruning for solvers. We let the solver
continue to run for a few minutes, and we either find a better solution or
will use $C_{0}$ as the final value. The end-product of this process is a
rearranged list of VMs.
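The first (stochastic search) stage could look like the following simulated-annealing sketch over rank permutations, using random pair swaps as the neighbor heuristic and any cost model from §IV-A (e.g., `ring_cost` above) as the objective; the cooling schedule and iteration count are placeholders. The returned cost can then serve as the initial $C_{0}$ encoded into the additional SMT constraint described above.

```python
import math
import random

# Sketch of the stochastic-search stage: simulated annealing over rank
# permutations with random pair swaps as the neighbor heuristic; any cost
# model from Sec. IV-A serves as the objective. Temperatures and the
# iteration count are placeholders.
def anneal(cost_fn, c, n, iters=100_000, t0=1.0, t1=1e-3):
    perm = list(range(n))
    cur_cost = cost_fn(c, perm)
    best, best_cost = perm[:], cur_cost
    for k in range(iters):
        t = t0 * (t1 / t0) ** (k / iters)        # geometric cooling
        i, j = random.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]      # propose: swap a random pair
        new_cost = cost_fn(c, perm)
        if new_cost < cur_cost or random.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost
            if cur_cost < best_cost:
                best, best_cost = perm[:], cur_cost
        else:
            perm[i], perm[j] = perm[j], perm[i]  # reject: revert the swap
    return best, best_cost
```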
## V Preliminary Evaluation
We evaluate Collectives with a series of microbenchmarks from various
communication backends and real-world applications that use collectives. We
represent speedup by comparing the performance we get from the best rank order
and the worst rank order. We avoid comparing with the original rank order
because it is random and unstable (Figure 1).
### V-A Experimental Setup
Our experiments are conducted on two public clouds, Azure and EC2. We enable
network acceleration on both clouds and set TCP congestion control protocol to
DCTCP. We include microbenchmarks that exercise the ring, halving doubling, double binary tree, and BCube algorithms. All experiments run on Ubuntu 19. We focus our evaluation on one of the most important tasks in collectives, allreduce, given its popularity.
### V-B Prediction Accuracy of Cost Model
The goal of the cost model is not to predict the actual performance but rather to preserve the relative order of performance, i.e., $p_{pred}(\mathbb{R}_{1})<p_{pred}(\mathbb{R}_{2})\implies p_{real}(\mathbb{R}_{1})<p_{real}(\mathbb{R}_{2})$ should hold true for as many pairs $(\mathbb{R}_{i},\mathbb{R}_{j})$ as possible. We demonstrate this for ring-based collectives by generating 10 different rank orders, with the $i$-th order approximately corresponding to the $10i$-th percentile in the range of costs found by the solver.
running OSU Benchmark on 64 F16 nodes on Azure and 64 C5 nodes on EC2. We then
compute the Spearman [10] correlation coefficient between the predicted performance and the actual performance for each setup (Table I). We found that the predicted and actual collectives performance exhibit strong correlation.
Setup | Azure | EC2
---|---|---
Gloo Ring 100MB | 0.58 | 0.78
OpenMPI Ring 100MB | 0.81 | 0.94
Table I: Spearman correlation coefficient between predicted performance from
cost model and actual performance.
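The rank-preservation check itself is a one-liner with SciPy; the arrays below are placeholders, not the paper's measurements.

```python
from scipy.stats import spearmanr

# Sketch of the rank-preservation check: Spearman correlation between
# cost-model predictions and measured runtimes for the sampled rank orders.
# The values below are placeholders, not the paper's data.
predicted = [1.0, 1.4, 1.9, 2.2, 2.8, 3.1, 3.5, 4.0, 4.4, 5.0]
measured = [0.35, 0.42, 0.55, 0.50, 0.71, 0.80, 0.77, 0.95, 1.02, 1.10]
rho, pval = spearmanr(predicted, measured)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```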
### V-C Microbenchmark Performance
We evaluate Collectives’s efficacy with microbenchmarks of the algorithms introduced in §II. We report the mean speedup over 20 iterations. We run all
benchmarks with 512 F16 nodes on Azure, except for NCCL, which runs on 64
P3.8xLarge GPU nodes on EC2. Specifically, we set $B=4$ for BCube; for NCCL,
we use a single binary tree reduction for small buffers and a ring for large
buffers. In all benchmarks, we reduce a buffer of 100MB, except for Nvidia
NCCL, where we reduce a small buffer of 4B to trigger the tree algorithm.
Figure 4: Speedups achieved by just using the rank ordering produced by our
tool in various algorithms across multiple distributed communication backends
at a large scale.
Figure 4 shows a summary of speedups achieved using Collectives on these
benchmarks with the ring-family algorithms benefiting the most, up to 3.7x. We
speculate the reason for the effectiveness is that they have a much wider
performance distribution, as each permutation of the order can potentially
generate a different performance (cost of each hop is on the critical path);
they also have simpler cost model, allowing the solvers to quickly navigate
the objective landscape. On the other hand, halving doubling, BCube, and tree
algorithms have complex objectives – sum of maximums, resulting in a narrower
performance distribution because mutation of the ordering may not change the
cost at all if the mutation does not cause the critical path to change.
### V-D End-to-end Performance Impact on Real-world Applications
Distributed Gradient Boosted Decision Tree. We evaluate Collectives’s impact
on LightGBM [27], a gradient boosted decision tree training system. We use
data parallelism to run lambdarank with metric ndcg. Communication-wise, this
workload runs two tasks: allreduce and reducescatter, and they are called
sequentially in each split of the iteration. At our scale of 512 nodes,
LightGBM automatically chooses to use halving and doubling for both
reducescatter and allreduce. We use a dataset that represents an actual
workload in our commercial setting with 5K columns and a total size of 10GB
for each node. We train 1K trees, each with 120 leaves. We exclude the time it takes to load data from disk to memory and report the average speedup over 1000 iterations. Collectives-generated rank orders speed up training by 1.3x.
Distributed Deep Neural Network. We show Collectives’s effectiveness on
distributed training of DNNs with Caffe2/Pytorch, on 64 EC2 p3.8xLarge nodes
with data parallelism and a batch size of 64/GPU. We train AlexNet on the
ImageNet dataset. Since our Collectives does not change computation and only
improves communication efficiency of the allreduce operation at iteration
boundary, we report speedup of training in terms of images/second, averaged
across 50 iterations. We use the ring chunked algorithm, which achieves the
best baseline performance, and the Collectives-optimized rank ordering of VM
nodes achieves a speedup of 1.2x.
## VI Discussion and Limitations
Generalizability Study. Due to a lack of resources, we only evaluated Collectives on a few VM allocations on each cloud. Further investigation is needed to understand how Collectives performs across different physical allocations and how well its rank orders react to cloud variance.
Limitations to Cost Modeling. Our use of a latency-centric cost function is reasonable and effective, but not perfect: different bandwidths can correspond to the same latency depending on link conditions. This may cause suboptimal solutions when a transfer is bandwidth-bound. Further study is needed to determine how to incorporate bandwidth-related cost into the cost function without incurring high probing overhead.
Complements to Cost Models. While building accurate cost models is difficult,
we can dynamically adjust the rank ordering with help from tools such as TCP_INFO [31] that monitor link properties such as latency and bandwidth. Since we
know the communication pattern, we can determine the critical path and find
bottleneck transfer between node $n_{i}$ and $n_{j}$ in the system. From
there, we can find a $n_{k}$ to replace $n_{i}$ such that the replacement
results in a minimized cost objective.
Adapting to Dynamic Traffic. The above mechanism can be applied to adapt to
dynamic network load in the cloud environment. The framework, however, must
support the dynamic change of node ranks, which is possible and should come
with a small cost as a full mesh of connections can be established beforehand
among all nodes.
## References
* [1] “Extend mxnet distributed training by mpi allreduce - mxnet - apache software foundation,” https://cwiki.apache.org/confluence/display/MXNET/Extend+MXNet+Distributed+Training+by+MPI+AllReduce, (Accessed on 01/15/2020).
* [2] “Github - facebookincubator/gloo: Collective communications library with various primitives for multi-machine training.” https://github.com/facebookincubator/gloo, (Accessed on 01/24/2020).
* [3] “gloo/algorithms.md at master · facebookincubator/gloo,” https://github.com/facebookincubator/gloo/blob/master/docs/algorithms.md, (Accessed on 01/26/2020).
* [4] “Home - dpdk,” https://www.dpdk.org/, (Accessed on 01/24/2020).
* [5] “Massively scale your deep learning training with nccl 2.4 — nvidia developer blog,” https://devblogs.nvidia.com/massively-scale-deep-learning-training-nccl-2-4/, (Accessed on 01/25/2020).
* [6] “Nvidia collective communications library (nccl) — nvidia developer,” https://developer.nvidia.com/nccl, (Accessed on 01/19/2020).
* [7] http://pages.tacc.utexas.edu/~eijkhout/pcse/html/mpi-collective.html, (Accessed on 01/22/2020).
* [8] “Spark-mpi: Approaching the fifth paradigm - databricks,” https://databricks.com/session/spark-mpi-approaching-the-fifth-paradigm, (Accessed on 01/15/2020).
* [9] “Writing distributed applications with pytorch — pytorch tutorials 1.3.1 documentation,” https://pytorch.org/tutorials/intermediate/dist_tuto.html, (Accessed on 01/15/2020).
* [10] _Spearman Rank Correlation Coefficient_. New York, NY: Springer New York, 2008, pp. 502–505. [Online]. Available: https://doi.org/10.1007/978-0-387-32833-1_379
* [11] Amazon, “Amazon ec2 f1 instances,” https://aws.amazon.com/ec2/instance-types/f1/, (Accessed on 03/04/2019).
* [12] Amazon, “Enable and configure enhanced networking for ec2 instances,” https://aws.amazon.com/premiumsupport/knowledge-center/enable-configure-enhanced-networking/, (Accessed on 01/06/2019).
* [13] V. Bala, J. Bruck, R. Cypher, P. Elustondo, A. Ho, C.-T. Ho, S. Kipnis, and M. Snir, “Ccl: A portable and tunable collective communication library for scalable parallel computers,” _IEEE Transactions on Parallel and Distributed Systems_ , vol. 6, no. 2, pp. 154–164, 1995.
* [14] D. Bertsimas and J. Tsitsiklis, “Simulated annealing,” _Statistical Science_ , vol. 8, 02 1993.
* [15] K. Bilal, S. U. Khan, J. Kolodziej, L. Zhang, K. Hayat, S. A. Madani, N. Min-Allah, L. Wang, and D. Chen, “A comparative study of data center network architectures,” in _ECMS_ , 2012.
* [16] E. K. Blum, X. Wang, and P. Leung, “Architectures and message-passing algorithms for cluster computing: Design and performance,” _Parallel Computing_ , vol. 26, no. 2-3, pp. 313–332, 2000.
* [17] D. Bureddy, H. Wang, A. Venkatesh, S. Potluri, and D. K. Panda, “Omb-gpu: A micro-benchmark suite for evaluating mpi libraries on gpu clusters,” in _Recent Advances in the Message Passing Interface_ , J. L. Träff, S. Benkner, and J. J. Dongarra, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 110–120.
* [18] L. De Moura and N. Bjørner, “Z3: An efficient smt solver,” in _International conference on Tools and Algorithms for the Construction and Analysis of Systems_. Springer, 2008, pp. 337–340.
* [19] A. Faraj, P. Patarasuk, and X. Yuan, “Bandwidth efficient all-to-all broadcast on switched clusters,” in _2005 IEEE International Conference on Cluster Computing_ , Sep. 2005, pp. 1–10.
* [20] A. Faraj and X. Yuan, “Message scheduling for all-to-all personalized communication on ethernet switched clusters,” in _19th IEEE International Parallel and Distributed Processing Symposium_ , April 2005, pp. 10 pp.–.
* [21] J. Fowers, K. Ovtcharov, M. Papamichael, T. Massengill, M. Liu, D. Lo, S. Alkalay, M. Haselman, L. Adams, M. Ghandi _et al._ , “A configurable cloud-scale dnn processor for real-time ai,” in _Proceedings of the 45th Annual International Symposium on Computer Architecture_. IEEE Press, 2018, pp. 1–14.
* [22] Google, “Or-tools — google developers,” https://developers.google.com/optimization, (Accessed on 01/26/2020).
* [23] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He, “Accurate, large minibatch sgd: Training imagenet in 1 hour,” 2017.
* [24] A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, , P. Lahiri, D. Maltz, and and, “Vl2: A scalable and flexible data center network.” Association for Computing Machinery, Inc., August 2009, recognized as one of ”the most important research results published in CS in recent years”. [Online]. Available: https://www.microsoft.com/en-us/research/publication/vl2-a-scalable-and-flexible-data-center-network/
* [25] F. N. Iandola, M. W. Moskewicz, K. Ashraf, and K. Keutzer, “Firecaffe: near-linear acceleration of deep neural network training on compute clusters,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 2592–2600.
* [26] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P.-l. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, and D. H. Yoon, “In-datacenter performance analysis of a tensor processing unit,” in _Proceedings of the 44th Annual International Symposium on Computer Architecture_ , ser. ISCA ’17. New York, NY, USA: ACM, 2017, pp. 1–12. [Online]. Available: http://doi.acm.org/10.1145/3079856.3080246
* [27] G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu, “Lightgbm: A highly efficient gradient boosting decision tree,” in _NIPS_ , 2017.
* [28] M. Liu, L. Luo, J. Nelson, L. Ceze, A. Krishnamurthy, and K. Atreya, “IncBricks: Toward in-network computation with an in-network cache,” _SIGOPS Oper. Syst. Rev._ , vol. 51, no. 2, pp. 795–809, Apr. 2017. [Online]. Available: http://doi.acm.org/10.1145/3093315.3037731
* [29] L. Luo, J. Nelson, L. Ceze, A. Phanishayee, and A. Krishnamurthy, “Parameter hub: A rack-scale parameter server for distributed deep neural network training,” in _Proceedings of the ACM Symposium on Cloud Computing_ , ser. SoCC ’18. New York, NY, USA: Association for Computing Machinery, 2018, p. 41–54. [Online]. Available: https://doi.org/10.1145/3267809.3267840
* [30] L. Luo, P. West, J. Nelson, A. Krishnamurthy, and L. Ceze, “Plink: Discovering and exploiting locality for accelerated distributed training on the public cloud,” in _MLSys 2020_ , 2020.
* [31] M. Mathis, J. Heffner, and R. Reddy, “Web100: extended tcp instrumentation for research, education and diagnosis,” _ACM SIGCOMM Computer Communication Review_ , vol. 33, no. 3, pp. 69–79, 2003.
* [32] M. Mathis, J. Semke, J. Mahdavi, and T. Ott, “The macroscopic behavior of the tcp congestion avoidance algorithm,” _ACM SIGCOMM Computer Communication Review_ , vol. 27, no. 3, pp. 67–82, 1997.
* [33] Q. Meng, G. Ke, T. Wang, W. Chen, Q. Ye, Z.-M. Ma, and T.-Y. Liu, “A communication-efficient parallel algorithm for decision tree,” in _Advances in Neural Information Processing Systems 29_ , D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, Eds. Curran Associates, Inc., 2016, pp. 1279–1287. [Online]. Available: http://papers.nips.cc/paper/6381-a-communication-efficient-parallel-algorithm-for-decision-tree.pdf
* [34] Microsoft, “Create an azure virtual machine with accelerated networking — microsoft docs,” https://docs.microsoft.com/en-us/azure/virtual-network/create-vm-accelerated-networking-cli, (Accessed on 01/06/2019).
* [35] Microsoft, “Turing-nlg: A 17-billion-parameter language model by microsoft - microsoft research,” https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/, (Accessed on 05/25/2020).
* [36] R. N. Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, “Portland: a scalable fault-tolerant layer 2 data center network fabric,” in _SIGCOMM_ , 2009.
* [37] OpenAI, “Ai and compute,” https://openai.com/blog/ai-and-compute/, (Accessed on 05/25/2020).
* [38] P. Patarasuk, A. Faraj, and Xin Yuan, “Pipelined broadcast on ethernet switched clusters,” in _Proceedings 20th IEEE International Parallel Distributed Processing Symposium_ , April 2006, pp. 10 pp.–.
* [39] P. Patarasuk and X. Yuan, “Bandwidth efficient all-reduce operation on tree topologies,” in _2007 IEEE International Parallel and Distributed Processing Symposium_ , March 2007, pp. 1–8.
* [40] P. Patarasuk and X. Yuan, “Bandwidth optimal all-reduce algorithms for clusters of workstations,” _Journal of Parallel and Distributed Computing_ , vol. 69, no. 2, pp. 117–124, 2009.
* [41] R. Rabenseifner, “Optimization of collective reduction operations,” 06 2004, pp. 1–9.
* [42] S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He, “Zero: Memory optimizations toward training trillion parameter models,” 2019.
* [43] J. Rehr, F. Vila, J. Gardner, L. Svec, and M. Prange, “Scientific computing in the cloud,” _Computing in Science and Engineering_ , vol. 12, pp. 34 – 43, 07 2010.
* [44] A. Roy, H. Zeng, J. Bagga, G. Porter, and A. C. Snoeren, “Inside the social network’s (datacenter) network,” _Computer Communication Review_ , vol. 45, pp. 123–137, 2015.
* [45] P. D. Sack, “Scalable collective message-passing algorithms,” Ph.D. dissertation, Champaign, IL, USA, 2011, aAI3503864.
* [46] P. Sanders, J. Speck, and J. L. Träff, “Two-tree algorithms for full bandwidth broadcast, reduction and scan,” _Parallel Comput._ , vol. 35, no. 12, p. 581–594, Dec. 2009. [Online]. Available: https://doi.org/10.1016/j.parco.2009.09.001
* [47] R. Thakur, R. Rabenseifner, and W. Gropp, “Optimization of collective communication operations in mpich,” _Int. J. High Perform. Comput. Appl._ , vol. 19, no. 1, p. 49–66, Feb. 2005. [Online]. Available: https://doi.org/10.1177/1094342005051521
* [48] A. Vishnu, C. Siegel, and J. Daily, “Distributed tensorflow with mpi,” 2016.
* [49] D. W. Walker and J. J. Dongarra, “Mpi: a standard message passing interface,” _Supercomputer_ , vol. 12, pp. 56–68, 1996.
* [50] B. Zhang, Y. Ruan, and J. Qiu, “Harp: Collective communication on hadoop,” in _2015 IEEE International Conference on Cloud Engineering_ , 2015, pp. 228–233.
|
# Some aspects of the Bergman and Hardy spaces associated with a class of generalized analytic functions
Supported by the National Natural Science Foundation of China (No. 12071295).
†Corresponding author.
E-mail:<EMAIL_ADDRESS>(Zh.-K. Li),<EMAIL_ADDRESS>(H.-H. Wei).
Zhongkai Li1 and Haihua Wei†,2
1Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2School of Mathematics and Statistics, Changshu Institute of Technology, Changshu 215500, Jiangsu, China
###### Abstract
For $\lambda\geq 0$, a $C^{2}$ function $f$ defined on the unit disk
${{\mathbb{D}}}$ is said to be $\lambda$-analytic if $D_{\bar{z}}f=0$, where
$D_{\bar{z}}$ is the (complex) Dunkl operator given by
$D_{\bar{z}}f=\partial_{\bar{z}}f-\lambda(f(z)-f(\bar{z}))/(z-\bar{z})$. The
aim of the paper is to study several problems on the associated Bergman spaces
$A^{p}_{\lambda}({{\mathbb{D}}})$ and Hardy spaces
$H_{\lambda}^{p}({{\mathbb{D}}})$ for $p\geq 2\lambda/(2\lambda+1)$, such as
boundedness of the Bergman projection, growth of functions, density,
completeness, and the dual spaces of $A^{p}_{\lambda}({{\mathbb{D}}})$ and
$H_{\lambda}^{p}({{\mathbb{D}}})$, and characterization and interpolation of
$A^{p}_{\lambda}({{\mathbb{D}}})$.
2020 MS Classification: 30H20, 30H10 (Primary), 30G30, 42A45 (Secondary)
Key Words and Phrases: Bergman space; Hardy space; Bergman projection;
$\lambda$-analytic function
## 1 Introduction
For $\lambda\geq 0$, the (complex) Dunkl operators $D_{z}$ and $D_{\bar{z}}$
on the complex plane ${\mathbb{C}}$, as substitutes of $\partial_{z}$ and
$\partial_{\bar{z}}$, are defined by
$\displaystyle D_{z}f(z)$
$\displaystyle=\partial_{z}f+\lambda\frac{f(z)-f(\bar{z})}{z-\bar{z}},$
$\displaystyle D_{\bar{z}}f(z)$
$\displaystyle=\partial_{\bar{z}}f-\lambda\frac{f(z)-f(\bar{z})}{z-\bar{z}}.$
The associated Laplacian, called the $\lambda$-Laplacian, is given by
$\Delta_{\lambda}=4D_{z}D_{\bar{z}}=4D_{\bar{z}}D_{z}$, which can be written
explicitly as
$\displaystyle\Delta_{\lambda}f=\frac{\partial^{2}f}{\partial
x^{2}}+\frac{\partial^{2}f}{\partial y^{2}}+\frac{2\lambda}{y}\frac{\partial
f}{\partial y}-\frac{\lambda}{y^{2}}[f(z)-f(\bar{z})],\qquad z=x+iy.$
A $C^{2}$ function $f$ defined on the unit disk ${\mathbb{D}}$ is said to be
$\lambda$-analytic if $D_{\bar{z}}f=0$, and $\lambda$-harmonic if
$\Delta_{\lambda}f=0$.
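For example, when $\lambda=0$ the difference terms vanish, so that
$\displaystyle D_{z}=\partial_{z},\qquad D_{\bar{z}}=\partial_{\bar{z}},\qquad\Delta_{0}=\partial_{x}^{2}+\partial_{y}^{2},$
and $0$-analytic functions are precisely the ordinary analytic functions, while $0$-harmonic functions are the ordinary harmonic ones.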
The measure on the unit disk ${\mathbb{D}}$ associated with the operators
$D_{z}$ and $D_{\bar{z}}$ is
$\displaystyle d\sigma_{\lambda}(z)=c_{\lambda}|y|^{2\lambda}dxdy,\qquad
z=x+iy,$
where $c_{\lambda}=\Gamma(\lambda+2)/\Gamma(\lambda+1/2)\Gamma(1/2)$ so that
$\int_{{\mathbb{D}}}d\sigma_{\lambda}(z)=1$. For $0<p<\infty$, we denote by
$L^{p}({\mathbb{D}};d\sigma_{\lambda})$, or simply by
$L_{\lambda}^{p}({\mathbb{D}})$, the space of measurable functions $f$ on
${\mathbb{D}}$ satisfying
$\|f\|_{L_{\lambda}^{p}({\mathbb{D}})}:=\left(\int_{{\mathbb{D}}}|f(z)|^{p}d\sigma_{\lambda}(z)\right)^{1/p}<\infty;$
and $L_{\lambda}^{\infty}({\mathbb{D}})$, or simply
$L^{\infty}({\mathbb{D}})$, is the collection of all essentially bounded
measurable functions on ${\mathbb{D}}$ with norm
$\|f\|_{L^{\infty}({\mathbb{D}})}={\rm esssup}_{z\in{\mathbb{D}}}|f(z)|$. The
associated Bergman space $A^{p}_{\lambda}({\mathbb{D}})$, named the
$\lambda$-Bergman space, consists of those elements in
$L_{\lambda}^{p}({\mathbb{D}})$ that are $\lambda$-analytic in ${\mathbb{D}}$,
and the norm of $f\in A^{p}_{\lambda}({\mathbb{D}})$ is written as
$\|f\|_{A_{\lambda}^{p}}$ instead of $\|f\|_{L_{\lambda}^{p}({\mathbb{D}})}$.
The associated measure on the circle $\partial{\mathbb{D}}\simeq[-\pi,\pi]$ is
$\displaystyle
dm_{\lambda}(\theta)=\tilde{c}_{\lambda}|\sin\theta|^{2\lambda}d\theta,\ \ \ \
\ \ \tilde{c}_{\lambda}=c_{\lambda}/(2\lambda+2).$
As usual, the $p$-means of a function $f$ defined on ${\mathbb{D}}$, for
$0<p<\infty$, are given by
$\displaystyle
M_{p}(f;r)=\left\\{\int_{-\pi}^{\pi}|f(re^{i\theta})|^{p}\,dm_{\lambda}(\theta)\right\\}^{1/p},\qquad
0\leq r<1;$
and $M_{\infty}(f;r)=\sup_{\theta}|f(re^{i\theta})|$. The $\lambda$-Hardy
space $H_{\lambda}^{p}({\mathbb{D}})$ is the collection of $\lambda$-analytic
functions on ${\mathbb{D}}$ satisfying
$\|f\|_{H_{\lambda}^{p}}:=\sup_{0\leq r<1}M_{p}(f;r)<\infty.$
Obviously $H_{\lambda}^{\infty}({\mathbb{D}})$ is identical with
$A_{\lambda}^{\infty}({\mathbb{D}})$. Note that for $0<p<1$, $\|\cdot\|_{X}$
with $X=A_{\lambda}^{p}({{\mathbb{D}}})$ or $H_{\lambda}^{p}({{\mathbb{D}}})$ is
not a norm, but $\|f-g\|_{X}^{p}$ defines a metric.
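This follows from the pointwise inequality $(a+b)^{p}\leq a^{p}+b^{p}$ for $a,b\geq 0$ and $0<p\leq 1$; integrating it yields the triangle inequality
$\displaystyle\|f-g\|_{X}^{p}\leq\|f-h\|_{X}^{p}+\|h-g\|_{X}^{p}.$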
It was proved in [11] that $f$ is $\lambda$-analytic in ${\mathbb{D}}$ if and
only if $f$ has the series representation
$\displaystyle
f(z)=\sum_{n=0}^{\infty}c_{n}\phi_{n}^{\lambda}(z),\qquad|z|<1,$ (1)
where
$\displaystyle\phi_{n}^{\lambda}(z)=\epsilon_{n}\sum_{j=0}^{n}\frac{(\lambda)_{j}(\lambda+1)_{n-j}}{j!(n-j)!}\bar{z}^{j}z^{n-j},\qquad
n\in{\mathbb{N}}_{0},$
and $\epsilon_{n}=\sqrt{n!/(2\lambda+1)_{n}}$. Here ${\mathbb{N}}_{0}$ denotes
the set of nonnegative integers. For details, see the next section. It is
remarked that $\phi_{0}^{\lambda}(z)\equiv 1$, and for $n\geq 1$,
$\phi_{n}^{0}(z)=z^{n}$.
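As a concrete illustration, a direct computation from the formula above gives, for $n=1$,
$\displaystyle\phi_{1}^{\lambda}(z)=\epsilon_{1}\big[(\lambda+1)z+\lambda\bar{z}\big],\qquad\epsilon_{1}=(2\lambda+1)^{-1/2},$
and since $\phi_{1}^{\lambda}(z)-\phi_{1}^{\lambda}(\bar{z})=\epsilon_{1}(z-\bar{z})$ while $\partial_{\bar{z}}\phi_{1}^{\lambda}=\epsilon_{1}\lambda$, one checks directly that $D_{\bar{z}}\phi_{1}^{\lambda}=\epsilon_{1}\lambda-\lambda\epsilon_{1}=0$.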
We remark that for $\lambda$-analytic functions there is no analog of
Cauchy’s theorem, nor of the Cauchy integral formula. Furthermore,
roughly speaking, products and compositions of $\lambda$-analytic
functions are in general no longer $\lambda$-analytic.
The fundamental theory of the $\lambda$-Hardy spaces
$H_{\lambda}^{p}({\mathbb{D}})$ for $p\geq p_{0}$ was studied in [11], where
$p_{0}=\frac{2\lambda}{2\lambda+1}.$
Such a restriction on the exponent $p$ is due to the fact that $f\in
H_{\lambda}^{p}({\mathbb{D}})$ for these $p$ has a so-called
$\lambda$-harmonic majorization (cf. [11, Theorem 6.3]). This phenomenon also
occurs in the study of the Hardy spaces associated with the Gegenbauer
expansions in [14], and in the pioneering work [15] about the Hardy spaces on
the half space ${\mathbb{R}}_{+}^{d+1}$ for $d>1$, where the lower bound of
$p$ is $(d-1)/d$.
In this paper we study several basic problems on the $\lambda$-Bergman spaces
$A^{p}_{\lambda}({\mathbb{D}})$, and also on the $\lambda$-Hardy spaces
$H_{\lambda}^{p}({\mathbb{D}})$, such as boundedness of the Bergman
projection, growth of functions, density, completeness, and the dual spaces of
$A^{p}_{\lambda}({\mathbb{D}})$ and $H_{\lambda}^{p}({\mathbb{D}})$, and also
characterization and interpolation of $A^{p}_{\lambda}({\mathbb{D}})$.
There are rich theories of the Hardy spaces, the Bergman spaces, and other
spaces of (usual) analytic functions on the unit disk and even more general
domains in the plane or in higher-dimensional complex spaces. See [5, 8, 10,
21] for the Hardy spaces, and [6, 9, 19, 20] for the Bergman spaces.
The paper is organized as follows. In Section 2 we recall some basic knowledge
about $\lambda$-analytic functions and $\lambda$-harmonic functions on the
disk ${\mathbb{D}}$. The Bergman projection associated to the
$\lambda$-Bergman spaces is introduced in Section 3 and is proved to be
bounded from $L_{\lambda}^{p}({\mathbb{D}})$ into
$A^{p}_{\lambda}({\mathbb{D}})$ for $1<p<\infty$. In Section 4 we obtain the
growth estimates of the $p$-means $M_{p}(f;r)$ of functions in
$A^{p}_{\lambda}({\mathbb{D}})$ and the point estimates of functions in both
$H_{\lambda}^{p}({\mathbb{D}})$ and $A^{p}_{\lambda}({\mathbb{D}})$. Section 5
is devoted to density, completeness, and duality of the $\lambda$-Hardy and
$\lambda$-Bergman spaces, and Section 6 to a characterization by the operator
$D_{z}$ of the space $A^{p}_{\lambda}({\mathbb{D}})$ for $1\leq p<\infty$, and
also an interpolation theorem of $A^{p}_{\lambda}({\mathbb{D}})$.
For $0<p<\infty$, we denote by $L^{p}(\partial{\mathbb{D}};dm_{\lambda})$, or
simply by $L_{\lambda}^{p}(\partial{\mathbb{D}})$, the space of measurable
functions $f$ on $\partial{\mathbb{D}}$ satisfying
$\|f\|_{L_{\lambda}^{p}(\partial{\mathbb{D}})}:=\left(\int_{-\pi}^{\pi}|f(e^{i\theta})|^{p}dm_{\lambda}(\theta)\right)^{1/p}<\infty$,
and for $p=\infty$,
$L^{\infty}(\partial{\mathbb{D}};dm_{\lambda})=L^{\infty}(\partial{\mathbb{D}})$
as usual, with norm $\|f\|_{L^{\infty}(\partial{\mathbb{D}})}={\rm
esssup}_{\theta}|f(e^{i\theta})|$. In addition,
${\mathfrak{B}}_{\lambda}(\partial{\mathbb{D}})$ denotes the space of Borel
measures $d\nu$ on $\partial{\mathbb{D}}$ for which
$\|d\nu\|_{{\mathfrak{B}_{\lambda}(\partial{\mathbb{D}})}}=\tilde{c}_{\lambda}\int_{-\pi}^{\pi}|\sin\theta|^{2\lambda}|d\nu(\theta)|$
is finite. Throughout the paper, the notation
${\mathcal{X}}\lesssim{\mathcal{Y}}$ or ${\mathcal{Y}}\gtrsim{\mathcal{X}}$
means that ${\mathcal{X}}\leq c{\mathcal{Y}}$ for some positive constant $c$
independent of variables, functions, etc., and
${\mathcal{X}}\asymp{\mathcal{Y}}$ means that both
${\mathcal{X}}\lesssim{\mathcal{Y}}$ and ${\mathcal{Y}}\lesssim{\mathcal{X}}$
hold.
## 2 $\lambda$-analytic and $\lambda$-harmonic functions
Most of the material in this section comes from [11]. The topic is motivated
by C. Dunkl’s work [3], where he built up a framework associated with the
dihedral group $G=D_{k}$ on the disk ${\mathbb{D}}$. Our work here, as well as
that in [11], focuses on the special case $G=D_{1}$, whose only nontrivial
element is the reflection $z\mapsto\overline{z}$, with the aim of developing a
deeper theory of the associated function spaces. We note that C. Dunkl
developed a general theory, now named after him, of analysis associated with
reflection invariance on Euclidean spaces; see [1], [2] and [4] for example.
###### Proposition 2.1
([3, 4]) For $n\geq 1$ and $z=re^{i\theta}$, we have
$\displaystyle\phi_{n}^{\lambda}(z)$
$\displaystyle=\epsilon_{n}r^{n}\left[\frac{n+2\lambda}{2\lambda}P_{n}^{\lambda}(\cos\theta)+i\sin\theta
P_{n-1}^{\lambda+1}(\cos\theta)\right],$
$\displaystyle\bar{z}\overline{\phi_{n-1}^{\lambda}(z)}$
$\displaystyle=\epsilon_{n-1}r^{n}\left[\frac{n}{2\lambda}P_{n}^{\lambda}(\cos\theta)-i\sin\theta
P_{n-1}^{\lambda+1}(\cos\theta)\right],$
where $P_{n}^{\lambda}(t)$, $n\in{\mathbb{N}}_{0}$, are the Gegenbauer
polynomials (cf. [16]), and $P_{-1}^{\lambda+1}=0$. Moreover, the system
$\displaystyle\\{\phi_{n}^{\lambda}(e^{i\theta}):\,\,n\in{\mathbb{N}}_{0}\\}\cup\\{e^{-i\theta}\phi_{n-1}^{\lambda}(e^{-i\theta}):\,\,n\in{\mathbb{N}}\\}$
is an orthonormal basis of the Hilbert space
$L_{\lambda}^{2}(\partial{\mathbb{D}})$.
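Since the system above is an orthonormal basis, every $f\in L_{\lambda}^{2}(\partial{\mathbb{D}})$ satisfies the Parseval identity
$\displaystyle\|f\|_{L_{\lambda}^{2}(\partial{\mathbb{D}})}^{2}=\sum_{n=0}^{\infty}\left|\langle f,\phi_{n}^{\lambda}\rangle\right|^{2}+\sum_{n=1}^{\infty}\left|\langle f,e^{-i\theta}\phi_{n-1}^{\lambda}(e^{-i\theta})\rangle\right|^{2},$
where $\langle\cdot,\cdot\rangle$ denotes the inner product of $L_{\lambda}^{2}(\partial{\mathbb{D}})$.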
It follows from [11, Proposition 2.2] that $\phi_{n}^{\lambda}(z)$
($n\in{\mathbb{N}}_{0}$) is $\lambda$-analytic, and
$\bar{z}\overline{\phi_{n-1}^{\lambda}}(z)$ ($n\in{\mathbb{N}}$) is
$\lambda$-harmonic; moreover,
$\displaystyle
D_{z}\phi_{n}^{\lambda}(z)=\sqrt{n(n+2\lambda)}\phi_{n-1}^{\lambda}(z)\quad\hbox{and}\quad
D_{z}(z\phi_{n-1}^{\lambda}(z))=(n+\lambda)\phi_{n-1}^{\lambda}(z).$ (2)
The function $\phi_{n}^{\lambda}(z)$ ($n\in{\mathbb{N}}_{0}$) has a closed
representation as given in [11, (29)], that is,
$\displaystyle\epsilon_{n}\phi_{n}(z)=2^{2\lambda+1}\tilde{c}_{\lambda-1/2}\int_{0}^{1}(sz+(1-s)\bar{z})^{n}(1-s)^{\lambda-1}s^{\lambda}ds.$
(3)
It is easy to see that
$\displaystyle|\phi_{n}(z)|\leq\epsilon_{n}^{-1}|z|^{n}\asymp(n+1)^{\lambda}|z|^{n}.$
(4)
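Indeed, since $sz+(1-s)\bar{z}$ is a convex combination of $z$ and $\bar{z}$, one has $|sz+(1-s)\bar{z}|\leq|z|$ for $0\leq s\leq 1$, and (3) gives
$\displaystyle|\epsilon_{n}\phi_{n}(z)|\leq|z|^{n}\cdot 2^{2\lambda+1}\tilde{c}_{\lambda-1/2}\int_{0}^{1}(1-s)^{\lambda-1}s^{\lambda}ds=|z|^{n},$
the constant being $1$ by the case $n=0$; moreover $\epsilon_{n}^{-1}=\sqrt{(2\lambda+1)_{n}/n!}\asymp(n+1)^{\lambda}$ by the standard asymptotics of the Pochhammer symbol.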
In what follows, we write $\phi_{n}(z)=\phi_{n}^{\lambda}(z)$ for simplicity.
We say that $f$ is a $\lambda$-analytic polynomial, or a $\lambda$-harmonic
polynomial, if it is a finite linear combination of elements of the system
$\\{\phi_{n}(z):\,n\in{\mathbb{N}}_{0}\\}$, or of the system
$\displaystyle\\{\phi_{n}(z):\,\,n\in{\mathbb{N}}_{0}\\}\cup\\{\bar{z}\overline{\phi_{n-1}(z)}:\,\,n\in{\mathbb{N}}\\}.$
(5)
The $\lambda$-Cauchy kernel $C(z,w)$ and the $\lambda$-Poisson kernel
$P(z,w)$, which reproduce all $\lambda$-analytic polynomials and all
$\lambda$-harmonic polynomials, respectively, with respect to the measure
$dm_{\lambda}$ on the circle $\partial{\mathbb{D}}$, are given by (cf. [3])
$\displaystyle C(z,w)$
$\displaystyle=\sum_{n=0}^{\infty}\phi_{n}(z)\overline{\phi_{n}(w)},$ (6)
$\displaystyle P(z,w)$ $\displaystyle=C(z,w)+\bar{z}wC(w,z).$ (7)
The series defining $C(z,w)$ converges absolutely for $zw\in{\mathbb{D}}$ and
uniformly for $zw$ in a compact subset of ${\mathbb{D}}$, and from [3] (see
[11] also),
$\displaystyle C(z,w)$ $\displaystyle=\frac{1}{1-z\bar{w}}P_{0}(z,w),\qquad
zw\in{\mathbb{D}},$ (8) $\displaystyle P(z,w)$
$\displaystyle=\frac{1-|z|^{2}|w|^{2}}{|1-z\bar{w}|^{2}}P_{0}(z,w),\qquad
zw\in{\mathbb{D}},$
where
$\displaystyle P_{0}(z,w)$
$\displaystyle=\frac{1}{|1-zw|^{2\lambda}}{}_{2}\\!F_{1}\Big{(}{\lambda,\lambda\atop
2\lambda+1};\frac{4({\rm Im}z)({\rm Im}w)}{|1-zw|^{2}}\Big{)}$
$\displaystyle=\frac{1}{|1-z\bar{w}|^{2\lambda}}{}_{2}\\!F_{1}\Big{(}{\lambda,\lambda+1\atop
2\lambda+1};-\frac{4({\rm Im}z)({\rm Im}w)}{|1-z\bar{w}|^{2}}\Big{)},$ (9)
and ${}_{2}\\!F_{1}[a,b;c;t]$ is the Gauss hypergeometric function.
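In particular, for $\lambda=0$ one has ${}_{2}\\!F_{1}(0,0;1;t)\equiv 1$, so $P_{0}(z,w)\equiv 1$ and (8) reduces to the classical kernels
$\displaystyle C(z,w)=\frac{1}{1-z\bar{w}},\qquad P(z,e^{i\varphi})=\frac{1-|z|^{2}}{|1-ze^{-i\varphi}|^{2}},$
the latter being the ordinary Poisson kernel of the disk.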
A $\lambda$-harmonic function on ${\mathbb{D}}$ has a series representation in
terms of the system (5). Precisely by [11, Theorem 3.1], if $f$ is a
$\lambda$-harmonic function on ${\mathbb{D}}$, then there are two sequences
$\\{c_{n}\\}$ and $\\{\tilde{c}_{n}\\}$ of complex numbers, such that
$\displaystyle
f(z)=\sum_{n=0}^{\infty}c_{n}\phi_{n}(z)+\sum_{n=1}^{\infty}\tilde{c}_{n}\bar{z}\overline{\phi_{n-1}(z)}$
for $z\in{\mathbb{D}}$; furthermore, the two sequences above are given by
$\displaystyle c_{n}$ $\displaystyle=\lim_{r\rightarrow
1-}\int_{-\pi}^{\pi}f(re^{i\varphi})\overline{\phi_{n}(e^{i\varphi})}dm_{\lambda}(\varphi),$
$\displaystyle\tilde{c}_{n}$ $\displaystyle=\lim_{r\rightarrow
1-}\int_{-\pi}^{\pi}f(re^{i\varphi})e^{i\varphi}\phi_{n-1}(e^{i\varphi})dm_{\lambda}(\varphi),$
and satisfy the condition that, for each real $\gamma$, the series
$\sum_{n\geq 1}n^{\gamma}(|c_{n}|+|\tilde{c}_{n}|)r^{n}$ converges uniformly
for $r$ in every closed subset of $[0,1)$.
As stated in the first section, a $\lambda$-analytic function $f$ on
${\mathbb{D}}$ has a series representation in terms of the system
$\\{\phi_{n}(z):\,n\in{\mathbb{N}}_{0}\\}$, as in (1); moreover, $f$ can
also be characterized by a Cauchy-Riemann type system. We summarize these
conclusions as follows.
###### Proposition 2.2
([11, Theorem 3.7]) For a $C^{2}$ function $f=u+iv$ defined on ${\mathbb{D}}$,
the following statements are equivalent:
(i) $f$ is $\lambda$-analytic;
(ii) $u$ and $v$ satisfy the generalized Cauchy-Riemann system
$\displaystyle\left\\{\partial_{x}u=D_{y}v,\atop
D_{y}u=-\partial_{x}v,\right.$
where
$\displaystyle
D_{y}u(x,y)=\partial_{y}u(x,y)+\frac{\lambda}{y}\left[u(x,y)-u(x,-y)\right].$
(iii) $f$ has the series representation
$\displaystyle f(z)=\sum_{n=0}^{\infty}c_{n}\phi_{n}(z),\qquad|z|<1,$ (10)
where
$c_{n}=\lim_{r\rightarrow
1-}\int_{-\pi}^{\pi}f(re^{i\varphi})\overline{\phi_{n}(e^{i\varphi})}dm_{\lambda}(\varphi).$
Moreover, for each real $\gamma$, the series $\sum_{n\geq
1}n^{\gamma}|c_{n}|r^{n}$ converges uniformly for $r$ in every closed subset
of $[0,1)$.
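When $\lambda=0$, the operator $D_{y}$ reduces to $\partial_{y}$ and the system in (ii) becomes the classical Cauchy-Riemann system
$\displaystyle\partial_{x}u=\partial_{y}v,\qquad\partial_{y}u=-\partial_{x}v.$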
The $\lambda$-Poisson integral of $f\in L_{\lambda}^{1}(\partial{\mathbb{D}})$
is defined by
$\displaystyle
P(f;z)=\int_{-\pi}^{\pi}f(e^{i\varphi})P(z,e^{i\varphi})dm_{\lambda}(\varphi),\qquad
z=re^{i\theta}\in{\mathbb{D}},$ (11)
and that of a measure $d\nu\in{\mathfrak{B}}_{\lambda}(\partial{\mathbb{D}})$
by
$\displaystyle
P(d\nu;z)=\tilde{c}_{\lambda}\int_{-\pi}^{\pi}P(z,e^{i\varphi})|\sin\varphi|^{2\lambda}d\nu(\varphi),\qquad
z=re^{i\theta}\in{\mathbb{D}}.$ (12)
###### Proposition 2.3
([11, Propositions 2.4 and 2.5]) Let $f\in
L_{\lambda}^{1}(\partial{\mathbb{D}})$. Then
(i) the $\lambda$-Poisson integral $u(x,y)=P(f;z)$ ($z=x+iy$) is
$\lambda$-harmonic in ${\mathbb{D}}$;
(ii) writing $P_{r}(f;\theta)=P(f;re^{i\theta})$, we have the “semi-group”
property
$P_{s}(P_{r}f;\theta)=P_{s\,r}(f;\theta),\qquad 0\leq s,r<1;$
(iii) $P(f;z)\geq 0$ if $f\geq 0$;
(iv) for $f\in X=L_{\lambda}^{p}(\partial{\mathbb{D}})$ ($1\leq p\leq\infty$),
or $C(\partial{\mathbb{D}})$, $\|P_{r}(f;\cdot)\|_{X}\leq\|f\|_{X}$;
(v) for $f\in X=L_{\lambda}^{p}(\partial{\mathbb{D}})$ ($1\leq p<\infty$), or
$C(\partial{\mathbb{D}})$, $\lim_{r\rightarrow 1-}\|P_{r}(f;\cdot)-f\|_{X}=0$;
(vi) the conclusions in (i)-(iii) are true also for $P_{r}(d\nu;\cdot)$ in
place of $P_{r}(f;\cdot)$, if
$d\nu\in{\mathfrak{B}}_{\lambda}(\partial{\mathbb{D}})$; moreover,
$\|P_{r}(d\nu;\cdot)\|_{L_{\lambda}^{1}(\partial{\mathbb{D}})}\leq\|d\nu\|_{\mathfrak{B}_{\lambda}(\partial{\mathbb{D}})}$.
The following theorem asserts the existence of boundary values of functions in
the $\lambda$-Hardy spaces.
###### Theorem 2.4
([11, Theorem 6.6]) Let $p\geq p_{0}$ and $f\in
H_{\lambda}^{p}({\mathbb{D}})$. Then for almost every $\theta\in[-\pi,\pi]$,
$\lim f(re^{i\varphi})=f(e^{i\theta})$ exists as $re^{i\varphi}$ approaches
the point $e^{i\theta}$ nontangentially, and if $p_{0}<p<\infty$, then
$\displaystyle\lim_{r\rightarrow
1-}\int_{-\pi}^{\pi}|f(re^{i\theta})-f(e^{i\theta})|^{p}dm_{\lambda}(\theta)=0$
(13)
and
$\|f\|_{H^{p}_{\lambda}}\asymp\left(\int_{-\pi}^{\pi}|f(e^{i\theta})|^{p}dm_{\lambda}(\theta)\right)^{1/p}$.
By [11, Theorems 6.6 and 6.8], we have
###### Theorem 2.5
Let $1\leq p\leq\infty$. If $f\in H_{\lambda}^{p}({\mathbb{D}})$, then its
boundary value $f(e^{i\theta})$ is in $L_{\lambda}^{p}(\partial{\mathbb{D}})$,
and $f$ can be recovered from $f(e^{i\theta})$ by the $\lambda$-Poisson
integral, namely,
$\displaystyle
f(z)=\int_{-\pi}^{\pi}f(e^{i\varphi})P(z,e^{i\varphi})dm_{\lambda}(\varphi),\qquad
z\in{\mathbb{D}}.$ (14)
Moreover,
$\|f\|_{H_{\lambda}^{p}}=\|f\|_{L_{\lambda}^{p}(\partial{\mathbb{D}})}$, and
$f(e^{i\theta})$ satisfies the condition
$\displaystyle\int_{-\pi}^{\pi}f(e^{i\varphi})e^{i\varphi}\phi_{n-1}(e^{i\varphi})dm_{\lambda}(\varphi)=0,\qquad
n=1,2,\dots,$ (15)
and if $f$ has the expansion $f(z)=\sum_{n=0}^{\infty}c_{n}\phi_{n}(z)$, then
$\displaystyle
c_{n}=\int_{-\pi}^{\pi}f(e^{i\varphi})\overline{\phi_{n}(e^{i\varphi})}dm_{\lambda}(\varphi),\qquad
n=0,1,\cdots.$ (16)
Conversely, the $\lambda$-Poisson integral of a function $f$ in
$L_{\lambda}^{p}(\partial{\mathbb{D}})$ satisfying (15) is an element in
$H_{\lambda}^{p}({\mathbb{D}})$.
For the Hardy spaces on the upper half plane
${\mathbb{R}}_{+}^{2}={\mathbb{R}}\times(0,\infty)$ associated to the Dunkl
operators $D_{z}$ and $D_{\bar{z}}$, see [12].
## 3 Preliminaries to the $\lambda$-Bergman spaces
We define the function $K_{\lambda}(z,w)$ by
$\displaystyle
K_{\lambda}(z,w)=\sum_{n=0}^{\infty}\frac{n+\lambda+1}{\lambda+1}\phi_{n}(z)\overline{\phi_{n}(w)}.$
By (4), the series defining $K_{\lambda}(z,w)$ converges absolutely for
$zw\in{\mathbb{D}}$ and uniformly for $zw$ in a compact subset of
${\mathbb{D}}$. Moreover we have
###### Proposition 3.1
For fixed $w\in\overline{{\mathbb{D}}}$ the function $z\mapsto
K_{\lambda}(z,w)$ is $\lambda$-analytic in ${\mathbb{D}}$, and the function
$w\mapsto K_{\lambda}(z,w)$ reproduces all functions $f\in
A^{1}_{\lambda}({\mathbb{D}})$, that is,
$\displaystyle
f(z)=\int_{{\mathbb{D}}}f(w)K_{\lambda}(z,w)d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}}.$ (17)
Conversely, if $f\in L_{\lambda}^{1}({\mathbb{D}})$ satisfies (17), then $f$
is $\lambda$-analytic in ${\mathbb{D}}$, and in particular $f\in
A^{1}_{\lambda}({\mathbb{D}})$.
* Proof. It is easy to see, from (3), that
$\displaystyle\left|\partial_{\bar{z}}\phi_{n}(z)\right|\leq\epsilon_{n}^{-1}n|z|^{n-1}\asymp
n^{\lambda+1}|z|^{n-1},$ (18)
which shows that termwise differentiation of $K_{\lambda}(z,w)$ by
$\partial_{\bar{z}}$ in ${\mathbb{D}}$ is legitimate. Therefore
$D_{\bar{z}}K_{\lambda}(z,w)=0$ since $D_{\bar{z}}\phi_{n}(z)=0$.
For $f\in A^{1}_{\lambda}({\mathbb{D}})$ and $0<r<1$, from (10) we have
$f(rw)=\sum_{k=0}^{\infty}c_{k}r^{k}\phi_{k}(w)$ ($|w|<1$). In view of (4),
the last assertion in Proposition 2.2(iii) implies that termwise integration
of $\overline{\phi_{n}(w)}f(rw)$ over ${\mathbb{D}}$ is legitimate, and since,
by Proposition 2.1,
$\displaystyle\int_{{\mathbb{D}}}\left|\phi_{n}(w)\right|^{2}\,d\sigma_{\lambda}(w)=\frac{\lambda+1}{n+\lambda+1},$
(19)
it follows that
$\int_{{\mathbb{D}}}\overline{\phi_{n}(w)}f(rw)\,d\sigma_{\lambda}(w)=\frac{\lambda+1}{n+\lambda+1}r^{n}c_{n}$.
Making the change of variables $w\mapsto w/r$, one has
$\int_{{\mathbb{D}}_{r}}\overline{\phi_{n}(w)}f(w)\,d\sigma_{\lambda}(w)=\frac{\lambda+1}{n+\lambda+1}r^{2n+2\lambda+2}c_{n},$
where ${\mathbb{D}}_{r}=\\{w:\,|w|<r\\}$, and then, letting $r\rightarrow 1-$
yields
$\displaystyle\int_{{\mathbb{D}}}\overline{\phi_{n}(w)}f(w)\,d\sigma_{\lambda}(w)=\frac{\lambda+1}{n+\lambda+1}c_{n}.$
(20)
Again by (4),
$\sum_{n=0}^{\infty}(n+\lambda+1)|\phi_{n}(z)\overline{\phi_{n}(w)}|\lesssim\sum_{n=0}^{\infty}(n+1)^{2\lambda+1}|z|^{n}$
for all $w\in\overline{{\mathbb{D}}}$, which implies that termwise integration
of $f(w)K_{\lambda}(z,w)$ over ${\mathbb{D}}$ is also legitimate. This,
together with (20), proves (17).
The final assertion in the proposition is verified by the same procedure as
above.
The function $K_{\lambda}(z,w)$ is called the $\lambda$-Bergman kernel on the
disk ${\mathbb{D}}$.
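As a consistency check, for $\lambda=0$ one has $\phi_{n}^{0}(z)=z^{n}$ and $d\sigma_{0}$ is the normalized area measure, so that
$\displaystyle K_{0}(z,w)=\sum_{n=0}^{\infty}(n+1)z^{n}\bar{w}^{n}=\frac{1}{(1-z\bar{w})^{2}},$
the classical Bergman kernel of the unit disk.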
###### Corollary 3.2
If $f$ is $\lambda$-analytic in ${\mathbb{D}}$, then for $r\in(0,1)$,
$\displaystyle
f(z)=r^{-2\lambda-2}\int_{{\mathbb{D}}_{r}}f(w)K_{\lambda}(z/r,w/r)d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}}_{r},$ (21)
where ${\mathbb{D}}_{r}=\\{z:\,|z|<r\\}$. Conversely, if $f$ is locally
integrable on ${\mathbb{D}}$ associated with the measure $d\sigma_{\lambda}$
and satisfies (21) for all $r\in(0,1)$, then $f$ is $\lambda$-analytic in
${\mathbb{D}}$.
* Proof. For $z\in{\mathbb{D}}_{r}$, set $z^{\prime}=z/r$. Applying (17) to the function $z^{\prime}\mapsto f(rz^{\prime})$ one has
$f(rz^{\prime})=\int_{{\mathbb{D}}}f(rw)K_{\lambda}(z^{\prime},w)d\sigma_{\lambda}(w),$
and then, making the change of variables $w=w^{\prime}/r$ yields (21). The
second part of the corollary follows from (21) and the final assertion in
Proposition 3.1.
We define the operator $P_{\lambda}$ by
$\displaystyle(P_{\lambda}f)(z)=\int_{{\mathbb{D}}}f(w)K_{\lambda}(z,w)d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}}.$ (22)
###### Proposition 3.3
For $f\in L_{\lambda}^{1}({\mathbb{D}})$, the integral on the right hand side
of (22) is well defined for $z\in{\mathbb{D}}$ and defines a
$\lambda$-analytic function in ${\mathbb{D}}$.
Indeed as in the proof of Proposition 3.1, for fixed $z\in{\mathbb{D}}$, by
(4) termwise integration of $f(w)K_{\lambda}(z,w)$ over ${\mathbb{D}}$
with respect to the measure $d\sigma_{\lambda}(w)$ is legitimate, and by
Proposition 2.2(iii) the resulting series represents a $\lambda$-analytic
function in ${\mathbb{D}}$.
###### Lemma 3.4
([11, Theorem 4.2]) For $|zw|<1$, we have
$\displaystyle|C(z,w)|\lesssim\frac{|1-z\bar{w}|^{-1}}{\big{(}|1-zw|+|1-z\bar{w}|\big{)}^{2\lambda}}\ln\left(\frac{|1-z\bar{w}|^{2}}{|1-zw|^{2}}+2\right).$
###### Lemma 3.5
For $|zw|<1$,
$\displaystyle|K_{\lambda}(z,w)|\lesssim\frac{|1-z\bar{w}|^{-1}}{\big{(}|1-zw|+|1-z\bar{w}|\big{)}^{2\lambda}}\left(\frac{1}{|1-z\overline{w}|}+\frac{1}{|1-zw|}\right).$
(23)
* Proof. For $|zw|<1$ and $z=re^{i\theta}$, one has
$\displaystyle(\lambda+1)K_{\lambda}(z,w)=r\frac{d}{dr}\left[C(z,w)\right]+(\lambda+1)C(z,w).$
By Lemma 3.4 and on account of the fact
$\frac{1}{s}\ln\left(\frac{s^{2}}{t^{2}}+2\right)\lesssim\frac{1}{s}+\frac{1}{t}$
for $s,t>0$, it suffices to show that
$\left|r\frac{d}{dr}\left[C(z,w)\right]\right|$ has the same upper bound as in
(23).
Put
$\displaystyle A=\frac{4({\rm Im}z)({\rm
Im}w)}{|1-zw|^{2}},\qquad\tilde{A}=-\frac{4({\rm Im}z)({\rm
Im}w)}{|1-z\bar{w}|^{2}}.$
It is noted that
$\displaystyle r\frac{d}{dr}\left[\frac{1}{1-z\bar{w}}\right]$
$\displaystyle=\frac{z\bar{w}}{(1-z\bar{w})^{2}},\qquad z=re^{i\theta},$
$\displaystyle r\frac{d}{dr}\left[\frac{1}{|1-zw|^{2\lambda}}\right]$
$\displaystyle=2\lambda\frac{{\mathrm{R}e}\,(zw)-|zw|^{2}}{|1-zw|^{2\lambda+2}},\qquad
z=re^{i\theta},$ $\displaystyle r\frac{d}{dr}A$
$\displaystyle=A\left(1+2\frac{{\mathrm{R}e}\,(zw)-|zw|^{2}}{|1-zw|^{2}}\right)=A\frac{1-|zw|^{2}}{|1-zw|^{2}},\qquad
z=re^{i\theta}.$
For $|1-z\bar{w}|^{2}\leq 2|1-zw|^{2}$, we have $-1\leq A<1$ since
$1-A=|1-z\bar{w}|^{2}/|1-zw|^{2}$, so that
$\displaystyle\left|{}_{2}\\!F_{1}\left(\lambda,\lambda;2\lambda+1;A\right)\right|\lesssim
1.$
It follows from [7, 2.1(7)] that
$\displaystyle
r\frac{d}{dr}\left[{}_{2}\\!F_{1}\left(\lambda,\lambda;2\lambda+1;A\right)\right]=\frac{\lambda^{2}}{2\lambda+1}{}_{2}\\!F_{1}\left(\lambda+1,\lambda+1;2\lambda+2;A\right)r\frac{d}{dr}A,$
and successively using [7, 2.1(7) and 2.1(23)],
$\displaystyle{}_{2}\\!F_{1}(a,b;a+b;t)$
$\displaystyle=1+\frac{ab}{a+b}\int_{0}^{t}(1-s)^{-1}{}_{2}\\!F_{1}(a,b;a+b+1;s)ds$
$\displaystyle\asymp
1+\ln\frac{1}{1-t}\asymp\ln\left(\frac{1}{1-t}+2\right),\qquad-1\leq t<1.$
(24)
Thus we have
$\displaystyle\left|r\frac{d}{dr}\left[{}_{2}\\!F_{1}\left(\lambda,\lambda;2\lambda+1;A\right)\right]\right|\lesssim\frac{1}{|1-zw|}\ln\left(\frac{|1-zw|^{2}}{|1-z\bar{w}|^{2}}+2\right)\lesssim\frac{1}{|1-z\overline{w}|},\qquad
z=re^{i\theta},$
where the last inequality is based on the fact
$s\ln\left(s^{-2}+2\right)\lesssim 1$ for $s\in(0,2]$. Now from (8) and the
first equality in (9) we get, for $|1-z\bar{w}|^{2}\leq 2|1-zw|^{2}$,
$\displaystyle\left|r\frac{d}{dr}\left[C(z,w)\right]\right|\lesssim$
$\displaystyle\frac{1}{|1-z\overline{w}|^{2}}\frac{1}{|1-zw|^{2\lambda}}\asymp\frac{|1-z\overline{w}|^{-2}}{(|1-z\overline{w}|+|1-zw|)^{2\lambda}},\qquad
z=re^{i\theta}.$ (25)
If $2|1-zw|^{2}<|1-z\bar{w}|^{2}$, we have $1/2<\tilde{A}<1$ since
$1-\tilde{A}=|1-zw|^{2}/|1-z\bar{w}|^{2}$, and then, (24) shows
$\displaystyle\left|{}_{2}\\!F_{1}\left(\lambda,\lambda+1;2\lambda+1;\tilde{A}\right)\right|\lesssim\ln\left(\frac{|1-z\bar{w}|^{2}}{|1-zw|^{2}}+2\right).$
Again using [7, 2.1(7)] gives
$\displaystyle
r\frac{d}{dr}\left[{}_{2}\\!F_{1}\left(\lambda,\lambda+1;2\lambda+1;\tilde{A}\right)\right]=\frac{\lambda(\lambda+1)}{2\lambda+1}{}_{2}\\!F_{1}\left(\lambda+1,\lambda+2;2\lambda+2;\tilde{A}\right)r\frac{d}{dr}\tilde{A},$
but by [7, 2.1(23)],
$\displaystyle{}_{2}\\!F_{1}\left(\lambda+1,\lambda+2;2\lambda+2;\tilde{A}\right)=(1-\tilde{A})^{-1}{}_{2}\\!F_{1}\left(\lambda+1,\lambda;2\lambda+2;\tilde{A}\right),$
so that
$\displaystyle\left|r\frac{d}{dr}\left[{}_{2}\\!F_{1}\left(\lambda,\lambda+1;2\lambda+1;\tilde{A}\right)\right]\right|\lesssim$
$\displaystyle\frac{|1-z\bar{w}|^{2}}{|1-zw|^{2}}\frac{1-|zw|^{2}}{|1-z\bar{w}|^{2}}=\frac{1-|zw|^{2}}{|1-zw|^{2}},\qquad
z=re^{i\theta}.$
Now from (8) and the second equality in (9) we get, for
$2|1-zw|^{2}<|1-z\bar{w}|^{2}$,
$\displaystyle\left|r\frac{d}{dr}\left[C(z,w)\right]\right|\lesssim$
$\displaystyle\frac{1}{|1-z\overline{w}|^{2\lambda+2}}\ln\left(\frac{|1-z\bar{w}|^{2}}{|1-zw|^{2}}+2\right)+\frac{1}{|1-z\overline{w}|^{2\lambda+1}}\frac{1}{|1-zw|}$
$\displaystyle\lesssim$
$\displaystyle\frac{|1-z\overline{w}|^{-1}|1-zw|^{-1}}{(|1-z\overline{w}|+|1-zw|)^{2\lambda}},\qquad
z=re^{i\theta}.$ (26)
Combining (25) and (26) shows that
$\left|r\frac{d}{dr}\left[C(z,w)\right]\right|$ has an upper bound like that
in (23), as desired.
###### Theorem 3.6
For $1<p<\infty$, the operator $P_{\lambda}$ is bounded from
$L_{\lambda}^{p}({\mathbb{D}})$ onto $A^{p}_{\lambda}({\mathbb{D}})$.
* Proof. We shall show that, for $\alpha=1/p$ and $\alpha=1/p^{\prime}$,
$\displaystyle\int_{\mathbb{D}}|K_{\lambda}(z,w)|(1-|w|^{2})^{-\alpha}d\sigma_{\lambda}(w)$
$\displaystyle\lesssim(1-|z|^{2})^{-\alpha},\qquad z\in{\mathbb{D}}.$ (27)
Thus by Schur’s theorem (cf. [19, p. 52]), $P_{\lambda}$ is bounded from
$L_{\lambda}^{p}({\mathbb{D}})$ to $A^{p}_{\lambda}({\mathbb{D}})$.
For $z=re^{i\theta}$, $w=se^{i\varphi}\in{\mathbb{D}}$, we have the following
inequalities
$\displaystyle|1-z\overline{w}|\asymp
1-rs+\left|\sin(\theta-\varphi)/2\right|,$
$\displaystyle|1-z\overline{w}|+|1-zw|\gtrsim
1-rs+|\sin\theta|+|\sin\varphi|,$
and from (23),
$\displaystyle|K_{\lambda}(z,w)|\lesssim\Phi_{r,\theta}(s,\varphi)+\Phi_{r,\theta}(s,-\varphi),$
(28)
where
$\displaystyle\Phi_{r,\theta}(s,\varphi)=\frac{\left(1-rs+|\sin\theta|+|\sin\varphi|\right)^{-2\lambda}}{\left(1-rs+\left|\sin(\theta-\varphi)/2\right|\right)^{2}}.$
(29)
We have
$\displaystyle\int_{\mathbb{D}}|K_{\lambda}(z,w)|(1-|w|^{2})^{-\alpha}d\sigma_{\lambda}(w)\lesssim\int_{0}^{1}\int_{-\pi}^{\pi}\Phi_{r,\theta}(s,\varphi)(1-s)^{-\alpha}|\sin\varphi|^{2\lambda}d\varphi
ds,$
since the contribution of $\Phi_{r,\theta}(s,-\varphi)$ to the integral is the
same as that of $\Phi_{r,\theta}(s,\varphi)$. Thus
$\displaystyle\int_{\mathbb{D}}|K_{\lambda}(z,w)|(1-|w|^{2})^{-\alpha}d\sigma_{\lambda}(w)\lesssim\int_{0}^{1}\int_{-\pi}^{\pi}\frac{(1-s)^{-\alpha}}{\left(1-rs+\left|\sin(\theta-\varphi)/2\right|\right)^{2}}\,d\varphi
ds,$
from which (27) follows after elementary calculations.
The proof above actually gives a slightly stronger conclusion, stated as
follows.
###### Corollary 3.7
For $1<p<\infty$, the operator $T$ defined by
$(Tf)(z)=\int_{{\mathbb{D}}}|f(w)||K_{\lambda}(z,w)|d\sigma_{\lambda}(w)$
maps $L_{\lambda}^{p}({\mathbb{D}})$ boundedly into itself.
###### Corollary 3.8
There is an orthogonal projection from $L_{\lambda}^{2}({\mathbb{D}})$ onto
$A^{2}_{\lambda}({\mathbb{D}})$. Such an operator is uniquely determined by
(22).
* Proof. The existence and the uniqueness of the orthogonal projection from $L_{\lambda}^{2}({\mathbb{D}})$ onto $A^{2}_{\lambda}({\mathbb{D}})$ follows from the general theory of Hilbert spaces, since $A^{2}_{\lambda}({\mathbb{D}})$ is a closed subspace of $L_{\lambda}^{2}({\mathbb{D}})$ (see Theorem 5.6 below). Moreover by Theorem 3.6, the operator $P_{\lambda}$ is bounded from $L_{\lambda}^{2}({\mathbb{D}})$ into $A^{2}_{\lambda}({\mathbb{D}})$, and by Proposition 3.1, $P_{\lambda}\left(L_{\lambda}^{2}({\mathbb{D}})\right)=A^{2}_{\lambda}({\mathbb{D}})$ and $P_{\lambda}^{2}=P_{\lambda}$. In addition, by Corollary 3.7, for $f,g\in L_{\lambda}^{2}({\mathbb{D}})$ the interchange of order of the integration
$\displaystyle\int_{{\mathbb{D}}}\int_{{\mathbb{D}}}f(w)K_{\lambda}(z,w)d\sigma_{\lambda}(w)\,\overline{g(z)}d\sigma_{\lambda}(z)$
is permitted, which implies that the adjoint $P_{\lambda}^{*}$ of
$P_{\lambda}$ is $P_{\lambda}$ itself.
The operator $P_{\lambda}$ defined by (22) is called the $\lambda$-Bergman
projection on the disk ${\mathbb{D}}$.
## 4 Growth of functions in the $\lambda$-Hardy and $\lambda$-Bergman spaces
### 4.1 The $p$-means $M_{p}(f;r)$ of $\lambda$-analytic functions
We recall several conclusions on the $\lambda$-Hardy spaces in [11].
###### Theorem 4.1
([11, Theorem 7.1]) Let $p_{0}\leq p\leq\ell\leq+\infty$,
$\delta=\frac{1}{p}-\frac{1}{\ell}$, and let $f\in
H^{p}_{\lambda}({\mathbb{D}})$. Then
(i) for $p_{0}\leq p\leq\ell\leq+\infty$, $M_{\ell}(f;r)\leq
c(1-r)^{-\delta(2\lambda+1)}\|f\|_{H_{\lambda}^{p}}$;
(ii) for $p_{0}<p<\ell\leq+\infty$,
$M_{\ell}(f;r)=o\left((1-r)^{-\delta(2\lambda+1)}\right)$;
(iii) for $p_{0}<p<\ell\leq+\infty$, $p\leq k<+\infty$,
$\displaystyle\left(\int_{0}^{1}(1-r)^{k\delta(2\lambda+1)-1}M_{\ell}(f;r)^{k}dr\right)^{1/k}\leq
c\|f\|_{H_{\lambda}^{p}}.$
###### Lemma 4.2
If $f$ is $\lambda$-harmonic in ${\mathbb{D}}$ and $1\leq p<\infty$, then its
integral mean $r\mapsto M_{p}(f;r)$ is nondecreasing over $[0,1)$; and if $f$
is $\lambda$-analytic in ${\mathbb{D}}$ and $p_{0}\leq p<1$, then
$M_{p}(f;r^{\prime})\leq 2^{2/p-1}M_{p}(f;r)$ for $0\leq r^{\prime}<r<1$.
The first part above follows from [11, Proposition 2.4(iv) and Lemma 3.3], and
the second part from [11, Theorem 6.4]. For $p_{0}\leq p<\infty$, we have the
following nearly trivial inequality
$\displaystyle\int_{0}^{1}M_{p}(f;r)^{p}dr\lesssim\int_{0}^{1}M_{p}(f;r)^{p}r^{2\lambda+1}dr.$
(30)
This is because $\int_{0}^{1/2}M_{p}(f;r)^{p}dr\leq
4\int_{0}^{1/2}M_{p}(f;1-r)^{p}dr=4\int_{1/2}^{1}M_{p}(f;r)^{p}dr$.
We now give some estimates of the $p$-means $M_{p}(f;r)$ of functions $f$ in
the $\lambda$-Bergman spaces $A^{p}_{\lambda}({\mathbb{D}})$.
###### Proposition 4.3
Let $p_{0}\leq p\leq l\leq+\infty$, $\delta=\frac{1}{p}-\frac{1}{l}$, and let
$f$ be a $\lambda$-analytic function on ${\mathbb{D}}$. If for some $\beta\geq
0$,
$\displaystyle M_{p}(f;r)\leq c_{0}(1-r)^{-\beta},\qquad 0\leq r<1,$ (31)
then there exists some $c>0$ independent of $r$ and $c_{0}$ such that
$\displaystyle M_{l}(f;r)\leq cc_{0}(1-r)^{-\delta(2\lambda+1)-\beta},\qquad
0\leq r<1.$ (32)
* Proof. For $p=\infty$ (32) is trivial. In what follows we assume $p_{0}\leq p<\infty$.
For $0<s<1$ set $f_{s}(z)=f(sz)$. We apply Theorem 4.1(i) to $f_{s}$ to get
$M_{l}(f_{s};r)\leq
c(1-r)^{-\delta(2\lambda+1)}\|f_{s}\|_{H_{\lambda}^{p}},\qquad 0\leq r<1,$
but
$\displaystyle\|f_{s}\|_{H_{\lambda}^{p}}=\sup_{0\leq
r<1}M_{p}(f_{s};r)=\sup_{0\leq r<1}M_{p}(f;sr)\leq 2^{2/p}M_{p}(f;s),$ (33)
where the last inequality is based on Lemma 4.2. Combining the above two
estimates gives $M_{l}(f;sr)\leq
c^{\prime}(1-r)^{-\delta(2\lambda+1)}M_{p}(f;s)$, which, together with (31),
implies $M_{l}(f;r^{2})\leq c^{\prime}c_{0}(1-r)^{-\delta(2\lambda+1)-\beta}$.
Finally replacing $r$ by $\sqrt{r}$ proves (32).
###### Proposition 4.4
Let $p_{0}\leq p<\infty$ and let $f$ be a $\lambda$-analytic function on
${\mathbb{D}}$. If $f\in A^{p}_{\lambda}({\mathbb{D}})$, then
$M_{p}(f;r)=o\left((1-r)^{-1/p}\right)$ as $r\rightarrow 1-$, and
$M_{p}(f;r)\lesssim(1-r)^{-1/p}\|f\|_{A_{\lambda}^{p}}$ for $0\leq r<1$. If
for some $\alpha<1/p$, $M_{p}(f;r)\lesssim(1-r)^{-\alpha}$, then $f\in
A^{p}_{\lambda}({\mathbb{D}})$.
Indeed, for $f\in A^{p}_{\lambda}({\mathbb{D}})$, one uses Lemma 4.2 to get
$(1-r)M_{p}(f;r)^{p}\leq
4\int_{r}^{1}M_{p}(f;s)^{p}ds\lesssim\int_{r}^{1}M_{p}(f;s)^{p}s^{2\lambda+1}ds\rightarrow
0\quad\hbox{as}\,\,r\rightarrow 1-;$
and further by use of (30),
$(1-r)M_{p}(f;r)^{p}\lesssim\|f\|_{A_{\lambda}^{p}}^{p}$. This verifies the
first part of the proposition. The second part is obvious.
The following theorem is a combination of Propositions 4.3 and 4.4.
###### Theorem 4.5
Let $p_{0}\leq p<\infty$, $p\leq\ell\leq\infty$, and
$\delta=\frac{1}{p}-\frac{1}{\ell}$. Then for $f\in
A^{p}_{\lambda}({\mathbb{D}})$,
$M_{\ell}(f;r)=o\left((1-r)^{-\delta(2\lambda+1)-1/p}\right)$ as $r\rightarrow
1-$, and
$\displaystyle
M_{\ell}(f;r)\lesssim(1-r)^{-\delta(2\lambda+1)-1/p}\|f\|_{A_{\lambda}^{p}},\qquad
0\leq r<1.$
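In particular, taking $\ell=\infty$, so that $\delta=1/p$, gives the uniform growth bound
$\displaystyle M_{\infty}(f;r)\lesssim(1-r)^{-(2\lambda+2)/p}\|f\|_{A_{\lambda}^{p}},\qquad 0\leq r<1,$
which is consistent with the point estimate of Theorem 4.11 below near the points $z=\pm 1$, where $|1-z^{2}|\asymp 1-|z|$.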
### 4.2 Point-evaluation of functions in the $\lambda$-Hardy and
$\lambda$-Bergman spaces
We shall need the $\lambda$-harmonic majorizations of functions in
$H_{\lambda}^{p}({\mathbb{D}})$.
###### Theorem 4.6
([11, Theorem 6.3], $\lambda$-harmonic majorization) Let $p\geq p_{0}$ and
$f\in H_{\lambda}^{p}({\mathbb{D}})$.
(i) If $p>p_{0},$ then there exists a nonnegative function $g(\theta)\in
L_{\lambda}^{p/p_{0}}(\partial{\mathbb{D}})$ such that for $z\in{\mathbb{D}}$,
$\displaystyle|f(z)|^{p_{0}}\leq P(g;z),$
and
$\displaystyle\|f\|^{p_{0}}_{H^{p}_{\lambda}}\leq\|g\|_{L_{\lambda}^{p/p_{0}}(\partial{\mathbb{D}})}\leq
2^{2-p_{0}}\|f\|^{p_{0}}_{H^{p}_{\lambda}}.$
(ii) If $p=p_{0},$ there exists a finite positive measure
$d\nu\in{\mathfrak{B}}_{\lambda}(\partial{\mathbb{D}})$ such that for
$z\in{\mathbb{D}}$,
$\displaystyle|f(z)|^{p_{0}}\leq P(d\nu;z),$
and
$\displaystyle\|f\|^{p_{0}}_{H^{p_{0}}_{\lambda}}\leq\|d\nu\|_{{\mathfrak{B}}_{\lambda}(\partial{\mathbb{D}})}\leq
2^{2-p_{0}}\|f\|^{p_{0}}_{H^{p_{0}}_{\lambda}}.$
We first consider the $\lambda$-harmonic function $u(x,y)$ in the disc
${\mathbb{D}}$ satisfying the condition, for $1\leq p<\infty$,
$\displaystyle C_{p}(u):=\sup_{0\leq
r<1}\int_{-\pi}^{\pi}|u(r\cos\theta,r\sin\theta)|^{p}dm_{\lambda}(\theta)<\infty.$
(34)
###### Lemma 4.7
([11, Corollary 3.5], characterization of $\lambda$-Poisson integrals) Let
$u(x,y)$ be $\lambda$-harmonic in ${\mathbb{D}}$. Then
(i) for $1<p<\infty$, $u(x,y)$ is the $\lambda$-Poisson integral $P(f;z)$ with
$z=x+iy$ of some $f\in L_{\lambda}^{p}(\partial{\mathbb{D}})$ if and only if
$C_{p}(u)<\infty$;
(ii) $u(x,y)$ is the $\lambda$-Poisson integral $P(d\nu;z)$ of a measure
$d\nu\in{\mathfrak{B}}_{\lambda}(\partial{\mathbb{D}})$ if and only if
$C_{1}(u)<\infty$;
(iii) in parts (i) and (ii),
$\|f\|_{L_{\lambda}^{p}(\partial{\mathbb{D}})}=C_{p}(u)^{1/p}$ for
$1<p<\infty$, and $\|d\nu\|_{{\mathfrak{B}_{\lambda}}}=C_{1}(u)$ for $p=1$.
Note that, in the above lemma, for $1<p<\infty$ the equality
$\|f\|_{L_{\lambda}^{p}(\partial{\mathbb{D}})}=C_{p}(u)^{1/p}$ follows from
Proposition 2.3(iv) and (v) immediately. For $p=1$, the Riesz representation
theorem gives
$\displaystyle\|d\nu\|_{{\mathfrak{B}_{\lambda}}}=\sup
\tilde{c}_{\lambda}\left|\int_{-\pi}^{\pi}h(\theta)|\sin\theta|^{2\lambda}d\nu(\theta)\right|,$
where the supremum is taken over all $h\in C(\partial{\mathbb{D}})$ with
$\|h\|_{C(\partial{\mathbb{D}})}=1$. Applying Proposition 2.3(v) with
$X=C(\partial{\mathbb{D}})$, we have
$\displaystyle\|d\nu\|_{{\mathfrak{B}_{\lambda}}}$
$\displaystyle=\sup\lim_{r\rightarrow
1-}\tilde{c}_{\lambda}\left|\int_{-\pi}^{\pi}P(h;re^{i\theta})|\sin\theta|^{2\lambda}d\nu(\theta)\right|$
$\displaystyle=\sup\lim_{r\rightarrow
1-}\left|\int_{-\pi}^{\pi}h(\varphi)P(d\nu;re^{i\varphi})dm_{\lambda}(\varphi)\right|,$
which is dominated by $\sup\|h\|_{C(\partial{\mathbb{D}})}C_{1}(u)=C_{1}(u)$.
This, in conjunction with Proposition 2.3(vi), shows that
$\|d\nu\|_{{\mathfrak{B}_{\lambda}}}=C_{1}(u)$.
###### Lemma 4.8
([11, Corollary 4.3]) Let $z=re^{i\theta}$, $w=e^{i\varphi}$. For
$\theta,\varphi\in[-\pi,\pi]$ and $r\in[0,1)$, we have
$\displaystyle
P(re^{i\theta},e^{i\varphi})\lesssim\frac{(1-r)(1-r+|\sin(\theta-\varphi)/2|)^{-2}}{(1-r+|\sin\theta|+|\sin(\theta-\varphi)/2|)^{2\lambda}}\ln\Big{(}\frac{|\sin(\theta-\varphi)/2|}{1-r}+2\Big{)}.$
###### Theorem 4.9
Let $u(x,y)$ be a $\lambda$-harmonic function in ${\mathbb{D}}$ satisfying
(34) for $1\leq p<\infty$. Then
$\displaystyle|u(x,y)|\lesssim\frac{(1-|z|)^{-1/p}}{|1-z^{2}|^{2\lambda/p}}C_{p}(u)^{1/p},\qquad
z=x+iy\in{\mathbb{D}}.$ (35)
* Proof. We first consider the case for $1<p<\infty$. By Lemma 4.7(i), $u(x,y)$ is the $\lambda$-Poisson integral $P(f;z)$ with $z=x+iy$ of some $f\in L_{\lambda}^{p}(\partial{\mathbb{D}})$. From (11), Hölder’s inequality gives
$\displaystyle|u(x,y)|\leq\|f\|_{L_{\lambda}^{p}(\partial{\mathbb{D}})}\|P(z,\cdot)\|_{L_{\lambda}^{p^{\prime}}(\partial{\mathbb{D}})},\qquad
z=x+iy\in{\mathbb{D}}.$ (36)
On account of the fact
$\|P(z,\cdot)\|_{L_{\lambda}^{1}(\partial{\mathbb{D}})}=1$, one has
$\displaystyle\|P(z,\cdot)\|_{L_{\lambda}^{p^{\prime}}(\partial{\mathbb{D}})}\leq\sup_{\varphi}P(z,e^{i\varphi})^{1/p},\qquad
z=re^{i\theta}\in{\mathbb{D}},$ (37)
but by Lemma 4.8,
$\displaystyle
P(re^{i\theta},e^{i\varphi})\lesssim\frac{(1-r)^{-1}}{(1-r+|\sin\theta|)^{2\lambda}},\qquad\theta,\varphi\in[-\pi,\pi],\,\,r\in[0,1).$
(38)
Combining (36), (37) and (38) yields
$\displaystyle|u(x,y)|\lesssim\frac{(1-r)^{-1/p}}{(1-r+|\sin\theta|)^{2\lambda/p}}\|f\|_{L_{\lambda}^{p}(\partial{\mathbb{D}})},\qquad
z=x+iy=re^{i\theta}\in{\mathbb{D}},$
that is identical with (35), in view of the fact $|1-z^{2}|\asymp
1-r+|\sin\theta|$ for $z=re^{i\theta}\in{\mathbb{D}}$.
For $p=1$, Lemma 4.7(ii) implies existence of a measure
$d\nu\in{\mathfrak{B}}_{\lambda}(\partial{\mathbb{D}})$ so that $u(x,y)$ is
its $\lambda$-Poisson integral $P(d\nu;z)$ with $z=x+iy$. It follows from (12)
that
$|u(x,y)|\leq\|d\nu\|_{{\mathfrak{B}_{\lambda}}}\sup_{\varphi}P(z,e^{i\varphi})$,
and then, appealing to (38) proves (35) for $p=1$.
We now come to the point estimates of functions in
$H_{\lambda}^{p}({\mathbb{D}})$ and $A_{\lambda}^{p}({\mathbb{D}})$.
###### Theorem 4.10
Let $p\geq p_{0}$ and $f\in H_{\lambda}^{p}({\mathbb{D}})$. Then
$\displaystyle|f(z)|\lesssim\frac{(1-|z|)^{-1/p}}{|1-z^{2}|^{2\lambda/p}}\|f\|_{H_{\lambda}^{p}},\qquad
z\in{\mathbb{D}}.$ (39)
* Proof. For $p\geq p_{0}$ and $f\in H_{\lambda}^{p}({\mathbb{D}})$, let $u(x,y)$ be the $\lambda$-harmonic majorization of $f$ given in Theorem 4.6. Thus
$\displaystyle|f(z)|^{p_{0}}\leq u(x,y),\qquad z=x+iy\in{\mathbb{D}},$ (40)
where for $p>p_{0}$, $u(x,y)$ is the $\lambda$-Poisson integral $P(g;z)$ with
$z=x+iy$ of some nonnegative function $g\in
L_{\lambda}^{p/p_{0}}(\partial{\mathbb{D}})$ satisfying
$\|g\|_{L_{\lambda}^{p/p_{0}}(\partial{\mathbb{D}})}\asymp\|f\|^{p_{0}}_{H^{p}_{\lambda}}$,
and for $p=p_{0}$, the $\lambda$-Poisson integral $P(d\nu;z)$ with $z=x+iy$ of
some nonnegative measure
$d\nu\in{\mathfrak{B}}_{\lambda}(\partial{\mathbb{D}})$ satisfying
$\|d\nu\|_{{\mathfrak{B}}_{\lambda}(\partial{\mathbb{D}})}\asymp\|f\|^{p_{0}}_{H^{p_{0}}_{\lambda}}$.
Obviously by Lemma 4.7(iii),
$\displaystyle
C_{p/p_{0}}(u)=\|g\|^{p/p_{0}}_{L_{\lambda}^{p/p_{0}}(\partial{\mathbb{D}})}\asymp\|f\|^{p}_{H^{p}_{\lambda}}<\infty$
for $p>p_{0}$, and
$C_{1}(u)=\|d\nu\|_{{\mathfrak{B}_{\lambda}(\partial{\mathbb{D}})}}\asymp\|f\|^{p_{0}}_{H^{p_{0}}_{\lambda}}<\infty$
for $p=p_{0}$. Now by Theorem 4.9,
$\displaystyle|u(x,y)|\lesssim\frac{(1-|z|)^{-p_{0}/p}}{|1-z^{2}|^{2\lambda
p_{0}/p}}\|f\|^{p_{0}}_{H^{p}_{\lambda}},\qquad z=x+iy\in{\mathbb{D}}.$
Combining this with (40) proves (39).
###### Theorem 4.11
Let $p\geq p_{0}$ and $f\in A_{\lambda}^{p}({\mathbb{D}})$. Then
$\displaystyle|f(z)|\lesssim\frac{(1-|z|)^{-2/p}}{|1-z^{2}|^{2\lambda/p}}\|f\|_{A_{\lambda}^{p}},\qquad
z\in{\mathbb{D}}.$ (41)
* Proof. For $0<\rho<1$ set $f_{\rho}(z)=f(\rho z)$. We apply Theorem 4.10 to $f_{\rho}$ to get
$\displaystyle|f(\rho
se^{i\theta})|\lesssim\frac{(1-s)^{-1/p}}{|1-s^{2}e^{2i\theta}|^{2\lambda/p}}\|f_{\rho}\|_{H_{\lambda}^{p}},\qquad\rho,s\in[0,1),$
but from (33) and Proposition 4.4,
$\|f_{\rho}\|_{H_{\lambda}^{p}}\lesssim
M_{p}(f;\rho)\lesssim(1-\rho)^{-1/p}\|f\|_{A_{\lambda}^{p}}.$
Combining the above two estimates and letting $\rho=s=r^{1/2}$ for
$r\in[0,1)$, we have
$\displaystyle|f(z)|\lesssim\frac{(1-r^{1/2})^{-2/p}}{|1-re^{2i\theta}|^{2\lambda/p}}\|f\|_{A_{\lambda}^{p}},\qquad
z=re^{i\theta}\in{\mathbb{D}},$
that is identical with (41), in view of the fact $|1-z^{2}|\asymp
1-r+|\sin\theta|\asymp|1-re^{2i\theta}|$ for $z=re^{i\theta}\in{\mathbb{D}}$.
For the point estimates of functions in the usual Hardy spaces, see [5, p.
36], and in the usual Bergman spaces, see [17] and also [6, p. 79].
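For instance, when $\lambda=0$ the factor $|1-z^{2}|^{2\lambda/p}$ disappears and (41) reduces to the familiar estimate of the classical Bergman space theory,
$\displaystyle|f(z)|\lesssim(1-|z|)^{-2/p}\|f\|_{A^{p}},\qquad z\in{\mathbb{D}}.$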
## 5 Density, completeness, and duality of the $\lambda$-Hardy and
$\lambda$-Bergman spaces
### 5.1 Density of $\lambda$-analytic polynomials
For a $\lambda$-analytic function $f$ on ${\mathbb{D}}$, we denote by
$S_{n}:=S^{f}_{n}(z)$ the $n$th partial sum of the series (10).
###### Lemma 5.1
If $f(z)$ is $\lambda$-analytic in ${\mathbb{D}}$, then for fixed $0<r_{0}<1$,
$S_{n}(z)$ converges uniformly to $f(z)$ on $\overline{{\mathbb{D}}}_{r_{0}}$
as $n\rightarrow\infty$, where ${\mathbb{D}}_{r_{0}}=\\{z:\,|z|<r_{0}\\}$.
Indeed, by (4) and Proposition 2.2(iii),
$\sup_{z\in\overline{{\mathbb{D}}}_{r_{0}}}|S_{n}(z)-f(z)|$ is controlled by a
multiple of $\sum_{k=n+1}^{\infty}k^{\lambda}|c_{k}|r_{0}^{k}$, which tends to
zero as $n\rightarrow\infty$.
###### Theorem 5.2
Assume that $p_{0}<p<\infty$. Then the set of $\lambda$-analytic polynomials
is dense in the $\lambda$-Hardy space $H_{\lambda}^{p}({\mathbb{D}})$; and in
particular, the system $\\{\phi_{n}(z):\,n\in{\mathbb{N}}_{0}\\}$ is an
orthonormal basis of the Hilbert space $H_{\lambda}^{2}({\mathbb{D}})$.
For $f\in H_{\lambda}^{p}({\mathbb{D}})$ and $s\in(0,1)$, let
$f_{s}(z)=f(sz)$. By Theorem 2.4 $\|f_{s}-f\|_{H^{p}_{\lambda}}\rightarrow 0$
as $s\rightarrow 1-$, and by Lemma 5.1, for fixed $s\in(0,1)$
$\|(S_{n})_{s}-f_{s}\|_{H^{p}_{\lambda}}\rightarrow 0$ as
$n\rightarrow\infty$. Thus the density assertion in Theorem 5.2 follows.
###### Theorem 5.3
Assume that $p_{0}<p<\infty$. Then the set of $\lambda$-analytic polynomials
is dense in the $\lambda$-Bergman space $A^{p}_{\lambda}({\mathbb{D}})$; and
in particular, the system
$\big{\\{}\big{(}\frac{n+\lambda+1}{\lambda+1}\big{)}^{1/2}\phi_{n}^{\lambda}(z):\,\,n\in{\mathbb{N}}_{0}\big{\\}}$
is an orthonormal basis of the Hilbert space $A^{2}_{\lambda}({\mathbb{D}})$.
* Proof. For $f\in A^{p}_{\lambda}({\mathbb{D}})$, set $f_{s}(z)=f(sz)$ for $0\leq s<1$. We assert that $\|f_{s}-f\|_{A_{\lambda}^{p}}\rightarrow 0$ as $s\rightarrow 1-$, namely,
$\displaystyle\lim_{s\rightarrow
1-}\int_{0}^{1}M_{p}(f_{s}-f;r)^{p}\,r^{2\lambda+1}dr=0.$ (42)
Indeed, applying (13) yields $\lim_{s\rightarrow 1-}M_{p}(f_{s}-f;r)=0$ for
$0\leq r<1$, and for all $0\leq r,s<1$, $M_{p}(f_{s}-f;r)\lesssim
M_{p}(f;rs)+M_{p}(f;r)\lesssim M_{p}(f;r)$ by Lemma 4.2. Thus (42) follows
immediately by Lebesgue’s dominated convergence theorem. In addition, by Lemma
5.1, for fixed $s\in(0,1)$ $\|(S_{n})_{s}-f_{s}\|_{A^{p}_{\lambda}}\rightarrow
0$ as $n\rightarrow\infty$. Thus the density in Theorem 5.3 is proved. The
last assertion in the theorem follows from (19).
### 5.2 Completeness of the $\lambda$-Hardy and $\lambda$-Bergman spaces
###### Lemma 5.4
Let $\\{f_{n}\\}$ be a sequence of $\lambda$-analytic functions on
${\mathbb{D}}$. If $\\{f_{n}\\}$ converges uniformly on each compact subset of
${\mathbb{D}}$, then its limit function is also $\lambda$-analytic in
${\mathbb{D}}$.
* Proof. Obviously, the assumption implies that the limit function $f$ of the sequence $\\{f_{n}\\}$ exists and is continuous in ${\mathbb{D}}$. Furthermore, by Corollary 3.2, for each $n$ and $r\in(0,1)$ one has
$\displaystyle
f_{n}(z)=r^{-2\lambda-2}\int_{{\mathbb{D}}_{r}}f_{n}(w)K_{\lambda}(z/r,w/r)d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}}_{r}.$ (43)
Since $|K_{\lambda}(z/r,w/r)|\lesssim\left(1-|z|/r\right)^{-2\lambda-2}$ for
$z,w\in{\mathbb{D}}_{r}$ by Lemma 3.5, letting $n\rightarrow\infty$ in (43)
asserts that (21) is true for the limit function $f$ and for each $r\in(0,1)$,
and so, again by Corollary 3.2, $f$ is $\lambda$-analytic in ${\mathbb{D}}$.
###### Theorem 5.5
For $p_{0}\leq p\leq\infty$, the space $H^{p}_{\lambda}({\mathbb{D}})$ is
complete.
* Proof. Let $\\{f_{n}\\}$ be a Cauchy sequence in $H^{p}_{\lambda}({\mathbb{D}})$ for $p_{0}\leq p<\infty$. For $r\in(0,1)$, it follows from Theorem 4.10 that
$\displaystyle|f_{m}(z)-f_{n}(z)|\lesssim(1-r)^{-(2\lambda+1)/p}\|f_{m}-f_{n}\|_{H_{\lambda}^{p}},\qquad
z\in\overline{{\mathbb{D}}}_{r},$
which shows that $\\{f_{n}\\}$ converges to a function $f$ uniformly on each
compact subset of ${\mathbb{D}}$. Lemma 5.4 asserts that $f$ is
$\lambda$-analytic in ${\mathbb{D}}$. The locally uniform convergence implies,
for $r\in[0,1)$,
$\displaystyle
M_{p}(f_{n}-f;r)^{p}=\lim_{m\rightarrow\infty}\int_{-\pi}^{\pi}|f_{n}(re^{i\theta})-f_{m}(re^{i\theta})|^{p}\,dm_{\lambda}(\theta)\leq\liminf_{m\rightarrow\infty}\|f_{n}-f_{m}\|^{p}_{H_{\lambda}^{p}},$
(44)
so that
$\|f_{n}-f\|_{H_{\lambda}^{p}}\leq\liminf_{m\rightarrow\infty}\|f_{n}-f_{m}\|_{H_{\lambda}^{p}}$.
Hence $H^{p}_{\lambda}({\mathbb{D}})$ for $p_{0}\leq p<\infty$ is complete,
and so is $H^{\infty}_{\lambda}({\mathbb{D}})$.
###### Theorem 5.6
For $p_{0}\leq p<\infty$, the space $A^{p}_{\lambda}({\mathbb{D}})$ is
complete.
The proof is similar to that of Theorem 5.5, by means of Theorem 4.11 and
Lemma 5.4. The only difference is that (44) is modified by, for $r\in(0,1)$,
$\displaystyle\int_{{\mathbb{D}}_{r}}|f_{n}(z)-f(z)|^{p}d\sigma_{\lambda}(z)=\lim_{m\rightarrow\infty}\int_{{\mathbb{D}}_{r}}|f_{n}(z)-f_{m}(z)|^{p}d\sigma_{\lambda}(z)\leq\liminf_{m\rightarrow\infty}\|f_{n}-f_{m}\|^{p}_{A_{\lambda}^{p}}.$
### 5.3 A Szegö type transform
In order to determine the dual of the $\lambda$-Hardy spaces subsequently, we
introduce a Szegö type transform. Since for $1\leq p\leq\infty$, the boundary
value $f(e^{i\theta})$ of $f\in H_{\lambda}^{p}({\mathbb{D}})$ satisfies (15),
it follows that
$\displaystyle\int_{-\pi}^{\pi}f(e^{i\varphi})e^{i\varphi}C(e^{i\varphi},z)dm_{\lambda}(\varphi)=0,\qquad
z\in{\mathbb{D}};$ (45)
and from (7) and (14), one has
$\displaystyle
f(z)=\int_{-\pi}^{\pi}f(e^{i\varphi})C(z,e^{i\varphi})dm_{\lambda}(\varphi),\qquad
z\in{\mathbb{D}}.$ (46)
We call the equality (46) the $\lambda$-Cauchy integral formula of $f\in
H_{\lambda}^{p}({\mathbb{D}})$ ($1\leq p\leq\infty$).
In general, a Szegö type transform can be defined, for $h\in
L^{1}_{\lambda}(\partial{\mathbb{D}})$, by
$\displaystyle
S_{\lambda}h(z)=\int_{-\pi}^{\pi}h(e^{i\varphi})C(z,e^{i\varphi})dm_{\lambda}(\varphi),\qquad
z\in{\mathbb{D}}.$ (47)
The operator $S_{\lambda}$ is called the $\lambda$-Szegö transform.
###### Proposition 5.7
For $1<p<\infty$, the $\lambda$-Szegö transform maps
$L^{p}_{\lambda}(\partial{\mathbb{D}})$ boundedly onto
$H^{p}_{\lambda}({\mathbb{D}})$.
* Proof. Recall that, for $h\in L^{1}_{\lambda}(\partial{\mathbb{D}})$, its $\lambda$-conjugate Poisson integral is (cf. [11, Subsection 5.1])
$\displaystyle
Q(h;z)=\int_{-\pi}^{\pi}h(e^{i\varphi})Q(z,e^{i\varphi})dm_{\lambda}(\varphi),\qquad
z\in{\mathbb{D}},$
where $Q(z,w)$ is the $\lambda$-conjugate Poisson kernel given by
$\displaystyle Q(z,w)=-i\left[2C(z,w)-P(z,w)-1-Q_{1}(z,w)\right]$
with
$\displaystyle
Q_{1}(z,w)=\sum_{n=1}^{\infty}\frac{2\lambda}{\sqrt{n(n+2\lambda)}}\phi_{n}(z)w\phi_{n-1}^{\lambda}(w).$
It then follows that
$\displaystyle
S_{\lambda}h(z)-\frac{i}{2}Q(h;z)=\frac{1}{2}\int_{-\pi}^{\pi}h(e^{i\varphi})\left[P(z,e^{i\varphi})+1+Q_{1}(z,e^{i\varphi})\right]dm_{\lambda}(\varphi),\qquad
z\in{\mathbb{D}}.$
By [11, Lemma 5.3], the function $(\theta,\varphi)\mapsto
Q_{1}(re^{i\theta},e^{i\varphi})$ is an integrable kernel in
$L^{1}_{\lambda}(\partial{\mathbb{D}})$ uniformly with respect to $r\in[0,1)$,
which together with Proposition 2.3(iv) yields, for $1\leq p\leq\infty$,
$M_{p}\left(S_{\lambda}h-2^{-1}iQ(h;\cdot);r\right)\lesssim\|h\|_{L_{\lambda}^{p}(\partial{\mathbb{D}})},\qquad
h\in L^{p}_{\lambda}(\partial{\mathbb{D}}),\,\,0\leq r<1.$
Thus for $1<p<\infty$, by the M. Riesz type theorem for $\lambda$-conjugate
Poisson integrals (cf. [11, Theorem 5.1]) we get
$M_{p}(S_{\lambda}h;r)\lesssim\|h\|_{L_{\lambda}^{p}(\partial{\mathbb{D}})}$
for all $h\in L^{p}_{\lambda}(\partial{\mathbb{D}})$ and $0\leq r<1$. This
finishes the proof of the proposition.
###### Corollary 5.8
If $h\in L^{p}_{\lambda}(\partial{\mathbb{D}})$ for $1<p<\infty$, then
$\displaystyle\int_{-\pi}^{\pi}\left(h(e^{i\theta})-S_{\lambda}h(e^{i\theta})\right)\overline{f(e^{i\theta})}dm_{\lambda}(\theta)=0$
(48)
for all $f\in H^{p^{\prime}}_{\lambda}({\mathbb{D}})$.
* Proof. From (6), (7), (11) and (47), we have
$\displaystyle
P(h;z)-S_{\lambda}h(z)=\bar{z}\int_{-\pi}^{\pi}h(e^{i\varphi})e^{i\varphi}C(e^{i\varphi},z)dm_{\lambda}(\varphi),\qquad
z\in{\mathbb{D}},$
and for $f\in H^{p^{\prime}}_{\lambda}({\mathbb{D}})$, applying Fubini’s
theorem and (45) gives
$\displaystyle\int_{-\pi}^{\pi}\left(P(h;re^{i\theta})-S_{\lambda}h(re^{i\theta})\right)\overline{f(e^{i\theta})}dm_{\lambda}(\theta)$
$\displaystyle=\int_{-\pi}^{\pi}h(e^{i\varphi})re^{i\varphi}\,\overline{\int_{-\pi}^{\pi}e^{i\theta}C(e^{i\theta},re^{i\varphi})f(e^{i\theta})dm_{\lambda}(\theta)}\,dm_{\lambda}(\varphi)$
$\displaystyle=0.$ (49)
Since $P(h;re^{i\theta})-S_{\lambda}h(re^{i\theta})$ converges to
$h(e^{i\theta})-S_{\lambda}h(e^{i\theta})$ in the
$L^{p}_{\lambda}(\partial{\mathbb{D}})$-norm by Propositions 2.3(v) and 5.7,
and then (13), letting $r\rightarrow 1-$ in (49) proves (48).
### 5.4 Duality for the $\lambda$-Hardy spaces and the $\lambda$-Bergman
spaces
Let $X$ be a Banach space, and let $S$ be a (closed) subspace of $X$. The
annihilator of $S$, denoted by $S^{\perp}$, is the set of all linear
functionals $L\in X^{*}$ such that $L(x)=0$ for all $x\in S$. A consequence of
the Hahn-Banach theorem reads as follows (cf. [5, Theorem 7.1]).
###### Lemma 5.9
The dual $S^{*}$ of $S$ is isometrically isomorphic to the quotient space
$X^{*}/S^{\perp}$. Furthermore, for each fixed $L\in X^{*}$,
$\displaystyle\sup_{x\in S,\|x\|\leq 1}|L(x)|=\min_{\widetilde{L}\in
S^{\perp}}\|L+\widetilde{L}\|,$
where “$\min$” indicates that the infimum is attained.
Theorem 2.5 implies that, for $1\leq p\leq\infty$, the set of boundary
functions of $H^{p}_{\lambda}({\mathbb{D}})$, namely,
$H^{p}_{\lambda}(\partial{\mathbb{D}}):=\{f(e^{i\theta}):\,\,f\in
H^{p}_{\lambda}({\mathbb{D}})\}$, consists of those elements in
$L^{p}_{\lambda}(\partial{\mathbb{D}})$ satisfying the condition (15), and is
isometrically isomorphic to $H^{p}_{\lambda}({\mathbb{D}})$.
For $1\leq p\leq\infty$, define
$\left(zH^{p}_{\lambda}\right)({\mathbb{D}})=\left\{zf(z):\,\,f\in
H^{p}_{\lambda}({\mathbb{D}})\right\}$, and similarly,
$\left(e^{i\theta}H^{p}_{\lambda}\right)(\partial{\mathbb{D}})=\left\{e^{i\theta}f(e^{i\theta}):\,\,f\in
H^{p}_{\lambda}({\mathbb{D}})\right\}.$
###### Theorem 5.10
For $1\leq p<\infty$, the dual space
$H^{p}_{\lambda}(\partial{\mathbb{D}})^{*}$ (or
$H^{p}_{\lambda}({\mathbb{D}})^{*}$) is isometrically isomorphic to the
quotient space
$L^{p^{\prime}}_{\lambda}(\partial{\mathbb{D}})/(e^{i\theta}H^{p^{\prime}}_{\lambda})(\partial{\mathbb{D}})$,
where $p^{-1}+p^{\prime-1}=1$. Furthermore if $1<p<\infty$, then
$H^{p}_{\lambda}(\partial{\mathbb{D}})^{*}$ is isomorphic to
$H^{p^{\prime}}_{\lambda}(\partial{\mathbb{D}})$ in the sense that, each $L\in
H^{p}_{\lambda}(\partial{\mathbb{D}})^{*}$ can be represented by
$\displaystyle
L(f)=\int_{-\pi}^{\pi}f(e^{i\theta})\overline{g(e^{i\theta})}dm_{\lambda}(\theta),\qquad
f\in H^{p}_{\lambda}(\partial{\mathbb{D}}),$ (50)
with a unique function $g\in H^{p^{\prime}}_{\lambda}(\partial{\mathbb{D}})$
satisfying
$C_{p}\|g\|_{H_{\lambda}^{p^{\prime}}}\leq\|L\|\leq\|g\|_{H_{\lambda}^{p^{\prime}}}$,
where the constant $C_{p}$ is independent of $g$. Each $L\in
H^{1}_{\lambda}(\partial{\mathbb{D}})^{*}$, on the other hand, can be
represented by (50) with some $g\in L^{\infty}(\partial{\mathbb{D}})$
satisfying $\|g\|_{L^{\infty}(\partial{\mathbb{D}})}=\|L\|$.
* Proof. The proof of the theorem is in the same style as that of [5, Theorem 7.3]. The first part is a consequence of Lemma 5.9, as long as the equality $\left(H_{\lambda}^{p}({\mathbb{D}})\right)^{\perp}=(e^{i\theta}H^{p^{\prime}}_{\lambda})(\partial{\mathbb{D}})$ for $1\leq p<\infty$ is shown. Indeed, the density of $\lambda$-analytic polynomials in $H_{\lambda}^{p}({\mathbb{D}})$, by Theorem 5.2, implies that $g\in L^{p^{\prime}}_{\lambda}(\partial{\mathbb{D}})$ annihilates $H^{p}_{\lambda}(\partial{\mathbb{D}})$ if and only if
$\displaystyle\int_{-\pi}^{\pi}\phi_{n}(e^{i\theta})g(e^{i\theta})dm_{\lambda}(\theta)=0,\qquad
n\in{\mathbb{N}}_{0}.$ (51)
Note that (51) is equivalent to saying that $e^{-i\theta}g(e^{i\theta})\in
L^{p^{\prime}}_{\lambda}(\partial{\mathbb{D}})$ satisfies (15), namely,
$e^{-i\theta}g(e^{i\theta})\in
H^{p^{\prime}}_{\lambda}(\partial{\mathbb{D}})$, and hence
$g\in(e^{i\theta}H^{p^{\prime}}_{\lambda})(\partial{\mathbb{D}})$.
To show the second part of the theorem, that is, that each $L\in
H^{p}_{\lambda}(\partial{\mathbb{D}})^{*}$ for $1<p<\infty$ has the
representation (50), we use the $\lambda$-Szegö transform $S_{\lambda}$
defined in the last subsection. Since, by the Hahn-Banach theorem, $L$ can be
extended to a bounded linear functional on
$L^{p}_{\lambda}(\partial{\mathbb{D}})$ with the same norm, the Riesz
representation theorem yields a function $h\in
L^{p^{\prime}}_{\lambda}(\partial{\mathbb{D}})$, with
$\|h\|_{L_{\lambda}^{p^{\prime}}(\partial{\mathbb{D}})}=\|L\|$, satisfying
$\displaystyle
L(f)=\int_{-\pi}^{\pi}f(e^{i\theta})h(e^{i\theta})dm_{\lambda}(\theta)$ (52)
for all $f\in L^{p}_{\lambda}(\partial{\mathbb{D}})$. If we put
$g(z)=S_{\lambda}\bar{h}(z)$, then by Proposition 5.7, $g(z)\in
H^{p^{\prime}}_{\lambda}({\mathbb{D}})$ and hence $g(e^{i\theta})\in
H^{p^{\prime}}_{\lambda}(\partial{\mathbb{D}})$. Furthermore
$\|g\|_{H_{\lambda}^{p^{\prime}}}\leq
C_{p}^{-1}\|h\|_{L_{\lambda}^{p^{\prime}}(\partial{\mathbb{D}})}=C_{p}^{-1}\|L\|$,
and by Corollary 5.8 and (52), the representation (50) follows. If
$g(z)=\sum_{n=0}^{\infty}a_{n}\phi_{n}(z)$, then by (16),
$L(\phi_{n})=\overline{a_{n}}$ for $n\in{\mathbb{N}}_{0}$. This shows that
$g\in H^{p^{\prime}}_{\lambda}({\mathbb{D}})$ is uniquely determined by $L$.
On the other hand, every $g\in H^{p^{\prime}}_{\lambda}({\mathbb{D}})$ defines
a functional $L\in H^{p}_{\lambda}(\partial{\mathbb{D}})^{*}$ by (50),
certainly satisfying $\|L\|\leq\|g\|_{H_{\lambda}^{p^{\prime}}}$. The
assertion for $p=1$ is obvious. The proof of the theorem is completed.
Finally we consider the dual of the $\lambda$-Bergman spaces
$A^{p}_{\lambda}({\mathbb{D}})$ for $1<p<\infty$. The duality theorem of the
usual Bergman spaces $A^{p}$ with $1<p<\infty$ was proved in [18] (see also
[6, p. 35]).
###### Theorem 5.11
For $1<p<\infty$, the dual space $A^{p}_{\lambda}({\mathbb{D}})^{*}$ is
isomorphic to $A^{p^{\prime}}_{\lambda}({\mathbb{D}})$ in the sense that, each
$L\in A^{p}_{\lambda}({\mathbb{D}})^{*}$ can be represented by
$\displaystyle
L(f)=\int_{{\mathbb{D}}}f(z)\overline{g(z)}d\sigma_{\lambda}(z),\qquad f\in
A^{p}_{\lambda}({\mathbb{D}}),$
with a unique function $g\in A^{p^{\prime}}_{\lambda}({\mathbb{D}})$
satisfying
$C_{p}\|g\|_{A_{\lambda}^{p^{\prime}}}\leq\|L\|\leq\|g\|_{A_{\lambda}^{p^{\prime}}}$,
where the constant $C_{p}$ is independent of $g$.
The proof of the theorem is nearly an adaptation of that of its analog in the
classical case given in [6, p. 37]. On the one hand, we extend $L\in
A^{p}_{\lambda}({\mathbb{D}})^{*}$, by the Hahn-Banach theorem, to a bounded
linear functional on $L^{p}_{\lambda}({\mathbb{D}})$ with the same norm, and
the Riesz representation theorem asserts that there exists an $h\in
L^{p^{\prime}}_{\lambda}({\mathbb{D}})$ satisfying
$\|h\|_{L_{\lambda}^{p^{\prime}}({\mathbb{D}})}=\|L\|$, so that
$L(f)=\int_{{\mathbb{D}}}f(z)h(z)d\sigma_{\lambda}(z)$ for all $f\in
L^{p}_{\lambda}({\mathbb{D}})$. If we put $g(z)=(P_{\lambda}\bar{h})(z)$, then
by Theorem 3.6, $g\in A^{p^{\prime}}_{\lambda}({\mathbb{D}})$ and
$\|g\|_{A_{\lambda}^{p^{\prime}}}\leq
C_{p}^{-1}\|h\|_{L_{\lambda}^{p^{\prime}}({\mathbb{D}})}=C_{p}^{-1}\|L\|$. For
$f\in A^{p}_{\lambda}({\mathbb{D}})$, by Proposition 3.1 one has
$L(f)=\int_{{\mathbb{D}}}\int_{{\mathbb{D}}}f(w)K_{\lambda}(z,w)d\sigma_{\lambda}(w)\,h(z)d\sigma_{\lambda}(z)$,
and then, by Corollary 3.7, Fubini’s theorem gives
$\displaystyle
L(f)=\int_{{\mathbb{D}}}f(w)\overline{g(w)}\,d\sigma_{\lambda}(w).$ (53)
If $g(z)=\sum_{n=0}^{\infty}a_{n}\phi_{n}(z)$, then by (20),
$L(\phi_{n})=\frac{\lambda+1}{n+\lambda+1}\overline{a_{n}}$ for
$n\in{\mathbb{N}}_{0}$. This shows that $g\in
A^{p^{\prime}}_{\lambda}({\mathbb{D}})$ is uniquely determined by $L$. On the
other hand, every $g\in A^{p^{\prime}}_{\lambda}({\mathbb{D}})$ defines a
functional $L\in A^{p}_{\lambda}({\mathbb{D}})^{*}$ by (53),
satisfying $\|L\|\leq\|g\|_{A_{\lambda}^{p^{\prime}}}$.
## 6 Characterization and interpolation of the $\lambda$-Bergman spaces
### 6.1 Boundedness of some operators
We consider the following two operators
$\displaystyle(P_{\lambda,1}f)(z)$
$\displaystyle=\int_{{\mathbb{D}}}f(w)K_{\lambda,1}(z,w)(1-|w|^{2})d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}},$ (54) $\displaystyle(P_{\lambda,2}f)(z)$
$\displaystyle=\int_{{\mathbb{D}}}f(w)K_{\lambda,2}(z,w)(1-|w|^{2})^{2}d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}},$ (55)
where
$\displaystyle K_{\lambda,1}(z,w)$
$\displaystyle=\sum_{n=0}^{\infty}\frac{(n+\lambda+2)(n+\lambda+1)}{\lambda+1}\phi_{n}(z)\overline{\phi_{n}(w)},$
(56) $\displaystyle K_{\lambda,2}(z,w)$
$\displaystyle=\frac{1}{2\lambda+2}\sum_{n=0}^{\infty}\frac{\Gamma(n+\lambda+4)}{\Gamma(n+\lambda+1)}\phi_{n}(z)\overline{\phi_{n}(w)}.$
(57)
Similarly to the $\lambda$-Bergman kernel $K_{\lambda}(z,w)$, it follows from
(4) and (18) that the series in (56) and (57) converge absolutely for
$zw\in{\mathbb{D}}$ and uniformly for $zw$ in a compact subset of
${\mathbb{D}}$; and for fixed $w\in\overline{{\mathbb{D}}}$ the function
$z\mapsto K_{\lambda,j}(z,w)$ ($j=1,2$) is $\lambda$-analytic in
${\mathbb{D}}$.
Moreover, we have the following.
###### Proposition 6.1
For $f\in L^{1}({\mathbb{D}};(1-|z|^{2})^{j}d\sigma_{\lambda})$ with $j=1$ or
$2$, the integrals on the right hand side of (54) and (55) are well defined
for $z\in{\mathbb{D}}$ and are $\lambda$-analytic in $z\in{\mathbb{D}}$; and
if $f\in L^{1}({\mathbb{D}};(1-|z|^{2})^{j}d\sigma_{\lambda})$ is
$\lambda$-analytic in ${\mathbb{D}}$, then $P_{\lambda,j}f=f$ with $j=1$ or
$2$.
Indeed as in the proof of Proposition 3.1, by (4) termwise integration of
$f(w)K_{\lambda,j}(z,w)$ over ${\mathbb{D}}$ with respect to the measure
$(1-|w|^{2})^{j}d\sigma_{\lambda}(w)$ is legitimate, and by Proposition
2.2(iii) the resulting series represents a $\lambda$-analytic function in
${\mathbb{D}}$. If $f\in L^{1}({\mathbb{D}};(1-|z|^{2})^{j}d\sigma_{\lambda})$
is $\lambda$-analytic in ${\mathbb{D}}$, by Proposition 2.2(iii) it has the
representation $f(z)=\sum_{n=0}^{\infty}c_{n}\phi_{n}(z)$ for
$z\in{\mathbb{D}}$, and since, by Proposition 2.1,
$\displaystyle\int_{{\mathbb{D}}}\left|\phi_{n}(z)\right|^{2}\,(1-|z|^{2})^{j}d\sigma_{\lambda}(z)=\frac{(\lambda+1)\Gamma(j+1)\Gamma(n+\lambda+1)}{\Gamma(n+\lambda+j+2)},$
it follows that, for $r\in(0,1)$,
$\int_{{\mathbb{D}}}\overline{\phi_{n}(z)}f(rz)\,(1-|z|^{2})^{j}d\sigma_{\lambda}(z)=\frac{(\lambda+1)\Gamma(j+1)\Gamma(n+\lambda+1)}{\Gamma(n+\lambda+j+2)}r^{n}c_{n}.$
Making the change of variables $z\mapsto z/r$, one has
$\int_{{\mathbb{D}}_{r}}\overline{\phi_{n}(z)}f(z)\,(r^{2}-|z|^{2})^{j}d\sigma_{\lambda}(z)=\frac{(\lambda+1)\Gamma(j+1)\Gamma(n+\lambda+1)}{\Gamma(n+\lambda+j+2)}r^{2n+2\lambda+2j+2}c_{n},$
and then, letting $r\rightarrow 1-$ yields
$\displaystyle\int_{{\mathbb{D}}}\overline{\phi_{n}(z)}f(z)\,(1-|z|^{2})^{j}d\sigma_{\lambda}(z)=\frac{(\lambda+1)\Gamma(j+1)\Gamma(n+\lambda+1)}{\Gamma(n+\lambda+j+2)}c_{n}.$
(58)
Finally termwise integration of $f(w)K_{\lambda,j}(z,w)$ over ${\mathbb{D}}$
with respect to the measure $(1-|w|^{2})^{j}d\sigma_{\lambda}(w)$ proves
$P_{\lambda,j}f=f$ with $j=1$ or $2$.
Similarly to Lemma 3.5, we have
###### Lemma 6.2
For $j=1,2$ and for $|zw|<1$,
$\displaystyle|K_{\lambda,j}(z,w)|\lesssim\frac{(|1-z\overline{w}|+|1-zw|)^{-2\lambda}}{|1-z\overline{w}|}\left(\frac{1}{|1-z\overline{w}|^{j+1}}+\frac{1}{|1-zw|^{j+1}}\right).$
(59)
In the case $j=1$, for $|zw|<1$ and $z=re^{i\theta}$ one has
$\displaystyle(\lambda+1)K_{\lambda,1}(z,w)=\left[\left(r\frac{d}{dr}\right)^{2}+(2\lambda+3)r\frac{d}{dr}+(\lambda+2)(\lambda+1)\right]C(z,w).$
(60)
Since, as indicated in the proof of Lemma 3.5,
$\left|r\frac{d}{dr}\left[C(z,w)\right]\right|$ has the same upper bound as in
(23), on account of Lemma 3.4 it suffices to show
$\displaystyle\left|\left(r\frac{d}{dr}\right)^{2}\left[C(z,w)\right]\right|\lesssim\frac{(|1-z\overline{w}|+|1-zw|)^{-2\lambda}}{|1-z\overline{w}|}\left(\frac{1}{|1-z\overline{w}|^{2}}+\frac{1}{|1-zw|^{2}}\right).$
This can be completed along the lines of the proof of Lemma 3.5, although the
computations are more complicated. We leave the details to the reader.
Correspondingly, in the case $j=2$ we have a formula similar to (60), and the
proof of (59) reduces to showing
$\displaystyle\left|\left(r\frac{d}{dr}\right)^{3}\left[C(z,w)\right]\right|\lesssim\frac{(|1-z\overline{w}|+|1-zw|)^{-2\lambda}}{|1-z\overline{w}|}\left(\frac{1}{|1-z\overline{w}|^{3}}+\frac{1}{|1-zw|^{3}}\right).$
We again leave the details to the reader.
We will also need the operators $\widetilde{P}_{\lambda,1}$ and
$\widetilde{P}_{\lambda,2}$ defined by
$\displaystyle(\widetilde{P}_{\lambda,1}f)(z)$
$\displaystyle=\int_{{\mathbb{D}}}f(w)\widetilde{K}_{\lambda,1}(z,w)(1-|w|^{2})d\sigma_{\lambda}(w),$
(61) $\displaystyle(\widetilde{P}_{\lambda,2}f)(z)$
$\displaystyle=\int_{{\mathbb{D}}}f(w)\widetilde{K}_{\lambda,2}(z,w)d\sigma_{\lambda}(w),$
(62)
where
$\displaystyle\widetilde{K}_{\lambda,1}(z,w)$
$\displaystyle=\sum_{n=0}^{\infty}\frac{(n+\lambda+3)(n+\lambda+2)}{2\lambda+2}\phi_{n}(z)\overline{\phi_{n}(w)},$
(63) $\displaystyle\widetilde{K}_{\lambda,2}(z,w)$
$\displaystyle=K_{\lambda,2}(z,w)(1-|z|^{2})(1-|w|^{2}).$ (64)
Similarly to Lemma 6.2 with $j=1$, we have
###### Lemma 6.3
For $|zw|<1$,
$\displaystyle|\widetilde{K}_{\lambda,1}(z,w)|\lesssim\frac{(|1-z\overline{w}|+|1-zw|)^{-2\lambda}}{|1-z\overline{w}|}\left(\frac{1}{|1-z\overline{w}|^{2}}+\frac{1}{|1-zw|^{2}}\right).$
###### Theorem 6.4
For $1\leq p<\infty$, the operators $P_{\lambda,1}$, $P_{\lambda,2}$ and
$\widetilde{P}_{\lambda,1}$ are bounded from $L_{\lambda}^{p}({\mathbb{D}})$
into $A^{p}_{\lambda}({\mathbb{D}})$.
* Proof. We first show that, for $0<\alpha<j+1$ with $j=1$ or $2$,
$\displaystyle
I_{j,\alpha}(z):=\int_{\mathbb{D}}|K_{\lambda,j}(z,w)|(1-|w|^{2})^{j-\alpha}d\sigma_{\lambda}(w)\lesssim(1-|z|^{2})^{-\alpha},\qquad
z\in{\mathbb{D}}.$ (65)
Similarly to (28) and (29), it follows from (59) that, for $z=re^{i\theta}$,
$w=se^{i\varphi}\in{\mathbb{D}}$,
$\displaystyle|K_{\lambda,j}(z,w)|\lesssim\Phi_{j,r,\theta}(s,\varphi)+\Phi_{j,r,\theta}(s,-\varphi),$
where
$\displaystyle\Phi_{j,r,\theta}(s,\varphi)=\frac{\left(1-rs+|\sin\theta|+|\sin\varphi|\right)^{-2\lambda}}{\left(1-rs+\left|\sin(\theta-\varphi)/2\right|\right)^{j+2}}.$
We have
$\displaystyle\int_{\mathbb{D}}|K_{\lambda,j}(z,w)|(1-|w|^{2})^{j-\alpha}d\sigma_{\lambda}(w)\lesssim\int_{0}^{1}\int_{-\pi}^{\pi}\Phi_{j,r,\theta}(s,\varphi)(1-s)^{j-\alpha}|\sin\varphi|^{2\lambda}d\varphi
ds,$
since the contribution of $\Phi_{j,r,\theta}(s,-\varphi)$ to the integral is
the same as that of $\Phi_{j,r,\theta}(s,\varphi)$. Thus
$\displaystyle\int_{\mathbb{D}}|K_{\lambda,j}(z,w)|(1-|w|^{2})^{j-\alpha}d\sigma_{\lambda}(w)\lesssim\int_{0}^{1}\int_{-\pi}^{\pi}\frac{(1-s)^{j-\alpha}}{\left(1-rs+\left|\sin(\theta-\varphi)/2\right|\right)^{j+2}}\,d\varphi
ds,$
from which (65) follows after elementary calculations.
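For the reader's convenience, we sketch these elementary calculations (using
only the standard bound $|\sin u|\asymp|u|$ for $|u|\leq\pi/2$). For the inner
integral,
$\displaystyle\int_{-\pi}^{\pi}\frac{d\varphi}{\left(1-rs+\left|\sin((\theta-\varphi)/2)\right|\right)^{j+2}}\asymp\int_{0}^{\pi/2}\frac{du}{(1-rs+u)^{j+2}}\lesssim\frac{1}{(1-rs)^{j+1}};$
and since $1-rs\asymp 1-s$ for $0\leq s\leq r$ and $1-rs\asymp 1-r$ for $r\leq
s\leq 1$,
$\displaystyle\int_{0}^{1}\frac{(1-s)^{j-\alpha}}{(1-rs)^{j+1}}\,ds\lesssim\int_{0}^{r}\frac{ds}{(1-s)^{\alpha+1}}+\frac{1}{(1-r)^{j+1}}\int_{r}^{1}(1-s)^{j-\alpha}\,ds\lesssim\frac{1}{(1-r)^{\alpha}},$
where both bounds use $0<\alpha<j+1$; since $1-r\asymp 1-|z|^{2}$, this gives
(65).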
For $f\in L_{\lambda}^{1}({\mathbb{D}})$, we have
$\displaystyle\int_{{\mathbb{D}}}\left|(P_{\lambda,j}f)(z)\right|d\sigma_{\lambda}(z)\lesssim\int_{{\mathbb{D}}}|f(w)|\int_{{\mathbb{D}}}|K_{\lambda,j}(z,w)|d\sigma_{\lambda}(z)(1-|w|^{2})^{j}d\sigma_{\lambda}(w).$
Applying (65) with $\alpha=j$ gives
$\|P_{\lambda,j}f\|_{L_{\lambda}^{1}({\mathbb{D}})}\lesssim\|f\|_{L_{\lambda}^{1}({\mathbb{D}})}$.
For $1<p<\infty$, take $h(z)=(1-|z|^{2})^{-1/(pp^{\prime})}$, where
$p^{-1}+p^{\prime-1}=1$. Applying (65), first with $\alpha=p^{-1}$, one
has
$\displaystyle\int_{\mathbb{D}}|K_{\lambda,j}(z,w)|(1-|w|^{2})^{j}h(w)^{p^{\prime}}d\sigma_{\lambda}(w)\lesssim
h(z)^{p^{\prime}},\qquad z\in{\mathbb{D}},$
and then, with $\alpha=j+p^{\prime-1}$,
$\displaystyle\int_{\mathbb{D}}|K_{\lambda,j}(z,w)|(1-|w|^{2})^{j}h(z)^{p}d\sigma_{\lambda}(z)\lesssim
h(w)^{p},\qquad w\in{\mathbb{D}}.$
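(Here Schur's test is used in the following standard form, with the kernel
$\widetilde{K}(z,w)=|K_{\lambda,j}(z,w)|(1-|w|^{2})^{j}$: if
$\widetilde{K}\geq 0$ and a positive function $h$ satisfy
$\int_{{\mathbb{D}}}\widetilde{K}(z,w)h(w)^{p^{\prime}}d\sigma_{\lambda}(w)\leq
C_{1}h(z)^{p^{\prime}}$ and
$\int_{{\mathbb{D}}}\widetilde{K}(z,w)h(z)^{p}d\sigma_{\lambda}(z)\leq
C_{2}h(w)^{p}$, then the integral operator with kernel $\widetilde{K}$ against
$d\sigma_{\lambda}$ is bounded on $L^{p}_{\lambda}({\mathbb{D}})$ with norm at
most $C_{1}^{1/p^{\prime}}C_{2}^{1/p}$.)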
By Schur’s theorem,
$\|P_{\lambda,j}f\|_{L_{\lambda}^{p}({\mathbb{D}})}\lesssim\|f\|_{L_{\lambda}^{p}({\mathbb{D}})}$
for $j=1$ or $2$. Furthermore, Proposition 6.1 implies that the images of
$f\in L_{\lambda}^{p}({\mathbb{D}})$ ($1\leq p<\infty$) under the mappings
$P_{\lambda,1}$ and $P_{\lambda,2}$ are $\lambda$-analytic in ${\mathbb{D}}$.
The assertion on the operator $\widetilde{P}_{\lambda,1}$ can be proved in the
same way.
###### Theorem 6.5
For $1\leq p\leq\infty$, the operator $\widetilde{P}_{\lambda,2}$ is bounded
from $L_{\lambda}^{p}({\mathbb{D}})$ into itself.
* Proof. We note that $\widetilde{K}_{\lambda,2}(z,w)=\overline{\widetilde{K}_{\lambda,2}(w,z)}$, and by means of (65) with $j=2$ and $\alpha=1$,
$\displaystyle\int_{{\mathbb{D}}}|\widetilde{K}_{\lambda,2}(z,w)|d\sigma_{\lambda}(w)\lesssim
1.$
This shows that, for $1\leq p\leq\infty$, the operator
$\widetilde{P}_{\lambda,2}$ is bounded from $L^{p}_{\lambda}({\mathbb{D}})$
into itself.
### 6.2 A characterization of the $\lambda$-Bergman spaces by the operator
$D_{z}$
Characterizations of the usual Bergman spaces by derivatives can be found in
[19, pp. 56-58]. Here we make a modification, which is more apt for the
$\lambda$-Bergman spaces.
###### Lemma 6.6
For $1\leq p\leq\infty$, the mapping
$f(z)\mapsto(1-|z|^{2})D_{z}\left(zf(z)\right)$ is bounded from
$A^{p}_{\lambda}({\mathbb{D}})$ into $L^{p}_{\lambda}({\mathbb{D}})$.
* Proof. For $f\in A^{p}_{\lambda}({\mathbb{D}})$, assume that $f(z)=\sum_{n=0}^{\infty}c_{n}\phi_{n}(z)$ ($z\in{\mathbb{D}}$) by Proposition 2.2(iii). By (18), taking termwise differentiation $\partial_{z}$ in ${\mathbb{D}}$ to $zf(z)$ is legitimate, and then, by (2),
$\displaystyle
D_{z}\left(zf(z)\right)=\sum_{n=0}^{\infty}c_{n}(n+\lambda+1)\phi_{n}(z),$
(66)
and from (58),
$\displaystyle(n+\lambda+1)c_{n}=\frac{\Gamma(n+\lambda+4)}{(\lambda+1)\Gamma(n+\lambda+1)}\int_{{\mathbb{D}}}\overline{\phi_{n}(w)}f(w)\,(1-|w|^{2})d\sigma_{\lambda}(w)-2c_{n}.$
Consequently, in view of (57) and (66) we obtain
$\displaystyle
D_{z}\left(zf(z)\right)=2\int_{{\mathbb{D}}}f(w)K_{\lambda,2}(z,w)(1-|w|^{2})d\sigma_{\lambda}(w)-2f(z),$
so that, from (62) and (64),
$\displaystyle(1-|z|^{2})D_{z}\left(zf(z)\right)=2(\widetilde{P}_{\lambda,2}f)(z)-2(1-|z|^{2})f(z).$
Therefore, by Theorem 6.5,
$\|(1-|z|^{2})D_{z}\left(zf(z)\right)\|_{L_{\lambda}^{p}({\mathbb{D}})}\lesssim\|\widetilde{P}_{\lambda,2}f\|_{L_{\lambda}^{p}({\mathbb{D}})}+\|f\|_{A_{\lambda}^{p}}\lesssim\|f\|_{A_{\lambda}^{p}}$
for $1\leq p\leq\infty$.
###### Lemma 6.7
If $f$ is $\lambda$-analytic in ${\mathbb{D}}$ and satisfies
$(1-|z|^{2})^{2}D_{z}\left(zf(z)\right)\in L^{1}_{\lambda}({\mathbb{D}})$,
then
$\displaystyle
f(z)=\int_{{\mathbb{D}}}\widetilde{K}_{\lambda,1}(z,w)D_{w}\left(wf(w)\right)(1-|w|^{2})^{2}d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}},$ (67)
where $\widetilde{K}_{\lambda,1}(z,w)$ is given by (63). Furthermore, if in
addition $(1-|z|^{2})D_{z}\left(zf(z)\right)\in L^{p}_{\lambda}({\mathbb{D}})$
for $1\leq p<\infty$, then $f\in A^{p}_{\lambda}({\mathbb{D}})$ and
$\|f\|_{A_{\lambda}^{p}}\lesssim\|(1-|z|^{2})D_{z}\left(zf(z)\right)\|_{L_{\lambda}^{p}({\mathbb{D}})}$.
* Proof. Set
$g(z)=\int_{{\mathbb{D}}}\widetilde{K}_{\lambda,1}(z,w)D_{w}\left(wf(w)\right)(1-|w|^{2})^{2}d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}}.$
For $(1-|z|^{2})^{2}D_{z}\left(zf(z)\right)\in L^{1}_{\lambda}({\mathbb{D}})$,
in view of (4) and (18) the function $g$ is well defined and
$\lambda$-analytic in ${\mathbb{D}}$, and moreover,
$\displaystyle
D_{z}\left(zg(z)\right)=\int_{{\mathbb{D}}}D_{z}\left(z\widetilde{K}_{\lambda,1}(z,w)\right)D_{w}\left(wf(w)\right)(1-|w|^{2})^{2}d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}}.$
In the meanwhile, from (2), (57) and (63),
$\displaystyle
D_{z}\left(z\widetilde{K}_{\lambda,1}(z,w)\right)=K_{\lambda,2}(z,w),$
and hence, in view of (55),
$D_{z}\left(zg(z)\right)=P_{\lambda,2}\left[D_{w}\left(wf(w)\right)\right](z),\qquad
z\in{\mathbb{D}}.$
Since $D_{z}\left(zf(z)\right)\in
L^{1}({\mathbb{D}};(1-|z|^{2})^{2}d\sigma_{\lambda})$, and from (66),
$D_{z}\left(zf(z)\right)$ is $\lambda$-analytic in ${\mathbb{D}}$, it follows
that
$P_{\lambda,2}\left[D_{w}\left(wf(w)\right)\right](z)=D_{z}\left(zf(z)\right)$
by Proposition 6.1 with $j=2$. Therefore
$D_{z}\left(zg(z)\right)=D_{z}\left(zf(z)\right),\qquad z\in{\mathbb{D}}.$
In view of the $\lambda$-analyticity of $f$ and $g$, comparing the
coefficients of $D_{z}\left(zg(z)\right)$ and $D_{z}\left(zf(z)\right)$ as in
(66) yields that $g=f$ in ${\mathbb{D}}$. This proves (67).
On account of (61), the equality (67) says
$f(z)=(\widetilde{P}_{\lambda,1}F)(z)$ with
$F(z)=(1-|z|^{2})D_{z}\left(zf(z)\right)$. Now if $F\in
L^{p}_{\lambda}({\mathbb{D}})$ for $1\leq p<\infty$, then Theorem 6.4 tells us
that $f\in A^{p}_{\lambda}({\mathbb{D}})$ and
$\|f\|_{A_{\lambda}^{p}}\lesssim\|(1-|z|^{2})D_{z}\left(zf(z)\right)\|_{L_{\lambda}^{p}({\mathbb{D}})}$.
Collecting Lemmas 6.6 and 6.7 we have the following theorem.
###### Theorem 6.8
Suppose that $1\leq p<\infty$ and $f$ is $\lambda$-analytic in ${\mathbb{D}}$.
Then $f\in A^{p}_{\lambda}({\mathbb{D}})$ if and only if
$(1-|z|^{2})D_{z}\left(zf(z)\right)\in L^{p}_{\lambda}({\mathbb{D}})$.
Moreover
$\|f\|_{A_{\lambda}^{p}}\asymp\|(1-|z|^{2})D_{z}\left(zf(z)\right)\|_{L_{\lambda}^{p}({\mathbb{D}})}.$
Similarly to Lemma 6.7, one can prove the following proposition.
###### Proposition 6.9
If $f\in A^{1}_{\lambda}({\mathbb{D}})$, then
$f(z)=\int_{{\mathbb{D}}}\widetilde{K}_{\lambda}(z,w)D_{w}\left(wf(w)\right)(1-|w|^{2})d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}},$
where
$\displaystyle\widetilde{K}_{\lambda}(z,w)=\sum_{n=0}^{\infty}\frac{n+\lambda+2}{\lambda+1}\phi_{n}(z)\overline{\phi_{n}(w)}.$
(68)
* Proof. Set
$g(z)=\int_{{\mathbb{D}}}\widetilde{K}_{\lambda}(z,w)D_{w}\left(wf(w)\right)(1-|w|^{2})d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}}.$
It suffices to show that $g=f$ in ${\mathbb{D}}$.
For $f\in A^{1}_{\lambda}({\mathbb{D}})$,
$(1-|z|^{2})D_{z}\left(zf(z)\right)\in L^{1}_{\lambda}({\mathbb{D}})$ by Lemma
6.6, and the function $g$ is well defined in ${\mathbb{D}}$. Moreover in view
of (4) and (18), $g$ is $\lambda$-analytic in ${\mathbb{D}}$ and
$\displaystyle
D_{z}\left(zg(z)\right)=\int_{{\mathbb{D}}}D_{z}\left(z\widetilde{K}_{\lambda}(z,w)\right)D_{w}\left(wf(w)\right)(1-|w|^{2})d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}}.$
In the meanwhile, from (2), (56) and (68),
$\displaystyle
D_{z}\left(z\widetilde{K}_{\lambda}(z,w)\right)=K_{\lambda,1}(z,w),$
and hence
$D_{z}\left(zg(z)\right)=P_{\lambda,1}\left[D_{w}\left(wf(w)\right)\right](z),\qquad
z\in{\mathbb{D}}.$
Since, again by Lemma 6.6, $D_{z}\left(zf(z)\right)\in
L^{1}({\mathbb{D}};(1-|z|^{2})d\sigma_{\lambda})$, and from (66),
$D_{z}\left(zf(z)\right)$ is $\lambda$-analytic in ${\mathbb{D}}$, it follows
that
$P_{\lambda,1}\left[D_{w}\left(wf(w)\right)\right](z)=D_{z}\left(zf(z)\right)$
by Proposition 6.1. Therefore
$D_{z}\left(zg(z)\right)=D_{z}\left(zf(z)\right),\qquad z\in{\mathbb{D}}.$
In view of the $\lambda$-analyticity of $f$ and $g$, comparing the
coefficients of $D_{z}\left(zg(z)\right)$ and $D_{z}\left(zf(z)\right)$ as in
(66) proves that $g=f$ in ${\mathbb{D}}$.
### 6.3 Interpolation of the $\lambda$-Bergman spaces
###### Theorem 6.10
Suppose that $1\leq p_{0}<p_{1}<+\infty$ and
$\frac{1}{p}=\frac{1-\theta}{p_{0}}+\frac{\theta}{p_{1}},\qquad\theta\in(0,1).$
Then
$[A^{p_{0}}_{\lambda}({\mathbb{D}}),A^{p_{1}}_{\lambda}({\mathbb{D}})]_{\theta}=A^{p}_{\lambda}(\mathbb{D})$
with equivalent norms.
* Proof. For $f\in A^{p}_{\lambda}(\mathbb{D})$, as usual we consider the family of functions
$f_{\zeta}(z)=\frac{f(z)}{|f(z)|}|f(z)|^{p\left(\frac{1-\zeta}{p_{0}}+\frac{\zeta}{p_{1}}\right)},\qquad
z\in{\mathbb{D}},$
where $\zeta$ ranges over the closed strip with real part between
$0$ and $1$. Further, let $F_{\zeta}(z)=(P_{\lambda,1}f_{\zeta})(z)$, that is,
$\displaystyle
F_{\zeta}(z)=\int_{{\mathbb{D}}}f_{\zeta}(w)K_{\lambda,1}(z,w)(1-|w|^{2})d\sigma_{\lambda}(w),\qquad
z\in{\mathbb{D}}.$
By Proposition 6.1, for fixed $\zeta$, $F_{\zeta}(z)$ is $\lambda$-analytic in
${\mathbb{D}}$, and $F_{\theta}(z)=f_{\theta}(z)=f(z)$; moreover by Theorem
3.6,
$\displaystyle\|F_{\zeta}\|^{p_{0}}_{A_{\lambda}^{p_{0}}}\lesssim\|f_{\zeta}\|^{p_{0}}_{L_{\lambda}^{p_{0}}({\mathbb{D}})}\asymp\|f\|^{p}_{A_{\lambda}^{p}}$
for all $\zeta$ with ${\rm Re}\,\zeta=0$, and
$\displaystyle\|F_{\zeta}\|^{p_{1}}_{A_{\lambda}^{p_{1}}}\lesssim\|f_{\zeta}\|^{p_{1}}_{L_{\lambda}^{p_{1}}({\mathbb{D}})}\asymp\|f\|^{p}_{A_{\lambda}^{p}}$
for all $\zeta$ with ${\rm Re}\,\zeta=1$. It is easy to see that the mapping
$\zeta\mapsto F_{\zeta}\in A_{\lambda}^{p_{0}}+A_{\lambda}^{p_{1}}$ is
continuous when $0\leq{\rm Re}\,\zeta\leq 1$ and analytic (in the usual sense)
when $0<{\rm Re}\,\zeta<1$. Thus it has been proved that
$f=F_{\theta}\in[A^{p_{0}}_{\lambda}({\mathbb{D}}),A^{p_{1}}_{\lambda}({\mathbb{D}})]_{\theta}$
and
$\|f\|^{p}_{[A^{p_{0}}_{\lambda},A^{p_{1}}_{\lambda}]_{\theta}}\lesssim\|f\|^{p}_{A_{\lambda}^{p}}$.
Conversely, if
$f\in[A^{p_{0}}_{\lambda}({\mathbb{D}}),A^{p_{1}}_{\lambda}({\mathbb{D}})]_{\theta}$,
then $f$ is $\lambda$-analytic in ${\mathbb{D}}$, and
$f\in[L^{p_{0}}_{\lambda}({\mathbb{D}}),L^{p_{1}}_{\lambda}({\mathbb{D}})]_{\theta}=L^{p}_{\lambda}({\mathbb{D}})$.
This shows that $f\in A^{p}_{\lambda}(\mathbb{D})$ and
$\|f\|^{p}_{A_{\lambda}^{p}}\lesssim\|f\|^{p}_{[A^{p_{0}}_{\lambda},A^{p_{1}}_{\lambda}]_{\theta}}$.
The proof of the theorem is completed.
The above theorem for $\lambda=0$, i.e., for the usual Bergman spaces, was
proved in [13].
## Acknowledgments
The work is supported by the National Natural Science Foundation of China
(Grant No. 12071295). The authors would like to thank the anonymous referees
for their helpful comments and suggestions which have improved the original
manuscript.
## References
* [1] C. F. Dunkl, Reflection groups and orthogonal polynomials on the sphere, Math. Z. 197(1988), 33-60.
* [2] C. F. Dunkl, Differential-difference operators associated to reflection groups, Trans. Amer. Math. Soc. 311(1989), 167-183.
* [3] C. F. Dunkl, Poisson and Cauchy kernels for orthogonal polynomials with dihedral symmetry, J. Math. Anal. Appl. 143(1989), 459-470.
* [4] C. F. Dunkl, Integral kernels with reflection group invariance, Canad. J. Math. 43(1991), 1213-1227.
* [5] P. L. Duren, Theory of $H^{p}$ Spaces, Academic Press, Inc., New York, 1970.
* [6] P. L. Duren and A. Schuster, Bergman Spaces, Mathematical Surveys and Monographs, Vol. 100, American Mathematical Society, Providence, RI., 2004.
* [7] A. Erdélyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher Transcendental Functions, Vols. I and II, McGraw-Hill, New York, 1953.
* [8] J. B. Garnett, Bounded analytic functions, Academic Press, 1981.
* [9] H. Hedenmalm, B. Korenblum and K. Zhu, Theory of Bergman Spaces, Graduate Texts in Mathematics, Vol. 199, Springer-Verlag, New York, 2000.
* [10] P. Koosis, Introduction to $H^{p}$ Spaces, Cambridge Univ. Press, 2nd., 1998.
* [11] Zh.-K. Li and J.-Q. Liao, Hardy spaces for Dunkl-Gegenbauer expansions, J. Funct. Anal. 265(2013), 687-742.
* [12] Zh.-K. Li and J.-Q. Liao, Harmonic analysis associated with the one-dimensional Dunkl transform, Constr. Approx. 37(2013), 233-281.
* [13] T. H. MacGregor and K. Zhu, Coefficient multipliers between Bergman and Hardy spaces, Mathematika 42(1995), 413-426.
* [14] B. Muckenhoupt and E. M. Stein, Classical expansions and their relation to conjugate harmonic functions, Trans. Amer. Math. Soc. 118(1965), 17-92.
* [15] E. M. Stein and G. Weiss, On the theory of harmonic functions of several variables, I. The theory of $H^{p}$-spaces, Acta Math. 103(1960), 25-62.
* [16] G. Szegö, Orthogonal Polynomials, 4th edition. Amer. Math. Soc. Colloq. Publ., Vol. 23. Providence, RI, 1975.
* [17] D. Vukotić, A sharp estimate for $A_{\alpha}^{p}$ functions in ${\mathbb{C}}^{n}$, Proc. Amer. Math. Soc. 117(1993), 753-756.
* [18] V. P. Zaharyuta and V. I. Yudovich, The general form of a linear functional on $H^{\prime}_{p}$, Uspekhi Mat. Nauk 19(2)(116)(1964), 139-142. (in Russian)
* [19] K. Zhu, Operator Theory in Function Spaces, Second Edition, Math. Surveys and Monographs, Vol. 138, American Mathematical Society, Providence, Rhode Island, 2007.
* [20] K. Zhu, Spaces of Holomorphic Functions in the Unit Ball, Springer, New York, 2005.
* [21] A. Zygmund, Trigonometric Series, vols. I and II, 2nd edition, Cambridge Univ. Press, Cambridge, 1959.
# A universal model for the evolution of tidally stripped systems
Nicole E. Drakos1, James E. Taylor2,3, and Andrew J. Benson4
1Department of Astronomy and Astrophysics, University of California, Santa
Cruz, 1156 High Street, Santa Cruz, CA 95064, USA
2Department of Physics and Astronomy, University of Waterloo, 200 University
Avenue West, Waterloo, ON N2L 3G1, Canada
3Waterloo Centre for Astrophysics, University of Waterloo, 200 University
Avenue West, Waterloo, ON N2L 3G1, Canada
4Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101, USA
E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Accurate models of the structural evolution of dark matter subhaloes, as they
orbit within larger systems, are fundamental to understanding the detailed
distribution of dark matter at the present day. Numerical simulations of
subhalo evolution support the idea that the mass loss associated with tidal
stripping is most naturally understood in energy space, with the particles
that are the least bound being removed first. Starting from this premise, we
recently proposed a zero-parameter “energy-truncation model” for subhalo
evolution. We tested this model with simulations of tidal stripping of
satellites with initial NFW profiles, and showed that the energy-truncation
model accurately predicts both the mass loss and density profiles. In this
work, we apply the model to a variety of Hernquist, Einasto and King profiles.
We show that it matches the simulation results quite closely in all cases,
indicating that it may serve as a universal model to describe tidally stripped
collisionless systems. A key prediction of the energy-truncation model is that
the central density of dark matter subhaloes is conserved as they lose mass;
this has important implications for dark matter annihilation calculations, and
for other observational tests of dark matter.
###### keywords:
dark matter – galaxies: haloes – methods: numerical
## 1 Introduction
In the $\Lambda$-Cold Dark Matter ($\Lambda$CDM) framework, positive density
fluctuations present in the matter distribution at early times gradually break
away from the cosmological expansion and collapse into roughly spherical dark
matter haloes. These haloes merge hierarchically on progressively larger
scales as the Universe expands, giving rise to the galaxy, group and cluster-
scale haloes known today. Halo mergers are relatively ineffective at mixing
infalling material, and the central parts of smaller haloes merging into a
larger system can survive for many orbits as self-bound subcomponents, or
‘subhaloes’.
It is critical to understand precisely how subhaloes evolve after a merger,
since a number of important tests of dark matter, including dark matter
annihilation signals (e.g. Stref et al., 2019; Delos, 2019), gravitational
lensing (e.g. Limousin et al., 2005; Baltz et al., 2009; Sereno et al., 2016),
and the imprint of substructure on stellar streams (e.g. Carlberg, 2020;
Bonaca et al., 2020) depend sensitively on the properties of the subhalo
population. The exact internal structure and abundance of small subhaloes is
highly uncertain, however; in particular, there are concerns that substructure
seen in cosmological simulations may still be significantly affected by
artificial (numerical) disruption (e.g. van den Bosch et al., 2018; Errani &
Navarro, 2021). These uncertainties, together with baryonic effects,
contribute to the small-scale structure problems often cited as one of the
main challenges to the $\Lambda$CDM model (see, e.g. Bullock & Boylan-Kolchin,
2017, for a review of these problems).
Given the complexity of cosmological structure formation and the relatively
poor resolution of large-volume simulations, subhalo evolution has often been
studied using simplified simulations of a single (sub)halo evolving in a fixed
background potential (e.g. Hayashi et al., 2003; Kazantzidis et al., 2004;
Boylan-Kolchin & Ma, 2007; Kampakoglou & Benson, 2007; Peñarrubia et al.,
2008a, b, 2009; Choi et al., 2009; Peñarrubia et al., 2010; Drakos et al.,
2017; Ogiya et al., 2019; Delos, 2019; Errani & Navarro, 2021). These studies
have produced a number of models for tidal stripping and structural evolution
that describe how mass is lost as a function of radius. These models generally
remain empirical, however, i.e. they contain free parameters or functional
forms that are adjusted to match specific simulation results. Thus, while
accurate for the specific cases considered, these models are restricted to the
particular systems simulated (which are typically isotropic systems with NFW
profiles; e.g. Hayashi et al., 2003; Green & van den Bosch, 2019).
As first pointed out by Choi et al. (2009), tidal stripping of subhaloes is
well approximated as a monotonic, outside-in process in energy space.
Extending this idea, in Drakos et al. (2017)—Paper I hereafter—we showed that
truncation of the particle distribution function (DF) in a dark matter halo,
via a ‘lowering’ operation, produces a set of density profiles very similar to
those measured for tidally stripped systems in numerical simulations. This
approach has several advantages over models based on simulation data alone, as
it predicts the evolution of the central regions of haloes below the
resolution limit of numerical simulations, and it is easily generalized to
other profiles, or to anisotropic systems. It also has a clear physical
interpretation, namely that as mass is lost from a subhalo, the energies
change more-or-less uniformly within the system, pushing particles close to
the threshold for unbinding over this limit, and thus removing them from the
system. In subsequent work (Drakos et al., 2020, Paper II hereafter), we
developed a full model for tidal stripping that combines this lowering
approach with a simple estimate of the mass loss rate. The model has no free
parameters, and agrees well with simulations of mass loss from systems with
NFW profiles. Since then, a number of other authors have successfully studied
the evolution of tidally stripped systems using an energy-based approach (e.g
Stücker et al., 2021; Amorisco, 2021; Errani et al., 2021).
The aim of the current study is to test whether the energy-truncation approach
and the mass-loss model presented in Papers I and II are valid for a broader
range of tidally stripped collisionless systems. The structure of this paper
is as follows: first, in Section 2 we review the energy-based truncation
method introduced in Paper I and the mass-loss model from Paper II. In
Sections 3 and 4 we summarize the collisionless systems we consider in
this paper, and the idealized $N$-body simulations we use to test our models,
respectively. In Section 5 we calculate from energy truncation the expected
evolution of the density profiles of these systems, and compare these
predictions to the simulations in Section 6. We show our model predictions for
the central density of subhaloes in Section 7. Finally, we explore the
physical interpretation of this model in Section 8, and discuss our results in
Section 9. In a companion paper, we explore the implications of this work for
dark matter annihilation and galaxy-galaxy lensing signal predictions (Drakos
et al., 2022, in prep).
## 2 Review of energy-truncation model
In this section we briefly review the energy-based description of tidal
truncation introduced in Paper I, and the mass-loss model developed in Paper
II.
### 2.1 Review of distribution functions of isolated spherical systems
Systems of particles can generically be described by a distribution function
(DF) $f(\mathbf{r},\mathbf{v})={\rm d}m/{\rm d}r^{3}{\rm d}v^{3}$, which
specifies the mass per unit phase-space volume ${\rm d}r^{3}{\rm d}v^{3}$ at a
given location $(\mathbf{r},\mathbf{v})$ in phase space. For isolated,
spherically symmetric and isotropic systems, the DF can be written as a
function of a single variable,
$f(\mathbf{r},\mathbf{v})=f(r,v)=f(\mathcal{E})$, where
$\mathcal{E}=\Psi(r)-v^{2}/2$
is the (conserved) relative energy and $\Psi(r)$ is the relative potential,
defined as $\Psi(r)=-\Phi(r)+\Phi_{0}$. Here $\Phi(r)$ is the usual
gravitational potential, while $\Phi_{0}$ is a reference potential, usually
taken to be the value of $\Phi$ at the outer boundary of the system. Given the
sign convention in the definition of the relative energy, it is positive for
all bound particles and represents the binding energy needed to remove the
particle from the self-bound system. With these definitions,
$f(\mathcal{E})>0$ when $\mathcal{E}>0$, and $f(\mathcal{E})=0$ otherwise.
In terms of these quantities, we can calculate the density profile $\rho(r)$
corresponding to a given distribution function
$\displaystyle\rho(r)$
$\displaystyle=4\pi\int_{0}^{\Psi(r)}f(\mathcal{E})\sqrt{2(\Psi(r)-\mathcal{E})}{\rm
d}\mathcal{E}$ (1)
or we can invert this relationship to determine the isotropic distribution
function corresponding to a given density profile
$\displaystyle f(\mathcal{E})$
$\displaystyle=\dfrac{1}{\sqrt{8}\pi^{2}}\left[\int_{0}^{\mathcal{E}}\dfrac{1}{\sqrt{\mathcal{E}-\Psi}}\dfrac{{\rm
d}^{2}\rho}{{\rm d}\Psi^{2}}{\rm
d}\Psi+\dfrac{1}{\sqrt{\mathcal{E}}}\left(\dfrac{{\rm d}\rho}{{\rm
d}\Psi}\right)_{\Psi=0}\right]\,\,\,,$ (2)
where $\Phi(r)$ (and thus $\Psi=\Psi(r)$) is calculated from $\rho(r)$, using
Poisson’s equation $\nabla^{2}\Phi(r)=4\pi G\rho(r)$ (see Binney & Tremaine,
1987, for the derivation of these results).
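As an illustration of Equation (2), the following minimal Python sketch (an
illustrative calculation, not part of any published code) performs the
Eddington inversion numerically for the Hernquist profile of Section 3.2, in
units $G=M=a=1$, and compares the result with the analytic DF of Equation
(12). The substitution $\Psi=\mathcal{E}(1-u^{2})$ removes the
inverse-square-root singularity, and the boundary term in Equation (2)
vanishes here since ${\rm d}\rho/{\rm d}\Psi\rightarrow 0$ as
$\Psi\rightarrow 0$ for this profile.

```python
import numpy as np
from scipy.integrate import quad

def rho_of_psi(p):
    """Hernquist density as a function of the relative potential Psi = 1/(1+r)."""
    r = 1.0 / p - 1.0
    return 1.0 / (2.0 * np.pi * r * (r + 1.0) ** 3)

def d2rho_dpsi2(p, h=1.0e-5):
    """Second derivative of rho(Psi) by central differences."""
    h = min(h, 0.5 * p)  # keep the stencil inside (0, Psi(0))
    return (rho_of_psi(p + h) - 2.0 * rho_of_psi(p) + rho_of_psi(p - h)) / h ** 2

def f_eddington(E):
    """Equation (2) with the substitution Psi = E(1 - u^2); boundary term = 0."""
    val, _ = quad(lambda u: d2rho_dpsi2(E * (1.0 - u ** 2)), 0.0, 1.0)
    return 2.0 * np.sqrt(E) * val / (np.sqrt(8.0) * np.pi ** 2)

def f_hernquist(E):
    """Analytic Hernquist DF (Equation (12)), for comparison (v_g = 1)."""
    q = np.sqrt(E)
    pre = 1.0 / (8.0 * np.sqrt(2.0) * np.pi ** 3 * (1.0 - q ** 2) ** 2.5)
    return pre * (3.0 * np.arcsin(q) + q * np.sqrt(1.0 - q ** 2)
                  * (1.0 - 2.0 * q ** 2) * (8.0 * q ** 4 - 8.0 * q ** 2 - 3.0))

for E in (0.2, 0.5, 0.8):
    print(E, f_eddington(E), f_hernquist(E))  # the two columns should agree closely
```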
### 2.2 Lowering the DF to represent tidal truncation
Subhaloes are clearly not isolated throughout their evolution, and the
relative energy defined above will vary with time for each individual
particle, due to changes in the self-bound potential, and the effects of tidal
heating. The approach of Paper I was to assume that over the course of a full
orbit, the relative energies of all particles change by a constant ‘tidal
energy’ $\mathcal{E}_{T}$. Considering the system near apocentre, where
heating is minimal, it can then be treated as approximately isolated, with
$f\sim f(\mathcal{E})$, where the potential used to calculate $\mathcal{E}$ is
that of the particles remaining bound at that time, and all the energies have
been shifted by $\mathcal{E}_{T}$ with respect to the previous apocentre. By
that point, tidal stripping will have removed from the system all the
particles in a range of $\mathcal{E}$ between zero and $\mathcal{E}_{T}$.
To estimate the resulting adjustment of the distribution function, we ‘lower’
it by a tidal energy $\mathcal{E}_{T}$, and use this modified form to recover
the new, tidally stripped density profile. This method is similar to how the
well-known King model for truncated stellar systems was derived by lowering
the DF of an isothermal sphere (King, 1966). It was originally proposed in
Widrow & Dubinski (2005) as a way to truncate NFW profiles for use as initial
conditions (ICs) in isolated simulations.
In terms of the original distribution function, the lowered version is defined
as
$f_{\mathcal{E}_{T}}(\mathcal{E})=\begin{cases}f_{0}(\mathcal{E}+\mathcal{E}_{\rm
T})-f_{0}(\mathcal{E}_{T})&\mathcal{E}\geq 0\\\ 0&\mathcal{E}\leq
0\,\,\,,\end{cases}$ (3)
where $f_{0}(\mathcal{E})$ is the DF of the original system, and
$\mathcal{E}_{T}$ is the truncation or tidal energy.
Given the lowered distribution function, we could in principle calculate the
density profile from Equation (1), but we would need the relative potential
$\Psi(r)$, which itself depends on $\rho(r)$ through Poisson’s equation.
Substituting the spherical form of Poisson’s equation on the left-hand side of
Equation (1), we can eliminate $\rho$ and write:
$\displaystyle\dfrac{{\rm d}^{2}{\Psi(r)}}{{\rm d}r^{2}}+$
$\displaystyle\dfrac{2}{r}\dfrac{{\rm d}\Psi(r)}{{\rm d}r}$ (4)
$\displaystyle=-16\pi^{2}G\int_{0}^{\Psi(r)}f_{\mathcal{E}_{T}}(\mathcal{E})\sqrt{2(\Psi(r)-\mathcal{E})}{\rm
d}\mathcal{E}\,.$
This differential equation, together with the boundary conditions
$\Psi(0)=\Psi_{0}(0)-\mathcal{E}_{T},\ {\rm d}\Psi(0)/{\rm d}r=0$ (where
$\Psi_{0}(r)$ is the relative potential of the original, un-truncated system)
is easily solved by conventional techniques (see Paper I). Once we have solved
for $\Psi(r)$, the truncation radius $r_{t}$ is given by the condition
$\Psi(r_{t})=0$, and the truncated density profile $\rho(r)$ can be calculated
from Poisson’s equation.
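For concreteness, the following Python sketch (an illustrative implementation
under the stated conventions, not the code used in Papers I and II) solves
Equation (4) for a given lowered DF $f_{T}$, with $G=1$, using a series
expansion near $r=0$ to start the integration; the truncation radius $r_{t}$
is located as the zero-crossing of $\Psi$.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

def rho_from_df(psi_val, f_T):
    """Equation (1): density at relative potential psi_val (G = 1)."""
    if psi_val <= 0.0:
        return 0.0
    val, _ = quad(lambda e: f_T(e) * np.sqrt(2.0 * (psi_val - e)), 0.0, psi_val)
    return 4.0 * np.pi * val

def solve_truncated_profile(f_T, psi0, r_max=100.0, r_start=1.0e-4):
    """Integrate Equation (4) outward from the boundary condition at r = 0."""
    rho0 = rho_from_df(psi0, f_T)
    # Near r = 0: Psi(r) ~ Psi(0) - (2*pi/3) rho(0) r^2, which avoids the
    # coordinate singularity of the 2/r term in the ODE.
    y0 = [psi0 - (2.0 * np.pi / 3.0) * rho0 * r_start ** 2,
          -(4.0 * np.pi / 3.0) * rho0 * r_start]

    def rhs(r, y):
        psi_val, dpsi = y
        return [dpsi, -4.0 * np.pi * rho_from_df(psi_val, f_T) - 2.0 * dpsi / r]

    edge = lambda r, y: y[0]        # Psi = 0 marks the truncation radius r_t
    edge.terminal = True
    sol = solve_ivp(rhs, (r_start, r_max), y0, events=edge,
                    rtol=1.0e-8, atol=1.0e-10, dense_output=True)
    r_t = sol.t_events[0][0] if sol.t_events[0].size else np.inf
    return sol, r_t
```

The truncated density profile then follows by evaluating Equation (1) at the
solved potential, i.e. `rho_from_df(sol.sol(r)[0], f_T)`.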
### 2.3 Predicting the mass-loss rate
As explained in Paper II, our proposed mass-loss model is based on the Jacobi
model for tidal truncation on a circular orbit (Binney & Tremaine, 1987;
Taylor & Babul, 2001). A tidal or limiting radius $r_{\lim}$ is defined such
that:
$\displaystyle\bar{\rho}_{\rm sat}(r_{\lim})$ $\displaystyle=\eta_{\rm
eff}\bar{\rho}_{\rm H}(R_{p})$ (5)
where $\bar{\rho}_{\rm sat}(r_{\rm lim})$ is the mean density of the satellite
interior to the tidal radius and $\bar{\rho}_{\rm H}(R_{p})$ is the mean
density of the host halo interior to the pericentre of the satellite’s orbit.
Throughout this paper, we use $R$ when referring to the radius with respect to
the centre of the host halo, and $r$ when referring to the radius within the
satellite.
The tidal radius, $r_{\rm lim}$, and resulting bound mass of the system,
$M_{\rm bnd}$, are then given by the following system of equations:
$\displaystyle M_{\rm bnd}$
$\displaystyle=\dfrac{4\pi}{3}r_{\lim}^{3}\bar{\rho}_{\rm sat}(r_{\lim})$ (6)
$\displaystyle M_{\rm bnd}$ $\displaystyle=\eta_{\rm
eff}M_{H}(<R_{p})\dfrac{r_{\rm lim}^{3}}{R_{p}^{3}}$
We emphasize that the tidal radius $r_{\rm lim}$ is not the same as the
truncation radius $r_{t}$ defined in the previous section.
The constant $\eta_{\rm eff}$ is defined as the orbital average of the
instantaneous, spherical $\eta$ value defined in King (1962):
$\eta_{\rm eff}=\dfrac{1}{t_{\rm orb}}\int_{0}^{t_{\rm
orb}}\left(\dfrac{\omega^{2}}{\omega_{c}^{2}}-\dfrac{1}{\omega_{c}^{2}}\dfrac{{\rm
d}^{2}\Phi_{H}}{{\rm d}R^{2}}\right){\rm d}t\,\,\,,$ (7)
where $\omega=|\mathbf{V}\times\mathbf{R}|/R^{2}$ is the instantaneous angular
velocity of the satellite, $\omega_{c}^{2}=GM_{H}(<R)/R^{3}$ is the squared
angular velocity of a circular orbit at the same radius, and $\Phi_{H}(R)$ is
the potential of the host. For a spherically symmetric system, ${\rm
d}\Phi_{H}/{\rm d}R=GM_{H}(<R)/R^{2}$. As demonstrated in Paper II, with this
definition of
$\eta$, we obtain a good match to the mass-loss rates measured in our
simulations without needing to add or adjust any free parameters. Recently,
Stücker et al. (2021) proposed that a more natural way to truncate tidally
stripped profiles in energy space is to use the “boosted potential”. We
consider $\eta$ determined from the boosted potential in Appendix B, and
compare it to the $\eta$ values found from the energy-truncation model.
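As a concrete illustration of Equation (7), a minimal Python sketch of the
orbital average is given below (with $G=1$); the arrays `t`, `R` and `Vtan`
sampling one radial period, and the host-profile callables `M_enc` and
`d2Phi_dR2`, are assumed inputs for this illustration rather than part of any
published interface.

```python
import numpy as np
from scipy.integrate import trapezoid

def eta_eff(t, R, Vtan, M_enc, d2Phi_dR2):
    """Orbit-averaged eta of Equation (7), over one radial period (G = 1)."""
    omega2 = (Vtan / R) ** 2        # omega = |V x R| / R^2 = Vtan / R
    omega_c2 = M_enc(R) / R ** 3    # circular orbit: omega_c^2 = G M_H(<R)/R^3
    integrand = (omega2 - d2Phi_dR2(R)) / omega_c2
    return trapezoid(integrand, t) / (t[-1] - t[0])
```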
Given a tidal radius defined as in Equation (5), we assume the mass outside
this radius is lost over the course of an orbit. We adjust the tidal energy
$\mathcal{E}_{T}$ in Equation (3) until it produces a satellite with this
lowered mass. This gives a full model for the stripped system, allowing us to
specify the total mass, distribution function and density profile by the end
of the orbit. We apply this stripping model at successive apocentres, since
this is the point at which the profile is expected to reach equilibrium. As
the system continues to orbit, each successive pericentric passage will
decrease $r_{\rm lim}$ and increase $\mathcal{E}_{T}$. (Here we take advantage
of the fact that truncating the profile first at energy $\mathcal{E}_{1}$, and
then truncating the new truncated profile a second time at $\mathcal{E}_{2}$,
is mathematically equivalent to truncating the original profile at
$\mathcal{E}_{1}+\mathcal{E}_{2}$. Therefore, though $r_{\rm lim}$ is
calculated using the most recent, stripped density profile, the new profile is
always calculated by lowering the original, untruncated DF.)
### 2.4 Algorithmic description of method
In practice, given an orbit and a set of initial conditions, the
energy-truncation model can be applied using the following steps (a schematic
implementation is sketched below):
1. Compute $\eta_{\rm eff}$ by integrating over the orbit using Equation (7).
2. Solve for the bound mass $M_{\rm bnd}$ and the tidal radius $r_{\rm lim}$ using Equation (6). Note that $r_{\rm lim}$ is not needed in the following steps.
3. Compute the tidal energy that corresponds to $M_{\rm bnd}$. There is a monotonic relationship between $M_{\rm bnd}$, $r_{t}$ and $\mathcal{E}_{T}$, and any one of these three variables can be used to uniquely describe the truncated profile. We create a lookup table to map $M_{\rm bnd}$ to $\mathcal{E}_{T}$ (for convenience, Paper I provides an empirical relationship between these variables).
4. Using $\mathcal{E}_{T}$, compute the lowered DF in Equation (3).
5. Calculate $\Psi(r)$ by solving the ODE in Equation (4).
6. Compute the corresponding truncated profile $\rho_{\rm sat}$ using Equation (1).
7. Return to step 2, and repeat for as many orbits as desired.
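A schematic Python driver for these steps follows. The helpers
`jacobi_bound_mass` and `tidal_energy_from_mass` are hypothetical stand-ins
for steps 2 and 3, while `eta_eff`, `rho_from_df` and
`solve_truncated_profile` refer to the sketches given earlier in this section;
this is an illustration of the procedure, not the authors' implementation.

```python
def evolve_stripped_system(f0, psi0, eta, host, R_p, n_orbits):
    """Apply the energy-truncation model at successive apocentres (sketch)."""
    rho_sat, history = None, []
    for _ in range(n_orbits):
        # Step 2: bound mass from the Jacobi condition, Equation (6);
        # jacobi_bound_mass is a hypothetical helper solving that system.
        M_bnd = jacobi_bound_mass(rho_sat, eta, host, R_p)
        # Step 3: cumulative tidal energy from a lookup table built against
        # the original DF (tidal_energy_from_mass is a hypothetical lookup).
        E_T = tidal_energy_from_mass(f0, M_bnd)
        # Step 4: lower the original DF, Equation (3).
        f_T = lambda e, E=E_T: f0(e + E) - f0(E) if e > 0.0 else 0.0
        # Steps 5-6: solve Equation (4), then recover the density profile.
        sol, r_t = solve_truncated_profile(f_T, psi0 - E_T)
        rho_sat = lambda r, s=sol, f=f_T: rho_from_df(s.sol(r)[0], f)
        history.append((M_bnd, E_T, r_t))
    return history  # step 7: one entry per orbit
```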
## 3 Collisionless satellite models
The main goal of this paper is to examine how well sharp truncation of the
distribution function at a tidal energy, together with the mass-loss model
reviewed in the previous section, describes the evolution of a range of
different collisionless systems (a collisionless system is one in which
interactions between individual particles are negligible, so that the
gravitational force acting on each particle can be computed from a smooth
density field rather than from a collection of individual particles). These
include common models for CDM haloes such as the NFW, Hernquist and Einasto
models, but also models for stellar systems such as the King model. In this
section we review the properties of each of these model systems.
### 3.1 NFW
In Papers I and II, we restricted our attention to the NFW profile,
$\rho(r)=\dfrac{\rho_{0}r_{\rm s}^{3}}{r(r+r_{\rm s})^{2}}\,\,\,,$ (8)
where $\rho_{0}$ is a characteristic density and $r_{\rm s}$ is the scale
radius, describing the point where the logarithmic slope is ${\rm
d}\log\rho/{\rm d}\log r=-2$.
For convenience, we provide an analytic description of an energy-truncated NFW
profile, using the functional form
$\rho_{\rm NFWT}=\dfrac{e^{-x/y}}{\left[1+(x/y)\right]^{a}}\rho_{\rm NFW}\,,$
(9)
where $x=r/r_{e}$ is the radius normalized by an effective radius, $r_{e}$,
and $y=M_{\rm bnd}/M_{\rm NFW}(<r_{s})$ is the bound mass of the truncated
profile, normalized by the mass of the untruncated profile within the NFW
scale radius, $r_{s}$. This functional form was chosen because we found
empirically that it yields a good fit, and because it conserves the central
density as $r\rightarrow 0$.
The parameters $r_{e}$ and $a$ can be expressed as follows:
$\displaystyle\log_{10}r_{e}$
$\displaystyle=0.0811y^{3}+0.358y^{2}+0.0781y-0.201$ (10) $\displaystyle a$
$\displaystyle=0.179y^{3}+0.379y^{2}-0.524y-0.952\,\,\,.$
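The fit of Equations (9)-(10) is straightforward to evaluate; a minimal Python
sketch follows, assuming $r$ (and hence $r_{e}$ and $x$) is measured in units
of the NFW scale radius $r_{s}$, with $\rho_{0}$ an arbitrary normalization.

```python
import numpy as np

def rho_nfwt(r, y, rho0=1.0):
    """Analytic fit to the energy-truncated NFW profile, Equations (9)-(10).
    r is in units of r_s; y = M_bnd / M_NFW(<r_s)."""
    r_e = 10.0 ** (0.0811 * y**3 + 0.358 * y**2 + 0.0781 * y - 0.201)
    a = 0.179 * y**3 + 0.379 * y**2 - 0.524 * y - 0.952
    x = r / r_e
    rho_nfw = rho0 / (r * (1.0 + r) ** 2)  # Equation (8) with r_s = 1
    return np.exp(-x / y) / (1.0 + x / y) ** a * rho_nfw
```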
As shown in Fig. 1, this analytic fit generally agrees with the energy-
truncated model to within 5 per cent, except at large radii ($r\gtrsim
0.5r_{t}$), where the density is very low. Our model predicts a conserved
central density, which is captured in Equation (9); this differs from
empirical models calibrated to simulations, which typically include an
explicit drop in the central density (e.g. Hayashi et al., 2003; Green & van
den Bosch, 2019). This point will be discussed further in Section 7.
Figure 1: Top: Energy-truncated NFW profiles, $\rho$ (solid lines) and the
analytic fit, $\rho_{\rm fit}$ (dotted lines). Bottom: Residuals in the fits,
measured as ($\rho_{\rm fit}/\rho-1)$. The 1 per cent errors are shown with
dashed black lines. The analytic fit agrees well with the energy-truncated
model inside $\sim 0.5r_{t}$.
### 3.2 Hernquist
The Hernquist profile (Hernquist, 1990) was originally used to describe
spherical galaxies, but it is also a reasonable approximation for cosmological
dark matter halo profiles. The density profile is given by:
$\rho(r)=\dfrac{M_{\rm tot}}{2\pi}\dfrac{a}{r(r+a)^{3}}\,\,\,,$ (11)
where $M_{\rm tot}$ is the total mass and $a$ is a characteristic radius
enclosing a mass of $M_{\rm tot}/4$.
The advantage of this model is that it has simple analytic expressions for
many of its properties, including its DF:
$\displaystyle f(\mathcal{E})$
$\displaystyle=\dfrac{M}{8\sqrt{2}\pi^{3}a^{3}v_{g}^{3}}\dfrac{1}{(1-q^{2})^{5/2}}$
(12a) $\displaystyle\hskip
14.22636pt\times\left(3\sin^{-1}q+q(1-q^{2})^{1/2}(1-2q^{2})(8q^{4}-8q^{2}-3)\right)$
$\displaystyle q$ $\displaystyle=\sqrt{\dfrac{a\mathcal{E}}{GM}}$ (12b)
$\displaystyle v_{g}$ $\displaystyle=\left(\dfrac{GM}{a}\right)^{1/2}\,\,\,.$
(12c)
### 3.3 Einasto
The Einasto profile was first used to describe star counts in the Milky Way
(Einasto, 1965). However, this profile is often a better description of
cosmological dark matter halo profiles than the well-known NFW profile (e.g.
Navarro et al., 2004; Gao et al., 2008; Klypin et al., 2016). The Einasto
profile has the following form:
$\displaystyle\rho(r)$
$\displaystyle=\rho_{-2}\exp\left(-\dfrac{2}{\alpha}\left[\left(\dfrac{r}{r_{-2}}\right)^{\alpha}-1\right]\right)\,\,\,,$
(13)
where $\alpha$ is the Einasto shape parameter and $r_{-2}$ is the radius where
the logarithmic slope is $-2$. Compared to the NFW profile, the Einasto
profile has an extra parameter, $\alpha$, that controls the inner slope of the
density profile, and may reflect the mass accretion history of the halo (e.g.
Klypin et al., 2016).
### 3.4 King
The King model resembles an isothermal sphere at small radii, but has a finite
mass within a well-defined tidal radius. This model is typically used to
describe truncated stellar systems such as globular clusters or elliptical
galaxies (King, 1966). The King model is derived by lowering the DF of an
isothermal sphere:
$f(\mathcal{E})=\rho_{1}(2\pi\sigma^{2})^{-3/2}(e^{\mathcal{E}/\sigma^{2}}-1)\,\,\,,$
(14)
where $\sigma$ is the velocity dispersion and $\rho_{1}$ is a characteristic
density.
The density profile of the King model can then be calculated from the DF using
Equation (1), which gives:
$\rho(\Psi)=\rho_{1}\left[e^{\Psi/\sigma^{2}}\mbox{erf}\left(\dfrac{\sqrt{\Psi}}{\sigma}\right)-\sqrt{\dfrac{4\Psi}{\pi\sigma^{2}}}\left(1+\dfrac{2\Psi}{3\sigma^{2}}\right)\right]\,\,\,.$
(15)
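For reference, Equation (15) can be evaluated directly; a minimal Python
sketch (in arbitrary units, with $\rho_{1}=\sigma=1$ by default) is:

```python
import numpy as np
from scipy.special import erf

def rho_king(psi_val, sigma=1.0, rho1=1.0):
    """King-model density as a function of the relative potential, Eq. (15)."""
    p = psi_val / sigma ** 2
    if p <= 0.0:
        return 0.0  # the model is truncated at Psi = 0
    return rho1 * (np.exp(p) * erf(np.sqrt(p))
                   - np.sqrt(4.0 * p / np.pi) * (1.0 + 2.0 * p / 3.0))
```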
To relate the relative potential, $\Psi(r)$, to the density profile, Poisson's
equation (Equation (4)) can be solved numerically for $\Psi(r)$. There are
many possible parameterizations of the King model, but it is uniquely defined
by the total mass, the tidal radius $r_{\rm t}$, and a dimensionless central
potential $P_{0}=\Psi(0)/\sigma^{2}$. Alternatively, King models can be
characterized by a total mass, tidal radius and a concentration parameter,
$c_{K}$, which depends on the ‘King radius’ $r_{0}$.
The latter quantities are defined as:
$\displaystyle c_{K}$ $\displaystyle=\log_{10}\left(\dfrac{r_{\rm
t}}{r_{0}}\right)$ (16) $\displaystyle r_{0}$
$\displaystyle=\sqrt{\dfrac{9\sigma^{2}}{4\pi G\rho_{0}}}\,\,\,.$
where $\rho_{0}$ is the central density of the halo.
## 4 Simulations
The simulations were performed using a version of the $N$-body code gadget-2
(Springel, 2005), modified to contain a fixed background potential
corresponding to the host halo.
### 4.1 Initial halo models
We considered four different satellite models for our initial conditions
(ICs), as described in the previous section. The ICs were created using our
public code Icicle (Drakos et al., 2017). For all models, we define the mass
unit to be the initial mass of the satellite, $M_{\rm unit}=M_{\rm sat}$. We
define the length unit to be $r_{\rm unit}=a$ for the Hernquist model, $r_{\rm
unit}=r_{-2}$ for the Einasto models, and $r_{\rm unit}=0.1\,r_{\rm t}$ for
the King models. Then the density, time and energy units are $\rho_{\rm
unit}=M_{\rm unit}r_{\rm unit}^{-3}$, $t_{\rm unit}=\sqrt{r_{\rm
unit}^{3}/GM_{\rm unit}}$ and $E_{\rm unit}=GM_{\rm unit}/r_{\rm unit}$,
respectively. We calculated the softening length for each profile as
$\epsilon=0.5\,r_{h}N^{-1/3}$, where $r_{h}$ is the radius enclosing half of
the total mass, as in van Kampen (2000).
For the NFW profile, we used the same set of simulations described in Paper I.
Since NFW profiles are infinitely extended with divergent mass as $r$ goes to
infinity, these ICs were truncated as described in Paper I. The resulting
models resemble a truncated NFW profile with an energy truncation of
$\mathcal{E}_{\rm T}\approx 0.27$. Though the simulation results can be scaled
by the length and mass units, for the Einasto models, there is one free
parameter, $\alpha$, to fix; we chose values of $\alpha=0.15$ and $0.3$, as
these are representative of the range found in simulations (Gao et al., 2008).
The King models also have an additional free parameter (which could be
specified as $r_{0}$, $P_{0}$ or $c_{K}$); we used $P_{0}=3$ and $P_{0}=12$,
which correspond to King concentrations of $c_{K}=0.7$ and $c_{K}=2.7$. These
concentrations are typical of globular clusters and elliptical galaxies,
respectively (Binney & Tremaine, 1987). A summary of the IC properties is
given in Table 1. To check the stability of the ICs, they were evolved for
$t=1000\,t_{\rm unit}$ in isolation using the N-body code gadget-2 (Springel,
2005), as shown in Fig. 2. All of the profiles are extremely stable outside
$r=0.1\,r_{\rm unit}$ at $t=1000\,t_{\rm unit}$.
Table 1: Summary of the parameters used for the ICs; all satellites have an initial mass of $M_{\rm sat}=1$. The columns list (1) the name of the ICs, (2) the number of particles, (3) the radial unit, (4) specified profile parameters, (5) derived profile parameters, and (6) the Gadget-2 softening length, roughly equivalent to the Plummer softening length.
Profile Name | $N$ | $r_{\rm unit}$ | IC Parameters | Derived Parameters | $\epsilon/r_{\rm unit}$
---|---|---|---|---|---
NFWT | $\approx 1.3\times 10^{6}$ | $r_{s}$ | $r_{\rm cut}=10\,r_{\rm unit}$ | $\rho_{0}=0.08\,\rho_{\rm unit}$ | $0.01$
Hern | $10^{6}$ | $a$ | – | – | $0.01$
EinHigh | $10^{6}$ | $r_{-2}$ | $\alpha_{E}=0.3$ | $\rho_{-2}=0.01\,\rho_{\rm unit}$ | $0.02$
EinLow | $10^{6}$ | $r_{-2}$ | $\alpha_{E}=0.15$ | $\rho_{-2}=0.005\,\rho_{\rm unit}$ | $0.07$
KingHigh | $10^{6}$ | $r_{\rm t}/10$ | $P_{0}=12$ | $\rho_{1}=0.003\,\rho_{\rm unit}$, $r_{0}=0.02\,r_{\rm unit}$, $c_{K}=2.7$ | $0.01$
KingLow | $10^{6}$ | $r_{\rm t}/10$ | $P_{0}=3$ | $\rho_{1}=0.001\,\rho_{\rm unit}$, $r_{0}=2.13\,r_{\rm unit}$, $c_{K}=0.7$ | $0.01$
Figure 2: Stability of ICs. Solid lines are the profile at $t=0\,t_{\rm
unit}$; points are the profile at $t=1000\,t_{\rm unit}$.
### 4.2 Satellite orbits
Each satellite model was evolved on two different orbits, corresponding to the
‘Fast’ and ‘Slow’ Simulations in Paper II (Simulations 3 and 4 from Paper I),
which are representative of orbits that lose mass quickly and slowly,
respectively. In Papers I and II we found that the sharp energy truncation
model is a better descriptor of orbits which are losing mass slowly. The
infinitely extended host halo was assumed to have an NFW profile for all
simulations, with a mass $M_{\rm host}=300M_{\rm unit}$ and a scale radius of
$6.69\,r_{\rm unit}$. The orbits in the Slow and Fast Simulations have an
apocentre of $R_{a}=100\,r_{\rm unit}$, and pericentres of $R_{p}=50\,r_{\rm
unit}$ and $10\,r_{\rm unit}$, respectively. The simulations are summarized in
Table 2.
Table 2: Summary of orbital parameters for the Fast and Slow Simulations. Columns give (1) the simulation name, (2) the apocentric distance, (3) the pericentric distance, (4) the tangential velocity at apocentre, (5) the tangential velocity at pericentre, (6) the (radial) orbital period, and (7) the mass-loss factor, $\eta_{\rm eff}$.
Simulation Name | $R_{a}/r_{\rm unit}$ | $R_{p}/r_{\rm unit}$ | $V_{a}/v_{\rm unit}$ | $V_{p}/v_{\rm unit}$ | $t_{\rm orb}/t_{\rm unit}$ | $\eta_{\rm eff}$
---|---|---|---|---|---|---
Slow Simulation | 100 | 50 | 1.42 | 2.84 | 185.4 | 2.34
Fast Simulation | 100 | 10 | 0.51 | 5.10 | 129.7 | 1.72
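Since $V_{a}$ and $V_{p}$ are tangential velocities at the orbital turning points, the entries of Table 2 can be cross-checked through conservation of angular momentum, $V_{a}R_{a}=V_{p}R_{p}$. A minimal sketch (values copied from Table 2, in code units):

```python
# (R_a, R_p, V_a, V_p) for each orbit, from Table 2
orbits = {"Slow": (100.0, 50.0, 1.42, 2.84),
          "Fast": (100.0, 10.0, 0.51, 5.10)}
for name, (Ra, Rp, Va, Vp) in orbits.items():
    # both products should agree if the tabulated values are consistent
    print(name, Va * Ra, Vp * Rp)
# Slow: 142.0 142.0 ; Fast: 51.0 51.0
```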
Subhalo centres and bound particles were identified similarly to the method
described in Papers I and II. Considering only particles that were bound in
the previous snapshot, the centre of the satellite was defined as the densest
point in position and velocity space. The densest point was calculated using
the centre of mass (COM) in progressively smaller spheres, as originally
described in Tormen et al. (1997). Initially, the sphere was centred on the
COM of all particles, and the radius was defined as the distance to the
furthest particle. The radius was then decreased to 90 per cent of its previous
value, and the sphere recentred on the COM; this was repeated until there were
fewer than 100 particles in the sphere. The velocity
frame was calculated in the same way, by using progressively smaller spheres
to find the densest point in velocity space.
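A minimal sketch of this shrinking-sphere procedure, assuming equal-mass particles and a 90 per cent shrink factor per iteration (our reading of the procedure above); the same routine applied to the velocity array yields the velocity frame:

```python
import numpy as np

def shrinking_sphere_centre(x, shrink=0.9, n_min=100):
    """Densest point in the style of Tormen et al. (1997): iteratively
    recentre on the COM of a sphere whose radius shrinks each step."""
    centre = x.mean(axis=0)                            # COM of all particles
    radius = np.linalg.norm(x - centre, axis=1).max()  # furthest particle
    while True:
        inside = np.linalg.norm(x - centre, axis=1) < radius
        if inside.sum() < n_min:                       # stop below 100 particles
            return centre
        centre = x[inside].mean(axis=0)
        radius *= shrink
```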
Once the frame of the satellite was defined, we found the self-bound
particles. First, the energy of each particle was calculated in this frame,
assuming a spherical potential (in Paper II we showed the results are
insensitive to this assumption):
$P_{i}\approx-Gm\left(\dfrac{N(<r_{i})}{r_{i}}+\sum_{j,\,r_{j}>r_{i}}^{N}\dfrac{1}{r_{j}}\right)\,\,\,,$ (17)
where $r_{i}$ is the distance of particle $i$ from the center of the system.
Particles were iteratively removed, and energies recalculated, until
convergence. The bound satellite mass is the mass of the remaining bound
particles.
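A sketch of this unbinding loop, assuming $N$ equal-mass particles with positions and velocities already in the satellite frame; `spherical_potential` implements Equation (17) using sorted radii:

```python
import numpy as np

G = 1.0  # simulation units

def spherical_potential(r, m):
    """Equation (17): P_i = -G m [N(<r_i)/r_i + sum_{r_j > r_i} 1/r_j]."""
    order = np.argsort(r)
    rs = r[order]
    n_inner = np.arange(len(rs))                           # N(<r_i), sorted radii
    outer = np.cumsum((1.0 / rs)[::-1])[::-1] - 1.0 / rs   # sum over r_j > r_i
    P = -G * m * (n_inner / rs + outer)
    out = np.empty_like(P)
    out[order] = P
    return out

def self_bound_mask(r, v2, m):
    """Iteratively remove particles with E = v^2/2 + P >= 0 until convergence."""
    bound = np.ones(len(r), dtype=bool)
    while True:
        P = spherical_potential(r[bound], m)
        keep = 0.5 * v2[bound] + P < 0.0
        if keep.all():
            return bound
        bound[np.flatnonzero(bound)[~keep]] = False
```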
## 5 Predicted profile evolution
In the energy truncation approach described in Section 2.1, the evolution of
the stripped system depends on a single parameter, which can be expressed as
the tidal energy $\mathcal{E}_{T}$. First, we explore how the NFW, Hernquist,
Einasto and King models evolve when truncated as function of
$\mathcal{E}_{T}$. We note that applying energy truncation to a King model
simply results in another King model, with parameters modified as follows:
$\displaystyle P_{0,T}=P_{0}-\mathcal{E}_{T}\,,$ (18)
$\displaystyle\rho_{1,T}=\rho_{1}\exp(\mathcal{E}_{T})\,,$
$\displaystyle r_{0,T}=r_{0}\sqrt{\dfrac{\rho(P_{0})}{\rho_{T}(P_{0,T})}}\,\,\,.$
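A sketch of this transformation, assuming the standard King density–potential relation (Binney & Tremaine, 1987) in units where the velocity-dispersion parameter is unity; `king_rho` below encodes that assumed relation:

```python
from math import exp, sqrt, pi
from scipy.special import erf

def king_rho(P, rho1):
    """Assumed King relation: rho(P) = rho1 [e^P erf(sqrt P) - sqrt(4P/pi)(1 + 2P/3)]."""
    return 0.0 if P <= 0 else rho1 * (exp(P) * erf(sqrt(P))
                                      - sqrt(4 * P / pi) * (1 + 2 * P / 3))

def truncate_king(P0, rho1, r0, E_T):
    """Equation (18): parameters of the energy-truncated King model."""
    P0_T = P0 - E_T
    rho1_T = rho1 * exp(E_T)
    r0_T = r0 * sqrt(king_rho(P0, rho1) / king_rho(P0_T, rho1_T))
    return P0_T, rho1_T, r0_T

print(truncate_king(P0=3.0, rho1=0.001, r0=2.13, E_T=0.5))  # e.g. KingLow
```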
For the other profiles, the truncated version no longer has a simple analytic
description, but can be calculated from the lowered distribution function. For
convenience, we provided an analytic fit to our lowered NFW profile in Section
3.1.
Though the model presented in Section 2.1 most naturally expresses the lowered
density profiles in terms of a truncation energy, $\mathcal{E}_{T}$, the model
can alternatively be parameterized by either the truncation radius, $r_{t}$,
or the total bound mass of the satellite, $M_{\rm bnd}$. In Fig. 3 we show the
relationship between these three parameters. Generally, there is a monotonic
relationship between the bound mass and the tidal energy or the tidal radius.
An exception is the tidal radius for the KingLow models; interestingly, in this
case $r_{\rm t}$ _increases_ for large values of
$\mathcal{E}_{T}/\Psi(0)$. We will show in Section 6 that this
prediction is also consistent with the simulation results.
Figure 3: Relationship between the tidal energy $\mathcal{E}_{T}$, the bound
mass of the satellite (top), and the tidal radius (bottom) for different
initial satellite profiles. The relative energy, $\mathcal{E}$, has been
normalized by the relative potential of the untruncated profile at $r=0$,
$\Psi(0)$. Note that since the NFW profile has a divergent mass profile, we
have chosen $M_{\rm unit}$ to correspond to an NFW profile truncated at
$\mathcal{E}_{T}=0.27$; therefore, $M_{\rm bnd}/M_{\rm unit}$ can be greater
than 1 (dotted line). In general, the bound mass and truncation radius
decrease with tidal energy, with the exception of the KingLow profile. The
KingLow profile is the only cored profile, and its truncation radius increases
as it is tidally stripped.
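A rough intuition for the $r_{t}$–$\mathcal{E}_{T}$ relation in Fig. 3 comes from ignoring the self-consistent response of the potential: a particle with relative energy $\mathcal{E}_{T}$ can then reach at most the radius where $\Psi(r_{t})=\mathcal{E}_{T}$. A sketch for the NFW case, where $\Psi(r)/\Psi(0)=\ln(1+x)/x$ with $x=r/r_{s}$; this zeroth-order estimate neglects the lowering of the DF, so it will differ in detail from the full model plotted in Fig. 3:

```python
import numpy as np
from scipy.optimize import brentq

def rt_over_rs(eT_over_Psi0):
    """Solve ln(1+x)/x = E_T/Psi(0) for x = r_t/r_s (untruncated NFW potential)."""
    f = lambda x: np.log1p(x) / x - eT_over_Psi0
    return brentq(f, 1e-8, 1e8)   # lhs decreases monotonically from 1 to 0

for e in (0.1, 0.27, 0.5):
    print(e, rt_over_rs(e))       # r_t shrinks as the tidal energy grows
```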
In Fig. 4 we show the truncated density profiles and various related
quantities, coloured by the tidal energy, $\mathcal{E}_{T}$, normalized
by the relative potential of the untruncated profile at $r=0$, $\Psi(0)$. For
most of the halo models, as the satellites are stripped they decrease in
density at all radii, though most of this decrease is at large radii. An
exception is the KingLow models, where $r_{t}$ increases, with a corresponding
increase in density at larger radii. We note that when the density profile is
cored (as in the centre of the King models), the central density decreases
significantly as mass is lost, while for the cuspier profiles the central
density is conserved. This is consistent with results from Peñarrubia et al.
(2010).
We also find that the scale radius of the NFW, Hernquist and Einasto profiles
(the point where the logarithmic slope of the density profile is $-2$,
estimated from the location of the maximum of $\rho r^{2}$) decreases as the
halo becomes tidally stripped. In principle, this could lead to an increase in
the concentration parameter, though the virial radius is also decreasing; we
examine halo concentrations further in a companion paper (Drakos et al., 2022,
in prep). Finally, the location and amplitude of the peak of the circular
velocity curve both decrease with increasing $\mathcal{E}_{T}/\Psi(0)$ for all
profiles except KingLow.
Figure 4: Predicted evolution for different initial density profiles
(columns), assuming sharp truncation of the distribution function in energy.
The EinLow and EinHigh profiles have $\alpha$ values of $\alpha=0.15$ and
$0.3$, respectively, while KingHigh and KingLow profiles have central
potentials of $P_{0}=12$ and $3$, respectively. The curves are coloured by the
value of the tidal energy, $\mathcal{E}_{T}$, normalized by the relative
potential of the untruncated profile at $r=0$, $\Psi(0)$. The density profile
goes to zero at the truncation radius, $r_{t}$. The relation between the tidal
energy, truncation radius and bound mass is given in Fig. 3.
## 6 Model comparison to simulations
We have already demonstrated in Papers I and II that the energy truncation
approach provides an excellent description of the evolution of systems with
NFW profiles. The purpose of this section is to test whether this approach,
and the mass loss model developed in Paper II, works as well for the other
satellite models described in Section 3. We start by comparing the predicted
and observed mass-loss rates, before returning to the profile evolution.
### 6.1 Mass-loss rates
In Fig. 5 we compare the bound mass of the simulated satellites versus time,
calculated as described in Section 4.2, to the predictions of the mass loss
model summarized in Section 2.3. We assume the mass loss predicted by the
model on each orbit is removed by the time the system reaches apocentre. With
this convention for the Slow Simulation (left column), we get excellent
agreement (the bound mass is predicted to 5 per cent or better accuracy over
the first 5 orbits) for the NFW, Hern and EinLow profiles and good agreement
(10 per cent or better accuracy over the first 5 orbits) for the EinHigh and
KingHigh profiles. We get less accurate predictions ($\leq$20 per cent
accuracy over the first 5 orbits) for the KingLow models. In all cases, mass
loss is slightly faster in the simulations than predicted by the model.
For the Fast Simulations (right column), the observed mass-loss rates are
slightly slower than predicted, particularly on the first few orbits, where
the predictions for most models overestimate mass loss by $15$–$20$ per cent.
After a few orbits, however, the simulations have caught up with the model
predictions, and we once again get good agreement ($\leq$10 per cent accuracy)
for all models. Additionally, the model correctly predicts the complete
disruption of the KingLow profile by the second pericentric passage.
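The comparison above uses a discrete, orbit-averaged convention: the mass loss predicted for each orbit is removed by the time the satellite reaches apocentre. A minimal sketch of that bookkeeping, with `delta_m_model` a hypothetical stand-in for the Paper II prescription (which is not reproduced here):

```python
def evolve_bound_mass(M0, n_orbits, delta_m_model):
    """Bound mass at successive apocentric passages under the discrete convention."""
    masses = [M0]
    for n in range(n_orbits):
        dM = delta_m_model(masses[-1], n)       # predicted loss on orbit n
        masses.append(max(masses[-1] - dM, 0.0))
    return masses

# toy prescription: a fixed fraction stripped per pericentric passage
print(evolve_bound_mass(1.0, 5, lambda M, n: 0.2 * M))
```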
Figure 5: Bound satellite mass as a function of time in the Slow and Fast
Simulations, calculated as described in Section 4.2 (points), compared to
predictions of the mass loss from our model (grey lines). The black crosses
correspond to the model prediction at apocenter. Each row corresponds to a
different density profile, as indicated. In general, the energy-truncation
model agrees with the simulation results to within 20 per cent or better.
While our model does a reasonable job of predicting the evolution of the mass
distribution of the remnants, there are some discrepancies between the total
mass-loss rates predicted by the model and the rates measured in the
simulations. Defining which particles are bound to the simulated subhalo is
not a straightforward procedure, however, and often includes particles that
are temporarily bound but in the process of leaving the system (Peñarrubia et
al., 2009). We show evidence below that marginally bound mass in the outer
parts of the satellites may indeed affect the predicted mass-loss rates (see
e.g. Fig. 6).
The mass-loss picture is further complicated by the fact that it is often
difficult to determine whether mass loss in simulations is due to physical
processes or numerical artefacts (e.g. van den Bosch et al., 2018; van den
Bosch & Ogiya, 2018). Additionally, our mass loss model does not include
dynamical friction, which will cause the pericentric radius to decrease in
time. While dynamical friction against the smooth host potential can largely be
neglected, there is some self-dynamical friction from the remnant orbiting
through its own tidal debris (e.g. Miller et al., 2020).
### 6.2 Profile evolution
In addition to predicting mass loss, our model also predicts the tidal energy
on each orbit, and thus the full structural evolution of the satellite. In
Fig. 6, we show how the density, mass and circular velocity profiles of the
simulations compare to the model predictions, for the Slow (top panels) and
the Fast (bottom panels) simulations. For the Slow Simulations, the agreement
is excellent, with the only significant discrepancy being at large radii for
the EinLow profile.
In the Fast Simulations, the agreement is also excellent at small radii, with
the exception of a slight disagreement in the circular velocity curves of the
KingHigh profiles at small radii; we suspect this is due to numerical
relaxation in the simulation. There are more significant discrepancies at
large radii for most of the profiles. The density profiles measured in the
simulations drop abruptly close to the predicted tidal radius, but then they
extend well beyond this radius with a shallower slope. As discussed in Paper
II and in Peñarrubia et al. (2009), this part of the profile includes
transitional material that is still bound but moving outwards, and will be
lost mostly on the next orbit. Refining the procedure we use to identify bound
particles in the simulation might resolve these discrepancies, and improve the
agreement between predicted and observed mass-loss rates in the case of orbits
with rapid mass loss. It may also be the case that the energy-truncation model fails to
accurately predict the evolution of the outskirts of the profile, due to the
equilibrium assumptions inherent in the model. Understanding the detailed
mass-loss and time scales is an interesting question, and one we leave to
future work.
Overall, our model appears to be universally valid for a wide range of density
profiles. The predictions for the mass loss rate are generally very accurate,
and especially so for the models that might describe dark matter structure
(NFW, Hernquist and Einasto), predicting the remaining bound mass to within
5–10 per cent or better in most cases over the first 5 orbits. The predictions
for the structure of the bound remnants are accurate except in the outer
regions close to the tidal radius; the latter discrepancy might be resolved by
adding a timescale for mass loss to our model, or refining our definition of
bound mass in the simulations. The success of the energy-truncation model is
particularly impressive given that it has no free parameters to adjust.
(a) (b)
Figure 6: Comparison of simulation results (points) and the energy-truncation
model (lines) for the Slow (top) and Fast (bottom) Simulations. Different
curves correspond to successive apocentric passages. The energy-truncation
model agrees well with simulations, except close to the truncation radius
where some additional mass appears to be temporarily bound in the simulations.
Note that for the cored KingLow profile (last column, bottom panel), our model
correctly predicts the observed drop in the central density. Residuals for the
density and mass profiles are shown in Appendix A.
## 7 Central density predictions
The main difference between the energy-truncation model and previous empirical
models of mass loss in the literature is that it predicts a conserved central
density in the tidally stripped systems, even for small bound mass fractions.
This is illustrated in Fig. 7, where we compare our NFW model predictions to
empirical models of stripped NFW systems from Hayashi et al. (2003),
Peñarrubia et al. (2010) and Green & van den Bosch (2019). (Note that the
bound mass fraction is defined in these papers as $M/M_{\rm NFW}(<r_{\rm cut})$;
in the following discussion, we convert all bound mass fractions to
$M/M_{\rm unit}$.) As shown in Paper II, the recent model from Green & van den
Bosch (2019) is likely the most accurate description of tidally stripped NFW
haloes from single-subhalo simulations, while the model from Hayashi et al.
(2003) underestimates the central density.
Figure 7: Our model prediction (DTB20) for the density profile of a tidally
stripped NFW system, compared to previous empirical models of the stripped
profile, including Hayashi et al. (2003) (H03), Peñarrubia et al. (2010) (P10)
and Green & van den Bosch (2019) (G19). Each column is a different bound mass
fraction. The bottom panels show the residuals, calculated as $\rho_{\rm
model}/\rho_{x}-1$, where $\rho_{\rm model}$ is our model from Paper I, and
$\rho_{x}$ are the empirical models. Our model predicts larger central
densities and slightly sharper truncation compared to the others, particularly
at low bound mass fractions.
While the empirical models capture by design the results of the underlying
idealized simulations, it is unclear whether these simulations accurately
describe realistic cosmological situations. Empirical fits to simulations will
also fit numerical artefacts present in the results. For instance, Hayashi et
al. (2003) used an approximation in their initial conditions which caused a
drop in the central density in their simulated haloes that was numerical in
nature; this in turn is reflected in their empirical model for the stripped
profile (Kazantzidis et al., 2004). While the more recent paper by Green & van
den Bosch (2019) uses simulations carefully calibrated to understand the
effects of numerical relaxation on mass loss (Ogiya et al., 2019), it is still
unclear how the very central regions of the haloes are affected. Indeed,
separate work from Errani & Peñarrubia (2019) supports the idea that
centrally-divergent cusps are never disrupted.
Following Binney & Tremaine (1987), given the number of particles within
radius $r$, $N(<r)$, each of mass $m$, we can calculate a relaxation time
scale,
$t_{\rm rel}(r)\approx 0.1\frac{\sqrt{N(<r)}}{\ln N(<r)}\sqrt{\frac{r^{3}}{Gm}}\,\,\,,$ (19)
which is the time it takes for a typical particle’s velocity to change by an
order of itself, and an evaporation time scale, $t_{\rm evap}(r)\approx
136\,t_{\rm rel}(r)$, which is the time for a typical particle to reach escape
speed. Since both times increase with radius, given a time, $t$, we can then
calculate relaxation and evaporation radii such that $t=t_{\rm rel}(r_{\rm
rel})$ and $t=t_{\rm evap}(r_{\rm evap})$, respectively. This will give us an
indication of the radii inside which resolution effects might dominate the
evolution. Note that in Fig. 7, we cannot determine at which radius numerical
relaxation and evaporation will dominate, since there is no temporal
information; the time for a subhalo to reach the indicated mass fraction would
depend on the specific orbit and background potential considered.
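A sketch of how these radii can be evaluated, assuming equal-mass particles and, purely for illustration, the untruncated NFWT cumulative count from Table 1 ($\rho_{0}=0.08\,\rho_{\rm unit}$, $r_{s}=r_{\rm unit}$); the bracket is chosen so that $N(<r)\gg 1$ and Equation (19) is monotonic:

```python
import numpy as np
from scipy.optimize import brentq

G, m = 1.0, 1.0e-6             # particle mass for ~10^6 particles, M_sat = 1

def N_enc(r):                  # untruncated NFW count (illustrative only)
    return 4 * np.pi * 0.08 * (np.log1p(r) - r / (1 + r)) / m

def t_rel(r):                  # Equation (19)
    N = N_enc(r)
    return 0.1 * np.sqrt(N) / np.log(N) * np.sqrt(r**3 / (G * m))

t = 1000.0                     # e.g. t = 1000 t_unit
r_rel = brentq(lambda r: t_rel(r) - t, 0.05, 10.0)
r_evap = brentq(lambda r: 136.0 * t_rel(r) - t, 0.05, 10.0)
print(r_rel, r_evap)           # radii inside which numerical effects may dominate
```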
The slow mass loss case is the one in which the assumptions of the energy-
truncation model appear most valid (Paper II), such that we might expect the
central density to be conserved. The relative relaxation rate increases as the
number of particles in a system decreases, however, and we are unable to
evolve subhaloes to very low masses for the Slow Simulation and still have
enough particles to preserve the subhalo against numerical effects. Thus, we
cannot reliably measure the central density of the halo at late times in the
slow mass-loss simulations.
We can, however, use our analytic mass-loss model, as outlined in Section 2,
to predict the density profile after 10, 50, 100, 250 and 500 orbits. Fig. 8
shows this prediction for the orbit of the Slow Simulation. Additionally, we
show the profile predicted from Green & van den Bosch (2019) (using the same
enclosed mass), and the relaxation and evaporation radii (vertical lines). The
simulations used to calibrate the model in Green & van den Bosch (2019) were
run at a similar resolution to ours ($\sim 10^{6}$ particles), and therefore
should have similar resolution effects. With the exception of the density at
large radii (which depends on the details of how bound mass is calculated),
the results of Green & van den Bosch (2019) are nearly indistinguishable from
our model for $r>r_{\rm evap}$. They differ significantly from the model
predictions at smaller radii, but this is precisely where we expect relaxation
to affect the results.
Figure 8: The evolution of the density profile, as in Fig. 7, but comparing
our model with the model of Green & van den Bosch (2019), after the number of
orbits labelled. The dotted grey lines and dashed black lines show the
relaxation radius and evaporation radius, respectively. The difference in
central density between our model and the model of Green & van den Bosch
(2019) generally lies at radii unresolved in the simulations.
In summary, while we cannot resolve the question of central density
conservation definitively with our own simulations, we have shown the previous
predictions in the innermost parts of the profile need to be treated with
caution, and that the question is not yet resolved by previous work in the
literature. In a companion paper (Drakos et al., 2022, in prep), we explore
the sensitivity of dark matter annihilation rates and galaxy lensing signals
to the assumed behaviour of the innermost part of the density profile.
## 8 Physical interpretation
In the previous sections, we demonstrated the overall accuracy of the energy-
truncation model for a broad range of profiles. In this section, we consider
the physical justification for this model in more detail. As demonstrated in
Choi et al. (2009) and Paper II, during tidal stripping, mass loss appears to
be ordered primarily by the original relative energy of the particles. The
energy-truncation model captures this trend by shifting the relative energy of
all particles by a constant amount over each orbit, and unbinding those under
some threshold binding energy. However it does not explicitly state why the
ordering in relative energy should be conserved, or equivalently, why the
change in relative energy is approximately constant.
In general, over the course of an orbit, a particle can become unbound from a
subhalo because (1) the velocity of the particle changes, e.g. due to a kick
from a rapid perturbation or, (2) the potential of the system changes, due to
a change in the external field, or due to mass loss from the subhalo itself.
When estimating the relative contributions of these effects, we have to be
clear which frame is used to calculate kinetic energies, and which mass is
included in the calculation of the potential. In this section, we will always
define particle velocities $v$ and radii $r$ in the frame of the satellite,
and we will calculate the satellite potential from the self-bound mass defined
previously.
Given these conventions, we can write the two contributions to the change in
energy as
$\Delta\mathcal{E}=\Delta\mathcal{E}_{K}+\Delta\mathcal{E}_{P}\,\,\,,$ (20)
where
$\Delta\mathcal{E}_{K}=-\left[\dfrac{1}{2}(\mathbf{v}+\Delta\mathbf{v})^{2}-\dfrac{1}{2}\mathbf{v}^{2}\right]=-\mathbf{v}\cdot\Delta\mathbf{v}-\dfrac{1}{2}(\Delta\mathbf{v})^{2}\,,$
(21)
assuming the velocity of the particle has changed from $\mathbf{v}$ to
$\mathbf{v}+\Delta\mathbf{v}$.
If we make the impulse approximation — assuming the satellite undergoes a
perturbation or ‘shock’ much shorter than most other timescales in the
problem, so particle positions are approximately constant throughout the event
(e.g. Taylor & Babul, 2001) — we can get an estimate of the energy change from
the shock by treating it as a point-mass perturber $M_{p}$, passing by the
satellite on a linear trajectory with a velocity $v_{p}$ and an impact
parameter $b$. Tidal heating from the shock should accelerate particles,
imparting a change in velocity
$\Delta\mathbf{v}=\dfrac{2GM_{p}}{v_{p}b^{2}}[-x,y,0]\,\,\,,$ (22)
where $(x,y)$ is the position of the particle in the plane of the encounter
(see, e.g. Mo et al., 2010). Since $|\Delta\mathbf{v}|\propto r$, the second
term of Equation (21) gives $|\Delta\mathcal{E}_{K}|\propto r^{2}$ at large
radii, with some additional energy or velocity dependence possible from the
first term on the right-hand side. On the other hand, at small radii, adiabatic
shielding prevents particles from experiencing much net acceleration, since
their internal orbital timescales become shorter than the timescale of the
changing tidal field close to the centre of the satellite (e.g. Gnedin &
Ostriker, 1999). Thus the change in $\mathcal{E}_{K}$ should be approximately
zero in this limit. After this instantaneous change in kinetic energy, the
system re-virializes. The negative heat capacity of the system suggests that
there will be a resulting decrease in kinetic energy (an increase in
$\Delta\mathcal{E}_{K}$).
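A sketch of Equations (21) and (22) for an ensemble of particles, in simulation units; positions and velocities are in the satellite frame and $(x,y)$ spans the encounter plane:

```python
import numpy as np

G = 1.0

def impulse_kick(pos, Mp, vp, b):
    """Equation (22): dv = (2 G M_p / v_p b^2) [-x, y, 0] per particle."""
    x, y = pos[:, 0], pos[:, 1]
    pref = 2 * G * Mp / (vp * b**2)
    return pref * np.column_stack([-x, y, np.zeros_like(x)])

def delta_E_kinetic(vel, dv):
    """Equation (21): change in relative energy from the kinetic term."""
    return -np.einsum('ij,ij->i', vel, dv) - 0.5 * np.einsum('ij,ij->i', dv, dv)
```

Note that $|\Delta\mathbf{v}|$ grows linearly with distance from the satellite centre, which is the origin of the $r^{2}$ scaling quoted above.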
Fig. 9 shows the change in the specific energy of the bound particles over 5
orbits. The top panels show the change in relative energy as a function of
initial radius (left-hand) and initial energy (right-hand panels), while the
bottom panels show how the mass is distributed in the system, as a function of
the same variables. While there is some variation in $|\Delta\mathcal{E}|$ across
the satellite (for particles that remain bound to the system,
$|\Delta\mathcal{E}|$ decreases with radius and also with binding energy),
overall it is fairly constant, varying by $\sim 10$ per cent over the range of
radius or energy that contains most of the mass. (Note that in Paper II, we
showed that the scatter in the initial relative energy of particles stripped
on a given orbit was also $\sim 10$ per cent; see Fig. 5 of Paper II.) We see
larger variations only at large radii or low binding energies, where particles
are close to being stripped. Overall, the mean shift in relative energy from
orbit to orbit is fairly constant as a function of particle radius or initial
energy, and also from particle to particle, even after 5 orbits. This shows
why the energy truncation model produces a good approximation to the rate and
radial dependence of the actual mass loss, particularly in the slow mass-loss
case. The largest variations in $|\Delta\mathcal{E}|$ are close to the edge of
the system, where particles are close to being stripped.
The weak and declining dependence of $|\Delta\mathcal{E}|$ on particle radius
may seem counter-intuitive, since the particles at large radii are expected to
get larger velocity kicks in the satellite frame. Overall, the change in
relative energy does not scale with radius as expected for at least two
reasons. First, particles at small radii tend to be more bound, and therefore
can experience more acceleration and still remain bound, while particles at
large radii that receive large kicks will be lost from the system. We do not
track this unbound material in our model. This mass loss reduces the average
energy change in the material that remains bound, and may also explain why the
energy change decreases in magnitude quite rapidly close to the boundary of
the system. The second reason is that the main contribution to the relative
energy change is actually from the potential term, which decreases more or
less uniformly over the whole inner part of the satellite, as mass is stripped
primarily from the outside in. This term will deviate from a constant only in
the outer part of the satellite where some but not all of the mass is lost, so
here too this contributes to the rapid change in $|\Delta\mathcal{E}|$ at
large radii.
Figure 9: The average change in relative energy of the satellite self-bound
particles ($\Delta\mathcal{E}$). Note that negative changes in energies
correspond to particles becoming less bound to the satellite. Specific
energies are calculated for both the Slow Simulation (top) and Fast Simulation
(bottom) after one, three and five orbits. Shaded regions show the $1\sigma$
standard deviation in each bin. We also show the mass distribution of
particles as a function of initial radius (left) and initial energy (right).
We can also consider which particles are removed in phase space. In Fig. 10 we
show the average change in relative energy as a function of initial particle
position. Close to the phase-space boundary of the system (at a constant
relative energy), the change in relative energy is constant. The decrease in
relative energy in the centre of the system is due to the overall mass of the
system decreasing, causing particles to become less bound. This figure
demonstrates that $\Delta\mathcal{E}_{P}$ for the self-bound particles mainly
depends on their initial energy, especially for the Slow Simulation.
Figure 10: The average particle change in relative energy,
$\overline{\Delta\mathcal{E}}$, as a function of the initial location in phase
space for the self-bound remnant. We plot contours of constant energy in an
NFW profile with dashed lines. The change of energy is constant along the edge
of the system, which roughly corresponds to a constant energy, indicating that
the change in relative energy is primarily a function of the particles' initial energy.
The decrease in $\mathcal{E}$ in the center of the system (initial
$(r,v)\approx(0,0)$) is due to the decreased self-bound mass of the satellite.
In summary, there are several reasons why the energy-truncation model provides
a good estimate of the rate and radial dependence of mass loss, or
equivalently, why mass loss remains ordered by the original relative energy to
a significant degree. While simple arguments predict that tidal heating should
depend on particle radius and possibly also on energy, this variation is
reduced by selective mass loss at large radii and/or low energies, and the net
variation in the remaining bound mass is much smaller. Furthermore, the total
change in relative energy is actually dominated by the potential energy
change, which is relatively constant throughout the inner part of the
satellite.
Overall, the physical picture for mass-loss in our framework is that as the
satellite moves through the background potential, the tidal fields heat its
particle orbits and deform its self-potential (see Appendix B). The lowest
$\mathcal{E}$ particles rapidly escape, since they are no longer bound to the
satellite. Once this mass loss has occurred, however, the potential well of
the system decreases by a roughly constant amount, and the energies of all the
particles are consequently shifted while remaining ordered. Our model
approximates this process by assuming the initial distribution function is
lowered by a constant amount on each orbit. As discussed in Section 9, future
work will focus on improvements to this energy-truncation model, for instance
by adding scatter to the change in relative energy.
## 9 Discussion
In Paper I, we described an approach to modelling tidal mass loss based on
truncating and lowering the subhalo DF by a specified tidal energy. In Paper
II we used this approach to develop a model for mass loss with zero free
parameters, and showed that it provided a good description of the evolution of
subhaloes with NFW density profiles. In this work, we have demonstrated that
energy truncation and our mass loss model can be applied to a wide range of
other density profiles, with similar accuracy. We found the model does an
excellent job of describing the density, mass and velocity profile evolution
in all the tested cases. Additionally, our model naturally captures the effect
that cuspy profiles conserve central density, while cored profiles have a
large decrease in central density, as described in Peñarrubia et al. (2010).
Idealized simulations of individual subhaloes (such as the ones used in this
paper) require an assumption about the initial profile; this is typically
taken to be NFW, despite evidence that dark matter haloes are probably better
described by Einasto profiles (e.g. Navarro et al., 2010; Klypin et al.,
2016). Additionally, a recent study by Brown et al. (2020) indicates that the
universal profile might be due to the narrow range of initial conditions used
in simulations, and the diversity of dark matter profiles should be much
larger than previously thought. Regardless, the density profile of dark matter
haloes in cosmological $N$-body simulations is never resolved below the scale
of the softening length. To establish the true behaviour of haloes in their
innermost regions, we will likely need a theoretical model for the origin of
the universal density profile.
As discussed in Paper II, mass loss predictions are sensitive to the central
density of the system. Artificial disruption is likely a huge problem in
cosmological simulations on subhalo scales (van den Bosch et al., 2018; Errani
& Peñarrubia, 2019; Errani & Navarro, 2021; Green et al., 2021) and is thought
to be caused by artificial constant-density cores on the scale of the
resolution limit of the simulation that drive enhanced tidal disruption.
Controlled simulations suggest that centrally divergent profiles can never be
fully disrupted by a smooth potential (e.g. Kazantzidis et al., 2004; Errani &
Peñarrubia, 2019). Similarly, Amorisco (2021) recently argued that the
conditions of (1) a centrally divergent density profile $\rho\sim r^{-1}$ and
(2) an isotropic phase space distribution are sufficient to ensure that
subhaloes can never be disrupted. These findings are in agreement with the
energy-truncation model, which predicts that systems will naturally always
retain a bound remnant as long as $\bar{\rho}(r)\to\infty$ as $r\to 0$.
These issues are particularly interesting in the context of microhaloes. These
structures form early in the universe, and their size is determined by the
free streaming scale of the dark matter particles. Assuming dark matter
particles have a mass of 100 GeV, the mass of the smallest microhaloes is
approximately an Earth mass (e.g. Diamanti et al., 2015). These early haloes
probably have cuspy profiles, with an inner slope of $\sim-1.5$ (e.g. Ogiya et
al., 2016). Most evidence suggests that the steep central cusps of microhaloes
are not conserved, and mergers are thought to drive profiles towards the
universal density profile, with an inner slope of $-1$. For instance, Ishiyama
(2014) shows that the cusp slope gradually becomes shallower with increasing
halo mass. More recently, using cosmological zoom-in simulations, Wang et
al. (2019) were able to simulate Earth-mass haloes at redshift zero, and found
that they are well described by NFW profiles (with an inner slope of $-1$).
Generally, it is thought that major mergers are responsible for this
flattening (e.g. Ogiya et al., 2016; Angulo et al., 2017; Delos et al., 2019).
Contrary to this picture, however, isolated simulations of major mergers show
that the inner densities of haloes are typically conserved (e.g. Kazantzidis
et al., 2006; Drakos et al., 2019a, b). A possible solution to this
discrepancy is that isolated binary merger simulations are overly simplistic,
and do not take into account the complex gravitational interactions to which
haloes are subject. However, it is also possible that high central densities
are not conserved in cosmological simulations due to numerical artefacts. The
central densities of these microhaloes are especially important to dark matter
annihilation calculations. For instance, Ishiyama (2014) found that
microhaloes had higher densities than expected from extrapolations of the low-
redshift concentration–mass–redshift relation, while Okoli et al. (2018)
showed that if these densities are conserved, it could increase estimates of
the dark matter annihilation boost factor by up to two orders of magnitude.
Overall, there is a growing body of evidence that the evolution of the central
density of haloes and their concentrations is not well understood.
This is particularly concerning since models of substructure evolution are key
ingredients in lensing predictions and dark matter annihilation constraints.
The consequences of this are examined in a companion paper, where we show that
while lensing constraints are not particularly sensitive to assumptions about
subhalo central density, dark matter annihilation signals depend very
sensitively on this quantity, and derived constraints on dark matter
properties are therefore still uncertain (Drakos et al., 2022, in prep).
The energy-truncation model used in this work is a promising approach to
modelling substructure evolution. As examined thoroughly in Paper II, the
energy truncation model includes a number of simplifying assumptions which
could be relaxed to make more accurate predictions. A straightforward
extension of the model would be to allow some additional particle-to-particle
scatter in the energy change between orbits; equivalently, we could remove
particles from the DF using a more gradual cut-off in energy, rather than an
abrupt truncation. Additionally, we only consider mass loss in discrete steps
once per orbit, at the time of pericentric passage. Continuous models for
tidal mass loss are often created by dividing the orbital period into discrete
steps and assuming a fraction of the mass outside the tidal radius is lost at
each of these steps, according to a characteristic time scale for mass loss
(e.g. Taylor & Babul, 2001). We found in Paper II that a continuous mass-loss
model of this kind requires an additional free parameter, calibrated to
simulations, and thus we decided to use the discrete orbit-averaged model
instead. However, this is clearly a simplification, and we aim to improve the
mass-loss model in future work.
While our findings support the assertion that our energy-truncation model is
universal, we have only considered spherical, isotropic subhaloes, evolving in
an NFW main potential. Examining more complicated ICs (including anisotropic
and multicomponent systems) will be the focus of future work. Another
interesting extension would be to include a disk potential to the host halo,
as it has been shown that including an embedded central disk potential to
dark-matter-only simulations increases the rate of subhalo disruption (e.g.
Garrison-Kimmel et al., 2017). For example, recent work by Webb & Bovy (2020)
considers more realistic external tidal fields, including baryonic bulge and
disk components, and finds that the baryonic components lead to more subhalo
disruption and lower mass subhaloes. Since the energy-truncation model works
well for a wide range of collisionless systems, it may also be useful for
predicting the evolution of multiple-component systems. For instance, tidal
stripping is thought to be a likely mechanism for creating observed dark-
matter-deficient ultra-diffuse galaxies (e.g. Ogiya et al., 2021b; Ogiya et
al., 2021a). In future work, we will examine the evolution of these galaxies
in energy space.
Overall, a growing body of evidence suggests that the central density and
concentration of dark matter haloes are not well understood. Physically-based
models for subhalo evolution are critical for correctly predicting density
profile evolution at small radii, as these regions cannot be resolved in
simulations. The theoretically-based energy-truncation model offers a
convenient tool to study the evolution of collisionless systems as they are
tidally stripped, allowing us to work towards a complete understanding of how
subhaloes evolve, and to ultimately place more robust and accurate constraints
on the properties of dark matter.
## Acknowledgements
The authors thank the anonymous referee for useful comments. NED acknowledges
support from NSERC Canada, through a postdoctoral fellowship. JET acknowledges
financial support from NSERC Canada, through a Discovery Grant. The
simulations for this work were carried out on computing clusters provided by
Compute Ontario (https://computeontario.ca/) and Compute Canada
(www.computecanada.ca).
_Software_ : Gadget-2 (Springel, 2005), Icicle (Drakos et al., 2017), numpy
(Harris et al., 2020), matplotlib (Hunter, 2007), scipy (Virtanen et al.,
2020).
## Data availability
The data underlying this article will be shared on reasonable request to the
corresponding author.
## References
* Amorisco (2021) Amorisco N. C., 2021, arXiv e-prints, p. arXiv:2111.01148
* Angulo et al. (2017) Angulo R. E., Hahn O., Ludlow A. D., Bonoli S., 2017, MNRAS, 471, 4687
* Baltz et al. (2009) Baltz E. A., Marshall P., Oguri M., 2009, J. Cosmology Astropart. Phys., 2009, 015
* Binney & Tremaine (1987) Binney J., Tremaine S., 1987, Galactic dynamics. Princeton University Press
* Bonaca et al. (2020) Bonaca A., et al., 2020, ApJ, 892, L37
* Boylan-Kolchin & Ma (2007) Boylan-Kolchin M., Ma C.-P., 2007, MNRAS, 374, 1227
* Brown et al. (2020) Brown S. T., McCarthy I. G., Diemer B., Font A. S., Stafford S. G., Pfiefer S., 2020, MNRAS, 495, 4994
* Bullock & Boylan-Kolchin (2017) Bullock J. S., Boylan-Kolchin M., 2017, ARA&A, 55, 343
* Carlberg (2020) Carlberg R. G., 2020, ApJ, 889, 107
* Choi et al. (2009) Choi J.-H., Weinberg M. D., Katz N., 2009, MNRAS, 400, 1247
* Delos (2019) Delos M. S., 2019, Phys. Rev. D, 100, 063505
* Delos et al. (2019) Delos M. S., Bruff M., Erickcek A. L., 2019, Phys. Rev. D, 100, 023523
* Diamanti et al. (2015) Diamanti R., Catalan M. E. C., Ando S., 2015, Phys. Rev. D, 92, 065029
* Drakos et al. (2017) Drakos N. E., Taylor J. E., Benson A. J., 2017, MNRAS, 468, 2345
* Drakos et al. (2019a) Drakos N. E., Taylor J. E., Berrouet A., Robotham A. S. G., Power C., 2019a, MNRAS, 487, 993
* Drakos et al. (2019b) Drakos N. E., Taylor J. E., Berrouet A., Robotham A. S. G., Power C., 2019b, MNRAS, 487, 1008
* Drakos et al. (2020) Drakos N. E., Taylor J. E., Benson A. J., 2020, MNRAS, 494, 378
* Drakos et al. (2022) Drakos N. E., Taylor J. E., Benson A. J., 2022, Do assumptions about the central density of halos affect lensing and dark matter annihilation calculations?, _In prep._
* Einasto (1965) Einasto J., 1965, Trudy Inst. Astrofiz. Alma-Ata, 5, 87
* Errani & Navarro (2021) Errani R., Navarro J. F., 2021, MNRAS, 505, 18
* Errani & Peñarrubia (2019) Errani R., Peñarrubia J., 2019, MNRAS, p. 2998
* Errani et al. (2021) Errani R., Navarro J. F., Ibata R., Peñarrubia J., 2021, arXiv e-prints, p. arXiv:2111.05866
* Gao et al. (2008) Gao L., Navarro J. F., Cole S., Frenk C. S., White S. D. M., Springel V., Jenkins A., Neto A. F., 2008, MNRAS, 387, 536
* Garrison-Kimmel et al. (2017) Garrison-Kimmel S., et al., 2017, MNRAS, 471, 1709
* Gnedin & Ostriker (1999) Gnedin O. Y., Ostriker J. P., 1999, ApJ, 513, 626
* Green & van den Bosch (2019) Green S. B., van den Bosch F. C., 2019, MNRAS, 490, 2091
* Green et al. (2021) Green S. B., van den Bosch F. C., Jiang F., 2021, MNRAS, 503, 4075
* Harris et al. (2020) Harris C. R., et al., 2020, Nature, 585, 357
* Hayashi et al. (2003) Hayashi E., Navarro J. F., Taylor J. E., Stadel J., Quinn T., 2003, ApJ, 584, 541
* Hernquist (1990) Hernquist L., 1990, ApJ, 356, 359
* Hunter (2007) Hunter J. D., 2007, Computing in Science & Engineering, 9, 90
* Ishiyama (2014) Ishiyama T., 2014, ApJ, 788, 27
* Kampakoglou & Benson (2007) Kampakoglou M., Benson A. J., 2007, MNRAS, 374, 775
* Kazantzidis et al. (2004) Kazantzidis S., Magorrian J., Moore B., 2004, ApJ, 601, 37
* Kazantzidis et al. (2006) Kazantzidis S., Zentner A. R., Kravtsov A. V., 2006, ApJ, 641, 647
* King (1962) King I., 1962, AJ, 67, 471
* King (1966) King I. R., 1966, AJ, 71, 64
* Klypin et al. (2016) Klypin A., Yepes G., Gottlöber S., Prada F., Heß S., 2016, MNRAS, 457, 4340
* Limousin et al. (2005) Limousin M., Kneib J.-P., Natarajan P., 2005, MNRAS, 356, 309
* Miller et al. (2020) Miller T. B., van den Bosch F. C., Green S. B., Ogiya G., 2020, MNRAS, 495, 4496
* Mo et al. (2010) Mo H., van den Bosch F. C., White S., 2010, Galaxy Formation and Evolution
* Navarro et al. (2004) Navarro J. F., et al., 2004, MNRAS, 349, 1039
* Navarro et al. (2010) Navarro J. F., et al., 2010, MNRAS, 402, 21
* Ogiya et al. (2016) Ogiya G., Nagai D., Ishiyama T., 2016, MNRAS, 461, 3385
* Ogiya et al. (2019) Ogiya G., van den Bosch F. C., Hahn O., Green S. B., Miller T. B., Burkert A., 2019, MNRAS, 485, 189
* Ogiya et al. (2021a) Ogiya G., van den Bosch F. C., Burkert A., 2021a, arXiv e-prints, p. arXiv:2111.12104
* Ogiya et al. (2021b) Ogiya G., Taylor J. E., Hudson M. J., 2021b, MNRAS, 503, 1233
* Okoli et al. (2018) Okoli C., Taylor J. E., Afshordi N., 2018, J. Cosmology Astropart. Phys., 8, 019
* Peñarrubia et al. (2008a) Peñarrubia J., McConnachie A. W., Navarro J. F., 2008a, ApJ, 672, 904
* Peñarrubia et al. (2008b) Peñarrubia J., Navarro J. F., McConnachie A. W., 2008b, ApJ, 673, 226
* Peñarrubia et al. (2009) Peñarrubia J., Navarro J. F., McConnachie A. W., Martin N. F., 2009, ApJ, 698, 222
* Peñarrubia et al. (2010) Peñarrubia J., Benson A. J., Walker M. G., Gilmore G., McConnachie A. W., Mayer L., 2010, MNRAS, 406, 1290
* Sereno et al. (2016) Sereno M., Fedeli C., Moscardini L., 2016, J. Cosmology Astropart. Phys., 2016, 042
* Springel (2005) Springel V., 2005, MNRAS, 364, 1105
* Stref et al. (2019) Stref M., Lacroix T., Lavalle J., 2019, Galaxies, 7, 65
* Stücker et al. (2021) Stücker J., Angulo R. E., Busch P., 2021, arXiv e-prints, p. arXiv:2107.13008
* Taylor & Babul (2001) Taylor J. E., Babul A., 2001, ApJ, 559, 716
* Tormen et al. (1997) Tormen G., Bouchet F. R., White S. D. M., 1997, MNRAS, 286, 865
* Virtanen et al. (2020) Virtanen P., et al., 2020, Nature Methods, 17, 261
* Wang et al. (2019) Wang J., Bose S., Frenk C. S., Gao L., Jenkins A., Springel V., White S. D. M., 2019, arXiv e-prints, p. arXiv:1911.09720
* Webb & Bovy (2020) Webb J. J., Bovy J., 2020, MNRAS, 499, 116
* Widrow & Dubinski (2005) Widrow L. M., Dubinski J., 2005, ApJ, 631, 838
* van Kampen (2000) van Kampen E., 2000, preprint, (arXiv:astro-ph/0002027)
* van den Bosch & Ogiya (2018) van den Bosch F. C., Ogiya G., 2018, MNRAS, 475, 4066
* van den Bosch et al. (2018) van den Bosch F. C., Ogiya G., Hahn O., Burkert A., 2018, MNRAS, 474, 3043
## Appendix A Accuracy of profile predictions
Fig. 11 shows the residuals in the density and mass profiles of the energy-
truncation model compared to the simulation results. Residuals are calculated
as the difference between the model predictions and simulations, divided by
the model predictions. We note that there is very little mass within $\sim
r_{\rm unit}$—particularly for the EinLow and KingLow simulations—which makes
the relative errors in the mass profiles untrustworthy at low radii.
Nonetheless, these residual plots offer important insights into the accuracy
of the energy-truncation model.
For the Slow Simulation (top), the model generally matches the simulation to
within $\sim 10$ per cent, except when $r$ approaches the truncation radius.
An exception is the KingLow profile, whose central density is very sensitive
to the total mass of the system. In general, for the Slow Simulation, the
energy-truncation model over-predicts the central density and mass of the
stripped satellite. The Fast Simulations (bottom) show a larger disagreement
with the energy-truncation model (but the central density is still fairly well
predicted, to within $\sim 20$ per cent or better). Since this simulation was chosen as a
case where the energy-truncation model does not work as well (see Papers I and
II), this is expected. In general, large deviations between the model and
simulations can be seen at radii larger than $\sim 2\,r_{\rm unit}$. At these
larger radii, the energy-truncation model predicts too much mass for the Slow
Simulation, and too little mass for the Fast Simulation. This result is
consistent with the mass-loss curves shown in Fig. 5.
(a) (b)
Figure 11: Residuals in the density profiles and mass profiles of the tidally
stripped systems, when comparing the energy-truncation model to the Slow (top)
and Fast (bottom) Simulations shown in Fig. 6. The shaded regions
correspond to 10 per cent (dark grey) and 20 per cent (light grey) agreement.
In general, the model does a fairly good job at predicting the central density
(within $\sim 2\,r_{\rm unit}$). At larger radii, there are significant
deviations between the energy-truncation model and the self-bound remnant from
simulations. The exact origin of these deviations is unclear, but it is likely
tied to the detailed mass-loss timescales of each particle, and to how the
bound particles of the satellite are defined.
This paper has mainly focussed on predicting the central density of
satellites, which the energy-truncation model does fairly well. The material
at large subhalo radii is more dynamically complicated, and is sensitive to
how we define bound particles. Future work will focus on understanding the
detailed time scales of mass loss for all particles, and whether some of the
bound mass is in the process of leaving the system.
## Appendix B The deforming bowl picture of tidal mass loss
Stücker et al. (2021) recently proposed that there is a natural energy at
which particles will be lost, calculated from the “boosted potential”. They
advocate a model for tidal stripping in which mass loss is explained by a
lowering of the escape energy caused by the tidal field, termed the “deforming
bowl” picture. This naturally leads to the concept of a “truncation energy”.
In tidal stripping analyses, we often only consider the self potential of the
satellite. However, the satellite exists within the large scale potential of
the host it is orbiting, as illustrated in Fig. 12. This figure shows an
(infinitely extended) NFW satellite potential of mass $M_{\rm sat}$ located at
$R_{P}=50\,r_{\rm unit}$ from the centre of the host halo ($M_{H}=300M_{\rm sat}$).
Figure 12: The global potential around an NFW satellite located at
$R=50\,r_{\rm unit}$ from the centre of its host halo. The potential from the
host halo strongly affects the local potential of the satellite.
To account for the effect of the host potential on the satellite, Stücker et
al. (2021) define the boosted potential as:
$\Phi_{\rm boost}(\mathbf{x})=\Phi(\mathbf{x})+\mathbf{x}\cdot\mathbf{a_{0}}$
(23)
where $\Phi$ is the total (host plus satellite) potential and $\mathbf{a_{0}}$
is an additional apparent acceleration. Thus, in our example, the boosted
potential subtracts the local large scale gradient of the host halo from the
total potential.
Under the assumption of spherical symmetry for both the host and satellite
halo, and assuming that the satellite is located at pericentre, Stücker et al.
(2021) define the boosted potential of the satellite system as:
$\Phi_{\rm boost}(R)=\Phi_{H}(R)+\Phi_{\rm sat}(r)-(R-R_{p})\dfrac{GM_{H}(<R_{p})}{R_{p}^{2}}-\Phi_{H}(R_{p})\,\,\,,$ (24)
where $R=R_{p}+r$. The final term has been subtracted to give the self-
potential of the satellite. Solving for the saddle points of Equation (24)
gives:
$\bar{\rho}_{\rm sat}(r)=\pm\left[\dfrac{R_{p}}{r}-\dfrac{R_{p}^{3}}{r(R_{p}\pm r)^{2}}\dfrac{M_{H}(<R_{p}\pm r)}{M_{H}(<R_{p})}\right]\bar{\rho}_{H}(R_{p})\,\,\,.$ (25)
Fig. 13 shows the boosted potential of an NFW satellite. For this example, the
saddle points are located at $r=-7.3\,r_{\rm unit}$ and $8.4\,r_{\rm unit}$.
In the deforming bowl picture, particles are lost as they spill over the top
of this bowl.
Figure 13: The potential of an NFW satellite (dashed grey line) and the
boosted potential (solid black line). In the “deforming bowl" picture,
particles outside the saddle points of the boosted potential are lost.
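A sketch locating the saddle points of Equation (24) numerically along the satellite–host axis, for an NFW satellite (Table 1) inside an NFW host; the host normalization here is an illustrative assumption (we fix $M_{H}(<10\,r_{s,H})=300$), so the exact saddle locations should not be over-interpreted:

```python
import numpy as np

G = 1.0

def nfw_menc(R, rho0, rs):
    x = R / rs
    return 4 * np.pi * rho0 * rs**3 * (np.log1p(x) - x / (1 + x))

def nfw_phi(R, rho0, rs):
    return -4 * np.pi * G * rho0 * rs**3 * np.log1p(R / rs) / R

rs_H = 6.69
rho0_H = 300.0 / (4 * np.pi * rs_H**3 * (np.log(11.0) - 10.0 / 11.0))  # assumption
rho0_S, rs_S, Rp = 0.08, 1.0, 50.0

def phi_boost(r):                       # Equation (24), with R = Rp + r
    R = Rp + r
    return (nfw_phi(R, rho0_H, rs_H) + nfw_phi(np.abs(r), rho0_S, rs_S)
            - (R - Rp) * G * nfw_menc(Rp, rho0_H, rs_H) / Rp**2
            - nfw_phi(Rp, rho0_H, rs_H))

r = np.linspace(-30, 30, 6000)          # grid chosen to avoid r = 0 exactly
pb = phi_boost(r)
peaks = np.flatnonzero((pb[1:-1] > pb[:-2]) & (pb[1:-1] > pb[2:])) + 1
print(r[peaks])                         # one saddle on each side (cf. Fig. 13)
```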
This picture of mass-loss can be easily integrated into our energy-truncation
model with the observation that Equation (25) is in the form of Equation (5),
with
$\eta_{\rm boost}=\pm\left(\dfrac{R_{p}}{r_{\rm lim}}-\dfrac{R_{p}^{3}}{r_{\rm lim}(R_{p}\pm r_{\rm lim})^{2}}\dfrac{M_{H}(<R_{p}\pm r_{\rm lim})}{M_{H}(<R_{p})}\right)\,\,\,.$ (26)
Other common definitions of $\eta$, as discussed in detail in Paper II,
include the well-known Roche ($\eta=2$) and Jacobi limits ($\eta=3$),
$\eta_{1}=2-\dfrac{{\rm d}\ln M}{{\rm d}\ln R}\,\,\,,$ (27)
which describes a satellite and host with extended bodies, and
$\eta_{2}=\dfrac{\omega^{2}}{\omega_{c}^{2}}-\dfrac{1}{\omega_{c}^{2}}\dfrac{{\rm d}^{2}\Phi}{{\rm d}R^{2}}\,\,\,,$ (28)
which includes the centrifugal force.
In Paper II we found that in the energy-truncation framework, $\eta_{2}$
overestimates mass-loss rates and works best for circular orbits, while
$\eta_{1}$ underestimates mass loss and works best for radial orbits. The
value we adopt, $\eta_{\rm eff}$, is the orbital average of the instantaneous
value of $\eta_{2}$ (Equation (7)). In practice, we found $\eta_{\rm eff}$
(which lies between $\eta_{1}$ and $\eta_{2}$) worked best to describe mass-
loss.
There are important differences between Equation (26) and the alternate $\eta$
definitions listed above. First, there are two $\eta_{\rm boost}$ values,
reflecting the fact that the tidal field is stronger on the side of the
satellite closer to the host halo. We will take $\eta_{\rm boost}$ to be the
negative root, as this corresponds to stronger tidal forces. Secondly, since
the value of $r_{\rm lim}$ depends on the satellite model, $\eta_{\rm boost}$
does not only depend on the host halo and orbit, but also on the details of
the satellite profile.
In Fig. 14 we compare $\eta_{\rm boost}$ to $\eta_{\rm eff}$ as well as
commonly used values in the literature. We consider both the Slow and Fast
Simulation, and use an infinitely extended NFW profile to calculate the
$\eta_{\rm boost}$ values. We find that the boosted potential calculation
predicts an $\eta$ value between $\eta_{1}$ and $\eta_{\rm eff}$. Since we
interpret $\eta_{\rm eff}$ as being the “true” value, this suggests that
$\eta_{\rm boost}$ underestimates mass-loss in our framework.
Figure 14: Values of $\eta$ used in the mass-loss prescription, Equation (6).
From left to right: (1) $\eta_{\rm eff}$, the value we use in Equation (7),
shown to match well with simulations (Paper II) and so considered close to the
“true” value; (2) $\eta_{1}$, Equation (27), which works well for radial
orbits; (3) $\eta_{2}$, Equation (28), which works well for circular orbits;
and (4) $\eta_{\rm boost}$, Equation (26), which is calculated from the boosted
potential. For comparison, we also show the Roche ($\eta=2$) and Jacobi
($\eta=3$) limits. $\eta_{\rm boost}$ is lower than the “true” value,
$\eta_{\rm eff}$, and therefore under-predicts mass loss in the energy-
truncation framework.
Although $\eta_{\rm boost}$ (and effectively the truncation energy) calculated
from this formulation of the boosted potential underestimates the best-fit
value (which is expected to be roughly equivalent to $\eta_{\rm eff}$), the
boosted potential framework seems to be a natural extension to the energy-
based tidal stripping model we advocate. A potential explanation for why
$\eta_{\rm boost}$ is too low is that we neglected the centrifugal term;
i.e. we could subtract a term $\frac{1}{2}|\omega\times\mathbf{R}|^{2}$ from
Equation (24) to account for the corotating potential. We leave more detailed
explorations of the boosted potential to future work.
# Semicomplete Arithmetic Sequences, Division of Hypercubes, and the Pell
Constant
Z. Hoelscher
###### Abstract
In this paper we produce a few continuations of our previous work on
partitions into fractions. Specifically, we study strictly increasing integer
sequences $\\{n_{j}\\}$ such that there are partitions for all integers less
than the floor of $G$, where
$G=\frac{n_{1}}{j}+\frac{n_{2}}{j}+\cdots+\frac{n_{j-1}}{j}+\frac{n_{j}}{j}$,
and all summands are distinct terms drawn from
$\\{\frac{n_{1}}{j},\frac{n_{2}}{j},\ldots,\frac{n_{j}}{j}\\}$. We
call such sequences “semicomplete”. We find that there are only three
semicomplete arithmetic sequences. We also study sequences that give the
maximum number of pieces that an $M$ dimensional hypercube can be cut into
using $N-1$ hyperplanes. We find that these are semicomplete in one, two,
three, and four dimensions. As an aside, we use one of our generating
functions to produce what appears to be a new identity for the Pell constant,
a number which is closely connected to the density of solutions to the
negative Pell equation.
Keywords : Integer partitions, $q$-series, Pell equation, Pell constant
Mathematics Subject Classification (2020) : 05A17, 11B65
## 1 Introduction
Integer partitions have a long history of study, with mathematicians from
Euler to Ramanujan producing interesting results. Past mathematicians have
also studied complete sequences. These are defined in the work of Brown [1] as
sequences $\\{f_{i}\\}^{\infty}_{i=1}$ such that all terms are positive and
every $n\in\mathbb{N}$ can be written as shown below.
$n=\sum^{\infty}_{i=1}\alpha_{i}f_{i}\,,\qquad\alpha_{i}\in\\{0,1\\}$
(1.0.1)
In this paper we build on our previous work, where we studied partitions of
integers into fractions with constant denominators and distinct numerators
drawn from sets of even or odd integers [2]. We noted that for any positive
integer $j$, one can write $j^{2}=\sum_{n=1}^{j}(2n-1)$. After dividing both
sides of the equation by $j$, one can represent $j$ as a sum of fractions.
$j=\frac{1}{j}+\frac{3}{j}+\frac{5}{j}+\cdots+\frac{2j-1}{j}$ (1.0.2)
We then proved that when $j>2$, any $k\in\mathbb{N},k<j$ can be written as the
sum of some combination of distinct terms from the series that sums to produce
$j$. We noted that this is not always possible when the numerators are even
integers, prompting us to consider what other sequences allow for similar
behavior. Such a generalization of the problem is considered in this paper.
Suppose we have a series of the form given below:
$G=\frac{n_{1}}{j}+\frac{n_{2}}{j}+\cdots+\frac{n_{j-1}}{j}+\frac{n_{j}}{j}$
(1.0.3)
We look for solutions for natural numbers $k<\lfloor G\rfloor$, where
$k=\frac{a_{1}}{j}+\frac{a_{2}}{j}+\cdots+\frac{a_{m-1}}{j}+\frac{a_{m}}{j}$,
and $a_{1},\ldots,a_{m}$ are distinct coefficients drawn from
$\\{n_{1},n_{2},\ldots,n_{j}\\}$. We also require that $k,j\in\mathbb{N}$,
$j>2$. We say $\\{n_{j}\\}$ is semicomplete if it enables such solutions for
all $k$, where the terms of the sequence strictly increase, and all terms are
positive. We refer to such sequences as semicomplete because a semicomplete
sequence is not necessarily complete. For example, we prove that the cake
numbers are both complete and semicomplete, yet the odd integers are only
semicomplete. This is because the odd integers enable sums for all $kj$ under
the restrictions above, though one cannot find integers such as 2 as the sum
of distinct odd summands. This makes semicomplete sequences perhaps a bit less
obvious, as one does not have to be able to find partitions for every natural
number. Such questions lead naturally to the following theorem.
###### Theorem 1.0.1
The only semicomplete arithmetic sequences are $\\{1,3,5,7,9\ldots\\}$,
$\\{2,3,4,5,6\ldots\\}$, and $\\{1,2,3,4,5\ldots\\}$.
We prove this theorem in Section 2.
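Semicompleteness for a given sequence can be checked computationally by brute force: for each $j>2$, every $k<\lfloor G\rfloor$ must satisfy that $kj$ is a subset sum of the numerators $\\{n_{1},\ldots,n_{j}\\}$. A minimal sketch (a finite check over a range of $j$, not a proof):

```python
def subset_sums(nums):
    sums = {0}
    for n in nums:
        sums |= {s + n for s in sums}
    return sums

def semicomplete_up_to(term, j_max):
    """term(i) is the i-th numerator (i = 1, 2, ...); tests j = 3, ..., j_max."""
    for j in range(3, j_max + 1):
        nums = [term(i) for i in range(1, j + 1)]
        reachable = subset_sums(nums)
        # require k*j reachable for every natural k < floor(G) = sum(nums) // j
        if any(k * j not in reachable for k in range(1, sum(nums) // j)):
            return False
    return True

print(semicomplete_up_to(lambda i: 2 * i - 1, 12))  # odd integers: True
print(semicomplete_up_to(lambda i: 2 * i, 12))      # even integers: False
```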
In our previous paper, we conjectured that the cake numbers and lazy caterer’s
sequence are both semicomplete [2]. We prove this now, and generalize these
sequences to higher dimensions. Let $C_{M}^{N}$ denote the maximum number of
pieces that an $M$-dimensional hypercube can be cut into using $N-1$
hyperplanes. Let $\\{C_{M}^{N}\\}$ be the sequence of such terms for a given
value of $M$ where $N$ varies. (Observe that when $M=2$ we have the lazy
caterer’s sequence, and when $M=3$, we have the cake numbers.) Note that we
use $N-1$ for the number of cuts so that $N=1$ gives the first term of the
sequence, $N=5$ gives the fifth term, et cetera, rather than $N=0$ for the
first term and $N=4$ for the fifth.
###### Theorem 1.0.2
For values of $M$ equal to one, two, three, or four, $\\{C_{M}^{N}\\}$ is
semicomplete.
We provide a proof for this in Section 3. We also give an interesting
conjecture on these sequences.
###### Conjecture 1.0.3
$\\{C_{M}^{N}\\}$ is semicomplete for any value of $M\geq 1$,
$M\in\mathbb{N}$.
If true, this would imply that an infinite number of semicomplete sequences
exist, hence one cannot list them all. To solve the full semicompleteness
problem, one would then have to find the general necessary and sufficient
conditions for any type of sequence to be semicomplete.
Perhaps a more interesting problem would be to search for sequences that, like
the odd integers, are semicomplete but not complete. We know of one example,
$\\{2,3,4,5,6,\ldots\\}$, though it would be interesting to find more.
We note that if one does not require $k$ to be an integer, one can write the
generating function for the case of odd numerators as shown below.
$\prod_{n=1}^{j}(1+q^{2n-1})=\sum_{kj=0}^{j^{2}}f_{O_{j}}(k,j)q^{kj}$ (1.0.4)
As a somewhat interesting aside, we give what appears to be a new $q$-series
identity for the Pell constant, $\mathcal{P}_{Pell}$. This constant is closely
connected to the product $\prod_{n=1}^{j}(1+q^{2n-1})$ as well as the density
of solutions to the negative Pell equation $x^{2}-Dy^{2}=-1$ [3]. It is
currently unknown whether the Pell constant is transcendental, though it has
been proven to be irrational [3].
###### Theorem 1.0.4
$F(q)=\sum_{m=0}^{j+1}\sum_{n=0}^{j+1}\frac{(-1)^{\frac{3n}{2}}q^{\frac{1}{2}(n^{2}-2n+2)}(-1)^{\frac{m}{2}}q^{\frac{1}{2}(m^{2}-2m)}}{q+1}{j+1\brack
n}_{q}{j+1\brack m}_{q}$ (1.0.5)
$\mathcal{P}_{Pell}=1-\lim_{j\rightarrow\infty}F(-\frac{1}{2})\approx 0.58057$
We provide a proof for this in Section 4. We note that ${j+1\brack n}_{q}$ is
a Gaussian binomial coefficient, which is the $q$-analog of a binomial
coefficient. Gaussian binomial coefficients are defined as shown below [4],
where $m\geq n$.
${m\brack
n}_{q}=\frac{(1-q)(1-q^{2})\cdots(1-q^{m})}{(1-q)(1-q^{2})\cdots(1-q^{n})(1-q)(1-q^{2})\cdots(1-q^{m-n})}$
(1.0.6)
One should note that when $n>m$, ${m\brack n}_{q}$ is defined as zero. We can
then restate our conjecture from our previous paper [2] in terms of $F(q)$,
where we note that $F(q)$ is a polynomial in $q$ when fully simplified.
###### Conjecture 1.0.5
Let $G(q)$ contain only the terms from $F(q)$ where the exponent of $q$ is
$kj$, $k,j\in\mathbb{N},k<j,j>2$. The coefficients of $G(q)$ are always either
unimodal or bimodal.
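Conjecture 1.0.5 can be explored numerically. The sketch below (our illustration; the helper qbinom is not a standard library function) computes Gaussian binomial coefficients directly from definition (1.0.6) with sympy, from which $F(q)$ and $G(q)$ can be assembled:

```python
import sympy as sp

q = sp.symbols('q')

def qbinom(m, n):
    """Gaussian binomial [m brack n]_q; zero when n > m, per the convention above."""
    if n > m or n < 0:
        return sp.Integer(0)
    expr = sp.Integer(1)
    for i in range(1, n + 1):
        expr *= (1 - q**(m - n + i)) / (1 - q**i)
    return sp.expand(sp.cancel(expr))  # cancel reduces the ratio to a polynomial

print(qbinom(4, 2))  # q**4 + q**3 + 2*q**2 + q + 1
```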
## 2 Proof for Theorem 1.0.1
As we require that we must have solutions for all $k<\lfloor G\rfloor$ where
$j>2$, $k,j\in\mathbb{N}$, we can prove that a sequence does not work by
finding a value of $k$ that has no solution for some value of $j$. While
somewhat tedious, this process works well to eliminate all sequences other
than the three semicomplete arithmetic sequences. We can then readily show
these three are semicomplete.
Proof. When $j=3$, we have a series of the form
$G=\frac{a}{3}+\frac{a+b}{3}+\frac{a+2b}{3}$ (2.0.1)
where $a,b\in\mathbb{N}$. Note that $a$ is the first term in the sequence of
numerators and $b$ is the difference between consecutive terms in that
sequence.
Let $a=1$:
$G=\frac{1}{3}+\frac{1+b}{3}+\frac{1+2b}{3}$ (2.0.2)
If $b=0$, we have the sequence $\\{1,1,1,1,\ldots\\}$ as numerators. In this
case $G$ is always exactly one, hence there are no natural numbers $k<\lfloor
G\rfloor$. We thus see that $\\{1,1,1,1,\ldots\\}$ is not semicomplete. If
$b=1$, we have the consecutive integers as numerators. It is easy to see that
any integer less than the sum of the first $n$ consecutive integers can be
found as the sum of some combination of terms from the first $n$ consecutive
integers, hence this sequence is semicomplete.
If $b=2$ our numerators are the odd integers, which we have previously proven
to be semicomplete [2]. If $b>2$ all terms in the series but $\frac{1}{3}$ are
greater than one, hence it is impossible to find a combination of terms that
sum to $k=1$. This then proves that any sequence with $a=1$ and $b>2$ is not
semicomplete.
We now consider sequences where $a>1$. If $b=0$ and $a=2$ there is no solution
for $k=1$, as $1$ is not an integral multiple of $\frac{2}{3}$. If $a=3$ and
$b=0$ there are solutions for all $k$ when $j=3$, but not when $j=4$, hence
$\\{3,3,3,3,\ldots\\}$ is not semicomplete. If $a>3$ and $b=0$ all terms in
the series
$G=\frac{a}{3}+\frac{a+b}{3}+\frac{a+2b}{3}$ (2.0.3)
are greater than one, hence there is no combination for $k=1$. All sequences
$\\{a,a,a,a,\ldots\\}$, where $a>3$, are thus not semicomplete.
If $a=2$ and $b=1$ we have the sequence $\\{2,3,4,5,\ldots\\}$. One can see
that any integer except $1$ and $(2+3+4+5+\cdots+(j+1))-1$ can be found as the
sum of some combination of terms from $\\{2,3,4,5,\ldots\\}$. We know that
$\frac{1}{j}$ and $\frac{(2+3+4+5+\cdots+(j+1))-1}{j}$ cannot be integers when
$j>2$, hence every integer $k<\lfloor G\rfloor$ can be found as the sum of
some combination of terms. We know $\frac{(2+3+4+5+\cdots+(j+1))-1}{j}$ cannot
be an integer through the argument shown below.
$G=\frac{(j+1)(j+2)}{2j}-\frac{1}{j}=\frac{j+3}{2}$ (2.0.4)
$\frac{(2+3+4+5+\cdots+(j+1))-1}{j}=\frac{j+3}{2}-\frac{1}{j}=\frac{j}{2}-\frac{1}{j}+\frac{3}{2}$
(2.0.5)
We know that $\frac{j}{2}-\frac{1}{j}+\frac{3}{2}$ cannot be an integer when
$j>2$, hence $\frac{(2+3+4+5+\cdots+(j+1))-1}{j}$ cannot be an integer when
$j>2$.
If $a=2$ and $b>1$ then there is no combination for $k=1$ when $j=3$, as every
term in the series but $\frac{2}{3}$ is greater than $1$. Such sequences are
thus not semicomplete. If $a=3$ and $b=1$ or $b=2$, there is no combination for
$k=2$ when $j=3$. If $a=3$ and $b=3$ there is no combination for $k=1$ when
$j=4$. If $a=3$ and $b>3$ there is no combination for $k=2$ when $j=3$. If
$a>3$ then all terms in the series are greater than $1$ when $j=3$, regardless
of the value of $b$, hence there is no combination for $k=1$. Such sequences
are thus not semicomplete. $\Box$
## 3 Proof for Theorem 1.0.2
Proof. In one dimension: It is easy to see that in this case we are cutting a
line into pieces with points. This then results in
$\\{C_{1}^{N}\\}=\\{1,2,3,\ldots\\}$, which is the sequence of consecutive
integers. We know this sequence is semicomplete.
In two dimensions:
$\\{C_{2}^{N}\\}=\bigg{\\{}\frac{N^{2}-N+2}{2}\bigg{\\}}$ (3.0.1)
The sum of the first $t$ terms can be found through induction.
$\sum_{N=1}^{t}\frac{N^{2}-N+2}{2}=\frac{1}{6}t(t^{2}+5)$ (3.0.2)
We see that when $t$ is sufficiently large, the $(t+1)^{TH}$ term is less than
the sum of the first $t$ terms. We have
$\frac{(t+1)^{2}-(t+1)+2}{2}<\frac{1}{6}t(t^{2}+5)$ (3.0.3)
where $3<t$.
Suppose one takes a set of the first $t$ terms and one sees that any integer
less than the sum of those $t$ terms can be found as the sum of some
combination of terms from that set. Now take an integer $I$ that satisfies the
following inequality:
$\sum_{N=1}^{t}\frac{N^{2}-N+2}{2}<I<\sum_{N=1}^{t+1}\frac{N^{2}-N+2}{2}$
(3.0.4)
We know that the last term is less than the sum of all previous terms, where
$t$ is sufficiently large. If every integer less than the sum of the previous
terms can be found as the sum of some combination of the previous terms, one
can find a partition for any integer $I$ by summing the $(t+1)^{TH}$ term
$C_{2}^{t+1}$ and the partition for $I-C_{2}^{t+1}$, as
$I-C_{2}^{t+1}<\sum_{N=1}^{t}\frac{N^{2}-N+2}{2}$. By induction, one then sees
that any integer less than the sum of $\\{C_{2}^{N}\\}$ can be found as the
sum of some combination of the terms from this sequence, if this can be done
for the first $t$ terms, where $t\in\mathbb{N}$, $t>3$. (If you can do this
for the first $t$ terms, you can do it for the first $t+1$ terms, and thus the
first $t+2$ terms, et cetera.) This method closely follows that used by Brown
to show whether a sequence is complete [1]. Here we take the first four terms
of the sequence, then examine the integers that can be found as sums of the
terms.
$\\{C_{2}^{N}\\}=\\{1,2,4,7\\}$ (3.0.5)
$1+2+4+7=14$ (3.0.6)
We see we have $1=1$, $2=2$, $3=1+2$, $4=4$, $5=1+4$, $6=2+4$, $7=7$, $8=1+7$,
$9=7+2$, $10=1+2+7$, $11=4+7$, $12=1+4+7$, and $13=2+4+7$. This then confirms
that the lazy caterer’s sequence is complete, and hence also semicomplete.
(Note that one can individually check cases to see that it is semicomplete
when using the first 3, 2, or 1 terms. We just needed to apply this technique
to a set of more than 3 terms.) To illustrate that this holds upon adding
another term, we examine the first five terms of the sequence.
$\\{C_{2}^{N}\\}=\\{1,2,4,7,11\\}$ (3.0.7)
$1+2+4+7+11=25$ (3.0.8)
We already know we have partitions for $1,2,\ldots,14$. Now we need to find
these for $15,16\ldots,25$. As an example, we take the case of $I=20$. We see
that $20-11=9=2+7$, hence $20=11+2+7$.
In higher dimensions: There is a known recurrence relation that describes such
sequences.
$C_{M}^{N}=C_{M}^{N-1}+C_{M-1}^{N-1}$ (3.0.9) $C_{M}^{1}=1$
For a discussion of this recurrence, see [5].
In three dimensions:
$C_{3}^{N}=\bigg{\\{}\frac{N^{3}-3N^{2}+8N}{6}\bigg{\\}}$ (3.0.10)
$\sum_{N=1}^{t}\frac{N^{3}-3N^{2}+8N}{6}=\frac{t(t+1)(t^{2}-3t+14)}{24}$
(3.0.11)
We then have
$\frac{(t+1)^{3}-3(t+1)^{2}+8(t+1)}{6}<\frac{t(t+1)(t^{2}-3t+14)}{24}$
(3.0.12)
where $4<t$.
In four dimensions:
$C_{4}^{N}=\bigg{\\{}\frac{N^{4}-6N^{3}+23N^{2}-18N+24}{24}\bigg{\\}}$
(3.0.13)
$\sum_{N=1}^{t}\frac{N^{4}-6N^{3}+23N^{2}-18N+24}{24}=\frac{t(t^{4}-5t^{3}+25t^{2}+5t+94)}{120}$
(3.0.14)
We then have
$\frac{(t+1)^{4}-6(t+1)^{3}+23(t+1)^{2}-18(t+1)+24}{24}<\frac{t(t^{4}-5t^{3}+25t^{2}+5t+94)}{120}$
(3.0.15)
where $5<t$. One can then apply the same process used in two dimensions to
prove semicompleteness in three and four dimensions. For three dimensions one
must manually check semicompleteness for $t\leq 4$, and for four dimensions,
one must manually check semicompleteness for $t\leq 5$. $\Box$
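The base cases can also be checked mechanically. The following Python sketch (ours, for illustration only) rebuilds $\\{C_{M}^{N}\\}$ from recurrence (3.0.9) and verifies the covering property used in the induction for $M=1,\ldots,4$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def C(M, N):
    """C(M, N) from the recurrence (3.0.9), with C(M, 1) = 1."""
    if N == 1:
        return 1
    if M == 1:
        return N  # cutting a line with N-1 points gives N pieces
    return C(M, N - 1) + C(M - 1, N - 1)

def covers_all(terms):
    """True if every integer in [1, sum(terms)] is a sum of distinct terms,
    mirroring the Brown-style check used in the proof above."""
    sums = {0}
    for t in terms:
        sums |= {s + t for s in sums}
    return all(v in sums for v in range(1, sum(terms) + 1))

for M in (1, 2, 3, 4):
    terms = [C(M, N) for N in range(1, 13)]
    print(M, terms[:6], covers_all(terms))  # True for each M
```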
## 4 Proof for Theorem 1.0.4
Proof.
$F(q)=\sum_{m=0}^{j+1}\sum_{n=0}^{j+1}\frac{(-1)^{\frac{3n}{2}}q^{\frac{1}{2}(n^{2}-2n+2)}(-1)^{\frac{m}{2}}q^{\frac{1}{2}(m^{2}-2m)}}{q+1}{j+1\brack
n}_{q}{j+1\brack m}_{q}$ (4.0.1)
We can split this double sum into the product of two sums.
$F(q)=\biggl{(}\sum_{m=0}^{j+1}\frac{(-1)^{\frac{3m}{2}}q^{\frac{1}{2}(m^{2}-2m+2)}}{q+1}{j+1\brack
m}_{q}\biggr{)}\biggl{(}\sum_{m=0}^{j+1}(-1)^{\frac{m}{2}}q^{\frac{1}{2}(m^{2}-2m)}{j+1\brack
m}_{q}\biggr{)}$ (4.0.2)
By noting the definition for a binomial coefficient and manipulating the
series, we can arrive at a more convenient form.
$\binom{m}{2}=\frac{m!}{2!(m-2)!}=\frac{1}{2}m(m-1)$ (4.0.3)
$F(q)=\biggl{(}\frac{q}{q+1}\biggr{)}\biggl{(}\sum_{m=0}^{j+1}(-1)^{m}\biggl{(}-\frac{1}{q}\biggr{)}^{\frac{m}{2}}q^{\binom{m}{2}}{j+1\brack
m}_{q}\biggr{)}\biggl{(}\sum_{m=0}^{j+1}\biggl{(}-\frac{1}{q}\biggr{)}^{\frac{m}{2}}q^{\binom{m}{2}}{j+1\brack
m}_{q}\biggr{)}$ (4.0.4)
We have the following identity from the work of Koekoek and Swarttouw [6]. It
follows from the $q$-binomial theorem.
$(a;q)_{n}=\sum_{k=0}^{n}(-a)^{k}q^{\binom{k}{2}}{n\brack k}_{q}$ (4.0.5)
This identity then enables us to rewrite $F(q)$.
$F(q)=\frac{q}{q+1}\biggl{(}\sqrt{-\frac{1}{q}};q\biggr{)}_{j+1}\biggl{(}-\sqrt{-\frac{1}{q}};q\biggr{)}_{j+1}$
(4.0.6)
The identity given below is known [6].
$(a^{2};q^{2})_{n}=(a;q)_{n}(-a;q)_{n}$ (4.0.7) $\therefore\hskip
5.69046ptF(q)=\frac{q}{q+1}\biggl{(}\sqrt{-\frac{1}{q}};q\biggr{)}_{j+1}\biggl{(}-\sqrt{-\frac{1}{q}};q\biggr{)}_{j+1}=\frac{q}{q+1}\biggl{(}-\frac{1}{q};q^{2}\biggr{)}_{j+1}$
(4.0.8)
The $q$-analog of the Pochhammer symbol can be defined as the following
product for positive $n$.
$(a;q)_{n}=\prod_{k=0}^{n-1}(1-aq^{k})$ (4.0.9)
$F(q)=\frac{q(-\frac{1}{q};q^{2})_{j+1}}{q+1}=\frac{q}{q+1}(1+q^{-1})(1+q)(1+q^{3})(1+q^{5})\cdots=\prod_{n=1}^{j}(1+q^{2n-1})$
(4.0.10)
This product is a generating function for partitions of $kj$ into distinct odd
integers drawn from $\\{1,3,5,\ldots,2j-1\\}$.
$\prod_{n=1}^{j}(1+q^{2n-1})=\sum_{kj=0}^{j^{2}}f_{O_{j}}(k,j)q^{kj}$ (4.0.11)
The Pell constant $\mathcal{P}_{Pell}$ can be written as shown below [3].
$\mathcal{P}_{Pell}=1-\prod_{k=0}^{\infty}\biggl{(}1-\frac{1}{2^{2k+1}}\biggr{)}$
(4.0.12)
$\lim_{j\to\infty}F\biggl{(}-\frac{1}{2}\biggr{)}=\lim_{j\to\infty}\biggl{(}\prod_{n=1}^{j}\biggl{(}1+\biggl{(}-\frac{1}{2}\biggr{)}^{2n-1}\biggr{)}\biggr{)}=\prod_{n=0}^{\infty}\biggl{(}1-\frac{1}{2^{2n+1}}\biggr{)}=1-\mathcal{P}_{Pell}$
(4.0.13)
Hence
$\mathcal{P}_{Pell}=1-\lim_{j\to\infty}F(-\frac{1}{2})$.
$\Box$
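As a numerical sanity check of this result (our addition, not part of the proof), one can evaluate both products in floating point and confirm they agree:

```python
# Product form of F at q = -1/2, truncated at large j.
prod_F = 1.0
for n in range(1, 40):
    prod_F *= 1.0 + (-0.5) ** (2 * n - 1)  # prod (1 + q^(2n-1)) at q = -1/2

# Infinite product (4.0.12) for the Pell constant, truncated.
prod_pell = 1.0
for k in range(40):
    prod_pell *= 1.0 - 2.0 ** -(2 * k + 1)

print(1.0 - prod_F, 1.0 - prod_pell)  # both ≈ 0.5805775...
```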
## References
* [1] J. L. Brown Jr., A note on complete sequences of integers, Amer. Math. Monthly, 68 (1961), 557–560, available online at the URL: https://www.jstor.org/stable/2311150?seq=1
* [2] Z. Hoelscher and E. Palsson, Counting restricted partitions of integers into fractions: symmetry and modes of the generating function and a connection to $\omega(t)$, PUMP J. Undergrad. Res., 3 (2020), 277–307, available online at the URL: https://journals.calstate.edu/pump/article/view/2428
* [3] P. Stevenhagen, The number of real quadratic fields having units of negative norm, Experimental Mathematics, 2 (1993), 121–136, available online at the URL: https://projecteuclid.org/euclid.em/1048516217
* [4] K. O’Hara, Unimodality of Gaussian coefficients: A constructive proof, J. Combin. Theory Ser. A, 53 (1990), 29–52, available online at the URL: https://www.sciencedirect.com/science/article/pii/009731659090018R
* [5] N.J.A. Sloane, Sequence A000125, The On-Line Encyclopedia of Integer Sequences, available online at the URL: https://oeis.org
* [6] R. Koekoek and R. F. Swarttouw, The Askey-Scheme of Hypergeometric Orthogonal Polynomials and its $q$-Analogue, Technische Universiteit Delft, Faculty of Technical Mathematics and Informatics Report, (1998).
Powders & Grains 2017
11institutetext: Institut für Theoretische Physik I, Universität Erlangen-
Nürnberg, Staudtstraße 7, 91058 Erlangen, Germany 22institutetext: School of
Engineering and Information Technology, Murdoch University, 90 South Street,
Murdoch, WA 6150, Australia 33institutetext: Institute for Multiscale
Simulation, Nägelsbachstrasse 49b, 91052 Erlangen, Germany
# Pomelo, a tool for computing Generic Set Voronoi Diagrams of Aspherical
Particles of Arbitrary Shape
Simon Weis 11<EMAIL_ADDRESS>Philipp W. A. Schönhöfer
SW and PS have contributed equally to the work described in this project11 2 2
<EMAIL_ADDRESS>Fabian M. Schaller 11 Matthias Schröter 33 Gerd
E. Schröder-Turk 11 2 2
###### Abstract
We describe the development of a new software tool, called "Pomelo", for the
calculation of Set Voronoi diagrams. Voronoi diagrams are a spatial partition
of the space around the particles into separate Voronoi cells, e.g. applicable
to granular materials. A generalization of the conventional Voronoi diagram
for points or monodisperse spheres is the Set Voronoi diagram, also known as
navigational map or tessellation by zone of influence. In this construction, a
Set Voronoi cell contains the volume that is closer to the surface of one
particle than to the surface of any other particle. This is required for
aspherical or polydisperse systems.
Pomelo is designed to be easy to use and as generic as possible. It directly
supports common particle shapes and offers a generic mode, which can deal with
any type of particle that can be described mathematically. Pomelo
can create output in different standard formats, which allows direct
visualization and further processing. Finally, we describe three applications
of the Set Voronoi code in granular and soft matter physics, namely the
problem of packings of ellipsoidal particles with varying degrees of particle-
particle friction, mechanically stable packings of tetrahedra and a model for
liquid crystal systems of particles with shapes reminiscent of pears.
Figure 1: Left: The Voronoi Diagram of a system of monodisperse spheres.
Center: The Voronoi Diagram is not suitable for a system of bidisperse
spheres, as Voronoi cells are overlapping with particles. Right: The Set
Voronoi Diagram of a mixture of differently shaped objects.
The analysis of geometries and structures on a micro scale level is an
important aspect of granular and soft matter physics to attain knowledge about
many interesting properties of particle packings, including contact numbers,
anisotropy, local volume fraction, etc. RefHecke ; RefAste ; RefWang . A well-
established concept is the so-called Voronoi Diagram. Here, the system is
investigated by dividing the space into separate cells with respect to the
positions of the particle centers. The cell assigned to a given particle is
defined as the region of space that contains all points closer to the center
of that particle than to the center of any other (see figure 1 left). This
partition of space, however, only yields precise results for monodisperse
spheres, as the construction fails otherwise due to morphological properties
of the objects. For nonspherical or polydisperse particles the classical
Voronoi diagram is of limited usefulness, as shown in
figure 1 (center) for a system of bidisperse spheres. A generalized version of
the Voronoi Diagram, the Set Voronoi Diagram RefSchaller2013 , also known as
navigational map RefLuchnikov or tessellation by zone of influence RefPreteux
, has to be applied. In this case the cells contain all space around the
particle which is closer to the particle’s surface than to the surface of any
other particle. Figure 1 (right) shows the Set Voronoi Diagram of a mixture of
differently shaped particles.
Here we introduce a software tool called Pomelo, which calculates Set Voronoi
Diagrams based on the algorithm described in RefSchaller2013 . Pomelo is
particularly versatile due to its ability to generically handle any arbitrary
shape which can be described mathematically. For instance objects neither have
to be convex nor simply connected. In the first step of the algorithm (figure
2 left) a triangulation of the particle’s bounding surface is generated to
sample its shape. Pomelo offers functionality for some common particle shapes,
but the user is can also specify generic particle shapes. After the
discretisation of the surface the system is tessellated by calculating the
classical Voronoi diagram of all surface points of this triangulation. This is
shown in figure 2 (center). To get the Set Voronoi tessellation of the system,
cells belonging to points on the same particle surface are merged to a single
cell in the last step (figure 2 right). The resulting partition represents the
Set Voronoi Diagram of the system. With this algorithm, systems and mixtures
of particles of any arbitrary shape can be treated.
Figure 2: Sketch of the Set Voronoi algorithm from left to right. Generating
points on particle surface. Calculating Voronoi Diagram of surface points.
Merging cells belonging to the same particle.
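The core idea — assign each point in space to the particle with the nearest surface point, then merge per particle — can be illustrated with a small self-contained sketch (ours, in Python with scipy; it is not Pomelo code) for two bidisperse discs in 2D:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_points(cx, cy, r, n=64):
    # sample n points on the boundary of a disc
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([cx + r * np.cos(t), cy + r * np.sin(t)])

discs = [(0.3, 0.5, 0.15), (0.7, 0.5, 0.25)]   # bidisperse discs (x, y, r)
pts = np.vstack([surface_points(*d) for d in discs])
owner = np.repeat(np.arange(len(discs)), 64)    # particle id per surface point

# label every grid point by the particle whose sampled surface is closest;
# merging the labels per particle is exactly the Set Voronoi construction
tree = cKDTree(pts)
g = np.linspace(0.0, 1.0, 200)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
_, idx = tree.query(grid)
labels = owner[idx]                             # Set Voronoi cell id per grid point
```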
## 1 Pomelo
Pomelo is a generic Set Voronoi tool written in C++11 and licensed under GPL3.
Pomelo, together with all instructions for setting up, building and using it,
can be downloaded from ref. RefPomeloDownload . The system requirements are
g++ 4.9.2 or clang++ 3.5.0-10, or any higher version.
While Pomelo can directly handle common particle shapes (mono- and
polydisperse spheres, tetrahedra, ellipsoids and spherocylinders), it also
provides a generic mode. The latter works for any shape whose surface can be
described mathematically. The following two sections will describe how to use
Pomelo in both cases. To use Pomelo in generic mode (see section 1.2), lib-lua
5.2 or higher is required.
### 1.1 Common Particle Shapes
Particle shapes that are intrinsically supported by Pomelo are mono- and
polydisperse spheres, tetrahedra, spherocylinders and ellipsoids. Pomelo comes
with a set of demos and tests.
One test case is a set of polydisperse spheres in a cubic cell. The input is a
xyzr file. Its first line is the number of particles. The second line is a
comment which contains some information (separated by comma) required by
Pomelo. This includes boundary conditions (periodic/non periodic in each
axis), box size, shrink and the number of steps in $\phi$ and $\theta$ for
discretizing the sphere’s surface. Every following line describes one sphere
in the packing with its parameters (coordinates and radius). To run the demo,
call Pomelo with the command line argument -SPHEREPOLY, the path to the xyzr
file and the desired output folder.
Pomelo’s output consists of the vertices, faces and cells of the Set Voronoi Diagram.
The output can be set to different formats, like the POLY file format, off
(for geomview) or a gnuplot readable format for easy visualisation.
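For illustration, a short Python sketch that writes such an input file is given below. The exact syntax of the comment line is an assumption on our part (the demos shipped with Pomelo are authoritative); only the overall layout follows the description above.

```python
import random

# hypothetical xyzr writer; the comment-line syntax below is a guess,
# see Pomelo's own demos for the authoritative format
n, box = 5, 10.0
lines = [str(n),  # first line: number of particles
         # second line: boundary conditions, box size, shrink,
         # and phi/theta steps for the sphere-surface discretisation
         "# periodic: true true true, boxsize: 10 10 10, shrink: 0.95, steps: 16 8"]
for _ in range(n):
    x, y, z = (random.uniform(0.0, box) for _ in range(3))
    lines.append(f"{x:.4f} {y:.4f} {z:.4f} {random.uniform(0.4, 0.8):.4f}")

with open("spheres.xyzr", "w") as f:
    f.write("\n".join(lines) + "\n")

# then run, e.g.:  pomelo -SPHEREPOLY spheres.xyzr output/
```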
### 1.2 Generic Mode
Using Pomelo in generic mode allows one to calculate the Set Voronoi Diagram
for generic particles. The input is a position file, which is a list of all
particles, each described by a set of parameters. The parameters are a
complete description of the particle's surface within the packing. The read
file is key to Pomelo's versatility. It allows the user to create a surface
triangulation based on the parameters given in the position file with a script
written in the lua language RefLua . The script tells Pomelo how to create a
surface triangulation from the particle parameters as given in the position
file. It can be fully customized by the user to match any specific particle
shape. This allows the user to create even systems composed of a mixture of
different particles. The surface triangulation of all particles will then be
used to calculate the Set Voronoi Diagram as described above.
Pomelo’s demos include a variety of examples on how to use the read script to
handle different types of particles.
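The read script itself is written in lua; purely to illustrate the kind of parametrisation such a script implements, the following Python sketch (function name and sampling density are our own choices, not Pomelo's API) generates a triangulation-ready point set on an ellipsoid surface:

```python
import numpy as np

def sample_ellipsoid_surface(center, semi_axes, n_phi=16, n_theta=8):
    """Sample points on an ellipsoid surface; a read script would emit
    such points (plus their triangulation) for each particle."""
    a, b, c = semi_axes
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    theta = np.linspace(0.0, np.pi, n_theta)
    T, P = np.meshgrid(theta, phi)
    pts = np.stack([a * np.sin(T) * np.cos(P),
                    b * np.sin(T) * np.sin(P),
                    c * np.cos(T)], axis=-1).reshape(-1, 3)
    return pts + np.asarray(center)

print(sample_ellipsoid_surface((0, 0, 0), (1.0, 1.0, 1.4)).shape)  # (128, 3)
```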
## 2 Applications of Set Voronoi Diagrams
### 2.1 Ellipsoid Packings
Figure 3: Ellipsoids ($\alpha=1.4$) in an isotropic system (left) and their Set
Voronoi Cells (right).
To demonstrate the use of Set Voronoi Diagrams, we calculate the local packing
fraction, defined as $\Phi_{l}=\frac{v_{e}}{v_{i}}$ with $v_{i}$ being the
volume of the Set Voronoi Cell and $v_{e}=\frac{4}{3}\pi abc$ the volume of
the particle. It has been shown that the probability
distribution of local packing fractions is universal for sphere packings
RefAste and jammed pro- and oblate ellipsoid packings RefSchaller2015 .
Experimental granular packings of triaxial ellipsoids (three distinct axis
lengths $a$, $b$, $c$) have been measured at various global packing fractions
using X-ray tomography, see figure 3. Particle positions, orientations and
size have been determined and the Set Voronoi Diagrams have been calculated
for each packing. Here the particles are shrunk by Pomelo to improve the
quality of the cells. The statistical distribution of local packing fractions
$\Phi_{l}$ for spheres ($\alpha=1.0$) and triaxial ellipsoids ($\alpha=1.1$,
$\alpha=1.4$) is shown in figure 4. To first order, the distribution can be
collapsed on a master curve, as shown in Ref. RefSchaller2015 , by subtracting
the global volume fraction $\Phi_{g}=\frac{<v_{e}>}{<v_{i}>}$ and scaling the
distribution by $\sigma(\Phi_{l})$, see figure 4. Thus the functional form of
the distribution is invariant to global packing fraction $\Phi_{g}$ and aspect
ratio $\alpha$, to first order.
Figure 4: Probability distribution of rescaled local packing fractions
$\Phi_{l}$ for packings of spheres (particle aspect ratio $\alpha=1.0$) and
triaxial ellipsoids ($\alpha=1.1$, $\alpha=1.4$). Global packing fractions
range from 0.62 to 0.65.
### 2.2 Tetrahedra
Packings of tetrahedra can also be treated with Pomelo without using the
generic mode. The data was obtained by X-ray tomography RefNeudecker . Each
tetrahedron is described by the positions of its 4 vertices. Pomelo is able to
process the data directly. This has the advantage that not only tetrahedra but
pyramids in general can be treated.
The Set Voronoi tessellations of tetrahedra packings (figure 5) with different
packing fractions and contact numbers are calculated with Pomelo. The local
packing fraction is shown for those systems in figure 6.
Figure 5: Tetrahedra of one of the experiments (left) and their Set Voronoi
cells (right). Figure 6: Probability distribution of rescaled local packing
fractions $\Phi_{l}$ for different systems of tetrahedra. The colorbar and the
point’s color show the global packing fraction of the system.
### 2.3 Pear Shaped Particles
Figure 7: Pear shape formed by two Bézier curves. Both are determined by the aspect
ratio $\alpha=\frac{\text{Height}}{\text{Width}}$ and the degree of tapering
$\alpha_{\theta}=\frac{\text{Height}_{\theta}}{\text{Width}}$.
The third example is the calculation of the Set Voronoi Diagrams of pear
shaped or tapered particles. The shape of these particles is described by the
aspect ratio $\alpha$ and the degree of tapering $\alpha_{\theta}$. By using
two Bézier curves forming the bottom and top part of the pear and rotating
them around their symmetry axis the surface of the particle is generated
(figure 7) RefBarmes . A triangulation algorithm of this surface is
implemented within the read file, which allows Pomelo to process pear shaped
particles in the generic mode. The position file provides the position,
orientation, size, aspect ratio and degree of tapering for every individual
particle.
Figure 8: Pears ($\alpha=3.0,\alpha_{\theta}=3.8$) in an isotropic system
(left) and their Set Voronoi cells (right).
Systems of hard pear shaped particles ($\alpha=3.0$) with different degrees of
tapering are generated using Molecular Dynamics (see figure 8) RefBarmes . The
statistical distribution of the local packing fractions $\Phi_{l}$ for
different pear systems ($\alpha_{\theta}=3.0,\Phi_{g}=0.48$ ;
$\alpha_{\theta}=3.8,\Phi_{g}=0.50$ ; $\alpha_{\theta}=6.0,\Phi_{g}=0.54$) is
shown in figure 9. Similarly to the sphere and ellipsoid packings the
distributions of the different pear systems collapse on a master curve.
Accordingly the distribution is not only invariant to global packing fraction
$\Phi_{g}$ but also to the degree of tapering $\alpha_{\theta}$ for pears.
However, the data of ellipsoids/spheres and pears do not collapse on the same
curve.
Figure 9: Probability distribution of rescaled local packing fractions
$\Phi_{l}$ for pears ($\alpha=3.0$) with different degree of tapering and
packing fraction
$(\alpha_{\theta},\Phi_{g})\in\\{(3.0,0.48);(3.8,0.50);(6.0,0.54)\\}$. The
dashed line shows the master curve of the sphere distributions.
## 3 Outlook
We have shown that Pomelo is applicable for a variety of systems, including
spheres, ellipsoids, tetrahedra and pear shaped particles. We have illustrated
its use to calculate Set Voronoi volume distributions. However, it is
conceivable to analyse other interesting measures like Minkowski tensors, by
piping the Set Voronoi Cells into corresponding analysis tools RefSchaller2015
; RefSchroederTurk2011 ; RefSchroederTurk2013 . It also remains unclear how
similar the volume distributions of different systems are.
One way to improve the calculation of the Set Voronoi Diagram is to implement
an adaptive sampling of the particle’s surface. By comparing the distance
between two neighboring surface points to the distance to the face of the Set
Voronoi Cell it is possible to obtain an estimate on where the surface
triangulation needs a better resolution and where a coarse resolution is good
enough. This estimate will be used to change the density of the surface
triangulation of the particles.
## References
* (1) T. Aste, T. Di Matteo, M. Saadatfar, T. J. Senden, M. Schröter, H. L. Swinney, EPL 79, 24003 (2007)
* (2) M. van Hecke, J. Phys: Condens. Matter, 22, 0033101 (2010)
* (3) P. Wang, C. Song, Y. Jin, K. Wang, H. Makse, J. Stat Mech 2010, P12005 (2010)
* (4) F. M. Schaller, S. C. Kapfer, M. E. Evans et al., Philosophical Magazine 93, 3993-4017 (2013)
* (5) V. Luchnikov, N. Medvedev, L. Oger, J. Troadec, Phys. Rev. E 59, 7205 (1999)
* (6) E. Preteux, J. Math. Imaging Vision 1, 239 (1992)
* (7) `http://theorie1.physik.uni-erlangen.de/`
`research/pomelo/index.html`
* (8) `https://www.lua.org/`
* (9) F. M. Schaller, S. C. Kapfer, J. E. Hilton et al., EPL 111, 24002 (2015)
* (10) M. Neudecker, S. Ulrich, S. Herminghaus, M. Schröter, PRL 111, 028001 (2013)
* (11) F. Barmes, M. Ricci, C. Zannoni, D. J. Cleaver, Phys. Rev. E. 68, 021708 (2003)
* (12) G. E. Schröder-Turk, W. Mickel, S. C. Kapfer et al., Adv. M. 23(22-23), 2535–2553 (2011)
* (13) G. E. Schröder-Turk, W. Mickel, S. C. Kapfer et al., NJP 15, 083028 (2013)
Country | Infection Rate
---|---
Netherlands | 1.11e-6
Germany | 2.89e-6
France | 3.75e-6
Italy | 6.20e-6
Spain | 7.58e-6
Belgium | 8.03e-6
Denmark | 8.69e-6
Portugal | 3.49e-5
Switzerland | 3.49e-5
Austria | 4.05e-5
Table C.5: Objective 3: Travel rates output from the best solution from the RRHCA for the model given in Figure C.6 (infection rates in Table C.4). Columns: connection from; Rows: connection to.
| AT | BE | CH | DE | DK | ES | FR | IT | NL | PT
---|---|---|---|---|---|---|---|---|---|---
AT | | | 3.553e-6 | 1.471e-6 | | | | 1.086e-5 | |
BE | | | | 1.115e-6 | | | 5.525e-6 | | 5.408e-7 |
CH | 3.786e-6 | | | 1.56e-6 | | | 3.389e-7 | 6.361e-6 | |
DE | 3.390e-6 | 7.833e-6 | 3.443e-6 | | 2.125e-6 | | 4.267e-6 | | 3.187e-6 |
DK | | | | 1.960e-6 | | | | | |
ES | | | | | | | 1.462e-6 | | | 1.565e-6
FR | | 1.291e-6 | 1.425e-6 | 1.082e-5 | | 2.630e-6 | | 7.151e-6 | |
IT | 2.535e-6 | | 2.529e-6 | | | | 1.75e-6 | | |
NL | | 9.549e-6 | | 2.342e-6 | | | | | |
PT | | | | | | 2.071e-5 | | | |
### C.5 Analysis
##### Visual analysis.
For an example see Figure C.8.
Figure C.8: P10SIR behaviour. Modelled infections for 10 Western European
countries connected by road. AT = Austria, BE = Belgium, CH = Switzerland, DE
= Germany, DK = Denmark, ES = Spain, FR = France, IT = Italy, NL = Netherlands
and PT = Portugal. Plots for countries under 100 Infected are not visible due
to the scale of the graph.
##### Model checking.
For illustration we include one experiment involving model checking, see
Figure C.9.
Figure C.9: P${}_{\textrm{China}}$SIR model checking. (Top, left) Time series
traces of Infectious for the 34 provinces, with infection starting in Hubei.
The ratio between rate constants for Infect and Recover is $1:10^{4}$
($k_{infect}=1.0e^{-6}$, $k_{recover}=1.0e^{-2}$.) (Top, right) Model checking
reveals three provinces with more than one peak. (Bottom) Network of China
highlighted to indicate the geographical context of the three provinces: HB -
red, NX - green, QH - brown, XZ - blue.
##### Correlation analysis.
Table C.6 records the variables; correlation matrices are given in Figures
C.10, C.11, C.12. The correlation matrices for the unfitted models indicate
that the percentage of the population infected at the peak is related to the
population size for P48SIR, but not for the smaller model P10SIR, perhaps
because the smaller model contains more small countries. The number of
connections is related to the peak and peak percentage in the larger model,
but only to the peak in the smaller model. Area is a partial proxy for the whole population
and is strongly related to peak and time (negatively - i.e. earlier) but not
to peak percentage in P10SIR. However in P48SIR area is related to peak and
peak percentage but not to time.
However, the correlation over the fitted data indicates that the percentage of
the population at the peak is less related to the population size, compared
with the unfitted model. Also the size of the peak was less related to length
of time from the start of infections in the fitted model compared to the
unfitted model. Finally, the time of the peak of infections was not correlated
with the number of inter-country connections in the fitted model, as opposed
to being negatively correlated in the unfitted model (more connections
implying earlier peaks), indicating that the intervention policies that
governments have taken to control the epidemic in their countries have
overriden the effect of geographical connections (i.e. closing down inter-
country travel).
Table C.6: Variables used to investigate road connected models further. Categories: (1) real world data external to model, (2) derived measure connecting external (black) to blue, (3) real world data incorporated in model, (4) data generated by model. Category | Variable | Description
---|---|---
1 | Area.Km2 | Area in km2 of a given country.
2 | Density | Population density of a given country (population/km2).
3 | Connections | Number of connections a country has in land borders.
| initSusc | Susceptible + Infected at time 0.
| Population | The real-world country populations.
| MapPop | The population used in the model (real world population / 1000).
4 | peak | The peak of Infected, this represents active cases.
| peakPerc | The percentage of initSusc that were infected at the peak.
| time | The number of steps into the simulation the peak occurred.
Figure C.10: Correlation matrix for P10SIR, not fitted. Green indicates a strong positive correlation and red indicates a strong negative correlation. Black crosses represent a non-significant correlation. Infection rates and travel rates all identical at 1.0e-6; recovery rates 1.0e-2.
Figure C.11: Correlation matrix for P10SIR, fitted. Green indicates a strong positive correlation and red indicates a strong negative correlation. Black crosses represent a non-significant correlation.
Figure C.12: Correlation matrix for P48SIR, not fitted. Green indicates a strong positive correlation and red indicates a strong negative correlation. Black crosses represent a non-significant correlation.
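A minimal Python sketch of this analysis (the document's own analysis was done in R, see Appendix D; the file name and column layout here are assumptions matching Table C.6):

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("p10sir_summary.csv")  # hypothetical file: one row per country
cols = ["Area.Km2", "Density", "Connections", "initSusc", "peak", "peakPerc", "time"]
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        r, p = pearsonr(df[a], df[b])
        mark = "" if p < 0.05 else "  (n.s.)"   # the 'black cross' in the figures
        print(f"{a:12s} vs {b:12s}  r = {r:+.2f}{mark}")
```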
##### Dendrograms.
Examples are given in Figures C.13, C.14.
Figure C.13: Dendrogram of P10SIR for West Europe, hierarchical clustering. Height is on the x-axis. Data not normalised.
Figure C.14: Dendrogram of P48SIR for Europe, hierarchical clustering. Height is on the x-axis. Data not normalised.
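A Python equivalent of the Clustering.Rmd workflow described in Appendix D (Euclidean distance, complete linkage; the input file name and layout are assumptions):

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

traces = pd.read_csv("p10sir_traces.csv", index_col=0)  # rows: countries, columns: time steps
Z = linkage(traces.values, method="complete", metric="euclidean")
dendrogram(Z, labels=traces.index.tolist(), orientation="left")  # height on the x-axis
plt.show()
```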
### C.6 Discussion
Results of simulating lockdown and unlock by dynamic change of infection rates
in the SIR model: Figure C.15 - lockdown only; Figure C.16 - lockdown followed
by unlock.
Figure C.15: SIR model - Dynamic change of infection rates: lockdown. Output
from the SIR model in Figure 1 where infection rates were stepwise decremented
during simulation. Figure legend shows cumulative infections for different
number of decrements. Figure C.16: SIR model, dynamic change of infection rate:
the rate is increased back to its pre-intervention level at time step 20.
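The mechanism behind these experiments can be sketched in a few lines of Python (a forward-Euler SIR; all rate values and the decrement schedule below are illustrative, not the fitted ones):

```python
def simulate(k_infect_at, k_recover=1e-2, S0=1e5, I0=10.0, steps=200, dt=1.0):
    """Discrete SIR with a time-dependent infection rate k_infect_at(t)."""
    S, I, R, out = S0, I0, 0.0, []
    for t in range(steps):
        new_inf = k_infect_at(t) * S * I * dt
        new_rec = k_recover * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        out.append((t, S, I, R))
    return out

# lockdown: halve the infection rate every 20 steps from step 20 onwards
lockdown = lambda t: 1e-6 if t < 20 else 1e-6 * 0.5 ** ((t - 20) // 20 + 1)
# unlock variant: restore the pre-intervention rate at some later step (illustrative)
unlock = lambda t: 1e-6 if t >= 120 else lockdown(t)
trace = simulate(lockdown)
```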
## Appendix D List of Files Provided
##### All models given in figures (provided as Snoopy files):
* SIR.spn - standard SIR, see Figure 1
* SIQR.spn - SIR extended by quarantine, see Figure 2 (Top)
* SIAR.spn - SIR extended by symptomatic/asymptomatic compartments, see Figure 2 (Bottom)
* SIR-S2.spn - SIR-S${}_{2}^{age}$, uncoloured, see Figure 3 (Top)
* SIR-S2_enum.colspn, SIR-S2_int.colspn - SIR-S${}_{2}^{age}$, coloured, see Figure 3 (Middle)
* SIR-S20.spn - SIR-S20, see Figure 3 (Bottom)
* P2-SIR.spn - P2SIR, uncoloured, see Figure 4 (Top)
* P2-SIR.colspn - P2SIR, coloured, see Figure 4 (Bottom)
* network-Europe4.spn - connectivity graph for Europe, 4 countries, uncoloured, see Figure 5 (Top middle)
* network-Europe4.colspn - connectivity graph for Europe, 4 countries, coloured, see Figure 5 (Top right)
* P4-SIR.colspn - P4SIR, coloured, see Figure 5 (Middle)
* P4-SIR.spn - P4SIR, uncoloured, see Figure C.2
* P48-SIR-S10.colspn - P48SIR-S10, see Figure 5 (Bottom)
* P48-SIR-S10.andl.spn - P48SIR-S10 unfolded, see Figure 6 (Bottom)
* SIVR.hpn - SIR with variant virus, see Figure C.1
* P10-QI-SIR.colspn - P10Q$\\_\textrm{I}$SIR, see Figure C.3 (Top)
* P10-QSIR.colspn - P10QSIR, see Figure C.3 (Bottom)
* P10-SIR_Separate_Rates.cpn - $\mathcal{CPN}$ used for parameter optimisation - unfolded version of P10SIR, given in Figure C.6.
##### Connectivity networks (provided as CANDL files):
* network-Europe04.candl
- unfolding: |P|=4, |T|=8, |A|=16
* network-Europe05.candl
- unfolding: |P|=5, |T|=16, |A|=32
* network-Europe10.candl
- unfolding: |P|=10, |T|=30, |A|=60
* network-Europe48.candl
- unfolding: |P|=47, |T|=170, |A|=340
- note: IS isolated, https://countrycode.org
- unfolding time: 1.302 sec
* network-China.candl
- unfolding: |P|=33, |T|=142, |A|=286
- note: 34 provinces; TW isolated
- unfolding time: 0.283 sec
* network-USA.candl
- unfolding: |P|=48, |T|=210, |A|=420
- note: 50 states; AK, HI isolated
- unfolding time: 1.643 sec
##### PnSIR models (provided as CANDL files):
* P-Europe02-SIR - Europe fragment, 2 countries
* P-Europe04-SIR - Europe fragment, 4 countries
* P-Europe10-SIR - Western Europe, 10 countries
* P-Europe48-SIR - All Europe, 48 countries
* P-China-SIR - China, 34 provinces
* P-USA-SIR - USA, 50 states
##### Python programs:
* RRHCA (Random Restart Hill Climbing)
* Python Web Scraper, used to collect COVID19 Data.
##### R code:
* Europe.Rmd, Western_Europe.Rmd: Both these files take in csv outputs from Snoopy, Spike to produce correlation matrices and regression analysis. The Time and Infectious traces are extracted from the csv files and joined to real-world data (e.g. population, density, country size). Correlation matrices are then produced. Regression analysis is also conducted, along with checking the assumptions of regression.
* Clustering.Rmd: This code takes in one csv file (output from Snoopy, Spike) and performs agglomerative hierarchical clustering using Euclidean distance and complete linkage. This code selects Time and Infected traces to perform the clustering. The code plots a dendrogram of the clustering results for visualisation.
##### Spike:
* config file + andl/candl file
* all source files to produce Figure C.16.
##### MC2:
* property library and Unix script to produce Figure C.9.
# Features of a nano-twist phase in the nanolayered Ti3AlC2 MAX phase
Julien Guénolé Vincent Taupin Maxime Vallet Wenbo Yu Antoine Guitton
###### Abstract
Complex intermetallic materials known as MAX phases exhibit exceptional
properties from both metals and ceramics, largely thanks to their nanolayered
structure. With high-resolution scanning transmission electron microscopy
supported by atomistic modelling, we reveal atomic features of a nano-twist
phase in the nanolayered Ti3AlC2. The rotated hexagonal single-crystal is
encompassed within basal symmetric twist interfaces similar to grain
boundaries. In particular, we show that air-oxidation at
1000 °C can form a twisted phase that
leads to the formation of interfacial dislocation networks with screw
characters or to severe interfacial reconstructions. Additionally, we explore
the contribution of disclinations to the representation by continuum models of
the stress field generated by such nano-twist defect in the Ti3AlC2 bulk
phase. The occurrence of this unexpected defect is expected to impact the
physical response of this nanolayered material and, as such, supports
property-by-design approaches.
††journal: Scripta Materialia
[lem3]organization=Université de Lorraine, CNRS, Arts et Métiers, LEM3,
city=Metz, postcode=57070, country=France
[labex]organization=Labex DAMAS, Université de Lorraine, city=Metz,
postcode=57070, country=France
[lms]organization=Laboratoire Mécanique des Sols, Structures et Matériaux,
CentraleSupélec, CNRS UMR 8579, Université Paris-Saclay, city=Gif-sur-Yvette,
postcode=91190, country=France [ls]organization=Laboratoire Structures,
Propriétés et Modélisation des Solides, CentraleSupélec, CNRS UMR 8580,
Université Paris-Saclay, city=Gif-sur-Yvette, postcode=91190, country=France
[cmse]organization=Center of Materials Science and Engineering, School of
Mechanical and Electronic Control Engineering, Beijing Jiaotong University,
city=Beijing, postcode=100044, country=China
Ti3AlC2 belongs to the wide family of MAX phases [1]. These ternary compounds
arise a keen interest, because they are stiff, lightweight, machinable, made
from relatively inexpensive raw materials, resistant to oxidation and thermal
shock, and capable of remaining strong up to temperatures above 1,300°C in air
[2]. More than fifty compounds are thermodynamically stable and all of them
exhibit the same range of promising properties. Because of their composition,
they were called Mn+1AXn phases ($n=$ 1 to 3, M is a transition metal, A is an
A-group element and X is nitrogen and/or carbon) [3].
As a typical MAX phase, Ti3AlC2 has a nanolayered structure with a hexagonal
lattice, whose space group is $P6_{3}/mmc$ [2, 4]. The primitive cell can
be described as a stacking of two Ti6C octahedron layers with one layer of Al
[1]. Furthermore, measurements of lattice parameters with numerous methods
reveal that Ti3AlC2 exhibits a high crystalline anisotropy. The $c/a$
ratio is slightly higher than 6 [5].
It is established, that MAX phases experience plastic deformation by the glide
on basal planes of a-dislocations confined between the M6X and Al layers, thus
forming pile-ups and walls [6, 5, 7]. The latter can form local disorientation
areas, known as kink bands [8, 9]. Furthermore, numerous dislocation
interactions forming networks (dipoles, reactions) and high lattice friction
have been observed [10, 11]. Note, that out-of-basal plane dislocations have
been observed as well, but they do not play a role at room temperature in
standard deformation conditions [12, 13]. At high temperature, it has been
revealed, that out-of-basal plane ¡a¿-dislocations are common events and hence
cross-slip (from basal planes to prismatic or pyramidal planes) plays a key
role in the deformation [14, 15]. This increase of available glide systems is
likely to originate the brittle-to-ductile transition of MAX phases [14]. Note
also that Frank partial ¡c¿-dislocations correlated with a diffusion mechanism
were recently reported [16]. Moreover, observations of stacking faults is
another major microstructural feature, but their role in deformation both at
room and high temperatures remains unclear [6, 13, 15].
Of all MAX phases, those containing Al, such as Ti3AlC2, are the most
resistant to oxidation [2]. During exposure of Ti–Al–C MAX phases to oxidizing
environment at high temperatures, the outward diffusion of the weakly bonded
Al atoms is much faster compared to the more covalently bonded Ti atoms [2,
17, 18, 19]. Therefore, it results in a regime, where a superficial protective
layer of Al2O3 is formed. However, TiO2 can be formed as well, leading, this
time, to a catastrophic regime. Both mechanisms strongly depend on the
oxidation conditions and the initial microstructure [18, 19]. Despite the
description of these oxides, operating oxidation mechanisms of Ti3AlC2 and
more generally of MAX phases remain poorly documented, especially on how the
crystallographic structure is evolving. During decomposition of 312 MAX phases
(such as Ti3AlC2, Ti3SiC2) at high temperatures in an oxygen-containing
atmosphere or in species having a high affinity to A-element, A-element is
observed to de-intercalate, thus forming Ti3C2 platelets [20, 21, 22, 23].
More precisely, in case of Ti2AlC-Cu composite, this de-intercalation of Al is
associated with a Frank partial dislocation-based mechanism [16]. This
formation of Ti${}_{\text{x}}$C${}_{\text{y}}$ platelets is then viewed as
reinforcement of the composites [20].
Such is the background of the current letter. This work is motivated by the
analysis of original phases that were observed after oxidation in a Ti3AlC2
matrix by high-resolution scanning transmission electron microscopy (HR-STEM).
Such nanolamellar phases appear to be twisted with respect to the surrounding
matrix and are also associated with diffusion mechanisms in their neighborhood.
Hence, by interpreting our HR-STEM observations using molecular dynamics
simulations of model nano-twist phases, we study their atomic scale features
including interfacial crystal defects, excess energy, and mechanical fields.
Concerning the latter, a continuous description of the twist phase using a
disclination based mechanical framework is conducted complementary to
atomistic simulations to further evidence the complexity of such phases.
The specimen was prepared by hot isostatic pressing (HIP). Briefly, powders of
Ti, Al and TiC were mixed in stoichiometric proportions 2Ti:1.05Al:0.85C and
then cold-compacted into cylindrical steel dies using uniaxial pressure.
Powder compacts were encapsulated into pyrex containers under high vacuum for
reactive sintering in the HIP machine. Afterwards, the specimens were oxidized
at 1000 °C for 25 h in
a flowing air atmosphere. More details are given in [24, 19]. Cross-sectional
TEM samples were prepared by focused ion beam (FIB) on a FEI Helios dual beam
Nanolab 600i. Atomically-resolved high-angle annular dark field scanning TEM
(HAADF-STEM) was performed on a Titan3 G2 S/TEM fitted with a double
aberration corrector for probe and image corrections and operating at 300 kV.
HR-STEM in HAADF mode of the lamella is shown in Fig. 1 and at lower
magnification in Fig. SM1 in the supplementary materials. As the HAADF detector senses
a greater signal from atoms with a higher atomic number Z, Ti columns appear
brighter in the resulting micrograph [25]. The nano-layered structure of
Ti3AlC2 highlighted in the circular inset is consistent with the expected one
for a 312-MAX phase projected along [-1 -1 2 0], i.e. a zigzag stacking of
three TiC planes followed by one Al plane, the Al plane being the axis of
symmetry of the zigzag. Several large Ti${}_{\text{x}}$C${}_{\text{y}}$ laths
are clearly visible as well. These sub-phases are probably similar to Ti3C2
phases as they are formed by the diffusion of the Al interlayers of the
Ti3AlC2 phase. However, the experiments we used are not able to characterize
them precisely, and the mechanisms responsible for their formation are out of
the scope of this letter. The region of interest shows blurred atomic layers
(ROI, white square in Fig. 1). The blur is along the [1 -1 0 0] direction,
localized within the basal plane. This indicates that this part of the lattice
has undergone a rotation around the [0 0 0 1] direction. The boundaries
between this rotated phase and the other phase are marked with dashed white
lines. Particular orientation relationships for MX platelets in MAX interfaces
have been reported in the literature [26, 27]. However, such orientations will
not produce the high-resolution contrast we observe in Fig.1, in particular,
the blurred atomic layer within the NTP. Note, that the upper extremity of the
ROI shows a large, blurred area and visible inter-diffusion between Al and Ti
planes. Such mechanism that induce severe local lattice strain might be
responsible for the difference in the contrast of the
Ti${}_{\text{x}}$C${}_{\text{y}}$ phase on both side of the NTP (Fig. 1).
Figure 1: Experimental observation of the nano-twist phase (region of
interest, ROI) in the nanolayered Ti3AlC2. Filtered HR-STEM micrography in
HAADF mode with electron direction along [1 1 -2 0]. Boundaries of the nano-
twist phase are indicated by dashed lines. A model of the Ti3AlC2
crystallographic structure is shown in the inset. Some
Ti${}_{\text{x}}$C${}_{\text{y}}$ similar to Ti3C2 laths are also observed.
Figure 2: Modeling of the nano-twist phase in the nano-layered Ti3AlC2 by
means of a cookie-shape rotation region. (a) atomistic simulation setup of the
cookie-shape region (in blue) parallel to the basal planes, with [0001]
rotation axis and $\psi$ rotation angle. The black square indicates the
location of the inset. Inset: magnified projected view of a 10 nm thin slice
of the side of the cookie-shape nano-twist phase for $\Psi=$$$. Ti, Al and C
are colored in green, red and black,
respectively.
To gain more insights on the observed defect and to characterize precisely the
structure of the twist boundary, we modeled the nano-twist phase by atomistic
simulations. Molecular dynamics (MD) based on interatomic potentials is an
excellent modelling method to investigate the atomic-scale configuration of
defects, in particular interfaces [28, 29]. It has been widely used over the
past decade to investigate grain boundaries [30, 31, 32, 33] and phase
boundaries [34, 35, 36, 37]. The only published model suitable to describe
Ti3AlC2 atomic interactions is the bond-order potential (BOP) recently
adjusted by Plummer and Tucker [38]. It is based on the formalism initially
proposed by Tersoff [39] and described in detail by Albe _et al._ [40]. Fig. 2
shows the atomistic simulation setup to compute the energy and the structure
of the rotated phase boundaries. Within a fully periodic bulk Ti3AlC2 phase, a
cookie shape region in the center is rotated with an angle $\Psi$ around an
axis normal to the basal plane, as indicated in blue in Fig. 2. The inset
shows as an example a rotation of the atomic structure for $\Psi=$$$. Note
that structures with similar contrasts to HAADF-STEM ones can be obtained with
different values of $\Psi$ (Fig. 1).
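The geometric operation itself is straightforward; a minimal numpy sketch of the construction (our illustration, independent of the MD engine; the array layout and names are assumptions) reads:

```python
import numpy as np

def twist_cookie(positions, center, radius, half_height, psi_deg):
    """Rotate all atoms inside a cylinder (the 'cookie') by psi about the
    [0001] (z) axis through `center`; positions is an (N, 3) array."""
    psi = np.radians(psi_deg)
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0.0],
                   [np.sin(psi),  np.cos(psi), 0.0],
                   [0.0,          0.0,         1.0]])
    rel = positions - np.asarray(center)
    inside = (np.hypot(rel[:, 0], rel[:, 1]) < radius) & (np.abs(rel[:, 2]) < half_height)
    out = positions.copy()
    out[inside] = rel[inside] @ Rz.T + center
    return out
```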
It is important to mention that we do not intend here to model the exact
system observed in our experiments, for several reasons. (1) Our experimental
approaches cannot reveal the exact composition of the
Ti${}_{\text{x}}$C${}_{\text{y}}$ phase. (2) The interaction potential used in
our MD simulations has been designed to model the Ti3AlC2 phase and, thus,
might be less reliable for any non-stochiometric
Ti${}_{\text{x}}$C${}_{\text{y}}$ phase. (3) The local atomic environment of
the interface, up to three TiC interlayers, is identical for both Ti3AlC2 and
Ti${}_{\text{x}}$C${}_{\text{y}}$ phases. In this context, changing the
material that forms the interfaces of the NTP in our MD simulations should
influence the value of the interfacial energy, but will have a negligible
impact on what is the focus of our work: the energetic profile and the
crystallographic structure of the interfaces. The consideration of idealized
systems or surrogate materials is common practice with atomistic simulations,
including for direct comparisons with experimental results [35, 41, 37, 16].
The as-formed nano-twist phase (NTP) exhibits prismatic and basal interfaces
with the Ti3AlC2 phase. The prismatic interface is not clearly defined in our
experimental observation, whereas the basal interface appears atomically sharp
(See Fig. 1). In this work, we thus focus on the characterisation of the NTP
basal boundary (NTB). Fig. 3 shows the interfacial energy of the NTB as
function of the twist angle $\Psi$. The energy is computed by considering a
cylindrical region in the center of the NTP, that does not encompass the
prismatic boundaries. For each twist angle, the system is statically relaxed
by using conjugate gradients and fire [42, 43] methods. The dimensions of the
box are adapted to ensure a globally stress-free system. The energy of such
as-relaxed NTB is shown with blue small dots in Fig. 3. Selected
configurations have been annealed at 600 K for 100 ps. A Nose-Hoover
thermostat controls the temperature and a Nose-Hoover barostat ensures
constant zero pressure. The systems are
subsequently quenched and their energies are shown in Fig. 3 with orange large
dots.
Figure 3: Energy of the basal interface of the nano-twist phase as function
of the twist angle $\Psi$. Interfacial energy of the ground state and annealed
structure in blue and orange dots, respectively. The dotted line is a guide
for the eyes. Insets show magnified views of the stress field $\sigma_{zz}$
for $\Psi=3,13,14,15,30$.
The insets in Fig. 3 present the out-of-plane stress component $\sigma_{zz}$
of the as-relaxed NTB for different $\Psi$. By showing the distortion of the
crystal lattice at the basal boundary, it reveals the structures of the
interface. More details are presented in Fig. 4 for $\Psi=$$$, the smallest
twist angle considered in this work. The distortion of the crystal lattice at
the NTB is captured by the out-of-plane stress component $\sigma_{zz}$ (Fig.
4a,b). This evokes the signature of an interfacial dislocation network, with
the three-fold symmetry of the basal plane and the nodes of interacting
interfacial dislocations. The atomic positions for different basal interlayers
shown in Fig. 4(c)(d) confirm this observation, as the relative displacement
of the atoms is characteristic of screw dislocations.
Figure 4: Interfacial structure of the Ti3AlC2 nano-twist phase for
$\Psi=$$$. (a) global and (b) magnified views of the local stress field
$\sigma_{zz}$ revealing the typical dislocation network of a low angle grain
boundary. (c) identical view as (b) but showing chemical species as in Fig. 2.
(d) magnified view of (c) revealing the atomic displacements induced by a
screw dislocation. Atomic radii are proportional to the stacking position
in the [0001] direction: light green, red and dark green, from top to bottom.
The screw dislocation line is indicated as a blue dashed line.
The NTP we observed experimentally is not a grain in itself, but is located
within one crystallographically homogeneous area containing perfectly coherent
Ti3AlC2 and Ti${}_{\text{x}}$C${}_{\text{y}}$ phases. The phase
Ti${}_{\text{x}}$C${}_{\text{y}}$ being directly issued from the Ti3AlC2
phase, this area can be considered as a Ti3AlC2-base grain. This nano-phase
exhibits limited degrees of freedom, with its twist axis and at least one
interface being precisely defined: the ⟨c⟩ axis and the basal planes,
respectively. It is clear that the basal interface, which we called NTB, shares
similarities with pure twist grain boundaries, such as (1) an interfacial
energy evolution with a typical bell shape, (2) an interfacial structure that
can be described by a network of screw dislocations lying on basal planes and
cross-slipping to 1st-order prismatic planes (consistent with previous
experimental observations [14]) for low twist angles, (3) a spacing between
interfacial dislocations inversely proportional to the twist angle for low
angles (see the sketch below), and (4) a description of the interface in terms
of a dislocation network that vanishes for high twist angles. The transition
from a low angle configuration to a high angle configuration occurs for
$\Psi\approx$$$ (Fig. 3, insets). However, the NTB also exhibits clear
differences with respect to grain boundaries, namely (1) only one macroscopic
degree of freedom (DoF) instead of five for grain boundaries and (2)
interfacial reconstructions for high twist angles leading to non-monotonous
structural patterns.
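Regarding point (3), the expected spacing follows Frank's relation $d=|\mathbf{b}|/(2\sin(\Psi/2))\approx|\mathbf{b}|/\Psi$ for small $\Psi$; a quick numerical illustration (the Burgers vector magnitude below is a placeholder of the order of the basal lattice parameter, not a fitted value):

```python
import numpy as np

b = 0.31  # nm, of the order of the basal <a> lattice parameter (placeholder)
for psi_deg in (1, 2, 3, 5, 10):
    d = b / (2.0 * np.sin(np.radians(psi_deg) / 2.0))  # Frank's relation
    print(f"psi = {psi_deg:2d} deg  ->  spacing d ~ {d:.1f} nm")
```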
Interestingly, such NTP were already reported by Drouelle _et al._ in Ti3AlC2
deformed by creep at high temperature. Indeed, they observed by conventional
TEM, highly disorganized lenticular defects with a very high density of screw
dislocations [15]. In addition, Zhang _et al._ reported low-angle twist grain
boundaries (disorientation around 0.5°) in Ti3AlC2 compressed uniaxially at
1200 °C [44]. Such sub-grain boundaries
are formed by screw dislocation networks originating from basal dislocation
reactions, like those predicted here in Fig. 4.a. Both studies conclude that
such features may play a role in the plasticity of Ti3AlC2.
It is worth noticing that, within the time frame considered by our atomistic
simulations, neither the interfacial energy nor the interfacial structure is
significantly altered by annealing. The configuration of the NTB predicted by
atomistic modelling thus appears stable.
Figure 5: Shear stress fields $\sigma_{xz}$ and $\sigma_{yz}$ of a high-angle
nano-twist phase represented by (a) a dipole of twist disclination loops and
(b) an atomistic cookie-shape region. Only voxels (a) and atoms (b) with
stresses with magnitude larger than 600 MPa are shown. In purple (a) is shown
the norm of the disclination density tensor.
For high twist angles ($\Psi>$$$), the interfacial configuration exhibits
severe reconstructions. By comparison to configurations with low twist angles,
this reconstruction releases localized atomic stresses but does not lead to
lower interfacial energies. Interfacial energies for tilt and twist boundaries
in crystals with cubic structures often show clear drops for particular $\Psi$
values that correspond to high symmetry misorientations. Yet, such stable high
angle configurations are not observed with the NTB investigated in this work.
This can be related to the limited DoF accessible to this boundary, as
described in the following. Grain boundaries are classically considered within
the coincidence site lattice theory (CSL), which predict periodic interfaces
for particular misorientation. As such, they are defined by five macroscopic
DoF (directions and rotation), among which some lead to particularly stable
configurations with low interfacial energy. Additionally, it is known that
considering the microscopic DoF (translation) accessible to any grain
boundaries is crucial to fully determine the most stable configurations.
However, the NTB we observed does not have so many DoF. In particular, the NTP
being confined within a well defined bulk Ti3AlC2 phase, it does not allows
the NTB for translation DoF. Allowing the NTB for microscopic DoF might thus
lead to lower energy configurations. This is out of the scope of the present
work, which is focused on the characterization of the NTP we observed
experimentally.
For large angles $\Psi$, the NTB can no longer be conveniently described by
dislocations, as discussed above and illustrated in the insets of Fig. 3. We
propose that, for high angles of rotation, the NTB can be appropriately
represented by disclination loops. As the rotational counterpart of
dislocations, disclinations are line defects which introduce a discontinuity
of the elastic rotation field, referred to as the Frank vector. The concept of
disclinations is appropriate for the description of the elastic fields of tilt
boundaries with high misorientations [45] and of nanotwin microstructures
[46, 47]. Regarding the cookie-shaped NTP considered in this work, the stress
field due to the network of overlapping dislocation arrays can be equivalently
and more simply described by using two disclination loops delimiting the NTP.
Such a representation was successfully applied to nanolamellar twins with tilt
misorientations [47]. Here, we propose to apply this concept to twist
interfaces with high misorientations, which, to our knowledge, has never been
attempted so far. More details can be found in the Supplementary Material.
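To make the contrast with dislocations explicit (a standard result of defect
theory, recalled here for convenience rather than derived from the present
simulations): a dislocation imposes a constant displacement jump equal to its
Burgers vector $\mathbf{b}$ across the cut surface, whereas a disclination
imposes a rotational jump governed by its Frank vector $\boldsymbol{\Omega}$:

$[\mathbf{u}]=\mathbf{b}\ \text{(dislocation)},\qquad[\mathbf{u}]=\boldsymbol{\Omega}\times(\mathbf{r}-\mathbf{r}_{0})\ \text{(disclination)},$

so the jump of a disclination grows linearly with the distance to its rotation
axis; a twist disclination loop with $\boldsymbol{\Omega}$ parallel to [0001]
therefore carries exactly the rotation discontinuity $\Psi$ required at the
NTB.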
As shown in Fig. 5(a), a NTP with misorientation $30^{\circ}$ is modelled by a
dipole of twist disclination loops with a Frank vector of magnitude
$30^{\circ}$ about the Z ([0001]) axis. A recent field disclination mechanics
framework is used to build this loop dipole [45, 47]. The two main Cauchy
stress tensor components predicted by the model are the shear stresses
$\sigma_{xz}$ and $\sigma_{yz}$. These are characteristic of the out-of-plane
shear stresses of screw dislocations and clearly arise from the twist rotation
discontinuity imposed within the NTB.
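For comparison, the classical isotropic-elasticity field of a single screw
dislocation lying along $z$ with Burgers vector magnitude $b$ (a textbook
expression, quoted only to motivate this $\sigma_{xz}$, $\sigma_{yz}$
signature) is

$\sigma_{xz}=-\dfrac{\mu b}{2\pi}\dfrac{y}{x^{2}+y^{2}},\qquad\sigma_{yz}=\dfrac{\mu b}{2\pi}\dfrac{x}{x^{2}+y^{2}},$

with all other stress components vanishing; a network of such dislocations, or
the equivalent twist disclination dipole, is thus expected to produce
predominantly these two shear components.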
Fig. 5(b) shows $\sigma_{xz}$ and $\sigma_{yz}$ as predicted by atomistic
simulations and evidences a fair match with what is predicted by the
disclination model in Fig. 5(a). The similarities are (1) the morphology of
the surrounding stress field maxima, with two inverse dipoles along a
particular crystallographic direction, and (2) the magnitude of these stress
dipoles, around $\pm 1\,\mathrm{GPa}$. This match between the atomistic and
continuum representations is however partial. The differences are (1) the
direction of the stress maxima, which is rotated between the two
representations, and (2) stresses within the NTB predicted by MD but not by
the disclinations. These differences clearly originate from characteristics
not considered by the disclination-based model. Typically, dislocation core
effects are only captured by atomistic modelling and lead to $\epsilon_{zz}$
eigenstrains. Additionally, the reconstruction of the NTB at high twist angles
observed by MD appears to play a crucial role in the distribution of the
stress field. Disclination-based models will have to be enriched to account
for such atomistic mechanisms, but they are nonetheless very promising for
modelling general interfaces in complex materials at the continuum scale.
Regarding the formation mechanism of the NTP, our preliminary observations
suggest a strong influence of Ti-Al diffusion in the basal planes at the onset
of the NTB. The diffusion could be favoured by the interfacial dislocation
network, which locally induces a variation of free volume within the basal
planes.
The exact impact of a nano-twist phase on the properties of the Ti3AlC2
nanolayered structure is yet unclear, but they do not appear to be anecdotal
events, as reported in [18, 44]. Similarly to what has been observed with the
formation of Frank partial dislocations [16], the interfacial dislocation
network could limit the propagation of $\langle a\rangle$ dislocations and
thus contribute to the strengthening of Ti3AlC2 phases. The prismatic
interfaces of the NTP would require more investigation, as they should play an
important role by hindering the propagation of basal dislocations more
efficiently than the basal interfaces. From a more general perspective,
nano-twist crystals have been shown to exhibit peculiar optical properties
[48].
Additional investigations are ongoing to gain a comprehensive understanding of
the formation mechanisms of this nano-twist phase, which could open up new
possibilities for MAX phases with tailored properties.
## Acknowledgments
This project has received financial supports from the CNRS through the MITI
interdisciplinary programs and from the National Natural Science Foundation of
China (no. 52175284). This work was performed using HPC resources from GENCI-
TGCC (grant 2020-A0080911390 and 2021-A0100911390) and from the EXPLOR center
of the Université de Lorraine.
## References
* [1] X. Wang, Y. Zhou, J. Mater. Chem. 12 (3) (2002) 455–460. doi:10.1039/b108685e.
* [2] M. Barsoum, T. El-Raghy, Am. Sci. 89 (4) (2001) 334. doi:10.1511/2001.4.334.
* [3] M. W. Barsoum, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany, 2013. doi:10.1002/9783527654581.
* [4] M. W. Barsoum, Prog Solid State Ch 28 (1-4) (2000) 201–281. doi:10.1016/S0079-6786(00)00006-6.
* [5] M. W. Barsoum, M. Radovic, Ann Rev Mater Res 41 (1) (2011) 195–227. doi:10.1146/annurev-matsci-062910-100448.
* [6] L. Farber, M. W. Barsoum, A. Zavaliangos, T. El-Raghy, I. Levin, J. Am. Ceram. Soc. 81 (6) (2005) 1677–1681. doi:10.1111/j.1151-2916.1998.tb02532.x.
* [7] K. Gouriet, P. Carrez, P. Cordier, A. Guitton, A. Joulain, L. Thilly, C. Tromas, Philos. Mag. 95 (23) (2015) 2539–2552. doi:10.1080/14786435.2015.1066938.
* [8] M. W. Barsoum, L. Farber, T. El-Raghy, Metall. Mater. Trans. A 30 (7) (1999) 1727–1738. doi:10.1007/s11661-999-0172-z.
* [9] A. Guitton, S. Van Petegem, C. Tromas, A. Joulain, H. Van Swygenhoven, L. Thilly, Appl. Phys. Lett. 104 (24) (2014) 241910. doi:10.1063/1.4884601.
* [10] A. Guitton, A. Joulain, L. Thilly, C. Tromas, Philos. Mag. 92 (36) (2012) 4536–4546. doi:10.1080/14786435.2012.715250.
* [11] G.-P. Bei, A. Guitton, A. Joulain, V. Brunet, S. Dubois, L. Thilly, C. Tromas, Philos. Mag. 93 (15) (2013) 1784–1801. doi:10.1080/14786435.2012.755272.
* [12] C. Tromas, P. Villechaise, V. Gauthier-Brunet, S. Dubois, Philos. Mag. 91 (7-9) (2011) 1265–1275. doi:10.1080/14786435.2010.494584.
* [13] A. Joulain, L. Thilly, J. Rabier, Philos. Mag. 88 (9) (2008) 1307–1320.
* [14] A. Guitton, A. Joulain, L. Thilly, C. Tromas, Sci. Rep. 4 (1) (2015) 6358. doi:10.1038/srep06358.
* [15] E. Drouelle, A. Joulain, J. Cormier, V. Gauthier-Brunet, P. Villechaise, S. Dubois, P. Sallot, J. Alloys Compd. 693 (2017) 622–630. doi:10.1016/j.jallcom.2016.09.194.
* [16] W. Yu, J. Guénolé, J. Ghanbaja, M. Vallet, A. Guitton, Scr. Mater. 191 (2021) 34–39. doi:10.1016/j.scriptamat.2020.09.007.
* [17] E. Drouelle, V. Brunet, J. Cormier, P. Villechaise, P. Sallot, F. Naimi, F. Bernard, S. Dubois, J. Am. Ceram. Soc. 103 (2) (2020) 1270–1280. doi:10.1111/jace.16780.
* [18] E. Drouelle, V. Gauthier-Brunet, J. Cormier, P. Villechaise, P. Sallot, F. Naimi, F. Bernard, S. Dubois, J. Alloys Compd. 826 (2020) 154062. doi:10.1016/j.jallcom.2020.154062.
* [19] W. Yu, M. Vallet, B. Levraut, V. Gauthier-Brunet, S. Dubois, J. Eur. Ceram. Soc. 40 (5) (2020) 1820–1828. doi:10.1016/j.jeurceramsoc.2020.01.042.
* [20] J. Zhang, J. Y. Wang, Y. C. Zhou, Acta Mater. 55 (13) (2007) 4381–4390. doi:10.1016/j.actamat.2007.03.033.
* [21] X. H. Wang, Y. C. Zhou, Chemistry of Materials 15 (19) (9 2003). doi:10.1021/cm030022v.
* [22] J. Emmerlich, D. Music, P. Eklund, O. Wilhelmsson, U. Jansson, J. M. Schneider, H. Högberg, L. Hultman, Acta Materialia 55 (4) (2 2007). doi:10.1016/j.actamat.2006.10.010.
* [23] M. W. Barsoum, T. El‐Raghy, L. Farber, M. Amer, R. Christini, A. Adams, Journal of The Electrochemical Society 146 (10) (10 1999). doi:10.1149/1.1392573.
* [24] W. Yu, V. Gauthier-Brunet, T. Cabioc'h, S. Dubois, J. Am. Ceram. Soc. 97 (7) (2014) 2308–2313. doi:10.1111/jace.12930.
* [25] D. B. Williams, C. B. Carter, Springer US, Boston, MA, 2009. doi:10.1007/978-0-387-76501-3.
* [26] X. Ma, Y. Zhu, X. Wang, Y. Zhou, Philosophical Magazine 84 (28) (2004) 2969–2977.
* [27] Z. Lin, M. Li, Y. Zhou, et al., J. Mater. Sci. Technol 23 (2) (2007) 145–65.
* [28] L. Priester, Springer, 2013.
* [29] C. R. Weinberger, G. J. Tucker (Eds.), Vol. 245 of Springer Series in Materials Science, Springer International Publishing, Cham, 2016. doi:10.1007/978-3-319-33480-6.
* [30] F. Sansoz, J. F. Molinari, Acta Mater. 53 (7) (2005) 1931–1944. doi:10.1016/j.actamat.2005.01.007.
* [31] T. Frolov, S. V. Divinski, M. Asta, Y. Mishin, Phys. Rev. Lett. 110 (25) (2013) 255502. doi:10.1103/PhysRevLett.110.255502.
* [32] M. A. Tschopp, S. P. Coleman, D. L. McDowell, Integr. Mater. Manuf. Innov. 4 (1) (2015) 11. doi:10.1186/s40192-015-0040-1.
* [33] H. Zhao, L. Huber, W. Lu, N. J. Peter, D. An, F. De Geuser, G. Dehm, D. Ponge, J. Neugebauer, B. Gault, D. Raabe, Phys. Rev. Lett. 124 (10) (2020) 106102. doi:10.1103/PhysRevLett.124.106102.
* [34] Y. Mishin, Acta Mater. 52 (6) (2004) 1451–1467. doi:10.1016/j.actamat.2003.11.026.
* [35] A. Prakash, J. Guénolé, J. Wang, J. Müller, E. Spiecker, M. J. Mills, I. Povstugar, P. Choi, D. Raabe, E. Bitzek, Acta Mater. 92 (2015) 33–45. doi:10.1016/j.actamat.2015.03.050.
* [36] A. Vaid, J. Guénolé, A. Prakash, S. Korte-Kerzel, E. Bitzek, Materialia 7 (2019) 100355. doi:10.1016/j.mtla.2019.100355.
* [37] J. Guénolé, M. Zubair, S. Roy, Z. Xie, M. Lipińska-Chwałek, S. Sandlöbes-Haut, S. Korte-Kerzel, Materials & Design 202 (2021) 109572. doi:10.1016/j.matdes.2021.109572.
* [38] G. Plummer, G. J. Tucker, Phys. Rev. B 100 (21) (2019) 214114. doi:10.1103/PhysRevB.100.214114.
* [39] J. Tersoff, Phys. Rev. B 37 (12) (1988) 6991–7000. doi:10.1103/PhysRevB.37.6991.
* [40] K. Albe, K. Nordlund, J. Nord, A. Kuronen, Phys. Rev. B 66 (3) (2002) 352051–3520514. doi:10.1103/PhysRevB.66.035205.
* [41] Y. Chang, W. Lu, J. Guénolé, L. Stephenson, A. Szczpaniak, P. Kontis, A. Ackerman, F. Dear, I. Mouton, X. Zhong, S. Zhang, D. Dye, C. Liebscher, D. Ponge, S. Korte-Kerzel, D. Raabe, B. Gault, Nature Communications 10 (2019). doi:10.1038/s41467-019-08752-7.
* [42] E. Bitzek, P. Koskinen, F. Gähler, M. Moseler, P. Gumbsch, Phys. Rev. Lett. 97 (17) (2006) 1–4. doi:10.1103/PhysRevLett.97.170201.
* [43] J. Guénolé, W. G. Nöhring, A. Vaid, F. Houllé, Z. Xie, A. Prakash, E. Bitzek, Comput. Mater. Sci. 175 (2020) 109584. doi:10.1016/j.commatsci.2020.109584.
* [44] H. Zhang, C. Zhang, T. Hu, X. Zhan, X. Wang, Y. Zhou, Sci. Rep. 6 (1) (2016) 23943. doi:10.1038/srep23943.
* [45] C. Fressengeas, V. Taupin, L. Capolungo, Int. J. Solids Struct. 51 (6) (2014) 1434–1441. doi:10.1016/j.ijsolstr.2013.12.031.
* [46] B. Reinholz, S. Brinckmann, A. Hartmaier, B. Muntifering, W. B. Knowlton, P. Müllner, Acta Mater. 108 (2016) 197–206. doi:10.1016/j.actamat.2016.02.007.
* [47] L. Capolungo, V. Taupin, Mater. Theory 3 (1) (2019) 2. doi:10.1186/s41313-018-0013-9.
* [48] H. Y. Lee, M. M. A. Ezzi, N. Raghuvanshi, J. Y. Chung, K. Watanabe, T. Taniguchi, S. Garaj, S. Adam, S. Gradečak, Nano Lett. 21 (7) (2021) 2832–2839. doi:10.1021/ACS.NANOLETT.0C04924.
|
Version 2023-04-10. See https://shelah.logic.at/papers/E108/ for possible
updates.
# Stable frames and weights
E108
Saharon Shelah Einstein Institute of Mathematics
Edmond J. Safra Campus, Givat Ram
The Hebrew University of Jerusalem
Jerusalem, 91904, Israel
and
Department of Mathematics
Hill Center - Busch Campus
Rutgers, The State University of New Jersey
110 Frelinghuysen Road
Piscataway, NJ 08854-8019 USA <EMAIL_ADDRESS> http://shelah.logic.at
(Date: December 23, 2015)
###### Abstract.
This was paper 839 in the author's list until winter 2023, when it was divided
into three.
Part I: We would like to generalize imaginary elements, the weight of ${\rm
ortp}(a,M,N)$, ${\mathbf{P}}$-weight, ${\mathbf{P}}$-simple types, etc. from
[She90, Ch.III,V,§4] to the context of good frames. This requires allowing the
vocabulary to have predicates and function symbols of infinite arity, but it
seems that we do not suffer any real loss.
Part II: Good frames were suggested in [She09d] as the (bare bones) right
parallel among a.e.c. to superstable (among elementary classes). Here we
consider $(\mu,\lambda,\kappa)$-frames as candidates for being the right
parallel to the class of $|T|^{+}$-saturated models of a stable theory (among
elementary classes). A loss as compared to the superstable case is that going
up by induction on cardinals is problematic (for cardinals of small
cofinality). But this arises only when we try to lift. For this context we
investigate the dimension.
Part III: In the context of Part II, we consider the main gap problem for the
parallel of somewhat saturated model; showing we are not worse than in the
first order case.
###### Key words and phrases:
model theory, classification theory, stability, a.e.c., orthogonality, weight,
main gap
###### 2010 Mathematics Subject Classification:
Primary 03C45, 03C48; Secondary: 03C55
The author thanks Alice Leonhardt for the beautiful typing. The author would
like to thank the Israel Science Foundation for partial support of this
research (Grant No. 242/03). First typed: 2002/Sept/10. This was paper [839]
until winter 2023, when at an editor's request it was divided into [839],
[1248], [1249].
Annotated Content
Part I: Stable frames and weights
§0 Introduction
§1 Weight and $\mathbf{P}$-weight (labels w(dot), wp(dot) and without dot)
1. [For ${\mathfrak{s}}$ a good $\lambda$-frame with some additional properties we define weight and $\mathbf{P}$-weight.]
§2 Imaginary elements, an ${\rm ess}-(\mu,\lambda)$-a.e.c. and frames (labels m(dot), e(dot), b)
1. [Define an ${\rm ess}-(\mu,\lambda)$-a.e.c. allowing infinitary functions. Then get ${\mathfrak{s}}$ with type bases.]
§3 $\mathbf{P}$-simple types (labels 3.1 on (dot))
Part II: Generalizing stable classes
§4 Introduction
§5 Axiomatize a.e.c. without full continuity (label f)
1. [Smooth out: generalize [She09c, §1].]
§(5A) a.e.c.
§(5B) Basic Notions (labels 5.15 on)
§(5C) Liftings (labels 5.20 on)
§6 PR frames (labels pr(dot))
1. [Seems better with NF, here, so earlier;
1. $(a)$
dominated appear
2. $(b)$
missing reference
3. $(c)$
“$P$ based on $\mathbf{a}$”, see I, but by
4. $(d)$
use places $K_{(M,A)}$ or monsters ${\mathfrak{C}}$.]
Part III
§7 Introduction (7.1-7.4 give labels!)
§8 Analysis of dimension for $\mathbf{P}$ (label g(dot))
1. [Question: use NF? Place $\mathbf{a}$ or ${\mathfrak{a}}$?
2. $(a)\quad K_{A}$ or $K_{(M,A)}$ or $K_{M,\infty}$
3. $(b)\quad$ use monster ${\mathfrak{C}}$ or play…
4. $(c)\quad$ define $M<_{{\mathfrak{k}}_{A}}N$
5. $(d)\quad$ m.d. candidate (multi-dimensional)
6. $(e)\quad(<\kappa)$-based.]
Part IV
§9 Strong stability: a weak form of superstability [continues [She14]] (labels uq(dot))
§10 Decomposition (more on main gap + decomposition, continues §7, §8) (labels h(dot))
§11 Decompositions (labels dc(dot))
Part I: Beautiful frames: weight and simplicity
## 0\. Introduction
We consider here the directions listed in the abstract. (As we started this in
2002 and have not worked on it for long, we intend to make public what is in a
reasonable state.)
In part I we assume ${{\mathfrak{s}}}$ is a good $\lambda$-frame with some
extra properties from [She09e], e.g. as in the assumption of [She09e, §12], so
we shall assume knowledge of [She09e], and the basic facts on good
$\lambda$-frames from [She09c].
We can look at results from [She90] which were not regained in beautiful
$\lambda$-frames. Well, of course, we are far from the main gap for the
original ${{\mathfrak{s}}}$ ([She90, Ch.XIII]) and there are results which are
obviously more strongly connected to elementary classes, particularly
ultraproducts. This leaves us with parts of type theory: semi-regular types,
weight, ${\mathbf{P}}$-simple types, and “hereditarily orthogonal to
$\mathbf{P}$” (the last two were defined and investigated in [She78,
Ch.V,§0 + Def4.4-Ex4.15], [She90, Ch.V,§0,pg.226,Def4.4-Ex4.15,pg.277-284]).
(Footnote: The motivation, for suitable $\mathbf{P}$ (e.g. a single regular
type), is that on the one hand ${\rm stp}(a,A)\pm\mathbf{P}\Rightarrow{\rm
stp}(a/E,A)$ is $\mathbf{P}$-simple for some equivalence relation $E$
definable over $A$, and on the other hand if ${\rm stp}(a_{i},A)$ is
$\mathbf{P}$-simple for $i<\alpha$ then
$\Sigma\\{w(a_{i},A\cup\\{a_{j}:j<i\\}):i<\alpha\\}$ does not depend on the
order in which we list the $a_{i}$’s. Note that $\mathbf{P}$ here is
${{\mathscr{P}}}$ there.)
Note that “a type $q$ is $p$-simple (or ${\mathbf{P}}$-simple)” and “$q$ is
hereditarily orthogonal to $p$ (or ${\mathbf{P}}$)” are essentially the
“internal” and “foreign” of Hrushovski’s profound works. (Footnote: Note that
“foreign to $\mathbf{P}$” and “hereditarily orthogonal to $\mathbf{P}$” are
equivalent. Now (taking $\mathbf{P}=\\{p\\}$ for ease): $(a)$ $q(x)$ is
$p(x)$-simple when for some set $A$, in ${{\mathfrak{C}}}$ we have
$q({{\mathfrak{C}}})\subseteq{\rm acl}(A\cup\bigcup p_{i}({{\mathfrak{C}}}))$;
$(b)$ $q(x)$ is $p(x)$-internal when for some set $A$, in ${{\mathfrak{C}}}$
we have $q({{\mathfrak{C}}})\subseteq{\rm dcl}(A\cup p({{\mathfrak{C}}}))$.
Note: $(\alpha)$ internal implies simple; $(\beta)$ if we aim at computing
weights it is better to stress acl, as it covers more; $(\gamma)$ but the
difference is minor; and $(\delta)$ in existence it is better to stress dcl;
also it is useful that $\\{F\restriction(p({{\mathfrak{C}}})\cup
q({{\mathfrak{C}}})):F$ an automorphism of ${{\mathfrak{C}}}$ over
$p({{\mathfrak{C}}})\cup{\rm Dom}(p)\\}$ is trivial when $q(x)$ is
$p$-internal, but not so for $p$-simple (though it forms a pro-finite group).)
## 1. I: Weight and ${\mathbf{P}}$-weight
Recalling [She09c], [She09e]
###### Context 1.1.
1) ${{\mathfrak{s}}}$ is a full good${}^{+}\,\lambda$-frame, with primes,
$K^{3,{\rm vq}}_{{\mathfrak{s}}}=K^{3,{\rm
qr}}_{{\mathfrak{s}}},\bot={\underset{{\rm wk}}{\bot}}$ and $p\bot
M\Leftrightarrow p{\underset{{\rm a}}{\perp}}M$, note that as
${{\mathfrak{s}}}$ is full, ${{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M)={{\mathscr{S}}}^{{\rm na}}_{{\mathfrak{s}}}(M)$; also
${\mathfrak{k}}_{{\mathfrak{s}}}={\mathfrak{k}}[{\mathfrak{s}}]=(K^{{\mathfrak{s}}},\leq_{{\mathfrak{k}}_{{\mathfrak{s}}}})$
is the a.e.c.
2) ${{\mathfrak{C}}}$ is an ${{\mathfrak{s}}}$-monster so it is
$K^{{\mathfrak{s}}}_{\lambda^{+}}$-saturated over $\lambda$ and
$M<_{{\mathfrak{s}}}{{\mathfrak{C}}}$ means
$M\leq_{{{\mathfrak{k}}}[{{\mathfrak{s}}}]}{{\mathfrak{C}}}$ and $M\in
K_{{\mathfrak{s}}}$. As ${\mathfrak{s}}$ is full, it has regulars.
###### Observation 1.2.
${{\mathfrak{s}}}^{{\rm reg}}$ satisfies all the above except being full.
###### Proof.
See [She09e, 10.18=L10.p19tex] and Definition [She09e, 10.17=L10.p18tex]. ∎
###### Claim 1.3.
1) If $p\in{{\mathscr{S}}}^{{\rm bs}}_{\mathfrak{s}}(M)$ then we can find
$b,N$ and a finite $\mathbf{J}$ such that:
1. $\circledast$
$(a)\quad M\leq_{{\mathfrak{s}}}N$
2. $(b)\quad\mathbf{J}\subseteq N$ is a finite independent set in $(M,N)$
3. $(c)\quad c\in\mathbf{J}\Rightarrow{\rm ortp}(c,M,N)$ is regular, recalling ${\rm ortp}$ stands for orbital type
4. $(d)\quad(M,N,\mathbf{J})\in K^{3,{\rm qr}}_{{\mathfrak{s}}}$
5. $(e)\quad b\in N$ realizes $p$.
2) We can add, if $M$ is brimmed, that
1. $(f)$
$(M,N,b)\in K^{3,{\rm pr}}_{{\mathfrak{s}}}$.
3) In (2), $|\mathbf{J}|$ depends only on $(p,M)$.
4) If $M$ is brimmed, then we can work in ${\mathfrak{s}}({\rm brim})$ and get
the same $|\mathbf{J}|$ and $N$ (so $N\in K_{{\mathfrak{s}}}$) brimmed.
###### Proof..
1) By induction on $\ell<\omega$, we try to choose
$N_{\ell},a_{\ell},q_{\ell}$ such that:
1. $(*)$
$(a)\quad N_{0}=M$
2. $(b)\quad N_{\ell}\leq_{{\mathfrak{s}}}N_{\ell+1}$
3. $(c)\quad q_{\ell}\in{{\mathscr{S}}}_{{\mathfrak{s}}}(N_{\ell})$, so possibly $q_{\ell}\notin{\mathscr{S}}^{{\rm na}}_{{\mathfrak{s}}}(N_{\ell})$
4. $(d)\quad q_{0}=p$
5. $(e)\quad q_{\ell+1}\restriction N_{\ell}=q_{\ell}$
6. $(f)\quad q_{\ell+1}$ forks over $N_{\ell}$ so now necessarily $q_{\ell}\notin{{\mathscr{S}}}^{{\rm na}}_{{\mathfrak{s}}}(N_{\ell})$
7. $(g)\quad(N_{\ell},N_{\ell+1},a_{\ell})\in K^{3,{\rm pr}}_{{\mathfrak{s}}}$
8. $(h)\quad r_{\ell}={\rm ortp}(a_{\ell},N_{\ell},N_{\ell+1})$ is regular
9. $(i)\quad r_{\ell}$ either is $\perp M$ or does not fork over $M$.
If we succeed in carrying the induction for all $\ell<\omega$, let
$N=\cup\\{N_{\ell}:\ell<\omega\\}$; as this is a countable chain, there is
$q\in{{\mathscr{S}}}(N)$ such that $\ell<\omega\Rightarrow q\restriction
N_{\ell}=q_{\ell}$ and, as $q$ is not algebraic (because each $q_{n}$ is not)
and ${{\mathfrak{s}}}$ is full, clearly
$q\in{{\mathscr{S}}}_{{\mathfrak{s}}}(N)$; but $q$ contradicts the finite
character of non-forking. So for some $n\geq 0$ we are stuck, but this cannot
occur if $q_{n}\in{{\mathscr{S}}}^{{\rm na}}_{{\mathfrak{s}}}(N_{n})$. [Why?
By 1.2, equivalently ${{\mathfrak{s}}}^{{\rm reg}}$ has enough regulars, and
then we can apply [She09e, 8.3=L6.1tex].] So for some $b\in N_{n}$ we have
$q_{n}={\rm ortp}(b,N_{n},N_{n})$, i.e., $b$ realizes $q_{n}$ hence it
realizes $p$.
Let $\mathbf{J}=\\{a_{\ell}:{\rm ortp}(a_{\ell},N_{\ell},N_{\ell+1})$ does not
fork over $N_{0}\\}$. By [She09e, 6.2] we have
$(M,N_{n},\mathbf{J})=(N_{0},N_{n},\mathbf{J})\in K^{3,{\rm
vq}}_{{\mathfrak{s}}}$ hence $\in K^{3,{\rm qr}}_{{\mathfrak{s}}}$ by [She09e]
so we are done.
2) Let $N,b,\mathbf{J}$ be as in part (1) with $|\mathbf{J}|$ minimal. We can
find $N^{\prime}\leq_{{\mathfrak{s}}}N$ such that $(M,N^{\prime},b)\in
K^{3,{\rm pr}}_{{\mathfrak{s}}}$ and we can find $\mathbf{J}^{\prime}$ such
that $\mathbf{J}^{\prime}\subseteq N^{\prime}$ is independent regular in
$(M,N^{\prime})$ and maximal under those demands. Then we can find
$N^{\prime\prime}\leq_{{\mathfrak{s}}}N^{\prime}$ such that
$(M,N^{\prime\prime},\mathbf{J}^{\prime})\in K^{3,{\rm qr}}_{{\mathfrak{s}}}$.
If ${\rm
ortp}_{{\mathfrak{s}}}(b,N^{\prime\prime},N^{\prime})\in{{\mathscr{S}}}^{{\rm
na}}_{{\mathfrak{s}}}(N^{\prime\prime})$ is not orthogonal to $M$ we can
contradict the maximality of $\mathbf{J}^{\prime}$ in $N^{\prime}$ as in the
proof of part (1), so ${\rm
ortp}_{{\mathfrak{s}}}(b,N^{\prime\prime},N^{\prime})\perp M$ (or
$\notin{{\mathscr{S}}}^{{\rm na}}_{{\mathfrak{s}}}(N)$). Also without loss of
generality $(N^{\prime\prime},N^{\prime},b)\in K^{3,{\rm
pr}}_{{\mathfrak{s}}}$, so by [She09e] we have
$(M,N^{\prime},\mathbf{J}^{\prime})\in K^{3,{\rm qr}}_{{\mathfrak{s}}}$. Hence
there is an isomorphism $f$ from $N^{\prime}$ onto $N^{\prime\prime}$ which is
the identity on $M\cup\mathbf{J}^{\prime}$ (by the uniqueness for $K^{3,{\rm
qr}}_{{\mathfrak{s}}}$). So using $(N^{\prime},f(b),\mathbf{J}^{\prime})$ for
$(N,b,\mathbf{J})$ we are done.
3) If not, we can find $N_{1},N_{2},\mathbf{J}_{1},\mathbf{J}_{2},b$ such that
$M\leq_{{\mathfrak{s}}}N_{\ell}\leq_{{\mathfrak{s}}}N$ and the quadruple
$(M,N_{\ell},\mathbf{J}_{\ell},b)$ is as in (a)-(e)+(f) of part (1)+(2) for
$\ell=1,2$. Assume toward contradiction that
$|\mathbf{J}_{1}|\neq|\mathbf{J}_{2}|$ so without loss of generality
$|\mathbf{J}_{1}|<|\mathbf{J}_{2}|$.
By “$(M,N_{\ell},b)\in K^{3,{\rm pr}}_{{\mathfrak{s}}}$” without loss of
generality $N_{2}\leq_{{\mathfrak{s}}}N_{1}$.
By [She09e, 10.15=L10b.11tex(3)], for some
$c\in\mathbf{J}_{2}\backslash\mathbf{J}_{1}$ the set
$\mathbf{J}_{1}\cup\\{c\\}$ is independent in $(M,N_{1})$, a contradiction to
$(M,N_{1},\mathbf{J}_{1})\in K^{3,{\rm vq}}_{{\mathfrak{s}}}$ by [She09e,
10.15=L10b.11tex(4)].
4) Similarly. ∎
###### Definition 1.4.
1) For $p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$, let the weight of
$p$, $w(p)$, be the unique natural number such that: if
$M\leq_{{\mathfrak{s}}}M^{\prime}$, $M^{\prime}$ is brimmed, and
$p^{\prime}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M^{\prime})$ is a
non-forking extension of $p$, then $w(p)$ is the unique $|\mathbf{J}|$ from
Claim 1.3(3); it is a natural number.
2) Let $w_{{\mathfrak{s}}}(a,M,N)=w({\rm ortp}_{{\mathfrak{s}}}(a,M,N))$.
###### Claim 1.5.
1) If $p\in{{\mathscr{S}}}^{{\rm bs}}(M)$ is regular, then $w(p)=1$.
2) If $\mathbf{J}$ is independent in $(M,N)$ and $c\in N$, then for some
$\mathbf{J}^{\prime}\subseteq\mathbf{J}$ with $\leq w_{{\mathfrak{s}}}(c,M,N)$
elements, $\\{c\\}\cup(\mathbf{J}\backslash\mathbf{J}^{\prime})$ is
independent in $(M,N)$.
###### Proof..
Easy by now. ∎
Note that the use of ${\mathfrak{C}}$ in Definition 1.6 is for transparency
only and can be avoided, see 1.10 below.
###### Definition 1.6.
1) We say that ${\mathbf{P}}$ is an $M^{*}$-based family (inside
${{\mathfrak{C}}}$) when :
1. $(a)$
$M^{*}<_{{\mathfrak{k}}[{\mathfrak{s}}]}{{\mathfrak{C}}}$ and $M^{*}\in
K_{{\mathfrak{s}}}$
2. $(b)$
${\mathbf{P}}\subseteq\cup\\{{\mathscr{S}}^{{\rm
bs}}_{{\mathfrak{s}}}(M):M\leq_{{\mathfrak{k}}[{\mathfrak{s}}]}{{\mathfrak{C}}}$
and $M\in K_{{\mathfrak{s}}}\\}$
3. $(c)$
${\mathbf{P}}$ is preserved by automorphisms of ${{\mathfrak{C}}}$ over
$M^{*}$.
2) Let $p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ where
$M\leq_{{\mathfrak{k}}[{\mathfrak{s}}]}{{\mathfrak{C}}}$
1. $(a)$
we say that $p$ is hereditarily orthogonal to ${\mathbf{P}}$ (or
${\mathbf{P}}$-foreign) when :
if
$M\leq_{{\mathfrak{s}}}N\leq_{{{\mathfrak{k}}}[{{\mathfrak{s}}}]}{{\mathfrak{C}}},q\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(N),q\restriction M=p$, then $q$ is orthogonal to
${\mathbf{P}}$
2. $(b)$
we say that $p$ is ${\mathbf{P}}$-regular when $p$ is regular, not orthogonal
to ${\mathbf{P}}$ and if $q\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M^{\prime}),M\leq_{{\mathfrak{s}}}M^{\prime}<_{{{\mathfrak{k}}}[{{\mathfrak{s}}}]}{{\mathfrak{C}}}$
and $q$ is a forking extension of $p$ then $q$ is hereditarily orthogonal to
${\mathbf{P}}$
3. $(c)$
$p$ is weakly ${\mathbf{P}}$-regular if it is regular and is not orthogonal to
some ${\mathbf{P}}$-regular $p^{\prime}$.
3) ${\mathbf{P}}$ is normal when ${\mathbf{P}}$ is a set of regular types and
each of them is ${\mathbf{P}}$-regular.
4) For $q\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M),M<_{{{\mathfrak{k}}}[{{\mathfrak{s}}}]}{{\mathfrak{C}}}$
let $w_{\mathbf{P}}(q)$ be defined as the natural number satisfying the
following
1. $\circledast$
if
$M\leq_{{\mathfrak{s}}}M_{1}\leq_{{\mathfrak{s}}}M_{2}\leq_{{\mathfrak{s}}}{{\mathfrak{C}}},M_{\ell}$
is $(\lambda,*)$-brimmed, $b\in M_{2},{\rm
ortp}_{{\mathfrak{s}}}(b,M_{1},M_{2})$ is a non-forking extension of
$q,(M_{1},M_{2},b)\in K^{3,{\rm
pr}}_{{\mathfrak{s}}},(M_{1},M_{2},\mathbf{J})\in K^{3,{\rm
qr}}_{{\mathfrak{s}}}$ and $\mathbf{J}$ is regular in $(M_{1},M_{2})$, i.e.
independent and $c\in\mathbf{J}\Rightarrow{\rm
ortp}_{{\mathfrak{s}}}(c,M_{1},M_{2})$ is a regular type then
$w_{\mathbf{P}}(q)=|\\{c\in\mathbf{J}:{\rm ortp}_{{\mathfrak{s}}}(c,M_{1},M_{2})$
is weakly ${\mathbf{P}}$-regular$\\}|$.
5) We replace ${\mathbf{P}}$ by $p$ if ${\mathbf{P}}=\\{p\\}$, where
$p\in{{\mathscr{S}}}^{{\rm bs}}(M^{*})$ is regular (see 1.7(1)).
###### Claim 1.7.
1) If $p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ is regular then
$\\{p\\}$ is an $M$-based family and is normal.
2) Assume $\mathbf{P}$ is an $M^{*}$-based family. If
$q\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ and
$M^{*}\leq_{{\mathfrak{s}}}M\leq_{{{\mathfrak{k}}}[{{\mathfrak{s}}}]}{{\mathfrak{C}}}$
then $w_{\mathbf{P}}(q)$ is well defined (and is a natural number).
3) In Definition 1.6(4) we can find $\mathbf{J}$ such that for every
$c\in\mathbf{J}$ we have: ${\rm ortp}(c,M_{1},M_{2})$ is
weakly $\mathbf{P}$-regular $\Rightarrow{\rm ortp}(c,M_{1},M_{2})$ is
$\mathbf{P}$-regular.
###### Proof..
Should be clear. ∎
###### Discussion 1.8.
It is tempting to try to generalize the notions of ${\mathbf{P}}$-simple
(${\mathbf{P}}$-internal in Hrushovski’s terminology) and semi-regular types.
An important property of those notions in the first order case is, e.g.: if
$(\bar{a}/A)\pm p$ with $p$ regular, then for some equivalence relation $E$
definable over $A$, ${\rm ortp}(\bar{a}/E,A)$ is $\pm p$ and is
$\\{p\\}$-simple. So assume that $p,q\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M)$ are not orthogonal; we can define an equivalence
relation ${{\mathscr{E}}}^{p,q}_{M}$ on $\\{c\in{{\mathfrak{C}}}:c$ realizes
$p\\}$ by:

$c_{1}\,{{\mathscr{E}}}^{p,q}_{M}\,c_{2}$ iff for every $d\in{{\mathfrak{C}}}$
realizing $q$ we have ${\rm
ortp}_{{\mathfrak{s}}}(c_{1}d,M,{{\mathfrak{C}}})={\rm
ortp}_{{\mathfrak{s}}}(c_{2}d,M,{{\mathfrak{C}}})$.
This may fail (i.e., the desired property may fail) even in the first order
case: suppose $p,q$ are definable over $a^{*}\in M$ (on getting this, see
later) and we have $\langle c_{\ell}:\ell\leq n\rangle,\langle
M_{\ell}:\ell<n\rangle$ such that ${\rm
ortp}(c_{\ell},M_{\ell},{{\mathfrak{C}}})=p_{\ell}$, each $p_{\ell}$ is
parallel to $p$, $c_{\ell}{{\mathscr{E}}}^{p,q}_{M_{\ell}}c_{\ell+1}$, but
$c_{0},c_{n}$ realize $p,q$ respectively and $\\{c_{0},c_{n}\\}$ is
independent over $M_{0}$. Such a situation defeats the attempt to define a
$\mathbf{P}=\\{q\\}$-simple type $p/{{\mathscr{E}}}$ as in [She90, Ch.V].
In first order logic we can find a saturated $N$ and $a^{*}\in N$ such that
${\rm ortp}(M,\bigcup\limits_{\ell}M_{\ell}\cup\\{c_{0},\dotsc,c_{n}\\})$ does
not fork over $a^{*}$ and use “average on the type with an ultrafilter $c$
over $q({{\mathfrak{C}}})+a^{*}_{t}$” (for suitable $a^{*}_{t}$’s). But see
below.
###### Discussion 1.9.
1) Assume (${\mathfrak{s}}$ is full and) every $p\in{{\mathscr{S}}}^{{\rm
na}}_{{\mathfrak{s}}}(M)$ is representable by some $a_{p}\in M$ (in [She90],
e.g. by the canonical base ${\rm Cb}(p)$). We can define for
$\bar{a},\bar{b}\in{}^{\omega>}{{\mathfrak{C}}}$ when ${\rm
ortp}(\bar{a},\bar{b},{{\mathfrak{C}}})$ is stationary (and/or non-forking).
We should check the basic properties. See §3.
2) Assume $p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ is regular and
definable over $\bar{a}^{*}$ (in the natural sense). We may wonder whether the
niceness of the dependence relation holds for $p\restriction\bar{a}^{*}$.
If the use of a monster model feels unnatural in our context, how do we
“translate” a set of types in ${{\mathfrak{C}}}^{{\rm eq}}$ preserved by
every automorphism of ${{\mathfrak{C}}}$ which is the identity on $A$? By
using a “place”, defined by:
###### Definition 1.10.
1) A local place is a pair $\mathbf{a}=(M,A)$ such that $A\subseteq M\in
K_{{\mathfrak{s}}}$ (compare with 8.2).
2) The places $(M_{1},A_{1}),(M_{2},A_{2})$ are equivalent if $A_{1}=A_{2}$
and there are $n$ and $N_{\ell}\in K_{{\mathfrak{s}}}$ for $\ell\leq n$
satisfying $A_{1}\subseteq N_{\ell}$ for $\ell=0,\ldots,n$ such that
$M_{1}=N_{0},M_{2}=N_{n}$ and for each
$\ell<n,N_{\ell}\leq_{{\mathfrak{s}}}N_{\ell+1}$ or
$N_{\ell+1}\leq_{{\mathfrak{s}}}N_{\ell}$. We write
$(M_{1},A_{1})\sim(M_{2},A_{2})$ or $M_{1}\sim_{A_{1}}M_{2}$.
3) For a local place $\mathbf{a}=(M,A)$ let
$K_{\mathbf{a}}=K_{(M,A)}=\\{N:(N,A)\sim(M,A)\\}$; so in $(M,A)/\sim$ we fix
both $A$ as a set and the type it realizes in $M$ over $\emptyset$.
4) We call such class $K_{\mathbf{a}}$ a place.
5) We say that ${\mathbf{P}}$ is an invariant set (really a class) of types in
a place $K_{(M,A)}$ when :
1. $(a)$
${\mathbf{P}}\subseteq\\{{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(N):N\sim_{A}M\\}$
2. $(b)$
membership in ${\mathbf{P}}$ is preserved by isomorphism over $A$
3. $(c)$
if $N_{1}\leq_{{\mathfrak{s}}}N_{2}$ are both in
$K_{(M,A)},p_{2}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(N_{2})$ does
not fork over $N_{1}$ then $p_{2}\in{\mathbf{P}}\Leftrightarrow
p_{2}\restriction N_{1}\in{\mathbf{P}}$.
6) We say $M\in K_{{\mathfrak{s}}}$ is brimmed over $A$ when for some $N$ we
have $A\subseteq N\leq_{{\mathfrak{s}}}M$ and $M$ is brimmed over $N$.
###### Claim/Definition 1.11.
1) If $A\subseteq M\in K_{{\mathfrak{s}}}$ and
$\mathbf{P}_{0}\subseteq{\mathscr{S}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ then
there is at most one invariant set $\mathbf{P}^{+}$ of types in the place
$K_{(M,A)}$ such that $\mathbf{P}^{+}\cap{\mathscr{S}}^{{\rm
bs}}_{{\mathfrak{s}}}(M)=\mathbf{P}_{0}$ and $M\leq_{{\mathfrak{s}}}N\wedge
p\in\mathbf{P}^{+}\cap{\mathscr{S}}^{{\rm bs}}_{{\mathfrak{s}}}(N)\Rightarrow$
($p$ does not fork over $M$).
2) If in addition $M$ is brimmed over $A$ (i.e., for some $M_{1}$ we have
$A\subseteq M_{1}\leq_{{\mathfrak{s}}}M$ and $M$ is brimmed over $M_{1}$) then
we can omit the last demand in part (1).
3) If $\mathbf{a}=(M_{1},A),(M_{2},A)\in K_{\mathbf{a}}$ then
$K_{(M_{2},A)}=K_{\mathbf{a}}$.
###### Proof..
Easy. ∎
###### Definition 1.12.
1) If in 1.11 there is such a $\mathbf{P}^{+}$, we denote it by ${\rm
inv}(\mathbf{P}_{0})={\rm inv}(\mathbf{P}_{0},M)$.
2) If $\mathbf{P}_{0}=\\{p\\}$, then let ${\rm inv}(p)={\rm inv}(p,M)={\rm
inv}(\\{p\\})$.
3) We say $p\in{\mathscr{S}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ does not split
(or is definable) over $A$ when ${\rm inv}(p)$ is well defined.
## 2. I: Imaginary elements, an essential-$(\mu,\lambda)$-a.e.c. and frames
### 2(A). Essentially a.e.c.
We consider revising the definition of an a.e.c. ${{\mathfrak{k}}}$ by
allowing function symbols in $\tau_{{\mathfrak{k}}}$ with an infinite number
of places while retaining local character, e.g., if $M_{n}\leq M_{n+1}$ then
$M=\cup\\{M_{n}:n<\omega\\}$ is uniquely determined. Before this we introduce
the relevant equivalence relations. In this context, we can give names to
equivalence classes of equivalence relations on infinite sequences.
###### Definition 2.1.
We say that ${{\mathfrak{k}}}$ is an essentially $[\lambda,\mu)$-a.e.c. or
ess-$[\lambda,\mu)$-a.e.c. and we may write $(\mu,\lambda)$ instead of
$[\lambda,\mu)$ if ($\lambda<\mu$ and) it is an object consisting of:
1. $I(a)$
a vocabulary $\tau=\tau_{{\mathfrak{k}}}$ which has predicates and function
symbols of possibly infinite arity but $\leq\lambda$
2. $(b)$
a class $K=K_{{\mathfrak{k}}}$ of $\tau$-models
3. $(c)$
a two-place relation $\leq_{{\mathfrak{k}}}$ on $K$
such that
1. $II(a)$
if $M_{1}\cong M_{2}$ then $M_{1}\in K\Leftrightarrow M_{2}\in K$
2. $(b)$
if $(N_{1},M_{1})\cong(N_{2},M_{2})$ then
$M_{1}\leq_{{\mathfrak{k}}}N_{1}\Leftrightarrow
M_{2}\leq_{{\mathfrak{k}}}N_{2}$
3. $(c)$
every $M\in K$ has cardinality $\in[\lambda,\mu)$
4. $(d)$
$\leq_{{\mathfrak{k}}}$ is a partial order on $K$
5. $III_{1}$
if $\langle M_{i}:i<\delta\rangle$ is $\leq_{{\mathfrak{k}}}$-increasing and
$|\bigcup\limits_{i<\delta}M_{i}|<\mu$ then there is a unique $M\in K$ such
that $|M|=\cup\\{|M_{i}|:i<\delta\\}$ and $i<\delta\Rightarrow
M_{i}\leq_{{\mathfrak{k}}}M$
6. $III_{2}$
if in addition $i<\delta\Rightarrow M_{i}\leq_{{\mathfrak{k}}}N$ then
$M\leq_{{\mathfrak{k}}}N$
7. $IV$
if $M_{1}\subseteq M_{2}$ and $M_{\ell}\leq_{{\mathfrak{k}}}N$ for $\ell=1,2$
then $M_{1}\leq_{{\mathfrak{k}}}M_{2}$
8. $V$
if $A\subseteq N\in K$, then there is $M$ satisfying $A\subseteq
M\leq_{{\mathfrak{k}}}N$ and $\|M\|\leq\lambda+|A|$ (here it is enough to
restrict ourselves to the case $|A|\leq\lambda$).
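As a hedged illustration of why $III_{1}$ must be postulated rather than
derived (see also Discussion 2.9 below): when some $F\in\tau_{{\mathfrak{k}}}$
has $\lambda$ places and $\langle M_{i}:i<\delta\rangle$ is
$\leq_{{\mathfrak{k}}}$-increasing with ${\rm cf}(\delta)\leq\lambda$, there
may be $\bar{a}\in{}^{\lambda}(\bigcup\limits_{i<\delta}M_{i})$ with ${\rm
Rang}(\bar{a})\nsubseteq M_{i}$ for every $i<\delta$; then the value
$F(\bar{a})$ in the union is not determined by the $M_{i}$’s, and $III_{1}$
asserts that inside $K$ there is nevertheless a unique compatible choice of
$M$.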
###### Definition 2.2.
1) We say ${{\mathfrak{k}}}$ is an ess-$\lambda$-a.e.c. if it is an
ess-$[\lambda,\lambda^{+})$-a.e.c.
2) We say ${{\mathfrak{k}}}$ is an ess-a.e.c. if there is $\lambda$ such that
it is an ess-$[\lambda,\infty)$-a.e.c., so $\lambda={\rm
LST}({{\mathfrak{k}}})$.
3) If ${\mathfrak{k}}$ is an ess-$[\lambda,\mu)$-a.e.c. and
$\lambda\leq\lambda_{1}<\mu_{1}\leq\mu$ then let
$K^{{\mathfrak{k}}}_{\lambda_{1}}=(K_{{\mathfrak{k}}})_{\lambda_{1}}=\\{M\in
K_{{\mathfrak{k}}}:\|M\|=\lambda_{1}\\},K^{{\mathfrak{k}}}_{\lambda_{1},\mu_{1}}=\\{M\in
K_{{\mathfrak{k}}}:\lambda_{1}\leq\|M\|<\mu_{1}\\}$.
4) We define $\Upsilon^{{\rm or}}_{{\mathfrak{k}}}$ as in [She09b,
0.8=L11.1.3A(2)].
5) We may omit the “essentially” when ${\rm
arity}(\tau_{{\mathfrak{k}}})=\aleph_{0}$ where ${\rm
arity}({\mathfrak{k}})={\rm arity}(\tau_{{\mathfrak{k}}})$ and for vocabulary
$\tau,{\rm arity}(\tau)=\min\\{\kappa$: every predicate and function symbol
has ${\rm arity}<\kappa\\}$.
We now consider the claims on ess-a.e.c.
###### Claim 2.3.
Let ${{\mathfrak{k}}}$ be an ess-$[\lambda,\mu)$-a.e.c.
1) The parallel of ${\rm Ax}(III)_{1},(III)_{2}$ holds with a directed family
$\langle M_{t}:t\in I\rangle$.
2) If $M\in K$ we can find $\langle
M_{\bar{a}}:\bar{a}\in{}^{\omega>}M\rangle$ such that:
1. $(a)$
$\bar{a}\subseteq M_{\bar{a}}\leq_{{\mathfrak{k}}}M$
2. $(b)$
$\|M_{\bar{a}}\|=\lambda$
3. $(c)$
if $\bar{b}$ is a permutation of $\bar{a}$ then $M_{\bar{a}}=M_{\bar{b}}$
4. $(d)$
if $\bar{a}$ is a subsequence of $\bar{b}$ then
$M_{\bar{a}}\leq_{{\mathfrak{k}}}M_{\bar{b}}$.
3) If $N\leq_{{\mathfrak{k}}}M$ we can add in (2) that
$\bar{a}\in{}^{\omega>}N\Rightarrow M_{\bar{a}}\subseteq N$.
4) If for simplicity
$\lambda_{*}=\lambda+\sup\\{\Sigma\\{|R^{M}|:R\in\tau_{{\mathfrak{k}}}\\}+\Sigma\\{|F^{M}|:F\in\tau_{{\mathfrak{k}}}\\}:M\in
K_{{\mathfrak{k}}}$ has cardinality $\lambda\\}$ then $K_{{\mathfrak{k}}}$ and
$\\{(M,N):N\leq_{{\mathfrak{k}}}M\\}$ essentially are ${\rm
PC}_{\chi,\lambda_{*}}$-classes where $\chi=|\\{M/\cong:M\in
K^{{\mathfrak{k}}}_{\lambda}\\}|$, noting that $\chi\leq 2^{2^{\theta}}$. That
is, $\langle M_{\bar{a}}:\bar{a}\in{}^{\omega>}A\rangle$ satisfying clauses
(b),(c),(d) of part (2) such that
$A=\cup\\{|M_{\bar{a}}|:\bar{a}\in{}^{\omega>}A\\}$ represent a unique $M\in
K_{{\mathfrak{k}}}$ with universe $A$ and similarly for
$\leq_{{\mathfrak{k}}}$, (on the Definition of ${\rm PC}_{\chi,\lambda_{*}}$,
see [She09a, 1.4(3)]). Note that if in $\tau_{{\mathfrak{k}}}$ there are no
two distinct symbols with the same interpretation in every $M\in
K_{{\mathfrak{k}}}$ then $|\tau_{{\mathfrak{k}}_{*}}|\leq 2^{2^{\lambda}}$.
5) The results on omitting types in [She99] or [She09b, 0.9=L0n.8,0.2=0n.11]
hold, i.e., if $\alpha<(2^{\lambda_{*}})^{+}\Rightarrow
K^{{\mathfrak{k}}}_{\beth_{\alpha}}\neq\emptyset$ then
$\theta\in[\lambda,\mu)\Rightarrow K_{\theta}\neq\emptyset$ and there is an
${\rm EM}$-model, i.e., $\Phi\in\Upsilon^{{\rm or}}_{{\mathfrak{k}}}$ with
$|\tau_{\Phi}|=|\tau_{{\mathfrak{k}}}|+\lambda$ and ${\rm EM}(I,\Phi)$ having
cardinality $\lambda+|I|$ for any linear order $I$.
6) The lemma on the equivalence of being universal model homogeneous and of
being saturated (see [She09f, 3.18=3.10] or [She09c, 1.14=L0.19]) holds.
7) We can generalize the results of [She09c, §1] on deriving an
ess-$(\infty,\lambda)$-a.e.c. from an ess $\lambda$-a.e.c.
###### Proof..
The same proofs, on the generalization in 2.3(7), see in §5 below. The point
is that, in the term of §5, our ${\mathfrak{k}}$ is a
$(\lambda,\mu,\kappa)$-a.e.c. (automatically with primes). ∎
###### Remark 2.4.
1) In 2.3(4) we can decrease the bound on $\chi$ if we have more nice
definitions of $K_{\lambda}$, e.g., if ${\rm arity}(\tau)\leq\kappa$ then
$\chi=2^{(\lambda^{<\kappa}+|\tau|)}$ where ${\rm arity}(\tau)=\min\\{\kappa$:
every predicate and function symbol of $\tau$ has ${\rm arity}<\kappa\\}$.
2) We may use above $|\tau_{{\mathfrak{s}}}|\leq\lambda,{\rm
arity}(\tau_{{\mathfrak{k}}})=\aleph_{0}$ to get that $\\{(M,\bar{a})/\cong\,:\,
M\in K^{{\mathfrak{k}}}_{\lambda},\bar{a}\in{}^{\lambda}M$ lists $M\\}$ has
cardinality $\leq 2^{\lambda}$. See also 2.18.
3) In 2.10 below, if we omit “${\mathbb{E}}$ is small” and
$\lambda_{1}=\sup\\{|{\rm seq}(M)/{\mathbb{E}}_{M}|:M\in
K^{{\mathfrak{k}}}_{\lambda}\\}$ is $<\mu$ then
${\mathfrak{k}}_{[\lambda_{1},\mu)}$ is an ess-$[\lambda_{1},\mu)$-a.e.c.
4) In Definition 2.1, we may omit axiom V and define ${\rm
LST}({\mathfrak{k}})\in[\lambda,\mu]$ naturally, and if $M\in
K^{{\mathfrak{k}}}_{\lambda}\Rightarrow\mu>|{\rm seq}(M)/{\mathbb{E}}_{M}|$
then in 2.10(1) below we can omit “${\mathbb{E}}$ is small”.
5) Can we preserve in such “transformation” the arity finiteness? A natural
candidate is trying to code $p\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M)$ by $\\{\bar{a}:\bar{a}\in{}^{\omega>}M\\}$ and there
are $M_{0}\leq_{{\mathfrak{s}}}M_{1}$ such that $M\leq_{{\mathfrak{s}}}M_{1}$
and ${\rm ortp}(a_{\ell},M_{0},M_{1})$ is parallel to $p$ and $\bar{a}$ is
independent in $(M_{0},M_{1})\\}$. If e.g., $K_{{\mathfrak{s}}}$ is saturated
this helps but still we suspect it may fail.
6) What is the meaning of ess-$[\lambda,\mu)$-a.e.c.? Can we look just at
$\langle M_{t}:t\in I\rangle,I$ directed, $t\leq_{I}s\Rightarrow
M_{t}\leq_{{\mathfrak{s}}}M_{{\mathfrak{s}}}\in K_{\lambda}$? But for
isomorphism types we take a kind of completion and so make more pairs
isomorphic but $\bigcup\limits_{t\in I}M_{t}$ does not determine
$\bar{M}=\langle M_{t}:t\in I\rangle$ and the completion may depend on this
representation.
7) If we like to avoid this, and this number is $\lambda^{\prime}$, then we
should change the definition of ${\rm seq}(N)$ (see 2.5(b)) to ${\rm
seq}^{\prime}(N)=\\{\bar{a}:\ell g(\bar{a})=\lambda$ and for some
$M\leq_{{\mathfrak{s}}}N$ from $K^{{\mathfrak{k}}}_{\lambda},\langle
a_{1+\alpha}:\alpha<\lambda\rangle$ lists the members of $M$ and
$a_{0}\in\\{\gamma:\gamma<\mu_{*}\\}\\}$.
### 2(B). Imaginary Elements and Smooth Equivalent Relations
Now we return to our aim of getting canonical bases for orbital types.
###### Definition 2.5.
Let ${{\mathfrak{k}}}=(K_{{\mathfrak{k}}},\leq_{{\mathfrak{k}}})$ be a
$\lambda$-a.e.c. or just ess-$[\lambda,\mu)$-a.e.c. (if
${{\mathfrak{k}}}_{\lambda}={{\mathfrak{k}}}_{{\mathfrak{s}}}$ we may write
${{\mathfrak{s}}}$ instead of ${{\mathfrak{k}}}_{\lambda}$, see 2.11). We say
that ${\mathbb{E}}$ is a smooth ${{\mathfrak{k}}}_{\lambda}$-equivalence
relation when :
1. $(a)$
${\mathbb{E}}$ is a function with domain $K_{{\mathfrak{k}}}$ mapping $M$ to
${\mathbb{E}}_{M}$
2. $(b)$
for $M\in K_{{\mathfrak{k}}},{\mathbb{E}}_{M}$ is an equivalence relation on a
subset of ${\rm seq}(M)=\\{\bar{a}:\bar{a}\in{}^{\lambda}M$ and
$M\restriction{\rm Rang}(\bar{a})\leq_{{\mathfrak{k}}}M\\}$ so $\bar{a}$ is
not necessarily without repetitions; note that ${\mathfrak{k}}$ determines
$\lambda$, pedantically when non-empty
3. $(c)$
if $M_{1}\leq_{{\mathfrak{k}}}M_{2}$ then
${\mathbb{E}}_{M_{2}}\restriction{\rm seq}(M_{1})={\mathbb{E}}_{M_{1}}$
4. $(d)$
if $f$ is an isomorphism from $M_{1}\in K_{{\mathfrak{s}}}$ onto $M_{2}$ then
$f$ maps ${\mathbb{E}}_{M_{1}}$ onto ${\mathbb{E}}_{M_{2}}$
5. $(e)$
if $\langle M_{\alpha}:\alpha\leq\delta\rangle$ is
$\leq_{{\mathfrak{s}}}$-increasing continuous then
$\\{\bar{a}/{\mathbb{E}}_{M_{\delta}}:\bar{a}\in{\rm
seq}(M_{\delta})\\}=\\{\bar{a}/{\mathbb{E}}_{M_{\delta}}:\bar{a}\in\bigcup\limits_{\alpha<\delta}{\rm
seq}(M_{\alpha})\\}$.
2) We say that ${\mathbb{E}}$ is small if each ${\mathbb{E}}_{M}$ has
$\leq\|M\|$ equivalence classes.
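A motivating example, anticipating Claim 2.18 below (a forward reference, not
an additional assumption): for a suitable good frame ${\mathfrak{s}}$ one
takes

$\bar{a}\,{\mathbb{E}}_{N}\,\bar{b}\ \text{ iff }\ \mathbf{F}(N,\bar{a})=\mathbf{F}(N,\bar{b}),$

where $\mathbf{F}(N,\bar{a})$ denotes the basic type coded by $\bar{a}$ over
$N$ (notation from 2.18); the ${\mathbb{E}}_{N}$-classes then act as canonical
bases for types, smallness corresponds to stability in $\lambda$ (see
2.18(2)), and clause (e) is verified there using the local character of
non-forking.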
###### Remark 2.6.
1) Note that if we have $\langle{\mathbb{E}}_{i}:i<i^{*}\rangle$, each
${\mathbb{E}}_{i}$ is a smooth ${{\mathfrak{k}}}_{\lambda}$-equivalence
relation and $i^{*}<\lambda^{+}$ then we can find a smooth
${{\mathfrak{k}}}_{\lambda}$-equivalence relation ${\mathbb{E}}$ such that
essentially the ${\mathbb{E}}_{M}$-equivalence classes are the
$\mathbb{E}_{i}$-equivalence classes for $i<i^{*}$; in detail: without loss of
generality $i^{*}\leq\lambda$ and $\bar{a}{\mathbb{E}}_{M}\bar{b}$ iff $\ell
g(\bar{a})=\ell g(\bar{b})$ and
1. $\circledast_{1}$
$i(\bar{a})=i(\bar{b})$ and if $i(\bar{a})<i^{*}$ then
$\bar{a}\restriction[1+i(\bar{a})+1,\lambda){\mathbb{E}}_{i(\bar{a})}\bar{b}\restriction[1+i(\bar{b})+1,\lambda)$
where $i(\bar{a})={\rm Min}\\{j:(j+1<i^{*})\wedge a_{0}\neq a_{1+j}$ or
$j=\lambda\\}$.
2) In fact $i^{*}\leq 2^{\lambda}$ is O.K., e.g. choose a function
$\mathbf{e}$ from $\\{e:e$ an equivalence relation on $\lambda\\}$ to $i^{*}$
and for $\bar{a},\bar{b}\in{\rm seq}(M)$ we let
$i(\bar{a})=\mathbf{e}(\\{(i,j):a_{2i+1}=a_{2j+1}\\})$ and
1. $\circledast_{2}$
$\bar{a}\,{\mathbb{E}}_{M}\,\bar{b}$ iff $i(\bar{a})=i(\bar{b})$ and $\langle
a_{2i}:i<\lambda\rangle{\mathbb{E}}_{i(\bar{a})}\langle
b_{2i}:i<\lambda\rangle$.
3) We can redefine ${\rm seq}(M)$ as ${}^{\lambda\geq}M$, then have to make
minor changes above.
###### Definition 2.7.
Let ${{\mathfrak{k}}}$ be a $\lambda$-a.e.c. or just
ess-$[\lambda,\mu)$-a.e.c. and ${\mathbb{E}}$ a small smooth
${{\mathfrak{k}}}$-equivalence relation and the reader may assume for
simplicity that the vocabulary $\tau_{{\mathfrak{k}}}$ has only predicates.
Also assume $F_{*},c_{*},P_{*}\notin\tau_{{\mathfrak{k}}}$. We define
$\tau_{*}$ and
${{\mathfrak{k}}}_{*}={{\mathfrak{k}}}\langle{\mathbb{E}}\rangle=(K_{{\mathfrak{k}}_{*}},\leq_{{\mathfrak{k}}_{*}})$
as follows:
1. $(a)$
$\tau_{*}=\tau\cup\\{F_{*},c_{*},P_{*}\\}$ with $P_{*}$ a unary predicate,
$c_{*}$ an individual constant and $F_{*}$ a $\lambda$-place function symbol
2. $(b)$
$K_{{\mathfrak{k}}_{*}}$ is the class of $\tau_{*}$-models
$M^{*}$ such that for some model $M\in K_{{\mathfrak{k}}}$ we have:
1. $(\alpha)$
$|M|=P^{M^{*}}_{*}$
2. $(\beta)$
if $R\in\tau$ then $R^{M^{*}}=R^{M}$
3. $(\gamma)$
if $F\in\tau$ has arity $\alpha$ then $F^{M^{*}}\restriction M=F^{M}$ and for
any $\bar{a}\in{}^{\alpha}(M^{*}),\bar{a}\notin{}^{\alpha}M$ we have
$F^{M^{*}}(\bar{a})=c^{M^{*}}_{*}$ (or allow partial function or use
$F^{M^{*}}(\bar{a})=a_{0}$ when $\alpha>0$ and $F^{M^{*}}(\langle\rangle)$
when $\alpha=0$, i.e. $F$ is an individual constant);
4. $(\delta)$
$F_{*}$ is a $\lambda$-place function symbol and:
5. $(i)\quad$ if $\bar{a}\in{\rm seq}(M)$ then $F^{M^{*}}_{*}(\bar{a})\in|M^{*}|\backslash|M|\backslash\\{c^{M^{*}}_{*}\\}$
6. $(ii)\quad$ if $\bar{a},\bar{b}\in{\rm Dom}({\mathbb{E}})\subseteq{\rm seq}(M)$ then $F^{M^{*}}_{*}(\bar{a})=F^{M^{*}}_{*}(\bar{b})\Leftrightarrow\bar{a}{\mathbb{E}}_{M}\bar{b}$
7. $(iii)\quad$ if $\bar{a}\in{}^{\lambda}(M^{*})$ and $\bar{a}\notin{\rm Dom}({\mathbb{E}})\subseteq{\rm seq}(M)$ then $F^{M^{*}}_{*}(\bar{a})=c_{*}^{M^{*}}$
8. $(\varepsilon)$
$c^{M^{*}}_{*}\notin|M|$ and if
$b\in|M^{*}|\backslash|M|\backslash\\{c^{M^{*}}_{*}\\}$ then for some
$\bar{a}\in{\rm Dom}({\mathbb{E}})\subseteq{\rm seq}(M)$ we have
$F^{M^{*}}_{*}(\bar{a})=b$
3. $(c)$
$\leq_{{\mathfrak{k}}_{*}}$ is the two-place relation on
$K_{{\mathfrak{k}}_{*}}$ defined by: $M^{*}\leq_{{\mathfrak{k}}_{*}}N^{*}$ if
1. $(\alpha)$
$M^{*}\subseteq N^{*}$ and
2. $(\beta)$
for some $M,N\in{{\mathfrak{k}}}$ as in clause (b) we have
$M\leq_{{\mathfrak{k}}}N$.
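In effect (a reformulation of clauses (b)$(\delta),(\varepsilon)$ above, not
an extra requirement), a model $M^{*}\in K_{{\mathfrak{k}}_{*}}$ witnessed by
$M$ is $M$ together with one default point and one new point per
${\mathbb{E}}_{M}$-class:

$|M^{*}|=|M|\sqcup\\{c^{M^{*}}_{*}\\}\sqcup\\{F^{M^{*}}_{*}(\bar{a}):\bar{a}\in{\rm Dom}({\mathbb{E}}_{M})\\},\qquad F^{M^{*}}_{*}(\bar{a})=F^{M^{*}}_{*}(\bar{b})\Leftrightarrow\bar{a}\,{\mathbb{E}}_{M}\,\bar{b},$

so the new elements are exactly the imaginaries $\bar{a}/{\mathbb{E}}_{M}$,
and smallness of ${\mathbb{E}}$ guarantees $\|M^{*}\|=\|M\|$.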
###### Definition 2.8.
1) In 2.7 we call $M\in{{\mathfrak{k}}}$ a witness for $M^{*}\in
K_{{\mathfrak{k}}_{*}}$ if they are as in clause (b) above.
2) We call $M\leq_{{\mathfrak{k}}}N$ a witness for
$M^{*}\leq_{{\mathfrak{k}}_{*}}N^{*}$ if they are as in clause (c) above.
###### Discussion 2.9.
Up to now we have restricted ourselves to vocabularies with each predicate and
function symbol of finite arity, and this restriction seems very reasonable.
Moreover, it seems a priori that for a parallel to superstable, it is quite
undesirable to have infinite arity. Still our desire to have imaginary
elements (in particular canonical basis for types) forces us to accept them.
The price is that inthe class of $\tau$-models the union of increasing chains
of $\tau$-models is not a well defined $\tau$-model, more accurately we can
show its existence, but not smoothness; however inside the class
${{\mathfrak{k}}}$ it will be.
###### Claim 2.10.
1) If ${{\mathfrak{k}}}$ is a $[\lambda,\mu)$-a.e.c. or just an
ess-$[\lambda,\mu)$-a.e.c. and ${\mathbb{E}}$ a small smooth
${{\mathfrak{k}}}$-equivalence relation then
${{\mathfrak{k}}}\langle{\mathbb{E}}\rangle$ is an ess-$[\lambda,\mu)$-a.e.c.
2) If ${{\mathfrak{k}}}$ has amalgamation and ${\mathbb{E}}$ is a small smooth
${{\mathfrak{k}}}$-equivalence relation then
${{\mathfrak{k}}}\langle{\mathbb{E}}\rangle$ has the amalgamation property.
###### Proof..
The same proofs. Left as an exercise to the reader. ∎
### 2(C). Good Frames
Now we return to good frames.
###### Definition 2.11.
1) We say that ${{\mathfrak{s}}}$ is a good ess-$[\lambda,\mu)$-frame if
Definition [She09c, 2.1=L1.1tex] is satisfied except that:
1. $(a)$
in clause (A),
${\mathfrak{k}}_{{\mathfrak{s}}}=(K_{{\mathfrak{s}}},\leq_{{\mathfrak{s}}})$
is an ess-$[\lambda,\mu)$-a.e.c. and ${\mathfrak{k}}[{\mathfrak{s}}]$ is an
ess-$(\infty,\lambda)$-a.e.c.
2. $(b)$
$K_{{\mathfrak{s}}}$ has a superlimit model in every cardinality
$\chi\in[\lambda,\mu)$
3. $(c)$
$K^{{\mathfrak{s}}}_{\lambda}/\cong$ has cardinality $\leq 2^{\lambda}$, for
convenience.
###### Discussion 2.12.
We may consider other relatives as our choice and mostly have similar results.
In particular:
1. $(a)$
we can demand less: as in [SV, §2] we may replace ${\mathscr{S}}^{{\rm
bs}}_{{\mathfrak{s}}}$ by a formal version of ${\mathscr{S}}^{{\rm
bs}}_{{\mathfrak{s}}}$
2. $(b)$
we may demand goodness only for ${\mathfrak{s}}_{\lambda}$, i.e.
${\mathfrak{s}}$ restricted to the class of models
$K^{{\mathfrak{s}}}_{\lambda}$, and have only the formal properties above, so
amalgamation and JEP are required only for models of cardinality $\lambda$.
###### Claim 2.13.
All the definitions and results in [She09c], [She09e] and §1 here work for
good ess-$[\lambda,\mu)$-frames.
###### Proof..
No problem. ∎
###### Definition 2.14.
If ${{\mathfrak{s}}}$ is a $[\lambda,\mu)$-frame or just an
ess-$[\lambda,\mu)$-frame and ${\mathbb{E}}$ a small smooth
${{\mathfrak{s}}}$-equivalence relation then let
${{\mathfrak{t}}}={{\mathfrak{s}}}\langle{\mathbb{E}}\rangle$ be defined by:
1. $(a)$
${{\mathfrak{k}}}_{{\mathfrak{t}}}={{\mathfrak{k}}}_{{\mathfrak{s}}}\langle{\mathbb{E}}\rangle$
2. $(b)$
${{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{t}}}(M^{*})=\\{{\rm
ortp}_{{{\mathfrak{k}}}_{{\mathfrak{t}}}}(a,M^{*},N^{*}):M^{*}\leq_{{{\mathfrak{k}}}_{{\mathfrak{t}}}}N^{*}$
and if $M\leq_{{\mathfrak{k}}}N$ witness
$M^{*},N^{*}\in{{\mathfrak{k}}}_{{\mathfrak{t}}}$ then $a\in N\backslash M$
and ${\rm ortp}_{{\mathfrak{s}}}(a,M,N)\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M)\\}$
3. $(c)$
non-forking similarly.
###### Remark 2.15.
We may add: if ${{\mathfrak{s}}}$ is an NF-frame (the reader may ignore this
version) we define
${{\mathfrak{t}}}={{\mathfrak{s}}}\langle{\mathbb{E}}\rangle$ as an NF-frame
similarly, see [She09e].
###### Claim 2.16.
1) If ${{\mathfrak{s}}}$ is a good ess-$[\lambda,\mu)$-frame, ${\mathbb{E}}$ a
small, smooth ${{\mathfrak{s}}}$-equivalence relation then
${{\mathfrak{s}}}\langle{\mathbb{E}}\rangle$ is a good
ess-$[\lambda,\mu)$-frame.
2) In part (1), for every
$\kappa,\dot{I}(\kappa,K^{{{\mathfrak{s}}}\langle{\mathbb{E}}\rangle})=\dot{I}(\kappa,K^{{\mathfrak{s}}})$.
3) If ${{\mathfrak{s}}}$ has primes/regulars then
${{\mathfrak{s}}}\langle{\mathbb{E}}\rangle$ has them as well.
###### Remark 2.17.
We may add: if ${{\mathfrak{s}}}$ is an NF-frame then so is
${{\mathfrak{s}}}\langle{\mathbb{E}}\rangle$, hence
$({{\mathfrak{s}}}\langle{\mathbb{E}}\rangle)^{{\rm full}}$ is a full NF-
frame; see [She09e].
###### Proof..
Straightforward. ∎
Our aim is to change ${{\mathfrak{s}}}$ inessentially such that for every
$p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ there is a canonical
base, etc. The following claim shows that in the context we have presented
this can be done.
###### Claim 2.18.
(The imaginary elements claim.)
Assume ${{\mathfrak{s}}}$ is a good $\lambda$-frame or just a good
ess-$[\lambda,\mu)$-frame.
1) If $M_{*}\in K_{{\mathfrak{s}}}$ and $p^{*}\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M_{*})$, then there is a small, smooth
${\mathfrak{k}}_{{\mathfrak{s}}}$-equivalence relation
${\mathbb{E}}={\mathbb{E}}_{{\mathfrak{s}},M_{*},p^{*}}$ and a function
$\mathbf{F}$ such that (note that there may well be an automorphism of
$M_{*}$ which maps $p^{*}$ to some $p^{**}\in{\mathscr{S}}^{{\rm
bs}}_{{\mathfrak{s}}}(M_{*})$ with $p^{**}\neq p^{*}$):
1. $(*)$
if $M_{*}\leq_{{\mathfrak{s}}}N$ and $\bar{a}\in{\rm seq}(N)$ so
$M=:N\restriction{\rm Rang}(\bar{a})\leq_{{\mathfrak{s}}}N$ and $M\cong
M_{*}$, then
1. $(\alpha)$
$\mathbf{F}(N,\bar{a})$ is well defined iff $\bar{a}\in{\rm
Dom}({\mathbb{E}}_{N})$ and then $\mathbf{F}(N,\bar{a})$ belongs to
${{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(N)$
2. $(\beta)$
$S\subseteq\\{(N,\bar{a},p):N\in K_{{\mathfrak{s}}},\bar{a}\in{\rm
Dom}({\mathbb{E}}_{N})\\}$ is the minimal class such that:
3. $(i)\quad$ if $\bar{a}\in{\rm seq}(M_{*})$ and $p$ does not fork over $M_{*}{\restriction}{\rm Rang}(\bar{a})$ then
$(M_{*},\bar{a},p)\in S$
4. $(ii)\quad S$ is closed under isomorphisms
5. $(iii)\quad$ if $N_{1}\leq_{{\mathfrak{s}}}N_{2},p_{2}\in{\mathscr{S}}^{{\rm bs}}_{{\mathfrak{s}}}(N_{2})$ does not fork over $\bar{a}\in{\rm seq}(N_{1})$ then
$(N_{2},\bar{a},p_{2})\in
S\Leftrightarrow(N_{1},\bar{a},p_{2}{\restriction}N_{1})\in S$
6. $(iv)\quad$ if $\bar{a}_{1},\bar{a}_{2}\in{\rm seq}(N),p\in{\mathscr{S}}^{{\rm bs}}_{{\mathfrak{s}}}(N)$ does not fork over $N{\restriction}{\rm Rang}(\bar{a}_{\ell})$
for $\ell=1,2$ then $(N_{2},\bar{a}_{1},p)\in
S\Leftrightarrow(N_{2},\bar{a}_{2},p)\in S$
7. $(\gamma)$
$\mathbf{F}(N,\bar{a})=p$ iff $(N,\bar{a},p)\in S$ hence if
$\bar{a},\bar{b}\in{\rm seq}(N)$ then: $\bar{a}{\mathbb{E}}_{N}\bar{b}$ iff
$\mathbf{F}(\bar{a},N)=\mathbf{F}(\bar{b},N)$.
2) There is a unique small (for smallness we use stability in $\lambda$)
smooth ${\mathfrak{k}}_{{\mathfrak{s}}}$-equivalence relation ${\mathbb{E}}$,
called ${\mathbb{E}}_{{\mathfrak{s}}}$, and function $\mathbf{F}$ such that:
1. $(**)(\alpha)$
$\mathbf{F}(N,\bar{a})$ is well defined iff $N\in K_{{\mathfrak{s}}}$ and
$\bar{a}\in{\rm seq}(N)$
2. $(\beta)$
$\mathbf{F}(N,\bar{a})$, when defined, belongs to ${{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(N)$
3. $(\gamma)$
if $N\in K_{{\mathfrak{s}}}$ and $p\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(N)$ then there is $\bar{a}\in{\rm seq}(N)$ such that
${\rm Rang}(\bar{a})=N$ and $\mathbf{F}(N,\bar{a})=p$
4. $(\delta)$
if $\bar{a}\in{\rm seq}(M)$ and $M\leq_{{\mathfrak{s}}}N$ then
$\mathbf{F}(N,\bar{a})$ is (well defined and is) the non-forking extension of
$\mathbf{F}(M,\bar{a})$
5. $(\varepsilon)$
if $\bar{a}_{\ell}\in{\rm seq}(N)$ and $\mathbf{F}(N,\bar{a}_{\ell})$ is well
defined for $\ell=1,2$ then
$\bar{a}_{1}{\mathbb{E}}_{N}\bar{a}_{2}\Leftrightarrow\mathbf{F}(N,\bar{a}_{1})=\mathbf{F}(N,\bar{a}_{2})$
6. $(\zeta)$
$\mathbf{F}$ commute with isomorphisms.
3) For ${{\mathfrak{t}}}={{\mathfrak{s}}}\langle{\mathbb{E}}\rangle$ where
${\mathbb{E}}$ is as in part (2), $M^{*}\in K_{{\mathfrak{t}}}$ is witnessed
by $M\in K_{{\mathfrak{s}}}$ and $p^{*}\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{t}}}(M^{*})$ is projected to $p\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M)$, let ${\rm bas}(p^{*})={\rm
bas}(p)=F^{M^{*}}_{*}(\bar{a})$, i.e. the ${\mathbb{E}}$-class of $\bar{a}$,
whenever $\mathbf{F}(M,\bar{a})=p$. That is, assume $M_{\ell}$ witnesses that
$M^{*}_{\ell}\in K_{{\mathfrak{t}}}$, for $\ell=1,2$ and
$(M^{*}_{1},M^{*}_{2},a)\in K^{3,{\rm bs}}_{{\mathfrak{t}}}$ then
$(M_{1},M_{2},a)\in K^{3,{\rm bs}}_{{\mathfrak{s}}}$ and $p^{*}={\rm
ortp}_{{\mathfrak{t}}}(a,M^{*}_{1},M^{*}_{2}),p={\rm
ortp}_{{\mathfrak{s}}}(a,M_{1},M_{2})$; then in ${{\mathfrak{t}}}$:
1. $(\alpha)$
if $M^{*}_{\ell}\leq_{{\mathfrak{s}}}M^{*}$ and
$p^{*}_{\ell}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{t}}}(M^{*}_{\ell})$ for
$\ell=1,2$, then
$p^{*}_{1}\|p^{*}_{2}\Leftrightarrow{\rm bas}(p^{*}_{1})={\rm bas}(p^{*}_{2})$
2. $(\beta)$
$p^{*}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{t}}}(M^{*})$ does not split
over ${\rm bas}(p^{*})$, see Definition 1.12(3) or [She09e, §2 end].
###### Proof..
1) Let $M^{**}\leq_{{\mathfrak{s}}}M_{*}$ be of cardinality $\lambda$ such
that $p^{*}$ does not fork over $M^{**}$. Let $\bar{a}^{*}=\langle
a_{\alpha}:\alpha<\lambda\rangle$ list the elements of $M^{**}$.
We say that $p_{1}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M_{1})$ is a
weak copy of $p^{*}$ when there is a witness $(M_{0},M_{2},p_{2},f)$ which
means:
1. $\circledast_{1}$
$(a)\quad M_{0}\leq_{{\mathfrak{s}}}M_{2}$ and
$M_{1}\leq_{{\mathfrak{s}}}M_{2}$
2. $(b)\quad$ if $\|M_{1}\|=\lambda$ then $\|M_{2}\|=\lambda$
3. $(c)\quad f$ is an isomorphism from $M^{**}$ onto $M_{0}$
4. $(d)\quad p_{2}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M_{2})$ is a non-forking extension of $p_{1}$
5. $(e)\quad p_{2}$ does not fork over $M_{0}$
6. $(f)\quad f(p^{*}\restriction M^{**})$ is $p_{2}\restriction M_{0}$.
For $M_{1}\in K^{{\mathfrak{s}}}_{\lambda},p_{1}\in{\mathscr{S}}^{{\rm
bs}}_{{\mathfrak{s}}}(M_{1})$ which is a weak copy of $p^{*}$, we say that
$\bar{b}$ explicates its being a weak copy when for some witness
$(M_{0},M_{2},p_{2},f)$ and $\bar{c}$
1. $\circledast_{2}$
$(a)\quad\bar{b}=\langle b_{\alpha}:\alpha<\lambda\rangle$ lists the elements
of $M_{1}$
2. $(b)\quad\bar{c}=\langle c_{\alpha}:\alpha<\lambda\rangle$ lists the elements of $M_{2}$
3. $(c)\quad\{\alpha:b_{2\alpha}=b_{2\alpha+1}\}$ codes the following sets
1. $(\alpha)\quad$ the isomorphic type of $(M_{2},\bar{c})$
2. $(\beta)\quad\\{(\alpha,\beta):b_{\alpha}=c_{\beta}\\}$
3. $(\gamma)\quad\{(\alpha,\beta):f(a_{\alpha})=c_{\beta}\}$
Now
1. $\circledast_{3}$
if $p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ is a weak copy of
$p^{*}$ then for some $\bar{a}\in{\rm seq}(M)$ there is an
$M_{1}\leq_{{\mathfrak{s}}}M$ over which $p$ does not fork such that $\bar{a}$
lists $M_{1}$ and explicates $p\restriction M_{1}$ being a weak copy of $p^{*}$
2. $\circledast_{4}$
$(a)\quad$ if $M\in K^{{\mathfrak{s}}}_{\lambda}$ and $\bar{b}$ explicates
$p_{1}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ being a weak copy of
$p^{*}$,
then from $\bar{b}$ and $M$ we can reconstruct $p_{1}$
3. $(b)\quad$ call it $p_{M,\bar{b}}$
4. $(c)\quad$ if $M\leq_{{\mathfrak{s}}}N$ let $p_{N,\bar{b}}$ be its non-forking extension in ${{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(N)$; we also
call it $\mathbf{F}(N,\bar{b})$.
Now we define ${\mathbb{E}}$, so for $N\in K_{{\mathfrak{s}}}$ we define a
two-place relation ${\mathbb{E}}_{N}$
1. $\circledast_{5}$
$(\alpha)\quad{\mathbb{E}}_{N}$ is on $\\{\bar{a}$: for some
$M\leq_{{\mathfrak{s}}}N$ of cardinality $\lambda$ and
$p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$
which is a weak copy of $p^{*}$, the sequence $\bar{a}$ explicates $p$ being
a weak copy of $p^{*}\\}$
2. $(\beta)$
$\bar{a}_{1}{\mathbb{E}}_{N}\bar{a}_{2}$ iff $(\bar{a}_{1},\bar{a}_{2}$ are as
above and) $p_{N,\bar{a}_{1}}=p_{N,\bar{a}_{2}}$.
Now
1. $\odot_{1}$
for $N\in K_{{\mathfrak{s}}},{\mathbb{E}}_{N}$ is an equivalence relation on
${\rm Dom}({\mathbb{E}}_{N})\subseteq{\rm seq}(N)$
2. $\odot_{2}$
if $N_{1}\leq_{{\mathfrak{s}}}N_{2}$ and $\bar{a}\in{\rm seq}(N_{1})$ then
$\bar{a}\in{\rm Dom}({\mathbb{E}}_{N_{1}})\Leftrightarrow\bar{a}\in{\rm
Dom}({\mathbb{E}}_{N_{2}})$
3. $\odot_{3}$
if $N_{1}\leq_{{\mathfrak{s}}}N_{2}$ and $\bar{a}_{1},\bar{a}_{2}\in{\rm
Dom}({\mathbb{E}}_{N_{1}})$ then
$\bar{a}_{1}{\mathbb{E}}_{N_{1}}\bar{a}_{2}\Leftrightarrow\bar{a}_{1}{\mathbb{E}}_{N_{2}}\bar{a}_{2}$
4. $\odot_{4}$
if $\langle N_{\alpha}:\alpha\leq\delta\rangle$ is
$\leq_{{\mathfrak{s}}}$-increasing continuous and $\bar{a}_{1}\in{\rm
Dom}({\mathbb{E}}_{N_{\delta}})$ then for some $\alpha<\delta$ and
$\bar{a}_{2}\in{\rm Dom}({\mathbb{E}}_{N_{\alpha}})$ we have
$\bar{a}_{1}{\mathbb{E}}_{N_{\delta}}\bar{a}_{2}$.
[Why? Let $\bar{a}_{1}$ list the elements of
$M_{1}\leq_{{\mathfrak{s}}}N_{\delta}$ and let $p=p_{N_{\delta},\bar{a}_{1}}$,
so $p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(N_{\delta})$; hence for
some $\alpha<\delta$, $p$ does not fork over $N_{\alpha}$, hence for some
$M^{\prime}_{1}\leq_{{\mathfrak{s}}}N_{\alpha}$ of cardinality $\lambda$, the
type $p$ does not fork over $M^{\prime}_{1}$. Let $\bar{a}_{2}$ list the
elements of $M^{\prime}_{1}$ such that it explicates $p\restriction
M^{\prime}_{1}$ being a weak copy of $p^{*}$. So clearly $\bar{a}_{2}\in{\rm
Dom}({\mathbb{E}}_{N_{\alpha}})\subseteq{\rm Dom}({\mathbb{E}}_{N_{\delta}})$
and $\bar{a}_{1}{\mathbb{E}}_{N_{\delta}}\bar{a}_{2}$.]
Clearly we are done.
2) Similar, only we vary $(M^{*},p^{*})$; but it suffices to consider
$2^{\lambda}$ such pairs.
3) Should be clear. ∎
###### Definition/Claim 2.19.
Assume that ${{\mathfrak{s}}}$ is a good ess-$[\lambda,\mu)$-frame, so without
loss of generality it is full. We can repeat the operations in 2.18(3) and
2.16(2), so after $\omega$ times we get ${{\mathfrak{t}}}_{\omega}$ which is
full (that is ${{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{t}}_{\omega}}(M^{\omega})={{\mathscr{S}}}^{{\rm
na}}_{{\mathfrak{t}}_{\omega}}(M^{\omega}))$ and ${{\mathfrak{t}}}_{\omega}$
has canonical type-bases as witnessed by a function ${\rm
bas}_{{\mathfrak{t}}_{\omega}}$, see Definition 2.20.
###### Proof..
Should be clear. ∎
###### Definition 2.20.
We say that ${{\mathfrak{s}}}$ has type bases if there is a function ${\rm
bas}(-)$ such that:
1. $(a)$
if $M\in K_{{\mathfrak{s}}}$ and $p\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M)$ then ${\rm bas}(p)$ is (well defined and is) an
element of $M$
2. $(b)$
$p$ does not split over ${\rm bas}(p)$, that is, any automorphism of $M$ over
${\rm bas}(p)$ maps $p$ to itself (footnote: there are reasonable stronger
versions, but it follows that the function ${\rm bas}(-)$ satisfies them)
3. $(c)$
if $M\leq_{{\mathfrak{s}}}N$ and $p\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(N)$ then: ${\rm bas}(p)\in M$ iff $p$ does not fork over
$M$
4. $(d)$
if $f$ is an isomorphism from $M_{1}\in K_{{\mathfrak{s}}}$ onto $M_{2}\in
K_{{\mathfrak{s}}}$ and $p_{1}\in{{\mathscr{S}}}^{{\rm bs}}(M_{1})$ then
$f({\rm bas}(p_{1}))={\rm bas}(f(p_{1}))$.
###### Remark 2.21.
In §3 we can add:
1. $(e)$
strong uniqueness: if $A\subseteq
M\leq_{{\mathfrak{k}}({\mathfrak{s}})}{{\mathfrak{C}}}$ and $p\in{{\mathscr{S}}}(A,{{\mathfrak{C}}})$
is well defined, then for at most one $q\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M)$ do we have: $q$ extends $p$ and ${\rm bas}(q)\in A$.
(needed for non-forking extensions).
###### Definition 2.22.
We say that ${{\mathfrak{s}}}$ is equivalence-closed when :
1. $(a)$
${{\mathfrak{s}}}$ has type bases $p\mapsto{\rm bas}(p)$
2. $(b)$
if ${\mathbb{E}}_{M}$ is a definition of an equivalence relation on
${}^{\omega>}M$ preserved by isomorphisms and
$\leq_{{\mathfrak{s}}}$-extensions (i.e.
$M\leq_{{\mathfrak{s}}}N\Rightarrow{\mathbb{E}}_{M}={\mathbb{E}}_{N}{\restriction}{}^{\omega>}M$)
then there is a definable function $F$ from ${}^{\omega>}M$ to $M$ such that
$F^{M}(\bar{a})=F^{M}(\bar{b})$ iff $\bar{a}{\mathbb{E}}_{M}\bar{b}$ (or work in
${{\mathfrak{C}}}$).
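Clause (b) is, for equivalence relations on finite sequences, an analogue of first order elimination of imaginaries. A trivial illustration: if ${\mathbb{E}}_{M}$ is "equal first coordinate", i.e. $\bar{a}\,{\mathbb{E}}_{M}\,\bar{b}$ iff $a_{0}=b_{0}$, then $F^{M}(\bar{a})=a_{0}$ is as required; for less degenerate ${\mathbb{E}}$'s the existence of such an $F$ is a genuine demand on ${{\mathfrak{s}}}$.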
To phrase the relation between ${\mathfrak{k}}$ and ${\mathfrak{k}}^{\prime}$
we define.
###### Definition 2.23.
Assume ${\mathfrak{k}}_{1},{\mathfrak{k}}_{2}$ are ess-$[\lambda,\mu)$-a.e.c.
1) We say $\mathbf{i}$ is an interpretation in ${\mathfrak{k}}_{2}$ when
$\mathbf{i}$ consists of
1. $(a)$
a predicate $P^{*}_{\mathbf{i}}$
2. $(b)$
a subset $\tau_{\mathbf{i}}$ of $\tau_{{\mathfrak{k}}_{2}}$.
2) In this case for $M_{2}\in K_{{\mathfrak{k}}_{2}}$ let
$M^{[\mathbf{i}]}_{2}$ be the $\tau_{\mathbf{i}}$-model
$M_{1}=M^{[\mathbf{i}]}_{2}$ with
1. $\bullet$
universe $P^{M_{2}}_{\mathbf{i}}$
2. $\bullet$
$R^{M_{1}}=R^{M_{2}}{\restriction}|M_{1}|$ for $R\in\tau_{\mathbf{i}}$
3. $\bullet$
$F^{M_{1}}$ similarly, so $F^{M_{1}}$ can be a partial function even if
$F^{M_{2}}$ is full.
3) We say that ${\mathfrak{k}}_{1}$ is $\mathbf{i}$-interpreted (or
interpreted by $\mathbf{i}$) in ${\mathfrak{k}}_{2}$ when :
1. $(a)$
$\mathbf{i}$ is an interpretation in ${\mathfrak{k}}_{2}$
2. $(b)$
$\tau_{{\mathfrak{k}}_{1}}=\tau_{\mathbf{i}}$
3. $(c)$
$K_{{\mathfrak{k}}_{1}}=\\{M^{[\mathbf{i}]}_{2}:M_{2}\in
K_{{\mathfrak{k}}_{2}}\\}$
4. $(d)$
if $M_{2}\leq_{{\mathfrak{k}}_{2}}N_{2}$ then
$M^{[\mathbf{i}]}_{2}\leq_{{\mathfrak{k}}_{1}}N^{[\mathbf{i}]}_{2}$
5. $(e)$
if $M_{1}\leq_{{\mathfrak{k}}_{1}}N_{1}$ and $N_{1}=N^{[\mathbf{i}]}_{2}$, so
$N_{2}\in K_{{\mathfrak{k}}_{2}}$, then for some
$M_{2}\leq_{{\mathfrak{k}}_{2}}N_{2}$ we have $M_{1}=M^{[\mathbf{i}]}_{2}$
6. $(f)$
if $M_{1}\leq_{{\mathfrak{k}}_{1}}N_{1}$ and $M_{1}=M^{[\mathbf{i}]}_{2}$, so
$M_{2}\in K_{{\mathfrak{k}}_{2}}$, then, possibly replacing $M_{2}$ by a model
isomorphic to it over $M_{1}$, there is $N_{2}\in K_{{\mathfrak{k}}_{2}}$ such
that $M_{2}\leq_{{\mathfrak{k}}_{2}}N_{2}$ and $N_{1}=N^{[\mathbf{i}]}_{2}$.
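A toy illustration of Definition 2.23 (under suitable bookkeeping of the vocabularies, ignoring size side conditions): let $K_{{\mathfrak{k}}_{2}}$ be the class of pairs $M_{2}=(G,P^{M_{2}})$ with $G$ a group and $P^{M_{2}}$ a subgroup, ordered by
$(G,P)\leq_{{\mathfrak{k}}_{2}}(G^{\prime},P^{\prime})$ iff $G\leq G^{\prime}$ and $P=P^{\prime}\cap G$,
and let $\mathbf{i}$ consist of $P^{*}_{\mathbf{i}}=P$ and $\tau_{\mathbf{i}}=$ the group vocabulary. Then $M^{[\mathbf{i}]}_{2}$ is $P^{M_{2}}$ as a group, and the class of groups (with the subgroup order) is $\mathbf{i}$-interpreted in ${\mathfrak{k}}_{2}$; e.g. clause (f) can be checked using free products with amalgamation.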
###### Definition 2.24.
1) Assume ${\mathfrak{k}}_{1}$ is interpreted by $\mathbf{i}$ in
${\mathfrak{k}}_{2}$. We say strictly interpreted when: if
$M^{[\mathbf{i}]}_{2}=N^{[\mathbf{i}]}_{2}$ then $M_{2},N_{2}$ are isomorphic
over $M^{[\mathbf{i}]}_{2}$.
2) We say ${\mathfrak{k}}_{1}$ is equivalent to ${\mathfrak{k}}_{2}$ if there
are $n$ and ${\mathfrak{k}}^{\prime}_{0},\dotsc,{\mathfrak{k}}^{\prime}_{n}$
such that
${\mathfrak{k}}_{1}={\mathfrak{k}}^{\prime}_{0},{\mathfrak{k}}_{2}={\mathfrak{k}}^{\prime}_{n}$
and for each $\ell<n$, ${\mathfrak{k}}^{\prime}_{\ell}$ is strictly interpreted in
${\mathfrak{k}}^{\prime}_{\ell+1}$ or vice versa. Actually we can demand $n=2$ and that
${\mathfrak{k}}_{\ell}$ is strictly interpreted in ${\mathfrak{k}}^{\prime}_{1}$ for
$\ell=1,2$.
###### Definition 2.25.
As above for (good) ess-$[\lambda,\mu)$-frame.
###### Claim 2.26.
Assume ${\mathfrak{s}}$ is a good ${\rm ess}-[\lambda,\mu)$-frame. Then there
is ${\mathfrak{C}}$ (called a $\mu$-saturated model for $K_{{\mathfrak{s}}}$) such
that:
1. $(a)$
${\mathfrak{C}}$ is a $\tau_{{\mathfrak{s}}}$-model of cardinality $\leq\mu$
2. $(b)$
${\mathfrak{C}}$ is a union of some $\leq_{{\mathfrak{s}}}$-increasing
continuous sequence $\langle M_{\alpha}:\alpha<\mu\rangle$
3. $(c)$
if $M\in K_{{\mathfrak{s}}}$ (so $\lambda\leq\|M\|<\mu$) then $M$ is
$\leq_{{\mathfrak{s}}}$-embeddable into some $M_{\alpha}$ from clause (b)
4. $(d)$
$M_{\alpha+1}$ is brimmed over $M_{\alpha}$ for $\alpha<\mu$.
## 3\. I ${\mathbf{P}}$-simple types
We define the basic types over sets, not necessarily models. Note that in
Definition 3.5 there is no real loss using $C$ of cardinality
$\in(\lambda,\mu)$, as we can replace $\lambda$ by $\lambda_{1}=\lambda+|C|$
and so replace $K_{{\mathfrak{k}}}$ by
$K^{{\mathfrak{k}}}_{[\lambda_{1},\mu)}$.
###### Hypothesis 3.1.
1) ${\mathfrak{s}}$ is a good ess-$[\lambda,\mu)$-frame, see Definition 2.11.
2) ${{\mathfrak{s}}}$ has type bases, see Definition 2.20.
3) ${\mathfrak{C}}$ denote some $\mu$-saturated model for $K_{{\mathfrak{s}}}$
of cardinality $\leq\mu$, see 2.26.
4) $M,A,\ldots$ will denote models $<_{{\mathfrak{k}}({\mathfrak{s}})}{\mathfrak{C}}$ and subsets of
${\mathfrak{C}}$ respectively, but of cardinality $<\mu$.
###### Definition 3.2.
Let $A\subseteq M\in K_{{\mathfrak{s}}}$.
1) ${\rm dcl}(A,M)=\\{a\in M$: if
$M^{\prime}\leq_{{\mathfrak{s}}}M^{\prime\prime},M\leq_{{\mathfrak{s}}}M^{\prime\prime}$
and $A\subseteq M^{\prime}$ then $a\in M^{\prime}$ and for every automorphism
$f$ of $M^{\prime},f\restriction A={\rm id}_{A}\Rightarrow f(a)=a\\}$.
2) ${\rm acl}(A,M)$ is defined similarly but only with the first demand.
###### Definition 3.3.
1) For $A\subseteq M\in K_{{\mathfrak{s}}}$ let
${{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(A,M)=\\{q\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M):{\rm bas}(q)\in{\rm dcl}(A,{{\mathfrak{C}}})\\}.$
2) We call $p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(A,M)$ regular if
$p$ as a member of ${\mathscr{S}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ is regular.
###### Definition 3.4.
1) ${\mathbb{E}}_{{\mathfrak{s}}}$ is as in Claim 2.18(2).
2) If $A\subseteq M\in K_{{\mathfrak{s}}}$ and $p\in{\mathscr{S}}^{{\rm
bs}}_{{\mathfrak{s}}}(M)$, then $p\in{\mathscr{S}}^{{\rm
bs}}_{{\mathfrak{s}}}(A,M)$ iff $p$ is definable over $A$, see 1.12(3) iff
${\rm inv}(p)$ from Definition 1.12 is $\subseteq A$ and well defined.
###### Definition 3.5.
Let $A\subseteq{{\mathfrak{C}}}$.
1) We define a dependency relation on
good$(A,{{\mathfrak{C}}})=\\{c\in{\mathfrak{C}}:$ for some
$M<_{{\mathfrak{k}}({\mathfrak{s}})}{{\mathfrak{C}}},A\subseteq M$ and ${\rm
ortp}(c,M,{{\mathfrak{C}}})$ is definable over some finite $\bar{a}\subseteq
A\\}$ as follows:
1. $\circledast$
$c$ depends on $\mathbf{J}$ in $(A,{\mathfrak{C}})$ iff there is no
$M<_{{\mathfrak{k}}({\mathfrak{s}})}{{\mathfrak{C}}}$ such that
$A\cup\mathbf{J}\subseteq M$ and ${\rm ortp}(c,M,{{\mathfrak{C}}})$ is the
non-forking extension of ${\rm ortp}(c,\bar{a},{{\mathfrak{C}}})$ where
$\bar{a}$ witnesses $c\in\text{ good}(A,{{\mathfrak{C}}})$.
2) We say that $C\in{}^{\mu>}[{{\mathfrak{C}}}]$ is good over $(A,B)$ when
there is a brimmed $M<_{{\mathfrak{k}}({\mathfrak{s}})}{{\mathfrak{C}}}$ such
that $B\cup A\subseteq M$ and ${\rm ortp}(C,M,{{\mathfrak{C}}})$ (see
Definition 1.12(3)) is definable over $A$. (In the first order context we
could say $\{C,B\}$ is independent over $A$ but here this is problematic as
${\rm ortp}(B,A,{{\mathfrak{C}}})$ is not necessarily basic).
3) We say $\langle A_{\alpha}:\alpha<\alpha^{*}\rangle$ is independent over
$A$ in ${{\mathfrak{C}}}$, see [She09e, L8.8,6p.5(1)] if we can find
$M,\langle M_{\alpha}:\alpha<\alpha^{*}\rangle$ such that:
1. $\circledast(a)$
$A\subseteq
M\leq_{{\mathfrak{k}}({\mathfrak{s}})}M_{\alpha}<_{{\mathfrak{s}}}{{\mathfrak{C}}}$
for $\alpha<\alpha^{*}$
2. $(b)$
$M$ is brimmed
3. $(c)$
$A_{\alpha}\subseteq M_{\alpha}$
4. $(d)$
${\rm ortp}(A_{\alpha},M,{{\mathfrak{C}}})$ definable over $A$ (= does not
split over $A$)
5. $(e)$
$\langle M_{\alpha}:\alpha<\alpha^{*}\rangle$ is independent over $M$.
3A) Similarly “over $(A,B)$”.
4) We define locally independent naturally, that is every finite subfamily is
independent.
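For the prototype of Remark 3.18(3) below, with $T$ the (stable) theory of vector spaces over a fixed field and $A=\emptyset$, the relation of part (1) reduces to the familiar one (a sketch, under the usual identification of forking with linear dependence):
$c\text{ depends on }\mathbf{J}\text{ in }(\emptyset,{\mathfrak{C}})\quad\text{iff}\quad c\in{\rm span}(\mathbf{J}),$
and Claim 3.7(3) below then records exactly the exchange-style properties of this dependence.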
###### Claim 3.6.
Assume $a\in{\mathfrak{C}},A\subseteq{{\mathfrak{C}}}$.
1) $a\in{\rm good}(A,{\mathfrak{C}})$ iff $a$ realizes
$p\in{\mathscr{S}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ for some $M$ satisfying
$A\subseteq M<_{{\mathfrak{k}}({\mathfrak{s}})}{\mathfrak{C}}$.
###### Claim 3.7.
1) If $A_{\alpha}\subseteq{{\mathfrak{C}}}$ is good over
$(A,\bigcup\limits_{i<\alpha}A_{i})$ for $\alpha<\alpha^{*},\alpha^{*}<\omega$
then $\langle A_{\alpha}:\alpha<\alpha^{*}\rangle$ is independent over $A$.
2) Independence is preserved by reordering.
3) If $p\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$ is regular then on
$p({{\mathfrak{C}}})=\\{c:c$ realizes $p\\}$ the independence relation
satisfies:
1. $(a)$
like (1)
2. $(b)$
if $b^{1}_{\ell}$ depends on $\\{b^{0}_{0},\dotsc,b^{0}_{n-1}\\}$ for $\ell<k$
and $b^{2}$ depends on $\\{b^{1}_{\ell}:\ell<k\\}$ then $b^{2}$ depends on
$\\{b^{0}_{\ell}:\ell<n\\}$
3. $(c)$
if $b$ depends on $\mathbf{J},\mathbf{J}\subseteq\mathbf{J}^{\prime}$ then $b$
depends on $\mathbf{J}^{\prime}$.
###### Remark 3.8.
1) However, we have not mentioned finite character; but local independence
satisfies it trivially.
###### Proof..
Easy. ∎
###### Definition 3.9.
1) Assume $q\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ and
$p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$.
We say that $q$ is explicitly $(p,n)$-simple when:
1. $\circledast$
there are $b_{0},\dotsc,b_{n-1},c$ such that (footnote: clauses (c) + (e) are
replacements for "$c$ is algebraic over $\bar{a}+\{b_{\ell}:\ell<n\}$ and
each $b_{\ell}$ is necessary"):
1. $(a)$
$b_{\ell}$ realizes $p$
2. $(b)$
$c$ realizes $q$
3. $(c)$
$b_{\ell}$ is not good over $(\bar{a},c)$ for $\ell<n$ (footnote: "not good"
here is a replacement for "${\rm ortp}(b_{\ell},\bar{a}+c,{{\mathfrak{C}}})$
does not fork over $\bar{a}$")
4. $(d)$
$\langle b_{\ell}:\ell<n\rangle$ is independent over $\bar{a}$
5. $(e)$
$\langle c,b_{0},\dotsc,b_{n-1}\rangle$ is good over $\bar{a}$
6. $(f)$
if $c^{\prime}$ realizes $q$ then $c=c^{\prime}$ iff for every $b\in
p({{\mathfrak{C}}})$ we have: $b$ is good over $(\bar{a},c)$ iff $b$ is good
over $(\bar{a},c^{\prime})$ (footnote: this seems a reasonable choice here but
we can take others; it is an unreasonable choice for first order).
1A) We say that $a$ is explicitly $(p,n)$-simple over $A$ if ${\rm
ortp}(a,A,{{\mathfrak{C}}})$ is; similarly in the other definitions replacing
$(p,n)$ by $p$ means “for some $n$”.
2) Assume $q\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$ and ${\mathbf{P}}$ as in
Definition 1.6. We say that $q$ is ${\mathbf{P}}$-simple if we can find $n$
and explicitly ${\mathbf{P}}$-regular types
$p_{0},\dotsc,p_{n-1}\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$ such that: each $c\in
q({{\mathfrak{C}}})$ is definable by its type over
$\bar{a}\cup\bigcup\limits_{\ell<n}p_{\ell}({{\mathfrak{C}}})$.
3) In part (1) we say strongly explicitly $(p,n)$-simple if there are $k<\omega$
and $\langle\bar{a}^{*}_{\ell}:\ell<\omega\rangle$ and
$r\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$
such that [on finitely many see 3.9(3B) below]:
1. $(a)$
$\{\bar{a}^{*}_{\ell}:\ell<\omega\}\subseteq r({{\mathfrak{C}}})$ is independent
2. $(b)$
for any $c^{\prime},c^{\prime\prime}\in q({{\mathfrak{C}}})$ we have
$c^{\prime}=c^{\prime\prime}$ iff for infinitely many $m<\omega$ ($\equiv$ for
all but finitely many $m$, see the claim on averages) for every $b\in
p({{\mathfrak{C}}})$ we have:
1. $(*)$
$b$ is good over $(\bar{a},\bar{a}^{*}_{m},c^{\prime})$ iff $b$ is good over
$(\bar{a},\bar{a}^{*}_{m},c^{\prime\prime})$, (compare with 3.13, 3.17!).
3A) In part (1) we say weakly $(p,n)$-simple if in $\circledast$, clause $(f)$
is replaced by
1. $(f)^{\prime}$
if $b$ is good over $(\bar{a},\bar{a}^{*}_{m})$ then $c^{\prime},c^{\prime\prime}$
realize the same type over $\bar{a}\bar{a}^{*}_{m}b$.
3B) In part (1) we say $(p,n)$-simple if for some
$\bar{a}^{*}\in{}^{\omega>}{{\mathfrak{C}}}$ good over $\bar{a}$, for every
$c\in q({{\mathfrak{C}}})$ there are $b_{0},\dotsc,b_{n-1}\in
p({{\mathfrak{C}}})$ such that $c\in{\rm
dcl}(\bar{a},\bar{a}^{*},b_{0},\dotsc,b_{n-1})$ and
$\bar{a}{}^{\frown}\langle b_{0},\dotsc,b_{n-1}\rangle$ is good over $\bar{a}$;
similarly "simple".
4) Similarly in (2).
5) We define $gw_{p}(b,\bar{a})$ for $p$ regular parallel to some
$p^{\prime}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(\bar{a})$ ($gw$ stands for
general weight). Similarly for $gw_{p}(q)$.
We first list some obvious properties.
###### Claim 3.10.
1) If $c$ is ${\mathbf{P}}$-simple over $\bar{a},\bar{a}\subseteq
A\subset{{\mathfrak{C}}}$ then $w_{p}(c,A)$ is finite.
2) The obvious implications.
###### Claim 3.11.
1) [Closure of the simple basic types] Assume $p\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$. If $\bar{b}_{1},\bar{b}_{2}$
are $p$-simple over $A$ then
1. $(a)$
$\bar{b}_{1}{}^{\frown}\bar{b}_{2}$ is $p$-simple (of course, ${\rm
ortp}_{{\mathfrak{s}}}(\bar{b}_{1}\bar{b}_{2},\bar{a},{{\mathfrak{C}}})$ is
not necessarily in ${{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$ even if ${\rm
ortp}_{{\mathfrak{s}}}(\bar{b}_{\ell},\bar{a},{{\mathfrak{C}}})\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$ for $\ell=1,2$)
2. $(b)$
also ${\rm ortp}(\bar{b}_{2},\bar{a}\bar{b}_{1},{{\mathfrak{C}}})$ is
${\mathbf{P}}$-simple.
2) If $\bar{b}_{\alpha}$ is $p$-simple over $\bar{a}$ for
$\alpha<\alpha^{*}$ and $\pi:\beta^{*}\rightarrow\alpha^{*}$ is one-to-one (into), then
$\sum\limits_{\alpha<\alpha^{*}}gw_{p}(\bar{b}_{\alpha},\bar{a}\cup\bigcup\limits_{i<\alpha}\bar{b}_{i})=\sum\limits_{\beta<\beta^{*}}gw_{p}(\bar{b}_{\pi(\beta)},\bar{a}\cup\bigcup\limits_{i<\beta}\bar{b}_{\pi(i)})$.
###### Claim 3.12.
[${{\mathfrak{s}}}$ is equivalence-closed].
Assume that $p,q\in{{\mathscr{S}}}^{{\rm bs}}(M)$ are not weakly orthogonal
(e.g. see 6.9). Then for some $\bar{a}\in{}^{\omega>}M$ we have: $p,q$ are
definable over $\bar{a}$ (works without being stationary) and for some
${{\mathfrak{k}}}_{{\mathfrak{s}}}$-definable function $\mathbf{F}$, for each
$c\in q({{\mathfrak{C}}}),{\rm
ortp}_{{\mathfrak{s}}}(\mathbf{F}(c,\bar{a}),\bar{a},{{\mathfrak{C}}})\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$ and is explicitly
$(p,n)$-simple for some $n$ (if, e.g., $M$ is $(\lambda,*)$-brimmed then
$n=w_{p}(q)$).
###### Proof..
We can find $n$ and $c,b_{0},\dotsc,b_{n-1}\in{{\mathfrak{C}}}$ with $c$
realizing $q$, $b_{\ell}$ realizing $p$, $\{b_{\ell},c\}$ not independent over
$M$, and $n$ maximal. Choose $\bar{a}\in{}^{\omega>}M$ such that ${\rm
ortp}_{{\mathfrak{s}}}(\langle
c,b_{0},\dotsc,b_{n-1}\rangle,M,{{\mathfrak{C}}})$ is definable over
$\bar{a}$. Define $E_{\bar{a}}$, an equivalence relation on
$q({\mathfrak{C}})$: $c_{1}E_{\bar{a}}c_{2}$ iff for every $b\in
p({{\mathfrak{C}}})$ we have: $b$ is good over $(\bar{a},c_{1})$ iff $b$ is good
over $(\bar{a},c_{2})$. By "${{\mathfrak{s}}}$ is eq-closed", we are done. ∎
###### Claim 3.13.
1) Assume $p,q\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$ are weakly
orthogonal (see e.g. 6.9(1)) but not orthogonal. Then we can find
$\bar{a}\in{}^{\omega>}M$ over which $p,q$ are definable and
$r_{1}\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(\bar{a},{{\mathfrak{C}}})$ such that letting
$p_{1}=p\restriction\bar{a},q_{1}=q\restriction\bar{a},n=:w_{p}(q)\geq 1$ we
have:
1. $\circledast^{n}_{\bar{a},p_{1},q_{1},r_{1}}(a)$
$p_{1},q_{1},r_{1}\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(\bar{a}),\bar{a}\in{}^{\omega>}{{\mathfrak{C}}}$
2. $(b)$
$p_{1},q_{1}$ are weakly orthogonal (see e.g. Definition 6.9(1))
3. $(c)$
if $\{a^{*}_{n}:n<\omega\}\subseteq r_{1}({{\mathfrak{C}}})$ is independent
over $\bar{a}$ and $c$ realizes $q$ then for infinitely many $m<\omega$ there
is $b\in p({\mathfrak{C}})$ such that $b$ is good over $(\bar{a},a^{*}_{m})$
but not over $(\bar{a},a^{*}_{m},c)$
4. $(d)$
in (c), in fact, there are $n$ independent such $b$'s (but not $n+1$).
2) If $\circledast^{n}_{\bar{a},p_{1},q_{1},r_{1}}$ then (see Definition
3.9(3)) for some definable function $\mathbf{F}$: if $c$ realizes
$q_{1}$ and $c^{*}=\mathbf{F}(c,\bar{a})$ then ${\rm
ortp}_{{\mathfrak{s}}}(c^{*},\bar{a},{{\mathfrak{C}}})$ is $(p_{1},n)$-simple.
See proof below.
###### Claim 3.14.
0) Assume $A\subseteq\mathfrak{C}$; $a\in\mathfrak{C}$ is called finitary
over $A$ when it is definable over $\{a_{0},\dots,a_{n-1}\}$ where each $a_{\ell}$ is
in $\mathfrak{C}$ and is good over $A$ inside $\mathfrak{C}$.
1) If $a\in{\rm dcl}(\cup\{A_{i}:i<\alpha\}\cup A,{{\mathfrak{C}}})$ and
${\rm ortp}(a,A,{{\mathfrak{C}}})$ is finitary over $A$ and
$\{A_{i}:i<\alpha\}$ is independent over $A$ then for some finite
$u\subseteq\alpha$ we have $a\in{\rm dcl}(\cup\{A_{i}:i\in u\}\cup
A,{{\mathfrak{C}}})$.
2) If ${\rm ortp}(b,\bar{a},{{\mathfrak{C}}})$ is ${\mathbf{P}}$-simple, then
it is finitary.
3) If $\{A_{i}:i<\alpha\}$ is independent over $A$ and $c$ is finitary over
$A$ then for some finite $u\subseteq\alpha$ (even $|u|<{\rm
wg}(c,A)$), $\{A_{i}:i\in\alpha\backslash u\}$ is independent over
$(A,A\cup\{c\})$ (or use
$(A^{\prime},A^{\prime\prime}),(A^{\prime},A^{\prime\prime}\cup\{c\})$).
###### Definition 3.15.
1) ${\rm dcl}(A)=\\{a$: for every automorphism $f$ of
${{\mathfrak{C}}},f(a)=a\\}$.
2) ${\rm dcl}_{{\rm fin}}(A)=\cup\\{{\rm dcl}(B):B\subseteq A$ finite$\\}$.
3) $a$ is finitary over $A$ if there are $n<\omega$ and
$c_{0},\dotsc,c_{n-1}\in{\rm good}(A)$ such that $a\in{\rm
dcl}(A\cup\{c_{0},\dotsc,c_{n-1}\})$ (or ${\rm dcl}_{{\rm fin}}$?).
4) For such $a$ let ${\rm wg}(a,A)$ be $w({\rm tp}(a,A,{\mathfrak{C}}))$ when
well defined.
5) Strongly simple implies simple.
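For orientation, in the first order prototype, when ${\mathfrak{C}}$ is saturated (hence strongly homogeneous) these automorphism-based closures agree with the syntactic ones:
$a\in{\rm dcl}(A)\ \Leftrightarrow\ \{a\}=\varphi({\mathfrak{C}},\bar{b})\text{ for some formula }\varphi(x,\bar{y})\text{ and }\bar{b}\in{}^{\omega>}A;$
we use this only as a guide to the abstract definitions above.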
###### Claim 3.16.
In Definition 3.9(3), for some $m,k<\omega$ large enough, for every $c\in
q({{\mathfrak{C}}})$ there are
$b_{0},\dotsc,b_{m-1}\in\bigcup\limits_{\ell<n}p_{\ell}({{\mathfrak{C}}})$
such that $c\in{\rm
dcl}(\bar{a}\cup\{a^{*}_{\ell}:\ell<k\}\cup\{b_{\ell}:\ell<m\})$.
###### Proof..
Let $M_{1},M_{2}\in K_{{\mathfrak{s}}({\rm brim})}$ be such that:
$M\leq_{{\mathfrak{s}}}M_{1}\leq_{{\mathfrak{s}}}M_{2}$; $M_{1}$ is
$(\lambda,*)$-brimmed over $M$; $p_{\ell}\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M_{\ell})$ is a non-forking extension of $p$;
$q_{\ell}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M_{\ell})$ is a
non-forking extension of $q$; $c\in M_{2}$ realizes $q_{1}$ and
$(M_{1},M_{2},c)\in K^{3,{\rm pr}}_{{\mathfrak{s}}({\rm brim})}$. Let
$b_{\ell}\in p_{1}(M_{2})$ for $\ell<n^{*}:=w_{p}(q)$ be such that
$\{b_{\ell}:\ell<n^{*}\}$ is independent in $(M_{1},M_{2})$, let
$\bar{a}^{*}\in{}^{\omega>}(M_{1})$ be such that ${\rm
ortp}_{{\mathfrak{s}}}(\langle c,b_{0},\dotsc,b_{n-1}\rangle,M_{1},M_{2})$ is
definable over $\bar{a}^{*}$ and let $r={\rm
ortp}_{{\mathfrak{s}}}(\bar{a}^{*},M,M_{2})$, $r^{+}={\rm
ortp}(\bar{a}^{*}{}^{\frown}\langle b_{0},\dotsc,b_{n-1}\rangle,M,M_{2})$.
Let $\bar{a}\in{}^{\omega>}M$ be such that ${\rm
ortp}_{{\mathfrak{s}}}(\bar{a}^{*}{}^{\frown}\langle
c,b_{0},\dotsc,b_{n-1}\rangle,M,M_{2})$ is definable over $\bar{a}$. As
$M_{1}$ is $(\lambda,*)$-brimmed over $M$ there is
$\{\bar{a}^{*}_{\ell}:\ell<\omega\}\subseteq r({{\mathfrak{C}}})$ independent in
$(M,M_{1})$; moreover, letting $\bar{a}^{*}_{\omega}=\bar{a}^{*}$, we have that $\langle
\bar{a}^{*}_{\alpha}:\alpha\leq\omega\rangle$ is independent in $(M,M_{1})$. Clearly
${\rm ortp}_{{\mathfrak{s}}}(c{}^{\frown}\bar{a}^{*}_{\alpha},M,M_{2})$ does not
depend on $\alpha$, hence we can find $\left<\langle
b^{\alpha}_{\ell}:\ell<n\rangle:\alpha\leq\omega\right>$ such that
$b^{\alpha}_{\ell}\in M_{2}$, $b^{\omega}_{\ell}=b_{\ell}$ and
$\{\langle c,\bar{a}^{*}_{\alpha},b^{\alpha}_{0},\ldots,
b^{\alpha}_{n-1}\rangle:\alpha\leq\omega\}$ (with $\alpha\leq\omega$ as index
set) is independent in $(M_{1},M_{2})$.
The rest should be clear. ∎
###### Definition 3.17.
Assume
$\bar{a}\in{}^{\omega>}{{\mathfrak{C}}},n<\omega,p,q,r\in{\mathscr{S}}^{{\rm
bs}}(M)$ are as in the definition of $p$-simple but $p,q$ are weakly
orthogonal (see e.g. Definition 6.9(1)), and let $R$ be a definable relation
such that for any independent $\bar{a}^{*}_{\ell}\in
r({{\mathfrak{C}}}),\ell<k^{*}$, the mapping $c\mapsto\langle\{b\in
p({{\mathfrak{C}}}):{{\mathfrak{C}}}\models
R(b,c,\bar{a}^{*}_{\ell})\}:\ell<k^{*}\rangle$ is a one-to-one function from
$q({{\mathfrak{C}}})$ into $\{\langle
J_{\ell}:\ell<k^{*}\rangle:J_{\ell}\subseteq p({{\mathfrak{C}}})$ is closed
under dependence and has $p$-weight $n^{*}\}$.
1) We can define $E=E_{p,q,r}$, a two-place relation on
$r({{\mathfrak{C}}})$: $\bar{a}^{*}_{1}E\bar{a}^{*}_{2}$ iff
$\bar{a}^{*}_{1},\bar{a}^{*}_{2}\in r({\mathfrak{C}})$ have the same projection
common to $p({\mathfrak{C}})$ and $q({\mathfrak{C}})$.
2) Define a unit-less group on $r({{\mathfrak{C}}})/E$ and its action on $q({{\mathfrak{C}}})$.
###### Remark 3.18.
1) A major point is: as $q$ is $p$-simple, $w_{p}(-)$ acts "nicely" on
$q({{\mathfrak{C}}})$, so if $c_{1},c_{2},c_{3}\in q({{\mathfrak{C}}})$ then
$w_{p}(\langle c_{1},c_{2},c_{3}\rangle,\bar{a})\leq 3n^{*}$. This enables us
to define averages using finite sequences, which seems quite satisfying.
Alternatively, look more at averages of independent sets.
2) Silly Groups: Concerning interpreting groups note that in our present
context, for every definable set $P^{M}$ we can add the group of finite
subsets of $P^{M}$ with symmetric difference (as addition); see the display
after this remark.
3) The axiomatization above has prototype ${\mathfrak{s}}$ where
$K_{{\mathfrak{s}}}=\{M:M$ a $\kappa$-saturated model of
$T\}$, $\leq_{{\mathfrak{s}}}=\prec\restriction
K_{{\mathfrak{s}}}$, $\bigcup\limits_{{\mathfrak{s}}}$
is non-forking, $T$ a stable first order theory with $\kappa(T)\leq{\rm
cf}(\kappa)$. But we may prefer to formalize the pair
$({{\mathfrak{t}}},{{\mathfrak{s}}})$: ${{\mathfrak{s}}}$ as above,
$K_{{\mathfrak{t}}}=$ the class of models of $T$, $\leq_{{\mathfrak{t}}}=\prec\restriction
K_{{\mathfrak{t}}}$, $\bigcup\limits_{{\mathfrak{t}}}$
is non-forking.
From ${{\mathfrak{s}}}$ we can reconstruct a ${{\mathfrak{t}}}$ by closing
${{\mathfrak{k}}}_{{\mathfrak{s}}}$ under direct limits, but in interesting
cases we end up with a bigger ${{\mathfrak{t}}}$.
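To spell out the group mentioned in 3.18(2) (an elementary verification): on ${\mathscr{P}}_{{\rm fin}}(P^{M})=\{u\subseteq P^{M}:u$ finite$\}$ let
$u+v:=u\triangle v=(u\cup v)\setminus(u\cap v).$
Identifying $u$ with its characteristic function, $\triangle$ is coordinatewise addition modulo $2$, hence associative and commutative; $\emptyset$ is the unit and $u\triangle u=\emptyset$, so every element is its own inverse and $({\mathscr{P}}_{{\rm fin}}(P^{M}),\triangle)$ is an abelian group of exponent $2$.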
Part II Generalizing Stable Classes
## 4\. II Introduction
In this part we try to deal with classes like "$\aleph_{1}$-saturated models
of a first order theory $T$, even a stable one" rather than "models of
$T$". The parallel problem for "models of $T$, even a superstable one," is the
subject of [She09d].
$*\qquad*\qquad*$
Now some constructions go well by induction on cardinality, say by dealing
with $(\lambda,{\mathscr{P}}(n))$-systems of models, but not all. E.g. starting
with $\aleph_{0}$ we may consider $\lambda>\aleph_{0}$, so we can find
$F:[\lambda]^{\aleph_{0}}\rightarrow\lambda$ such that there is an infinite
decreasing sequence of $F$-closed subsets of
$\lambda$, with $u\in[\lambda]^{<\aleph_{0}}\Rightarrow F(u)=0$, maybe such that
$u\in[\lambda]^{\leq\aleph_{0}}\Rightarrow|c\ell_{F}(u)|\leq\aleph_{0}$. Let
$\langle u_{\alpha}:\alpha<\alpha_{*}\rangle$ list
$\{c\ell_{F}(u):u\in[\lambda]^{\leq\aleph_{0}}\}$ such that
$c\ell_{F}(u_{\alpha})\subseteq
c\ell_{F}(u_{\beta})\Rightarrow\alpha\leq\beta$; we try to choose
$M_{u_{\alpha}}$ by induction on $\alpha$.
## 5\. II Axiomatizing a.e.c. without full continuity
### 5(A). a.e.c.
Classes like "$\aleph_{1}$-saturated models of a first order $T$ which is not
superstable" do not fall under a.e.c.; still they are close, and below we
suggest a framework for them. So for increasing sequences of short length we
have a weaker demand.
We shall say more on primes later.
We shall lift a $(\mu,\lambda,\kappa)$-a.e.c. to an
$(\infty,\lambda,\kappa)$-a.e.c. (see below), so actually ${\mathfrak{k}}_{\lambda}$
suffices; but for our main objects, good frames, this is more complicated as
their properties (e.g., the amalgamation property) are not necessarily preserved
by the lifting.
This section generalizes [She09c, §1], in some cases the differences are
minor, sometimes the whole point is the difference.
###### Convention 5.1.
1) In this section ${\mathfrak{k}}$ will denote a directed a.e.c., see
Definition 5.2, may write d.a.e.c. (the $d$ stands for directed).
2) We shall write (outside the definitions)
$\mu_{{\mathfrak{k}}},\lambda_{{\mathfrak{k}}},\kappa_{{\mathfrak{k}}}$.
###### Definition 5.2.
Assume $\lambda<\mu,\lambda^{<\kappa}=\lambda$ (for notational simplicity) and
$\alpha<\mu\Rightarrow|\alpha|^{<\kappa}<\mu$ and $\kappa$ is regular.
We say that ${{\mathfrak{k}}}$ is a $(\mu,\lambda,\kappa)-1$-d.a.e.c. (we may
omit or add the "$(\mu,\lambda,\kappa)$" by Ax(0)(d) below, similarly in
similar definitions; if $\kappa=\aleph_{0}$ we may omit it, and instead of
$\mu=\mu^{+}_{1}$ we may write $\leq\mu_{1}$) when the axioms below hold; we
write d.a.e.c. or 0-d.a.e.c. when we omit Ax(III)(b),(IV)(b).
Ax(0): ${{\mathfrak{k}}}$ consists of
1. $(a)$
$\tau_{{\mathfrak{k}}}$, a vocabulary with each predicate and function symbol
of arity $\leq\lambda$
2. $(b)$
$K$, a class of $\tau$-models
3. $(c)$
a two-place relation $\leq_{{\mathfrak{k}}}$ on $K$
4. $(d)$
the cardinals
$\mu=\mu_{{\mathfrak{k}}}=\mu({\mathfrak{k}}),\lambda=\lambda_{{\mathfrak{k}}}=\lambda({\mathfrak{k}})$
and $\kappa=\kappa_{{\mathfrak{k}}}=\kappa({\mathfrak{k}})$ (so
$\mu>\lambda=\lambda^{<\kappa}\geq\kappa={\rm cf}(\kappa)$ and
$\alpha<\mu\Rightarrow|\alpha|^{<\kappa}<\mu$)
such that
1. $(e)$
if $M_{1}\cong M_{2}$ then $M_{1}\in K\Leftrightarrow M_{2}\in K$
2. $(f)$
if $(N_{1},M_{1})\cong(N_{2},M_{2})$ then
$M_{1}\leq_{{\mathfrak{k}}}N_{1}\Rightarrow M_{2}\leq_{{\mathfrak{k}}}N_{2}$
3. $(g)$
every $M\in K$ has cardinality $\geq\lambda$ but $<\mu$
4. $(Ax(I)(a))$
$M\leq_{{\mathfrak{k}}}N\Rightarrow M\subseteq N$
5. $(Ax(II)(a))$
$\leq_{{\mathfrak{k}}}$ is a partial order
6. $Ax(III)$
assume that $\langle M_{i}:i<\delta\rangle$ is a
$\leq_{{\mathfrak{k}}}$-increasing sequence and
$\|\cup\\{M_{i}:i<\delta\\}\|<\mu$ then
1. $(a)$
(existence of unions) if ${\rm cf}(\delta)\geq\kappa$ then there is $M\in K$
such that $i<\delta\Rightarrow M_{i}\leq_{{\mathfrak{k}}}M$ and
$|M|=\cup\\{|M_{i}|:i<\delta\\}$ but not necessarily
$M=\bigcup\limits_{i<\delta}M_{i}$
2. $(b)$
(existence of limits) there is $M\in K$ such that $i<\delta\Rightarrow
M_{i}\leq_{{\mathfrak{k}}}M$
7. $Ax(IV)(a)$
(weak uniqueness of limit = weak smoothness) for $\langle
M_{i}:i<\delta\rangle$ as above,
1. $(a)$
if ${\rm cf}(\delta)\geq\kappa$ and $M$ is as in Ax(III)(a) and
$i<\delta\Rightarrow M_{i}\leq_{{\mathfrak{k}}}N$ then
$M\leq_{{\mathfrak{k}}}N$
2. $(b)$
if $N_{\ell}\in K$ and $i<\delta\Rightarrow
M_{i}\leq_{{\mathfrak{k}}}N_{\ell}$ for $\ell=1,2$ then there are $N\in K$ and
$f_{1},f_{2}$ such that $f_{\ell}$ is a $\leq_{{\mathfrak{k}}}$-embedding of
$N_{\ell}$ into $N$ for $\ell=1,2$ and $i<\delta\Rightarrow f_{1}\restriction
M_{i}=f_{2}\restriction M_{i}$
8. $Ax(V)$
if $N_{\ell}\leq_{{\mathfrak{k}}}M$ for $\ell=1,2$ and $N_{1}\subseteq N_{2}$
then $N_{1}\leq_{{\mathfrak{k}}}N_{2}$
9. $Ax(VI)$
(L.S.T. property) if $A\subseteq M\in K,|A|\leq\lambda$ then there is
$N\leq_{{\mathfrak{k}}}M$ of cardinality $\lambda$ such that $A\subseteq N$.
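A guiding instance for the weakened continuity axioms (informally; cf. 3.18(3) and the beginning of this section): for $T$ stable and $\kappa={\rm cf}(\kappa)\geq\kappa(T)$ let
$K=\{M:M\models T\text{ and }M\text{ is }\kappa\text{-saturated}\},\qquad\leq_{{\mathfrak{k}}}=\prec{\restriction}K.$
The union of a $\prec$-increasing chain of members of $K$ of cofinality $\geq\kappa$ is again $\kappa$-saturated, while for cofinality $<\kappa$ it need not be; this is why Ax(III)(a) and Ax(IV)(a) are demanded only when ${\rm cf}(\delta)\geq\kappa$.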
###### Remark 5.3.
There are some more axioms listed in 5.4(5), but we shall mention them in any
claim in which they are used so no need to memorize, so 5.4(1)-(4) omit them?
###### Definition 5.4.
1) We say ${\mathfrak{k}}$ is a 4-d.a.e.c. or d.a.e.c.+ when it is a
$(\mu,\lambda,\kappa)-1$-d.a.e.c. and satisfies Ax(III)(d), Ax(IV)(e) below.
2) We say ${\mathfrak{k}}$ is a 2-d.a.e.c. or d.a.e.c.± when it is a
$(\mu,\lambda,\kappa)-0$-d.a.e.c. and Ax(III)(d), Ax(IV)(d) below hold.
3) We say ${\mathfrak{k}}$ is 5-d.a.e.c. when it is 1-d.a.e.c. and Ax(III)(f)
holds.
4) We say ${\mathfrak{k}}$ is 6-d.a.e.c. when it is a 1-d.a.e.c. and
Ax(III)(f) + Ax(IV)(f).
5) Concerning Definition 5.2, we consider the following axioms:
Ax(III)(c) if $I$ is $\kappa$-directed and $\bar{M}=\langle M_{s}:s\in
I\rangle$ is $\leq_{{\mathfrak{k}}}$-increasing (so $s\leq_{I}t\Rightarrow
M_{s}\leq_{{\mathfrak{k}}}M_{t}$) and $\Sigma\{\|M_{s}\|:s\in I\}<\mu$
then $\bar{M}$ has a $\leq_{{\mathfrak{k}}}$-upper bound $M$, i.e. $s\in
I\Rightarrow M_{s}\leq_{{\mathfrak{k}}}M$.
Ax(III)(d) (union of directed systems) if $I$ is $\kappa$-directed,
$|I|<\mu$, $\langle M_{t}:t\in I\rangle$ is $\leq_{{\mathfrak{k}}}$-increasing
and $\|\cup\{M_{t}:t\in I\}\|<\mu$ then there is one and only one $M$ with
universe $\cup\{|M_{t}|:t\in I\}$ such that $M_{s}\leq_{{\mathfrak{k}}}M$ for
every $s\in I$; we call it the $\leq_{{\mathfrak{k}}}$-union of $\langle
M_{t}:t\in I\rangle$.
Ax(III)(e) like Ax(III)(c) but $I$ is just directed
Ax(III)(f) if $\bar{M}=\langle M_{i}:i<\delta\rangle$ is
$\leq_{{\mathfrak{k}}}$-increasing, ${\rm cf}(\delta)<\kappa$ and
$|\cup\{M_{i}:i<\delta\}|<\mu$ then there is $M$ which is
$\leq_{{\mathfrak{k}}}$-prime over $\bar{M}$, i.e.
1. $(*)\quad$ if $N\in K_{{\mathfrak{k}}}$ and $i<\delta\Rightarrow M_{i}\leq_{{\mathfrak{k}}}N$ then there is a $\leq_{{\mathfrak{k}}}$-embedding
of $M$ into $N$ over $\cup\{|M_{i}|:i<\delta\}$.
Ax(IV)(c) if $I$ is $\kappa$-directed and $\bar{M}=\langle M_{s}:s\in
I\rangle$ is $\leq_{{\mathfrak{k}}}$-increasing and $N_{1},N_{2}$
are $\leq_{{\mathfrak{k}}}$-upper bounds of $\bar{M}$ then for some
$(N^{\prime}_{2},f)$ we have
$N_{2}\leq_{{\mathfrak{k}}}N^{\prime}_{2}$ and $f$ is a
$\leq_{{\mathfrak{k}}}$-embedding of $N_{1}$ into $N^{\prime}_{2}$
which is the identity on $M_{s}$ for every $s\in I$
(this is a weak form of uniqueness).
Ax(IV)(d) if $I$ is a $\kappa$-directed partial order, $\bar{M}=\langle
M_{s}:s\in I\rangle$ is $\leq_{{\mathfrak{k}}}$-increasing,
$s\in I\Rightarrow M_{s}\leq_{{\mathfrak{k}}}M$ and $|M|=\cup\{|M_{s}|:s\in
I\}$, then
$\bigwedge\limits_{s}M_{s}\leq_{{\mathfrak{k}}}N\Rightarrow
M\leq_{{\mathfrak{k}}}N$.
Ax(IV)(e) Like Ax(IV)(c) but $I$ is just directed.
Ax(IV)(f) if $I$ is directed and $\bar{M}=\langle M_{s}:s\in I\rangle$ is
$\leq_{{\mathfrak{k}}}$-increasing then there is
$M$ which is $\leq_{{\mathfrak{k}}}$-prime over $\bar{M}$, defined as in
Ax(III)(f).
###### Claim 5.5.
Assume (footnote: by 5.1 there is no need to say this) ${\mathfrak{k}}$ is a d.a.e.c.
1) Ax(III)(d) implies Ax(III)(c).
2) Ax(III)(e) implies Ax(III)(c) and it implies Ax(III)(b).
3) Ax(IV)(d) implies Ax(IV)(c).
4) Ax(IV)(e) implies Ax(IV)(c) and implies Ax(III)(b).
5) In all the axioms in Definition 5.4 it is necessary that
$|\cup\\{M_{s}:s\in I\\}|<\mu_{{\mathfrak{k}}}$.
6) Ax(IV)(b) implies that ${\mathfrak{k}}$ has amalgamation.
###### Definition 5.6.
We say $\langle M_{i}:i<\alpha\rangle$ is $\leq_{{\mathfrak{k}}}$-increasing
$(\geq\kappa)$-continuous when it is $\leq_{{\mathfrak{k}}}$-increasing and
$\delta<\alpha$ and ${\rm
cf}(\delta)\geq\kappa\Rightarrow|M_{\delta}|=\cup\{|M_{j}|:j<\delta\}$.
As an exercise we consider directed systems with mappings.
###### Definition 5.7.
1) We say that $\bar{M}=\langle M_{t},h_{s,t}:s\leq_{I}t\rangle$ is a
$\leq_{{\mathfrak{k}}}$-directed system when $I$ is a directed partial order
and if $t_{0}\leq_{I}t_{1}\leq_{I}t_{2}$ then
$h_{t_{2},t_{0}}=h_{t_{2},t_{1}}\circ h_{t_{1},t_{0}}$.
1A) We say that $\bar{M}=\langle M_{t},h_{s,t}:s\leq_{I}t\rangle$ is a
$\leq_{{\mathfrak{k}}}-\theta$-directed system when in addition $I$ is
$\theta$-directed.
2) We omit $h_{s,t}$ when $s\leq_{I}t\Rightarrow h_{s,t}={\rm id}_{M_{s}}$ and
write $\bar{M}=\langle M_{t}:t\in I\rangle$.
3) We say $(M,\bar{h})$ is a $\leq_{{\mathfrak{k}}}$-limit of $\bar{M}$ when
$\bar{h}=\langle h_{s}:s\in I\rangle,h_{s}$ is a
$\leq_{{\mathfrak{k}}}$-embedding of $M_{s}$ into $M$ and
$s\leq_{I}t\Rightarrow h_{s}=h_{t}\circ h_{t,s}$.
4) We say $\bar{M}=\langle M_{\alpha}:\alpha<\alpha^{*}\rangle$ is
$\leq_{{\mathfrak{k}}}$-semi-continuous when : (see Ax(III)(f) in 5.4)
1. $(a)$
$\bar{M}$ is $\leq_{{\mathfrak{k}}}$-increasing
2. $(b)$
if $\alpha<\alpha^{*}$ has cofinality $\geq\kappa$ then
$M_{\alpha}=\cup\\{M_{\beta}:\beta<\alpha\\}$
3. $(c)$
if $\alpha<\alpha^{*}$ is a limit ordinal of cofinality $<\kappa$ then $M_{\alpha}$ is
$\leq_{{\mathfrak{k}}}$-prime over $\bar{M}\restriction\alpha$.
###### Observation 5.8.
[${{\mathfrak{k}}}$ is an d.a.e.c.]
1) If $\bar{M}=\langle M_{t},h_{s,t}:s\leq_{I}t\rangle$ is a
$\leq_{{\mathfrak{k}}}$-directed system, then we can find a
$\leq_{{\mathfrak{k}}}$-directed system $\langle M^{\prime}_{t}:t\in I\rangle$
(so $s\leq_{I}t\Rightarrow M^{\prime}_{s}\leq_{{\mathfrak{k}}}M^{\prime}_{t}$)
and $\bar{g}=\langle g_{t}:t\in I\rangle$ such that:
1. $(a)$
$g_{t}$ is an isomorphism from $M_{t}$ onto $M^{\prime}_{t}$
2. $(b)$
if $s\leq_{I}t$ then $g_{s}=g_{t}\circ h_{s,t}$.
2) So in the axioms (III)(a),(b),(IV)(a) from Definition 5.2 as well as those
of 5.4 we can use $\leq_{{\mathfrak{k}}}$-directed system $\langle
M_{s},h_{s,t}:s\leq_{I}t\rangle$ with $I$ as there.
3) If ${\mathfrak{k}}$ is an ess-$(\mu,\lambda)$-a.e.c., see §1 then
${\mathfrak{k}}$ is a $(\mu,\lambda,\aleph_{0})$-d.a.e.c. and satisfies all
the axioms from 5.4.
4) If $(M,\bar{h})$ is prime over $\bar{M}=\langle
M_{t},h_{s,t}:s\leq_{I}t\rangle$ and $\chi=\Sigma\\{\|M_{t}\|:t\in I\\}$ then
$\|M\|\leq\chi^{<\kappa}$.
###### Proof..
Straightforward, e.g. we can use “${{\mathfrak{k}}}$ has
$(\chi^{<\kappa})$-LST”, i.e. Observation 5.9. ∎
More serious is proving the LST theorem in our context (recall that in the
axioms, see Ax(VI), we demand it only down to $\lambda$).
###### Claim 5.9.
[${\mathfrak{k}}$ is a $(\mu,\lambda,\kappa)-2$-d.a.e.c., see Definition 5.4.]
If
$\lambda_{{\mathfrak{k}}}\leq\chi=\chi^{<\kappa}<\mu_{{\mathfrak{k}}},A\subseteq
N\in{\mathfrak{k}}$ and $|A|\leq\chi\leq\|N\|$ then there is
$M\leq_{{\mathfrak{k}}}N$ of cardinality $\chi$ such that $A\subseteq M$.
###### Proof..
Let $\langle u_{\alpha}:\alpha<\alpha(*)\rangle$ list
$[A]^{<\kappa({{\mathfrak{k}}})}$, let $I$ be the following partial order:
1. $(*)_{1}$
$(\alpha)\quad$ set of elements is $\{\alpha<\alpha(*)$: for no $\beta<\alpha$ do
we have $u_{\alpha}\subseteq u_{\beta}\}$
2. $(\beta)\quad\alpha\leq_{I}\beta$ iff $u_{\alpha}\subseteq u_{\beta}$ (hence $\alpha\leq\beta$).
Easily
1. $(*)_{2}$
$(a)\quad I$ is $\kappa$-directed
2. $(b)\quad$ for every $\alpha<\alpha(*)$ for some $\beta<\alpha(*)$ we have $u_{\alpha}\subseteq u_{\beta}\wedge\beta\in I$
3. $(c)\quad\cup\\{u_{\alpha}:\alpha\in I\\}=A$.
Now we choose $M_{\alpha}$ by induction on $\alpha<\alpha(*)$ such that
1. $(*)_{3}$
$(a)\quad M_{\alpha}\leq_{{\mathfrak{k}}}N$
2. $(b)\quad\|M_{\alpha}\|=\lambda_{{\mathfrak{k}}}$
3. $(c)\quad M_{\alpha}$ includes $\cup\{M_{\beta}:\beta<_{I}\alpha\}\cup u_{\alpha}$.
Note that $|\\{\beta\in I:\beta<_{I}\alpha\\}|\leq|\\{u:u\subseteq
u_{\alpha}\\}|=2^{|u_{\alpha}|}\leq
2^{<\kappa({{\mathfrak{k}}})}\leq\lambda_{{\mathfrak{k}}}$ and by the
induction hypothesis
$\beta<\alpha\Rightarrow\|M_{\beta}\|\leq\lambda_{{\mathfrak{k}}}$ and recall
$|u_{\alpha}|<\kappa({{\mathfrak{k}}})\leq\lambda_{{\mathfrak{k}}}$ hence the
set $\cup\\{M_{\beta}:\beta<\alpha\\}\cup u_{\alpha}$ is a subset of $N$ of
cardinality $\leq\lambda$ hence by Ax(VI) there is $M_{\alpha}$ as required.
Having chosen $\langle M_{\alpha}:\alpha\in I\rangle$ clearly by Ax(V) it is a
$\leq_{{\mathfrak{k}}}$-directed system hence by Ax(III)(d),
$M=\cup\\{M_{\alpha}:\alpha\in I\\}$ is well defined with universe
$\cup\\{|M_{\alpha}|:\alpha\in I\\}$ and by Ax(IV)(d) we have
$M\leq_{{\mathfrak{k}}}N$.
Clearly $\|M\|\leq\Sigma\\{\|M_{\alpha}\|:\alpha\in
I\\}\leq|I|\cdot\lambda_{{\mathfrak{k}}}=\chi$, and by $(*)_{2}(c)+(*)_{3}(c)$
we have
$A\subseteq\cup\\{u_{\alpha}:\alpha<\chi\\}=\cup\\{u_{\alpha}:\alpha\in
I\\}\subseteq\cup\\{|M_{\alpha}|:\alpha\in I\\}=M$ and so $M$ is as required.
∎
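For intuition on the order used above: for regular $\kappa$ the partial order $([A]^{<\kappa},\subseteq)$ is itself $\kappa$-directed, since the union of $<\kappa$ sets each of size $<\kappa$ again has size $<\kappa$; the thinning in $(*)_{1}$ merely replaces the list $\langle u_{\alpha}:\alpha<\alpha(*)\rangle$ by a set of representatives on which $\subseteq$ is still $\kappa$-directed.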
###### Notation 5.10.
1) For $\chi\in[\lambda_{{\mathfrak{k}}},\mu_{{\mathfrak{k}}})$ let
$K_{\chi}=K^{{\mathfrak{k}}}_{\chi}=\\{M\in K:\|M\|=\chi\\}$ and
$K_{<\chi}=\bigcup\limits_{\mu<\chi}K_{\mu}$.
2) ${{\mathfrak{k}}}_{\chi}=(K_{\chi},\leq_{{\mathfrak{k}}}\restriction
K_{\chi})$.
3) If
$\lambda_{{\mathfrak{k}}}\leq\lambda_{1}<\mu_{1}\leq\mu_{{\mathfrak{k}}},\lambda_{1}=\lambda^{<\kappa}_{1}$
and $(\forall\alpha<\mu_{1})(|\alpha|^{<\kappa}<\mu_{1})$, then we define
$K_{[\lambda_{1},\mu_{1})}=K^{{\mathfrak{k}}}_{[\lambda_{1},\mu_{1})}$ and
${\mathfrak{k}}_{1}={\mathfrak{k}}_{[\lambda_{1},\mu_{1})}$ similarly, i.e.
$K_{{\mathfrak{k}}_{1}}=\{M\in
K_{{\mathfrak{k}}}:\|M\|\in[\lambda_{1},\mu_{1})\}$ and
$\leq_{{\mathfrak{k}}_{1}}=\leq_{{\mathfrak{k}}}{\restriction}K_{{\mathfrak{k}}_{1}}$
and
$\lambda_{{\mathfrak{k}}_{1}}=\lambda_{1},\mu_{{\mathfrak{k}}_{1}}=\mu_{1},\kappa_{{\mathfrak{k}}_{1}}=\kappa_{{\mathfrak{k}}}$.
###### Definition 5.11.
The embedding $f:N\rightarrow M$ is a ${{\mathfrak{k}}}$-embedding or a
$\leq_{{\mathfrak{k}}}$-embedding if its range is the universe of a model
$N^{\prime}\leq_{{\mathfrak{k}}}M$, (so $f:N\rightarrow N^{\prime}$ is an
isomorphism onto).
###### Claim 5.12.
[${\mathfrak{k}}$ is a 2-d.a.e.c.]
1) For every $N\in K$ there is a $\kappa_{{\mathfrak{k}}}$-directed partial
order $I$ of cardinality $\leq\|N\|$ and $\bar{M}=\langle M_{t}:t\in I\rangle$
such that $t\in I\Rightarrow M_{t}\leq_{{\mathfrak{k}}}N,\|M_{t}\|\leq{\rm
LST}({{\mathfrak{k}}}),I\models s<t\Rightarrow
M_{s}\leq_{{\mathfrak{k}}}M_{t}$ and $N=\bigcup\limits_{t\in I}M_{t}$.
2) For every $N_{1}\leq_{{\mathfrak{k}}}N_{2}$ we can find $\langle
M^{\ell}_{t}:t\in I_{\ell}\rangle$ as in part (1) for $N_{\ell}$ such that
$I_{1}\subseteq I_{2}$ and $t\in I_{1}\Rightarrow M^{2}_{t}=M^{1}_{t}$.
###### Proof..
1) As in the proof of 5.9.
2) Similarly. ∎
###### Claim 5.13.
Assume
$\lambda_{{\mathfrak{k}}}\leq\lambda_{1}=\lambda^{<\kappa}_{1}<\mu_{1}\leq\mu_{{\mathfrak{k}}}$
and $(\forall\alpha<\mu_{1})(|\alpha|^{<\kappa}<\mu_{1})$.
1) Then ${\mathfrak{k}}^{*}_{1}:={\mathfrak{k}}_{[\lambda_{1},\mu_{1})}$ as
defined in 5.10(3) is a
$(\lambda_{1},\mu_{1},\kappa_{{\mathfrak{k}}})$-d.a.e.c.
2) For each of the following axioms, if ${\mathfrak{k}}$ satisfies it then so
does ${\mathfrak{k}}_{1}$: Ax(III)(d),(IV)(b),(IV)(c),(IV)(d).
3) If in addition ${\mathfrak{k}}$ satisfies Ax(III)(d),(IV)(d), then part (2)
applies to all the axioms in 5.2, 5.4.
###### Claim 5.14.
1) If ${\mathfrak{k}}$ satisfies Ax(IV)(e) then ${\mathfrak{k}}$ satisfies
Ax(III)(e) provided that $\mu_{{\mathfrak{k}}}$ is regular or at least the
relevant $I$ has cardinality $<{\rm cf}(\mu_{{\mathfrak{k}}})$.
2) If Ax(III)(d),(IV)(d) hold, we can waive "$\mu_{{\mathfrak{k}}}$ is regular".
###### Proof..
We prove this by induction on $|I|$.
Case 1: $I$ is finite.
So there is $t^{*}\in I$ such that $t\in I\Rightarrow t\leq_{I}t^{*}$, so this
is trivial.
Case 2: $I$ is countable.
So we can find a sequence $\langle t_{n}:n<\omega\rangle$ such that $t_{n}\in
I,t_{n}\leq_{I}t_{n+1}$ and $s\in
I\Rightarrow\bigvee\limits_{n<\omega}s\leq_{I}t_{n}$. Now we can apply the
axiom to $\langle M_{t_{n}},h_{t_{n},t_{m}}:m<n<\omega\rangle$.
Case 3: $I$ uncountable.
First, we can find an increasing continuous sequence $\langle
I_{\alpha}:\alpha<|I|\rangle$ such that $I_{\alpha}\subseteq I$ is directed of
cardinality $\leq|\alpha|+\aleph_{0}$ and let
$I_{|I|}=I=\cup\\{I_{\alpha}:\alpha<|I|\\}$.
Second, by the induction hypothesis for each $\alpha<|I|$ we choose
$N_{\alpha},\bar{h}^{\alpha}=\langle h_{\alpha,t}:t\in I_{\alpha}\rangle$ such
that:
1. $(a)$
$N_{\alpha}\in K^{{\mathfrak{k}}}_{\leq\chi}$
2. $(b)$
$h_{\alpha,t}$ is a $\leq_{{\mathfrak{k}}}$-embedding of $M_{t}$ into
$N_{\alpha}$
3. $(c)$
if $s<_{I}t$ are in $I_{\alpha}$ then $h_{\alpha,s}=h_{\alpha,t}\circ h_{t,s}$
4. $(d)$
if $\beta<\alpha$ then $N_{\beta}\leq_{{\mathfrak{k}}}N_{\alpha}$ and $t\in
I_{\beta}\Rightarrow h_{\alpha,t}=h_{\beta,t}$.
For $\alpha=0$ use the induction hypothesis.
For $\alpha$ a limit ordinal, by Ax(III)(a) there is $N_{\alpha}$ as required; as
$I_{\alpha}=\cup\{I_{\beta}:\beta<\alpha\}$ there are no new $h_{t}$'s; well,
we have to check $\Sigma\{\|N_{\beta}\|:\beta<\alpha\}<\mu_{{\mathfrak{k}}}$,
but as we assume $\mu_{{\mathfrak{k}}}$ is regular this holds.
For $\alpha=\beta+1$, by the induction hypothesis there is
$(N^{\prime}_{\alpha},\bar{g}^{\alpha})$ which is a limit of $\langle
M_{s},h_{s,t}:s\leq_{I_{\alpha}}t\rangle$. Now apply Ax(IV)(e); well, its
directed-system version, with $\langle
M_{s},h_{s,t}:s\leq_{I_{\beta}}t\rangle,(N^{\prime}_{\alpha},\bar{g}^{\alpha}),(N_{\beta},\langle
h_{\beta,s}:s\in I_{\beta}\rangle)$ here standing for $\bar{M},N_{1},N_{2}$ there.
So there are $N_{\alpha}$ and a $\leq_{{\mathfrak{k}}}$-embedding $f^{\alpha}$ of
$N^{\prime}_{\alpha}$ into $N_{\alpha}$ such that
$N_{\beta}\leq_{{\mathfrak{k}}}N_{\alpha}$ and $s\in I_{\beta}\Rightarrow
f^{\alpha}\circ g_{s}=h_{\beta,s}$. Lastly, for $s\in I_{\alpha}\backslash
I_{\beta}$ we let $h_{\alpha,s}=f^{\alpha}\circ g_{s}$, so we are clearly
done.
2) Easy by 5.9 or 5.13. ∎
### 5(B). Basic Notions
As in [She09c, §1], we now recall the definition of orbital types (note that
it is natural to look at types only over models which are amalgamation bases,
recalling that Ax(IV)(b) implies every $M\in K_{{\mathfrak{k}}}$ is one).
###### Definition 5.15.
1) For $\chi\in[\lambda_{{\mathfrak{k}}},\mu_{{\mathfrak{k}}})$ and $M\in
K_{\chi}$ we define ${{\mathscr{S}}}(M)$ as $\{{\rm
ortp}(a,M,N):M\leq_{{\mathfrak{k}}}N\in K_{\chi}\text{ and }a\in N\}$ where
${\rm ortp}(a,M,N)=(M,N,a)/{{\mathscr{E}}}_{M}$ where ${{\mathscr{E}}}_{M}$ is
the transitive closure of ${{\mathscr{E}}}^{{\rm at}}_{M}$, and the two-place
relation ${{\mathscr{E}}}^{{\rm at}}_{M}$ is defined by:
$\begin{array}[]{clcr}(M,N_{1},a_{1})\,{{\mathscr{E}}}^{{\rm
at}}_{M}&(M,N_{2},a_{2})\text{ iff
}M\leq_{{\mathfrak{k}}}N_{\ell},a_{\ell}\in
N_{\ell},\|M\|\leq\|N_{\ell}\|=\|M\|^{<\kappa}\\ &\text{ for }\ell=1,2\text{
and there are }N\in K_{\chi}\text{ and
}\leq_{{\mathfrak{k}}}\text{-embeddings}\\ &f_{\ell}:N_{\ell}\rightarrow
N\text{ for }\ell=1,2\text{ such that:}\\ &f_{1}\restriction M={\rm
id}_{M}=f_{2}\restriction M\text{ and }f_{1}(a_{1})=f_{2}(a_{2}).\end{array}$
$(\text{of course
}M\leq_{{\mathfrak{k}}}N_{1},M\leq_{{\mathfrak{k}}}N_{2}\text{ and }a_{1}\in
N_{1},a_{2}\in N_{2})$
2) We say “$a$ realizes $p$ in $N$” when $a\in N,p\in{{\mathscr{S}}}(M)$ and
letting $\chi=\|M\|^{<\kappa}$ for some $N^{\prime}\in K_{\chi}$ we have
$M\leq_{{\mathfrak{k}}}N^{\prime}\leq_{{\mathfrak{k}}}N$ and $a\in N^{\prime}$
and $p={\rm ortp}(a,M,N^{\prime})$; so $M,N^{\prime}\in K_{\chi}$ but possibly
$N\notin K_{\chi}$.
3) We say "$a_{2}$ strongly realizes
$(M,N^{1},a_{1})/{{\mathscr{E}}}^{\text{at}}_{M}$ in $N$" (footnote: note that
${{\mathscr{E}}}^{{\rm at}}_{M}$ is not an equivalence relation and certainly
in general is not ${\mathscr{E}}_{M}$) when for some $N^{2}$ we have
$M\leq_{{\mathfrak{k}}}N^{2}\leq_{{\mathfrak{k}}}N$ and $a_{2}\in N^{2}$ and
$(M,N^{1},a_{1})\,{{\mathscr{E}}}^{{\rm at}}_{M}\,(M,N^{2},a_{2})$.
4) We say $M_{0}$ is a $\leq_{{\mathfrak{k}}[\chi_{0},\chi_{1})}$-amalgamation
base if this holds in ${{\mathfrak{k}}}_{[\chi_{0},\chi_{1})}$, see below.
4A) We say $M_{0}\in{{\mathfrak{k}}}$ is an amalgamation base or
$\leq_{{\mathfrak{k}}}$-amalgamation base when : for every
$M_{1},M_{2}\in{{\mathfrak{k}}}$ and $\leq_{{\mathfrak{k}}}$-embeddings
$f_{\ell}:M_{0}\rightarrow M_{\ell}$ (for $\ell=1,2$) there is
$M_{3}\in K_{{\mathfrak{k}}}$ and $\leq_{{\mathfrak{k}}}$-embeddings
$g_{\ell}:M_{\ell}\rightarrow M_{3}$ (for $\ell=1,2$) such that $g_{1}\circ
f_{1}=g_{2}\circ f_{2}$.
5) We say ${{\mathfrak{k}}}$ is stable in $\chi$ when :
1. $(a)$
$\lambda_{{\mathfrak{k}}}\leq\chi<\mu_{{\mathfrak{k}}}$
2. $(b)$
$M\in K_{\chi}\Rightarrow|{{\mathscr{S}}}(M)|\leq\chi$
3. $(c)$
$\chi\in{\rm Car}_{{\mathfrak{k}}}$ which means $\chi=\chi^{<\kappa}$ or the
conclusion of 5.9 holds
4. $(d)$
${\mathfrak{k}}_{\chi}$ has amalgamation.
6) We say $p=q\restriction M$ if
$p\in{{\mathscr{S}}}(M),q\in{{\mathscr{S}}}(N),M\leq_{{\mathfrak{k}}}N$ and
for some $N^{+},N\leq_{{\mathfrak{k}}}N^{+}$ and $a\in N^{+}$ we have $p={\rm
ortp}(a,M,N^{+}),q={\rm ortp}(a,N,N^{+})$; note that $p\restriction M$ is well
defined if $M\leq_{{\mathfrak{k}}}N,p\in{{\mathscr{S}}}(N)$.
7) For finite $m$, for $M\leq_{{\mathfrak{k}}}N,\bar{a}\in{}^{m}N$ we can
define ${\rm ortp}(\bar{a},M,N)$ and ${{\mathscr{S}}}^{m}(M)$ similarly and
${{\mathscr{S}}}^{<\omega}(M)=\bigcup\limits_{m<\omega}{{\mathscr{S}}}^{m}(M)$,
(but we shall not use this in any essential way, hence we let
${{\mathscr{S}}}(M)={{\mathscr{S}}}^{1}(M)$).
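For orientation, in the first order prototype (models of a complete first order $T$ with $\leq_{{\mathfrak{k}}}=\prec$), amalgamation holds, ${{\mathscr{E}}}^{{\rm at}}_{M}$ is already an equivalence relation, and orbital types coincide with the usual syntactic types:
${\rm ortp}(a_{1},M,N_{1})={\rm ortp}(a_{2},M,N_{2})\ \Leftrightarrow\ {\rm tp}(a_{1},M,N_{1})={\rm tp}(a_{2},M,N_{2}).$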
###### Definition 5.16.
1) We say $N$ is $\lambda$-universal above or over $M$ if for every
$M^{\prime},M\leq_{{\mathfrak{k}}}M^{\prime}\in K^{{\mathfrak{k}}}_{\lambda}$,
there is a $\leq_{{\mathfrak{k}}}$-embedding of $M^{\prime}$ into $N$ over
M$. If we omit $\lambda$ we mean ${\rm
rnd}_{{\mathfrak{k}}}(\lambda)=\min({\rm
Car}_{{\mathfrak{k}}}\backslash\lambda)$, so
$\leq\|N\|^{<\kappa({{\mathfrak{k}}})}$; clearly this implies that $M$ is a
$\leq_{{\mathfrak{k}}_{[\chi_{0},\chi_{1}]}}$-amalgamation base where
$\chi_{0}=\|M\|,\chi_{1}=(\|N\|^{<\kappa})^{+}$.
2) $K^{3}_{{\mathfrak{k}}}=\\{(M,N,a):M\leq_{{\mathfrak{k}}}N,a\in N\backslash
M$ and $M,N\in K_{{\mathfrak{k}}}\\}$, with the partial order
$\leq=\leq_{{\mathfrak{k}}}$ defined by
$(M,N,a)\leq(M^{\prime},N^{\prime},a^{\prime})$ iff
$a=a^{\prime},M\leq_{{\mathfrak{k}}}M^{\prime}$ and
$N\leq_{{\mathfrak{k}}}N^{\prime}$. We say $(M,N,a)$ is minimal if
$(M,N,a)\leq(M^{\prime},N_{\ell},a)\in K^{3}_{{\mathfrak{k}}}$ for $\ell=1,2$
implies ${\rm ortp}(a,M^{\prime},N_{1})={\rm ortp}(a,M^{\prime},N_{2})$
moreover, $(M^{\prime},N_{1},a)\,{{\mathscr{E}}}^{{\rm
at}}_{M^{\prime}}\,(M^{\prime},N_{2},a)$, (not needed if every $M^{\prime}\in
K_{\lambda}$ is an amalgamation base).
2A) $K^{3,{\mathfrak{k}}}_{\lambda}$ is defined similarly using
${\mathfrak{k}}_{[\lambda,{\rm rnd}(\lambda)]}$.
Generalizing superlimit, we have more than one reasonable choice.
###### Definition 5.17.
1) For $\ell=1,2$ we say $M^{*}\in K^{{\mathfrak{k}}}_{\lambda}$ is
superlimit${}_{\ell}$ or $(\lambda,\geq\kappa)$-superlimit${}_{\ell}$ when clause (c) of 5.15(5)
holds and:
1. $(a)$
it is universal, (i.e., every $M\in K^{{\mathfrak{k}}}_{\lambda}$ can be
properly $\leq_{{\mathfrak{k}}}$-embedded into $M^{*}$), and
2. $(b)$
Case 1: $\ell=1$ if $\langle M_{i}:i\leq\delta\rangle$ is
$\leq_{{\mathfrak{k}}}$-increasing, ${\rm
cf}(\delta)\geq\kappa,\delta<\lambda^{+}$ and $i<\delta\Rightarrow M_{i}\cong
M^{*}$ then $M_{\delta}\cong M^{*}$
3. Case 2: $\ell=2$ if $I$ is a $(<\kappa)$-directed partial order of cardinality $\leq\chi$, $\langle M_{t}:t\in I\rangle$ is $\leq_{{\mathfrak{k}}}$-increasing and $t\in I\Rightarrow M_{t}\cong M^{*}$ then $\cup\\{M_{t}:t\in I\\}\cong M^{*}$.
2) $M$ is $\lambda$-saturated above $\mu$ if $\|M\|\geq\lambda>\mu\geq{\rm
LST}({{\mathfrak{k}}})$ and:
$N\leq_{{\mathfrak{k}}}M,\mu\leq\|N\|<\lambda,p\in{{\mathscr{S}}}_{{\mathfrak{k}}}(N)$
implies $p$ is strongly realized in $M$. Let “$M$ is $\lambda^{+}$-saturated”
mean that “$M$ is $\lambda^{+}$-saturated above $\lambda$” and
$K(\lambda^{+}$-saturated) $=\\{M\in K:M$ is $\lambda^{+}$-saturated$\\}$ and
“$M$ is saturated $\ldots$” mean “$M$ is $\|M\|$-saturated $\ldots$”.
###### Definition 5.18.
1) We say $N$ is $(\lambda,\sigma)$-brimmed over $M$ if we can find a sequence
$\langle M_{i}:i<\sigma\rangle$ which is $\leq_{{\mathfrak{k}}}$-increasing
semi-continuous, $M_{i}\in K_{\lambda},M_{0}=M,M_{i+1}$ is
$\leq_{{\mathfrak{k}}}$-universal over $M_{i}$ and
$\bigcup\limits_{i<\sigma}M_{i}=N$. We say $N$ is $(\lambda,\sigma)$-brimmed
over $A$ if $A\subseteq N\in K_{\lambda}$ and we can find $\langle
M_{i}:i<\sigma\rangle$ as above such that $A\subseteq M_{0}$; if
$A=\emptyset$ we may omit “over $A$”.
2) We say $N$ is $(\lambda,*)$-brimmed over $M$ if for some
$\sigma\in[\kappa,\lambda),N$ is $(\lambda,\sigma)$-brimmed over $M$. We say
$N$ is $(\lambda,*)$-brimmed if for some $M,N$ is $(\lambda,*)$-brimmed over
$M$.
3) If $\alpha<\lambda^{+}$ let “$N$ is $(\lambda,\alpha)$-brimmed over $M$”
mean $M\leq_{{\mathfrak{k}}}N$ are from $K_{\lambda}$ and ${\rm
cf}(\alpha)\geq\kappa\Rightarrow N$ is $(\lambda,{\rm cf}(\alpha))$-brimmed
over $M$.
Recall
###### Claim 5.19.
1) If ${{\mathfrak{k}}}$ is a $(\mu,\lambda,\kappa)$-d.a.e.c. with
amalgamation, stable in $\chi$ (so $\chi\in[\lambda,\mu)$) and $\sigma={\rm
cf}(\sigma)$, then for every $M\in K^{{\mathfrak{k}}}_{\chi}$ there
is $N\in K^{{\mathfrak{k}}}_{\chi}$ universal over $M$ which is
$(\chi,\sigma)$-brimmed over $M$ (hence is $S^{\chi}_{\sigma}$-limit, see
[She09a], not used).
2) If $N_{\ell}$ is $(\chi,\theta)$-brimmed over $M$ for $\ell=1,2$, and
$\kappa\leq\theta={\rm cf}(\theta)\leq\chi^{+}$ then $N_{1},N_{2}$ are
isomorphic over $M$.
3) If $M_{2}$ is $(\chi,\theta)$-brimmed over $M_{1}$ and
$M_{0}\leq_{{\mathfrak{s}}}M_{1}$ then $M_{2}$ is $(\chi,\theta)$-brimmed over
$M_{0}$.
###### Proof..
Straightforward for part (1); recall clause (c) of Definition 5.15(5).
2),3) As in [She09c]. ∎
$*\qquad*\qquad*$
### 5(C). Liftings
Here we deal with lifting; there are two aspects. First, if
${\mathfrak{k}}^{1},{\mathfrak{k}}^{2}$ agree in $\lambda$ they agree in every
higher cardinal. Second, given ${\mathfrak{k}}$ we can find
${\mathfrak{k}}_{1}$ with
$\mu_{{\mathfrak{k}}_{1}}=\infty,({\mathfrak{k}}_{1})_{\lambda}={\mathfrak{k}}_{\lambda}$.
###### Theorem 5.20.
1) If ${{\mathfrak{k}}}^{\ell}$ is a $(\mu,\lambda,\kappa)$-d.a.e.c. for
$\ell=1,2$ and ${{\mathfrak{k}}}^{1}_{\lambda}={{\mathfrak{k}}}^{2}_{\lambda}$
then ${{\mathfrak{k}}}^{1}={\mathfrak{k}}^{2}$.
2) If ${{\mathfrak{k}}}^{\ell}$ is a $(\mu_{\ell},\lambda,\kappa)$-d.a.e.c.
for $\ell=1,2$ and ${\mathfrak{k}}^{1}$ satisfies Ax(IV)(d) and
$\mu_{1}\leq\mu_{2}$ and
${{\mathfrak{k}}}^{1}_{\lambda}={\mathfrak{k}}^{2}_{\lambda}$ then
${\mathfrak{k}}^{1}={\mathfrak{k}}^{2}_{[\lambda,\mu_{1})}$.
###### Proof..
By 5.12. ∎
###### Theorem 5.21.
The lifting-up Theorem
1) If ${{\mathfrak{k}}}_{\lambda}$ is a
$(\lambda^{+},\lambda,\kappa)$-d.a.e.c.± then the pair
$(K^{\prime},\leq_{{{\mathfrak{k}}}^{\prime}})$ defined below is an
$(\infty,\lambda,\kappa)$-d.a.e.c.+ where we define
1. $(A)$
$K^{\prime}$ is the class of $M$ such that $M$ is a
$\tau_{{{\mathfrak{k}}}_{\lambda}}$-model, and for some $I$ and $\bar{M}$ we
have
1. $(a)$
$I$ is a $\kappa$-directed partial order
2. $(b)$
$\bar{M}=\langle M_{s}:s\in I\rangle$
3. $(c)$
$M_{s}\in K_{\lambda}$
4. $(d)$
$I\models s<t\Rightarrow M_{s}\leq_{{{\mathfrak{k}}}_{\lambda}}M_{t}$
5. $(e)$
if $J\subseteq I$ has cardinality $\leq\lambda$ and is $\kappa$-directed and
$M_{J}$ is the ${\mathfrak{k}}$-union of $\langle M_{t}:t\in J\rangle$, see
Definition 5.4, then $M_{J}$ is a submodel of $M$
6. $(f)$
$M=\cup\\{M_{J}:J\subseteq I$ is $\kappa$-directed of cardinality
$\leq\lambda\\}$, i.e. both for the universe and for the relations and
functions.
2. $(A)^{\prime}$
we call such $\langle M_{s}:s\in I\rangle$ a witness for $M\in K^{\prime}$, we
call it reasonable if $|I|\leq\|M\|^{<\kappa}$
3. $(B)$
$M\leq_{{{\mathfrak{k}}}^{\prime}}N$ iff for some $I,J,\bar{M}$ we have
1. $(a)$
$J$ is a $\kappa$-directed partial order
2. $(b)$
$I\subseteq J$ is $\kappa$-directed
3. $(c)$
$\bar{M}=\langle M_{s}:s\in J\rangle$ is
$\leq_{{{\mathfrak{k}}}_{\lambda}}$-increasing
4. $(d)$
$\langle M_{s}:s\in J\rangle$ is a witness for $N\in K^{\prime}$
5. $(e)$
$\langle M_{s}:s\in I\rangle$ is a witness for $M\in K^{\prime}$.
4. $(B)^{\prime}$
We call such $I,\langle M_{s}:s\in J\rangle$ witnesses for
$M\leq_{{{\mathfrak{k}}}^{\prime}}N$ or say $(I,J,\langle M_{s}:s\in
J\rangle)$ witness $M\leq_{{{\mathfrak{k}}}^{\prime}}N$.
2) For the other axioms we have implications.
###### Proof..
The proof of part (2) is straightforward so we concentrate on part (1). So let
us check the axioms one by one.
Ax0(a),(b),(c) and (d): $K^{\prime}$ is a class of $\tau$-models,
$\leq_{{{\mathfrak{k}}}^{\prime}}$ is a two-place relation on $K^{\prime}$; $K^{\prime}$ and
$\leq_{{{\mathfrak{k}}}^{\prime}}$ are closed under isomorphisms and $M\in
K^{\prime}\Rightarrow\|M\|\geq\lambda$, etc.
[Why? trivially.]
AxI(a): If $M\leq_{{{\mathfrak{k}}}^{\prime}}N$ then $M\subseteq N$.
[Why? We use smoothness for $\kappa$-directed unions, i.e. Ax(IV)(x).]
AxII(a),(b),(c):
We prove the first, the others are easier.
Ax II(a):
$M_{0}\leq_{{{\mathfrak{k}}}^{\prime}}M_{1}\leq_{{{\mathfrak{k}}}^{\prime}}M_{2}$
implies $M_{0}\leq_{{{\mathfrak{k}}}^{\prime}}M_{2}$ and $M\in
K^{\prime}\Rightarrow M\leq_{{{\mathfrak{k}}}^{\prime}}M$.
[Why? The second phrase is trivial. For the first phrase let, for
$\ell\in\\{1,2\\}$, the $\kappa$-directed partial orders $I_{\ell}\subseteq
J_{\ell}$ and $\bar{M}^{\ell}=\langle M^{\ell}_{s}:s\in J_{\ell}\rangle$
witness $M_{\ell-1}\leq_{{{\mathfrak{k}}}^{\prime}}M_{\ell}$.
We first observe
1. $\boxdot$
if $I$ is a $\kappa$-directed partial order, $\langle M^{\ell}_{t}:t\in
I\rangle$ is a $\leq_{{{\mathfrak{k}}}_{\lambda}}$-directed system witnessing
$M_{\ell}\in K^{\prime}$ for $\ell=1,2$, and $t\in I\Rightarrow
M^{1}_{t}\leq_{{{\mathfrak{k}}}_{\lambda}}M^{2}_{t}$, then
$M_{1}\leq_{{{\mathfrak{k}}}^{\prime}}M_{2}$.
[Why? Let $I_{1}$ be the partial order with set of elements $I\times\\{1\\}$
ordered by $(s,1)\leq_{I_{1}}(t,1)\Leftrightarrow s\leq_{I}t$. Let $I_{2}$ be
the partial order with set of elements $I\times\\{1,2\\}$ ordered by
$(s_{1},\ell_{1})\leq_{I_{2}}(s_{2},\ell_{2})\Leftrightarrow
s_{1}\leq_{I}s_{2}\wedge\ell_{1}\leq\ell_{2}$. Clearly $I_{1}\subseteq I_{2}$
are both $\kappa$-directed.
Let $M_{(s,1)}=M^{1}_{s},M_{(s,2)}=M^{2}_{s}$; so clearly $\bar{M}=\langle
M_{t}:t\in I_{2}\rangle$ is such that $\bar{M}{\restriction}I_{\ell}$ is a
$\leq_{{{\mathfrak{k}}}_{\lambda}}$-$\kappa$-directed system witnessing
$M_{\ell}\in K^{\prime}$ for $\ell=1,2$, and
$(I_{1},I_{2},\bar{M})$ witnesses $M_{1}\leq_{{{\mathfrak{k}}}^{\prime}}M_{2}$,
so we are done.]
Without loss of generality $J_{1},J_{2}$ are disjoint. Let
$\chi=(|J_{1}|+|J_{2}|)^{<\kappa}$, so $\lambda\leq\chi<\mu$, and let
$\begin{array}[]{clcr}{{\mathscr{U}}}:=\\{u:&u\subseteq J_{1}\cup J_{2}\text{
has cardinality }\leq\lambda,\text{ and }u\cap I_{\ell}\\\ &\text{is
}\kappa\text{-directed under }\leq_{I_{\ell}}\text{ and }u\cap J_{\ell}\text{
is }\kappa\text{-directed under }\leq_{J_{\ell}}\text{ for }\ell=1,2,\text{
and}\\\ &\cup\\{|M^{2}_{t}|:t\in u\cap I_{2}\\}=\cup\\{|M^{1}_{t}|:t\in u\cap
J_{1}\\}\\}.\end{array}$
Let $\langle u_{\alpha}:\alpha<\alpha^{*}\rangle$ list ${{\mathscr{U}}}$, and
we define a partial order $I$:
1. $(a)^{\prime}$
its set of elements is $\\{\alpha<\alpha^{*}$: for no $\beta<\alpha$ do we
have $u_{\beta}=u_{\alpha}\\}$
2. $(b)^{\prime}$
$\alpha\leq_{I}\beta$ iff $u_{\alpha}\subseteq u_{\beta}\wedge\alpha\in
I\wedge\beta\in I$.
Note that the set $I$ may have cardinality as large as $\chi^{\lambda}$,
which may be $>\mu_{{\mathfrak{k}}}$.
As in the proof of 5.9, $I$ is $\kappa$-directed.
For $\ell=0,1,2$ and $\alpha\in I$ let $M_{\ell,\alpha}$ be
1. $(a)$
the $\leq_{{\mathfrak{k}}}$-union of $\langle M^{1}_{t}:t\in u_{\alpha}\cap
I_{1}\rangle$ if $\ell=0$
2. $(b)$
the $\leq_{{\mathfrak{k}}}$-union of the
$\leq_{{{\mathfrak{k}}}_{\lambda}}$-directed system $\langle M^{1}_{t}:t\in
u_{\alpha}\cap J_{1}\rangle$, equivalently of the
$\leq_{{{\mathfrak{k}}}_{\lambda}}$-directed system $\langle M^{2}_{t}:t\in
u_{\alpha}\cap I_{2}\rangle$, when $\ell=1$
3. $(c)$
the $\leq_{{\mathfrak{k}}}$-union of the
$\leq_{{{\mathfrak{k}}}_{\lambda}}$-directed system $\langle M^{2}_{t}:t\in
u_{\alpha}\cap J_{2}\rangle$ when $\ell=2$.
Now
1. $(*)_{1}$
if $\ell=0,1,2$ and $\alpha\leq_{I}\beta$ then
$M_{\ell,\alpha}\leq_{{{\mathfrak{k}}}_{\lambda}}M_{\ell,\beta}$
2. $(*)_{2}$
if $\alpha\in I$ then
$M_{0,\alpha}\leq_{{{\mathfrak{k}}}_{\lambda}}M_{1,\alpha}\leq_{{{\mathfrak{k}}}_{\lambda}}M_{2,\alpha}$
3. $(*)_{3}$
$\langle M_{\ell,\alpha}:\alpha\in I\rangle$ is a witness for $M_{\ell}\in
K^{\prime}$
4. $(*)_{4}$
$M_{0,\alpha}\leq_{{{\mathfrak{k}}}_{\lambda}}M_{2,\alpha}$ for $\alpha\in I$.
Together by $\boxdot$ we get that $M_{0}\leq_{{\mathfrak{k}}^{\prime}}M_{2}$
as required.
Ax III(a): In general.
Let $(I_{i,j},J_{i,j},\bar{M}^{i,j})$ witness
$M_{i}\leq_{{{\mathfrak{k}}}^{\prime}}M_{j}$ when $i\leq j<\delta$ and without
loss of generality $\langle J_{i,j}:i<j<\delta\rangle$ are pairwise disjoint.
Let ${{\mathscr{U}}}$ be the family of sets $u$ such that for some
$v\in[\delta]^{\leq\lambda}$:
1. $(a)$
$v\subseteq\delta$ has cardinality $\leq\lambda$ and its order type has
cofinality $\geq\kappa$
2. $(b)$
$u\subseteq\cup\\{J_{i,j}:i<j$ are from $v\\}$ has cardinality $\leq\lambda$
3. $(c)$
for $i<j$ from $v$ the set $u\cap J_{i,j}$ is $\kappa$-directed under
$\leq_{J_{i,j}}$ and $u\cap I_{i,j}$ is $\kappa$-directed under
$\leq_{I_{i,j}}$
4. $(d)$
if $i(0)\leq i(1)\leq i(2)$ are from $v$ then $\cup\\{M^{i(0),i(1)}_{s}:s\in
u\cap J_{i(0),i(1)}\\}=\cup\\{M^{i(1),i(2)}_{s}:s\in u\cap I_{i(1),i(2)}\\}$
5. $(e)$
if $i(0)\leq j(0)$ and $i(1)\leq j(1)$ are from $v$ then
$\cup\\{M^{i(0),j(0)}_{s}:s\in u\cap
J_{i(0),j(0)}\\}=\cup\\{M^{i(1),j(1)}_{s}:s\in u\cap J_{i(1),j(1)}\\}$.
The rest of the proof is as before.
Ax(IV)(a):
Similar, but ${\mathscr{U}}=\\{u\subseteq I:u$ has cardinality $\leq\lambda$
and is $\kappa$-directed$\\}$.
Ax(III)(d):
Assuming ${\mathfrak{k}}$ satisfies Ax(III)(d); similar.
Ax(IV)(d):
Assuming ${\mathfrak{k}}$ satisfies Ax(IV)(d); similar.
Axiom V: Assume $N_{0}\leq_{{{\mathfrak{k}}}^{\prime}}M$ and
$N_{1}\leq_{{{\mathfrak{k}}}^{\prime}}M$.
If $N_{0}\subseteq N_{1}$, then $N_{0}\leq_{{{\mathfrak{k}}}^{\prime}}N_{1}$.
[Why? Let $(I_{\ell},J_{\ell},\langle M^{\ell}_{s}:s\in J_{\ell}\rangle)$
witness $N_{\ell}\leq_{{{\mathfrak{k}}}^{\prime}}M$ for $\ell=0,1$; without
loss of generality $J_{0},J_{1}$ are disjoint.
Let
$\begin{array}[]{clcr}{\mathscr{U}}:=\\{u\subseteq J_{0}\cup
J_{1}:&|u|\leq\lambda\text{ and }u\cap J_{\ell}\text{ is
}\kappa\text{-directed}\\\ &\text{ and }u\cap I_{\ell}\text{ is
}\kappa\text{-directed for }\ell=0,1\text{ and}\\\ &\cup\\{|M^{0}_{s}|:s\in
u\cap J_{0}\\}=\cup\\{|M^{1}_{s}|:s\in u\cap J_{1}\\}\\}.\end{array}$
For $u\in{\mathscr{U}}$ let
1. $\bullet$
$M_{u}=M{\restriction}\cup\\{|M^{\ell}_{s}|:s\in u\cap J_{\ell}\\}$ (the same
for $\ell=0,1$)
2. $\bullet$
$N_{\ell,u}=N_{\ell}{\restriction}\cup\\{|M^{\ell}_{s}|:s\in u\cap I_{\ell}\\}$.
Now
1. $(*)$
$(a)\quad({\mathscr{U}},\subseteq)$ is $\kappa$-directed
2. $(b)\quad N_{\ell,u}\leq_{{\mathfrak{k}}}M$
3. $(c)\quad N_{\ell,u}\leq_{{\mathfrak{k}}}N_{\ell,v}$ and $M_{u}\leq_{{\mathfrak{k}}}M_{v}$ when $u\subseteq v$ are from ${\mathscr{U}}$ and $\ell=0,1$
4. $(d)\quad N_{0,u}\leq_{{\mathfrak{k}}}N_{1,u}$
5. $(e)\quad N_{\ell}=\cup\\{N_{\ell,u}:u\in{\mathscr{U}}\\}$, as before.
By $\boxdot$ above we are done.
Axiom VI: LST$({{\mathfrak{k}}}^{\prime})=\lambda$.
[Why? Let $M\in K^{\prime},A\subseteq M,|A|+\lambda\leq\chi<\|M\|$ and let
$\langle M_{s}:s\in I\rangle$ witness $M\in K^{\prime}$; without loss of
generality $|A|=\chi^{<\kappa}$. Now choose a $\kappa$-directed $J\subseteq I$
of cardinality $\leq|A|=\chi^{<\kappa}$ such that $A\subseteq
M^{\prime}:=\bigcup\limits_{s\in J}M_{s}$; then $(J,I,\langle M_{s}:s\in
I\rangle)$ witnesses $M^{\prime}\leq_{{{\mathfrak{k}}}^{\prime}}M$, so as
$A\subseteq M^{\prime}$ and $\|M^{\prime}\|\leq|A|+\lambda$ we are done.] ∎
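Before continuing we record the prototypical instance (our illustration; it is
not part of Theorem 5.21): let $T$ be a first order theory, $\kappa=\aleph_{0}$,
$\lambda\geq|T|+\aleph_{0}$ and ${{\mathfrak{k}}}_{\lambda}=(\\{M\models
T:\|M\|=\lambda\\},\prec)$. For every $M\models T$ with $\|M\|\geq\lambda$ the
partial order
$\displaystyle I_{M}=\\{N:N\prec M,\|N\|=\lambda\\},\qquad N_{1}\leq_{I_{M}}N_{2}\Leftrightarrow N_{1}\prec N_{2},$
is $\aleph_{0}$-directed by the Löwenheim–Skolem theorem, and $\langle N:N\in
I_{M}\rangle$ witnesses $M\in K^{\prime}$; so in this case
$K^{\prime}=\\{M\models T:\|M\|\geq\lambda\\}$ and
$\leq_{{{\mathfrak{k}}}^{\prime}}$ is elementary submodel.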
Also if two such d.a.e.c.’s have some cardinal in common then we can put them
together.
###### Claim 5.22.
Let $\iota\in\\{1,2,3\\}$ and assume $\lambda_{1}<\lambda_{2}<\lambda_{3}$ and
1. $(a)$
${{\mathfrak{k}}}^{1}$ is a $(\lambda^{+}_{2},\lambda_{1},\kappa)-2$-d.a.e.c.,
$K^{1}=K^{1}_{\geq\lambda_{1}}$
2. $(b)$
${{\mathfrak{k}}}^{2}$ is a $(\lambda_{3},\lambda_{2},\kappa)-\iota$-d.a.e.c.
3. $(c)$
$K^{{\mathfrak{k}}^{1}}_{\lambda_{2}}=K^{{\mathfrak{k}}^{2}}_{\lambda_{2}}$
and
$\leq_{{{\mathfrak{k}}}^{2}}{\restriction}K^{{\mathfrak{k}}^{2}}_{\lambda_{2}}=\leq_{{{\mathfrak{k}}}^{1}}\restriction
K^{{\mathfrak{k}}^{1}}_{\lambda_{2}}$
4. $(d)$
we define ${{\mathfrak{k}}}$ as follows:
$K_{{\mathfrak{k}}}=K_{{\mathfrak{k}}^{1}}\cup
K_{{\mathfrak{k}}^{2}}$, and $M\leq_{{\mathfrak{k}}}N$ iff
$M\leq_{{{\mathfrak{k}}}^{1}}N$ or $M\leq_{{{\mathfrak{k}}}^{2}}N$ or for some
$M^{\prime}$,
$M\leq_{{{\mathfrak{k}}}^{1}}M^{\prime}\leq_{{{\mathfrak{k}}}^{2}}N$.
Then ${{\mathfrak{k}}}$ is an
$(\lambda_{3},\lambda_{1},\kappa)-\iota$-d.a.e.c.
###### Proof..
Straightforward. E.g.
Ax(III)(d): So $\langle M_{s}:s\in I\rangle$ is a
$\leq_{{\mathfrak{k}}}$-$\kappa$-directed system.
If $\|M_{s}\|\geq\lambda_{2}$ for some $s$, use $\langle M_{t}:s\leq
t\in I\rangle$ and clause (b) of the assumption. If $\cup\\{M_{s}:s\in I\\}$
has cardinality $\leq\lambda_{2}$ use clause (a) of the assumption. If neither
one of them holds, recall $\lambda_{2}=\lambda^{<\kappa}_{2}$ by clause (b) of
the assumption, and let
${\mathscr{U}}=\\{u\subseteq I:|u|\leq\lambda_{2},u\text{ is }\kappa\text{-directed
(in }I\text{), and }\cup\\{M_{s}:s\in u\\}\text{ has cardinality }\lambda_{2}\\}.$
Easily $({\mathscr{U}},\subseteq)$ is $\lambda_{2}$-directed; for $u\in{\mathscr{U}}$ let
$M_{u}$ be the $\leq_{{\mathfrak{k}}}$-union of $\langle M_{s}:s\in u\rangle$.
Now by clause (a) of the assumption
1. $(*)_{1}$
$M_{u}\in K^{{\mathfrak{k}}^{1}}_{\lambda_{2}}=K^{{\mathfrak{k}}^{2}}_{\lambda_{2}}$
2. $(*)_{2}$
if $u\subseteq v$ are from ${\mathscr{U}}$ then
$M_{u}\leq_{{\mathfrak{k}}^{1}}M_{v}$ and $M_{u}\leq_{{\mathfrak{k}}^{2}}M_{v}$.
Now use clause (b) of the assumption.
Axiom V: We shall use freely
1. $(*)$
${{\mathfrak{k}}}{\restriction}[\lambda_{2},\lambda_{3})={{\mathfrak{k}}}^{2}$ and
${{\mathfrak{k}}}{\restriction}[\lambda_{1},\lambda^{+}_{2})={{\mathfrak{k}}}^{1}$.
So assume
$N_{0}\leq_{{\mathfrak{k}}}M,N_{1}\leq_{{\mathfrak{k}}}M,N_{0}\subseteq
N_{1}$.
Now if $\|N_{0}\|\geq\lambda_{2}$ use assumption (b), so we can assume
$\|N_{0}\|<\lambda_{2}$. If $\|M\|\leq\lambda_{2}$ we can use assumption (a),
so we can assume $\|M\|>\lambda_{2}$; then by the definition of
$\leq_{{\mathfrak{k}}}$ there is $M^{\prime}_{0}\in
K^{{\mathfrak{k}}^{1}}_{\lambda_{2}}=K^{{\mathfrak{k}}^{2}}_{\lambda_{2}}$
such that
$N_{0}\leq_{{\mathfrak{k}}^{1}}M^{\prime}_{0}\leq_{{\mathfrak{k}}^{2}}M$.
First assume $\|N_{1}\|\leq\lambda_{2}$, so we can find $M^{\prime}_{1}\in
K^{{\mathfrak{k}}^{1}}_{\lambda_{2}}$ such that
$N_{1}\leq_{{{\mathfrak{k}}}^{1}}M^{\prime}_{1}\leq_{{{\mathfrak{k}}}^{2}}M$
(why? if $N_{1}\in K^{{\mathfrak{k}}^{1}}_{<\lambda_{2}}$, by the definition
of $\leq_{{\mathfrak{k}}}$, and if $N_{1}\in
K^{{\mathfrak{k}}^{1}}_{\lambda_{2}}$ just choose $M^{\prime}_{1}=N_{1}$). Now
we can by assumption (b) find $M^{\prime\prime}\in
K^{{\mathfrak{k}}^{1}}_{\lambda_{2}}$ such that $M^{\prime}_{0}\cup
M^{\prime}_{1}\subseteq M^{\prime\prime}\leq_{{\mathfrak{k}}}M$, hence by
assumption (b) (i.e. AxV for ${{\mathfrak{k}}}^{2}$) we have
$M^{\prime}_{0}\leq_{{\mathfrak{k}}}M^{\prime\prime},M^{\prime}_{1}\leq_{{\mathfrak{k}}}M^{\prime\prime}$.
As
$N_{0}\leq_{{\mathfrak{k}}}M^{\prime}_{0}\leq_{{\mathfrak{k}}}M^{\prime\prime}\in
K^{{\mathfrak{k}}}_{\leq\lambda_{2}}$, by assumption (a) we have
$N_{0}\leq_{{\mathfrak{k}}}M^{\prime\prime}$, and similarly we have
$N_{1}\leq_{{\mathfrak{k}}}M^{\prime\prime}$. So $N_{0}\subseteq
N_{1},N_{0}\leq_{{\mathfrak{k}}}M^{\prime\prime},N_{1}\leq_{{\mathfrak{k}}}M^{\prime\prime}$,
so by assumption (a) (i.e. AxV for ${{\mathfrak{k}}}^{1}$) we have
$N_{0}\leq_{{\mathfrak{k}}}N_{1}$.
We are left with the case $\|N_{1}\|>\lambda_{2}$; by assumption (b) there is
$N^{\prime}_{1}\in K_{\lambda_{2}}$ such that $N_{0}\subseteq
N^{\prime}_{1}\leq_{{\mathfrak{k}}^{2}}N_{1}$. By assumption (b) we have
$N^{\prime}_{1}\leq_{{\mathfrak{k}}}M$, so by the previous paragraph we get
$N_{0}\leq_{{\mathfrak{k}}}N^{\prime}_{1}$; together with the previous
sentence we have
$N_{0}\leq_{{{\mathfrak{k}}}^{1}}N^{\prime}_{1}\leq_{{{\mathfrak{k}}}^{2}}N_{1}$,
so by the definition of $\leq_{{\mathfrak{k}}}$ we are done. ∎
###### Definition 5.23.
If $M\in K_{\chi}$ is $(\chi,\geq\kappa)$-superlimit1 let
$K^{[M]}_{\chi}=\\{N\in K_{\chi}:N\cong
M\\},{{\mathfrak{k}}}^{[M]}_{\chi}=(K^{[M]}_{\chi},\leq_{{\mathfrak{k}}}\restriction
K^{[M]}_{\chi})$ and ${{\mathfrak{k}}}^{[M]}$ is the
${{\mathfrak{k}}}^{\prime}$ we get in 5.21(1) for
${{\mathfrak{k}}}_{\lambda}={{\mathfrak{k}}}^{[M]}_{\chi}$.
###### Claim 5.24.
1) If ${{\mathfrak{k}}}$ is a $(\mu,\lambda,\kappa)$-a.e.c.,
$\lambda\leq\chi<\mu,M\in K_{\chi}$ is $(\chi,\geq\kappa)$-superlimit1 then
${{\mathfrak{k}}}^{[M]}_{\chi}$ is a $(\chi^{+},\chi,\kappa)$-d.a.e.c.
2) If in addition ${{\mathfrak{k}}}$ is a $(\mu,\lambda,\kappa)$-d.a.e.c.±
then ${{\mathfrak{k}}}^{[M]}_{\chi}$ is a $(\chi^{+},\chi,\kappa)$-d.a.e.c.±.
## 6\. Good frames
###### Definition 6.1.
For $\iota=1,2,3,4$, we say that ${{\mathfrak{s}}}$ is a good
$(\mu,\lambda,\kappa)-\iota$-frame when ${{\mathfrak{s}}}$ consists of the
following objects satisfying the following conditions: $\mu,\lambda,\kappa$ (so
we may write
$\mu_{{\mathfrak{s}}},\lambda_{{\mathfrak{s}}},\kappa_{{\mathfrak{s}}}$, but we
usually ignore them when defining ${{\mathfrak{s}}}$) and
1. $(A)$
${{\mathfrak{k}}}={{\mathfrak{k}}}_{{\mathfrak{s}}}$ is a
$(\mu,\lambda,\kappa)-6$-d.a.e.c., so we may write ${{\mathfrak{s}}}$ instead
of ${{\mathfrak{k}}}$, e.g. $\leq_{{\mathfrak{s}}}$-increasing, etc., and
$\chi\in[\lambda,\mu)\Rightarrow\chi^{<\kappa}<\mu$
2. $(B)$
${{\mathfrak{k}}}$ has a $(\lambda,\geq\kappa)$-superlimit model $M^{*}$ which
in fact (this follows by (C)) is not $<_{{\mathfrak{k}}}$-maximal, i.e.,
1. $(a)$
$M^{*}\in K^{{\mathfrak{s}}}_{\lambda}$
2. $(b)$
if $M_{1}\in K^{{\mathfrak{s}}}_{\lambda}$ then for some
$M_{2},M_{1}<_{{\mathfrak{s}}}M_{2}\in K^{{\mathfrak{s}}}_{\lambda}$ and
$M_{2}$ is isomorphic to $M^{*}$
3. $(c)$
if $\langle M_{i}:i<\delta\rangle$ is $\leq_{{\mathfrak{s}}}$-increasing,
$i<\delta\Rightarrow M_{i}\cong M^{*}$ and ${\rm
cf}(\delta)\geq\kappa,\delta<\lambda^{+}$, then $\cup\\{M_{i}:i<\delta\\}$ is
isomorphic to $M^{*}$
3. $(C)$
${{\mathfrak{k}}}$ has the amalgamation property, the JEP (joint embedding
property), and has no $\leq_{{\mathfrak{k}}}$-maximal member; if $\iota\geq
2$, ${{\mathfrak{k}}}$ has primes${}^{-}$ and if $\iota\geq 4$,
${{\mathfrak{k}}}$ has primes${}^{+}$
4. $(D)$
$(a)\quad{{\mathscr{S}}}^{{\rm bs}}={{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}$ (the class of basic types for
${{\mathfrak{k}}}_{{\mathfrak{s}}}$) is included in
$\bigcup\\{{{\mathscr{S}}}(M):M\in K_{{\mathfrak{s}}}\\}$ and is closed under
isomorphisms including
automorphisms; for $M\in K_{\lambda}$ let ${{\mathscr{S}}}^{{\rm
bs}}(M)={{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}\cap{{\mathscr{S}}}(M)$;
no harm in allowing types of finite sequences.
5. $(b)\quad$ if $p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M)$, then $p$ is non-algebraic (i.e., not realized by any
$a\in M$).
6. $(c)\quad$ (density)
if $M\leq_{{\mathfrak{k}}}N$ are from $K_{{\mathfrak{s}}}$ and $M\neq N$, then
for some $a\in N\backslash M$
we have ${\rm ortp}(a,M,N)\in{{\mathscr{S}}}^{{\rm bs}}$
[intention: examples are: minimal types in [She01], regular types
for superstable theories]
7. $(d)\quad$ bs-stability
${{\mathscr{S}}}^{{\rm bs}}(M)$ has cardinality $\leq\|M\|^{<\kappa}$ for
$M\in K_{{\mathfrak{s}}}$.
8. $(E)$
$(a)\quad\mathop{\bigcup}\limits=\mathop{\bigcup}\limits_{\textstyle{\mathfrak{s}}}$
is a four-place relation called nonforking, with
$\mathop{\bigcup}\limits(M_{0},M_{1},a,M_{3})$ implying that
$M_{0}\leq_{{\mathfrak{k}}}M_{1}\leq_{{\mathfrak{k}}}M_{3}$ are from
$K_{{\mathfrak{s}}}$, $a\in M_{3}\backslash M_{1}$, ${\rm
ortp}(a,M_{0},M_{3})\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M_{0})$ and
${\rm ortp}(a,M_{1},M_{3})\in{{\mathscr{S}}}^{{\rm bs}}(M_{1})$. Also
$\mathop{\bigcup}\limits$ is preserved under isomorphisms.
We also write $M_{1}\mathop{\bigcup}\limits_{\textstyle M_{0}}^{\textstyle
M_{3}}a$, and demand: if $M_{0}=M_{1}\leq_{{\mathfrak{k}}}M_{3}$, both in
$K_{\lambda}$, then $\mathop{\bigcup}\limits(M_{0},M_{1},a,M_{3})$ is
equivalent to “${\rm ortp}(a,M_{0},M_{3})\in{{\mathscr{S}}}^{{\rm
bs}}(M_{0})$”. Also we may state $M_{1}\mathop{\bigcup}\limits_{\textstyle
M_{0}}^{\textstyle M_{3}}a$ as “${\rm ortp}(a,M_{1},M_{3})$ does not fork over
$M_{0}$ (inside $M_{3}$)” (this is justified by clause (b) below).
[Explanation: The intention is to axiomatize non-forking of types, but we
allow dealing with basic types only. Note that in [She01] we know something on
minimal types but other types are something else.]
9. $(b)\quad$ (monotonicity):
if
$M_{0}\leq_{{\mathfrak{k}}}M^{\prime}_{0}\leq_{{\mathfrak{k}}}M^{\prime}_{1}\leq_{{\mathfrak{k}}}M_{1}\leq_{{\mathfrak{k}}}M_{3}\leq_{{\mathfrak{k}}}M^{\prime}_{3}$
and $M_{1}\cup\\{a\\}\subseteq
M^{\prime\prime}_{3}\leq_{{\mathfrak{k}}}M^{\prime}_{3}$, all of them in
$K_{\lambda}$, then
$\mathop{\bigcup}\limits(M_{0},M_{1},a,M_{3})\Rightarrow\mathop{\bigcup}\limits(M^{\prime}_{0},M^{\prime}_{1},a,M^{\prime}_{3})\Leftrightarrow\mathop{\bigcup}\limits(M^{\prime}_{0},M^{\prime}_{1},a,M^{\prime\prime}_{3})$,
so it is legitimate to just say “${\rm ortp}(a,M_{1},M_{3})$ does not fork
over $M_{0}$”.
[Explanation: non-forking is preserved by decreasing the type, increasing the
basis (= the set over which it does not fork), and increasing or decreasing the
model inside which all this occurs. The same holds for stable theories; only
here we restrict ourselves to “legitimate” types.]
10. $(c)\quad$ (local character):
Case 1: $\iota=1,2,3$.
If $\langle M_{i}:i\leq\delta\rangle$ is $\leq_{{\mathfrak{s}}}$-semi-continuous
and $p\in{{\mathscr{S}}}^{{\rm bs}}(M_{\delta})$ and ${\rm
cf}(\delta)\geq\kappa$ then for every $\alpha<\delta$ large enough, $p$ does
not fork over $M_{\alpha}$.
Case 2: $\iota=4$.
If $I$ is a $\kappa$-directed partial order and $\bar{M}=\langle M_{t}:t\in
I\rangle$ is a $\leq_{{\mathfrak{s}}}$-directed system and $M$ is its
$\leq_{{\mathfrak{k}}}$-union and $M\leq_{{\mathfrak{s}}}N$ and ${\rm
ortp}(a,M,N)\in{{\mathscr{S}}}^{{\rm bs}}(M)$ then for every $s\in I$
large enough ${\rm ortp}(a,M,N)$ does not fork over $M_{s}$.
[Explanation: This is a replacement for $\kappa\geq\kappa_{r}(T)$: if
$p\in{{\mathscr{S}}}(A)$ then there is $B\subseteq A$ of cardinality
$<\kappa$ such that $p$ does not fork over $B$. The case $\iota=2$ is a very
strong demand even for stable first order theories: it means dimensional
continuity, i.e. $M_{\delta}$ is minimal over
$\cup\\{M_{\alpha}:\alpha<\delta\\}$, as for $\kappa$-saturated models.]
11. $(d)\quad$ (transitivity):
if
$M_{0}\leq_{{\mathfrak{k}}}M^{\prime}_{0}\leq_{{\mathfrak{k}}}M^{\prime\prime}_{0}\leq_{{\mathfrak{k}}}M_{3}$
and $a\in M_{3}$ and ${\rm ortp}(a,M^{\prime\prime}_{0},M_{3})$ does not fork
over $M^{\prime}_{0}$ and ${\rm ortp}(a,M^{\prime}_{0},M_{3})$ does not fork
over $M_{0}$ (all models are in $K_{\lambda}$, of course, and necessarily the
three relevant types are in ${{\mathscr{S}}}^{{\rm bs}}$), then ${\rm
ortp}(a,M^{\prime\prime}_{0},M_{3})$ does not fork over $M_{0}$
12. $(e)\quad$ uniqueness:
if $p,q\in{{\mathscr{S}}}^{{\rm bs}}(M_{1})$ do not fork over
$M_{0}\leq_{{\mathfrak{k}}}M_{1}$ (all in $K_{\mathfrak{s}}$) and
$p\restriction M_{0}=q\restriction M_{0}$ then $p=q$
13. $(f)\quad$ symmetry:
Case 1: $\iota\geq 3$.
If $M_{0}\leq_{{\mathfrak{s}}}M_{\ell}\leq_{{\mathfrak{s}}}M_{3}$ and
$(M_{0},M_{\ell},a_{\ell})\in K^{3,{\rm pr}}_{{\mathfrak{s}}}$ for $\ell=1,2$
(see clause (j) below), then ${\rm ortp}_{{\mathfrak{s}}}(a_{2},M_{1},M_{3})$
does not fork over $M_{0}$ iff ${\rm
ortp}_{{\mathfrak{s}}}(a_{1},M_{2},M_{3})$ does not fork over $M_{0}$.
Case 2: $\iota=1,2$.
If $M_{0}\leq_{{\mathfrak{k}}}M_{3}$ are in ${{\mathfrak{k}}}_{\lambda}$ and
for $\ell=1,2$ we have $a_{\ell}\in M_{3}$ and ${\rm
ortp}(a_{\ell},M_{0},M_{3})\in{{\mathscr{S}}}^{{\rm bs}}(M_{0})$, then the
following are equivalent:
1. $(\alpha)\quad$ there are $M_{1},M^{\prime}_{3}$ in $K_{{\mathfrak{s}}}$ such that $M_{0}\leq_{{\mathfrak{k}}}M_{1}\leq_{{\mathfrak{k}}}M^{\prime}_{3}$,
$a_{1}\in M_{1},M_{3}\leq_{{\mathfrak{k}}}M^{\prime}_{3}$ and ${\rm
ortp}(a_{2},M_{1},M^{\prime}_{3})$ does not fork over $M_{0}$
2. $(\beta)\quad$ there are $M_{2},M^{\prime}_{3}$ in $K_{\lambda}$ such that $M_{0}\leq_{{\mathfrak{k}}}M_{2}\leq_{{\mathfrak{k}}}M^{\prime}_{3}$,
$a_{2}\in M_{2},M_{3}\leq_{{\mathfrak{k}}}M^{\prime}_{3}$ and ${\rm
ortp}(a_{1},M_{2},M^{\prime}_{3})$ does not fork over $M_{0}$.
[Explanation: this is a replacement for “${\rm
ortp}(a_{1},M_{0}\cup\\{a_{2}\\},M_{3})$ forks over $M_{0}$ iff ${\rm
ortp}(a_{2},M_{0}\cup\\{a_{1}\\},M_{3})$ forks over $M_{0}$”, which is not
well defined in our context]
14. $(g)\quad$ [existence] if $M\leq_{{\mathfrak{s}}}N,p\in{{\mathscr{S}}}^{{\rm bs}}(M)$ then there is $q\in{{\mathscr{S}}}^{{\rm bs}}(N)$
a non-forking extension of $p$
15. $(h)\quad$ [continuity] Case 1: $\iota\geq 1$.
If $\langle M_{\alpha}:\alpha\leq\delta\rangle$ is
$\leq_{{\mathfrak{s}}}$-increasing and $\leq_{{\mathfrak{s}}}$-semi-continuous,
$M_{\delta}=\bigcup\limits_{\alpha<\delta}M_{\alpha}$ (which holds if ${\rm
cf}(\delta)\geq\kappa$), $p\in{{\mathscr{S}}}(M_{\delta})$ and
$p\restriction M_{\alpha}$ does not fork over $M_{0}$ for $\alpha<\delta$, then
$p\in{{\mathscr{S}}}^{{\rm bs}}(M_{\delta})$ and it does not fork over $M_{0}$
16. Case 2: $\iota=4$.
Similarly for $\bar{M}=\langle M_{t}:t\in I\rangle$ with $I$ directed, where
$M=\cup\\{M_{t}:t\in I\\}$ is a $\leq_{{\mathfrak{s}}}$-upper bound of
$\bar{M}$
17. $(j)\quad{{\mathfrak{s}}}$ has $K^{3,{\rm pr}}_{{\mathfrak{s}}}$-primes, see 6.8
18. $(k)\quad$ Case 1: $\iota\geq 1$.
If $p\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(N)$ then $p$ does not fork
over $M$ for some $M\leq_{{\mathfrak{s}}}N$ from $K_{\lambda}$
19. Case 2: $\iota=3,4$.
If $M_{\ell}(\ell\leq 3),a_{\ell},p_{\ell}(\ell=1,2)$ are as in (E)(f).
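Before the discussion below, let us make explicit the motivating instance
behind the bracketed “intentions” above (our illustration only; the definition
does not depend on it): for a superstable first order theory $T$ one expects to
take $K_{{\mathfrak{s}}}$ to be a suitable class of models of $T$ with
$\leq_{{\mathfrak{s}}}=\prec$, to let ${{\mathscr{S}}}^{{\rm bs}}$ consist of
the regular types, and to let
$\displaystyle\mathop{\bigcup}\limits(M_{0},M_{1},a,M_{3})\quad\text{iff}\quad{\rm tp}(a,M_{1},M_{3})\text{ is regular and does not fork over }M_{0}$
in the usual first order sense; clauses (E)(b)–(e) then reduce to the standard
nonforking calculus (monotonicity, local character, transitivity, and
stationarity of nonforking extensions over models).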
###### Discussion 6.2.
Consider using: semi-continuous + ${\rm cf}(\delta)\geq\kappa$ for
$(E)(c),(E)(x):{\rm cf}(\delta)\geq\kappa$ stable only if
$\chi=\chi^{<\kappa}$.
###### Claim 6.3.
1) If $\langle M_{i}:i<\delta\rangle$ is $\leq_{{\mathfrak{k}}}$-increasing,
$\sum\\{\|M_{i}\|:i<\delta\\}<\mu$ and $p_{i}\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M_{i})$ does not fork over $M_{0}$ for $i<\delta$ and
$[i<j\Rightarrow p_{j}\restriction M_{i}=p_{i}]$ then :
1. $(a)$
we can find $M_{\delta}$ such that $i<\delta\Rightarrow
M_{i}\leq_{\mathfrak{k}}M_{\delta}$
2. $(b)$
for any such $M_{\delta}$, we can find
$p\in{{\mathscr{S}}}_{{\mathfrak{s}}}(M_{\delta})$ such that
$\bigwedge\limits_{i<\delta}p\restriction M_{i}=p_{i}$ and $p$ does not fork
over $M_{0}$
3. $(c)$
the type $p$ in clause (b) is unique
4. $(d)$
if ${\rm cf}(\delta)\geq\kappa$ we can add
$M_{\delta}=\cup\\{M_{\alpha}:\alpha<\delta\\}$.
2) Similarly for $\bar{M}=\langle M_{t}:t\in I\rangle,I$ directed.
###### Proof..
1) First choose $M_{\delta}$ by 6.1, Clause (A). Second, choose
$p_{\delta}\in{{\mathscr{S}}}^{{\rm bs}}_{{\mathfrak{s}}}(M_{\delta})$, a
non-forking extension of $p_{0}$, which exists by Ax(g) of (E) of 6.1. Now
$p_{\delta}\restriction M_{i}\in{{\mathscr{S}}}^{{\rm
bs}}_{{\mathfrak{s}}}(M_{i})$ does not fork over $M_{0}$ by (b) of (E) of 6.1.
Singular Geometry and Higgs Bundles in String Theory
Lara B. ANDERSON ${}^{{\dagger}^{1}}$, Mboyo ESOLE ${}^{{\dagger}^{2}}$, Laura
FREDRICKSON ${}^{{\dagger}^{3}}$
and Laura P. SCHAPOSNIK ${}^{{\dagger}^{4}{\dagger}^{5}}$
${}^{{\dagger}^{1}}$ Department of Physics and Department of Mathematics,
Virginia Tech,
${}^{{\dagger}^{1}}$ Blacksburg, VA 24061, USA<EMAIL_ADDRESS>
${}^{{\dagger}^{2}}$ Department of Mathematics, Northeastern University,
Boston, MA 02115, USA<EMAIL_ADDRESS>
${}^{{\dagger}^{3}}$ Department of Mathematics, Stanford University, Stanford,
CA 94305, USA<EMAIL_ADDRESS>
${}^{{\dagger}^{4}}$ Department of Mathematics, University of Illinois at
Chicago, Chicago, IL 60607, USA<EMAIL_ADDRESS>
${}^{{\dagger}^{5}}$ Department of Mathematics, FU Berlin, 14195 Berlin, Germany
Received November 22, 2017, in final form April 13, 2018; Published online
April 18, 2018
This brief survey aims to set the stage and summarize some of the ideas under
discussion at the Workshop on Singular Geometry and Higgs Bundles in String
Theory, to be held at the American Institute of Mathematics from October 30th
to November 3rd, 2017. One of the most interesting aspects of the duality
revolution in string theory is the understanding that gauge fields and matter
representations can be described by intersection of branes. Since gauge theory
is at the heart of our description of physical interactions, it has opened the
door to the geometric engineering of many physical systems, and in particular
those involving Higgs bundles. This note presents a curated overview of some
current advances and open problems in the area, with no intention of being a
complete review of the whole subject.
Higgs bundles; Hitchin fibration; mirror symmetry; F-theory; Calabi–Yau;
singular curves; singularities
14D20; 14D21; 53C07; 14H70; 14P25
## 1 Introduction
One of the most interesting aspects of the duality revolution in string theory
is the understanding that gauge fields and matter representations can be
described by the intersection of branes. Since gauge theory is at the heart of
our description of physical interactions, it has opened the door to the
geometric engineering of many physical systems, and in particular those
arising from Higgs bundles, whose moduli spaces have become a source of many
interesting branes.
In an effort to consolidate and disseminate the variety of different
techniques, heuristics, and approaches that have been applied to the study of
Higgs bundles and spectral data in recent years by the mathematics and physics
communities, we present here a short survey on these subjects, as well as a
collection of open problems and ideas revolving around them. This note focuses
on two interrelated themes concerning Higgs bundles and the Hitchin fibration,
and their interactions with mathematical physics:
* (A)
Higgs bundles and algebraic geometry: invariants of singular spaces, and in
particular singular fibers of the Hitchin fibration (Section 2); the effect of
these fibers on the geometry of the moduli spaces of Higgs bundles, including
limits within the Hitchin fibration (Section 3), and the appearance of Higgs
bundles on singular curves (Section 4).
* (B)
Hitchin systems and T-branes: the study of the moduli space of Higgs bundles
and its branes through the Hitchin fibration (Section 5), and their appearance
within the broader setting of string/F-theory (Section 6), and Calabi–Yau
elliptic fibrations (Section 7).
Although these two themes are closely related, correspondences between them
are just in their infancy. In particular, obtaining a global understanding of
Higgs bundles over singular curves, and of Higgs bundles which have singular
spectral data, would be most beneficial from the perspective of F-theory and
superconformal theories in diverse dimensions. We hope these notes will help
to further clarify the role that spectral curves and spectral data play in
string theory, both for those studying Higgs bundles on Riemann surfaces and
for those studying Higgs bundles on higher dimensional spaces.
This short survey is not intended to be a complete overview of the research
done in the area, but rather a concise description of certain particular paths
of research that are currently receiving much attention, and that present open
problems that could be tackled by researchers in different areas of
mathematics and physics.
## 2 Higgs bundles and the Hitchin fibration
Throughout the paper we shall consider a compact Riemann surface $\Sigma$ of
genus $g\geq 2$ with canonical bundle $K=T^{*}\Sigma$. In what follows, we
recall some of the main properties of complex and real Higgs bundles, as well
as the associated Hitchin fibration whose structure groups are real or complex
subgroups of ${\rm GL}(n,\mathbb{C})$.
### 2.1 Higgs bundles
We begin by briefly reviewing the notions of Higgs bundles for real and
complex groups which are relevant to this paper. Further details can be found
in standard references such as Hitchin [112, 113] and Simpson [169, 170, 171].
Recall that ${\rm GL}(n,\mathbb{C})$-Higgs bundles of degree 0 on $\Sigma$ are
pairs $(E,\Phi)$ where
* •
$E\rightarrow\Sigma$ is a holomorphic vector bundle of rank $n$ and degree
$0$,
* •
the Higgs field $\Phi\colon E\rightarrow E\otimes K$, is a holomorphic
$K$-valued endomorphism.
By the work of Hitchin and Simpson, given a polystable Higgs bundle, there is
a unique hermitian metric $h$ on $E$, known as the harmonic metric, solving
the so-called Hitchin equations:
$\displaystyle
F_{D(\overline{\partial}_{E},h)}+[\Phi,\Phi^{*_{h}}]=0,\qquad\overline{\partial}_{E}\Phi=0,$
where $D(\overline{\partial}_{E},h)$ is the Chern connection, i.e., the
unique $h$-unitary connection such that $D^{0,1}=\overline{\partial}_{E}$,
the curvature of the Chern connection is denoted by
$F_{D(\overline{\partial}_{E},h)}$, and $\Phi^{*_{h}}$ represents the
hermitian adjoint of $\Phi$ with respect to the hermitian metric $h$. The
correspondence between pairs $(E,\Phi)$ and triples $(E,\Phi,h)$ is known as
the nonabelian Hodge correspondence. More generally, for a complex reductive
Lie group $G_{\mathbb{C}}$, one has the following [113]:
###### Definition 2.1.
A $G_{\mathbb{C}}$-Higgs bundle is a pair $(P,\Phi)$, where $P$ is a
holomorphic principal $G_{\mathbb{C}}$-bundle, and $\Phi$ is a holomorphic
section of ${\rm ad}(P)\otimes K$, where ${\rm ad}(P)$ is the adjoint bundle
of $P$.
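To unwind Definition 2.1 in the most familiar case (a standard observation,
included here for convenience): for $G_{\mathbb{C}}={\rm GL}(n,\mathbb{C})$ a
principal bundle $P$ corresponds to the rank $n$ vector bundle
$E=P\times_{{\rm GL}(n,\mathbb{C})}\mathbb{C}^{n}$, and
$\displaystyle{\rm ad}(P)\otimes K\cong{\rm End}(E)\otimes K,$
so a holomorphic section $\Phi$ of ${\rm ad}(P)\otimes K$ is precisely a
$K$-valued endomorphism $\Phi\colon E\rightarrow E\otimes K$, recovering the
pairs $(E,\Phi)$ described above.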
In this setting, there is a similar nonabelian Hodge correspondence, where the
notion of a hermitian metric is replaced by a reduction of structure of $P$ to
the maximal compact subgroup of $G_{\mathbb{C}}$. By considering appropriate
stability conditions, one may define the Hitchin moduli space
$\mathcal{M}_{G_{\mathbb{C}}}$ of isomorphism classes of polystable
$G_{\mathbb{C}}$-Higgs bundles, which was introduced by Hitchin in [112]. Up
to gauge equivalence, the points of the moduli space
$\mathcal{M}_{G_{\mathbb{C}}}$ represent polystable $G_{\mathbb{C}}$-Higgs
bundles on $\Sigma$. Moreover, through the nonabelian Hodge correspondence,
points of the moduli space represent solutions of the
$G_{\mathbb{C}}$-Hitchin equations.
Given a real form $G$ of the complex reductive Lie group $G_{\mathbb{C}}$, we
may define $G$-Higgs bundles as follows. Let $H$ be the maximal compact
subgroup of $G$ and consider the Cartan decomposition
$\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{m}$ of $\mathfrak{g}$, where
$\mathfrak{h}$ is the Lie algebra of $H$, and $\mathfrak{m}$ its orthogonal
complement. This induces a decomposition of the Lie algebra
$\mathfrak{g}_{\mathbb{C}}=\mathfrak{h}^{\mathbb{C}}\oplus\mathfrak{m}^{\mathbb{C}}$
of $G_{\mathbb{C}}$. Note that the Lie algebras satisfy
$[\mathfrak{h},\mathfrak{h}]\subset\mathfrak{h}$,
$[\mathfrak{h},\mathfrak{m}]\subset\mathfrak{m}$,
$[\mathfrak{m},\mathfrak{m}]\subset\mathfrak{h},$ and there is an induced
isotropy representation ${\rm Ad}|_{H^{\mathbb{C}}}\colon
H^{\mathbb{C}}\rightarrow{\rm GL}(\mathfrak{m}^{\mathbb{C}})$.
###### Definition 2.2.
A principal $G$-Higgs bundle is a pair $(P,\Phi)$ where
* •
$P$ is a holomorphic principal $H^{\mathbb{C}}$-bundle on $\Sigma$,
* •
$\Phi$ is a holomorphic section of
$P\times_{\mathrm{Ad}}\mathfrak{m}^{\mathbb{C}}\otimes K$.
Similarly to the case of Higgs bundles for complex groups, there are notions
of stability, semistability and polystability for $G$-Higgs bundles. One can
see that the polystability of a $G$-Higgs bundle for a group $G\subset{\rm
GL}(n,\mathbb{C})$ is equivalent to the polystability of the corresponding
${\rm GL}(n,\mathbb{C})$-Higgs bundle. However, it should be noted that a
$G$-Higgs bundle can be stable as a $G$-Higgs bundle but not as a ${\rm
GL}(n,\mathbb{C})$-Higgs bundle. The moduli space of polystable $G$-Higgs
bundles on the compact Riemann surface $\Sigma$ shall be denoted by
$\mathcal{M}_{G}$.
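A concrete example (standard, going back to Hitchin, although the brief
summary here is our own paraphrase): for $G={\rm SL}(2,\mathbb{R})$ one has
$H={\rm SO}(2)$ and $H^{\mathbb{C}}\cong\mathbb{C}^{*}$, so the bundle $P$
amounts to a holomorphic line bundle $L$ on $\Sigma$ and the isotropy
representation splits $\mathfrak{m}^{\mathbb{C}}$ into two weight spaces; an
${\rm SL}(2,\mathbb{R})$-Higgs bundle then takes the shape
$\displaystyle E=L\oplus L^{-1},\qquad\Phi=\begin{pmatrix}0&\beta\\\ \gamma&0\end{pmatrix},\qquad\beta\in H^{0}\big{(}\Sigma,L^{2}K\big{)},\qquad\gamma\in H^{0}\big{(}\Sigma,L^{-2}K\big{)},$
viewed as a ${\rm GL}(2,\mathbb{C})$-Higgs bundle.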
### 2.2 The Hitchin fibration
The moduli space $\mathcal{M}_{G_{\mathbb{C}}}$ of $G_{\mathbb{C}}$-Higgs
bundles admits a natural complete hyperkähler metric over its smooth points,
and a way of studying it is through the Hitchin fibration [113]. This
fibration maps $(E,\Phi)$ to the eigenvalues of $\Phi$ encoded in the
characteristic polynomial $\det(\Phi-\eta\operatorname{Id})$ of $\Phi$, and is
obtained as follows. Let $\\{p_{1},\ldots,p_{k}\\}$ be a homogeneous basis for
the algebra of invariant polynomials on the Lie algebra $\mathfrak{g}_{c}$ of
$G_{\mathbb{C}}$, and let $d_{i}$ denote the degree of $p_{i}$. The Hitchin
fibration is then given by
$\displaystyle\mathrm{Hit}\colon\ \mathcal{M}_{G_{\mathbb{C}}}$
$\displaystyle\longrightarrow\mathcal{A}_{G_{\mathbb{C}}}:=\bigoplus_{i=1}^{k}H^{0}\big{(}\Sigma,K^{d_{i}}\big{)},$
(2.1) $\displaystyle(E,\Phi)$
$\displaystyle\mapsto(p_{1}(\Phi),\ldots,p_{k}(\Phi)),$
where $\mathrm{Hit}$ is referred to as the Hitchin map. It is a proper map for
any choice of basis (in particular, it can be expressed in terms of the
coefficients of $\det(\Phi-\eta\operatorname{Id})$), its generic fibers are
abelian varieties, and it makes the moduli space into a complex integrable
system [113].
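For instance (a standard special case, spelled out here for convenience): for
$G_{\mathbb{C}}={\rm SL}(2,\mathbb{C})$ one may take the single invariant
polynomial $p_{1}=\det$, of degree $d_{1}=2$, so that
$\displaystyle\mathrm{Hit}\colon\ \mathcal{M}_{{\rm SL}(2,\mathbb{C})}\longrightarrow H^{0}\big{(}\Sigma,K^{2}\big{)},\qquad(E,\Phi)\mapsto\det(\Phi),$
and by Riemann–Roch $\dim H^{0}\big{(}\Sigma,K^{2}\big{)}=3g-3$, which is half
of $\dim_{\mathbb{C}}\mathcal{M}_{{\rm SL}(2,\mathbb{C})}=6g-6$, as expected
for a complex integrable system.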
Each connected component of a generic fiber of the Hitchin map is an abelian
variety. In the case of $G_{\mathbb{C}}$-Higgs bundles this can be seen using
spectral data [26, 113]. Through the characteristic polynomial of the Higgs
field of a $G_{\mathbb{C}}$-Higgs bundle $(E,\Phi)$, one may define an
algebraic curve, called the spectral curve of $(E,\Phi)$, which is generically
smooth (when considering classical groups $G_{\mathbb{C}}$, only for ${\rm
SO}(2p,\mathbb{C})$ does one need to consider a normalization of the curve):
$\displaystyle S=\\{{\rm det}(\Phi-\eta\operatorname{Id})=0\\}\subset{\rm
Tot}(K),$ (2.2)
where ${\rm Tot}(K)$ is the total space of $K$, $\eta$ is the
tautological section of $K$ on ${\rm Tot}(K)$, and, by abuse of notation, we
consider $\Phi$ as its pull-back to ${\rm Tot}(K)$ (the reader should refer to
[26] for thorough details on the construction). We say that $(E,\Phi)$ lies in
the regular locus of $\mathcal{M}_{G_{\mathbb{C}}}$ if the curve $S$ is non-
singular, and denote the regular locus of the moduli space by
$\mathcal{M}_{G_{\mathbb{C}}}^{\prime}$. Let $\pi\colon S\to\Sigma$ be the
natural projection to $\Sigma$, and let $\eta\in H^{0}(S,\pi^{*}(K))$ denote
the restriction of the tautological section of $K$ to $S$. If $(E,\Phi)$ is in
the regular locus, then there exists a line bundle $L\to S$ for which
$E=\pi_{*}L$, and $\Phi$ is obtained by pushing down the map $\eta\colon L\to
L\otimes\pi^{*}(K)$. In this way, one recovers the pair $(E,\Phi)$ from the
pair $(S,L)$, which is referred to as the spectral data associated to the pair
$(E,\Phi)$.
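Concretely (our unwinding of (2.2), writing $a_{i}$ for the characteristic
coefficients of $\Phi$): for $G_{\mathbb{C}}={\rm GL}(n,\mathbb{C})$ the
spectral curve is cut out of ${\rm Tot}(K)$ by
$\displaystyle\eta^{n}+a_{1}\eta^{n-1}+\cdots+a_{n}=0,\qquad a_{i}\in H^{0}\big{(}\Sigma,K^{i}\big{)},$
an $n$-sheeted branched cover $\pi\colon S\rightarrow\Sigma$ which, when
smooth, has genus $g_{S}=n^{2}(g-1)+1$.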
Note that the spectral curve $S$ of the pair $(E,\Phi)$ depends only on the
characteristic polynomial of $\Phi$ and hence it only depends on the image of
$(E,\Phi)$ under the Hitchin map. Therefore any point
$a\in\mathcal{A}_{G_{\mathbb{C}}}$ has an associated spectral curve $S_{a}$.
If $a$ is in the regular locus of $\mathcal{A}_{G_{\mathbb{C}}}$, in other
words, if the associated spectral curve $S_{a}$ is smooth, then the spectral
data construction identifies the fiber $\mathrm{Hit}^{-1}(a)$ of the Hitchin
system with some subspace of $\mathrm{Pic}(S_{a})$, the Picard variety of the
spectral curve $S$. The connected components of $\mathrm{Pic}(S_{a})$ are
isomorphic to copies of $\mathrm{Jac}(S_{a})$, the Jacobian of $S_{a}$, and
are labeled by the degree of the vector bundle $E$ of a Higgs pair $(E,\Phi)$.
In particular, for $G_{\mathbb{C}}={\rm GL}(n,\mathbb{C})$, the generic fibers
are isomorphic to $\mathrm{Jac}(S_{a})$, and one can see from here that the
components of the regular fibers are abelian varieties. While much is known
about the generic fibers of the $G_{\mathbb{C}}$-Hitchin fibration, there are
still several interesting open questions. In particular, it would be
interesting to understand the geometry of the generic fibers of the Hitchin
fibration stated in the open problems of [160]. For instance, it is
interesting to consider the following:
###### Open Question 2.3.
Considering the notion of “strong real form” from [1] (a refinement of the
notion of real form; for example, for ${\rm SL}(2,\mathbb{C})$ there are three
equivalence classes of strong real forms, corresponding to ${\rm SU}(2,0)$,
${\rm SU}(0,2)$ and ${\rm SU}(1,1)$), describe the corresponding Higgs bundles
and determine which ones define singular spectral curves.
When considering arbitrary groups $\mathcal{G}$, the algebraic curve defined
by the characteristic polynomial of $\Phi$ is not always generically smooth
(for real groups, see for instance the case of $\mathcal{G}={\rm SU}(p,q)$ and
the spectral data described in [157]). In this case, one may consider cameral
covers [60]: these are $K$-valued covers of $\Sigma$ with an action of
$\mathcal{W}$, the Weyl group of $\mathcal{G}$. The fiber of the associated
fibration can be described in terms of these covers, and over a generic point
of the base the cover is a $\mathcal{W}$-Galois cover (e.g., see [59, 60, 61,
80, 164], and [65] for further references). In this setup, there is a natural
discriminant locus in the Hitchin base, away from which the connected
component of the fiber is isomorphic to a certain abelian variety which can be
described as a generalized Prym variety of the cameral cover. However, the
study of the singular fibers, even from the perspective of cameral covers, is
not fully understood.
###### Open Question 2.4.
Give a comparison of what is known for singular fibers of the Hitchin
fibration from the perspective of cameral covers and of spectral data.
Cameral covers have shown to be very useful tools to understand the moduli
spaces of principal Higgs bundles and their relation to many other fields.
However, the abstraction of the method and the constructions of the covers can
sometimes make certain properties of the moduli spaces very difficult to
discern. Although most objects are defined in the above papers in a general
way, their description and study for particular groups is still being done by
many researchers (e.g., see recent developments for real Higgs bundles in
[153, 154], where interesting comparisons with classical spectral data are
carefully explained).
###### Open Question 2.5.
Extend the cameral cover methods of [153] for ${\rm SU}(p,p+1)$-Higgs bundles
to all other real Higgs bundles which lie completely over the singular locus
of the Hitchin fibration, i.e., to those Higgs bundles whose characteristic
polynomial defines singular curves through (2.2).
### 2.3 The singular locus of the Hitchin fibration
As mentioned before, the fiber $\mathrm{Hit}^{-1}(a)$ over
$a\in\mathcal{A}_{G_{\mathbb{C}}}$ is said to be singular when the
corresponding spectral curve $S_{a}$ defined as in (2.2) is singular. The most
singular fiber is the nilpotent cone (the name was given by Laumon [130] to
emphasize the analogy with the nilpotent cone in a Lie algebra), which sits
over $\mathbf{0}\in\mathcal{A}_{G_{\mathbb{C}}}$. One of the tools to study the
nilpotent cone is the moment map $\mu$ of the $S^{1}$ action
$(E,\Phi,h)\to(E,\mathrm{e}^{i\theta}\Phi,h)$ [105, 112]. Moreover, the
nilpotent cone is preserved by the flow of $\mu$ [104, Theorem 5.2], and since
points of $\mathcal{M}_{G_{\mathbb{C}}}$ flow towards the nilpotent cone, it
encodes the topology of the moduli space.
The nilpotent cone has primarily been studied for ${\rm SL}(n,\mathbb{C})$ and
${\rm GL}(n,\mathbb{C})$, and much of its geometry remains unknown for the
moduli spaces of $G_{\mathbb{C}}$-Higgs bundles. For ${\rm SL}(n,\mathbb{C})$
and ${\rm GL}(n,\mathbb{C})$-Higgs bundles, the irreducible components of the
nilpotent cone are labeled by connected components of the fixed point set of
the $S^{1}$ action. Among these components is the moduli space $\mathcal{N}$
of semistable bundles (given a stable bundle $E$, take $\Phi=0$; then the
Higgs bundle $(E,0)$ is stable and trivially fixed by the $S^{1}$-action).
Other singular fibers have been the subject of more recent research (e.g., see
[25, 96, 116]). In particular, in the case of ${\rm GL}(n,\mathbb{C})$-Higgs
bundles, while when the spectral curve $S$ is smooth the corresponding fiber
$\mathrm{Hit}^{-1}(S)$ can be identified with the Jacobian $\mathrm{Jac}(S)$
of all line bundles $L\rightarrow S$ of degree $0$, when the spectral curve
$S$ is not smooth the corresponding fiber is seen to be the _compactified
Jacobian_ [26, 162] (see also [139, Fact 10.3] for a clear explanation). The
compactified Jacobian $\overline{\mathrm{Jac}}(S)$ is the moduli space of all
torsion-free rank-1 sheaves on $S$, where the usual “locally-free” condition
is dropped. Moreover, when $S$ is not integral, the fine moduli space needs to
be considered. A more intuitive definition of the compactified Jacobian is the
following: consider a path of smooth curves $S_{t}$ approaching a singular
curve $S_{0}$; since the limit of $\mathrm{Jac}(S_{t})$ does not depend on the
choice of smooth family [119], this limit is $\overline{\mathrm{Jac}}(S_{0})$.
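For orientation, a standard example (our addition): if $S_{0}$ is an integral
curve with a single node and normalization $\nu\colon\widetilde{S}_{0}\rightarrow
S_{0}$, then $\mathrm{Jac}(S_{0})$ is a $\mathbb{C}^{*}$-extension of
$\mathrm{Jac}\big{(}\widetilde{S}_{0}\big{)}$ sitting inside
$\overline{\mathrm{Jac}}(S_{0})$ as a dense open subset, and the boundary
consists of the torsion-free sheaves which fail to be locally free at the
node, namely the pushforwards
$\displaystyle\nu_{*}\widetilde{L},\qquad\widetilde{L}\in\mathrm{Pic}\big{(}\widetilde{S}_{0}\big{)},$
of line bundles on the normalization.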
In the case of ${\rm SL}(2,\mathbb{C})$-Higgs bundles, much work has been done
on the singular fibers of the corresponding Hitchin fibration. For example,
see [89, Section 5.2.2] for connectedness of the fiber of $\mathcal{M}_{{\rm
SL}(2,\mathbb{C})}$ when $S$ is irreducible and has only simple nodes; see
[96] for a fuller description of the singular fibers of $\mathcal{M}_{{\rm
SL}(2,\mathbb{C})}$ (the authors of [96] actually study the slightly more
general situation of “twisted Higgs bundles”, where the canonical bundle $K$
is replaced by a line bundle $L$ with $\deg(L)>0$; for this setting of
$L$-twisted ${\rm SL}(2,\mathbb{C})$-Higgs bundles, the monodromy action was
considered in [24]); and see [156] for the monodromy action around singular
fibers of the Hitchin fibration.
###### Open Question 2.6.
Building on the results for ${\rm SL}(2,\mathbb{C})$-Higgs bundles, describe
the singular fibers of the Hitchin fibration for arbitrary $G_{\mathbb{C}}$.
When the singular fiber lies above some particular types of spectral curves,
one may describe the fibers by considering some modified version of spectral
data, leading to the following natural question:
###### Open Question 2.7.
Extending on [116], obtain a geometric description of the fibers of the
Hitchin fibration of $G_{\mathbb{C}}$-Higgs bundles which lie over points of
the Hitchin base defining curves through (2.2) with equation
$\det(\Phi-\eta\operatorname{Id})=P^{k}(\eta)$ for $k\geq 2$, and for which
$\\{P(\eta)=0\\}$ is generically smooth $($and thus defines itself a smooth
spectral curve$)$.
When the spectral curve has defining equation
$\det(\Phi-\eta\operatorname{Id})=P^{2}(\eta)$, components of the fiber were
studied in [116], and the full fibers of the Hitchin fibration with that base
point are described in [43]. From a different perspective, in terms of fiber
products of spectral curves, certain singular spectral curves were considered
in [42]. While not much is known about the singular fibers of the Hitchin
fibration for $G_{\mathbb{C}}$-Higgs bundles, one may deduce properties of the
whole moduli space by considering the monodromy action of the natural
Gauss–Manin connection of the fibration. In the case of ${\rm
SL}(2,\mathbb{C})$-Higgs bundles, the study of the monodromy was done in
[156], where an explicit formula was used to understand connectivity of the
moduli space. The work was later extended to twisted rank 2 Higgs bundles in
[24], and to all ${\rm SL}(n,\mathbb{C})$-Higgs bundles in [18]. However, the
general understanding of the monodromy action for other groups remains open:
###### Open Question 2.8.
Give a geometric description of the monodromy action for the Hitchin fibration
of $G_{\mathbb{C}}$-Higgs bundles.
Finally, since the moduli spaces $\mathcal{M}_{G_{\mathbb{C}}}$ are often not
smooth, it is important to understand the singularities of
$\mathcal{M}_{G_{\mathbb{C}}}$. For a beautiful survey on recent developments
in the theory of moduli spaces of sheaves on projective varieties, and
implications for Higgs bundles, the reader may refer to [140]. In the case of
parabolic Higgs bundles, a description of the Hitchin fibration was given
recently in [21], and it would be very interesting to understand the above
considerations and open questions in this other setting.
## 3 Higgs bundles and limiting structures
Many conjectures from mathematics and physics about
$\mathcal{M}_{G_{\mathbb{C}}}$ remain open because they require a finer
knowledge of the ends of the moduli space than what is provided by traditional
algebro-geometric techniques. In this section we shall restrict our attention
to ${\rm SL}(2,\mathbb{C})$-Higgs bundles, and note that for other groups most
of the questions mentioned here remain open. In this setting, one has the
following conjecture of Hausel:
###### Conjecture 3.1 ([105, Conjecture 1]).
There are no non-trivial $L^{2}$ harmonic forms on the Hitchin moduli space.
There is a similar conjecture for the moduli space of monopoles which is
called the Sen Conjecture. By analogy, the conjecture for the Hitchin moduli
space is sometimes called the Sen Conjecture as well. In order to obtain a
finer knowledge of the ends of $\mathcal{M}_{{\rm SL}(2,\mathbb{C})}$, finer
descriptions of solutions of Hitchin’s equations near the ends are needed. A
number of recent results [136, 137, 138, 142] demonstrate the power of
constructive analytic techniques for describing the ends of the Hitchin moduli
space.
Fixing a stable Higgs bundle $(E,\Phi)$ in $\mathcal{M}_{{\rm
SL}(2,\mathbb{C})}$, the ray of Higgs bundles with harmonic metric
$(E,t\Phi,h_{t})$ approaches the ends of the moduli space as $t\to\infty$. In
order to understand the behavior of the harmonic metrics $h_{t}$ as
$t\to\infty$, note that in the limit the curvature
$F_{D(\overline{\partial}_{E},h_{t})}$ concentrates at the ramification points
$Z\subset\Sigma$ of $\pi\colon S\rightarrow\Sigma$ and vanishes everywhere
else. The decay is exponential in $t$, leading to the following result:
###### Theorem 3.2 ([142, Theorem 2.7]).
On a compact subset $\overline{U}$ of $\Sigma-Z$, there exist positive
constants $c_{0}$ and $\epsilon_{0}$ such that at any point in $\overline{U}$
$\displaystyle\big{|}[\varphi,\varphi^{\dagger_{h_{t}}}]\big{|}_{h_{t},g_{\Sigma}}\leq
c_{0}\exp(-\epsilon_{0}t).$
Consequently, the limiting hermitian metric is singular at the ramification
points $Z\subset\Sigma$ and
$\displaystyle
F_{D(\overline{\partial}_{E},h_{\infty})}=0,\qquad[\Phi,\Phi^{*h_{\infty}}]=0,\qquad\overline{\partial}_{E}\Phi=0.$
(3.1)
It is often said that Hitchin’s equations “abelianize” asymptotically. The
vanishing of the Lie bracket in (3.1) reflects the deeper expectation that the
metric $h_{\infty}$ is the pushforward of a singular harmonic metric $h_{L}$
on the spectral data $L\rightarrow S$. This has been proved when $S$ is
smooth, i.e., $(E,\Phi)\in\mathcal{M}^{\prime}_{{\rm SL}(n,\mathbb{C})}$, by
Mazzeo–Swoboda–Weiss–Witt when $n=2$ [137] and generalized to any rank by
Fredrickson [85]. In [142, Theorem 5.1] Mochizuki proves this for all of
$\mathcal{M}_{{\rm SL}(2,\mathbb{C})}$, making no assumptions about the
smoothness of $S$.
### 3.1 The ends of the regular locus
More is known about the ends of the regular locus $\mathcal{M}^{\prime}_{{\rm
SL}(2,\mathbb{C})}$. For $t$ large but finite, the harmonic metric $h_{t}$ is
close to an approximate harmonic metric $h_{t}^{\mathrm{approx}}$, constructed
by desingularizing $h_{\infty}$ [137]. On small disks around points in
$Z\subset\Sigma$, the approximate metric is equal to a smooth local model
solution
$\displaystyle\overline{\partial}_{E}=\overline{\partial},\qquad
t\Phi=t\begin{pmatrix}0&1\\\ z&0\end{pmatrix}\mathrm{d}z,\qquad
h_{t}^{\mathrm{model}}=\begin{pmatrix}|z|^{1/2}{\mathrm{e}}^{u_{t}(|z|)}&\\\
&|z|^{-1/2}{\mathrm{e}}^{-u_{t}(|z|)}\end{pmatrix},$
where $u_{t}(|z|)$ comes as a solution of a $t$-rescaled Painlevé III ODE with
boundary conditions given by $u_{t}(|z|)\sim-\frac{1}{2}\log(|z|)$ near
$|z|=0$ (so that $h_{t}$ is smooth), and
$\lim\limits_{|z|\to\infty}u_{t}(|z|)=0$. Note that in this same local gauge,
the singular limiting metric $h_{\infty}$ would be equal to
$\displaystyle h_{\infty}=\begin{pmatrix}|z|^{1/2}&\\\
&|z|^{-1/2}\end{pmatrix}.$
Outside of small disks around $Z\subset\Sigma$, the approximate harmonic
metric $h_{t}^{\mathrm{approx}}$ is equal to $h_{\infty}$.
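It is instructive to verify directly (a short computation included for the
reader; it is implicit in the references above) that $h_{\infty}$ solves the
decoupled equations (3.1) away from $z=0$. With
$\Phi=\begin{pmatrix}0&1\\\ z&0\end{pmatrix}\mathrm{d}z$ one computes
$\displaystyle\Phi^{*_{h_{\infty}}}=h_{\infty}^{-1}\overline{\Phi}^{T}h_{\infty}=\begin{pmatrix}0&\bar{z}/|z|\\\ |z|&0\end{pmatrix}\mathrm{d}\bar{z},\qquad[\Phi,\Phi^{*_{h_{\infty}}}]=0,$
since both matrix products equal $|z|\operatorname{Id}$ (similarly for
$t\Phi$, with both products equal to $t^{2}|z|\operatorname{Id}$), while
$\displaystyle F_{D(\overline{\partial},h_{\infty})}=\overline{\partial}\big{(}h_{\infty}^{-1}\partial h_{\infty}\big{)}=\overline{\partial}\begin{pmatrix}\frac{\mathrm{d}z}{4z}&0\\\ 0&-\frac{\mathrm{d}z}{4z}\end{pmatrix}=0\qquad\text{for }z\neq 0.$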
The approximate description of $h_{t}$ by $h_{t}^{\mathrm{approx}}$ has
already been useful in [138] for describing the hyperkähler metric on
$\mathcal{M}^{\prime}_{{\rm SL}(2,\mathbb{C})}$ near the ends. There are two
natural hyperkähler metrics on $\mathcal{M}^{\prime}_{{\rm
SL}(2,\mathbb{C})}$: first, the hyperkähler metric $g_{L^{2}}$ on
$\mathcal{M}_{{\rm SL}(2,\mathbb{C})}$ restricts to
$\mathcal{M}^{\prime}_{{\rm SL}(2,\mathbb{C})}$; second, there is a metric
$g_{\mathrm{sf}}$ on $\mathcal{M}^{\prime}_{{\rm SL}(2,\mathbb{C})}$, known as
the semiflat metric because $g_{\mathrm{sf}}$ is flat on the half-dimensional
torus fibers [87]. The metric $g_{L^{2}}$ comes from taking the $L^{2}$ metric
on triples $(E,t\Phi,h_{t})$, while the semiflat metric $g_{\mathrm{sf}}$
comes from taking the $L^{2}$ metric on the moduli space of triples
$(E,t\Phi,h_{\infty})$ [138]. Consequently, Mazzeo–Swoboda–Weiss–Witt are able
to describe the difference between $g_{L^{2}}$ and $g_{\mathrm{sf}}$ using
their careful description of $h_{t}$ and $h_{\infty}$. They prove
###### Theorem 3.3 ([138, Theorem 1.2]).
The metric $g_{L^{2}}$ admits an asymptotic expansion
$\displaystyle
g_{L^{2}}=g_{\mathrm{sf}}+\sum_{j=0}^{\infty}t^{\frac{4-j}{3}}G_{j}+\mathcal{O}\big{(}e^{-\beta
t}\big{)}$ (3.2)
as $t\to\infty$. Here each $G_{j}$ is a symmetric two-tensor.
###### Open Question 3.4.
Are the polynomial correction terms $G_{j}$ non-zero?
Separately from this description of the ends with PDE techniques, a remarkable
conjectural picture of the asymptotic geometry of $\mathcal{M}_{{\rm
SL}(n,\mathbb{C})}$ has emerged from physics in the work of Gaiotto, Moore,
and Neitzke [91, 150]. Their starting point is the semiflat metric
$g_{\mathrm{sf}}$ on $\mathcal{M}^{\prime}_{{\rm SL}(n,\mathbb{C})}$ which is
too homogeneous to extend to all of $\mathcal{M}_{{\rm SL}(n,\mathbb{C})}$.
They give a recipe for constructing a complete hyperkähler metric
$g_{\mathrm{GMN}}$ on $\mathcal{M}_{{\rm SL}(n,\mathbb{C})}$ differing from
$g_{\mathrm{sf}}$ by “quantum corrections” which are computed by counting
certain BPS states in supersymmetric field theory. In particular, the quantum
corrections have the following size
$\displaystyle g_{\mathrm{GMN}}=g_{\mathrm{sf}}+O\left(\sum_{\gamma\in
H_{1}(S_{a},\mathbb{Z})}\Omega(\gamma,a)\mathrm{e}^{-t\big{|}\int_{\gamma}\eta\big{|}}\right)$
(3.3)
as $t\to\infty$. In this formula, as in the previous sections, $a$ is a point
in $\mathcal{A}_{{\rm SL}(n,\mathbb{C})}$, the corresponding spectral curve is
$S_{a}$ with tautological one-form $\eta$; and the sum is over all loops
$\gamma$ in $S_{a}$. The $\Omega(\gamma,a)$ are BPS counts in supersymmetric
field theory. These are $\mathbb{Z}$-valued and piecewise-constant, jumping
across certain walls in the parameter space. The jumps are constrained to
satisfy the Kontsevich–Soibelman wall-crossing formula [128], and thus
$g_{\mathrm{GMN}}$ is smooth. Moreover, Gaiotto–Moore–Neitzke conjecture
###### Conjecture 3.5 ([91]).
The hyperkähler metric $g_{\mathrm{GMN}}$ on the moduli space is the natural
hyperkähler metric $g_{L^{2}}$ on $\mathcal{M}_{{\rm SL}(n,\mathbb{C})}$.
If the conjecture of Gaiotto–Moore–Neitzke is correct, then all of the
symmetric two-tensors $G_{j}$ appearing in (3.2) vanish, answering Open
Question 3.4. Note that there is already evidence that this happens on the
Hitchin section [68]. As $a\in\mathcal{A}_{{\rm SL}(n,\mathbb{C})}$ approaches
the singular locus $\mathcal{A}^{\mathrm{sing}}_{{\rm SL}(n,\mathbb{C})}$, the
spectral curves $S_{a}$ become singular. In particular, there is at least one
loop $\gamma_{0}$ on $S_{a}$ which pinches; hence the “quantum correction” in
(3.3) corresponding to $\gamma_{0}$ is not exponentially suppressed as $t$
approaches $\infty$. While we have focused here on ${\rm
SL}(2,\mathbb{C})$-Higgs bundles, equivalent questions (and conjectures) may
be asked for more general $G_{\mathbb{C}}$-Higgs bundles, providing several
new lines of research:
###### Open Question 3.6.
Generalize the above results to the case of $G_{\mathbb{C}}$-Higgs bundles for
arbitrary $G_{\mathbb{C}}$.
## 4 Higgs bundles on singular curves
While we have so far considered $G_{\mathbb{C}}$-Higgs bundles on a compact
Riemann surface $\Sigma$, principal Higgs bundles can also be defined over
singular spaces $X$, and in particular, over singular curves. For simplicity,
we shall begin by considering nodal curves $X$, i.e., irreducible projective
curves $X$ whose singularities are nodes. In order to generalize the notion of
vector bundles on smooth projective curves, one may consider the torsion-free
sheaves on the nodal curve $X$. Through the work of Bhosle in [31], the
category of torsion-free sheaves on a nodal curve $X$ and the category of
generalized parabolic bundles over its normalization $\widetilde{X}$ are
equivalent. Moreover, a first general construction of compactified moduli
spaces for semistable $G_{\mathbb{C}}$-bundles on an irreducible complex
projective curve $X$ with exactly one node was given in [163].
### 4.1 Singular principal $\boldsymbol{G_{\mathbb{C}}}$-Higgs bundles
Through the work of [163], it was shown in [95] that one can treat a principal
$G_{\mathbb{C}}$-Higgs bundle over a nodal curve $X$ as a particular type of
vector bundle on the normalization of the curve called a descending bundle;
these objects are in one-to-one correspondence with the following singular
principal $G_{\mathbb{C}}$-Higgs bundles.
###### Definition 4.1.
A singular principal $G_{\mathbb{C}}$-Higgs bundle is a triple
$(\mathcal{E},\tau,\Phi)$ where
* •
$\mathcal{E}$ is a locally free sheaf;
* •
$\tau\colon{\rm Sym}^{*}(\mathcal{E}\otimes
V)^{G_{\mathbb{C}}}\rightarrow\mathcal{O}_{X}$, for a fixed faithful
representation $G_{\mathbb{C}}\rightarrow{\rm GL}(V)$ of $G_{\mathbb{C}}$;
* •
$\Phi\colon X\rightarrow{\rm End}(\mathcal{E})\otimes\Omega_{X}^{1}$ is a
section.
###### Open Question 4.2.
Give, if possible, a notion of the Hitchin fibration for these singular
principal $G_{\mathbb{C}}$-Higgs bundles, and describe the geometry of the
smooth and singular fibers.
While Schmitt [163] and Bhosle [31] proved that there is a moduli space of
singular $G_{\mathbb{C}}$-bundles on $X$ with good specialization properties,
Seshadri gave a further study of the spaces of torsion-free sheaves on nodal
curves and generalizations to, among others, ramified $G_{\mathbb{C}}$-bundles
[167]. When considering degenerations of moduli spaces of vector bundles on
curves, which are closely related to the singular $G_{\mathbb{C}}$-bundles
mentioned above, the reader may want to consider the conjectures presented in
[166].
Just as one may define parabolic Higgs bundles on Riemann surfaces to consider
Higgs bundles on marked curves, one may extend these objects to singular
curves. Extending the notion of a parabolic vector bundle on a smooth curve,
Bhosle defined generalized parabolic sheaves (GPS) on any integral projective
curve $X$ [32] and generalized parabolic bundles (GPB) [33]. She constructed
the moduli spaces of GPS and GPB, and studied the correspondences appearing
when curves $X$ are obtained from blowing up finitely many nodes in a space
$Y$. Moreover, Bhosle also extended the notion of parabolic Higgs bundles to
that of generalized parabolic Higgs bundles (GPH) on the normalization $X$ of
an integral projective curve $Y$ [32, 33, 34]. In particular, she constructed
a birational morphism from the moduli space of good GPH on $X$ to the moduli
space of Higgs bundles on $Y$, and defined a proper Hitchin map on the space
of GPH. In this context, the following question is natural.
###### Open Question 4.3.
Generalize [34] to define generalized parabolic $G_{\mathbb{C}}$-Higgs bundles
on $X$, as well as a Hitchin fibration.
Moreover, the open questions mentioned for the moduli spaces of classical
$G_{\mathbb{C}}$-Higgs bundles may also be considered both for parabolic Higgs
bundles on Riemann surfaces $\Sigma$, and for parabolic Higgs bundles on
integral projective curves $X$. In particular, Bhosle studied recently the
relationship between Higgs bundles and the compactified Jacobian of a spectral
curve [34], considering Higgs bundles on the normalization $X$ of integral
projective curves $Y$, leading to an analogous question to that stated for
classical Higgs bundles:
###### Open Question 4.4.
Obtain a geometric description of the singular fibers of the Hitchin fibration
for generalized parabolic $G_{\mathbb{C}}$-Higgs bundles on $X$.
The study of Higgs bundles on singular curves may also be considered in a
limiting setting, where one begins with Higgs bundles on a smooth curve and
parametrically tunes the curve to degenerate to a singular curve. The
particular case of vector bundles on smooth curves degenerating to an
irreducible curve with one double point was considered in [122]. The case of
the degeneration of the moduli space of Higgs bundles on smooth projective
curves when the curve degenerates to an irreducible curve with a single node
was studied in [17].
###### Open Question 4.5.
Obtain equivalent degenerations to those in [17, 122] for the moduli spaces of
$G_{\mathbb{C}}$-Higgs bundles.
In particular, as explained in [17], their degeneration is analogous to the
models constructed by Gieseker and Nagaraj–Seshadri for the case of the moduli
spaces for which the Higgs structure is trivial. It should be noted that in
[17] the authors also construct a corresponding canonical relative proper
Hitchin map, whose fiber provides a new compactification of the Picard variety
for curves with normal crossing singularities. In their setting, the single node on the base curve leads to an irreducible vine curve with $n$ nodes appearing as the spectral curve. It would then seem natural that the
quasi-abelianization of [17] (resembling Hitchin’s classical abelianization)
could be generalized.
###### Open Question 4.6.
Describe the quasi-abelianization of the moduli space of
$G_{\mathbb{C}}$-Higgs bundles for different degenerations, following the
techniques of [17].
Finally, since it is important to find natural compactifications of open
moduli, and torsion-free sheaves on nodal curves play an important role in
[152] within the study of the compactification of the universal moduli space
of slope-semistable vector bundles over the compactification
$\overline{M}_{g}$ of the moduli space of genus $g$ curves, it is natural to
ask the following:
###### Open Question 4.7.
Understand the relation between the degenerations of moduli spaces of Higgs
bundles above, and the known compactifications of
$\mathcal{M}_{G_{\mathbb{C}}}$.
Since one has the correspondence between Langlands dual groups
$G_{\mathbb{C}}$ and ${}^{L}G_{\mathbb{C}}$, once the corresponding moduli
spaces are understood, and the Hitchin fibrations are shown to exist, one may
also want to consider, if possible, the duality between the fibrations. In
particular, the work of Arinkin [11] for rank 2 Higgs bundles on the auto-
duality of compactified Jacobians for curves with plane singularities would
allow one to understand both this setting and that of the singular fibers of the classical Hitchin fibration, leading to an intermediate question:
###### Open Question 4.8.
Extend the constructions of [11] to the setting of generalized parabolic
$G_{\mathbb{C}}$-Higgs bundles à la Bhosle [34].
## 5 Higgs bundles and branes within singular fibers
The appearance of Higgs bundles (and flat connections) within string theory
and the geometric Langlands program has led researchers to study the derived
category of coherent sheaves and the Fukaya category of these moduli spaces.
Therefore, it has become fundamental to understand Lagrangian submanifolds of
the moduli space of Higgs bundles supporting holomorphic sheaves ($A$-branes),
and their dual objects ($B$-branes). For ${}^{L}G_{\mathbb{C}}$ the Langlands
dual group of $G_{\mathbb{C}}$, there is a correspondence between invariant
polynomials for $G_{\mathbb{C}}$ and ${}^{L}G_{\mathbb{C}}$ giving an
identification
$\mathcal{A}_{G_{\mathbb{C}}}\simeq\mathcal{A}_{{}^{L}G_{\mathbb{C}}}$ of the
Hitchin bases.
### 5.1 Construction of branes
Through the Hitchin fibrations, the two moduli spaces
$\mathcal{M}_{G_{\mathbb{C}}}$ and $\mathcal{M}_{{}^{L}G_{\mathbb{C}}}$ are
then torus fibrations over a common base and their non-singular fibers are
dual abelian varieties [65, 106], answering some of the conjectures presented
in [172]. Kapustin and Witten give a physical interpretation of this in terms
of S-duality, using it as the basis for their approach to the geometric
Langlands program [121]. In this approach a crucial role is played by the
various types of branes and their transformation under mirror symmetry.
Adopting the language of physicists, a Lagrangian submanifold of a symplectic manifold supporting a flat bundle is called (the base of) an A-brane, and a complex submanifold supporting a holomorphic sheaf is (the base of) a B-brane. A submanifold of a hyperkähler manifold may be of type $A$ or $B$ with respect to each of the complex or symplectic structures, and thus choosing a triple of structures one may speak of branes of type $(B,B,B)$, $(B,A,A)$, $(A,B,A)$ and $(A,A,B)$ (one should note that since the complex structures satisfy the quaternionic equations, and the symplectic forms are obtained through them, branes of types $(A,A,A)$, $(A,B,B)$, $(B,A,B)$ and $(B,B,A)$ do not exist). Throughout these notes we shall follow the convention
in [121] and fix the three complex structures $I$, $J$ and $K$, such that $I$
is induced from the Riemann surface $\Sigma$, and $J$ from the complex group
$G_{\mathbb{C}}$.
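A one-line computation, included here for illustration, shows why the types excluded in the parenthetical remark above cannot occur. Writing $\omega_{J}(u,v)=g(Ju,v)$ and $\omega_{K}(u,v)=g(Ku,v)$ with $K=IJ$, for tangent vectors $u$, $v$ to a submanifold $L$ whose tangent spaces are $I$-invariant one has
$\displaystyle \omega_{K}(u,v)=g(IJu,v)=-g(Ju,Iv)=-\omega_{J}(u,Iv),$
so if $L$ is moreover Lagrangian for $\omega_{J}$ then it is automatically Lagrangian for $\omega_{K}$: type $B$ in $I$ together with type $A$ in $J$ forces type $A$ in $K$, ruling out $(B,A,B)$ and $(B,B,A)$, and similar manipulations with the quaternionic relations exclude the remaining two types.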
It is hence natural to seek constructions of different families of branes
inside the moduli space $\mathcal{M}_{G_{\mathbb{C}}}$, understand their appearance
within the Hitchin fibration, and describe their mirror families of branes. In
the context of Higgs bundles, branes were first considered by Kapustin and
Witten in 2006 in [121], where much attention was given to the $(B,A,A)$-brane
of $G$-Higgs bundles inside $\mathcal{M}_{G_{\mathbb{C}}}$, where $G$ is a
real form of the complex Lie group $G_{\mathbb{C}}$. Soon after, examples of
brane dualities were considered in [100]; in particular the case of $G$-Higgs
bundles for compact real forms $G$ of low rank was considered, in which case
the $(B,A,A)$-brane lies completely inside the nilpotent cone. While partial
results exist for these branes over singular fibers, the more global picture
remains unknown.
###### Open Question 5.1.
Give a geometric description of all $(B,A,A)$-branes of $G$-Higgs bundles
which live completely inside the most singular fiber of the Hitchin fibration,
the nilpotent cone of $\mathcal{M}_{G_{\mathbb{C}}}$.
The study of branes within moduli spaces of Higgs bundles continued evolving
slowly, until natural generic methods to construct families of all types of
branes in $\mathcal{M}_{G_{\mathbb{C}}}$ were introduced in [23]. These branes
were constructed as fixed point sets of certain families of involutions on the
moduli spaces of complex Higgs bundles. Consider $\sigma$ an anti-holomorphic
involution fixing a real form $G$ of $G_{\mathbb{C}}$, and $\rho$ the anti-
holomorphic involution fixing the compact real form of $G_{\mathbb{C}}$. Then,
through the Cartan involution $\theta=\sigma\circ\rho$ of a real form $G$ of
$G_{\mathbb{C}}$, one may define
$\displaystyle i_{1}\big(\bar{\partial}_{E},\Phi\big):=\big(\theta\big(\bar{\partial}_{E}\big),-\theta(\Phi)\big).$
Moreover, a real structure $f\colon\Sigma\to\Sigma$ on $\Sigma$ induces an
involution
$\displaystyle
i_{2}(\bar{\partial}_{E},\Phi):=\big{(}f^{*}(\partial_{E}),f^{*}(\Phi^{*})\big{)}=\big{(}f^{*}\big{(}\rho\big{(}\bar{\partial}_{E}\big{)}\big{)},-f^{*}(\rho(\Phi))\big{)}.$
Lastly, by setting $i_{3}:=i_{1}\circ i_{2}$, one may define a third
involution:
$\displaystyle
i_{3}(\bar{\partial}_{E},\Phi)=\big{(}f^{*}\sigma\big{(}\bar{\partial}_{E}\big{)},f^{*}\sigma(\Phi)\big{)}.$
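As a concrete illustration (a standard example, with conventions fixed here for definiteness), take $G_{\mathbb{C}}={\rm SL}(n,\mathbb{C})$ and the split real form $G={\rm SL}(n,\mathbb{R})$. On $\mathfrak{sl}(n,\mathbb{C})$ one may take
$\displaystyle \sigma(X)=\bar{X},\qquad \rho(X)=-\bar{X}^{T},\qquad \theta(X)=\sigma\circ\rho(X)=-X^{T},$
so that $\theta$ is the usual Cartan involution, with fixed subalgebra $\mathfrak{so}(n)\subset\mathfrak{sl}(n,\mathbb{R})$; the involution $i_{1}$ then sends a Higgs field $\Phi$ to $-\theta(\Phi)=\Phi^{T}$.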
The fixed point sets of the induced involutions $i_{1}$, $i_{2}$, $i_{3}$
introduced in [23] are branes of type $(B,A,A),$ $(A,B,A)$ and $(A,A,B)$
respectively, and through the associated spectral data their topological
invariants can be described using $KO$, $KR$ and equivariant $K$-theory. In
particular, it was shown that among the fixed points of $i_{1}$ are solutions
to the Hitchin equations with holonomy in $G$. Moreover, those fixed by
$i_{2}$ were shown to give real integrable systems, fibered as a Lagrangian
fibration over a real slice of the Hitchin base [22]. In order to construct
the fourth type of branes, $(B,B,B)$-branes, one may consider Higgs bundles
for a complex subgroup of $G_{\mathbb{C}}$, but these branes would not appear
through a symmetry in the spirit of the above constructions (an instance of this setting was recently explored in [84], where Langlands duality was studied for branes appearing through Borel subgroups). On the other hand, it is
shown in [111] that one may construct $(B,B,B)$-branes by considering the
subspaces of $\Gamma$-equivariant Higgs bundles for $\Gamma$ a finite group
acting on the Riemann surface $\Sigma$. In particular, it was shown in [111]
that for $G_{\mathbb{C}}={\rm SL}(2,\mathbb{C})$, these branes would be mid-
dimensional only under very restrictive conditions, and no equivalent result
has been shown for higher rank groups.
###### Open Question 5.2.
Describe the mid-dimensional $(B,B,B)$-branes appearing through $\Gamma$-equivariant $G_{\mathbb{C}}$-Higgs bundles for groups $G_{\mathbb{C}}$ of arbitrary rank, and classify those components completely contained in the singular locus of the
Hitchin fibration.
The construction of branes following the procedures of [22, 23] has recently
been generalized to the space of framed instantons [82], Higgs bundles over K3
surfaces [83], Higgs bundles over elliptic curves [37], quiver varieties [117,
118], more general hyperkähler spaces [38], and principal Schottky bundles
[45]. Moreover, many of the geometric properties of the branes in [22, 23] are
as yet unknown, and researchers continue to study them (e.g., see [16, 38]). In
the case of finite group actions, the branes introduced in [111] were later
studied in [94] from the perspective of character varieties, and many of their
properties remain unknown.
###### Open Question 5.3.
In the spirit of [19] and [20], describe the Brauer groups and automorphism
groups of the branes mentioned above.
Finally, it should be mentioned that in the last couple of years researchers
have found other novel ways in which branes can be constructed within the
moduli space of Higgs bundles, and which are yet to be generalized to other
settings. Examples of these are Nahm branes [81], branes appearing through
spinors [115], through moment maps [90], and through Borel subgroups [84].
However, since Lagrangian branes can appear in any of three types, it is of interest to understand families of Lagrangian branes of each type which are related in some geometric fashion; at present, no triples of families of branes appearing within hyperkähler spaces are known other than the ones obtained through the methods of [23].
###### Open Question 5.4.
Construct natural triples of families of branes in
$\mathcal{M}_{G_{\mathbb{C}}}$, and more generally, within hyperkähler spaces.
Considering the appearance of branes through real structures on Riemann
surfaces, one should also be able to impose other structures on the surfaces
to construct novel branes. A canonical example of such structure would be that
of a log-symplectic structure on $\Sigma$, also called a b-Poisson structure.
These structures are given by Poisson structures
$\pi\in\mathfrak{X}^{2}(\Sigma)$ for which $\pi$ has only non-degenerate
zeros. In particular, $\pi$ is generically symplectic. These structures were
completely classified by O. Radko [155], where she noted that every surface
(orientable or not) has a log-symplectic structure. The sets of invariants of
log-symplectic structures are:
* •
The zero curves $\gamma_{1},\ldots,\gamma_{n}$, taken with orientation defined
by $\pi$;
* •
The periods associated to each $\gamma_{i}$;
* •
The volume invariant of $\pi$.
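To fix ideas, we recall the standard examples in the spirit of [155] (included here only for illustration). Near a point of a zero curve one may always write $\pi=x\,\partial_{x}\wedge\partial_{y}$ in suitable local coordinates, while on the torus $\Sigma=T^{2}$ with angular coordinates $(\theta,\varphi)$ the bivector
$\displaystyle \pi=\sin(\theta)\,\partial_{\theta}\wedge\partial_{\varphi}$
is log-symplectic, with zero curves $\gamma_{1}=\{\theta=0\}$ and $\gamma_{2}=\{\theta=\pi\}$; away from these curves $\pi$ inverts to the symplectic form $\omega=(\sin\theta)^{-1}\,{\rm d}\theta\wedge{\rm d}\varphi$, whose divergence along the $\gamma_{i}$ is what the period and volume invariants above measure.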
Since there is a natural relation between the data defining log-symplectic
structures $\pi$ on $\Sigma$ as in [155], and the real structures
$f\colon\Sigma\rightarrow\Sigma$ considered in [22] to define
$(A,B,A)$-branes, a natural question is the following.
###### Open Question 5.5.
Which branes of Higgs bundles are characterized by log-symplectic structures,
and how do these relate to the $(A,B,A)$-branes introduced in [22, 23]?
### 5.2 Langlands duality
While it is understood that Langlands duality exchanges brane types, the exact
correspondence is not yet known (strictly speaking, Langlands duality is a correspondence between local systems on $\Sigma$, or more precisely coherent sheaves on their moduli space, and $D$-modules over the moduli stack ${\rm Bun}$ of bundles on $\Sigma$). As mentioned before, the first instances of the
correspondence being studied for low rank Higgs bundles appeared in [121] and
[100], but no proof has yet been given of a pair of branes of Higgs bundles
being dual. In the case of $(B,A,A)$-branes of $G$-Higgs bundles, it was
conjectured in [23] how the duality should appear:
###### Conjecture 5.6 ([23, Section 7]).
The support of the dual $(B,B,B)$-brane in
$\mathcal{M}_{{}^{L}G_{\mathbb{C}}}$ to the $(B,A,A)$-brane
$\mathcal{M}_{G}\subset\mathcal{M}_{G_{\mathbb{C}}}$ is the moduli space
$\mathcal{M}_{\check{H}}\subset\mathcal{M}_{{}^{L}G_{\mathbb{C}}}$ of
$\check{H}$-Higgs bundles for $\check{H}$ the group associated to the Lie
algebra $\check{\mathfrak{h}}$ in [148, Table 1].
Support for this conjecture is given in [114] for the group $G=U(m,m)$ by
considering the spectral data description of the brane in [158], and in [25,
161] for the groups $G={\rm SO}(p+q,p)$ and ${\rm Sp}(2p+2q,2p)$. One should
note that, in contrast with the $(A,B,A)$ and $(A,A,B)$ branes considered in
[23], for any $q>1$ the $(B,A,A)$-branes studied in [25] lie completely over
the singular locus of the Hitchin fibrations. For these branes of orthogonal
Higgs bundles, support for Conjecture 5.6 is obtained from the description of
how the brane intersects the most generic fibers of the Hitchin fibration:
indeed the rank of the hyperholomorphic sheaf depends on the number of
components in this intersection, which remains constant for different $q$,
leading to the following conjecture.
###### Conjecture 5.7 ([25, Section 8]).
For all $q$ even $($and for all $q$ odd$)$, the $(B,A,A)$-brane of ${\rm
SO}(p+q,p)$-Higgs bundles has dual $(B,B,B)$-brane obtained by considering the
same base as in Conjecture 5.6 and the same hyperholomorphic bundle supported on it (this hyperholomorphic bundle being, for example, the one introduced by Hitchin in [114]); it is only the way in which these spaces are embedded into the different Langlands dual moduli spaces which depends on $q$.
In particular, the supports of the branes for $q$ odd and $q$ even are dual to each
other as hyperkähler moduli spaces of complex Higgs bundles. From the
description of the invariant polynomials appearing for $(B,A,A)$-branes of
$G$-Higgs bundles in [157], one can see that the majority of these branes lie
over the singular locus of the Hitchin fibration. However, it is still
possible to describe the intersection of these branes with the most regular of
singular fibers. For example, it was shown in [116] that the generic
intersections of the $(B,A,A)$-branes of ${\rm SL}(m,\mathbb{H})$, ${\rm
SO}(2n,\mathbb{H})$ and ${\rm Sp}(2m,2m)$-Higgs bundles with the fibers of the
Hitchin fibrations are not abelian varieties, but are instead moduli spaces of
rank 2 bundles on a spectral curve, satisfying certain natural stability
conditions. In order to fully understand Langlands duality for branes one
would need to understand how different branes in a moduli space intersect, and
thus the particular case of branes within the nilpotent cone is of much
importance.
###### Open Question 5.8.
Describe the intersections and relations between all $(B,A,A)$-branes in the
nilpotent cone of $G_{\mathbb{C}}$-Higgs bundles, for $G_{\mathbb{C}}$ an
arbitrary group.
While a first step towards an answer would be to consider $(B,A,A)$-branes of
$G$-Higgs bundles, or those constructed in the papers mentioned above, a more
general perspective considering generators of the corresponding Fukaya
category would be ideal. For a short review of the open problems and
literature of this section, the reader may refer to [159] and references
therein. When studying the nilpotent cone, a few questions arise from the work
of Gukov and his colleagues in relation to quantization:
###### Open Question 5.9.
What is the brane quantization à la [100] of the branes in the nilpotent cone
mentioned in this section, and how does this relate to the curve quantization
à la [69] of the spectral curves defined by Higgs bundles in those branes?
###### Open Question 5.10.
Use the above methods to construct branes for wild Hitchin systems, and
approach Langlands duality as appearing in [101].
### 5.3 Surface group representations and $\boldsymbol{GW}$-components
Finally, it should be noted that branes which lie completely over the singular
locus of the Hitchin fibration also play an important role in representation
theory. In particular, the following has been predicted by Guichard and
Wienhard:
###### Conjecture 5.11 ([99, Conjecture 5.6]).
Additional connected components coming from positive representations
$($through the notion of $\Theta$-positivity$)$, giving further families of
higher Teichmüller spaces, appear in the moduli space of surface group
representations into ${\rm SO}(p+q,p)$ for $q\geq 1$.
In the case of the moduli space $\mathcal{M}_{{\rm SO}(p+1,p)}$, the existence
of the extra Guichard–Wienhard components (or simply GW-components) as
predicted in [99, Conjecture 5.6] is known to be true [13, 48, 49], and
moreover it was shown in [49] that those components indeed contain
$\Theta$-positive representations. From the perspective of the spectral data
description of the $(B,A,A)$-branes of ${\rm SO}(p+q,p)$-Higgs bundles of [25,
161], natural candidates for $GW$-components for arbitrary $q\in\mathbb{N}$
are the following:
###### Conjecture 5.12 ([25, Section 7]).
The natural candidates for the $GW$-components of Conjecture 5.11 are those containing Higgs bundles whose spectral data (a triple $(L,M,\tau)$ consisting of a line bundle $L$, an orthogonal bundle $M$ on an auxiliary curve, and an extension class $\tau$) in [25] has the form $(\mathcal{O},\mathcal{O}^{q},\tau)$. Alternatively, this is equivalent to taking ${\rm SO}(p+q,p)$-Higgs bundles whose underlying vector bundles are of the form
$(W,V\oplus\mathcal{O}^{q-1})$, where the pair $(W,V)$ gives the vector
bundles of one of the ${\rm SO}(p+1,p)$-Higgs bundles in the $GW$-components
known to exist.
To prove that this actually gives the $GW$-components, the monodromy action à
la [24, 156] should be taken into consideration as well as the behavior over
singular fibers. On the symplectic side, from the study of spectral data in
[25], one can see the geometric reason for the absence of any extra
$GW$-components in the $(B,A,A)$-brane of ${\rm Sp}(2p+2q,2p)$-Higgs bundles.
## 6 Higgs bundles and Calabi–Yau geometry
Higgs bundles have played an important role in string theory in a wide range
of contexts. But one recent application has provided perhaps some of the most surprising connections between previously unrelated aspects of geometry: the links between Hitchin systems and the geometry of singular complex $3$- and $4$-dimensional Calabi–Yau (CY) varieties.
### 6.1 Calabi–Yau integrable systems and Hitchin Systems
The first hint of such a connection appeared in [56] in which links were
developed between Calabi–Yau integrable systems and Hitchin Systems. Briefly,
as described in Section 2, the Hitchin system forms an integrable system
through the definition of the Hitchin fibration (2.1), whose generic fibers
are even abelian varieties obtained through branched coverings of an
underlying Riemann surface $\Sigma$ with genus $g\geq 2$. On the other hand,
_Calabi–Yau_ integrable systems were first explored for families of Calabi–Yau
$3$-folds in [63, 64], where the base of the system was formed by the moduli
space of Calabi–Yau varieties in the family, and the fibers were formed by the Deligne cohomology groups given by the intermediate Jacobians:
$\displaystyle
J^{2}(X)=H^{3}(X,\mathbb{C})/\big{(}F^{2}H^{3}(X,\mathbb{C})+H^{3}(X,\mathbb{Z})\big{)}$
of the Calabi–Yau 3-folds $X$. Fiber and base fit together into a total space
carrying a holomorphic symplectic form and the fibers are Lagrangian [63].
Furthermore, in remarkable work [56] Diaconescu, Donagi, and Pantev developed
an isomorphism between Calabi–Yau integrable systems and those of Hitchin, the
DDP correspondence. More precisely, by considering a smooth projective complex
curve $\Sigma$ and an ADE group $G$, for a fixed pair $(\Sigma,G)$ they
constructed a family of quasi-projective (i.e., _non-compact_) CY $3$-folds
(defined as $\operatorname{Tot}(V)$ for a rank 2 vector bundle, $V$,
satisfying $\det(V)=K_{\Sigma}$).
Treating the moduli space of the non-compact CY manifold as the base of a
Hitchin integrable system for the group $G$, a correspondence between the CY
integrable system (whose fibers are the intermediate Jacobians of a family of
non-compact CY 3-folds) and that of the Hitchin system (whose fibers are Prym
varieties of the corresponding spectral covers) was explicitly laid out for
the Lie groups $A_{k}$, but the description is only valid away from the
discriminant. This mapping between Hitchin and CY integrable systems was
nicely generalized via a sheaf-theoretic approach to the remaining simple Lie
groups $B_{k}$, $C_{k}$, $F_{4}$ and $G_{2}$ in [27].
This important correspondence found a ready audience within string theory in
the context of F-theory [177] – a geometric approach to compactifications of
the type IIB string with non-trivial axio-dilaton backgrounds – in which the
effective physics of the type IIB compactification to $(12-2n)$ spacetime
dimensions is encoded in the geometry of an elliptically fibered (or more
generally genus one fibered), complex Calabi–Yau $n$-fold, $\pi\colon X_{n}\to
B_{n-1}$. The degeneration of the elliptic fibers encodes information about a
Lie group, $G$, corresponding to D7-branes wrapping the discriminant locus of
the elliptic fibration (see Section 7 for further details). From a physical
perspective, the Calabi–Yau geometry is a tool to investigate intersecting
brane theories, which are innately linked to Higgs bundles. Within a “local”
description of F-theory, intersecting branes (wrapping a sub-variety
$\Sigma\subset X_{n}$) come equipped with an adjoint field $\Phi$ (the Higgs
field) which parametrizes normal motions of a stack of branes. Matter fields
are fluctuations around the background $\langle\Phi\rangle$ and Yukawa
couplings measure obstructions to extending these solutions beyond the linear
order. Usually the Higgs field $\Phi$ is taken to live in a Cartan subspace of
the Lie algebra so that only the eigenvalues of $\Phi$ are relevant. But this
seems to be an incomplete description in many situations relevant to
interesting physical models.
### 6.2 T-branes and Hitchin systems
Usually the connection with F-theory is made using the spectral cover, defined
through $\{\det(\Phi-\eta\operatorname{Id})=0\}$, and which in the case that
$\Sigma\subset X$ (with $X$ a CY $n$-fold) locally defines the CY as the
normal cone of $\Sigma$. However, the spectral cover will not accurately
parameterize the local geometry of $X$ when the Higgs field is non-
diagonalizable (for example when $\Phi$ is upper triangular) [62]. So-called
_T-branes_ are non-Abelian bound states that generalize intersecting branes
and admit a matrix of normal deformations (or Higgs field) that is nilpotent
over some loci [47, 66, 67]. Mathematically, T-branes correspond to singular
fibers in the Hitchin fibration.
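A simple $2\times 2$ example, in the spirit of [47], illustrates this failure. In a local coordinate $z$ on $\Sigma$, the Higgs fields
$\displaystyle \Phi_{1}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\qquad\text{and}\qquad \Phi_{2}=\begin{pmatrix}0&0\\ 0&0\end{pmatrix}$
have the same spectral cover $\{\eta^{2}=0\}$, even though only $\Phi_{1}$ is a (nilpotent, non-zero) T-brane background; by contrast, the monodromic Higgs field
$\displaystyle \Phi_{3}=\begin{pmatrix}0&1\\ z&0\end{pmatrix}$
has spectral cover $\{\eta^{2}=z\}$, a double cover branched at $z=0$ which faithfully records the local geometry, degenerating to the nilpotent configuration only over the branch point.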
As they first originated in the physics literature, the Hitchin systems
corresponding to T-branes were not explicitly linked geometrically to the
background Calabi–Yau elliptic fibrations of F-theory. A first step in this
direction was taken in [8], which attempted to extend some of the links
developed in [56] to _compact Calabi–Yau varieties_. A limiting mixed Hodge
structure analysis was employed to study the form of the intermediate Jacobian
of CY 3-folds in the limit that the geometry became singular. In certain
singular limits of the elliptic fibration, the degeneration of the
intermediate Jacobian, $J^{2}(X)$, leads to _an emergent Hitchin system_ –
i.e., generates the moduli space of Higgs bundles defined over the
discriminant locus of the fibration [8, 64]. In [8] T-branes were explored in
the context of six-dimensional F-theory vacua, that is, using compactifications
of F-theory on _singular_ elliptically fibered Calabi–Yau $3$-folds,
$\pi\colon X_{3}\to B_{2}$.
The intrinsic intersecting brane Hitchin system was defined over a curve
$\Sigma\subset B$ in the base of the elliptic fibration, obtained through a
component of the discriminant locus ($\Delta=0$) describing degenerating
fibers. Upon a crepant resolution of the singular variety, it was argued that
the geometric remnants of T-branes correspond to periods of the three-form
potential of F-theory valued in the intermediate Jacobian of a now smooth
Calabi–Yau 3-fold. Moreover, in [8] a partial compactification of the DDP
correspondence was established and it was demonstrated that the Hitchin system
defined on the discriminant locus is contained in the local part of the
(compact) Calabi–Yau integrable system:
$\displaystyle\begin{array}{ccccc}
& & \pi^{*}\mathcal{M} & \lhook\joinrel\longrightarrow & \mathcal{M}\\
& & \big\downarrow & & \big\downarrow{\scriptstyle\,{\rm Hit}}\\
M & \longrightarrow & \widetilde{M}_{\rm cx} & \stackrel{\pi}{\longrightarrow} & M_{\rm loc},
\end{array}$ (6.1)
where $\mathcal{M}$ and $M$ are the full Hitchin and Calabi–Yau moduli spaces,
respectively, and $\widetilde{M}_{\rm cx}$ and $M_{\rm loc}$ the complex
structure moduli spaces of the resolved Calabi–Yau geometry and local
deformations of the singular Calabi–Yau variety (preserving the form of the
singular elliptic fibers). The maps are defined such that the right map is the
Hitchin fibration and the upper left (diagonal) map is an inclusion. This
correspondence was established for singular CY 3-folds with $A_{n}$-type
singularities. See [8, Section A.4] for details.
###### Conjecture 6.1 ([8]).
The correspondence described in (6.1) can be extended to any compact,
singular, elliptically fibered CY $3$-fold with singular fibers associated to
${\mathcal{G}}$-symmetry $($co-dimension $1$ over the base and admitting a
crepant resolution, see Section 7 for details$)$, and ${\mathcal{H}}$-type
Hitchin system defined over the discriminant locus of the elliptic fibration
such that ${\mathcal{H}}\subset{\mathcal{G}}$.
Although we will not explore it in detail here, it is also expected that
correspondences between Hitchin and CY moduli spaces (in either the compact or
non-compact CY setting) should extend to the Deligne cohomology of Calabi–Yau
$4$-fold geometries [35] (see also [28, 50, 51, 52, 132] for recent progress
on T-branes) and Higgs bundles defined over complex surfaces [168].
### 6.3 Wild Hitchin systems and F-theory
An intrinsic difficulty with the Hitchin systems arising within F-theory comes
from the fact that the Higgs bundles are defined on the discriminant locus of
the elliptically fibered CY manifold – and hence not on smooth Riemann
surfaces, but rather on complex curves that are in general singular (including
sometimes non-reduced and reducible). At the singular/intersection points of
such a curve, the physical theory suggests that the associated Higgs bundles
should also exhibit singularities. That is, in this context, it is natural to
also consider stable pairs with _singular connections_ (see also the
discussion in Section 4).
Thus far, work has focused primarily on so-called parabolic Hitchin systems
[127, 168, 169] which accommodate the possibility of simple poles in the gauge
and Higgs fields at marked points on a Riemann surface. However, many
questions – of both mathematical as well as physical interest – require the
consideration of higher order singularities in the gauge fields.
Wild/irregular Higgs bundles [6, 39, 41, 86] extend this formalism to include
stable, integrable connections with irregular singularities of the form
$\displaystyle d+A_{n}\frac{dz}{z^{n}}+\cdots+A_{1}\frac{dz}{z},$
with $n>1$ and stable parabolic Higgs pairs $(E,\Phi)$ where the Higgs field
has polar parts, e.g.,
$\displaystyle T_{n}\frac{dz}{z^{n}}+\cdots+T_{1}\frac{dz}{z}.$
As in Simpson’s construction [169] for parabolic Higgs bundles, a natural
assumption for this study is that the connections and Higgs fields are
holomorphically gauge equivalent to ones with diagonal polar parts (this is
weakened slightly in [86]). This leads to a correspondence between
singularities (after diagonalizing) of the form $T_{i}=\frac{1}{2}A_{i}$ for
$i\geq 2$ [36]. The moduli space of such wild Higgs bundles was described in
[39, 40] as a hyperkähler quotient. Already these irregular Hitchin systems
have played a significant role in the geometric Langlands program [88, 101],
and string applications including topologically twisted $\mathcal{N}=4$ super
Yang–Mills theories [101, 121], particularly in so-called “Stokes phenomena” [178, 180], which describe how the asymptotic behavior of the solutions changes in different angular regions around the singularity; Stokes matrices link the solutions in different regions and define a generalized monodromy which plays a central role in describing a wild Hitchin moduli space.
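As a minimal rank-2 example of this data (schematic, with a single irregular point at $z=0$ and conventions chosen for definiteness), one may take
$\displaystyle \nabla=d+A_{2}\frac{dz}{z^{2}}+A_{1}\frac{dz}{z},\qquad \Phi=T_{2}\frac{dz}{z^{2}}+T_{1}\frac{dz}{z},\qquad A_{2}={\rm diag}(a,-a),\qquad T_{2}=\tfrac{1}{2}A_{2},$
with $a\neq 0$, so that the polar parts are already diagonal as in Simpson's setting and the correspondence $T_{i}=\frac{1}{2}A_{i}$ for $i\geq 2$ is realized; the flat sections of $\nabla$ behave like $e^{\pm a/z}$ (up to power-law factors) near $z=0$, and the associated Stokes matrices compare the solutions across the angular sectors in which these exponentials dominate.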
Recent progress [9] has demonstrated that the study of ordinary smooth Hitchin
systems is insufficient in the context of $6$-dimensional F-theory
compactifications. Not only should generic CY $3$-folds have a correspondence
to parabolic or wild Hitchin systems, but in general deformations of the
singular variety ($M_{\rm loc}$ in (6.1) above) can dynamically change the
pole order of the relevant singular complexified connections appearing in the
Higgs bundles (see [7] and [9, Section 5.1] for an example). That is, by
varying the complex structure moduli of a singular CY variety, the location of
singularities in the discriminant locus $\Delta$ can be tuned to coincide.
This tuning, when viewed from the intersecting brane models, should correspond
to a parametric deformation of a Hitchin system in which the locations of simple poles are tuned and forced to coincide, producing higher order poles. In many
instances it seems this tuning can be done without changing the dimension of
the underlying CY/Hitchin moduli space.
###### Conjecture 6.2 ([9]).
There exists a flat morphism between the moduli spaces ${\mathcal{M}}_{\rm
par}$ and ${\mathcal{M}}_{\rm wild}$ in the case of a singular parabolic Higgs
bundle with $n$ simple poles in its connection and that of a wild Higgs bundle
with a single, higher order pole of order $n$.
This limiting process in the context of CY varieties also leads to the open
question:
###### Open Question 6.3 ([9]).
Does the Stokes phenomenon exhibited by wild Higgs bundles have an analog in
the moduli space of singular, elliptically fibered CY geometries or CY
integrable systems?
### 6.4 Singular CY varieties
Finally, it should be noted that there are likely many unexplored links
between classification problems in parabolic/wild Hitchin systems and singular CY varieties. In general, the criteria for a CY 3-fold to exhibit a generic singularity everywhere in its complex structure moduli space have attracted interest from the physics community in the context of “non-Higgsable clusters”
[10, 103, 143, 146]. The maps in (6.1) embedding the Hitchin moduli space into
that of the singular CY variety indicate that in such cases the highly
constrained form of the singular CY geometry must correspond to an equally
constrained Hitchin system. From the underlying effective physics of F-theory,
this correspondence is linked to an ${\rm SU}(2)$ R-symmetry which can rotate
components of hypermultiplets of the $6$-dimensional effective theory [14].
Here the two halves of the hypermultiplets correspond to complex structure moduli
and degrees of freedom in the intermediate Jacobian of the CY variety (the so-
called “RR-moduli”), respectively. Thus, any T-brane solution (or more
generally Higgs bundle on the brane with associated spectral cover) must
correspond under hypermultiplet rotation to a deformation of complex structure
of the singular 3-fold [8]. In particular, in the case of non-Higgsable CY
$3$-fold geometries an open question is to understand the following:
###### Open Question 6.4.
In the case that a non-Higgsable CY manifold exhibits ${\mathcal{G}}$ singular
elliptic fibers over a $($possibly singular$)$ curve $\Sigma$, does this
correspond to a trivial/empty Hitchin moduli space $($including parabolic or
wild Hitchin moduli space depending on the singularities of $\Sigma)$ of
${\mathcal{H}}$-Higgs bundles over $\Sigma$ where
${\mathcal{H}}\subset{\mathcal{G}}$?
One example where this open question can be confirmed in the affirmative is in
the case of CY $3$-folds $\pi\colon X\to\mathbb{F}_{n}$, defined as a
Weierstrass model (see Section 7) over a Hirzebruch surface, $\mathbb{F}_{n}$.
For each $n>2$, such elliptically fibered geometries are generically singular
[29, 147]. For $n=3$, for example, there is a generic ${\rm SU}(3)$ singularity
(more specifically Kodaira type IV fibers) over a discriminant locus
$\Delta\subset\mathbb{F}_{3}$ which takes the form of a smooth curve of genus
zero (one of the sections of the rational fibration of $\mathbb{F}_{3}$). The
fact that this symmetry is “un-Higgsable” in the physical theory corresponds
to the triviality of the moduli space of ${\rm SL}(N,\mathbb{C})$-Higgs bundles over $\mathbb{P}^{1}$ for $N=2,3$. See [8] for further examples involving
parabolic Hitchin systems over $\mathbb{P}^{1}$ with marked points.
In general, there are a large number of possible connections between Higgs
bundles and the effective theories and geometry arising within
string/M-/F-theory. One new correspondence has recently arisen within the
context of $4$-dimensional compactifications of F-theory in ${\mathcal{N}}=4$
supersymmetric Yang–Mills theories (with unitary gauge groups) which are
quotiented by particular combinations of $R$-symmetry and ${\rm
SL}(2,\mathbb{Z})$ automorphisms (such theories can arise as D3-branes probing
terminal singularities in F-theory) [4, 5, 12, 92, 93].
###### Open Question 6.5.
What generalizations of Higgs bundles correspond to the new ${\mathcal{N}}=3$
supersymmetric theories recently discovered in [92]?
The self-duality equations of ${\mathcal{N}}=4$ supersymmetric Yang–Mills
theories have led to a rich interplay between theories of branes arising in
string theory and Higgs bundles. It would be intriguing to understand whether
such links could arise between ${\mathcal{N}}=3$ theories and “cousins” of the
Hitchin system over Riemann surfaces. Finally, it should be noted that within
F-theory and the subject of T-branes there remain many open questions linking
so-called “matrix factorization” techniques, K-theory, Hitchin systems and
Calabi–Yau geometry (see, e.g., [28]).
## 7 Elliptic fibrations, Weierstrass models, and Calabi–Yau resolutions
Since elliptic fibrations play an important role when studying the relations
between Higgs bundles and F-theory, we shall conclude these notes with a
review of some of the basic ideas and recent advances. The reader should not
take this a thorough review, but rather a brief, curated overview of some
essential aspects of the underlying geometry.
###### Definition 7.1.
A surjective proper morphism $\varphi\colon Y\to B$ between two algebraic
varieties $Y$ and $B$ is called an elliptic fibration if the generic fiber of
$\varphi$ is a smooth projective curve of genus one and $\varphi$ has a
rational section. When $B$ is a curve, $Y$ is called an elliptic surface.
Moreover, when $B$ is a surface, $Y$ is said to be an elliptic $3$-fold. In
general, if $B$ has dimension $n-1$, $Y$ is called an elliptic $n$-fold.
### 7.1 Classification of singular fibers
The locus of singular fibers of an elliptic fibration, $\varphi\colon Y\to B$,
is called the discriminant locus, and is denoted by $\Delta(\varphi)$, or
simply $\Delta$ when the context is clear. If the base $B$ is smooth, the
discriminant locus is a divisor [58]. In the early 1960s, Kodaira classified
singular fibers of minimal elliptic surfaces in terms of numerical invariants
showing that there are 8 possibilities including two infinite series and 6
exceptional cases [125, 126]. Soon after, Néron obtained an equivalent
classification in an arithmetic setting using explicit regularizations of
singular Weierstrass models [151]. Based on Néron’s analysis, Tate proposed an
algorithm that allows (among other things) the determination of the type of
singular fibers of a Weierstrass model by analyzing the valuation of its
coefficients [176]. Under appropriate conditions, Kodaira’s classification of
singular fibers of an elliptic surface and Tate’s algorithm can be used to
describe the possible singular fibers and monodromies of an elliptically
fibered $n$-fold over points in codimension-1 in the base.
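For the reader's convenience we recall Kodaira's list (standard material from [125, 126]): the two infinite series are the fibers $I_{n}$, $n\geq 1$, and $I_{n}^{*}$, $n\geq 0$, and the six exceptional fibers are $II$, $III$, $IV$, $IV^{*}$, $III^{*}$ and $II^{*}$. For $n\geq 2$ the components of an $I_{n}$ fiber intersect according to the affine Dynkin diagram $\widetilde{A}_{n-1}$, those of $I_{n}^{*}$ according to $\widetilde{D}_{n+4}$, and those of $IV^{*}$, $III^{*}$ and $II^{*}$ according to $\widetilde{E}_{6}$, $\widetilde{E}_{7}$ and $\widetilde{E}_{8}$, respectively; this is the basic dictionary between singular fibers and the Lie algebras $\mathfrak{g}$ appearing below.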
However, over points in codimension-2, new fibers not in Kodaira’s list are
known to occur [141, 173]. These are frequently referred to as collisions of
singular fibers as they usually appear at the intersections of two divisors of
the discriminant locus of the elliptic fibration. In general, however, no
classification exists for the singular fibers of elliptic $3$-folds and
$4$-folds, leading to the broad question:
###### Open Question 7.2.
How can one geometrically classify non-Kodaira fibers?
Under some assumptions, the answer is known for the elliptic $n$-folds called
Miranda models [141, 173]. More generally, in the case of flat elliptic
fibrations obtained by crepant resolutions of Weierstrass models, non-Kodaira
fibers are expected to be contractions of usual Kodaira fibers. This is proven
for elliptic 3-folds [46] and confirmed in all known examples of non-Kodaira
fibers appearing in the F-theory literature [70, 76, 77, 78, 79, 131, 144,
175].
In view of the links described in Section 6 between elliptic CY geometry and
Hitchin systems, this leads naturally to the conjecture that two
classification problems might be linked:
###### Open Question 7.3.
How is the classification of non-Kodaira fibers of an elliptically fibered
Calabi–Yau $3$-fold related to the classification of parabolic or wild Hitchin
systems defined over the discriminant locus?
### 7.2 Weierstrass models
Since an elliptic fibration over a smooth base is birational to a (possibly
singular) Weierstrass model [55], the starting point of such an analysis will
usually be a Weierstrass model. We shall review here the main features of
these models, following the notation of Deligne [55]. Let $\mathcal{L}$ be a
line bundle over a quasi-projective variety $B$. We define the following
projective bundle (of lines):
$\displaystyle\pi\colon\
X_{0}=\mathbb{P}_{B}\big{[}\mathcal{O}_{B}\oplus\mathcal{L}^{\otimes
2}\oplus\mathcal{L}^{\otimes 3}\big{]}\longrightarrow B.$
We denote by $\mathcal{O}_{X_{0}}(1)$ the dual of the tautological line bundle
of the projective bundle $X_{0}$. The relative projective coordinates of
$X_{0}$ over $B$ are denoted $[z:x:y]$, where $z$, $x$, and $y$ are defined
respectively by the natural injection of $\mathcal{O}_{B}$,
$\mathcal{L}^{\otimes 2}$, and $\mathcal{L}^{\otimes 3}$ into
$\mathcal{O}_{B}\oplus\mathcal{L}^{\otimes 2}\oplus\mathcal{L}^{\otimes 3}$.
Hence, $z$ is a section of $\mathcal{O}_{X_{0}}(1)$, $x$ is a section of
$\mathcal{O}_{X_{0}}(1)\otimes\pi^{\ast}\mathcal{L}^{\otimes 2}$, and $y$ is a
section of $\mathcal{O}_{X_{0}}(1)\otimes\pi^{\ast}\mathcal{L}^{\otimes 3}$.
The most general Weierstrass equation is then the zero locus of the following
section of $\mathcal{O}_{X_{0}}(3)\otimes\pi^{\ast}\mathcal{L}^{\otimes 6}$ in $X_{0}$:
$\displaystyle
F=y^{2}z+a_{1}xyz+a_{3}yz^{2}-\big{(}x^{3}+a_{2}x^{2}z+a_{4}xz^{2}+a_{6}z^{3}\big{)},$
where $a_{i}$ is a section of $\pi^{\ast}\mathcal{L}^{\otimes i}$. The line
bundle $\mathcal{L}$ is called the fundamental line bundle of the Weierstrass
model $\varphi\colon Y\to B$ and can be defined directly from the elliptic
fibration $Y$ as $\mathcal{L}=R^{1}\varphi_{\ast}\mathcal{O}_{Y}$. The
Weierstrass model has a trivial canonical class when the fundamental line
bundle $\mathcal{L}$ is the anti-canonical line bundle of $B$.
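Over $\mathbb{C}$ one may complete the square in $y$ and the cube in $x$ to bring $F$ to the short Weierstrass form (a standard reduction, recalled here to fix notation):
$\displaystyle y^{2}z=x^{3}+f\,xz^{2}+g\,z^{3},$
where $f$ and $g$ are sections of $\mathcal{L}^{\otimes 4}$ and $\mathcal{L}^{\otimes 6}$, respectively. The discriminant locus of Section 7.1 is then cut out by the section
$\displaystyle \Delta=4f^{3}+27g^{2}$
of $\mathcal{L}^{\otimes 12}$, and Tate's algorithm reads off the Kodaira type of the fiber over a generic point of a component of $\{\Delta=0\}$ from the valuations of $f$, $g$ and $\Delta$ along that component.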
Each crepant resolution of a singular Weierstrass model is a relative minimal
model (in the sense of the Minimal Model Program) over the Weierstrass model
[134]. When the base of the fibration is a curve, the Weierstrass model has a
unique crepant resolution. On the other hand, when the base is of dimension
two or higher, a crepant resolution does not always exist; furthermore, when
it does, it is not necessarily unique. Different crepant resolutions of the
same Weierstrass model are connected by a finite sequence of flops (see for
example [73, 75, 77, 78, 79, 107, 129, 134]). Crepant resolutions of
Weierstrass models have the same Euler characteristic, and these have recently
been computed in [74].
Following F-theory, we can attach to a given elliptic fibration a Lie algebra
$\mathfrak{g}$, a representation $\mathbf{R}$ of $\mathfrak{g}$, and a
hyperplane arrangement ${\rm I}(\mathfrak{g},\mathbf{R})$. The Lie algebra
$\mathfrak{g}$ and the representation $\mathbf{R}$ are determined by the
fibers over codimension-1 and codimension-2 points, respectively, of the base
in the discriminant locus. The hyperplane arrangement ${\rm
I}(\mathfrak{g},\mathbf{R})$ is defined inside the dual fundamental Weyl
chamber of $\mathfrak{g}$ (i.e., the dual cone of the fundamental Weyl chamber
of $\mathfrak{g}$), and its hyperplanes are the set of kernels of the weights
of $\mathbf{R}$. Moreover, one may study the network of flops using the
hyperplane arrangement ${\rm I}(\mathfrak{g},\mathbf{R})$ inspired from the
theory of Coulomb branches of five-dimensional supersymmetric gauge theories
with eight supercharges [120].
The network of crepant resolutions is isomorphic to the network of chambers of
the hyperplane arrangement ${\rm I}(\mathfrak{g},\mathbf{R})$ defined by splitting
the dual fundamental Weyl chamber of the Lie algebra $\mathfrak{g}$ by the
hyperplanes dual to the weights of $\mathbf{R}$. The hyperplane arrangement
${\rm I}(\mathfrak{g},\mathbf{R})$, its relation to the Coulomb branches of
supersymmetric gauge theories and the network of crepant resolutions are
studied, among others, in [44, 57, 71, 72, 73, 75, 77, 78, 97, 107, 108, 120].
The representation $\mathbf{R}$ attached to an elliptic fibration can be
derived systematically using intersection theory [15]. Indeed, let $C$ be a
vertical curve, i.e., a curve contained in a fiber of the elliptic fibration.
Let $S$ be an irreducible component of the reduced discriminant of the
elliptic fibration $\varphi\colon Y\to B$. The pullback $\varphi^{*}S$ has
irreducible components $D_{0},D_{1},\ldots,D_{n}$, where $D_{0}$ is the
component touching the section of the elliptic fibration. The weight vector of
$C$ over $S$ is by definition the vector ${\varpi}_{S}(C)=(-D_{1}\cdot
C,\ldots,-D_{n}\cdot C)$ of intersection numbers $D_{i}\cdot C$ for
$i=1,\ldots,n$. To an elliptic fibration, we associate a representation
$\mathbf{R}$ of the Lie algebra $\mathfrak{g}$ as follows. The weight vectors
of the irreducible vertical rational curves of the fibers over codimension-2
points form a set $\Pi$ derived by intersection theory. The saturation of
$\Pi$ (by adding and subtracting roots) defines uniquely a representation
$\mathbf{R}$. This method due to Aspinwall and Gross [15, Section 4] explains
how the representation $\mathbf{R}$ can be deduced even in the presence of non-Kodaira fibers [133]. The method can be formalized using the notion of
saturation set of weights borrowed from Bourbaki [73, 75].
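As a simple illustration of this prescription (a schematic example with $\mathfrak{g}=\mathfrak{su}(2)$, stated with the sign conventions above), suppose the generic fiber over $S$ is of Kodaira type $I_{2}$, with components $D_{0}$ and $D_{1}$, and that over a codimension-2 point one of the fiber components degenerates into two rational curves $C$ and $C'$. If, say, $D_{1}\cdot C=-1$, then
$\displaystyle \varpi_{S}(C)=(-D_{1}\cdot C)=(1),$
which is a weight of the fundamental representation $\mathbf{2}$ of $\mathfrak{su}(2)$; saturating the set $\Pi=\{(1)\}$ by subtracting the root $(2)$ gives $\{(1),(-1)\}$, so that $\mathbf{R}=\mathbf{2}$, recovering the expected fundamental matter.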
One interesting property of the derivation of the representation $\mathbf{R}$
from intersection theory is that it does not assume the Calabi–Yau condition
nor relies on anomaly cancellations. Hence, from that point of view, the
representation attached to an elliptic fibration is purely geometric data of
the elliptic fibration that also controls aspects of its birational geometry
via the hyperplane arrangement ${\rm I}(\mathfrak{g},\mathbf{R})$. There are
subtleties in presence of exotic matter [7], when the component of the
discriminant supporting the gauge group is singular [124], in presence of a
non-trivial Mordell–Weil group [135], when the codimension-two fibers are non-
split [73, 75], or when the fibration is non-flat [131]. Although we
understand the structure of the hyperplane arrangement ${\rm
I}(\mathfrak{g},\mathbf{R})$ for most of the F-theory models with simple
groups (see for example [57, 71, 72, 73, 75, 107, 120]), the structure in
presence of semi-simple groups is still not well explored. This leads to the
following question.
###### Open Question 7.4.
What are the intersection properties of $($exotic$)$ representations appearing
in F-theory and the structure of their associated hyperplane arrangements?
### 7.3 Superconformal field theories in the context of F-theory
Finally, it should be noted that there is a rich array of open questions that
have arisen from the recent investigations into superconformal field theories
in the context of F-theory. The superconformal algebra is a graded Lie algebra
that combines the conformal Poincaré algebra and supersymmetry. Some of the
most basic data characterizing a superconformal field theory (SCFT) is the
number of spacetime dimensions in which it is defined, and the number of
supersymmetry generators and their chirality.
Recently, substantial interest has centered on six-dimensional SCFTs with
$(2,0)$ and $(1,0)$ supersymmetry. According to the seminal work of Werner
Nahm, SCFTs are only possible for spacetime dimensions 2, 3, 4, 5 and 6 [149].
In particular, the (2,0) theories in $d=6$ are the SCFTs with the maximal
amount of supersymmetry in the highest dimension [174]. The six-dimensional
superconformal field theories with (1,0) supersymmetry are among the least
understood quantum field theories; for example, they do not always have a
Lagrangian formulation [165, 179]. They are connected to questions in broad
areas such as Donaldson–Thomas theory of Calabi–Yau manifolds, modular and
automorphic forms [98, 102], singularities [54, 109, 110], quivers [3, 30,
123], and representation theory [2]. As they arise in F-theory and CY elliptic
fibrations, it is then natural to ask:
###### Open Question 7.5.
What is the geometry of elliptic fibrations used to model $(1,0)$ theories?
How is the conformal matter connected to the structure of Higgs bundles
appearing in F-theory?
The crepant resolutions of the singularities of CY elliptic fibrations
exhibiting SCFT loci provide a beautiful connection between the mechanism of
anomaly cancellation as seen in physics and topological quantities that have
been recently discovered. An understanding of the SCFT geometry must be linked
to the simplest building blocks of (1,0) theories, the so-called non-Higgsable clusters [109, 110, 143, 145]. First steps towards the analysis of the crepant resolutions of such SCFT loci (including so-called “matter transitions” in F-theory [7]) are already underway [53, 73, 75, 102, 131].
### Acknowledgements
The authors would like to thank the American Institute of Mathematics for the
support and hospitality which made the 2017 workshop on Singular Geometry and
Higgs Bundles in String Theory possible (and on which this survey of
ideas/open questions is based). In addition we would like to thank Steven
Rayan for helpful comments on the manuscript. The work of L.B. Anderson is
supported in part by NSF grant PHY-1720321 and is part of the working group
activities of the 4-VA initiative “A Synthesis of Two Approaches to String
Phenomenology”. M. Esole is supported in part by the National Science
Foundation (NSF) grant DMS-1701635 “Elliptic Fibrations and String Theory”.
The work of L.P. Schaposnik is partially supported by the NSF grant
DMS-1509693, and by the Alexander von Humboldt Foundation.
## References
* [1] Adams J., Strong real forms and the Kac classification, Atlas of Lie Groups and Representations, 2005, available at http://www.liegroups.org/papers/realforms.pdf.
Affiliations:
1. Department of Electronics and Communication Engineering, ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar-751030, Odisha, India. Email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>; https://sites.google.com/site/erkundanec/home
2. Department of Electrical Engineering, Sardar Vallabhbhai National Institute of Technology, Surat, Gujarat-395007, India. Email: <EMAIL_ADDRESS>
# Automated retinal vessel segmentation based on morphological preprocessing
and 2D-Gabor wavelets
Kundan Kumar$^{1}$ (corresponding author), Debashisa Samal$^{1}$, Suraj$^{2}$
###### Abstract
Automated segmentation of the vascular map in retinal images offers a potential benefit in the diagnostic procedure of different ocular diseases. In this paper, we propose a new unsupervised retinal blood vessel segmentation approach using top-hat transformation, contrast-limited adaptive histogram equalization (CLAHE), and 2-D Gabor wavelet filters. Initially, the retinal image is preprocessed using the top-hat morphological transformation followed by CLAHE to enhance only the blood vessel pixels in the presence of exudates, the optic disc, and the fovea. Then, multiscale 2-D Gabor wavelet filters are applied to the preprocessed image for a better representation of thick and thin blood vessels located at different orientations. The efficacy of the presented algorithm is assessed on the publicly available DRIVE database with manually labeled images. On the DRIVE database, we achieve an average accuracy of 94.32% with a small standard deviation of 0.0049. In comparison with major algorithms, our algorithm performs better in terms of accuracy, sensitivity, and kappa agreement.
###### Keywords:
Retinopathy, Blood vasculature, Retinal vessel segmentation, 2D-Gabor wavelet,
Top-hat transform.
## 1 Introduction
A change in the anatomical structure of the retinal blood vessels (vasculature) is a good indication of the presence of ophthalmic diseases, e.g., hypertension, cardiovascular diseases, diabetic retinopathy, and glaucoma [1, 2]. Vascular map segmentation of fundus images plays a decisive role in assessing changes in the vasculature and the severity of ocular diseases, and periodic screening of retinal images for early recognition of such changes can prevent major vision loss [2]. Therefore, automatic and accurate retinal vessel segmentation is a prerequisite for the initial diagnosis of retinal diseases. Extracting the vascular map from an unevenly illuminated and pigmented fundus image is a challenging problem. Besides the retinal blood vessels, the presence of other structures (e.g., exudates, optic disc, fovea, red lesions) over an unevenly illuminated and pigmented background makes vessel detection even more difficult. Moreover, retinal vessel thickness varies over a wide range, and thin vessels have low contrast, which makes their detection more challenging [2].
Several automatic vascular map segmentation methods for retinal fundus images have been proposed in the literature [3, 4, 5, 6, 7]. Fraz et al. have conducted a comprehensive literature survey on retinal blood vessel segmentation in [8]. Retinal blood vessel segmentation techniques are popularly classified as (i) supervised and (ii) unsupervised techniques. In the supervised category, the $k$-NN classifier [9], Artificial Neural Networks (ANN) [10], trainable COSFIRE filters [11], Support Vector Machines (SVM) [12], Extreme Learning Machines (ELM) [13], Deep Neural Networks (DNN) [6, 14], etc., have been explored, treating blood vessel segmentation as a pixel classification problem. Supervised algorithms rely on robust feature extraction followed by classification. In many approaches, line detectors [12], Gabor filters [1, 10], and gray-level co-occurrence matrix (GLCM) [15] based methods have been used for feature extraction: a feature vector is computed for each pixel, and a classifier labels each pixel as a vessel or non-vessel pixel. These approaches are time-consuming due to the training process. In the unsupervised category, by contrast, filtering methods [3, 5, 16], vasculature tracing methods [17], curvelet-based methods [18], and morphological operators [19] have been used. In these approaches, classification of vessel and non-vessel pixels is performed without a training process, i.e., training data do not contribute to finding the model parameters.
Among earlier reported works in the unsupervised category, the matched filter has received an enormous response from researchers due to its straightforward implementation. Matched-filter-based retinal vessel segmentation relies on a 2D kernel with a Gaussian profile, initially proposed by Chaudhuri et al. [3]. The kernel is rotated in $15^{\circ}$ increments, and the best filter output for each pixel is chosen to map blood vessels oriented at different angles. After that, thresholding is applied to obtain a binary vessel map image, and pruning is applied as post-processing to improve the final identification of blood vessels. Hoover et al. [4] have used local and region-based properties for vessel segmentation, where a threshold probing technique is applied to the matched filter response. However, the matched filter responds strongly to both vessel and non-vessel edges. Zhang et al. [5] have exploited the first-order derivative of Gaussian to improve the performance of the matched filter by eliminating non-vessel edges from retinal images. In the literature [17, 16], multiscale matched filters and their variants are suggested to identify blood vessels of different thicknesses. In [2], Zhao et al. have enhanced the vessels in retinal images by utilizing 2D-Gabor wavelet filters and contrast-limited adaptive histogram equalization (CLAHE); the results of a region growing method and a level set approach are then combined to obtain the final binary segmentation. However, this approach is unable to remove non-vessel structures and also takes a long processing time. Roychowdhury et al. [20] have performed an unsupervised iterative process to obtain the vessels using top-hat reconstruction followed by an iterative region growing method. Most approaches, like [5, 2], fail to remove the optic disc and exudates from pathological images, and many remove these structures in postprocessing at the extra cost of computational time. Thus, an automated unsupervised blood vessel segmentation approach is needed that correctly identifies blood vessel pixels with a small false positive rate while removing anatomical structures other than blood vessels with high accuracy and low complexity.
We propose an entirely unsupervised approach for the automatic segmentation of blood vessels to obtain the retinal vascular map. The significant contribution of this paper is the use of the top-hat transform followed by CLAHE for retinal image enhancement in the preprocessing step. The top-hat transform enhances only the blood vessels while simultaneously removing from the background the local intensity changes due to exudates, the optic disc, and the fovea. For further enhancement, CLAHE is applied to the top-hat transformed retinal image. The preprocessed retinal image is passed through a Gabor filter bank, and the maximum filter response is chosen for each pixel. Otsu thresholding is then applied as a global threshold to obtain a binary blood vessel map. The proposed technique is proficient in identifying blood vessels under uneven pigmentation and illumination conditions in the presence of exudates, the optic disc, and the fovea: the preprocessing step suppresses the non-vessel pixels, while the multiscale Gabor wavelet filters efficiently represent the thick and thin vessels in the retinal image. The efficacy of the presented technique is validated on the publicly available Digital Retinal Image for Vessels Extraction (DRIVE) database [9].
The rest of the paper is organized as follows. The DRIVE database and the proposed algorithm are discussed in Sections 2 and 3, respectively. Section 4 discusses the experimental results, and finally, the paper is concluded in Section 5.
## 2 Materials
We validate our proposed algorithm on the DRIVE database [9] for performance evaluation. The DRIVE database is publicly available, enabling comparative studies and experimental evaluation of vascular segmentation algorithms. Gold standard segmented images are provided with the database as manually labeled images. The DRIVE database contains 40 color retinal images, 20 in the training set and 20 in the test set. All images were captured by a Canon CR5 non-mydriatic 3CCD (charge-coupled device) camera with a $45^{\circ}$ field of view (FOV). Each color retinal image has a resolution of $565\times 584$ pixels with three channels (R, G, and B), each an 8-bit grayscale image. In this paper, we evaluate the presented algorithm on images from the test set. For the test set, two subsets of manually segmented images, set A and set B, are provided. Set A, the first observer's manual segmentations, is considered the gold standard and is used as ground truth for performance assessment. The primary objective of choosing the DRIVE database is to enable a comparative analysis of the presented work with the state-of-the-art techniques that have been evaluated on the same database.
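As an illustration, a single DRIVE test case can be loaded with the sketch below; the directory layout, file-name pattern, and the function name `load_drive_test_case` are assumptions based on the standard DRIVE distribution and may need adjusting for a particular copy of the data.

```python
from pathlib import Path
import numpy as np
from PIL import Image

def load_drive_test_case(root, index):
    """Load one DRIVE test image with its set-A ground truth and FOV mask.
    root: path to the extracted DRIVE folder; index: image number (1-20).
    The layout below is an assumption based on the standard distribution."""
    root = Path(root)
    img = np.array(Image.open(root / "test" / "images" / f"{index:02d}_test.tif"))
    gt = np.array(Image.open(root / "test" / "1st_manual" / f"{index:02d}_manual1.gif")) > 0
    mask = np.array(Image.open(root / "test" / "mask" / f"{index:02d}_test_mask.gif")) > 0
    return img, gt, mask
```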
## 3 Proposed method
### 3.1 Overview
In the presented work, an unsupervised blood vessel segmentation approach is
proposed. Fig. 1 presents the flow diagram of the proposed algorithm.
Figure 1: Flow diagram of the proposed algorithm
The principal idea of the proposed method is that, in a retinal image, vessels can be distinguished from other structures such as exudates, the optic disc, and the fovea during preprocessing. In many approaches, CLAHE is employed in preprocessing to boost the dynamic range of the retinal images while preventing the over-amplification of noise [2]. However, CLAHE improves the contrast of images by operating on local regions rather than globally, so vessel pixels are enhanced together with the other structures. Therefore, we first apply the top-hat transformation to the green channel of the color retinal fundus image, which intensifies only the blood vessel pixels and simultaneously suppresses the other structure pixels. After that, CLAHE is applied to take full advantage of its characteristics. In the second stage, a bank of 2D-Gabor wavelet filters is applied to the preprocessed image. Finally, the global Otsu thresholding method is used to obtain a binary segmented image.
### 3.2 Preprocessing
Initially, an original RGB color retinal image is split into three channels as
shown in Fig. 2.
Figure 2: Retinal image through FOV: (a) original RGB color image, (b) Red
channel, (c) green channel, and (d) blue channel.
Among these three channels, the green channel image has higher contrast than the other two. In the green channel image, the blood vessel pixels are clearly visible and easily distinguishable from the background pixels, because the eye's lens pigments absorb different light colors differently [21, 2]. The red vessels of the color retinal image are therefore most visible in the green channel image, as presented in Fig. 2, whereas the red channel image is the brightest and the blue channel image suffers from a poor dynamic range, as shown in Fig. 2.
Figure 3: Processing results at each intermediate steps of the proposed
algorithm. (a) Original test image, (b) Inverted green channel image, (c)
white top-hat transformed image of inverted green channel image, (d) CLAHE
processed image, (e) Gabor wavelet response, (f) Binary image after global
thresholding.
In the green channel image ($I_{G}$), the blood vessel pixels appear dark because their intensities are close to 0. The image $I_{G}$ is inverted and then masked with the fundus mask ($M$) to make the blood vessel pixels brighter and to keep the focus on the region of interest (FOV). The inverted green channel image through the fundus mask is shown in Fig. 3. To enhance only the vessels, the white top-hat transformation is applied to the inverted green channel image ($I^{\prime}_{G}$). Usually, blood vessels are thinner than the other structures in retinal images; therefore, a circular structuring element with a diameter at least equal to that of the thickest blood vessel is preferred for the top-hat transformation. The white top-hat transform is defined as
$T_{w}(I^{\prime}_{G})=I^{\prime}_{G}-I^{\prime}_{G}\circ b,$ (1)
where $T_{w}$ is the transformed image of $I^{\prime}_{G}$ using the structuring element $b$, and $\circ$ denotes the morphological opening operator. In image processing, the top-hat transformation is a morphological operation that highlights objects smaller than the structuring element [22]. The diameter of the structuring element is chosen to be 11 pixels, as the maximum width of a blood vessel is less than 11 pixels; for the DRIVE database, blood vessel width varies in the range of 1-10 pixels [23]. The diameter of the structuring element may differ for other databases with different image resolutions.
Since even the widest blood vessels are narrower than the other structures, such as exudates, the fovea, and the optic disc, those structures do not appear in the top-hat transformed image ($T_{w}$), as shown in Fig. 3, and a homogeneous background is obtained. However, after the top-hat transformation, the blood vessel pixels still do not achieve good contrast against the background. Therefore, the CLAHE technique is adopted for further enhancement of the processed retinal image. Fig. 3 shows the CLAHE response ($I_{c}$) of the top-hat transformed image. CLAHE also amplifies the background noise, though. For an informative representation of blood vessels of different thicknesses and orientations, the retinal image is then processed through multiscale Gabor wavelet filters at different frequencies and orientations, which also smooth the background noise.
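To make the preprocessing concrete, the sketch below implements this stage with OpenCV. It is a minimal illustration: the function name `preprocess` and the CLAHE settings (clip limit 2.0, $8\times 8$ tiles) are assumptions, while the 11-pixel circular structuring element follows the text.

```python
import cv2

def preprocess(rgb_image, fov_mask):
    """Top-hat + CLAHE preprocessing sketch (Section 3.2).
    rgb_image: HxWx3 uint8 fundus image (channel 1 is the green channel).
    fov_mask: HxW uint8 FOV mask (0 outside, 255 inside)."""
    green = rgb_image[:, :, 1]                       # green channel I_G
    inverted = cv2.bitwise_not(green)                # vessels become bright
    inverted = cv2.bitwise_and(inverted, inverted, mask=fov_mask)  # restrict to FOV

    # Circular structuring element, 11-pixel diameter (DRIVE vessels are 1-10 px wide)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
    # White top-hat of Eq. (1): I' minus its morphological opening by b
    tophat = cv2.morphologyEx(inverted, cv2.MORPH_TOPHAT, se)

    # CLAHE for further contrast enhancement (clip limit and tile size are assumed)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(tophat)
```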
### 3.3 2D-Gabor wavelet filter bank
The 2D-Gabor wavelet transformation is a tool for a complete representation of an image in terms of radial frequency and orientation [24]. To highlight blood vessels of different widths placed at different orientations, we apply a bank of 2D-Gabor wavelet filters to the preprocessed retinal image. Daugman et al. [25] have proposed that the ensemble of simple cells in the visual cortex can be represented as a family of 2D-Gabor wavelets. The decomposition of an image $f=I_{c}$ by the wavelet transform is defined as
$({T^{wav}}f)(a,\theta,x_{0},y_{0})=\left\|a\right\|^{-1}\iint dx\,dy\,f(x,y)\,\psi_{\theta}\left(\frac{x-x_{0}}{a},\frac{y-y_{0}}{a}\right),$
(2)
where $\theta$ is the orientation parameter of the wavelet, and $a$ is the
parameter that defines the standard deviation in $x$ and $y$ directions. In
equation (2),
${\psi_{\theta}}(a,x,y,{x_{0}},{y_{0}})={\left\|a\right\|^{-1}}{\psi_{\theta}}\left({\frac{{x-{x_{0}}}}{a},\frac{{y-{y_{0}}}}{a}}\right)$
(3)
represents the elementary function of the 2D wavelet rotated by an angle
$\theta$. Using the Gabor elementary function, the entire family of Gabor
wavelets can be generated. Lee et al. [24] have derived a specific class of
2D-Gabor wavelets which is used in this paper to obtain a set of Gabor
wavelets to process the retinal images. The Gabor wavelet satisfying the
neurophysiological constraints of simple cells is defined as
$\displaystyle\psi(x,y,\omega_{0},\theta,K)=$
$\displaystyle\frac{\omega_{0}}{\sqrt{2\pi}K}\exp\left(-\frac{\omega_{0}^{2}}{8K^{2}}\left(4(x\cos\theta+y\sin\theta)^{2}+(-x\sin\theta+y\cos\theta)^{2}\right)\right)$
(4)
$\displaystyle\cdot\left[\exp\left\{i\omega_{0}\left(x\cos\theta+y\sin\theta\right)\right\}-\exp\left(-\tfrac{K^{2}}{2}\right)\right]$
where $\theta$ denotes the wavelet orientation in radians and $\omega_{0}$ is the radial frequency in radians per unit length. The constant $K$ controls the frequency bandwidth: $K=\pi$ corresponds to a bandwidth of one octave, and $K\approx 2.5$ to a bandwidth of 1.5 octaves. Each Gabor wavelet filter at radial frequency $\omega_{0}$ and orientation $\theta$ is centered at $(x=0,y=0)$ and normalized by its $L^{2}$ norm.
For each pixel in the retinal image, the maximum Gabor wavelet response over all filters is stored in the filtered image. If $T_{\psi}(\omega_{0},\theta,K)$ denotes the transformed retinal image at radial frequency $\omega_{0}$ and orientation $\theta$, the Gabor wavelet transformation result is obtained as
$P(K)=\max_{\omega_{0},\theta}\left|T_{\psi}(\omega_{0},\theta,K)\right|,$
(5)
where $\theta$ is varied in the range $[0^{\circ},180^{\circ})$ at equal intervals of $20^{\circ}$, and the radial frequency is varied over $[0.7,1.5)$ at intervals of $0.2$. Lee et al. [24] have suggested choosing $K$ in the range $[2,2.5]$; in this paper, $K=2.2$ is selected to achieve better distinguishability between background and vessels. All these parameters were selected by performing a few experiments on retinal images and Gabor wavelet filters. Fig. 4 shows a few Gabor wavelet filters from the filter bank for different radial frequencies at an orientation of $50^{\circ}$. The Gabor wavelet filter at $\omega_{0}=0.7$ is efficient for detecting thick blood vessels, whereas $\omega_{0}=1.3$ is suitable for detecting thin vessels. The Gabor wavelet response of the preprocessed image is shown in Fig. 3.
(a) $\omega_{0}=0.7$
(b) $\omega_{0}=0.9$
(c) $\omega_{0}=1.1$
(d) $\omega_{0}=1.3$
Figure 4: Gabor wavelet filters at $50^{\circ}$ orientation for four different radial frequencies
The hard-segmented binary image illustrated in Fig. 3 is obtained by applying Otsu thresholding to the filtered image $P$.
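A minimal sketch of this filtering stage is given below, taking the preprocessed image of Section 3.2 as input. The frequencies, orientations, and $K=2.2$ follow the text, while the kernel size ($25\times 25$) and the helper names are illustrative assumptions: Eq. (4) is sampled on a discrete grid, each kernel is $L^{2}$-normalized, and Eq. (5) takes the per-pixel maximum magnitude before Otsu thresholding.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import threshold_otsu

def gabor_kernel(omega0, theta, K=2.2, size=25):
    """Discrete 2D-Gabor wavelet of Eq. (4), L2-normalized (size is an assumed value)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    u = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinates
    v = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(omega0**2 / (8 * K**2)) * (4 * u**2 + v**2))
    carrier = np.exp(1j * omega0 * u) - np.exp(-K**2 / 2)  # zero-DC complex carrier
    psi = (omega0 / (np.sqrt(2 * np.pi) * K)) * envelope * carrier
    return psi / np.linalg.norm(psi)                 # L2 normalization

def gabor_response(image, freqs=(0.7, 0.9, 1.1, 1.3), K=2.2):
    """Per-pixel maximum magnitude over the filter bank, Eq. (5)."""
    image = image.astype(np.float64)
    response = np.zeros_like(image)
    for omega0 in freqs:
        for theta_deg in range(0, 180, 20):          # orientations 0..160 degrees
            kernel = gabor_kernel(omega0, np.deg2rad(theta_deg), K)
            mag = np.abs(fftconvolve(image, kernel, mode="same"))
            response = np.maximum(response, mag)
    return response

def segment(preprocessed):
    """Normalize the filter-bank response to [0, 1], then global Otsu thresholding."""
    p = gabor_response(preprocessed)
    p = (p - p.min()) / (p.max() - p.min() + 1e-12)
    return p > threshold_otsu(p)
```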
## 4 Results and Discussions
The presented algorithm is evaluated and compared with competing algorithms using five different metrics: accuracy ($Acc$), sensitivity ($Se$), specificity ($Sp$), kappa agreement ($\kappa$), and area under the curve ($A_{z}$). All the metrics are computed using only pixels inside the FOV. Accuracy, sensitivity, and specificity are calculated from the false positive ($X$), false negative ($Y$), true positive ($Z$), and true negative ($W$) counts. $Z$ denotes the number of pixels correctly identified as vessel pixels, and $X$ denotes the number of pixels belonging to the background but wrongly identified as vessel pixels. $W$ represents the number of pixels correctly identified as background pixels, and $Y$ represents the number of pixels belonging to a vessel but incorrectly assigned to the background. The evaluation metrics are computed as
$Acc=\frac{Z+W}{Z+X+Y+W},\qquad Se=\frac{Z}{Z+Y},$ (6)
$Sp=\frac{W}{X+W},\qquad\kappa=\frac{p_{o}-p_{e}}{1-p_{e}},$
(7)
where $p_{e}$ is the hypothetical probability of chance agreement and $p_{o}$ is the relative observed agreement. These agreement values are calculated from the observed data by estimating the probability of each observer randomly assigning each class. The $\kappa$ value varies in the range $[0,1]$, where $\kappa=0$ corresponds to no agreement between the two raters and $\kappa=1$ to complete agreement. To compute $A_{z}$, the receiver operating characteristic (ROC) curve is obtained by varying a global threshold between 1 and 0 in steps of 0.01; the global threshold divides the image into a binary image with labels 0 and 1. For each threshold, the false positive rate ($XR=1-Sp$) and the true positive rate ($ZR=Se$) are obtained by comparing the segmented binary image with the corresponding ground truth. Before applying the global threshold, the Gabor wavelet response values are normalized to the 0-1 range.
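As an illustration, these pixel-wise metrics can be computed from binary masks restricted to the FOV with the short NumPy sketch below; the function name `evaluate` is an assumption.

```python
import numpy as np

def evaluate(pred, truth):
    """Acc, Se, Sp, and Cohen's kappa from binary masks (FOV pixels only), Eqs. (6)-(7)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    Z = np.sum(pred & truth)        # true positives
    X = np.sum(pred & ~truth)       # false positives
    W = np.sum(~pred & ~truth)      # true negatives
    Y = np.sum(~pred & truth)       # false negatives
    n = Z + X + Y + W
    acc = (Z + W) / n
    se = Z / (Z + Y)
    sp = W / (X + W)
    p_o = acc                       # observed agreement
    # Chance agreement p_e from the marginal class probabilities of both raters
    p_e = ((Z + Y) * (Z + X) + (W + X) * (W + Y)) / n**2
    kappa = (p_o - p_e) / (1 - p_e)
    return acc, se, sp, kappa
```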
All intermediate results and the segmented output of the presented algorithm for a retinal image from the DRIVE database are illustrated in Fig. 3, and the segmentation performance of the presented algorithm on the DRIVE test set is listed in Table 1.
Table 1: Segmentation outcome of the presented work on the DRIVE test database.

Image | Se | Sp | Acc | $A_{z}$ | $\kappa$
---|---|---|---|---|---
1 | 0.8337 | 0.9592 | 0.9427 | 0.9687 | 0.7593
2 | 0.7781 | 0.9767 | 0.9469 | 0.9602 | 0.7837
3 | 0.7583 | 0.9639 | 0.9339 | 0.9432 | 0.7312
4 | 0.6619 | 0.9874 | 0.9440 | 0.9443 | 0.7282
5 | 0.7010 | 0.9837 | 0.9453 | 0.9468 | 0.7460
6 | 0.6975 | 0.9784 | 0.9388 | 0.9371 | 0.7279
7 | 0.7068 | 0.9737 | 0.9383 | 0.9431 | 0.7171
8 | 0.7111 | 0.9721 | 0.9393 | 0.9395 | 0.7120
9 | 0.7289 | 0.9771 | 0.9480 | 0.9525 | 0.7378
10 | 0.7021 | 0.9816 | 0.9483 | 0.9507 | 0.7354
11 | 0.7378 | 0.9660 | 0.9365 | 0.9412 | 0.7142
12 | 0.7936 | 0.9618 | 0.9408 | 0.9517 | 0.7364
13 | 0.6907 | 0.9798 | 0.9388 | 0.9481 | 0.7273
14 | 0.8094 | 0.9612 | 0.9433 | 0.9581 | 0.7386
15 | 0.7800 | 0.9672 | 0.9477 | 0.9566 | 0.7268
16 | 0.7225 | 0.9787 | 0.9451 | 0.9632 | 0.7440
17 | 0.7440 | 0.9684 | 0.9407 | 0.9451 | 0.7221
18 | 0.7920 | 0.9629 | 0.9432 | 0.9613 | 0.7300
19 | 0.8476 | 0.9713 | 0.9564 | 0.9729 | 0.7991
20 | 0.8080 | 0.9623 | 0.9459 | 0.9638 | 0.7306
Average | 0.7503 | 0.9717 | 0.9432 | 0.9524 | 0.7374
Std. deviation | 0.0499 | 0.0081 | 0.0049 | 0.0098 | 0.0206
Se - sensitivity, Sp - specificity, Acc - accuracy, $A_{z}$ - area under the ROC curve, $\kappa$ - kappa agreement
The performance metrics show that the average segmentation accuracy, sensitivity, and specificity of the presented technique are 94.32%, 75.03%, and 97.17%, respectively, with a small deviation in accuracy ($\sigma=0.0049$). In the DRIVE test set, image 8 is a pathological image containing exudates, on which we achieve 93.93% accuracy with a true positive rate of 0.7111, whereas Zhao et al. [2] report 93.59% accuracy with a true positive rate of 0.6524 on the same image. The segmented blood vessels of image 8 are presented in Fig. 5, confirming that the proposed algorithm is also efficient in segmenting blood vessels in pathological images. An average area under the curve $A_{z}$ of 0.9524 is obtained, with maximum and minimum values of 0.9729 and 0.9371, respectively. On the DRIVE test set, we achieve an average kappa agreement of 0.7374 with a standard deviation of 0.0206.
A comparative analysis of the presented algorithm with competing methods is given in Table 2 in terms of average accuracy, sensitivity, specificity, $A_{z}$, and $\kappa$ agreement. The metric values for the different methods are taken from the respective papers. The numerical results show that our proposed algorithm outperforms many unsupervised methods on the DRIVE dataset in terms of accuracy and sensitivity. Furthermore, the proposed algorithm is better than a few supervised techniques on the DRIVE test set in terms of sensitivity and specificity.
Table 2: Comparative analysis of the presented work with existing approaches on the DRIVE test set with respect to the gold standard ground truth.

Methods | Se | Sp | Acc ($\sigma$) | $A_{z}$ | $\kappa$
---|---|---|---|---|---
Supervised Methods
Soares et al. [1] | 0.7230 | 0.9762 | 0.9466 (-) | 0.9614 | -
Staal et al. [9] | 0.7194 | 0.9773 | 0.9441 (-) | 0.9520 | -
Ricci et al. [12] | - | - | 0.9595 (-) | 0.9558 | -
Lahiri et al. [26] | - | - | 0.9530 (0.0030) | - | 0.7090
Unsupervised Methods
2nd observer | 0.7760 | 0.9725 | 0.9473 (-) | - | 0.6970
Chaudhuri et al.[3]∗ | - | - | 0.8773 (0.0232) | 0.7878 | 0.3357
Zana et al. [27]∗ | - | - | 0.9377 (0.0077) | 0.8984 | 0.6971
Martinez-Parez et al. [28] | 0.7246 | 0.9655 | 0.9344 (-) | - | -
Zhang et al. [5] | 0.7120 | 0.9724 | 0.9382 (-) | - | -
Miri et al. [18] | 0.7352 | 0.9795 | 0.9458 (-) | - | -
Zhao et al. [2] | 0.7354 | 0.9789 | 0.9477 (-) | - | -
Gou et al. [7] | 0.7526 | 0.9669 | 0.9393 (-) | - | -
Presented method | 0.7503 | 0.9717 | 0.9432 (0.0049) | 0.9524 | 0.7374
∗results are taken from [29]
Fig. 5 shows the segmentation results for four retinal images selected from the DRIVE test set. Three of them (Fig. 5(a)-(c)) are healthy retinal images with optic disc and fovea, whereas Fig. 5(d) is a pathological image with exudates, optic disc, and fovea. It can be observed that the proposed algorithm is competent in identifying thin as well as thick vessels in the presence of exudates, fovea, and optic disc, and that it preserves the connectivity of the blood vessels. The segmentation performance could be further improved by using a local adaptive thresholding technique instead of global thresholding to detect vessels 1-2 pixels thick.
Figure 5: Segmentation results for retinal images from the DRIVE database. First row: original images; second row: gold standard ground truth; third row: vessels segmented using the proposed algorithm. (a-c) Healthy retinal images with different pigmentation and illumination; (d) pathological retinal image.
## 5 Conclusions
Blood vessel segmentation in fundus images plays a vital role in detecting different ocular diseases. Distinguishing blood vessels from other structures such as exudates, the optic disc, and the fovea under uneven illumination conditions is difficult. In this paper, we have proposed an entirely unsupervised approach for blood vessel segmentation in retinal images and validated it on the DRIVE database. In preprocessing, the retinal image is enhanced in two steps. In the first step, the white top-hat transform is applied to enhance only the blood vessels while suppressing all other structures that are larger than the structuring element; in the next step, the processed image is further enhanced using the CLAHE algorithm. After preprocessing, multiscale Gabor wavelet filters are applied to emphasize the thick and thin vessels in the retinal image, and a global threshold is obtained from the filtered image using Otsu's thresholding technique. Experimental results clearly indicate that the proposed technique efficiently recognizes the blood vessels in the presence of exudates, fovea, and optic disc, with an average accuracy of 94.32% and a small standard deviation of 0.0049. Moreover, the suggested algorithm is simple and easy to implement.
The outcome of the proposed technique depends on the diameter selected for the structuring element of the top-hat transform. If other structures (such as small microaneurysms) look like vessels and are thinner than the structuring element, the proposed method may fail to remove them from the segmented image. In future work, shape features could be incorporated with the Gabor wavelet response to discard such small structures. Furthermore, testing should be performed on other retinal databases such as STARE and CHASE_DB1 to assess the robustness of the proposed method.
## References
* [1] Soares, J.V., Leandro, J.J., Cesar, R.M., Jelinek, H.F., Cree, M.J.: Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Transactions on medical Imaging 25(9), 1214–1222 (2006)
* [2] Zhao, Y.Q., Wang, X.H., Wang, X.F., Shih, F.Y.: Retinal vessels segmentation based on level set and region growing. Pattern Recognition 47(7), 2437–2446 (2014)
* [3] Chaudhuri, S., Chatterjee, S., Katz, N., Nelson, M., Goldbaum, M.: Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Transactions on medical imaging 8(3), 263–269 (1989)
* [4] Hoover, A., Kouznetsova, V., Goldbaum, M.: Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Transactions on Medical imaging 19(3), 203–210 (2000)
* [5] Zhang, B., Zhang, L., Zhang, L., Karray, F.: Retinal vessel extraction by matched filter with first-order derivative of Gaussian. Computers in biology and medicine 40(4), 438–445 (2010)
* [6] Liskowski, P., Krawiec, K.: Segmenting retinal blood vessels with deep neural networks. IEEE transactions on medical imaging 35(11), 2369–2380 (2016)
* [7] Gou, D., Wei, Y., Fu, H., Yan, N.: Retinal vessel extraction using dynamic multi-scale matched filtering and dynamic threshold processing based on histogram fitting. Machine Vision and Applications 29(4), 655–666 (2018)
* [8] Fraz, M.M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Rudnicka, A.R., Owen, C.G., Barman, S.A.: Blood vessel segmentation methodologies in retinal images–a survey. Computer methods and programs in biomedicine 108(1), 407–433 (2012)
* [9] Staal, J., Abràmoff, M.D., Niemeijer, M., Viergever, M.A., Van Ginneken, B.: Ridge-based vessel segmentation in color images of the retina. IEEE transactions on medical imaging 23(4), 501–509 (2004)
* [10] Franklin, S.W., Rajan, S.E.: Retinal vessel segmentation employing ANN technique by Gabor and moment invariants-based features. Applied Soft Computing 22, 94–100 (2014)
* [11] Azzopardi, G., Strisciuglio, N., Vento, M., Petkov, N.: Trainable COSFIRE filters for vessel delineation with application to retinal images. Medical image analysis 19(1), 46–57 (2015)
* [12] Ricci, E., Perfetti, R.: Retinal blood vessel segmentation using line operators and support vector classification. IEEE transactions on medical imaging 26(10), 1357–1365 (2007)
* [13] Zhu, C., Zou, B., Zhao, R., Cui, J., Duan, X., Chen, Z., Liang, Y.: Retinal vessel segmentation in colour fundus images using extreme learning machine. Computerized Medical Imaging and Graphics 55, 68–77 (2017)
* [14] Sadek, I., Elawady, M., Shabayek, A.E.R.: Automatic classification of bright retinal lesions via deep network features. arXiv preprint arXiv:1707.02022 (2017)
* [15] Rahebi, J., Hardalaç, F.: Retinal blood vessel segmentation with neural network by using gray-level co-occurrence matrix-based features. Journal of medical systems 38(8), 85 (2014)
* [16] Li, Q., You, J., Zhang, D.: Vessel segmentation and width estimation in retinal images using multiscale production of matched filter responses. Expert Systems with Applications 39(9), 7600–7610 (2012)
* [17] Sofka, M., Stewart, C.V.: Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures. IEEE transactions on medical imaging 25(12), 1531–1546 (2006)
* [18] Miri, M.S., Mahloojifar, A.: Retinal image analysis using curvelet transform and multistructure elements morphology by reconstruction. IEEE Transactions on Biomedical Engineering 58(5), 1183–1192 (2011)
* [19] Hassan, G., El-Bendary, N., Hassanien, A.E., Fahmy, A., Snasel, V., et al.: Retinal blood vessel segmentation approach based on mathematical morphology. Procedia Computer Science 65, 612–622 (2015)
* [20] Roychowdhury, S., Koozekanani, D.D., Parhi, K.K.: Iterative vessel segmentation of fundus images. IEEE Transactions on Biomedical Engineering 62(7), 1738–1749 (2015)
* [21] Walter, T., Massin, P., Erginay, A., Ordonez, R., Jeulin, C., Klein, J.C.: Automatic detection of microaneurysms in color fundus images. Medical image analysis 11(6), 555–566 (2007)
* [22] Dougherty, E.R., Lotufo, R.A.: Hands-on morphological image processing, vol. 59. SPIE press (2003)
* [23] Fathi, A., Naghsh-Nilchi, A.R.: Automatic wavelet-based retinal blood vessels segmentation and vessel diameter estimation. Biomedical Signal Processing and Control 8(1), 71–80 (2013)
* [24] Lee, T.S.: Image representation using 2D Gabor wavelets. IEEE Transactions on pattern analysis and machine intelligence 18(10), 959–971 (1996)
* [25] Daugman, J.G.: Complete discrete 2D Gabor transforms by neural networks for image analysis and compression. IEEE Transactions on acoustics, speech, and signal processing 36(7), 1169–1179 (1988)
* [26] Lahiri, A., Roy, A.G., Sheet, D., Biswas, P.K.: Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography. In: Engineering in Medicine and Biology Society (EMBC), 2016 IEEE 38th Annual International Conference of the. pp. 1340–1343. IEEE (2016)
* [27] Zana, F., Klein, J.C.: Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation. IEEE transactions on image processing 10(7), 1010–1019 (2001)
* [28] Martinez-Perez, M.E., Hughes, A.D., Thom, S.A., Bharath, A.A., Parker, K.H.: Segmentation of blood vessels from red-free and fluorescein retinal images. Medical image analysis 11(1), 47–61 (2007)
* [29] Niemeijer, M., Staal, J., van Ginneken, B., Loog, M., Abramoff, M.D.: Comparative study of retinal vessel segmentation methods on a new publicly available database. In: Medical Imaging 2004: Image Processing. vol. 5370, pp. 648–657. International Society for Optics and Photonics (2004)
Affiliation: ImViA EA7535, University Burgundy Franche-Comté, 21078 Dijon, France. Email: <EMAIL_ADDRESS>
# CEN-HDR: Computationally Efficient neural Network for real-time High Dynamic Range imaging
Steven Tel (ORCID: 0000-0002-1487-9381), Barthélémy Heyrman (ORCID: 0000-0003-1642-8311), Dominique Ginhac (ORCID: 0000-0002-5911-2010)
###### Abstract
High dynamic range (HDR) imaging is still a challenging task in modern digital
photography. Recent research proposes solutions that provide high-quality
acquisition but at the cost of a very large number of operations and a slow
inference time that prevent the implementation of these solutions on
lightweight real-time systems. In this paper, we propose CEN-HDR, a new
computationally efficient neural network by providing a novel architecture
based on a light attention mechanism and sub-pixel convolution operations for
real-time HDR imaging. We also provide an efficient training scheme by
applying network compression using knowledge distillation. We performed
extensive qualitative and quantitative comparisons to show that our approach
produces competitive results in image quality while being faster than state-
of-the-art solutions, allowing it to be practically deployed under real-time
constraints. Experimental results show our method obtains a score of 43.04
$\mu$-PSNR on the Kalantari2017 dataset with a framerate of 33 FPS using a
Macbook M1 NPU. The proposed network will be available at
https://github.com/steven-tel/CEN-HDR
###### Keywords:
High Dynamic Range Imaging, Efficient computational photography
## 1 Introduction
In the last decades, applications based on computer vision have become
increasingly important in everyday life. Currently, many research works are
conducted to propose more reliable algorithms in areas such as object
detection, action recognition, or scene understanding. However, the accuracy
of these algorithms depends largely on the quality of the acquired images.
Most standard cameras are unable to faithfully reproduce the illuminations
range of a natural scene, as the limitations of their sensors generate a loss
of structural or textural information in under-exposed and over-exposed
regions of the acquired scene. To tackle this challenge, sensors with a higher
dynamic range (HDR) have been proposed [17, 26] to capture more intensity
levels of the scene illumination, but these solutions are expensive,
preventing high dynamic range acquisition from being readily available.
Towards making HDR imaging practical and accessible, software solutions were
proposed, based on the emergence of deep learning in computer vision
applications. They acquire one Low Dynamic Range (LDR) image and try to
expand its dynamic range thanks to a generative adversarial network [12, 37,
8]. Although these methods produce images with a higher illumination range,
they have limitations in extending the dynamic range of the input image to
that of the acquired scene. A more effective approach is to acquire
multiple LDR images with different exposure times and merge them into one
final HDR image. Traditional computer vision algorithms [3] allow the
acquisition of good-quality static scenes when there is no camera or object
motion between images with different exposure times. However, in a lot of use
cases, images are captured in a rapid sequence from a hand-held device
resulting in inevitable misalignments between low dynamic range shots.
Scenes with motion therefore introduce new challenges, such as ghost-like
artifacts in large motion regions or loss of detail in occluded regions. Following
recent advances in the field of deep learning, several methods based on
Convolutional Neural Network (CNN) were proposed to spatially align input
frames to a reference one before merging them into a final HDR image.
State-of-the-art deep learning solutions for multi-frame merging HDR [34, 13]
tend to be based on a previously proposed method [30] and add additional
processing to increase the accuracy of the HDR merging system. As a
consequence, the computational cost and execution time are significantly
increased, preventing these solutions from being used in lightweight systems
and/or in real-time applications. The primary goal of software HDR solutions,
which was to make HDR imaging more widely available than hardware solutions,
is therefore not achieved. In this paper, we propose a Computationally
Efficient neural Network for High Dynamic Range imaging (CEN-HDR). CEN-HDR is
based on an encoder-decoder neural network architecture for generating
ghost-free HDR images from scenes with large foreground and camera movements.
Unlike previously published solutions, we developed our approach keeping in
mind the constraints of inference time and computational cost. Our
contributions are as follows:
1. We propose CEN-HDR, a novel efficient convolutional neural network based on a new attention mechanism and sub-pixel convolution that overcomes ghost-like artifacts and occluded regions while keeping a low computational cost, allowing our solution to run in real time on a lightweight system.
2. We demonstrate the efficiency of network compression for the realization of CEN-HDR by applying a knowledge distillation scheme.
3. We perform extensive experiments to determine the best trade-off between accuracy and inference cost, with the main objective of demonstrating the relevance of CEN-HDR.
## 2 Related Works
We briefly summarize existing HDR merging approaches into two categories: deep
learning-based architectures and efficient learning-based architectures. In
the first category, the proposed methods aim to achieve better quality in HDR
imaging without taking inference cost into account. Approaches belonging to
the second category seek to optimize the compromise between the quality of the
generated images and the computation cost. This leads to the use of new
operators in the proposed deep learning architectures.
Figure 1: Architecture of the proposed CEN-HDR solution. The spatial size of
input features is divided by 2 at the encoding step. The attention module
allows registering non-reference features to the reference ones. The full
spatial size is recovered thanks to the pixel shuffle operation.
### 2.1 Deep learning based HDR merging
Using multiple input images for HDR generation leads to the need to align the
features of the LDR images to the reference image. The first common method for
feature registration is to compute the motion between input features using
an optical flow algorithm. Multiple studies [31, 7, 22] used the optical
flow algorithm of Liu [11]. In Kalantari et al.[7], the input images are aligned by
selecting the image with better pixels as a reference and computing the
optical flow between this reference and other input LDR images. Then, the
warped images are fed to a supervised convolutional neural network (CNN) to
merge them into an HDR image. However, since the optical flow algorithm
initially assumes that the input images have the same exposure time, trying to
warp the different exposures with occluded regions can result in artifacts in
the final HDR image. To address this issue, DeepHDR[28] proposes an image
translation network able to hallucinate information in the occluded regions
without the need for optical flow. Moreover, many solutions have been
developed to correct the ghost effect introduced by the misalignment of the
input images. In AHDRNet[30] an attention module is proposed to emphasize the
alignment of the input images to the reference image. Input images are then
merged using several dilated residual dense blocks. The high performance of
the AHDRNet network led it to be used as a base network for other methods. For
example, ADNet[13] follows the same main architecture as AHDRNet but adds a
pyramidal alignment module based on deformable convolutions, allowing a better
representation of the extracted features at the cost of a larger number of
operations.
### 2.2 Efficient learning-based HDR merging architectures
To the best of our knowledge, the first architecture aiming at efficiency was
proposed by Prabhakar et al.[21], which processes low-resolution images and
upscales the result to the original full resolution thanks to a bilateral
guided upsampling module. Recently, the HDR community has tended to focus more
on efficient HDR image generation[20], no longer aiming only at improving
image quality but also at significantly limiting the number of processing
operations. This results in efficient solutions such as GSANet[9], which proposes
efficient ways to process gamma projections of input images with spatial and
channel attention blocks to increase image quality while limiting the number
of parameters. Yu et al.[36] introduce a multi-frequency lightweight encoding
module to extract features and a progressive dilated u-shape block for
feature merging. Moreover, the standard convolution operations are replaced by
depth-wise separable convolutions, first proposed in [2]; these are composed
of a depth-wise convolution followed by a pointwise convolution, which allows
more efficient use of model parameters. Another efficient method, proposed by
Yan et al.[33], is a lightweight network based on a U-Net[23]-like
encoder-decoder architecture, allowing the processed features to be spatially
reduced. While these solutions focus on the number of performed operations,
their inference time still remains too long for them to be considered
real-time solutions.
## 3 Proposed Method
We consider three LDR images $I_{i}\in\mathbb{R}^{3\times H\times W}$ with
their respective exposure times $t_{i}$ as inputs. The generated HDR image is
spatially aligned with the central LDR frame $I_{2}$ selected as the reference
image. To make our solution more robust to exposure difference between inputs,
the respective projection of each LDR input frame into the HDR domain is
calculated using the gamma encoding function described in Eq. 1, following
previous works [13, 18, 30]:
$H_{i}=\frac{I_{i}^{\gamma}}{t_{i}},\quad\gamma=2.2$ (1)
Where $H_{i}\in\mathbb{R}^{3\times H\times W}$ is the gamma-projected input.
Then, each input is concatenated with their respective gamma-projection to
obtain $L_{i}\in\mathbb{R}^{6\times H\times W}$ :
$L_{i}=I_{i}\oplus H_{i}$ (2)
where $\oplus$ represents the concatenation operation. $L_{i}$ will then be
fed to our proposed merging network whose architecture is detailed in Fig. 1.
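For concreteness, Eqs. 1 and 2 can be sketched in a few lines of PyTorch; the tensor shapes follow the paper, while the function name and exposure values are illustrative assumptions.

```python
import torch

def gamma_project(ldr: torch.Tensor, t: float, gamma: float = 2.2) -> torch.Tensor:
    # Eq. 1: H_i = I_i^gamma / t_i, projecting an LDR frame into the HDR domain.
    return ldr.pow(gamma) / t

# Three LDR frames of shape (B, 3, H, W) with hypothetical exposure times t_i.
ldr_frames = [torch.rand(1, 3, 256, 256) for _ in range(3)]
exposures = [0.25, 1.0, 4.0]

# Eq. 2: concatenate each frame with its gamma projection -> (B, 6, H, W).
inputs = [torch.cat([I, gamma_project(I, t)], dim=1)
          for I, t in zip(ldr_frames, exposures)]
```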
### 3.1 Feature encoding
Using high-resolution images as inputs presents an additional challenge in the
design of a real-time HDR merging network. To solve this problem, previous
works [29, 33] propose to use a U-Net[23]-like architecture to reduce the
spatial size of the features processed by the merging network. However, too
large a reduction of the spatial dimensions causes the extraction of coarse
features that degrades the final result. We therefore limit the spatial
reduction to a factor of 2, using an encoder block composed of 2 sequential
convolutions as described in Eq. 3:
$F_{i}=conv_{E_{2}}(conv_{E_{1}}(L_{i}))$ (3)
where $F_{i}\in\mathbb{R}^{32\times\frac{H}{2}\times\frac{W}{2}}$ is the
feature map extracted from the encoder for each LDR input. $conv_{E_{1}}$ and
$conv_{E_{2}}$ are 3x3-convolution layers extracting respectively 16 and 32
feature maps. The spatial size is divided by 2 by setting a stride of 2 for
$conv_{E_{2}}$.
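A minimal PyTorch sketch of this encoder, assuming ReLU activations (the paper does not specify the non-linearity):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Two 3x3 convolutions extracting 16 then 32 feature maps; the second
    uses stride 2 so the spatial size is divided by 2 (Eq. 3)."""
    def __init__(self):
        super().__init__()
        self.conv_e1 = nn.Conv2d(6, 16, kernel_size=3, padding=1)
        self.conv_e2 = nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1)

    def forward(self, l_i: torch.Tensor):
        s_i = torch.relu(self.conv_e1(l_i))   # (B, 16, H, W); S_2 is reused as a skip
        f_i = torch.relu(self.conv_e2(s_i))   # (B, 32, H/2, W/2)
        return f_i, s_i
```

Returning $S_{i}$ alongside $F_{i}$ anticipates the skip connection used by the decoder (Section 3.4).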
Figure 2: Illustration of the attention module composed of 2 branches,
respectively responsible for spatial attention and channel attention. A
sigmoid activation function is used to keep the values between 0 and 1.
### 3.2 Attention module
The final generated HDR image must be aligned with the reference image. To
address this requirement, [30] demonstrates the effectiveness of using a
spatial attention module after the encoding step. Since then, many spatial and
channel attention modules that can be integrated into networks to improve
their performance have been proposed in the literature. Based on the inference
cost study in Table 5, we propose the Spatial-Channel Reference Attention
Module (SCRAM), a slightly modified version of the Bottleneck Attention Module
(BAM) proposed in [19]. While BAM aims to generate a mask of its input feature
maps, in our case we want to generate attention maps from the concatenation of
reference and non-reference features. These attention maps are then applied to
the non-reference features only, resulting in a reduction of the number of
feature maps in the proposed attention module. Moreover, batch normalization
is not applied in SCRAM. The detailed structure of SCRAM is illustrated in
Fig. 2.
The non-reference features $F_{i\neq 2}$ are concatenated with the features of
the reference image:
$X_{i}=F_{i}\oplus F_{2=ref},\quad i\neq 2$ (4)
where $X_{i}\in\mathbb{R}^{64\times\frac{H}{2}\times\frac{W}{2}}$ is the input
of SCRAM and $\oplus$ is the concatenation operator.
Following [19], SCRAM is composed of two branches respectively responsible for
the spatial and channel features alignment of the non-reference images to the
reference ones:
$A_{i}=\sigma(s(X_{i})+c(X_{i})),\quad i\neq 2$ (5)
where $s$ is the spatial attention branch and $c$ is the channel attention
branch. The sum of produced features by each branch passes through $\sigma$, a
sigmoid activation function to keep the output values between 0 and 1.
Spatial attention: The objective of the spatial branch is to produce an
attention map that keeps the most relevant information for the spatial
alignment of the non-reference images to the reference one. To limit the
computation, we first reduce the number of feature maps by a factor of 3 using
a pointwise convolution. To extract more global features at the same
computational cost, we then enlarge the receptive field by employing 3 dilated
convolution [35] layers with a dilation factor of 2. The final attention map
of size $(1,\frac{H}{2},\frac{W}{2})$ is produced by using a pointwise
convolution and then expanded across the channel dimension to obtain
$A_{S}\in\mathbb{R}^{32\times\frac{H}{2}\times\frac{W}{2}}$.
Channel attention: This branch aims to perform a channel-wise feature
recalibration. We first squeeze the spatial dimension by applying a global
average pooling, which sums out the spatial information to obtain a feature
vector of size $(64,1,1)$. A multilayer perceptron with three hidden layers is
then used to estimate cross-channel attention. The last activation size is set
to 32 to fit the number of channels of the non-reference features $F_{i}$.
Finally, the resulting vector is spatially expanded to obtain the final
feature map $A_{C}\in\mathbb{R}^{32\times\frac{H}{2}\times\frac{W}{2}}$.
The attention features $A_{i}$ are then used to weight the non-reference
features $F_{i}$:
$F^{\prime}_{i}=F_{i}\otimes A_{i},\quad i\neq 2$ (6)
where $\otimes$ is the element-wise product and $F^{\prime}_{i}$ are the
aligned non-reference features. For the reference features, we set
$F^{\prime}_{2}=F_{2}$.
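A minimal PyTorch sketch of a SCRAM-like module follows; the branch structure matches Eqs. 4-6 and the description above, while the exact reduction widths, MLP depth, and ReLU activations are assumptions:

```python
import torch
import torch.nn as nn

class SCRAM(nn.Module):
    """Spatial + channel attention on the 64-channel concatenation X_i,
    producing a 32-channel attention map applied to the non-reference
    features only. No batch normalization, as in the paper."""
    def __init__(self, in_ch: int = 64, out_ch: int = 32):
        super().__init__()
        mid = in_ch // 3  # reduce the number of feature maps by a factor of 3
        self.spatial = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1), nn.ReLU(),
            nn.Conv2d(mid, mid, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(mid, mid, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(mid, mid, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(mid, 1, 1),  # attention map of size (1, H/2, W/2)
        )
        self.channel = nn.Sequential(  # MLP on the pooled 64-d vector;
            nn.Linear(in_ch, in_ch), nn.ReLU(),   # exact widths are assumptions
            nn.Linear(in_ch, in_ch), nn.ReLU(),
            nn.Linear(in_ch, out_ch),             # last activation size set to 32
        )
        self.out_ch = out_ch

    def forward(self, f_i: torch.Tensor, f_ref: torch.Tensor) -> torch.Tensor:
        x = torch.cat([f_i, f_ref], dim=1)                  # Eq. 4
        a_s = self.spatial(x).expand(-1, self.out_ch, -1, -1)
        a_c = self.channel(x.mean(dim=(2, 3)))              # global average pooling
        a_c = a_c[:, :, None, None].expand_as(a_s)
        a_i = torch.sigmoid(a_s + a_c)                      # Eq. 5
        return f_i * a_i                                    # Eq. 6
```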
### 3.3 Features merging
While most of the computation is usually done in the merging block [13, 18,
30], we propose a novel efficient feature merging block that first focuses on
merging the non-reference features. Each feature map $F^{\prime}_{i}$
produced by the encoder goes through a convolution layer:
$M_{i}=conv_{M_{1}}(F^{\prime}_{i})$ (7)
Where $conv_{M_{1}}$ is a 3x3-convolution producing
$M_{i}\in\mathbb{R}^{64\times\frac{H}{2}\times\frac{W}{2}}$. Then we focus on
merging the non-reference features maps by concatenating them and feeding the
result features in a convolution layer:
$M_{\text{non-ref}}=conv_{M_{2}}(M_{1}\oplus M_{3})$ (8)
where $conv_{M_{2}}$ is a 3x3-convolution and $M_{\text{non-
ref}}\in\mathbb{R}^{64\times\frac{H}{2}\times\frac{W}{2}}$ is the non-
reference merged features.
As we emphasize the reference features throughout our network, we merge the
reference features with the non-reference features $M_{\text{non-ref}}$ simply
by adding them together, to limit the number of feature maps processed later:
$M=conv_{M_{4}}(conv_{M_{3}}(M_{2}+M_{\text{non-ref}}))$ (9)
where $conv_{M_{3}}$ and $conv_{M_{4}}$ are 3x3-convolutions producing each 64
features map and $M\in\mathbb{R}^{64\times\frac{H}{2}\times\frac{W}{2}}$
contains features from all LDR input images.
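A compact PyTorch sketch of the merging block of Eqs. 7-9; whether $conv_{M_{1}}$ is shared across the three streams and the choice of ReLU activations are assumptions:

```python
import torch
import torch.nn as nn

class Merger(nn.Module):
    """Merge the non-reference streams first (Eqs. 7-8), then add the
    reference stream and refine (Eq. 9)."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.conv_m1 = nn.Conv2d(32, ch, 3, padding=1)   # assumed shared per stream
        self.conv_m2 = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.conv_m3 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv_m4 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, f1, f2, f3):
        m1, m2, m3 = (torch.relu(self.conv_m1(f)) for f in (f1, f2, f3))  # Eq. 7
        m_nonref = torch.relu(self.conv_m2(torch.cat([m1, m3], dim=1)))   # Eq. 8
        return self.conv_m4(torch.relu(self.conv_m3(m2 + m_nonref)))      # Eq. 9
```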
Figure 3: Illustration of the pixel rearrangement by the pixel shuffle layer
for an upscale factor set to $r=2$ and an input shape of $(4,4,4)$. In the
proposed solution, the input shape is $(64,\frac{H}{2},\frac{W}{2})$, the
produced output size is $(16,H,W)$.
### 3.4 Features decoding
The role of the decoder is to produce the final HDR image from the features
produced by the merging block. At the encoding stage, we divided the spatial
dimensions by 2. While the original spatial size is usually recovered using a
bilinear upsampling or transposed convolution[14] operation, we propose to use
the pixel shuffle operation first proposed in [25]. It is presented as an
efficient sub-pixel convolution with a stride of $1/r$, where $r$ is the
upscale factor. In our case, we set $r=2$. As illustrated in Fig. 3, the pixel
shuffle layer rearranges elements in a tensor of shape $(C\times r^{2},H,W)$
to a tensor of shape $(C,H\times r,W\times r)$:
$D=PixelShuffle(M)$ (10)
where $D\in\mathbb{R}^{16\times H\times W}$ is the resulting upscaled
features.
The final HDR image is obtained following Eq. 11:
$HDR=\sigma(conv_{D}(D+S_{2}))$ (11)
where $S_{2}$ denotes the reference features extracted by the first
convolution layer of our network, $conv_{E_{1}}$; this skip connection
stabilizes the training of our network. The final HDR image is generated using
a 3x3-convolution layer $conv_{D}$, followed by a sigmoid activation function
$\sigma$.
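A minimal PyTorch sketch of the decoder of Eqs. 10-11, using the built-in `nn.PixelShuffle` operator:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Pixel shuffle recovers the full spatial size, the 16-channel skip S_2
    from conv_E1 is added, and a final 3x3 convolution with a sigmoid
    produces the HDR image."""
    def __init__(self):
        super().__init__()
        self.shuffle = nn.PixelShuffle(2)        # (B,64,H/2,W/2) -> (B,16,H,W)
        self.conv_d = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, m: torch.Tensor, s2: torch.Tensor) -> torch.Tensor:
        d = self.shuffle(m)                      # Eq. 10
        return torch.sigmoid(self.conv_d(d + s2))  # Eq. 11
```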
## 4 Experimental Settings
### 4.1 Datasets
The CEN-HDR network has been trained using the dataset provided by [7]
composed of 74 training samples and 15 test samples. Each sample represents
the acquisition of a dynamic scene caused by large foreground or camera
motions and is composed of three input LDR images (with EV of -2.00, 0.00,
+2.00 or -3.00, 0.00, +3.00) and a reference HDR image aligned with the medium
exposure image. The network has also been separately trained and tested using
the NTIRE[20] challenge dataset, where the 3 LDR images are
synthetically generated from the HDR images provided by [4]. The dataset is
composed of 1500 training samples, 60 validation samples, and 201 testing
samples. The ground-truth images for the testing sample are not provided.
Figure 4: Qualitative comparison of the proposed CEN-HDR solution with other
HDR merging methods. The cropped patch demonstrates that the proposed
efficient network has equivalent capabilities to the state-of-the-art methods
to correct the ghost effect due to the large movement in the scene.
### 4.2 Loss function
Following previous works [7, 30], the images have been mapped from the linear
HDR domain to the LDR domain before evaluating the loss function. In order to
train the network, the tone-mapping function has to be differentiable around
zero, so the $\mu$-law function is defined as follows:
$T(H)=\frac{\log(1+\mu H)}{\log(1+\mu)},\quad\mu=5000$ (12)
where H is the linear HDR image and $\mu$ the amount of compression.
To make the network efficient, we use a network compression method known as
knowledge distillation, proposed in [5]. Knowledge distillation assumes that
the capacity of a large network is not fully exploited, so the objective is to
transfer the knowledge of a large teacher network to our lighter network, as
described in Eq. 13:
$\mathcal{L}=\alpha\times\mathcal{L}(T,T_{GT})+(1-\alpha)\times\mathcal{L}(T,T_{Teacher})$
(13)
where $\mathcal{L}$ is the $L_{1}$ loss function, $T$ is the tone-mapped
prediction of our network, $T_{GT}$ the tone-mapped ground truth provided in
the dataset, and $T_{Teacher}$ the tone-mapped prediction of the large teacher
network. We use the HDR-GAN [18] model as the teacher. $\alpha$ is a trade-off
parameter set to 0.2. Moreover, the method proposed by [7] to produce the
training dataset focuses mainly on foreground motion. It does not allow the
generation of reliable ground-truth images for chaotic motions in the
background, such as the movement of tree leaves due to wind, which results in
ground-truth images with blurred features that do not reflect reality. This
produces a greater error when the predicted image contains sharper features
than the ground truth. Also using the prediction of a teacher model helps deal
with this data misalignment. Table 1 compares the performance obtained with
and without knowledge distillation.
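The training objective of Eqs. 12-13 can be sketched as follows in PyTorch; function names are ours:

```python
import math
import torch
import torch.nn.functional as F

MU = 5000.0

def mu_law(h: torch.Tensor) -> torch.Tensor:
    # Eq. 12: differentiable mu-law tone mapping of a linear HDR image.
    return torch.log(1 + MU * h) / math.log(1 + MU)

def distillation_loss(pred_hdr, gt_hdr, teacher_hdr, alpha: float = 0.2):
    # Eq. 13: L1 terms against the ground truth and the teacher prediction,
    # both evaluated in the tone-mapped domain.
    t_pred, t_gt, t_teacher = mu_law(pred_hdr), mu_law(gt_hdr), mu_law(teacher_hdr)
    return alpha * F.l1_loss(t_pred, t_gt) + (1 - alpha) * F.l1_loss(t_pred, t_teacher)
```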
### 4.3 Implementation details
The CEN-HDR network has been trained using cropped patches of size $256\times
256$ pixels with a stride of 128 and evaluated on the full-resolution test
images. During training, random augmentations are applied on the cropped patch
such as horizontal symmetry and rotation of 90, 180, or 270 degrees. Training
has been done using the Adam optimizer with a batch size of 8. The learning
rate is initially set to $10^{-4}$, kept fixed for 80 epochs, and then decayed
by a factor of 0.8 every 20 epochs. The training lasts for 500 epochs.
Figure 5: Comparison of the proposed CEN-HDR solution with other HDR merging
methods. The X-axis represents the mean runtime using the M1 NPU with an input
size of 1280x720 pixels. The Y-axis is the fidelity score on the test images
from [7] dataset. The best solutions tend to be in the upper left corner. The
radius of circles represents the number of operations, the smaller the better.
Table 1: Comparison of the proposed CEN-HDR architecture performances with and
without knowledge distillation. In the first case, the network is trained with
the HDR ground-truth proposed in [7]. In the second case, we also use the
prediction of HDR-GAN [18] as label (Eq. 13). In both cases, the network is
trained for 500 epochs using the $l_{1}$ criterion.
Training method | $\mu$-PSNR | PSNR | $\mu$-SSIM | SSIM | HDR-VDP2
---|---|---|---|---|---
w/o knowledge distillation | 40.8983 | 40.0298 | 0.9772 | 0.9926 | 62.17
with knowledge distillation | 43.0470 | 40.5335 | 0.9908 | 0.9956 | 64.34
## 5 Experimental Results
Table 2: Quantitative comparison with lightweight state-of-the-art methods on
the Kalantari2017[7] test samples. PSNR and SSIM are calculated in the linear
domain while $\mu$-PSNR and $\mu$-SSIM are calculated after $\mu$-law tone
mapping (Eq. 12). For compared methods, the results are from [18]. PU-PSNR and
PU-SSIM are calculated applying the encoding function proposed in [16].
Method | $\mu$-PSNR | PU-PSNR | PSNR | $\mu$-SSIM | PU-SSIM | SSIM | HDR-VDP2
---|---|---|---|---|---|---|---
Sen et al.[24] | 40.80 | 32.47 | 38.11 | 0.9808 | 0.9775 | 0.9721 | 59.38
Hu et al.[6] | 35.79 | $-$ | 30.76 | 0.9717 | $-$ | 0.9503 | 57.05
Kalantari et al.[7] | 42.67 | 33.82 | 41.23 | 0.9888 | 0.9832 | 0.9846 | 65.05
DeepHDR[28] | 41.65 | 31.36 | 40.88 | 0.9860 | 0.9815 | 0.9858 | 64.90
NHDRRNet[32] | 42.41 | $-$ | $-$ | 0.9887 | $-$ | $-$ | 61.21
AHDRNet[30] | 43.61 | 33.94 | 41.03 | 0.9900 | 0.9855 | 0.9702 | 64.61
HDRGAN[18] | 43.92 | 34.04 | 41.57 | 0.9905 | 0.9851 | 0.9865 | 65.45
CEN-HDR(our) | 43.05 | 33.23 | 40.53 | 0.9908 | 0.9821 | 0.9856 | 64.34
### 5.1 Fidelity performance
In Table 2 and Fig. 4, the proposed CEN-HDR solution is compared against seven
lightweight state-of-the-art methods: [24] and [6] are based on input patch
registration methods. [7] is based on a sequential CNN, the inputs need first
to be aligned thanks to an optical flow algorithm. For [28], the background of
each LDR input is aligned by homography before being fed to an encoder-
decoder-based CNN. [32] proposes an encoder-decoder architecture with a non-
local attention module. [30] is a CNN based on an attention block for features
registration and on multiple dilated residual dense blocks for merging. [18]
is the first GAN-based approach for HDR merging with a deep supervised HDR
method. Quantitative evaluation is done using objective metrics. The standard
peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are
computed both directly in the linear domain and after tone mapping by applying
the $\mu$-law function (Eq. 12). The HDR-VDP2[15] metric predicts the quality
degradation with respect to the reference image. We set the diagonal
display size to 24 inches and the viewing distance to 0.5 meters. In addition,
PU-PSNR and PU-SSIM are calculated by applying the encoding function proposed
in [16] with a peak luminance set to 4000.
In Fig. 6, we present the results for 3 test scenes of the NTIRE[20] dataset.
While the network can produce images with a high dynamic range, we notice that
the high sensor noise present in the input images is not fully corrected in
the dark areas of the output HDR images. Moreover, the motion blur introduced
in the dataset produces less sharp characteristics in the output image.
In Table 6 we study the effect of the attention module on the performance of
the proposed HDR deghosting architecture. SCRAM-C and SCRAM-S respectively
correspond to the SCRAM module composed only of the channel attention branch
and of the spatial attention branch. The proposed SCRAM achieves a quality
similar to that of the spatial attention module proposed in AHDRNet[30], while
Table 5 shows that the inference cost of SCRAM is lower.
### 5.2 Efficiency comparison
As we want to propose an efficient HDR generation method, in Table 3 we
compare the computation cost and the inference time of our network with state-
of-the-art HDR networks that achieve similar performance on quality metrics.
The number of operations and parameters are measured using the script provided
by the NTIRE[20] challenge. To evaluate runtimes, all the compared networks
are executed on the Neural Processing Unit (NPU) of a MacBook Pro (2021)
powered by an M1 chip. The time shown is the average over 500 inference runs
after a warm-up of 50 runs. The input size is set to $1280\times 720$ pixels.
The gamma-projection of LDR inputs (Eq. 1) and the tone mapping of the HDR
output are included in the inference time measurement. Note that the
background alignment of input frames using homography for [28] is not
included.
Table 3: Inference cost comparison of the proposed CEN-HDR solution against state-of-the-art lightweight deep learning-based methods. The number of operations and parameters are measured using the script provided by the NTIRE[20] challenge. To measure the inference time, all the compared networks are executed on an M1 NPU. The input size is set to $1280\times 720$ pixels.
Method | Num. of params. | Num. of op. (GMAccs) | Runtime(s) | FPS
---|---|---|---|---
DeepHDR[28] | 14618755 | 843.16 | 0.1075 | 9.30
AHDRNet[30] | 1441283 | 1334.95 | 0.4571 | 2.18
NHDRRNet[32] | 7672649 | 166.11 | 0.0431 | 23.20
HDR-GAN[18] | 2631011 | 479.78 | 0.2414 | 4.14
CEN-HDR(our) | 282883 | 78.36 | 0.0277 | 36.38
Table 4 compares the number of parameters and operations of the proposed CEN-
HDR solution with recent efficient methods [9, 33, 36]. The input size is set
to $1900\times 1060$ pixels corresponding to the size of the inputs from the
dataset proposed by [20]. For the compared methods [9, 33, 36], the
measurements are provided by the NTIRE[20] challenge. We could not compare the inference
time of the CEN-HDR solution with these three architectures as they were
recently proposed and their implementation is not yet available.
Fig. 5 compares the trade-off between fidelity to the ground truth label and
runtime of the proposed CEN-HDR solution with other HDR merging methods. The
X-axis represents the mean runtime using an M1 NPU with an input size of
1280x720 pixels. The Y-axis is the fidelity score on the test images from [7]
dataset. The best solutions tend to be in the upper left corner. The radius of
circles represents the number of operations. Our solution offers the best
trade-off for real-time HDR merging while maintaining a high fidelity score.
Table 4: Inference cost comparison of the proposed CEN-HDR solution versus recent efficient merging networks. The number of operations and parameters for [9, 33, 36] and our solution are computed following the method described in [20]. The input size is set to $1900\times 1060$ pixels.
Method | Num. of params. | Num. of op. (GMAccs)
---|---|---
GSANet[9] | 80650 | 199.39
Yan et al.[33] | 18899000 | 156.12
Yu et al.[36] | 1013250 | 199.88
CEN-HDR(our) | 282883 | 128.78
Table 5: Inference cost comparison of attention modules. Spatial and Channel
attention modules are studied by feeding a tensor of size
$(1,\frac{H}{4},\frac{W}{4})$ corresponding to the concatenation of the
reference and non-reference tensors after the encoding step.
Method | Spatial attention | Channel attention | params. | GMAccs | Runtime(s)
---|---|---|---|---|---
AHDRNet attention [30] | ✓ | | 55392 | 20.772 | 0.0085
EPSANet[38] | ✓ | | 42560 | 15.768 | 0.0111
SK attention [10] | | ✓ | 125984 | 43.104 | 0.0155
Double attention[1] | ✓ | ✓ | 33216 | 12.456 | 0.0101
CBAM[27] | ✓ | ✓ | 22689 | 7.525 | 0.0734
BAM[19] | ✓ | ✓ | 17348 | 5.008 | 0.0060
Table 6: Effect of the attention module on the performance of proposed HDR
deghosting network. SCRAM-C and SCRAM-S respectively correspond to the SCRAM
module composed only of the channel attention branch and the spatial attention
branch.
Method | $\mu$-PSNR | PSNR | $\mu$-SSIM | SSIM
---|---|---|---|---
Without attention module | 42.12 | 39.95 | 0.9850 | 0.9823
AHDRNet[30] attention module | 42.94 | 40.49 | 0.9903 | 0.9852
SCRAM-C | 42.32 | 40.14 | 0.9854 | 0.9829
SCRAM-S | 42.89 | 40.41 | 0.9884 | 0.9835
SCRAM | 43.05 | 40.53 | 0.9908 | 0.9856
Figure 6: Qualitative results of the proposed CEN-HDR solution on samples from
the NTIRE[20] challenge dataset. The ground truth images are not provided. We
notice that the high sensor noise present in the input images is not fully
corrected in the dark areas of the output HDR images. Moreover, the motion
blur introduced in the dataset produces less sharp characteristics in the
output image.
## 6 Conclusions
In this paper, we propose CEN-HDR, a novel computationally efficient HDR
merging network able to correct the ghost effect caused by large object
motions in the scene and camera motion. The proposed lightweight network
architecture effectively succeeds in generating real-time HDR images with a
dynamic range close to that of the original scene. By integrating the
knowledge distillation methods in our training scheme, we demonstrate that the
majority of the representation capabilities of a large HDR merging network can
be transferred into a lighter network, opening the door to real-time HDR
embedded systems.
## References
* [1] Chen, Y., Kalantidis, Y., Li, J., Yan, S., Feng, J.: $a^{2}$-nets: Double attention networks (2018). https://doi.org/10.48550/ARXIV.1810.11579, https://arxiv.org/abs/1810.11579
* [2] Chollet, F.: Xception: Deep learning with depthwise separable convolutions (2016). https://doi.org/10.48550/ARXIV.1610.02357, https://arxiv.org/abs/1610.02357
* [3] Debevec, P.E., Malik, J.: Recovering high dynamic range radiance maps from photographs. In: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. p. 369–378. SIGGRAPH ’97, ACM Press/Addison-Wesley Publishing Co., USA (1997). https://doi.org/10.1145/258734.258884, https://doi.org/10.1145/258734.258884
* [4] Froehlich, J., Grandinetti, S., Eberhardt, B., Walter, S., Schilling, A., Brendel, H.: Creating cinematic wide gamut hdr-video for the evaluation of tone mapping operators and hdr-displays. Proc. SPIE 9023 (02 2014). https://doi.org/10.1117/12.2040003
* [5] Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network (2015). https://doi.org/10.48550/ARXIV.1503.02531, https://arxiv.org/abs/1503.02531
* [6] Hu, J., Gallo, O., Pulli, K., Sun, X.: Hdr deghosting: How to deal with saturation? In: 2013 IEEE Conference on Computer Vision and Pattern Recognition. pp. 1163–1170 (2013). https://doi.org/10.1109/CVPR.2013.154
* [7] Kalantari, N.K., Ramamoorthi, R.: Deep high dynamic range imaging of dynamic scenes. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2017) 36(4) (2017)
* [8] Khan, Z., Khanna, M., Raman, S.: Fhdr: Hdr image reconstruction from a single ldr image using feedback network (2019)
* [9] Li, F., Gang, R., Li, C., Li, J., Ma, S., Liu, C., Cao, Y.: Gamma-enhanced spatial attention network for efficient high dynamic range imaging. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. pp. 1032–1040 (June 2022)
* [10] Li, X., Wang, W., Hu, X., Yang, J.: Selective kernel networks (2019). https://doi.org/10.48550/ARXIV.1903.06586, https://arxiv.org/abs/1903.06586
* [11] Liu, C.: Beyond Pixels: Exploring New Representations and Applications for Motion Analysis. Ph.D. thesis, MIT, USA (2009), aAI0822221
* [12] Liu, Y.L., Lai, W.S., Chen, Y.S., Kao, Y.L., Yang, M.H., Chuang, Y.Y., Huang, J.B.: Single-image hdr reconstruction by learning to reverse the camera pipeline (2020)
* [13] Liu, Z., Lin, W., Li, X., Rao, Q., Jiang, T., Han, M., Fan, H., Sun, J., Liu, S.: Adnet: Attention-guided deformable convolutional network for high dynamic range imaging (2021)
* [14] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2015)
* [15] Mantiuk, R., Kim, K.J., Rempel, A.G., Heidrich, W.: Hdr-vdp-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions. In: ACM SIGGRAPH 2011 Papers. SIGGRAPH ’11, Association for Computing Machinery, New York, NY, USA (2011). https://doi.org/10.1145/1964921.1964935, https://doi.org/10.1145/1964921.1964935
* [16] Mantiuk, R.K., Azimi, M.: Pu21: A novel perceptually uniform encoding for adapting existing quality metrics for hdr. In: 2021 Picture Coding Symposium (PCS). pp. 1–5 (2021). https://doi.org/10.1109/PCS50896.2021.9477471
* [17] Nayar, S., Mitsunaga, T.: High dynamic range imaging: spatially varying pixel exposures. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662). vol. 1, pp. 472–479 vol.1 (2000). https://doi.org/10.1109/CVPR.2000.855857
* [18] Niu, Y., Wu, J., Liu, W., Guo, W., Lau, R.W.H.: Hdr-gan: Hdr image reconstruction from multi-exposed ldr images with large motions. IEEE Transactions on Image Processing 30, 3885–3896 (2021). https://doi.org/10.1109/TIP.2021.3064433
* [19] Park, J., Woo, S., Lee, J.Y., Kweon, I.S.: Bam: Bottleneck attention module (2018). https://doi.org/10.48550/ARXIV.1807.06514, https://arxiv.org/abs/1807.06514
* [20] Pérez-Pellitero, E., Catley-Chandar, S., Shaw, R., Leonardis, A., Timofte, R., Zhang, Z., Liu, C., Peng, Y., Lin, Y., Yu, G., Zhang, J., Ma, Z., Wang, H., Chen, X., Wang, X., Wu, H., Liu, L., Dong, C., Zhou, J., Yan, Q., Zhang, S., Chen, W., Liu, Y., Zhang, Z., Zhang, Y., Shi, J.Q., Gong, D., Zhu, D., Sun, M., Chen, G., Hu, Y., Li, H., Zou, B., Liu, Z., Lin, W., Jiang, T., Jiang, C., Li, X., Han, M., Fan, H., Sun, J., Liu, S., Marín-Vega, J., Sloth, M., Schneider-Kamp, P., Röttger, R., Li, C., Bao, L., He, G., Xu, Z., Xu, L., Zhan, G., Sun, M., Wen, X., Li, J., Li, J., Li, J., Li, C., Li, C., Gang, R., Gang, R., Li, F., Li, F., Liu, C., Liu, C., Feng, S., Lei, F., Liu, R., Ruan, J., Dai, T., Li, W., Lu, Z., Liu, H., Huang, P., Ren, G., Luo, Y., Liu, C., Tu, Q., Ma, S., Cao, Y., Tel, S., Heyrman, B., Ginhac, D., Lee, C., Kim, G., Park, S., Vien, A.G., Mai, T.T.N., Yoon, H., Vo, T., Holston, A., Zaheer, S., Park, C.Y.: Ntire 2022 challenge on high dynamic range imaging: Methods and results. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. pp. 1009–1023 (June 2022)
* [21] Prabhakar, K., Agrawal, S., Singh, D.K., Ashwath, B., Babu, R.V.: Towards practical and efficient high-resolution hdr deghosting with cnn. In: ECCV (2020)
* [22] Prabhakar, K.R., Agrawal, S., Babu, R.V.: Self-gated memory recurrent network for efficient scalable HDR deghosting. CoRR abs/2112.13050 (2021), https://arxiv.org/abs/2112.13050
* [23] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. CoRR abs/1505.04597 (2015), http://arxiv.org/abs/1505.04597
* [24] Sen, P., Kalantari, N.K., Yaesoubi, M., Darabi, S., Goldman, D.B., Shechtman, E.: Robust Patch-Based HDR Reconstruction of Dynamic Scenes. ACM Transactions on Graphics (TOG) (Proceedings of SIGGRAPH Asia 2012) 31(6), 203:1–203:11 (2012)
* [25] Shi, W., Caballero, J., Huszár, F., Totz, J., Aitken, A.P., Bishop, R., Rueckert, D., Wang, Z.: Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network (2016)
* [26] Tumblin, J., Agrawal, A., Raskar, R.: Why i want a gradient camera. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05). vol. 1, pp. 103–110 vol. 1 (2005). https://doi.org/10.1109/CVPR.2005.374
* [27] Woo, S., Park, J., Lee, J.Y., Kweon, I.S.: Cbam: Convolutional block attention module (2018). https://doi.org/10.48550/ARXIV.1807.06521, https://arxiv.org/abs/1807.06521
* [28] Wu, S., Xu, J., Tai, Y., Tang, C.: End-to-end deep HDR imaging with large foreground motions. CoRR abs/1711.08937 (2017), http://arxiv.org/abs/1711.08937
* [29] Wu, S., Xu, J., Tai, Y.W., Tang, C.K.: Deep high dynamic range imaging with large foreground motions. In: The European Conference on Computer Vision (ECCV) (September 2018)
* [30] Yan, Q., Gong, D., Shi, Q., Hengel, A.v.d., Shen, C., Reid, I., Zhang, Y.: Attention-guided network for ghost-free high dynamic range imaging. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp. 1751–1760 (2019)
* [31] Yan, Q., Gong, D., Zhang, P., Shi, Q., Sun, J., Reid, I., Zhang, Y.: Multi-scale dense networks for deep high dynamic range imaging. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 41–50 (2019). https://doi.org/10.1109/WACV.2019.00012
* [32] Yan, Q., Zhang, L., Liu, Y., Zhu, Y., Sun, J., Shi, Q., Zhang, Y.: Deep hdr imaging via a non-local network. IEEE Transactions on Image Processing 29, 4308–4322 (2020). https://doi.org/10.1109/TIP.2020.2971346
* [33] Yan, Q., Zhang, S., Chen, W., Liu, Y., Zhang, Z., Zhang, Y., Shi, J.Q., Gong, D.: A lightweight network for high dynamic range imaging. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. pp. 824–832 (June 2022)
* [34] Ye, Q., Xiao, J., Lam, K., Okatani, T.: Progressive and selective fusion network for high dynamic range imaging. CoRR abs/2108.08585 (2021), https://arxiv.org/abs/2108.08585
* [35] Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions (2015). https://doi.org/10.48550/ARXIV.1511.07122, https://arxiv.org/abs/1511.07122
* [36] Yu, G., Zhang, J., Ma, Z., Wang, H.: Efficient progressive high dynamic range image restoration via attention and alignment network (2022). https://doi.org/10.48550/ARXIV.2204.09213, https://arxiv.org/abs/2204.09213
* [37] Zeng, H., Cai, J., Li, L., Cao, Z., Zhang, L.: Learning image-adaptive 3d lookup tables for high performance photo enhancement in real-time. IEEE Transactions on Pattern Analysis and Machine Intelligence pp. 1–1 (2020). https://doi.org/10.1109/TPAMI.2020.3026740
* [38] Zhang, H., Zu, K., Lu, J., Zou, Y., Meng, D.: Epsanet: An efficient pyramid squeeze attention block on convolutional neural network (2021). https://doi.org/10.48550/ARXIV.2105.14447, https://arxiv.org/abs/2105.14447
# Capacitated Vehicle Routing in Graphic Metrics
Tobias Mömke, University of Augsburg, Germany, <EMAIL_ADDRESS>. Partially supported by DFG Grant 439522729 (Heisenberg-Grant) and DFG Grant 439637648 (Sachbeihilfe).
Hang Zhou, École Polytechnique, IP Paris, France, <EMAIL_ADDRESS>. Partially supported by Hi! PARIS Grant “Efficiency in Algorithms”.
###### Abstract
We study the _capacitated vehicle routing problem in graphic metrics (graphic
CVRP)_. Our main contribution is a new lower bound on the cost of an optimal
solution. For graphic metrics, this lower bound is tight and significantly
stronger than the well-known bound for general metrics. The proof of the new
lower bound is simple and combinatorial. Using this lower bound, we analyze
the approximation ratio of the classical iterated tour partitioning algorithm
combined with the TSP algorithms for graphic metrics of Christofides [1976],
of Mömke-Svensson [JACM 2016], and of Sebő-Vygen [Combinatorica 2014]. In
particular, we obtain a 1.95-approximation for the graphic CVRP.
## 1 Introduction
Given a metric space with a set of $n$ _terminals_ , a _depot_ , and an
integer _tour capacity_ $k$, the _capacitated vehicle routing problem (CVRP)_
asks for a minimum length collection of tours starting and ending at the depot
such that the number of terminals covered by each tour is at most $k$ and
those tours together cover all terminals.
The CVRP was introduced by Dantzig and Ramser in 1959 [DR59]. It is a
generalization of the _traveling salesman problem (TSP)_ and is one of the
most studied problems in Operations Research. Books have been dedicated to
vehicle routing problems, e.g., [TV02, GRW08, CL12, AGM16]. Yet, these
problems remain challenging, both from a practical and a theoretical
perspective.
The most popular polynomial-time approximation for the CVRP is a _simple_
algorithm, called _iterated tour partitioning (ITP)_. The ITP algorithm was
introduced in 1985 by Haimovich and Rinnooy Kan [HR85]. Altinkemer and Gavish
[AG90] showed that, in general metric spaces, the approximation ratio of the
ITP algorithm is at most $1+\left(1-\frac{1}{k}\right)\alpha$, where
$\alpha\geq 1$ is the approximation ratio of a TSP algorithm. Bompadre, Dror,
and Orlin [BDO06] improved this bound to
$1+\left(1-\frac{1}{k}\right)\alpha-\Omega\left(\frac{1}{k^{3}}\right)$. The
ratio for the CVRP in general metric spaces was recently improved by Blauth,
Traub, and Vygen [BTV22] to $1+\alpha-\epsilon$, where $\epsilon$ is at least
$\frac{1}{3000}$. Additionally, the best-to-date approximation ratio $\alpha$
for the TSP in general metrics is $1.5-10^{-36}$ by Karlin, Klein, and Oveis
Gharan [KKO21], improving upon the ratio of $1.5$ from Christofides [Chr76]
and Serdyukov [Ser78, vBS20]. Consequently, the best-to-date approximation
ratio for the metric CVRP stands at roughly $2.5-10^{-36}-\frac{1}{3000}$.
In this work, we focus on _graphic_ metrics, where the distance between two
vertices is the length (i.e., number of edges) of a shortest path in a given
unweighted graph. (Technically, each edge in the given graph has a cost of
one, and forming a shortest path metric introduces new edges of larger costs.
For TSP and CVRP, however, we may assume without loss of generality that such
edges are replaced by a shortest path of cost-one edges. When referring to an
edge, we therefore always refer to an edge in the _original_ graph. For
further details we refer to the excellent survey of Vygen [Vyg12].) The
graphic TSP has attracted much attention. It captures the difficulty of the
metric TSP in the sense that it is APX-hard [GKP95] and the lower bound
$\frac{4}{3}$ on the integrality gap of the Held-Karp relaxation for the
metric TSP is established using an instance of the graphic TSP, see, e.g.,
[Vyg12]. Oveis Gharan, Saberi, and Singh [OSS11] gave the first approximation
algorithm with ratio strictly better than 1.5 for the graphic TSP. Mömke and
Svensson [MS16] obtained a 1.461-approximation for the graphic TSP by
introducing the powerful idea of _removable pairings_. Mucha [Muc14] improved
the analysis of [MS16] to achieve an approximation ratio of
$\frac{13}{9}\approx 1.444$. The best-to-date approximation for the graphic
TSP is due to Sebő and Vygen [SV14], who gave an elegant 1.4-approximation
algorithm using a special kind of ear-decomposition.
We study the capacitated vehicle routing problem in _graphic_ metrics, called
the _graphic CVRP_. The graphic CVRP is a generalization of the graphic TSP.
Combining the graphic TSP algorithm of Sebő and Vygen [SV14] and the algorithm
of Blauth, Traub, and Vygen [BTV22] for the CVRP in general metrics, the best-
to-date approximation ratio for the graphic CVRP stands at roughly
$2.4-\frac{1}{3000}$. In this work, we reduce the approximation ratio for the
graphic CVRP to 1.95.
| _general_ metrics | _graphic_ metrics
---|---|---
TSP | $1.5-10^{-36}$ [KKO21] | 1.4 [SV14]
CVRP | $2.5-10^{-36}-\frac{1}{3000}$ [BTV22] | 1.95 [this work]
### 1.1 Our Results
Our main result depends on the standard definition of the _radius cost_
introduced by Haimovich and Rinnooy Kan [HR85].
###### Definition 1 (radius cost, [HR85]).
Let $V$ be a set of terminals and let $O$ be the depot. For every vertex $v\in
V$, let $\mathrm{dist}(v)$ denote the $v$-to-$O$ distance in the graph. Let
$\mathrm{rad}$ denote the _radius cost_ , defined by
$\mathrm{rad}=\frac{2}{k}\sum_{v\in V}\mathrm{dist}(v).$
Let $\mathrm{opt}$ denote the cost of an optimal solution to the graphic CVRP.
It is well-known that $\mathrm{opt}\geq\mathrm{rad}$ in general metrics
[HR85].
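As a sanity check, the radius cost of Definition 1 can be computed with a single breadth-first search from the depot; this short Python sketch (names are ours) assumes the graph is given as an adjacency list:

```python
from collections import deque

def radius_cost(adj, depot, terminals, k):
    # BFS distances from the depot in an unweighted graph (adjacency list),
    # then rad = (2/k) * sum of dist(v) over the terminals.
    dist = {depot: 0}
    queue = deque([depot])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return 2.0 / k * sum(dist[v] for v in terminals)
```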
Our main technical contribution is a new lower bound on $\mathrm{opt}$ in
graphic metrics, stated in the following Structure Theorem (Theorem 2). The
new lower bound is tight in graphic metrics, see Fig. 1. It is significantly
stronger than the known lower bound stated above. The proof of the new lower
bound, in Section 3, is simple and combinatorial.
###### Theorem 2 (Structure Theorem).
Consider the graphic CVRP with a set $V$ of $n$ terminals, a depot $O$, and a
tour capacity $k$. We have
$\mathrm{opt}\geq\mathrm{rad}+\frac{n}{2}-\frac{n}{2k^{2}}.$
The Structure Theorem easily implies the following theorem on the
approximation ratio of the classical iterated tour partitioning algorithm for
the graphic CVRP. The proof of Theorem 3 is in Section 4.
###### Theorem 3.
Consider the graphic CVRP with a set $V$ of $n$ terminals, a depot $O$, and an
arbitrary tour capacity $k$. Let $S$ be a traveling salesman tour on
$V\cup\\{O\\}$ of cost at most $\beta\cdot
n+\gamma\cdot\mathrm{opt}_{\mathrm{TSP}}$ with $\beta\geq\frac{1}{2}$ and
$\gamma\geq 0$, where $\mathrm{opt}_{\mathrm{TSP}}$ denotes the cost of an
optimal TSP tour. Then the iterated tour partitioning algorithm applied on $S$
yields a $\left(\beta+\gamma+\frac{1}{2}\right)$-approximate solution to the
graphic CVRP.
As a consequence of Theorem 3, in Corollary 4 we bound the approximation ratio
of the iterated tour partitioning algorithm combined with the TSP algorithms
for graphic metrics of Christofides [Chr76], of Mömke-Svensson [MS16], and of
Sebő-Vygen [SV14].
Figure 1: A tight instance for the lower bound in the Structure Theorem
(Theorem 2). Let $k$ be an odd integer. Let $n$ be a multiple of $k$. The
graph consists of $\frac{n}{k}$ cycles going through the depot (the central
black node), each visiting exactly $k$ terminals. (In the example, $k=13$ and
$n=52$, and the graph consists of 4 cycles.) Each cycle consists of $k+1$
edges, so the cost $\mathrm{opt}$ of an optimal solution equals
$(k+1)\cdot\frac{n}{k}=n+\frac{n}{k}$. At the same time, a simple calculation
gives $\mathrm{rad}=\frac{n}{2}+\frac{n}{k}+\frac{n}{2k^{2}}$. Thus
$\mathrm{rad}+\frac{n}{2}-\frac{n}{2k^{2}}=n+\frac{n}{k}$. Hence the lower
bound in the Structure Theorem is tight.
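For completeness, the "simple calculation" in the caption can be made explicit. On a single cycle of $k+1$ edges through $O$, the terminal at position $i\in[1,k]$ along the cycle satisfies $\mathrm{dist}(v)=\min(i,k+1-i)$, so for odd $k$
$\sum_{v\in\text{cycle}}\mathrm{dist}(v)=2\sum_{i=1}^{(k-1)/2}i+\frac{k+1}{2}=\frac{(k+1)^{2}}{4}.$
Summing over the $\frac{n}{k}$ cycles and applying Definition 1 gives
$\mathrm{rad}=\frac{2}{k}\cdot\frac{n}{k}\cdot\frac{(k+1)^{2}}{4}=\frac{n}{2}+\frac{n}{k}+\frac{n}{2k^{2}}.$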
###### Corollary 4.
Consider the graphic CVRP with a set $V$ of terminals, a depot $O$, and an
arbitrary tour capacity $k$. We have:
1. Let $S_{1}$ be a traveling salesman tour on $V\cup\\{O\\}$ computed by the Christofides algorithm [Chr76]. The iterated tour partitioning algorithm applied on $S_{1}$ yields a 2-approximate solution.
2. Let $S_{2}$ be a traveling salesman tour on $V\cup\\{O\\}$ that is the better one of the two solutions computed by the Mömke-Svensson algorithm [MS16] and the Christofides algorithm. The iterated tour partitioning algorithm applied on $S_{2}$ yields a $\left(2-\frac{1}{24}\right)$-approximate solution.
3. Let $S_{3}$ be a traveling salesman tour on $V\cup\\{O\\}$ that is the better one of the two solutions computed by the Sebő-Vygen algorithm [SV14] and the Christofides algorithm. The iterated tour partitioning algorithm applied on $S_{3}$ yields a $1.95$-approximate solution.
To prove Corollary 4, the requirement $\beta\geq\frac{1}{2}$ is crucial in the
application of Theorem 3. Intuitively, the threshold $\frac{1}{2}$ comes from
the additive term $\frac{n}{2}$ in the Structure Theorem. For the Christofides
algorithm [Chr76], the requirement $\beta\geq\frac{1}{2}$ follows immediately
since in a graph with $n+1$ vertices the algorithm combines a spanning tree of
cost exactly $n$ with a matching of cost at most
$\frac{1}{2}\cdot\mathrm{opt}_{\mathrm{TSP}}$. For the Mömke-Svensson
algorithm [MS16] we may assume $\beta=\frac{1}{3}$, see Lemma 12; and for the
Sebő-Vygen algorithm [SV14] we may assume $\beta=0$, see Lemma 13. Comparing
their solutions with the solution of the Christofides algorithm and returning
the better of the two solutions leads to $\beta\geq\frac{1}{2}$. The proof of
Corollary 4 is in Section 5.
###### Remark 5.
In Corollary 4, we analyze the Christofides algorithm but not the algorithm of
Karlin, Klein, and Oveis Gharan [KKO21], because of the simplicity of the
Christofides algorithm, and also because the algorithm in [KKO21] requires
computing a random spanning tree whose cost might be greater than $n$ due to
multiple edges.
Also note that the algorithm of Blauth, Traub, and Vygen [BTV22] does not seem
to yield an improved approximation for the graphic CVRP, because worst-case
instances of the ITP algorithm for graphic metrics are different from those
for general metrics.
#### Open Questions.
In our work, we obtain an approximation for the graphic CVRP of ratio $1.95$.
To that end, we combine the Sebő-Vygen algorithm [SV14], the Christofides
algorithm [Chr76], and the iterated tour partitioning algorithm. The main open
question is to improve upon the 1.95 factor for the graphic CVRP. In particular,
it is an interesting open question to analyze the cost of the solution
computed by the Sebő-Vygen algorithm in terms of the number of vertices, which
might lead to an improvement over the 1.95 factor.
### 1.2 Related Work
#### Graphic TSP.
Besides the classical version of the graphic TSP mentioned previously, the
_$s$ -$t$-path_ version of the graphic TSP has also been well-studied [SV14,
MS16, Muc14, AKS15, Gao13, TV18, TVZ20], with the best-to-date approximation
ratio being $1.4+\epsilon$ due to Traub, Vygen, and Zenklusen [TVZ20].
Special cases of the graphs have been studied for the graphic TSP, e.g., cubic
graphs [GLS05, AGG18, BSvdSS14, MS16, CLS15, DL18, DKM17], cubic bipartite
graphs [KR16, vZ18], graphs of degree at most $3$ and claw-free graphs [MS16],
and graphs of degree at most $4$ [New20].
#### CVRP in Other Metrics.
The CVRP has been extensively studied in other metrics: trees and bounded
treewidth [MZ22a, JS22, BP19, Bec18, AKK01], Euclidean [HR85, AKTT97, ACL10,
DM15, JS22], planar and bounded-genus graphs [BKS17, BKS19, CFKL20], graphs of
bounded highway dimension [BKS18], and minor-free graphs [CFKL20].
#### CVRP with Arbitrary Unsplittable Demands.
A natural way to generalize the unit demand version of the CVRP is to allow
terminals to have arbitrary unsplittable demands, which is called the
“unsplittable” version of the CVRP. The first constant-factor approximation
algorithm for the unsplittable CVRP in general metrics is due to Altinkemer
and Gavish [AG87] and has a ratio of $2+\alpha$, where $\alpha$ is the
approximation ratio of a TSP algorithm. The approximation ratio for the
unsplittable CVRP was only recently improved to $2+\alpha-2\epsilon$ by
Blauth, Traub, and Vygen [BTV22], where $\epsilon$ is at least
$\frac{1}{3000}$. Very recently, Friggstad et al. [FMRS22] further improved
the ratio to roughly $\ln 2+\alpha+1+\epsilon$ for any constant $\epsilon>0$.
This problem has also been studied on trees [MZ22b] and in the Euclidean space
[GMZ22].
#### Iterated Tour Partitioning.
The _iterated tour partitioning (ITP)_ algorithm is the most popular
polynomial-time approximation for the CVRP, and is very simple. This algorithm
first computes a traveling salesman tour (ignoring the capacity constraint)
using some other algorithm as a black box, then partitions the tour into
segments such that the number of terminals in each segment is at most $k$, and
finally, for each segment, connects the endpoints of that segment to the depot
so as to make a tour. The ITP algorithm was introduced and refined by
Haimovich and Rinnooy Kan [HR85] and Altinkemer and Gavish [AG90] in the
1980s.
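As a concrete reference, here is a minimal Python sketch of ITP in the variant that tries all $k$ circular offsets and keeps the cheapest partition, in the spirit of [HR85, AG90]; the function names and input conventions (a tour given as a list of terminals, a symmetric cost function) are ours:

```python
def segment_cost(seg, depot, cost):
    # Cost of a tour leaving the depot, visiting seg in order, and returning.
    return (cost(depot, seg[0]) + cost(seg[-1], depot)
            + sum(cost(u, v) for u, v in zip(seg, seg[1:])))

def itp(tour, depot, k, cost):
    # tour: terminals in TSP visiting order (depot excluded); cost: metric
    # distance function. Try every circular offset, cut the tour into
    # consecutive segments of at most k terminals, close each segment
    # through the depot, and keep the cheapest partition found.
    n = len(tour)
    best_cost, best_segments = float("inf"), None
    for offset in range(min(k, n)):
        rotated = tour[offset:] + tour[:offset]
        segments = [rotated[i:i + k] for i in range(0, n, k)]
        total = sum(segment_cost(s, depot, cost) for s in segments)
        if total < best_cost:
            best_cost, best_segments = total, segments
    return best_cost, best_segments
```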
The approximation ratio of the ITP algorithm has been well-studied, and bounds
on the ratio have been utilized in the design of approximation algorithms,
see, e.g., [BDO06]. Li and Simchi-Levi [LSL90] showed that the ITP algorithm
cannot lead to a better-than $(2-\frac{1}{k})$-approximation in general
metrics. Blauth, Traub, and Vygen [BTV22] exploited properties of tight
instances for the ITP algorithm, and used those properties to design the best-
to-date approximation algorithm for the metric CVRP. The performance of the
ITP algorithm has also been studied in the special case when the terminals are
uniform random points in the Euclidean plane [BDO07, MZ21].
Because of its simplicity, the ITP algorithm is versatile and has been adapted
to vehicle routing problems in other settings, e.g., with pick-up and delivery
services [Mos98], or under the constraints on the total distance traveled by
each vehicle [LSLD92].
## 2 Preliminaries
In this section, we introduce some notations and we formally define the
problem.
Let $G=(V\cup\\{O\\},E)$ be a connected, unweighted, and undirected graph,
where $V$ is a set of _terminals_ , vertex $O$ is the _depot_ , and $E$ is the
set of edges. Let $n$ denote the number of terminals in $V$. Let $V(G)$ denote
$V\cup\\{O\\}$. For each $v\in V$, let $\mathrm{dist}(v)$ denote the number of
edges on a $v$-to-$O$ shortest path in $G$. A _tour_ $T$ in $G$ is a path
$z_{1}z_{2}\dots z_{p}$ for some $p\in\mathbb{N}$ such that $z_{i}\in V(G)$
for each $i\in[1,p]$; $z_{1}=z_{p}=O$; and $(z_{i},z_{i+1})\in E$ for each
$i\in[1,p-1]$. The _cost_ of the tour $T$, denoted by $\mathrm{cost}(T)$, is
the number of edges on that tour, i.e., $\mathrm{cost}(T)=p-1$. Each terminal
has _unit demand_ , which must be covered by a single tour. Let $k\in[1,n]$ be
an integer _tour capacity_ , i.e., each tour can cover the demand of at most
$k$ terminals.
###### Definition 6 (graphic CVRP).
An instance of the _capacitated vehicle routing problem in graphic metrics
(graphic CVRP)_ consists of
* a connected, unweighted, and undirected graph $G=(V\cup\\{O\\},E)$ where $V$ is a set of $n$ _terminals_ and $O$ is the _depot_;
* a positive integer _tour capacity_ $k$ such that $k\in[1,n]$.
A feasible solution is a set of tours such that
* each tour starts and ends at $O$,
* each tour covers the demand of at most $k$ terminals,
* the demand of each terminal is covered by one tour.
The goal is to find a feasible solution such that the total cost of the tours
is minimum.
Let $\mathrm{OPT}$ denote an optimal solution to the graphic CVRP, and let
$\mathrm{opt}$ denote the cost of $\mathrm{OPT}$.
## 3 Proof of the Structure Theorem (Theorem 2)
In this section, we prove the Structure Theorem (Theorem 2). The key in the
analysis is the following Structure Lemma.
###### Lemma 7 (Structure Lemma).
Let $T:=z_{1}z_{2}\dots z_{p}$ for some $p\in\mathbb{N}$ be a tour in
$\mathrm{OPT}$. Let $U\subseteq V$ denote the set of terminals whose demands
are covered by $T$. Let $D$ denote $\sum_{v\in U}\mathrm{dist}(v)$. Then
$\mathrm{cost}(T)\geq\frac{2D}{|U|}+\frac{|U|}{2}-\frac{1}{2|U|}.$
In Section 3.1, we show how the Structure Lemma implies the Structure Theorem,
and in Section 3.2, we prove the Structure Lemma.
### 3.1 Using the Structure Lemma to prove the Structure Theorem
Let $T_{1},\dots,T_{\ell}$ denote the tours in $\mathrm{OPT}$ for some
$\ell\geq 1$. For each $i\in[1,\ell]$, let $U_{i}\subseteq V$ denote the set
of terminals whose demands are covered by $T_{i}$. Let $D_{i}$ denote
$\sum_{v\in U_{i}}\mathrm{dist}(v)$. We apply the Structure Lemma (Lemma 7) on
$T_{i}$ and obtain
$\mathrm{cost}(T_{i})\geq\frac{2D_{i}}{|U_{i}|}+\frac{|U_{i}|}{2}-\frac{1}{2|U_{i}|}.$
Summing over all tours $T_{i}$ and noting that
$\displaystyle\sum_{i=1}^{\ell}|U_{i}|=n$, we have
$\mathrm{opt}=\sum_{i=1}^{\ell}\mathrm{cost}(T_{i})\geq\frac{n}{2}+\sum_{i=1}^{\ell}\left(\frac{2D_{i}}{|U_{i}|}-\frac{1}{2|U_{i}|}\right).$
Define
$Z:=\sum_{i=1}^{\ell}\left(\frac{2D_{i}}{|U_{i}|}-\frac{1}{2|U_{i}|}\right)-\mathrm{rad}+\frac{n}{2k^{2}}.$
To show the claim in the Structure Theorem, it suffices to show that $Z$ is
non-negative. Using
$\displaystyle\mathrm{rad}=\frac{\sum_{i=1}^{\ell}2D_{i}}{k}$ and
$\displaystyle n=\sum_{i=1}^{\ell}|U_{i}|$, we have
$Z=\sum_{i=1}^{\ell}\left(\frac{2D_{i}}{|U_{i}|}-\frac{1}{2|U_{i}|}-\frac{2D_{i}}{k}+\frac{|U_{i}|}{2k^{2}}\right)=\sum_{i=1}^{\ell}\frac{2k(k-|U_{i}|)(2D_{i}-1)+(k-|U_{i}|)^{2}}{2k^{2}|U_{i}|}\geq
0,$
where the inequality follows from $|U_{i}|\leq k$ and $D_{i}\geq 1$ (since
$\mathrm{dist}(v)\geq 1$ for each $v\in U_{i}$).
This completes the proof of the Structure Theorem (Theorem 2).
### 3.2 Proof of the Structure Lemma (Lemma 7)
Let $W$ be a multi-set of terminals defined by
$W:=\\{z_{i}\mid i\in[1,p]\text{ and }z_{i}\neq O\\}.$
Let $R:=\max_{v\in W}\mathrm{dist}(v)$. Let $\Delta:=R-\frac{D}{|U|}$. It is
easy to see that $\Delta\geq 0$.
If $\Delta\geq\frac{|U|}{4}$, then since the tour must reach a vertex at
distance $R$ from $O$ and return, we have
$\displaystyle\mathrm{cost}(T)\geq 2R=2\Delta+\frac{2D}{|U|}\geq\frac{|U|}{2}+\frac{2D}{|U|},$
and the claim follows.
In the rest of the proof, we assume that $\Delta<\frac{|U|}{4}$. Let
$j\in[1,p]$ be such that $\mathrm{dist}(z_{j})=R$, breaking ties arbitrarily.
We split the tour $T$ at vertex $z_{j}$, obtaining two paths
$P_{1}:=z_{1}z_{2}\dots z_{j}$ and $P_{2}:=z_{j}z_{j+1}\dots z_{p}$. For each
$i\in[1,R-1]$, let $x_{i}$ (resp. $y_{i}$) be a vertex on $P_{1}$ (resp.
$P_{2}$), such that $\mathrm{dist}(x_{i})$ (resp. $\mathrm{dist}(y_{i})$)
equals $i$; breaking ties arbitrarily. Let $A$ denote the multi-set of
vertices $\\{z_{j},x_{1},\dots,x_{R-1},y_{1},\dots,y_{R-1}\\}$. Observe that
$A\subseteq W$. Define another multi-set $B:=W\setminus A$. Let
$U_{A}\subseteq U$ denote the set of vertices $u\in U$ such that $u$ has at
least one occurrence in $A$. Let $U_{B}:=U\setminus U_{A}$. Observe that each
element in $U_{B}$ has at least one occurrence in $B$, so
$|B|\geq|U_{B}|=|U|-|U_{A}|.$
From the definition of $W$ and since $O\notin W$, we have
$\mathrm{cost}(T)\geq|W|+1=|A|+|B|+1=2\cdot R+|B|.$
Noting that $R=\frac{D}{|U|}+\Delta$, we have
$\mathrm{cost}(T)\geq\frac{2D}{|U|}+2\Delta+|U|-|U_{A}|.$ (1)
To lower bound $\mathrm{cost}(T)$, we upper bound $|U_{A}|$ in the following
lemma, whose proof is elementary.
###### Lemma 8.
$|U_{A}|\leq\sqrt{4|U|\cdot\Delta+1}$.
###### Proof.
First, consider the case when $|U_{A}|$ is even. Suppose $|U_{A}|=2m$ for some
$m\in\mathbb{N}$. Since $U_{A}\subseteq A$ and using the definition of $A$, we
have
$\sum_{v\in U_{A}}\mathrm{dist}(v)\leq
R+2(R-1)+2(R-2)+\cdots+2(R-m+1)+(R-m)=2m\cdot R-m^{2}.$
Since $\mathrm{dist}(v)\leq R$ for each $v\in U_{B}\subseteq W$, we have
$\sum_{v\in U_{B}}\mathrm{dist}(v)\leq|U_{B}|\cdot R=(|U|-|U_{A}|)\cdot
R=|U|\cdot R-2m\cdot R.$
Thus
$D=\sum_{v\in U}\mathrm{dist}(v)\leq|U|\cdot R-m^{2}.$
Since $D=|U|\cdot(R-\Delta)$, we have
$m\leq\sqrt{|U|\cdot\Delta}.$
The claim follows since $|U_{A}|=2m$.
Next, consider the case when $|U_{A}|$ is odd. Suppose $|U_{A}|=2m+1$ for some
$m\in\mathbb{N}$. Since $U_{A}\subseteq A$ and using the definition of $A$, we
have
$\sum_{v\in U_{A}}\mathrm{dist}(v)\leq
R+2(R-1)+2(R-2)+\dots+2(R-m)=(2m+1)\cdot R-m(m+1).$
Since $\mathrm{dist}(v)\leq R$ for each $v\in U_{B}\subseteq W$, we have
$\sum_{v\in U_{B}}\mathrm{dist}(v)\leq|U_{B}|\cdot R=(|U|-|U_{A}|)\cdot
R=|U|\cdot R-(2m+1)\cdot R.$
Thus
$D=\sum_{v\in U}\mathrm{dist}(v)\leq|U|\cdot R-m(m+1).$
Since $D=|U|\cdot(R-\Delta)$, we have
$m\leq\frac{\sqrt{4|U|\cdot\Delta+1}-1}{2}.$
The claim follows since $|U_{A}|=2m+1$. ∎
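As a quick mechanical check (ours, not part of the proof), the two arithmetic sums used above can be verified symbolically:

```python
# Sanity check of the two sums bounding sum_{v in U_A} dist(v) in Lemma 8.
import sympy as sp

R, m, i = sp.symbols('R m i', positive=True, integer=True)
# Even case |U_A| = 2m:  R + 2(R-1) + ... + 2(R-m+1) + (R-m) = 2mR - m^2.
even_sum = R + 2*sp.summation(R - i, (i, 1, m - 1)) + (R - m)
assert sp.simplify(even_sum - (2*m*R - m**2)) == 0
# Odd case |U_A| = 2m+1:  R + 2(R-1) + ... + 2(R-m) = (2m+1)R - m(m+1).
odd_sum = R + 2*sp.summation(R - i, (i, 1, m))
assert sp.simplify(odd_sum - ((2*m + 1)*R - m*(m + 1))) == 0
```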
From (1) and Lemma 8, we have
$\mathrm{cost}(T)\geq\frac{2D}{|U|}+2\Delta+|U|-\sqrt{4|U|\cdot\Delta+1}.$ (2)
To lower bound $\mathrm{cost}(T)$, we use the following simple fact.
###### Fact 9.
$2\Delta-\sqrt{4|U|\cdot\Delta+1}\geq-\frac{|U|}{2}-\frac{1}{2|U|}.$
###### Proof.
Define
$Y:=2\Delta-\sqrt{4|U|\cdot\Delta+1}+\frac{|U|}{2}+\frac{1}{2|U|}.$
It suffices to show that $Y\geq 0$. Let $t:=\sqrt{4|U|\cdot\Delta+1}$. Thus
$\displaystyle\Delta=\frac{t^{2}-1}{4|U|}$. We have
$Y=2\cdot\frac{t^{2}-1}{4|U|}-t+\frac{|U|}{2}+\frac{1}{2|U|}=\left(\frac{t}{\sqrt{2|U|}}-\sqrt{\frac{|U|}{2}}\right)^{2}\geq
0.$
∎
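The completing-the-square step can likewise be confirmed symbolically (a check of ours, with `u` standing for $|U|$):

```python
# Verification of the square identity in the proof of Fact 9.
import sympy as sp

t, u = sp.symbols('t u', positive=True)  # t = sqrt(4|U|*Delta + 1), u = |U|
Y = 2*(t**2 - 1)/(4*u) - t + u/2 + 1/(2*u)
square = (t/sp.sqrt(2*u) - sp.sqrt(u/2))**2
assert sp.simplify(Y - square) == 0  # hence Y >= 0
```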
From (2) and Fact 9, we conclude that
$\mathrm{cost}(T)\geq\frac{2D}{|U|}+\frac{|U|}{2}-\frac{1}{2|U|}.$
This completes the proof of the Structure Lemma (Lemma 7).
## 4 Proof of Theorem 3
Let $S$ be a traveling salesman tour on $V\cup\{O\}$. Let $\mathrm{ITP}(S)$
denote the cost of the solution computed by the iterated tour partitioning
algorithm applied on $S$. Altinkemer and Gavish [AG90] showed the following
bound.
###### Lemma 10 ([AG90]).
$\mathrm{ITP}(S)\leq\mathrm{rad}+\left(1-\frac{1}{k}\right)\cdot\mathrm{cost}(S).$
From Lemma 10 and the assumption on $\mathrm{cost}(S)$, we have
$\displaystyle\mathrm{ITP}(S)$
$\displaystyle\leq\mathrm{rad}+\left(1-\frac{1}{k}\right)\cdot\left(\beta\cdot
n+\gamma\cdot\mathrm{opt}_{\mathrm{TSP}}\right)$
$\displaystyle=\left(\mathrm{rad}+\frac{n}{2}-\frac{n}{2k^{2}}\right)+\left(\beta-\frac{1}{2}+\frac{1}{2k^{2}}-\frac{\beta}{k}\right)\cdot
n+\left(1-\frac{1}{k}\right)\cdot\gamma\cdot\mathrm{opt}_{\mathrm{TSP}}.$
From the Structure Theorem (Theorem 2),
$\mathrm{rad}+\frac{n}{2}-\frac{n}{2k^{2}}\leq\mathrm{opt}.$
Since $\beta\geq\frac{1}{2}$ and $k\geq 1$, we have
$\frac{1}{2k^{2}}-\frac{\beta}{k}\leq 0.$
Combining, we have
$\mathrm{ITP}(S)\leq\mathrm{opt}+\left(\beta-\frac{1}{2}\right)\cdot
n+\gamma\cdot\mathrm{opt}_{\mathrm{TSP}}\leq\left(\beta+\gamma+\frac{1}{2}\right)\cdot\mathrm{opt},$
where the last inequality follows from $n\leq\mathrm{opt}$ and
$\mathrm{opt}_{\mathrm{TSP}}\leq\mathrm{opt}$.
This completes the proof of Theorem 3.
## 5 Proof of Corollary 4
In this section, we analyze the approximation ratios for the graphic CVRP
using known algorithms for the graphic TSP.
###### Lemma 11 (adaptation from Christofides [Chr76]).
Given a connected graph $G$ with $n+1$ vertices, the Christofides algorithm
computes a traveling salesman tour of cost at most
$n+\frac{1}{2}\cdot\mathrm{opt}_{\mathrm{TSP}}$.
###### Proof.
The Christofides algorithm [Chr76] computes a minimum spanning tree $T$ and
adds a minimum-cost perfect matching $M$ on the odd-degree vertices as a
parity correction. Since $G$ is connected, the cost of $T$ is exactly $n$. By
[Chr76], the cost of $M$ is at most
$\frac{1}{2}\cdot\mathrm{opt}_{\mathrm{TSP}}$. The claim follows. ∎
The first claim in Corollary 4 follows from Theorem 3 and Lemma 11.
###### Lemma 12 (adaptation from Mucha [Muc14] and Mömke and Svensson
[MS16]).
Given a connected graph $G$ with $n+1$ vertices, the Mömke-Svensson algorithm
computes a traveling salesman tour of cost at most
$\frac{n}{3}+\frac{10}{9}\cdot\mathrm{opt}_{\mathrm{TSP}}$.
###### Proof.
Letting $n^{\prime}=n+1$, it suffices to show that, given a connected graph
$G$ with $n^{\prime}$ vertices, the Mömke-Svensson algorithm computes a
traveling salesman tour of cost at most
$\frac{n^{\prime}}{3}+\frac{10}{9}\cdot\mathrm{opt}_{\mathrm{TSP}}-\frac{2}{3}$.
The proof is by induction on the number of vertices. Let $X$ denote the cost of the traveling salesman
tour computed by the Mömke-Svensson algorithm.
First, consider the case when $G$ is $2$-vertex-connected. The Mömke-Svensson
algorithm computes some _circulation_ of cost $c$ and they show in Lemma 4.2
of [MS16]:
$X\leq\frac{4}{3}\cdot n^{\prime}+\frac{2}{3}\cdot c-\frac{2}{3}.$
Mucha shows in Corollary 1 of [Muc14]:
$c\leq\frac{5}{3}\cdot\text{opt}_{\mathrm{TSP}}-\frac{3}{2}\cdot n^{\prime}.$
The claim follows.
Next, consider the case when $G$ is not $2$-vertex-connected. There is a
vertex $v$ such that removing $v$ disconnects the graph. We therefore identify
two subgraphs $G_{1}=(V_{1},E_{1})$ and $G_{2}=(V_{2},E_{2})$ such that
$V_{1}\cup V_{2}=V(G)$, $V_{1}\cap V_{2}=\{v\}$ and $E(G)=E_{1}\cup E_{2}$.
Observe that $|V_{1}|+|V_{2}|=n^{\prime}+1.$ By the induction hypothesis, the
claim holds for both $G_{1}$ and $G_{2}$. Let $\text{opt}_{1}$ (resp.
$\text{opt}_{2}$) be the optimal cost of a TSP solution in $G_{1}$ (resp.
$G_{2}$). Then
$\mathrm{opt}_{\mathrm{TSP}}=\mathrm{opt}_{1}+\mathrm{opt}_{2}$. We have
$X\leq\left(\frac{|V_{1}|}{3}+\frac{10}{9}\cdot\text{opt}_{1}-\frac{2}{3}\right)+\left(\frac{|V_{2}|}{3}+\frac{10}{9}\cdot\text{opt}_{2}-\frac{2}{3}\right)=\frac{n^{\prime}+1}{3}+\frac{10}{9}\cdot\mathrm{opt}_{\mathrm{TSP}}-\frac{4}{3},$
which yields the claim. ∎
From Lemmas 11 and 12, and since $S_{2}$ is the better one of the two
solutions computed by the Christofides algorithm and by the Mömke-Svensson
algorithm, we have
$\mathrm{cost}(S_{2})\leq\frac{1}{4}\cdot\left(n+\frac{1}{2}\cdot\mathrm{opt}_{\mathrm{TSP}}\right)+\frac{3}{4}\cdot\left(\frac{n}{3}+\frac{10}{9}\cdot\mathrm{opt}_{\mathrm{TSP}}\right)=\frac{1}{2}\cdot
n+\frac{23}{24}\cdot\mathrm{opt}_{\mathrm{TSP}}.$ (3)
The second claim in Corollary 4 follows from Theorem 3 and (3).
###### Lemma 13 (Sebő and Vygen [SV14]).
Given a connected graph $G$ with $n+1$ vertices, the Sebő-Vygen algorithm
computes a traveling salesman tour of cost at most
$\frac{7}{5}\cdot\mathrm{opt}_{\mathrm{TSP}}$.
From Lemmas 11 and 13, and since $S_{3}$ is the better one of the two
solutions computed by the Christofides algorithm and by the Sebő-Vygen
algorithm, we have
$\mathrm{cost}(S_{3})\leq\frac{1}{2}\cdot\left(n+\frac{1}{2}\cdot\mathrm{opt}_{\mathrm{TSP}}\right)+\frac{1}{2}\cdot\left(\frac{7}{5}\cdot\mathrm{opt}_{\mathrm{TSP}}\right)=\frac{1}{2}\cdot
n+\frac{19}{20}\cdot\mathrm{opt}_{\mathrm{TSP}}.$ (4)
The last claim in Corollary 4 follows from Theorem 3 and (4).
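To make the resulting constants explicit, the following sketch (ours) evaluates the ratio $\beta+\gamma+\frac{1}{2}$ from Theorem 3 for the three $(\beta,\gamma)$ pairs obtained in this section, i.e., from Lemma 11 and from the combinations (3) and (4):

```python
# Approximation ratios beta + gamma + 1/2 implied by Theorem 3.
from fractions import Fraction as F

bounds = {
    'Christofides (Lemma 11)':             (F(1),    F(1, 2)),
    'Christofides + Moemke-Svensson, (3)': (F(1, 2), F(23, 24)),
    'Christofides + Sebo-Vygen, (4)':      (F(1, 2), F(19, 20)),
}
for name, (beta, gamma) in bounds.items():
    ratio = beta + gamma + F(1, 2)
    print(f'{name}: ratio = {ratio} = {float(ratio):.4f}')
# -> 2, 47/24 (about 1.9583) and 39/20 = 1.95, respectively.
```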
#### Acknowledgments.
We thank Zipei Nie for helpful discussions in mathematics.
## References
* [ACL10] A. Adamaszek, A. Czumaj, and A. Lingas. PTAS for $k$-tour cover problem on the plane for moderately large values of $k$. International Journal of Foundations of Computer Science, 21(06):893–904, 2010.
* [AG87] K. Altinkemer and B. Gavish. Heuristics for unequal weight delivery problems with a fixed error guarantee. Operations Research Letters, 6(4):149–158, 1987.
* [AG90] K. Altinkemer and B. Gavish. Heuristics for delivery problems with constant error guarantees. Transportation Science, 24(4):294–297, 1990.
* [AGG18] N. Agarwal, N. Garg, and S. Gupta. A 4/3-approximation for TSP on cubic 3-edge-connected graphs. Oper. Res. Lett., 46(4):393–396, 2018.
* [AGM16] S. P. Anbuudayasankar, K. Ganesh, and S. Mohapatra. Models for practical routing problems in logistics. Springer, 2016.
* [AKK01] T. Asano, N. Katoh, and K. Kawashima. A new approximation algorithm for the capacitated vehicle routing problem on a tree. Journal of Combinatorial Optimization, 5(2):213–231, 2001.
* [AKS15] H. An, R. D. Kleinberg, and D. B. Shmoys. Improving Christofides’ algorithm for the $s$-$t$ path TSP. Journal of the ACM (JACM), 62(5):34:1–34:28, 2015.
* [AKTT97] T. Asano, N. Katoh, H. Tamaki, and T. Tokuyama. Covering points in the plane by $k$-tours: towards a polynomial time approximation scheme for general $k$. In ACM Symposium on Theory of Computing (STOC), pages 275–283, 1997.
* [BDO06] A. Bompadre, M. Dror, and J. B. Orlin. Improved bounds for vehicle routing solutions. Discrete Optimization, 3(4):299–316, 2006.
* [BDO07] A. Bompadre, M. Dror, and J. B. Orlin. Probabilistic analysis of unit-demand vehicle routeing problems. Journal of Applied Probability, 44(1):259–278, 2007.
* [Bec18] A. Becker. A tight 4/3 approximation for capacitated vehicle routing in trees. In Approximation, Randomization, and Combinatorial Optimization (APPROX/RANDOM), volume 116, pages 3:1–3:15, 2018.
* [BKS17] A. Becker, P. N. Klein, and D. Saulpic. A quasi-polynomial-time approximation scheme for vehicle routing on planar and bounded-genus graphs. In 25th Annual European Symposium on Algorithms (ESA), 2017.
* [BKS18] A. Becker, P. N. Klein, and D. Saulpic. Polynomial-time approximation schemes for $k$-center, $k$-median, and capacitated vehicle routing in bounded highway dimension. In 26th Annual European Symposium on Algorithms (ESA), 2018.
* [BKS19] A. Becker, P. N. Klein, and A. Schild. A PTAS for bounded-capacity vehicle routing in planar graphs. In Workshop on Algorithms and Data Structures, pages 99–111. Springer, 2019.
* [BP19] A. Becker and A. Paul. A framework for vehicle routing approximation schemes in trees. In Workshop on Algorithms and Data Structures, pages 112–125. Springer, 2019.
* [BSvdSS14] S. C. Boyd, R. Sitters, S. van der Ster, and L. Stougie. The traveling salesman problem on cubic and subcubic graphs. Mathematical Programming, 144(1-2):227–245, 2014.
* [BTV22] J. Blauth, V. Traub, and J. Vygen. Improving the approximation ratio for capacitated vehicle routing. Mathematical Programming (to appear), 2022.
* [CFKL20] V. Cohen-Addad, A. Filtser, P. N. Klein, and H. Le. On light spanners, low-treewidth embeddings and efficient traversing in minor-free graphs. In Symposium on Foundations of Computer Science (FOCS), pages 589–600. IEEE, 2020.
* [Chr76] N. Christofides. Worst-case analysis of a new heuristic for the travelling salesman problem. Technical Report 388, Graduate School of Industrial Administration, Carnegie Mellon University, 1976.
* [CL12] T. G. Crainic and G. Laporte. Fleet management and logistics. Springer Science & Business Media, 2012.
* [CLS15] J. R. Correa, O. Larré, and J. A. Soto. TSP tours in cubic graphs: beyond 4/3. SIAM J. Discret. Math., 29(2):915–939, 2015.
* [DKM17] Z. Dvorák, D. Král, and B. Mohar. Graphic TSP in cubic graphs. In STACS, volume 66, pages 27:1–27:13, 2017.
* [DL18] B. Duník and R. Lukotka. Cubic TSP: a 1.3-approximation. SIAM J. Discret. Math., 32(3):2094–2114, 2018.
* [DM15] A. Das and C. Mathieu. A quasipolynomial time approximation scheme for Euclidean capacitated vehicle routing. Algorithmica, 73(1):115–142, 2015.
* [DR59] G. B. Dantzig and J. H. Ramser. The truck dispatching problem. Management Science, 6(1):80–91, 1959.
* [FMRS22] Z. Friggstad, R. Mousavi, M. Rahgoshay, and M. R. Salavatipour. Improved Approximations for Capacitated Vehicle Routing with Unsplittable Client Demands. In Integer Programming and Combinatorial Optimization - 23rd International Conference, IPCO, 2022.
* [Gao13] Z. Gao. An LP-based $\frac{3}{2}$-approximation algorithm for the $s$-$t$ path graph traveling salesman problem. Oper. Res. Lett., 41(6):615–617, 2013.
* [GKP95] M. Grigni, E. Koutsoupias, and C. H. Papadimitriou. An approximation scheme for planar graph TSP. In FOCS, pages 640–645. IEEE, 1995.
* [GLS05] D. Gamarnik, M. Lewenstein, and M. Sviridenko. An improved upper bound for the TSP in cubic 3-edge-connected graphs. Oper. Res. Lett., 33(5):467–474, 2005.
* [GMZ22] F. Grandoni, C. Mathieu, and H. Zhou. Unsplittable Euclidean Capacitated Vehicle Routing: A $(2+\epsilon)$-Approximation Algorithm. arXiv preprint arXiv:2209.05520, 2022.
* [GRW08] B. Golden, S. Raghavan, and E. Wasil. The vehicle routing problem: latest advances and new challenges, volume 43 of Operations Research/Computer Science Interfaces Series. Springer, 2008.
* [HR85] M. Haimovich and A. H. G. Rinnooy Kan. Bounds and heuristics for capacitated routing problems. Mathematics of Operations Research, 10(4):527–542, 1985.
* [JS22] A. Jayaprakash and M. R. Salavatipour. Approximation schemes for capacitated vehicle routing on graphs of bounded treewidth, bounded doubling, or highway dimension. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 877–893, 2022.
* [KKO21] A. R. Karlin, N. Klein, and S. Oveis Gharan. A (slightly) improved approximation algorithm for metric TSP. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 32–45, 2021.
* [KR16] J. Karp and R. Ravi. A $\frac{9}{7}$-approximation algorithm for graphic TSP in cubic bipartite graphs. Discret. Appl. Math., 209:164–216, 2016.
* [LSL90] C. L. Li and D. Simchi-Levi. Worst-case analysis of heuristics for multidepot capacitated vehicle routing problems. ORSA Journal on Computing, 2(1):64–73, 1990.
* [LSLD92] C. L. Li, D. Simchi-Levi, and M. Desrochers. On the distance constrained vehicle routing problem. Operations research, 40(4):790–799, 1992.
* [Mos98] G. Mosheiov. Vehicle routing with pick-up and delivery: tour-partitioning heuristics. Computers & Industrial Engineering, 34(3):669–684, 1998.
* [MS16] T. Mömke and O. Svensson. Removing and adding edges for the traveling salesman problem. Journal of the ACM (JACM), 63(1):1–28, 2016.
* [Muc14] M. Mucha. $\frac{13}{9}$-approximation for graphic TSP. Theory of computing systems, 55(4):640–657, 2014.
* [MZ21] C. Mathieu and H. Zhou. Probabilistic analysis of Euclidean capacitated vehicle routing. In International Symposium on Algorithms and Computation (ISAAC), pages 43:1–43:16, 2021.
* [MZ22a] C. Mathieu and H. Zhou. A PTAS for capacitated vehicle routing on trees. In Proceedings of the International Colloquium on Automata, Languages and Programming (ICALP), pages 95:1–95:20, 2022.
* [MZ22b] C. Mathieu and H. Zhou. A tight $(1.5+\epsilon)$-approximation for unsplittable capacitated vehicle routing on trees. arXiv preprint arXiv:2202.05691, 2022.
* [New20] A. Newman. An improved analysis of the Mömke-Svensson algorithm for graph-TSP on subquartic graphs. SIAM J. Discret. Math., 34(1):865–884, 2020.
* [OSS11] S. Oveis Gharan, A. Saberi, and M. Singh. A randomized rounding approach to the traveling salesman problem. In Symposium on Foundations of Computer Science (FOCS), pages 550–559. IEEE, 2011.
* [Ser78] A. I. Serdyukov. On some extremal walks in graphs (in Russian). Upravlyaemye Sistemy, 17:76–79, 1978.
* [SV14] A. Sebő and J. Vygen. Shorter tours by nicer ears: 7/5-approximation for the graph-TSP, 3/2 for the path version, and 4/3 for two-edge-connected subgraphs. Combinatorica, pages 1–34, 2014.
* [TV02] P. Toth and D. Vigo. The Vehicle Routing Problem. SIAM, 2002.
* [TV18] V. Traub and J. Vygen. Beating the integrality ratio for $s$-$t$-tours in graphs. In FOCS, pages 766–777. IEEE, 2018.
* [TVZ20] V. Traub, J. Vygen, and R. Zenklusen. Reducing path TSP to TSP. In STOC, pages 14–27. ACM, 2020.
* [vBS20] R. van Bevern and V. A. Slugina. A historical note on the 3/2-approximation algorithm for the metric traveling salesman problem. Historia Mathematica, 53:118–127, 2020.
* [Vyg12] J. Vygen. New approximation algorithms for the TSP. OPTIMA, 90:1–12, 2012.
* [vZ18] A. van Zuylen. Improved approximations for cubic bipartite and cubic TSP. Math. Program., 172(1-2):399–413, 2018.
January, 2024
Charged massless scalar fields in a charged $C$-metric black hole: Exact
solutions, Hawking radiation and scattering of scalar waves
Ming Chen, Gabriele Tartaglino-Mazzucchelli, Yao-Zhong Zhang
School of Mathematics and Physics, University of Queensland,
St Lucia, Brisbane, Queensland 4072, Australia
ming.chen@uq.edu.au, g.tartaglino-mazzucchelli<EMAIL_ADDRESS>
We study Hawking radiation and wave scattering of charged scalar fields in a
charged $C$-metric black hole background. The conformally invariant wave
equation for charged scalar fields can be separated into radial and angular
parts, each with five singularities. We show that the radial and angular
equations can be respectively transformed into the general Heun equation. We
explore exact solutions of the radial Heun equation in terms of the local Heun
functions and connection coefficients. Exact behaviours of the asymptotic wave
functions are determined without approximations. We apply the exact results to
derive Hawking radiation, quasinormal modes and superradiance. Since
quasinormal modes are significant for the observation of black holes through
gravitational waves, we present numerical results for the quasinormal modes
and show their dependence on the $C$-metric parameters and on the charge of
the scalar field. The analytic expressions of the solutions allow for fast,
high-precision numerical calculations without restrictions on the model
parameters.
###### Contents
1. 1 Introduction
2. 2 Heun’s equations
3. 3 Asymptotic behaviors and Hawking radiation
1. 3.1 Asymptotic behaviors
2. 3.2 Hawking radiation
4. 4 Boundary conditions for Quasinormal modes and superradiance at asymptotic limits
1. 4.1 Coefficients of asymptotic behaviors in the tortoise coordinate
2. 4.2 Applications to QNMs
3. 4.3 Applications to superradiance
5. 5 Exact GHE and applications to quasinormal modes and superradiance
6. 6 Numerical results for QNMs
1. 6.1 Acceleration modes
2. 6.2 Near extremal modes
3. 6.3 Photon sphere modes
7. 7 Conclusions and Discussions
8. A Derivation of the radial Heun equation
## 1 Introduction
The existence of black holes (BHs) is one of the most striking predictions in
Einstein’s theory of general relativity. BHs are exact solutions of the
Einstein field equations, characterized by elementary macroscopic quantities
such as mass, angular momentum and electric charge. Their physics has been an
immensely fruitful research ground, playing a vital role in examining
gravitational fields under extreme conditions. Research on BHs has recently
experienced substantial advancements, propelled by the detection of
gravitational waves from neutron-star and black-hole collisions [1, 2].
Additionally, there have been groundbreaking measurements of the central BH
of the M87 galaxy [3]. These developments mark a significant leap forward in
understanding the dynamics and properties of BHs in our cosmos.
Black holes were predicted to exhibit remarkable phenomena, such as Hawking
radiation, when quantum effects come into play [4]. Under specific conditions,
they can amplify incident waves at the expense of the black hole's spin [5] or
charge [6, 7] through, e.g., the Penrose process and superradiance [8].
Perhaps more significantly, the detected gravitational waves from a pair of
merging BHs [1] indicate that the interaction of two BHs can be conditionally
divided into four stages: the Newtonian (inspiral) stage, the merger of the
two BHs into a single one, the formation of a single final BH, and the
consequent ringdown [1, 2]. The ringdown phase, or ringdown waveform,
originates from the distorted final black hole and comprises a superposition
of quasinormal modes (QNMs). Each QNM possesses a complex
frequency, where the real part corresponds to the oscillation frequency and
the imaginary part is the inverse of the damping time. This damping time is
uniquely determined by the mass and angular momentum of the black hole. As the
frequencies and damping time of QNMs are directly related to the “No Hair”
theorem of BHs, a precise identification of QNMs serves as a conclusive
indicator for BHs and provides a crucial test for general relativity in the
context of strong gravitational fields [9].
Furthermore, it is well known that small perturbations of a black hole
background take the form of damped oscillations, which also lead to QNMs. By
causality, QNMs can be calculated when the perturbation is purely ingoing at
the exterior event horizon and purely outgoing at spatial infinity [10, 11, 12,
13]. Various types of fields have been used as the test fields to induce
perturbations in different black hole backgrounds, based on solutions of the
Teukolsky master equations [14, 15].
In most cases, the master wave equations can be separated into a set of
ordinary differential equations (ODEs). Various methods have been employed to
solve these separated ODEs, both approximately and analytically [16, 17, 18,
19, 20, 21] — at least for subclasses of gravitational backgrounds that allow
for that. In recent years, more researchers have been interested in solving
the ODEs analytically in terms of the Heun functions by transforming the ODEs
into Heun’s equations [22, 23, 24, 25, 26]. In [20, 27, 28], the exact
formulation for wave scatterings of BHs and Kerr-AdS5 type spacetimes was
presented based on the general Heun equation (GHE) and their exact solutions.
In [18, 29, 30], QNMs of the Kerr-dS BHs were studied in terms of the Heun
functions.
In this paper, we develop and employ the analytic approach to study solutions
and applications of the master equation in a charged $C$-metric background
[31]. The $C$-metric serves as a mathematical model for describing a pair of
black holes that are causally separated and moving apart from each other with
opposite accelerations. This metric is an extension of the Schwarzschild
solution and introduces an extra parameter associated with the acceleration of
the black hole, in addition to its mass parameter [32, 33]. In the charged
version of the $C$-metric, an electric charge parameter is included to account
for the influence of an electromagnetic field. The $C$-metric represents a
class of boost-symmetric black hole geometries and can be interpreted as a
black hole accelerated by its interaction with a local cosmological medium.
In [34, 35], the conformally invariant Klein-Gordon (KG) wave equation for a
massless neutral scalar field in the charged $C$-metric black hole was
derived; this coupling was further generalized to the charged scalar field
case in [31]. The master equations in both cases can be separated into
radial and angular parts (see Appendix A of [31]). Using the asymptotic
behaviors of the solutions of the separated ODEs and the Mathematica package
QNMSpectral developed in [10], QNMs and superradiance of the $C$-metric black
hole were investigated in [31]. These results indicated that the $C$-metric
remains stable under perturbations by neutral or charged scalar fields; the
stability is reflected in the QNM frequencies, which correspond to modes that
decay monotonically over time.
In this work, we study Hawking radiation and wave scattering of a charged
massless scalar field in the charged $C$-metric black hole background. The
corresponding conformally invariant KG wave equation separates into two ODEs
with five singularities. We show that the radial and angular ODEs can be
respectively transformed into the general Heun equation. Exact solutions of
the radial Heun equation are obtained in terms of the local Heun functions and
connection coefficients. This enables us to determine the exact behaviours of
the asymptotic wave functions. We apply the exact results to analyse Hawking
radiation, quasinormal modes and superradiance. Numerical simulations for
quasinormal modes are also presented.
This paper is organized as follows. In Sec. 2, we derive the Heun equations
describing the radial and angular parts of the KG wave equation in the
$C$-metric background. In Sec. 3, we determine the asymptotic behaviors of the
radial wave function and compute the Hawking radiation. In Sec. 4, we derive
the exact coefficients of the asymptotic wave functions in the tortoise
coordinates and apply them to discuss the boundary conditions for QNMs and
superradiance. In Sec. 5, we present exact solutions of the Heun equations via
connection coefficients and obtain an analytic formulation of QNMs and
superradiance. Sec. 6 gives our numerical simulation results on QNMs and
provides a comparison of our results with those of other approaches. We
present the conclusions of the paper in Sec. 7. In Appendix A, we present
more details regarding the derivation of the radial Heun equation of Sec. 2.
Note
that, throughout the paper, we set the gravitational constant and the speed of
light equal to 1.
## 2 Heun’s equations
The line element of the charged $C$-metric in the Boyer-Lindquist coordinates
reads [31, 34, 35]
$\displaystyle ds^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{\Lambda^{2}}\bigg{(}-f(r)dt^{2}+f^{-1}(r)dr^{2}+P(\theta)^{-1}r^{2}d\theta^{2}+P(\theta)r^{2}\sin^{2}\theta
d\varphi^{2}\bigg{)},$ (1a) where $\Lambda=1-\tilde{\alpha}r\cos\theta$ works
as a conformal factor, $\tilde{\alpha}$ is the acceleration parameter which is
positive real in the de Sitter spacetime, and the functions $f(r)$ and
$P(\theta)$ are given by $\displaystyle f(r)$ $\displaystyle=$
$\displaystyle\bigg{(}1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\bigg{)}(1-\tilde{\alpha}^{2}r^{2}),$
(1b) $\displaystyle P(\theta)$ $\displaystyle=$ $\displaystyle
1-2\tilde{\alpha}M\cos\theta+\tilde{\alpha}^{2}Q^{2}\cos^{2}\theta,$ (1c)
with $\theta\in(0,\pi)$ and $M$ being the mass of the $C$-metric black hole.
$Q$ is the electric charge, and the electromagnetic potential associated with
the charged black hole source is $A_{\mu}=(-Q/r,0,0,0)$.
The conformally invariant wave equation for a massless charged scalar field
$\phi$ in the $C$-metric background is given by [31]
$\displaystyle 0$ $\displaystyle=$ $\displaystyle\frac{\partial}{\partial
r}\bigg{(}r^{2}f(r)\frac{\partial\tilde{\phi}}{\partial
r}\bigg{)}-\frac{r^{4}}{r^{2}f(r)}\frac{\partial^{2}\tilde{\phi}}{\partial
t^{2}}-\frac{2iqQr}{r^{2}f(r)}\frac{\partial\tilde{\phi}}{\partial
t}+\frac{q^{2}Q^{2}}{r^{2}f(r)}\tilde{\phi}$ (2)
$\displaystyle+\frac{1}{6}\bigg{(}r^{2}f^{\prime\prime}(r)+4rf^{\prime}(r)+2f(r)\bigg{)}\tilde{\phi}+\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\bigg{(}P(\theta)\sin\theta\frac{\partial}{\partial\theta}\tilde{\phi}\bigg{)}$
$\displaystyle+\frac{1}{P(\theta)\sin^{2}\theta}\frac{\partial^{2}\tilde{\phi}}{\partial\varphi^{2}}+\frac{1}{6}\bigg{(}P^{\prime\prime}(\theta)+3\cot\theta
P^{\prime}(\theta)-2P(\theta)\bigg{)}\tilde{\phi},$
where $\tilde{\phi}=\Lambda^{-1}\phi$ and $q$ is the charge of the scalar
field.
The conformally invariant wave equation (2) is separable [36]. Indeed, we
consider the Ansatz
$\tilde{\phi}(r,\theta,\varphi,t)=e^{-i\omega
t}R(r)e^{im\varphi}\,\Theta(\theta),$ (3)
where $\omega$ is the frequency (energy) of the scalar field or wave under BH
perturbations, and $m\in\mathbb{Z}$ is the azimuthal number. Moreover, $R(r)$
and $\Theta(\theta)$ are functions of the radial and angular variables $r$ and
$\theta$, respectively [37, 38]. Then, substituting (3) into (2), we find that
the wave equation is separated into the radial and angular ODEs,
$\displaystyle
R(r)^{\prime\prime}+\frac{\Delta^{\prime}(r)}{\Delta(r)}R(r)^{\prime}+\frac{1}{\Delta(r)}\bigg{[}\frac{\Omega(r)}{\Delta(r)}+\frac{1}{6}\Delta^{\prime\prime}(r)+\lambda\bigg{]}R(r)=0,$
(4a)
$\displaystyle\Theta^{\prime\prime}(\theta)+\left(\frac{P^{\prime}(\theta)}{P(\theta)}+\cot\theta\right)\Theta^{\prime}(\theta)+\frac{1}{P(\theta)}\bigg{[}\frac{-m^{2}}{P(\theta)\sin^{2}\theta}+2\tilde{\alpha}M\cos\theta-2\tilde{\alpha}^{2}Q^{2}\cos^{2}\theta+\frac{\tilde{\alpha}^{2}Q^{2}-1}{3}+\lambda\bigg{]}\Theta(\theta)=0,$
(4b)
where the prime denotes the derivative with respect to the argument, $\lambda$
is the separation constant and
$\displaystyle\Delta(r)$ $\displaystyle=$ $\displaystyle
r^{2}f(r)=-\tilde{\alpha}^{2}(r^{2}-2Mr+Q^{2})\bigg{(}r^{2}-\frac{1}{\tilde{\alpha}^{2}}\bigg{)},~{}~{}~{}$
(5a) $\displaystyle\Omega(r)$ $\displaystyle=$
$\displaystyle\omega^{2}r^{4}-2qQ\omega r+q^{2}Q^{2}.$ (5b)
Before proceeding further, it is worth mentioning that when waves propagate
near or cross the horizon of a black hole, the extreme gravitational time
dilation leads to a substantial change in the observed frequencies $\omega$.
Typically, this effect leads to a discrete set of complex frequencies
representing exponential decay of the waves if the imaginary part is negative,
and dynamical instability of the waves if the imaginary part is positive [10].
It is thus reasonable to assume that $\omega$ is complex, i.e.
$\omega=\omega_{{}_{R}}+i\omega_{{}_{I}}$, where
$\omega_{{}_{R}}=\textrm{Re}[\omega]$ and $\omega_{{}_{I}}=\textrm{Im}[\omega]$
are the real and imaginary parts, respectively. In the
following we will use $J(r)$ to denote the square root of $\Omega(r)$,
$J(r)=\sqrt{\Omega(r)},$ (6)
which can be expressed as
$\displaystyle
J(r)=\bigg{[}\sqrt{\frac{\bar{r}+a}{2}}+i\;\textrm{sgn}(b)\sqrt{\frac{\bar{r}-a}{2}}\bigg{]},$
(7)
where $\bar{r}=\sqrt{a^{2}+b^{2}}$ is the modulus,
$a=(\omega_{{}_{R}}^{2}-\omega_{{}_{I}}^{2})r^{4}+q^{2}Q^{2}-2qQ\omega_{{}_{R}}r$,
and $b=2(\omega_{{}_{R}}\omega_{{}_{I}}r^{4}-qQ\omega_{{}_{I}}r)$.
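As a numerical sanity check (ours, not part of the derivation), the closed form (7) can be compared against the principal complex square root of $\Omega(r)$ in (5b); the parameter values below are illustrative only.

```python
# Check that (7) reproduces the principal branch of sqrt(Omega(r)).
import numpy as np

def J_closed_form(r, omega, q, Q):
    wR, wI = omega.real, omega.imag
    a = (wR**2 - wI**2) * r**4 + q**2 * Q**2 - 2*q*Q*wR*r
    b = 2 * (wR*wI*r**4 - q*Q*wI*r)
    rbar = np.hypot(a, b)  # the modulus sqrt(a^2 + b^2)
    return np.sqrt((rbar + a)/2) + 1j*np.sign(b)*np.sqrt((rbar - a)/2)

r, omega, q, Q = 2.3, 0.7 - 0.2j, 0.1, 0.4   # sample values
Omega = omega**2 * r**4 - 2*q*Q*omega*r + q**2 * Q**2
assert np.isclose(J_closed_form(r, omega, q, Q), np.sqrt(Omega))
```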
The separated ODEs (4a) and (4b) have five regular singularities. We now show
that both equations can be transformed into the general Heun differential
equation via the change of variables.
We start with the radial equation. Setting
$\displaystyle R(r)=\Delta^{-\frac{1}{2}}(r)\Psi(r),$ (8)
we obtain from (4a) the ODE for $\Psi(r)$,
$\displaystyle\Psi^{\prime\prime}(r)+\bigg{[}\frac{1}{4}\big{(}\frac{\Delta^{\prime}(r)}{\Delta(r)}\big{)}^{2}-\frac{1}{3}\frac{\Delta^{\prime\prime}(r)}{\Delta(r)}+\frac{\Omega(r)}{\Delta^{2}(r)}+\frac{\lambda}{\Delta(r)}\bigg{]}\Psi(r)=0.~{}~{}~{}$
(9)
This equation has four finite regular singularities as well as one infinite
regular singularity at $r=\infty$. The finite singularities are determined by
$\displaystyle\Delta(r)=-\tilde{\alpha}^{2}(r-r_{+})(r-r_{-})(r-r^{\prime}_{+})(r-r^{\prime}_{-})=0,$
(10)
where $r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}}$ and
$r^{\prime}_{\pm}=\pm\frac{1}{\tilde{\alpha}}$. We set the ordering of the
four roots as $r^{\prime}_{-}<0\leq r_{-}<r_{+}<r^{\prime}_{+}$. Then
$r=r_{+},r_{-},r_{+}^{\prime}$ are the event, Cauchy and acceleration horizons,
respectively, while $r=r_{-}^{\prime}$ is not physical. We are interested in
the static region $r_{+}<r<r^{\prime}_{+}$, in which $\Delta(r)$ is positive
and the metric has fixed signature [34, 24]. This ordering requires that
$r_{+}\leq\frac{1}{\tilde{\alpha}}$, i.e.,
$\tilde{\alpha}(M+\sqrt{M^{2}-Q^{2}})\leq 1$. When the equality holds, the
black hole becomes extremal. This case is also known as the Nariai limit.
Another extremal limit occurs when $M=Q$. In this case, the event and Cauchy
horizons coincide. Note that in the static region $P(\theta)>0$ for all
$\theta\in(0,\pi)$ [39, 31].
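For concreteness, a small numerical sketch (ours; the parameter values are illustrative only) computes the four roots in (10) and checks the assumed ordering together with the sub-extremality condition $\tilde{\alpha}(M+\sqrt{M^{2}-Q^{2}})\leq 1$:

```python
# Horizon structure of the charged C-metric for sample parameters.
import numpy as np

M, Q, alpha = 1.0, 0.5, 0.2            # mass, charge, acceleration (sample)
r_minus = M - np.sqrt(M**2 - Q**2)     # Cauchy horizon r_-
r_plus  = M + np.sqrt(M**2 - Q**2)     # event horizon r_+
rp_plus, rp_minus = 1/alpha, -1/alpha  # acceleration horizon r'_+ and r'_-

assert rp_minus < 0 <= r_minus < r_plus < rp_plus   # the assumed ordering
assert alpha * r_plus <= 1   # equality would be the Nariai (extremal) limit
print(f"r_- = {r_minus:.4f}, r_+ = {r_plus:.4f}, r'_+ = {rp_plus:.4f}")
```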
We make the following variable change
$\displaystyle
z:=\frac{r^{\prime}_{+}-r_{-}}{r^{\prime}_{+}-r_{+}}\frac{r-r_{+}}{r-r_{-}}.$
(11)
Under this transformation,
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;r=r_{+}\rightarrow
z=0,\;\;\quad r=r^{\prime}_{+}\rightarrow z=1,$ (12a)
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;r=r^{\prime}_{-}\rightarrow
z=z_{r},\;\;\quad r=r_{-}\rightarrow z=\infty,$ (12b)
where $z_{r}=z_{\infty}\frac{r^{\prime}_{-}-r_{+}}{r^{\prime}_{-}-r_{-}}$ with
$z_{\infty}=\frac{r^{\prime}_{+}-r_{-}}{r^{\prime}_{+}-r_{+}}$. Thus this
transformation maps the region of interest, $r_{+}<r<r^{\prime}_{+}$, to the
one $0<z<1$. In terms of the new variable $z$, the radial equation (9) becomes
$\displaystyle\Psi^{\prime\prime}(z)+\frac{2}{z-z_{\infty}}\Psi^{\prime}(z)$
$\displaystyle~{}~{}~{}+\frac{(r_{+}-r_{-})^{2}z^{2}_{\infty}}{(z-z_{\infty})^{4}}\bigg{[}\frac{1}{4}\bigg{(}\frac{\Delta^{\prime}(r)}{\Delta(r)}\bigg{)}^{2}-\frac{1}{3}\frac{\Delta^{\prime\prime}(r)}{\Delta(r)}+\frac{\Omega(r)}{\Delta^{2}(r)}+\frac{\lambda}{\Delta(r)}\bigg{]}\Psi(z)=0.~{}~{}~{}$
(13)
Setting
$\displaystyle\Psi(z)=z^{C_{1}}(z-1)^{C_{2}}(z-z_{r})^{C_{3}}(z-z_{\infty})^{-1}\mathcal{Y}(z),$
(14)
after tedious computations (see Appendix A for details), we arrive at the
following radial ODE
$\displaystyle\mathcal{Y}^{\prime\prime}(z)+\bigg{(}\frac{\gamma}{z}+\frac{\delta}{z-1}+\frac{\epsilon}{z-z_{r}}\bigg{)}\mathcal{Y}^{\prime}(z)+\frac{\alpha\beta
z-{q_{r}}}{z(z-1)(z-z_{r})}\mathcal{Y}(z)=0,~{}~{}~{}$ (15)
where
$\displaystyle\gamma=2C_{1},\;\;\quad\delta=2C_{2},\;\;\quad\epsilon=2C_{3},\;\;\quad\alpha=1\pm\sum_{j=1}^{4}\tilde{B}_{j},$
(16a)
$\displaystyle\beta=1\pm\sum_{j=1}^{3}\tilde{B}_{j}\mp\tilde{B}_{4},\;\;\quad{q_{r}}=2C_{1}C_{3}+2C_{1}C_{2}z_{r}-Az_{r},$
(16b)
with
$\displaystyle
C_{j}=\frac{1}{2}\pm\tilde{B}_{j},\;\;\quad\tilde{B}_{j}:=i\frac{J(r_{j})}{\Delta^{\prime}(r_{j})},$
(17a) $\displaystyle A$ $\displaystyle=$
$\displaystyle\frac{1}{3}\bigg{[}\frac{r^{\prime}_{+}-r_{+}}{r^{\prime}_{+}-r_{-}}+\frac{1}{2}\frac{r_{-}-r_{+}}{r_{-}-r^{\prime}_{+}}-\frac{1}{2}\frac{(r_{-}-r_{+})(r^{\prime}_{+}-r_{+})}{(r^{\prime}_{+}-r_{-})(r^{\prime}_{-}-r_{+})}\bigg{]}$
(17b)
$\displaystyle+\frac{1}{z^{2}_{r}}\bigg{(}2E+D+\frac{2E}{z_{r}}\bigg{)}-\frac{\lambda}{\tilde{\alpha}^{2}}\frac{1}{(r_{-}-r^{\prime}_{+})(r_{-}-r^{\prime}_{-})}\frac{z_{\infty}}{z_{r}},$
$\displaystyle E$ $\displaystyle=$
$\displaystyle\frac{J^{2}(r_{+})}{[\Delta^{\prime}(r_{-})]^{2}}z^{4}_{\infty},$
(17c) $\displaystyle D$ $\displaystyle=$
$\displaystyle\frac{-4\omega^{2}r_{-}r^{3}_{+}+2qQ\omega(r_{-}+3r_{+})-4q^{2}Q^{2}}{[\Delta^{\prime}(r_{-})]^{2}}z^{3}_{\infty}.$
(17d)
Here and throughout, we use the identification
$r_{1},r_{2},r_{3},r_{4}\Longleftrightarrow
r_{+},r^{\prime}_{+},r^{\prime}_{-},r_{-},$ (18)
respectively. It can be easily checked that
$\gamma+\delta+\epsilon=\alpha+\beta+1$. So the radial equation (15) is the
general Heun equation with accessory parameter ${q_{r}}$.
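As a consistency check (ours), the Heun condition can be verified symbolically from the definitions (16)–(17a), taking the upper signs throughout:

```python
# gamma + delta + epsilon = alpha + beta + 1 for the exponents (16)-(17a).
import sympy as sp

B1, B2, B3, B4 = sp.symbols('B1 B2 B3 B4')           # the parameters tilde-B_j
C1, C2, C3 = (sp.Rational(1, 2) + B for B in (B1, B2, B3))
gamma, delta, epsilon = 2*C1, 2*C2, 2*C3
alpha = 1 + (B1 + B2 + B3 + B4)
beta  = 1 + (B1 + B2 + B3) - B4
assert sp.simplify(gamma + delta + epsilon - (alpha + beta + 1)) == 0
```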
We now consider the angular part. Making the variable change $x=\cos\theta$
and setting
$\displaystyle\Theta(x)=[(x^{2}-1)P(x)]^{\frac{1}{2}}\mathcal{X}(x),$ (19)
where $P(x)=1-2\tilde{\alpha}Mx+\tilde{\alpha}^{2}Q^{2}x^{2}$, then the
angular ODE (4) becomes
$\displaystyle\mathcal{X}^{\prime\prime}(x)+\bigg{[}\frac{1}{4}\bigg{(}\frac{\mathcal{P}^{\prime}(x)}{\mathcal{P}(x)}\bigg{)}^{2}-\frac{1}{3}\frac{\mathcal{P}^{\prime\prime}(x)}{\mathcal{P}(x)}+\frac{(im)^{2}}{\mathcal{P}^{2}(x)}+\frac{\lambda}{\mathcal{P}(x)}\bigg{]}\mathcal{X}(x)=0.$
(20)
Here
$\displaystyle\mathcal{P}(x)$ $\displaystyle=$ $\displaystyle(x^{2}-1)P(x)$
(21a) $\displaystyle\equiv$
$\displaystyle\tilde{\alpha}^{2}Q^{2}\,(x-x_{+})(x-x_{-})(x-x^{\prime}_{+})(x-x^{\prime}_{-}),$
(21b) $\displaystyle x_{+}$ $\displaystyle=$ $\displaystyle-1,\quad
x_{-}=\frac{1}{\tilde{\alpha}Q^{2}}(M+\sqrt{M^{2}-Q^{2}}),$ (21c)
$\displaystyle x^{\prime}_{+}$ $\displaystyle=$ $\displaystyle 1,\quad
x^{\prime}_{-}=\frac{1}{\tilde{\alpha}Q^{2}}(M-\sqrt{M^{2}-Q^{2}}).$ (21d)
Let $w^{s}_{\infty}=\frac{x^{\prime}_{+}-x_{-}}{x^{\prime}_{+}-x_{+}}$,
$w_{s}=w^{s}_{\infty}\,\frac{x^{\prime}_{-}-x_{+}}{x^{\prime}_{-}-x_{-}}$.
Similarly to the radial case, making the variable change
$\displaystyle
w:=\frac{x^{\prime}_{+}-x_{-}}{x^{\prime}_{+}-x_{+}}\frac{x-x_{+}}{x-x_{-}},$
(22)
and setting
$\displaystyle\mathcal{X}(w)=w^{{\cal C}_{1}}(w-1)^{{\cal
C}_{2}}(w-w_{s})^{{\cal C}_{3}}(w-w^{s}_{\infty})^{-1}\mathcal{Y}_{s}(w),$
(23)
we find after a long computation that the angular part is also transformed
into the general Heun equation,
$\displaystyle\mathcal{Y}_{s}^{\prime\prime}(w)+\bigg{(}\frac{\gamma_{s}}{w}+\frac{\delta_{s}}{w-1}+\frac{\epsilon_{s}}{w-w_{s}}\bigg{)}\mathcal{Y}_{s}^{\prime}(w)+\frac{\alpha_{s}\beta_{s}w-q_{s}}{w(w-1)(w-w_{s})}\mathcal{Y}_{s}(w)=0,~{}~{}~{}$
(24)
with
$\displaystyle\gamma_{s}=2{\cal C}_{1},\;\;\quad\delta_{s}=2{\cal
C}_{2},\;\;\quad\epsilon_{s}=2{\cal C}_{3},\;\;\quad\alpha_{s}=1,$ (25a)
$\displaystyle\beta_{s}=1\mp 2\tilde{\cal B}_{4},\;\;\quad q_{s}=2{\cal
C}_{1}{\cal C}_{3}+2{\cal C}_{1}{\cal C}_{2}w_{s}-{\cal A}w_{s},~{}~{}~{}$
(25b) and $\displaystyle{\cal C}_{j}=\frac{1}{2}\pm\tilde{\cal
B}_{j},\;\;\quad\tilde{\cal
B}_{j}:=\frac{m}{\mathcal{P}^{\prime}(x_{j})},\quad j=1,2,3,4,$ (25c)
with the following correspondence used here and throughout the rest of our
paper
$x_{1},x_{2},x_{3},x_{4}\Longleftrightarrow
x_{+},x^{\prime}_{+},x^{\prime}_{-},x_{-}.$ (26)
Here $\sum_{j=1}^{4}\tilde{\cal B}_{j}=0$ and ${\cal A}$ has the same form as
$A$ in (17b) with the following replacements: $r\mapsto x,z_{r}\mapsto
w_{s},z_{\infty}\mapsto w^{s}_{\infty},\tilde{\alpha}\mapsto
i\tilde{\alpha}Q,D\mapsto\frac{-4(im)^{2}}{[\mathcal{P}^{\prime}(x_{-})]^{2}}(w^{s}_{\infty})^{3}$
and
$E\mapsto\frac{(im)^{2}}{[\mathcal{P}^{\prime}(x_{-})]^{2}}(w^{s}_{\infty})^{4}$.
We remark that the parameters in (24) satisfy the condition
$\gamma_{s}+\delta_{s}+\epsilon_{s}=\alpha_{s}+\beta_{s}+1$, as required.
## 3 Asymptotic behaviors and Hawking radiation
In this section we determine the asymptotic behaviours of solutions of the
radial Heun equation (15) and apply them to analyse Hawking radiation.
### 3.1 Asymptotic behaviors
The radial GHE (15) has two linearly independent solutions in the vicinity of
$z=0$ [28, 40, 18],
$\displaystyle\ y_{01}(z)$ $\displaystyle=$
$\displaystyle\textrm{HeunG}[a,q_{r},\alpha,\beta,\gamma,\delta,z],$ (27a)
$\displaystyle\ y_{02}(z)$ $\displaystyle=$ $\displaystyle
z^{1-\gamma}\,\textrm{HeunG}[a,q_{r}+(1-\gamma)(\epsilon+a\delta),1+\beta-\gamma,1+\alpha-\gamma,2-\gamma,\delta,z],~{}~{}~{}~{}~{}~{}$
(27b)
where $a=z_{r}$ and $\textrm{HeunG}[\,\ldots,z]$ is the local Heun function
$\displaystyle\textrm{HeunG}[a,q,\alpha,\beta,\gamma,\delta,z]=\sum^{\infty}_{n=0}c_{n}z^{n},$
(28)
with coefficients $c_{n}$ defined by the three-term recurrence relation,
$\displaystyle-qc_{0}+a\gamma c_{1}=0,$ (29a) $\displaystyle
P_{n}c_{n-1}-(Q_{n}+q)c_{n}+R_{n}c_{n+1}=0,\;(n\geq 1),$ (29b)
where
$\displaystyle P_{n}$ $\displaystyle=$ $\displaystyle(n-1+\alpha)(n-1+\beta),$
(30a) $\displaystyle Q_{n}$ $\displaystyle=$ $\displaystyle
n[(n-1+\gamma)(a+1)+a\delta+\epsilon],$ (30b) $\displaystyle R_{n}$
$\displaystyle=$ $\displaystyle a(n+1)(n+\gamma).$ (30c)
The local Heun function (28) is normalized at $z=0$ as
$\displaystyle\textrm{HeunG}[a,q,\alpha,\beta,\gamma,\delta,0]=1.$ (31)
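A minimal numerical sketch (ours) of the local Heun function follows directly from the recurrence (29)–(30); it simply truncates the series (28), so it is reliable only for $|z|$ well inside the radius of convergence.

```python
# Truncated local Heun series from the three-term recurrence (29)-(30).
import numpy as np

def heun_local(a, q, alpha, beta, gamma, delta, z, nmax=200):
    epsilon = alpha + beta + 1 - gamma - delta   # Heun parameter relation
    c = [1.0 + 0j, q / (a * gamma)]              # c_0 = 1 and eq. (29a)
    for n in range(1, nmax):
        P = (n - 1 + alpha) * (n - 1 + beta)
        Qn = n * ((n - 1 + gamma) * (a + 1) + a * delta + epsilon)
        R = a * (n + 1) * (n + gamma)
        c.append(((Qn + q) * c[n] - P * c[n - 1]) / R)   # from eq. (29b)
    return np.polyval(list(reversed(c)), z)

print(heun_local(0.3, 0.1, 1.2, 0.8, 1.0, 1.1, 0.0))   # = 1, cf. eq. (31)
print(heun_local(0.3, 0.1, 1.2, 0.8, 1.0, 1.1, 0.05))  # series value near z = 0
```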
Asymptotically, when $z\rightarrow 0$, it holds
$\displaystyle\ y_{01}(z)$ $\displaystyle\sim$ $\displaystyle
1+\mathcal{O}(z),$ (32a) $\displaystyle\ y_{02}(z)$ $\displaystyle\sim$
$\displaystyle z^{1-\gamma}(1+\mathcal{O}(z)).$ (32b)
It then follows from (14) and (8) that the asymptotic radial wave function at
the exterior event horizon is given by
$\displaystyle R(r)\sim(r-r_{+})^{\tilde{B}_{1}}+(r-r_{+})^{-\tilde{B}_{1}}.$
(33)
Similarly, at $z=1$, there will be two linearly independent solutions,
$\displaystyle\ y_{11}(z)$ $\displaystyle=$
$\displaystyle\textrm{HeunG}[1-a,\alpha\beta-q,\alpha,\beta,\delta,\gamma,1-z],$
(34a) $\displaystyle\ y_{12}(z)$ $\displaystyle=$
$\displaystyle(1-z)^{1-\delta}\,\textrm{HeunG}[1-a,((1-a)\gamma+\epsilon)(1-\delta)+\alpha\beta-q,1+\beta-\delta,1+\alpha-\delta,2-\delta,\gamma,1-z].$
(34b)
Asymptotically, when $z\rightarrow 1$, it holds
$\displaystyle\ y_{11}(z)$ $\displaystyle\sim$ $\displaystyle
1+\mathcal{O}(1-z),$ (35a) $\displaystyle\ y_{12}(z)$ $\displaystyle\sim$
$\displaystyle(1-z)^{1-\delta}(1+\mathcal{O}(1-z)).$ (35b)
We thus obtain a similar asymptotic radial wave function at the acceleration
horizon,
$\displaystyle
R(r)\sim(r-r^{\prime}_{+})^{\tilde{B}_{2}}+(r-r^{\prime}_{+})^{-\tilde{B}_{2}}.$
(36)
These asymptotic solutions are the same as those obtained via the Damour-
Ruffini-Sannan (DRS) method [47, 48, 49].
The tortoise coordinate and the surface gravity at each horizon are defined
as follows,
$\displaystyle
dr_{*}:=\frac{dr}{\Delta(r)};\;\;\quad\kappa(r_{j}):=\frac{\Delta^{\prime}(r_{j})}{2r^{2}_{j}},\quad
j=1,2,3,4.$ (37)
This implies that the tortoise coordinate is
$\displaystyle
r_{*}=\sum^{4}_{j=1}\frac{\ln|r-r_{j}|}{\Delta^{\prime}(r_{j})}=\frac{\ln|r-r_{+}|}{2\kappa(r_{+})r^{2}_{+}}+\frac{\ln|r-r_{-}|}{2\kappa(r_{-})r^{2}_{-}}+\frac{\ln|r-r^{\prime}_{+}|}{2\kappa(r^{\prime}_{+})r^{\prime
2}_{+}}+\frac{\ln|r-r^{\prime}_{-}|}{2\kappa(r^{\prime}_{-})r^{\prime
2}_{-}}.~{}~{}~{}$ (38)
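The partial-fraction structure behind (38) is easy to check numerically; the sketch below (ours, with illustrative parameters) confirms that $dr_{*}/dr=1/\Delta(r)$:

```python
# Tortoise coordinate r_*(r) as a sum of logarithms, cf. eq. (38).
import numpy as np

M, Q, alpha = 1.0, 0.5, 0.2
roots = [M + np.sqrt(M**2 - Q**2), M - np.sqrt(M**2 - Q**2),
         1/alpha, -1/alpha]                    # r_+, r_-, r'_+, r'_-

def Delta(r):
    return -alpha**2 * np.prod([r - rk for rk in roots])

def Delta_prime(rj):
    return -alpha**2 * np.prod([rj - rk for rk in roots if rk != rj])

def r_star(r):
    return sum(np.log(abs(r - rj)) / Delta_prime(rj) for rj in roots)

r, h = 2.5, 1e-6   # a point in the static region r_+ < r < r'_+
numeric = (r_star(r + h) - r_star(r - h)) / (2*h)
assert np.isclose(numeric, 1/Delta(r), rtol=1e-5)
```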
In terms of the tortoise coordinate, eq. (4a) is written as
$\displaystyle\frac{d^{2}R(r)}{dr^{2}_{*}}+\big{[}\Omega(r)-V_{\textrm{eff}}(r)\big{]}R(r)=0,$
(39)
where
$V_{\textrm{eff}}(r)=-\Delta(r)\big{(}\frac{1}{6}\Delta^{\prime\prime}(r)+\lambda\big{)}$.
In the asymptotic limit $r\rightarrow r_{j}$, we have
$\displaystyle\frac{d^{2}R(r)}{dr^{2}_{*}}+\Omega(r)R(r)=0.$ (40)
Its solutions can be expressed in terms of the parameters of the GHE,
$\displaystyle R(r)\sim(r-r_{j})^{\pm\tilde{B}_{j}},\;\;j=1,2,$ (41)
which are consistent with the GHE results (33) and (36).
### 3.2 Hawking radiation
As the first application of the results based on the general Heun equations,
we consider Hawking radiation, which can be interpreted as a scattering
problem with wave modes crossing the BH event horizon or the acceleration
horizon [4].
It can be easily checked, given the ordering $r^{\prime}_{-}<0\leq
r_{-}<r_{+}<r^{\prime}_{+}$, that $\kappa(r_{+})>0$ and
$\kappa(r^{\prime}_{+})<0$. Thus, from the results in the previous section,
the spatially dependent ingoing and outgoing waves take the form,
$\displaystyle
R_{\textrm{in/out}}(r>r_{j})=(r-r_{j})^{\pm\frac{iJ(r_{j})}{2\kappa(r_{j})r^{2}_{j}}}.$
(42)
Depending on the sign of $\textrm{Re}[J]$, there are two branches.
For $\textrm{Re}[J]>0$, we have the following ingoing/outgoing solutions
around different horizons. Around the event horizon $(r-r_{+})\rightarrow
0^{+}$,
$\displaystyle\ R_{\textrm{in}}(r>r_{+})$ $\displaystyle=$
$\displaystyle(r-r_{+})^{-\frac{iJ(r_{+})}{2\kappa(r_{+})r^{2}_{+}}},$ (43a)
$\displaystyle\ R_{\textrm{out}}(r>r_{+})$ $\displaystyle=$
$\displaystyle(r-r_{+})^{\frac{iJ(r_{+})}{2\kappa(r_{+})r^{2}_{+}}},$ (43b)
and around the acceleration horizon $(r-r^{\prime}_{+})\rightarrow 0^{+}$,
$\displaystyle\ R_{\textrm{in}}(r>r^{\prime}_{+})$ $\displaystyle=$
$\displaystyle(r-r^{\prime}_{+})^{\frac{iJ(r^{\prime}_{+})}{2\kappa(r^{\prime}_{+})r^{\prime
2}_{+}}},$ (44a) $\displaystyle\ R_{\textrm{out}}(r>r^{\prime}_{+})$
$\displaystyle=$
$\displaystyle(r-r^{\prime}_{+})^{-\frac{iJ(r^{\prime}_{+})}{2\kappa(r^{\prime}_{+})r^{\prime
2}_{+}}}.$ (44b)
Restoring the time dependence and focusing on the outgoing waves, we have
$\displaystyle\ R_{\textrm{out}}(t,r>r_{+})$ $\displaystyle=$ $\displaystyle
e^{-i\omega t}(r-r_{+})^{\frac{iJ(r_{+})}{2\kappa(r_{+})r^{2}_{+}}},$ (45a)
$\displaystyle\ R_{\textrm{out}}(t,r>r^{\prime}_{+})$ $\displaystyle=$
$\displaystyle e^{-i\omega
t}(r-r^{\prime}_{+})^{-\frac{iJ(r^{\prime}_{+})}{2\kappa(r^{\prime}_{+})r^{\prime
2}_{+}}}.$ (45b)
As a result, the outgoing wave from the event horizon is (45a).
This wave can be analytically continued inside the event horizon
($r_{-}<r<r_{+}$) through the lower half of the complex $r$-plane, along a
small semicircle around $r=r_{+}-i0$: $r-r_{+}\rightarrow(r_{+}-r)e^{-i\pi}$ [21],
$\displaystyle R_{\textrm{out}}^{c}(t,r<r_{+})$ $\displaystyle=$
$\displaystyle e^{-i\omega
t}[(r_{+}-r)e^{-i\pi}]^{\frac{iJ(r_{+})}{2\kappa(r_{+})r^{2}_{+}}}$ (46)
$\displaystyle=$ $\displaystyle e^{-i\omega
t}(r_{+}-r)^{\frac{iJ(r_{+})}{2\kappa(r_{+})r^{2}_{+}}}(e^{-i\pi})^{\frac{i[\textrm{Re}[J(r_{+})]+i\,\textrm{Im}[J(r_{+})]]}{2\kappa(r_{+})r^{2}_{+}}}$
$\displaystyle=$ $\displaystyle e^{-i\omega
t}\bigg{[}(r_{+}-r)^{\frac{iJ(r_{+})}{2\kappa(r_{+})r^{2}_{+}}}\bigg{]}\bigg{[}e^{\frac{i\pi\textrm{Im}[J(r_{+})]}{2\kappa(r_{+})r^{2}_{+}}}\bigg{]}\bigg{[}e^{\frac{\pi\textrm{Re}[J(r_{+})]}{2\kappa(r_{+})r^{2}_{+}}}\bigg{]},$
where $R_{\textrm{out}}^{c}(t,r<r_{+})$ denotes the analytically continued
outgoing wave.
After analytic continuation, we can get the scattering probability and
outgoing energy density for Hawking radiation:
$\displaystyle\Gamma(J)_{r_{+}}=\bigg{|}\frac{R_{\textrm{out}}(t,r>r_{+})}{R_{\textrm{out}}^{c}(t,r<r_{+})}\bigg{|}^{2}=e^{-\frac{\pi\textrm{Re}[J(r_{+})]}{\kappa(r_{+})r^{2}_{+}}},$
(47a) $\displaystyle
N(J)_{r_{+}}=\frac{\Gamma_{J}}{1-\Gamma_{J}}=\frac{1}{e^{\frac{\hbar\textrm{Re}[J(r_{+})]}{k_{B}T_{+}r^{2}_{+}}}-1},$
(47b)
where $k_{B}$ is the Boltzmann constant and $T_{+}$ the Hawking-Unruh
temperature at the event horizon: $k_{B}T_{+}=\frac{\hbar\kappa(r_{+})}{\pi}$.
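For illustration (ours; sample parameters and a sample value of $\textrm{Re}[J(r_{+})]$), the formulas in (47) can be evaluated directly:

```python
# Scattering probability and occupation number at the event horizon, eq. (47).
import numpy as np

M, Q, alpha = 1.0, 0.5, 0.2
roots = [M + np.sqrt(M**2 - Q**2), M - np.sqrt(M**2 - Q**2), 1/alpha, -1/alpha]
r_p = roots[0]                                   # event horizon r_+
Dp = -alpha**2 * np.prod([r_p - rk for rk in roots[1:]])   # Delta'(r_+)
kappa = Dp / (2 * r_p**2)                        # surface gravity, eq. (37)

ReJ = 0.3                                        # sample value of Re[J(r_+)]
Gamma = np.exp(-np.pi * ReJ / (kappa * r_p**2))  # eq. (47a)
N = Gamma / (1 - Gamma)                          # thermal spectrum, eq. (47b)
print(f"kappa = {kappa:.4f}, Gamma = {Gamma:.4f}, N = {N:.4f}")
```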
When the $qQ$-dependent term can be ignored, and the frequency $\omega$ is
real, i.e. $\textrm{Re}J(r_{+})\rightarrow\omega r^{2}_{+}$, the results in
(47) reduce to those in [21, 19, 20] for a spherically symmetric background,
$\displaystyle\Gamma(J)_{r_{+}}\rightarrow
e^{\frac{-\pi\omega}{\kappa(r_{+})}};\;\;\;N(J)_{r_{+}}\rightarrow
N(\omega)_{r_{+}}=\frac{1}{e^{\frac{\hbar\omega}{k_{B}T_{+}}}-1}.$ (48)
Similarly, we can also get the Hawking radiation for the acceleration horizon
$\displaystyle\Gamma(J)_{r^{\prime}_{+}}$ $\displaystyle=$ $\displaystyle
e^{\frac{\pi\textrm{Re}[J(r^{\prime}_{+})]}{\kappa(r^{\prime}_{+})r^{\prime
2}_{+}}},$ (49a) $\displaystyle N(J)_{r^{\prime}_{+}}$ $\displaystyle=$
$\displaystyle\frac{1}{e^{-\frac{\hbar\textrm{Re}[J(r^{\prime}_{+})]}{k_{B}T^{\prime}_{+}r^{\prime
2}_{+}}}-1},$ (49b)
where $T^{\prime}_{+}$ is the Hawking-Unruh temperature at the acceleration
horizon: $k_{B}T^{\prime}_{+}=\frac{\hbar\kappa(r^{\prime}_{+})}{\pi}$.
For $\textrm{Re}[J]<0$, after the same procedure as above, for the event
horizon, we obtain
$\displaystyle\Gamma(J)_{r_{+}}=e^{\frac{\pi\textrm{Re}[J(r_{+})]}{\kappa(r_{+})r^{2}_{+}}},$
(50a) $\displaystyle
N(J)_{r_{+}}=\frac{\Gamma_{J}}{1-\Gamma_{J}}=\frac{1}{e^{-\frac{\hbar\textrm{Re}[J(r_{+})]}{k_{B}\tilde{T}_{+}r^{2}_{+}}}-1},$
(50b)
with $k_{B}\tilde{T}_{+}=-\frac{\hbar\kappa(r_{+})}{\pi}$. At the acceleration
horizon, it holds
$\displaystyle\Gamma(J)_{r^{\prime}_{+}}$ $\displaystyle=$ $\displaystyle
e^{-\frac{\pi\textrm{Re}[J(r^{\prime}_{+})]}{\kappa(r^{\prime}_{+})r^{\prime
2}_{+}}},$ (51a) $\displaystyle N(J)_{r^{\prime}_{+}}$ $\displaystyle=$
$\displaystyle\frac{1}{e^{\frac{\hbar\textrm{Re}[J(r^{\prime}_{+})]}{k_{B}\tilde{T}^{\prime}_{+}r^{\prime
2}_{+}}}-1},$ (51b)
with $k_{B}\tilde{T}^{\prime}_{+}=-\frac{\hbar\kappa(r^{\prime}_{+})}{\pi}$.
## 4 Boundary conditions for Quasinormal modes and superradiance at
asymptotic limits
Asymptotically, the boundary conditions of the wave functions can generally be
characterized by the tortoise coordinates $r_{*}\rightarrow\pm\infty$ and
$w_{*}\rightarrow\pm\infty$ ($x\rightarrow\mp 1$) for the radial and angular
parts, respectively [31, 13].
In the following, we discuss the asymptotic behaviors of the GHE solutions in
general without considering any specific boundary or regularity conditions. We
will then discuss the boundary conditions for two applications, QNMs and
superradiance.
### 4.1 Coefficients of asymptotic behaviors in the tortoise coordinate
We first determine the proportionality coefficients in the asymptotic wave
functions (33), (36) and (41).
From the definition of tortoise coordinate in eq. (38), asymptotically when
$r\rightarrow r_{j}$, we have
$\displaystyle
r_{*}\sim\frac{\ln|r-r_{j}|}{\Delta^{\prime}(r_{j})}+d_{j},$ (52)
where $d_{j}$ are constant coefficients defined by
$d_{j}:=\sum_{k\neq j}\frac{\ln|r_{j}-r_{k}|}{\Delta^{\prime}(r_{k})},\;\quad
j,k=1,2,3,4.$ (53)
Solving (52) gives,
$\displaystyle r-r_{j}\sim
e^{-d_{j}\Delta^{\prime}(r_{j})}e^{\Delta^{\prime}(r_{j})r_{*}}.$ (54)
Then from (8), (14), (32) and (35), we can determine the asymptotic wave
solutions in terms of the tortoise coordinate $r_{*}$ as follows.
Around $z=0$, we have,
$\displaystyle\ R_{01}(r\rightarrow r_{+})\sim(r-r_{+})^{\pm\tilde{B}_{1}},$
(55a) $\displaystyle\ R_{02}(r\rightarrow
r_{+})\sim(r-r_{+})^{\mp\tilde{B}_{1}},$ (55b)
or, in the tortoise coordinate as $r_{*}\rightarrow-\infty$,
$\displaystyle\ R_{01}(r_{*}\rightarrow-\infty)$ $\displaystyle\sim$
$\displaystyle A_{01}^{(\mp)}e^{\pm iJ(r_{+})r_{*}},$ (56a) $\displaystyle\
R_{02}(r_{*}\rightarrow-\infty)$ $\displaystyle\sim$ $\displaystyle
A_{02}^{(\pm)}e^{\mp iJ(r_{+})r_{*}},$ (56b)
where the constant coefficients are
$\displaystyle\ A_{01}^{(\mp)}$ $\displaystyle=$
$\displaystyle\frac{(-1)^{C_{2}}(-z_{r})^{C_{3}}(-z_{\infty})^{-1}}{\sqrt{-\tilde{\alpha}^{2}\prod_{k\neq
1}(r_{+}-r_{k})}}\,e^{\mp id_{1}J(r_{+})},$ (57a) $\displaystyle\
A_{02}^{(\pm)}$ $\displaystyle=$
$\displaystyle\frac{(-1)^{C_{2}}(-z_{r})^{C_{3}}(-z_{\infty})^{-1}}{\sqrt{-\tilde{\alpha}^{2}\prod_{k\neq
1}(r_{+}-r_{k})}}\,e^{\pm id_{1}J(r_{+})}.$ (57b)
Around $z=1$, we get,
$\displaystyle\ R_{11}(r\rightarrow
r^{\prime}_{+})\sim(r-r^{\prime}_{+})^{\pm\tilde{B}_{2}},$ (58a)
$\displaystyle\ R_{12}(r\rightarrow
r^{\prime}_{+})\sim(r-r^{\prime}_{+})^{\mp\tilde{B}_{2}},$ (58b)
or, in the tortoise coordinate as $r_{*}\rightarrow+\infty$,
$\displaystyle\ R_{11}(r_{*}\rightarrow+\infty)\sim A_{11}^{(\mp)}e^{\pm
iJ(r^{\prime}_{+})r_{*}},$ (59a) $\displaystyle\
R_{12}(r_{*}\rightarrow+\infty)\sim A_{12}^{(\pm)}e^{\mp
iJ(r^{\prime}_{+})r_{*}},$ (59b)
with the following constant coefficients,
$\displaystyle\ A_{11}^{(\mp)}$ $\displaystyle=$
$\displaystyle\frac{(1)^{C_{1}}(1-z_{r})^{C_{3}}(1-z_{\infty})^{-1}}{\sqrt{-\tilde{\alpha}^{2}\prod_{k\neq
2}(r^{\prime}_{+}-r_{k})}}\,e^{\mp id_{2}J(r^{\prime}_{+})},$ (60a)
$\displaystyle\ A_{12}^{(\pm)}$ $\displaystyle=$
$\displaystyle\frac{(1)^{C_{1}}(1-z_{r})^{C_{3}}(1-z_{\infty})^{-1}}{\sqrt{-\tilde{\alpha}^{2}\prod_{k\neq
2}(r^{\prime}_{+}-r_{k})}}\,e^{\pm id_{2}J(r^{\prime}_{+})}.$ (60b)
Similarly, we can determine the coefficients for the angular solutions from
(19), (23), (32) and (35) as follows.
From (4b), after making a similar variable change, one obtains an equation
similar to (4a),
$\displaystyle\mathcal{X}^{\prime\prime}(x)+\frac{\mathcal{P}^{\prime}(x)}{\mathcal{P}(x)}\mathcal{X}^{\prime}(x)+\frac{1}{\mathcal{P}(x)}\bigg{[}\frac{-m^{2}}{\mathcal{P}(x)}+\frac{1}{6}\mathcal{P}^{\prime\prime}(x)+\lambda\bigg{]}\mathcal{X}(x)=0.$
(61)
Let $w_{*}$ denote the tortoise coordinate for the angular equation, which is
defined as,
$\displaystyle
dw_{*}:=\frac{dx}{\mathcal{P}(x)};\;\;w_{*}\sim\frac{\ln|x-x_{j}|}{\mathcal{P}^{\prime}(x_{j})}+d^{s}_{j}\;\;(x\rightarrow x_{j}),$
(62)
where $d^{s}_{j}$ are constant coefficients defined by
$d^{s}_{j}:=\sum_{k\neq
j}\frac{\ln|x_{j}-x_{k}|}{\mathcal{P}^{\prime}(x_{k})},\;\quad j,k=1,2,3,4.$
(63)
Solving (62) gives,
$\displaystyle x-x_{j}\sim
e^{-d^{s}_{j}\mathcal{P}^{\prime}(x_{j})}e^{\mathcal{P}^{\prime}(x_{j})w_{*}}.$
(64)
In terms of $w_{*}$, equation (61) takes a form analogous to (39),
$\displaystyle\frac{d^{2}\mathcal{X}(x)}{dw^{2}_{*}}+\big{[}-m^{2}-V_{\textrm{eff}}(x)\big{]}\mathcal{X}(x)=0,$
(65)
where
$V_{\textrm{eff}}(x)=-\mathcal{P}(x)\big{(}\frac{1}{6}\mathcal{P}^{\prime\prime}(x)+\lambda\big{)}$.
In the asymptotic limit $x\rightarrow x_{j}$, we have
$\displaystyle\frac{d^{2}\mathcal{X}(x)}{dw^{2}_{*}}-m^{2}\mathcal{X}(x)=0.$
(66)
Its boundary conditions are
$\displaystyle\mathcal{X}(x)\sim e^{\pm mw_{*}}.$ (67)
In terms of the parameters of the GHE, these become
$\displaystyle\mathcal{X}(x)\sim(x-x_{j})^{\frac{1}{2}+{\cal
C}_{j}},\;(x-x_{j})^{\frac{3}{2}-{\cal C}_{j}},\;\;j=1,2.$ (68)
In more detail, around $w=0$ ($x\rightarrow x_{+}$), it holds
$\displaystyle\ \mathcal{X}_{01}(x\rightarrow x_{+})$ $\displaystyle\sim$
$\displaystyle(x-x_{+})^{(1\pm\tilde{B}^{s}_{1})},$ (69a) $\displaystyle\
\mathcal{X}_{02}(x\rightarrow x_{+})$ $\displaystyle\sim$
$\displaystyle(x-x_{+})^{(1\mp\tilde{B}^{s}_{1})}.$ (69b)
In terms of $w_{*}$, we have
$\displaystyle\ \mathcal{X}_{01}(w_{*}\rightarrow+\infty)$ $\displaystyle\sim$
$\displaystyle A^{(\pm)s}_{01}e^{(\mathcal{P}^{\prime}(x_{+})\pm m)w_{*}},$
(70a) $\displaystyle\ \mathcal{X}_{02}(w_{*}\rightarrow+\infty)$
$\displaystyle\sim$ $\displaystyle
A^{(\mp)s}_{02}e^{(\mathcal{P}^{\prime}(x_{+})\mp m)w_{*}},$ (70b)
with
$\mathcal{P}^{\prime}(x_{+})=-2(1+2\tilde{\alpha}M+\tilde{\alpha}^{2}Q^{2})$
and
$\displaystyle\ A^{(\pm)s}_{01}$ $\displaystyle=$
$\displaystyle(-1)^{{\cal C}_{2}}(-w_{s})^{{\cal C}_{3}}(-w^{s}_{\infty})^{-1}\sqrt{\tilde{\alpha}^{2}Q^{2}\prod_{k\neq
1}(x_{+}-x_{k})}~{}e^{-d^{s}_{1}\mathcal{P}^{\prime}(x_{+})(1\pm\tilde{B}^{s}_{1})},$
(71a) $\displaystyle\ A^{(\mp)s}_{02}$ $\displaystyle=$
$\displaystyle(-1)^{{\cal C}_{2}}(-w_{s})^{{\cal C}_{3}}(-w^{s}_{\infty})^{-1}\sqrt{\tilde{\alpha}^{2}Q^{2}\prod_{k\neq
1}(x_{+}-x_{k})}~{}e^{-d^{s}_{1}\mathcal{P}^{\prime}(x_{+})(1\mp\tilde{B}^{s}_{1})}.$
(71b)
Around $w=1$ ($x\rightarrow x^{\prime}_{+}$), it holds
$\displaystyle\ \mathcal{X}_{11}(x\rightarrow x^{\prime}_{+})$
$\displaystyle\sim$
$\displaystyle(x-x^{\prime}_{+})^{(1\pm\tilde{B}^{s}_{2})},$ (72a)
$\displaystyle\ \mathcal{X}_{12}(x\rightarrow x^{\prime}_{+})$
$\displaystyle\sim$
$\displaystyle(x-x^{\prime}_{+})^{(1\mp\tilde{B}^{s}_{2})}.$ (72b)
In terms of $w_{*}$, they are
$\displaystyle\ \mathcal{X}_{11}(w_{*}\rightarrow-\infty)$ $\displaystyle\sim$
$\displaystyle A^{(\pm)s}_{11}e^{(\mathcal{P}^{\prime}(x^{\prime}_{+})\pm
m)w_{*}},$ (73a) $\displaystyle\ \mathcal{X}_{12}(w_{*}\rightarrow-\infty)$
$\displaystyle\sim$ $\displaystyle
A^{(\mp)s}_{12}e^{(\mathcal{P}^{\prime}(x^{\prime}_{+})\mp m)w_{*}},$ (73b)
with
$\mathcal{P}^{\prime}(x^{\prime}_{+})=2(1-2\tilde{\alpha}M+\tilde{\alpha}^{2}Q^{2})$
and
$\displaystyle\ A^{(\pm)s}_{11}$ $\displaystyle=$
$\displaystyle(1)^{{\cal C}_{1}}(1-w_{s})^{{\cal C}_{3}}(1-w^{s}_{\infty})^{-1}\sqrt{\tilde{\alpha}^{2}Q^{2}\prod_{k\neq
2}(x^{\prime}_{+}-x_{k})}~{}e^{-d^{s}_{2}\mathcal{P}^{\prime}(x^{\prime}_{+})(1\pm\tilde{B}^{s}_{2})},$
(74a) $\displaystyle\ A^{(\mp)s}_{12}$ $\displaystyle=$
$\displaystyle(1)^{{\cal C}_{1}}(1-w_{s})^{{\cal C}_{3}}(1-w^{s}_{\infty})^{-1}\sqrt{\tilde{\alpha}^{2}Q^{2}\prod_{k\neq
2}(x^{\prime}_{+}-x_{k})}~{}e^{-d^{s}_{2}\mathcal{P}^{\prime}(x^{\prime}_{+})(1\mp\tilde{B}^{s}_{2})}.$
(74b)
Comparing (70) and (73) with (67), we see that the exact results differ from
the asymptotic solutions. As will be seen later, the same is true for our
numerical results.
### 4.2 Applications to QNMs
When a wave propagates near or crosses the horizon of a black hole, the
extreme gravitational time dilation causes a significant alteration in the
observed frequency. This effect usually manifests as a discrete set of complex
frequencies, the so-called QNMs, which characterize the decay or growth of the
wave. The real part of the QNM describes the damped oscillation, and the
imaginary part the exponential decay, provided it is negative, while a
positive imaginary part indicates a dynamical instability [10].
Quasinormal modes can generally be understood as an eigenvalue problem of the
wave equation in the black hole background under the following boundary
conditions, characterized by the frequencies $\omega$ [13, 41, 42, 43]. More
precisely, there are only ingoing waves at the event horizon, while the
ingoing modes are discarded at the acceleration horizon because they are
unobservable beyond this horizon.
In view of (55), the outgoing and ingoing wave modes at the event horizon are
(provided that $\textrm{Re}[J(r_{+})]>0$),
$\displaystyle\ \textrm{outgoing}:\;\;R_{01}(r\rightarrow
r_{+})\sim(r-r_{+})^{-\tilde{B}_{1}},$ (75a) $\displaystyle\
\textrm{ingoing}:\;\;R_{02}(r\rightarrow
r_{+})\sim(r-r_{+})^{+\tilde{B}_{1}},$ (75b)
or from (56),
$\displaystyle\ \textrm{outgoing}:\;\;R_{01}(r_{*}\rightarrow-\infty)$
$\displaystyle\sim$ $\displaystyle A_{01}^{(+)}e^{-iJ(r_{+})r_{*}},$ (76a)
$\displaystyle\ \textrm{ingoing}:\;\;R_{02}(r_{*}\rightarrow-\infty)$
$\displaystyle\sim$ $\displaystyle A_{02}^{(-)}e^{+iJ(r_{+})r_{*}},$ (76b)
since $r_{*}\rightarrow-\infty$ at $r\rightarrow r_{+}$.
Similarly, from equation (58), we have outgoing and ingoing wave modes
(provided that $\textrm{Re}[J(r^{\prime}_{+})]<0$),
$\displaystyle\ \textrm{outgoing}:\;\;R_{11}(r\rightarrow
r^{\prime}_{+})\sim(r-r^{\prime}_{+})^{-\tilde{B}_{2}},$ (77a) $\displaystyle\
\textrm{ingoing}:\;\;R_{12}(r\rightarrow
r^{\prime}_{+})\sim(r-r^{\prime}_{+})^{+\tilde{B}_{2}},$ (77b)
or from (59),
$\displaystyle\ \textrm{outgoing}:\;\;R_{11}(r_{*}\rightarrow+\infty)$
$\displaystyle\sim$ $\displaystyle A_{11}^{(+)}e^{-iJ(r^{\prime}_{+})r_{*}},$
(78a) $\displaystyle\ \textrm{ingoing}:\;\;R_{12}(r_{*}\rightarrow+\infty)$
$\displaystyle\sim$ $\displaystyle A_{12}^{(-)}e^{+iJ(r^{\prime}_{+})r_{*}},$
(78b)
since $r_{*}\rightarrow+\infty$ at $r\rightarrow r^{\prime}_{+}$. It follows
that the wave functions under the boundary conditions to determine the QNMs
are
$\displaystyle R(r)\sim\begin{cases}R_{02}(r), & r\rightarrow r_{+},\\ R_{11}(r), & r\rightarrow r^{\prime}_{+}.\end{cases}$ (82)
Based on (82), we can use the standard matching procedure to derive the QNMs
[34, 31, 35]. The logic of this method is similar to that for studying
quasibound modes (QBMs): Solving the radial GHE in two different asymptotic
regions and matching the solutions in their overlapped region (see, e.g. [18,
40] and references therein).
To apply this procedure, we should determine the solutions of the angular GHE
and their regularity conditions. It can be shown that, to ensure regularity,
we need to impose $|m|<2$. From (69) at $x\rightarrow x_{+}$
($w_{*}\rightarrow+\infty$), we have
$\displaystyle\mathcal{X}_{01}(x\rightarrow x_{+})\sim A^{(-)s}_{01}(x-x_{+})^{1-\tilde{\cal B}_{1}},\quad|m|<2,$ (83a)
$\displaystyle\mathcal{X}_{02}(x\rightarrow x_{+})\sim A^{(-)s}_{02}(x-x_{+})^{1+\tilde{\cal B}_{1}},\quad|m|<2.$ (83b)
Similarly, from (72) at $x\rightarrow x^{\prime}_{+}$
($w_{*}\rightarrow-\infty$), we get
$\displaystyle\mathcal{X}_{11}(x\rightarrow x^{\prime}_{+})\sim A^{(-)s}_{11}(x-x^{\prime}_{+})^{1-\tilde{\cal B}_{2}},\quad|m|<2,$ (84a)
$\displaystyle\mathcal{X}_{12}(x\rightarrow x^{\prime}_{+})\sim A^{(-)s}_{12}(x-x^{\prime}_{+})^{1+\tilde{\cal B}_{2}},\quad|m|<2.$ (84b)
### 4.3 Applications to superradiance
Superradiance denotes the interaction between the wave functions and the black
hole’s rotation. An incident wave from the acceleration horizon, scattering
off the BH, will be partially reflected back toward the acceleration horizon
and partially transmitted through the potential barrier and into the event
horizon.
The boundary conditions for superradiance contain the ingoing wave modes at
the event horizon and the transmitting wave modes (outgoing and ingoing)
across the acceleration horizon [31]. From (76) and (78),
$\displaystyle R(r)\sim\begin{cases}R_{02}(r), & r\rightarrow r_{+},\\ R_{11}(r)+R_{12}(r), & r\rightarrow r^{\prime}_{+}.\end{cases}$ (88)
To obtain the reflection and transmission amplitudes, we write
$\displaystyle R(r)\sim\begin{cases}\mathcal{T}e^{+iJ(r_{+})r_{*}}, & r\rightarrow r_{+},\\ \mathcal{I}e^{+iJ(r^{\prime}_{+})r_{*}}+\mathcal{R}e^{-iJ(r^{\prime}_{+})r_{*}}, & r\rightarrow r^{\prime}_{+},\end{cases}$ (92)
where $\mathcal{T}$ is the amplitude of the wave transmitted into the event
horizon, $\mathcal{I}$ that of the wave incident at the acceleration horizon,
and $\mathcal{R}$ that of the wave reflected away from the acceleration
horizon. The Wronskians of these waves with their complex conjugates, which
are linearly independent solutions, are
$\displaystyle W[\mathcal{T}e^{+iJ(r_{+})r_{*}},\mathcal{T}^{*}e^{-iJ(r_{+})r_{*}}]=|\mathcal{T}|^{2}[-2iJ(r_{+})],$ (93a)
$\displaystyle W[\mathcal{I}e^{+iJ(r^{\prime}_{+})r_{*}},\mathcal{I}^{*}e^{-iJ(r^{\prime}_{+})r_{*}}]=|\mathcal{I}|^{2}[-2iJ(r^{\prime}_{+})],$ (93b)
$\displaystyle W[\mathcal{R}e^{-iJ(r^{\prime}_{+})r_{*}},\mathcal{R}^{*}e^{+iJ(r^{\prime}_{+})r_{*}}]=|\mathcal{R}|^{2}[+2iJ(r^{\prime}_{+})].$ (93c)
They must coincide at both boundaries, i.e.
$\displaystyle|\mathcal{T}|^{2}[-2iJ(r_{+})]=|\mathcal{I}|^{2}[-2iJ(r^{\prime}_{+})]+|\mathcal{R}|^{2}[2iJ(r^{\prime}_{+})].$
(94)
This yields
$\displaystyle|\mathcal{R}|^{2}=|\mathcal{I}|^{2}-\frac{J(r_{+})}{J(r^{\prime}_{+})}|\mathcal{T}|^{2}.$
(95)
Provided that $\frac{J(r_{+})}{J(r^{\prime}_{+})}<0$, the reflection amplitude
will be larger than the incident amplitude, which is the signature of
superradiance. This agrees with the result in [31]. However, we still need to
fix the parameters in order to calculate the exact superradiant
amplifications. This can be achieved using the exact solutions.
## 5 Exact GHE and applications to quasinormal modes and superradiance
In the previous section (Sec. 4.2), we studied the asymptotic behaviors of the
radial wave functions in the respective convergence regions around $z=0$ and
$z=1$, and in their overlapping convergence region by matching the solutions.
In this section, we explore the exact connection-coefficient formalism
proposed in [28], which provides an effective approach to many of the
scattering-related problems. Specifically, the connection coefficients enable
us to establish an overlapped convergence disk, in which we can determine the
exact behaviors of the radial solutions at an arbitrary point in the disk
without analytic continuation.
Let $C_{11},C_{12},C_{21},C_{22}$ denote the connection coefficients
connecting the two exact solutions in (27) (of convergence at $z=0$) and (34)
(which is convergent at $z=1$). Then, by definition, in the overlapped
convergence region, the two solutions are related by
$\displaystyle y_{01}(z)=C_{11}y_{11}(z)+C_{12}y_{12}(z),$ (96a)
$\displaystyle y_{02}(z)=C_{21}y_{11}(z)+C_{22}y_{12}(z).$ (96b)
Then, we can obtain the connection coefficients around $z=0$,
$\displaystyle C_{11}=\frac{W[y_{01},y_{12}]}{W[y_{11},y_{12}]},\qquad C_{12}=\frac{W[y_{01},y_{11}]}{W[y_{12},y_{11}]},$ (97a)
$\displaystyle C_{21}=\frac{W[y_{02},y_{12}]}{W[y_{11},y_{12}]},\qquad C_{22}=\frac{W[y_{02},y_{11}]}{W[y_{12},y_{11}]},$ (97b)
where $W$ is the Wronskian,
$W[f_{1}(z),f_{2}(z)]=\begin{vmatrix}f_{1}(z)&f_{2}(z)\\ f^{\prime}_{1}(z)&f^{\prime}_{2}(z)\end{vmatrix}.$ (98)
Around $z=1$ [28, 29], they are given by
$\displaystyle D_{11}=\frac{W[y_{11},y_{02}]}{W[y_{01},y_{02}]},\qquad D_{12}=\frac{W[y_{11},y_{01}]}{W[y_{02},y_{01}]},$ (99a)
$\displaystyle D_{21}=\frac{W[y_{12},y_{02}]}{W[y_{01},y_{02}]},\qquad D_{22}=\frac{W[y_{12},y_{01}]}{W[y_{02},y_{01}]}.$ (99b)
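In practice, (97) and (99) can be evaluated numerically: integrate the GHE from the neighbourhood of each singular point to a common matching point in the overlapped disk and take the Wronskians there. Below is a minimal Python sketch of this procedure for the GHE in the form (15); the Heun parameters and the Frobenius initial data are illustrative placeholders, not the values derived in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative GHE parameters; the actual gamma, delta, epsilon,
# alpha*beta, q and z_r of the text would be substituted here.
GAMMA, DELTA, EPS, AB, Q_ACC, Z_R = 1.0, 1.0, 1.0, 0.75, 0.3, 2.0
Z_MATCH = 0.5  # a point inside the overlapped convergence region

def heun_system(z, u):
    # GHE (15) rewritten as a first-order system for (y, y').
    y, dy = u
    p = GAMMA / z + DELTA / (z - 1) + EPS / (z - Z_R)
    r = (AB * z - Q_ACC) / (z * (z - 1) * (z - Z_R))
    return [dy, -p * dy - r * y]

def local_solution(z0, y0, dy0):
    # Carry a local solution from near a singular point to Z_MATCH.
    # The initial data are generic stand-ins for the two Frobenius
    # solutions; in a real computation they come from the local expansions.
    sol = solve_ivp(heun_system, (z0, Z_MATCH), [y0, dy0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

def W(f, g):
    # Wronskian (98), evaluated pointwise at Z_MATCH.
    return f[0] * g[1] - f[1] * g[0]

y01 = local_solution(1e-3, 1.0, 0.0)      # solutions launched near z = 0
y02 = local_solution(1e-3, 0.0, 1.0)
y11 = local_solution(1 - 1e-3, 1.0, 0.0)  # solutions launched near z = 1
y12 = local_solution(1 - 1e-3, 0.0, 1.0)

# Connection coefficients (97):
C11 = W(y01, y12) / W(y11, y12)
C12 = W(y01, y11) / W(y12, y11)
C21 = W(y02, y12) / W(y11, y12)
C22 = W(y02, y11) / W(y12, y11)
print(C11, C12, C21, C22)
```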
Consider the asymptotic behaviors of the wave functions in terms of the
connection coefficients. Within the context of conformal diagrams [44], we can
distinguish the wave functions in the four parts with different boundary
conditions as follows [28, 45].
* •
$\mathcal{R}_{\textrm{in}}(r)$: around $z=0$ or $r=r_{+}$, with a reference
horizon between $z=0$ and $z=1$, no outgoing waves from $z=0$;
Corresponding to $r_{*}\rightarrow-\infty$, only ingoing modes should be
present for the event horizon.
* •
$\mathcal{R}_{\textrm{up}}(r)$: around $z=1$ or $r=r^{\prime}_{+}$, with a
reference horizon (or potential barrier) between $z=0$ and $z=1$, no ingoing
waves from $z=1$;
Corresponding to $r_{*}\rightarrow+\infty$, only outgoing modes should be
present for the acceleration horizon.
* •
$\mathcal{R}_{\textrm{down}}(r)=\mathcal{R}^{*}_{\textrm{up}}(r)$.
* •
$\mathcal{R}_{\textrm{out}}(r)=\mathcal{R}^{*}_{\textrm{in}}(r)$.
Their relations with the GHE solutions are [28, 45, 29]
$\displaystyle\mathcal{R}_{\textrm{in}}(r)=\begin{cases}R_{02}(r), & r\rightarrow r_{+},\\ C_{21}R_{11}(r)+C_{22}R_{12}(r), & r\rightarrow r^{\prime}_{+},\end{cases}$ (103)
$\displaystyle\mathcal{R}_{\textrm{up}}(r)=\begin{cases}D_{11}R_{01}(r)+D_{12}R_{02}(r), & r\rightarrow r_{+},\\ R_{11}(r), & r\rightarrow r^{\prime}_{+},\end{cases}$ (107)
where
$R_{Ii}=\Delta^{-\frac{1}{2}}(r)\,z^{\frac{\gamma}{2}}(z-1)^{\frac{\delta}{2}}(z-z_{r})^{\frac{\epsilon}{2}}(z-z_{\infty})^{-1}y_{Ii}(z),$ (108)
with $I=0,1$ and $i=1,2$. Similar relations can be found for the angular wave
solutions $\mathcal{X}^{s}_{Ii}$.
Comparing (103) with (82), it can be seen that in terms of the connection
coefficients, the QNMs are obtained from (103) by setting $C_{22}=0$, or
equivalently from (107) by requiring $D_{11}=0$. For the superradiance, in
view of (88), we find that in terms of the connection coefficient formalism,
one should look at equation (103). Comparing with (95), we obtain
$\displaystyle|C_{21}|^{2}|A_{11}|^{2}=|C_{22}|^{2}|A_{12}|^{2}-\frac{J(r_{+})}{J(r^{\prime}_{+})}|A_{02}|^{2}.$
(109)
One defines the amplification factor as [8, 31]
$\displaystyle Z_{0m}=\frac{|\mathcal{R}|^{2}}{|\mathcal{I}|^{2}}-1,$ (110)
where “0” denotes the scalar field with spin weight $s=0$. Then, the
amplification factor for the superradiance is [28, 29]
$\displaystyle Z_{0m}=-\frac{J(r_{+})}{J(r^{\prime}_{+})}\bigg{|}\frac{\prod_{k\neq 2}(r^{\prime}_{+}-r_{k})}{\prod_{k\neq 1}(r_{+}-r_{k})}\bigg{|}\bigg{(}\frac{1-z_{\infty}}{z_{\infty}}\bigg{)}^{2}\frac{1}{|C_{22}|^{2}},$ (111)
which can be simplified to
$\displaystyle Z_{0m}=-\frac{\tilde{B}_{1}}{\tilde{B}_{2}}\bigg{(}\frac{1-z_{\infty}}{z_{\infty}}\bigg{)}^{2}\frac{1}{|C_{22}|^{2}}.$ (112)
Here we have used $\prod_{k\neq j}(r_{j}-r_{k})=\Delta^{\prime}(r_{j})$ and
$\tilde{B}_{j}:=i\frac{J(r_{j})}{\Delta^{\prime}(r_{j})}$, with $j,k=1,2,3,4$.
We can then see that superradiance exists in (111), provided that
$\frac{J(r_{+})}{J(r^{\prime}_{+})}<0$. This agrees with the result in [28].
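As a small illustration of (112), the snippet below evaluates the amplification factor for placeholder inputs; in an actual computation, $\tilde{B}_{1}$, $\tilde{B}_{2}$, $z_{\infty}$ and $C_{22}$ must be taken from the exact solutions above.

```python
def amplification_factor(B1_tilde, B2_tilde, z_inf, C22):
    # Z_{0m} of (112); the inputs are in general complex and must be
    # obtained from the exact solutions -- the call below is a placeholder.
    return (-(B1_tilde / B2_tilde)
            * ((1 - z_inf) / z_inf) ** 2
            / abs(C22) ** 2)

# Superradiant amplification (Z_{0m} > 0) requires J(r_+)/J(r'_+) < 0,
# which here corresponds to a negative ratio B1_tilde/B2_tilde.
print(amplification_factor(0.2 + 0.0j, -0.5 + 0.0j, 0.8, 0.1 + 0.3j))
```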
## 6 Numerical results for QNMs
In this section, we carry out numerical computations of the QNMs. In [34, 35,
31], the program package QNMspectral developed in [10] was applied to perform
numerical simulations (see also [13]). We will instead use the Mathematica
package designed to solve Heun equations, following the procedure of [30].
Since the angular Heun equation is independent of $\omega$, we can first
determine $\lambda$ and then use the obtained $\lambda$ values to determine
$\omega$.
By the regularity considerations of Section 4.2, there are four cases to
analyse for the solutions of the angular Heun equation in the computation of
$\lambda$:
* •
$1-\tilde{\cal B}_{1}\geq 0$ and $1-\tilde{\cal B}_{2}\geq 0$, which
corresponds to
$\displaystyle W[y^{s}_{01},y^{s}_{11}]=0.$ (113)
* •
$1-\tilde{\cal B}_{1}\geq 0$ and $1+\tilde{\cal B}_{2}\geq 0$, which
corresponds to
$\displaystyle W[y^{s}_{01},y^{s}_{12}]=0.$ (114)
* •
$1+\tilde{\cal B}_{1}\geq 0$ and $1-\tilde{\cal B}_{2}\geq 0$, which
corresponds to
$\displaystyle W[y^{s}_{02},y^{s}_{11}]=0.$ (115)
* •
$1+\tilde{\cal B}_{1}\geq 0$ and $1+\tilde{\cal B}_{2}\geq 0$, which
corresponds to
$\displaystyle W[y^{s}_{02},y^{s}_{12}]=0.$ (116)
Each case can yield values of $\lambda$. In this work, we choose the first
branch of solutions to discuss the numerical results.
For the determination of $\omega$ from the radial part, we choose the
connection coefficient $C_{22}=0$, that is
$\displaystyle\frac{W[y_{02},y_{11}]}{W[y_{12},y_{11}]}=0.$ (117)
These conditions enable us to compute the separation constant $\lambda$ and,
subsequently, the frequency $\omega$.
In the following numerical outputs, we show the results for $\lambda$ and
$\omega$ that lead to convergent local Heun functions HeunG$[\,\ldots,z]$
in the simulation.
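Schematically, this two-stage determination can be organised as a shooting computation: scan $\lambda$ until the angular Wronskian condition (113) changes sign, then repeat in $\omega$ for the radial condition (117). The Python sketch below, mirroring the integrator in the Section 5 sketch, uses a placeholder dependence of the accessory parameter on $\lambda$ and real arithmetic; the true coefficient dependence, and a complex root finder for $\omega$, would replace these.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Z_MATCH = 0.5  # matching point inside the overlapped convergence disk

def heun_system(z, u, gamma, delta, eps, z_r, ab, q):
    # GHE written as a first-order system for (y, y').
    y, dy = u
    p = gamma / z + delta / (z - 1) + eps / (z - z_r)
    r = (ab * z - q) / (z * (z - 1) * (z - z_r))
    return [dy, -p * dy - r * y]

def shoot(z0, params):
    # Launch a solution near a singular point and carry it to Z_MATCH;
    # the generic initial data stand in for the regular Frobenius solution.
    sol = solve_ivp(heun_system, (z0, Z_MATCH), [1.0, 0.0],
                    args=params, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

def angular_condition(lam):
    # Quantisation condition (113): W[y01^s, y11^s] = 0 at Z_MATCH.
    # The linear dependence of the accessory parameter on lambda is a
    # placeholder for the true dependence derived in the text.
    params = (1.0, 1.0, 1.0, 2.0, 0.75, 0.1 * lam)
    f = shoot(1e-3, params)      # regular solution launched near z = 0
    g = shoot(1 - 1e-3, params)  # regular solution launched near z = 1
    return f[0] * g[1] - f[1] * g[0]

# Bracket sign changes of the Wronskian on a grid, then polish with brentq.
grid = np.linspace(0.5, 50.0, 400)
vals = [angular_condition(l) for l in grid]
roots = [brentq(angular_condition, a, b)
         for a, b, va, vb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if va * vb < 0]
print("candidate separation constants lambda:", roots)
```

The radial step proceeds identically with (117) in place of (113), except that $\omega$ is complex, so the bracketing step is replaced by a complex root search (e.g. a Newton or Muller iteration on the Wronskian).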
### 6.1 Acceleration modes
Acceleration modes are purely imaginary modes when $qQ=0$ and $\tilde{\alpha}$
is not too large. For $qQ=0$, $z=\frac{5}{10}$, $m=0$, and
$(M,Q,\tilde{\alpha},q)=(1,\frac{1}{2},\tilde{\alpha},0)$ with
$\tilde{\alpha}=0.1,0.2,0.3$, we find a series of outputs for $\lambda$.
Corresponding to each $\lambda$ value, we find a series of outputs for
$\omega$ (up to $\mathcal{O}(10^{-5})$). The results are shown in Table 1.
$\lambda$ | $\omega$
---|---
$\lambda=13.09782$ | $\omega=\pm 0.03505i;\,\pm 0.12773i;\,\pm 0.27955i$
$\lambda=92.00843$ | $\omega=\pm 0.19681i;\,\pm 0.30231i;\,\pm 0.40124i$
$\lambda=249.78348$ | $\omega=\pm 0.06997i;\,\pm 0.16049i;\,\pm 0.34988i$
$\vdots$ | $\vdots$
Table 1: $\omega$ at $\tilde{\alpha}=0.1$ for different $\lambda$
Similarly, for $\tilde{\alpha}=0.2$ and $0.3$, we find a series of numerical
values for $\lambda$, respectively; see Table 2 for these results.
$\tilde{\alpha}$ | $\lambda$
---|---
$0.2$ | $\lambda\sim 6;\,44;\,120;\,\ldots$
Table 2: $\lambda$ outputs for $\tilde{\alpha}=0.2$.
In Tables 3 and 4, we present the $\omega$ values corresponding to the
specific $\lambda$ values obtained.
$\lambda$ | $\omega$
---|---
$\lambda=6.18154$ | $\omega=\pm 0.08599i;\,\pm 0.25704i;\,\pm 0.71084i$
Table 3: Outputs of $\omega$ at $\tilde{\alpha}=0.2$ for specific $\lambda$
$\lambda$ | $\omega$
---|---
$\lambda=3.67468$ | $\omega=\pm 0.12674i;\,\pm 0.32354i;\,\pm 0.73738i$
Table 4: Outputs of $\omega$ at $\tilde{\alpha}=0.3$ for specific $\lambda$
From the results above, we see that there are indeed $\omega$ values whose
norms are roughly equal to the corresponding values of $\tilde{\alpha}$; see
Table 5 below.
$\tilde{\alpha}$ | $\omega$
---|---
$0.1$ | $\omega=\pm 0.12773i$
$0.2$ | $\omega=\pm 0.25704i$
$0.3$ | $\omega=\pm 0.32354i$
Table 5: Outputs of $\omega$ for different $\tilde{\alpha}$
However, there are various other values of $\omega$ for each $\tilde{\alpha}$.
Moreover, we have no reason to rule out unstable modes.
We now investigate the effect of $qQ$ on $\omega$.
$q$ | $\omega$
---|---
$0$ | $\omega=-2.45640\times 10^{-26}-0.12773i$
$0.001$ | $\omega=5\times 10^{-7}-0.14147i$
$0.01$ | $\omega=5\times 10^{-6}-0.14368i$
$0.1$ | $\omega=5\times 10^{-5}-0.14465i$
$0.2$ | $\omega=1\times 10^{-4}-0.14508i$
$0.3$ | $\omega=1.5\times 10^{-4}-0.14523i$
$\vdots$ | $\vdots$
$2.0$ | $\omega=2.00975\times 10^{-3}-0.15511i$
Table 6: Varying $q$ for negative branch of $\tilde{\alpha}=0.1$
As can be seen from Table 6, the acceleration modes acquire real parts. These
real parts are proportional to the value of $qQ$ but remain subdominant
compared with the imaginary parts. Moreover, the increase of $qQ$ leads to an
increase of the decay rate, i.e. a decrease of the lifetime, as seen from
Fig. 1.
Figure 1: The green dots denote the modes varying with the increase of $q$
from 0 to 2 (i.e. $qQ$ from 0 to 1). The orange dots denote the other two
acceleration modes.
However, we did not observe any turning points for the real parts, in contrast
to Fig. 2 in [31]. For the $m=1,2,-1,-2$ cases, we find the results given in
Table 7. (In our tables, the symbol “$--$” means that we did not find a
converging solution to the angular/radial Heun equation.)
$m$, $\lambda$ | $\omega$
---|---
$m=2,\lambda=--$ | $--$
$m=1,\lambda=3.20225$ | $\omega=\pm 0.04197i;\pm 0.14467i;\pm 0.32099i$
$m=0,\lambda=13.09782$ | $\omega=\pm 0.03505i;\pm 0.12773i;\pm 0.27955i$
$m=-1,\lambda=20.24490$ | $\omega=\pm 0.09563i;\pm 0.14926i;\pm 0.36783i$
$m=-2,\lambda=26.01624$ | $\omega=\pm 0.25469i;\pm 0.34846i$
Table 7: Outputs of $\omega$ at $\tilde{\alpha}=0.1$ and $q=0$ for varying $m$
### 6.2 Near extremal modes
Near-extremal (NE) modes are also purely imaginary in the limit $Q\rightarrow
M$. In the following, we take $\frac{Q}{M}=0.999$ to study this limit.
For $z=\frac{5}{10}$, $(l,m)=(0,0)$, and
$(M,Q,\tilde{\alpha},q)=(1,\frac{999}{1000},\tilde{\alpha},0)$ with
$\tilde{\alpha}=0.1,0.3,0.5$, we find the results given in Table 8.
$\tilde{\alpha}$ | $\omega$ | $\lambda$
---|---|---
$0.1$ | $--$ | $\lambda=3.30654$
$0.3$ | $\omega=-0.14694i;\,-2.05724i;\,-3.08586i$ | $\lambda=1.01289$
$0.5$ | $\omega=-0.12475i;\,-0.37425i;\pm 1.49700i;\,\pm 2.49500i$ | $\lambda=0.50044$
Table 8: $\omega$ at $Q=0.999$ and $q=0$ for varying $\tilde{\alpha}$
For $\tilde{\alpha}\leq 0.1$, we did not find any $\omega$ that makes the Heun
function converge. That is, for a small acceleration parameter, there is no NE
mode.
Regarding the effect of $qQ$ on $\omega$, it turns out that there is no
standard pattern like that in Fig. 3 of [31]. In the NE limit, the simulation
is too sensitive to the $qQ$ parameter to yield convergent solutions.
### 6.3 Photon sphere modes
Photon sphere (PS) modes are oscillatory QNMs. It is clear from (5b) that when
$qQ=0$, the PS modes are symmetric and this symmetry is broken when $qQ\neq
0$.
We first investigate the $qQ=0$ case. For $z=\frac{5}{10}$, $(l,m)=(0,0)$, and
$(M,Q,\tilde{\alpha},q)=(1,\frac{1}{2},\tilde{\alpha},0)$ with
$\tilde{\alpha}=0.1,0.2,0.3$, we have the results given in Table 9.
$\tilde{\alpha}$ | $\omega$
---|---
$0.1$ | $\omega=5.19700-9.05388i;-5.40843-8.53097i$
$0.2$ | $--$; $\omega=-0.35652-1.20282i$
$0.3$ | $\omega=-0.12674i$
Table 9: Outputs of $\omega$ at $\tilde{\alpha}=0.1,0.2,0.3$
With the increase of $\tilde{\alpha}$, one observes that the oscillation
frequencies decrease while the lifetimes increase markedly. This is generally
consistent with the trend on the r.h.s. of Fig. 1 in [31].
For the effect of $qQ$ on $\omega$, we show the numerical results for
$\tilde{\alpha}=0.1$ in Table 10.
$q$ | $\omega$
---|---
$0$ | $\omega=5.19700-9.05388i;-5.40843-8.53097i$
$0.1$ | $\omega=5.18795-9.01970i;-5.40469-8.53595i$
$0.2$ | $\omega=5.15238-9.08646i;-5.40127-8.54160i$
$0.3$ | $\omega=5.21390-9.02358i;-5.39789-8.54748i$
$0.4$ | $\omega=5.20503-9.03020i;-5.39450-8.55351i$
$0.5$ | $\omega=5.21119-9.00279i;-5.39110-8.55962i$
$0.6$ | $\omega=5.16298-9.00165i;-5.38769-8.56580i$
$0.7$ | $\omega=5.19550-9.01807i;-5.38427-8.57203i$
$0.8$ | $\omega=5.16185-9.01152i;-5.38083-8.57831i$
$0.9$ | $\omega=5.19494-9.04580i;-5.37738-8.58463i$
$1.0$ | $\omega=5.31502-9.01849i;-5.37391-8.59098i$
$1.1$ | $\omega=5.23809-9.05574i;-5.37043-8.59735i$
$1.2$ | $\omega=5.20695-9.02102i;-5.36694-8.60376i$
$1.3$ | $\omega=5.20970-9.02110i;-5.36343-8.61019i$
$1.4$ | $\omega=5.20632-9.00793i;-5.35991-8.61665i$
$1.5$ | $\omega=5.20163-9.03112i;-5.35638-8.62313i$
$1.6$ | $\omega=5.20886-9.04105i;-5.35283-8.62963i$
$1.7$ | $\omega=5.23168-8.97363i;-5.34927-8.63615i$
$1.8$ | $\omega=5.29659-9.12748i;-5.34570-8.64269i$
$1.9$ | $\omega=5.51560-8.98556i;-5.34211-8.64924i$
$2.0$ | $\omega=5.17890-9.02047i;-5.33851-8.65582i$
Table 10: Varying $qQ\in(0,1)$ for PS modes at $\tilde{\alpha}=0.1$
Our numerical results demonstrate that there are acceleration modes, NE modes,
as well as PS modes. However, besides the stable modes, we do find unstable
QNMs, and there is no reason to rule them out. Thus, there is a possibility
that the $C$-metric BHs are unstable against linear massless charged scalar
perturbations. We leave a more detailed analysis of this question to the
future.
## 7 Conclusions and Discussions
We have shown that the radial and angular ODEs of the conformally invariant
wave equation for the massless charged KG equation in the charged $C$-metric
background can respectively be transformed into the general Heun equations. We
have investigated the asymptotic behaviors of the radial wave functions from
both the Heun solutions and the DRS approximations. The two approaches provide
the same wave functions asymptotically. We have applied these wave functions
to explore Hawking radiation, obtaining the scattering probability and energy
density. Furthermore, we have also provided the exact proportionality
coefficients in these wave functions. The wave solutions were then further
analysed under different physical boundary conditions, such as those for QNMs
and superradiance. We have also obtained the exact wave solutions in terms of
local Heun functions and connection coefficients. These enabled us to
determine QNMs and superradiance.
We have used a Mathematica code to solve for $\omega$ and $\lambda$ in the
radial and angular equations, arriving at numerical results for QNMs. We
underline that our approach is based on the Heun equations and proves
efficient in solving for the wave functions and determining the QNMs. It would
be interesting to develop this approach further for computing QNMs, superradiance
and other physical quantities. For backgrounds that can be treated exactly
thanks to, e.g., the reduction to Heun equation, our analysis provides an
effective alternative compared to other well-established numerical techniques,
such as the recurrence method and the DRS approximation. The methods presented
in this paper can be applied to study exact solutions and wave scattering of
test fields in other backgrounds. It is also of particular interest to
consider Dirac fields in a $C$-metric black hole. We hope to examine some of
the above problems and report their results elsewhere.
## Acknowledgments
M.Chen has been supported by the UQ Research Training Scholarship.
G.Tartaglino-Mazzucchelli has been supported by the Australian Research
Council (ARC) Future Fellowship FT180100353, ARC Discovery Project
DP240101409, and the Capacity Building Package of the University of
Queensland. Y.Z.Zhang has been partially supported by Australian Research
Council Discovery Project DP190101529.
## Appendix A Derivation of the radial Heun equation
In this appendix, we provide the computation details for deriving the radial
Heun equation of section 2.
Setting $\Psi(z)=(z-z_{\infty})^{-1}\Phi(z)$, we transform (2) to the form
$\displaystyle\Phi^{\prime\prime}(z)+\frac{(r_{+}-r_{-})^{2}z^{2}_{\infty}}{(z-z_{\infty})^{4}}\bigg{[}\frac{1}{4}\big{(}\frac{\Delta^{\prime}(r)}{\Delta(r)}\big{)}^{2}-\frac{1}{3}\frac{\Delta^{\prime\prime}(r)}{\Delta(r)}+\frac{\Omega(r)}{\Delta^{2}(r)}+\frac{\lambda}{\Delta(r)}\bigg{]}\Phi(z)=0.$
(118)
After re-expressing all the functions of $r$ in terms of the new variable $z$
defined in (11), we arrive at the following ODE with the singularity
$z=z_{\infty}$ removed,
$\displaystyle\Phi^{\prime\prime}(z)+\bigg{\\{}\bigg{[}\frac{A_{1}}{z}+\frac{A_{2}}{z-1}+\frac{A_{3}}{z-z_{r}}\bigg{]}+\bigg{[}\frac{B_{1}}{z^{2}}+\frac{B_{2}}{(z-1)^{2}}+\frac{B_{3}}{(z-z_{r})^{2}}\bigg{]}\bigg{\\}}\Phi(z)=0.$
(119)
Here, we have introduced the constants
$\displaystyle A_{1}=\frac{1}{3}\bigg[\frac{r^{\prime}_{+}-r_{+}}{r^{\prime}_{+}-r_{-}}+\frac{1}{2}\frac{r_{-}-r_{+}}{r_{-}-r^{\prime}_{+}}-\frac{1}{2}\frac{(r_{-}-r_{+})(r^{\prime}_{+}-r_{+})}{(r^{\prime}_{+}-r_{-})(r^{\prime}_{-}-r_{+})}\bigg]+\frac{1}{z^{2}_{r}}\bigg(2E+D+\frac{2E}{z_{r}}\bigg)-\frac{\lambda}{\alpha^{2}}\frac{1}{(r_{-}-r^{\prime}_{+})(r_{-}-r^{\prime}_{-})}\frac{z_{\infty}}{z_{r}},$ (120a)
$\displaystyle A_{2}=\frac{1}{3}\bigg[\frac{r^{\prime}_{+}-r_{+}}{r_{+}-r_{-}}-\frac{1}{2}\frac{r_{-}-r^{\prime}_{+}}{r_{-}-r_{+}}-\frac{1}{2}\frac{(r_{-}-r^{\prime}_{+})(r^{\prime}_{+}-r_{+})}{(r^{\prime}_{+}-r^{\prime}_{-})(r_{-}-r_{+})}\bigg]+\frac{1}{(z_{r}-1)^{2}}\bigg[2\frac{A+B+C+D+E}{z_{r}-1}+(2A+B)-(2E+D)\bigg]+\frac{\lambda}{\alpha^{2}}\frac{1}{(r_{-}-r^{\prime}_{+})(r_{-}-r^{\prime}_{-})}\frac{z_{\infty}}{z_{r}-1},$ (120b)
$\displaystyle A_{3}=\frac{1}{3}\frac{(r^{\prime}_{+}-r_{+})(r^{\prime}_{-}-r_{-})}{(r^{\prime}_{+}-r_{-})(r_{+}-r_{-})}+\frac{1}{6}\frac{(r_{-}-r^{\prime}_{-})^{2}(r^{\prime}_{+}-r_{+})}{(r_{-}-r_{+})(r^{\prime}_{+}-r_{-})(r^{\prime}_{-}-r_{+})}+\frac{1}{6}\frac{(r_{-}-r^{\prime}_{-})^{2}(r^{\prime}_{+}-r_{+})}{(r_{-}-r^{\prime}_{+})(r^{\prime}_{+}-r^{\prime}_{-})(r_{-}-r_{+})}-\frac{1}{z^{2}_{r}}\bigg[(2E+D)+\frac{2E}{z_{r}}\bigg]-\frac{1}{(z_{r}-1)^{2}}\bigg[2\frac{A+B+C+D+E}{z_{r}-1}+(2A+B)-(2E+D)\bigg]-\frac{\lambda}{\alpha^{2}}\frac{1}{(r_{-}-r^{\prime}_{+})(r_{-}-r^{\prime}_{-})}\frac{z_{\infty}}{z_{r}}\frac{1}{z_{r}-1},$ (120c)
with
$\displaystyle A=\frac{\Omega(r_{-})}{[\Delta^{\prime}(r_{-})]^{2}},$ (121a)
$\displaystyle B=\frac{-4\omega^{2}r_{-}^{3}r_{+}+2qQ\omega(3r_{-}+r_{+})-4q^{2}Q^{2}}{[\Delta^{\prime}(r_{-})]^{2}}z_{\infty},$ (121b)
$\displaystyle C=\frac{6[\omega^{2}r_{-}^{2}r^{2}_{+}-qQ\omega(r_{-}+r_{+})+q^{2}Q^{2}]}{[\Delta^{\prime}(r_{-})]^{2}}z^{2}_{\infty},$ (121c)
$\displaystyle D=\frac{-4\omega^{2}r_{-}r^{3}_{+}+2qQ\omega(r_{-}+3r_{+})-4q^{2}Q^{2}}{[\Delta^{\prime}(r_{-})]^{2}}z^{3}_{\infty},$ (121d)
$\displaystyle E=\frac{\Omega(r_{+})}{[\Delta^{\prime}(r_{-})]^{2}}z^{4}_{\infty},$ (121e)
$\displaystyle[\Delta^{\prime}(r_{-})]^{2}=\alpha^{4}(r_{-}-r_{+})^{2}(r_{-}-r^{\prime}_{+})^{2}(r_{-}-r^{\prime}_{-})^{2},$ (121f)
and
$\displaystyle B_{1}-\frac{1}{4}=\frac{\Omega(r_{+})}{[\Delta^{\prime}(r_{+})]^{2}},$ (122a)
$\displaystyle B_{2}-\frac{1}{4}=\frac{\Omega(r^{\prime}_{+})}{[\Delta^{\prime}(r^{\prime}_{+})]^{2}},$ (122b)
$\displaystyle B_{3}-\frac{1}{4}=\frac{\Omega(r^{\prime}_{-})}{[\Delta^{\prime}(r^{\prime}_{-})]^{2}}.$ (122c)
For convenience, we introduce the quantity $B_{4}$ which is defined from $A$
above via the relation
$\displaystyle
B_{4}-\frac{1}{4}=A=\frac{\Omega(r_{-})}{[\Delta^{\prime}(r_{-})]^{2}}.$ (123)
Then, it can be verified that the parameters $A_{i}$ and $B_{i}$ satisfy the
following relations:
$\displaystyle A_{1}+A_{2}+A_{3}=0,$ (124a)
$\displaystyle(A_{1}+A_{3})+(A_{1}+A_{2})z_{r}=B_{1}+B_{2}+B_{3}-B_{4}.$
(124b)
We now make the substitution
$\displaystyle\Phi(z)=z^{C_{1}}(z-1)^{C_{2}}(z-z_{r})^{C_{3}}\mathcal{Y}(z).$
(125)
This substitution transforms (119) into the form
$\displaystyle\mathcal{Y}^{\prime\prime}(z)+\bigg\{\frac{2C_{1}}{z}+\frac{2C_{2}}{z-1}+\frac{2C_{3}}{z-z_{r}}\bigg\}\mathcal{Y}^{\prime}(z)+\bigg\{\frac{A_{1}}{z}+\frac{A_{2}}{z-1}+\frac{A_{3}}{z-z_{r}}+\frac{2C_{1}C_{2}}{z(z-1)}+\frac{2C_{1}C_{3}}{z(z-z_{r})}+\frac{2C_{2}C_{3}}{(z-1)(z-z_{r})}+\frac{B_{1}+C^{2}_{1}-C_{1}}{z^{2}}+\frac{B_{2}+C^{2}_{2}-C_{2}}{(z-1)^{2}}+\frac{B_{3}+C^{2}_{3}-C_{3}}{(z-z_{r})^{2}}\bigg\}\mathcal{Y}(z)=0.$ (126)
Choosing $C_{j}$, $j=1,2,3$, such that
$\displaystyle C^{2}_{j}-C_{j}+B_{j}=0,$ (127a)
$\displaystyle\Longrightarrow\;C_{j}=\frac{1}{2}\pm i\sqrt{B_{j}-\frac{1}{4}},$ (127b)
we can eliminate the $1/z^{2}$, $1/(z-1)^{2}$, and $1/(z-z_{r})^{2}$ terms in
(126). By means of the relation (124a), we have
$\displaystyle\mathcal{Y}^{\prime\prime}(z)+\bigg\{\frac{2C_{1}}{z}+\frac{2C_{2}}{z-1}+\frac{2C_{3}}{z-z_{r}}\bigg\}\mathcal{Y}^{\prime}(z)+\frac{[-(A_{1}+A_{3})-(A_{1}+A_{2})z_{r}+2C_{1}C_{2}+2C_{1}C_{3}+2C_{2}C_{3}]z-q}{z(z-1)(z-z_{r})}\mathcal{Y}(z)=0,$ (128)
where $q=2C_{1}C_{3}+2C_{1}C_{2}z_{r}-A_{1}z_{r}$. This equation can be
written in the form of the general Heun equation (15), i.e.
$\displaystyle\mathcal{Y}^{\prime\prime}(z)+\bigg(\frac{\gamma}{z}+\frac{\delta}{z-1}+\frac{\epsilon}{z-z_{r}}\bigg)\mathcal{Y}^{\prime}(z)+\frac{\alpha\beta z-q}{z(z-1)(z-z_{r})}\mathcal{Y}(z)=0,$ (129)
where $\gamma,\delta,\epsilon,q$ are given in (16), and $\alpha$ and $\beta$
are
$\displaystyle\alpha=\bigg(C_{1}+C_{2}+C_{3}-\frac{1}{2}\bigg)\pm\sqrt{(C_{1}^{2}-C_{1})+(C_{2}^{2}-C_{2})+(C_{3}^{2}-C_{3})+\frac{1}{4}+(A_{1}+A_{3})+(A_{1}+A_{2})z_{r}},$ (130a)
$\displaystyle\beta=\bigg(C_{1}+C_{2}+C_{3}-\frac{1}{2}\bigg)\mp\sqrt{(C_{1}^{2}-C_{1})+(C_{2}^{2}-C_{2})+(C_{3}^{2}-C_{3})+\frac{1}{4}+(A_{1}+A_{3})+(A_{1}+A_{2})z_{r}}.$ (130b)
It can be easily seen that these parameters satisfy the required condition
$\gamma+\delta+\epsilon=\alpha+\beta+1$ for the GHE.
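Indeed, the check is immediate: comparing (128) with (129) gives $\gamma=2C_{1}$, $\delta=2C_{2}$, $\epsilon=2C_{3}$, while the square roots in (130a) and (130b) cancel in the sum $\alpha+\beta$, so that
$\displaystyle\gamma+\delta+\epsilon=2(C_{1}+C_{2}+C_{3})=\big[2(C_{1}+C_{2}+C_{3})-1\big]+1=\alpha+\beta+1.$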
We now show that the $\alpha$ and $\beta$ expressions above are exactly the
same as those in (16). Using the identity (124b) and the relation (127a), the
expressions for $\alpha$ and $\beta$ can be simplified to the following
$\displaystyle\alpha=\bigg(C_{1}+C_{2}+C_{3}-\frac{1}{2}\bigg)\pm i\sqrt{B_{4}-\frac{1}{4}},$ (131a)
$\displaystyle\beta=\bigg(C_{1}+C_{2}+C_{3}-\frac{1}{2}\bigg)\mp i\sqrt{B_{4}-\frac{1}{4}}.$ (131b)
Introducing $\tilde{B}_{j}=\sqrt{B_{j}-\frac{1}{4}}$, $j=1,2,3,4$, (127b)
can be written as
$\displaystyle
C_{j}=\frac{1}{2}\pm\tilde{B}_{j},\qquad\tilde{B}_{j}:=i\frac{J(r_{j})}{\Delta^{\prime}(r_{j})},$
(132)
with the same identification as before:
$r_{1},r_{2},r_{3},r_{4}\Longleftrightarrow
r_{+},r^{\prime}_{+},r^{\prime}_{-},r_{-}$. In terms of $\tilde{B}_{j}$,
$\alpha$ and $\beta$ become
$\displaystyle\alpha=1\pm\sum_{j=1}^{4}\tilde{B}_{j},\;\;\beta=1\pm\sum_{j=1}^{3}\tilde{B}_{j}\mp\tilde{B}_{4},$
(133)
which are nothing but the ones in (16)-(17a). This completes our derivation of
(15).
The derivation of the angular Heun equation is completely analogous to the
derivation above.
## References
* [1] B.P.Abbott, et al, “LIGO Scientific and Virgo Collaborations”, Phys. Rev. Lett., 116: 221101 (2016).
* [2] R.Abbott, et al, “GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run”, Phys. Rev. X, 11: 021053 (2021).
* [3] The Event Horizon Telescope Collaboration, “First M87 Event Horizon Telescope Results. I. The Shadow of the supermassive Black Hole”, Astrophys. J. Lett , 875: L1 (2019).
* [4] S.W.Hawking, “Information loss in black holes”, Phys. Rev. D, 72: 084013 (2005).
* [5] R.Penrose and R.M.Floyd, “Extraction of rotational energy from a black hole”, Nature, 229: 177 (1971).
* [6] J.D.Bekenstein, “Extraction of energy and charge from a black hole”, Phys. Rev. D, 7: 949 (1973).
* [7] K.Destounis, “Superradiant instability of charged scalar fields in higher-dimensional Reissner-Nordström-de Sitter black holes”, Phys. Rev. D, 100: 044054 (2019).
* [8] R.Brito, V.Cardoso, and P.Pani, “Superradiance”, Lect. Notes Phys., 906: 1 (2015).
* [9] E.Berti, V.Cardoso and C.M.Will, “On gravitational-wave spectroscopy of massive black holes with the space interferometer LISA”, arXiv: gr-qc/0512160v2 [gr-qc] (2006).
* [10] A.Jansen, “Overdamped modes in Schwarzschild-de Sitter and a Mathematica package for the numerical computation of quasinormal modes”, Eur. Phys. J. Plus, 132: 546 (2017).
* [11] K.D.Kokkotas and B.G.Schmidt, “Quasinormal modes of stars and black holes”, Living Rev. Relativity, 2: 2 (1999).
* [12] D.Birmingham, I.Sachs, and S.N.Solodukhin, “Conformal Field Theory Interpretation of Black Hole Quasinormal Modes”, Phys. Rev. Lett., 88: 151301 (2002).
* [13] E.Berti, V.Cardoso and A.O.Starinets, “Quasinormal modes of black holes and black branes”, Class. Quantum Grav., 26: 163001 (2009).
* [14] S.A.Teukolsky, “Perturbations of a rotating black hole. I. Fundamental equations for gravitational, electromagnetic, and neutrino-field perturbations”, Astrophys. J., 185: 635-647 (1973).
* [15] W.H.Press and S.A.Teukolsky, “Perturbations of a rotating black hole. II. Dynamical stability of the Kerr metric”, Astrophys. J., 185: 649-673 (1973).
* [16] D.V.Gal’tsov and D.Nú$\tilde{\textrm{n}}$ez, “Exact solutions to the First-order perturbation problem in a de Sitter background”, Gen. Rel. Grav., 21: 3 (1989).
* [17] Y.Huang and H.S.Zhang, “Quasibound states of charged dilatonic black holes”, Phys. Rev. D, 103: 044062 (2021).
* [18] H.S.Vieira and K.D.Kokkotas, “Quasibound states of Schwarzschild acoustic black holes”, Phys. Rev. D, 104: 024035 (2021).
* [19] H.S.Vieira, V.B.Bezerra and C.R.Muniz, “Exact solutions of the Klein-Gordon equation in the Kerr-Newman background and Hawking radiation”, Ann. Phys., 350: 14-28 (2014).
* [20] H.S.Vieira and V.B.Bezerra, “Charged scalar fields in a Kerr-Sen black hole: exact solutions, Hawking radiation, and resonant frequencies”, Chin. Phys. C , 43: 035102 (2019).
* [21] S.Q.Wu and X.Cai, “Massive complex scalar field in the Kerr-Sen geometry: Exact solution of wave equation and Hawking radiation ”, J. Math. Phys. , 44: 3 (2003).
* [22] A.Ronveaux, “Heun’s Differential Equations”, Oxford University, New York, (1995).
* [23] S.Mano, H.Suzuki and E.Takasugi, “Analytic solutions of the Teukolsky equation and their low frequency expansions”, Prog. Theor. Phys., 95: 6 (1996).
* [24] H.Suzuki, E.Takasugi and H.Umetsu, “Perturbations of Kerr-de Sitter black hole and Heun’s equations”, Prog. Theor. Phys., 100: 3 (1998).
* [25] D.Batic and H.Schmid, “Heun equation, Teukolsky equation, and Type-D metrics”, J. Math. Phys., 48: 042502 (2007).
* [26] M.Hortacsu, “The radial Teukolsky equation for Kerr-Newman-de Sitter geometry: Revisited”, Eur. Phys. J. Plus , 136: 13 (2021).
* [27] H.S.Vieira and V.B.Bezerra, “Confluent Heun functions and the physics of black holes: resonant frequencies, Hawking radiation and scattering of scalar waves”, Ann. Phys., 373: 28-42 (2016).
* [28] H.Motohashi and S.Noda, “Exact solution for wave scattering from black holes: Formulation”, Prog. Theor. Exp. Phys., 083E03 (2021).
* [29] S.Noda and H.Motohashi, “Spectroscopy of Kerr-$AdS_{5}$ spacetime with the Heun function: Quasinormal modes, greybody factor, and evaporation”, Phys. Rev. D, 106: 064025 (2022).
* [30] Y.Hatsuda, “Quasinormal modes of Kerr-de Sitter black holes via the Heun function”, Class. Quantum Grav., 38: 025015 (2021).
* [31] K.Destounis, G.Mascher and K.D.Kokkotas, “Dynamical behavior of the C-metric: Charged scalar fields, quasinormal modes, and superradiance”, Phys. Rev. D, 105: 124058 (2022).
* [32] J.B.Griffiths, P.Krtouš and J.Podolský, “Interpreting the C-metric”, Class. Quantum Grav., 23: 6745-6766 (2006).
* [33] K.Hong and E.Teo, “A new form of the C-metric”, Class. Quantum Grav., 20: 3269-3277 (2003).
* [34] K.Destounis, R.D.B. Fontana and F.C.Mena, “Accelerating black holes: Quasinormal modes and late-time tails”, Phys. Rev. D, 102: 044005 (2020).
* [35] K.Destounis, R.D.B. Fontana and F.C.Mena, “Stability of the Cauchy horizon in accelerating black-hole spacetimes”, Phys. Rev. D, 102: 104037 (2020).
* [36] D.Kofron, “Separability of test fields equations on the C-metric background”, Phys. Rev. D, 92: 124064 (2015).
* [37] S.Yoshida, N.Uchikata and T.Futamase, “Quasinormal modes of Kerr-de Sitter black holes”, Phys. Rev. D, 81: 044005 (2010).
* [38] A.Zhidenko, “Quasi-normal modes of Schwarzschild-de Sitter black holes”, Class. Quant. Grav., 21: 273-280 (2004).
* [39] D.Bini, C.Cherubini and A.Geralico, “Massless field perturbations of the spinning C-metric”, J. Math. Phys. , 49: 062502 (2008).
* [40] H.S.Vieira, K.Destounis and K.D.Kokkotas, “Slowly-rotating curved acoustic black holes: Quasinormal modes, Hawking-Unruh radiation and quasibound states”, arXiv: 2112.08711v2 [gr-qc] (2022).
* [41] E.W.Leaver, “Quasinormal modes of Reissner-Nordström black holes”, Phys. Rev. D, 41: 10 (1990).
* [42] N.Andersson and C.J.Howls, “The asymptotic quasinormal mode spectrum of non-rotating black holes”, Class. Quant. Grav., 21: 1623-1642 (2004).
* [43] E.Berti and K.D.Kokkotas, “Quasinormal modes of Reissner-Nordström-anti-de Sitter black holes: Scalar, electromagnetic, and gravitational perturbations”, Phys. Rev. D, 67: 064020 (2003).
* [44] M.Giammatteo and I.G.Moss, “Gravitational quasinormal modes for Kerr anti-de Sitter black holes”, Class. Quant. Grav., 22: 1803-1824 (2005).
* [45] F.Willenborg, D.Philipp and C.Lämmerzahl, “Exact wave-optical imaging of a Kerr-de Sitter black hole using Heun’s equation”, arXiv: 2310.12917v1 [gr-qc] (2023).
* [46] Y.Hatsuda, “Quasinormal Modes of C-metric from SCFTs”, arXiv: 2308.16677v1 [gr-qc] (2023).
* [47] T.Damour and R.Ruffini, “Black-hole evaporation in Klein-Sauter-Heisenberg-Euler formalism”, Phys. Rev. D, 14: 2 (1976).
* [48] D.N.Page, “Particle emission rates from a black hole. II. Massless particles from a rotating hole”, Phys. Rev. D, 14: 3260 (1976).
* [49] S.Sannan, “Heuristic Derivation of the Probability Distributions of Particles Emitted by a Black Hole”, Gen. Relat. Grav., 20: 3 (1988).
* [50] F.Belgiorno and S.L.Cacciatori, “The absence of normalizable time-periodic solutions for the Dirac equation in the Kerr-Newman-dS black hole background ”, J. Phys. A: Math. Theor. , 42: 135207 (2009).
* [51] R.D.B.Fontana and F.C.Mena, “Quasinormal modes and stability of accelerating Reissner-Nordström AdS black holes”, J. High Energy Phys., 10: 047 (2022).
* [52] A.M.Ghezelbash and R.B.Mann, “Entropy and mass bounds of Kerr-de Sitter spacetimes”, Phys. Rev. D, 72: 064024 (2005).
# Towards a Universal Continuous Knowledge Base
Gang Chen1,3,4, Maosong Sun1,3,4,5, and Yang Liu1,2,3,4,5
1Department of Computer Science and Technology, Tsinghua University
2Institute for AI Industry Research, Tsinghua University
3Institute for Artificial Intelligence, Tsinghua University
4Beijing National Research Center for Information Science and Technology
5Beijing Academy of Artificial Intelligence
Yang Liu is the corresponding author:<EMAIL_ADDRESS>
###### Abstract
In artificial intelligence (AI), knowledge is the information required by an
intelligent system to accomplish tasks. While traditional knowledge bases use
discrete, symbolic representations, detecting knowledge encoded in the
continuous representations learned from data has received increasing attention
recently. In this work, we propose a method for building a continuous
knowledge base (CKB) that can store knowledge imported from multiple, diverse
neural networks. The key idea of our approach is to define an interface for
each neural network and cast knowledge transferring as a function simulation
problem. Experiments on text classification show promising results: the CKB
imports knowledge from a single model and then exports the knowledge to a new
model, achieving performance comparable to the original model. More
interestingly, we import the knowledge from multiple models into the knowledge
base, from which the fused knowledge is exported back to a single model,
achieving a higher accuracy than the original model. With the CKB, it is also
easy to achieve knowledge distillation and transfer learning. Our work opens
the door to building a universal continuous knowledge base to collect, store,
and organize all continuous knowledge encoded in various neural networks
trained for different AI tasks.
## 1 Introduction
A knowledge base (KB) is a centralized repository where structured and
unstructured information is stored, organized, and shared. KBs have been
widely used to accomplish artificial intelligence (AI) tasks. While their
initial use was closely connected with expert systems (Hayes-Roth et al.,,
1983), knowledge graphs such as Freebase (Bollacker et al.,, 2008) have become
a typical example of knowledge base used in AI systems recently, showing their
effectiveness in applications such as image classification (Marino et al.,,
2017), natural language processing (Zhang et al.,, 2018), and bioinformatics
(Ernst et al.,, 2015).
While the use of discrete representations enables KBs to be easily
interpretable to humans, it is notoriously challenging to build such discrete
knowledge bases. On one hand, it is time-consuming and labor-intensive to
manually construct KBs (Lenat and Guha, 1990). The situation is aggravated
when it comes to a vertical domain in which only a limited number of
annotators have the required expertise. On the other hand, although machine learning
approaches are able to mine knowledge from data automatically, they often
inevitably have limitations in the correctness, depth, and breadth of
knowledge they can acquire (Richardson and Domingos,, 2003). Therefore, how to
build large-scale, high-quality, and wide-coverage knowledge bases still
remains a grand challenge to the community.
The past two decades have witnessed the rapid progress of deep learning
(Hinton and Salakhutdinov,, 2006), which has proven effective in learning
continuous representations from data, leading to substantial improvements in a
variety of AI tasks like speech recognition (Dahl et al.,, 2011), image
classification (Krizhevsky et al.,, 2012), and machine translation (Vaswani et
al.,, 2017). More recently, there has been a significant paradigm shift from
learning continuous representations from limited labeled data in a supervised
fashion to learning them from abundant unlabeled data in a self-supervised way (Devlin et
al.,, 2019; Brown et al.,, 2020), which further advances the development of
learning continuous representations from data.
Given the remarkable success of deep learning, an interesting question
naturally arises: Is knowledge encoded in the learned continuous
representations? A number of researchers have developed probing methods to
evaluate the extent to which continuous representations encode knowledge of
interest (Linzen et al.,, 2016; Belinkov et al.,, 2017; Blevins et al.,, 2018;
Hewitt and Manning,, 2019). For example, while Belinkov et al., (2017) conduct
a quantitative evaluation that sheds light on the ability of neural machine
translation models to capture word structure, Hewitt and Manning, (2019)
propose a structural probe that can test whether syntax trees are consistently
embedded in a linear transformation of word representation space.
Figure 1: A universal continuous knowledge base. $\mathcal{M}$ denotes the
knowledge base. $\mathrm{NN}_{\bm{\theta}_{n}}$ ($n\in[1,6]$) denotes the
$n$-th neural network parameterized by $\bm{\theta}_{n}$. On one hand,
continuous knowledge encoded in a neural network can be imported to the
knowledge base. On the other hand, continuous knowledge stored in the
knowledge base can be exported to a neural network. Note that these neural
networks are independent of each other: they can have different model
structures and be trained for different AI tasks.
If knowledge can be defined as the information required by a system to
accomplish AI tasks, we would like to distinguish between two categories of
knowledge: discrete and continuous. While discrete knowledge uses symbolic
representations explicitly handcrafted by humans, continuous knowledge is
implicitly encoded in neural networks automatically trained on data. If each
trained neural network can be seen as a repository of continuous knowledge,
there is an important question that needs to be answered: Is it possible to
build a universal continuous knowledge base that stores knowledge imported
from diverse neural networks?
In this work, we propose a method for building a universal continuous
knowledge base (CKB). As shown in Figure 1, the CKB allows for knowledge
transferring between multiple, diverse neural networks. The knowledge encoded
in one neural network can be imported to the CKB, from which the stored
knowledge can be exported to another neural network. As a neural network can
be seen as a parameterized function, we treat the function and its parameters
as the continuous knowledge encoded in the neural network (Section 2.1). Using
a memory hierarchy to represent the knowledge base (Section 2.2), our approach
defines an interface function for each neural network and casts importing and
exporting knowledge as a function simulation problem (Sections 2.3 and 2.5).
We adopt multi-task training to import knowledge from multiple neural networks
(Section 2.4). Experiments on text classification show that our method is able
to fuse knowledge imported from multiple, diverse neural networks and obtain
better performance than single neural networks. It is also easy to use the
knowledge base to simulate other learning paradigms such as knowledge
distillation and transfer learning.
## 2 Approach
To use a continuous knowledge base to store the knowledge encoded in neural
networks, we need to answer a number of fundamental questions:
1. 1.
What is the knowledge encoded in a neural network? (Section 2.1)
2. 2.
How to design a continuous knowledge base? (Section 2.2)
3. 3.
How to import the knowledge encoded in a neural network into the continuous
knowledge base? (Section 2.3)
4. 4.
How to import the knowledge encoded in multiple neural networks to the
continuous knowledge base? (Section 2.4)
5. 5.
How to export the knowledge stored in the continuous knowledge base to a
neural network? (Section 2.5)
### 2.1 Knowledge Encoded in a Neural Network
A neural network can be seen as a parameterized, composite function. For
example, Figure 2 shows a simple feed-forward neural network involving the
composition of two non-linear functions:
$\displaystyle\mathbf{h}=f(\mathbf{W}_{xh}\mathbf{x}),$ (1)
$\displaystyle\mathbf{y}=g(\mathbf{W}_{hy}\mathbf{h}),$ (2)
where $\mathbf{x}$ is the input layer, $\mathbf{h}$ is the hidden layer,
$\mathbf{y}$ is the output layer, $\mathbf{W}_{xh}$ is the weight matrix
between the input and hidden layers, $\mathbf{W}_{hy}$ is the weight matrix
between the hidden and output layers, and $f(\cdot)$ and $g(\cdot)$ are two
non-linear functions. For simplicity, we omit bias terms. The neural network
shown in Figure 2 can also be denoted by
$\displaystyle\mathbf{y}=\mathrm{FNN}_{\bm{\theta}}(\mathbf{x})=g_{\bm{\theta}_{2}}(f_{\bm{\theta}_{1}}(\mathbf{x}))$
(3)
where $\mathrm{FNN}_{\bm{\theta}}(\cdot)$ is a non-linear function
parameterized by $\bm{\theta}=\\{\mathbf{W}_{xh},\mathbf{W}_{hy}\\}$,
$f_{\bm{\theta}_{1}}(\cdot)$ is a non-linear function parameterized by
$\bm{\theta}_{1}=\\{\mathbf{W}_{xh}\\}$, and $g_{\bm{\theta}_{2}}(\cdot)$ is a
non-linear function parameterized by $\bm{\theta}_{2}=\\{\mathbf{W}_{hy}\\}$.
Figure 2: Example of a feed-forward neural network that can be seen as a
composite function. We treat the parameterized function as the continuous
knowledge encoded in the neural network. $\mathbf{x}$ is the input,
$\mathbf{h}$ is the hidden state, and $\mathbf{y}$ is the output.
$\mathbf{W}_{xh}$ and $\mathbf{W}_{hy}$ are parameters.
We distinguish between two functions related to neural networks:
1. 1.
Global function: a function that maps the input of a neural network to its
output.
2. 2.
Local function: a function that participates in the composition of a global
function.
For example, $\mathrm{FNN}_{\bm{\theta}}(\cdot)$ is a global function and
$f_{\bm{\theta}_{1}}(\cdot)$ is a local function.
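A minimal sketch of this decomposition in code (assuming PyTorch; the layer sizes and the concrete nonlinearities tanh and sigmoid are illustrative stand-ins for the unspecified $f$ and $g$):

```python
import torch

# Two-layer network of Figure 2; bias terms are omitted as in Eq. (1).
W_xh = torch.randn(16, 8, requires_grad=True)   # theta_1
W_hy = torch.randn(4, 16, requires_grad=True)   # theta_2

def f(x):            # local function f_{theta_1}
    return torch.tanh(W_xh @ x)

def g(h):            # local function g_{theta_2}
    return torch.sigmoid(W_hy @ h)

def fnn(x):          # global function FNN_theta, the composition g(f(x))
    return g(f(x))

y = fnn(torch.randn(8))
```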
Given a trained neural network that is able to accomplish an AI task, we
believe that it is the parameterized function that represents the continuous
knowledge encoded in the neural network. It is important to note that both the
function and the parameters are indispensable. On one hand, if a function has
no parameters (e.g., the max pooling function), it can be directly called and
there is no need to import it to the knowledge base. On the other hand, as
parameters are defined to be bound to a function, parameters themselves are
useless if the associated function is missing.
Figure 3: (a) A feed-forward neural network and (b) its interface to the
knowledge base. The neural network is represented as a parameterized function
$\mathrm{FNN}_{\bm{\theta}}(\cdot)$ that takes $\mathbf{x}$ as input and
outputs $\mathbf{y}$. Its interface to the knowledge base $\mathcal{M}$ is
also defined as a parameterized function
$\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}}(\cdot)$ that shares the same
dimensions of input and output with the neural network. An interface is used
to facilitate transferring knowledge between a neural network and the
knowledge base.
### 2.2 Continuous Knowledge Base
#### 2.2.1 Memory Hierarchy
To build a CKB, we propose to use two levels of real-valued matrices inspired
by the use of memory hierarchy in computer architecture (Hennessy and
Patterson,, 2011). At the high level, the CKB maintains one real-valued matrix
$\mathbf{M}^{h}$. At the low level, the CKB maintains $K$ real-valued
matrices:
$\mathbf{M}^{l}_{1},\dots,\mathbf{M}^{l}_{k},\dots,\mathbf{M}^{l}_{K}$. As a
result, the CKB consists of $K+1$ real-valued matrices:
$\displaystyle\mathcal{M}=\left\\{\mathbf{M}^{h},\mathbf{M}^{l}_{1},\dots,\mathbf{M}^{l}_{K}\right\\}$
(4)
Note that these real-valued matrices are learnable parameters of the CKB.
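A minimal sketch of this memory hierarchy, assuming PyTorch; $K$, the number of rows per matrix, and the dimension are illustrative choices:

```python
import torch
import torch.nn as nn

class ContinuousKnowledgeBase(nn.Module):
    # Memory hierarchy of Eq. (4): one high-level matrix M^h and K low-level
    # matrices M^l_1, ..., M^l_K, all registered as learnable parameters.
    def __init__(self, K=8, slots=64, dim=128):
        super().__init__()
        self.M_high = nn.Parameter(0.02 * torch.randn(slots, dim))
        self.M_low = nn.ParameterList(
            [nn.Parameter(0.02 * torch.randn(slots, dim)) for _ in range(K)])
```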
#### 2.2.2 Implementation of an Interface
To facilitate importing and exporting knowledge between a neural network and
the CKB, we introduce an interface for the neural network. As shown in Figure
3, given a neural network $\mathrm{FNN}_{\bm{\theta}}(\cdot)$, its interface
can be defined as a function:
$\displaystyle\mathbf{y}=\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}}(\mathbf{x},\mathcal{M})$
(5)
where $\mathbf{x}$ is the input, $\mathbf{y}$ denotes the output, and
$\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}}(\cdot)$ is an interface
function parameterized by $\bm{\phi}$ tailored for
$\mathrm{FNN}_{\bm{\theta}}(\cdot)$. Note that the input and output of the
interface have the same dimensions with those of
$\mathrm{FNN}_{\bm{\theta}}(\cdot)$. With interfaces, we can cast importing
and exporting knowledge between the neural network
$\mathrm{FNN}_{\bm{\theta}}(\cdot)$ and the knowledge base as a function
simulation problem:
$\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}}(\mathbf{x},\mathcal{M})$ runs
the same way $\mathrm{FNN}_{\bm{\theta}}(\mathbf{x})$ does given the same
input $\mathbf{x}$ (see Sections 2.3, 2.4, and 2.5 for details).
Figure 4: Illustration of how an interface to the continuous knowledge base
works. $\mathcal{M}$ is a continuous knowledge base, which is organized as a
memory hierarchy: low-level real-valued matrices
$\mathbf{M}^{l}_{1},\dots\mathbf{M}^{l}_{K}$ and a high-level matrix
$\mathbf{M}^{h}$. The knowledge base provides an interface for each neural
network. Given an input $\mathbf{x}$, the interface first generates two extra
matrices $\tilde{\mathbf{M}}^{l}$, $\tilde{\mathbf{M}}^{h}$. Note that the
matrices generated on the fly are denoted by dashed rectangles. Then, the
interface uses the attention function to generate hidden states
$\mathbf{h}_{1},\dots,\mathbf{h}_{K}$, which are concatenated with
$\mathbf{M}^{h}$ and $\tilde{\mathbf{M}}^{h}$ to serve as the key and value
(i.e., $\mathbf{H}$) of another attention function to generate the output
$\mathbf{y}$.
Figure 4 shows an example that illustrates how an interface works. Given an
input $\mathbf{x}$, our approach first adds two extra matrices on the fly:
$\displaystyle\tilde{\mathbf{M}}^{l}$
$\displaystyle=\mathbf{x}\mathbf{W}^{l},$ (6)
$\displaystyle\tilde{\mathbf{M}}^{h}$ $\displaystyle=\mathbf{x}\mathbf{W}^{h}$
(7)
where $\mathbf{W}^{l},\mathbf{W}^{h}\in\bm{\phi}$ are two interface
parameters.
Then, for each low-level matrix $\mathbf{M}^{l}_{k}$, we use the attention
function (Vaswani et al.,, 2017) to obtain a hidden state:
$\begin{split}\tilde{\mathbf{M}}^{l}_{k}&=\big{[}\mathbf{M}^{l}_{k};\tilde{\mathbf{M}}^{l}\big{]},k=1,\dots,K\\\
\mathbf{h}_{k}&=\mathrm{Attention}(\mathbf{x}\mathbf{W}^{\text{q}},\tilde{\mathbf{M}}^{l}_{k}\mathbf{W}^{\text{k}},\tilde{\mathbf{M}}^{l}_{k}\mathbf{W}^{\text{v}})\end{split}$
(8)
where $\mathbf{W}^{\text{q}}$, $\mathbf{W}^{\text{k}}$, and
$\mathbf{W}^{\text{v}}$ are the transformation matrices for query, key, and
value used in the attention mechanism, respectively.
Finally, the output is calculated by calling the attention function again:
$\displaystyle\mathbf{y}=\mathrm{Attention}(\mathbf{x}\mathbf{W}^{\text{q}},\mathbf{H}\mathbf{W}^{\text{k}},\mathbf{H}\mathbf{W}^{\text{v}})$
(9)
where the hidden state matrix $\mathbf{H}$ is the concatenation of the high-
level matrix, the hidden states, and the extra matrix
$\tilde{\mathbf{M}}^{h}$:
$\displaystyle\mathbf{H}=\big{[}\mathbf{M}^{h};\mathbf{h}_{1};\dots;\mathbf{h}_{K};\tilde{\mathbf{M}}^{h}\big{]}$
(10)
While every neural network has its own interface to the knowledge base and the
interface parameters are often different, the continuous knowledge base
$\mathcal{M}$ is shared among all neural networks.
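Assembling Eqs. (6)-(10), a sketch of one interface forward pass follows. The single-head attention, the treatment of $\mathbf{x}$ as a single vector, and the final output projection (introduced here only to match an arbitrary output dimension; it is not part of the equations above) are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention(q, K_mat, V_mat):
    # Single-head scaled dot-product attention; q: (1, dim), K_mat/V_mat: (n, dim).
    w = F.softmax(q @ K_mat.T / K_mat.shape[-1] ** 0.5, dim=-1)
    return w @ V_mat

class Interface(nn.Module):
    # One interface per neural network; the shared CKB `ckb` is passed in.
    def __init__(self, ckb, d_in, d_out, dim=128, extra=4):
        super().__init__()
        self.ckb, self.extra, self.dim = ckb, extra, dim
        self.W_l = nn.Linear(d_in, extra * dim, bias=False)  # Eq. (6)
        self.W_h = nn.Linear(d_in, extra * dim, bias=False)  # Eq. (7)
        self.W_q = nn.Linear(d_in, dim, bias=False)          # query transform
        self.W_k = nn.Linear(dim, dim, bias=False)           # key transform
        self.W_v = nn.Linear(dim, dim, bias=False)           # value transform
        self.out = nn.Linear(dim, d_out)  # assumed projection to the output size

    def forward(self, x):
        M_l_x = self.W_l(x).view(self.extra, self.dim)  # on-the-fly matrices
        M_h_x = self.W_h(x).view(self.extra, self.dim)
        q = self.W_q(x).unsqueeze(0)
        hs = []
        for M_k in self.ckb.M_low:                      # Eq. (8): one h_k per M^l_k
            M_aug = torch.cat([M_k, M_l_x], dim=0)
            hs.append(attention(q, self.W_k(M_aug), self.W_v(M_aug)))
        H = torch.cat([self.ckb.M_high, *hs, M_h_x], dim=0)  # Eq. (10)
        y = attention(q, self.W_k(H), self.W_v(H))           # Eq. (9)
        return self.out(y.squeeze(0))
```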
#### 2.2.3 Interfaces for Global and Local Functions
As the implementation of an interface is transparent to the model structure,
it is easy to define an interface for an arbitrary neural network since one
only needs to specify the parameterized function and the dimensions of its
input and output.
Often, it is convenient to directly define an interface for the global
function if there are only a small number of parameters. For example, the
global function for the feed-forward neural network shown in Figure 2 is
$\mathrm{FNN}_{\bm{\theta}}(\cdot)$, we can use Eq. (5) to define its
interface.
However, it is more efficient to define an interface for a local function if
the neural network calls it frequently. Consider a recurrent neural network
defined as a global function:
$\displaystyle\mathbf{y}=\mathrm{RNN}_{\bm{\theta}}(\mathbf{x}_{1:T})$ (11)
where $\mathbf{x}_{1:T}=\mathbf{x}_{1},\dots,\mathbf{x}_{T}$ is the input
sequence and $\mathbf{y}$ is the output.
The global function $\mathrm{RNN}_{\bm{\theta}}(\cdot)$ runs by calling the
local function $f_{\bm{\theta}}(\cdot)$ repeatedly:
$\displaystyle\mathbf{h}_{t}=f_{\bm{\theta}}(\mathbf{x}_{t},\mathbf{h}_{t-1}),\
t=1,\dots,T$ (12)
where $\mathbf{x}_{t}$ and $\mathbf{h}_{t}$ are the input and the hidden state
at time step $t$, respectively. Note that we let $\mathbf{y}=\mathbf{h}_{T}$
for simplicity.
As a result, instead of defining an interface for the global function
$\mathrm{RNN}_{\bm{\theta}}(\cdot)$, it is more suitable to define an
interface for the local function $f_{\bm{\theta}}(\cdot)$:
$\displaystyle\mathbf{h}_{t}=\mathrm{Interface}^{f}_{\bm{\phi}}(\mathbf{x}_{t},\mathbf{h}_{t-1},\mathcal{M})$
(13)
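For illustration, such a local-function interface could be driven as below; concatenating $\mathbf{x}_{t}$ and $\mathbf{h}_{t-1}$ into one input vector is an assumed way of feeding the two arguments to a single-input interface:

```python
import torch

def rnn_via_interface(interface, xs, h0):
    # Sketch of Eq. (13): the interface plays the role of the RNN cell,
    # consuming [x_t; h_{t-1}] at every time step.
    h = h0
    for x_t in xs:                        # xs: sequence of input vectors
        h = interface(torch.cat([x_t, h]))
    return h                              # y = h_T, as in the text
```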
Figure 5: Importing and exporting knowledge for single neural networks. We
cast importing knowledge from a neural network to the knowledge base as a
function simulation problem: the knowledge is successfully imported only if
the interface $\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}}$ runs the same
way the neural network $\mathrm{FNN}_{\bm{\theta}}$ does. This is done by
finding knowledge base and interface parameters (i.e., $\hat{\mathcal{M}}$ and
$\hat{\bm{\phi}}$) that minimize the difference between the outputs of two
functions (i.e., $\Delta(\mathbf{y}_{1},\mathbf{y}_{2})$). Note that the
parameters of the neural networks $\bm{\theta}$ are fixed during importing.
Similarly, exporting knowledge is also treated as a function simulation
problem: finding model parameters $\hat{\bm{\theta}}$ to enable the neural
network to imitate the interface while keeping $\mathcal{M}$ and $\bm{\phi}$
fixed.
### 2.3 Importing Knowledge from Single Neural Networks
As shown in Figure 5, we cast importing knowledge from a neural network to the
knowledge base as a function simulation (Jiao et al.,, 2020) problem: the
knowledge base stores the knowledge encoded in a neural network only if the
corresponding interface runs in the same way the neural network does. For
example, to import the knowledge from the feed-forward neural network shown in
Figure 2 to the knowledge base, we require that the following equation holds
for an arbitrary input:
$\displaystyle\forall\mathbf{x}\in\mathcal{X}:\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}}(\mathbf{x},\mathcal{M})=\mathrm{FNN}_{\bm{\theta}}(\mathbf{x})$
(14)
where $\mathcal{X}$ is a set of all possible inputs.
As a result, the importing process is equivalent to an optimization problem.
For example, given a set of inputs
$\mathcal{D}=\\{\mathbf{x}^{(n)}\\}_{n=1}^{N}$, importing
$\mathrm{FNN}_{\bm{\theta}}(\cdot)$ to the knowledge base $\mathcal{M}$ is
done by
$\displaystyle\hat{\mathcal{M}},\hat{\bm{\phi}}=\mathop{\rm
argmin}_{\mathcal{M},\bm{\phi}}\Big{\\{}L_{\mathrm{import}}(\mathcal{D},\mathcal{M},\bm{\phi})\Big{\\}}$
(15)
where the loss function is defined as
$\displaystyle
L_{\mathrm{import}}(\mathcal{D},\mathcal{M},\bm{\phi})=\sum_{n=1}^{N}\Delta\Big{(}\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}}(\mathbf{x}^{(n)},\mathcal{M}),\mathrm{FNN}_{\bm{\theta}}(\mathbf{x}^{(n)})\Big{)}$
(16)
We use $\Delta(\cdot)$ (e.g., a cosine function) to measure the difference
between the outputs of two functions.
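A sketch of the resulting training loop for (15)-(16), with $\Delta(\cdot)$ instantiated as a cosine distance (one of the choices mentioned above); the optimizer and schedule are illustrative:

```python
import torch

def import_knowledge(interface, network, inputs, steps=1000, lr=1e-3):
    # Eqs. (15)-(16): fit the interface (phi) and the shared CKB (M) so that
    # the interface mimics the frozen network.
    for p in network.parameters():
        p.requires_grad_(False)          # theta stays fixed during importing
    # interface.parameters() covers both phi and M, since the CKB is
    # registered as a submodule of the interface in the sketch above.
    opt = torch.optim.Adam(interface.parameters(), lr=lr)
    for _ in range(steps):
        loss = 0.0
        for x in inputs:
            delta = 1 - torch.cosine_similarity(interface(x), network(x), dim=0)
            loss = loss + delta
        opt.zero_grad()
        loss.backward()
        opt.step()
```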
### 2.4 Importing Knowledge from Multiple Neural Networks
To import the knowledge encoded in multiple neural networks to the knowledge
base, a natural way is to minimize the importing loss functions of these
neural networks jointly. For example, let
$\mathcal{D}_{1}=\\{\mathbf{x}^{(m)}\\}_{m=1}^{M}$ be a set of inputs for a
feed-forward neural network and
$\mathcal{D}_{2}=\\{\mathbf{x}^{(n)}\\}_{n=1}^{N}$ be a set of inputs for a
convolutional neural network. Note that the two datasets are independent: the
input to the feed-forward neural network can be an image and the input to the
convolutional neural network can be a natural language sentence.
The importing loss function of the feed-forward neural network can be defined
as
$\displaystyle
L_{\mathrm{import}}^{\mathrm{FNN}}(\mathcal{D}_{1},\mathcal{M},\bm{\phi}_{1})=\sum_{m=1}^{M}\Delta\Big{(}\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}_{1}}(\mathbf{x}^{(m)},\mathcal{M}),\mathrm{FNN}_{\bm{\theta}_{1}}(\mathbf{x}^{(m)})\Big{)}$
(17)
Similarly, the importing loss function of the convolutional neural network can
be defined as
$\displaystyle
L_{\mathrm{import}}^{\mathrm{CNN}}(\mathcal{D}_{2},\mathcal{M},\bm{\phi}_{2})=\sum\limits_{n=1}^{N}\Delta\Big{(}\mathrm{Interface}^{\mathrm{CNN}}_{\bm{\phi}_{2}}(\mathbf{x}^{(n)},\mathcal{M}),\mathrm{CNN}_{\bm{\theta}_{2}}(\mathbf{x}^{(n)})\Big{)}$
(18)
Then, importing $\mathrm{FNN}_{\bm{\theta}_{1}}(\cdot)$ and
$\mathrm{CNN}_{\bm{\theta}_{2}}(\cdot)$ to the knowledge base $\mathcal{M}$
synchronously is given by
$\displaystyle\hat{\mathcal{M}},\hat{\bm{\phi}}_{1},\hat{\bm{\phi}}_{2}=\mathop{\rm
argmin}_{\mathcal{M},\bm{\phi}_{1},\bm{\phi}_{2}}\Big{\\{}L_{\mathrm{import}}^{\mathrm{FNN}}(\mathcal{D}_{1},\mathcal{M},\bm{\phi}_{1})+L_{\mathrm{import}}^{\mathrm{CNN}}(\mathcal{D}_{2},\mathcal{M},\bm{\phi}_{2})\Big{\\}}$
(19)
It is easy to extend the above approach to more than two neural networks.
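As a minimal sketch of Eq. (19), with the same hypothetical module names and cosine-distance $\Delta$ as in the previous sketch, one joint step simply sums the two losses over the shared memory before a single backward pass:

```python
import torch
import torch.nn.functional as F

def joint_import_step(iface_fnn, iface_cnn, memory, fnn, cnn,
                      batch_fnn, batch_cnn, optimizer):
    """One step of Eq. (19): the two interfaces share the memory M but keep
    separate parameters phi_1, phi_2; both source networks stay frozen."""
    with torch.no_grad():
        t_fnn, t_cnn = fnn(batch_fnn), cnn(batch_cnn)
    p_fnn = iface_fnn(batch_fnn, memory)
    p_cnn = iface_cnn(batch_cnn, memory)
    loss = (1 - F.cosine_similarity(p_fnn, t_fnn, dim=-1)).sum() \
         + (1 - F.cosine_similarity(p_cnn, t_cnn, dim=-1)).sum()
    optimizer.zero_grad()
    loss.backward()                      # updates M, phi_1, and phi_2 together
    optimizer.step()
    return loss.item()
```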
### 2.5 Exporting Knowledge to a Neural Network
We can also use the interface to export the knowledge stored in the CKB to a
neural network. Still, we treat exporting knowledge from the knowledge base to
a neural network as a function simulation problem: the knowledge is exported
to the neural network only if the neural network runs in the same way the
interface does. For example, to export the knowledge from the CKB to the feed-
forward neural network shown in Figure 2, we require that the following
equation holds for an arbitrary input:
$\displaystyle\forall\mathbf{x}\in\mathcal{X}:\mathrm{FNN}_{\bm{\theta}}(\mathbf{x})=\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}}(\mathbf{x},\mathcal{M})$
(20)
As a result, the exporting process is also equivalent to an optimization
problem: our goal is to modify the parameters of the neural network to
minimize the difference between the neural network and its interface. For
example, given a set of inputs $\mathcal{D}=\\{\mathbf{x}^{(n)}\\}_{n=1}^{N}$,
exporting the knowledge stored in the knowledge base $\mathcal{M}$ to
$\mathrm{FNN}_{\bm{\theta}}(\cdot)$ is done by
$\displaystyle\hat{\bm{\theta}}=\mathop{\rm
argmin}_{\bm{\theta}}\Big{\\{}L_{\mathrm{export}}(\mathcal{D},\bm{\theta})\Big{\\}}$
(21)
where the loss function is defined as
$\displaystyle
L_{\mathrm{export}}(\mathcal{D},\bm{\theta})=\sum_{n=1}^{N}\Delta\left(\mathrm{Interface}^{\mathrm{FNN}}_{\bm{\phi}}(\mathbf{x}^{(n)},\mathcal{M}),\mathrm{FNN}_{\bm{\theta}}(\mathbf{x}^{(n)})\right)$
(22)
As the knowledge base and interfaces are fixed during the exporting process,
exporting knowledge to multiple neural networks is equivalent to exporting it
to single neural networks in parallel.
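Symmetrically, one export step for Eqs. (21)-(22) can be sketched as follows, with the CKB side frozen and only $\bm{\theta}$ trainable (the module names are again hypothetical stand-ins):

```python
import torch
import torch.nn.functional as F

def export_step(interface, memory, fnn, batch, optimizer):
    """One step of Eq. (22): train theta so that FNN_theta(x) imitates the
    frozen Interface_phi(x, M)."""
    with torch.no_grad():
        target = interface(batch, memory)    # the CKB side is fixed
    pred = fnn(batch)                        # trainable network
    loss = (1.0 - F.cosine_similarity(pred, target, dim=-1)).sum()
    optimizer.zero_grad()                    # this optimizer holds theta only
    loss.backward()
    optimizer.step()
    return loss.item()
```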
## 3 Experiments
### 3.1 Setting
We evaluated our approach on text classification on two public datasets:
1. 1.
Amazon positive-negative review dataset (Fu et al.,, 2018). The training set
contains 796,000 reviews. The test set contains 4,000 reviews. We split the
original test set into two parts: 2,000 reviews as the validation set and
2,000 reviews as the test set. On average, each review contains 19 words.
2. 2.
Yelp polarity review dataset (Zhang et al.,, 2015). The training set contains
560,000 reviews. The test set contains 38,000 reviews. We split the original
training set into two parts: 550,000 reviews as the training set and 10,000 as
the validation set. On average, each review contains 163 words.
Configuration | #Param. model ($\bm{\theta}$) | #Param. interface ($\bm{\phi}$) | #Param. CKB ($\mathcal{M}$) | Accuracy (Amazon) | Accuracy (Yelp)
---|---|---|---|---|---
RNN | 0.20M | - | - | 84.30 | 96.31
RNN $\mapsto$ CKB $\mapsto$ RNN | - | 1.05M | 81.92K | 84.40 | 96.32
CNN | 0.26M | - | - | 84.35 | 95.95
CNN $\mapsto$ CKB $\mapsto$ CNN | - | 0.79M | 81.92K | 84.15 | 95.67
ANN | 0.79M | - | - | 84.15 | 93.51
ANN $\mapsto$ CKB $\mapsto$ ANN | - | 1.31M | 81.92K | 84.15 | 93.61
BERT | 85.64M | - | - | 87.90 | 97.34
BERT $\mapsto$ CKB $\mapsto$ BERT | - | 19.09M | 1.70M | 87.60 | 96.69
GPT-2 | 85.64M | - | - | 87.80 | 97.62
GPT-2 $\mapsto$ CKB $\mapsto$ GPT-2 | - | 19.09M | 1.70M | 87.75 | 97.13
Table 1: Results of importing and exporting knowledge for single models. “RNN
$\mapsto$ CKB $\mapsto$ RNN” denotes first importing the knowledge encoded in
the RNN model that achieves an accuracy of 84.30 on the Amazon dataset into the
CKB, and then exporting the stored knowledge to another RNN model with the same
structure. Note that the number of parameters does not include the word
embedding layer and the classification layer.
We used the following five neural networks tailored for text classification in
our experiments:
1. 1.
RNN (Liu et al.,, 2016): recurrent neural network. We used a single gate
recurrent unit (GRU) (Cho et al.,, 2014) layer as the encoder. Its hidden size
is set to 256. We defined the interface for RNN at the local level: the input
of the interface consists of the $t$-th word $\mathbf{x}_{t}$ and the
$(t-1)$-th hidden state $\mathbf{h}_{t-1}$ and the output is the $t$-th hidden
state $\mathbf{h}_{t}$.
2. 2.
CNN (Kim,, 2014): convolution neural network. We followed Kim, (2014) to use
three filter windows (i.e., 3, 4, and 5) with 80 feature maps, respectively.
We defined its interface at the local level: the input of the interface is a
sequence of consecutive words $\mathbf{X}_{t:t+w}$ and the output is a hidden
state $\mathbf{h}_{t}$.
3. 3.
ANN (Vaswani et al.,, 2017): attention-based neural network. We used a single
Transformer encoder layer as the encoder. Its hidden size and intermediate
size are 256 and 1024, respectively. We defined the interface for ANN at the
global level: the input of the interface is the entire sequence $\mathbf{X}$
and the output is the sequence of hidden states $\mathbf{H}$.
4. 4.
BERT (Devlin et al.,, 2019): Bidirectional encoder representations from
Transformers. The BERT base model (https://huggingface.co/bert-base-uncased)
was fine-tuned on the two text classification datasets. We defined the
interface for BERT at the global level: the input of the interface is the
entire sequence $\mathbf{X}$ and the output is the final output of the BERT
encoder. Note that the BERT base model uses 12 attention layers. Our interface
directly predicts the output of the 12-th layer.
5. 5.
GPT-2 (Radford et al.,, 2019): a language model based on masked self-
attention. We fine-tuned GPT-2 (https://huggingface.co/gpt2) for text
classification and defined its interface similar to that of BERT.
6. 6.
ALBERT (Lan et al.,, 2020): a lite BERT (https://huggingface.co/albert-base-v2).
The interface for ALBERT is defined at the global level in a similar way.
We removed documents that contain fewer than 5 words and only retained the
first 512 tokens for documents that have more than 512 tokens. For RNN, CNN,
and ANN, we used BPE (Sennrich et al.,, 2016) with 32K operations to
preprocess the datasets. For other methods, we used their built-in tokenizers
to preprocess the datasets.
We used 10 low-level $\text{30}\times\text{256}$ matrices and one high-level
$\text{20}\times\text{256}$ matrix to build the CKB for small models (i.e.,
RNN, CNN, and ANN). For big models like BERT and GPT-2, the CKB contains 20
low-level $\text{40}\times\text{2,048}$ matrices and one high-level
$\text{20}\times\text{2,048}$ matrix. We used AdamW (Loshchilov and Hutter,,
2019) to optimize parameters of the CKB, interfaces, and neural networks.
Please refer to Appendix A for more details.
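For illustration only, such a memory could be instantiated as plain parameter tensors. The shapes below come from the text; the initialization scale and the use of `ParameterList` are our assumptions:

```python
import torch

# CKB for the small models: 10 low-level 30x256 matrices and one
# high-level 20x256 matrix, trained by AdamW together with the interfaces.
low_level = [torch.nn.Parameter(0.02 * torch.randn(30, 256)) for _ in range(10)]
high_level = torch.nn.Parameter(0.02 * torch.randn(20, 256))
memory = torch.nn.ParameterList(low_level + [high_level])
```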
Method | Configuration | Accuracy
---|---|---
Single | RNN | 84.30
Single | CNN | 84.35
Single | ANN | 84.15
Ours | RNN $\mapsto$ CKB $\mapsto$ CNN | 83.35
Ours | RNN $\mapsto$ CKB $\mapsto$ ANN | 83.10
Ours | CNN $\mapsto$ CKB $\mapsto$ RNN | 82.70
Ours | CNN $\mapsto$ CKB $\mapsto$ ANN | 83.85
Ours | ANN $\mapsto$ CKB $\mapsto$ RNN | 82.05
Ours | ANN $\mapsto$ CKB $\mapsto$ CNN | 83.10
Table 2: Results of knowledge transferring between different models on the
Amazon dataset.
Method | Configuration | Accuracy
---|---|---
Single | RNN | 84.30
Single | CNN | 84.35
Single | ANN | 84.15
Single | BERT | 87.90
Single | GPT-2 | 87.80
Ensemble | RNN & CNN | 85.25
Ensemble | CNN & ANN | 85.40
Ensemble | ANN & RNN | 85.00
Ensemble | RNN & CNN & ANN | 85.35
Ensemble | BERT & GPT-2 | 88.45
Ours | $\\{$RNN, CNN$\\}\mapsto$ CKB $\mapsto$ RNN | 85.05
Ours | $\\{$RNN, CNN$\\}\mapsto$ CKB $\mapsto$ CNN | 84.05
Ours | $\\{$CNN, ANN$\\}\mapsto$ CKB $\mapsto$ CNN | 84.25
Ours | $\\{$CNN, ANN$\\}\mapsto$ CKB $\mapsto$ ANN | 84.30
Ours | $\\{$ANN, RNN$\\}\mapsto$ CKB $\mapsto$ ANN | 84.35
Ours | $\\{$ANN, RNN$\\}\mapsto$ CKB $\mapsto$ RNN | 85.20
Ours | $\\{$RNN, CNN, ANN$\\}\mapsto$ CKB $\mapsto$ RNN | 84.95
Ours | $\\{$RNN, CNN, ANN$\\}\mapsto$ CKB $\mapsto$ CNN | 83.95
Ours | $\\{$RNN, CNN, ANN$\\}\mapsto$ CKB $\mapsto$ ANN | 84.10
Ours | $\\{$BERT, GPT-2$\\}\mapsto$ CKB $\mapsto$ BERT | 88.20
Ours | $\\{$BERT, GPT-2$\\}\mapsto$ CKB $\mapsto$ GPT-2 | 88.35
Table 3: Results of importing and exporting knowledge for multiple models on
the Amazon dataset.
### 3.2 Importing and Exporting Knowledge for Single Neural Networks
Table 1 shows the results of importing and exporting knowledge for single
neural networks. This experiment aims to verify whether CKB is able to import
and export continuous knowledge. We find that our approach is capable of
retaining the expressive power of the original neural network across a variety
of architectures.
Table 1 also lists the numbers of model, interface, and CKB parameters. The
interface for CNN has the fewest parameters (i.e., 0.79M) because it only
takes a substring of the input sequence as input. As the interface for RNN
takes both the current token and the last hidden state as input, it has more
parameters (i.e., 1.05M) than that of CNN. The interface for ANN has more
parameters than that of RNN because it takes the entire sequence as input and
contains an additional feed-forward layer. Since the interfaces for BERT and
GPT-2 directly imitate the output of the final layer (i.e., the 12-th layer),
they have much fewer parameters than the original models.
Table 2 shows the results of importing and exporting knowledge for different
single models. We import the knowledge from one model to the CKB and then
export the knowledge to a different model, which is similar to the setting of
zero-shot learning. We find that the knowledge stored in RNN, CNN, and ANN can
be transferred to each other with only small performance degradation.
### 3.3 Importing and Exporting Knowledge for Multiple Neural Networks
Table 3 shows the results of importing and exporting knowledge for multiple
models on the Amazon dataset. We find that first importing multiple models to
the CKB and then exporting the fused knowledge to a single model can result in
improved accuracy for the single model. For example, the single model RNN
obtains an accuracy of 84.30% while “{RNN, CNN} $\mapsto$ CKB $\mapsto$ RNN”
achieves 85.05%, suggesting that the knowledge stored in the CKB helps to
improve the RNN. Similar results were also observed for larger models such as
BERT and GPT-2. However, we also found that not all single models to which the
knowledge is exported obtain higher accuracies. For example, “{RNN, CNN}
$\mapsto$ CKB $\mapsto$ CNN” obtains a lower accuracy than the original CNN. As
a result, how to ensure all participating models benefit from the integration
still needs further exploration.
Our approach significantly differs from model ensemble in two aspects. First,
while model ensemble has to maintain all participating models during
inference, the CKB can export its knowledge to a single model. Second, model
ensemble requires that all participating models be trained for the same task,
while the CKB can in principle take advantage of models trained for different tasks.
### 3.4 Effect of the Capacity of Continuous Knowledge Base
Configuration | #Param. | Accuracy
---|---|---
BERT | - | 87.90
BERT $\mapsto$ CKB $\mapsto$ BERT | 81.92K | 86.55
BERT $\mapsto$ CKB $\mapsto$ BERT | 1.70M | 87.60
BERT $\mapsto$ CKB $\mapsto$ BERT | 3.11M | 87.65
Table 4: Effect of the capacity of the continuous knowledge base on the Amazon
dataset.
Table 4 shows the effect of the capacity of the CKB on classification accuracy.
We find that the accuracy generally rises as the number of CKB parameters
(i.e., the size of $\mathcal{M}$) increases, and that the benefit becomes
modest for larger knowledge bases.
### 3.5 Knowledge Distillation via Continuous Knowledge Base
Method | Model | Hidden Size | Accuracy
---|---|---|---
Base | RNN | 256 | 84.30
Base | RNN | 64 | 83.30
KD | RNN | 64 | 83.00
Ours | RNN | 64 | 83.00
Table 5: Results of mimicking knowledge distillation (KD) by CKB on the Amazon
dataset.
We can use the CKB to realize the goal of knowledge distillation (KD). As our
CKB can export the stored knowledge to a blank model, KD can be done by
importing the knowledge from the teacher model to the CKB and then exporting
the knowledge to the student model. In our experiments, we used a big RNN with
a hidden size of 256 as the teacher model and a small RNN with a hidden size of
64 as the student model. As shown in Table 5, KD with our CKB achieved the same
performance as the standard KD method. The performances of the two KD methods
are slightly worse than that of the small model trained directly on labeled
data. One possible reason is that the teacher model's outputs contain noise,
which affects the performance of the student model.
### 3.6 Transfer Learning via Continuous Knowledge Base
Method | Model | Setting | Accuracy
---|---|---|---
PT & FT | ALBERT | Init $+$ FT | 79.20
PT & FT | ALBERT | PT $+$ FT | 84.50
Ours | CKB | Init $+$ FT | 82.10
Ours | CKB | TL $+$ FT | 84.10
Table 6: Results of Transfer Learning by CKB on the Amazon dataset. “PT”,
“FT”, and “TL” denote pre-training, fine-tuning, and transfer learning,
respectively. “Init” means model initialization.
It is easy to imitate transfer learning based on our CKB. In our experiments,
we used the CKB to mimic the transfer learning where a pre-trained language
model (i.e., ALBERT) is fine-tuned for text classification. As shown in Table
6, the ALBERT model trained from scratch on the Amazon dataset obtained an
accuracy of 79.20 on the test set, while the pre-trained ALBERT obtained 84.50
after fine-tuning. Analogously, the CKB-based model trained from scratch with a
randomly initialized CKB on the Amazon dataset achieved an accuracy of 82.10 on
the test set, whereas the CKB-based model that imported the knowledge from the
pre-trained ALBERT performed much better (84.10).
Note that in the knowledge transfer phase, we only used the unlabeled text
data from the Amazon dataset to import the knowledge from the pre-trained
ALBERT to the CKB.
## 4 Related Work
Our work draws inspiration from two lines of research: memory networks and
knowledge bases from pre-trained models.
### 4.1 Memory Networks
Memory networks (MNs) were first proposed by Weston et al., (2015). MNs reason
with inference components combined with a long-term memory, which acts as a
dynamic knowledge base storing knowledge from the input.
Sukhbaatar et al., (2015) extend MNs to the end-to-end paradigm by introducing
the attention mechanism (Bahdanau et al.,, 2015) to estimate the relevance of
each item in the memory. Kumar et al., (2016) propose dynamic memory networks
(DMNs) that use episodic memories to help generate better answers to given
questions. The episodic memory in DMNs can be updated dynamically according to
the input. Nematzadeh et al., (2020) also argue that the separation of
computation and storage is necessary and discuss the advantage of improving
memory in AI systems. Different from the existing MNs which store the input-
related knowledge, the proposed CKB is a global knowledge base that aims to
store the knowledge from different competent neural network models.
### 4.2 Knowledge Bases from Pre-trained Models
Recently, a number of works have studied what pre-trained language models
learn (Petroni et al.,, 2019; Bouraoui et al.,, 2020; Rogers
et al.,, 2020; Wang et al.,, 2020). Petroni et al., (2019) convert the fact
(i.e., subject-relation-object triple) into the cloze statement to test the
factual and commonsense knowledge in the pre-trained language model. By
transforming relational triples into masked sentences, Feldman et al., (2019)
propose to mine commonsense knowledge from pre-trained models. Bouraoui et
al., (2020) fine-tune the pre-trained BERT (Devlin et al.,, 2019) to predict
whether a given word pair is likely to be an instance of some relations. Wang
et al., (2020) state that pre-trained language models would be open knowledge
graphs and propose an unsupervised method to build knowledge graphs. Rombach
and Esser, (2020) propose a conditional invertible neural network to translate
between fixed representations from different off-the-shelf models. These
methods show that neural network models contain knowledge, while our CKB
investigates how to store and use this uninterpretable knowledge.
## 5 Conclusion and Future Work
We propose to build a universal continuous knowledge base (CKB) in this work.
Different from conventional knowledge bases using discrete symbols to
represent information, the proposed CKB stores the knowledge in multi-level
real-valued matrices. Based on the formalization where a neural network model
is a parameterized composite function that maps the input to the output, our
CKB imports the knowledge from the neural network model by learning the
mapping between the input and the output with the model-dependent interface.
Experiments on text classification show that continuous knowledge can be
imported and exported between neural networks and the CKB. Our CKB can also
mimic knowledge distillation and transfer learning in a novel paradigm. In the
future, we will extend the CKB to cross-task scenarios.
Our work has only touched the surface of building a universal continuous
knowledge base. There are a number of interesting directions awaiting further
exploration: a more sophisticated design of memory hierarchy, integrating
neural networks trained for different AI tasks, continual learning that can
import multiple neural networks asynchronously, visualization and
interpretation of the internal workings, and importing and exporting knowledge
between discrete and continuous KBs.
## Acknowledgements
We would like to thank Qun Liu, Xin Jiang, and Meng Zhang for their
constructive comments on this work. This work was supported by the National
Key R&D Program of China (No. 2017YFB0202204), National Natural Science
Foundation of China (No. 61925601, No. 61761166008) and Huawei Noah’s Ark Lab.
## References
* Bahdanau et al., (2015) Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015.
* Belinkov et al., (2017) Belinkov, Y., Durrani, N., Dalvi, F., Sajjad, H., and Glass, J. (2017). What do neural machine translation models learn about morphology? In Proceedings of ACL 2017.
* Blevins et al., (2018) Blevins, T., Levy, O., and Zettlemoyer, L. (2018). Deep rnns encode soft hierarchical syntax. In Proceedings of ACL 2018.
* Bollacker et al., (2008) Bollacker, K., Evans, C., Paritosh, P., Sturge, T., and Taylor, J. (2008). Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of SIGMOD 2008.
* Bouraoui et al., (2020) Bouraoui, Z., Camacho-Collados, J., and Schockaert, S. (2020). Inducing relational knowledge from BERT. In Proceedings of AAAI 2020.
* Brown et al., (2020) Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. (2020). Language models are few-shot learners. In Proceedings of NeurIPS 2020.
* Cho et al., (2014) Cho, K., van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of EMNLP 2014.
* Dahl et al., (2011) Dahl, G. E., Yu, D., Deng, L., and Acero, A. (2011). Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing.
* Devlin et al., (2019) Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019.
* Ernst et al., (2015) Ernst, P., Siu, A., and Weikum, G. (2015). Knowlife: A versatile approach for constructing a large knowledge graph for biomedical sciences. BMC Bioinformatics.
* Feldman et al., (2019) Feldman, J., Davison, J., and Rush, A. (2019). Commonsense knowledge mining from pretrained models. In Proceedings of EMNLP-IJCNLP 2019.
* Fu et al., (2018) Fu, Z., Tan, X., Peng, N., Zhao, D., and Yan, R. (2018). Style transfer in text: Exploration and evaluation. In Proceedings of AAAI 2018.
* Hayes-Roth et al., (1983) Hayes-Roth, F., Waterman, D., and Lenat, D. (1983). Building Expert Systems. Addison-Wesley.
* Hennessy and Patterson, (2011) Hennessy, J. L. and Patterson, D. A. (2011). Computer Architecture: A Quantitative Approach. Morgan Kaufmann.
* Hewitt and Manning, (2019) Hewitt, J. and Manning, C. (2019). A structural probe for finding syntax in word representations. In Proceedings of NAACL-HLT 2019.
* Hinton and Salakhutdinov, (2006) Hinton, G. and Salakhutdinov, R. (2006). Reducing the dimensionality of data with neural networks. Science.
* Jiao et al., (2020) Jiao, X., Yin, Y., Shang, L., Jiang, X., Chen, X., Li, L., Wang, F., and Liu, Q. (2020). TinyBERT: Distilling bert for natural language understanding. In Proceedings of EMNLP 2020: Findings.
* Kim, (2014) Kim, Y. (2014). Convolutional neural networks for sentence classification. In Proceedings of EMNLP 2014.
* Krizhevsky et al., (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). Imagenet classification with deep convolutional neural networks. In Proceedings of NeurIPS 2012.
* Kumar et al., (2016) Kumar, A., Irsoy, O., Ondruska, P., Iyyer, M., Bradbury, J., Gulrajani, I., Zhong, V., Paulus, R., and Socher, R. (2016). Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of ICML 2016.
* Lan et al., (2020) Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2020). ALBERT: A lite bert for self-supervised learning of language representations. In Proceedings of ICLR 2020.
* Lenat and Guha, (1990) Lenat, D. B. and Guha, R. V. (1990). Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project. Addison-Wesley.
* Linzen et al., (2016) Linzen, T., Dupoux, E., and Goldberg, Y. (2016). Assessing the ability of lstms to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics.
* Liu et al., (2016) Liu, P., Qiu, X., and Huang, X. (2016). Recurrent neural network for text classification with multi-task learning. In Proceedings of IJCAI 2016.
* Loshchilov and Hutter, (2019) Loshchilov, I. and Hutter, F. (2019). Decoupled weight decay regularization. In Proceedings of ICLR 2019.
* Marino et al., (2017) Marino, K., Salakhutdinov, R., and Gupta, A. (2017). The more you know: Using knowledge graphs for image classification. In Proceedings of CVPR 2017.
* Nematzadeh et al., (2020) Nematzadeh, A., Ruder, S., and Yogatama, D. (2020). On memory in human and artificial language processing systems. In Proceedings of ICLR 2020 Workshop.
* Petroni et al., (2019) Petroni, F., Rocktäschel, T., Riedel, S., Lewis, P., Bakhtin, A., Wu, Y., and Miller, A. (2019). Language models as knowledge bases? In Proceedings of EMNLP-IJCNLP 2019.
* Radford et al., (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog.
* Richardson and Domingos, (2003) Richardson, M. and Domingos, P. (2003). Building large knowledge bases by mass collaboration. In Proceedings of K-CAP 2003.
* Rogers et al., (2020) Rogers, A., Kovaleva, O., and Rumshisky, A. (2020). A primer in bertology: What we know about how bert works. Transactions of the Association for Computational Linguistics.
* Rombach and Esser, (2020) Rombach, R. and Esser, P. (2020). Network-to-network translation with conditional invertible neural networks. In Proceedings of NeurIPS 2020.
* Sennrich et al., (2016) Sennrich, R., Haddow, B., and Birch, A. (2016). Neural machine translation of rare words with subword units. In Proceedings of ACL 2016.
* Sukhbaatar et al., (2015) Sukhbaatar, S., Weston, J., Fergus, R., et al. (2015). End-to-end memory networks. In Proceedings of NeurIPS 2015.
* Vaswani et al., (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. In Proceedings of NeurIPS 2017.
* Wang et al., (2020) Wang, C., Liu, X., and Song, D. (2020). Language models are open knowledge graphs. arXiv preprint arXiv:2010.11967.
* Weston et al., (2015) Weston, J., Chopra, S., and Bordes, A. (2015). Memory networks. In Proceedings of ICLR 2015.
* Zhang et al., (2015) Zhang, X., Zhao, J., and LeCun, Y. (2015). Character-level convolutional networks for text classification. In Proceedings of NeurIPS 2015.
* Zhang et al., (2018) Zhang, Y., Dai, H., Kozareva, Z., Smola, A. J., and Song, L. (2018). Variational reasoning for question answering with knowledge graph. In Proceedings of AAAI 2018.
## Appendix A Experimental Details
### A.1 Hyper-parameter Values
When importing and exporting continuous knowledge between neural networks and
the knowledge base, each mini-batch contains 8,192 tokens for the Amazon
dataset and 24,576 tokens for the Yelp dataset, respectively. We used the
AdamW optimizer (Loshchilov and Hutter,, 2019) with $\beta_{1}=0.9$,
$\beta_{2}=0.98$, $\epsilon=10^{-9}$, and L2 weight decay of $0.01$ to
optimize parameters. For knowledge importing, we set the learning rate to
2e-4. For knowledge exporting, the learning rate was set to 1e-4.
### A.2 Model Selection
For importing and exporting knowledge for single neural networks, model
selection is done by choosing the checkpoint with the highest accuracy on the
validation set. When synchronously importing knowledge from multiple neural
networks to the continuous knowledge base (CKB), a problem is that these
models might not achieve the highest performance on the validation set at the
same time. To address this problem, we select the checkpoint as follows:
$\displaystyle\hat{c}=\mathop{\rm
argmax}_{c}\bigg{\\{}\\!\min_{i\in[1,N]}\\!\Big{\\{}\mathrm{acc}(c,\mathrm{NN}_{i})\Big{\\}}\bigg{\\}}$
(23)
where $N$ is the number of neural networks, $\mathrm{NN}_{i}$ is the $i$-th
neural network, $c$ is a checkpoint, and $\mathrm{acc(\cdot)}$ is a function
that calculates accuracy on the validation set.
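A minimal sketch of this max-min selection rule, assuming the per-checkpoint validation accuracies have already been computed (the data layout below is hypothetical):

```python
# acc[c][i]: validation accuracy of the i-th neural network at checkpoint c.
def select_checkpoint(acc):
    """Pick the checkpoint whose worst per-network accuracy is highest."""
    return max(acc, key=lambda c: min(acc[c]))

# Example: "ckpt2" wins because its minimum accuracy (84.0) is the largest.
acc = {"ckpt1": [85.0, 82.0], "ckpt2": [84.5, 84.0]}
print(select_checkpoint(acc))  # -> ckpt2
```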
### A.3 Runtime Environment
We conducted all experiments on a server with the following environment:
* •
Operating System: Ubuntu 18.04.2 LTS
* •
CPU: AMD EPYC 7302 16-Core Processor
* •
GPU: GeForce RTX 3090
# Probabilistic degenerate Dowling polynomials associated with random
variables
Taekyun Kim Department of Mathematics, Kwangwoon University, Seoul 139-701,
Republic of Korea<EMAIL_ADDRESS>and Dae San Kim Department of Mathematics,
Sogang University, Seoul 121-742, Republic of Korea<EMAIL_ADDRESS>
###### Abstract.
The aim of this paper is to study probabilistic versions of the degenerate
Whitney numbers of the second kind and those of the degenerate Dowling
polynomials, namely the probabilistic degenerate Whitney numbers of the second
kind associated with $Y$ and the probabilistic degenerate Dowling polynomials
associated with $Y$. Here Y is a random variable whose moment generating
function exists in some neighborhood of the origin. We derive some properties,
explicit expressions, certain identities, recurrence relations and generating
functions for those numbers and polynomials. In addition, we investigate their
generalizations, namely the probabilistic degenerate $r$-Whitney numbers of
the second kind associated with $Y$ and the probabilistic degenerate
$r$-Dowling polynomials associated with $Y$, and get similar results to the
aforementioned numbers and polynomials.
###### Key words and phrases:
probabilistic degenerate Whitney numbers of the second kind associated with
$Y$; probabilistic degenerate Dowling polynomials associated with $Y$;
probabilistic degenerate $r$-Whitney numbers of the second kind associated
with $Y$; probabilistic degenerate $r$-Dowling polynomials associated with $Y$
###### 2010 Mathematics Subject Classification:
11B73; 11B83
## 1\. Introduction
Carlitz initiated a study of degenerate versions of special polynomials and
numbers in his work on the degenerate Bernoulli and degenerate Euler
polynomials and numbers (see [7]). It is remarkable that various degenerate
versions of many special numbers and polynomials have been explored recently
by some mathematicians not only with their number-theoretic or combinatorial
interests but also with their applications to other areas, including
probability, quantum mechanics and differential equations. In the course of
this quest, many different tools are employed, like generating functions,
combinatorial methods, $p$-adic analysis, umbral calculus, operator theory,
differential equations, special functions, probability theory and analytic
number theory (see [8,10,12-15,17,18] and the references therein).
Assume that $Y$ is a random variable such that the moment generating function
of $Y$
(1)
$E[e^{tY}]=\sum_{n=0}^{\infty}E[Y^{n}]\frac{t^{n}}{n!},\,\,(|t|<r)\,\,\,\mathrm{exists\,\,for\,\,some}\,\,r>0,$
where $E$ stands for the mathematical expectation (see [3,4,16,24]). The aim
of this paper is to study probabilistic versions of the degenerate Whitney
numbers of the second kind and those of the degenerate Dowling polynomials,
namely the probabilistic degenerate Whitney numbers of the second kind
associated with $Y$, $W_{m,\lambda}^{Y}(n,k)$ and the probabilistic degenerate
Dowling polynomials associated with $Y$, $D_{m,\lambda}^{Y}(n,x)$. In
addition, we investigate their generalizations, namely the probabilistic
degenerate $r$-Whitney numbers of the second kind associated with $Y$,
$W_{m,\lambda}^{(Y,r)}(n,k)$ and the probabilistic degenerate $r$-Dowling
polynomials associated with $Y$, $D_{m,\lambda}^{(Y,r)}(n,x)$. Then we derive
some properties, explicit expressions, certain identities, recurrence
relations and generating functions for those numbers and polynomials.
The outline of this paper is as follows. In Section 1, we recall the Whitney
numbers of the second kind, the $r$-Whitney numbers of the second kind, the
Dowling polynomials and the $r$-Dowling polynomials. We remind the reader of
the degenerate exponentials and the degenerate Stirling numbers of the second kind.
We recall the degenerate Whitney numbers of the second kind, the degenerate
$r$-Whitney numbers of the second kind, the degenerate Dowling polynomials and
the degenerate $r$-Dowling polynomials. We remind the reader of the partial
Bell polynomials and the complete Bell polynomials. Then, for any random
variable $Y$ satisfying the moment condition (see (1)), we recall the
definition of the probabilistic degenerate Stirling numbers of the second kind
associated with $Y$. Section 2 is the main result of this paper. Assume that
$Y$ is a random variable satisfying the moment condition in (1). Let
$(Y_{j})_{j\geq 1}$ be a sequence of mutually independent copies of the random
variable $Y$, and let $S_{k}=Y_{1}+\cdots+Y_{k},\ (k\geq 1)$, with $S_{0}=0$,
(see [1,3,4,16,25,26]). We define $W_{m,\lambda}^{Y}(n,k)$. Then three
different expressions of those numbers are obtained in terms of expectations
of various random variables in Theorems 2.1-2.3. Another expression is derived
in terms of the partial Bell polynomials in Theorem 2.15. In Theorem 2.4, we
derive a finite sum identity involving those numbers and
$E\Big{[}(mS_{k}+1)_{n,\lambda}\Big{]}$. Then we define
$D_{m,\lambda}^{Y}(n,x)$. We obtain the generating function of
$D_{m,\lambda}^{Y}(n,x)$ in Theorem 2.5 and two explicit expressions of those
polynomials in Theorems 2.6 and 2.7. We derive a recurrence relation for
$D_{m,\lambda}^{Y}(n,x)$ in Theorem 2.8. In Theorem 2.9, we get an identity
for $D_{m,\lambda}^{Y}(n,x)$, which shows that those polynomials do not
satisfy the binomial identity. We deduce a finite sum identity involving
$D_{m,\lambda}^{Y}(n,x)$ and the partial Bell polynomials in Theorem 2.11.
Moreover, higher order derivatives of $D_{m,\lambda}^{Y}(n,x)$ are obtained in
Theorem 2.14. Next, we define $W_{m,\lambda}^{(Y,r)}(n,k)$. We get an explicit
expression for those polynomials in Theorem 2.10. In Theorems 2.12 and 2.13,
we derive identities involving those polynomials and the partial Bell
polynomials. Then we define $D_{m,\lambda}^{(Y,r)}(n,x)$. Finally, we obtain
the generating function and an explicit expression, respectively in Theorem
2.16 and Theorem 2.17. In the rest of this section, we recall the facts that
are needed throughout this paper.
A finite lattice $L$ is geometric if it is a finite semimodular lattice which
is also atomic. Dowling constructed an important finite geometric lattice
$Q_{n}(G)$ out of a finite set of $n$ elements and a finite group $G$ of order
$m$, called Dowling lattice of rank $n$ over a finite group of order $m$ (see
[11,13,15]). If $L$ is the Dowling lattice $Q_{n}(G)$ of rank $n$ over a
finite group $G$ of order $m$, then the Whitney numbers of the second kind
$W_{Q_{n}(G)}(n,k)$ are denoted by $W_{m}(n,k)$ and satisfy the following
relation
(2)
$x^{n}=\sum_{k=0}^{n}W_{m}(n,k)m^{k}\bigg{(}\frac{x-1}{m}\bigg{)}_{k},\quad(n\geq
0),\quad(\mathrm{see}\ [11,13,15]),$
where $(x)_{0}=1,\ (x)_{n}=x(x-1)\cdots(x-n+1),\ (n\geq 1)$.
Replacing $x$ by $mx+1$ in (2), we have
(3) $(mx+1)^{n}=\sum_{k=0}^{n}W_{m}(n,k)m^{k}(x)_{k},\quad(n\geq
0),\quad(\mathrm{see}\ [11,13,15,18]).$
Let $r$ be a nonnegative integer. Then, as generalizations of $W_{m}(n,k)$,
the $r$-Whitney numbers of the second kind are defined by
(4) $(mx+r)^{n}=\sum_{k=0}^{n}W_{m}^{(r)}(n,k)m^{k}(x)_{k},\quad(n\geq
0),\quad(\mathrm{see}\ [13]).$
For $n\geq 0$, the Dowling polynomials $D_{m}(n,x)$ and the $r$-Dowling
polynomials $D_{m}^{(r)}(n,x)$ are respectively given by
(5) $D_{m}(n,x)=\sum_{k=0}^{n}W_{m}(n,k)x^{k},\quad(n\geq 0),$
and
$D_{m}^{(r)}(n,x)=\sum_{k=0}^{n}W_{m}^{(r)}(n,k)x^{k},\quad(n\geq
0),\quad(\mathrm{see}\ [15]).$
It is well known that the Stirling numbers of the second kind ${n\brace k}$
are defined by
(6) $x^{n}=\sum_{k=0}^{n}{n\brace k}(x)_{k},\quad(n\geq 0),\quad(\mathrm{see}\
[1-25]).$
For any nonzero $\lambda\in\mathbb{R}$, the degenerate exponentials are
defined by
(7) $e_{\lambda}^{x}(t)=(1+\lambda
t)^{\frac{x}{\lambda}}=\sum_{k=0}^{\infty}(x)_{k,\lambda}\frac{t^{k}}{k!},\quad(\mathrm{see}\
[10,12,14]),$
where
(8)
$(x)_{0,\lambda}=1,\quad(x)_{n,\lambda}=x(x-\lambda)(x-2\lambda)\cdots(x-(n-1)\lambda),\quad(n\geq
1).$
Note that $\lim_{\lambda\rightarrow 0}e_{\lambda}^{x}(t)=e^{xt}$.
In [10], the degenerate Stirling numbers of the second kind are given by
(9) $(x)_{n,\lambda}=\sum_{k=0}^{n}{n\brace k}_{\lambda}(x)_{k},\quad(n\geq
0).$
Note that $\displaystyle\lim_{\lambda\rightarrow 0}{n\brace
k}_{\lambda}={n\brace k}$.
Recently, the degenerate Whitney numbers of the second kind
$W_{m,\lambda}(n,k)$ are defined by
(10)
$(mx+1)_{n,\lambda}=\sum_{k=0}^{n}W_{m,\lambda}(n,k)m^{k}(x)_{k},\quad(n\geq
0),\quad(\mathrm{see}\ [13]).$
Note that $\lim_{\lambda\rightarrow 0}W_{m,\lambda}(n,k)=W_{m}(n,k)$.
In view of (5), the degenerate Dowling polynomials are defined by
(11) $D_{m,\lambda}(n,x)=\sum_{k=0}^{n}W_{m,\lambda}(n,k)x^{k},\quad(n\geq
0),\quad(\mathrm{see}\ [13]).$
From (10) and (11), we can derive the following equations.
(12)
$e_{\lambda}(t)\frac{1}{k!}\bigg{(}\frac{e_{\lambda}^{m}(t)-1}{m}\bigg{)}^{k}=\sum_{n=k}^{\infty}W_{m,\lambda}(n,k)\frac{t^{n}}{n!},$
and
(13)
$e_{\lambda}(t)e^{x\big{(}\frac{e_{\lambda}^{m}(t)-1}{m}\big{)}}=\sum_{n=0}^{\infty}D_{m,\lambda}(n,x)\frac{t^{n}}{n!},\quad(\mathrm{see}\
[13]).$
For $r\geq 0$, the degenerate $r$-Whitney numbers of the second kind are given
by
(14)
$(mx+r)_{n,\lambda}=\sum_{k=0}^{n}W_{m,\lambda}^{(r)}(n,k)m^{k}(x)_{k},\quad(n\geq
0),\quad(\mathrm{see}\ [13]).$
From (14), we note that
(15)
$e_{\lambda}^{r}(t)\frac{1}{k!}\bigg{(}\frac{e_{\lambda}^{m}(t)-1}{m}\bigg{)}^{k}=\sum_{n=k}^{\infty}W_{m,\lambda}^{(r)}(n,k)\frac{t^{n}}{n!},\quad(\mathrm{see}\
[13]).$
The degenerate $r$-Dowling polynomials are defined by
(16)
$e_{\lambda}^{r}(t)e^{x(\frac{e_{\lambda}^{m}(t)-1}{m})}=\sum_{n=0}^{\infty}D_{m,\lambda}^{(r)}(n,x)\frac{t^{n}}{n!},\quad(\mathrm{see}\
[15]).$
By (15) and (16), we get
(17)
$D_{m,\lambda}^{(r)}(n,x)=\sum_{k=0}^{n}W_{m,\lambda}^{(r)}(n,k)x^{k},\quad(n\geq
0),\quad(\mathrm{see}\ [11,15]).$
For any integer $k\geq 0$, the partial Bell polynomials are given by
(18)
$\frac{1}{k!}\bigg{(}\sum_{m=1}^{\infty}x_{m}\frac{t^{m}}{m!}\bigg{)}^{k}=\sum_{n=k}^{\infty}B_{n,k}(x_{1},x_{2},\dots,x_{n-k+1})\frac{t^{n}}{n!},\quad(\mathrm{see}\
[7,12]),$ (19) $\displaystyle B_{n,k}(x_{1},x_{2},\dots,x_{n-k+1})$
$\displaystyle=\sum_{\begin{subarray}{c}l_{1}+l_{2}+\cdots+l_{n-k+1}=k\\\
l_{1}+2l_{2}+\cdots+(n-k+1)l_{n-k+1}=n\end{subarray}}\frac{n!}{l_{1}!l_{2}!\cdots
l_{n-k+1}!}\bigg{(}\frac{x_{1}}{1!}\bigg{)}^{l_{1}}\bigg{(}\frac{x_{2}}{2!}\bigg{)}^{l_{2}}\cdots\bigg{(}\frac{x_{n-k+1}}{(n-k+1)!}\bigg{)}^{l_{n-k+1}}.$
The complete Bell polynomials are defined by
(20)
$\exp\bigg{(}\sum_{i=1}^{\infty}x_{i}\frac{t^{i}}{i!}\bigg{)}=\sum_{n=0}^{\infty}B_{n}(x_{1},x_{2},\dots,x_{n})\frac{t^{n}}{n!},\quad(\mathrm{see}\
[12]).$
Thus, by (19) and (20), we get
(21)
$B_{n}(x_{1},x_{2},\dots,x_{n})=\sum_{k=0}^{n}B_{n,k}(x_{1},x_{2},\dots,x_{n-k+1}),\quad(n\geq
0).$
Recently, for any random variable $Y$ satisfying (1), the probabilistic
degenerate Stirling numbers of the second kind associated with $Y$ are
introduced by
(22)
$\frac{1}{k!}\big{(}E[e_{\lambda}^{Y}(t)]-1\big{)}^{k}=\sum_{n=k}^{\infty}{n\brace
k}_{Y,\lambda}\frac{t^{n}}{n!},\quad(k\geq 0),\quad(\mathrm{see}\ [16]).$
Note that ${n\brace k}_{Y,\lambda}={n\brace k}_{\lambda}$, when $Y=1$.
## 2\. Probabilistic degenerate Dowling polynomials associated with random
variables
Throughout this section, we assume that $Y$ is a random variable such that the
moment generating function of $Y$
$E[e^{tY}]=\sum_{n=0}^{\infty}E[Y^{n}]\frac{t^{n}}{n!},\,\,(|t|<r)\,\,\,\mathrm{exists\,\,for\,\,some}\,\,r>0,\quad(\mathrm{see}\
[1,3,4,16,21,24,25,26]).$
We let $(Y_{j})_{j\geq 1}$ be a sequence of mutually independent copies of the
random variable $Y$, and let
$S_{0}=0,\quad
S_{k}=Y_{1}+Y_{2}+\cdots+Y_{k},\quad(k\in\mathbb{N}),\quad(\mathrm{see}\
[1,3,4,16,25,26]).$
In view of (12), we define the probabilistic degenerate Whitney numbers of the
second kind associated with $Y$ by
(23)
$\frac{1}{k!}\bigg{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\bigg{)}^{k}e_{\lambda}(t)=\sum_{n=k}^{\infty}W_{m,\lambda}^{Y}(n,k)\frac{t^{n}}{n!},\quad(k\geq
0).$
When $Y=1$, we have $W_{m,\lambda}^{Y}(n,k)=W_{m,\lambda}(n,k)$, (see (12)).
From (23), we note that
(24) $\displaystyle\sum_{n=k}^{\infty}W_{m,\lambda}^{Y}(n,k)\frac{t^{n}}{n!}$
$\displaystyle=\frac{1}{m^{k}k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\Big{(}E\big{[}e_{\lambda}^{mY}(t)\big{]}\Big{)}^{j}e_{\lambda}(t)$
$\displaystyle=\frac{1}{m^{k}k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\big{[}e_{\lambda}^{mY_{1}}(t)\big{]}\cdots
E\big{[}e_{\lambda}^{mY_{j}}(t)\big{]}e_{\lambda}(t)$
$\displaystyle=\frac{1}{m^{k}k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\Big{[}e_{\lambda}^{m(Y_{1}+\cdots+Y_{j})+1}(t)\Big{]}$
$\displaystyle=\sum_{n=0}^{\infty}\frac{1}{m^{k}k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\Big{[}(mS_{j}+1)_{n,\lambda}\Big{]}\frac{t^{n}}{n!}.$
Therefore, by comparing the coefficients on both sides of (24), we obtain the
following theorem.
###### Theorem 2.1.
For $n,k$ with $n\geq k\geq 0$, we have
$W_{m,\lambda}^{Y}(n,k)=\frac{1}{m^{k}k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\big{[}(mS_{j}+1)_{n,\lambda}\big{]}.$
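For instance (a quick check added here for illustration), taking $n=k=1$ and using $(x)_{1,\lambda}=x$ gives
$\displaystyle W_{m,\lambda}^{Y}(1,1)=\frac{1}{m}\Big{(}E\big{[}(mY_{1}+1)_{1,\lambda}\big{]}-E\big{[}(1)_{1,\lambda}\big{]}\Big{)}=\frac{1}{m}\big{(}mE[Y]+1-1\big{)}=E[Y].$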
Now, we define the operator $\triangle_{my}$ by
(25) $\displaystyle\triangle_{my}f(x)=f(x+my)-f(x),$
$\displaystyle\triangle_{my_{1},my_{2},\dots,my_{k}}f(x)=\triangle_{my_{1}}\circ\triangle_{my_{2}}\circ\cdots\circ\triangle_{my_{k}}f(x),\quad(\mathrm{see}\
[4-16]).$
From (25) and the identity
$\triangle_{my}e_{\lambda}^{x}(t)=e_{\lambda}^{x+my}(t)-e_{\lambda}^{x}(t)=\big{(}e_{\lambda}^{my}(t)-1\big{)}e_{\lambda}^{x}(t)$,
we can derive the following equation
(26)
$\triangle_{my_{1},my_{2},\dots,my_{k}}e_{\lambda}^{x}(t)=\big{(}e_{\lambda}^{my_{k}}(t)-1\big{)}\big{(}e_{\lambda}^{my_{k-1}}(t)-1\big{)}\cdots\big{(}e_{\lambda}^{my_{1}}(t)-1\big{)}e_{\lambda}^{x}(t).$
Thus, by (23) and (26), we get
(27) $\displaystyle\sum_{n=k}^{\infty}W_{m,\lambda}^{Y}(n,k)\frac{t^{n}}{n!}$
$\displaystyle=\frac{1}{k!m^{k}}e_{\lambda}(t)\Big{(}E[e_{\lambda}^{mY}(t)]-1\Big{)}^{k}$
$\displaystyle=\frac{1}{k!m^{k}}e_{\lambda}(t)\Big{(}E\big{[}e_{\lambda}^{mY_{1}}(t)\big{]}-1\Big{)}\Big{(}E\big{[}e_{\lambda}^{mY_{2}}(t)\big{]}-1\Big{)}\cdots\Big{(}E\big{[}e_{\lambda}^{mY_{k}}(t)\big{]}-1\Big{)}$
$\displaystyle=\frac{1}{k!m^{k}}E[\triangle_{mY_{1},mY_{2},\dots,mY_{k}}e_{\lambda}^{x}(t)]|_{x=1}=\sum_{n=0}^{\infty}\frac{1}{k!m^{k}}E[\triangle_{mY_{1},mY_{2},\dots,mY_{k}}(x)_{n,\lambda}]|_{x=1}\frac{t^{n}}{n!}.$
By comparing the coefficients on both sides of (27), we obtain the following
theorem.
###### Theorem 2.2.
For $n\geq k\geq 0$, we have
$\frac{1}{k!m^{k}}E[\triangle_{mY_{1},mY_{2},\dots,mY_{k}}(1)_{n,\lambda}]=W_{m,\lambda}^{Y}(n,k),$
where
$E[\triangle_{mY_{1},mY_{2},\dots,mY_{k}}(1)_{n,\lambda}]=E[\triangle_{mY_{1},mY_{2},\dots,mY_{k}}(x)_{n,\lambda}]|_{x=1}$,
and we use the same convention as this one in the sequel.
From the equation (45) of [16], we note that
(28)
$E\Big{[}\triangle_{mY_{1},mY_{2},\dots,mY_{k}}f(x)\Big{]}=\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}E\Big{[}f(x+mS_{l})\Big{]}.$
Taking $f(x)=(x)_{n,\lambda}$ in (28) and then evaluating at $x=1$, we get
(29)
$E\Big{[}\triangle_{mY_{1},mY_{2},\dots,mY_{k}}(1)_{n,\lambda}\Big{]}=\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}E\Big{[}(mS_{l}+1)_{n,\lambda}\Big{]}.$
Thus, by using (9), (29) and Theorem 2.2, we get
(30) $\displaystyle W_{m,\lambda}^{Y}(n,k)$
$\displaystyle=\frac{1}{k!m^{k}}E[\triangle_{mY_{1},mY_{2},\dots,mY_{k}}(1)_{n,\lambda}]=\frac{1}{k!m^{k}}\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}E\Big{[}(mS_{l}+1)_{n,\lambda}\Big{]}$
$\displaystyle=\frac{1}{k!m^{k}}\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}\sum_{j=0}^{n}{n\brace
j}_{\lambda}E[(mS_{l}+1)_{j}]$
$\displaystyle=\frac{1}{k!m^{k}}\sum_{j=0}^{n}{n\brace
j}_{\lambda}\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}E\Big{[}(mS_{l}+1)_{j}\Big{]}.$
Therefore, by (30), we obtain the following theorem.
###### Theorem 2.3.
For $n\geq k\geq 0$, we have
$W_{m,\lambda}^{Y}(n,k)=\frac{1}{m^{k}k!}\sum_{j=0}^{n}{n\brace
j}_{\lambda}\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}E\Big{[}(mS_{l}+1)_{j}\Big{]}.$
Before proceeding further, we recall the well-known binomial inversion given
by
(31) $a_{k}=\sum_{l=0}^{k}\binom{k}{l}b_{l}\Longleftrightarrow
b_{k}=\sum_{l=0}^{k}(-1)^{k-l}\binom{k}{l}a_{l}.$
By (29), (31) and Theorem 2.2, we get
(32)
$\displaystyle\sum_{k=0}^{N}E\Big{[}(mS_{k}+1)_{n,\lambda}\Big{]}=\sum_{k=0}^{N}\sum_{l=0}^{k}\binom{k}{l}E\Big{[}\triangle_{mY_{1},mY_{2},\dots,mY_{l}}(1)_{n,\lambda}\Big{]}$
$\displaystyle\quad=\sum_{l=0}^{N}E\Big{[}\triangle_{mY_{1},mY_{2},\dots,mY_{l}}(1)_{n,\lambda}\Big{]}\sum_{k=l}^{N}\binom{k}{l}=\sum_{l=0}^{N}\binom{N+1}{l+1}E\Big{[}\triangle_{mY_{1},mY_{2},\dots,mY_{l}}(1)_{n,\lambda}\Big{]}$
$\displaystyle\quad=\sum_{l=0}^{N}l!m^{l}W_{m,\lambda}^{Y}(n,l)\binom{N+1}{l+1}.$
Therefore, by (32), we obtain the following theorem.
###### Theorem 2.4.
For $N\geq 0$, we have
$\sum_{k=0}^{N}E\Big{[}(mS_{k}+1)_{n,\lambda}\Big{]}=\sum_{l=0}^{N}l!m^{l}\binom{N+1}{l+1}W_{m,\lambda}^{Y}(n,l).$
Now, we define the probabilistic degenerate Dowling polynomials associated
with $Y$ by
(33)
$D_{m,\lambda}^{Y}(n,x)=\sum_{k=0}^{n}W_{m,\lambda}^{Y}(n,k)x^{k},\quad(n\geq
0).$
Note that if $Y=1$, then $D_{m,\lambda}^{Y}(n,x)=D_{m,\lambda}(n,x),\ (n\geq
0)$. When $x=1$, $D_{m,\lambda}^{Y}(n)=D_{m,\lambda}^{Y}(n,1)$ are called the
probabilistic degenerate Dowling numbers associated with $Y$.
From (23) and (33), we note that
(34) $\displaystyle\sum_{n=0}^{\infty}D_{m,\lambda}^{Y}(n,x)\frac{t^{n}}{n!}$
$\displaystyle=\sum_{n=0}^{\infty}\sum_{k=0}^{n}W_{m,\lambda}^{Y}(n,k)x^{k}\frac{t^{n}}{n!}$
$\displaystyle=\sum_{k=0}^{\infty}x^{k}\sum_{n=k}^{\infty}W_{m,\lambda}^{Y}(n,k)\frac{t^{n}}{n!}=\sum_{k=0}^{\infty}x^{k}e_{\lambda}(t)\frac{1}{k!}\bigg{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\bigg{)}^{k}$
$\displaystyle=e_{\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}.$
Therefore, by (34), we obtain the following theorem.
###### Theorem 2.5.
Let $Y$ be a random variable. Then we have
(35)
$e_{\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}=\sum_{n=0}^{\infty}D_{m,\lambda}^{Y}(n,x)\frac{t^{n}}{n!}.$
By (35), we get
(36)
$\displaystyle\sum_{n=0}^{\infty}D_{m,\lambda}^{Y}(n,x)\frac{t^{n}}{n!}=e_{\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}=e^{-\frac{x}{m}}e_{\lambda}(t)e^{\frac{x}{m}E[e_{\lambda}^{mY}(t)]}$
$\displaystyle=e^{-\frac{x}{m}}e_{\lambda}(t)\sum_{k=0}^{\infty}\frac{\big{(}\frac{x}{m}\big{)}^{k}}{k!}\Big{(}E\big{[}e_{\lambda}^{mY}(t)\big{]}\Big{)}^{k}=e^{-\frac{x}{m}}\sum_{k=0}^{\infty}\frac{\big{(}\frac{x}{m}\big{)}^{k}}{k!}E\Big{[}e_{\lambda}^{m(Y_{1}+\cdots+Y_{k})+1}(t)\Big{]}$
$\displaystyle=\sum_{n=0}^{\infty}e^{-\frac{x}{m}}\sum_{k=0}^{\infty}\frac{x^{k}}{m^{k}k!}E\Big{[}(mS_{k}+1)_{n,\lambda}\Big{]}\frac{t^{n}}{n!}.$
Therefore, by (36), we obtain the following Dobinski-like formula.
###### Theorem 2.6.
For $n\geq 0$, we have
(37)
$D_{m,\lambda}^{Y}(n,x)=e^{-\frac{x}{m}}\sum_{k=0}^{\infty}\frac{x^{k}}{m^{k}k!}E\Big{[}(mS_{k}+1)_{n,\lambda}\Big{]}.$
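As an illustrative numerical check (ours, not the paper's), the following Python sketch compares the defining sum (33) with the truncated Dobinski-like series (37) in the deterministic case $Y=1$, where $S_{k}=k$ and $E[(mS_{k}+1)_{n,\lambda}]=(mk+1)_{n,\lambda}$; the truncation level $K$ is an arbitrary choice, and the series converges factorially fast.

```python
import math

def deg_ff(x, n, lam):
    """Degenerate falling factorial (x)_{n,lambda} = x(x - lam)...(x - (n-1)lam)."""
    out = 1.0
    for i in range(n):
        out *= x - i * lam
    return out

def whitney_Y1(n, k, m, lam):
    """Theorem 2.1 with Y = 1: W^Y_{m,lambda}(n,k) as an alternating sum."""
    s = sum((-1) ** (k - j) * math.comb(k, j) * deg_ff(m * j + 1, n, lam)
            for j in range(k + 1))
    return s / (m ** k * math.factorial(k))

def dowling_def(n, x, m, lam):
    """Defining sum (33): D^Y_{m,lambda}(n,x) = sum_k W^Y(n,k) x^k."""
    return sum(whitney_Y1(n, k, m, lam) * x ** k for k in range(n + 1))

def dowling_dobinski(n, x, m, lam, K=60):
    """Dobinski-like formula (37), truncated after K terms."""
    s = sum(x ** k / (m ** k * math.factorial(k)) * deg_ff(m * k + 1, n, lam)
            for k in range(K))
    return math.exp(-x / m) * s

n, x, m, lam = 5, 1.3, 2, 0.1
print(dowling_def(n, x, m, lam), dowling_dobinski(n, x, m, lam))  # should agree
```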
From (35), we note that
(38)
$\displaystyle\sum_{n=0}^{\infty}D_{m,\lambda}^{Y}(n,x)\frac{t^{n}}{n!}=e_{\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}=e_{\lambda}(t)e^{\frac{x}{m}\sum_{j=1}^{\infty}E[(mY)_{j,\lambda}]\frac{t^{j}}{j!}}$
$\displaystyle=e_{\lambda}(t)\sum_{k=0}^{\infty}\frac{1}{k!}\bigg{(}\frac{x}{m}\sum_{j=1}^{\infty}E[(mY)_{j,\lambda}]\frac{t^{j}}{j!}\bigg{)}^{k}$
$\displaystyle=e_{\lambda}(t)\sum_{k=0}^{\infty}\sum_{n=k}^{\infty}B_{n,k}\bigg{(}\frac{x}{m}E[(mY)_{1,\lambda}],\frac{x}{m}E[(mY)_{2,\lambda}],\dots,\frac{x}{m}E[(mY)_{n-k+1,\lambda}]\bigg{)}\frac{t^{n}}{n!}$
$\displaystyle=\sum_{j=0}^{\infty}\frac{(1)_{j,\lambda}}{j!}t^{j}\sum_{l=0}^{\infty}\sum_{k=0}^{l}B_{l,k}\bigg{(}\frac{x}{m}E[(mY)_{1,\lambda}],\frac{x}{m}E[(mY)_{2,\lambda}],\dots,\frac{x}{m}E[(mY)_{l-k+1,\lambda}]\bigg{)}\frac{t^{l}}{l!}$
$\displaystyle=\sum_{n=0}^{\infty}\sum_{l=0}^{n}\binom{n}{l}(1)_{n-l,\lambda}\sum_{k=0}^{l}B_{l,k}\bigg{(}\frac{x}{m}E[(mY)_{1,\lambda}],\frac{x}{m}E[(mY)_{2,\lambda}],\dots,\frac{x}{m}E[(mY)_{l-k+1,\lambda}]\bigg{)}\frac{t^{n}}{n!}.$
Therefore, by comparing the coefficients on both sides of (38), we obtain the
following theorem.
###### Theorem 2.7.
For $n\geq 0$, we have
$\displaystyle D_{m,\lambda}^{Y}(n,x)$
$\displaystyle\quad=\sum_{l=0}^{n}\sum_{k=0}^{l}\binom{n}{l}(1)_{n-l,\lambda}B_{l,k}\bigg{(}\frac{x}{m}E[(mY)_{1,\lambda}],\frac{x}{m}E[(mY)_{2,\lambda}],\dots,\frac{x}{m}E[(mY)_{l-k+1,\lambda}]\bigg{)}.$
From (35), we have
(39)
$\displaystyle\sum_{n=0}^{\infty}D_{m,\lambda}^{Y}(n+1,x)\frac{t^{n}}{n!}=\frac{d}{dt}\sum_{n=0}^{\infty}D_{m,\lambda}^{Y}(n,x)\frac{t^{n}}{n!}=\frac{d}{dt}\bigg{(}e_{\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}\bigg{)}$
$\displaystyle=e_{\lambda}^{1-\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}+\frac{xE[mYe_{\lambda}^{mY-\lambda}(t)]}{m}e_{\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}$
$\displaystyle=\sum_{l=0}^{\infty}(-1)^{l}\lambda^{l}t^{l}\sum_{k=0}^{\infty}D_{m,\lambda}^{Y}(k,x)\frac{t^{k}}{k!}+\sum_{l=0}^{\infty}\frac{x}{m}E\big{[}(mY)_{l+1,\lambda}\big{]}\frac{t^{l}}{l!}\sum_{k=0}^{\infty}D_{m,\lambda}^{Y}(k,x)\frac{t^{k}}{k!}$
$\displaystyle=\sum_{n=0}^{\infty}\sum_{k=0}^{n}(-1)^{n-k}\lambda^{n-k}\frac{D_{m,\lambda}^{Y}(k,x)}{k!}t^{n}+\sum_{n=0}^{\infty}\sum_{k=0}^{n}\binom{n}{k}D_{m,\lambda}^{Y}(k,x)\frac{x}{m}E[(mY)_{n-k+1,\lambda}]\frac{t^{n}}{n!}$
$\displaystyle=\sum_{n=0}^{\infty}\bigg{(}\sum_{k=0}^{n}(-1)^{n-k}\lambda^{n-k}\frac{n!}{k!}D_{m,\lambda}^{Y}(k,x)+\frac{x}{m}\sum_{k=0}^{n}\binom{n}{k}D_{m,\lambda}^{Y}(k,x)E[(mY)_{n-k+1,\lambda}]\bigg{)}\frac{t^{n}}{n!}.$
Therefore, by comparing the coefficients on both sides of (39), we obtain the
following theorem.
###### Theorem 2.8.
For $n\geq 0$, we have
$\displaystyle D_{m,\lambda}^{Y}(n+1,x)$
$\displaystyle=\sum_{k=0}^{n}(-1)^{n-k}\lambda^{n-k}\frac{n!}{k!}D_{m,\lambda}^{Y}(k,x)+\frac{x}{m}\sum_{k=0}^{n}\binom{n}{k}D_{m,\lambda}^{Y}(k,x)E[(mY)_{n-k+1,\lambda}].$
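Continuing the $Y=1$ sketch above (reusing its helpers `deg_ff` and `dowling_def`), the recurrence of Theorem 2.8 can be checked numerically, since $E[(mY)_{j,\lambda}]$ reduces to $(m)_{j,\lambda}$:

```python
import math  # deg_ff and dowling_def are assumed from the previous sketch

def recurrence_rhs(n, x, m, lam):
    """Right-hand side of Theorem 2.8 in the case Y = 1."""
    s1 = sum((-1) ** (n - k) * lam ** (n - k)
             * math.factorial(n) / math.factorial(k) * dowling_def(k, x, m, lam)
             for k in range(n + 1))
    s2 = (x / m) * sum(math.comb(n, k) * dowling_def(k, x, m, lam)
                       * deg_ff(m, n - k + 1, lam) for k in range(n + 1))
    return s1 + s2

n, x, m, lam = 4, 0.7, 3, 0.2
print(dowling_def(n + 1, x, m, lam), recurrence_rhs(n, x, m, lam))  # should match
```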
By (35), we get
(40)
$\displaystyle\sum_{n=0}^{\infty}\sum_{k=0}^{n}\binom{n}{k}(1)_{n-k,\lambda}D_{m,\lambda}^{Y}(k,x+y)\frac{t^{n}}{n!}=e_{\lambda}(t)e_{\lambda}(t)e^{(x+y)\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}$
$\displaystyle=e_{\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}e_{\lambda}(t)e^{y\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}$
$\displaystyle=\sum_{l=0}^{\infty}D_{m,\lambda}^{Y}(l,x)\frac{t^{l}}{l!}\sum_{k=0}^{\infty}D_{m,\lambda}^{Y}(k,y)\frac{t^{k}}{k!}=\sum_{n=0}^{\infty}\bigg{(}\sum_{k=0}^{n}\binom{n}{k}D_{m,\lambda}^{Y}(n-k,x)D_{m,\lambda}^{Y}(k,y)\bigg{)}\frac{t^{n}}{n!}.$
By (40), we obtain the following theorem which shows that
$D_{m,\lambda}^{Y}(n,x)$ does not satisfy the binomial identity.
###### Theorem 2.9.
For $n\geq 0$, we have
$\sum_{k=0}^{n}\binom{n}{k}(1)_{n-k,\lambda}D_{m,\lambda}^{Y}(k,x+y)=\sum_{k=0}^{n}\binom{n}{k}D_{m,\lambda}^{Y}(n-k,x)D_{m,\lambda}^{Y}(k,y).$
Let $r$ be a nonnegative integer. Then we consider the probabilistic
degenerate $r$-Whitney numbers of the second kind associated with $Y$ defined
by
(41)
$\frac{1}{k!}\bigg{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\bigg{)}^{k}e_{\lambda}^{r}(t)=\sum_{n=k}^{\infty}W_{m,\lambda}^{(Y,r)}(n,k)\frac{t^{n}}{n!},\quad(k\geq
0).$
When $Y=1,$ we have $W_{m,\lambda}^{(Y,r)}(n,k)=W_{m,\lambda}^{(r)}(n,k)$,
(see (15)).
From (41), we have
(42)
$\displaystyle\sum_{n=k}^{\infty}W_{m,\lambda}^{(Y,r)}(n,k)\frac{t^{n}}{n!}$
$\displaystyle=\frac{1}{k!m^{k}}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\Big{(}E[e_{\lambda}^{mY}(t)]\Big{)}^{j}e_{\lambda}^{r}(t)$
$\displaystyle=\frac{1}{m^{k}k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\Big{[}e_{\lambda}^{mS_{j}+r}(t)\Big{]}$
$\displaystyle=\sum_{n=0}^{\infty}\bigg{(}\frac{1}{m^{k}k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\Big{[}(mS_{j}+r)_{n,\lambda}\Big{]}\bigg{)}\frac{t^{n}}{n!}.$
Therefore, by comparing the coefficients on both sides of (42), we obtain the
following theorem.
###### Theorem 2.10.
For $r\in\mathbb{N}\cup\\{0\\}$ and $n\geq k\geq 0$, we have
$W_{m,\lambda}^{(Y,r)}(n,k)=\frac{1}{m^{k}k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\Big{[}(mS_{j}+r)_{n,\lambda}\Big{]}.$
From (35), we note that
(43)
$\displaystyle\sum_{n=0}^{\infty}\sum_{k=0}^{n}\binom{n}{k}(x-1)_{n-k,\lambda}D_{m,\lambda}^{Y}(k,x)\frac{t^{n}}{n!}=\bigg{(}e_{\lambda}(t)e^{\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}\bigg{)}^{x}$
$\displaystyle=\bigg{(}e_{\lambda}(t)e^{\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}-1+1\bigg{)}^{x}=\sum_{k=0}^{\infty}\binom{x}{k}\bigg{(}e_{\lambda}(t)e^{\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}-1\bigg{)}^{k}$
$\displaystyle=\sum_{k=0}^{\infty}\binom{x}{k}k!\frac{1}{k!}\bigg{(}\sum_{j=1}^{\infty}D_{m,\lambda}^{Y}(j)\frac{t^{j}}{j!}\bigg{)}^{k}$
$\displaystyle=\sum_{k=0}^{\infty}\binom{x}{k}k!\sum_{n=k}^{\infty}B_{n,k}\Big{(}D_{m,\lambda}^{Y}(1),D_{m,\lambda}^{Y}(2),\dots,D_{m,\lambda}^{Y}(n-k+1)\Big{)}\frac{t^{n}}{n!}$
$\displaystyle=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\binom{x}{k}k!B_{n,k}\Big{(}D_{m,\lambda}^{Y}(1),D_{m,\lambda}^{Y}(2),\dots,D_{m,\lambda}^{Y}(n-k+1)\Big{)}\frac{t^{n}}{n!}.$
Therefore, by comparing the coefficients on both sides of (43), we obtain the
following theorem.
###### Theorem 2.11.
For $n\geq 0$, we have
$\sum_{k=0}^{n}\binom{n}{k}(x-1)_{n-k,\lambda}D_{m,\lambda}^{Y}(k,x)=\sum_{k=0}^{n}\binom{x}{k}k!B_{n,k}\Big{(}D_{m,\lambda}^{Y}(1),D_{m,\lambda}^{Y}(2),\dots,D_{m,\lambda}^{Y}(n-k+1)\Big{)}.$
Now, we observe that
(44)
$te_{\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}=t\sum_{j=0}^{\infty}D_{m,\lambda}^{Y}(j,x)\frac{t^{j}}{j!}=\sum_{j=1}^{\infty}jD_{m,\lambda}^{Y}(j-1,x)\frac{t^{j}}{j!}.$
Thus, by (41) and (44), we get
(45)
$\displaystyle\bigg{(}\sum_{j=1}^{\infty}jD_{m,\lambda}^{Y}(j-1,x)\frac{t^{j}}{j!}\bigg{)}^{k}=t^{k}\bigg{(}e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}e_{\lambda}(t)\bigg{)}^{k}$
$\displaystyle=t^{k}\sum_{j=0}^{\infty}k^{j}x^{j}\frac{1}{j!}\Big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\Big{)}^{j}e_{\lambda}^{k}(t)=t^{k}\sum_{j=0}^{\infty}k^{j}x^{j}\sum_{n=j}^{\infty}W_{m,\lambda}^{(Y,k)}(n,j)\frac{t^{n}}{n!}$
$\displaystyle=\sum_{n=0}^{\infty}\sum_{j=0}^{n}k^{j}x^{j}W_{m,\lambda}^{(Y,k)}(n,j)\frac{t^{n+k}}{n!}=\sum_{n=k}^{\infty}k!\sum_{j=0}^{n-k}\binom{n}{k}k^{j}x^{j}W_{m,\lambda}^{(Y,k)}(n-k,j)\frac{t^{n}}{n!}.$
From (45), we can derive the following equation
(46)
$\displaystyle\sum_{n=k}^{\infty}\sum_{j=0}^{n-k}\binom{n}{k}k^{j}x^{j}W_{m,\lambda}^{(Y,k)}(n-k,j)\frac{t^{n}}{n!}=\frac{1}{k!}\bigg{(}\sum_{j=1}^{\infty}jD_{m,\lambda}^{Y}(j-1,x)\frac{t^{j}}{j!}\bigg{)}^{k}$
$\displaystyle=\sum_{n=k}^{\infty}B_{n,k}\Big{(}D_{m,\lambda}^{Y}(0,x),2D_{m,\lambda}^{Y}(1,x),3D_{m,\lambda}^{Y}(2,x),\dots,(n-k+1)D_{m,\lambda}^{Y}(n-k,x)\Big{)}\frac{t^{n}}{n!}.$
Therefore, by (46), we obtain the following theorem.
###### Theorem 2.12.
For $n,k$ with $n\geq k\geq 0$, we have
$\displaystyle\sum_{j=0}^{n-k}\binom{n}{k}k^{j}x^{j}W_{m,\lambda}^{(Y,k)}(n-k,j)$
$\displaystyle=B_{n,k}\Big{(}D_{m,\lambda}^{Y}(0,x),2D_{m,\lambda}^{Y}(1,x),3D_{m,\lambda}^{Y}(2,x),\dots,(n-k+1)D_{m,\lambda}^{Y}(n-k,x)\Big{)}.$
We note that
(47)
$\displaystyle\sum_{n=k}^{\infty}B_{n,k}\Big{(}D_{m,\lambda}^{Y}(1,x)-(1)_{1,\lambda},D_{m,\lambda}^{Y}(2,x)-(1)_{2,\lambda},\dots,D_{m,\lambda}^{Y}(n-k+1,x)-(1)_{n-k+1,\lambda}\Big{)}\frac{t^{n}}{n!}$
$\displaystyle=\frac{1}{k!}\bigg{(}\sum_{j=1}^{\infty}\Big{(}D_{m,\lambda}^{Y}(j,x)-(1)_{j,\lambda}\Big{)}\frac{t^{j}}{j!}\bigg{)}^{k}=\frac{1}{k!}\bigg{(}e_{\lambda}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}-e_{\lambda}(t)\bigg{)}^{k}$
$\displaystyle=e_{\lambda}^{k}(t)\frac{1}{k!}\bigg{(}e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}-1\bigg{)}^{k}=\sum_{j=k}^{\infty}{j\brace
k}x^{j}\frac{1}{j!}\bigg{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\bigg{)}^{j}e_{\lambda}^{k}(t)$
$\displaystyle=\sum_{j=k}^{\infty}{j\brace
k}x^{j}\sum_{n=j}^{\infty}W_{m,\lambda}^{(Y,k)}(n,j)\frac{t^{n}}{n!}=\sum_{n=k}^{\infty}\sum_{j=k}^{n}{j\brace
k}W_{m,\lambda}^{(Y,k)}(n,j)x^{j}\frac{t^{n}}{n!}.$
Therefore, by (47), we obtain the following theorem.
###### Theorem 2.13.
For $n,k$ with $n\geq k\geq 0$, we have
$\displaystyle
B_{n,k}\Big{(}D_{m,\lambda}^{Y}(1,x)-(1)_{1,\lambda},D_{m,\lambda}^{Y}(2,x)-(1)_{2,\lambda},\dots,D_{m,\lambda}^{Y}(n-k+1,x)-(1)_{n-k+1,\lambda}\Big{)}$
$\displaystyle=\sum_{j=k}^{n}{j\brace k}W_{m,\lambda}^{(Y,k)}(n,j)x^{j}.$
From (35), we have
(48)
$\displaystyle\sum_{n=0}^{\infty}\bigg{(}\frac{d}{dx}\bigg{)}^{k}D_{m,\lambda}^{Y}(n,x)\frac{t^{n}}{n!}=\bigg{(}\frac{d}{dx}\bigg{)}^{k}e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}e_{\lambda}(t)$
$\displaystyle=k!\frac{1}{k!}\bigg{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\bigg{)}^{k}e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}e_{\lambda}(t)$
$\displaystyle=\frac{k!}{m^{k}}\frac{1}{k!}\Big{(}E[e_{\frac{\lambda}{m}}^{Y}(mt)]-1\Big{)}^{k}e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}e_{\lambda}(t)$
$\displaystyle=\frac{k!}{m^{k}}\sum_{l=k}^{\infty}{l\brace
k}_{Y,\frac{\lambda}{m}}\frac{m^{l}t^{l}}{l!}\sum_{j=0}^{\infty}D_{m,\lambda}^{Y}(j,x)\frac{t^{j}}{j!}$
$\displaystyle=\sum_{n=k}^{\infty}k!\sum_{j=0}^{n-k}\binom{n}{j}D_{m,\lambda}^{Y}(j,x){n-j\brace
k}_{Y,\frac{\lambda}{m}}m^{n-k-j}\frac{t^{n}}{n!}.$
In particular, for $k=1$, we have
(49)
$\displaystyle\sum_{n=1}^{\infty}\frac{d}{dx}D_{m,\lambda}^{Y}(n,x)\frac{t^{n}}{n!}=\bigg{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\bigg{)}e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}e_{\lambda}(t)$
$\displaystyle=\frac{1}{m}\sum_{l=1}^{\infty}E\big{[}(Y)_{l,\frac{\lambda}{m}}\big{]}\frac{m^{l}t^{l}}{l!}\sum_{j=0}^{\infty}D_{m,\lambda}^{Y}(j,x)\frac{t^{j}}{j!}$
$\displaystyle=\sum_{n=1}^{\infty}\sum_{j=0}^{n-1}\binom{n}{j}E[(Y)_{n-j,\frac{\lambda}{m}}]m^{n-j-1}D_{m,\lambda}^{Y}(j,x)\frac{t^{n}}{n!}.$
By comparing the coefficients on both sides of (48) and (49), we obtain the
following theorem.
###### Theorem 2.14.
For $n,k\in\mathbb{N}$ with $n\geq k$, we have
$\bigg{(}\frac{d}{dx}\bigg{)}^{k}D_{m,\lambda}^{Y}(n,x)=k!\sum_{j=0}^{n-k}\binom{n}{j}D_{m,\lambda}^{Y}(j,x){n-j\brace
k}_{Y,\frac{\lambda}{m}}m^{n-k-j}.$
In particular, for $k=1$, we have
$\frac{d}{dx}D_{m,\lambda}^{Y}(n,x)=\sum_{j=0}^{n-1}\binom{n}{j}E\big{[}(Y)_{n-j,\frac{\lambda}{m}}\big{]}D_{m,\lambda}^{Y}(j,x)m^{n-j-1},\quad(n\geq
1).$
We observe that
(50)
$\displaystyle\sum_{n=k}^{\infty}W_{m,\lambda}^{Y}(n,k)\frac{t^{n}}{n!}=\frac{1}{k!}\bigg{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\bigg{)}^{k}e_{\lambda}(t)$
$\displaystyle=\frac{1}{m^{k}}\frac{1}{k!}\Big{(}E[e_{\frac{\lambda}{m}}^{Y}(mt)]-1\Big{)}^{k}e_{\lambda}(t)=\frac{1}{m^{k}}\frac{1}{k!}\bigg{(}\sum_{j=1}^{\infty}E[(Y)_{j,\frac{\lambda}{m}}]m^{j}\frac{t^{j}}{j!}\bigg{)}^{k}e_{\lambda}(t)$
$\displaystyle=\frac{1}{m^{k}}\sum_{l=k}^{\infty}B_{l,k}\Big{(}E[(Y)_{1,\frac{\lambda}{m}}]m,E[(Y)_{2,\frac{\lambda}{m}}]m^{2},\dots,E[(Y)_{l-k+1,\frac{\lambda}{m}}]m^{l-k+1}\Big{)}\frac{t^{l}}{l!}\sum_{j=0}^{\infty}(1)_{j,\lambda}\frac{t^{j}}{j!}$
$\displaystyle=\sum_{n=k}^{\infty}\frac{1}{m^{k}}\sum_{l=k}^{n}\binom{n}{l}B_{l,k}\Big{(}E[(Y)_{1,\frac{\lambda}{m}}]m,E[(Y)_{2,\frac{\lambda}{m}}]m^{2},\dots,E[(Y)_{l-k+1,\frac{\lambda}{m}}]m^{l-k+1}\Big{)}(1)_{n-l,\lambda}\frac{t^{n}}{n!}.$
Thus, by (50), we get the next theorem.
###### Theorem 2.15.
For $n,k$ with $n\geq k\geq 0$, we have
$W_{m,\lambda}^{Y}(n,k)=\frac{1}{m^{k}}\sum_{l=k}^{n}\binom{n}{l}B_{l,k}\Big{(}E[(Y)_{1,\frac{\lambda}{m}}]m,E[(Y)_{2,\frac{\lambda}{m}}]m^{2},\dots,E[(Y)_{l-k+1,\frac{\lambda}{m}}]m^{l-k+1}\Big{)}(1)_{n-l,\lambda}.$
Now, we define the probabilistic degenerate $r$-Dowling polynomials associated
with $Y$ by
(51)
$D_{m,\lambda}^{(Y,r)}(n,x)=\sum_{k=0}^{n}W_{m,\lambda}^{(Y,r)}(n,k)x^{k},\quad(n\geq
0).$
From (51), we note that
(52)
$\displaystyle\sum_{n=0}^{\infty}D_{m,\lambda}^{(Y,r)}(n,x)\frac{t^{n}}{n!}$
$\displaystyle=\sum_{n=0}^{\infty}\sum_{k=0}^{n}W_{m,\lambda}^{(Y,r)}(n,k)x^{k}\frac{t^{n}}{n!}$
$\displaystyle=\sum_{k=0}^{\infty}x^{k}\sum_{n=k}^{\infty}W_{m,\lambda}^{(Y,r)}(n,k)\frac{t^{n}}{n!}=\sum_{k=0}^{\infty}x^{k}e_{\lambda}^{r}(t)\frac{1}{k!}\bigg{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\bigg{)}^{k}$
$\displaystyle=e_{\lambda}^{r}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}.$
By (52), we obtain the following theorem.
###### Theorem 2.16.
Let $r$ be a nonnegative integer. Then we have
(53)
$e_{\lambda}^{r}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}=\sum_{n=0}^{\infty}D_{m,\lambda}^{(Y,r)}(n,x)\frac{t^{n}}{n!}.$
By (53), we get
(54)
$\displaystyle\sum_{n=0}^{\infty}D_{m,\lambda}^{(Y,r)}(n,x)\frac{t^{n}}{n!}$
$\displaystyle=e_{\lambda}^{r}(t)e^{x\big{(}\frac{E[e_{\lambda}^{mY}(t)]-1}{m}\big{)}}$
$\displaystyle=e^{-\frac{x}{m}}e_{\lambda}^{r}(t)\sum_{k=0}^{\infty}\frac{\big{(}\frac{x}{m}\big{)}^{k}}{k!}\Big{(}E[e_{\lambda}^{mY}(t)]\Big{)}^{k}=e^{-\frac{x}{m}}\sum_{k=0}^{\infty}\frac{\big{(}\frac{x}{m}\big{)}^{k}}{k!}E[e_{\lambda}^{mS_{k}+r}(t)]$
$\displaystyle=\sum_{n=0}^{\infty}e^{-\frac{x}{m}}\sum_{k=0}^{\infty}\frac{x^{k}}{m^{k}k!}E\big{[}(mS_{k}+r)_{n,\lambda}\big{]}\frac{t^{n}}{n!}.$
By (54), we obtain the following Dobinski-like theorem.
###### Theorem 2.17.
For $n\geq 0$, we have
$D_{m,\lambda}^{(Y,r)}(n,x)=e^{-\frac{x}{m}}\sum_{k=0}^{\infty}\frac{x^{k}}{k!m^{k}}E\Big{[}(mS_{k}+r)_{n,\lambda}\Big{]}.$
When $Y=1$, we have
$D_{m,\lambda}^{(Y,r)}(n,x)=D_{m,\lambda}^{(r)}(n,x),\quad(n\geq 0).$
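As a quick sanity check (our remark, assuming the standard degenerate falling factorial convention $(x)_{0,\lambda}=1$, $(x)_{n,\lambda}=x(x-\lambda)\cdots(x-(n-1)\lambda)$), take $Y=1$, $m=1$ and $r=0$ in Theorem 2.17; then $S_{k}=k$ and, letting $\lambda\rightarrow 0$ so that $(k)_{n,\lambda}\rightarrow k^{n}$, the formula reduces to the classical Dobinski formula for the Bell polynomials:
$B_{n}(x)=e^{-x}\sum_{k=0}^{\infty}\frac{k^{n}}{k!}x^{k}.$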
## 3\. Conclusion
In this paper, we used generating functions to study the probabilistic degenerate Whitney numbers of the second kind associated with $Y$ and the probabilistic degenerate Dowling polynomials associated with $Y$, where $Y$ is a random variable satisfying the moment condition in (1). In addition, we investigated generalizations of those numbers and polynomials, namely the probabilistic degenerate $r$-Whitney numbers of the second kind associated with $Y$ and the probabilistic degenerate $r$-Dowling polynomials associated with $Y$. We derived several properties, explicit expressions, identities, recurrence relations and generating functions for these numbers and polynomials.
It is one of our future projects to continue studying degenerate versions, $\lambda$-analogues and probabilistic versions of many special polynomials and numbers, and to find their applications in mathematics as well as in physics, science and engineering.
# Generalized Ellis-Bronnikov traversable wormholes in $f(R)$ gravity with anisotropic dark matter
C. R. Muniz (E-mail: <EMAIL_ADDRESS>), Universidade Estadual do Ceará, Faculdade de Educação, Ciências e Letras de Iguatu, 63500-000, Iguatu, CE, Brazil.
R. V. Maluf (E-mail: <EMAIL_ADDRESS>), Universidade Federal do Ceará, Centro de Ciências, Departamento de Física, 60000-000, Fortaleza, CE, Brazil.
###### Abstract
This paper studies generalized Ellis-Bronnikov (E-B) traversable wormholes in $f(R)$ extended gravity. We assume that these wormholes are supported by anisotropic dark matter (DM) described by the most commonly used phenomenological models, namely those of Navarro-Frenk-White (NFW), Thomas-Fermi (T-F), and pseudo-isothermal (PI). Initially, we obtain the field equations in a general scenario of $f(R)$ theories in the metric formalism for the static and spherically symmetric Morris-Thorne spacetime. Then we particularize to a power-law $f(R)$ model, which includes the Starobinsky modified gravity. Next, we analyze the energy conditions that do not depend on the DM models (Null and Weak Energy Conditions – NEC and WEC) and those that are model-dependent (Strong and Dominant Energy Conditions – SEC and DEC). Finally, we compare some E-B wormhole solutions under the mentioned DM models, discussing the feasibility of the wormhole-dark matter system in different scenarios.
$f(R)$ modified gravity; Wormholes; Dark matter.
## I Introduction
The $\Lambda$CDM model of standard cosmology, supported by observational data, indicates that the energy content of our universe consists of about 5$\%$ baryonic matter, 27$\%$ dark matter, and 68$\%$ dark energy Planck:2018vyg . Concerning galaxies and their clusters, dark matter strongly contributes to the formation, evolution, and coalescence of these structures Trujillo-Gomez:2010jbn through the only fundamental interaction it is apparently capable of experiencing: gravity.
Dark matter halos are present in the vast majority of galaxies, and the formation of traversable wormholes inside them has therefore been considered (see, e.g., Sarkar:2019uhk ; Jusufi:2019knb ; Xu:2020wfm ). These hypothetical objects are predicted by general relativity (GR) and represent a kind of tunnel in spacetime connecting two distant regions of the same universe or two different universes (see Morris:1988cz ; Visser , and references therein). This subject has recently been investigated at a more fundamental level in the works of J. Maldacena Maldacena:2013xja ; Maldacena:2017axo ; Maldacena:2020sxe and has also been applied in the context of condensed matter systems Gonzalez:2009je ; Alencar:2021ejd . Usually, some type of exotic matter is required to source traversable wormholes. However, in scenarios of modified theories of gravity, this feature can change, with non-exotic matter working as a source for the wormhole geometry Pavlovic:2014gba ; Mehdizadeh:2019qvc ; Sahoo:2020sva ; Moti ; Alencar ; Sadeghi:2022sto .
Modified $f(R)$ gravity has also drawn much attention in the last few years. It is based on a generalization of Einstein’s field equations obtained by replacing the Ricci scalar curvature, $R$, with a general function, $f(R)$, in the gravitational action. A relevant feature of $f(R)$ gravity is that, differently from the $\Lambda$CDM standard cosmology based on GR, it is unnecessary to postulate dark energy or introduce any new matter field to explain cosmic inflation Kehagias:2013mya ; Ketov ; Aziz:2021evx ; Sharma:2022tce and the present phase of accelerated expansion of the Universe Nojiri ; Sotiriou . In fact, the simplest $f(R)$ model, due to A. Starobinsky Starobinsky1 ; Starobinsky2 , with an additional Ricci scalar quadratic term in the Einstein-Hilbert action and therefore only one free parameter, leads to a de Sitter phase for as long as this term dominates, and is compatible with the experimental data collected by the Planck satellite Akrami .
In this paper, we will investigate the possibility of generalized Ellis-Bronnikov (E-B) traversable wormholes occurring in $f(R)$ extended gravity. The wormholes are assumed to be sourced by anisotropic dark matter compatible with the most commonly used phenomenological models, namely those of Navarro-Frenk-White (NFW), Thomas-Fermi (T-F) and pseudo-isothermal (PI). We will first obtain the general field equations of any $f(R)$ theory in the metric formalism, and then particularize to a power-law $f(R)$ model, which includes the Starobinsky modified gravity. Next, we will analyze the energy conditions associated with the source, both those that do not depend on the DM models (Null and Weak Energy Conditions – NEC and WEC) and those that are model-dependent (Strong and Dominant Energy Conditions – SEC and DEC). Finally, we compare some E-B wormhole solutions with the mentioned DM profiles and the $f(R)$ Starobinsky-like models regarding the energetic feasibility of wormhole-dark matter systems.
The paper is organized as follows: Section II reviews the general Morris-Thorne wormhole solution in $f(R)$ gravity. Section III discusses the generalized E-B wormhole sourced by anisotropic dark matter in the Starobinsky $f(R)$ model. In Section IV, we analyze the energy conditions of the wormhole-dark matter system, and then we present the conclusions and close the paper in Section V.
## II Wormhole solutions in $f(R)$ gravity
Let us start by considering the following action for $f(R)$ modified theories of gravity:
$S=\frac{1}{2\kappa}\int d^{4}x\sqrt{-g}\;f(R)+S_{M}(g^{\mu\nu},\psi)\,,$ (1)
where $\kappa=8\pi G$ will be set equal to unity $(\kappa=1)$ for simplicity, and $f(R)$ is initially an arbitrary function of the curvature scalar. In addition, $S_{M}(g^{\mu\nu},\psi)$ is the matter action, with $S_{M}=\int d^{4}x\sqrt{-g}\;{\cal L}_{m}(g_{\mu\nu},\psi)$, where ${\cal L}_{m}$ is the matter Lagrangian density, in which matter is minimally coupled to the metric $g_{\mu\nu}$, and $\psi$ denotes the matter fields.
Varying the action (1) with respect to the metric, we get the gravitational
field equation:
$FR_{\mu\nu}-\frac{1}{2}f\,g_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}F+g_{\mu\nu}\Box
F=\,T^{(m)}_{\mu\nu}\,,$ (2)
where we define $F\equiv df/dR$ and
$\Box=g^{\alpha\beta}\nabla_{\alpha}\nabla_{\beta}$. The energy-momentum
tensor associated with the matter content is given by
$T^{(m)}_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta\sqrt{-g}{\cal
L}_{m}}{\delta g^{\mu\nu}}.$ (3)
Considering the contraction of Eq. (2), we arrive at the following relation
$FR-2f+3\,\Box F=\,T\,,$ (4)
which shows that the Ricci scalar is a fully dynamical degree of freedom, and
$T=T^{(m)\mu}{}_{\mu}$ is the trace of the energy-momentum tensor.
The trace equation (4) can be used to simplify the field equations and can then be kept as a constraint equation. Thus, substituting the trace equation into Eq. (2) and reorganizing the terms, we end up with the following gravitational field equation
$G_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}R\,g_{\mu\nu}=T^{{\rm
eff}}_{\mu\nu}\,,$ (5)
where the effective stress-energy tensor is given by $T^{{\rm
eff}}_{\mu\nu}=T^{(c)}_{\mu\nu}+\tilde{T}^{(m)}_{\mu\nu}$. The term
$\tilde{T}^{(m)}_{\mu\nu}$ corresponds to
$\tilde{T}^{(m)}_{\mu\nu}=T^{(m)}_{\mu\nu}/F\,,$ (6)
and the curvature stress-energy tensor, $T^{(c)}_{\mu\nu}$, is defined as
$\displaystyle
T^{(c)}_{\mu\nu}=\frac{1}{F}\left[\nabla_{\mu}\nabla_{\nu}F-\frac{1}{4}g_{\mu\nu}\left(RF+\Box
F+T\right)\right]\,.$ (7)
It is also interesting to consider the conservation law for the above
curvature stress-energy tensor. Taking into account the Bianchi identities,
$\nabla^{\mu}G_{\mu\nu}=0$, and the diffeomorphism invariance of the matter
part of the action, which yields $\nabla^{\mu}T^{(m)}_{\mu\nu}=0$, we verify
that the effective Einstein field equation provides the following conservation
law
$\nabla^{\mu}T^{(c)}_{\mu\nu}=\frac{1}{F^{2}}T^{(m)}_{\mu\nu}\nabla^{\mu}F\,.$
(8)
Now that the scenario of the $f(R)$ theories of gravity has been presented, we will study the wormhole solutions in this framework. The following line element describes the geometry of a static and spherically symmetric traversable wormhole Morris:1988cz :
$ds^{2}=-e^{2\Phi(r)}dt^{2}+\frac{dr^{2}}{1-b(r)/r}+r^{2}\,(d\theta^{2}+\sin^{2}{\theta}\,d\phi^{2})\,,$
(9)
where $\Phi(r)$ and $b(r)$ are arbitrary functions of the radial coordinate, $r$, denoted the redshift function and the shape function, respectively. The radial coordinate $r$ decreases from infinity to a minimum value $r_{0}$, the radius of the throat, where $b(r_{0})=r_{0}$. A fundamental property of a wormhole is the flaring-out condition of the throat, given by $(b-b^{\prime}r)/b^{2}>0$ Morris:1988cz ; at the throat, where $b(r_{0})=r=r_{0}$, the condition $b^{\prime}(r_{0})<1$ is imposed in order to have wormhole solutions. It is precisely these restrictions that impose the NEC violation in classical general relativity. Another condition that needs to be satisfied is $1-b(r)/r>0$. For the wormhole to be traversable, one must demand the absence of horizons, which are identified as the surfaces with $e^{2\Phi}\rightarrow 0$; hence $\Phi(r)$ must be finite everywhere. In the analysis outlined below, we consider a constant redshift function, $\Phi^{\prime}=0$, which simplifies the calculations considerably and provides interesting exact wormhole solutions.
Regarding the matter content of the wormhole, we impose that the energy-momentum tensor threading the wormhole satisfies the energy conditions, which will be discussed in Section IV, and is given by the following anisotropic distribution of matter
$T^{(m)}_{\mu\nu}=(\rho+p_{t})U_{\mu}\,U_{\nu}+p_{t}\,g_{\mu\nu}+(p_{r}-p_{t})\chi_{\mu}\chi_{\nu}\,,$
(10)
where $U^{\mu}$ is the four-velocity and $\chi^{\mu}$ is the unit spacelike vector in the radial direction, i.e., $\chi^{\mu}=\sqrt{1-b(r)/r}\,\delta^{\mu}{}_{r}$. Here, $\rho(r)$ is the energy density, $p_{r}(r)$ is the radial pressure measured in the direction of $\chi^{\mu}$, and $p_{t}(r)$ is the transverse pressure measured in the direction orthogonal to $\chi^{\mu}$. With these considerations, the stress-energy tensor has the profile $T^{(m)\mu}{}_{\nu}={\rm diag}[-\rho(r),p_{r}(r),p_{t}(r),p_{t}(r)]$.
With this matter content, the effective field equation (5) provides the following equations for the wormhole geometry
$\displaystyle-\frac{b^{\prime}}{r^{2}}+\frac{\rho}{F}+\frac{H}{F}$
$\displaystyle=$ $\displaystyle 0,$ (11)
$\displaystyle-\frac{b}{r^{3}}-\frac{p_{r}}{F}+\frac{1}{F}\left[H+F^{\prime\prime}\left(\frac{b}{r}-1\right)+F^{\prime}\left(\frac{b^{\prime}}{2r}-\frac{b}{2r^{2}}\right)\right]$
$\displaystyle=$ $\displaystyle 0,$ (12)
$\displaystyle-\frac{b^{\prime}}{2r^{2}}+\frac{b}{2r^{3}}-\frac{p_{t}}{F}+\frac{1}{F}\left[H+F^{\prime}\left(\frac{b}{r^{2}}-\frac{1}{r}\right)\right]$
$\displaystyle=$ $\displaystyle 0,$ (13)
where the prime denotes a derivative with respect to the radial coordinate,
$r$. The term $H=H(r)$ is defined as
$H(r)=\frac{1}{4}\left(FR+\Box F+T\right)\,,$ (14)
and the Ricci curvature scalar is $R=2b^{\prime}(r)/r^{2}$. Thus, the term
$\Box F$ is encoded by the following expression
$\Box
F=F^{\prime\prime}\left(1-\frac{b}{r}\right)-F^{\prime}\left(\frac{b^{\prime}}{2r}+\frac{3b}{2r^{2}}-\frac{2}{r}\right).$
(15)
So we can explicitly write $H(r)$ as
$H=-\frac{\rho}{4}+\frac{p_{r}}{4}+\frac{p_{t}}{2}+\frac{Fb^{\prime}}{2r^{2}}-F^{\prime}\left(\frac{b^{\prime}}{8r}+\frac{3b}{8r^{2}}-\frac{1}{2r}\right)-F^{\prime\prime}\left(\frac{b}{4r}-\frac{1}{4}\right).$
(16)
Note that the field equations (11)-(13), despite their complexity, form an algebraic system of equations for the components of the energy-momentum tensor. From the field equations (11)-(13), and with the help of (16), we can express the radial and transverse pressures in the following way
$\displaystyle p_{r}$ $\displaystyle=$
$\displaystyle-\rho+F\left(\frac{b^{\prime}}{r^{2}}-\frac{b}{r^{3}}\right)+F^{\prime}\left(\frac{b^{\prime}}{2r}-\frac{b}{2r^{2}}\right)+F^{\prime\prime}\left(\frac{b}{r}-1\right),$
(17) $\displaystyle p_{t}$ $\displaystyle=$
$\displaystyle-\rho+F\left(\frac{b^{\prime}}{2r^{2}}+\frac{b}{2r^{3}}\right)+F^{\prime}\left(\frac{b}{r^{2}}-\frac{1}{r}\right).$
(18)
It is worth mentioning that the solutions obtained for $p_{r}$ and $p_{t}$ depend on the energy density $\rho$ and on the explicit forms of the functions $F(r)$ and $b(r)$. In this work, we are interested in studying the behavior of these pressures and evaluating the resulting energy conditions in a dark matter context, considering the generalized Ellis-Bronnikov model for the wormhole shape function $b(r)$. We will also assume the Starobinsky-like model for the $f(R)$ function, motivated by its phenomenological viability in the cosmological context. These models are presented in what follows.
## III Generalized Ellis-Bronnikov wormhole sourced by anisotropic dark matter in the Starobinsky-like model
The generalized Ellis-Bronnikov model of wormholes was discussed in Kar:1995jz ; DuttaRoy:2019hij ; Sharma:2021kqb as a two-parameter ($n$ and $r_{0}$) family of simple Lorentzian wormholes, where $n$ is a free even-integer exponent and $r_{0}$ is the throat radius. This wormhole has a shape function given by
$b(r)=r-r^{(3-2n)}(r^{n}-r_{0}^{n})^{(2-\frac{2}{n})}.$ (19)
The exponent $n=2$ furnishes the original and simplest solution, for which $b(r)=r_{0}^{2}/r$.
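Indeed, a one-line check (ours) confirms both the reduction and the throat conditions:
$b(r)\big{|}_{n=2}=r-r^{-1}\left(r^{2}-r_{0}^{2}\right)=\frac{r_{0}^{2}}{r},\qquad b(r_{0})=r_{0},\qquad b^{\prime}(r_{0})=-\frac{r_{0}^{2}}{r^{2}}\bigg{|}_{r=r_{0}}=-1<1,$
so the throat and flaring-out requirements of Section II are satisfied.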
The $f(R)$ model that we will henceforth employ is $f(R)=R+\alpha R^{m}$, which has a cosmological motivation, namely the pure Starobinsky model ($m=2$). It has been shown that models with other values of $m$ (specifically, $1<m\leq 2$) can in addition cure the formation of curvature singularities in gravitational collapse Bamba .
Considering that $R=2b^{\prime}(r)/r^{2}$ and $b(r)$ is given by (19), we have
$F(r)=1+2^{m-1}\alpha
m\left[\frac{1}{r^{2}}+\frac{1}{r^{2n}}(2n-3)\left(r^{n}-r_{0}^{n}\right)^{2-\frac{2}{n}}+\frac{2}{r^{n}}(1-n)\left(r^{n}-r_{0}^{n}\right)^{1-\frac{2}{n}}\right]^{m-1}.$
(20)
Finally, let us define the dark matter content that we will use in our analysis. As in Jusufi:2019knb ; Xu:2020wfm , we will adopt simple but well-motivated dark matter density profiles within the $\Lambda$CDM scenario. Compilations of observations of galaxies and their clusters, supported by numerical simulations Dubinski:1991bm ; Navarro:1995iw ; Navarro:1996gj , suggest that the dark matter halo can be described by the Navarro-Frenk-White (NFW) density profile, whose analytical expression is defined as
$\rho=\rho_{NFW}=\frac{\rho_{s}}{\frac{r}{R_{s}}\left(1+\frac{r}{R_{s}}\right)^{2}},$
(21)
where $\rho_{s}$ is the density within the central region dominated by dark matter and $R_{s}$ is its characteristic radius. The NFW density profile represents a large family of dark matter models for which the collisional effects between particles are very weak.
The second case analyzed is based on the Bose-Einstein condensate dark matter model and can be described by the Thomas-Fermi (TF) profile, which is most relevant at small galactic distances. In fact, the interactions of dark matter particles can no longer be neglected in the inner regions of galaxies, so there the dark matter ceases to be cold. The TF profile can be represented in the form Boehmer:2007um
$\rho=\rho_{TF}=\rho_{s}\frac{\sin(kr)}{kr},$ (22)
with $k=\pi/R_{s}$. The last case to be considered is the pseudo-isothermal (PI) profile, related to a class of dark matter models present in theories of modified gravity known as MOND (modified Newtonian dynamics) Begeman:1991iy . The PI profile is given by
$\rho=\rho_{PI}=\frac{\rho_{s}}{1+\left(\frac{r}{R_{s}}\right)^{2}}.$ (23)
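For concreteness, the short C++ sketch below (ours, not part of the original analysis; the function names are our own) evaluates the three profiles of Eqs. (21)-(23) with the parameter values used in the figures, $\rho_{s}=0.05$ and $R_{s}=10$ in Planck units:
```cpp
// Ours: numerical evaluation of the NFW, TF and PI dark matter profiles
// of Eqs. (21)-(23), with rho_s = 0.05 and R_s = 10 (Planck units).
#include <cmath>
#include <cstdio>

const double PI_ = 3.14159265358979323846;
const double rho_s = 0.05, R_s = 10.0, k = PI_ / R_s;

double rho_nfw(double r) { double x = r / R_s; return rho_s / (x * (1.0 + x) * (1.0 + x)); }
double rho_tf(double r)  { return rho_s * std::sin(k * r) / (k * r); }
double rho_pi(double r)  { double x = r / R_s; return rho_s / (1.0 + x * x); }

int main() {
    // The radial coordinate never drops below the throat radius r0 = 1.
    for (double r = 1.0; r <= 5.0; r += 1.0)
        std::printf("r = %.1f  NFW = %.4f  TF = %.4f  PI = %.4f\n",
                    r, rho_nfw(r), rho_tf(r), rho_pi(r));
    return 0;
}
```
Near the throat ($r=1$) this sketch reproduces the behavior visible in Fig. 1: the TF and PI densities are very close to each other, while the NFW density is noticeably higher.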
In the context of wormhole studies, these dark matter density profiles are examined in Jusufi:2019knb ; Xu:2020wfm and are depicted here in Fig. 1 as functions of the radial coordinate, $r$. The NFW profile appears to grow indefinitely near the origin $r=0$; however, this feature stems from the relatively low resolution of the numerical simulations used (of order 1 kpc). In any case, the radial coordinate is always greater than (or equal to) the wormhole throat radius, $r_{0}$, so the origin lies outside our analysis. Fig. 1 also reveals that, close to the wormhole throat, the dark matter density is positive in all cases and very similar in the TF and PI models, while the NFW profile is higher than the others.
Figure 1: The behavior of dark matter density $\rho$ as a function of radial
distance $r$ for the three analyzed profiles. The parameter settings are:
$\rho_{s}=0.05$ and $R_{s}=10$ in Planck units.
## IV Energy conditions
In order to investigate the energy conditions associated with the anisotropic dark matter sourcing the wormhole, we substitute Eq. (20) and its derivatives, as well as the dark matter density profiles, into Eqs. (17) and (18), thereby finding the expressions for the radial and lateral pressures, respectively. As these are quite involved, we will discuss the energy conditions graphically. Before doing so, we note that, by Eqs. (17)-(18), the Null Energy Condition (NEC) and the Weak Energy Condition (WEC), for which $\rho+p_{i}\geq 0$ (and also $\rho\geq 0$ for the latter), are independent of the employed dark matter model. On the other hand, the more restrictive Strong Energy Condition (SEC, $\rho+p_{i}\geq 0$ and $\rho+\sum_{i}p_{i}\geq 0$) is model-dependent.
Figure 2: Radial dependence of the energy density and pressure combinations for $n=2$ (top) and $n=4$ (bottom) Ellis-Bronnikov wormhole solutions, in the pure Starobinsky $f(R)$ model ($m=2$) and for the three dark matter profiles, considering $\alpha=0$ (left panel) and $\alpha=0.5$ (right panel), $\rho_{s}=0.05$, $R_{s}=10$, and $r_{0}=1$, in Planck units.
In Fig. 2, we depict the energy density and pressure combinations as functions of the radial coordinate in order to examine NEC, WEC, and SEC, considering that $\rho>0$. Notice that for $n=2$ (the simplest Ellis-Bronnikov solution) and $\alpha=0$ (i.e., in general relativity), none of these conditions is obeyed in all space ($r\geq r_{0}$), or they are only partially satisfied if we note that $\rho+p_{t}=0$. For $n\geq 4$, they are all met exactly at the wormhole throat. However, as we can see from the general expressions valid at this point, namely
$\left.(\rho+p_{r}+2p_{t})\right|_{r\rightarrow r_{0}}=\left\{\begin{array}{ll}2^{m}m(3-2m)\alpha\left(-\frac{1}{r_{0}^{2}}\right)^{m}-\frac{2}{r_{0}^{2}}\left(1+\frac{\rho_{s}r_{0}R_{s}^{3}}{(r_{0}+R_{s})^{2}}\right),&n=2;\\ 2^{m}m(6m-5)\alpha\left(\frac{1}{r_{0}^{2}}\right)^{m}+\frac{2}{r_{0}^{2}}\left(1-\frac{\rho_{s}r_{0}R_{s}^{3}}{(r_{0}+R_{s})^{2}}\right),&n=4;\\ 2^{m}m\alpha\left(\frac{1}{r_{0}^{2}}\right)^{m}+\frac{2}{r_{0}^{2}}\left(1-\frac{\rho_{s}r_{0}R_{s}^{3}}{(r_{0}+R_{s})^{2}}\right),&n\geq 6,\end{array}\right.$ (24)
for $\alpha=0$, a greater concentration of dark matter may make it difficult to fulfill SEC, since the total sum of the pressures and the density becomes negative when $\rho_{s}>(r_{0}+R_{s})^{2}/(r_{0}R_{s}^{3})$.
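To make this threshold concrete (our numerical illustration, using the fiducial values adopted in the figures, $r_{0}=1$ and $R_{s}=10$ in Planck units):
$\rho_{s}^{\rm crit}=\frac{(r_{0}+R_{s})^{2}}{r_{0}R_{s}^{3}}=\frac{121}{1000}=0.121,$
so the fiducial density $\rho_{s}=0.05<0.121$ lies below the threshold; e.g., for $n=4$ and $\alpha=0$, the second line of Eq. (24) gives $(\rho+p_{r}+2p_{t})|_{r_{0}}=2\left(1-\frac{0.05\times 1000}{121}\right)\approx 1.17>0$.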
Figure 3: The dominant energy condition (DEC): Radial dependence of the energy density and pressures $p_{r}$ and $p_{t}$ for $n=2$ (top) and $n=4$ (bottom) Ellis-Bronnikov wormhole solutions, in the pure Starobinsky $f(R)$ model ($m=2$) and the NFW model for dark matter, considering $\alpha=0$ (left panel) and $\alpha=0.5$ (right panel), $\rho_{s}=0.05$, $R_{s}=10$, and $r_{0}=1$, in Planck units. Figure 4: Parameter space highlighting the regions of throat radius and dark matter characteristic density that allow SEC to be fulfilled at the wormhole throat, for some values of $m$, with $n=4$, $\alpha=0.8$, $R_{s}=9$, in the NFW DM model (Planck units are used).
Moving away from general relativity (i.e., $\alpha\neq 0$; this constant must be positive in order to be consistent with cosmological observations Kehagias:2013mya ), for $n=2$, NEC, WEC and SEC are not met near or at the wormhole throat; however, SEC is partially obeyed in a limited region of space sufficiently far from it. For $n=4$, those conditions are all satisfied near the throat, provided the dark matter content is not too high. Moreover, according to the second line of Eq. (24), a larger coupling constant $\alpha$ can compensate for the growth of the dark matter density, so that SEC continues to be satisfied at the throat. This conclusion is also valid for the solutions with $n\geq 6$. Away from the throat, these energy conditions are partially fulfilled in all cases.
Although the three dark matter models under consideration are used in different contexts, they exhibit the same properties with regard to wormhole formation, as evidenced by the previous analysis of the energy conditions; that is, the influence of the different DM density profiles is very similar, with some of the associated curves practically coinciding. We will therefore focus on the NFW profile in what follows.
Regarding the Dominant Energy Condition (DEC), for which $-\rho\leq p_{i}\leq\rho$, Fig. 3 depicts $\rho-|p_{i}|$ as a function of the radial coordinate, $r$, for several situations. Considering $\alpha=0$ and $n=2$, the condition is partially met with respect to the lateral pressure, in all space. For $n\geq 4$, DEC is not satisfied anywhere. When $\alpha\neq 0$, DEC is partially satisfied in a few regions of space, for $n\geq 2$.
In Fig. 4 we depict the parameter space in which the colored region indicates where $\rho+p_{r}+2p_{t}\geq 0$ holds at the wormhole throat, for some values of $m$. Notice that the parameter space is largest for the Starobinsky model ($m=2$) at small throat radii and high DM concentrations; in other words, it is the most energetically favorable there. However, the other power-law $f(R)$ models ($m<2$) are the most favored when one considers larger throat radii and low DM densities. Therefore, from the point of view of the fulfillment of SEC, the model with the greater deviation from GR prevails over the others in scenarios with very high DM densities and microscopic wormholes, as expected.
## V Conclusions
We have studied generalized Ellis-Bronnikov (E-B) traversable wormhole solutions, which depend on one free parameter besides the throat radius, in $f(R)$ extended gravity. The shape function associated with the wormhole was therefore fixed. Because the object is zero-tidal, we can only consider anisotropic dark matter as a source for it. We have analyzed three phenomenological DM models, namely, those of Navarro-Frenk-White (NFW), Thomas-Fermi (T-F), and pseudo-isothermal (P-I). We then found the corresponding field equations by particularizing the approach to the Starobinsky-like power-law $f(R)$ model.
Next, we analyzed the energy conditions, focusing on SEC and DEC, since they depend on the profile of the material source, differently from what occurs with NEC and WEC. Dark matter does not obey SEC anywhere in GR (where the coupling constant $\alpha$ vanishes) for the simplest E-B wormhole ($n=2$). These energy conditions are met exactly at the wormhole throat in the other E-B models ($n\geq 4$); nevertheless, a higher amount of dark matter can undermine this fulfillment. Concerning DEC, it is not obeyed anywhere in any of these situations.
Taking into account the quadratic gravity, for the $n=2$ E-B wormhole, SEC is partially fulfilled in a limited region of space. On the other hand, these energy conditions are entirely fulfilled near and at the wormhole throat for $n\geq 4$. Again, a higher concentration of dark matter can spoil such fulfillment, as in the GR case, but a larger coupling constant, as well as a smaller throat size, can restore it at the wormhole throat. Fig. 4 illustrates this competition between the parameters involved for some power-law $f(R)$ models, with the pure Starobinsky model ($m=2$) being the most energetically favorable for small throat radii and high concentrations of DM, since the associated parameter space is the largest; otherwise, the power-law $f(R)$ models with smaller exponents are the most favored. Still in this context, we have shown that, unlike in GR, DEC is partially satisfied in limited regions of space for all $n$.
We have thus shown that different models of DM present the same properties with regard to wormhole formation. Finally, we conclude that anisotropic dark matter can support a class of traversable wormholes as non-exotic matter, at least in regions near and at the throat, in the simplest phenomenological $f(R)$ extended theory of gravity.
###### Acknowledgements.
The authors thank the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), grants no. 308268/2021-6 (CRM) and no. 311732/2021-6 (RVM), for financial support.
## References
* (1) N. Aghanim et al., Planck collaboration, “Planck 2018 results. VI. Cosmological parameters”, Astron. Astrophys., 641, A6 (2020). doi: 10.1051/0004-6361/201833910, arXiv: 1807.06209, [astro-ph.CO], Erratum: Astron.Astrophys. 652, C4 (2021).
* (2) S. Trujillo-Gomez, A. Klypin, J. Primack, and A. J. Romanowsky, “Galaxies in LCDM with Halo Abundance Matching: luminosity-velocity relation, baryonic mass-velocity relation, velocity function and clustering”, Astrophys. J., 742, 16 (2011), doi: 10.1088/0004-637X/742/1/16, arXiv: 1005.1289, [astro-ph.CO].
* (3) N. Sarkar, S. Sarkar, F. Rahaman, P. K. F. Kuhfittig, and G. S. Khadekar, “Possible formation of wormholes from dark matter in an isothermal galactic halo and void”, Mod. Phys. Lett. A, 34, 1950188 (2019), doi: 10.1142/S0217732319501888 arXiv: 1905.02531, [physics.gen-ph].
* (4) K. Jusufi, M. Jamil, and M. Rizwan, “On the possibility of wormhole formation in the galactic halo due to dark matter Bose-Einstein condensates”, Gen. Rel. Grav. 51, 102 (2019), doi: 10.1007/s10714-019-2586-2, arXiv: 1903.01227, [gr-qc].
* (5) Z. Xu, M. Tang, G. Cao and S-N. Zhang, “Possibility of traversable wormhole formation in the dark matter halo with isotropic pressure”, Eur. Phys. J. C, 80, 70 (2020), doi: 10.1140/epjc/s10052-020-7636-0.
* (6) M. S. Morris and K. S. Thorne, “Wormholes in space-time and their use for interstellar travel: A tool for teaching general relativity,” Am. J. Phys. 56, 395 (1988).
* (7) M. Visser, “Lorentzian Wormholes: From Einstein to Hawking”, American Institute of Physics (1996).
* (8) J. Maldacena and L. Susskind, “Cool horizons for entangled black holes”, Fortsch. Phys. 61, 781 (2013), doi: 10.1002/prop.201300020, arxiv: 1306.0533 [hep-th].
* (9) J. Maldacena, D. Stanford, and Z. Yang, “Diving into traversable wormholes”, Fortsch. Phys., 65 1700034 (2017). doi: 10.1002/prop.201700034, arXiv: 1704.05333 [hep-th].
* (10) J. Maldacena and A. Milekhin, “Humanly traversable wormholes”, Phys. Rev. D103, 066007 (2021), doi: 10.1103/PhysRevD.103.066007, arXiv: 2008.06618 [hep-th].
* (11) J. Gonzalez and J. Herrero, “Graphene wormholes: A Condensed matter illustration of Dirac fermions in curved space”, Nucl. Phys. B825, 426 (2010), doi: 10.1016/j.nuclphysb.2009.09.028, arXiv: 0909.3057 [cond-mat.mes-hall].
* (12) G. Alencar, V. B. Bezerra, and C. R. Muniz, “Casimir wormholes in $2+1$ dimensions with applications to the graphene”, Eur. Phys. J. C81, 924 (2021), doi: 10.1140/epjc/s10052-021-09734-0, arXiv: 2104.13952 [gr-qc].
* (13) P. Pavlovic and M. Sossich, “Wormholes in viable $f(R)$ modified theories of gravity and Weak Energy Condition”, Eur. Phys. J. C75, 117 (2015), doi: 10.1140/epjc/s10052-015-3331-y, arXiv: 1406.2509 [gr-qc].
* (14) M. R. Mehdizadeh and A. H. Ziaie, “Traversable wormholes in Einsteinian cubic gravity”, Mod. Phys. Lett. A35, 2050017 (2019), doi: 10.1142/S0217732320500170, arXiv: 1903.10907 [gr-qc].
* (15) P. Sahoo, P. H. R. S. Moraes, M. M. Lapola, and P. K. Sahoo, “Traversable wormholes in the traceless f(R,T) gravity”, Int. J. Mod. Phys. D30, 2150100 (2021). doi: 10.1142/S0218271821501005, arXiv: 2012.00258, [gr-qc].
* (16) R. Moti and A. Shojai, “ Traversability of quantum improved wormhole solution”, Phys. Rev. D101, 124042 (2020), doi:10.1103/PhysRevD.101.124042 arXiv: 2006.06190 [gr-qc].
* (17) G. Alencar, V. B. Bezerra, C. R. Muniz, and H. S. Vieira, Universe 7, 238 (2021), “Ellis–Bronnikov Wormholes in Asymptotically Safe Gravity”, doi:10.3390/universe7070238 arXiv:2106.02476 [gr-qc].
* (18) J. Sadeghi, B. Pourhassan, S. N. Gashti, and S. Upadhyay, “Smeared mass source wormholes in modified f(R) gravity with the Lorentzian density distribution function”, Mod. Phys. Lett. A37, 2250018 (2022), doi: 10.1142/S0217732322500183, arXiv: 2203.04543 [gr-qc].
* (19) A. Kehagias, A. M. Dizgah, and A. Riotto, “Remarks on the Starobinsky model of inflation and its descendants”, Phys. Rev. D, 89, 043527 (2014). doi: 10.1103/PhysRevD.89.043527.
* (20) S. V. Ketov, “On the supersymmetrization of inflation in f (R) gravity”, Prog. Theor. Exp. Phys., 2013 123B04 (2013), doi: 10.1093/ptep/ptt105.
* (21) S. Aziz, S. K. Jha, and A. Rahaman, “The inflationary scenario in the $f(R)$ gravity model with a $R^{4}$ term”, Class. Quant. Grav., 38, 225008 (2021). doi: 10.1088/1361-6382/ac2dd0.
* (22) A. K. Sharma and M. M. Verma, “Power-law Inflation in the $f$(R) Gravity”, Astrophys. J. 926, 29 (2022) doi: 10.3847/1538-4357/ac3ed7.
* (23) S. Nojiri and S. D. Odintsov, “Introduction to Modified Gravity and Gravitational Alternative for Dark Energy”, Int. J. Geom. Met. Mod. Phys., 04, 115 (2007). doi: 10.1142/S0219887807001928.
* (24) T. P. Sotiriou and V. Faraoni, “f(R) theories of gravity” Rev. Mod. Phys. 82, 451 (2010), doi: 10.1103/RevModPhys.82.451.
* (25) A. A. Starobinsky, “Spectrum Of Relict Gravitational Radiation And The Early State Of The Universe”, J. Exp. Theor. Phys. Lett. 30, 682 (1979).
* (26) A. A. Starobinsky, “A new type of isotropic cosmological models without singularity”, Phys. Lett. B. 91, 99 (1980).
* (27) Y. Akrami et al., Planck Collaboration, “Planck 2018 results X. Constraints on inflation”, Astron. and Atroph. 641, A10 (2020). doi: 10.1051/0004-6361/201833887, arXiv: 1807.06211v2 [astro-ph.CO].
* (28) S. Kar, S. Minwalla, D. Mishra, and D. Sahdev, Resonances in the transmission of massless scalar waves in a class of wormholes, Phys. Rev. D51, 1632 (1995), doi: 10.1103/PhysRevD.51.1632,
* (29) P. D. Roy, S. Aneesh, and S. Kar, “Revisiting a family of wormholes: geometry, matter, scalar quasinormal modes and echoes,” Eur. Phys. J. C. 80, 9, 850 (2020).
* (30) V. Sharma and S. Ghosh, “Generalised Ellis-Bronnikov wormholes embedded in warped braneworld background and energy conditions”, Eur. Phys. J. C81, 1004(2021), doi: 10.1140/epjc/s10052-021-09789-z, arXiv: 2111.07329 [gr-qc].
* (31) K. Bamba, S. Nojiri, and S. D. Odintsov, “Time-dependent matter instability and star singularity in f(R) gravity”, Phys. Lett. B698, 451 (2011). doi: 10.1016/j.physletb.2011.03.038.
* (32) J. Dubinski and R. G. Carlberg, “The Structure of cold dark matter halos,” Astrophys. J. 378 (1991), 496 doi:10.1086/170451
* (33) J. F. Navarro, C. S. Frenk and S. D. M. White, “The Structure of cold dark matter halos,” Astrophys. J. 462 (1996), 563-575 doi:10.1086/177173 [arXiv:astro-ph/9508025 [astro-ph]].
* (34) J. F. Navarro, C. S. Frenk and S. D. M. White, “A Universal density profile from hierarchical clustering,” Astrophys. J. 490 (1997), 493-508 doi:10.1086/304888 [arXiv:astro-ph/9611107 [astro-ph]].
* (35) C. G. Boehmer and T. Harko, “Can dark matter be a Bose-Einstein condensate?,” JCAP 06 (2007), 025. doi:10.1088/1475-7516/2007/06/025 [arXiv:0705.4158 [astro-ph]].
* (36) K. G. Begeman, A. H. Broeils and R. H. Sanders, “Extended rotation curves of spiral galaxies: Dark haloes and modified dynamics,” Mon. Not. Roy. Astron. Soc. 249 (1991), 523.
# autohaem: 3D printed devices for automated preparation of blood smears
Samuel McDermott (<EMAIL_ADDRESS>) and Jaehyeon Kim, Department of Physics, Cambridge University, UK; Aikaterini Anna Leledaki, Duncan Parry, Louis Lee, and Alexandre Kabla, Department of Engineering, Cambridge University, UK; Catherine Mkindi, Ifakara Health Institute, Bagamoyo, Tanzania; Richard Bowman, Department of Physics, Bath University, UK; Pietro Cicuta, Department of Physics, Cambridge University, UK
###### Abstract
The process of making blood smears is common in both research and clinical
settings, for investigating the health of blood cells and the presence of
blood-borne parasites, and it is very often carried out manually. We focus here on smears for malaria diagnosis and research, which are frequently analysed by optical microscopy and require a high quality. Automating the smear preparation promises to increase throughput and to improve the quality and
consistency of the smears. We present here two devices (manual and motorised)
designed to aid in the making of blood smears. These are fully documented,
open-source hardware, and an important principle was to make them easily
fabricated locally anywhere. Designs and assembly instructions are freely
available under an open licence. We also describe an image analysis pipeline
for characterising the quality of smears, and use it to optimise the settings
and tunable parameters in the two devices. The devices are shown to perform as
well as expert human operators, whilst not requiring a trained operator and
offering potential advantages in reproducibility and standardisation across
facilities.
## I Introduction
Blood smears are used in the diagnosis of a variety of hematological disorders such as anemia. They are also the preferred method of diagnosis of parasitic
infections, such as malaria. Malaria is a widespread parasitic disease,
affecting a large fraction of the world’s population. Our work is motivated by
our experience in both research laboratories and clinical settings in lower-
income countries, where blood smears are made routinely. In large-scale blood
testing centers (e.g. large hospitals in developed countries) this process is
automated and high throughput, but there are relatively few of these
facilities. We focus here on the challenges faced in the much more widespread
situation of smears being performed by hand. Smear preparation is time
consuming, repetitive and labour-intensive. High quality and consistent blood
smears are essential for analysis by optical microscopy of blood cells, and
therefore to achieve accurate diagnoses or characterisation for research.
The current ‘gold standard’ for malaria diagnosis is by optical microscopy
examination of blood smears [1]. A thin film of the patient’s blood is fixed onto a microscope slide and stained. The microscopist looks at the smear,
counting the parasites in various fields of view. This process is highly
sensitive for clinical malaria, allowing differentiation of malaria species
and parasite stages, and can be used to calculate parasite density in the
blood [2].
Unfortunately, there is ample evidence that performance of routine microscopy
in health facilities is poor [3, 4, 5, 6]. Health facilities preparing good
blood films have been shown to be 24 times more likely to have accurate
microscopy diagnosis than poor quality blood films, and well stained blood
smears are 10 times more likely to have accurate microscopy diagnosis compared
to poorly stained blood films [5]. Moreover, competency in preparing the
samples consistently appears to be lower than required [7, 8, 9]. Common
issues with blood smears include greasy and dirty slides, unevenly spread
films (too thick, thin, or streaky), poorly positioned blood films, too much
or too little blood used, and issues with the Giemsa staining protocol, such
as incorrect dye concentration or inconsistent buffer solution.
Our team is actively developing low-cost automation of the imaging and
analysis of blood smears for malaria [10], which tackles one aspect of
robustness and consistency in the diagnostics. It is clear that even the best
image analysis algorithms will struggle with poor quality blood smears. Poor
quality smears cannot even be used for human diagnosis. In the best scenario,
they need to be remade, delaying effective treatment. This is also true in
research settings, where poor quality smears can hamper experiments and affect
repeatability.
This paper describes our work in creating a series of devices which we call
“autohaem”. autohaem devices aim to enable even non-experts to produce
consistent, high quality, thin film blood smears at low cost. We introduce two
devices, mechanical and electronic, both with completely open designs. They
can be manufactured using 3D printers and assembled using common tools,
enabling labs to build their own devices. This open hardware approach is
already starting to revolutionise various aspects of research laboratories
[11, 12, 10]. 3D printers are becoming increasingly common in lower-income countries for prototyping and manufacturing. autohaem devices are fully open-
source, with complete printing and assembly instructions [13, 14], assisting
research laboratories who regularly make blood smears and so supporting
research efforts in countries with lower resources.
A pipeline for automated analysis of smear quality is presented and used for
device optimisation. Red Blood Cells (RBCs), at the typical hematocrit for
malaria research, are used as the testing media. This pipeline will also be
suitable for a more systematic analysis of blood smear preparation, for
example to help with training and evaluation of technicians.
Figure 1: There are a large number of variables in the key steps of the manual smear protocol as recommended by the WHO, represented in these images. Different users (or even the same person over time) therefore have a high chance of performing the procedure inconsistently, leading to unusable smears and/or making data not comparable across labs or over time. a) Placement of drop of blood onto microscope slide. b) Pulling back the spreader slide into the blood drop until blood is drawn along its edge. c) Pushing the spreader slide along the microscope slide, at an angle of 45°, pulling the blood behind. d) The completed blood smear.
## II Design parameters
In order to achieve a sustainable, useful device, several options were considered, and materials and parameters were chosen and optimised. This section describes the main design process, under the constraints of low cost and local manufacture.
### II.1 3D printing
The devices need to be able to be 3D printed: PLA was chosen as the material
because it is a widely available and relatively cheap filament used for fused
filament fabrication 3D printers. It is recyclable and biodegradable (under
certain industrial conditions). Other thermoplastics are likely to perform
broadly the same; stiffer materials will provide an improved rigidity to the
frame, but the slide holder geometry might have to be tweaked. If the devices
become contaminated with blood, PLA can be cleaned using 70% ethanol.
To be printable on the widest possible range of printers, the 3D printed parts were designed to meet certain requirements. All parts were designed to be printed with a layer height of 150 µm. None of the parts requires support material, which would either require a different material such as PVA, or extra manufacturing time to clean the parts after printing. The parts are all designed to have good adhesion to the print bed, by using curved corners and orientating the parts such that the larger surfaces are on the print bed. Therefore, they should not require ‘brims’ in order to adhere to the print bed, but we recommend that the print bed be heated. Where screws are likely to be undone, or are required to be extra strong, nut traps are used to hold nuts in place. The devices have been printed and tested on Ultimaker (S3, S5) and Prusa (i3 MK3) printers.
Figure 2: The first device presented here to improve smears, the manually
operated autohaem smear. a) 3D-rendered exploded view of autohaem smear
showing the non-3D printed parts. b) Photo of an autohaem smear showing the
assembled device with the two microscope slides in their positions.
### II.2 Non-printed parts
Non-printed parts were chosen to ensure that the devices were suitable for
assembly around the world. Common machine fixings such as M3 hex head screws
were used. Other non-printed parts are clearly specified in the assembly
instructions, with examples of where to purchase them from.
Likewise, electronic components were specified to be purchasable from many
large electronics suppliers around the world. They can be soldered using a
simple hand soldering iron, and powered using common power supplies.
Figure 3: The autohaem smear device makes smear production more reproducible,
as characterised in this work. Once the slides and drop of blood have been
loaded, the spreader slide remains at a fixed angle and so the operator only
needs to move the slider handle. The sequence of images shows the key steps in
the operation of the non-motorised device. a) Placement of microscope slide in
slot. b), c) Placement of drop of blood on microscope slide. d) Insertion of
spreader slide into slider slot. e) Pulling back slider. f) Pushing slider
forward. g) Lifting up slider and removing microscope slide with smear. h)
Removing spreader slide. (Photos reproduced from [13].)
### II.3 Spreader blade
Most commercial automated smearing devices use single-use plastic blades as spreaders. This is not a sustainable solution, as it generates large amounts of plastic waste. It is also not suitable for devices in lower-income countries and remote testing facilities, where procurement of proprietary single-use parts is not possible. We designed autohaem devices to function solely with microscope slides, which is more sustainable.
## III Conventional process of smearing
Blood smears are a common procedure when working with blood and blood diseases, especially in lower-income countries, where blood smears are made manually. The WHO’s basic malaria microscopy learner’s guide [15] describes the currently accepted way of making manual thin blood smears (key steps are illustrated in Figure 1):
1. Drop the blood on the slide.
2. Using another slide as a spreader, touch the drop of blood with its edge. The blood should be drawn along the length of the edge.
3. Push the spreader along the slide, at an angle of 45° whilst remaining in constant contact with the slide. Our analysis of human experts shows this is done at approximately 6 cm/s.
The steps in making blood smears for research are essentially the same, although in many research applications the blood product might be a refined fraction of the whole blood; for example, typically only the RBCs are used for culturing blood-stage malaria in research. For accurate optical microscopy diagnosis, which in this application implies counting parasites (and in some cases identifying their species), the thin film blood smears must have RBCs that are evenly spread. On a macro level this can be seen as an absence of linear ridges or ‘hesitation marks’. In terms of RBC density, a sweet spot needs to be achieved where cells are not overlapping, but are also not so sparse that many fields of view are required to observe a large enough cell number. The typical parasitemia for research cultures is 5%. For quantifying parasitemia for clinical purposes, the CDC recommends counting 500 RBCs for high parasitemia ($>10\%$) and 2000 RBCs for low parasitemia ($<1\%$) [16].
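As a rough illustration of why more cells must be counted at low parasitemia (our estimate, assuming simple Poisson counting statistics, not a figure from the CDC): at 1% parasitemia, counting 2000 RBCs yields on average 20 parasites, giving a relative statistical uncertainty of about $1/\sqrt{20}\approx 22\%$; counting only 500 RBCs would leave roughly 5 parasites and an uncertainty near 45%.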
Figure 4: The second device presented in this work is motorised and improves
ease of use. a) The 3D rendered exploded view of autohaem smear+ showing the
non-3D printed parts. b) Photo of the assembled autohaem smear+ showing the
two microscope slides in their positions. (Photo reproduced from [14].) c)
Schematics of the electronics modules of autohaem smear+.
## IV The autohaem smear device
autohaem smear is the simpler and more portable of the two devices developed here. It is entirely mechanical and requires no electricity, making it suitable for labs or clinics with an unreliable electricity supply. The device fixes the angle of the spreader slide, ensures that the motion of the spreader is restricted to the horizontal axis, and keeps the spreader in constant contact with the slide. The user is able to change the speed of their smearing, but as this is the only user-changeable parameter, it is much simpler to make reproducible and consistently high quality smears than with a fully manual approach.
### IV.1 Design
The autohaem smear device is shown in Figure 2. It consists of a 3D printed main body, which contains a slot for the horizontal microscope slide. At one end of the device, a living hinge holds two parallel ground steel rods; the hinge allows the rods to be lifted to give access to the microscope slide underneath. The rods sit on two rod rests with integrated magnets and are held together at the other end by a handle. This ensures that the rods remain secured in position during smearing. On the rods is the slider, which moves smoothly along the rods on two Oilite bushings and has a handle for the user to move. The slider contains a slit into which the spreader slide (a second microscope slide) is inserted. The spreader slide rests on the horizontal slide with a force due to its weight.
### IV.2 Operation of autohaem smear
Operation of the autohaem smear device is shown in Figure 3. First, the user
lifts up the metal rods with the handle. They place the microscope slide down
into the slot and place the drop of blood onto the slide. They then lower the
rods into the lower position where they are held in place by the magnets. A
second microscope slide, acting as the spreader, is inserted into the slot of
the slider until it touches the slide underneath. The user then pulls the
handle back until the edge of the spreader meets the blood and the blood runs
along the edge of the spreader. The user then pushes the slider forward with a
steady motion to produce the smear. The slider can then be lifted, the slide
with the smear removed, and the spreader slide removed through the bottom of
the slider to prevent contamination of residual blood. The edge of the
spreader slide should be cleaned, and then can be used as the next microscope
slide. The slide with the smear can then be fixed, stained and examined,
according to the WHO protocols [15].
### IV.3 Angle of smear
The angle that the spreader slide makes with the microscope slide is constrained by the autohaem smear. By designing modified versions of the slider component, it is therefore possible to adjust this angle to investigate how it affects the quality of the smear, in order to determine the optimal angle.
## V The autohaem smear+ device
The second device presented here is autohaem smear+. This is an electro-mechanical device, suitable for laboratories or hospitals with a high throughput of blood smears. One user could prepare the next blood sample while the device is smearing, or could operate two devices simultaneously, dropping the blood on one while the other is smearing. The device is able to control the angle and speed of the spreader slide, whilst keeping it level and in constant contact with the microscope slide. It is possible to program the speed of the smear (e.g. to optimise for the average environmental conditions of a particular lab, which might affect the smear), and this speed is then held constant for every smear, regardless of user.
### V.1 Hardware design
The device, as shown in Figure 4, consists of a 3D printed main body. This has a slot which holds the microscope slide in position, and it houses the electronics. The slider is also 3D printed and, as in the autohaem smear, holds the spreader slide at a fixed angle. It is attached to a lead screw nut, which drives its motion, on one end; on the other end is an inverted hook with a magnet on top. A ground steel metal rod is held above the slide and is used to guide the slider: the rod prevents the slider from rotating counter-clockwise, and the magnet prevents the slider from rotating clockwise. There is also housing for the Nema 17 motor and the limit switch.
### V.2 Electronics and Arduino code design
The slider is driven by an Iverntech Nema 17 stepper motor with an integrated
100 mm Tr8x8 lead screw. This was chosen because it is a common component in
the z-axis drive of 3D printers, and so are readily available. As it is a
stepper motor, its position is set precisely, enabling the slider to move at
precise positions and speeds. It is driven by a Polulu DRV8825 stepper motor
driver. This driver was chosen as it meets the high (1.5 A) current rating of
the motor without additional cooling. It also has current control, so that a
maximum current output can be set, allowing higher voltages and therefore
higher step rates. The driver is plugged into a stepper motor driver expansion
board. These generic boards (also usually used for 3D printers and CNC
machines) provide convenient, solder-free pinouts to the driver, along with a
motor connector, a decoupling capacitor, and dipswitches for microstepping.
The motor has a step size specification of $1.8^{\circ}$ and the lead screw a
pitch of 8 mm, meaning that 200 steps of the motor will move the slider 8 mm.
This was micro-stepped down to 1/2 step to increase smoothness, whilst being
able to maintain a suitable speed (the Arduino Uno can support approximately
4000 steps per second). This results in the motor moving the slider 8 mm in
400 half-steps, at a maximum speed of
$8\ \mathrm{cm\ s^{-1}}$. A 12 V, 2 A DC power supply
is used to power the motor, connected through an external DC socket onto the
motor controller expansion board.
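As a quick sanity check of this motion arithmetic, the following short Python sketch (ours, not the device firmware) reproduces the figures above:

```python
# Motion arithmetic for the lead-screw drive (a sketch, not device firmware).
FULL_STEPS_PER_REV = 200      # 1.8 degrees per full step
MICROSTEP_DIVISOR = 2         # driver set to 1/2 step
LEAD_MM = 8.0                 # Tr8x8 lead screw: 8 mm travel per revolution
MAX_STEP_RATE_HZ = 4000       # approximate Arduino Uno step-rate limit

steps_per_rev = FULL_STEPS_PER_REV * MICROSTEP_DIVISOR    # 400 half-steps
steps_per_mm = steps_per_rev / LEAD_MM                    # 50 half-steps per mm
max_speed_mm_s = MAX_STEP_RATE_HZ / steps_per_mm          # 80 mm/s = 8 cm/s

print(f"{steps_per_rev} half-steps per revolution")
print(f"{steps_per_mm:.0f} half-steps per mm")
print(f"max speed: {max_speed_mm_s / 10:.0f} cm/s")
```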
The device is controlled using an Arduino Uno, chosen for its availability and
the ease of using it to control external electronic components. The motor
driver expansion board is connected to the Arduino through the ‘step’,
‘direction’ and ‘enable’ pins. The Arduino is also connected to a limit
switch, which defines the position of the slider at start up and an
illuminated LED button, which is pressed to start the smear, and illuminates
whilst the motor is moving. The Arduino code makes use of the ‘AccelStepper’
library [17], which provides functions for moving the motor gracefully through
acceleration and deceleration, as well as providing convenient functions for
positioning and moving the motor.
Figure 5: Operation of autohaem smear+ only requires the push of a button,
once the slides and the blood drop have been loaded. The device performs very
reproducibly and autonomously, so higher overall throughput can be achieved by
preparing the blood while the device operates or by having multiple devices
running simultaneously. The sequence of images shows the key steps in the
operation of the electro-mechanical device. a) Plugging in device, device
performs calibration. b) Rotating slider and placing microscope slide in slot.
c), d) Placing drop of blood on microscope slide. e) Rotating slider into
position. f) Inserting spreader microscope slide. g) Pressing button to start
smear. h) Device is producing the smear. i) Removing spreader slide. (Photos
reproduced from [14].)
### V.3 Operation of autohaem smear+
As shown in Figure 5, the user starts the operation of the device by plugging
it in. On startup, the device performs an initial calibration: The slider
moves backward until it makes contact with the limit switch, then moves
forward to its idle position. The user then rotates the slider around the axis
of the lead screw, in order to place the microscope slide in the slot. They
place their drop of blood in the centre of the two circle cutouts. The user
rotates the slider back into position and inserts the spreader microscope
slide into the slot in the top. Upon pressing the button, the device starts to
create the smear. The LED illuminates to show the device is in operation and
the slider starts to move forwards. The slider then slows down as it enters
the region where the blood drop is. The spreader slide moves through
the drop of blood, so that the blood is drawn along the edge of the spreader
slide. The slider then moves forward at a user defined speed until it reaches
its idle position, where it stops. The user rotates the slider and removes the
spreader slide from underneath, preventing contamination with any residual
blood. The spreader slide can then be cleaned and re-used as the microscope or
spreader slide. The microscope slide is then removed. When microscope slides
are in short supply, they can be cleaned and reused after examination [15].
### V.4 Angle and speed of smear
Two parameters can be controlled when using autohaem smear+: the angle that
the spreader slide makes with the microscope slide, and the speed of the
slider as it creates the smear. These two parameters can be adjusted to find
the optimal angle and speed of the device.
## VI Availability and Documentation
Both autohaem devices have been designed to be open source and accessible from
their inception. The 3D models have been designed in OpenSCAD, with version
control using git and hosted on GitLab [18, 19]. Code reuse between the
different designs was achieved using submodules [20]. The OpenSCAD files are
compiled using GitLab’s CI/CD tools, enabling continuous development and
compilation of the STL parts. The Arduino code is also hosted on Gitlab [21].
Good documentation is key to enabling others to replicate designs or
collaborate on future improvements. The autohaem device assembly instructions
[13, 14] are written to be compiled with GitBuilding [22], allowing clear
instructions that are easy to version control. They are built using GitLabs
CI/CD tools and hosted on GitLab pages, meaning there are constantly up to
date instructions. They consist of a Bill of Materials, step-by-step
instructions with photos for reference, electronics diagrams [23], and a guide
for loading the software onto the Arduino.
## VII Characterisation of smear quality and Optimisation of devices
In order to test the performance of the devices, they were tested using human
blood. Currently the assessment of blood smear quality is done using
subjective judgement: a person is judged on their ability to make a good smear
when their assessor judges that it looks correct. This is clearly a process
that does not easily lead to general quality standards. Here we present a
methodical, analytical way of judging blood smears, allowing much finer
gradation of success.
### VII.1 Blood smearing
#### VII.1.1 Blood preparation
$200\ \mu\text{L}$ of $3\%$ hematocrit human RBCs in RPMI were spun down and
$150\ \mu\text{L}$ of supernatant was removed. The blood cells were resuspended
and $3\ \mu\text{L}$ was pipetted for each blood smear.
#### VII.1.2 Manual creation of smears
Four researchers who regularly make blood smears (experts) each created five
blood smears using their usual manual technique.
#### VII.1.3 autohaem smear blood smears
autohaem smear was used by a non-expert to make five blood smears for each of
these angles:
[$30^{\circ}$, $40^{\circ}$, $50^{\circ}$, $60^{\circ}$, $70^{\circ}$].
The speed of the smear was kept as consistent as possible (approximately
$5\ \mathrm{cm\ s^{-1}}$).
#### VII.1.4 autohaem smear+ blood smears
autohaem smear+ was used by a non-expert to make five blood smears for each of
these angles and speeds:
[$30^{\circ}$, $40^{\circ}$, $50^{\circ}$, $60^{\circ}$, $70^{\circ}$];
[$3\ \mathrm{cm\ s^{-1}}$, $4\ \mathrm{cm\ s^{-1}}$, $5\ \mathrm{cm\ s^{-1}}$,
$6\ \mathrm{cm\ s^{-1}}$, $7\ \mathrm{cm\ s^{-1}}$, $8\ \mathrm{cm\ s^{-1}}$].
Figure 6: The smear analysis pipeline can be used to evaluate the performance
of the autohaem devices, but also potentially for teaching and evaluation of
human operators. Once the blood smears have been produced, they are imaged
under a microscope, producing several micrographs. These micrographs are
segmented using Cellpose, with the default model and by specifying the average
RBC size in pixels. Cellpose generates a set of masks, one set for each micrograph. The
set contains a number of segmented regions, each one corresponding to an
individual RBC. These masks can be used as an input for CellProfiler, with a
custom CellProfiler pipeline used to extract relevant metadata about the
images and RBCs to be stored in data files for further analysis.
#### VII.1.5 Fixing and staining
All the smears were fixed by covering the smear with methanol for
$30\ \mathrm{s}$. They were then stained by covering the smear in
$20\%$ Giemsa in distilled water (pH 7.2) for 10
minutes before rinsing and drying, similar to the standard WHO protocol [15].
Table 1: Comparison of performance of segmentation tools for RBC smears. 10 fields of view of manually created blood smears were segmented using each tool. The images were also segmented by a human, and the number of correctly identified (true positive), unidentified (false negative) and wrongly-identified cells was calculated for each tool's output. From these results it was determined that Cellpose had the best precision and sensitivity.

Segmentation Tool | Precision | Sensitivity
---|---|---
CellProfiler (Thresholding) | 0.934 | 0.844
Weka | 0.988 | 0.917
ImageJ (Hough transform) | 0.970 | 0.948
Cellpose | 0.995 | 0.983
### VII.2 Imaging
All the blood smears were imaged on a Nikon Ti-E microscope using a
$60\times$, 1.4 NA objective lens. The camera was a FLIR Grasshopper 3
colour camera; the images were converted to greyscale for analysis with
Cellpose [24]. From a starting position, 20 images were captured moving along
the length of the smear (approximately 4 mm). The size of the field of view is
$1920\times 1200$ pixels ($188\ \mu\text{m}\times 117\ \mu\text{m}$).
### VII.3 Smear analysis pipeline
The smear analysis pipeline uses well documented existing tools to
quantitatively evaluate the quality of thin film blood smears. The pipeline is
illustrated in Figure 6. It is run as a single python notebook file [25].
The images, converted into grayscale, are segmented using Cellpose [24].
Cellpose is a generalist deep learning-based segmentation algorithm for cells
which does not require model retraining or parameter adjustments. Cellpose was
chosen as the segmentation tool for RBCs as it had the best precision and
sensitivity when tested using test smears of RBCs (Table 1). The typical
diameter of the cells (in pixels) is measured and used as a parameter to aid
the software, and Cellpose’s default training model was used. The software
outputs a .png image for each original image, where each ‘object’ or RBC
appears as a different brightness level in the image, creating a series of
masks corresponding to each RBC. Cellpose was used to segment all the images
using its python module [26].
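As an illustration (not the exact script used here), segmenting a folder of greyscale micrographs with the Cellpose python module might look like the following sketch; the folder paths and the diameter value are assumptions.

```python
# Sketch of RBC segmentation with the Cellpose python module (classic API);
# the folder paths and the diameter value are illustrative assumptions.
import glob
import numpy as np
from cellpose import models, io

files = sorted(glob.glob("micrographs/*.png"))
images = [io.imread(f) for f in files]   # greyscale micrographs

# Default model; channels=[0, 0] treats the input as single-channel greyscale.
model = models.Cellpose(model_type="cyto")
masks, flows, styles, diams = model.eval(
    images,
    diameter=60,       # typical RBC diameter in pixels, measured beforehand
    channels=[0, 0],
)

# Each mask labels every RBC with a distinct integer; store for CellProfiler.
for f, m in zip(files, masks):
    np.save(f.replace(".png", "_mask.npy"), m.astype(np.uint16))
```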
Figure 7: We optimised the two devices to the point that they have a
performance as good as, or better than, expert operators. These graphs show
the comparison of results from smears created by human experts (a mean taken
across the experts), versus autohaem smear, and autohaem smear+. The values
for the mean human expert are the center of the colourmaps, meaning that
regions in blue have a higher value and the regions in red have a lower value.
a) Median number of RBCs within a single field of view. b) Interquartile range
of RBCs within a single field of view. c) Mean index of aggregation per field
of view. d) Mean adjacent neighbours per RBC. e) Mean eccentricity per RBC.
These masks are then processed using CellProfiler [27]. CellProfiler is a
software tool that can create modular image analysis pipelines, with many
useful computational tools. A pipeline [25] was generated using the GUI, and
then run using the command line. This pipeline ingested the masks and
extracted metadata based on their filenames. Any RBCs on the boundary of the
image were removed. Its modules were used to calculate the following
parameters from each image:
* •
RBC count: The number of RBCs per field of view.
* •
RBC location: The centre coordinate of each RBC.
* •
RBC neighbours: How many adjacent neighbours each RBC has. This includes
neighbours which were previously filtered for being on the boundary of the
image.
* •
RBC eccentricity: How circular each of the RBCs is.
Having processed the images through the smear analysis pipeline, CSV data
files were compiled containing information about the fields of view and the
RBCs contained within. Using these records, the following analyses can be
carried out for each operator, device, angle and speed (these measures were
averaged over the 20 Fields of View per smear, for five smears):
* •
Median number of RBCs per field of view: To determine the density of the
smear. Too few RBCs in a field of view means that more fields of view are
required to be examined. Too many RBCs in a field of view means that the image
will be crowded and it will be hard to observe.
* •
Interquartile range of RBCs per field of view: To determine how consistent the
smears are, or whether there are ranges of high and low densities. A lower
interquartile range indicates a more consistent smear.
* •
Index of aggregation: The Clark and Evans index of aggregation is a standard
measure of clustering of a point pattern [28]. It is the ratio of the observed
mean nearest neighbour distance to that expected for a Poisson distribution of
the same intensity. The nearest neighbour distance $r_{i}$ was calculated
using CellProfiler for each RBC (excluding those touching the boundary of the
field of view, but including distances to those which are). The mean distance,
$\overline{r_{O}}$, was found for each field of view:
$\overline{r_{O}}=\frac{\sum r_{i}}{n}.$
The expected distance to the nearest neighbour, $\overline{r_{E}}$, was also
calculated:
$\rho=\frac{n}{s},\qquad\overline{r_{E}}=\frac{1}{2\sqrt{\rho}},$ (1)
where $n$ is the number of cells in the field of view, and $s$ is the size of
the field of view in pixels. The deviation of the observed pattern from the
expected random pattern is measured using the ratio for index of aggregation,
$R$:
$R=\frac{\overline{r_{O}}}{\overline{r_{E}}}.$
If the spatial pattern is random, $R=1$; if there is aggregation, $R$ tends to
0; and if there is a regular pattern, $R$ tends to 2.15 (a sketch of this
calculation is given after this list).
* •
Adjacent neighbours: The mean count of adjacent (touching) neighbours per RBC.
This is a measure of how clumped the RBCs are in the smear, a value of zero
indicates that there are no touching or overlapping RBCs in the field of view.
* •
RBC eccentricity: The average eccentricity of the RBCs in a field of view.
Defined by CellProfiler [29] as ‘the eccentricity of the ellipse that has the
same second-moments as the region. The eccentricity is the ratio of the
distance between the foci of the ellipse and its major axis length.' An RBC
with an eccentricity of zero is a perfect circle, and an eccentricity of 1 is
a line. A good blood smear will have RBCs that retain their original circular
shape. If the smear is done poorly, for example by pushing the blood rather
than pulling it, the RBCs will tend to be oval.
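As referenced above, a minimal sketch of the Clark and Evans calculation in Python, assuming the RBC centre coordinates and the field-of-view area in pixels are already available:

```python
# Sketch of the Clark and Evans index of aggregation for one field of view;
# `centres` (N x 2 array of RBC centre coordinates, pixels) and `s`
# (field-of-view area in pixels) are assumed inputs.
import numpy as np
from scipy.spatial import cKDTree

def index_of_aggregation(centres: np.ndarray, s: float) -> float:
    n = len(centres)
    tree = cKDTree(centres)
    # k=2: nearest neighbour of each point other than the point itself.
    dists, _ = tree.query(centres, k=2)
    r_obs = dists[:, 1].mean()            # observed mean nearest-neighbour distance
    rho = n / s                           # point density
    r_exp = 1.0 / (2.0 * np.sqrt(rho))    # expectation for a Poisson pattern
    return r_obs / r_exp                  # R = 1 random, -> 0 clumped, -> 2.15 regular

# Example: 100 uniformly random points in a 1920 x 1200 field of view.
rng = np.random.default_rng(0)
pts = rng.uniform([0, 0], [1920, 1200], size=(100, 2))
print(index_of_aggregation(pts, 1920 * 1200))  # close to 1 for a random pattern
```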
Table 2: The results of the nearest neighbour search to find the optimal parameters. The experts' mean was modified to generate optimal values which could be used in the nearest neighbour search to find the optimal parameters of the two devices.

Smear quality measurements | Experts' mean | Optimal values | autohaem smear ($30^{\circ}$) | autohaem smear+ ($50^{\circ}$, $7\ \mathrm{cm\ s^{-1}}$)
---|---|---|---|---
Median RBC count per field of view | 55.6 | 55.6 | 54.0 | 66.5
Interquartile range of RBC count per FoV | 21.6 | 0 | 35.5 | 23.5
Mean index of aggregation | 1.11 | 2.15 | 1.09 | 1.17
Mean adjacent neighbours per RBC | 0.17 | 0 | 0.331 | 0.197
Mean eccentricity per RBC | 0.498 | 0 | 0.535 | 0.469
Figure 8: Typical Fields of View of blood smears produced in the three
conditions: (a) expert humans, (b) autohaem smear and (c) autohaem smear+, in
each case chosen with RBC densities close to the median density for each
group. autohaem smear is set at the optimum angle of $30^{\circ}$. The density
of RBCs is close to that obtained by the human experts, and remains evenly
distributed across the field of view (f.o.v.). The autohaem smear+ is set at
its optimum angle of $50^{\circ}$ and speed of $7\ \mathrm{cm\ s^{-1}}$. The
RBC density is higher here than that of the human experts, but there is
reduced aggregation across the field of view. This performance means that a
non-expert is able to use autohaem smear and autohaem smear+ to produce a
blood smear as good as, or better than, that of a human expert. Images are
converted to greyscale because Cellpose uses greyscale images for
segmentation. Scale bars $20\ \mu\text{m}$.
## VIII Results
The smear analysis was compared across the human experts, autohaem smear, and
autohaem smear+, as shown in Figure 7. Using these values, the optimum
parameters for the devices were found.
### VIII.1 Device performance
The performance of the devices was compared to the human experts, in Figure 7.
The colourbars are centered on the value of the mean human expert, such that
squares that are blue have a higher value than the experts, and squares that
are red have a lower value.
#### VIII.1.1 Density of smear
For autohaem smear, increasing the angle of the spreader slide results in a
higher density of RBCs (Figure 7 a)). This reaches a maximum at $60^{\circ}$,
with a median value of 120 RBCs per field of view. Beyond $60^{\circ}$ the
density decreases. This can also be seen in the plot for autohaem smear+,
where increasing the angle corresponds to an increase in density until a
maximum is reached. A similar effect is seen for increasing speed: higher
speeds produce a higher density, until a maximum is reached. This means the
highest density of RBCs is produced at $60^{\circ}$ (the same as autohaem
smear) and $6\ \mathrm{cm\ s^{-1}}$. The
mean human expert smear density is between the lowest and highest density of
the devices, showing that the devices produce smears in the expected range of
densities.
The reduction at high angles and speeds occurs because the spreader slide
tends to stick-slip at these parameters. The resulting smear consists of sharp
ridges. This effect is quantified in the interquartile range (IQR) of the
density of RBCs across the fields of view (Figure 7 b)). For autohaem smear+,
the IQR at low speeds and angles is lower, indicating that the fields of view
have consistent densities of RBCs. At higher speeds and angles, the
IQR increases, showing that the ridges seen in the smears are producing
inconsistent RBC densities across the fields of view. The smears produced by
moderate angles and speeds have an IQR consistent with that of the human
experts.
#### VIII.1.2 Index of aggregation
In Figure 7 c), it can be seen that smears with a low speed and medium angle
have a lower index of aggregation, indicating more clumping. At the lowest
angle, $30^{\circ}$, the clumping is low due to the lower density of the
smear. At the higher speeds the smear is more uniform.
#### VIII.1.3 Adjacent neighbours
In Figure 7 d), smears created at low angles with low speeds tend to have RBCs
with fewer adjacent neighbours. Those Fields of View with the highest density
of RBCs intuitively have RBCs with more adjacent neighbours. For the Fields of
View with intermediate densities, there appears to be no relation between the
parameters and the number of adjacent neighbours.
#### VIII.1.4 Eccentricity
In Figure 7 e), there appears to be no correlation between the parameters and
the eccentricity of the RBCs. The RBCs from the slowest and lowest-angle
smears have some of the highest eccentricities, as the blood starts to get
ahead of the spreader slide and is pushed rather than pulled.
### VIII.2 Optimal parameters
In order to determine the optimal parameters (angle, speed) for the devices,
nearest neighbour searching was performed. Each set of parameters was given a
'smear vector' describing its performance in the five analyses. Two such smear
vector sets were created, one for autohaem smear and one for autohaem smear+.
The mean of the human experts was also given a smear vector, with values shown
in Table 2. The measurements were scaled so that they were equally weighted.
The optimal smear vector was calculated by matching the median density to that
of a human expert, assigning the optimal index of aggregation a value of 2.15
(the upper limit, indicating a regular spatial pattern), and assigning all the
other measurements optimal values of 0.
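A minimal sketch of this search, assuming the smear vectors have already been assembled into arrays (the values, labels and the equal-weighting scheme below are illustrative):

```python
# Sketch of the nearest neighbour search for optimal device parameters;
# the vectors below are illustrative, with columns ordered as in Table 2.
import numpy as np

# One smear vector per (angle, speed) setting; columns are the five measures.
params = [("30deg", 3), ("30deg", 4), ("40deg", 3)]        # assumed labels
smear_vectors = np.array([
    [54.0, 35.5, 1.09, 0.331, 0.535],
    [60.2, 28.0, 1.12, 0.250, 0.510],
    [66.5, 23.5, 1.17, 0.197, 0.469],
])

# Optimal vector: experts' median density, IQR 0, R = 2.15, 0 neighbours, 0 eccentricity.
optimal = np.array([55.6, 0.0, 2.15, 0.0, 0.0])

# Scale each measurement so the five measures are equally weighted
# (one plausible scaling; the paper does not specify the exact scheme).
scale = smear_vectors.std(axis=0)
d = np.linalg.norm((smear_vectors - optimal) / scale, axis=1)
best = int(np.argmin(d))
print("closest to optimal:", params[best])
```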
A nearest neighbour search was performed between the optimal smear vector and
both smear spaces. For autohaem smear, the optimal angle was found to be
$30^{\circ}$, with its smear vector shown in Table 2. For autohaem smear+, the
optimal angle and speed were found to be $50^{\circ}$ and
$7\ \mathrm{cm\ s^{-1}}$, with its smear vector also shown in Table 2. These
values agree with the preconceived notions of how humans are taught to make
manual smears. They are close to the recommended $45^{\circ}$ angle, and
correspond to a speed that is close to that of human experts
($6\ \mathrm{cm\ s^{-1}}$). Typical images of smears
produced by the expert humans and the two devices at their optimum parameters
are shown in Figure 8.
## IX Conclusions
In this work we have developed and presented the autohaem range of devices for
automated blood smearing. autohaem smear is a mechanical device and autohaem
smear+ is an electro-mechanical device. The devices are designed to be
sustainable and all the designs and assembly instructions are available under
an open source licence.
An automated smear analysis pipeline was created and used here as a tool for
quantifying the quality of blood smears. We propose that it can have further
applications in assessing human proficiency in making blood smears.
Both devices were operated exploring two tunable parameters. Once optimised,
autohaem smear can fix the angle that a spreader slide makes with the
microscope slide, and autohaem smear+ can also fix the speed that the slider
moves, making smears more consistent. In this work, blood smears were made by
the two devices and by human experts using the manual protocol. The quality of
the smears was quantified and the optimum parameters of angle and speed
determined for the two autohaem devices. The devices were shown to perform on
par with human experts, even when operated by users with no experience of
making blood smears. These devices hence promise to deliver better
standardisation and robustness of quality across labs and over time.
The next stage of this work will be to analyse the performance of these
devices using whole blood. Improvements to the devices will include providing
better protection to perform in non-ideal environments, for example rural
clinics that will be dustier than a laboratory microbiology safety cabinet,
and mapping out optimal device settings as a function of temperature and
humidity. On the hardware side, a better suspension of the spreader slide
would be an improvement and might reduce ridges in the smears.
### Contributions
SM designed the study, was the lead designer and maintainer of the devices,
software, and assembly instructions, prepared the blood samples and reagents,
performed testing of the devices, carried out the data analysis, and drafted
the manuscript; JK contributed to the device designs, software, and assembly
instructions and performed testing of the devices; AAL, DP and LL contributed
to the device designs and data analysis; AK contributed to the device designs
and the design of the study, and coordinated the study; CM contributed to the
design of the study; RB contributed to the device designs and the design of
the study; PC conceived of the study, designed the study, coordinated the
study, and helped draft the manuscript. All authors gave final approval for
publication and agree to be held accountable for the work performed therein.
###### Acknowledgements.
SM, AK, CM, PC were funded by a GCRF QR grant awarded by Cambridge University.
RWB is supported by a Royal Society award URF\R1\180153.
## Data Availability Statement
All materials and data are available at: https://gitlab.com/autohaem.
## References
* World Health Organization [2016] World Health Organization, _Malaria microscopy quality assurance manual – Ver. 2_ (World Health Organization, 2016) p. 140.
* WHO [2010] WHO, _Basic malaria microscopy: Tutor’s guide_ , 2nd ed. (WHO, 2010).
* Kahama-Maro _et al._ [2011] J. Kahama-Maro, V. D’Acremont, D. Mtasiwa, B. Genton, and C. Lengeler, “Low quality of routine microscopy for malaria at different levels of the health system in Dar es Salaam,” Malaria Journal 10, 1–10 (2011).
* Harchut _et al._ [2013] K. Harchut, C. Standley, A. Dobson, B. Klaassen, C. Rambaud-Althaus, F. Althaus, and K. Nowak, “Over-diagnosis of malaria by microscopy in the Kilombero Valley, Southern Tanzania: An evaluation of the utility and cost-effectiveness of rapid diagnostic tests,” Malaria Journal 12, 1–9 (2013).
* Sori _et al._ [2018] G. Sori, O. Zewdie, G. Tadele, and A. Samuel, “External quality assessment of malaria microscopy diagnosis in selected health facilities in Western Oromia, Ethiopia,” Malaria Journal 17, 1–7 (2018).
* Ngasala and Bushukatale [2019] B. Ngasala and S. Bushukatale, “Evaluation of malaria microscopy diagnostic performance at private health facilities in Tanzania,” Malaria Journal 18, 1–7 (2019).
* Kiggundu _et al._ [2011] M. Kiggundu, S. L. Nsobya, M. R. Kamya, S. Filler, S. Nasr, G. Dorsey, and A. Yeka, “Evaluation of a comprehensive refresher training program in malaria microscopy covering four districts of Uganda,” American Journal of Tropical Medicine and Hygiene 84, 820–824 (2011).
* Mukadi _et al._ [2016] P. Mukadi, V. Lejon, B. Barbé, P. Gillet, C. Nyembo, A. Lukuka, J. Likwela, C. Lumbala, J. Mbaruku, W. V. Veken, D. Mumba, P. Lutumba, J. J. Muyembe, and J. Jacobs, “Performance of microscopy for the diagnosis of malaria and human African trypanosomiasis by diagnostic laboratories in the democratic Republic of the Congo: Results of a nation-wide external quality assessment,” PLoS ONE 11, 1–15 (2016).
* West _et al._ [2016] N. West, S. Gyeltshen, S. Dukpa, K. Khoshnood, S. Tashi, A. Durante, and S. Parikh, “An Evaluation of the National Malaria Surveillance System of Bhutan, 2006–2012 as It Approaches the Goal of Malaria Elimination,” Frontiers in Public Health 4, 1–10 (2016).
* Collins _et al._ [2020] J. T. Collins, J. Knapper, J. Stirling, J. Mduda, C. Mkindi, V. Mayagaya, G. A. Mwakajinga, P. T. Nyakyi, V. L. Sanga, D. Carbery, L. White, S. Dale, Z. Jieh Lim, J. J. Baumberg, P. Cicuta, S. McDermott, B. Vodenicharski, and R. Bowman, “Robotic microscopy for everyone: the OpenFlexure microscope,” Biomedical Optics Express 11, 2447 (2020).
* Baden _et al._ [2015] T. Baden, A. M. Chagas, G. Gage, T. Marzullo, L. L. Prieto-Godino, and T. Euler, “Open Labware: 3-D Printing Your Own Lab Equipment,” PLoS Biology 13, 1–12 (2015).
* Amann, Witzleben, and Breuer [2019] S. Amann, M. v. Witzleben, and S. Breuer, “3D-printable portable open-source platform for low-cost lens-less holographic cellular imaging,” Scientific reports 9, 11260 (2019).
* [2021a] “autohaem smear v2.0.0 assembly instructions,” https://autohaem.gitlab.io/autohaem-smear/v2.0.0/ (2021a).
* [2021b] “autohaem smear+ v2.0.0 assembly instructions,” https://autohaem.gitlab.io/autohaem-smear-plus/v2.0.0/ (2021b).
* World Health Organization [2010] World Health Organization, _Basic Malaria Microscopy: Learner’s Guide_ (World Health Organization, 2010) p. 50.
* Centers for Disease Control and Prevention [2016] Centers for Disease Control and Prevention, “Blood Specimens - Microscopic Examination,” https://www.cdc.gov/dpdx/diagnosticprocedures/blood/microexam.html (2016).
* McCauley [2020] M. McCauley, “AccelStepper,” http://www.airspayce.com/mikem/arduino/AccelStepper/index.html (2020).
* [2021c] “autohaem smear,” https://gitlab.com/autohaem/autohaem-smear (2021c).
* [2021d] “autohaem smear plus,” https://gitlab.com/autohaem/autohaem-smear-plus (2021d).
* [2021e] “7.11 Git Tools - Submodules,” https://git-scm.com/book/en/v2/Git-Tools-Submodules (2021e).
* [2021f] “autohaem smear plus arduino,” https://gitlab.com/autohaem/autohaem-smear-plus-arduino (2021f).
* [2021g] “gitbuilding,” https://gitbuilding.io (2021g).
* [2021h] “autohaem smear+ electronics diagram,” https://gitlab.com/autohaem/autohaem-smear-plus/-/blob/v2.0.0/docs/images/attach_everything_together/electronics_diagram.svg (2021h).
* Stringer _et al._ [2021] C. Stringer, T. Wang, M. Michaelos, and M. Pachitariu, “Cellpose: a generalist algorithm for cellular segmentation,” Nature Methods 18, 100–106 (2021).
* [2021i] “smear analysis pipeline,” https://gitlab.com/autohaem/smear-analysis (2021i).
* Pachitariu and Stringer [2021] M. Pachitariu and C. Stringer, “cellpose,” https://pypi.org/project/cellpose/ (2021).
* McQuin _et al._ [2018] C. McQuin, A. Goodman, V. Chernyshev, L. Kamentsky, B. A. Cimini, K. W. Karhohs, M. Doan, L. Ding, S. M. Rafelski, D. Thirstrup, W. Wiegraebe, S. Singh, T. Becker, J. C. Caicedo, and A. E. Carpenter, “CellProfiler 3.0: Next-generation image processing for biology,” PLOS Biology 16, e2005970 (2018).
* Clark and Evans [1954] P. J. Clark and F. C. Evans, “Distance to Nearest Neighbor as a Measure of Spatial Relationships in Populations,” Ecology 35, 445–453 (1954).
* Broad Institute [2020] Broad Institute, “Measurement,” https://cellprofiler-manual.s3.amazonaws.com/CellProfiler-4.0.5/modules/measurement.html (2020).
# Introducing PT-REX, the Point-to-point TRend EXtractor
A. Ignesti <EMAIL_ADDRESS>
###### Abstract
Investigating the spatial correlation between different emissions in an
extended astrophysical source can provide crucial insights into their physical
connection, hence it can be the key to understanding the nature of the system.
The point-to-point analysis of surface brightness is a reliable method for
such a study. In this work we present PT-REX, a software package to carry out
these studies between radio and X-ray emission in extended sources. We discuss
how to reliably carry out this analysis and its limitations, and we introduce
the Monte Carlo point-to-point analysis, which extends this approach to
poorly-resolved sources. Finally, we present and discuss the application of
our tool to study the diffuse radio emission in a galaxy cluster.
###### keywords:
Techniques: image processing , methods: observational, statistical , radio
continuum: general
The point-to-point analysis is a reliable method to investigate the physical
connection between different components in complex, extended astrophysical
sources;
We introduce the Monte Carlo point-to-point analysis, which allows the
point-to-point analysis to be reliably performed also on small or
poorly-resolved sources;
We present PT-REX, a new software package to easily perform point-to-point
analysis between radio and X-ray emission. It features several statistical
tools to reliably evaluate the spatial correlation for a variety of
scientific cases.
As an example, we show how to use PT-REX to perform the point-to-point
analysis on a diffuse radio source in the galaxy cluster RX J1347.5-1145. Due
to its architecture, PT-REX can be used to conduct point-to-point analysis on
every kind of extended emission, not only radio and X-ray.
## 1 Introduction
A multi-wavelength analysis is often the best way to
investigate an astrophysical system. In the case of extended sources, the
study of the spatial correlation between the surface brightness, $I_{a}$ and
$I_{b}$, produced by two different emission mechanisms, which can be simply
expressed as $I_{a}\propto I_{b}^{k}$, can reveal the physical connection
between the processes that are taking place in the source. Observing a
positive spatial correlation indicates that the two components responsible for
the emissions occupy the same volume in the source, whereas the slope $k$ may
provide some insights into their physical link. These studies can be carried
out by using the point-to-point (ptp) analysis which is, basically, the
comparison of surface brightness measured by two different observations made
by sampling the extended source with a grid. Under the assumption that each
cell of the grid cover the same space of the celestial sphere in each
observation, the ptp analysis is more flexible than a comparison between
surface brightness radial profiles because it can be seamlessly performed on
asymmetrical sources and it can be more responsive to the presence of
substructures embed in the extended emission111The drawback is that, contrary
to radial profile which can be easily interpreted by assuming spherical
symmetry of the system, understanding the physical meaning of a ptp trend
could be not trivial, especially for non-spherical objects.. One of the first
applications of the ptp analysis could be found in Govoni et al. [1], where it
was featured to study the spatial correlation between radio and X-ray emission
in galaxy clusters hosting diffuse radio emission. In this context, the trend
between radio and X-ray emission indicates that the thermal plasma is
permeated with magnetic field and relativistic particles and it suggests that
the radio emission depends on the local properties of the thermal intra-
cluster medium (ICM) [e.g., 2, for a detailed discussion of this].
In this work we present PT-REX, a tool to easily carry out ptp analysis
between radio and X-ray emission. We also introduce a new approach, the Monte
Carlo ptp analysis, which extends the application of the point-to-point
analysis to small, poorly-resolved objects. PT-REX offers the possibility to
study the spatial correlation by exploiting a set of different fitting methods
to tackle different scientific problems. This paper is structured as follows.
In Section 2 we present the tool, explain how to use it effectively, and
introduce the different fitting methods. In Section 3 we show how to conduct
an analysis with PT-REX on a real diffuse radio source and how the different
statistical methods can produce different results.
## 2 PT-REX
We present the Point-to-point TRend EXtractor (PT-REX
)222https://github.com/AIgnesti/PT-REX/blob/master/PTREX.tar.xz, a flexible
Python script to easily carry out the ptp analysis on every kind of extended
radio source. PT-REX handles most of the operations with the Common
Astronomy Software Applications (CASA) packages v6.0 [3] developed by the
National Radio Astronomy Observatory. We integrated the CASA tools with a
variety of Python libraries from Astropy [4, 5] and Scipy [6]. The code is
structured in a series of tasks to handle the individual steps of a ptp
analysis independently, from defining a grid to sample the radio emission to
analyzing the data with several statistical methods. A preliminary
version of PT-REX has already been used in Ignesti et al. [7], in which we
also present the concept of Monte Carlo ptp analysis. Here we present in
detail each task and we discuss how to run PT-REX to perform ptp analysis
effectively.
### 2.1 Data preparation
PT-REX works by combining radio and X-ray images of an extended source.
Therefore, in order to have reliable results, input images must have matching
coordinates systems to assure that the source is mapped by the same sky
coordinates. Radio images can be produced with any preferred software,
provided that they include information about the beam size and the
pixel/arcsec scale in their header to be read with CASA task imhead.
Concerning the X-ray images, multiple observations can be combined together to
improve the count statistic. The X-ray images can be provided as a single
exposure-corrected and background-subtracted image in units of surface
brightness (e.g., photons cm-2 $s^{-1}$) or by providing the count, background
and exposure images separately. CASA region files, which are necessary to
define the grid and the masks (see Section 2.2 and 2.5) can be defined while
running PT-REX by using the CASA imview. Finally, ancillary information about
the calibration error of the radio images and the preferred statistic method
(see Section 2.6) have to provided before running the analysis.
### 2.2 Sampling algorithm
The core of the ptp analysis is the sampling of the diffuse emission. We
developed a simple algorithm that converts a rectangular region into a grid
that follows the morphology of the radio source. The region is intended to
include the source which is going to be sampled, and thus we refer to it as
the region of interest. The parameters of the sampling grid, such as the
cell-size (here intended as the size of the grid cells), the lower threshold
in surface brightness to be sampled, and the regions to exclude, have to be
provided at the beginning of the analysis. In order to have a reliable
reconstruction of the radio flux density and to reduce the correlation between
contiguous cells, the cell-size has to be at least as large as the resolution
of the radio image. A larger cell-size can be adopted to increase the
signal-to-noise ratio of each cell in the radio and X-ray observations.
However, for a given source, using larger cells entails a lower number of
points (i.e., a lower statistic) to study the spatial correlation, which can
potentially jeopardize the analysis. A rule of thumb is that at least 15 cells
are necessary to sample the diffuse emission to assure a reliable outcome of
the analysis, so a compromise between resolution of the grid and
signal-to-noise has to be found. Finally, the threshold defines the lowest
value of surface brightness that is going to be sampled by the grid and it is
expressed in units of Jy beam$^{-1}$.
Figure 1: Flowchart of the sampling algorithm.
The sampling algorithm is quite straightforward and is described in Figure 1.
After a preliminary check on the cell-size, the region of interest is
converted into a rectangular grid. At this step, the coordinates of each cell
are defined in the pixel units of the radio image. Then the radio surface
brightness and the position with respect to the mask are evaluated for each
cell of the grid, starting from the bottom-left to the top-right. These checks
are done with the CASA task imstat. All those cells that do not meet the
requirements (i.e., those measuring a radio surface brightness below the
threshold or overlapping with the masked regions) are excluded, whereas the
others are converted into J2000 coordinates and finally stored in a region
file, which is the final output of the routine. Every sampling grid can be
displayed on top of the radio image by using the CASA viewer and it can be
further modified manually to better adapt to the science case. After having
defined a sampling grid, we can use it to compare the radio and X-ray
emissions.
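A minimal sketch of this sampling logic in plain Python/numpy (the actual PT-REX implementation relies on CASA tasks such as imstat; the array names, the helper function and the use of the cell mean as the brightness check are illustrative assumptions):

```python
# Sketch of the grid-sampling logic; `radio_img` (Jy/beam per pixel), `mask`
# (boolean, True = excluded) and the parameters below are assumed inputs.
import numpy as np

def make_grid(radio_img, mask, x0, y0, width, height, cell, threshold):
    """Return the (x, y) corners of cells that pass the brightness/mask checks."""
    kept = []
    for y in range(y0, y0 + height - cell + 1, cell):
        for x in range(x0, x0 + width - cell + 1, cell):
            cell_img = radio_img[y:y + cell, x:x + cell]
            cell_mask = mask[y:y + cell, x:x + cell]
            # Reject cells below the surface-brightness threshold
            # or overlapping any masked region.
            if cell_img.mean() < threshold or cell_mask.any():
                continue
            kept.append((x, y))
    return kept

# Toy usage: a 200x200 image with a bright central blob and no mask.
img = np.exp(-((np.indices((200, 200)) - 100) ** 2).sum(axis=0) / 2000.0)
cells = make_grid(img, np.zeros_like(img, bool), 0, 0, 200, 200, cell=20, threshold=0.1)
print(len(cells), "cells kept")
```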
### 2.3 Single-mesh analysis
We define a single-mesh ptp (SMptp) analysis as a ptp analysis carried out by
using only one mesh to sample the radio emission. It is composed of two steps:
first the radio and X-ray surface brightness are measured in each cell of the
grid, then they are compared as $I_{\text{R}}$ vs $I_{\text{X}}$ to evaluate
the spatial correlation. We present the flowchart of the process in Figure 2.
The routine that collects the values of $I_{\text{R}}$ and $I_{\text{X}}$ is
straightforward. For a given sampling grid, which was previously created by
using the sampling algorithm, $I_{\text{R}}$ and $I_{\text{X}}$ are evaluated
for each of its cells. Because the cells are defined in J2000 coordinates
according to the radio map coordinate system (see Section 2.2), by using maps
with matching coordinates each cell will sample the same direction of the sky
for both the radio and X-ray images. However, we note that if the X-ray map
pixel-size is larger than the radio map one, then there could be a mismatch
between the position of the grid cells in the two images due to the different
sampling of the sky coordinates. In this case, we suggest checking how the
sampling grid is positioned on the X-ray map before running the SMptp analysis
and, if possible, regridding the X-ray map to match the size and the
pixel-size of the radio one.
For the radio image, the flux density in each cell is measured with the CASA
task imstat and then converted into $I_{\text{R}}$ by dividing by the area of
the cell, $\Omega_{\text{c}}$, in units of arcsec$^{2}$. The associated error,
$\sigma_{\text{R}}$, is computed as:
$\sigma_{\text{R}}=\frac{\sqrt{\left(f\cdot
S\right)^{2}+\left(\text{RMS}\cdot\sqrt{\Omega_{\text{c}}/\Omega_{B}}\right)^{2}}}{\Omega_{\text{c}}}$
(1)
where $f$ and RMS are, respectively, the amplitude calibration error and the
root mean square noise of the radio image provided by the user, $S$ is the
flux density measured in the cell and $\Omega_{B}$ is the beam area. Since
every cell has the same size, the second term of Equation 1 is the same for
all cells and it becomes significant only for those cells with a low radio
surface brightness.
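As a minimal sketch, Equation 1 translates directly into a short function (all parameter values below are placeholders):

```python
# Sketch of the radio surface-brightness error of Equation 1;
# all input values below are placeholders.
import numpy as np

def sigma_radio(S, f, rms, omega_cell, omega_beam):
    """Error on I_R: calibration term plus noise term, per unit cell area."""
    calib = (f * S) ** 2                                   # amplitude calibration
    noise = (rms * np.sqrt(omega_cell / omega_beam)) ** 2  # image RMS over the cell
    return np.sqrt(calib + noise) / omega_cell

# e.g. 5% calibration error, RMS = 0.04 mJy/beam, 289 arcsec^2 cells, 273.6 arcsec^2 beam
print(sigma_radio(S=2.0, f=0.05, rms=0.04, omega_cell=289.0, omega_beam=273.6))
```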
As for the X-ray images, when several Chandra observations of the same cluster
are involved, we compute the total $I_{\text{X}}$ of a cell as
$I_{\text{X}}=\frac{\sum N_{\text{cnt,i}}-\sum N_{\text{bkg,i}}}{\sum
q_{\text{exp,i}}}\frac{1}{\Omega_{\text{c}}}=\frac{\sum\left(S_{\text{X,i}}\cdot
q_{\text{exp,i}}\right)}{\sum q_{\text{exp,i}}}\frac{1}{\Omega_{\text{c}}},$
(2)
where
$S_{\text{X,i}}=\frac{N_{\text{cnt,i}}-N_{\text{bkg,i}}}{q_{\text{exp,i}}}$
(3)
is the flux measured for the $i$-th Chandra observation, $\Omega_{\text{c}}$
is the angular area of the cell in units of arcsec2 , and $N_{\text{cnt,i}}$,
$N_{\text{bkg,i}}$ (in units of counts), and $q_{\text{exp,i}}$ (in units of
counts cm2 s photons-1) are, respectively, the values measured on the counts,
the background, and the exposure map of the $i$-th Chandra observation. When
no background or exposure maps are provided, their values are set to,
respectively, 0 and 1 for each cell333This entails that, by providing only the
count image, this procedure is valid for any other kind of observation where
the signal can be divided into counts, e.g. an optical image. This makes PT-
REX virtually able to compare radio emission with other kinds of emission
besides X-ray emission.. We derive the associated errors on $S_{\text{X,i}}$
by assuming a Poisson error for $N_{\text{cnt,i}}$ and $N_{\text{bkg,i}}$ and
computing the error propagation of Equation 2. During this phase, the sampling
is further refined by excluding all those cells that measure negative values
of $I_{\text{R}}$ and $I_{\text{X}}$ or upper limits in radio and X-ray, i.e.
those values with relative uncertainties greater than 100$\%$. As a caveat, we
note a limit of the current sampling algorithm: because the sampling is mainly
driven by the signal-to-noise ratio of the radio image, the final grid may
include cells with an X-ray emission close to the noise level, which may
negatively impact the fit. In this case, we suggest either increasing the
cell-size to reach a compromise between the X-ray signal-to-noise and the
resulting number of sampling cells, or manually removing the most critical
cells by using the CASA task imview to display and modify the grid.
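Similarly, a sketch of the X-ray surface-brightness combination of Equations 2 and 3, assuming the per-observation count, background and exposure values for a cell have already been extracted:

```python
# Sketch of Equation 2: combining several Chandra observations in one cell;
# `counts`, `bkg`, `exp_` are per-observation values for that cell (assumed inputs).
import numpy as np

def i_xray(counts, bkg, exp_, omega_cell):
    counts, bkg, exp_ = map(np.asarray, (counts, bkg, exp_))
    ix = (counts.sum() - bkg.sum()) / exp_.sum() / omega_cell
    # Poisson errors on counts and background, propagated through Equation 2.
    sigma = np.sqrt(counts.sum() + bkg.sum()) / exp_.sum() / omega_cell
    return ix, sigma

# Two observations of the same cell:
print(i_xray(counts=[120, 95], bkg=[10, 8], exp_=[5e4, 4e4], omega_cell=289.0))
```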
Figure 2: Flowchart of the SMptp routine.
Once the values of $I_{\text{R}}$ and $I_{\text{X}}$ have been calculated, the
fitting can begin. We fit the $I_{\text{R}}$-$I_{\text{X}}$ distribution with
a power-law relation as:
$I_{\text{R}}=A\cdot I_{\text{X}}^{k}$ (4)
We propose a set of different fitting algorithms to measure $k$ and its
associated error $\sigma_{k}$ (see Section 2.6). We also provide a direct
estimate of the linear correlation in the logarithmic space by estimating the
Spearman and Pearson ranks with the scipy.stats library. At the end of this
routine a data file with $I_{\text{R}}$ and $I_{\text{X}}$ and relative errors
is produced as output. These values can be used for further analysis, e.g. to
be examined with a fitting method which is not included in PT-REX [for example
LIRA, 8] or, by combining multiple SMptp analyses of the same object observed
at different radio frequencies, to study the spatial correlation between the
spectral index and $I_{\text{X}}$. In addition to the data file, PT-REX
produces a simple plot with the data and the best-fit line with the interval
which has the $95\%$ chance of containing the true regression line.
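For instance, a minimal sketch of the fit of Equation 4, here performed directly in log space, together with the Spearman and Pearson ranks quoted by PT-REX (the arrays are synthetic and illustrative):

```python
# Sketch of fitting I_R = A * I_X^k in log space and computing correlation ranks;
# `I_R`, `I_X` are the per-cell surface brightnesses (synthetic here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
I_X = rng.uniform(1e-9, 1e-8, 30)
I_R = 1e-3 * I_X ** 0.8 * rng.lognormal(0, 0.1, 30)   # k = 0.8 plus scatter

log_x, log_y = np.log10(I_X), np.log10(I_R)
k, logA = np.polyfit(log_x, log_y, 1)                  # slope is the ptp index k
print(f"k = {k:.2f}, A = {10**logA:.3g}")
print("Spearman:", stats.spearmanr(log_x, log_y).correlation)
print("Pearson:", stats.pearsonr(log_x, log_y)[0])
```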
As a caveat, we note that the reliability of the SMptp analysis is limited by
the assumption that any sampling grid provides an unambiguous reconstruction
of the real surface brightness of the source. Such an assumption is valid for
those sources that can be sampled by a large number of cells, i.e. where the
radio emission is well resolved by the observation, but it may not hold for
smaller or poorly-resolved objects. In the next section we discuss how the ptp
analysis can be extended to these cases.
### 2.4 Monte-Carlo analysis
Extended sources may contain smaller substructures (e.g. bright filaments
embedded in the extended emission or surface brightness gradients). If the
resolution of the observation is sufficient to fully resolve them, i.e. the
angular resolution is smaller than half of the angular scale of these
features, the result of the SMptp analysis will not depend on the sampling
grid, because the surface brightness will be reliably reconstructed by every
possible combination of cells. Otherwise, if these substructures cannot be
properly sampled by the observation/grid, the resulting SMptp analysis will be
fatally biased by the choice of the sampling grid. In this latter case, an
approach more complex than the SMptp is required to estimate the trend of the
spatial correlation.
The major feature introduced by PT-REX is the possibility to use an
automatic, randomly-generated sampling routine to combine several SMptp
analyses into a Monte Carlo ptp (MCptp) analysis. By repeating several cycles
of SMptp analysis with randomly-generated grids, PT-REX produces a
distribution of values of $k$ that describes its parameter space, thus
allowing us to reliably estimate the trend (and its uncertainties). This
routine makes use of the sampling algorithm and the SMptp routine presented in
the previous sections. We present the flowchart of the process in Figure 3.
After the number of Monte Carlo iterations ($N$), the region of interest, the
cell-size and the $I_{\text{R}}$ lower threshold are set, the Python function
numpy.random is used to generate $N$ coordinate pairs $(x,y)$ within the
region of interest. These coordinates are used to define $N$ different
rectangular regions centered on $(x,y)$ and large enough to include the
original region of interest, hence the radio source. Then these $N$ regions
are converted into sampling grids by the sampling algorithm and used to carry
out the corresponding SMptp analyses to measure $k\pm\sigma_{k}$.
Figure 3: Flowchart of the MCptp routine.
At the end of each cycle, a random value $k_{b}$ is drawn from a normal
distribution centered on $k$ with standard deviation $\sigma_{k}$ and stored.
This bootstrapping procedure enables us to transpose the error of each
individual fit into the final $k_{b}$ distribution. After all the $N$
different grids have been exploited, the result of the MCptp analysis, $k$, is
computed as:
$k=\overline{k}_{\text{b}}\pm\sigma_{k_{\text{b}}},$ (5)
where $\overline{k}_{\text{b}}$ and $\sigma_{k_{\text{b}}}$ are the mean and
the standard deviation of the distribution of bootstrapped $k_{b}$ values
obtained at the end of each cycle. Usually SMptp and MCptp analyses performed
with the same
parameters (mask, starting region, cell-size and lower threshold) should be
consistent within 1 $\sigma$. If not, this could be due to 1) a very peculiar
choice of the SMptp mesh or 2) the presence of unmasked field sources that are
erroneously, yet consistently, included in the sampling. A histogram of the
$k_{b}$ distribution is produced as output, which may reveal additional
information about the source. On the one hand, observing a dispersion
significantly larger than the SMptp uncertainties indicates that the random
sampling affected the estimate, which is a sign of a poor sampling of the
radio emission. On the other hand, an asymmetrical distribution may indicate
the presence of a secondary component in the radio source that, for fortuitous
combinations of cells, has pivoted the fit. For instance, low-brightness
components or strong X-ray point sources embedded in the radio emission can
produce a negative skewness in the distribution, whereas the presence of point
sources with strong radio and X-ray emission can induce a positive skewness.
MCptp analysis is advised for those sources that can be sampled with $<30$
cells. We suggest setting the number $N$ to a minimum of 100 to adequately
probe the parameter space of $k$. We note that the number of iterations, the
sizes of the region of interest and of the cells, and the number of X-ray maps
involved in the analysis severely affect the duration of the procedure. As a
basic rule, mind that adjusting the cell-size by a factor $f$ changes the
number of sampling cells by $\sim 1/f^{2}$, which, in turn, changes the
processing time by the same amount. PT-REX does not feature parallel
processing, thus its performance in terms of processing time may vary
depending on the available hardware. A scientific application of MCptp
analysis to study radio mini-halos in galaxy clusters is presented in Ignesti
et al. [7].
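A compact sketch of the MCptp loop, assuming a `fit_smptp` helper that builds a random grid anchored at $(x, y)$ and returns $k \pm \sigma_k$ (the helper and parameter names are illustrative, not the PT-REX internals):

```python
# Sketch of the Monte Carlo ptp loop; `fit_smptp(x, y)` is an assumed helper
# that samples the source with a grid anchored at (x, y) and fits k +/- sigma_k.
import numpy as np

def mcptp(fit_smptp, x_range, y_range, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    k_b = []
    for _ in range(n_iter):
        x = rng.uniform(*x_range)          # random grid origin inside the
        y = rng.uniform(*y_range)          # region of interest
        k, sigma_k = fit_smptp(x, y)       # one SMptp cycle on this grid
        k_b.append(rng.normal(k, sigma_k)) # bootstrap draw feeding Equation 5
    k_b = np.array(k_b)
    return k_b.mean(), k_b.std(), k_b      # k, its uncertainty, full distribution

# Toy usage with a dummy fitter that returns k = 0.8 +/- 0.1 for any grid:
k, sig, dist = mcptp(lambda x, y: (0.8, 0.1), (0, 50), (0, 50), n_iter=500)
print(f"k = {k:.2f} +/- {sig:.2f}")
```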
### 2.5 Generate a mask
Field sources, and sources embedded in the radio emission but not associated
with the extended source (e.g. central radio galaxies), can jeopardize the
results of the ptp analysis. When the subtraction of those sources from the
data is not possible, they can be masked and excluded from the ptp analysis.
PT-REX includes a tool that allows the user to produce masks for the analysis
simply by providing the regions intended to be excluded. The regions are used
to define a matrix with the same size as the radio image that allows the
sampling routine to recognize and exclude any unwanted sources. Typically,
there are two kinds of sources that have to be masked: 1) those embedded in
the extended emission, i.e. physically located within the source but with
emission unrelated to it, or background and foreground sources along the line
of sight, and 2) those that are outside the diffuse emission but close to the
region of interest. As a simple rule of thumb, we suggest defining carefully
the regions to be masked within the source and the region of interest. On the
one hand, defining a mask that exceeds the size of the unwanted source can
lead the sampling routine to exclude more cells than necessary, and thus to
reduce the number of points available to evaluate the spatial correlation. On
the other hand, a mask smaller than the source will fail to exclude its
contribution from the extracted surface brightness, thus jeopardizing the
analysis. As for the sources close to the region of interest, they can be
problematic during an MCptp analysis: at the beginning of each cycle a new
region of interest is defined, and some of them can be erroneously included
and sampled. We therefore suggest adopting large masks that safely account for
their presence within $2\times$ the size of the region of interest, both in
width and in height.
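A minimal sketch of such a mask matrix, here built from rectangular exclusion regions in pixel coordinates (the region format is an illustrative stand-in; PT-REX itself ingests CASA region files):

```python
# Sketch of building a boolean mask matrix from rectangular exclusion regions;
# the (x0, y0, x1, y1) region format is illustrative, not the CASA region syntax.
import numpy as np

def build_mask(shape, regions):
    """True where a pixel belongs to an excluded source."""
    mask = np.zeros(shape, dtype=bool)
    for x0, y0, x1, y1 in regions:
        mask[y0:y1, x0:x1] = True
    return mask

# e.g. mask a central radio galaxy and one field source on a 512x512 image:
mask = build_mask((512, 512), [(240, 240, 280, 280), (400, 100, 430, 140)])
print(mask.sum(), "masked pixels")
```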
### 2.6 Fitting algorithms
Fitting $I_{\text{R}}\propto I_{\text{X}}^{k}$ is a crucial part of the ptp
analysis, and different scientific problems may require different statistical
methods to evaluate the trend. For this reason, PT-REX includes a range of
fitting algorithms:
* 1.
Least squares (LS): data can be fitted with a power-law relation
$I_{\text{R}}=A\cdot I_{\text{X}}^{k}$ by using the least-squares method with
scipy.optimize.curve_fit. Only the uncertainties on $I_{\text{R}}$ are taken
into account. This method estimates the best-fitting parameters of the
power-law which minimize the distance from the data, under the assumption that
the data intrinsically follow a power-law distribution and the scatter is only
due to observational errors. Due to its assumptions, this method can be biased
by outliers that can pivot the fit. From a physical point of view, the
assumption of an intrinsic, perfect correlation between the two quantities may
be questionable. In a complex physical system, the apparent correlation
between two quantities can depend on a third, unknown factor. In this case, an
internal scatter of the data is expected regardless of the quality of the
observations, and thus the base assumptions of this method can lead to biases
in the scientific conclusions. Therefore, although we include this method for
the sake of completeness, because it has been largely used in the literature,
we advise using it cautiously;
* 2.
BCES orthogonal and bisector: the fitting method is the bivariate correlated
errors and intrinsic scatter (BCES) method presented in Akritas and Bershady
[9]. The BCES regression offers several advantages over ordinary least-squares
fitting, such as allowing for measurement errors on both variables and
accounting for an intrinsic internal scatter of the data. The fitting is
performed by the bces module444https://github.com/rsnemmen/BCES.
$I_{\text{R}}$ and $I_{\text{X}}$ are transposed into logarithmic space before
being fitted as
$\log I_{\text{R}}=k\log I_{\text{X}}+\log A$. By default, we
assume that the errors on $I_{\text{R}}$ and $I_{\text{X}}$ are not
correlated. We included in PT-REX both the orthogonal method and the bisector
method, which computes the symmetric line constructed from the BCES best-fits
of $(I_{\text{R}}|I_{\text{X}})$ and $(I_{\text{X}}|I_{\text{R}})$;
* 3.
LinMix: this is a Bayesian method to account for measurement errors in linear
regression, introduced in Kelly [10]. This method allows for heteroscedastic
and possibly correlated measurement errors and intrinsic scatter in the
regression relationship. The method is based on deriving a likelihood function
for the measured data, especially for the case when the intrinsic distribution
of the independent variables can be approximated using a mixture of Gaussian
functions. LinMix incorporates multiple independent variables, nondetections,
and selection effects (e.g., Malmquist bias). We implemented this algorithm
with the linmix module555https://github.com/jmeyers314/linmix. This method
derives a likelihood function for the data, thus the best-fit slope is
estimated from the mean of the posterior distribution. To run this method, the
number of chains for the Bayesian algorithm, n_chain, and the number of
Gaussians used to build the prior, K, have to be defined by the user. This
method is significantly more time-consuming than the other options. We advise
using it when a large number of cells is involved and the choice of the lower
threshold is expected to impact the fit.
A detailed discussion about the best fitting strategy to adopt for different
science cases can be found in Isobe et al. [11], Akritas and Bershady [9],
Kelly [10], Hogg et al. [12].
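To make these options concrete, here is a minimal sketch that runs all three fits on the same synthetic log-space data; the bces and linmix call signatures follow the READMEs of the modules linked above and should be checked against the installed versions.

```python
# Sketch of the three fitting options on the same log-space data; bces and
# linmix call signatures per their READMEs (verify against installed versions).
import numpy as np
from scipy.optimize import curve_fit
import bces.bces
import linmix

rng = np.random.default_rng(2)
log_x = rng.uniform(-9, -8, 30)
log_y = 0.8 * log_x - 3 + rng.normal(0, 0.05, 30)
xerr = np.full(30, 0.05)
yerr = np.full(30, 0.05)
cov = np.zeros(30)                       # uncorrelated errors (PT-REX default)

# 1) Least squares (here applied directly to the log-linear form).
(k_ls, logA), _ = curve_fit(lambda x, k, b: k * x + b, log_x, log_y, sigma=yerr)

# 2) BCES: per the README, index 3 is the orthogonal slope, index 2 the bisector.
a, b, aerr, berr, covab = bces.bces.bces(log_x, xerr, log_y, yerr, cov)
k_orth, k_bis = a[3], a[2]

# 3) LinMix: slope estimated from the mean of the posterior chain.
lm = linmix.LinMix(log_x, log_y, xsig=xerr, ysig=yerr, K=2)
lm.run_mcmc(silent=True)
k_linmix = lm.chain['beta'].mean()

print(k_ls, k_orth, k_bis, k_linmix)
```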
## 3 Application to a scientific case
We present here the application of PT-REX to study the correlation between
radio and X-ray emission in the mini-halo in the RX J1347.5-1145 galaxy
cluster. Radio mini-halos are diffuse radio sources observed at the center of
relaxed clusters and whose origins are still debated. Although they strongly
differ, all the models which have been proposed to explain the origin of the
relativistic, radio-emitting electrons agree on linking the properties of the
non-thermal particles to the thermal gas [e.g., 2, 13, for reviews].
Therefore, studying the spatial correlation between radio and X-ray emission
is crucial to investigate this physical connection and to discriminate between
the different scenarios. The mini-halo in RX J1347.5-1145 is elongated, with
the brightest part located toward the north and a fainter extension toward the
south. The diffuse radio emission surrounds the bright radio galaxy GALEX
J134730.-114509, and two large extended radio sources are placed side by side
with it. The X-ray emission exhibits an elliptical shape, although more
symmetric than the radio emission, with the major axis aligned along
northwest-southeast. Previous X-ray studies unveiled the complex dynamical
status of this cluster where shocks and merging sub-clusters coexist with a
sloshing cool-core [e.g., 14, 15, 16, 17]. Therefore, due to the complex
morphology of this system, the ptp analysis is the best suited to study the
connection between radio and X-ray emission.
Figure 4: X-ray image of RXJ1347.5-1145 in the 0.5-2.0 keV band with the -3,
3, 24, 96$\sigma$ level contours of the VLA radio image from Gitti et al.
[18] (1$\sigma$=0.04 mJy beam$^{-1}$). We report here the region of interest
(green dashed), a random sampling grid (green continuous) and the mask (grey).
We combined the Very Large Array observation at 1.4 GHz presented in Gitti et
al. [18] (RMS=0.04 mJy beam$^{-1}$, beam 17.7${}^{\prime\prime}\times
13.6^{\prime\prime}$) with an X-ray image in the 0.5-2.0 keV band produced
from the archival Chandra observation 2222 (PI Khan, exposure time 100 ks),
which we processed following the standard analysis
procedure666https://cxc.cfa.harvard.edu/ciao/threads/index.html. We used the
3$\times$RMS level contours of the VLA image to define the region of interest,
which indicates to PT-REX the size and the position of the radio source, and
the mask (Figure 4). We carefully defined the mask within the region of
interest, making sure to remove the emission of the central galaxy and of the
field sources, whereas outside the region we adopted a cruder approach,
masking the other sources with large squares. Then we defined a first grid to
sample the emission above the 3$\sigma$=0.12 mJy beam$^{-1}$ level of the
radio image. The beam area is 273.6 arcsec$^{2}$ (23 pixels), so we used cells
with a 17.5${}^{\prime\prime}$ size and a total area of 289 arcsec$^{2}$ (25
pixels). The resulting
grid shown in Figure 4 is composed of 16 cells which were then used in a SMptp
analysis. The Spearman and Pearson coefficients that we measured are 0.79 and
0.8, respectively, which indicate a strong linear correlation between
$I_{\text{R}}$ and $I_{\text{X}}$. This indicates that the ICM components
responsible for the radio and X-ray emission occupy the same volume, in
agreement with what is observed in radio mini-halos. In Figure 5 we report
the resulting SMptp analysis carried out with the algorithms implemented in
PT-REX. Despite the large errors, they are all consistent within 1$\sigma$ and
indicate a sub-linear correlation between radio and X-ray emission.
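For reference, the two coefficients can be computed with scipy.stats; the sketch below is ours, with placeholder arrays standing in for the cell-averaged surface brightnesses:

import numpy as np
from scipy.stats import pearsonr, spearmanr

# Placeholder cell-averaged surface brightnesses, one value per grid cell
I_X = np.array([1.2, 2.5, 4.1, 7.3, 9.0])
I_R = np.array([0.8, 1.4, 2.0, 3.1, 3.5])

r_p, _ = pearsonr(np.log10(I_X), np.log10(I_R))  # linear correlation in log space
r_s, _ = spearmanr(I_X, I_R)                     # rank (monotonic) correlation
print(f"Pearson r = {r_p:.2f}, Spearman rho = {r_s:.2f}")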
Figure 5: Results of the SMptp analysis carried out with the grid shown in
Figure 4 with the different fitting algorithms: Least squares (top-left), BCES
orthogonal (top-right), BCES bisector (bottom-left), LinMix (bottom-right). We
report the 95$\%$ confidence interval.
However, the low number of cells indicates that the source is poorly resolved,
so our results may be biased by the grid that we used. Therefore, we also
performed an MCptp analysis to test this possibility and to better constrain $k$. We
present here the results of 500 iterations of MCptp analysis carried out with
the BCES orthogonal fit. We used the mask and region of interest presented in
Figure 4 with the same cell size and threshold adopted for the SMptp analysis.
In Figure 6 we report the mean and the standard deviation of the final
distribution and of the first 50, 100 and 200 iterations.
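Schematically, the MCptp routine amounts to the following loop (a sketch under our assumptions, not the actual PT-REX implementation; fit_one_grid is a hypothetical helper that builds one randomly offset grid over the region of interest, samples the cells, and returns the best-fit slope, e.g., from the BCES orthogonal estimator):

import numpy as np

def mcptp(fit_one_grid, n_iter=500, seed=0):
    # Monte Carlo ptp: repeat the single-mesh analysis over randomly
    # offset grids and collect the distribution of best-fit slopes k.
    rng = np.random.default_rng(seed)
    slopes = []
    for _ in range(n_iter):
        origin = rng.uniform(0.0, 1.0, size=2)  # random grid origin, in cell units
        slopes.append(fit_one_grid(origin))
    slopes = np.asarray(slopes)
    return slopes.mean(), slopes.std()  # estimate of k and its uncertainty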
Figure 6: Histogram with the distribution of $k$ produced by 500 iterations
of the MCptp routine. We individually report the mean and standard deviation
of the first 50, 100, 200 iterations and of the total distribution.
In Table 1 we report the results of all the ptp analyses carried out. All
the methods produced estimates of $k$ that are in agreement within 1$\sigma$,
although the MCptp estimates a linear trend. All of the SMptp analyses
resulted in a sub-linear trend and, amongst them, the LS fit estimated the
flattest trend because it was pivoted by the two points at the extremes of the
distribution (Figure 5, top-left panel). The two BCES estimators produced
similar results, whereas the LinMix fit (which was performed with 100
chains and a prior composed of two Gaussians) returned a slightly flatter
value of $k$, albeit with larger errors. By comparing the SMptp analysis
performed with the BCES orthogonal algorithm with the MCptp, we can conclude
that the grid we used in the former was slightly biased toward flatter values,
most likely by the large errors of the low-brightness points. The
uncertainties of the MCptp are larger than the ones of the SMptp, regardless
of the fitting method. This indicates that the random sampling has
significantly affected the estimate of $k$, i.e., it confirms that the
observation does not fully resolve the mini-halo and there may be sub-
structures embedded in the emission, likely associated with the features (such
as shocks, sloshing and merging sub-clumps) of the thermal ICM. Therefore, by
exploiting the features of PT-REX we could comprehensively study the spatial
correlation between diffuse radio and X-ray emission in RXJ1347.5-1145,
finding that they
are linked by a linear trend. This implies that the distribution of the
magnetic field and relativistic particles is tightly bound to the distribution
of the thermal plasma and, therefore, it may suggest that the diffusion and
re-acceleration of the radio-emitting electrons depend mostly on the local
properties of the thermal gas (e.g., its turbulence). Our result also suggests
that the current observations may be missing some components of this radio
source, which are instead glimpsed by the random sampling. The scientific
interpretation of this result in the context of origin of the radio emission
and the connection with the complex dynamical state of this cluster is not
trivial, and it is beyond the scope of this work. Here we just mention that
this result is in agreement with what is observed in general for radio mini-
halos [7].
SMptp: fitting method | $k$
---|---
Least squares | 0.66$\pm$0.19
BCES orthogonal | 0.85$\pm$0.14
BCES bisector | 0.88$\pm$0.12
LinMix | 0.75$\pm$0.19
MCptp: number of iterations | $k$
50 | 1.09$\pm$0.28
100 | 1.03$\pm$0.28
200 | 1.07$\pm$0.26
500 | 1.06$\pm$0.26
Table 1: Results of the different ptp analyses.
## 4 Conclusions
We introduced PT-REX, a software to estimate the point-to-point trend between
radio and X-ray surface brightness in extended sources. We presented here a
set of tools to perform this analysis and we discussed how to run them
effectively for a variety of scientific problems. We also introduced the Monte
Carlo ptp analysis, which allows one to explore the parameter space of the
scaling by combining numerous random samplings of the source. This method is
advised when studying small or poorly resolved sources. Finally, we showed how
to use
PT-REX by studying the $I_{\text{R}}$-$I_{\text{X}}$ trend in the cluster
RXJ1347.5-1145. Despite the low resolution of the radio image, we could
reliably estimate a linear scaling between radio and X-ray emission
($k=1.06\pm 0.26$) that indicates a strong bond between the thermal gas and
the radio-emitting plasma. With PT-REX similar studies can be easily conducted
on a variety of science cases to derive some useful (and sometimes unexpected)
insights into their physics.
## Acknowledgments
AI thanks G. Brunetti and M. Sereno for the useful discussions, M. Gitti for
providing the VLA image of RXJ1347.5-1145, and L. Bruno for having tested the
code with infinite patience. This research made use of
Astropy777http://www.astropy.org, a community-developed core Python package
for Astronomy [4, 5], SciPy [6] and APLpy, an open-source plotting package for
Python [19].
## References
* Govoni et al. [2001] F. Govoni, T. A. Enßlin, L. Feretti, G. Giovannini, A comparison of radio and X-ray morphologies of four clusters of galaxies containing radio halos, A&A 369 (2001) 441–449. doi:10.1051/0004-6361:20010115.
* Brunetti and Jones [2014] G. Brunetti, T. W. Jones, Cosmic Rays in Galaxy Clusters and Their Nonthermal Emission, International Journal of Modern Physics D 23 (2014) 30007. doi:10.1142/S0218271814300079. arXiv:1401.7519.
* McMullin et al. [2007] J. P. McMullin, B. Waters, D. Schiebel, W. Young, K. Golap, CASA Architecture and Applications, in: R. A. Shaw, F. Hill, D. J. Bell (Eds.), Astronomical Data Analysis Software and Systems XVI, volume 376 of Astronomical Society of the Pacific Conference Series, 2007, p. 127.
* Astropy Collaboration et al. [2013] Astropy Collaboration, T. P. Robitaille, E. J. Tollerud, P. Greenfield, M. Droettboom, E. Bray, T. Aldcroft, M. Davis, A. Ginsburg, A. M. Price-Whelan, W. E. Kerzendorf, A. Conley, N. Crighton, K. Barbary, D. Muna, H. Ferguson, F. Grollier, M. M. Parikh, P. H. Nair, H. M. Unther, C. Deil, J. Woillez, S. Conseil, R. Kramer, J. E. H. Turner, L. Singer, R. Fox, B. A. Weaver, V. Zabalza, Z. I. Edwards, K. Azalee Bostroem, D. J. Burke, A. R. Casey, S. M. Crawford, N. Dencheva, J. Ely, T. Jenness, K. Labrie, P. L. Lim, F. Pierfederici, A. Pontzen, A. Ptak, B. Refsdal, M. Servillat, O. Streicher, Astropy: A community Python package for astronomy, A&A 558 (2013) A33. doi:10.1051/0004-6361/201322068. arXiv:1307.6212.
* Astropy Collaboration et al. [2018] Astropy Collaboration, A. M. Price-Whelan, B. M. Sipőcz, H. M. Günther, P. L. Lim, S. M. Crawford, S. Conseil, D. L. Shupe, M. W. Craig, N. Dencheva, A. Ginsburg, J. T. VanderPlas, L. D. Bradley, D. Pérez-Suárez, M. de Val-Borro, T. L. Aldcroft, K. L. Cruz, T. P. Robitaille, E. J. Tollerud, C. Ardelean, T. Babej, Y. P. Bach, M. Bachetti, A. V. Bakanov, S. P. Bamford, G. Barentsen, P. Barmby, A. Baumbach, K. L. Berry, F. Biscani, M. Boquien, K. A. Bostroem, L. G. Bouma, G. B. Brammer, E. M. Bray, H. Breytenbach, H. Buddelmeijer, D. J. Burke, G. Calderone, J. L. Cano Rodríguez, M. Cara, J. V. M. Cardoso, S. Cheedella, Y. Copin, L. Corrales, D. Crichton, D. D’Avella, C. Deil, É. Depagne, J. P. Dietrich, A. Donath, M. Droettboom, N. Earl, T. Erben, S. Fabbro, L. A. Ferreira, T. Finethy, R. T. Fox, L. H. Garrison, S. L. J. Gibbons, D. A. Goldstein, R. Gommers, J. P. Greco, P. Greenfield, A. M. Groener, F. Grollier, A. Hagen, P. Hirst, D. Homeier, A. J. Horton, G. Hosseinzadeh, L. Hu, J. S. Hunkeler, Ž. Ivezić, A. Jain, T. Jenness, G. Kanarek, S. Kendrew, N. S. Kern, W. E. Kerzendorf, A. Khvalko, J. King, D. Kirkby, A. M. Kulkarni, A. Kumar, A. Lee, D. Lenz, S. P. Littlefair, Z. Ma, D. M. Macleod, M. Mastropietro, C. McCully, S. Montagnac, B. M. Morris, M. Mueller, S. J. Mumford, D. Muna, N. A. Murphy, S. Nelson, G. H. Nguyen, J. P. Ninan, M. Nöthe, S. Ogaz, S. Oh, J. K. Parejko, N. Parley, S. Pascual, R. Patil, A. A. Patil, A. L. Plunkett, J. X. Prochaska, T. Rastogi, V. Reddy Janga, J. Sabater, P. Sakurikar, M. Seifert, L. E. Sherbert, H. Sherwood-Taylor, A. Y. Shih, J. Sick, M. T. Silbiger, S. Singanamalla, L. P. Singer, P. H. Sladen, K. A. Sooley, S. Sornarajah, O. Streicher, P. Teuben, S. W. Thomas, G. R. Tremblay, J. E. H. Turner, V. Terrón, M. H. van Kerkwijk, A. de la Vega, L. L. Watkins, B. A. Weaver, J. B. Whitmore, J. Woillez, V. Zabalza, Astropy Contributors, The Astropy Project: Building an Open-science Project and Status of the v2.0 Core Package, AJ 156 (2018) 123. doi:10.3847/1538-3881/aabc4f. arXiv:1801.02634.
* Virtanen et al. [2020] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. Jarrod Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, SciPy 1.0 Contributors, SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python, Nature Methods 17 (2020) 261–272. doi:10.1038/s41592-019-0686-2.
* Ignesti et al. [2020] A. Ignesti, G. Brunetti, M. Gitti, S. Giacintucci, Radio and X-ray connection in radio mini-halos: implications for hadronic models, arXiv e-prints (2020) arXiv:2006.09254. arXiv:2006.09254.
* Sereno [2016] M. Sereno, A Bayesian approach to linear regression in astronomy, MNRAS 455 (2016) 2149–2162. doi:10.1093/mnras/stv2374. arXiv:1509.05778.
* Akritas and Bershady [1996] M. G. Akritas, M. A. Bershady, Linear Regression for Astronomical Data with Measurement Errors and Intrinsic Scatter, APJ 470 (1996) 706. doi:10.1086/177901. arXiv:astro-ph/9605002.
* Kelly [2007] B. C. Kelly, Some Aspects of Measurement Error in Linear Regression of Astronomical Data, APJ 665 (2007) 1489–1506. doi:10.1086/519947. arXiv:0705.2774.
* Isobe et al. [1990] T. Isobe, E. D. Feigelson, M. G. Akritas, G. J. Babu, Linear Regression in Astronomy. I., APJ 364 (1990) 104. doi:10.1086/169390.
* Hogg et al. [2010] D. W. Hogg, J. Bovy, D. Lang, Data analysis recipes: Fitting a model to data, arXiv e-prints (2010) arXiv:1008.4686. arXiv:1008.4686.
* van Weeren et al. [2019] R. J. van Weeren, F. de Gasperin, H. Akamatsu, M. Brüggen, L. Feretti, H. Kang, A. Stroe, F. Zandanel, Diffuse Radio Emission from Galaxy Clusters, Space Science Reviews 215 (2019) 16. doi:10.1007/s11214-019-0584-z. arXiv:1901.04496.
* Mason et al. [2010] B. S. Mason, S. R. Dicker, P. M. Korngut, M. J. Devlin, W. D. Cotton, P. M. Koch, S. M. Molnar, J. Sievers, J. E. Aguirre, D. Benford, J. G. Staguhn, H. Moseley, K. D. Irwin, P. Ade, Implications of a High Angular Resolution Image of the Sunyaev-Zel’Dovich Effect in RXJ1347-1145, APJ 716 (2010) 739–745. doi:10.1088/0004-637X/716/1/739.
* Gitti et al. [2007] M. Gitti, R. Piffaretti, S. Schindler, Mass distribution in the most X-ray-luminous galaxy cluster RX J1347.5-1145 studied with XMM-Newton, A&A 472 (2007) 383–394. doi:10.1051/0004-6361:20077580. arXiv:0706.3001.
* Johnson et al. [2012] R. E. Johnson, J. Zuhone, C. Jones, W. R. Forman, M. Markevitch, Sloshing Gas in the Core of the Most Luminous Galaxy Cluster RXJ1347.5-1145, APJ 751 (2012) 95. doi:10.1088/0004-637X/751/2/95. arXiv:1106.3489.
* Kreisch et al. [2016] C. D. Kreisch, M. E. Machacek, C. Jones, S. W. Randall, Merger Hydrodynamics of the Luminous Cluster RX J1347.5-1145, APJ 830 (2016) 39. doi:10.3847/0004-637X/830/1/39. arXiv:1607.04674.
* Gitti et al. [2007] M. Gitti, C. Ferrari, W. Domainko, L. Feretti, S. Schindler, Discovery of diffuse radio emission at the center of the most X-ray-luminous cluster RX J1347.5-1145, A&A 470 (2007) L25–L28. doi:10.1051/0004-6361:20077658. arXiv:0706.3000.
* Robitaille and Bressert [2012] T. Robitaille, E. Bressert, APLpy: Astronomical Plotting Library in Python, 2012. ascl:1208.017.
# Damped Dirac magnon in a metallic kagomé antiferromagnet FeSn
Seung-Hwan Do Materials Science and Technology Division, Oak Ridge National
Laboratory, Oak Ridge, Tennessee 37831, USA Koji Kaneko Materials and Life
Science Division, J-PARC Center, Tokai, Ibaraki, 319-1195, Japan Materials
Sciences Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki,
319-1195, Japan Ryoichi Kajimoto Materials and Life Science Division, J-PARC
Center, Tokai, Ibaraki, 319-1195, Japan Kazuya Kamazawa Neutron Science and
Technology Center, Comprehensive Research Organization for Science and
Society, Tokai, Ibaraki, 319-1106, Japan Matthew B. Stone Neutron Scattering
Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
Shinichi Itoh Materials and Life Science Division, J-PARC Center, Tokai,
Ibaraki, 319-1195, Japan Institute of Materials Structure Science, High
Energy Accelerator Research Organization, Tsukuba, Ibaraki, 305-0081, Japan
Takatsugu Masuda Institute for Solid State Physics, The University of Tokyo,
Chiba 277-8581, Japan Institute of Materials Structure Science, High Energy
Accelerator Research Organization, Tsukuba, Ibaraki, 305-0081, Japan German
D. Samolyuk Materials Science and Technology Division, Oak Ridge National
Laboratory, Oak Ridge, Tennessee 37831, USA Elbio Dagotto Materials Science
and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee
37831, USA Department of Physics and Astronomy, University of Tennessee,
Knoxville, Tennessee 37996, USA William R. Meier Materials Science and
Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee
37831, USA Brian C. Sales Materials Science and Technology Division, Oak
Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA Hu Miao Materials
Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge,
Tennessee 37831, USA Andrew D. Christianson Materials Science and Technology
Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
###### Abstract
The kagomé lattice is a fertile platform to explore topological excitations
with both Fermi-Dirac and Bose-Einstein statistics. While relativistic Dirac
Fermions and flat-bands have been discovered in the electronic structure of
kagomé metals, the spin excitations have received less attention. Here we
report inelastic neutron scattering studies of the prototypical kagomé
magnetic metal FeSn. The spectra display well-defined spin waves extending up
to 120 meV. Above this energy, the spin waves become progressively broadened,
reflecting interactions with the Stoner continuum. Using linear spin wave
theory, we determine an effective spin Hamiltonian that reproduces the
measured dispersion. This analysis indicates that the Dirac magnon at the
K-point remarkably occurs on the brink of a region where well-defined spin
waves become unobservable. Our results emphasize the influential role of
itinerant carriers on the topological spin excitations of metallic kagomé
magnets.
This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-
AC05-00OR22725 with the U.S. Department of Energy. The United States
Government retains and the publisher, by accepting the article for
publication, acknowledges that the United States Government retains a non-
exclusive, paid-up, irrevocable, world-wide license to publish or reproduce
the published form of this manuscript, or allow others to do so, for United
States Government purposes. The Department of Energy will provide public
access to these results of federally sponsored research in accordance with the
DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
The interplay between charge, spin, and geometric frustration is an important
underlying theme to problems at the forefront of condensed matter physics [1,
2, 3, 4, 5, 6, 7, 8, 9]. Kagomé magnets, consisting of a corner-sharing
transition-metal triangular network (Fig. 1(a)), are an ideal platform to
explore correlated topological states, including the fractional quantum Hall
effect [1, 2, 3, 4], the intrinsic Chern state [10, 11, 12, 9] and magnetic
Weyl semimetals [13]. While the charge excitations of kagomé magnets have been
extensively investigated [14, 8, 15, 6, 16, 5, 9, 13], their magnetic
counterparts and the intertwined correlations between charge and spin degrees
of freedom have not yet been investigated in detail.
Similar to the electronic structure, a spin model with nearest-neighbor
magnetic exchange, $J_{1}$, yields a Dirac magnon at the K-point and a flat
magnon band, as shown in Fig. 1(c) [17, 18, 19]. Time-reversal symmetry
breaking interactions, such as the Dzyaloshinskii-Moriya interaction in
magnetic insulators, introduce a gap at the Dirac point and can induce a
topological thermal Hall effect [18, 20, 19, 21, 17]. In addition, magnon-
magnon interactions may modify the dispersion to realize interaction-
stabilized topological magnons [22, 23]. This simplified picture is, however,
challenged in a metallic kagomé magnet, where the presence of itinerant
electrons will introduce long-range magnetic interactions through, e.g., RKKY
(Ruderman–Kittel–Kasuya–Yosida) interactions, that dramatically change the
magnon dispersion as shown in Fig. 1(d). Moreover, the high-energy spin wave
excitations will interact with the particle-hole continuum of the Stoner
excitation (Fig. 1(e)), resulting in mode decay.
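This $J_{1}$-only picture is easy to reproduce numerically. The following sketch (ours; it assumes the standard Holstein-Primakoff result for a ferromagnet, in which the magnon energies are $S|J_{1}|$ times the eigenvalues of $4I-N(\textbf{k})$, with $N_{ab}(\textbf{k})=2\cos(\textbf{k}\cdot\textbf{d}_{ab})$ and $\textbf{d}_{ab}$ the nearest-neighbor inter-sublattice vectors) recovers the flat band and the Dirac crossing at the K-point of Fig. 1(c):

import numpy as np

J1, S = -1.0, 1.0                        # FM nearest-neighbor exchange (meV), spin
a1 = np.array([1.0, 0.0])                # kagome Bravais vectors (lattice constant 1)
a2 = np.array([0.5, np.sqrt(3) / 2])
bonds = [a1 / 2, a2 / 2, (a2 - a1) / 2]  # A-B, A-C, B-C nearest-neighbor vectors

def magnon_bands(k):
    N = np.zeros((3, 3))
    for (i, j), d in zip([(0, 1), (0, 2), (1, 2)], bonds):
        N[i, j] = N[j, i] = 2 * np.cos(k @ d)
    return np.sort(S * abs(J1) * np.linalg.eigvalsh(4 * np.eye(3) - N))

print(magnon_bands(np.array([0.0, 0.0])))            # Gamma: [0, 6, 6], flat band on top
print(magnon_bands(np.array([4 * np.pi / 3, 0.0])))  # K: [3, 3, 6], Dirac crossing

The flat band sits at $6|J_{1}|S$ for every $\textbf{k}$, while the two dispersive branches cross linearly at $3|J_{1}|S$ at the K-point.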
Figure 1: (a) (b) Crystal and magnetic structure of FeSn. The exchange paths
between Fe spins are indicated. The spin wave dispersions of a ferromagnetic
kagomé lattice with $J_{1}$=-1 meV and (c) $J_{2}$=0 and (d)
$J_{2}$=0.2$J_{1}$ ($J_{2}$=-0.2$J_{1}$), are displayed with black and gray
(blue) curves, respectively. High symmetry points are indicated in the inset
to (d). (e) Schematic of a Stoner excitation spectra (continuum) and magnon
(sharp dispersion) as a function of momentum (Q) and energy ($E$). The spin
wave mode decays into a particle-hole pair near the Fermi energy ($E_{F}$)
when it enters the Stoner continuum. The continuum boundary shifts with gap
$\Delta$ ($\delta$), reflecting the direct (indirect) electronic transitions,
as shown in the inset.
To explore the effects of itinerant carriers on the magnons in the
ferromagnetic kagomé spin-lattice, we study the spin excitation spectra of the
metallic kagomé magnet FeSn using inelastic neutron scattering (INS). The
measured spectra show relatively sharp spin waves of the ferromagnetic kagomé
spin-lattice below 120 meV. At higher energies, the spin waves exhibit decay
due to interactions with the Stoner continua. Interestingly, we find that
while the Dirac magnon remains, the upper branch of the Dirac band is heavily
damped, uncovering a non-trivial interplay between magnon and continuum.
FeSn crystallizes in a hexagonal structure ($P6/mmm$) with the Fe atoms
forming a two-dimensional kagomé spin-lattice (Fig. 1(b)). Below
$T_{\text{N}}$=365 K, the Fe spins form ferromagnetic kagomé layers which are
stacked antiferromagnetically along the $c$-axis with an ordering wave vector
of $\textbf{Q}_{\text{m}}$=(0,0,1/2). As we show in this letter, the dominant
in-plane ferromagnetic interactions allow the behavior of the quasi two-
dimensional ferromagnetic kagomé spin-lattice to be probed. For the INS
measurements, 4.43 g of FeSn single crystals were grown using the flux method
[24] and co-aligned on aluminum plates with a [$H,0,L$] horizontal scattering
plane. The INS data were obtained at $T$=100 K using HRC [25] (incident
energies $E_{\text{i}}$=40 and 153 meV) and 4SEASONS [26] ($E_{\text{i}}$=27
meV, 46 meV, 96 meV, and $E_{\text{i}}$=300 meV) spectrometers at the Japan
Proton Accelerator Research Complex (J-PARC). Additional data were collected
with the SEQUOIA [27] spectrometer ($E_{\text{i}}$=500 meV) at the Spallation
Neutron Source (SNS) at Oak Ridge National Laboratory (see [28] for additional
details).
Figure 2: (a) Contour map of the INS intensity along high symmetry directions
(given in (f)). The data ((a),(c)) were measured using HRC with $E_{i}$=153
meV. The spectrum above (below) the horizontal line at 30 meV was obtained
from the BZ for $\Gamma$ at Q=(0,0,1/2) ((0,0,3/2)), integrating over Q=0.22
Å$^{-1}$ along the vertical direction. Horizontal (vertical) error bars of pink
(green) circles indicate the fitted peaks' full width at half maximum (FWHM),
and vertical (horizontal) error bars indicate the range of energy (momentum)
integration. (b) INS data (left) and spin wave calculations (right) as
described in the text along the out-of-plane direction through the ZC,
measured using the 4SEASONS spectrometer with $E_{\text{i}}$=46 meV. The solid
line is the calculated magnon dispersion. (c) Constant energy slice of the
magnon spectra in the [$H$,0,$L$] plane and the calculated spectra. (d) Low-
energy spectrum of $I(\textbf{Q},E)$ near the ZC measured using
$E_{\text{i}}$=27 meV at 4SEASONS, and (e) the corresponding calculation
including an easy-plane anisotropy of $D_{z}$=0.2 meV. Figure 3: (a) High-
energy INS spectra (plotted as $E\times I(\text{{Q}},E)$) and spin wave
calculations ((b)) along high symmetry directions as indicated in the right
panel of the $HK$-reciprocal space map. Data were obtained by integrating over
Q=0.19 Å$^{-1}$ and $-4\leq L\leq 4$. The calculation was performed for an
identical Q-integration range and convoluted with the instrumental resolution
of SEQUOIA. The black solid lines display the magnon dispersion for $L$=0.5.
Horizontal (vertical) error bars of white filled circles indicate the fitted
peaks' FWHM (range of energy integration). (c) Constant energy cut along the
high symmetry directions, integrated over energy $\pm$5 meV. Solid lines are
Gaussian fits described in the text with fitted values displayed in (a). (d)
INS spectra obtained from HRC ($E_{i}$=153 meV), integrated over $-3\leq
L\leq 3$.
Figure 2 shows the spectra in the three-dimensional hexagonal Brillouin zone
(BZ), measured by INS. The acoustic magnons emanate from
$\textbf{Q}_{m}$=$\Gamma$(0,0,1/2), and disperse throughout the entire BZ.
Strongly dispersive magnons in the $HK$-plane extend well above 80 meV,
whereas the magnon dispersion along the out-of-plane direction has a bandwidth
of less than 20 meV indicating the dominant spin-spin interactions are within
the kagomé-lattice planes. The nearly two-dimensional character of the spin
excitation spectrum is further evidenced by the rod-like scattering shown in
Fig. 2(c).
Table 1: Hamiltonian parameters determined from the spin wave theory analysis.
Label (number of paths) | $J_{1}$ (4) | $J_{int1}$ (2) | $J_{2}$ (4) | $J_{int2}$ (8) | $J_{3}$ (2) | $J_{4}$ (4) | $D_{z}$
---|---|---|---|---|---|---|---
$J_{ij}^{\text{Fit}}$ (meV) | -44.33 $\pm 1.56$ | 4.51$\pm 1.00$ | 12.23 $\pm 1.06$ | 1.27 $\pm 0.24$ | -5.28 $\pm 2.32$ | -4.60 $\pm 0.90$ | 0.1
Distance (Å) | 2.65 | 4.45 | 4.59 | 5.18 | 5.30 | 5.30 | -
The high-energy spectra were measured using the SEQUOIA spectrometer with
$E_{i}$=500 meV. We integrate the INS data over $-4\leq L\leq 4$ r.l.u. to
enhance statistics. Note that due to momentum and energy conservation, high-
energy transfer data is obtained from a larger magnitude $L$-region, which
results in lower scattering intensity from the magnetic form factor
contribution. As shown in Fig. 3, the excitations extend to at least 200 meV.
Two individual magnon branches are observed corresponding to the lower- and
mid-magnon bands in Fig.1(c) of the ferromagnetic kagomé spin-lattices through
the M- and K-points in the BZ. The higher energy spectral weight above
$\sim$120 meV is diffuse, and becomes indiscernible from background above
$\sim$200 meV. Figure 3(c) shows momentum scans through
$\Gamma$’-$M$-$\Gamma$-K-X for increasing energy transfer. Along both the
$\Gamma$-M and $\Gamma$-K directions, the peak linewidths broaden as a
function of Q near the zone boundary (ZB), and the peak positions remain
nearly constant over a wide energy range, 120$<E<$170 meV (80$<E<$120 meV)
near the K (M)-point. These Q- and $E$-broadenings of the peaks indicate the
decay of the magnons resulting from quasiparticle scattering [29, 30, 31].
Considering the
metallicity of FeSn along with the collinear spin configuration, FeSn
presumably has a large magnon-electron interaction, which results in strong
damping of the magnon spectra.
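For reference, the peak positions and linewidths quoted above come from Gaussian fits to constant-energy cuts; a minimal sketch of such a fit (ours, with synthetic data standing in for a measured cut) is:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(q, amp, q0, sigma, bkg):
    return amp * np.exp(-0.5 * ((q - q0) / sigma) ** 2) + bkg

# Synthetic constant-energy cut I(Q) along a high-symmetry direction
q = np.linspace(-0.5, 0.5, 61)
I = gaussian(q, 1.0, 0.12, 0.05, 0.1) + 0.02 * np.random.default_rng(1).normal(size=61)

popt, pcov = curve_fit(gaussian, q, I, p0=[1.0, 0.1, 0.05, 0.0])
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])
print(f"peak at q0 = {popt[1]:.3f} r.l.u., FWHM = {fwhm:.3f} r.l.u.")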
Figure 4: (a)-(d) Constant energy slices of the INS data ($E\times
I(\text{{Q}},E)$) and spin wave calculations. Dashed lines indicate the first
BZ in the $HK$-plane. The color bar for (a)(b) ((c),(d)) is shown in right of
(b) ((d)). INS spectra through the K-point along (e) transverse- and (f)
radial-directions (see arrows in insets). (g) Momentum scans at constant
energy through the K-point along the transverse direction. The dispersion was
extracted by fitting the spectra to Gaussian functions (solid lines) and the
results are displayed as circles in (e). The lines in (e)(f) represent the
linearly crossing magnons for $L$=0.5 at the Dirac node. Horizontal (vertical)
error bars in (e)(f) indicate the fitted FWHM (range of energy integration).
(h) Constant wave-vector scan at the Dirac point. Data are shown as symbols
and the spectral weight from LSWT (shaded region) is described in the text.
The line is a guide to the eye. The data were obtained by integrating over the
momentum region [$H$,0,0]=$\pm$0.05, [2$K$,-$K$,0]=$\pm$0.06, and
[0,0,$L$]=$\pm$4. (e),(f),(h) For clarity, the nonmagnetic background was
obtained from the scattering at Q=(2/3,2/3,0) and subtracted from the measured
intensities [28].
To understand the observed spin wave spectra and the underlying spin-spin
interactions, we use linear spin wave theory (LSWT) with the Hamiltonian,
${\cal H}=\sum_{n}J_{n}\sum_{\langle i,j\rangle_{n}}\mathbf{S}_{i}\cdot\mathbf{S}_{j}-D_{z}\sum_{i}(S_{i}^{z})^{2}$, as
implemented in the SpinW software package [32]. We set $S$=1 considering the
measured effective moment of 3.4 $\mu_{B}$ (2.8 $\mu_{B}$ for $S$=1, where
$g$=2) [24]. $J_{n}$ and $D_{z}$ correspond to Heisenberg exchange couplings
for the $n$th nearest-neighbor and a single-ion anisotropy, respectively
111Indeed, Dzyalloshinskii-Moriya interaction (DMI) along $z$-axis ($c$) is
symmetrically allowed. However, since the DMI on the spins aligning in the
plane ($z\perp S$) does not change the dispersion we exclude the DMI in the
model Hamiltonian.. Interactions up to fourth (second) nearest-neighbor in-
the-plane (out-of-plane) direction (see Fig. 1(a)(b)) were considered. Note
that $J_{3}$ and $J_{4}$ have the same distance but different paths. Hence,
the distinction of these parameters is maintained due to the potential effects
on the RKKY interaction of the complicated band structure near the fermi
surface [34]. The measured dispersion is fitted to the calculated dispersion
(see [28]), yielding the parameters listed in Table 1. The parameters indicate
a dominant nearest-neighbor ferromagnetic interaction $J_{1}$ responsible for
the ferromagnetic kagomé spin-lattice. We also determine non-negligible
further neighbor exchanges, $J_{2}(\sim-0.28J_{1})$, $J_{3}(\sim 0.12J_{1})$,
and $J_{4}(\sim 0.1J_{1})$, are present. The sign and relative size of the
parameters from the spin wave analysis are largely consistent with parameters
determined from first principles calculations (see Supplemental materials [28]
and [24]). Furthermore, the symmetry-allowed easy-plane single-ion anisotropy
($D_{z}>$0) reproduces the peaked intensity data near 4 meV shown in Fig 2(d)
[28].
The refined spin wave scattering intensity is compared to the experimental
data in Figs. 2, 3, and 4. The calculations reproduce the low-energy spectra.
However, the scattering intensity and dispersion deviate from the calculation
at the zone boundary and well-defined modes are essentially absent above 200
meV in the measurements. This discrepancy in the scattering intensity is
ascribed to interactions with the Stoner continuum [35, 36, 37, 38], and
indicates the energy scale of the Stoner excitations. Due to the large number
of electronic bands in FeSn, it is challenging to make direct comparisons to
the magnetic spectra measured here. However, electronic band structure
calculations do indicate splitting of majority and minority spin bands near
the Fermi energy [39, 24]. The minimum energy of an indirect inter-band
transition for these bands near Q=$\Gamma$ is $\sim$0.1-0.2 eV, which results
in a gap of the Stoner excitations with finite momenta (see Fig.1(e)), and is
consistent with the energy scale above which damping begins to dominate the
INS spectra.
The determined spin Hamiltonian and the symmetry of the spin configuration
preserve time-reversal symmetry and permit the existence of a Dirac point in
the magnon spectrum. LSWT presents a sharp linear magnon band crossing at
$E\sim$120 meV at the K-point (see dispersion line in Fig. 3(a)(b)). However,
due to interactions with electrons, the spin waves near the Dirac node are
susceptible to decay. Figure 4(a)-(d) presents constant energy slices measured
up to 180 meV. The low-energy spectrum below the Dirac node is reproduced by
LSWT. The Dirac node is evident at the K-point at 120 meV, as shown in Fig.
4(b). Above 120 meV, the excitations broaden significantly. This is
particularly evident near the zone boundary, and the broadening increases with
increasing energy transfer. Figures 4(e) and (f) highlight the dispersion in
the vicinity of the Dirac nodes along transverse and radial directions,
respectively. Figure 4(g) shows constant energy scans along the transverse
direction through the Dirac nodal point as having two clear peaks below 100
meV and above 150 meV, but only a single peak between 100 meV and 150 meV in
the vicinity of the two crossing bands. Peak positions extracted from Gaussian
fits compare well to the LSWT dispersion curve in Figs. 4(e). We note that
finite spectral weight likely due to damping from interactions with the
continuum is present between the two peaks above the Dirac node. In contrast,
the momentum scan along the radial direction deviates from the calculated
dispersion above 120 meV, as shown in Fig. 4(f). Rather than two peaks,
constant energy scans along this direction show a broadened spectral weight
centered near the Dirac node. These results demonstrate not only that the
scattering with itinerant electrons reconstructs the upper Dirac cone
dispersion, but also that the diffusive continuum from the decay fills in the
Dirac cone. Figure 4(h)
shows an energy scan at the Dirac node compared to the calculated spectral
weight of the LSWT model convoluted with the instrumental energy resolution.
The decayed spectral weight is visible above 150 meV and extends well beyond
the LSWT model of the scattering.
Additionally, the LSWT completely fails to explain the observed upper spectral
weight above 120 meV along $\Gamma$ to M (see Fig 3(a)(b)). We note that
adjusting the exchange values of the Hamiltonian to have a large
antiferromagnetic $J_{4}\sim$3.5 meV (with ferromagnetic $J_{3}$) decreases
the upper magnon branch down to 120 meV and reproduces the observed spectral
weight near M. However, this results in a large discrepancy in the other
magnon bands (see Supplemental Material [28]). It is worth noting that a
fluctuation continuum is also present at the top of the lower magnon branch at
Q=M (zone boundary) above 80 meV (see Fig. 3(d)). It connects the lower magnon
branch to the upper spectral weight without a gap in the spectrum. This in
turn generates a band touching at M around the Dirac node, resulting in a weak
ring-shaped spectral weight in all constant energy slices between 80 meV
and 150 meV (see Fig. 4). This continuous scattering confirms that the
excitation near M is not simply due to a spin wave excitation. Therefore, a
likely component of the measured spectral weight near M is the decayed spectra
of the upper magnon band. To explain this may require a comparison to the
itinerant band model [40, 38], a more sophisticated approach which includes
the correction from the interactions with itinerant electrons [41, 42], or
spin-fermion model [43, 44, 45].
In summary, we have found that the spin excitation spectrum in the
ferromagnetic kagomé metal FeSn is quasi-two-dimensional with progressively
stronger damping of the spin waves with increasing energy transfer. The
determined exchange terms for the spin Hamiltonian provide for a symmetry
allowed magnon Dirac nodal point near the electronic continua. The interaction
with the itinerant electrons is large near the nodal point, resulting in a
significant spectral broadening with momentum dependence. The interactions are
also large near the M-point, which results in continuous spectral weight
between the lower and upper magnon bands. A more complete understanding of
these observations requires calculations which account for the electron-magnon
interactions. It will be particularly interesting to check if the spin-charge
coupled spectra in the kagomé metallic magnet possess the topology arising
from correlation effects.
###### Acknowledgements.
We acknowledge M. Lumsden for useful discussions. This research was supported
by the U.S. Department of Energy, Office of Science, Basic Energy Sciences,
Materials Science and Engineering Division. Work at the Oak Ridge National
Laboratory Spallation Neutron Source was supported by U.S. DOE, Office of
Science, BES, Scientific User Facilities Division. The neutron experiment at
the Materials and Life Science Experimental Facility of the J-PARC was
performed under a user program (Proposal No. 2019B0248 and 2020A0217).
## References
* Tang _et al._ [2011] E. Tang, J.-W. Mei, and X.-G. Wen, Phys. Rev. Lett. 106, 236802 (2011).
* Sheng _et al._ [2011] D. N. Sheng, Z.-C. Gu, K. Sun, and L. Sheng, Nature Communications 2, 389 (2011).
* Neupert _et al._ [2011] T. Neupert, L. Santos, C. Chamon, and C. Mudry, Phys. Rev. Lett. 106, 236804 (2011).
* Sun _et al._ [2011] K. Sun, Z. Gu, H. Katsura, and S. Das Sarma, Phys. Rev. Lett. 106, 236803 (2011).
* Yin _et al._ [2018] J.-X. Yin, S. S. Zhang, H. Li, K. Jiang, G. Chang, B. Zhang, B. Lian, C. Xiang, I. Belopolski, H. Zheng, _et al._ , Nature 562, 91 (2018).
* Ye _et al._ [2018] L. Ye, M. Kang, J. Liu, F. Von Cube, C. R. Wicker, T. Suzuki, C. Jozwiak, A. Bostwick, E. Rotenberg, D. C. Bell, _et al._ , Nature 555, 638 (2018).
* Yin _et al._ [2019] J.-X. Yin, S. S. Zhang, G. Chang, Q. Wang, S. S. Tsirkin, Z. Guguchia, B. Lian, H. Zhou, K. Jiang, I. Belopolski, N. Shumiya, D. Multer, M. Litskevich, T. A. Cochran, H. Lin, Z. Wang, T. Neupert, S. Jia, H. Lei, and M. Z. Hasan, Nature Physics 15, 443 (2019).
* Kang _et al._ [2020a] M. Kang, L. Ye, S. Fang, J.-S. You, A. Levitan, M. Han, J. I. Facio, C. Jozwiak, A. Bostwick, E. Rotenberg, M. K. Chan, R. D. McDonald, D. Graf, K. Kaznatcheev, E. Vescovo, D. C. Bell, E. Kaxiras, J. van den Brink, M. Richter, M. Prasad Ghimire, J. G. Checkelsky, and R. Comin, Nature Materials 19, 163 (2020a).
* Yin _et al._ [2020] J.-X. Yin, W. Ma, T. A. Cochran, X. Xu, S. S. Zhang, H.-J. Tien, N. Shumiya, G. Cheng, K. Jiang, B. Lian, Z. Song, G. Chang, I. Belopolski, D. Multer, M. Litskevich, Z.-J. Cheng, X. P. Yang, B. Swidler, H. Zhou, H. Lin, T. Neupert, Z. Wang, N. Yao, T.-R. Chang, S. Jia, and M. Zahid Hasan, Nature 583, 533 (2020).
* Thouless _et al._ [1982] D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. 49, 405 (1982).
* Haldane [1988] F. D. M. Haldane, Phys. Rev. Lett. 61, 2015 (1988).
* Xu _et al._ [2015] G. Xu, B. Lian, and S.-C. Zhang, Phys. Rev. Lett. 115, 186802 (2015).
* Liu _et al._ [2019] D. F. Liu, A. J. Liang, E. K. Liu, Q. N. Xu, Y. W. Li, C. Chen, D. Pei, W. J. Shi, S. K. Mo, P. Dudin, T. Kim, C. Cacho, G. Li, Y. Sun, L. X. Yang, Z. K. Liu, S. S. P. Parkin, C. Felser, and Y. L. Chen, Science 365, 1282 (2019).
* Kang _et al._ [2020b] M. Kang, S. Fang, L. Ye, H. C. Po, J. Denlinger, C. Jozwiak, A. Bostwick, E. Rotenberg, E. Kaxiras, J. G. Checkelsky, and R. Comin, Nature Communications 11, 4004 (2020b).
* Liu _et al._ [2020] Z. Liu, M. Li, Q. Wang, G. Wang, C. Wen, K. Jiang, X. Lu, S. Yan, Y. Huang, D. Shen, J.-X. Yin, Z. Wang, Z. Yin, H. Lei, and S. Wang, Nature Communications 11, 4002 (2020).
* Ortiz _et al._ [2020] B. R. Ortiz, S. M. L. Teicher, Y. Hu, J. L. Zuo, P. M. Sarte, E. C. Schueller, A. M. M. Abeykoon, M. J. Krogstad, S. Rosenkranz, R. Osborn, R. Seshadri, L. Balents, J. He, and S. D. Wilson, Phys. Rev. Lett. 125, 247002 (2020).
* Owerre [2017] S. Owerre, Journal of Physics Communications 1, 025007 (2017).
* Mook, Alexander and Henk, Jürgen and Mertig, Ingrid [2014] Mook, Alexander and Henk, Jürgen and Mertig, Ingrid, Physical Review B 89, 134409 (2014).
* Chisnell _et al._ [2015] R. Chisnell, J. S. Helton, D. E. Freedman, D. K. Singh, R. I. Bewley, D. G. Nocera, and Y. S. Lee, Phys. Rev. Lett. 115, 147201 (2015).
* Mook, Alexander and Henk, Jürgen and Mertig, Ingrid [2014] Mook, Alexander and Henk, Jürgen and Mertig, Ingrid, Phys. Rev. B 90, 024412 (2014).
* Zhang _et al._ [2013] L. Zhang, J. Ren, J.-S. Wang, and B. Li, Phys. Rev. B 87, 144101 (2013).
* Mook, Alexander and Plekhanov, Kirill and Klinovaja, Jelena and Loss, Daniel [2021] Mook, Alexander and Plekhanov, Kirill and Klinovaja, Jelena and Loss, Daniel, Phys. Rev. X 11, 021061 (2021).
* McClarty and Rau [2019] P. A. McClarty and J. G. Rau, Phys. Rev. B 100, 100405(R) (2019).
* Sales _et al._ [2019] B. C. Sales, J. Yan, W. R. Meier, A. D. Christianson, S. Okamoto, and M. A. McGuire, Phys. Rev. Materials 3, 114203 (2019).
* Itoh _et al._ [2011] S. Itoh, T. Yokoo, S. Satoh, S. ichiro Yano, D. Kawana, J. Suzuki, and T. J. Sato, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 631, 90 (2011).
* Kajimoto _et al._ [2011] R. Kajimoto, M. Nakamura, Y. Inamura, F. Mizuno, K. Nakajima, S. Ohira-Kawamura, T. Yokoo, T. Nakatani, R. Maruyama, K. Soyama, _et al._ , Journal of the Physical Society of Japan 80, SB025 (2011).
* Granroth _et al._ [2010] G. Granroth, A. Kolesnikov, T. Sherline, J. Clancy, K. Ross, J. Ruff, B. Gaulin, and S. Nagler, in _Journal of Physics: Conference Series_ , Vol. 251 (IOP Publishing, 2010) p. 012058.
* [28] “See supplemental material for additional data and analysis.” .
* Zhitomirsky and Chernyshev [2013] M. E. Zhitomirsky and A. L. Chernyshev, Rev. Mod. Phys. 85, 219 (2013).
* Oh _et al._ [2016] J. Oh, M. D. Le, H.-H. Nahm, H. Sim, J. Jeong, T. Perring, H. Woo, K. Nakajima, S. Ohira-Kawamura, Z. Yamani, _et al._ , Nature communications 7, 13146 (2016).
* Chen _et al._ [2020] X. Chen, I. Krivenko, M. B. Stone, A. I. Kolesnikov, T. Wolf, D. Reznik, K. S. Bedell, F. Lechermann, and S. D. Wilson, Nature communications 11, 3076 (2020).
* Toth and Lake [2015] S. Toth and B. Lake, Journal of Physics: Condensed Matter 27, 166002 (2015).
* Note [1] Indeed, Dzyalloshinskii-Moriya interaction (DMI) along $z$-axis ($c$) is symmetrically allowed. However, since the DMI on the spins aligning in the plane ($z\perp S$) does not change the dispersion we exclude the DMI in the model Hamiltonian.
* Roth _et al._ [1966] L. M. Roth, H. J. Zeiger, and T. A. Kaplan, Phys. Rev. 149, 519 (1966).
* Korenman and Prange [1972] V. Korenman and R. Prange, Physical Review B 6, 2769 (1972).
* Ibuka _et al._ [2017] S. Ibuka, S. Itoh, T. Yokoo, and Y. Endoh, Phys. Rev. B 95, 224406 (2017).
* Adams _et al._ [2000] C. P. Adams, T. E. Mason, E. Fawcett, A. Z. Menshikov, C. D. Frost, J. B. Forsyth, T. G. Perring, and T. M. Holden, Journal of Physics: Condensed Matter 12, 8487 (2000).
* Diallo, S. O. and Antropov, V. P. and Perring, T. G. and Broholm, C. and Pulikkotil, J. J. and Ni, N. and Bud’ko, S. L. and Canfield, P. C. and Kreyssig, A. and Goldman, A. I. and McQueeney, R. J. [2009] Diallo, S. O. and Antropov, V. P. and Perring, T. G. and Broholm, C. and Pulikkotil, J. J. and Ni, N. and Bud’ko, S. L. and Canfield, P. C. and Kreyssig, A. and Goldman, A. I. and McQueeney, R. J., Phys. Rev. Lett. 102, 187206 (2009).
* Lin _et al._ [2020] Z. Lin, C. Wang, P. Wang, S. Yi, L. Li, Q. Zhang, Y. Wang, Z. Wang, H. Huang, Y. Sun, Y. Huang, D. Shen, D. Feng, Z. Sun, J.-H. Cho, C. Zeng, and Z. Zhang, Phys. Rev. B 102, 155103 (2020).
* Ewings _et al._ [2011] R. A. Ewings, T. G. Perring, J. Gillett, S. D. Das, S. E. Sebastian, A. E. Taylor, T. Guidi, and A. T. Boothroyd, Phys. Rev. B 83, 214519 (2011).
* Park _et al._ [2011] H. Park, K. Haule, and G. Kotliar, Phys. Rev. Lett. 107, 137007 (2011).
* Müller, Mathias C. T. D. and Friedrich, Christoph and Blügel, Stefan [2016] Müller, Mathias C. T. D. and Friedrich, Christoph and Blügel, Stefan, Phys. Rev. B 94, 064433 (2016).
* Liang _et al._ [2014] S. Liang, A. Mukherjee, N. D. Patel, C. B. Bishop, E. Dagotto, and A. Moreo, Phys. Rev. B 90, 184507 (2014).
* Liang _et al._ [2012] S. Liang, G. Alvarez, C. Şen, A. Moreo, and E. Dagotto, Phys. Rev. Lett. 109, 047001 (2012).
* Liang _et al._ [2013] S. Liang, A. Moreo, and E. Dagotto, Phys. Rev. Lett. 111, 047004 (2013).
# Deformations of Hopf algebras by partial actions
Grasiela Martini, Antonio Paques and Leonardo Duarte Silva Universidade
Federal do Rio Grande, Brazil<EMAIL_ADDRESS>Universidade Federal
do Rio Grande do Sul, Brazil<EMAIL_ADDRESS>Universidade Federal do Rio
Grande do Sul, Brazil<EMAIL_ADDRESS>
###### Abstract.
In this work we study deformations of a Hopf algebra $H$ by partial actions of
$H$ on its base field $\Bbbk$, via partial smash product algebras. We
introduce the concept of a $\lambda$-Hopf algebra as a Hopf algebra obtained
as a partial smash product algebra, and show that every Hopf algebra is a
$\lambda$-Hopf algebra. Moreover, a method to compute partial actions of a
given Hopf algebra on its base field is developed and, as an application, we
exhibit all partial actions of such type for some families of Hopf algebras.
MSC 2020: 16T05, 16T99, 16S40, 16S99, 16W99.
Key words and phrases: Deformation of a Hopf algebra, $\lambda$-Hopf algebra,
partial action, partial smash product algebra.
The third author was partially supported by CNPq, Brazil.
###### Contents
1. 1 Introduction
2. 2 Preliminaries
3. 3 Partial actions of a Hopf algebra on its base field
1. 3.1 Properties of a partial action
2. 3.2 About the computations of a partial action of $H$ on $\Bbbk$
3. 3.3 Partial actions of the pointed non-semisimple Hopf algebras $H$ with $dim_{\Bbbk}(H)=8,16$
4. 4 $\lambda$-Hopf algebras
1. 4.1 Definition and Properties
2. 4.2 Taft’s algebra as a $\lambda$-Hopf algebra
3. 4.3 Examples of $\lambda$-Hopf algebras
## 1\. Introduction
The first notion of a partial action that appeared in the literature was a
partial action of groups on algebras within the theory of operator algebras.
Precisely, it appeared in Exel’s paper [17], whose aim was to describe the
structure of suitable $C^{*}$-algebras with the unit circle $\mathbb{S}_{1}$
acting by automorphisms. In [15], Dokuchaev and Exel presented the definition
of a group acting partially on an algebra, carrying it to a purely algebraic
context. From this previous work, partial group actions attracted the
attention of many other researchers. In particular, a Galois theory for
commutative ring extensions was developed using partial actions [16]. Such work motivated
Caenepeel and Janssen to extend it to the context of Hopf algebras [12].
The notion of a partial action has several applications and extensions to
other settings, such as groupoids [8] and categories [2], among many others,
as can be seen in [14] and the references therein. In the theory of partial
Hopf actions,
examples constitute a fundamental part of its understanding and development.
An interesting example of partial action is given by scalars, that is, a Hopf
algebra $H$ acting partially on a $\Bbbk$-algebra $A$ via a partial action of
$H$ on $\Bbbk$. Precisely, if $\cdot:H\otimes\Bbbk\longrightarrow\Bbbk$ is a
partial action of $H$ on $\Bbbk$, then $H$ acts partially on the algebra $A$
via $\rightharpoonup:H\otimes A\longrightarrow A$, $h\rightharpoonup a=(h\cdot
1_{\Bbbk})a$. In fact, many authors have obtained a characterization for this
particular case of partial action, even in more general settings. For
instance, see [13, Lemma 4.1], [2, Section 3.2] and [3, Examples 2.3 - 2.5].
However, even with this characterization, few concrete examples are known.
Another point of interest, which has always been intertwined with the concept
of actions, is that of deformations, whose horizons expand when carried to
the partial context. Given a global action of $H$ on $A$, the corresponding
smash product algebra $A\\#H$ is actually a deformation of the algebra
$A\otimes H$ by such an action. In particular, since the unique global action
of $H$ on $\Bbbk$ is given by the counit of $H$, the corresponding smash
product algebra $\Bbbk\\#H$ coincides with the algebra $\Bbbk\otimes H\simeq
H$, that is, there are no deformations of $H$ using global actions in this
way. However, in the partial setting, that is, with a partial (not global)
action of $H$ on $\Bbbk$, the partial smash product algebra
$\underline{\Bbbk\\#H}$ no longer coincides with $H$, that is, using a partial
(not global) action of $H$ on $\Bbbk$ one can produce a non-trivial
deformation of $H$.
In this work we deal with deformations of this type, that is, deformations of
a Hopf algebra $H$ by partial actions of $H$ over its base field $\Bbbk$. To
do so, we investigate partial actions of $H$ on $\Bbbk$ and partial smash
product algebras. In particular, we enlarge the stock of examples as
well as characterize some specific properties of such partial actions.
This paper is organized as follows. First we recall in Section 2 some
definitions and results about partial actions of a Hopf algebra $H$ on an
algebra $A$. In particular, the partial smash product algebra
$\underline{A\\#H}$ is presented. We focus in the particular case when $A$ is
the base field $\Bbbk$ of $H$. Some guiding examples are presented and
important notations are fixed. Then, we dedicate Section 3 to study partial
actions of a Hopf algebra over its base field. First, some properties and a
method to compute such partial actions are developed, then examples are
exhibited. Finally, in Section 4 we investigate partial smash product
algebras. We obtain a characterization of the conditions under which such an
algebra results in a Hopf algebra. The concept of a $\lambda$-Hopf algebra is
introduced, some results concerning it are proved and several examples are
calculated using the previously computed partial actions.
### Conventions
In this work, we deal only with algebras over a fixed field $\Bbbk$. Unadorned
$\otimes$ means $\otimes_{\Bbbk}$. We write $G(H)=\\{g\in H\setminus\\{0\\}\
|\ \Delta(g)=g\otimes g\\}$ for the group of the group-like elements of a Hopf
algebra $H$. Given $g,h\in G(H)$, an element $x\in H$ is called a _$(g,h)$
-primitive element_ if $\Delta(x)=x\otimes g+h\otimes x$, and the linear space
of all $(g,h)$-primitive elements of $H$ is denoted by $P_{g,h}(H)$. If no
emphasis on the elements $g,h\in G(H)$ is needed, an element $x\in P_{g,h}(H)$
is called simply of a _skew-primitive element_. Clearly,
$\Bbbk\\{g-h\\}\subseteq P_{g,h}(H)$; these are called the trivial ones.
Finally, given two algebras $A,B,$ we recall that there exist natural
inclusions $A,B\hookrightarrow A\otimes B;$ then, for $a\in A$ and $b\in B$,
by $a,b\in A\otimes B$ we mean $a\otimes 1_{B},1_{A}\otimes b\in A\otimes B$.
## 2\. Preliminaries
We present here some definitions and preliminaries results about partial
actions of a Hopf algebra $H$ on an algebra $A$. Some examples are presented
and some notations are fixed. For details we refer to [2, 4, 12].
A _left partial action of a Hopf algebra $H$ on an algebra $A$_ is a linear
map $\cdot:H\otimes A\longrightarrow A$, denoted by $\cdot(h\otimes a)=h\cdot
a$, such that:
1. (i)
$1_{H}\cdot a=a$;
2. (ii)
$h\cdot ab=(h_{1}\cdot a)(h_{2}\cdot b)$;
3. (iii)
$h\cdot(k\cdot a)=(h_{1}\cdot 1_{A})(h_{2}k\cdot a)$,
for all $h,k\in H$ and $a,b\in A$. In this case, $A$ is called a _partial
$H$-module algebra_.
A left partial action is _symmetric_ if in addition we have
1. (iv)
$h\cdot(k\cdot a)=(h_{1}k\cdot a)(h_{2}\cdot 1_{A})$,
for all $h,k\in H$ and $a\in A$.
Clearly, every global action is a partial action. The definition of a _right
partial action_ is given in an analogous way. Since throughout this work we
deal only with left partial actions, from now on by a _partial action_ we mean
a left partial action.
There exists a characterization for partial actions of $H$ on $\Bbbk$, where
$\Bbbk$ is the base field of the Hopf algebra $H$, given by a linear map
$\lambda$ of $H^{*}=Hom_{\Bbbk}(H,\Bbbk)$. In fact, this map gives rise to an
important source of examples for the partial action theory in several
settings, as for example in partial actions of group algebras on (co)algebras,
partial actions of weak Hopf algebras on (co)algebras and partial actions of
Hopf algebras on categories (see [3, 4, 9, 13] for details).
The characterization given in the next result is more general. In fact, in
[13] the authors suppose that $H$ is a weak Hopf algebra. However, we present
this result with the assumption that $H$ is a Hopf algebra, since we deal only
with such algebras in this work.
###### Proposition 2.1.
[13, Lemma 4.1] Let $H$ be a Hopf algebra. Then, $\Bbbk$ is a partial
$H$-module algebra if and only if there exists a linear map $\lambda\in H^{*}$
such that $\lambda(1_{H})=1_{\Bbbk}$ and
$\displaystyle\lambda(h)\lambda(k)=\lambda(h_{1})\lambda(h_{2}k),$ (2.1)
for all $h,k\in H$. Moreover, the corresponding partial action is symmetric if
and only if $\lambda$ satisfies the additional condition
$\displaystyle\lambda(h)\lambda(k)=\lambda(h_{1}k)\lambda(h_{2}),$ (2.2)
for all $h,k\in H$.
From now on, by a partial action of a Hopf algebra $H$ on its base field
$\Bbbk$ we mean a linear map $\lambda\in H^{*}$ such that
$\lambda(1_{H})=1_{\Bbbk}$ and satisfies condition (2.1).
Observe that if $\lambda\in H^{*}$ is a partial action of $H$ on $\Bbbk$, then
clearly $\lambda$ is an idempotent element in the convolution algebra $H^{*}$.
The next examples, to the best of the authors’ knowledge, constitute all the
explicit examples of partial actions of Hopf algebras on the base field known
in the literature:
###### Example 2.2.
[2, Example 3.7] Let $G$ be a group. Partial actions of $\Bbbk G$ on $\Bbbk$
are parametrized by subgroups of $G$. Indeed, if $N$ is a subgroup of $G$,
then $\delta_{N}$ is a partial action of $\Bbbk G$ on $\Bbbk$, where
$\delta_{N}:\Bbbk G\longrightarrow\Bbbk$ is given by
$\delta_{N}(g)=\begin{cases}1_{\Bbbk},&\textrm{if }g\in N\\\ 0,&\textrm{if
}g\notin N\end{cases},$ for all $g\in G$. Conversely, let $\lambda$ be a
partial action of $\Bbbk G$ on $\Bbbk$. Then, $\lambda(g)\in\\{0,1_{\Bbbk}\\}$
and the subset $N=\\{g\in G\,|\,\lambda(g)\neq 0\\}=\\{g\in
G\,|\,\lambda(g)=1_{\Bbbk}\\}$ is a subgroup of $G$. Thus,
$\lambda=\delta_{N}$.
Using the correspondence of the previous example, given $N$ a subgroup of $G$,
we denote by $\lambda_{N}$ the partial action of $\Bbbk G$ on $\Bbbk$, given
as $\lambda_{N}=\delta_{N}$. In particular, $\lambda_{G}=\varepsilon_{\Bbbk
G}$ is the global action of $\Bbbk G$ on $\Bbbk$.
###### Example 2.3.
[2, Subsection 3.2] Let $G$ be a finite group. Then, partial actions of
$(\Bbbk G)^{*}$ on $\Bbbk$ are in one-to-one correspondence with
$\\{N\ |\ N\textrm{ is a subgroup of }G\textrm{ and }char(\Bbbk)\nmid|N|\\}.$
Indeed, let $\\{g^{*}\ |\ g\in G\\}$ be the dual basis for the canonical basis
$\\{g\ |\ g\in G\\}$ of $\Bbbk G$. Then, for each subgroup $N$ of $G$ such
that $char(\Bbbk)\nmid|N|$, the linear map $\lambda^{N}:(\Bbbk
G)^{*}\longrightarrow\Bbbk$ given by
$\lambda^{N}(g^{*})=\begin{cases}1_{\Bbbk}/|N|,&\textrm{ if }g\in N\\\
0,&\textrm{ if }g\notin N\end{cases},$ for all $g\in G$, is a partial action
of $(\Bbbk G)^{*}$ on $\Bbbk$. Conversely, let $\lambda$ be a partial action
of $(\Bbbk G)^{*}$ on $\Bbbk$. Then, the subset $N=\\{g\in
G\,|\,\lambda(g^{*})\neq 0\\}$ is a subgroup of $G$. In this case, it is known
that $\lambda(g^{*})=1_{\Bbbk}/|N|$, for all $g\in N$, and so
$\lambda=\lambda^{N}$.
###### Example 2.4.
[12, Proposition 4.10] Let $\mathfrak{g}$ be a Lie algebra and
$U(\mathfrak{g})$ its universal enveloping algebra. Then, every partial
$U(\mathfrak{g})$-module algebra $A$ is in fact a global one. That is, there
is no genuine partial action (_i.e._ , a partial action that is not a global
one) of $U(\mathfrak{g})$ on any algebra $A$. In particular, if $\lambda$ is a
partial action of $U(\mathfrak{g})$ on $\Bbbk$, then $\lambda$ is necessarily
the counit of $U(\mathfrak{g})$.
###### Example 2.5.
[2, Example 3.8] Consider $\mathbb{H}_{4}$ the Sweedler’s Hopf algebra.
Precisely, $\mathbb{H}_{4}$ is the algebra over the field $\Bbbk$,
$char(\Bbbk)\neq 2$, generated by the letters $g$ and $x$ satisfying the
relations $g^{2}=1,x^{2}=0$ and $xg=-gx$. The set $\\{1,g,x,gx\\}$ is a basis
for $\mathbb{H}_{4}$, $g$ is a group-like element and $x$ is a
$(1,g)$-primitive element. Thus, it is straightforward to check that for any
$\alpha\in\Bbbk$ the map $\lambda_{\alpha}:\mathbb{H}_{4}\longrightarrow\Bbbk$
given by $\lambda_{\alpha}(1)=1_{\Bbbk},\lambda_{\alpha}(g)=0$ and
$\lambda_{\alpha}(x)=\lambda_{\alpha}(gx)=\alpha$ is a partial action of
$\mathbb{H}_{4}$ on $\Bbbk$. Moreover, these are all partial actions of
$\mathbb{H}_{4}$ on $\Bbbk$, _i.e._ , if $\lambda$ is a partial action of
$\mathbb{H}_{4}$ on $\Bbbk$, then $\lambda=\varepsilon$ (and so the global
one) or $\lambda=\lambda_{\alpha}$ for some $\alpha\in\Bbbk$.
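As an illustration of the required check, take $h=k=x$: using $\Delta(x)=x\otimes 1+g\otimes x$ and $x^{2}=0$, one computes $\lambda_{\alpha}(x_{1})\lambda_{\alpha}(x_{2}x)=\lambda_{\alpha}(x)\lambda_{\alpha}(x)+\lambda_{\alpha}(g)\lambda_{\alpha}(x^{2})=\alpha^{2}=\lambda_{\alpha}(x)\lambda_{\alpha}(x)$, while for $h=x$ and $k=g$ one gets $\lambda_{\alpha}(x_{1})\lambda_{\alpha}(x_{2}g)=\lambda_{\alpha}(x)\lambda_{\alpha}(g)+\lambda_{\alpha}(g)\lambda_{\alpha}(xg)=\alpha\cdot 0+0\cdot\alpha=0=\lambda_{\alpha}(x)\lambda_{\alpha}(g)$; the remaining cases of condition (2.1) are verified analogously.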
In this work we are interested in deformations of a Hopf algebra $H$ through
partial actions of $H$ on $\Bbbk$. Precisely, given a partial action of $H$ on
$\Bbbk$, we want to study the partial smash product algebra
$\underline{\Bbbk\\#H}$. For this purpose, we start recalling that given any
global $H$-module algebra $A$, via an action $\triangleright:H\otimes
A\longrightarrow A$, we can endow the tensor product $A\otimes H$ with an
algebra structure induced by
$(a\otimes h)(b\otimes g)=a(h_{1}\triangleright b)\otimes h_{2}g,$
for $a,b\in A$ and $h,g\in H$. This structure is denoted by $A\\#H$ and called
_the smash product algebra of $A$ with $H$_. We typically denote the element
$a\otimes h$ by $a\\#h\in A\\#H$. In particular, note that $\Bbbk\\#H=H$.
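As a quick check of the last assertion, take $A=\Bbbk$ with the trivial global action $h\triangleright 1_{\Bbbk}=\varepsilon(h)1_{\Bbbk}$: then $(1_{\Bbbk}\\#h)(1_{\Bbbk}\\#k)=(h_{1}\triangleright 1_{\Bbbk})\\#h_{2}k=\varepsilon(h_{1})1_{\Bbbk}\\#h_{2}k=1_{\Bbbk}\\#hk$, so the multiplication of $\Bbbk\\#H$ coincides with that of $H$.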
The same construction can be performed in the partial case. However, in this
case it turns out that $A\\#H$ need not be a unital algebra, since
$1_{A}\\#1_{H}$ is only a left unit.
###### Definition 2.6.
[12] Let $A$ be a partial $H$-module algebra via $\cdot:H\otimes
A\longrightarrow A$. Then, the vector subspace
$\underline{A\\#H}=(A\\#H)(1_{A}\\#1_{H})$ is a unital algebra with the
multiplication induced by
$\underline{(x\\#h)}\ \underline{(y\\#g)}=\underline{x(h_{1}\cdot
y)\\#h_{2}g},$
for all $x,y\in A$, $h,g\in H$, where
$\underline{(x\\#h)}=(x\\#h)(1_{A}\\#1_{H})=x(h_{1}\cdot 1_{A})\\#h_{2}$ is a
typical element of $\underline{A\\#H}$ and the unit is given by
$\underline{1_{A}\\#1_{H}}$. The unital algebra $\underline{A\\#H}$ is called
the _partial smash product algebra of $A$ with $H$_.
For a partial action of $H$ on $\Bbbk$ given by
$\lambda:H\longrightarrow\Bbbk$, the corresponding partial smash product
algebra $\underline{\Bbbk\\#H}$ is generated by
$\left\\{\underline{1_{\Bbbk}\\#h}=1_{\Bbbk}\\#\lambda(h_{1})h_{2}\,|\,h\in
H\right\\}$. Also notice that
$\underline{1_{\Bbbk}\\#h}\,\,\underline{1_{\Bbbk}\\#k}=\underline{1_{\Bbbk}\\#\lambda(h_{1})h_{2}k}$,
for all $h,k\in H$.
For example, let $\lambda_{N}$ be the partial action of $\Bbbk G$ on $\Bbbk$
given in Example 2.2. In this case, it follows that $\underline{\Bbbk\\#\Bbbk
G}\simeq\Bbbk N$.
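A direct computation justifies this isomorphism: for $g\in G$, $\underline{1_{\Bbbk}\\#g}=1_{\Bbbk}\\#\lambda_{N}(g)g$, which equals $1_{\Bbbk}\\#g$ if $g\in N$ and $0$ otherwise. Hence $\left\\{\underline{1_{\Bbbk}\\#g}\,|\,g\in N\right\\}$ is a basis of $\underline{\Bbbk\\#\Bbbk G}$ and, since $\underline{1_{\Bbbk}\\#g}\,\,\underline{1_{\Bbbk}\\#h}=\underline{1_{\Bbbk}\\#\lambda_{N}(g)gh}=\underline{1_{\Bbbk}\\#gh}$ for all $g,h\in N$, the assignment $g\mapsto\underline{1_{\Bbbk}\\#g}$ extends linearly to an isomorphism $\Bbbk N\simeq\underline{\Bbbk\\#\Bbbk G}$.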
## 3\. Partial actions of a Hopf algebra on its base field
As far as we know, the only partial actions on the base field available in
the literature are those presented in Examples 2.2 \- 2.5. For this reason,
in this section we develop some properties of such partial actions and a
method to compute them, and then we exhibit these partial actions for some
families of Hopf algebras.
### 3.1. Properties of a partial action
This subsection is dedicated to presenting some properties of a partial
action $\lambda:H\longrightarrow\Bbbk$. Most of the results concern the
values that the map $\lambda$ can assume on group-like and skew-primitive
elements.
The following proposition generalizes to any Hopf algebra $H$ an important
fact observed in Example 2.2 just for group algebras.
###### Proposition 3.1.
Let $\lambda:H\longrightarrow\Bbbk$ be a partial action. Then, for any $g\in
G(H)$, $\lambda(g)\in\\{0,1_{\Bbbk}\\}$ and $N=\\{g\in
G(H)\,|\,\lambda(g)=1_{\Bbbk}\\}$ is a subgroup of $G(H)$. In particular, if
$g\in N$, then $\lambda(g^{i})=1_{\Bbbk},$ for each $i\in\mathbb{Z}$.
Moreover, if $g\in G(H)$ has prime order $p$ and $\lambda(g)=0$, then
$\lambda(g^{i})=0$, for each $i\in\mathbb{Z}$ such that $\gcd(i,p)=1$.
###### Proof.
First, for any $g\in G(H)$,
$\lambda(g)\lambda(1_{H})=\lambda(g)\lambda(g1_{H})$ by (2.1). Since
$\lambda(1_{H})=1_{\Bbbk}$, it follows that $\lambda(g)=\lambda(g)^{2}$ and so
$\lambda(g)\in\\{0,1_{\Bbbk}\\}$. Now, clearly $1_{H}\in N$. Let $g,h\in N$,
then we obtain that $\lambda(h)\lambda(h^{-1})=\lambda(h)$ by (2.1), and
consequently $h^{-1}\in N$. Again, using (2.1), we have that
$\lambda(g)\lambda(h)=\lambda(g)\lambda(gh)$, and hence $gh\in N$. Thus, $N$
is a subgroup of $G(H)$. If $g\in N$, then clearly $\lambda(g^{i})=1_{\Bbbk}$,
since $\langle g\rangle\subseteq N$, where $\langle g\rangle$ stands for the
subgroup generated by $g$.
Now suppose that $g^{p}=1$ and $\lambda(g)=0$, where $p\in\mathbb{Z}$ is a
prime number. In this case, $g\notin N$ and $\langle
g\rangle=\\{1,g,g^{2},...,g^{p-1}\\}$. Consequently, we have that
$N\cap\langle g\rangle=\\{1\\}$ and thus $\lambda(g^{i})=0$ whenever
$\gcd(i,p)=1$. ∎
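We remark that the hypothesis that $g$ has prime order cannot be dropped: for $G(H)=\langle g\rangle$ of order $4$ and the partial action $\lambda_{N}$ of Example 2.2 with $N=\\{1,g^{2}\\}$, one has $\lambda_{N}(g)=0$ while $\lambda_{N}(g^{2})=1_{\Bbbk}$.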
The next result shows the behavior of a partial action
$\lambda:H\longrightarrow\Bbbk$ for certain elements.
###### Lemma 3.2.
Let $\lambda:H\longrightarrow\Bbbk$ be a partial action. For $g,t\in G(H)$ and
$x\in P_{g,t}(H)$, it holds that:
* a)
If $\lambda(g)=1$, then $\lambda(gu)=\lambda(u)$, for all $u\in H$;
* b)
If $\lambda(g)=\lambda(t)$, then $\lambda(x)=0$;
* c)
If $\lambda(x)=0$ and $\lambda(t)=1$, then $\lambda(xu)=0$, for all $u\in H$;
* d)
If $\lambda(g)=1$ and $\lambda(t)=0$, then $\lambda(xt^{-1})=-\lambda(x)$;
* e)
If $\lambda(g)=0$ and $\lambda(t)=1$, then $\lambda(xg^{-1})=-\lambda(x)$.
Moreover, if $gt=tg$, then, for all $i\in\mathbb{Z}$, we also have that:
* f)
If $\lambda(g)=1$ and $\lambda(t)=0$, then $\lambda(t^{i}x)=0$ or
$\lambda(t^{i})+\lambda(t^{i+1})=1$;
* g)
If $\lambda(g)=0$ and $\lambda(t)=1$, then $\lambda(g^{i}x)=0$ or
$\lambda(g^{i})+\lambda(g^{i+1})=1$;
* h)
If $\lambda(g)=\lambda(t)=0$, then $\lambda(g^{-1}t^{-1}x)=0$.
###### Proof.
It follows directly by applying equality (2.1) to suitable choices of
elements $h,k\in H$, together with Proposition 3.1. ∎
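For the reader's convenience, we sketch items a) and b), recalling that $x\in P_{g,t}(H)$ means $\Delta(x)=x\otimes g+t\otimes x$. For a), applying (2.1) to the pair $(g,u)$ gives $\lambda(g)\lambda(u)=\lambda(g)\lambda(gu)$, and $\lambda(g)=1$ yields $\lambda(gu)=\lambda(u)$. For b), applying (2.1) to the pair $(x,1_{H})$ gives $\lambda(x)=\lambda(x)\lambda(g)+\lambda(t)\lambda(x)$; if $\lambda(g)=\lambda(t)$, then $\lambda(x)=2\lambda(g)\lambda(x)$ and, since $\lambda(g)\in\\{0,1_{\Bbbk}\\}$ by Proposition 3.1, in both cases $\lambda(x)=0$. The remaining items are analogous.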
### 3.2. About the computations of a partial action of $H$ on $\Bbbk$
Let $H$ be a Hopf algebra. In this subsection a method to compute partial
actions of $H$ on its base field is given.
Recall by Proposition 2.1 that $\lambda\in H^{*}$ is a partial action of $H$
on $\Bbbk$ if and only if the map $\lambda$ satisfies
$\lambda(1_{H})=1_{\Bbbk}$ and
$\lambda(u)\lambda(v)=\lambda(u_{1})\lambda(u_{2}v)$, for all $u,v\in H$.
###### Definition 3.3.
Let $\mathcal{B}$ be a basis of $H$ and consider the linear map
$\Lambda:H\longrightarrow\Bbbk[X_{b}\ |\ b\in\mathcal{B}]$,
$\Lambda(b)=X_{b}$. We call the following system of equations the _partial
system associated with $\mathcal{B}$_:
$\displaystyle\left\\{\begin{array}[]{ll}\Lambda(1_{H})=1_{\Bbbk}\\\
\Lambda(a)\Lambda(b)=\Lambda(a_{1})\Lambda(a_{2}b)\end{array}\right.\qquad
a,b\in\mathcal{B}.$ (3.3)
Hence, the map $\lambda:H\longrightarrow\Bbbk$ is a partial action of $H$ on
$\Bbbk$ if and only if $(\lambda(b))_{b\in\mathcal{B}}$ is a solution of
(3.3). In particular, the partial system associated with $\mathcal{B}$ always
has at least one solution: the global action of $H$ on $\Bbbk$, namely
$(\varepsilon(b))_{b\in\mathcal{B}}$.
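To illustrate the partial system in the simplest case, take $H=\Bbbk C_{2}$, where $C_{2}=\langle g\rangle$, with basis $\mathcal{B}=\\{1,g\\}$. The equations of (3.3) with $a=1$ hold trivially, while $a=g$ gives $X_{g}X_{b}=X_{g}X_{gb}$, for $b\in\mathcal{B}$: for $b=1$ this is $X_{g}=X_{g}^{2}$ (using $g^{2}=1$), and $b=g$ yields the same condition. Hence the solutions are $(X_{1},X_{g})=(1_{\Bbbk},1_{\Bbbk})$ and $(1_{\Bbbk},0)$, recovering the partial actions $\delta_{C_{2}}=\varepsilon_{\Bbbk C_{2}}$ and $\delta_{\\{1\\}}$ of Example 2.2.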
Now, extend $G(H)$ to a basis $\mathcal{B}=G(H)\sqcup\mathcal{B}^{\prime}$ of
$H$, where $\sqcup$ denotes the disjoint union. Then, (3.3) is rewritten as
$\displaystyle\left\\{\begin{array}[]{ll}\Lambda(1_{H})=1_{\Bbbk}\\\
\Lambda(g)\Lambda(v)=\Lambda(g)\Lambda(gv)\\\
\Lambda(u)\Lambda(v)=\Lambda({u}_{1})\Lambda({u}_{2}v)\end{array}\right.\qquad
g\in G(H),u\in\mathcal{B}^{\prime},v\in\mathcal{B}.$
Let $\lambda:H\longrightarrow\Bbbk$ be a partial action. Proposition 3.1
ensures that $\lambda(h)\in\\{0,1_{\Bbbk}\\}$, for all $h\in G(H)$, and there
exists a subgroup $N$ of $G(H)$ such that $\lambda|_{\Bbbk G(H)}=\lambda_{N}$
(see Example 2.2). Then, by Lemma 3.2 (a), we have that
$\lambda(v)=\lambda(gv)$ for all $g\in N,v\in\mathcal{B}$. Therefore,
$(\lambda(b))_{b\in\mathcal{B}}$ is a solution of the following system:
$\displaystyle\left\\{\begin{array}[]{ll}\Lambda(g)=1_{\Bbbk}\\\
\Lambda(h)=0\\\ \Lambda(u)=\Lambda(gu)\\\
\Lambda(u)\Lambda(v)=\Lambda({u}_{1})\Lambda({u}_{2}v)\end{array}\right.\qquad
g\in N,h\in G(H)\setminus N,u\in\mathcal{B}^{\prime},v\in\mathcal{B}.$ (3.8)
On the other hand, let $N$ be a subgroup of $G(H)$ and consider a system of
equations as in (3.8). If there exists a solution
$(\alpha_{b})_{b\in\mathcal{B}}$ of the latter system, then it is clearly
also a solution of the partial system associated with $\mathcal{B}$.
Fix a subgroup $N$ of $G(H)$. For $x,y\in\mathcal{B}^{\prime}$, define the
relation
$x\sim_{N}y\quad\textrm{ if and only if}\quad\exists\,\,g\in N\textrm{ such
that }y=gx.$
Notice that $\sim_{N}$ is an equivalence relation on the set $\mathcal{B}^{\prime}$.
Given $x\in\mathcal{B}^{\prime}$, $[x]=\\{y\in\mathcal{B}^{\prime}\ |\
y\sim_{N}x\\}$ denotes the _equivalence class of $x$_. In particular, for any
$x\in\mathcal{B}^{\prime}$, $[x]=\\{x\\}\sqcup[x]^{\perp}$, where
$[x]^{\perp}=\\{y\in\mathcal{B}^{\prime}\ |\ y\sim_{N}x,y\neq x\\}$.
Write $\widetilde{N}$ for a _transversal set of the relation $\sim_{N}$ on
$\mathcal{B}^{\prime}$_, that is, $\widetilde{N}$ is a subset of
$\mathcal{B}^{\prime}$ consisting of exactly one representative from each
equivalence class. Then, we have a partition of $\mathcal{B}^{\prime}$ given
by $\mathcal{B}^{\prime}=\widetilde{N}\sqcup N^{\perp}$, where
$N^{\perp}=\mathcal{B}^{\prime}\setminus\widetilde{N}$. Observe that
$N^{\perp}=\cup_{x\in\widetilde{N}}[x]^{\perp}$.
Consider now the following system of equations:
$\displaystyle\left\\{\begin{array}[]{ll}\Lambda(g)=1_{\Bbbk}\\\
\Lambda(h)=0\\\ \Lambda(u)=\Lambda(gu)\end{array}\right.\qquad g\in N,h\in
G(H)\setminus N,u\in\mathcal{B}^{\prime}.$ (3.12)
Then, $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of (3.12) if and only if
$\alpha_{g}=1_{\Bbbk}$, $\alpha_{h}=0$, $\alpha_{y}=\alpha_{x}$, and
$\alpha_{x}\in\Bbbk$ is a free parameter, for all $g\in N$, $h\in
G(H)\setminus N$, $x\in\widetilde{N}$ and $y\in[x]$. In this case, we say that
$(\alpha_{b})_{b\in\mathcal{B}}$ is an _initial $N$-condition for the partial
system associated with $\mathcal{B}$_.
Finally, we define the following sets: $\mathcal{B}_{t,s}=\left\\{x\in
P_{t,s}(H)\ |\ t\in N,s\in G(H)\setminus N\right\\}$ and
$\widetilde{\mathcal{B}}=\widetilde{N}\setminus\left(\cup_{t,s\in
G(H)}\mathcal{B}_{t,s}\right).$ With these notations, the system
$\displaystyle\left\\{\begin{array}[]{ll}\Lambda(u)\Lambda(v)=\Lambda({u}_{1})\Lambda({u}_{2}v)\end{array}\right.\qquad
u\in\widetilde{\mathcal{B}},v\in\mathcal{B},$ (3.14)
is called the _$N$-reduced partial system associated with $\mathcal{B}$_.
###### Theorem 3.4.
Let $\mathcal{B}=G(H)\sqcup\mathcal{B}^{\prime}$ be a basis of $H$ and
$\lambda:H\longrightarrow\Bbbk$ a linear map. Then, $\lambda$ is a partial
action of $H$ on $\Bbbk$ if and only if $N=\\{g\in G(H)\ |\
\lambda(g)=1_{\Bbbk}\\}$ is a subgroup of $G(H)$ and
$(\lambda(b))_{b\in\mathcal{B}}$ is both: an initial $N$-condition for the
partial system associated with $\mathcal{B}$ and also a solution of the
$N$-reduced partial system associated with $\mathcal{B}$.
###### Proof.
If $\lambda$ is a partial action of $H$ on $\Bbbk$, then we have that $N$ is a
subgroup of $G(H)$ by Proposition 3.1, and $(\lambda(b))_{b\in\mathcal{B}}$ is
a solution of the system (3.8). Then, clearly $(\lambda(b))_{b\in\mathcal{B}}$
is both: an initial $N$-condition for the partial system associated with
$\mathcal{B}$ and a solution of the $N$-reduced partial system associated with
$\mathcal{B}$.
For the converse, it is sufficient to check that
$(\lambda(b))_{b\in\mathcal{B}}$ satisfies each equation
$\Lambda(u)\Lambda(v)=\Lambda({u}_{1})\Lambda({u}_{2}v)$, for
$u\in\mathcal{B}^{\prime}\setminus\widetilde{\mathcal{B}},v\in\mathcal{B}.$
Once this is verified, $(\lambda(b))_{b\in\mathcal{B}}$ is also a solution of
the system given in (3.8), and therefore $\lambda$ is a partial action.
First, suppose $u\in N^{\perp}.$ Note that $u\in[x]^{\perp}$ for some
$x\in\widetilde{N}$, and so there exists $g\in N$ such that $u=gx$. Then,
$\Lambda(u)\Lambda(v)=\Lambda({u}_{1})\Lambda({u}_{2}v)$ means
$\Lambda(gx)\Lambda(v)=\Lambda(gx_{1})\Lambda(gx_{2}v)$. Since
$(\lambda(b))_{b\in\mathcal{B}}$ is an initial $N$-condition for the partial
system associated with $\mathcal{B}$, we have, extending linearly,
$\lambda(gw)=\lambda(w)$ for any $w\in H$, and the equality
$\lambda(x)\lambda(v)=\lambda(x_{1})\lambda(x_{2}v)$ holds because
$(\lambda(b))_{b\in\mathcal{B}}$ is solution of (3.14). Hence,
$\lambda(gx)\lambda(v)=\lambda(x)\lambda(v)=\lambda(x_{1})\lambda(x_{2}v)=\lambda(gx_{1})\lambda(gx_{2}v).$
Finally, if $u\in\mathcal{B}_{t,s}\cap\widetilde{N}$, then the equation
$\Lambda(u)\Lambda(v)=\Lambda(u_{1})\Lambda(u_{2}v)$ is the same as
$\Lambda(u)\Lambda(v)=\Lambda(u)\Lambda(tv)+\Lambda(s)\Lambda(uv)$, for any
$v\in\mathcal{B}$. Since $(\lambda(b))_{b\in\mathcal{B}}$ is a solution of
(3.12), we have $\lambda(v)=\lambda(tv)$ and $\lambda(s)=0$. Thus, the equality
$\lambda(u)\lambda(v)=\lambda(u)\lambda(tv)+\lambda(s)\lambda(uv)$ holds. ∎
Similar to Lemma 3.2, we have the following result.
###### Lemma 3.5.
Let $\mathcal{B}=G(H)\sqcup\mathcal{B}^{\prime}$ be a basis of $H$, $g,t\in
G(H)$, $x\in P_{g,t}(H)\cap\mathcal{B}^{\prime}$ and
$(\alpha_{b})_{b\in\mathcal{B}}$ an initial $N$-condition for the partial
system associated with $\mathcal{B}$. If $(\alpha_{b})_{b\in\mathcal{B}}$ is
also a solution of the $N$-reduced partial system associated with
$\mathcal{B}$, then:
* a)
If $\alpha_{g}=\alpha_{t}$, then $\alpha_{x}=0$;
* b)
If $\alpha_{t}=1_{\Bbbk}$ and $\alpha_{x}=0$, then $\alpha_{xu}=0$ for all
$u\in H$ such that $xu\in\mathcal{B}^{\prime}$;
* c)
If $\alpha_{g}=1_{\Bbbk}$, $\alpha_{t}=0$ and
$xt^{-1}\in\mathcal{B}^{\prime}$, then $\alpha_{xt^{-1}}=-\alpha_{x}$;
* d)
If $\alpha_{g}=0$, $\alpha_{t}=1_{\Bbbk}$ and
$xg^{-1}\in\mathcal{B}^{\prime}$, then $\alpha_{xg^{-1}}=-\alpha_{x}$.
###### Proof.
Straightforward from Theorem 3.4 and Lemma 3.2. ∎
Given a partial action $\lambda:H\longrightarrow\Bbbk$, we say that $\lambda$
has _initial condition $N$_ if $N=\\{g\in G(H)\ |\ \lambda(g)=1_{\Bbbk}\\}$.
Now, based on Theorem 3.4, we present our method to calculate partial actions
of $H$ on $\Bbbk$.
The Method: Let $H$ be a given Hopf algebra. To obtain a partial action of $H$
on $\Bbbk$, we perform the following steps:
* Step 1.
Extend $G(H)$ to a basis $\mathcal{B}$ of $H$;
* Step 2.
Consider $N$ a subgroup of $G(H)$;
* Step 3.
Consider the solutions $(\alpha_{b})_{b\in\mathcal{B}}$ of the system (3.12);
* Step 4.
Investigate the $N$-reduced partial system associated with $\mathcal{B}$;
* Step 5.
Conclusion: If $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of the
$N$-reduced partial system associated with $\mathcal{B}$, then the linear map
$\lambda:H\longrightarrow\Bbbk$ given by $\lambda(b)=\alpha_{b}$, for all
$b\in\mathcal{B}$, is a partial action of $H$ on $\Bbbk$; otherwise, there is
no partial action of $H$ on $\Bbbk$ with initial condition $N$.
Remarks. 1) In Step 4, Lemma 3.5 is useful for determining whether
$(\alpha_{b})_{b\in\mathcal{B}}$ can be a solution of (3.14), since it
imposes necessary conditions on the values $\alpha_{u}\in\Bbbk$ for some
elements $u\in\widetilde{N}$.
2) Since every partial action of $H$ on $\Bbbk$ has an initial condition, if
we apply the method to all subgroups of $G(H)$, then we obtain all partial
actions of $H$ on $\Bbbk$. Moreover, the method gives an explicit
characterization of each partial action $\lambda:H\longrightarrow\Bbbk$.
###### Example 3.6.
To illustrate the method, we reobtain the partial actions of Sweedler’s
Hopf algebra $\mathbb{H}_{4}$ on its base field, previously exhibited in
Example 2.5. Recall that $G(\mathbb{H}_{4})=\\{1,g\\}$ and
$\mathcal{B}=\\{1,g,x,gx\\}$ is a basis of $\mathbb{H}_{4}$. Thus, we have two
subgroups of $G(\mathbb{H}_{4})$, namely $N=\\{1,g\\}$ and $N=\\{1\\}$.
1. (1)
$N=\\{1,g\\}$. In Step 3 we assume $\alpha_{1}=\alpha_{g}=1_{\Bbbk}$ and
$\alpha_{gx}=\alpha_{x}\in\Bbbk$. Step 4 now requires studying (3.14). But, by
Lemma 3.5 (a), if $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of (3.14),
then necessarily $\alpha_{x}=0$. In this case, the only possibility for the
associated partial action $\lambda:\mathbb{H}_{4}\longrightarrow\Bbbk$,
$\lambda(b)=\alpha_{b}$, is the counit $\varepsilon$. Thus, in conclusion,
$\lambda$ is a partial action of $\mathbb{H}_{4}$ on $\Bbbk$ with initial
condition $\\{1,g\\}$ if and only if $\lambda=\varepsilon$;
2. (2)
$N=\\{1\\}$. In this case, by Step 3 we suppose $\alpha_{1}=1_{\Bbbk}$,
$\alpha_{g}=0$ and $\alpha_{x},\alpha_{gx}\in\Bbbk$. Now, we move to Step 4.
Note that $\widetilde{N}=\\{x,gx\\}$. As $x\in\mathcal{B}_{1,g}$, it follows
that $\widetilde{\mathcal{B}}=\\{gx\\}$ and the system (3.14) has only $4$
equations:
$\displaystyle\left\\{\Lambda(gx)\Lambda(v)=\Lambda(gx)\Lambda(gv)+\Lambda(1)\Lambda(gxv)\right.\qquad
v\in\mathcal{B}.$
Replacing $(\alpha_{b})_{b\in\mathcal{B}}$ in the system above, we obtain
$\left\\{\begin{array}[]{ll}\alpha_{gx}=\alpha_{gx}\\\
0=\alpha_{gx}-\alpha_{x}\\\ \alpha_{gx}\alpha_{x}=\alpha_{gx}\alpha_{gx}\\\
\alpha_{gx}\alpha_{gx}=\alpha_{gx}\alpha_{x}.\end{array}\right.$
Thus, $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of the $N$-reduced
partial system associated with $\mathcal{B}$ if and only if
$\alpha_{x}=\alpha_{gx}$.
Consequently, for any $\alpha\in\Bbbk$, the linear map
$\lambda_{\alpha}:\mathbb{H}_{4}\longrightarrow\Bbbk$ given by
$\lambda_{\alpha}(1)=1_{\Bbbk}$, $\lambda_{\alpha}(g)=0$ and
$\lambda_{\alpha}(x)=\lambda_{\alpha}(gx)=\alpha$, is a partial action of
$\mathbb{H}_{4}$ on $\Bbbk$.
Therefore, the partial actions of $\mathbb{H}_{4}$ on $\Bbbk$ are exactly
$\varepsilon$ and $\lambda_{\alpha}$, $\alpha\in\Bbbk$.
We point out the benefit of our method: since a partial action of $H$ on
$\Bbbk$ corresponds to a linear map $\lambda\in H^{*}$ satisfying
$\lambda(1_{H})=1_{\Bbbk}$ and (2.1), to calculate such a map explicitly
amounts to solving the system (3.3), for some basis $\mathcal{B}$ of $H$.
With our method, however, we deal only with the potentially much smaller
system (3.14). For Sweedler’s Hopf algebra, for instance, the system (3.3)
has $4^{2}=16$ equations, while in the case $N=\\{1\\}$ we deal only with the
$4$ equations of (3.14).
Since our developments deal with group-like and skew-primitive elements, it is
expected that the method will be effective mainly for Hopf algebras generated
by elements of these types, as illustrated by the example above.
Using the method described, we were able to determine all partial actions on
the base field of some particular families of Hopf algebras, as can be seen in
the following subsection.
### 3.3. Partial actions of the pointed non-semisimple Hopf algebras $H$ with
$dim_{\Bbbk}(H)=8,16$
In this subsection, for each pointed non-semisimple Hopf algebra $H$ with
$dim_{\Bbbk}(H)=8,16$, we exhibit all partial actions of $H$ on its base field
$\Bbbk$. Throughout this subsection $\Bbbk$ is assumed to be an algebraically
closed field of characteristic zero.
The 8-dimensional Hopf algebras were classified independently by [21] and
[19]. There exist five (up to isomorphism) 8-dimensional pointed non-
semisimple Hopf algebras. We use the notations and presentations for such Hopf
algebras as given in [10]:
$\mathcal{A}_{2}=\langle
g,\,x,\,y\,|\,g^{2}=1,\,\,x^{2}=y^{2}=0,\,\,xg=-gx,\,\,yg=-gy,\,\,yx=-xy\rangle,$
where $g\in G(\mathcal{A}_{2})$ and $x,y\in P_{1,g}(\mathcal{A}_{2})$;
$\mathcal{A}_{4}^{\prime}=\langle
g,\,x\,|\,g^{4}=1,\,\,x^{2}=0,\,\,xg=-gx\rangle,$ where $g\in
G(\mathcal{A}_{4}^{\prime})$ and $x\in P_{1,g}(\mathcal{A}_{4}^{\prime})$;
$\mathcal{A}_{4}^{\prime\prime}=\langle
g,\,x\,|\,g^{4}=1,\,x^{2}=g^{2}-1,\,xg=-gx\rangle,$ where $g\in
G(\mathcal{A}_{4}^{\prime\prime})$ and $x\in
P_{1,g}(\mathcal{A}_{4}^{\prime\prime})$;
$\mathcal{A}_{4,q}^{\prime\prime\prime}=\langle
g,\,x\,|\,g^{4}=1,\,\,x^{2}=0,\,\,gx=qxg\rangle,$ where $q$ is a primitive
root of unity of order $4$, $g\in G(\mathcal{A}_{4,q}^{\prime\prime\prime})$
and $x\in P_{1,g^{2}}(\mathcal{A}_{4,q}^{\prime\prime\prime})$;
$\mathcal{A}_{2,2}=\langle
g,\,h,\,x\,|\,g^{2}=h^{2}=1,\,x^{2}=0,\,xg=-gx,\,xh=-hx,\,hg=gh\rangle,$ where
$g,h\in G(\mathcal{A}_{2,2})$ and $x\in P_{1,g}(\mathcal{A}_{2,2})$.
We observe that the subscript in each Hopf algebra described above indicates
its group of group-like elements.
###### Theorem 3.7.
For each pointed non-semisimple Hopf algebra of dimension 8, all partial
actions on its base field are computed. They are presented in Tables 1 \- 5,
where $\alpha,\beta,\gamma\in\Bbbk$ and $\gamma^{2}=-1$.
Table 1. Partial actions of $\mathcal{A}_{2}$ on $\Bbbk$ | $1$ | $g$ | $x$ | $y$ | $gx$ | $gy$ | $xy$ | $gxy$
---|---|---|---|---|---|---|---|---
$\lambda_{G(\mathcal{A}_{2})}=\varepsilon_{\mathcal{A}_{2}}$ | $1$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $1$ | $0$ | $\alpha$ | $\beta$ | $\alpha$ | $\beta$ | $0$ | $0$
Table 2. Partial actions of $\mathcal{A}_{4}^{\prime}$ on $\Bbbk$ | $1$ | $g$ | $g^{2}$ | $g^{3}$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$
---|---|---|---|---|---|---|---|---
$\lambda_{G(\mathcal{A}_{4}^{\prime})}=\varepsilon_{\mathcal{A}_{4}^{\prime}}$ | $1$ | $1$ | $1$ | $1$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $1$ | $0$ | $1$ | $0$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1\\}}$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 3. Partial actions of $\mathcal{A}_{4}^{\prime\prime}$ on $\Bbbk$ $(\gamma^{2}=-1)$ | $1$ | $g$ | $g^{2}$ | $g^{3}$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$
---|---|---|---|---|---|---|---|---
$\lambda_{G(\mathcal{A}_{4}^{\prime\prime})}=\varepsilon_{\mathcal{A}_{4}^{\prime\prime}}$ | $1$ | $1$ | $1$ | $1$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $1$ | $0$ | $1$ | $0$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1\\}}$ | $1$ | $0$ | $0$ | $0$ | $\gamma$ | $0$ | $0$ | $\gamma$
Table 4. Partial actions of $\mathcal{A}_{4,q}^{\prime\prime\prime}$ on $\Bbbk$ | $1$ | $g$ | $g^{2}$ | $g^{3}$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$
---|---|---|---|---|---|---|---|---
$\lambda_{G(\mathcal{A}_{4,q}^{\prime\prime\prime})}=\varepsilon_{\mathcal{A}_{4,q}^{\prime\prime\prime}}$ | $1$ | $1$ | $1$ | $1$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $1$ | $0$ | $0$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $1$ | $0$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 5. Partial actions of $\mathcal{A}_{2,2}$ on $\Bbbk$ | $1$ | $g$ | $h$ | $gh$ | $x$ | $gx$ | $hx$ | $ghx$
---|---|---|---|---|---|---|---|---
$\lambda_{G(\mathcal{A}_{2,2})}=\varepsilon_{\mathcal{A}_{2,2}}$ | $1$ | $1$ | $1$ | $1$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $1$ | $0$ | $0$ | $0$ | $\alpha$ | $\alpha$ | $0$ | $0$
$\lambda_{\\{1,gh\\}}$ | $1$ | $0$ | $0$ | $1$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1,h\\}}$ | $1$ | $0$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g\\}}$ | $1$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
###### Proof.
To obtain the partial actions presented in Tables 1 \- 5, one performs
routine computations applying the method presented above. We carry out these
computations only for the partial actions of $\mathcal{A}_{4}^{\prime\prime}$
on $\Bbbk$, presented in Table 3.
Step 1: $G(\mathcal{A}_{4}^{\prime\prime})=\\{1,g,g^{2},g^{3}\\}$ and
$\mathcal{B}=\\{1,g,g^{2},g^{3},x,gx,g^{2}x,g^{3}x\\}$.
For Step 2, we note that $G(\mathcal{A}_{4}^{\prime\prime})$ has 3 subgroups:
$\\{1,g,g^{2},g^{3}\\}$, $\\{1,g^{2}\\}$ and $\\{1\\}$.
1. (1)
$N=\\{1,g,g^{2},g^{3}\\}$. Here, if we proceed as in Example 3.6 (1), we
obtain that $\lambda$ is a partial action of $\mathcal{A}_{4}^{\prime\prime}$
on $\Bbbk$ with initial condition $\\{1,g,g^{2},g^{3}\\}$ if and only if
$\lambda=\varepsilon_{\mathcal{A}_{4}^{\prime\prime}}$;
2. (2)
$N=\\{1,g^{2}\\}$. By Step 3, we consider $\alpha_{1}=\alpha_{g^{2}}=1$,
$\alpha_{g}=\alpha_{g^{3}}=0$, $\alpha_{g^{2}x}=\alpha_{x}$ and
$\alpha_{g^{3}x}=\alpha_{gx}$, where $\alpha_{x},\alpha_{gx}\in\Bbbk$. For
Step 4, since $x\in\mathcal{B}_{1,g}$ we have
$\widetilde{\mathcal{B}}=\\{gx\\}$. Then, the system (3.14) has 8 equations
$\displaystyle\left\\{\Lambda(gx)\Lambda(v)=\Lambda(gx)\Lambda(gv)+\Lambda(g^{2})\Lambda(gxv)\right.\qquad
v\in\mathcal{B}.$
If we replace $(\alpha_{b})_{b\in\mathcal{B}}$ in the system above, we get
$\left\\{\begin{array}[]{ll}\alpha_{gx}=\alpha_{gx}\\\
0=\alpha_{gx}-\alpha_{x}\\\ \alpha_{gx}=\alpha_{gx}\\\
0=\alpha_{gx}-\alpha_{x}\\\ \alpha_{gx}\alpha_{x}=\alpha_{gx}\alpha_{gx}\\\
\alpha_{gx}\alpha_{gx}=\alpha_{gx}\alpha_{x}\\\
\alpha_{gx}\alpha_{x}=\alpha_{gx}\alpha_{gx}\\\
\alpha_{gx}\alpha_{gx}=\alpha_{gx}\alpha_{x}.\end{array}\right.$
Clearly, $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of the system above if
and only if $\alpha_{x}=\alpha_{gx}$.
Consequently, for any $\alpha\in\Bbbk$, the linear map
$\lambda:\mathcal{A}_{4}^{\prime\prime}\longrightarrow\Bbbk$ given by
$\lambda(1)=\lambda(g^{2})=1$, $\lambda(g)=\lambda(g^{3})=0$ and
$\lambda(x)=\lambda(gx)=\lambda(g^{2}x)=\lambda(g^{3}x)=\alpha$, is a partial
action of $\mathcal{A}_{4}^{\prime\prime}$ on $\Bbbk$.
3. (3)
$N=\\{1\\}$. By Step 3, we consider $\alpha_{1}=1$,
$\alpha_{g}=\alpha_{g^{2}}=\alpha_{g^{3}}=0$ and
$\alpha_{x},\alpha_{gx},\alpha_{g^{2}x},\alpha_{g^{3}x}\in\Bbbk$.
Now we move to Step 4. As $x\in\mathcal{B}_{1,g}$,
$\widetilde{\mathcal{B}}=\\{gx,g^{2}x,g^{3}x\\}$. Then, the system (3.14) has
3 blocks of 8 equations each:
$\displaystyle\left\\{\begin{array}[]{ll}\Lambda(gx)\Lambda(v)=\Lambda(gx)\Lambda(gv)+\Lambda(g^{2})\Lambda(gxv)\\\
\Lambda(g^{2}x)\Lambda(v)=\Lambda(g^{2}x)\Lambda(g^{2}v)+\Lambda(g^{3})\Lambda(g^{2}xv)\\\
\Lambda(g^{3}x)\Lambda(v)=\Lambda(g^{3}x)\Lambda(g^{3}v)+\Lambda(1)\Lambda(g^{3}xv)\end{array}\right.\qquad
v\in\mathcal{B}.$
Suppose that $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of the system
above. Then, by Lemma 3.5 (a), we obtain $\alpha_{gx}=\alpha_{g^{2}x}=0$.
Since moreover $\alpha_{g^{2}}=\alpha_{g^{3}}=0$, all equations of the first
and second blocks hold trivially, and we only need to deal with the third.
That is,
$\left\\{\begin{array}[]{ll}\alpha_{g^{3}x}=\alpha_{g^{3}x}\\\
0=\alpha_{g^{3}x}-\alpha_{x}\\\ 0=\alpha_{gx}\\\ 0=-\alpha_{g^{2}x}\\\
\alpha_{g^{3}x}\alpha_{x}=\alpha_{g^{3}x}\alpha_{g^{3}x}\\\
0=\alpha_{g^{3}x}\alpha_{x}+1\\\ 0=\alpha_{g^{3}x}\alpha_{gx}\\\
\alpha_{g^{3}x}\alpha_{g^{3}x}=-1.\end{array}\right.$
Hence, $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of the system above if
and only if $\alpha_{x}=\alpha_{g^{3}x}$ and $\alpha_{x}^{2}=-1$.
Therefore, all partial actions of $\mathcal{A}_{4}^{\prime\prime}$ on $\Bbbk$
are given in Table 3. ∎
The 16-dimensional pointed non-semisimple Hopf algebras were classified by
[11], and there exist 29 isomorphism classes, all of them listed below. We
will adopt the notation and presentation for these Hopf algebras given by
generators and relations in [1], and we write $G(H)$, $P_{g,h}(H)$ and
$\varepsilon_{H}$ simply as $G$, $P_{g,h}$ and $\varepsilon$, respectively,
since the Hopf algebra $H$ is clear in each case:
$\mathbf{H_{1}}=\langle
g,\,x,\,y,\,z\,\,|\,\,g^{2}=1,\,\,x^{2}=y^{2}=z^{2}=0,\,\,ab=-ba\rangle,$
where $a,b\in\\{g,x,y,z\\},a\neq b$, $g\in G$ and $x,y,z\in P_{1,g}$;
$\mathbf{H_{2}}=\langle
g,\,x,\,y\,\,|\,\,g^{4}=1,\,\,x^{2}=y^{2}=0,\,\,xg=qgx,\,\,yg=qgy,\,\,yx=-xy\rangle,$
where $q$ is a primitive root of unity of order $4$, $g\in G$ and $x,y\in
P_{1,g^{2}}$;
$\mathbf{H_{3}}=\langle
g,\,x,\,y\,\,|\,\,g^{4}=1,\,\,x^{2}=y^{2}=0,\,\,xg=qgx,\,\,yg=-qgy,\,\,yx=-xy\rangle,$
where $q$ is a primitive root of unity of order $4$, $g\in G$ and $x,y\in
P_{1,g^{2}}$;
$\mathbf{H_{4}}=\langle
g,\,x,\,y\,\,|\,\,g^{4}=1,\,\,x^{2}=y^{2}=0,\,\,xg=-gx,yg=-gy,\,\,yx=-xy\rangle,$
where $g\in G$ and $x,y\in P_{1,g}$;
$\mathbf{H_{5}}=\langle
g,x,y|g^{4}=1,x^{2}=0,y^{2}=g^{2}-1,xg=-gx,yg=-gy,yx=-xy\rangle,$ where $g\in
G$ and $x,y\in P_{1,g}$;
$\mathbf{H_{6}}=\langle
g,x,y\,|\,g^{4}=1,x^{2}=0,y^{2}=g^{2}-1,xg=-gx,yg=-gy,yx=-xy+g^{2}-1\rangle,$
where $g\in G$ and $x,y\in P_{1,g}$;
$\mathbf{H_{7}}=\langle
g,x,y\,\,|\,\,g^{4}=1,\,x^{2}=0,\,y^{2}=0,\,xg=-gx,\,yg=-gy,\,yx=-xy\rangle,$
where $g\in G$, $x\in P_{1,g}$ and $y\in P_{1,g^{3}}$;
$\mathbf{H_{8}}=\langle
g,x,y|g^{4}=1,x^{2}=0,y^{2}=g^{2}-1,xg=-gx,yg=-gy,yx=-xy\rangle,$ where $g\in
G$, $x\in P_{1,g}$ and $y\in P_{1,g^{3}}$;
$\mathbf{H_{9}}=\langle
g,x,y\,|\,g^{4}=1,\,x^{2}=y^{2}=g^{2}-1,\,xg=-gx,\,yg=-gy,\,yx=-xy\rangle,$
where $g\in G$, $x\in P_{1,g}$ and $y\in P_{1,g^{3}}$;
$\mathbf{H_{10}}=\langle
g,\;\;x\;\;|\;\;g^{4}=1,\;\;x^{4}=0,\;\;xg=qgx\rangle,$ where $q$ is a
primitive root of unity of order $4$, $g\in G$ and $x\in P_{1,g}$;
$\mathbf{H_{11}}=\langle
g,\;\;x\;\;|\;\;g^{4}=1,\;\;x^{4}=0,\;\;xg=-qgx\rangle,$ where $q$ is a
primitive root of unity of order $4$, $g\in G$ and $x\in P_{1,g}$;
$\mathbf{H_{12}}=\mathcal{A}_{2}\otimes\Bbbk C_{2}$, where we will consider
$\mathbf{H_{12}}$ as
$\langle
g,x,y\,|\,g^{2}=1,x^{2}=y^{2}=0,xg=-gx,yg=-gy,yx=-xy\rangle\otimes\langle
h\,|\,h^{2}=1\rangle;$
$\mathbf{H_{13}}=\langle
g,\,h,\,x,\,y\,|\,g^{2}=h^{2}=(gh)^{2}=1,\,x^{2}=y^{2}=0,\,xg=-gx,yg=-gy,\,xh=hx,\,yh=-hy,\,yx=-xy\rangle,$
where $g,h\in G$ and $x,y\in P_{1,g}$;
$\mathbf{H_{14}}=\mathbb{H}_{4}\otimes\mathbb{H}_{4}$, where $\mathbb{H}_{4}$
is the Sweedler’s Hopf algebra. Precisely, consider
$\mathbf{H_{14}}=\langle
g,x\,\,|\,\,g^{2}=1,x^{2}=0,xg=-gx\rangle\otimes\langle
h,y\,\,|\,\,h^{2}=1,y^{2}=0,yh=-hy\rangle;$
$\mathbf{H_{15}}=\langle
g,h,x,y|g^{2}=h^{2}=(gh)^{2}=1,x^{2}=y^{2}=0,yx=-xy,ab=-ba\rangle,$ where
$a\in\\{x,y\\},b\in\\{g,h\\},$ $g,h\in G$, $x\in P_{1,g}$ and $y\in P_{1,h}$;
$\mathbf{H_{16}}=\langle
g,h,x,y|g^{2}=h^{2}=(gh)^{2}=1,x^{2}=y^{2}=0,yx=-xy+gh-1,ab=-ba\rangle,$ where
$a\in\\{x,y\\},b\in\\{g,h\\},$ $g,h\in G$, $x\in P_{1,g}$ and $y\in P_{1,h}$;
$\mathbf{H_{17}}=\mathbb{H}_{4}\otimes\Bbbk(C_{2}\times C_{2})$, where we will
consider $\mathbf{H_{17}}$ as
$\langle
g,x\,\,|\,\,g^{2}=1,x^{2}=0,xg=-gx\rangle\otimes\langle\,a,b\,\,|\,\,a^{2}=b^{2}=(ab)^{2}=1\,\rangle;$
$\mathbf{H_{18}}=\langle g,x\,|\,g^{8}=1,x^{2}=0,xg=-gx\rangle,$ where $g\in
G$ and $x\in P_{1,g}$;
$\mathbf{H_{19}}=\langle g,x\,|\,g^{8}=1,x^{2}=0,xg=qgx\rangle,$ where $q$ is
a primitive root of unity of order $8$, $g\in G$ and $x\in P_{1,g^{4}}$;
$\mathbf{H_{20}}=\langle g,x\,|\,g^{8}=1,x^{2}=0,xg=qgx\rangle,$ where $q$ is
a primitive root of unity of order $4$, $g\in G$ and $x\in P_{1,g^{2}}$;
$\mathbf{H_{21}}=\langle g,x\,|\,g^{8}=1,x^{2}=0,xg=qgx\rangle,$ where $q$ is
a primitive root of unity of order $4$, $g\in G$ and $x\in P_{1,g^{6}}$;
$\mathbf{H_{22}}=\langle g,x\,|\,g^{8}=1,x^{2}=g^{2}-1,xg=-gx\rangle,$ where
$g\in G$ and $x\in P_{1,g}$;
$\mathbf{H_{23}}=\langle
g,\,h,\,x\;\,|\;\,g^{4}=h^{2}=(gh)^{4}=1,\,\;x^{2}=0,\,\;xg=-gx,\,\;\;xh=hx\rangle,$
where $g,h\in G$ and $x\in P_{1,g}$;
$\mathbf{H_{24}}=\langle
g,\,h,\,x\;\,|\;\,g^{4}=h^{2}=(gh)^{4}=1,\,\;x^{2}=0,\,\;xg=gx,\,\;\;xh=-hx\rangle,$
where $g,h\in G$ and $x\in P_{1,gh}$;
$\mathbf{H_{25}}=\langle
g,\,h,\,x\;\,|\;\,g^{4}=h^{2}=(gh)^{4}=1,\,\;x^{2}=0,\,\;xg=qgx,\,\;\;xh=hx\rangle,$
where $q$ is a primitive root of unity of order $4$, $g,h\in G$ and $x\in
P_{1,g^{2}}$;
$\mathbf{H_{26}}=\langle
g,\,h,\,x\;\,|\;\,g^{4}=h^{2}=(gh)^{4}=1,\,\;x^{2}=0,\,\;xg=gx,\,\;\;xh=-hx\rangle,$
where $g,h\in G$ and $x\in P_{1,h}$;
$\mathbf{H_{27}}=\langle
g,\,h,\,x\;\,|\;\,g^{4}=h^{2}=(gh)^{4}=1,\,\;x^{2}=0,\,\;xg=qgx,\,\;\;xh=hx\rangle,$
where $q$ is a primitive root of unity of order $4$, $g,h\in G$ and $x\in
P_{1,g^{2}h}$;
$\mathbf{H_{28}}=\langle
g,\,h,\,x\,|\,g^{4}=h^{2}=(gh)^{4}=1,\,x^{2}=g^{2}-1,\,xg=-gx,\,xh=hx\rangle,$
where $g,h\in G$ and $x\in P_{1,g}$;
$\mathbf{H_{29}}=\langle
g,\,h,\,x\,|\,g^{4}=h^{2}=(gh)^{4}=1,\,\,x^{2}=g^{2}-1,\,\,xg=gx,\,\,xh=-hx\rangle,$
where $g,h\in G$ and $x\in P_{1,gh}$.
In the following theorem, we present the partial actions of the Hopf algebras
listed above. In order to reduce the size of the tables, we will omit
$\lambda_{N}(h)$ for all $h\in G(H)$, since these values are uniquely
determined by $N$.
###### Theorem 3.8.
For each pointed non-semisimple Hopf algebra of dimension 16, all partial
actions on its base field are computed. They are presented in Tables 6 \- 34,
where $\alpha,\beta,\theta,\gamma,\omega,\delta,\sigma,\Omega,\zeta\in\Bbbk$
are such that $\gamma^{2}=\omega^{2}=-1,$ $\delta\sigma=0$ and
$\Omega\zeta=-\frac{1}{2}$.
Table 6. Partial actions of $\mathbf{H_{1}}$ on $\Bbbk$ | $x$ | $gx$ | $y$ | $gy$ | $z$ | $gz$ | $xy$ | $gxy$ | $xz$ | $gxz$ | $yz$ | $gyz$ | $xyz$ | $gxyz$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}\\!=\\!\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\theta$ | $\theta$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 7. Partial actions of $\mathbf{H_{2}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $y$ | $gy$ | $g^{2}y$ | $g^{3}y$ | $xy$ | $gxy$ | $g^{2}xy$ | $g^{3}xy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\beta$ | $0$ | $\beta$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 8. Partial actions of $\mathbf{H_{3}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $y$ | $gy$ | $g^{2}y$ | $g^{3}y$ | $xy$ | $gxy$ | $g^{2}xy$ | $g^{3}xy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\beta$ | $0$ | $\beta$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 9. Partial actions of $\mathbf{H_{4}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $y$ | $gy$ | $g^{2}y$ | $g^{3}y$ | $xy$ | $gxy$ | $g^{2}xy$ | $g^{3}xy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 10. Partial actions of $\mathbf{H_{5}}$ on $\Bbbk$ $(\gamma^{2}=-1)$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $y$ | $gy$ | $g^{2}y$ | $g^{3}y$ | $xy$ | $gxy$ | $g^{2}xy$ | $g^{3}xy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $0$ | $0$ | $0$ | $0$ | $\gamma$ | $0$ | $0$ | $\gamma$ | $0$ | $0$ | $0$ | $0$
Table 11. Partial actions of $\mathbf{H_{6}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $y$ | $gy$ | $g^{2}y$ | $g^{3}y$ | $xy$ | $gxy$ | $g^{2}xy$ | $g^{3}xy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$
Table 12. Partial actions of $\mathbf{H_{7}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $y$ | $gy$ | $g^{2}y$ | $g^{3}y$ | $xy$ | $gxy$ | $g^{2}xy$ | $g^{3}xy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $0$ | $0$ | $0$ | $0$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 13. Partial actions of $\mathbf{H_{8}}$ on $\Bbbk$ $(\gamma^{2}=-1)$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $y$ | $gy$ | $g^{2}y$ | $g^{3}y$ | $xy$ | $gxy$ | $g^{2}xy$ | $g^{3}xy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $0$ | $0$ | $0$ | $0$ | $\gamma$ | $\gamma$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 14. Partial actions of $\mathbf{H_{9}}$ on $\Bbbk$ $(\gamma^{2}=\omega^{2}=-1)$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $y$ | $gy$ | $g^{2}y$ | $g^{3}y$ | $xy$ | $gxy$ | $g^{2}xy$ | $g^{3}xy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\gamma$ | $0$ | $0$ | $\gamma$ | $\omega$ | $\omega$ | $0$ | $0$ | $0$ | $-\gamma\omega$ | $0$ | $\gamma\omega$
Table 15. Partial actions of $\mathbf{H_{10}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $x^{2}$ | $gx^{2}$ | $g^{2}x^{2}$ | $g^{3}x^{2}$ | $x^{3}$ | $gx^{3}$ | $g^{2}x^{3}$ | $g^{3}x^{3}$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}\\!=\\!\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $0$ | $-q\alpha$ | $\alpha^{2}$ | $0$ | $-q\alpha^{2}$ | $(1\\!-\\!q)\alpha^{2}$ | $\alpha^{3}$ | $\alpha^{3}$ | $\alpha^{3}$ | $\alpha^{3}$
Table 16. Partial actions of $\mathbf{H_{11}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $x^{2}$ | $gx^{2}$ | $g^{2}x^{2}$ | $g^{3}x^{2}$ | $x^{3}$ | $gx^{3}$ | $g^{2}x^{3}$ | $g^{3}x^{3}$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}\\!=\\!\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $0$ | $q\alpha$ | $\alpha^{2}$ | $0$ | $q\alpha^{2}$ | $(1\\!+\\!q)\alpha^{2}$ | $\alpha^{3}$ | $\alpha^{3}$ | $\alpha^{3}$ | $\alpha^{3}$
Table 17. Partial actions of $\mathbf{H_{12}}$ on $\Bbbk$ | $x$ | $hx$ | $y$ | $hy$ | $gx$ | $ghx$ | $gy$ | $ghy$ | $xy$ | $hxy$ | $gxy$ | $ghxy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,h\\}}$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,gh\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $\beta$ | $0$ | $\alpha$ | $0$ | $\beta$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 18. Partial actions of $\mathbf{H_{13}}$ on $\Bbbk$ | $x$ | $gx$ | $hx$ | $ghx$ | $y$ | $gy$ | $hy$ | $ghy$ | $xy$ | $gxy$ | $hxy$ | $ghxy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,h\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,gh\\}}$ | $0$ | $0$ | $0$ | $0$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 19. Partial actions of $\mathbf{H_{14}}$ on $\Bbbk$ | $y$ | $hy$ | $x$ | $gx$ | $gy$ | $ghy$ | $hx$ | $ghx$ | $xy$ | $gxy$ | $hxy$ | $ghxy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,h\\}}$ | $0$ | $0$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g\\}}$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,gh\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\beta$ | $\beta$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $0$ | $0$ | $\alpha\beta$ | $\alpha\beta$ | $\alpha\beta$ | $\alpha\beta$
Table 20. Partial actions of $\mathbf{H_{15}}$ on $\Bbbk$ $(\sigma\delta=0)$ | $x$ | $gx$ | $hx$ | $ghx$ | $y$ | $gy$ | $hy$ | $ghy$ | $xy$ | $gxy$ | $hxy$ | $ghxy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,gh\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\sigma$ | $\sigma$ | $0$ | $0$ | $\delta$ | $0$ | $\delta$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 21. Partial actions of $\mathbf{H_{16}}$ on $\Bbbk$ $\left(\Omega\zeta=-\frac{1}{2}\right)$ | $x$ | $gx$ | $hx$ | $ghx$ | $y$ | $gy$ | $hy$ | $ghy$ | $xy$ | $gxy$ | $hxy$ | $ghxy$
---|---|---|---|---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,gh\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\beta$ | $\beta$ | $\beta$ | $\beta$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\Omega$ | $\Omega$ | $0$ | $0$ | $\zeta$ | $0$ | $\zeta$ | $0$ | $-\frac{1}{2}$ | $-\frac{1}{2}$ | $\frac{1}{2}$ | $\frac{1}{2}$
Table 22. Partial actions of $\mathbf{H_{17}}$ on $\Bbbk$ | $x$ | $ax$ | $bx$ | $abx$ | $gx$ | $agx$ | $bgx$ | $abgx$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,a,g,ag\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,b,g,bg\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,ab,g,abg\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,a,gb,agb\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,b,ag,abg\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,ab,ag,bg\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,a,b,ab\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1,g\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,ag\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,bg\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,abg\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,a\\}}$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $\alpha$ | $\alpha$ | $0$ | $0$
$\lambda_{\\{1,b\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$
$\lambda_{\\{1,ab\\}}$ | $\alpha$ | $0$ | $0$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $\alpha$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $0$ | $0$ | $\alpha$ | $0$ | $0$ | $0$
Table 23. Partial actions of $\mathbf{H_{18}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $g^{4}x$ | $g^{5}x$ | $g^{6}x$ | $g^{7}x$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},g^{4},g^{6}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1,g^{4}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 24. Partial actions of $\mathbf{H_{19}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $g^{4}x$ | $g^{5}x$ | $g^{6}x$ | $g^{7}x$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},g^{4},g^{6}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{4}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $0$ | $0$ | $\alpha$ | $0$ | $0$ | $0$
Table 25. Partial actions of $\mathbf{H_{20}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $g^{4}x$ | $g^{5}x$ | $g^{6}x$ | $g^{7}x$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},g^{4},g^{6}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{4}\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$
$\lambda_{\\{1\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 26. Partial actions of $\mathbf{H_{21}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $g^{4}x$ | $g^{5}x$ | $g^{6}x$ | $g^{7}x$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},g^{4},g^{6}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{4}\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$
$\lambda_{\\{1\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 27. Partial actions of $\mathbf{H_{22}}$ on $\Bbbk$ $(\gamma^{2}=-1)$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $g^{4}x$ | $g^{5}x$ | $g^{6}x$ | $g^{7}x$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},g^{4},g^{6}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1,g^{4}\\}}$ | $\gamma$ | $0$ | $0$ | $\gamma$ | $\gamma$ | $0$ | $0$ | $\gamma$
$\lambda_{\\{1\\}}$ | $\gamma$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $\gamma$
Table 28. Partial actions of $\mathbf{H_{23}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $hx$ | $ghx$ | $g^{2}hx$ | $g^{3}hx$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g,g^{2},g^{3}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},h,g^{2}h\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 29. Partial actions of $\mathbf{H_{24}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $hx$ | $ghx$ | $g^{2}hx$ | $g^{3}hx$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g,g^{2},g^{3}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1,g^{2},h,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $0$ | $\alpha$ | $0$ | $\alpha$
$\lambda_{\\{1\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 30. Partial actions of $\mathbf{H_{25}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $hx$ | $ghx$ | $g^{2}hx$ | $g^{3}hx$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g,g^{2},g^{3}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},h,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,h\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$
$\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $0$ | $0$ | $0$ | $0$
Table 31. Partial actions of $\mathbf{H_{26}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $hx$ | $ghx$ | $g^{2}hx$ | $g^{3}hx$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g,g^{2},g^{3}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1,g^{2},h,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $0$ | $0$ | $\alpha$ | $0$ | $0$ | $0$
Table 32. Partial actions of $\mathbf{H_{27}}$ on $\Bbbk$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $hx$ | $ghx$ | $g^{2}hx$ | $g^{3}hx$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g,g^{2},g^{3}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},h,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,h\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $\alpha$ | $0$
$\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\alpha$ | $0$ | $0$ | $0$ | $0$ | $0$ | $\alpha$ | $0$
Table 33. Partial actions of $\mathbf{H_{28}}$ on $\Bbbk$ $(\gamma^{2}=-1)$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $hx$ | $ghx$ | $g^{2}hx$ | $g^{3}hx$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g,g^{2},g^{3}\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},h,g^{2}h\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1,g^{2}h\\}}$ | $\gamma$ | $0$ | $0$ | $\gamma$ | $0$ | $\gamma$ | $\gamma$ | $0$
$\lambda_{\\{1,h\\}}$ | $\gamma$ | $0$ | $0$ | $\gamma$ | $\gamma$ | $0$ | $0$ | $\gamma$
$\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1\\}}$ | $\gamma$ | $0$ | $0$ | $\gamma$ | $0$ | $0$ | $0$ | $0$
Table 34. Partial actions of $\mathbf{H_{29}}$ on $\Bbbk$ $(\gamma^{2}=-1)$ | $x$ | $gx$ | $g^{2}x$ | $g^{3}x$ | $hx$ | $ghx$ | $g^{2}hx$ | $g^{3}hx$
---|---|---|---|---|---|---|---|---
$\lambda_{G}=\varepsilon$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g,g^{2},g^{3}\\}}$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$ | $\alpha$
$\lambda_{\\{1,g^{2},h,g^{2}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$\lambda_{\\{1,g^{2}\\}}$ | $\alpha$ | $0$ | $\alpha$ | $0$ | $0$ | $\alpha$ | $0$ | $\alpha$
$\lambda_{\\{1\\}}$ | $\gamma$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $\gamma$
###### Proof.
To obtain the partial actions presented in Tables 6 \- 34, routine
computations using the method described in Subsection 3.2 are performed.
We detail only the case of $\mathbf{H_{6}}$ (Table 11), as this is
the first case where there exists a subgroup $N$ of $G(\mathbf{H_{6}})$ such
that none of the initial $N$-conditions for the partial system associated with
$\mathcal{B}$ is also a solution of the $N$-reduced partial system associated
with $\mathcal{B}$.
Step 1: let
$\mathcal{B}=\\{1,g,g^{2},g^{3},x,gx,g^{2}x,g^{3}x,y,gy,g^{2}y,g^{3}y,xy,gxy,g^{2}xy,g^{3}xy\\}$
be a basis of $\mathbf{H}_{6}$.
Step 2: since $G(\mathbf{H}_{6})=\\{1,g,g^{2},g^{3}\\}$, we have 3
possibilities for $N$: $\\{1,g,g^{2},g^{3}\\}$, $\\{1,g^{2}\\}$ and $\\{1\\}$.
1. (1)
$N=\\{1,g,g^{2},g^{3}\\}$. By Step 3 we assume
$\alpha_{1}=\alpha_{g}=\alpha_{g^{2}}=\alpha_{g^{3}}=1$,
$\alpha_{g^{3}x}=\alpha_{g^{2}x}=\alpha_{gx}=\alpha_{x}$,
$\alpha_{g^{3}y}=\alpha_{g^{2}y}=\alpha_{gy}=\alpha_{y}$ and
$\alpha_{g^{3}xy}=\alpha_{g^{2}xy}=\alpha_{gxy}=\alpha_{xy}$, where
$\alpha_{x},\alpha_{y},\alpha_{xy}\in\Bbbk$.
Step 4: investigate the $N$-reduced partial system associated with
$\mathcal{B}$. First, if $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of
(3.14), then $\alpha_{x}=\alpha_{y}=0$ and also $\alpha_{xy}=0$, by Lemma 3.5
items (a) and (b), respectively. Hence, in this case, the only possibility for
the associated partial action $\lambda:\mathbf{H}_{6}\longrightarrow\Bbbk$,
$\lambda(b)=\alpha_{b}$, is the counit $\varepsilon$.
Step 5: $\lambda$ is a partial action of $\mathbf{H}_{6}$ on $\Bbbk$ with
initial condition $\\{1,g,g^{2},g^{3}\\}$ if and only if
$\lambda=\varepsilon$;
2. (2)
$N=\\{1,g^{2}\\}$. Step 3: consider $\alpha_{1}=\alpha_{g^{2}}=1$,
$\alpha_{g}=\alpha_{g^{3}}=0$, $\alpha_{g^{2}x}=\alpha_{x}$,
$\alpha_{g^{3}x}=\alpha_{gx}$, $\alpha_{g^{2}y}=\alpha_{y}$,
$\alpha_{g^{3}y}=\alpha_{gy}$, $\alpha_{g^{2}xy}=\alpha_{xy}$ and
$\alpha_{g^{3}xy}=\alpha_{gxy}$, where
$\alpha_{x},\alpha_{gx},\alpha_{y},\alpha_{gy},\alpha_{xy},\alpha_{gxy}\in\Bbbk$.
Step 4: since $x,y\in\mathcal{B}_{1,g}$ we have
$\widetilde{\mathcal{B}}=\\{gx,gy,xy,gxy\\}$. Then, the system (3.14) has 4
blocks of 16 equations each:
$\displaystyle\left\\{\begin{array}[]{rl}\Lambda(gx)\Lambda(v)=&\\!\\!\\!\\!\Lambda(gx)\Lambda(gv)+\Lambda(g^{2})\Lambda(gxv)\\\
\Lambda(gy)\Lambda(v)=&\\!\\!\\!\\!\Lambda(gy)\Lambda(gv)+\Lambda(g^{2})\Lambda(gyv)\\\
\Lambda(xy)\Lambda(v)=&\\!\\!\\!\\!\Lambda(xy)\Lambda(v)-\Lambda(gx)\Lambda(yv)\\\
&\\!\\!\\!\\!+\Lambda(gy)\Lambda(xv)+\Lambda(g^{2})\Lambda(xyv)\\\
\Lambda(gxy)\Lambda(v)=&\\!\\!\\!\\!\Lambda(gxy)\Lambda(gv)-\Lambda(g^{2}x)\Lambda(gyv)\\\
&\\!\\!\\!\\!+\Lambda(g^{2}y)\Lambda(gxv)+\Lambda(g^{3})\Lambda(gxyv)\end{array}\right.\qquad
v\in\mathcal{B}.$
Assume that $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of the system
above.
If we replace $(\alpha_{b})_{b\in\mathcal{B}}$ in the first two blocks of
equations, we get:
$\left\\{\begin{array}[]{ll}\alpha_{gx}=\alpha_{gx}\\\
0=\alpha_{gx}-\alpha_{x}\\\ \alpha_{gx}=\alpha_{gx}\\\
0=\alpha_{gx}-\alpha_{x}\\\ \alpha_{gx}\alpha_{x}=\alpha_{gx}\alpha_{gx}\\\
\alpha_{gx}\alpha_{gx}=\alpha_{gx}\alpha_{x}\\\
\alpha_{gx}\alpha_{x}=\alpha_{gx}\alpha_{gx}\\\
\alpha_{gx}\alpha_{gx}=\alpha_{gx}\alpha_{x}\\\
\alpha_{gx}\alpha_{y}=\alpha_{gx}\alpha_{gy}+\alpha_{gxy}\\\
\alpha_{gx}\alpha_{gy}=\alpha_{gx}\alpha_{y}-\alpha_{xy}\\\
\alpha_{gx}\alpha_{y}=\alpha_{gx}\alpha_{gy}+\alpha_{gxy}\\\
\alpha_{gx}\alpha_{gy}=\alpha_{gx}\alpha_{y}-\alpha_{xy}\\\
\alpha_{gx}\alpha_{xy}=\alpha_{gx}\alpha_{gxy}\\\
\alpha_{gx}\alpha_{gxy}=\alpha_{gx}\alpha_{xy}\\\
\alpha_{gx}\alpha_{xy}=\alpha_{gx}\alpha_{gxy}\\\
\alpha_{gx}\alpha_{gxy}=\alpha_{gx}\alpha_{xy}\end{array}\right.\textrm{ \ \
and \ \ }\left\\{\begin{array}[]{ll}\alpha_{gy}=\alpha_{gy}\\\
0=\alpha_{gy}-\alpha_{y}\\\ \alpha_{gy}=\alpha_{gy}\\\
0=\alpha_{gy}-\alpha_{y}\\\
\alpha_{gy}\alpha_{x}=\alpha_{gy}\alpha_{gx}-\alpha_{gxy}\\\
\alpha_{gy}\alpha_{gx}=\alpha_{gy}\alpha_{x}+\alpha_{xy}\\\
\alpha_{gy}\alpha_{x}=\alpha_{gy}\alpha_{gx}-\alpha_{gxy}\\\
\alpha_{gy}\alpha_{gx}=\alpha_{gy}\alpha_{x}+\alpha_{xy}\\\
\alpha_{gy}\alpha_{y}=\alpha_{gy}\alpha_{gy}\\\
\alpha_{gy}\alpha_{gy}=\alpha_{gy}\alpha_{y}\\\
\alpha_{gy}\alpha_{y}=\alpha_{gy}\alpha_{gy}\\\
\alpha_{gy}\alpha_{gy}=\alpha_{gy}\alpha_{y}\\\
\alpha_{gy}\alpha_{xy}=\alpha_{gy}\alpha_{gxy}\\\
\alpha_{gy}\alpha_{gxy}=\alpha_{gy}\alpha_{xy}\\\
\alpha_{gy}\alpha_{xy}=\alpha_{gy}\alpha_{gxy}\\\
\alpha_{gy}\alpha_{gxy}=\alpha_{gy}\alpha_{xy}.\end{array}\right.$
From the second line of each block, we obtain $\alpha_{x}=\alpha_{gx}$ and
$\alpha_{y}=\alpha_{gy}$. Substituting into the lines involving $\alpha_{xy}$
and $\alpha_{gxy}$, one concludes that $\alpha_{gxy}=\alpha_{xy}=0$, and with
these parameters all equations of the two blocks above hold.
Now, one needs to analyze the third and fourth blocks of equations, that is,
$\displaystyle\left\\{\begin{array}[]{rl}\Lambda(xy)\Lambda(v)=&\\!\\!\\!\\!\Lambda(xy)\Lambda(v)-\Lambda(gx)\Lambda(yv)\\\
&\\!\\!\\!\\!+\Lambda(gy)\Lambda(xv)+\Lambda(g^{2})\Lambda(xyv)\\\
\Lambda(gxy)\Lambda(v)=&\\!\\!\\!\\!\Lambda(gxy)\Lambda(gv)-\Lambda(g^{2}x)\Lambda(gyv)\\\
&\\!\\!\\!\\!+\Lambda(g^{2}y)\Lambda(gxv)+\Lambda(g^{3})\Lambda(gxyv)\end{array}\right.\qquad
v\in\mathcal{B}.$
However, it is an easy task to verify that $(\alpha_{b})_{b\in\mathcal{B}}$,
with the parameters already obtained, satisfies these blocks of equations,
and no additional conditions are required.
Step 5: In conclusion, for any $\alpha,\beta\in\Bbbk$, the linear map
$\lambda:\mathbf{H}_{6}\longrightarrow\Bbbk$ given by
$\lambda(1)=\lambda(g^{2})=1$, $\lambda(g)=\lambda(g^{3})=0$,
$\lambda(x)=\lambda(gx)=\lambda(g^{2}x)=\lambda(g^{3}x)=\alpha$,
$\lambda(y)=\lambda(gy)=\lambda(g^{2}y)=\lambda(g^{3}y)=\beta$ and
$\lambda(xy)=\lambda(gxy)=\lambda(g^{2}xy)=\lambda(g^{3}xy)=0$, is a partial
action of $\mathbf{H}_{6}$ on $\Bbbk$.
3. (3)
$N=\\{1\\}$. Step 3: we consider $(\alpha_{b})_{b\in\mathcal{B}}$ a solution
of (3.12). Now, in Step 4, we investigate the system (3.14). As
$gx,g^{3}x,g^{3}y\in\widetilde{\mathcal{B}}$, consider the following $5$
particular equations of the system (3.14):
$\displaystyle\left\\{\begin{array}[]{lr}\Lambda(gx)\Lambda(v)=\Lambda(gx)\Lambda(gv)+\Lambda(g^{2})\Lambda(gxv)&\qquad\qquad
v=1\\\
\Lambda(g^{3}x)\Lambda(v)=\Lambda(g^{3}x)\Lambda(g^{3}v)+\Lambda(1)\Lambda(g^{3}xv)&v=g\\\
\Lambda(g^{3}x)\Lambda(v)=\Lambda(g^{3}x)\Lambda(g^{3}v)+\Lambda(1)\Lambda(g^{3}xv)&v=gx\\\
\Lambda(g^{3}x)\Lambda(v)=\Lambda(g^{3}x)\Lambda(g^{3}v)+\Lambda(1)\Lambda(g^{3}xv)&v=gy\\\
\Lambda(g^{3}y)\Lambda(v)=\Lambda(g^{3}y)\Lambda(g^{3}v)+\Lambda(1)\Lambda(g^{3}yv)&v=gx\end{array}\right.$
If $(\alpha_{b})_{b\in\mathcal{B}}$ is a solution of the system (3.14), then
for the particular equations above, we get:
$\left\\{\begin{array}[]{ll}\alpha_{gx}=0\\\ 0=\alpha_{g^{3}x}-\alpha_{x}\\\
\alpha_{g^{3}x}\alpha_{gx}=\alpha_{g^{3}x}\alpha_{x}\\\
\alpha_{g^{3}x}\alpha_{gy}=\alpha_{g^{3}x}\alpha_{y}-\alpha_{xy}\\\
\alpha_{g^{3}y}\alpha_{gx}=\alpha_{g^{3}y}\alpha_{x}+\alpha_{xy}+1.\end{array}\right.$
The first 3 conditions imply that $\alpha_{x}=\alpha_{gx}=\alpha_{g^{3}x}=0$.
Then, the fourth condition imposes $\alpha_{xy}=0$. But, in this case, the
fifth condition states $0=1$, a contradiction.
Hence, $(\alpha_{b})_{b\in\mathcal{B}}$ cannot be a solution to both systems
(3.12) and (3.14) simultaneously.
Therefore, all partial actions of $\mathbf{H}_{6}$ on $\Bbbk$ are given in
Table 11. ∎
In particular, Theorem 3.8 shows that, given a Hopf algebra $H$ and a subgroup
$N$ of $G(H)$, a partial action $\lambda:H\longrightarrow\Bbbk$ with initial
condition $N$ need not exist. See, for instance, Table 11.
## 4\. $\lambda$-Hopf algebras
In this section, the concept of a $\lambda$-Hopf algebra is introduced, and
some results and examples are developed. Subsection 4.2 is dedicated to
presenting Taft’s algebra as a $\lambda$-Hopf algebra. The deformations
produced by the partial actions of the pointed non-semisimple Hopf algebras of
dimension 8 and 16 on their base field, calculated in the previous section,
are analyzed in the last subsection.
### 4.1. Definition and Properties
In this subsection, a $\lambda$-Hopf algebra is defined: a Hopf algebra
obtained by a deformation of a Hopf algebra $L$ through a partial action
$\lambda:L\longrightarrow\Bbbk$. Some guiding examples are presented and
results concerning such types of deformations and partial actions are
developed. In particular, we show that every Hopf algebra $H$ is a
$\lambda$-Hopf algebra.
Recall from Definition 2.6 that, given a partial action
$\lambda:H\longrightarrow\Bbbk$, the corresponding partial smash product
algebra $\underline{\Bbbk\\#H}$ is the algebra generated by the set
$\left\\{\underline{1_{\Bbbk}\\#h}\,|\,h\in
H\right\\}=\left\\{1_{\Bbbk}\\#\lambda(h_{1})h_{2}\,|\,h\in H\right\\}$.
Moreover $\underline{1_{\Bbbk}\\#h}=1_{\Bbbk}\\#\lambda(h_{1})h_{2}$ and
$\underline{1_{\Bbbk}\\#h}\,\,\underline{1_{\Bbbk}\\#k}=\underline{1_{\Bbbk}\\#\lambda(h_{1})h_{2}k}$
for all $h,k\in H$. Then, $\varphi:\underline{\Bbbk\\#H}\longrightarrow H$
given by
$\varphi\left(1_{\Bbbk}\\#\lambda(h_{1})h_{2}\right)=\lambda(h_{1})h_{2}$ is
an injective homomorphism of algebras. We denote by $H_{\lambda}$ the
subalgebra of $H$ given by the image of $\varphi$, that is,
$H_{\lambda}=\varphi\left(\underline{\Bbbk\\#H}\right)\subseteq H$. Hence,
$H_{\lambda}=\\{\lambda(h_{1})h_{2}\,|\,h\in H\\}$ and so
$\underline{\Bbbk\\#H}\simeq H_{\lambda}$ is a subalgebra of $H$. However,
this subalgebra need not be a Hopf subalgebra, a subbialgebra, or even a
subcoalgebra of $H$. We exhibit two examples to illustrate this situation:
###### Example 4.1.
Let $G$ be a group and $\lambda_{N}$ as in Example 2.2. Then, $(\Bbbk
G)_{\lambda_{N}}\simeq\Bbbk N$, which is consequently a Hopf subalgebra of
$\Bbbk G$.
###### Example 4.2.
Assume $char(\Bbbk)\neq 2$ and let $\mathbb{H}_{4}$ be the Sweedler’s Hopf
algebra. Consider the partial action $\lambda_{\alpha}$ as in Example 2.5,
_i.e._ , $\lambda_{\alpha}:\mathbb{H}_{4}\longrightarrow\Bbbk$,
$\lambda_{\alpha}(1)=1_{\Bbbk},$ $\lambda_{\alpha}(g)=0$ and
$\lambda_{\alpha}(x)=\lambda_{\alpha}(gx)=\alpha$, where $\alpha\in\Bbbk$.
The subalgebra
$(\mathbb{H}_{4})_{\lambda_{\alpha}}=\Bbbk\\{1\\}\oplus\Bbbk\\{\alpha g+gx\\}$
is not a subcoalgebra of $\mathbb{H}_{4}$, since
$\Delta(\alpha g+gx)=(\alpha g+gx)\otimes g+1\otimes
gx\notin(\mathbb{H}_{4})_{\lambda_{\alpha}}\otimes(\mathbb{H}_{4})_{\lambda_{\alpha}}.$
The previous examples show that for a given Hopf algebra $L$ and a partial
action $\lambda\in L^{*}$, $L_{\lambda}$ is not always a Hopf subalgebra of
$L$. Thus, we ask when a Hopf algebra $H$ is obtained in this way, that is,
when $H\simeq L_{\lambda}$, for some Hopf algebra $L$ and partial action
$\lambda\in L^{*}$. In this sense, we define:
###### Definition 4.3.
We say that a Hopf algebra $H$ is a _(left) $\lambda$-Hopf algebra_ if there
exists a pair $(L,\lambda)$ where $L$ is a Hopf algebra and
$\lambda:L\longrightarrow\Bbbk$ is a (left) partial action such that
$L_{\lambda}\simeq H$.
Since $\varepsilon$ is a partial action (global, in fact) and
$\varepsilon(h_{1})h_{2}=h$, for all $h\in H$, we have that
$H_{\varepsilon}=H$. Then, every Hopf algebra $H$ is an $\varepsilon$-Hopf
algebra in the usual way.
In the next example we prove that $\Bbbk G$ is a $\lambda$-Hopf algebra.
###### Example 4.4.
Let $G$ be a group. Consider a group $F$ that contains $G$ as a subgroup and
the partial action $\lambda_{G}:\Bbbk F\longrightarrow\Bbbk$. It is clear that
$(\Bbbk F)_{\lambda_{G}}\simeq\Bbbk G$. Moreover, since $F$ can be chosen
arbitrarily, we do not have uniqueness for a Hopf algebra $L$ such that
$L_{\lambda}\simeq\Bbbk G$. In particular, we can consider $F$ as the group
$G\times C_{\ell}$, where $C_{\ell}$ denotes the cyclic group of order $\ell$,
$\ell\geq 1$. In this case, $dim_{\Bbbk}(\Bbbk F)=\ell\ dim_{\Bbbk}(\Bbbk G)$.
Now our goal is to determine necessary and sufficient conditions to conclude
whether $H_{\lambda}$ is a Hopf subalgebra of $H$ or not. First, we have the
following lemma:
###### Lemma 4.5.
Let $\lambda:H\longrightarrow\Bbbk$ be a partial action of $H$ on $\Bbbk$.
Then, $\lambda{|_{H_{\lambda}}}=\varepsilon{|_{H_{\lambda}}}$.
###### Proof.
Consider $\lambda(h_{1})h_{2}\in H_{\lambda}$. Then,
$\lambda(\lambda(h_{1})h_{2})=\lambda(h_{1})\lambda(h_{2})=\lambda(h).$ On the
other hand,
$\varepsilon(\lambda(h_{1})h_{2})=\lambda(h_{1})\varepsilon(h_{2})=\lambda(h_{1}\varepsilon(h_{2}))=\lambda(h).$
Thus, $\lambda{|_{H_{\lambda}}}=\varepsilon{|_{H_{\lambda}}}$. ∎
Thus, in the subalgebra $H_{\lambda}$, the maps $\varepsilon$ and $\lambda$
coincide. In particular, $\lambda$ is multiplicative on $H_{\lambda}$, and so
if $H_{\lambda}$ is a subcoalgebra (or subbialgebra or Hopf subalgebra) of
$H$, then the counit of $H_{\lambda}$ is given exactly by
$\lambda{|_{H_{\lambda}}}=\varepsilon{|_{H_{\lambda}}}$. Hence, we obtain the
following characterization:
###### Theorem 4.6.
Let $H$ be a finite-dimensional Hopf algebra and
$\lambda:H\longrightarrow\Bbbk$ a partial action. Then, $H_{\lambda}$ is a
Hopf subalgebra of $H$ if and only if
$\displaystyle\lambda(h_{1})h_{2}=\lambda(h_{1})h_{2}\lambda(h_{3})$ (4.1)
holds for all $h\in H$.
###### Proof.
First, suppose $\lambda(h_{1})h_{2}=\lambda(h_{1})h_{2}\lambda(h_{3})$ for all
$h\in H$. We already have that $H_{\lambda}$ is a subalgebra of $H$. Consider
the restriction of the comultiplication $\Delta_{H}$ of $H$ to $H_{\lambda}$,
that is, $\Delta_{H}{|_{H_{\lambda}}}:H_{\lambda}\longrightarrow H\otimes H$.
We denote $\Delta=\Delta_{H}{|_{H_{\lambda}}}$. We shall see
$\Delta(H_{\lambda})\subseteq H_{\lambda}\otimes H_{\lambda}$. Note that
$\Delta(\lambda(h_{1})h_{2})=\lambda(h_{1})\Delta(h_{2})=\lambda(h_{1})h_{2}\otimes
h_{3}=\lambda(h_{11})h_{12}\otimes h_{2}.$
Now, using the hypothesis, we get $\lambda(h_{11})h_{12}\otimes
h_{2}=\lambda(h_{11})h_{12}\lambda(h_{13})\otimes h_{2},$ and so
$\displaystyle\lambda(h_{11})h_{12}\lambda(h_{13})\otimes h_{2}$
$\displaystyle=\lambda(h_{1})h_{2}\lambda(h_{3})\otimes h_{4}$
$\displaystyle=\lambda(h_{1})h_{2}\otimes\lambda(h_{3})h_{4}$
$\displaystyle=\lambda(h_{11})h_{12}\otimes\lambda(h_{21})h_{22}.$
Therefore, $H_{\lambda}$ is closed under the comultiplication $\Delta$ and,
since we are dealing with restriction maps, we obtain that
$(H_{\lambda},\Delta,\varepsilon|_{H_{\lambda}})$ is a subcoalgebra of
$H$. Moreover, $H_{\lambda}$ is a subbialgebra of $H$ and, since $H$ is finite
dimensional, it follows that $H_{\lambda}$ is a Hopf subalgebra of $H$ (_cf._
[18, Proposition 7.6.1]).
Conversely, suppose that $H_{\lambda}$ is a Hopf subalgebra of $H$, that is,
$\Delta_{H_{\lambda}}=\Delta_{H}{|_{H_{\lambda}}}$ and
$\varepsilon_{H_{\lambda}}=\varepsilon_{H}{|_{H_{\lambda}}}$. Then,
$H_{\lambda}$ is a Hopf algebra and, in particular,
$\psi\circ(Id_{H_{\lambda}}\otimes\varepsilon_{H_{\lambda}})\circ\Delta_{H_{\lambda}}=Id_{H_{\lambda}}$
holds, where $\psi:H_{\lambda}\otimes\Bbbk\longrightarrow H_{\lambda}$ is the
canonical isomorphism. Using Lemma 4.5, we have that
$\lambda{|_{H_{\lambda}}}=\varepsilon_{H_{\lambda}}$ and then
$\psi\circ(Id_{H_{\lambda}}\otimes\lambda{|_{H_{\lambda}}})\circ\Delta_{H_{\lambda}}=Id_{H_{\lambda}}$
holds. Thus, given $\lambda(h_{1})h_{2}\in H_{\lambda}$, it follows that
$\displaystyle\lambda(h_{1})h_{2}=$ $\displaystyle
Id_{H_{\lambda}}(\lambda(h_{1})h_{2})$ $\displaystyle=$
$\displaystyle[\psi\circ(Id_{H_{\lambda}}\otimes\lambda{|_{H_{\lambda}}})\circ\Delta_{H_{\lambda}}](\lambda(h_{1})h_{2})$
$\displaystyle=$ $\displaystyle\lambda(h_{1})h_{2}\lambda(h_{3}).$
Therefore, $\lambda(h_{1})h_{2}=\lambda(h_{1})h_{2}\lambda(h_{3})$ holds for
all $h\in H$. ∎
Remark. The characterization above can be obtained through the theory of
partial matched pairs of Hopf algebras. In such a setting, the algebra
$H_{\lambda}$ is the same algebra $\Bbbk\underline{\overline{\\#}}H$
constructed in [7], and the compatibility
$\lambda(h_{1})h_{2}=\lambda(h_{1})h_{2}\lambda(h_{3})$, for all $h\in H$, is
exactly the condition needed for $(\Bbbk,H)$ to be a partial matched pair of Hopf
algebras. Therefore, $\Bbbk\underline{\overline{\\#}}H$ is a Hopf algebra if
and only if $\lambda(h_{1})h_{2}=\lambda(h_{1})h_{2}\lambda(h_{3})$, for all
$h\in H$ (_cf._ [7, Corollary 4.5]).
###### Corollary 4.7.
Let $H$ be a finite-dimensional Hopf algebra and
$\lambda:H\longrightarrow\Bbbk$ a partial action. Suppose $x\in
P_{g,h}(H)\setminus\Bbbk\\{g-h\\}$ with $\lambda(g)=0$ and $\lambda(h)=1$.
Then, $\lambda(x_{1})x_{2}\neq\lambda(x_{1})x_{2}\lambda(x_{3})$ and,
consequently, $H_{\lambda}$ is not a Hopf subalgebra of $H$.
###### Proof.
Since $\Delta(x)=x\otimes g+h\otimes x$ and $\Delta_{2}(x)=x\otimes g\otimes
g+h\otimes x\otimes g+h\otimes h\otimes x$, where $\Delta_{2}=(\Delta\otimes
Id_{H})\circ\Delta=(Id_{H}\otimes\Delta)\circ\Delta$, we obtain
$\lambda(x_{1})x_{2}=\lambda(x)g+\lambda(h)x=\lambda(x)g+x$. On the other
hand,
$\lambda(x_{1})x_{2}\lambda(x_{3})=\lambda(x)g\lambda(g)+\lambda(h)x\lambda(g)+\lambda(h)h\lambda(x)=\lambda(x)h.$
As $x\neq\lambda(x)(h-g)$, we conclude
$\lambda(x_{1})x_{2}\neq\lambda(x_{1})x_{2}\lambda(x_{3})$.
Thus, Theorem 4.6 implies that $H_{\lambda}$ is not a Hopf subalgebra of $H$.
∎
The most important feature in the previous corollary is that $x\in H$ is a
non-trivial $(g,h)$-primitive element such that $\lambda(g)\neq\lambda(h)$. We
obtain a complementary result for the case when $x$ is a non-trivial
$(g,h)$-primitive element, but $\lambda(g)=1$ and $\lambda(h)=0$.
###### Corollary 4.8.
Let $H$ be a finite-dimensional Hopf algebra and
$\lambda:H\longrightarrow\Bbbk$ a partial action. If $x\in
P_{g,h}(H)\setminus\Bbbk\\{g-h\\}$ with $\lambda(g)=1$ and $\lambda(h)=0$,
then $H_{\lambda}$ is not a Hopf subalgebra of $H$.
###### Proof.
Consider $y=xh^{-1}$. Note that $y\in
P_{gh^{-1},1}(H)\setminus\Bbbk\\{gh^{-1}-1\\}$ and $\lambda(gh^{-1})=0$ by
Proposition 3.1. Hence, the result follows by Corollary 4.7. ∎
###### Corollary 4.9.
Let $H$ be a finite-dimensional Hopf algebra and
$\lambda:H\longrightarrow\Bbbk$ a partial action. If there exists $x\in
P_{g,h}(H)\setminus\Bbbk\\{g-h\\}$ such that $\lambda(g)\neq\lambda(h)$, then
$H_{\lambda}$ is not a Hopf subalgebra of $H$.
###### Proof.
Straightforward from Corollaries 4.7 and 4.8. ∎
The results above deal with $x\in P_{g,h}(H)$ such that
$\lambda(g)\neq\lambda(h)$. The next one deals with the case $\lambda(g)=\lambda(h)$.
###### Corollary 4.10.
Let $H$ be a Hopf algebra and $\lambda:H\longrightarrow\Bbbk$ a partial
action. Then,
1. (1)
if $g\in G(H)$ then $\lambda(g_{1})g_{2}=\lambda(g_{1})g_{2}\lambda(g_{3});$
2. (2)
if $x\in P_{g,h}(H)$ such that $\lambda(g)=\lambda(h)$, then
$\lambda(x_{1})x_{2}=\lambda(x_{1})x_{2}\lambda(x_{3}).$
In particular, if $H$ is finite-dimensional and has a basis given only by
group-like and $(g_{i},h_{i})$-primitive elements, with $g_{i},h_{i}\in G(H)$
such that $\lambda(g_{i})=\lambda(h_{i})$ for all $i$, then $H_{\lambda}$ is a
Hopf subalgebra of $H$.
###### Proof.
Recall from Proposition 3.1 that $\lambda(g)\in\\{0,1_{\Bbbk}\\}$. Then, item
(1) is clear. Now, note that $\lambda(x)=0$ by Lemma 3.2 (b), then
$\lambda(x_{1})x_{2}=\lambda(g)x$ and
$\lambda(x_{1})x_{2}\lambda(x_{3})=\lambda(g)^{2}x$. Thus, (2) holds. Finally,
the last statement in the corollary is clear by Theorem 4.6. ∎
Using these results, we can sometimes quickly check whether or not
$H_{\lambda}$ is a Hopf subalgebra of $H$. For instance, consider
$(\mathbb{H}_{4})_{\lambda_{\alpha}}$ as in Example 4.2. Since $x\in
P_{1,g}(\mathbb{H}_{4})$ and $\lambda_{\alpha}(1)\neq\lambda_{\alpha}(g)$, it
follows that $(\mathbb{H}_{4})_{\lambda_{\alpha}}$ is not a Hopf subalgebra of
$\mathbb{H}_{4}$. We present in Subsection 4.3 the whole setting for the
partial actions computed in Subsection 3.3.
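Checks of this kind are easy to mechanize. The following sketch (an
illustration only, assuming SymPy is available; the dictionary encoding of the
coproduct is our own bookkeeping) verifies condition (4.1) on the basis of
$\mathbb{H}_{4}$ for $\lambda_{\alpha}$ and locates the failure at the basis
element $gx$:

```python
# Minimal check, assuming SymPy, that condition (4.1) fails for Sweedler's
# algebra H4 with the partial action lambda_alpha of Example 4.2.
from collections import defaultdict
import sympy as sp

alpha = sp.symbols('alpha')

# Coproduct on the basis {1, g, x, gx}: Delta(b) as a list of (b1, b2, coeff).
Delta = {
    '1':  [('1', '1', 1)],
    'g':  [('g', 'g', 1)],
    'x':  [('x', '1', 1), ('g', 'x', 1)],        # x is (1,g)-primitive
    'gx': [('gx', 'g', 1), ('1', 'gx', 1)],
}
lam = {'1': sp.Integer(1), 'g': sp.Integer(0), 'x': alpha, 'gx': alpha}

def delta2(b):
    """(Delta tensor id) o Delta, as a list of (b1, b2, b3, coeff)."""
    return [(a1, a2, b2, c * c2)
            for (b1, b2, c) in Delta[b]
            for (a1, a2, c2) in Delta[b1]]

for b in Delta:
    lhs = defaultdict(lambda: sp.Integer(0))     # lambda(h1) h2
    for (b1, b2, c) in Delta[b]:
        lhs[b2] += c * lam[b1]
    rhs = defaultdict(lambda: sp.Integer(0))     # lambda(h1) h2 lambda(h3)
    for (b1, b2, b3, c) in delta2(b):
        rhs[b2] += c * lam[b1] * lam[b3]
    diff = {t: sp.expand(lhs[t] - rhs[t]) for t in set(lhs) | set(rhs)}
    if any(v != 0 for v in diff.values()):
        print(b, 'violates (4.1); difference =', diff)
# Only gx is reported, with difference alpha*g + gx - alpha*1, matching the
# computations of Example 4.2 and Corollary 4.7.
```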
We now exhibit a condition stronger than (4.1).
###### Proposition 4.11.
Let $\lambda:H\longrightarrow\Bbbk$ be a partial action. If
$\lambda(h_{1})h_{2}=h_{1}\lambda(h_{2})$ for all $h\in H$, then
$\lambda(h_{1})h_{2}=\lambda(h_{1})h_{2}\lambda(h_{3})$ for all $h\in H$. In
this case, if $H$ is finite-dimensional, then $H_{\lambda}$ is a Hopf
subalgebra of $H$.
###### Proof.
Suppose that $\lambda(h_{1})h_{2}=h_{1}\lambda(h_{2})$, for all $h\in H$. Then
$\displaystyle\lambda(h_{1})h_{2}\lambda(h_{3})$
$\displaystyle=\lambda(h_{1})(h_{2})_{1}\lambda((h_{2})_{2})$
$\displaystyle=\lambda(h_{1})\lambda((h_{2})_{1})(h_{2})_{2}$
$\displaystyle=\lambda((h_{1})_{1})\lambda((h_{1})_{2})h_{2}$
$\displaystyle=\lambda(h_{1})h_{2},$
where the last equality follows from (2.1). Thus, if $H$ is finite-
dimensional, it follows by Theorem 4.6 that $H_{\lambda}$ is a Hopf subalgebra
of $H$. ∎
The proposition above has a stronger hypothesis than Theorem 4.6. However,
checking $\lambda(h_{1})h_{2}=h_{1}\lambda(h_{2})$ is sometimes easier than
checking $\lambda(h_{1})h_{2}\lambda(h_{3})=\lambda(h_{1})h_{2}$, for $h\in
H$, because the former equality requires applying $\Delta$ only once, while
the latter needs $\Delta_{2}$.
Recall from Definition 4.3 that a Hopf algebra $H$ is a $\lambda$-Hopf algebra
when there exists a Hopf algebra $L$ and a partial action
$\lambda:L\longrightarrow\Bbbk$ such that $L_{\lambda}\simeq H$. Thus, we ask
when a Hopf algebra $H$ is a $\lambda$-Hopf algebra. We shall see that every
Hopf algebra is a $\lambda$-Hopf algebra. For this purpose, we recall how
partial actions of $H\otimes L$ on $A\otimes B$ are canonically obtained from
partial actions of $H$ and $L$ on $A$ and $B$, respectively.
###### Remark 4.12.
Let $H$ and $L$ be two Hopf algebras. Consider $\cdot_{H}$ and $\cdot_{L}$
partial actions of $H$ and $L$ on algebras $A$ and $B$, respectively. Then,
the linear map
$\rightharpoonup=(\cdot_{H}\otimes\cdot_{L})\circ(Id_{H}\otimes\tau_{L,A}\otimes
Id_{B})$ is a partial action of $H\otimes L$ on $A\otimes B$, where
$\tau_{L,A}:L\otimes A\longrightarrow A\otimes L$ is the canonical twist
isomorphism between $L$ and $A$. Moreover, if $\cdot_{H}$ and $\cdot_{L}$ are
symmetric, then so is $\rightharpoonup$. In particular, since
$\Bbbk\simeq\Bbbk\otimes\Bbbk$, we obtain partial actions of $H\otimes L$ on
$\Bbbk$ from partial actions of $H$ and $L$ on $\Bbbk$.
We emphasize that the partial actions of $H\otimes L$ on $A\otimes B$,
constructed as above, in general do not cover all partial actions of $H\otimes
L$ on $A\otimes B$, even when $A=B=\Bbbk$. As a very simple example, take
$A=B=\Bbbk$ and $H=L=\Bbbk C_{2}$. Then, $H\otimes L=\Bbbk C_{2}\otimes\Bbbk
C_{2}\simeq\Bbbk(C_{2}\times C_{2})$. Recall, by Example 2.2, that a partial
action of $\Bbbk(C_{2}\times C_{2})$ on $\Bbbk$ is parametrized by a subgroup
of $C_{2}\times C_{2}$. Since for $C_{2}=\\{1,g\\}$ we have only the trivial
subgroups, we only have two partial actions of $\Bbbk C_{2}$ on $\Bbbk$,
namely, $\lambda_{\\{1,g\\}}=\varepsilon_{\Bbbk C_{2}}$ and
$\lambda_{\\{1\\}}$. Hence, by Remark 4.12, we obtain only $4$ partial actions
of $\Bbbk C_{2}\otimes\Bbbk C_{2}\simeq\Bbbk(C_{2}\times C_{2})$ on $\Bbbk$:
$\lambda_{\\{1\\}}\otimes\lambda_{\\{1\\}}=\lambda_{\\{(1,1)\\}}$,
$\lambda_{\\{1\\}}\otimes\varepsilon_{\Bbbk
C_{2}}=\lambda_{\\{(1,1),(1,g)\\}}$, $\varepsilon_{\Bbbk
C_{2}}\otimes\lambda_{\\{1\\}}=\lambda_{\\{(1,1),(g,1)\\}}$ and
$\varepsilon_{\Bbbk C_{2}}\otimes\varepsilon_{\Bbbk
C_{2}}=\varepsilon_{\Bbbk(C_{2}\times C_{2})}.$ However, since $C_{2}\times
C_{2}$ has $5$ subgroups, there is a partial action not covered by this
construction, namely $\lambda_{\\{(1,1),(g,g)\\}}$.
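This count can be confirmed by brute force. The sketch below (illustrative
only; by Proposition 3.1 it suffices to search over $\\{0,1\\}$-valued maps
with $\lambda(1)=1$) enumerates the partial actions of $\Bbbk(C_{2}\times
C_{2})$ on $\Bbbk$ and finds exactly the five subgroup indicators, of which
only four factor through the construction of Remark 4.12:

```python
# Brute-force sketch: partial actions of k[C2 x C2] on k correspond to the
# subgroups of C2 x C2 (Example 2.2).  By Proposition 3.1 each value
# lambda(g) lies in {0, 1}, so the search space is finite.
from itertools import product

G = [(0, 0), (0, 1), (1, 0), (1, 1)]             # C2 x C2, group law = XOR
mul = lambda u, v: (u[0] ^ v[0], u[1] ^ v[1])

supports = []
for vals in product([0, 1], repeat=3):
    lam = {(0, 0): 1, **dict(zip(G[1:], vals))}  # lambda(1) = 1 always
    # condition (2.1) on group-likes: lambda(u)lambda(v) = lambda(u)lambda(uv)
    if all(lam[u] * lam[v] == lam[u] * lam[mul(u, v)] for u in G for v in G):
        supports.append([g for g in G if lam[g] == 1])

print(len(supports), 'partial actions:')         # prints 5, one per subgroup
for s in supports:
    print(sorted(s))
# The four "rectangular" supports N1 x N2 come from Remark 4.12; the diagonal
# support [(0, 0), (1, 1)] is the one the construction misses.
```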
This fact can also be observed in the examples given in Subsection 3.3. For
instance, we obtain 4 partial actions of $\mathbf{H_{12}}$ on $\Bbbk$ using
the corresponding partial actions of $\mathcal{A}_{2}$ and $\Bbbk C_{2}$ on
$\Bbbk$. Namely, $\varepsilon_{\mathcal{A}_{2}}\otimes\varepsilon_{\Bbbk
C_{2}}=\varepsilon_{\mathbf{H_{12}}}$,
$\varepsilon_{\mathcal{A}_{2}}\otimes\lambda_{\\{1\\}}^{\Bbbk
C_{2}}=\lambda_{\\{1,g\\}}$,
$\lambda_{\\{1\\}}^{\mathcal{A}_{2}}\otimes\varepsilon_{\Bbbk
C_{2}}=\lambda_{\\{1,h\\}}$ and
$\lambda_{\\{1\\}}^{\mathcal{A}_{2}}\otimes\lambda_{\\{1\\}}^{\Bbbk
C_{2}}=\lambda_{\\{1\\}}$. However, note that the partial action
$\lambda_{\\{1,gh\\}}$ is not obtained using Remark 4.12.
We use Remark 4.12 to conclude that every Hopf algebra is a $\lambda$-Hopf
algebra, where $\lambda$ is a genuine partial action, that is, distinct from
the counit, as follows.
###### Theorem 4.13.
Let $H$ be a Hopf algebra. Then, there exists a Hopf algebra $L$ and a partial
action $\lambda:L\longrightarrow\Bbbk$ such that $L_{\lambda}\simeq H$.
###### Proof.
Let $G$ be a group, $G\neq\\{1_{G}\\}$, where $1_{G}$ denotes the identity
element of $G$. Consider the partial action of $\Bbbk G$ on $\Bbbk$
parametrized by the subgroup $\\{1_{G}\\}$, that is,
$\lambda_{\\{1_{G}\\}}:\Bbbk G\longrightarrow\Bbbk$ is given by
$\lambda_{\\{1_{G}\\}}(g)=\delta_{1_{G},g}$, i.e.,
$\lambda_{\\{1_{G}\\}}(1_{G})=1_{\Bbbk}$ and $\lambda_{\\{1_{G}\\}}(g)=0$, for
all $g\in G$, $g\neq 1_{G}$. By Remark 4.12,
$\lambda=\varepsilon_{H}\otimes\lambda_{\\{1_{G}\\}}$ is a partial action of
$H\otimes\Bbbk G$ on $\Bbbk$. Moreover, an easy calculation shows that
$(H\otimes\Bbbk G)_{\lambda}=H\otimes 1_{\Bbbk G}\simeq H$. Indeed, for
$h\otimes g\in H\otimes\Bbbk G$, we have
$\displaystyle\lambda((h\otimes g)_{1})(h\otimes
g)_{2}=\varepsilon(h_{1})\lambda_{\\{1_{G}\\}}(g)(h_{2}\otimes
g)=h\otimes\lambda_{\\{1_{G}\\}}(g)g=h\otimes\delta_{1_{G},g}g.$
Then, for $L=H\otimes\Bbbk G$ and $\lambda$ as above, it follows that
$L_{\lambda}=\langle\lambda((h\otimes g)_{1})(h\otimes g)_{2}\ |\ h\in H,g\in
G\rangle=\langle h\otimes 1_{G}\ |\ h\in H\rangle\simeq H,$
as Hopf algebras. Hence, $H$ is a $\lambda$-Hopf algebra. ∎
The result above is constructive; however, this is not the only way to obtain
a Hopf algebra $L$ and a partial action of $L$ on $\Bbbk$ such that
$L_{\lambda}\simeq H$. We shall see in the next subsection that Taft’s algebra
is a $\lambda$-Hopf algebra in another way. Furthermore, since the group $G$
can be chosen arbitrarily in the proof of Theorem 4.13, $H$ can be embedded
into a Hopf algebra $L$ of essentially arbitrary dimension admitting a partial
action $\lambda:L\longrightarrow\Bbbk$ such that $L_{\lambda}\simeq H$. See
Example 4.4.
### 4.2. Taft’s algebra as a $\lambda$-Hopf algebra
Throughout this subsection, we assume that $\Bbbk$ is an algebraically closed
field of characteristic zero.
Taft’s algebra of order $n$, here denoted by $T_{n}(q)$, is an important
Hopf algebra presented by Taft in [20]. We briefly recall its definition. Let
$n\geq 2$ be a positive integer and $q$ a primitive root of unity of order
$n$. As an algebra, $T_{n}(q)=\langle
g,\;x\;|\;g^{n}=1,\;x^{n}=0,\;xg=qgx\rangle.$ Thus,
$\mathcal{B}=\\{g^{i}x^{j}\,|\,0\leq i,j\leq n-1\\}$ is the canonical basis
for $T_{n}(q)$, and consequently $dim_{\Bbbk}(T_{n}(q))=n^{2}$. For the
coalgebra structure, $g$ is a group-like element and $x$ is a
$(1,g)$-primitive element. In general, we have
$\Delta(g^{i}x^{j})=\sum_{\ell=0}^{j}{j\choose\ell}_{q}g^{i+\ell}x^{j-\ell}\otimes
g^{i}x^{\ell}$. So, completing the Hopf structure of $T_{n}(q)$, set
$S(g)=g^{n-1}$ and $S(x)=-g^{n-1}x.$ Also, note that $G(T_{n}(q))=\langle
g\rangle=\\{1,g,\cdots,g^{n-1}\\}=C_{n}.$
In the sequel, we present $T_{n}(q)$ as a $\lambda$-Hopf algebra, where the
Hopf algebra $L$ such that $L_{\lambda}\simeq T_{n}(q)$ is not simply
$T_{n}(q)$ tensorized by a group algebra. See Theorem 4.13. Aiming this, we
recall a family of Hopf algebras, here denoted by $T_{n}^{k}(\omega)$, and
calculate a suitable partial action
$\lambda_{k}:T_{n}^{k}(\omega)\longrightarrow\Bbbk$ such that
$(T_{n}^{k}(\omega))_{\lambda_{k}}\simeq T_{n}(\omega^{k})$.
The Hopf algebra $T_{n}^{k}(\omega)$ is a generalization of Taft’s algebra
$T_{n}(q)$ (see [5] and [6, Appendix]). Let $k$ and $n$ be positive integers,
$n\geq 2$, and $\omega$ a primitive root of unity of order $kn$. As an algebra,
$T_{n}^{k}(\omega)$ is generated by $g$ and $x$ subject to the relations
$g^{kn}=1,$ $x^{n}=0$ and $xg=\omega gx$. Thus,
$\mathcal{B}=\\{g^{i}x^{j}\,|\,0\leq j\leq n-1,\,0\leq i\leq kn-1\\}$ is the
canonical basis of $T_{n}^{k}(\omega)$, and consequently
$dim_{\Bbbk}(T_{n}^{k}(\omega))=kn^{2}$. For the coalgebra structure, $g$ is a
group-like element and $x$ is a $(1,g^{k})$-primitive element. In general,
$\Delta(g^{i}x^{j})=\sum_{\ell=0}^{j}{j\choose\ell}_{\omega^{k}}g^{i+\ell
k}x^{j-\ell}\otimes g^{i}x^{\ell}$. To complete the Hopf structure of
$T_{n}^{k}(\omega)$, set $S(g)=g^{kn-1}$ and $S(x)=-g^{kn-k}x.$ Also, notice
that $G(T_{n}^{k}(\omega))=\langle g\rangle=\\{1,g,\cdots,g^{kn-1}\\}=C_{kn}.$
###### Remark 4.14.
Notice that $T_{n}^{k}(\omega)$ generalizes Taft’s algebra: when
$k=1$, $T_{n}^{1}(\omega)$ is exactly Taft’s algebra of order $n$.
Moreover, for any positive integer $k$, Taft’s algebra $T_{n}(q)$ embeds
as a Hopf subalgebra of $T_{n}^{k}(\omega)$, where $q=\omega^{k}$.
Indeed, consider $T_{n}(q)=\langle
h,\;y\;|\;h^{n}=1,\;y^{n}=0,\;yh=qhy\rangle,$ where $q=\omega^{k}$. Then,
$\varphi:T_{n}(q)\longrightarrow T_{n}^{k}(\omega)$ given by $\varphi(y)=x$
and $\varphi(h)=g^{k}$ is an injective homomorphism of Hopf algebras.
Now, we want to calculate a suitable partial action of $T_{n}^{k}(\omega)$ on
$\Bbbk$. Consider the subgroup $N=\langle g^{k}\rangle$ of
$G(T_{n}^{k}(\omega))$ and the linear map
$\lambda_{k}:T_{n}^{k}(\omega)\longrightarrow\Bbbk$ given by
$\lambda_{k}(g^{i}x^{j})=\delta_{j,0}\delta_{N}(g^{i})$ for all
$i,j\in\mathbb{Z}$, $j\geq 0$, where $\delta_{j,0}$ stands for the
Kronecker delta and $\delta_{N}$ is defined as in Example 2.2.
We use Proposition 2.1 to prove that $\lambda_{k}$ as defined above is a
partial action of $T_{n}^{k}(\omega)$ on $\Bbbk$. Denote $\lambda_{k}$ simply
by $\lambda$. Clearly $\lambda(1)=1_{\Bbbk}$. Now, it remains only to verify
condition (2.1), that is, if
$\lambda(u)\lambda(v)=\lambda(u_{1})\lambda(u_{2}v)$
holds for all $u,v\in T_{n}^{k}(\omega)$.
First, if $u=g^{i}$, $i\in\mathbb{Z}$, then (2.1) is satisfied for all $v\in
T_{n}^{k}(\omega)$. Indeed, if $v=g^{r}x^{s}$, with $r,s\in\mathbb{Z}$, $s\geq
0$, then
$\lambda(u)\lambda(v)=\lambda(g^{i})\lambda(g^{r}x^{s})=\delta_{0,0}\delta_{N}(g^{i})\delta_{s,0}\delta_{N}(g^{r})=\delta_{N}(g^{i})\delta_{s,0}\delta_{N}(g^{r}).$
On the other hand,
$\displaystyle\lambda(u_{1})\lambda(u_{2}v)$
$\displaystyle=\lambda(g^{i})\lambda(g^{i}g^{r}x^{s})=\lambda(g^{i})\lambda(g^{i+r}x^{s})$
$\displaystyle=\delta_{0,0}\delta_{N}(g^{i})\delta_{s,0}\delta_{N}(g^{i+r})=\delta_{N}(g^{i})\delta_{s,0}\delta_{N}(g^{i+r}).$
Thus, if $g^{i}\notin N$, then $\lambda(g^{i})\lambda(g^{i+r}x^{s})=0$ and
$\delta_{N}(g^{i})\delta_{s,0}\delta_{N}(g^{i+r})=0$, since
$\delta_{N}(g^{i})=0$. Otherwise, if $g^{i}\in N$, then
$\delta_{N}(g^{r})=\delta_{N}(g^{i+r})$. Hence, the equality desired holds.
Now, assume $u=g^{i}x^{j}$ with $i,j\in\mathbb{Z}$, $j\geq 1$. Then, for all
$v\in T_{n}^{k}(\omega)$, (2.1) means
$\lambda(g^{i}x^{j})\lambda(v)=\sum_{\ell=0}^{j}{j\choose\ell}_{\omega^{k}}\lambda(g^{i+\ell
k}x^{j-\ell})\lambda(g^{i}x^{\ell}v).$
Since $j\geq 1$, we get $\lambda(g^{i}x^{j})=\delta_{j,0}\delta_{N}(g^{i})=0$
and $\lambda(g^{i+\ell k}x^{j-\ell})=\delta_{j-\ell,0}\delta_{N}(g^{i+\ell
k})=0,$ for all $0\leq\ell\leq(j-1)$. Then, the previous sum reduces to the
single term $\ell=j$, which gives
$0={j\choose
j}_{\omega^{k}}\lambda(g^{i+jk}x^{j-j})\lambda(g^{i}x^{j}v)=\lambda(g^{i+jk})\lambda(g^{i}x^{j}v)$
for all $v\in T_{n}^{k}(\omega)$.
For $v=g^{s}x^{t}$ with $s,t\in\mathbb{Z}$, $t\geq 0$, it follows that
$j+t\geq 1$ and consequently
$\lambda(g^{i+s}x^{j+t})=\delta_{j+t,0}\delta_{N}(g^{i+s})=0$. Hence,
$\displaystyle\lambda(g^{i+jk})\lambda(g^{i}x^{j}v)$
$\displaystyle=\lambda(g^{i+jk})\lambda(g^{i}x^{j}g^{s}x^{t})=\omega^{js}\lambda(g^{i+jk})\lambda(g^{i+s}x^{j+t})=0.$
Thus, (2.1) holds for all $u,v\in T_{n}^{k}(\omega)$, and so $\lambda$ is a
partial action of $T_{n}^{k}(\omega)$ on $\Bbbk$.
Finally, it remains to check that
$(T_{n}^{k}(\omega))_{\lambda}\simeq T_{n}(\omega^{k})$. Recall that
$(T_{n}^{k}(\omega))_{\lambda}=\\{\lambda(u_{1})u_{2}\ |\ u\in
T_{n}^{k}(\omega)\\}$. Thus, for $u=g^{i}x^{j}$, $i,j\in\mathbb{Z}$, $j\geq
0$, we have
$\lambda((g^{i}x^{j})_{1})(g^{i}x^{j})_{2}=\sum_{\ell=0}^{j}{j\choose\ell}_{\omega^{k}}\lambda(g^{i+\ell
k}x^{j-\ell})g^{i}x^{\ell}.$
Since $\lambda(g^{i+\ell k}x^{j-\ell})=\delta_{j-\ell,0}\delta_{N}(g^{i+\ell
k})=0$, for any $0\leq\ell\leq(j-1)$, we conclude that
$\lambda((g^{i}x^{j})_{1})(g^{i}x^{j})_{2}={j\choose
j}_{\omega^{k}}\lambda(g^{i+jk}x^{0})g^{i}x^{j}=\lambda(g^{i+jk})g^{i}x^{j}=\delta_{N}(g^{i+jk})g^{i}x^{j},$
for all $i,j\in\mathbb{Z}$, $j\geq 0$.
As $g^{jk}\in N$, it follows that $\delta_{N}(g^{i+jk})=\delta_{N}(g^{i})$ and
consequently
$\lambda((g^{i}x^{j})_{1})(g^{i}x^{j})_{2}=\delta_{N}(g^{i})g^{i}x^{j}$.
Therefore,
$\displaystyle(T_{n}^{k}(\omega))_{\lambda}$
$\displaystyle=\\{\lambda(u_{1})u_{2}\ |\ u\in T_{n}^{k}(\omega)\\}$
$\displaystyle=\langle\lambda(u_{1})u_{2}\ |\ u\in\mathcal{B}\rangle$
$\displaystyle=\langle\lambda((g^{i}x^{j})_{1})(g^{i}x^{j})_{2}\ |\ 0\leq
i\leq(kn-1),0\leq j\leq(n-1)\rangle$
$\displaystyle=\langle\delta_{N}(g^{i})g^{i}x^{j}\ |\ 0\leq i\leq(kn-1),0\leq
j\leq(n-1)\rangle$ $\displaystyle=\langle\delta_{N}(g^{rk+s})g^{rk+s}x^{j}\ |\
0\leq s\leq(k-1),0\leq r,j\leq(n-1)\rangle$
$\displaystyle=\langle\delta_{N}(g^{rk})g^{rk}x^{j}\ |\ 0\leq
r,j\leq(n-1)\rangle$ $\displaystyle=\langle g^{rk}x^{j}\ |\ 0\leq
r,j\leq(n-1)\rangle$ $\displaystyle=\varphi(T_{n}(\omega^{k}))$
$\displaystyle\simeq T_{n}(\omega^{k}),$
where $\varphi$ is the injective homomorphism of Hopf algebras given in Remark
4.14.
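These computations are mechanical and can be double-checked numerically. The
sketch below (a minimal floating-point check for small $n$ and $k$; the
encoding of basis elements $g^{i}x^{j}$ as pairs $(i,j)$ is our own
bookkeeping, not notation from the text) verifies condition (2.1) on the whole
basis of $T_{n}^{k}(\omega)$ and confirms that the elements
$\lambda(u_{1})u_{2}$ span the expected copy of $T_{n}(\omega^{k})$:

```python
# Floating-point sketch, for small n and k, verifying condition (2.1) for
# lambda(g^i x^j) = delta_{j,0} delta_N(g^i), N = <g^k>, on T_n^k(omega),
# and confirming that lambda(u_1)u_2 spans a copy of T_n(omega^k).
import cmath
from itertools import product

n, k = 3, 2                                    # example sizes (assumption)
omega = cmath.exp(2j * cmath.pi / (k * n))     # primitive root of order kn
q = omega ** k                                 # primitive n-th root of unity

def qbinom(j, l):
    """Gaussian binomial (j choose l)_q via the product formula."""
    r = complex(1)
    for t in range(1, l + 1):
        r *= (1 - q ** (j - l + t)) / (1 - q ** t)
    return r

def mul(u, v):
    """(g^i x^j)(g^r x^s) = omega^{jr} g^{i+r} x^{j+s}; zero if j+s >= n."""
    (i, j), (r, s) = u, v
    return None if j + s >= n else (omega ** (j * r), ((i + r) % (k * n), j + s))

def lam(u):
    i, j = u
    return 1.0 if j == 0 and i % k == 0 else 0.0

basis = list(product(range(k * n), range(n)))  # g^i x^j encoded as (i, j)
for u, v in product(basis, basis):             # check (2.1) for all u, v
    i, j = u
    rhs = complex(0)
    for l in range(j + 1):                     # Delta(u): term ell = l
        u1 = ((i + l * k) % (k * n), j - l)    # g^{i+lk} x^{j-l}
        w = mul((i, l), v)                     # (g^i x^l) v
        if w is not None:
            rhs += qbinom(j, l) * lam(u1) * w[0] * lam(w[1])
    assert abs(lam(u) * lam(v) - rhs) < 1e-9

# lambda(u_1)u_2 = delta_N(g^{i+jk}) g^i x^j, so the image is spanned by the
# basis elements with i a multiple of k: a copy of T_n(omega^k) of dim n^2.
image = [(i, j) for (i, j) in basis if lam(((i + j * k) % (k * n), 0)) == 1.0]
assert image == [(i, j) for (i, j) in basis if i % k == 0]
print('dim of image:', len(image), '= n^2 =', n * n)
```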
We observe that the choice of the parameter $k$ was arbitrary. Thus, the
dimension of $T_{n}^{k}(\omega)$ can be chosen as large as desired. This
situation parallels that of group algebras (Example 4.1) and of Theorem 4.13.
###### Example 4.15.
Consider the Sweedler’s Hopf algebra $\mathbb{H}_{4}=T_{2}(-1)$. Note that
$T_{2}^{2}(q)=\mathcal{A}^{{}^{\prime\prime\prime}}_{4,q}$ and
$T_{2}^{4}(\tilde{q})=\mathbf{H_{19}}$, where $q$ and $\tilde{q}$ are
primitive $4^{th},8^{th}$ roots of unity, respectively (see Subsection 3.3).
Then, the partial actions $\lambda_{2}=\lambda_{\\{1,g^{2}\\}}$ of
$\mathcal{A}^{{}^{\prime\prime\prime}}_{4,q}$ on $\Bbbk$ and
$\lambda_{4}=\lambda_{\\{1,g^{4}\\}}$ of $\mathbf{H_{19}}$ on $\Bbbk$, are
such that
$(\mathcal{A}^{{}^{\prime\prime\prime}}_{4,q})_{\lambda_{2}}\simeq\mathbb{H}_{4}\simeq(\mathbf{H_{19}})_{\lambda_{4}}$.
### 4.3. Examples of $\lambda$-Hopf algebras
In this subsection we show, schematically, the examples of $\lambda$-Hopf
algebras obtained with the partial actions of each pointed non-semisimple Hopf
algebra $H$ with $dim_{\Bbbk}(H)=8,16$ (see Subsection 3.3).
###### Theorem 4.16.
Let $H$ be one of the following pointed non-semisimple Hopf algebras:
* •
Dimension 8: $\mathcal{A}_{2},\mathcal{A}^{{}^{\prime}}_{4}$ or
$\mathcal{A}^{{}^{\prime\prime}}_{4}$;
* •
Dimension 16: $\mathbf{H_{1}}$, $\mathbf{H_{4}},$ $\mathbf{H_{5}},$
$\mathbf{H_{6}},$ $\mathbf{H_{7}},$ $\mathbf{H_{8}},$ $\mathbf{H_{9}},$
$\mathbf{H_{10}},$ $\mathbf{H_{11}},$ $\mathbf{H_{14}},$ $\mathbf{H_{15}},$
$\mathbf{H_{16}},$ $\mathbf{H_{18}}$ or $\mathbf{H_{22}}$.
Then, $H$ has no partial action $\lambda$, $\lambda\neq\varepsilon$, such
that $H_{\lambda}$ is a Hopf subalgebra of $H$.
###### Proof.
Note that for these Hopf algebras, there exists a $(1,g)$-primitive element
$x$. Almost all the partial actions $\lambda,\lambda\neq\varepsilon,$ of these
Hopf algebras are such that $\lambda(g)=0$. The only exceptions are
$\lambda_{\\{1,g\\}}$ of $\mathbf{H_{i}}$, $i\in\\{14,15\\}$. For these two
specific cases, there exists $y\in\mathbf{H_{i}}$ such that $y$ is a
$(1,h)$-primitive element and $\lambda_{\\{1,g\\}}(h)=0$. Thus, by Corollary
4.9, it follows that $H_{\lambda}$ is not a Hopf subalgebra of $H$. ∎
In the following diagram, an arrow from $H$ to $B$,
$H\stackrel{{\scriptstyle\lambda}}{{\longrightarrow}}B$, means that $\lambda$
is a partial action of $H$ on $\Bbbk$ such that $B\simeq H_{\lambda}$, as Hopf
algebras. At this point, Corollary 4.9 is useful to discard many partial
actions $\lambda$ of $H$ on $\Bbbk$, since the existence of a
$(g,h)$-primitive element such that $\lambda(g)\neq\lambda(h)$ implies that
$H_{\lambda}$ is not a Hopf subalgebra of $H$. To verify whether or not
$H_{\lambda}$ is a Hopf subalgebra of $H$, one uses Theorem 4.6 or, what is
usually easier, Proposition 4.11.
[Diagram: $\lambda$-Hopf algebra deformations of the pointed non-semisimple
Hopf algebras of dimension 16, organized by dimension. In the first diagram,
$\mathbf{H_{2}}$ and $\mathbf{H_{3}}$ (via $\lambda_{\\{1,g^{2}\\}}$),
$\mathbf{H_{13}}$ (via $\lambda_{\\{1,g\\}}$) and
$\mathbf{H_{12}}=\mathcal{A}_{2}\otimes\Bbbk C_{2}$ (via
$\lambda_{\\{1,g\\}}=\varepsilon_{\mathcal{A}_{2}}\otimes\lambda_{\\{1\\}}^{\Bbbk
C_{2}}$) map onto $\mathcal{A}_{2}$, while $\mathbf{H_{28}}$ (via
$\lambda_{\\{1,g,g^{2},g^{3}\\}}$) and $\mathbf{H_{29}}$ (via
$\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$) map onto $\mathcal{A}_{4}^{\prime\prime}$,
both targets of dimension 8. In the second diagram, $\mathbf{H_{20}}$ and
$\mathbf{H_{21}}$ (via $\lambda_{\\{1,g^{2},g^{4},g^{6}\\}}$) map onto
$\mathcal{A}_{4}^{\prime}$, and the algebras $\mathbf{H_{23}}$
($\lambda_{\\{1,g,g^{2},g^{3}\\}}$), $\mathbf{H_{24}}$
($\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$), $\mathbf{H_{27}}$
($\lambda_{\\{1,g^{2},h,g^{2}h\\}}$, $\lambda_{\\{1,g^{2}h\\}}$),
$\mathbf{H_{17}}$ ($\lambda_{\\{1,g,a,ga\\}}$, $\lambda_{\\{1,g,ab,gab\\}}$,
$\lambda_{\\{1,g,b,gb\\}}$, $\lambda_{\\{1,g\\}}$), $\mathbf{H_{26}}$
($\lambda_{\\{1,g^{2},h,g^{2}h\\}}$, $\lambda_{\\{1,h\\}}$), $\mathbf{H_{25}}$
($\lambda_{\\{1,g^{2},h,g^{2}h\\}}$, $\lambda_{\\{1,g,g^{2},g^{3}\\}}$,
$\lambda_{\\{1,g^{2}\\}}$, $\lambda_{\\{1,g^{2},gh,g^{3}h\\}}$) and
$\mathbf{H_{19}}$ ($\lambda_{\\{1,g^{2},g^{4},g^{6}\\}}$,
$\lambda_{\\{1,g^{4}\\}}$) map onto Hopf algebras of dimensions 8 and 4, among
them $\mathcal{A}_{2,2}$ ($\lambda_{\\{1,g\\}}$) and
$\mathcal{A}_{4,q}^{\prime\prime\prime}$ ($\lambda_{\\{1,g^{2}\\}}$), which in
turn map onto the Sweedler algebra $\mathbb{H}_{4}$; in particular,
$(\mathcal{A}_{4,q}^{\prime\prime\prime})_{\lambda_{\\{1,g^{2}\\}}}\simeq\mathbb{H}_{4}\simeq(\mathbf{H_{19}})_{\lambda_{\\{1,g^{4}\\}}}$
(Example 4.15).]
## References
* [1] A. Abella, D. Freitas and A. Morgado, _Almost involutive Hopf-Ore extensions of low dimension_ , São Paulo Journal of Mathematical Sciences 11 (1) (2017), 133-147.
* [2] E. Alvares, M. Alves and E. Batista, _Partial Hopf module categories_ , Journal of Pure and Applied Algebra 217 (8) (2013), 1517-1534.
* [3] M. Alves, E. Batista, M. Dokuchaev and A. Paques, _Globalization of Twisted Partial Hopf Actions_ , Journal of the Australian Mathematical Society 101 (1) (2016), 1-28.
* [4] M. Alves and E. Batista, _Enveloping Actions for Partial Hopf Actions_ , Communications in Algebra 38 (8) (2010), 2872-2902.
* [5] N. Andruskiewitsch and H.-J. Schneider, _Lifting of Quantum Linear Spaces and Pointed Hopf Algebras of Order $p^{3}$_, Journal of Algebra 209 (2) (1998), 658-691.
* [6] N. Andruskiewitsch and S. Natale, _Counting Arguments for Hopf Algebras of low dimension_ , Tsukuba Journal of Mathematics 25 (1) (2001), 187-201.
* [7] D. Azevedo, G. Martini, A. Paques and L. Silva, _Hopf algebras arising from partial (co)actions_ , Journal of Algebra and Its Applications doi:10.1142/S0219498821400065 (2020).
* [8] D. Bagio and A. Paques, _Partial Groupoid Actions: Globalization, Morita Theory, and Galois Theory_ , Communications in Algebra 40 (10) (2012), 3658-3678.
* [9] E. Batista, S. Caenepeel and J. Vercruysse, _Hopf Categories_ , Algebras and Representation Theory 19 (19) (2016), 1173-1216.
* [10] M. Beattie and G. Garcia, _Classifying Hopf algebras of a given dimension_ , in Hopf Algebras and Tensor Categories, Contemporary Mathematics 585, American Mathematical Society (2013), 125-152.
* [11] S. Caenepeel, S. Dascalescu and Ş. Raianu, _Classifying pointed Hopf algebras of dimension 16_ , Communications in Algebra 28 (2) (2000), 541-568.
* [12] S. Caenepeel, K. Janssen, _Partial (co)actions of Hopf algebras and partial Hopf-Galois theory_ , Communications in Algebra 36 (2008), 2923-2946.
* [13] F. Castro, A. Paques, G. Quadros, A. Sant’Ana, _Partial actions of weak Hopf algebras: smash products, globalization and Morita theory_ , Journal of Pure and Applied Algebra 29 (2015), 5511-5538.
* [14] M. Dokuchaev, _Recent developments around partial actions_ , São Paulo Journal of Mathematical Sciences 13 (2019), 195-247.
* [15] M. Dokuchaev, R. Exel, _Associativity of crossed products by partial actions, enveloping actions and partial representations_ , Transactions of the American Mathematical Society 357 (2005), 1931-1952.
* [16] M. Dokuchaev, M. Ferrero, A. Paques, _Partial actions and Galois theory_ , Journal of Pure and Applied Algebra 208 (2007), 77-87.
* [17] R. Exel, _Circle actions on $C^{*}$-algebras, partial automorphisms and generalized Pimsner-Voiculescu exact sequences_, Journal of Functional Analysis 122 (3) (1994), 361-401.
* [18] D. Radford, _Hopf Algebras_ , Series on Knots and Everything, World Scientific (2011).
* [19] D. Ştefan, _Hopf algebras of low dimension_ , Journal of Algebra 211 (1) (1999), 343-361.
* [20] E. Taft, _The Order of the Antipode of Finite-dimensional Hopf Algebra_ , Proceedings of the National Academy of Sciences 68 (11) (1971), 2631-2633.
* [21] R. Williams, _Finite Dimensional Hopf Algebras_ , Ph.D. Thesis, Florida State University (1988).
# A Study of Dielectric Breakdown Along Insulators Surrounding Conductors in
Liquid Argon
S. Lockwitz (corresponding author, <EMAIL_ADDRESS>) and H. Jostlein
Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510, USA
###### Abstract
High voltage breakdown in liquid argon is an important concern in the design
of liquid argon time projection chambers, which are often used as neutrino and
dark matter detectors. We have made systematic measurements of breakdown
voltages in liquid argon along insulators surrounding negative rod electrodes
where the breakdown is initiated at the anode. The measurements were performed
in an open cryostat filled with commercial grade liquid argon exposed to air,
and not the ultra-pure argon required for electron drift. While not addressing
all high voltage concerns in liquid argon, these measurements have direct
relevance to the design of high voltage feedthroughs especially for averting
the common problem of flash-over breakdown. The purpose of these tests is to
understand the effects of materials, of breakdown path length, and of surface
topology for this geometry and setup. We have found that the only
material-specific effects are those due to permittivity. We have found that the
breakdown voltage has no dependence on the length of the exposed insulator. A
model for the breakdown mechanism is presented that can help inform future
designs.
###### keywords:
Liquid Argon; Time Projection Chambers; Dielectric Strength; Electric
Breakdown; High Voltage
## 1 Motivation
Liquid Argon Time-Projection Chambers (LArTPCs) are a popular detector choice
for neutrino and dark matter experiments. This technology relies upon an
electric field typically on the order of 50 kV/m to drift signal electrons
from ionization due to charged particle interactions in the argon. Drift
distances are typically on the scale of less than a few meters. It has become
important to understand parameters related to high voltage use in liquid argon
to design future detectors and understand any technical limitations.
Dielectric breakdown in liquids, particularly transformer oils, has been
actively studied for many decades, mostly to meet the engineering needs of the
power distribution industry. Very little generic research has been done on
dielectric breakdown in liquid argon. Several recent breakdown studies [1, 2,
3, 4] have yielded useful results related to breakdown through the bulk
liquid. This report presents a comparative study of dielectric breakdown in
liquid argon along insulator surfaces surrounding high voltage conductors. The
goal is to understand these breakdowns and eventually find methods to predict
and avoid them.
## 2 Experiment Setup
The experiment setup is shown schematically in Figure 1. We started with
commercially pure liquid argon with contaminants at the parts-per-million
(ppm) level. The tests were performed in an open-to-air cryostat over the
course of weeks. This condition implies that water vapor, oxygen, and nitrogen
diffused into the liquid argon over the course of the measurements. We also
noted an accumulation of dust in the liquid argon, with dust especially
attracted to areas of high electric field. The effects of these contaminants
are unknown. We have not found published data on the effect of contaminants in
the relevant range.
Figure 1: Drawings of the test apparatus. The left graphic, (a), shows the
complete test setup including the current-limiting resistor. A detailed
drawing including the test piece and holder is highlighted in (b). The
insulator test piece could be removed and replaced without removing the anode
ring, holder and high voltage feedthrough from the liquid argon. The high
voltage was applied by a high voltage feedthrough placed on top of the test
piece. The ring could be raised and lowered from outside of the cryostat.
The high voltage was supplied by a Glassman LX150N12 power supply [5] able to
provide up to $-150$ kV. The power supply was connected through an oil-
insulated series resistor (75 M$\Omega$) to a high voltage feedthrough
partially submersed in liquid argon. The resistor served to partition the
energy stored in the setup and limit the energy released into the cryostat
during an electrical discharge. The high voltage feedthrough was made of a
stainless steel central conductor surrounded by an ultra high molecular weight
polyethylene (UHMW PE) tube inside an outer stainless steel grounded tube. The
lower edge of the ground tube was encased in a UHMW PE cylinder to prevent
streamer propagation along the feedthrough itself. The lower end of the UHMW
PE protruded beyond the ground tube and its surface was grooved. The inner
conductor extended further out and was machined with a concave end to accept a
sphere.
Each insulator studied started in the form of a 5.1 cm diameter smooth
cylinder, 30.2 cm long with a 2.54 cm diameter center-bored hole along the
length of the material to a depth of 16.5 cm. A 2.54 cm diameter stainless
steel rod with a 3.8 cm diameter sphere welded on its end was inserted into
each test piece as shown in Figure 1. The high voltage feedthrough was placed
on top of the welded sphere during testing. Three samples were additionally
tested with grooved profiles cut into the insulator surfaces.
Each piece was lowered into a grounded holder submerged in liquid argon to
secure the piece during testing. The holder had a grounded stainless steel
ring attached to it with a 5.2 cm inner diameter that fit loosely around the
body of the test piece. The ring thickness flared out from the test piece at a
15 degree angle from an intentionally sharp top edge near the insulator
surface to a 0.5 cm thick ring where it attached to the holder. This top edge,
the knife edge, was designed to create a high electric field to induce
breakdown along the insulator surface. The ring could be raised and lowered
along the test piece by turning a threaded rod that extended outside of the
cryostat.
### 2.1 Experiment Procedure
After insertion into the argon, each test piece was allowed to cool until
there was no visible boiling before testing began. This was done in an effort
to reduce any thermal effects on spark formation.
A LabVIEW [6] program was used to raise the high voltage in 500 V steps every
1.6 s. When the current draw registered by the supply was above 45 $\mu$A,
the supply would trip, setting the voltage to zero. After a trip was
identified, the program would wait one minute and then begin raising the
voltage again. Breakdown values were determined from a log file monitoring the
voltage at sub-second intervals.
While the definition of a tripping spark is specific to our setup, it is the
same for all test pieces. There were occasions when an audible spark did not
trip the supply. These events did not enter our dataset, and the voltage
continued to increase until a tripping spark occurred.
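For concreteness, the ramp-and-trip logic just described can be summarized in
pseudocode. The sketch below is an illustrative Python rendering only; the
actual control program was written in LabVIEW, and the `supply` and `log`
objects are hypothetical stand-ins rather than a real instrument API:

```python
# Illustrative Python rendering of the ramp-and-trip procedure; the real
# control program was LabVIEW, and `supply`/`log` are hypothetical stand-ins.
import time

STEP_V, STEP_DT = 500.0, 1.6        # 500 V steps every 1.6 s
TRIP_UA, WAIT_S = 45.0, 60.0        # 45 uA trip threshold, one-minute recovery

def ramp_until_trip(supply, log, v_max=150e3):
    """Raise the (negative) voltage until the supply trips; return |V| at trip."""
    v = 0.0
    while v < v_max:
        v += STEP_V
        supply.set_voltage(-v)               # negative polarity, as in the tests
        time.sleep(STEP_DT)
        if supply.read_current() > TRIP_UA:  # trip: record, zero, wait, resume
            log.append((time.time(), v))
            supply.set_voltage(0.0)
            time.sleep(WAIT_S)
            return v
    return None                              # reached v_max without a trip
```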
We evaluated the 11 materials listed in Table 1. Tested ring elevations varied
for each sample, but at least three exposed insulator lengths were evaluated
between 3.5-12 cm for each undamaged smooth sample. For a given test piece and
length of exposed insulator above the grounded ring, the number of trips
tested ranged from 42 to 297, depending on the available time before refilling the
argon. The voltage values of these trips were plotted sequentially to check
for stability, and then histogrammed and fit with a Weibull function [7] as
shown for example in Figure 2. The Weibull function is described elsewhere [8]
as being an appropriate function to describe the variability of breakdown
voltages. The mean and the square root of the variance were taken from the fit
to compare breakdown voltages versus lengths between materials.
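As an illustration of this analysis step, the following sketch fits a Weibull
distribution to a set of trip voltages with SciPy and extracts the mean and
the square root of the variance; the voltages below are synthetic stand-ins,
and the fit convention (a two-parameter fit with the location fixed at zero)
is an assumption rather than the paper's exact procedure:

```python
# Weibull-fit sketch using SciPy; `trips_kv` is synthetic stand-in data.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
trips_kv = weibull_min.rvs(c=8.0, scale=60.0, size=150, random_state=rng)

c, loc, scale = weibull_min.fit(trips_kv, floc=0.0)   # location fixed at zero
mean, var = weibull_min.stats(c, loc=loc, scale=scale, moments='mv')
print(f'mean breakdown = {float(mean):.1f} kV, '
      f'spread = {float(np.sqrt(var)):.1f} kV')
```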
Material | Thermal Contraction (%) | Chemical Name | Chemical Abbreviation | Dielectric Constant | Dielectric Strength (kV/mm)
---|---|---|---|---|---
Delrin | 1.2 $\pm$ 0.1 | Polyoxymethylene-homopolymer | Acetal-Homopolymer (POM) | 2.7 | 20
Polypropylene | 1.1 $\pm$ 0.09 | Polypropylene | PP | 2.2-2.6 | 30-40
Noryl | 0.5 $\pm$ 0.08 | Polyphenylene oxide + Polystyrene | PPO + PS | 2.7 | 16-20
Polycarbonate | 0.6 $\pm$ 0.09 | Polycarbonate | PC | 2.9 | 15-67
Polystyrene | 0.5 $\pm$ 0.06 | Polystyrene | PS | 2.4-3.1 | 20
UHMW PE | 1.8 $\pm$ 0.3 | Polyethylene-UHMW | UHMW PE | 2.3 | 28
FR4 | 0.2 $\pm$ 0.07 | Fiberglass-epoxy | G10-FR4 | 4.8 | 20
PTFE | 0.9 $\pm$ 0.09 | Polytetrafluoroethylene | PTFE | 2.0-2.1 | 50-170
Ultem | 0.4 $\pm$ 0.07 | Polyetherimide | PEI | 3.1 | 30
PBT | 0.9 $\pm$ 0.08 | Polybutylene Terephthalate | PBT | 3.2 | 20
Polyester | 0.9 $\pm$ 0.1 | Polyethylene Terephthalate | PET, PETB | 3 | 17
Table 1: The thermal contraction, dielectric constants, and dielectric
strengths of the materials under test. The thermal contraction was measured
between room temperature and 78 K. The Delrin dielectric constant was
referenced from [9], the FR4 values from [10]; all others are from [11].
Figure 2: The sequential breakdown voltages for the Ultem test piece with
11.43 cm exposed are shown in (a), and a Weibull fit to the resulting
distribution is shown in (b).
## 3 Results
### 3.1 Cold Shrinkage Measurements
Before electrically testing the insulators, their thermal contraction was
measured. The diameter of 5.1 cm rod stock of each material was measured at
room temperature and at liquid nitrogen temperature. The technique was to
submerge each piece in liquid nitrogen until there was no visible boiling.
Each test piece was then removed and measured at three common reference
lengths, evaluating the minimum and maximum diameters. The values are reported
in Table 1. These values are useful for understanding the capabilities and
limits of components of LArTPCs, and planning for dimensional change at
cryogenic temperatures.
### 3.2 Electrical Insulator Test Results
The insulator test pieces can be classified into three groups: those that
catastrophically fractured upon submersion in liquid argon, those that broke
after a number of successful dielectric breakdown tests, and those that
underwent the full suite of testing. Here, a successful breakdown test is
defined as a dielectric breakdown near the surface of the insulator that trips
the power supply. For some materials, additional testing of surface profiles
was performed. This testing is discussed in Section 3.3.
#### 3.2.1 Group I
Polystyrene fractured upon submersion in liquid argon. The Noryl test piece
made a popping noise as it was lowered into the liquid argon. The piece was
then removed for inspection and burst as it warmed. The contraction of the
solid stock of both materials had been measured without sign of damage. It
appears that introducing the non-contracting center conductor led to too much
strain for the material when exposed to liquid argon temperatures. These
materials are problematic for use in high voltage feedthroughs.
#### 3.2.2 Group II
The next group of materials was able to undergo a number of successful
breakdown measurements, but eventually suffered mechanical damage preventing
further testing. This group included polyester, PBT, and PTFE. Polyester and
PBT both suffered catastrophic mechanical failures while testing with 11.43 cm
of material exposed. Polyester failed after 50 breakdown events, and PBT
lasted through 100.
PTFE is often used in cryogenic devices, and initially showed much promise in
this study with average breakdown values among the highest evaluated. However,
while testing with 11.43 cm length exposed, the breakdown voltage suddenly
dropped after about 17 breakdowns. Upon inspection, it was found that a
pinhole had burned through the bulk of the material and a major crack had
developed around the circumference, nearly severing the piece, as seen in Figure 3.
This was the only material that broke through the bulk near the base of the
center conductor. The peak field at the bottom corner of the center conductor
was $\sim$400 kV/cm which is in the range of the published dielectric strength
of the material.
Figure 3: The PTFE test piece after evaluation.
#### 3.2.3 Group III
The remaining materials survived mechanically during testing. All had stable
breakdown voltages over time, showing no degradation after hundreds of
discharges.
Polypropylene and UHMW PE had the highest average breakdown voltages. UHMW PE
exhibits light brown surface traces, about 0.2-0.5 mm wide and of uniform
contrast along their length, presumably marking where the spark traveled.
UHMW PE displays this tracing, seen in Figure 4, more clearly than any of the
other materials. These traces show that the breakdowns follow the surface
closely. Moreover, new traces formed as breakdowns accumulated, showing that
breakdowns do not preferentially follow existing traces.
Figure 4: Traces along the UHMW PE test piece are shown in (a). On the right
of (a), the traces are denser because many more electrical breakdowns were
evaluated at that elevation. In (b), a glowing path from a breakdown along the
surface of the FR4 test piece is shown. The glowing trace runs from the ring
along the right side of the piece up the surface and then radially inward to
the center conductor.
#### 3.2.4 Analysis of Group III
We now turn to the measurements made on the materials of Group III: those that
survived the mechanical tests and hundreds of discharges. The average
breakdown voltages from the Weibull fit for the Group III materials are
plotted versus exposed insulator length in Figure 5. A linear fit of the
resulting averages is also shown with the slopes reported in Figure 6. As can
be seen from the fits, there is only a very weak dependence, if any, of the
breakdown voltage on exposed length.
The material’s dielectric constant was the primary driver of the difference in
the field at the ring for a given voltage. An FEA (Finite Element Analysis) of
the setup was performed for different materials with a 0.06 cm gap between the
ring and the insulator surface; an example FEA near the ring is shown in
Figure 7. For a 6 cm exposed insulator on UHMW PE, the peak field at 100 kV
was $470\pm 10$ kV/cm. Shrinking the insulator diameter to the minimum and
maximum of the thermal contractions led to a change of only 1.2% and 1.6% in
peak electric field when keeping the dielectric constant fixed.
Figure 5: Mean breakdown voltages as a function of exposed insulator length
for the materials that survived testing. Figure 6 gives the values of the
slopes, ($\Delta$V)/($\Delta$exposed length). Some configurations were
repeated at different times during the run of the experiment resulting in
multiple data points for a given material’s exposed length.
Figure 6: Slopes from the fit to the breakdown voltage versus exposed
insulator length data. Figure 7: The magnitude of the electric field near the
ring tip from the FEA of UHMW PE with a 6 cm exposed length. Here, 1 V is
applied; the field (V/cm) from a desired voltage can be scaled. The insulator
is on the left; the center conductor is on the left beyond the frame of the
image.
### 3.3 Insulators with Grooved Profiles
In addition to material and exposed insulator length, the surface profile of
an insulator is also a parameter of interest. Ridged or grooved surface
profiles are commonly used in high voltage bushings. For example, the ceramic
bushings often seen in outdoor power distribution are designed to prevent
flash-over by increasing the creepage path and keeping it dry [12].
Grooves have also been employed in high voltage feedthroughs in liquid argon
[13]. In this section, we present a study of the performance of profiles in
liquid argon.
In UHMW PE, the profiles shown in Figure 8(a) and (b) were prepared to compare
ridge shape and spacing. In FR4 and polycarbonate, only the profile shown in
Figure 8(b) was evaluated.
Figure 8: The two profiles used in testing, and the test apparatus.
Figure 9: Summary plots for the mean breakdown voltage values as a function of
exposed insulator length for the UHMW PE samples.
The results from the grooved samples show at most a small effect on breakdown
voltage. The results for the smooth and grooved UHMW PE are shown in Figure 9.
It is worth noting that the traces were observed to follow the surface of the
grooves on the UHMW PE samples.
The remaining grooved materials all suffered damage of some sort. The grooved
polycarbonate test piece developed damaged ridges during testing. Only one
distance, at 4.13 cm, was evaluated, and it is difficult to determine whether
the ridges were weakened during the initial installation of the ring or
damaged solely by electrical discharge. The grooved FR4 piece also
had only one exposed length at 6.67 cm evaluated due to damage. Here, however,
the ridges were not damaged, but rather a crack burned through from the center
conductor outward along fibers of the FR4.
## 4 Discussion
We have evaluated insulator performance by studying dielectric breakdown along
the surface of insulators in liquid argon in an open cryostat. In our setup, a
high voltage conductor was encased in the insulator under test. We have
studied a variety of candidate materials and compared their performance. We
have also evaluated the effect of exposed insulator length on breakdown
voltage.
### 4.1 Dependence of Breakdown Voltage on Material
The average breakdown voltage for a common exposed insulator length of 6 cm is
shown in Figure 10 versus the inverse of the materials’ permittivities.
Materials with high permittivity increase the field near the knife edge on the
ring. We find that the material dependence of breakdown voltage can be
entirely understood as a consequence of the sample’s permittivity and the
resulting electric field near the insulator surface, and does not display any
other material effects.
Figure 10: The breakdown voltage for several materials at an exposed insulator
length of 6 cm versus the inverse permittivity.
### 4.2 Dependence of Breakdown Voltage on Exposed Insulator Length
From our testing, we found little, if any, dependence of breakdown voltage on
exposed insulator length, as shown in Figure 6. This result differs from
broad experience at room temperature in the high voltage community that
generally relates an increase in creepage path to an increase in breakdown
voltage.
In an attempt to understand breakdown behavior in liquid argon, we turned to
the literature on breakdown in oil. A paper by Meek [14] is one of the first
to tackle breakdown in dense media, and provides a narrative for the breakdown
chain of events. This paper states, “The breakdown of a uniform field is
considered to occur by the transition of an electron avalanche proceeding from
cathode to anode into a self-propagating streamer, which develops from anode
to cathode to form a conducting filament between the electrodes.” This model
of an initializing avalanche followed by streamer propagation is still
accepted today.
In our setup, there is a concentrated electric field by design near the sharp
feature at the knife edge of the ring. This field is roughly independent of
ring depth as shown in Figure 11 where different exposed lengths yield a
similar electric field from the ring tip vertically along the insulator
length. The sharp field enables the initialization of a streamer.
Figure 11: The electric field magnitude along the vertical direction from the
ring tip for three exposed insulator lengths. The dominant field near the ring
tip is independent of exposed insulator length.
Once started, the streamer propagates toward the cathode. The streamer grows
by ionization from electrons and photons; therefore it needs energy to
propagate. In Reference [15], a mechanism is described, “The cathode-directed
streamer starts near the anode. It looks like and operates as a thin
conductive needle growing from the anode. The electric field at the tip of the
‘anode needle’ is very high, which stimulates the fast streamer propagation in
the direction of the cathode.”
In our test setup, just as the electric field on the edge of the ring created
a field strong enough to ionize the argon, the highly localized charge in the
streamer head concentrates the electric field from the center conductor
supporting further ionization for streamer growth. The field from the center
conductor is independent of elevation and is the dominating field along the
insulator. This field grows the streamer without any need for additional
fields along the insulator surface. Hence, the voltage required for breakdown
is independent of exposed insulator length.
Our results of breakdown voltages as a function of exposed insulator length
are shown in Figure 5 and the resulting slopes from the fits are summarized in
Figure 6. We see no significant material dependence, and further, the central
values are consistent with zero as one would expect from the model described.
### 4.3 Dependence of Breakdown Voltage on Insulator Profile
A number of arguments have been made to motivate the addition of grooves to
high voltage components in liquid argon. Path length arguments are popular, as
is the suggestion that creating a surface perpendicular or in opposition to
the electric field inhibits the travel of charge. We observed at most a modest
effect from the addition of grooves to our test pieces.
At least part of the efficacy of grooves in our tests is due to the reduction
in field by replacing plastic with argon near the grounded ring. During
testing, we did not know the precise location of the ring tip relative to the
grooves. FEAs of the field at a common cathode voltage for Profile II grooves
were performed for different ring heights relative to the groove. A reduction
in field can be seen in Figure 12 where the ratio of the magnitude of the
field for a grooved sample to that of a smooth sample is shown for a common
input voltage. Similar results were found for polycarbonate and FR4. The
largest effect is seen when the ring is between the middle of a valley of a
groove and the top edge of a peak. Here, the ratio of the field of the grooved insulator, $E_{G}$, to that of the smooth insulator, $E_{S}$, at a common voltage was nearly 0.90 for UHMW PE, 0.84 for polycarbonate, and
0.75 for FR4. These values along with the observed breakdown voltages are
summarized in Table 2. One can scale the modeled peak electric fields by the
observed breakdown voltages, $V_{S}$ and $V_{G}$ for smooth and grooved
samples respectively, to obtain the field at breakdown. If one assumes there
is a similar electric field at breakdown between the geometries, then
$V_{S}E_{S}\simeq V_{G}E_{G}$. If $(V_{S}/V_{G})\leq(E_{G}/E_{S})$, the
increase in breakdown voltage of the grooved samples can be accounted for by
the reduction in field due to the change in permittivity from removing the insulator material. For polycarbonate and FR4, the ratios $E_{G}/E_{S}$ are in rough agreement with the observed average breakdown voltage ratios
$V_{S}/V_{G}$.
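As a numerical check against Table 2: for FR4, $V_{S}/V_{G}=55.9/78.1\approx 0.72$, to be compared with the FEA ratio $E_{G}/E_{S}=0.75$; for polycarbonate, $V_{S}/V_{G}=84.6/98.4\approx 0.86$ versus $E_{G}/E_{S}=0.84$.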
Figure 12: Ratio of the electric field near the ring for different exposed insulator lengths with respect to the groove profile. Figure (a) is the ratio of the magnitude of the electric field in a line directed radially toward the sample from the ring tip. Figure (b) is the ratio of the magnitude of the electric field in a vertical line from the ring tip.
Sample | Grooved Exposed Length (cm) | Grooved $<V_{BD}>$ | Smooth $<V_{BD}>$ | FEA $E_{G}/E_{S}$ | Measured $V_{S}/V_{G}$
---|---|---|---|---|---
FR4 | 6.67 | $78.1\pm 14.3$ | 55.9 | 0.75 | 0.72
Polycarbonate | 4.13 | $98.4\pm 13.3$ | 84.6 | 0.84 | 0.86
UHMW PE | 1.11–15.56 | $106.5\pm 17.9$ – $82.0\pm 11.1$ | 109.9–98.6 | 0.90 | 1–1.2
Table 2: The average breakdown voltage, $V_{BD}$, for smooth and Profile II
grooved samples, and their resulting ratio, $V_{S}/V_{G}$. The smooth value is
from the fitted line to the breakdown voltages as a function of exposed
length. Also included is the minimum ratio of the electric fields vertically
from the ring tip for the samples as computed by an FEA. Here the ring tip is
just below a valley and this is where the maximum effect on the electric field
is seen.
The UHMW PE Profile II sample had a measured $V_{S}/V_{G}>1$ in contrast to
the other samples, including the UHMW PE Profile I sample, and the FEA model.
The precise effect of grooves on performance is not clear; however, they do not appear to have a large effect on breakdown voltage in this study, as shown
in Figure 9(b) where the slopes and errors are reported for the grooved and
smooth UHMW PE. The values are again consistent with zero.
## 5 Conclusion
We have performed a study of electrical breakdown in liquid argon along the
surface of insulators surrounding a conductor with regard to breakdown path
length, insulator material, and insulator surface profile. The measurements
were performed in an open-to-air cryostat initially filled with commercial
grade argon, and the breakdown was initiated at the anode. We identified some
materials that failed mechanically in our test setup. We found the breakdown
voltage does not depend upon the length of the exposed insulator. Material-
specific effects could be explained as due to their permittivities.
The evaluation of surface profiles points to at most a modest effect on
breakdown voltage by adding grooves. Two grooved materials failed mechanically
after one exposed length test. Before failing, the breakdown voltage recorded
was lower when compared to a smooth sample. However, this change in breakdown
voltage could be accounted for by the change in the peak electric field
resulting from the grooved material geometry. A limited study of exposed
length of grooved UHMW PE pointed to little, if any, effect on breakdown
voltage with all of the average voltage gains with length being consistent
with zero.
A practical consequence of our observations (in impure liquid argon) is that
for insulator tubes surrounding high voltage electrodes, the peak surface
electric field is the relevant value. Thus, the permittivity of the materials,
provided they can survive mechanically, is an important parameter of concern.
Adjusting the radial dimension will likely also improve performance. Longer
insulators provide little, if any, improvement in high voltage standoff.
Grooving the insulator surface similarly seems to have little effect beyond
reducing the local field. Improvements must focus on preventing avalanche
initiation, or preventing the avalanche from escaping to where it can grow into a longer streamer.
## 6 Acknowledgements
The authors wish to thank Brian Rebel and Stephen Pordes for their support,
editorial contributions, and scientific discussions, and Jim Walton and Alan
Hahn for their technical assistance.
Fermilab is operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
## References
* [1] A. Blatter et al., “Experimental study of electric breakdowns in liquid argon at centimeter scale,” JINST 9 (2014) P04006, arXiv:1401.6693 [physics.ins-det].
* [2] F. Bay et al., “Evidence of electrical breakdown induced by bubbles in liquid argon,” arXiv:1401.2777 [physics.ins-det].
* [3] M. Auger et al., “A method to suppress dielectric breakdowns in liquid argon ionization detectors for cathode to ground distances of several millimeters,” JINST 9 (2014) P07023, arXiv:1406.3929 [physics.ins-det].
* [4] R. Acciarri et al., “Liquid Argon Dielectric Breakdown Studies with the MicroBooNE Purification System,” JINST 9 no. 11, (2014) P11001, arXiv:1408.0264 [physics.ins-det].
* [5] Glassman High Voltage Inc., PO Box 317, 124 West Main Street, High Bridge, NJ 08829-0317, U.S.A.
* [6] National Instruments Corp., 11500 N Mopac Expwy, Austin, TX 78759-3504.
* [7] W. Weibull, “A statistical distribution function of wide applicability,” J. Appl. Mech 18 (1951) 293–297.
* [8] W. F. Gauster, “Über Oberflächeneffekte beim elektrischen Durchbruch von Flüssigkeiten,” Ingenieur Archiv 10 (1956) 160.
* [9] “Design guide–module iii: Delrin acetal resin.” http://plastics.dupont.com/plastics/pdflit/americas/delrin/230323c.pdf. Page 34, Figure 31.
* [10] “Fr-4.” http://en.wikipedia.org/wiki/FR-4.
* [11] “Professional plastics.” http://www.professionalplastics.com.
* [12] L. L. Alston, ed., High-Voltage Technology. Harwell Post-Graduate Series. Oxford University Press, 1968. Creepage path is discussed in Section 14.5.1.
* [13] S. Amerio et al., “Design, construction and tests of the ICARUS T600 detector,” Nucl. Instrum. Methods 527 (2004) 329.
* [14] J. M. Meek, “A theory of spark discharge,” Phys. Rev. 57 (Apr, 1940) 722–728. http://link.aps.org/doi/10.1103/PhysRev.57.722.
* [15] A. Fridman, A. Gutsol, and Y. Cho, “Non-thermal atmospheric pressure plasma,” vol. 40 of Advances in Heat Transfer, pp. 1 – 142. Elsevier, 2007. http://www.sciencedirect.com/science/article/pii/S0065271707400016.
# Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural
Representation
Wenbo Zhao1,2, Xianming Liu1,2, Zhiwei Zhong1,2, Junjun Jiang1,2, Wei Gao3, Ge
Li3, Xiangyang Ji4
1Harbin Institute of Technology, 2Peng Cheng Laboratory
3Peking University Shenzhen Graduate School, 4Tsinghua University
Corresponding author<EMAIL_ADDRESS>
###### Abstract
Point clouds upsampling is a challenging task that aims to generate dense and uniform point clouds from a given sparse input. Most existing methods either take an end-to-end supervised learning approach, where large amounts of pairs of sparse input and dense ground-truth are exploited as supervision information; or treat up-scaling of different scale factors as independent tasks, and have to build multiple networks to handle upsampling with varying factors. In this paper, we propose a novel approach that achieves self-
supervised and magnification-flexible point clouds upsampling simultaneously.
We formulate point clouds upsampling as the task of seeking nearest projection
points on the implicit surface for seed points. To this end, we define two
implicit neural functions to estimate projection direction and distance
respectively, which can be trained by two pretext learning tasks. Experimental
results demonstrate that our self-supervised learning based scheme achieves
competitive or even better performance than supervised learning based state-
of-the-art methods. The source code is publicly available at
https://github.com/xnowbzhao/sapcu.
## 1 Introduction
Point clouds serve as a popular tool to represent 3D data due to their
flexibility and compactness in describing objects/scenes with complex geometry
and topology. They can be easily captured by modern scanning devices, and have
been widely used in many applications, such as autonomous driving, robotics,
etc. However, due to the inherent limitations of 3D sensing technology, raw
point clouds acquired from 3D scanners are usually sparse, occluded and non-
uniform. In many downstream applications, such as surface reconstruction and
understanding, dense point clouds are desired for representing shapes with
richer geometric details. Accordingly, people turn to develop a computational
approach, referred to as point clouds upsampling, which has attracted
extensive attention in both industry and academia [21, 20, 7, 13, 22, 8].
Unlike conventional grid based images, point clouds are irregular and
unordered, which makes point clouds upsampling a more challenging task than its
2D image counterpart. The goal of point clouds upsampling is two-fold: 1)
generating a dense point set from the sparse input to provide richer details
of the object; 2) generating a uniform and complete point set to cover the
underlying surface faithfully.
In recent years, deep neural networks based point clouds upsampling approaches
emerge and become popular, which adaptively learn structures from data and
achieve superior performance than traditional methods, such as optimization-
based ones [1, 5, 6]. For instance, Yu et al. [21] propose to learn multi-
level features per point and expand the point set via a multi-branch
convolution unit implicitly in feature space, which is then split to a
multitude of features for reconstruction of an upsampled point set. Wang et
al. [20] propose to progressively train a cascade of patch-based upsampling
networks on different levels of detail. Li et al. [7] apply generative
adversarial network into point clouds upsampling, which constructs an up-down-
up expansion unit in the generator for upsampling point features with error
feedback and self-correction, and formulate a self-attention unit to enhance
the feature integration. To better represent locality and aggregate the point
neighborhood information, Qian et al. [13] propose to use a Graph
Convolutional Network to perform point clouds upsampling.
In summary, the outlined deep learning based methods take a general approach:
first design an upsampling module to expand the number of points in the
feature space, then formulate losses to enforce the output points to be as
close as possible to the ground truth dense points. However, these methods
suffer from the following two limitations:
End-to-End Training. These methods are trained in an end-to-end supervised
learning manner, which requires a large amount of pairs of input sparse and
ground-truth dense point sets as the supervision information. The training
data is constructed by sampling from synthetic models, whose distributions are
inevitably biased from that of the real-scanned data. This would lead the
trained models to have poor generalization ability in real-world applications.
Thus, it is more desirable to develop self-supervised or unsupervised point
cloud upsampling schemes.
Fixed Upsampling Factor. Due to resource constraints, such as display
resolution and transmission bandwidth, the required upsampling factor is
usually various. These existing methods treat upscaling of different scale
factors as independent tasks, which train a specific deep model for a pre-
defined factor and have to build multiple networks to handle upsampling with
varying factors. This manner is clumsy, which increases both model complexity
and training time significantly. Thus, it is more desirable to develop unified
point cloud upsampling schemes that can handle arbitrary scale factor.
Some methods developed very recently [9, 22, 19, 14] investigate the above
limitations and attempt to address them:
* •
Regarding self-supervised point cloud upsampling, Liu et al. [9] propose the
coarse-to-fine framework, which downsamples the input sparse patches into
sparser ones and then exploits them as pairs of supervision information to
perform end-to-end training. [22] proposes an end-to-end self-supervised
learning manner, in which the loss functions enforce the input sparse point
cloud and the generated dense one to have similar 3D shapes and rendered
images. However, these two methods are still limited to a fixed upsampling
factor.
* •
Regarding arbitrary-scale upsampling, inspired by the counterpart Meta-SR in
image [4], Ye et al. [19] propose Meta-PU for magnification-flexible point
cloud upsampling, in which the meta-subnetwork is learned to adjust the
weights of the upsampling blocks dynamically. Qian et al. [14] design a neural
network to adaptively learn unified and sorted interpolation weights as well
as the high-order refinements, by analyzing the local geometry of the input
point cloud. However, these two methods still follow the end-to-end supervised
learning manner, which need to construct a large-scale training set including
ground truth dense point sets with scales within a wide range.
In this paper, we propose a novel and powerful point clouds upsampling method
via implicit neural representation, which can achieve self-supervised and
magnification-flexible upsampling simultaneously. Specifically, to get rid of
the requirement of ground truth dense point clouds, we do not directly learn
the mapping between the input sparse and output dense point sets.
Alternatively, inspired by the notion that an implicit surface can be
represented by a signed distance function (SDF) [15, 10, 11], we turn to seek
the nearest projection points on the object surface for given seed points
through two implicit neural functions, which are used to estimate projection
direction and distance respectively. The two functions can be trained by two constructed pretext self-supervised learning tasks. In this way, as long as the
seed points are sampled densely and uniformly, we can produce high-resolution
point clouds that are dense, uniform and complete. To guarantee the uniformity
of seeds sampling, we exploit equally-paced 3D voxels to divide the space of
point cloud. Experimental results demonstrate our self-supervised learning
based scheme achieves competitive or even better performance that supervised
learning based state-of-the-art methods. The main contributions of this work
are highlighted as follows:
* •
To the best of our knowledge, we are the first in the literature to
simultaneously consider self-supervised and arbitrary-scale point clouds
upsampling.
* •
We formulate point clouds upsampling as the task of seeking nearest projection
points on the implicit surface for seed points, which can be done by two
implicit neural functions trained by pretext tasks. From the generated dense
point clouds, we can achieve arbitrary-scale upsampling by farthest point
sampling.
* •
Although our method is self-supervised, it produces high-quality dense point
clouds that are uniform and complete, and achieves competitive objective
performance and even better visual performance compared with state-of-the-art
supervised methods.
## 2 Method
Define $\mathcal{X}=\\{{\mathbf{p}}_{i}\\}_{i=1}^{n}\in\mathbb{R}^{n\times 3}$
as the input sparse point cloud. For a desired scaling factor $r$, we aim
to obtain a corresponding dense point cloud
$\mathcal{Y}=\\{{\mathbf{p}}_{i}\\}_{i=1}^{N}\in\mathbb{R}^{N\times 3}$
including $N=\lfloor r\times n\rfloor$ points. ${\mathcal{S}}$ is defined as
the underlying surface of the dense point cloud. The high-resolution point
cloud $\mathcal{Y}$ is required to be dense and uniform, as well as be able to
handle occlusion and noise, i.e., to be complete and clean.
Unlike the existing methods that take the end-to-end training framework, we do
not directly learn the mapping between the input sparse and output dense point
sets, but instead seek the nearest projection point on the object surface for
a given seed point in a self-supervised manner. By densely and uniformly
sampling seed points in the space, we can obtain dense and approximately
uniform projection points, which can describe the underlying surface
faithfully. The proposed self-supervised point clouds upsampling strategy
consists of the following four steps:
* •
Seeds Sampling. We represent the geometric space of the point cloud by a 3D
voxel grid, from which we choose the centres of voxels that are close to the
implicit surface $\mathcal{S}$ as the seed points.
* •
Surface Projection. For seed points, we project them to the implicit surface
$\mathcal{S}$ to obtain the projected points, which construct the generated
dense point cloud.
* •
Outliers Removal. We further remove the projected points that are generated by
far-away seed points to achieve a cleaner point cloud.
* •
Arbitrary-Scale Point Cloud Generation. To obtain the desired upsampling
factor, we adjust the number of vertices of the generated dense clouds by
farthest point sampling.
In the following, we introduce each step in detail.
### 2.1 Seeds Sampling
To obtain uniformly sampled seed points, given a point cloud, we divide the 3D
space into equally spaced voxels $\\{{\mathbf{V}}_{(x,y,z)}\\}$, where
${\mathbf{V}}_{(0,0,0)}$ represents the voxel located at the origin of the 3D Cartesian coordinate system. We define the size of each voxel as $l\times l\times l$. The centre of a voxel ${\mathbf{V}}_{(x,y,z)}$ is thus ${\mathbf{c}}_{(x,y,z)}=[x+0.5*l,y+0.5*l,z+0.5*l]$. The centres of voxels are equally distributed in space, and serve as good candidates for seed points. However, we do not use them all, but choose the ones that are close to the underlying surface of the point cloud.
A reasonable principle to choose centres is according to their distances to
the surface ${\mathcal{S}}$. We choose a centre ${\mathbf{c}}_{(x,y,z)}$ as a seed if its distance to the surface is within a preset range:
$\mathbb{\text{Dist}}({\mathbf{c}}_{(x,y,z)},{\mathcal{S}})\in[D_{l},D_{u}]$.
The difficulty lies in that we cannot directly compute the distance
$\mathbb{\text{Dist}}({\mathbf{c}}_{(x,y,z)},{\mathcal{S}})$, since the
underlying surface ${\mathcal{S}}$ is unknown. We propose an alternative
strategy to approximate
$\mathbb{\text{Dist}}({\mathbf{c}}_{(x,y,z)},{\mathcal{S}})$. Specifically,
from the input sparse point sets $\mathcal{X}$, we choose $M$ points that are
nearest to ${\mathbf{c}}_{(x,y,z)}$, denoted as
$\\{{\mathbf{p}}_{c,1},{\mathbf{p}}_{c,2},\cdots,{\mathbf{p}}_{c,m},\cdots,{\mathbf{p}}_{c,M}\\}$
that are ordered from near to far. From these points, we can form a set of
triangles
$\\{T_{m}=({\mathbf{p}}_{c,1},{\mathbf{p}}_{c,2},{\mathbf{p}}_{c,m})\\}_{m=3}^{M}$.
We then perform the following approximation:
$\mathbb{\text{Dist}}({\mathbf{c}}_{(x,y,z)},{\mathcal{S}})\approx\min\mathbb{\text{Dist}}({\mathbf{c}}_{(x,y,z)},t),t\in\\{T_{m}\\}_{m=3}^{M}$
(1)
where $t$ represents a point contained in the constructed triangles. Finally,
we obtain the seed points set $C$. By setting appropriate $l$, we can generate
dense and uniformly distributed seed points.
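For concreteness, a minimal NumPy/SciPy sketch of this step is given below. The regular-grid construction and the dense barycentric sampling used to approximate the point-to-triangle distance of Eq. (1) are our own simplifications, and the default values of $l$, $[D_{l},D_{u}]$ and $M$ follow the settings of Section 4.2; this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_seed_points(points, l=0.004, D_l=0.011, D_u=0.015, M=10):
    """Sketch of the seed-sampling step (Sec. 2.1), not optimized for speed.
    Voxel centres whose approximate distance to the underlying surface lies
    in [D_l, D_u] are kept as seeds; the point-to-triangle distance of
    Eq. (1) is approximated by dense barycentric sampling of each triangle."""
    tree = cKDTree(points)
    lo, hi = points.min(0), points.max(0)
    axes = [np.arange(a - 2 * D_u, b + 2 * D_u, l) + 0.5 * l
            for a, b in zip(lo, hi)]
    centres = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    # Cheap prefilter: discard centres far from every input point.
    d1, _ = tree.query(centres)
    centres = centres[d1 < 4 * D_u]

    # Barycentric sample grid, reused for every triangle.
    u, v = np.meshgrid(np.linspace(0, 1, 12), np.linspace(0, 1, 12))
    mask = u + v <= 1.0
    u, v = u[mask], v[mask]

    seeds = []
    for c in centres:
        _, idx = tree.query(c, k=M)          # M nearest points, near to far
        p = points[idx]
        d = np.inf
        for m in range(2, M):                # triangles (p_1, p_2, p_m)
            tri = (u[:, None] * p[0] + v[:, None] * p[1]
                   + (1 - u - v)[:, None] * p[m])
            d = min(d, np.linalg.norm(tri - c, axis=1).min())
        if D_l <= d <= D_u:
            seeds.append(c)
    return np.asarray(seeds)
```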
Figure 1: The network architecture of implicit neural function.
### 2.2 Surface Projection
With the sampled seed points, the next step is to seek their projection
points on the surface, which are the target points of the generated dense
point cloud.
In the field of 3D computer vision and graphics, it is well-known that an
implicit surface can be defined as a signed distance function (SDF) [15, 10,
11]. SDF, when passed the coordinates of a point in space, outputs the closest
distance of this point to the surface, whose sign indicates whether the point
is inside or outside of the surface. Inspired by SDF, we propose the following
feasible approach to estimate the projected point on the surface for a query
seed point.
It is worth noting that, the computation strategies of SDF, such as [15, 10,
11], cannot be directly applied for our purpose. For a 3D query point
${\mathbf{x}}$, SDF outputs: $SDF({\mathbf{x}})=s,s\in\mathbb{R}$. The sign of
$s$ only indicates whether the point is inside or outside of a shape, but does not provide
the direction to the surface. In our method, for a seed point ${\mathbf{c}}\in
C$, we divide the task of estimating projection point into two sub-tasks: 1)
estimating the projection direction ${\mathbf{n}}\in[-1,1]^{3}$; 2) estimating
the projection distance $d\in\mathbb{R}$. Then the coordinate of the
projection point of the seed point ${\mathbf{c}}$ can be obtained as:
${\mathbf{c}}_{p}={\mathbf{c}}+{\mathbf{n}}*d$.
Projection Direction Estimation. We train a multi-layer fully-connected neural
network $f_{n}(\cdot;\Theta_{n})$ for this purpose, which takes the query
point ${\mathbf{c}}$ and the sparse point cloud $\mathcal{X}$ as inputs:
${\mathbf{n}}=f_{n}({\mathbf{c}},\mathcal{X};\Theta_{n})$ (2)
To reduce the computational complexity, we take $k$ nearest points to
${\mathbf{c}}$ in $\mathcal{X}$ instead of the whole $\mathcal{X}$ as input.
We denote this subset of points as
${\mathcal{X}}_{c}=\\{{\mathbf{p}}_{1},\cdots,{\mathbf{p}}_{k}\\}$. Moreover,
to facilitate the inference process of neural networks, we perform
normalization on the point coordinates by setting ${\mathbf{c}}$ as the
origin. In this way, we can simplify the estimation function as:
${\mathbf{n}}=f_{n}(\widehat{\mathcal{X}}_{c};\Theta_{n})$ (3)
where
$\widehat{\mathcal{X}}_{c}=\\{{\mathbf{p}}_{1}-{\mathbf{c}},\cdots,{\mathbf{p}}_{k}-{\mathbf{c}}\\}$.
Projection Distance Estimation. Similarly, for estimating the projection
distance $d$, we also train a multi-layer fully-connected neural network $f_{d}(\cdot;\Theta_{d})$, which takes the query point ${\mathbf{c}}$,
the subset of nearest points ${\mathcal{X}}_{c}$ and the estimated projection
direction ${\mathbf{n}}$ as inputs:
$d=f_{d}({\mathbf{c}},{\mathcal{X}}_{c},{\mathbf{n}};\Theta_{d})$ (4)
Normalization is also helpful for this network. Different from $f_{n}$, here
the input ${\mathbf{n}}$ involves direction. Therefore, we perform normalization on both position and direction, which can be done in two stages:
1) moving ${\mathbf{c}}$ to the origin; 2) applying the rotation matrix
$\mathbf{W}_{r}$ to rotate ${\mathbf{n}}$ to a specific direction
${\mathbf{n}}_{t}$, i.e., ${\mathbf{n}}_{t}=\mathbf{W}_{r}{\mathbf{n}}$. After
normalization, ${\mathcal{X}}_{c}$ becomes
$\widetilde{\mathcal{X}}_{c}=\\{\mathbf{W}_{r}({\mathbf{p}}_{1}-{\mathbf{c}}),\cdots,\mathbf{W}_{r}({\mathbf{p}}_{k}-{\mathbf{c}})\\}$,
which is the only required input for $f_{d}$:
$d=f_{d}(\widetilde{\mathcal{X}}_{c};\Theta_{d})$ (5)
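A sketch of the full projection step might look as follows, assuming the trained networks $f_{n}$ and $f_{d}$ are available as callables that map a normalized $(k,3)$ neighborhood to a unit direction and a scalar distance; the Rodrigues construction is one standard way to realize the rotation $\mathbf{W}_{r}$ onto ${\mathbf{n}}_{t}=(1,0,0)$.

```python
import numpy as np
from scipy.spatial import cKDTree

def rotation_to(n, t=np.array([1.0, 0.0, 0.0])):
    """Rotation matrix W_r with W_r @ n = t (Rodrigues' formula).
    n and t are assumed to be unit vectors."""
    v = np.cross(n, t)
    c = float(np.dot(n, t))
    if np.isclose(c, -1.0):
        # n and t antiparallel: rotate by pi about any axis orthogonal to n.
        a = (np.array([0.0, 1.0, 0.0]) if abs(n[0]) > 0.9
             else np.array([1.0, 0.0, 0.0]))
        u = np.cross(n, a)
        u /= np.linalg.norm(u)
        return 2.0 * np.outer(u, u) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def project_seeds(seeds, points, f_n, f_d, k=100):
    """Sketch of Sec. 2.2: c_p = c + n * d for every seed point c.
    f_n and f_d stand for the trained implicit networks, assumed to map a
    (k, 3) array of normalized neighbors to a unit direction / a distance."""
    tree = cKDTree(points)
    projected = []
    for c in seeds:
        _, idx = tree.query(c, k=k)
        X_c = points[idx] - c                 # move c to the origin
        n = f_n(X_c)
        n = n / np.linalg.norm(n)
        W_r = rotation_to(n)                  # rotate n onto n_t = (1, 0, 0)
        d = f_d(X_c @ W_r.T)                  # rotated, translated neighborhood
        projected.append(c + n * d)
    return np.asarray(projected)
```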
### 2.3 Outliers Removal
In the step of seeds sampling, some points that are actually far away from
${\mathcal{S}}$ may be included in the seed points set $C$ due to the approximation error. The projection direction and distance of these points cannot be well estimated, leading to outliers in the resulting dense point cloud. We therefore apply a post-processing procedure to remove them.
Specifically, for a projection point ${\mathbf{c}}_{p}$, we find its $v$
nearest points $\\{{\mathbf{c}}_{p,1},\cdots,{\mathbf{c}}_{p,v}\\}$. We then
compute the average bias between ${\mathbf{c}}_{p}$ and them:
$b_{p}=\frac{1}{v}\sum_{i=1}^{v}\text{Dist}({\mathbf{c}}_{p},{\mathbf{c}}_{p,i})$
(6)
We compute $\\{b_{p}\\}$ for all projection points in the same way, and denote their average by $\bar{b}$. We determine a point to be an outlier if
it satisfies $b_{p}>\lambda\bar{b}$, where $\lambda$ is set as 1.5 in
practical implementation.
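A direct implementation of this filter is sketched below; $\lambda=1.5$ follows the text and the neighbor count $v=30$ follows Section 4.2.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(proj, v=30, lam=1.5):
    """Outlier filter of Sec. 2.3 (Eq. (6)): drop points whose mean distance
    to their v nearest neighbors exceeds lam times the global mean."""
    tree = cKDTree(proj)
    # k = v + 1 because the nearest neighbor of each point is itself.
    dists, _ = tree.query(proj, k=v + 1)
    b = dists[:, 1:].mean(axis=1)
    return proj[b <= lam * b.mean()]
```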
### 2.4 Arbitrary-Scale Point Cloud Generation
Note that the above process cannot accurately control the number of vertices
generated. Thus, it is necessary to adjust the number of vertices to achieve
upsampling with the desired scale factor. In our context, we first perform
inverse normalization on the generated point cloud, and then adjust the number
of vertices to $N$ by the farthest point sampling algorithm [12].
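For completeness, a simple greedy implementation of farthest point sampling [12] is sketched below.

```python
import numpy as np

def farthest_point_sampling(pts, N):
    """Greedy farthest point sampling, used to trim the dense output
    to exactly N = floor(r * n) points (Sec. 2.4)."""
    sel = np.zeros(N, dtype=int)
    sel[0] = 0                                    # arbitrary first point
    d = np.linalg.norm(pts - pts[sel[0]], axis=1)
    for i in range(1, N):
        sel[i] = int(np.argmax(d))                # point farthest from the set
        d = np.minimum(d, np.linalg.norm(pts - pts[sel[i]], axis=1))
    return pts[sel]
```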
Scale | 2$\times$ | 4$\times$
---|---|---
Metric ($10^{-3}$) | CD $\downarrow$ | EMD $\downarrow$ | F-score $\uparrow$ | mean $\downarrow$ | std $\downarrow$ | CD $\downarrow$ | EMD $\downarrow$ | F-score $\uparrow$ | mean $\downarrow$ | std $\downarrow$
Fixed Scale | PU-Net [21] | 12.9 (6) | 6.75 (6) | 334 (5) | 4.02 (5) | 5.62 (4) | 11.3 (7) | 7.02 (7) | 462 (7) | 5.05 (7) | 6.81 (6)
MPU [20] | - | - | - | - | - | 10.4 (4) | 5.64 (5) | 527 (4) | 3.61 (4) | 5.50 (3)
PU-GAN [7] | 12.7 (5) | 5.09 (5) | 326 (6) | 4.32 (6) | 6.01 (5) | 10.9 (5) | 6.66 (6) | 484 (6) | 4.66 (6) | 6.56 (5)
PU-GCN [13] | 12.2 (3) | 4.94 (4) | 360 (3) | 3.47 (2) | 5.09 (2) | 10.2 (3) | 5.50 (4) | 537 (3) | 3.35 (2) | 5.01 (2)
PU-DR [8] | 11.0 (1) | 3.55 (1) | 409 (1) | 2.30 (1) | 4.05 (1) | 8.71 (1) | 3.98 (2) | 625 (1) | 2.24 (1) | 3.89 (1)
Arbitrary Scale | Meta-PU [19] | 12.6 (4) | 3.94 (2) | 339 (4) | 3.77 (4) | 5.41 (3) | 10.9 (5) | 3.56 (1) | 506 (5) | 3.89 (5) | 5.60 (4)
Proposed | 12.1 (2) | 4.93 (3) | 371 (2) | 3.49 (3) | 10.2 (6) | 10.1 (2) | 4.87 (3) | 561 (2) | 3.49 (3) | 9.35 (7)
Scale | $8\times$ | $16\times$
Metric ($10^{-3}$) | CD $\downarrow$ | EMD $\downarrow$ | F-score $\uparrow$ | mean $\downarrow$ | std $\downarrow$ | CD $\downarrow$ | EMD $\downarrow$ | F-score $\uparrow$ | mean $\downarrow$ | std $\downarrow$
Fixed Scale | PU-Net [21] | 9.67 (5) | 8.91 (6) | 611 (6) | 4.85 (6) | 6.81 (5) | 9.25 (7) | 10.4 (7) | 632 (7) | 6.04 (7) | 8.04 (6)
MPU [20] | - | - | - | - | - | 8.12 (5) | 7.74 (6) | 727 (5) | 4.01 (6) | 6.11 (5)
PU-GAN [7] | 8.82 (4) | 5.05 (3) | 676 (4) | 3.76 (4) | 5.51 (3) | 7.52 (2) | 6.02 (4) | 770 (3) | 3.14 (2) | 4.92 (2)
PU-GCN [13] | 8.78 (3) | 6.41 (5) | 678 (3) | 3.33 (2) | 5.06 (2) | 7.80 (4) | 7.44 (5) | 749 (4) | 3.39 (4) | 5.11 (3)
PU-DR [8] | 8.34 (1) | 3.95 (1) | 708 (1) | 2.99 (1) | 4.97 (1) | 7.29 (1) | 4.51 (1) | 779 (1) | 2.92 (1) | 4.72 (1)
Arbitrary Scale | Meta-PU [19] | 9.71 (6) | 4.33 (2) | 625 (5) | 3.95 (5) | 5.68 (4) | 8.96 (6) | 5.62 (2) | 676 (6) | 3.87 (5) | 5.59 (4)
Proposed | 8.70 (2) | 5.53 (4) | 706 (2) | 3.48 (3) | 8.84 (6) | 7.65 (3) | 5.98 (3) | 772 (2) | 3.35 (3) | 8.34 (7)
Table 1: Objective performance comparison with respect to CD, EMD, F-score,
mean and std with state-of-the-art methods. The ranking numbers are also
provided.
## 3 Implicit Neural Networks Training
In this section, we introduce the architecture of implicit neural networks and
the training strategy.
### 3.1 Architectures
The networks $f_{n}$ and $f_{d}$ share the same architecture as shown in
Figure 1, which borrows the idea of the encoder-decoder framework [10]. The network takes the normalized subset of points as input, which is fed into the encoder to obtain a 2048-dimensional feature vector. Here we employ a state-of-the-art method, DGCNN [17], as the encoder to preserve surface information at multiple levels. The feature vector is then passed through 4 fully-connected (FC) layers with batch normalization and ReLU; the output dimensions of the first three layers are 1024, 512 and 128 respectively. The output dimension of the last FC layer is 3 for the projection direction ${\mathbf{n}}$ and 1 for the projection distance $d$. Note that the design
of the network is not the main contribution of this paper. We can exploit any
suitable network for our purpose.
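A minimal PyTorch sketch of the decoder just described is given below; the DGCNN encoder is left abstract since, as noted above, any point-set encoder producing a 2048-dimensional feature vector can be substituted. The module name ImplicitHead is ours, for illustration only.

```python
import torch
import torch.nn as nn

class ImplicitHead(nn.Module):
    """Decoder of Fig. 1: four FC layers (2048 -> 1024 -> 512 -> 128 -> out),
    with batch normalization and ReLU on the hidden layers.  out_dim is 3
    for the direction network f_n and 1 for the distance network f_d."""

    def __init__(self, out_dim, feat_dim=2048):
        super().__init__()
        dims = [feat_dim, 1024, 512, 128]
        layers = []
        for i in range(3):
            layers += [nn.Linear(dims[i], dims[i + 1]),
                       nn.BatchNorm1d(dims[i + 1]),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Linear(dims[-1], out_dim))
        self.mlp = nn.Sequential(*layers)

    def forward(self, feat):          # feat: (B, 2048) from the encoder
        return self.mlp(feat)

# encoder = DGCNN(...)  # any point-set encoder yielding 2048-d features
f_n_head = ImplicitHead(out_dim=3)    # projection direction
f_d_head = ImplicitHead(out_dim=1)    # projection distance
```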
### 3.2 Training Data Preparation
To train the two implicit neural functions, we construct two pretext tasks,
for which we prepare training samples that consist of 3D points and the
corresponding ground truth projection direction and distance values. We train
with normalized watertight meshes that are constructed by the TSDF-Fusion
presented in [16, 10] from a subset of the ShapeNet [2] that consists of 13
major categories. Since $f_{n}$ and $f_{d}$ are designed with different
purposes, we prepare different training pairs for them:
* •
For preparing the training data of $f_{n}$, we firstly generate the seed
points: 50K seed points are randomly selected around the mesh surface by
limiting that the distances between them and the surface are within a preset
range $[D_{l}-\epsilon,D_{u}+\epsilon]$, where $\epsilon$ is introduced to
increase robustness. For a seed point ${\mathbf{c}}$, we find the nearest
point on the mesh and sample 5 points around it, then compute the average
vector ${\mathbf{d}}$ between ${\mathbf{c}}$ and them. In this way, we can
effectively handle the error in mesh reconstruction. The ground truth
$\widehat{\mathbf{n}}$ is finally derived as the normalization of
${\mathbf{d}}$. Secondly, for every 16 seed points, we randomly select 2048
points as the corresponding sparse point cloud $\mathcal{X}$.
* •
For preparing the training data of $f_{d}$, 50K seed points are randomly
selected around the mesh surface in the same way as $f_{n}$. We then find the
corresponding nearest projection points on the surface. The distance between a
seed point and the projection point is used as the ground truth $\widehat{d}$.
The generation of $\mathcal{X}$ is in the same way as $f_{n}$.
It is worth noting that, although the training data are generated from
watertight meshes, our scheme is capable of handling non-watertight point
clouds, which can be observed in the real-world cases shown in Figure 6.
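A rough sketch of this data preparation for a single mesh is given below, assuming trimesh's closest-point query; the values of $\epsilon$ and the jitter radius used to emulate "sampling 5 points around" the closest point are illustrative, as neither is specified in the text.

```python
import numpy as np
import trimesh

def make_training_pairs(mesh, n_seeds=50_000, D_l=0.011, D_u=0.015,
                        eps=0.002, jitter=0.002, rng=None):
    """Rough sketch of the data preparation of Sec. 3.2 for one mesh.
    eps and jitter are illustrative values (not specified in the text)."""
    rng = rng or np.random.default_rng()
    # Rejection-sample seeds whose surface distance lies in
    # [D_l - eps, D_u + eps].  Simple but slow: the shell is thin.
    seeds = np.empty((0, 3))
    while len(seeds) < n_seeds:
        cand = mesh.bounds[0] + rng.random((4 * n_seeds, 3)) * mesh.extents
        _, dist, _ = trimesh.proximity.closest_point(mesh, cand)
        keep = (dist >= D_l - eps) & (dist <= D_u + eps)
        seeds = np.vstack([seeds, cand[keep]])
    seeds = seeds[:n_seeds]

    # Ground-truth distance for f_d: exact seed-to-surface distance.
    _, d_gt, _ = trimesh.proximity.closest_point(mesh, seeds)

    # Ground-truth direction for f_n: average vector from the seed to a few
    # surface points near its closest point (approximated here by jittering
    # the seed before the closest-point query), then normalized.
    dirs = np.zeros_like(seeds)
    for _ in range(5):
        near, _, _ = trimesh.proximity.closest_point(
            mesh, seeds + jitter * rng.standard_normal(seeds.shape))
        dirs += near - seeds
    n_gt = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    return seeds, n_gt, d_gt
```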
### 3.3 Training Details
The training of $f_{n}$ and $f_{d}$ is done by minimizing the mean squared error (MSE) between the predicted and ground-truth direction/distance values.
The training process is conducted on a server with two Tesla V100 GPUs. The
networks are trained for 1200 epochs with a batch size of 64, using the Adam
algorithm. Following [10], the learning rate is set as $10^{-4}$, and the
other hyperparameters of Adam are set as $\beta_{1}=0.9$, $\beta_{2}=0.999$,
$\epsilon=10^{-8}$, weight decay $=0$.
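In code, this training configuration corresponds to a standard Adam setup, sketched below (reusing the hypothetical ImplicitHead module from the architecture sketch above).

```python
import torch

# Training setup of Sec. 3.3: MSE loss, Adam with the stated hyperparameters,
# 1200 epochs, batch size 64.
net = ImplicitHead(out_dim=3)                  # out_dim=1 for f_d
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-8, weight_decay=0)
```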
## 4 Experiments
In this section, we provide extensive experimental results to demonstrate the
superior performance of our method.
### 4.1 Comparison Study
We compare the proposed self-supervised arbitrary-scale point clouds
upsampling (SSAS) method with several state-of-the-art works, which can be
divided into two categories in terms of the scale factor: 1) fixed scale
methods, including PU-Net [21], MPU [20], PU-GAN [7], PU-GCN [13], PU-DR [8];
2) arbitrary scale method, including Meta-PU [19]. Note that these methods are
all supervised learning based. The compared models are trained with the
released codes by their authors, following the default settings in their
papers.
We train all these compared methods following the approach mentioned in [19]
for fair comparison. The test samples are from the dataset adopted by [21,
19]. We non-uniformly sample 2048 points using Poisson disk sampling from 20
test models to form the test set.
### 4.2 Parameters Setting
The voxel side length $l$ and the range of the distance between seed point and surface, $[D_{l},D_{u}]$, determine the number of seed points. However, the number also depends on the shape of the input point cloud. To ensure that
enough points are generated, we set $l=0.004$ and
$\left[D_{l},D_{u}\right]=\left[0.011,0.015\right]$. Under this condition, the
minimum number of generated points is 99001 (Chair), which meets the demands
of 16$\times$ or higher scale upsampling. The specific direction
${\mathbf{n}}_{t}$ in normalization can be arbitrarily chosen. We set
${\mathbf{n}}_{t}=\left(1,0,0\right)$ in our experiments. The number of
nearest points $k$ is set as 100 in Section 2.2 and 30 in Section 2.3. The
number of nearest points $M$ for computing
$\mathbb{\text{Dist}}({\mathbf{c}}_{(x,y,z)},{\mathcal{S}})$ affects the
number of outliers and continuity of seed points. We set $M=10$ and further
discuss the effect of different $M$ in ablation study.
Scale | $2\times$ | $4\times$
---|---|---
$p$ ($10^{-2}$) | 0.2% $\downarrow$ | 0.4% $\downarrow$ | 0.6% $\downarrow$ | 0.8% $\downarrow$ | 1.0% $\downarrow$ | 0.2% $\downarrow$ | 0.4% $\downarrow$ | 0.6% $\downarrow$ | 0.8% $\downarrow$ | 1.0% $\downarrow$
Fixed Scale | PU-Net [21] | 3.05 (6) | 2.33 (6) | 2.03 (6) | 1.86 (6) | 1.75 (6) | 2.72 (7) | 2.19 (7) | 1.97 (7) | 1.84 (7) | 1.76 (7)
MPU [20] | - | - | - | - | - | 2.53 (6) | 2.03 (6) | 1.80 (6) | 1.67 (6) | 1.59 (6)
PU-GAN [7] | 2.46 (3) | 1.86 (3) | 1.62 (4) | 1.50 (4) | 1.43 (4) | 2.45 (4) | 1.94 (5) | 1.73 (5) | 1.62 (5) | 1.56 (5)
PU-GCN [13] | 2.68 (5) | 2.06 (5) | 1.80 (5) | 1.65 (5) | 1.57 (5) | 2.40 (3) | 1.93 (4) | 1.72 (4) | 1.61 (4) | 1.54 (4)
PU-DR [8] | 1.83 (1) | 1.34 (1) | 1.17 (1) | 1.09 (1) | 1.06 (1) | 1.77 (2) | 1.46 (2) | 1.34 (2) | 1.28 (2) | 1.24 (2)
Arbitrary Scale | Meta-PU [19] | 2.53 (4) | 1.86 (3) | 1.58 (3) | 1.43 (3) | 1.35 (3) | 2.50 (5) | 1.87 (3) | 1.60 (3) | 1.45 (3) | 1.37 (3)
Proposed | 1.96 (2) | 1.50 (2) | 1.33 (2) | 1.25 (2) | 1.21 (2) | 1.72 (1) | 1.40 (1) | 1.27 (1) | 1.21 (1) | 1.19 (1)
Scale | $8\times$ | $16\times$
p ($10^{-2}$) | 0.2% $\downarrow$ | 0.4% $\downarrow$ | 0.6% $\downarrow$ | 0.8% $\downarrow$ | 1.0% $\downarrow$ | 0.2% $\downarrow$ | 0.4% $\downarrow$ | 0.6% $\downarrow$ | 0.8% $\downarrow$ | 1.0% $\downarrow$
Fixed Scale | PU-Net [21] | 2.64 (5) | 2.21 (6) | 2.02 (6) | 1.91 (6) | 1.84 (6) | 2.86 (6) | 2.42 (6) | 2.22 (7) | 2.10 (7) | 2.02 (7)
MPU [20] | - | - | - | - | - | 2.30 (4) | 1.96 (4) | 1.82 (4) | 1.73 (4) | 1.69 (4)
PU-GAN [7] | 1.72 (2) | 1.47 (3) | 1.38 (3) | 1.34 (3) | 1.32 (3) | 1.79 (3) | 1.57 (3) | 1.48 (3) | 1.44 (3) | 1.41 (3)
PU-GCN [13] | 2.31 (4) | 1.93 (4) | 1.75 (4) | 1.65 (5) | 1.59 (5) | 2.42 (5) | 2.07 (5) | 1.91 (5) | 1.82 (5) | 1.76 (5)
PU-DR [8] | 1.42 (1) | 1.20 (1) | 1.13 (1) | 1.11 (1) | 1.11 (1) | 1.56 (1) | 1.38 (1) | 1.32 (1) | 1.29 (1) | 1.28 (1)
Arbitrary Scale | Meta-PU [19] | 2.72 (6) | 2.05 (5) | 1.76 (5) | 1.60 (4) | 1.51 (4) | 3.27 (7) | 2.51 (7) | 2.14 (6) | 1.93 (6) | 1.79 (6)
Proposed | 1.72 (2) | 1.45 (2) | 1.34 (2) | 1.29 (2) | 1.27 (2) | 1.75 (2) | 1.51 (2) | 1.41 (2) | 1.36 (2) | 1.33 (2)
Table 2: Uniformity performance comparison with respect to NUC scores. The
ranking numbers are also provided.
Figure 2: $4\times$ point upsampling results of Chair, Camel and Fandisk. (a)
the input point cloud; (b) the ground truth; (c) to (i) the results of PU-Net
[21] , MPU [20], PU-GAN [7], PU-GCN [13], PU-DR [8], Meta-PU [19] and ours.
Please enlarge the PDF for more details.
### 4.3 Objective Performance Comparison
Objective Evaluation. We employ six popular metrics for objective evaluation:
1) Chamfer Distance (CD) and Earth Mover Distance (EMD) [21]: which evaluate
the similarity between the predicted points and the ground truth ones in the
Euclidean space. For both metrics, smaller is better. 2) F-score [18]: which
treats upsampling as a classification problem. For this metric, larger is
better. 3) mean and std [21]: which evaluate the distance between the
predicted point cloud and ground truth mesh. For both metrics, smaller is
better. 4) Normalized Uniformity Coefficient (NUC) [21]: which evaluates the
uniformity of points on randomly selected disk with different area percentage
$p=0.2\%,0.4\%,0.6\%,0.8\%,1.0\%$. For this metric, smaller is better.
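For reference, a minimal implementation of the symmetric Chamfer distance is sketched below; published variants differ in whether distances are squared and how they are averaged, so this is one common convention rather than necessarily the exact metric of [21].

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P and Q, (n, 3) arrays.
    One common convention; variants differ in squaring and averaging."""
    d_pq, _ = cKDTree(Q).query(P)   # each point of P to its nearest in Q
    d_qp, _ = cKDTree(P).query(Q)   # each point of Q to its nearest in P
    return d_pq.mean() + d_qp.mean()
```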
In Table 1, we offer the comparison results for four scale factors
$\left[2\times,4\times,8\times,16\times\right]$ with respect to CD, EMD,
F-score, mean and std. Surprisingly, it can be found that, although our model
is trained in a self-supervised manner without accessing to the ground-truth,
it achieves competitive performance with those supervised learning based ones
with respect to metrics CD, EMD, F-score and mean. Taking CD for example, our
method is ranked #2, #2, #2 and #3 among seven compared methods for
$\left[2\times,4\times,8\times,16\times\right]$ respectively. Similar results
can be found for F-score, for which our method is ranked #2 for all cases.
Note that our method performs worst with respect to std, because
outliers cannot be completely removed.
Table 2 shows the uniformity evaluation results with respect to NUC. Our
method is ranked #2 for $\left[2\times,8\times,16\times\right]$, just below PU-
DR [8]. In the case of $4\times$, our method is ranked #1. These results
demonstrate that the proposed method produces dense and uniform point clouds.
Inference Time Cost Comparison. This experimental analysis is conducted on a
server with two 1080Ti GPUs. In our inference process, estimation of direction and distance are the most time-consuming steps. According to the experiments, generating 40000 projected points by our method costs 46 s on average. As
a comparison, the time costs of generating 40000 points ($16\times$) are
342.4 s by MPU [20] and 0.228 s by Meta-PU [19]. It should be noted that the estimation of each point is performed independently and can therefore be parallelized to speed up inference significantly.
### 4.4 Subjective Performance Comparison
Figure 3: $4\times$, $16\times$ and $64\times$ upsampling results of Moai.
Figure 4: $4\times$ Upsampling results of Eight with varying size of input.
The first row are the inputs, the second row are the corresponding upsampling
results.
Figure 5: $4\times$ Upsampling results of Star with different additive
Gaussian noise levels.
Figure 6: $8\times$ Upsampling result on a real-world sample from KITTI. Please enlarge the PDF for more details.
Figure 7: Ablation on the choice of $M$. (a) input point cloud, (b) $M=3$, (c)
$M=10$.
Figure 8: Ablation on the outliers removal. (a) input point cloud, (b) without
outliers removal, (c) with outliers removal.
Visual Comparison. Figure 2 illustrates the $4\times$ upsampling results
generated by our method and the compared state-of-the-art methods on three
models Chair, Camel and Fandisk. The results show that our method achieves
better visual performance than other methods. The produced high-resolution
point clouds are dense and uniform, which also have continuous and complete
contours. Specifically, results of the highlighted part of Chair show that our
method succeeds in recovering structure from very few points; results of the
highlighted part of Camel show that our method can handle complex contour.
Furthermore, our method can reconstruct the edge region very well, as
demonstrated in the highlighted part of Fandisk. The above visual comparisons
verify the superiority of our proposed method.
Results on Variable Scales. Figure 3 shows the upsampling results of Moai with
different scale factors.
It can be observed that the contours of all the results are consistent, and
the uniformity of points is well preserved.
Robustness against Varying Sizes of Input. Figure 4 shows the $4\times$
upsampling results of Eight with different sizes of input point sets. Our
method generates consistent outlines regardless of the number of input points.
Figure 5 shows the $4\times$ upsampling results of Star with noise levels 0%,
1% and 2%. Our scheme also works well on the noisy input while the uniformity
is well preserved. Overall, our method is robust to the input size and noise.
Result on Real-world Sample. We choose one real-world sample from KITTI [3] to
evaluate the generalization capability of our method. In Figure 6, the
upsampling results of three different regions are presented. It can be found
that, even though the input point cloud is sparse and non-uniform, our scheme
can still recover the high-resolution one very well.
### 4.5 Ablation Study
About the choice of $M$. $M$ works in the distance approximation of a seed
point to the underlying surface, which affects the number of outliers and the
continuity of projected points. When $M$ increases, both the continuity and
the number of outliers increase. To show the effect of different $M$, we provide the 4$\times$ upsampling results of Cow with $M=3$ and $M=10$. The results are shown in Figure 7. It can be observed that, when $M=3$, no outlier is introduced in the blue box. However, the surface is discontinuous in the red box. When $M=10$, the surface in the red box becomes continuous; however, a small number of outliers is still introduced in the blue box even after outliers removal.
About the necessity of outliers removal. To show the effect of outliers
removal, we provide the 4$\times$ upsampling results of Cow with and without
outliers removal. The results are shown in Figure 8. It can be observed that
the outliers removal does not affect the smooth region in the red box, while
it can remove most of the outliers in the blue box.
### 4.6 Limitations
The limitations of our method are two-fold. First, even when outliers removal is performed, a certain number of outliers still remain. Second, our method cannot precisely control the size of the upsampled point set. We have to
first generate a dense one with over-sampled points and then adjust the number
of vertices to the target number by the farthest point sampling algorithm.
## 5 Conclusion
In this paper, we present a novel and effective point clouds upsampling method
via implicit neural representation, which can achieve self-supervised and
arbitrary-scale upsampling simultaneously. We formulate point clouds
upsampling as the task of seeking nearest projection points on the implicit
surface for seed points, which can be done by two implicit neural functions
trained without the ground truth dense point clouds. Extensive experimental
results demonstrate that our method can produce high-quality dense point
clouds that are uniform and complete, and achieves competitive objective
performance and even better visual performance compared with state-of-the-art
supervised methods.
## 6 Acknowledgements
This work was supported by National Key Research and Development Project under
Grant 2019YFE0109600, National Natural Science Foundation of China under
Grants 61922027, 6207115 and 61932022.
## References
* [1] Marc Alexa, Johannes Behr, Daniel Cohen-Or, Shachar Fleishman, David Levin, and Claudio T. Silva. Computing and rendering point set surfaces. IEEE Transactions on visualization and computer graphics, 9(1):3–15, 2003.
* [2] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
* [3] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE conference on computer vision and pattern recognition, pages 3354–3361. IEEE, 2012.
* [4] Xuecai Hu, Haoyuan Mu, Xiangyu Zhang, Zilei Wang, Tieniu Tan, and Jian Sun. Meta-SR: A magnification-arbitrary network for super-resolution. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1575–1584, 2019.
* [5] Hui Huang, Dan Li, Hao Zhang, Uri Ascher, and Daniel Cohen-Or. Consolidation of unorganized point clouds for surface reconstruction. ACM transactions on graphics (TOG), 28(5):1–7, 2009.
* [6] Hui Huang, Shihao Wu, Minglun Gong, Daniel Cohen-Or, Uri Ascher, and Hao Zhang. Edge-aware point set resampling. ACM transactions on graphics (TOG), 32(1):1–12, 2013.
* [7] Ruihui Li, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. PU-GAN: A point cloud upsampling adversarial network. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 7202–7211, 2019.
* [8] Ruihui Li, Xianzhi Li, Pheng-Ann Heng, and Chi-Wing Fu. Point cloud upsampling via disentangled refinement. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 344–353, 2021.
* [9] Xinhai Liu, Xinchen Liu, Zhizhong Han, and Yu-Shen Liu. SPU-Net: Self-supervised point cloud upsampling by coarse-to-fine reconstruction with self-projection optimization, 2020.
* [10] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4460–4470, 2019.
* [11] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. DeepSDF: Learning continuous signed distance functions for shape representation. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 165–174, 2019.
* [12] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
* [13] Guocheng Qian, Abdulellah Abualshour, Guohao Li, Ali Thabet, and Bernard Ghanem. PU-GCN: Point cloud upsampling using graph convolutional networks. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11678–11687, 2021.
* [14] Yue Qian, Junhui Hou, Sam Kwong, and Ying He. Deep magnification-flexible upsampling over 3d point clouds. IEEE Transactions on Image Processing, 30:8354–8367, 2021.
* [15] David Stutz and Andreas Geiger. Learning 3d shape completion from laser scan data with weak supervision. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1955–1964, 2018.
* [16] David Stutz and Andreas Geiger. Learning 3d shape completion under weak supervision. CoRR, abs/1805.07290, 2018.
* [17] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5):1–12, 2019.
* [18] Huikai Wu, Junge Zhang, and Kaiqi Huang. Point cloud super resolution with adversarial residual graph networks. arXiv preprint arXiv:1908.02111, 2019.
* [19] Shuquan Ye, Dongdong Chen, Songfang Han, Ziyu Wan, and Jing Liao. Meta-PU: An arbitrary-scale upsampling network for point cloud. IEEE Transactions on Visualization and Computer Graphics, pages 1–1, 2021.
* [20] Wang Yifan, Shihao Wu, Hui Huang, Daniel Cohen-Or, and Olga Sorkine-Hornung. Patch-based progressive 3d point set upsampling. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5951–5960, 2019.
* [21] Lequan Yu, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. PU-Net: Point cloud upsampling network. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2790–2799, 2018.
* [22] Yifan Zhao, Le Hui, and Jin Xie. SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering, page 2214–2223. Association for Computing Machinery, New York, NY, USA, 2021.
# The Power of Two Matrices in Spectral Algorithms
Souvik Dhara⋆, Julia Gaudio†, Elchanan Mossel‡, Colin Sandon∣
⋆Division of Applied Mathematics, Brown University
†Department of Industrial Engineering and Management Sciences, Northwestern University
‡Department of Mathematics, Massachusetts Institute of Technology
∣Department of Mathematics, École Polytechnique Fédérale de Lausanne
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
Spectral algorithms are some of the main tools in optimization and inference
problems on graphs. Typically, the graph is encoded as a matrix and
eigenvectors and eigenvalues of the matrix are then used to solve the given
graph problem. Spectral algorithms have been successfully used for graph
partitioning, hidden clique recovery and graph coloring. In this paper, we
study the power of spectral algorithms using two matrices in a graph
partitioning problem. We use two different matrices resulting from two
different encodings of the same graph and then combine the spectral
information coming from these two matrices.
We analyze a two-matrix spectral algorithm for the problem of identifying
latent community structure in large random graphs. In particular, we consider
the problem of recovering community assignments _exactly_ in the censored
stochastic block model, where each edge status is revealed independently with
some probability. We show that spectral algorithms based on two matrices are
optimal and succeed in recovering communities up to the information theoretic
threshold. On the other hand, we show that for most choices of the parameters,
any spectral algorithm based on one matrix is suboptimal. This is in contrast
to our prior works (2022a, 2022b) which showed that for the symmetric
Stochastic Block Model and the Planted Dense Subgraph problem, a spectral
algorithm based on one matrix achieves the information theoretic threshold. We
additionally provide more general geometric conditions for the
(sub)-optimality of spectral algorithms.
_Acknowledgement:_ S.D., E.M and C.S. were partially supported by Vannevar
Bush Faculty Fellowship ONR-N00014-20-1-2826. S.D. was supported by Simons-
Berkeley Research Fellowship and Vannevar Bush Faculty Fellowship
ONR-N0014-21-1-2887. E.M. and C.S. were partially supported by NSF award
DMS-1737944. E.M. was partially supported by Simons Investigator award
(622132) and by ARO MURI W911NF1910217. J.G. was partially supported by NSF
award CCF-2154100. Part of this work was completed while S.D. and C.S. were at
the MIT Mathematics Department, and also while S.D. was at The Simons
Institute for the Theory of Computing.
## 1\. Introduction
Spectral algorithms are some of the main tools in graph algorithms and
combinatorial optimization. Some famous and classical examples include
spectral algorithms for the hidden clique problem [4], graph bisection [6],
and graph coloring [3, 5]. These algorithms encode the graph into a matrix by
recording the status of each present/absent edge of the graph as an entry of
the matrix. The most natural encoding is the adjacency matrix representation,
where edges are encoded by the value $1$ and non-edges are encoded by the
value $0$. Given the encoding matrix, a small number of eigenvectors for this
matrix are used to solve the given graph problem.
> Our interest in this work lies in graph problems for which using multiple
> matrix representations gives an advantage over using a single matrix.
In particular, we are interested in the power of spectral algorithms in such a
scenario in the context of finding clusters in a planted partition model
called the Censored Stochastic Block Model (CSBM). In this model, there are
two clusters of approximate sizes $n\rho$ and $n(1-\rho)$, and the edges
inside each of the clusters appear independently with probabilities
$p_{1},p_{2}$ respectively, while edges between the two clusters appear with
probability $q$. Moreover, each edge status is revealed with probability
$t\log n/n$ for some fixed $t>0$. Thus the statuses of most edges are unknown.
The censored model was introduced to model the fact that in many social
networks, not all of the connections between individual nodes are known.
Given an instance of a censored graph with no vertex labels, the problem is to
recover the partitions _exactly_ with high probability. This is often referred
to as the _exact recovery problem_. We note that some applications of spectral
algorithms to the exact recovery problem use an additional combinatorial
clean-up stage (see e.g. [7, 16, 17]), but we follow [1, 9, 8] in studying
spectral algorithms that do not look at the graph after the top eigenvectors
have been found. This is partially motivated by the fact that most real
applications of spectral algorithms do not include a combinatorial clean-up
stage.
The classical case in the literature considers exact recovery in the
Stochastic Block Model where there is no censoring and
$p_{1},p_{2},q=\Theta(\log n/n)$. In order to achieve exact recovery up to the
information theoretic boundary, prior works used some trimming and post-
processing steps together with the spectral algorithm [7, 16, 17]. However,
the question of whether a direct spectral algorithm based on the top two
eigenvectors of the of the adjacency matrix would be optimal remained open
until the recent resolution by Abbe, Fan, Wang, and Zhong [1] for
$p_{1}=p_{2}$. In the _censored_ SBM, there are three possible observations
(present, absent, or censored), so spectral recovery using a binary-valued
adjacency matrix is suboptimal. Instead, one can use a ternary-valued encoding
matrix. It was recently shown in [9, 8] that, for some special cases of the
planted partition model such as the planted dense subgraph problem $(p_{2}=q)$
and the symmetric stochastic block model ($p_{1}=p_{2},\rho=1/2$), a spectral
algorithm based on the top two eigenvectors of a signed adjacency matrix is
optimal. This raises the question:
> _Are spectral algorithms based on the top eigenvectors of a signed adjacency
> matrix optimal for all censored stochastic block models?_
The main contributions of this article are as follows:
1. (1)
In contrast with the success stories in [9, 8], whenever $p_{1},p_{2},q$ are
distinct, a spectral algorithm based on the top two eigenvectors of a signed
adjacency matrix is always suboptimal (Theorem 1.7 Part 2).
2. (2)
We propose spectral algorithms with two encoding matrices, where we take an
appropriate linear combination of the corresponding top eigenvectors. We show
that these algorithms are always optimal (Theorem 1.10). The optimality of
spectral algorithms with two matrices is also shown in the more general
setting $k\geq 2$ communities (Theorem 1.12).
Thus, these results exhibit a strict separation between spectral algorithm
classes with one versus multiple encoding matrices, and this separation can be
realized even for elementary planted partition models. To our knowledge, this
general phenomenon was not observed in the substantial prior literature on
recovery problems in planted partition models.
### 1.1. Model and Objective.
We start by defining the Censored Stochastic Block Model.
###### Definition 1.1 (Censored Stochastic Block Model (CSBM)).
Let $\rho\in(0,1)^{k}$ be such that $\sum_{i=1}^{k}\rho_{i}=1$ and let
$P\in(0,1)^{k\times k}$ be a symmetric matrix. Suppose we have $n$ vertices
and each vertex $v\in[n]$ is assigned a community assignment
$\sigma_{0}(v)\in[k]$ according to the distribution $\rho$ independently,
i.e., $\mathbb{P}(\sigma_{0}(v)=i)=\rho_{i}$ for $i\in[k]$.
* $\triangleright$
For $u,v\in[n]$ and $u\neq v$, the edge $\\{u,v\\}$ exists independently with
probability $P_{\sigma_{0}(u)\sigma_{0}(v)}$. Self-loops do not occur.
* $\triangleright$
For every pair of vertices $\\{u,v\\}$, its connectivity status is revealed
independently with probability $\frac{t\log n}{n}$, and is censored otherwise
for some fixed $t>0$.
The output is a random graph with edge statuses given by
$\\{\texttt{present},\texttt{absent},\texttt{censored}\\}$. The distribution
of this random graph is called the Censored Stochastic Block Model. We write
$G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$ to denote a graph generated from the
above model, with vertex labels removed (i.e., $\sigma_{0}$ unknown).
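For illustration, the generative process of Definition 1.1 can be simulated directly. The following Python sketch is ours, not part of the paper; the function name and the edge-list representation of revealed statuses are arbitrary choices.

```python
import numpy as np

def sample_csbm(n, rho, P, t, rng=None):
    # Labels sigma_0(v) are i.i.d. from rho; each pair {u,v} carries an
    # edge with probability P[sigma(u), sigma(v)], and its status is
    # revealed independently with probability t*log(n)/n, else censored.
    rng = np.random.default_rng() if rng is None else rng
    sigma = rng.choice(len(rho), size=n, p=rho)
    alpha = t * np.log(n) / n
    present, absent = [], []
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < alpha:  # this pair's status is revealed
                if rng.random() < P[sigma[u], sigma[v]]:
                    present.append((u, v))
                else:
                    absent.append((u, v))
    return sigma, present, absent
```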
###### Definition 1.2 (Exact recovery).
Consider the $n\times k$ membership matrix $S_{0}$, where
$(S_{0})_{ui}=\mathbbm{1}\\{\sigma_{0}(u)=i\\}$, i.e., the $u$-th row
indicates the community membership of $u$. Given an estimator $\hat{\sigma}$,
construct $\hat{S}$ similarly as
$\hat{S}_{ui}=\mathbbm{1}\\{\hat{\sigma}(u)=i\\}$. We say that an estimator
achieves _exact recovery_ if there exists a $k\times k$ permutation matrix $J$
such that $\hat{S}J=S_{0}$.
### 1.2. Information theoretic boundary.
We start by discussing the information theoretic threshold. The result will be
stated in terms of a Chernoff–Hellinger divergence, introduced by Abbe and
Sandon [2].
###### Definition 1.3 (Chernoff–Hellinger divergence).
Given two vectors $\mu,\nu\in(\mathbb{R}_{+}\setminus\\{0\\})^{l}$, define
$\mathrm{CH}_{\xi}(\mu,\nu)=\sum_{i\in[l]}\big{[}\xi\mu_{i}+(1-\xi)\nu_{i}-\mu_{i}^{\xi}\nu_{i}^{1-\xi}\big{]}\quad\text{
for }\xi\in[0,1].$
The Chernoff–Hellinger divergence of $\mu$ and $\nu$ is defined as
$\begin{split}\Delta_{+}(\mu,\nu)=\max_{\xi\in[0,1]}\mathrm{CH}_{\xi}(\mu,\nu).\end{split}$
(1.1)
Define
$\begin{split}t_{c}:=\Big{(}\min_{i\neq
j}\Delta_{+}(\theta_{i},\theta_{j})\Big{)}^{-1},\quad\text{where
}\theta_{i}=(\rho_{r}P_{ri},\rho_{r}(1-P_{ri}))_{r\in[k]}\in\mathbb{R}^{2k}.\end{split}$
(1.2)
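Since $\mathrm{CH}_{\xi}(\mu,\nu)$ is concave in $\xi$ (linear terms plus the concave $-\mu_{i}^{\xi}\nu_{i}^{1-\xi}$), the maximization in (1.1) is a one-dimensional concave problem. The sketch below (our own helper names; a fine grid search stands in for exact maximization) evaluates $\Delta_{+}$ and the threshold $t_{c}$ of (1.2) numerically:

```python
import numpy as np

def chernoff_hellinger(mu, nu, xi):
    # CH_xi(mu, nu) = sum_i [xi*mu_i + (1-xi)*nu_i - mu_i^xi * nu_i^(1-xi)]
    return np.sum(xi * mu + (1 - xi) * nu - mu**xi * nu**(1 - xi))

def delta_plus(mu, nu, grid=10001):
    # Delta_+(mu, nu) = max_{xi in [0,1]} CH_xi(mu, nu); CH is concave
    # in xi, so a fine grid search suffices for a numerical sketch.
    return max(chernoff_hellinger(mu, nu, xi)
               for xi in np.linspace(0.0, 1.0, grid))

def critical_t(rho, P):
    # t_c = (min_{i != j} Delta_+(theta_i, theta_j))^{-1}, with
    # theta_i = (rho_r * P_{ri}, rho_r * (1 - P_{ri}))_{r in [k]}.
    k = len(rho)
    thetas = [np.concatenate([rho * P[:, i], rho * (1 - P[:, i])])
              for i in range(k)]
    return 1.0 / min(delta_plus(thetas[i], thetas[j])
                     for i in range(k) for j in range(k) if i != j)

# Example with arbitrary parameter values:
rho = np.array([0.5, 0.5])
P = np.array([[0.7, 0.3],
              [0.3, 0.6]])
print(critical_t(rho, P))
```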
###### Theorem 1.4 (Information theoretic threshold).
Let $G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$. If $t<t_{c}$, then for any
estimator $\hat{\sigma}$,
$\displaystyle\lim_{n\to\infty}\mathbb{P}(\hat{\sigma}\text{ achieves exact
recovery})=0.$
### 1.3. Spectral algorithms.
To compare the performance of spectral algorithms with one matrix against
spectral algorithms with more than one matrix, we first specialize to the case
of two communities.
To define spectral algorithms formally, we first define the thresholding
procedures that may be applied to vectors. These procedures will be
applied to the leading eigenvectors of the encoding matrices.
Algorithm 1 Classify
1:Censored graph $G$ on $n$ vertices, vectors
$(u_{i})_{i=1}^{m}\subset\mathbb{R}^{n}$, and scalars
$\gamma_{1},\dots,\gamma_{m},T\in\mathbb{R}$.
2:Community classification.
3:Compute possible score vectors
$U=\\{\sum_{i=1}^{m}s_{i}\gamma_{i}u_{i}\text{ for all }s_{i}\in\\{\pm
1\\}\\}.$
4:Compute possible assignments
$\hat{S}(U)=\\{\hat{\sigma}=\mathrm{sign}(u-T1):u\in U\\}$ and output a
community assignment $1+(1+\hat{\sigma})/2$ that maximizes the posterior
probability $\mathbb{P}(G\mid\hat{\sigma})$ over $\hat{\sigma}\in\hat{S}(U)$.
Since eigenvectors are determined up to a sign flip, Step 2 above is required
in order to resolve this sign ambiguity. This will be explained in more detail
in Remark 1.13.
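As an illustration, the Classify routine takes only a few lines. The Python sketch below is ours (hypothetical names; communities are encoded as $\{0,1\}$ rather than $\{1,2\}$, and the score $\mathbb{P}(G\mid\hat{\sigma})$ is computed in log-space, dropping the $\hat{\sigma}$-independent contribution of censored pairs):

```python
import itertools
import numpy as np

def log_likelihood(present, absent, sigma, P):
    # log P(G | sigma): revealed present edges contribute log P_{ij},
    # revealed absent pairs log(1 - P_{ij}); censored pairs contribute
    # a sigma-independent factor and can be dropped.
    ll = 0.0
    for u, v in present:
        ll += np.log(P[sigma[u], sigma[v]])
    for u, v in absent:
        ll += np.log(1 - P[sigma[u], sigma[v]])
    return ll

def classify(present, absent, P, vectors, gammas, T=0.0):
    # Enumerate sign flips s in {-1,+1}^m, form sum_i s_i*gamma_i*u_i,
    # threshold at T, and keep the assignment with the best likelihood.
    best, best_ll = None, -np.inf
    for s in itertools.product([1, -1], repeat=len(vectors)):
        score = sum(si * gi * ui for si, gi, ui in zip(s, gammas, vectors))
        sigma = (score > T).astype(int)  # communities encoded as {0, 1}
        ll = log_likelihood(present, absent, sigma, P)
        if ll > best_ll:
            best, best_ll = sigma, ll
    return best
```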
###### Definition 1.5 (Signed adjacency matrix).
Given $y>0$ and a graph $G$ with edge statuses
$\\{\texttt{present},\texttt{absent},\texttt{censored}\\}$, define the signed
adjacency matrix $A(G,y)$ as the $n\times n$ matrix with
$\displaystyle A_{ij}$ $\displaystyle=\begin{cases}1&\text{if }\\{i,j\\}\text{
is present}\\\ -y&\text{if }\\{i,j\\}\text{ is absent}\\\ 0&\text{if
}\\{i,j\\}\text{ is censored}.\end{cases}$
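A direct implementation of this encoding (an illustrative sketch using a dense matrix for simplicity, with the edge-list conventions of the earlier sketches) is:

```python
import numpy as np

def signed_adjacency(n, present, absent, y):
    # A_ij = 1 on present edges, -y on revealed non-edges, and 0 on
    # censored pairs; A is symmetric with zero diagonal.
    A = np.zeros((n, n))
    for u, v in present:
        A[u, v] = A[v, u] = 1.0
    for u, v in absent:
        A[u, v] = A[v, u] = -y
    return A
```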
Let us define the class of algorithms Spectral-One that use a single encoding
matrix.
###### Definition 1.6 (Spectral-One).
An algorithm $\mathcal{A}(G,y,a_{1},a_{2},T)$ in the Spectral-One class takes
a censored graph $G$ as input, an encoding parameter $y\in\mathbb{R}_{+}$, and
scalars $a_{1},a_{2},T\in\mathbb{R}$. The algorithm then computes the top two
eigenvectors $u_{1},u_{2}$ of $A=A(G,y)$, and gives the output of
$\textsc{Classify}((u_{i})_{i=1}^{2},(a_{i})_{i=1}^{2},T)$. We denote the
output of algorithm $\mathcal{A}$ in this class as
$\hat{\sigma}_{\scriptscriptstyle\mathcal{A}}$.
For the two community case, we will always consider the parameters:
$P=\begin{pmatrix}p_{1}&q\\\
q&p_{2}\end{pmatrix}\quad\bar{\rho}=(\rho,1-\rho)\quad\text{and}\quad\rho,p_{1},p_{2},q\in(0,1).$
(1.3)
###### Theorem 1.7 (Failure of Spectral-One in most cases).
Let $G\sim\textsc{CSBM}_{n}^{2}(\bar{\rho},P,t)$ with $\bar{\rho},P$ given by
(1.3).
1. (1)
Suppose that $p_{1},p_{2},q$ are not distinct. If $p_{1}=p_{2}=p$, then assume
$p+q\neq 1$ (the case $p_{1}=p_{2}=p$, $\rho=\frac{1}{2}$ is covered in [9]
without the assumption $p+q\neq 1$). In this case, spectral algorithms succeed
for $t>t_{c}$. There exist explicitly computable constants
$y\in\mathbb{R}_{+}$ and $\gamma_{1},\gamma_{2}\in\mathbb{R}$ such that the
algorithm $\mathcal{A}=\mathcal{A}(G,y,\gamma_{1},\gamma_{2},0)$ from the
class Spectral-One satisfies
$\lim_{n\to\infty}\mathbb{P}(\hat{\sigma}_{\scriptscriptstyle\mathcal{A}}\text{
achieves exact recovery})=1,\quad\text{ for any }t>t_{c}.$
In particular, Algorithm 5.1 produces such an estimator.
2. (2)
Suppose that $p_{1},p_{2},q$ are distinct. There exists $\delta_{0}>0$ such
that, if $t<t_{c}+\delta_{0}$, then for any $\mathcal{A}\in\textsc{Spectral-
One}$,
$\lim_{n\to\infty}\mathbb{P}(\hat{\sigma}_{\scriptscriptstyle\mathcal{A}}\text{
achieves exact recovery})=0.$
For the case $p_{1}=p_{2}$, Theorem 1.7 Part 1 generalizes the result of [9,
Theorem 2.2] to the case $\rho\neq 1/2$. Part 2 of the result is in sharp
contrast with the results in [9, 8]; together, these results essentially say
that, remarkably, the censored planted dense subgraph problem ($p_{2}=q$) and
the symmetric censored stochastic block model ($p_{1}=p_{2}$) are the only
cases where an algorithm from Spectral-One is successful (for the edge case
$p_{1}=p_{2}=p$ and $p+q=1$, the rank of $\mathbb{E}[A]$ is 1 for the value of
$y$ that we would want to use, which is why it is ruled out in Theorem 1.7
Part 1). The possible limitation of Spectral-One was shown in [8, Theorem 2.6]
for the special case of $q=1/2$, $p_{1}=1-p_{2}$ and $\rho=1/2$.
###### Remark 1.8.
It is worthwhile to note that the choice of encoding parameters $\\{1,-y,0\\}$
is completely general and one does not get a more powerful class of algorithms
by allowing an arbitrary ternary encoding. In fact, as our proof shows, if
$p_{1},p_{2},q$ are distinct, then even if one allows arbitrary encodings, the
Spectral-One algorithms still fail sufficiently near the threshold (see Remark
5.2).
Next, we will show that spectral algorithms with two matrices are always
optimal for the recovery of two communities. Let us define the class of
algorithms Spectral-Two that uses two encoding matrices instead of one.
###### Definition 1.9 (Spectral-Two).
An algorithm $\mathcal{A}(G,y_{1},y_{2},(a_{i})_{i=1}^{4},T)$ in the Spectral-
Two class takes as input a censored graph $G$, two encoding parameters
$y_{1},y_{2}\in\mathbb{R}_{+}$ with $y_{1}\neq y_{2}$ and
$(a_{i})_{i=1}^{4}\subset\mathbb{R}$, $T\in\mathbb{R}$. The algorithm
considers two signed adjacency matrices $A_{1}=A(G,y_{1})$ and
$A_{2}=A(G,y_{2})$, and computes their top two eigenvectors
$u_{1}^{r},u_{2}^{r}$, for $r=1,2$. Then the algorithm outputs
$\textsc{Classify}((u_{i}^{r})_{i,r=1,2},(a_{i})_{i=1}^{4},T)$. As before, we
denote the output of algorithm $\mathcal{A}$ from this class as
$\hat{\sigma}_{\scriptscriptstyle\mathcal{A}}$.
###### Theorem 1.10 (Spectral-Two always succeeds in recovering two
communities).
Let $G\sim\textsc{CSBM}_{n}^{2}(\bar{\rho},P,t)$ with $\bar{\rho},P$ given by
(1.3). There exists a set $\mathcal{Y}\subset\mathbb{R}_{+}$ with
$|\mathcal{Y}|\leq 3$ such that for any $y_{1}\neq y_{2}$ and
$y_{1},y_{2}\notin\mathcal{Y}$, there exist explicit
$(a_{i})_{i=1}^{4}\subset\mathbb{R}$ such that the algorithm
$\mathcal{A}(G,y_{1},y_{2},(a_{i})_{i=1}^{4},0)$ from the class Spectral-Two
satisfies
$\lim_{n\to\infty}\mathbb{P}(\hat{\sigma}_{\scriptscriptstyle\mathcal{A}}\text{
achieves exact recovery})=1,\quad\text{ for any }t>t_{c}.$
In particular, Algorithm 5.2 produces such an estimator.
Theorem 1.10 not only shows that Spectral-Two algorithms are always
successful, but also shows that the choice of the encoding parameters
$y_{1},y_{2}$ does not matter too much as long as $y_{1}\neq y_{2}$ and they
both lie outside a finite exception set. For example, we can choose
$y_{1},y_{2}\sim\text{Uniform}[0,1]$ independently. Avoiding the finite
exception set helps us ensure that $A_{1}$ and $A_{2}$ both have two
eigenvectors with large, distinct eigenvalues. On the other hand, in Theorem
1.7 Part 1 for Spectral-One algorithms, the choice of the encoding is quite
important. In fact, for $p_{1}=p_{2}=p$ or $p_{1}=p$ and $p_{2}=q$, the only
choice of $y$ that yields an optimal algorithm is
$\log(\frac{1-q}{1-p})/\log\frac{p}{q}$. Thus, Spectral-Two algorithms lead
to a much broader and more flexible class of algorithms than Spectral-One.
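Putting the pieces together, a Spectral-Two style estimator is only a few lines on top of the earlier sketches (`signed_adjacency` and `classify` above). This is an illustrative sketch, not the paper's Algorithm 5.2: in particular, taking the "top two" eigenvectors by eigenvalue magnitude is one possible reading, and drawing $y_{1},y_{2}$ uniformly at random avoids the finite exception set almost surely.

```python
import numpy as np

def spectral_two(n, present, absent, P, a, rng=None):
    # Two random encoding parameters y1 != y2 (outside the finite
    # exception set almost surely), top-two eigenvectors of each signed
    # matrix, then Classify over the four vectors with coefficients a.
    rng = np.random.default_rng() if rng is None else rng
    y1, y2 = rng.uniform(0.0, 1.0, size=2)
    vectors = []
    for y in (y1, y2):
        A = signed_adjacency(n, present, absent, y)
        eigvals, eigvecs = np.linalg.eigh(A)
        top = np.argsort(np.abs(eigvals))[-2:]   # leading pair by magnitude
        vectors.extend(eigvecs[:, i] for i in top)
    return classify(present, absent, P, vectors, a)
```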
Finally, we show that Spectral-Two succeeds for the recovery of $k\geq 3$
communities, as long as the parameters $P,\rho$ satisfy certain conditions.
To this end, let us define Spectral-Two for general $k$.
Algorithm 2 Classify-Multiple
1:Censored graph $G$ on $n$ vertices, vectors
$(u_{i})_{i=1}^{m}\subset\mathbb{R}^{n}$, and coefficients
$\gamma_{1},\dots,\gamma_{m}\in\mathbb{R}^{k}$, $T\in\mathbb{R}$.
2:Community classification.
3:Compute possible assignments $\hat{S}$ consisting of $\hat{\sigma}(\cdot;s)$
with $s\in\\{\pm 1\\}^{m}$ such that
$\displaystyle\hat{\sigma}(j;s)=\operatornamewithlimits{argmax}_{i\in[k]}\bigg{(}\sum_{r=1}^{m}s_{r}u_{r}(j)\gamma_{r}\bigg{)}_{i}.$
4:Output the $\hat{\sigma}(\cdot;s)$ that maximizes the posterior probability
$\mathbb{P}(G\mid\hat{\sigma})$ over $\hat{\sigma}\in\hat{S}$.
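A sketch of Classify-Multiple in the same style as before (our conventions: vector coefficients $\gamma_{r}\in\mathbb{R}^{k}$, labels in $\{0,\dots,k-1\}$, and the `log_likelihood` helper from the Classify sketch):

```python
import itertools
import numpy as np

def classify_multiple(present, absent, P, vectors, gammas):
    # Each vertex j receives the k-dimensional score
    # sum_r s_r * u_r(j) * gamma_r  (gamma_r in R^k) and is assigned to
    # the argmax coordinate; the sign pattern s is chosen by likelihood.
    best, best_ll = None, -np.inf
    for s in itertools.product([1, -1], repeat=len(vectors)):
        scores = sum(sr * np.outer(ur, gr)
                     for sr, ur, gr in zip(s, vectors, gammas))  # n-by-k
        sigma = np.argmax(scores, axis=1)
        ll = log_likelihood(present, absent, sigma, P)
        if ll > best_ll:
            best, best_ll = sigma, ll
    return best
```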
###### Definition 1.11 (Spectral-Two for $k\geq 3$ communities).
An algorithm $\mathcal{A}(G,y_{1},y_{2},(a_{i})_{i=1}^{2k},T)$ in this class
takes as input a censored graph $G$, two encoding parameters
$y_{1},y_{2}\in\mathbb{R}_{+}$ with $y_{1}\neq y_{2}$ and
$(a_{i})_{i=1}^{2k}\subset\mathbb{R}^{k}$, $T\in\mathbb{R}$. The algorithm
considers two signed adjacency matrices $A_{1}=A(G,y_{1})$ and
$A_{2}=A(G,y_{2})$, and computes their top $k$ eigenvectors
$(u_{i}^{1})_{i\in[k]},(u_{i}^{2})_{i\in[k]}$. Then the algorithm outputs
$\textsc{Classify-
Multiple}((u_{i}^{r})_{i\in[k],r=1,2},(a_{i})_{i=1}^{2k},T)$. As before, we
denote the output of algorithm $\mathcal{A}$ from this class as
$\hat{\sigma}_{\scriptscriptstyle\mathcal{A}}$.
###### Theorem 1.12 (Success of Spectral-Two for $k\geq 3$ communities).
Let $G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$ where $\rho\in(0,1)^{k}$ is such
that $\sum_{i}\rho_{i}=1$, and $P\in(0,1)^{k\times k}$ is a symmetric matrix.
Further, suppose that $P\cdot\text{diag}(\rho)$ has exactly $k$ distinct non-
zero eigenvalues. There exists a finite set $\mathcal{Y}\subset\mathbb{R}_{+}$
such that for any $y_{1}\neq y_{2}$ and $y_{1},y_{2}\notin\mathcal{Y}$, there
exist explicit constants $(a_{i})_{i=1}^{2k}\subset\mathbb{R}^{k}$ such that
the algorithm $\mathcal{A}(G,y_{1},y_{2},(a_{i})_{i=1}^{2k},0)$ from the class
Spectral-Two satisfies
$\lim_{n\to\infty}\mathbb{P}(\hat{\sigma}_{\scriptscriptstyle\mathcal{A}}\text{
achieves exact recovery})=1,\quad\text{ for any }t>t_{c}.$
In particular, Algorithm 6 produces such an estimator.
###### Remark 1.13.
The fact that the encoding parameters $y_{1},y_{2}$ lie outside a finite set
in Theorems 1.10 and 1.12 is required to ensure that $\mathbb{E}[A(G,y_{1})]$,
$\mathbb{E}[A(G,y_{2})]$ have $k$ distinct and non-zero eigenvalues. The
requirement of having $k$ non-zero eigenvalues is intuitive as we seek to
recover an underlying rank $k$ structure. On the other hand, the eigenvectors
of $A(G,y)$ can only be approximated up to an unknown orthogonal
transformation. This causes an ambiguity for defining the final estimator.
When the eigenvalues are distinct, this ambiguity can be resolved by going
over all possible sign flips $s$ and choosing the best among them, as in the
sign-enumeration steps of Algorithm 1 (Classify) and Algorithm 2
(Classify-Multiple).
###### Remark 1.14.
The condition in Theorem 1.12 that $P\cdot\text{diag}(\rho)$ has distinct and
non-zero eigenvalues can be relaxed. In fact, if $P^{\scriptscriptstyle(y)}$ is the
matrix such that
$P^{\scriptscriptstyle(y)}_{ij}:=\rho_{j}(P_{ij}-y(1-P_{ij}))$, then by Lemma
6.1, the same conclusions as Theorem 1.12 hold as long as there exists a $y$
such that $P^{\scriptscriptstyle(y)}$ has $k$ distinct and non-zero
eigenvalues. In fact, we can simply choose $y\sim\text{Uniform}((0,1))$.
### 1.4. Proof ideas
We now give a brief outline of the proofs. For a vertex $v$, we define
$d(v)=(d_{+j},d_{-j})_{j\in[k]}\in\mathbb{Z}_{+}^{2k}$ to be the _degree
profile_ of $v$, where $d_{+j}=d_{+j}(v),d_{-j}=d_{-j}(v)$ respectively
denote the number of present and absent edges from $v$ to community $j$ for
$j\in[k]$. Let us re-scale $\bar{d}(v)=d(v)/(t\log n)$. The proof consists
mainly of two steps:
Step 1: Characterization of spectral algorithms using degree profiles. Given
any signed adjacency matrix $A=A(G,y)$, the starting point of our analysis is
to find a good $\ell_{\infty}$-approximation for the eigenvectors. Using a
recent general framework by Abbe, Fan, Wang and Zhong [1], we can show that
the two leading eigenvectors $u_{1},u_{2}$ of $A$ satisfy (see Corollary 4.2):
$\displaystyle\min_{s\in\\{\pm
1\\}}\bigg{\|}su_{i}-\frac{Au_{i}^{\star}}{\lambda_{i}^{\star}}\bigg{\|}_{\infty}=o\bigg{(}\frac{1}{\sqrt{n}}\bigg{)},\quad\text{
for }i\in[k],$
with probability $1-o(1)$, where $(\lambda_{i}^{\star},u_{i}^{\star})$ is the
$i$-th largest eigenvalue/eigenvector pair of $\mathbb{E}[A]$. Note that
$\mathbb{E}[A]$ is a rank-two matrix with $u_{i}^{\star}$’s taking the same
constant value corresponding to all vertices in the same community. The low
rank of $\mathbb{E}[A]$ allows us to express $Au_{i}^{\star}$ as a linear
combination of the degree profiles and thus drastically reduce the dimension
of the problem. Using this representation, any linear combination of the
$u_{i}^{\prime}s$ is also an expressible linear combination of degree
profiles. Hence, we show that spectral algorithms essentially are
asymptotically equivalent to classifying vertices depending on whether
$\langle w_{\scriptscriptstyle\mathrm{Spec}},\bar{d}(v)\rangle>(T+o(1))$ or
$\langle w_{\scriptscriptstyle\mathrm{Spec}},\bar{d}(v)\rangle<(T-o(1))$ for
some $w_{\scriptscriptstyle\mathrm{Spec}}\in\mathbb{R}^{2k},T\in\mathbb{R}$.
Step 2: Geometry of degree profiles. At this point, the problem reduces to
understanding whether, for a given vector $w$, a hyperplane orthogonal to $w$
can separate re-scaled degree profiles. To this end, for each community $i$,
we define a measure of _dissonance_ $\eta_{i}$ for rescaled degree profiles,
and define the $\delta$-_dissonance range_ as
$\mathrm{DR}_{\delta}(i):=\\{\bar{d}:\eta_{i}(\bar{d})\leq\delta\\}$. We show
that $\mathrm{DR}_{\delta}(i)$’s are closed and convex sets. Moreover, (1) if
$1/t<\delta$, then all the re-scaled degree profiles from community $i$ lie in
$\mathrm{DR}_{\delta}(i)$ and (2) if $\delta<1/t$, then the re-scaled degree
profiles from community $i$ are asymptotically _dense_ in
$\mathrm{DR}_{\delta}(i)$ (see Lemma 2.4). In a sense, one can think of
$\mathrm{DR}_{1/t}(i)$ as the cloud of re-scaled degree profiles arising from
community $i$.
Figure 1. Visualizing dissonance ranges $\mathrm{DR}_{\delta}(1)$ and
$\mathrm{DR}_{\delta}(2)$ of two communities near $t_{c}$, with the optimally
separating hyperplane through $x^{\star}$ orthogonal to $w^{\star}$.
Next, consider the “hardest” scenario when $t=t_{c}$. In that case, we show
that the clouds $\mathrm{DR}_{1/t_{c}}(i)$ and $\mathrm{DR}_{1/t_{c}}(j)$
corresponding to communities $i$ and $j$ intersect only at a single point
$x^{\star}$ (see Lemma 2.9), and as $t$ increases away from $t_{c}$, the two
clouds gradually separate. Due to convexity, $\mathrm{DR}_{1/t_{c}}(i)$ and
$\mathrm{DR}_{1/t_{c}}(j)$ lie on two opposite sides of the tangent hyperplane
at $x^{\star}$. Let $w^{\star}$ be such that this tangent hyperplane is given
by $H^{\star}=\\{x:\langle w^{\star},x-x^{\star}\rangle=0\\}$. Then
$H^{\star}$ is the only hyperplane that separates the clouds of degree
profiles near $t_{c}$; see Figure 1. Thus, as long as we are trying to
separate clouds of degree profiles using this $H^{\star}$, we will succeed for
any $t>t_{c}$. However, if we try to separate the clouds with a different
hyperplane $\\{x:\langle w,x-x^{\star}\rangle=0\\}$ for some
$w\notin\mathrm{Span}(w^{\star})$, then we will fail sufficiently close to
$t_{c}$.
Combining this with the asymptotic characterization of spectral algorithms, it
thus remains to be seen whether we can choose the parameters of the spectral
algorithm in such a way that
$w_{\scriptscriptstyle\mathrm{Spec}}\in\mathrm{Span}(w^{\star})$. For
Spectral-One algorithms in the two community case, we show that
$w_{\scriptscriptstyle\mathrm{Spec}}$ takes values in a restricted set
$\\{w\in\mathbb{R}^{4}:\frac{w_{1}}{w_{2}}=\frac{w_{3}}{w_{4}}=y\\}$, no
matter the choice of the parameters. Thus, for Spectral-One algorithms,
generally $w_{\scriptscriptstyle\mathrm{Spec}}\notin\mathrm{Span}(w^{\star})$
except for the specific cases in Theorem 1.7 Part 1. However, for Spectral-Two
algorithms, there always exists a way to choose the linear combinations in
such a way that
$w_{\scriptscriptstyle\mathrm{Spec}}\in\mathrm{Span}(w^{\star})$, which
ensures their optimality.
Information Theoretic Threshold. There is an alternate way of characterizing
the information theoretic boundary by observing that even the “best” estimator
will separate communities using the hyperplane $H^{\star}$ above. Consider the
problem of classifying a single vertex $v$ given $G$ and
$(\sigma_{0}(u))_{u\in[n]\setminus\\{v\\}}$. The MAP estimator for the
community assignment of $v$ is called the genie-based estimator. This is an
optimal estimator (even though it is not computable given $G$). Now, a direct
computation shows that the genie-based estimator classifies a vertex in one of the
two communities based on whether $\langle w^{\star},\bar{d}(v)\rangle>0$ or
$\langle w^{\star},\bar{d}(v)\rangle<0$, with the same $w^{\star}$ as above
(see [9, Proposition 6.1]). Thus, in a sense, separating degree profiles based
on hyperplanes orthogonal to $w^{\star}$ is the optimal decision rule. When
$t<t_{c}$, the degree profile clouds of the two communities overlap
significantly, and therefore even the optimal estimator misclassifies a
growing number of vertices. This gives rise to the information theoretic
impossibility region for exact recovery when $t<t_{c}$.
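To make the genie rule concrete, the sketch below computes $d(v)$ and thresholds $\langle w^{\star},d(v)\rangle$ at zero, with $w^{\star}$ as defined later in (2.6). The helper names are ours, and profiles are stored with the $k$ present-counts followed by the $k$ absent-counts rather than interleaved as in the paper's ordering.

```python
import numpy as np

def degree_profile(v, present, absent, sigma, k):
    # d(v): entries 0..k-1 count present edges from v into each community,
    # entries k..2k-1 count revealed absent pairs.
    d = np.zeros(2 * k)
    for u, w in present:
        if u == v:
            d[sigma[w]] += 1
        elif w == v:
            d[sigma[u]] += 1
    for u, w in absent:
        if u == v:
            d[k + sigma[w]] += 1
        elif w == v:
            d[k + sigma[u]] += 1
    return d

def genie_classify(v, present, absent, sigma, P, i, j):
    # Genie rule between communities i and j: sign of <w_star, d(v)> with
    # w_star = (log(P_ri/P_rj), log((1-P_ri)/(1-P_rj)))_r, cf. (2.6).
    k = P.shape[0]
    w_star = np.concatenate([np.log(P[:, i] / P[:, j]),
                             np.log((1 - P[:, i]) / (1 - P[:, j]))])
    return i if w_star @ degree_profile(v, present, absent, sigma, k) > 0 else j
```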
### 1.5. Discussion
Theorems 1.10 and 1.12 prove optimality of spectral algorithms using two
matrices. The use of two matrices hinges on the fact that there are three
types of edge information (present, absent, and censored), and the information
about a vertex’s community coming from present and absent edges is of the
same order. We believe that our results generalize in a straightforward manner
to the scenario of labelled edges, where the possible edge statuses
$\\{\text{present,absent}\\}$ are replaced by $D$ different types. The case
with $D$ labels has been studied in [14] when there are two balanced
communities, and symmetric intra-community edge probabilities for each of the
$D$ types. In this case, [14] identified the information-theoretic threshold.
However, no efficient algorithms were provided. We believe that optimal
spectral algorithms in the general $D$-labeled edge scenario must use $D$
different encoding matrices.
We also believe that the framework of this paper can be extended beyond graphs
to other important machine learning problems with censoring on top of an
underlying low-rank structure. This may include non-square matrices (e.g.
items vs features matrix in recommender systems). We leave these as
interesting future research questions.
### 1.6. Notation.
Let $[n]=\\{1,2,\dots,n\\}$. We often use the Bachmann–Landau asymptotic
notation $o(1),O(1)$ etc. For two sequences $(a_{n})_{n\geq 1}$ and
$(b_{n})_{n\geq 1}$, we write $a_{n}\asymp b_{n}$ as a shorthand for
$\lim_{n\to\infty}\frac{a_{n}}{b_{n}}=1$. Given a sequence of probability
measures $(\mathbb{P}_{n})_{n\geq 1}$, a sequence of events
$(\mathcal{E}_{n})_{n\geq 1}$ is said to hold _with high probability_ if
$\lim_{n\to\infty}\mathbb{P}_{n}(\mathcal{E}_{n})=1$.
For a vector $x\in\mathbb{R}^{d}$, we define
$\|x\|_{2}=(\sum_{i}x_{i}^{2})^{1/2}$ and $\|x\|_{\infty}=\max_{i}|x_{i}|$.
For $x\in\mathbb{R}^{d}$ and $r>0$, we denote the open $\ell_{2}$-ball of
radius $r$ around $x$ by $B_{2}(x,r)$. Similarly, for $X\subset\mathbb{R}^{d}$
and $r>0$, we denote the open $\ell_{2}$-ball of radius $r$ around $X$ by
$B_{2}(X,r)$. For a collection of vectors $(x_{i})_{i}\subset\mathbb{R}^{d}$,
we denote their linear span by $\mathrm{Span}((x_{i})_{i})$. Also, given a
subspace $\mathcal{Z}\subset\mathbb{R}^{d}$, the projection of $x$ onto
$\mathcal{Z}$ will be denoted by
$\mathrm{Proj}_{\scriptscriptstyle\mathcal{Z}}(x)$.
For a matrix $M\in\mathbb{R}^{n\times d}$, we use $M_{i\cdot}$ to refer to its
$i$-th row, represented as a row vector. Given a matrix $M$,
$\|M\|_{2}=\max_{\|x\|_{2}=1}\|Mx\|_{2}$ is the spectral norm,
$\|M\|_{2\to\infty}=\max_{i}\|M_{i\cdot}\|_{2}$ is the matrix $2\to\infty$
norm, and $\|M\|_{F}=(\sum_{i,j}M_{ij}^{2})^{1/2}$ is the Frobenius norm.
Whenever we apply a real-valued function to a vector, it should be interpreted
as a coordinatewise operation.
Throughout, we condition on the event that the random community assignments
given by $\sigma_{0}$ are close to their expected sizes. Specifically, note
that the community sizes $n_{j}:=|\\{v:\sigma_{0}(v)=j\\}|$ are marginally
distributed as $\text{Bin}\left(n,\rho_{j}\right)$, and therefore, for all
$\varepsilon\in(0,1)$,
$\begin{split}\left|n_{j}-n\rho_{j}\right|\leq\varepsilon n\end{split}$ (1.4)
with probability at least $1-2\exp(-\varepsilon^{2}n/2)$ by applying the
McDiarmid inequality. Throughout, the notation
$\mathbb{P}(\cdot),\mathbb{E}[\cdot]$ conditions on a fixed value of
$\sigma_{0}$ satisfying (1.4) with $\varepsilon=n^{-1/3}$.
### 1.7. Organization
We start analyzing the geometric properties of the degree profile clouds in
Section 2, which lies at the heart of all the proofs. Subsequently, in Section
3, we prove the impossibility result and also prove that the Maximum a
Posteriori (MAP) Estimator always succeeds up to the information theoretic
threshold. The entrywise bounds for the top eigenvectors are provided in
Section 4. Finally, we complete the proofs of Theorems 1.7, 1.10 in Section 5.
## 2\. Geometry of Degree Profiles
In this section, we develop the technical tools for Step 2 in Section 1.4. We
will develop these tools for general $k$-community CSBMs. Throughout, we fix
$\rho\in(0,1)^{k}$ such that $\sum_{i=1}^{k}\rho_{i}=1$ and let
$P\in(0,1)^{k\times k}$ be a symmetric matrix. Let us define degree profiles,
which will be the main object of analysis in this section.
###### Definition 2.1 (Degree profile).
Suppose that $G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$. For a vertex $v$, we
define $d(v)=(d_{+r},d_{-r})_{r\in[k]}\in\mathbb{Z}_{+}^{2k}$ to be the
_degree profile_ of $v$, where $d_{+r}=d_{+r}(v)$ and $d_{-r}=d_{-r}(v)$
respectively denote the number of present and absent edges from $v$ to
community $r$ for $r\in[k]$.
In Section 2.1, we start by defining dissonance ranges for degree profiles in
each community, and prove their desirable analytic properties. In Section 2.2,
we use dissonance ranges to characterize the clouds of degree profiles. Next,
in Section 2.3, we provide a general criterion describing when degree profiles
cannot be separated by hyperplanes. Section 2.4 focuses on analyzing the
degree profiles in the “hardest” case when dissonance ranges barely overlap.
Finally, in Section 2.5, all the above analysis leads to a tractable criterion
for when we can separate clouds of degree profiles using hyperplanes.
### 2.1. Dissonance range and its properties.
Let us start by defining the dissonance range and obtaining some basic
analytic properties.
###### Definition 2.2 (Dissonance range).
Given $i\in[k]$ and $x\in\mathbb{R}_{+}^{2k}$, the _dissonance_ of $x$
relative to community $i$ is given by
$\eta_{i}(x)=\sum_{r=1}^{k}\bigg{[}x_{1,r}\log\bigg{(}\frac{x_{1,r}}{\mathrm{e}\rho_{r}P_{ri}}\bigg{)}+x_{2,r}\log\bigg{(}\frac{x_{2,r}}{\mathrm{e}\rho_{r}(1-P_{ri})}\bigg{)}\bigg{]}+1,$
(2.1)
where we regard the terms in these expressions as being $0$ if the
corresponding entry of $x$ is $0$. We also define the $\delta$-_dissonance
range_ relative to community $i$ by
$\mathrm{DR}_{\delta}(i):=\\{x:\eta_{i}(x)\leq\delta\\}$.
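Numerically, $\eta_{i}$ is straightforward to evaluate. The sketch below (our own helper names) follows (2.1), with the convention that terms with a zero entry vanish:

```python
import numpy as np

def dissonance(x1, x2, rho, P, i):
    # eta_i(x) from (2.1), with x1[r], x2[r] in the roles of x_{1,r},
    # x_{2,r}; terms whose entry is zero are taken to be zero.
    def term(z, m):
        out = np.zeros_like(z, dtype=float)
        pos = z > 0
        out[pos] = z[pos] * np.log(z[pos] / (np.e * m[pos]))
        return out
    return float(np.sum(term(x1, rho * P[:, i])
                        + term(x2, rho * (1 - P[:, i])))) + 1.0
```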
###### Lemma 2.3.
Fix $i\in[k]$ and $\delta>0$. Then $\mathrm{DR}_{\delta}(i)$ is a bounded,
closed and convex subset of $\mathbb{R}_{+}^{2k}$. Moreover, for any
$\delta^{\prime}>\delta$, there exists $\varepsilon>0$ such that
$\displaystyle
B_{2}(\mathrm{DR}_{\delta}(i),\varepsilon)\cap\mathbb{R}_{+}^{2k}\subset\mathrm{DR}_{\delta^{\prime}}(i).$
###### Proof.
We first show that $\mathrm{DR}_{\delta}(i)$ is bounded. Note that
$z\log(z)\to\infty$ as $z\to\infty$, so $\eta_{i}(x)\to\infty$ whenever
$\|x\|_{2}\to\infty$. Thus, if $\mathrm{DR}_{\delta}(i)$ were unbounded, then
we could find a sequence $(x_{k})_{k\geq 1}\subset\mathrm{DR}_{\delta}(i)$
with $\|x_{k}\|_{2}\to\infty$, and hence
$\eta_{i}(x_{k})\to\infty$. However, $\eta_{i}(x_{k})\leq\delta$ by definition
of $\mathrm{DR}_{\delta}(i)$. This leads to a contradiction and hence
$\mathrm{DR}_{\delta}(i)$ is bounded.
Next, since $\eta_{i}$ is continuous, we have that $\mathrm{DR}_{\delta}(i)$
is closed. Further, $\eta_{i}$ is a sum of convex functions and hence it is
convex. Therefore, its sublevel set $\mathrm{DR}_{\delta}(i)$ is convex.
To show the last claim, note that $\eta_{i}$ is uniformly continuous on
$[0,b]^{2k}$ for any $b>0$. Thus, there exists $\varepsilon>0$ such that for
any $x,x^{\prime}\in[0,b]^{2k}$ with $\|x-x^{\prime}\|_{2}\leq\varepsilon$, we
have $|\eta_{i}(x)-\eta_{i}(x^{\prime})|\leq\delta^{\prime}-\delta$. This
proves
$B_{2}(\mathrm{DR}_{\delta}(i),\varepsilon)\cap\mathbb{R}_{+}^{2k}\subset\mathrm{DR}_{\delta^{\prime}}(i)$,
and completes the proof of the lemma. ∎
### 2.2. Relating dissonance range with degree profiles of CSBMs
Our next goal will be to identify which degree profiles are likely to occur in
CSBMs.
###### Lemma 2.4.
Fix $0<\delta<\delta^{\prime}$. Let $t\in(1/\delta^{\prime},1/\delta)$ and
$G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$. The following holds with probability
$1-o(1)$:
1. (1)
There exists $c>0$ such that for every $i\in[k]$ and $d\in\mathbb{Z}_{+}^{2k}$
such that $d/(t\log(n))\in\mathrm{DR}_{\delta}(i)$, there are at least $n^{c}$
vertices in community $i$ with degree profile $d$.
2. (2)
For each $i\in[k]$ and for every vertex $v\in G$ in community $i$, the degree
profile of $v$ is of the form $xt\log(n)$ for some
$x\in\mathrm{DR}_{\delta^{\prime}}(i)$.
In order to prove this lemma, we need the Poisson approximation result stated
below. The proof of this follows from a straightforward application of
Stirling’s approximation and will therefore be provided in Appendix A.
###### Lemma 2.5.
Let $(S_{r})_{r\in[k]}$ be a partition of $[n]$ such that
$|S_{r}|=n\rho_{r}(1+O(\log^{-2}n))$ for all $r\in[k]$, where
$\rho\in(0,1)^{k}$. Suppose that $\\{W_{v}\\}_{v=1}^{n}$ is i.i.d. from a
distribution taking values in $\\{a,b,c\\}$ and, if $v\in S_{r}$,
$\mathbb{P}(W_{v}=a)=\alpha\psi_{r}$,
$\mathbb{P}(W_{v}=b)=\alpha(1-\psi_{r})$, and $\mathbb{P}(W_{v}=c)=1-\alpha$.
Fix $i\in[k]$. Also, let $V\subset S_{i}$ be such that $|V|=O(n/\log^{2}n)$.
For $x=a,b$, let $D_{x,r}:=\\#\\{v\in S_{r}:W_{v}=x\\}$ for
$r\in[k]\setminus\\{i\\}$ and $D_{x,i}:=\\#\\{v\in S_{i}\cap
V^{c}:W_{v}=x\\}$. Let $D=(D_{a,r},D_{b,r})_{r\in[k]}$ and also let
$d=(d_{1,r},d_{2,r})_{r\in[k]}\in\mathbb{Z}_{+}^{2k}$ be such that
$\|d\|_{1}=o(\log^{3/2}n)$ and $\alpha=t\log n/n$. Then
$\displaystyle\mathbb{P}\left(D=d\right)\asymp\prod_{r=1}^{k}P\left(\rho_{r}\psi_{r}t\log
n;d_{1,r}\right)P\left(\rho_{r}(1-\psi_{r})t\log n;d_{2,r}\right),$
where $P(\lambda;m)$ is the probability that a $\mathrm{Poisson}(\lambda)$
random variable takes value $m$.
###### Proof of Lemma 2.4.
To prove the first part, fix $i\in[k]$ and let $d\in\mathbb{Z}_{+}^{2k}$ be
such that $d/(t\log(n))\in\mathrm{DR}_{\delta}(i)$. Recall that $n_{j}$ is the
number of vertices in community $j$ for every $j\in[k]$. By (1.4),
$\left|n_{j}-n\rho_{j}\right|\leq n^{\frac{2}{3}}$ for all $j\in[k]$, with
probability $1-o(1)$. In the subsequent proof, we always condition on this
event, even if it is not mentioned explicitly.
Let $S_{i}$ be a random set of $2n/\log^{2}(n)$ vertices in community $i$.
Next, let $S^{\prime}_{i}$ be the subset of $S_{i}$ consisting of all vertices
$v$ such that all the connections between $v$ and $S_{i}$ are censored. To
lower-bound the size of $S^{\prime}_{i}$, let $X$ be the number of revealed
connections among vertices in $S_{i}$. By a counting argument,
$|S^{\prime}_{i}|\geq|S_{i}|-2X$. Observe that
$\mathbb{E}[X]=\binom{2n/\log^{2}n}{2}\frac{t\log n}{n}=O(n/\log^{3}(n))$. The
Markov inequality then implies that $X=o(|S_{i}|)$ with high probability,
which implies $|S^{\prime}_{i}|\geq\frac{1}{2}|S_{i}|=\frac{n}{\log^{2}(n)}$.
Let $\mathscr{F}^{\prime}$ denote the sigma-algebra with respect to which
$S_{i}$ and $(n_{j})_{j\in[k]}$ are measurable, and let
$\mathscr{F}:=\mathscr{F}^{\prime}\cap\bigg{\\{}|n_{j}-n\rho_{j}|\leq
n^{\frac{2}{3}},\forall
j\in[k]\bigg{\\}}\cap\bigg{\\{}|S_{i}^{\prime}|\geq\frac{n}{\log^{2}(n)}\bigg{\\}}.$
Fix $v\in[n]$. Since $\mathrm{DR}_{\delta}(i)$ is bounded, we have that
$\|d\|_{1}=O(\log n)$. Thus, by Lemma 2.5,
$\begin{split}&\mathbb{P}(d(v)=d\mid\mathscr{F}\cap\\{v\in
S_{i}^{\prime}\\})\\\ &\asymp\mathrm{e}^{-t\log
n}\prod_{j=1}^{k}\frac{\big{[}\rho_{j}P_{i,j}t\log(n)\big{]}^{d_{+j}}}{d_{+j}!}\frac{\big{[}\rho_{j}(1-P_{i,j})t\log(n)\big{]}^{d_{-j}}}{d_{-j}!}\\\
&\asymp
n^{-t}\prod_{j=1}^{k}\frac{(\rho_{j}P_{i,j}t\log(n))^{d_{+j}}}{\sqrt{2\pi
d_{+j}}(d_{+j}/\mathrm{e})^{d_{+j}}}\frac{(\rho_{j}(1-P_{i,j})t\log(n))^{d_{-j}}}{\sqrt{2\pi
d_{-j}}(d_{-j}/\mathrm{e})^{d_{-j}}}\\\
&=n^{-t}\prod_{j=1}^{k}\frac{1}{2\pi\sqrt{d_{+j}d_{-j}}}\bigg{(}\frac{\mathrm{e}\rho_{j}P_{i,j}t\log(n)}{d_{+j}}\bigg{)}^{d_{+j}}\bigg{(}\frac{\mathrm{e}\rho_{j}(1-P_{i,j})t\log(n)}{d_{-j}}\bigg{)}^{d_{-j}}\\\
&=\bigg{(}\prod_{j=1}^{k}\frac{1}{2\pi\sqrt{d_{+j}d_{-j}}}\bigg{)}n^{-t\eta_{i}(d/t\log
n)},\end{split}$ (2.2)
where in the final step, we have used the definition of $\eta_{i}$ from (2.1).
Next, since $d/(t\log(n))\in\mathrm{DR}_{\delta}(i)$, we have that
$\eta_{i}(d/t\log(n))\leq\delta$, and thus (2.2) yields, for
all sufficiently large $n$,
$\begin{split}p_{n}:=\mathbb{P}(d(v)=d\mid\mathscr{F}\cap\\{v\in
S_{i}^{\prime}\\})\geq C\frac{n^{-t\delta}}{\log^{k}n}+o(n^{-1})\geq
n^{-t\delta(1+o(1))},\end{split}$ (2.3)
for some $C>0$. Next, if $d^{\prime}(v)$ denotes the degree profile of vertex
$v$ discarding all the present and absent edges in $S_{i}$, then
$d(v)=d^{\prime}(v)$ for all $v\in S_{i}^{\prime}$. Moreover, conditionally on
$\mathscr{F}^{\prime}$, $\\{d^{\prime}(v)\\}_{v\in S_{i}^{\prime}}$ are
independent. Thus, conditionally on $\mathscr{F}$, $|\\{v\in
S_{i}^{\prime}:d(v)=d\\}|$ is distributed as a
$\text{Bin}(|S_{i}^{\prime}|,p_{n})$ random variable. Note also that,
conditionally on $\mathscr{F}$, $|S_{i}^{\prime}|p_{n}\geq 2n^{c}$ for some
$c>0$. Thus, using concentration of binomial random variables, we conclude
that
$|\\{v\in
S_{i}^{\prime}:d(v)=d\\}|\geq\frac{1}{2}~{}|S_{i}^{\prime}|~{}p_{n}\geq n^{c}$
with probability at least $1-\exp(-c^{\prime}n^{c})$ for some $c^{\prime}>0$.
Observing that $|\\{d\in\mathbb{Z}_{+}^{2k}:d/(t\log
n)\in\mathrm{DR}_{\delta}(i)\\}|=O\left(\text{polylog}(n)\right)=o(\exp(c^{\prime}n^{c}))$,
the claim follows by a union bound.
In order to prove the second part, we again use (2.2). By
the union bound and [12, Corollary 2.4], there exists a sufficiently large
constant $C>0$ such that
$\begin{split}\mathbb{P}(\exists v\in[n]:\|d(v)\|_{1}>C\log
n)=o(n^{-1}).\end{split}$ (2.4)
Now, for any $d$ such that $d/(t\log n)\notin\mathrm{DR}_{\delta^{\prime}}(i)$
and $\|d\|_{1}\leq C\log n$, we can use (2.2) to show that,
for all sufficiently large $n$ and fixed $v\in[n]$,
$\displaystyle\mathbb{P}(d(v)=d)\leq(1+o(1))\bigg{(}\prod_{j=1}^{k}\frac{1}{2\pi\sqrt{d_{+,j}d_{-,j}}}\bigg{)}n^{-t\eta_{i}(d/t\log
n)}\leq n^{-t\delta^{\prime}}.$ (2.5)
Now,
$\displaystyle\mathbb{P}(\exists v\text{ with }\sigma_{0}(v)=i:d(v)/(t\log
n)\notin\mathrm{DR}_{\delta^{\prime}}(i))$ $\displaystyle\leq
n\mathbb{P}(d(v)=d\text{ for some }d/(t\log
n)\notin\mathrm{DR}_{\delta^{\prime}}(i)\text{ and }\|d\|_{1}\leq C\log
n)+o(1)$ $\displaystyle\leq n(C\log n)^{2k}n^{-t\delta^{\prime}}+o(1)=o(1),$
where in the last step we have used that $t\delta^{\prime}>1$. Hence the proof
is complete. ∎
### 2.3. Separating degree profiles using hyperplanes
As discussed in Section 1.4, the $\ell_{\infty}$ approximation guarantee for
the eigenvectors gives us an alternative characterization of spectral
algorithms in terms of separating degree profiles of different communities
using certain hyperplanes. The next proposition allows us to determine when
separation using hyperplanes is impossible. Before the statement we need a
couple of definitions. Let $V_{i}$ denote the vertices in community $i$.
###### Definition 2.6 (Separates communities).
We say that $w\in\mathbb{R}^{2k}$ _separates communities $(i,j)$ with margin
$\beta>0$_ if
$\min_{v\in V_{i}}w^{T}d(v)\geq\beta/2\quad\text{and}\quad\max_{v\in
V_{j}}w^{T}d(v)\leq-\beta/2.$
If $w$ separates communities $(i,j)$ with margin $\beta>0$, then computing the
weighted degree profile $w^{T}d(v)$ for each $v\in V_{i}\cup V_{j}$ allows us
to distinguish these two communities. Note that if $w$ separates communities
$(i,j)$ with margin $\beta$, then $-w$ also separates communities $(i,j)$ with
margin $\beta$. Next we define the scenario where a finite number of
hyperplanes cannot separate the two communities.
###### Definition 2.7 (Confuses communities).
Let $(w_{r})_{r=1}^{m}\subset\mathbb{R}^{2k}$ and let
$(\gamma_{r})_{r=1}^{m}\subset\mathbb{R}$. We say that
$[(w_{r})_{r=1}^{m},(\gamma_{r})_{r=1}^{m}]$ _confuses communities $(i,j)$ at
level $\beta$_ if there exist $u\in V_{i}$, $v\in V_{j}$, and
$s\in\\{-1,1\\}^{m}$ such that $s_{r}(w_{r}^{T}d(u)-\gamma_{r})>\beta$ and
$s_{r}(w_{r}^{T}d(v)-\gamma_{r})>\beta$ for all $1\leq r\leq m$.
In other words, there are representatives from communities $i$ and $j$, such
that both of their degree profiles appear on the same sides of all the
hyperplanes $\\{x:w_{r}^{T}x=\gamma_{r}\\}$. A larger value of $\beta$ means
that the pair of degree profiles is farther from the hyperplanes. Note that
the notion of confusion also rules out the possibility of separation with
multiple hyperplanes.
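Definition 2.7 is directly checkable on finite data. The following brute-force sketch (illustrative only; exponential in $m$, which is harmless for the constant $m$ considered here) tests whether a family of hyperplanes confuses two communities at level $\beta$:

```python
import itertools

def confuses(ws, gammas, profiles_i, profiles_j, beta):
    # True iff some d(u) from community i and d(v) from community j lie
    # strictly on the same side, at distance > beta, of every hyperplane
    # {x : w_r^T x = gamma_r}, for a common sign pattern s.
    for du in profiles_i:
        for dv in profiles_j:
            for s in itertools.product([1, -1], repeat=len(ws)):
                if all(sr * (w @ du - g) > beta and sr * (w @ dv - g) > beta
                       for sr, w, g in zip(s, ws, gammas)):
                    return True
    return False
```

We now prove the following proposition: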
###### Proposition 2.8.
Let $G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$,
$\mathcal{Z}\subset\mathbb{R}^{2k}$ be a linear subspace, and let $\delta>0$
be such that $t\delta<1$. Suppose further that there are communities $i$ and
$j$ such that the projections of $\mathrm{DR}_{\delta}(i)$ and
$\mathrm{DR}_{\delta}(j)$ onto $\mathcal{Z}$ overlap. Then, for any
$m\in\mathbb{N}$, there exists $\varepsilon>0$ such that for any unit vectors
$w_{1},...,w_{m}\in\mathcal{Z}$ and $\gamma_{1},...,\gamma_{m}\in\mathbb{R}$,
with probability $1-o(1)$, $[(w_{r})_{r=1}^{m},(\gamma_{r}\log n)_{r=1}^{m}]$
confuses communities $i$ and $j$ at level $\varepsilon\log(n)$.
###### Proof.
Let
$z_{0}\in\text{Proj}_{\mathcal{Z}}(\mathrm{DR}_{\delta}(i))\cap\text{Proj}_{\mathcal{Z}}(\mathrm{DR}_{\delta}(j))$.
There must exist $z_{i}\in\mathrm{DR}_{\delta}(i)$ and
$z_{j}\in\mathrm{DR}_{\delta}(j)$ such that
$\text{Proj}_{\mathcal{Z}}(z_{i})=\text{Proj}_{\mathcal{Z}}(z_{j})=z_{0}$.
Now, let $\delta^{\prime}=\frac{1}{2}\left(\delta+\frac{1}{t}\right)$, so that
$\delta<\delta^{\prime}<\frac{1}{t}$. By Lemma 2.3, there exists $\mu>0$ such
that
$B_{2}(\mathrm{DR}_{\delta}(i),\mu)\cap\mathbb{R}_{+}^{2k}\subseteq\mathrm{DR}_{\delta^{\prime}}(i)$
and
$B_{2}(\mathrm{DR}_{\delta}(j),\mu)\cap\mathbb{R}_{+}^{2k}\subseteq\mathrm{DR}_{\delta^{\prime}}(j)$.
So, if we let $\mu_{0}=\mu/3k$ and
$z^{\prime}_{0}=z_{0}+\text{Proj}_{\mathcal{Z}}(\mu_{0},\mu_{0},...,\mu_{0})$
then
$\displaystyle B_{2}(z^{\prime}_{0},\mu_{0})\cap\mathcal{Z}$
$\displaystyle=\text{Proj}_{\mathcal{Z}}(B_{2}(z_{i}+(\mu_{0},...,\mu_{0}),\mu_{0}))$
$\displaystyle\subseteq\text{Proj}_{\mathcal{Z}}(B_{2}(z_{i},\mu)\cap\mathbb{R}_{+}^{2k})$
$\displaystyle\subseteq\text{Proj}_{\mathcal{Z}}(\mathrm{DR}_{\delta^{\prime}}(i)).$
By the same logic,
$B_{2}(z^{\prime}_{0},\mu_{0})\cap\mathcal{Z}\subseteq\text{Proj}_{\mathcal{Z}}(\mathrm{DR}_{\delta^{\prime}}(j))$.
Fix $m\in\mathbb{N}$, unit vectors $w_{1},\dots,w_{m}\in\mathcal{Z}$, and
$\gamma_{1},\dots,\gamma_{m}\in\mathbb{R}$. Then there exist $\varepsilon>0$, an open ball
$B\subseteq B_{2}(z^{\prime}_{0},\mu_{0})\cap\mathcal{Z}$, and
$s\in\\{-1,1\\}^{m}$ such that for all $x\in B$,
$s_{r}\left(w_{r}^{T}x-\frac{\gamma_{r}}{t}\right)>\frac{\varepsilon}{t}$
for all $1\leq r\leq m$. In other words, the open ball $B$ is separated from
all hyperplanes defined by $(w_{r},\gamma_{r})_{r=1}^{m}$.
Now, observe that for all sufficiently large $n$ there exist
$x_{i}\in\mathrm{DR}_{\delta^{\prime}}(i)\cap\text{Proj}^{-1}_{\mathcal{Z}}(B)$
and
$x_{j}\in\mathrm{DR}_{\delta^{\prime}}(j)\cap\text{Proj}^{-1}_{\mathcal{Z}}(B)$
such that $x_{i}t\log(n),x_{j}t\log(n)\in\mathbb{Z}_{+}^{2k}$. Since
$t\delta^{\prime}<1$, by Lemma 2.4, with probability $1-o(1)$, there exist
vertices $u\in V_{i}$ and $v\in V_{j}$ such that $d(u)/(t\log(n))=x_{i}$ and
$d(v)/(t\log(n))=x_{j}$. By the above, we have
$s_{r}\left(w_{r}^{T}\frac{d(u)}{t\log
n}-\frac{\gamma_{r}}{t}\right)>\frac{\varepsilon}{t}\quad\text{and}\quad
s_{r}\left(w_{r}^{T}\frac{d(v)}{t\log n}-\frac{\gamma_{r}}{t}\right)>\frac{\varepsilon}{t}$
for all $1\leq r\leq m$. Multiplying through by $t\log n$, we conclude that
$[(w_{r})_{r=1}^{m},(\gamma_{r}\log n)_{r=1}^{m}]$ confuses communities $i$
and $j$ at level $\varepsilon\log n$. ∎
### 2.4. When dissonance ranges barely overlap.
At this point, the key question is what hyperplanes can separate the rescaled
degree profiles from different communities. In order to answer that, we
consider the “hardest” case where
$t=t_{0}=1/\Delta_{+}(\theta_{i},\theta_{j})$, with $\theta_{i}$ defined by
(1.2). Define the $2k$-dimensional vector $w^{\star}$ as
$\begin{split}w^{\star}=\bigg{(}\log\frac{P_{ri}}{P_{rj}},\log\frac{1-P_{ri}}{1-P_{rj}}\bigg{)}_{r\in[k]}.\end{split}$
(2.6)
Below, we show that the hyperplane orthogonal to $w^{\star}$ almost separates
the dissonance ranges even for $t=t_{0}$. We also set up additional properties
that will help us to show that a hyperplane orthogonal to $w\neq w^{\star}$
cannot separate the dissonance ranges just above $t_{0}$, and also to
establish the impossibility of exact recovery (Theorem 1.4) below $t_{0}$.
###### Lemma 2.9.
Suppose that $1\leq i<j\leq k$ and let
$t_{0}=1/\Delta_{+}(\theta_{i},\theta_{j})$, where $\theta_{i}$ is defined by
(1.2). Then $\mathrm{DR}_{1/t_{0}}(i)$ and $\mathrm{DR}_{1/t_{0}}(j)$
intersect at a single point. Let $x^{\star}$ be this intersection point of
$\mathrm{DR}_{1/t_{0}}(i)$ and $\mathrm{DR}_{1/t_{0}}(j)$. Let
$H:=\\{x:\langle w^{\star},x-x^{\star}\rangle\geq 0\\}$ be the half-space created
by the hyperplane through $x^{\star}$ perpendicular to $w^{\star}$. Then
$\mathrm{DR}_{1/t_{0}}(i)\cap H=\mathrm{DR}_{1/t_{0}}(i)$ and
$\mathrm{DR}_{1/t_{0}}(j)\cap H=\\{x^{\star}\\}$, i.e., the hyperplane
$\\{x:\langle w^{\star},x-x^{\star}\rangle=0\\}$ separates
$\mathrm{DR}_{1/t_{0}}(i)\setminus\\{x^{\star}\\}$ and
$\mathrm{DR}_{1/t_{0}}(j)\setminus\\{x^{\star}\\}$. Also, there exists $r>0$
such that
$B_{2}(x^{\star}+rw^{\star},r\|w^{\star}\|_{2})\subset\mathrm{DR}_{1/t_{0}}(i)$
and
$B_{2}(x^{\star}-rw^{\star},r\|w^{\star}\|_{2})\subset\mathrm{DR}_{1/t_{0}}(j)$.
For $t<t_{0}$, the intersection $\mathrm{DR}_{1/t}(i)\cap\mathrm{DR}_{1/t}(j)$
has a non-empty interior.
###### Proof.
Recall the definition of $\Delta_{+}$ and $\mathrm{CH}_{\xi}$ from Definition
1.3. Let $\xi^{\star}$ be the maximizer of (1.1). We claim that
$0<\xi^{\star}<1$. Indeed,
$\displaystyle\Delta_{+}(\theta_{i},\theta_{j})=1-\min_{\xi\in[0,1]}\sum_{r\in[k]}\rho_{r}\left(P_{ri}^{\xi}P_{rj}^{1-\xi}+(1-P_{ri})^{\xi}(1-P_{rj})^{1-\xi}\right)=:1-\min_{\xi\in[0,1]}f(\xi).$
Now, $f(0)=f(1)=1$, and $f(1/2)<1$ by the AM-GM inequality. Therefore, the
minimum of $f$ is not attained at $\\{0,1\\}$, which proves $0<\xi^{\star}<1$.
Next, define the $2k$-dimensional vector
$\displaystyle
x^{\star}=\left(\rho_{r}P_{ri}^{\xi^{\star}}P_{rj}^{1-\xi^{\star}},\rho_{r}(1-P_{ri})^{\xi^{\star}}(1-P_{rj})^{1-\xi^{\star}}\right)_{r\in[k]}.$
Setting
$\frac{\mathrm{d}}{\mathrm{d}\xi}\mathrm{CH}_{\xi}(\theta_{i},\theta_{j})\big{|}_{\xi=\xi^{\star}}=0$
yields
$\begin{split}\sum_{r\in[k]}\bigg{[}\rho_{r}P_{ri}^{\xi^{\star}}P_{rj}^{1-\xi^{\star}}\log\frac{P_{ri}}{P_{rj}}+\rho_{r}(1-P_{ri})^{\xi^{\star}}(1-P_{rj})^{1-\xi^{\star}}\log\frac{1-P_{ri}}{1-P_{rj}}\bigg{]}=\langle
w^{\star},x^{\star}\rangle=0.\end{split}$ (2.7)
Also recall $\eta_{i}$ from (2.1). Then,
$\displaystyle\eta_{i}(x^{\star})-\eta_{j}(x^{\star})$
$\displaystyle=\sum_{r=1}^{k}\bigg{[}x_{1,r}^{\star}\log\bigg{(}\frac{x_{1,r}^{\star}}{\mathrm{e}\rho_{r}P_{ri}}\bigg{)}+x_{2,r}^{\star}\log\bigg{(}\frac{x_{2,r}^{\star}}{\mathrm{e}\rho_{r}(1-P_{ri})}\bigg{)}\bigg{]}$
$\displaystyle\qquad-\sum_{r=1}^{k}\bigg{[}x_{1,r}^{\star}\log\bigg{(}\frac{x_{1,r}^{\star}}{\mathrm{e}\rho_{r}P_{rj}}\bigg{)}+x_{2,r}^{\star}\log\bigg{(}\frac{x_{2,r}^{\star}}{\mathrm{e}\rho_{r}(1-P_{rj})}\bigg{)}\bigg{]}$
$\displaystyle=\sum_{r=1}^{k}\bigg{[}x_{1,r}^{\star}\log\bigg{(}\frac{P_{rj}}{P_{ri}}\bigg{)}+x_{2,r}^{\star}\log\bigg{(}\frac{1-P_{rj}}{1-P_{ri}}\bigg{)}\bigg{]}=-\langle
w^{\star},x^{\star}\rangle=0.$
Therefore,
$\displaystyle\eta_{i}(x^{\star})$
$\displaystyle=\eta_{j}(x^{\star})=\xi^{\star}\eta_{i}(x^{\star})+(1-\xi^{\star})\eta_{j}(x^{\star})$
$\displaystyle=\sum_{r=1}^{k}\bigg{[}x_{1,r}^{\star}\log\bigg{(}\frac{x_{1,r}^{\star}}{\mathrm{e}\rho_{r}P_{ri}^{\xi^{\star}}P_{rj}^{1-\xi^{\star}}}\bigg{)}+x_{2,r}^{\star}\log\bigg{(}\frac{x_{2,r}^{\star}}{\mathrm{e}\rho_{r}(1-P_{ri})^{\xi^{\star}}(1-P_{rj})^{1-\xi^{\star}}}\bigg{)}\bigg{]}+1$
$\displaystyle=\sum_{r=1}^{k}\left[x_{1,r}^{\star}\log(1/\mathrm{e})+x_{2,r}^{\star}\log(1/\mathrm{e})\right]+1$
$\displaystyle=1-\sum_{r=1}^{k}\big{[}x_{1,r}^{\star}+x_{2,r}^{\star}\big{]}=\sum_{r=1}^{k}\left[\rho_{r}-\rho_{r}P_{ri}^{\xi^{\star}}P_{rj}^{1-\xi^{\star}}-\rho_{r}(1-P_{ri})^{\xi^{\star}}(1-P_{rj})^{1-\xi^{\star}}\right]=\Delta_{+}(\theta_{i},\theta_{j}).$
Therefore, $x^{\star}\in\mathrm{DR}_{1/t_{0}}(i)\cap\mathrm{DR}_{1/t_{0}}(j)$.
Next, observe that
$\displaystyle\nabla\eta_{i}(x^{\star})$
$\displaystyle=\bigg{(}\log\bigg{(}\frac{x_{1,r}^{\star}}{\mathrm{e}\rho_{r}P_{ri}}\bigg{)}+1,\log\bigg{(}\frac{x_{2,r}^{\star}}{\mathrm{e}\rho_{r}(1-P_{ri})}\bigg{)}+1\bigg{)}_{r\in[k]}$
$\displaystyle=\bigg{(}\log\bigg{(}\frac{P_{ri}}{P_{rj}}\bigg{)}^{\xi^{\star}-1},\log\bigg{(}\frac{1-P_{ri}}{1-P_{rj}}\bigg{)}^{\xi^{\star}-1}\bigg{)}_{r\in[k]}=(\xi^{\star}-1)w^{\star}.$
Similarly, we also have that
$\displaystyle\nabla\eta_{j}(x^{\star})$
$\displaystyle=\bigg{(}\log\bigg{(}\frac{x_{1,r}^{\star}}{\mathrm{e}\rho_{r}P_{rj}}\bigg{)}+1,\log\bigg{(}\frac{x_{2,r}^{\star}}{\mathrm{e}\rho_{r}(1-P_{rj})}\bigg{)}+1\bigg{)}_{r\in[k]}$
$\displaystyle=\bigg{(}\log\bigg{(}\frac{P_{ri}}{P_{rj}}\bigg{)}^{\xi^{\star}},\log\bigg{(}\frac{1-P_{ri}}{1-P_{rj}}\bigg{)}^{\xi^{\star}}\bigg{)}_{r\in[k]}=\xi^{\star}w^{\star}.$
By convexity of $\eta_{i}$ and $\eta_{j}$, for any $x\in[0,\infty)^{2k}$, we
have that
$\displaystyle\eta_{i}(x)$ $\displaystyle\geq\eta_{i}(x^{\star})+\langle
x-x^{\star},\nabla\eta_{i}(x^{\star})\rangle=\frac{1}{t_{0}}+\langle
x-x^{\star},(\xi^{\star}-1)w^{\star}\rangle$ (2.8)
and
$\displaystyle\eta_{j}(x)$ $\displaystyle\geq\eta_{j}(x^{\star})+\langle
x-x^{\star},\nabla\eta_{j}(x^{\star})\rangle=\frac{1}{t_{0}}+\langle
x-x^{\star},\xi^{\star}w^{\star}\rangle.$ (2.9)
The above equalities can hold only at $x=x^{\star}$ since $\eta_{i}$ and
$\eta_{j}$ are strictly convex. Now, for any $x\in\mathrm{DR}_{1/t_{0}}(i)$,
we have $\eta_{i}(x)\leq 1/t_{0}$. Thus, (2.8) implies that
$(\xi^{\star}-1)\langle x-x^{\star},w^{\star}\rangle\leq 0$, in which case, we
must have $\langle x-x^{\star},w^{\star}\rangle\geq 0$ for all
$x\in\mathrm{DR}_{1/t_{0}}(i)$, and therefore $\mathrm{DR}_{1/t_{0}}(i)\subset
H$. On the other hand, (2.9) implies that $\xi^{\star}\langle
x-x^{\star},w^{\star}\rangle\leq 0$, and since $0<\xi^{\star}<1$, the equality
holds if and only if $x=x^{\star}$. Therefore, $\mathrm{DR}_{1/t_{0}}(j)\cap
H=\\{x^{\star}\\}$, which proves the first part of the claim.
Next, observe that by continuity of the second derivatives of $\eta_{i}$ and
$\eta_{j}$, there must exist $r_{0},c>0$ such that for all $x$ with
$\|x-x^{\star}\|_{2}\leq r_{0}$,
$\displaystyle\eta_{i}(x)$ $\displaystyle\leq\eta_{i}(x^{\star})+\langle
x-x^{\star},\nabla\eta_{i}(x^{\star})\rangle+c\|x-x^{\star}\|_{2}^{2}$
$\displaystyle=\frac{1}{t_{0}}+\langle
x-x^{\star},(\xi^{\star}-1)w^{\star}\rangle+c\|x-x^{\star}\|_{2}^{2}$
$\displaystyle=\frac{1}{t_{0}}+c\|x-x^{\star}+(\xi^{\star}-1)w^{\star}/2c\|_{2}^{2}-\|(\xi^{\star}-1)w^{\star}\|_{2}^{2}/4c\leq\frac{1}{t_{0}},$
for
$\|x-x^{\star}+(\xi^{\star}-1)w^{\star}/2c\|_{2}\leq\|(\xi^{\star}-1)w^{\star}\|_{2}/2c$,
and
$\displaystyle\eta_{j}(x)$ $\displaystyle\leq\eta_{j}(x^{\star})+\langle
x-x^{\star},\nabla\eta_{j}(x^{\star})\rangle+c\|x-x^{\star}\|_{2}^{2}$
$\displaystyle=\frac{1}{t_{0}}+\langle
x-x^{\star},\xi^{\star}w^{\star}\rangle+c\|x-x^{\star}\|_{2}^{2}$
$\displaystyle=\frac{1}{t_{0}}+c\|x-x^{\star}+\xi^{\star}w^{\star}/2c\|_{2}^{2}-\|\xi^{\star}w^{\star}\|_{2}^{2}/4c\leq\frac{1}{t_{0}},$
for
$\|x-x^{\star}+\xi^{\star}w^{\star}/2c\|_{2}\leq\|\xi^{\star}w^{\star}\|_{2}/2c$.
In order to ensure that $\eta_{i}(x),\eta_{j}(x)\leq 1/t_{0}$, set
$r=\min(r_{0}/\|w^{\star}\|_{2},\xi^{\star}/c,(1-\xi^{\star})/c)/2$. The ball
of radius $r\|w^{\star}\|_{2}$ centered at $x^{\star}-rw^{\star}$ is
completely contained in $\mathrm{DR}_{1/t_{0}}(j)$, and the ball of radius
$r\|w^{\star}\|_{2}$ centered at $x^{\star}+rw^{\star}$ is completely
contained in $\mathrm{DR}_{1/t_{0}}(i)$, as desired.
Finally, for $t<t_{0}$, observe that
$B_{2}(\mathrm{DR}_{1/t_{0}}(i),\tilde{\varepsilon})\subset\mathrm{DR}_{1/t}(i)$
for some $\tilde{\varepsilon}>0$, and thus $x^{\star}$ is in the interior of
$\mathrm{DR}_{1/t}(i)$. Similarly, $x^{\star}$ is in the interior of
$\mathrm{DR}_{1/t}(j)$. Therefore, the intersection
$\mathrm{DR}_{1/t}(i)\cap\mathrm{DR}_{1/t}(j)$ has a non-empty interior. ∎
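The tangency structure in Lemma 2.9 can be verified numerically: with $\xi^{\star}$ the maximizer of $\mathrm{CH}_{\xi}$, the point $x^{\star}$ constructed in the proof should satisfy $\eta_{i}(x^{\star})=\eta_{j}(x^{\star})=\Delta_{+}(\theta_{i},\theta_{j})$ and $\langle w^{\star},x^{\star}\rangle=0$. A sketch reusing the `chernoff_hellinger` and `dissonance` helpers from the earlier sketches (the parameter values are arbitrary examples):

```python
import numpy as np

rho = np.array([0.5, 0.5])                      # arbitrary example values
P = np.array([[0.7, 0.3], [0.3, 0.6]])
i, j = 0, 1
theta = lambda a: np.concatenate([rho * P[:, a], rho * (1 - P[:, a])])
xis = np.linspace(0.0, 1.0, 100001)
xi_star = xis[int(np.argmax([chernoff_hellinger(theta(i), theta(j), x)
                             for x in xis]))]
x1 = rho * P[:, i] ** xi_star * P[:, j] ** (1 - xi_star)
x2 = rho * (1 - P[:, i]) ** xi_star * (1 - P[:, j]) ** (1 - xi_star)
w_star = np.concatenate([np.log(P[:, i] / P[:, j]),
                         np.log((1 - P[:, i]) / (1 - P[:, j]))])
delta = chernoff_hellinger(theta(i), theta(j), xi_star)
print(dissonance(x1, x2, rho, P, i) - delta)    # ~ 0
print(dissonance(x1, x2, rho, P, j) - delta)    # ~ 0
print(w_star @ np.concatenate([x1, x2]))        # ~ 0
```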
### 2.5. A necessary and sufficient condition for optimal recovery.
Finally, we combine the results of the above sections to give a condition for
when it is possible or impossible to separate degree profiles using
hyperplanes. Recall the notions of separating communities and confusing
communities from Definitions 2.6, 2.7.
###### Proposition 2.10.
Let $G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$, $1\leq i<j\leq k$, and $w^{\star}$
be as in (2.6).
1. (1)
If $t>1/\Delta_{+}(\theta_{i},\theta_{j})$, then there exists $\varepsilon>0$
such that $w^{\star}$ separates communities $i$ and $j$ with margin
$\varepsilon\log(n)$ with probability $1-o(1)$.
2. (2)
Let $\mathcal{Z}\subset\mathbb{R}^{2k}$ be a linear subspace and
$w^{\star}\notin\mathcal{Z}$. There exists $\mu>0$ such that if
$t\Delta_{+}(\theta_{i},\theta_{j})<1+\mu$, then for every $m>0$ there exists
$\varepsilon>0$ such that the following holds with probability $1-o(1)$: For
every $z_{1},...,z_{m}\in\mathcal{Z}$ and
$\gamma_{1},...,\gamma_{m}\in\mathbb{R}$,
$[(z_{r})_{r=1}^{m},(\gamma_{r})_{r=1}^{m}]$ confuses communities $i$ and $j$
at level $\varepsilon\log(n)$.
###### Proof.
To prove the first part, define $t_{0}=1/\Delta_{+}(\theta_{i},\theta_{j})$,
so that $t_{0}<t$. By Lemma 2.9, there exists $x^{\star}$ such that
$\mathrm{DR}_{1/t_{0}}(i)\cap\mathrm{DR}_{1/t_{0}}(j)=\\{x^{\star}\\}$.
Additionally, the hyperplane $\\{x:\langle w^{\star},x-x^{\star}\rangle=0\\}$
separates $\mathrm{DR}_{1/t_{0}}(i)$ and $\mathrm{DR}_{1/t_{0}}(j)$. Note that
by (2.7), the hyperplane is equivalently written as $\\{x:\langle
w^{\star},x\rangle=0\\}$. Thus, for all $x\in\mathrm{DR}_{1/t_{0}}(i)$, we
have $\langle w^{\star},x\rangle>0$, while for all
$x\in\mathrm{DR}_{1/t_{0}}(j)$, we have $\langle w^{\star},x\rangle<0$.
Since $\mathrm{DR}_{1/t_{0}}(i)$ and $\mathrm{DR}_{1/t_{0}}(j)$ are both
closed, convex sets, $x^{\star}$ is neither in the interior of
$\mathrm{DR}_{1/t_{0}}(i)$ nor in the interior of $\mathrm{DR}_{1/t_{0}}(j)$.
Fix some $\delta\in(\frac{1}{t},\frac{1}{t_{0}})$. By Lemma 2.3, there exists
$\varepsilon^{\prime}>0$ such that
$B_{2}(\mathrm{DR}_{\delta}(i),\varepsilon^{\prime})\subset\mathrm{DR}_{1/t_{0}}(i)$.
Therefore, we can conclude that $x^{\star}\notin\mathrm{DR}_{\delta}(i)$.
Similarly $x^{\star}\notin\mathrm{DR}_{\delta}(j)$. Hence,
$\mathrm{DR}_{\delta}(i)\cap\mathrm{DR}_{\delta}(j)=\varnothing$. Also, since
$\mathrm{DR}_{1/t_{0}}(i)\backslash\\{x^{\star}\\}\subset\\{x:\langle
w^{\star},x\rangle>0\\}$, and
$\mathrm{DR}_{1/t_{0}}(j)\backslash\\{x^{\star}\\}\subset\\{x:\langle
w^{\star},x\rangle<0\\}$, we can conclude that the hyperplane $\\{x:\langle
w^{\star},x\rangle=0\\}$ separates $\mathrm{DR}_{\delta}(i)$ and
$\mathrm{DR}_{\delta}(j)$. Since dissonance ranges are closed by Lemma 2.3,
there exists $\varepsilon>0$ such that for any
$x^{\scriptscriptstyle(i)}\in\mathrm{DR}_{\delta}(i)$ and
$x^{\scriptscriptstyle(j)}\in\mathrm{DR}_{\delta}(j)$, we have
$\displaystyle\langle
w^{\star},x^{\scriptscriptstyle(i)}\rangle>\frac{\varepsilon}{2t}\quad\text{and}\quad\langle
w^{\star},x^{\scriptscriptstyle(j)}\rangle<-\frac{\varepsilon}{2t}.$
By Lemma 2.4, $d(u)/(t\log(n))\in\mathrm{DR}_{\delta}(i)$ for every $u\in
V_{i}$ with probability $1-o(1)$. Similarly,
$d(v)/(t\log(n))\in\mathrm{DR}_{\delta}(j)$ for every $v\in V_{j}$ with
probability $1-o(1)$. Therefore, with probability $1-o(1)$, we have that for
all $u\in V_{i}$ and $v\in V_{j}$,
$\displaystyle\langle
w^{\star},d(u)\rangle>\frac{\varepsilon}{2}\log(n)\quad\text{and}\quad\langle
w^{\star},d(v)\rangle<-\frac{\varepsilon}{2}\log(n).$
We conclude that $w^{\star}$ separates communities $i$ and $j$ with margin
$\varepsilon\log n$ with high probability.
Next, suppose that $w^{\star}\not\in\mathcal{Z}$. By Lemma 2.9, there exists
$r>0$ such that
$B_{2}(x^{\star}+rw^{\star},r\|w^{\star}\|_{2})\subset\mathrm{DR}_{1/{t_{0}}}(i)$
and
$B_{2}(x^{\star}-rw^{\star},r\|w^{\star}\|_{2})\subset\mathrm{DR}_{1/{t_{0}}}(j)$.
Next, let $w^{\prime}$ be the projection of $w^{\star}$ onto $\mathcal{Z}$.
The fact that $w^{\star}\notin\mathcal{Z}$ implies that
$w^{\star}-w^{\prime}\neq 0$ and $\|w^{\prime}\|_{2}<\|w^{\star}\|_{2}$. Let
$x^{\scriptscriptstyle(i)}=x^{\star}+r(w^{\star}-w^{\prime})$ and
$x^{\scriptscriptstyle(j)}=x^{\star}-r(w^{\star}-w^{\prime})$. We claim that
there exists a sufficiently small $r^{\prime}>0$ such that
$\displaystyle
B_{2}(x^{\scriptscriptstyle(i)},r^{\prime})\subset\mathrm{DR}_{1/t_{0}}(i)\quad\text{and}\quad
B_{2}(x^{\scriptscriptstyle(j)},r^{\prime})\subset\mathrm{DR}_{1/t_{0}}(j).$
(2.10)
Indeed, take $y\in B_{2}(x^{\scriptscriptstyle(i)},r^{\prime})$. Then,
$\displaystyle\|y-(x^{\star}+rw^{\star})\|_{2}\leq
r^{\prime}+r\|w^{\prime}\|_{2}.$
Since $\|w^{\prime}\|_{2}<\|w^{\star}\|_{2}$, we can pick $r^{\prime}$ such
that $\|y-(x^{\star}+rw^{\star})\|_{2}\leq r\|w^{\star}\|_{2}$, and therefore
$B_{2}(x^{\scriptscriptstyle(i)},r^{\prime})\subset
B_{2}(x^{\star}+rw^{\star},r\|w^{\star}\|_{2})\subset\mathrm{DR}_{1/t_{0}}(i)$.
The second conclusion of (2.10) follows similarly.
By (2.10), since $x^{(i)}$ and $x^{(j)}$ lie in interiors of
$\mathrm{DR}_{1/t_{0}}(i)$ and $\mathrm{DR}_{1/t_{0}}(j)$ respectively, there
exists $\mu>0$ such that, for any $t<t_{0}+\mu$, $x^{\scriptscriptstyle(i)}$
and $x^{\scriptscriptstyle(j)}$ also lie in interiors of
$\mathrm{DR}_{1/t}(i)$ and $\mathrm{DR}_{1/t}(j)$ respectively. Note that
$\mathrm{Proj}_{\mathcal{Z}}(x^{\scriptscriptstyle(i)})=\mathrm{Proj}_{\mathcal{Z}}(x^{\scriptscriptstyle(j)})$;
therefore the projections of $\mathrm{DR}_{1/t}(i)$ and $\mathrm{DR}_{1/t}(j)$
onto $\mathcal{Z}$ overlap. The desired conclusion follows by Proposition 2.8.
∎
The above result yields a corollary which is useful in designing our
classification algorithm for $k\geq 3$ communities (Algorithm 6).
###### Corollary 2.11.
If $t>1/\Delta_{+}(\theta_{i},\theta_{j})$, then there exists $\varepsilon>0$
such that with probability $1-o(1)$
$\big{(}\log(P_{ri}),\log(1-P_{ri})\big{)}_{r\in[k]}\cdot d(v)>\max_{j\neq
i}\big{(}\log(P_{rj}),\log(1-P_{rj})\big{)}_{r\in[k]}\cdot
d(v)+\varepsilon\log(n)$
for all $i\in[k]$ and $v\in V_{i}$.
###### Proof.
Proposition 2.10 implies that with probability $1-o(1)$,
$\big{(}\log(P_{ri}),\log(1-P_{ri})\big{)}_{r\in[k]}\cdot
d(v)>\big{(}\log(P_{rj}),\log(1-P_{rj})\big{)}_{r\in[k]}\cdot
d(v)+\varepsilon\log(n)$
for every $i,j\in[k]$, $i\neq j$, and $v\in V_{i}$. The claim follows. ∎
## 3\. Achievability and Impossibility
Let us define the Maximum A Posteriori (MAP) estimator, which is the optimal
estimator of $\sigma_{0}$. Given a realization $G$ of the censored graph, the
MAP estimator outputs
$\hat{\sigma}_{\scriptscriptstyle\mathrm{MAP}}\in\operatornamewithlimits{argmax}_{\sigma}\mathbb{P}(\sigma_{0}=\sigma\mid
G)$, choosing uniformly at random from the argmax set. In this section, we
start by proving Theorem 1.4, which is essentially equivalent to showing that
$\hat{\sigma}_{\scriptscriptstyle\mathrm{MAP}}$ does not succeed in exact
recovery for $t<t_{c}$. Next we prove that, in the two community case, the
estimator $\hat{\sigma}_{\scriptscriptstyle\mathrm{MAP}}$ always succeeds for
$t>t_{c}$. This shows the statistical achievability for the exact recovery
problem.
### 3.1. Impossibility
###### Proof of Theorem 1.4.
Recall that we have $t<t_{c}$ in this case where $t_{c}$ is given by (1.2).
Fix $i<j$ such that $t<t_{0}=1/\Delta_{+}(\theta_{i},\theta_{j})$. Using the
final conclusion of Lemma 2.9, we have that
$\mathrm{DR}_{1/t}(i)\cap\mathrm{DR}_{1/t}(j)$ contains an open ball. By Lemma
2.4 (1), there exists $d\in\mathbb{Z}_{+}^{2k}$ such that $d/(t\log
n)\in\mathrm{DR}_{1/t}(i)\cap\mathrm{DR}_{1/t}(j)$ and there are $L_{n}$ pairs
of vertices $\\{(u_{l},v_{l}):l\in[L_{n}]\\}$ with $u_{l}$’s from community $i$,
$v_{l}$’s from community $j$, and $L_{n}\to\infty$, such that
$d(u_{l})=d(v_{l})=d$ for all $l\in[L_{n}]$. Let
$\Sigma:=\operatornamewithlimits{argmax}_{\sigma}\mathbb{P}(\sigma_{0}=\sigma\mid
G)$. The above shows that $|\Sigma|\geq L_{n}$ with probability $1-o(1)$,
since swapping the labels of $u_{l}$ and $v_{l}$ leads to an equiprobable
assignment as they have the same degree profile. Now,
$\hat{\sigma}_{\scriptscriptstyle\mathrm{MAP}}$ makes a uniform selection from
$\Sigma$. Thus, conditionally on $|\Sigma|\geq L_{n}$,
$\hat{\sigma}_{\scriptscriptstyle\mathrm{MAP}}$ fails to recover community
labels of all the vertices in $\\{(u_{l},v_{l}):l\in[L_{n}]\\}$ with
probability at least $1-1/L_{n}$. Since $L_{n}\to\infty$ and $|\Sigma|\geq
L_{n}$ with probability $1-o(1)$, we have shown that
$\hat{\sigma}_{\scriptscriptstyle\mathrm{MAP}}$ fails to achieve exact
recovery with probability $1-o(1)$. Since
$\hat{\sigma}_{\scriptscriptstyle\mathrm{MAP}}$ fails, any other estimator
also fails in exact recovery, completing the proof. ∎
### 3.2. Statistical Achievability
###### Theorem 3.1.
Let $G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$. If $t>t_{c}$, then
$\lim_{n\to\infty}\mathbb{P}(\hat{\sigma}_{\scriptscriptstyle\mathrm{MAP}}\text{
achieves exact recovery})=1.$
In order to prove Theorem 3.1, we require two concentration results. Given a
graph $\mathcal{G}=(V,E)$ and $W\subseteq V$, let $e(W)$ be the number of
edges with both endpoints in $W$.
###### Lemma 3.2 ([13, Corollary 2.3]).
Let $0\leq p_{n}\leq 0.99$ and let $\mathcal{G}$ be a sample from an
Erdős–Rényi random graph on vertex set $[n]$ with edge probability $p_{n}$.
Then, with probability $1-o(1)$,
$\left|e(W)-\binom{|W|}{2}p_{n}\right|\leq O(\sqrt{np_{n}})|W|\quad\text{for
all }W\subseteq[n].$
###### Lemma 3.3.
Let $X_{1},X_{2},\dots,X_{n}$ be a sequence of independent discrete random
variables, whose support is a finite set $S$. Let $X=\sum_{i=1}^{n}X_{i}$ and
$Y=\sum_{i=1}^{n}|X_{i}|$. Let $L=\max\\{|x|:x\in S\\}$. Then for any
$\delta\in(0,1)$,
$\mathbb{P}\left(\left|X-\mathbb{E}\left[X\right]\right|\geq\delta|\mathbb{E}\left[X\right]|\right)\leq\exp\bigg{(}2-C\delta^{2}\frac{\left(\mathbb{E}[X]\right)^{2}}{L\mathbb{E}[Y]}\bigg{)},$
where $C>0$ is a universal constant.
The proof of Lemma 3.3 follows directly from [15, Theorem 1.3]. See Appendix B
for details. We will also need the following definitions in the proof of
Theorem 3.1.
###### Definition 3.4 (Permissible relabeling).
A permutation $\pi:[k]\to[k]$ is called a _permissible relabeling_ if
$\rho(i)=\rho(\pi(i))$ for all $i\in[k]$ and $P_{ij}=P_{\pi(i),\pi(j)}$ for
all $i,j\in[k]$. Let $\mathcal{P}(\rho,P)$ denote the set of permissible
relabelings.
###### Definition 3.5 (Discrepancy).
Given two assignments $\sigma,\sigma^{\prime}:[n]\to[k]$, their _discrepancy_
$\textsc{Disc}(\sigma,\sigma^{\prime})$ is defined as
$\min_{\pi\in\mathcal{P}(\rho,P)}\left\\{\mathrm{d}_{\scriptscriptstyle
H}((\pi\circ\sigma),\sigma^{\prime})\right\\},$
where $\mathrm{d}_{\scriptscriptstyle H}(\cdot,\cdot)$ denotes the Hamming
distance.
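Since $\mathcal{P}(\rho,P)$ has at most $k!$ elements, the discrepancy can be computed by brute force for small $k$. A short sketch under that assumption, with 0-indexed assignments stored as numpy arrays (all names illustrative):

```python
import numpy as np
from itertools import permutations

def permissible_relabelings(rho, P):
    """All permutations pi of {0,...,k-1} with rho_i = rho_{pi(i)} and
    P_{ij} = P_{pi(i),pi(j)} for all i, j (Definition 3.4)."""
    k = len(rho)
    perms = []
    for pi in permutations(range(k)):
        pi = np.array(pi)
        if np.allclose(rho, rho[pi]) and np.allclose(P, P[np.ix_(pi, pi)]):
            perms.append(pi)
    return perms

def discrepancy(sigma, sigma0, rho, P):
    """Disc(sigma, sigma0): minimum Hamming distance between pi∘sigma
    and sigma0 over permissible relabelings pi (Definition 3.5)."""
    return min(int(np.sum(pi[sigma] != sigma0))
               for pi in permissible_relabelings(rho, P))
```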
Note that, if an estimator $\hat{\sigma}$ satisfies
$\textsc{Disc}(\hat{\sigma},\sigma_{0})=0$ with high probability, then
$\hat{\sigma}$ achieves exact recovery. Next, let $E_{+}$ and $E_{-}$
respectively denote the sets of present and absent edges of $G$. For a
community assignment $\sigma$, communities $i,j\in[k]$ and $\Box\in\\{+,-\\}$,
define
$\displaystyle S^{ij}_{\scriptscriptstyle\Box}(G,\sigma)=\\{e=\\{u,v\\}\in
E_{\scriptscriptstyle\Box}:\\{\sigma(u),\sigma(v)\\}=\\{i,j\\}\\}\quad\text{and}\quad
s^{ij}_{\scriptscriptstyle\Box}(G,\sigma)=|S^{ij}_{\scriptscriptstyle\Box}(G,\sigma)|.$
For example, $s_{-}^{11}(G,\sigma)$ is the number of absent edges with both
endpoints in community 1 according to $\sigma$. Define
$\begin{split}z(G,\sigma)=2\sum_{i,j\in[k]:j\geq
i}\big{[}s^{ij}_{+}(G,\sigma)\log
P_{ij}+s^{ij}_{-}(G,\sigma)\log(1-P_{ij})\big{]}.\end{split}$ (3.1)
The idea is to show that the maximizer of $z(G,\sigma)$ yields a configuration
$\sigma$ with zero discrepancy. We state this in the following two lemmas
which deal with low and high values of discrepancies separately.
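Up to the factor $2$, $z(G,\sigma)$ is simply the log-likelihood of the observed present and absent edges under the assignment $\sigma$, and it can be evaluated directly from edge lists. A sketch, assuming the present and absent edges are given as integer arrays of vertex pairs and $\sigma$ is 0-indexed (all names illustrative):

```python
import numpy as np

def z_statistic(present, absent, sigma, P):
    """Evaluate z(G, sigma) of (3.1).

    present, absent : arrays of shape (m, 2) listing the observed
                      present and absent edges.
    sigma           : length-n community assignment (0-indexed).
    P               : k x k symmetric connection-probability matrix.
    """
    # Each edge {u,v} contributes log P_{sigma(u),sigma(v)} (present)
    # or log(1 - P_{sigma(u),sigma(v)}) (absent); P is symmetric.
    lp = np.log(P)[sigma[present[:, 0]], sigma[present[:, 1]]].sum()
    la = np.log(1.0 - P)[sigma[absent[:, 0]], sigma[absent[:, 1]]].sum()
    return 2.0 * (lp + la)
```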
###### Lemma 3.6.
There exists $c\in(0,1)$ such that with high probability
$\begin{split}z(G,\sigma)<z(G,\sigma_{0})\quad\text{for all }\sigma\text{ such
that }0<\textsc{Disc}(\sigma,\sigma_{0})\leq cn.\end{split}$ (3.2)
For the high discrepancy case, we need to restrict the range of $\sigma$. To
that end, for any $\eta>0$, define
$\begin{split}\Sigma_{0}(\eta):=\\{\sigma:[n]\mapsto[k]:|\\{v:\sigma(v)=i\\}|\in((\rho_{i}-\eta)n,(\rho_{i}+\eta)n),\
\forall i\in[k]\\}.\end{split}$ (3.3)
###### Lemma 3.7.
Fix any $c\in(0,1]$. There exists an $\eta>0$ such that with high probability
$\begin{split}z(G,\sigma)<z(G,\sigma_{0})\quad\text{for all
}\sigma\in\Sigma_{0}(\eta)\text{ such that
}\textsc{Disc}(\sigma,\sigma_{0})\geq cn.\end{split}$ (3.4)
###### Proof of Theorem 3.1.
Fix $c$ such that both the conclusions of Lemmas 3.6 and 3.7 hold. Let $\eta$
be picked according to Lemma 3.7. Rather than analyzing the MAP estimator, we
will analyze the estimator
$\overline{\sigma}=\operatornamewithlimits{argmax}_{\sigma\in\Sigma_{0}(\eta)}\\{z(G,\sigma)\\}.$
Lemmas 3.6 and 3.7 yield $\textsc{Disc}(\overline{\sigma},\sigma_{0})=0$, and
therefore $\overline{\sigma}$ succeeds in exact recovery, with high
probability. Since the MAP estimator is optimal, this also implies that the
MAP estimator succeeds in exact recovery with high probability. ∎
###### Proof of Lemma 3.6.
Let $\textsc{Disc}(\sigma,\sigma_{0})=\delta n$ for some $\delta>0$ (to be
chosen later). Let $\pi\in\mathcal{P}(\rho,P)$ be such that
$\mathrm{d}_{\scriptscriptstyle H}(\pi\circ\sigma,\sigma_{0})=\delta n$.
Since $z(G,\sigma)=z(G,\pi\circ\sigma)$ for any
$\pi\in\mathcal{P}(\rho,P)$, we may assume without loss of generality that
$\mathrm{d}_{\scriptscriptstyle H}(\sigma,\sigma_{0})=\delta n$. Let us fix
$\Box\in\\{+,-\\}$. To prove (3.2), we start by analyzing
$s_{\scriptscriptstyle\Box}^{rj}(G,\sigma)-s_{\scriptscriptstyle\Box}^{rj}(G,\sigma_{0})$
with $r,j\in[k]$. Fix $r\neq j$. We decompose
$\begin{split}&s_{\scriptscriptstyle\Box}^{rj}(G,\sigma)-s_{\scriptscriptstyle\Box}^{rj}(G,\sigma_{0})\\\
&=\sum_{\\{u,v\\}\in
E_{\scriptscriptstyle\Box}}\mathbbm{1}_{\\{\\{\sigma(u),\sigma(v)\\}=\\{r,j\\},\\{\sigma_{0}(u),\sigma_{0}(v)\\}=\\{r,i\\},i\neq
j\\}}+\sum_{\\{u,v\\}\in
E_{\scriptscriptstyle\Box}}\mathbbm{1}_{\\{\\{\sigma(u),\sigma(v)\\}=\\{r,j\\},\\{\sigma_{0}(u),\sigma_{0}(v)\\}=\\{i,j\\},i\neq
r\\}}\\\ &\quad+\sum_{\\{u,v\\}\in
E_{\scriptscriptstyle\Box}}\mathbbm{1}_{\\{\\{\sigma(u),\sigma(v)\\}=\\{r,j\\},\\{\sigma(u),\sigma(v)\\}\cap\\{\sigma_{0}(u),\sigma_{0}(v)\\}=\varnothing\\}}-\sum_{\\{u,v\\}\in
E_{\scriptscriptstyle\Box}}\mathbbm{1}_{\\{\\{\sigma(u),\sigma(v)\\}=\\{r,i\\},i\neq
j,\\{\sigma_{0}(u),\sigma_{0}(v)\\}=\\{r,j\\}\\}}\\\ &\quad-\sum_{\\{u,v\\}\in
E_{\scriptscriptstyle\Box}}\mathbbm{1}_{\\{\\{\sigma(u),\sigma(v)\\}=\\{i,j\\},i\neq
r,\\{\sigma_{0}(u),\sigma_{0}(v)\\}=\\{r,j\\}\\}}-\sum_{\\{u,v\\}\in
E_{\scriptscriptstyle\Box}}\mathbbm{1}_{\\{\\{\sigma_{0}(u),\sigma_{0}(v)\\}=\\{r,j\\},\\{\sigma(u),\sigma(v)\\}\cap\\{\sigma_{0}(u),\sigma_{0}(v)\\}=\varnothing\\}}.\end{split}$
(3.5)
To analyze (3.5), denote the six terms above by (I), (II), …, (VI),
respectively.
Let $H_{\scriptscriptstyle\Box}(\sigma)$ be the graph on
$\\{v:\sigma(v)\neq\sigma_{0}(v)\\}$ where $\\{u,v\\}$ is an edge of
$H_{\scriptscriptstyle\Box}(\sigma)$ if and only if $\\{u,v\\}\in
E_{\scriptscriptstyle\Box}$. Let $e(H_{\scriptscriptstyle\Box}(\sigma))$
denote the number of edges in $H_{\scriptscriptstyle\Box}(\sigma)$. We will
show that
$\begin{split}\bigg{|}\text{(I)}-\sum_{i\in[k]\setminus\\{j\\}}\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}d_{{\scriptscriptstyle\Box}r}(v)\bigg{|}\leq
3k\cdot e(H_{\scriptscriptstyle\Box}(\sigma)).\end{split}$ (3.6)
To compute (I), fix $i,r,j$ with $r\neq j$ and $i\neq j$, and consider two cases:
Case I: $i,j,r$ are distinct. Denote this contribution by (Ia). There are two
subcases. First, suppose that the $r$-labeled endpoint is the same vertex
under $\sigma$ and $\sigma_{0}$; that is, $u$ satisfies $\sigma(u)=\sigma_{0}(u)=r$. The
number of such edges is
$\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}d_{{\scriptscriptstyle\Box}r}(v)-\text{Err}_{\scriptscriptstyle\text{(I)}}$,
where $\text{Err}_{\scriptscriptstyle\text{(I)}}$ is the number of
$\\{u,v\\}\in E_{\scriptscriptstyle\Box}$ such that $\sigma(u)\neq
r,\sigma_{0}(u)=r,\sigma(v)=j,\sigma_{0}(v)=i$. To see this, note that the
summation
$\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}d_{{\scriptscriptstyle\Box}r}(v)$ counts
all edges (present or absent depending on $\Box=+$ or $\Box=-$) from
$\\{v:\sigma(v)=j,\sigma_{0}(v)=i\\}$ to $\\{u:\sigma_{0}(u)=r\\}$. However,
this causes an over-counting because these edges may be incident to $u$’s with
$\sigma(u)\neq r$, resulting in the subtraction of
$\text{Err}_{\scriptscriptstyle\text{(I)}}$. Note that
$\text{Err}_{\scriptscriptstyle\text{(I)}}$ is at most
$e(H_{\scriptscriptstyle\Box}(\sigma))$. Next, consider the second subcase,
where the $r$-labeled vertices under $\sigma$ and $\sigma_{0}$ are different. Since
$i,j,r$ are distinct, such edges will have both endpoints in
$\\{v:\sigma(v)\neq\sigma_{0}(v)\\}$. Therefore,
$\begin{split}\bigg{|}\text{(Ia)}-\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}d_{{\scriptscriptstyle\Box}r}(v)\bigg{|}\leq
2e(H_{\scriptscriptstyle\Box}(\sigma)).\end{split}$ (3.7)
Case II: $i=r$. Denote this contribution as (Ib). Since $r\neq j$, we only
need to consider the case where one of the endpoints is labeled $r$ by both
$\sigma,\sigma_{0}$. An argument identical to the first part of Case I shows
$\begin{split}\bigg{|}\text{(Ib)}-\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}d_{{\scriptscriptstyle\Box}r}(v)\bigg{|}\leq
e(H_{\scriptscriptstyle\Box}(\sigma)).\end{split}$ (3.8)
Combining (3.7) and (3.8), (3.6) follows immediately. Bounds similar to (3.6)
also hold for Terms (II), (IV), and (V). Term (III) is easily bounded by
$e(H_{\scriptscriptstyle\Box}(\sigma))$. Finally, Term (VI) enters (3.5) with
a negative sign, so we may simply drop it when upper bounding the difference.
For $r=j$, we get a decomposition similar to (3.5), except that the second and
fifth terms are omitted. For each of the terms, we can also prove (3.6). In
particular,
$\left|\text{(I)}-\sum_{i\in[k]\setminus\\{j\\}}\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}d_{{\scriptscriptstyle\Box}r}(v)\right|\leq(k-1)\cdot
e(H_{\scriptscriptstyle\Box}(\sigma))\leq 3k\cdot
e(H_{\scriptscriptstyle\Box}(\sigma)).$
Next, we need to bound $e(H_{\scriptscriptstyle\Box}(\sigma))$. Note that the
number of vertices in $H_{\scriptscriptstyle\Box}(\sigma)$ is
$\mathrm{d}_{\scriptscriptstyle H}(\sigma,\sigma_{0})$, where
$\mathrm{d}_{\scriptscriptstyle H}(\cdot,\cdot)$ denotes the Hamming distance.
Letting $\tau=\max_{a,b\in[k]}\max\\{P_{ab},1-P_{ab}\\}$, we see that there is
a coupling such that, with probability 1, $H_{\scriptscriptstyle\Box}(\sigma)$
is a subgraph of an Erdős-Rényi random graph on vertex set $[n]$ and edge
probability $\frac{\tau t\log n}{n}$. Applying Lemma 3.2, we obtain that with
probability $1-o(1)$
$\begin{split}e(H_{\scriptscriptstyle\Box}(\sigma))\leq\frac{\tau
t\log n}{2n}\mathrm{d}_{\scriptscriptstyle
H}(\sigma,\sigma_{0})^{2}+O(\sqrt{\log n})\mathrm{d}_{\scriptscriptstyle
H}(\sigma,\sigma_{0})\quad\text{for all }\sigma\in[k]^{n}.\end{split}$ (3.9)
Combining (3.6) and (3.9), we get an estimate for (I) in (3.5). Similar
estimates for (II), (IV), and (V) can be deduced using an identical argument.
The term (III) can again be directly bounded by
$e(H_{\scriptscriptstyle\Box}(\sigma))$, and (VI) can be dropped.
Therefore, (3.5) yields that with probability $1-o(1)$
$\begin{split}s_{\scriptscriptstyle\Box}^{rj}(G,\sigma)-s_{\scriptscriptstyle\Box}^{rj}(G,\sigma_{0})&\leq\sum_{i:i\neq
j}\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}d_{{\scriptscriptstyle\Box}r}(v)+\sum_{i:i\neq
r}\sum_{u:\sigma(u)=r,\sigma_{0}(u)=i}d_{{\scriptscriptstyle\Box}j}(u)\\\
&\qquad-\sum_{i:i\neq
j}\sum_{v:\sigma(v)=i,\sigma_{0}(v)=j}d_{{\scriptscriptstyle\Box}r}(v)-\sum_{i:i\neq
r}\sum_{u:\sigma(u)=i,\sigma_{0}(u)=r}d_{{\scriptscriptstyle\Box}j}(u)\\\
&\qquad\quad+\frac{8k\tau t\log n}{n}\mathrm{d}_{\scriptscriptstyle
H}(\sigma,\sigma_{0})^{2}+O(\sqrt{\log n})\mathrm{d}_{\scriptscriptstyle
H}(\sigma,\sigma_{0}).\end{split}$ (3.10)
For $r=j$, a bound identical to (3.10) holds after omitting the second and the
fourth terms. Next, by Proposition 2.10 Part (1), there exists $\varepsilon>0$
such that for all $i,j\in[k]$ with $i>j$, with high probability,
$\begin{split}\langle
w^{\star}_{ij},d(v)\rangle=\sum_{r\in[k]}d_{+r}(v)\log\frac{P_{ri}}{P_{rj}}+d_{-r}(v)\log\frac{1-P_{ri}}{1-P_{rj}}\
\ \begin{cases}\geq\varepsilon\log n,\quad&\forall v:\sigma_{0}(v)=j\\\
\leq-\varepsilon\log n,\quad&\forall v:\sigma_{0}(v)=i\end{cases}\end{split}$
(3.11)
Let
$L=\sum_{i,j\in[k]}\sum_{r\in[k]}\big{|}\log\frac{P_{ri}}{P_{rj}}\big{|}+\big{|}\log\frac{1-P_{ri}}{1-P_{rj}}\big{|}$.
Thus, (3.10) yields
$\displaystyle z(G,\sigma)-z(G,\sigma_{0})$ $\displaystyle\leq
4\sum_{i,j\in[k]:i>j}\sum_{r\in[k]}\bigg{[}\bigg{(}\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}d_{+r}(v)-\sum_{v:\sigma(v)=i,\sigma_{0}(v)=j}d_{+r}(v)\bigg{)}\log\frac{P_{ri}}{P_{rj}}$
$\displaystyle\quad+\bigg{(}\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}d_{-r}(v)-\sum_{v:\sigma(v)=i,\sigma_{0}(v)=j}d_{-r}(v)\bigg{)}\log\frac{1-P_{ri}}{1-P_{rj}}\bigg{]}$
$\displaystyle\qquad+\frac{8k^{3}L\tau t\log
n}{n}\mathrm{d}_{\scriptscriptstyle H}(\sigma,\sigma_{0})^{2}+O(\sqrt{\log
n})\mathrm{d}_{\scriptscriptstyle H}(\sigma,\sigma_{0})$
$\displaystyle=4\sum_{i,j\in[k]:i>j}\bigg{[}\sum_{v:\sigma(v)=j,\sigma_{0}(v)=i}\langle
w^{\star}_{ij},d(v)\rangle-\sum_{v:\sigma(v)=i,\sigma_{0}(v)=j}\langle
w^{\star}_{ij},d(v)\rangle\bigg{]}$ $\displaystyle\qquad+\frac{8k^{3}L\tau
t\log n}{n}\mathrm{d}_{\scriptscriptstyle
H}(\sigma,\sigma_{0})^{2}+O(\sqrt{\log n})\mathrm{d}_{\scriptscriptstyle
H}(\sigma,\sigma_{0})$ $\displaystyle\leq-4\mathrm{d}_{\scriptscriptstyle
H}(\sigma,\sigma_{0})\varepsilon\log n+\frac{8k^{3}L\tau t\log
n}{n}\mathrm{d}_{\scriptscriptstyle H}(\sigma,\sigma_{0})^{2}+O(\sqrt{\log
n})\mathrm{d}_{\scriptscriptstyle H}(\sigma,\sigma_{0}).$
Thus, for any $\delta\leq\frac{\varepsilon}{3k^{3}L\tau t}$, we can ensure
that $z(G,\sigma)-z(G,\sigma_{0})<0$ for all $\sigma$ with
$\mathrm{d}_{\scriptscriptstyle H}(\sigma,\sigma_{0})=\delta n$, with high
probability. The lemma therefore follows by taking
$c=\frac{\varepsilon}{3k^{3}L\tau t}$. ∎
###### Proof of Lemma 3.7.
Fix $c\in(0,1]$. Define,
$\displaystyle\eta=\frac{1}{6}\left(\min\\{|\rho_{i}-\rho_{j}|:\rho_{i}\neq\rho_{j},i,j\in[k]\\}\land\min\\{\rho_{i}:i\in[k]\\}\right)\wedge\frac{c}{4k}.$
(3.12)
Throughout, we condition on the event that $\sigma_{0}\in\Sigma_{0}(\eta)$,
where $\Sigma_{0}(\eta)$ is defined in (3.3). Due to (1.4),
this conditioning event holds with high probability. Fix an assignment
$\sigma\in\Sigma_{0}(\eta)$ satisfying $\textsc{Disc}(\sigma,\sigma_{0})\geq
cn$. The idea is to show that
$\mathbb{E}\left[z(G,\sigma)-z(G,\sigma_{0})\right]\leq-Cn\log n$, and use the
concentration bound in Lemma 3.3 to conclude that (3.4) holds.
We first compute the expected difference
$\mathbb{E}[z(G,\sigma)-z(G,\sigma_{0})]$. Let
$V_{ij}:=\\{v:\sigma_{0}(v)=i,\sigma(v)=j\\}$ and $\nu_{ij}=|V_{ij}|$. Fix
$i,j,a,b$ such that $a\geq i$, $b\geq j$, and $(i,j)\neq(a,b)$, so that
$V_{ij}\cap V_{ab}=\varnothing$. The expected number of present
edges between $V_{ij}$ and $V_{ab}$ is $\nu_{ij}\nu_{ab}\times\alpha P_{ia}$,
where $\alpha=\frac{t\log n}{n}$. The contribution of these edges to
$\mathbb{E}[z(G,\sigma)-z(G,\sigma_{0})]$ is
$2\nu_{ij}\nu_{ab}\times\alpha
P_{ia}\times\left(\log(P_{jb})-\log(P_{ia})\right)=2\nu_{ij}\nu_{ab}\times\alpha\times
P_{ia}\log\frac{P_{jb}}{P_{ia}}.$
Similarly, the contribution from absent edges is
$2\nu_{ij}\nu_{ab}\times\alpha\times(1-P_{ia})\log\frac{1-P_{jb}}{1-P_{ia}}.$
Summing over all contributions, and noting that the contribution for the terms
with $i=j$ and $a=b$ is zero due to the presence of $\log$ terms, we obtain
$\displaystyle\mathbb{E}\left[z(G,\sigma)-z(G,\sigma_{0})\right]$
$\displaystyle=2\alpha\sum_{i,j,a,b\in[k]:a\geq i,b\geq
j}\nu_{ij}\nu_{ab}\bigg{(}P_{ia}\log\frac{P_{jb}}{P_{ia}}+(1-P_{ia})\log\frac{1-P_{jb}}{1-P_{ia}}\bigg{)}$
$\displaystyle=-2\alpha\sum_{i,j,a,b\in[k]:a\geq i,b\geq
j}\nu_{ij}\nu_{ab}D_{\scriptscriptstyle\mathrm{KL}}\left(P_{ia},P_{jb}\right),$
where $D_{\scriptscriptstyle\mathrm{KL}}(\cdot,\cdot)$ denotes the
Kullback–Leibler divergence. Our goal is to upper-bound the expectation. Note
that all terms are nonpositive, so it suffices to bound a subset of the terms.
We treat two disjoint cases separately.
Case 1: For all $i$, there is at most one $j\in[k]$ such that
$\nu_{ij}\geq\frac{\eta n}{k}$. Fixing $i$, the pigeonhole principle then
implies that there is exactly one such $j$. But since
$\sum_{l}\nu_{il}\geq(\rho_{i}-\eta)n$, we know that
$\nu_{ij}\geq\rho_{i}n-(k-1)\frac{\eta n}{k}-\eta
n>\left(\rho_{i}-2\eta\right)n.$
Next we claim that we cannot have $\nu_{ij}>(\rho_{i}-2\eta)n$ and
$\nu_{i^{\prime}j}>(\rho_{i^{\prime}}-2\eta)n$ with $i\neq i^{\prime}$. Supposing
otherwise, we would have
$2(\rho_{l}-2\eta)n\leq\sum_{i}\nu_{il}\leq(\rho_{l}+\eta)n,$
which implies $\rho_{l}\leq 5\eta$. However, $\eta\leq\frac{\rho_{l}}{6}$ by
definition (3.12), and we have arrived at a contradiction. Hence, there exists
a unique permutation $\pi:[k]\to[k]$ such that
$\nu_{i,\pi(i)}>\left(\rho_{i}-2\eta\right)n$ for all $i\in[k]$.
Next, we argue that $\pi$ is not permissible. Indeed, if $\pi$ were
permissible, then
$\displaystyle\textsc{Disc}(\sigma,\sigma_{0})\leq\mathrm{d}_{\scriptscriptstyle
H}(\pi\circ\sigma,\sigma_{0})=\sum_{i}\sum_{j:\pi(j)\neq
i}\nu_{ij}=\sum_{i}\bigg{(}\sum_{j}\nu_{ij}-\nu_{i,\pi(i)}\bigg{)}\leq k\times
3\eta n<cn,$
where the second-to-last step uses $\sum_{j}\nu_{ij}\leq(\rho_{i}+\eta)n$ and
$\nu_{i,\pi(i)}>\left(\rho_{i}-2\eta\right)n$ and the last step follows from
(3.12). This leads to a contradiction and thus $\pi$ is not permissible.
Next, observe that
$\displaystyle\sum_{i,a}\nu_{i\pi(i)}\nu_{a\pi(a)}D_{\scriptscriptstyle\mathrm{KL}}\left(P_{ia},P_{\pi(i),\pi(a)}\right)$
$\displaystyle\geq
n^{2}\sum_{i,a}(\rho_{\pi(i)}-2\eta)(\rho_{\pi(a)}-2\eta)D_{\scriptscriptstyle\mathrm{KL}}\left(P_{ia},P_{\pi(i),\pi(a)}\right)$
$\displaystyle=n^{2}\sum_{i,a}(\rho_{i}-2\eta)(\rho_{a}-2\eta)D_{\scriptscriptstyle\mathrm{KL}}\left(P_{\pi(i),\pi(a)},P_{ia}\right)$
$\displaystyle\geq\frac{4}{9}n^{2}\sum_{i,a}\rho_{i}\rho_{a}\cdot
D_{\scriptscriptstyle\mathrm{KL}}\left(P_{\pi(i),\pi(a)},P_{ia}\right).$
Since $\pi(\cdot)$ is not permissible, there must exist $(i,a)$ for which
$P_{ia}\neq P_{\pi(i),\pi(a)}$, and thus the above term is at least
$\frac{4C^{\prime}}{9}n^{2}$ for some constant $C^{\prime}>0$. Hence,
$\mathbb{E}\left[z(G,\sigma)-z(G,\sigma_{0})\right]\leq-\frac{8\alpha
C^{\prime}n^{2}}{9}$.
Case 2: There exist $i,j,j^{\prime}$ with $j\neq j^{\prime}$ such that
$\nu_{ij},\nu_{ij^{\prime}}\geq\frac{\eta n}{k}$. Let $b\in[k]$ be such that
$P_{jb}\neq P_{j^{\prime}b}$. If no such $b$ exists, then communities $j$ and
$j^{\prime}$ are indistinguishable. In that case, $t_{c}=\infty$ and exact
recovery would be impossible for any fixed $t$. Let $a\in[k]$ be such that
$\nu_{ab}\geq\frac{(\rho_{b}-\eta)n}{k}$; such an $a$ exists by the pigeonhole
principle, since $\sum_{a\in[k]}\nu_{ab}\geq(\rho_{b}-\eta)n$. Then either
$P_{ia}\neq P_{jb}\text{ or }P_{ia}\neq P_{j^{\prime}b}.$
Therefore,
$\displaystyle-\mathbb{E}\left[z(G,\sigma)-z(G,\sigma_{0})\right]$
$\displaystyle\geq 2\alpha\sum_{i,j,a,b\in[k]:a\geq i,b\geq
j}\nu_{ij}\nu_{ab}D_{\scriptscriptstyle\mathrm{KL}}\left(P_{ia},P_{jb}\right)$
$\displaystyle\geq\alpha\frac{\eta
n}{k}\cdot\frac{(\rho_{b}-\eta)n}{k}\left(D_{\scriptscriptstyle\mathrm{KL}}\left(P_{ia},P_{jb}\right)+D_{\scriptscriptstyle\mathrm{KL}}\left(P_{ia},P_{j^{\prime}b}\right)\right)\geq\alpha
C^{\prime}n^{2}.$
Summarizing both cases, we have shown that there exists a constant
$C^{\prime\prime}>0$ such that
$\begin{split}\mathbb{E}\left[z(G,\sigma)-z(G,\sigma_{0})\right]\leq-\alpha
C^{\prime\prime}n^{2}=-tC^{\prime\prime}n\log n.\end{split}$ (3.13)
We next apply Lemma 3.3 to establish concentration of the difference
$z(G,\sigma)-z(G,\sigma_{0})$. Letting $\mathscr{P},\mathscr{A}$ denote the
set of present and absent edges respectively, note that
$\displaystyle X$
$\displaystyle:=\frac{1}{2}\left(z(G,\sigma)-z(G,\sigma_{0})\right)$
$\displaystyle=\sum_{1\leq u<v\leq
n}\bigg{[}\mathbbm{1}_{\\{\\{u,v\\}\in\mathscr{P}\\}}\log\frac{P_{\sigma(u),\sigma(v)}}{P_{\sigma_{0}(u),\sigma_{0}(v)}}+\mathbbm{1}_{\\{\\{u,v\\}\in\mathscr{A}\\}}\log\frac{1-P_{\sigma(u),\sigma(v)}}{1-P_{\sigma_{0}(u),\sigma_{0}(v)}}\bigg{]}.$
Denote each term in the summation by $X_{uv}$. Then $X=\sum_{1\leq u<v\leq
n}X_{uv}$ is a sum of independent random variables conditionally on
$\sigma_{0}$, for any $\sigma\in[k]^{n}$. Let $Y=\sum_{1\leq u<v\leq
n}|X_{uv}|$. Then for any $\delta\in(0,1)$,
$\displaystyle\mathbb{P}\left(z(G,\sigma)-z(G,\sigma_{0})\geq(1-\delta)\mathbb{E}\left[z(G,\sigma)-z(G,\sigma_{0})\right]\right)$
$\displaystyle\leq\exp\bigg{(}2-C\delta^{2}\frac{\left(\mathbb{E}[X]\right)^{2}}{L\mathbb{E}[Y]}\bigg{)},$
where $C$ is the universal constant from Lemma 3.3, and $L>0$ is a constant
depending on $P,t$. To upper-bound $\mathbb{E}[Y]$, note that for any $1\leq
u<v\leq n$
$\displaystyle\frac{\mathbb{E}\left[|X_{uv}|\right]}{|\mathbb{E}\left[X_{uv}\right]|}$
$\displaystyle=\frac{P_{\sigma_{0}(u),\sigma_{0}(v)}\left|\log\frac{P_{\sigma(u),\sigma(v)}}{P_{\sigma_{0}(u),\sigma_{0}(v)}}\right|+\left(1-P_{\sigma_{0}(u),\sigma_{0}(v)}\right)\left|\log\frac{1-P_{\sigma(u),\sigma(v)}}{1-P_{\sigma_{0}(u),\sigma_{0}(v)}}\right|}{\left|P_{\sigma_{0}(u),\sigma_{0}(v)}\log\frac{P_{\sigma(u),\sigma(v)}}{P_{\sigma_{0}(u),\sigma_{0}(v)}}+\left(1-P_{\sigma_{0}(u),\sigma_{0}(v)}\right)\log\frac{1-P_{\sigma(u),\sigma(v)}}{1-P_{\sigma_{0}(u),\sigma_{0}(v)}}\right|}.$
Taking a maximum on the right hand side over $\sigma\in[k]^{n}$, there exists
a constant $C^{(1)}>0$ depending on $P$ such that
$\mathbb{E}\left[|X_{uv}|\right]\leq C^{(1)}|\mathbb{E}\left[X_{uv}\right]|$
for all $u,v$. It follows that
$\displaystyle\mathbb{E}[Y]$ $\displaystyle\leq C^{(1)}\sum_{1\leq u<v\leq
n}\left|\mathbb{E}[X_{uv}]\right|\leq C^{(2)}n\log n,$
for some constant $C^{(2)}>0$. Also, by (3.13),
$|\mathbb{E}[X]|\geq\frac{tC^{\prime\prime}}{2}n\log n$. Therefore there
exists a constant $\tilde{C}>0$ such that
$\displaystyle\mathbb{P}\left(z(G,\sigma)-z(G,\sigma_{0})\geq(1-\delta)\mathbb{E}\left[z(G,\sigma)-z(G,\sigma_{0})\right]\right)$
$\displaystyle\leq\exp\big{(}2-\tilde{C}\delta^{2}n\log n\big{)}.$
Taking $\delta=\frac{1}{2}$ and using (3.13), we conclude that
$z(G,\sigma)-z(G,\sigma_{0})<0$ with probability at least
$1-\exp(2-\frac{\tilde{C}}{4}n\log n)$. Finally, we take a union bound over
the set $\\{\sigma:\textsc{Disc}(\sigma,\sigma_{0})\geq cn\\}$, whose
cardinality is at most $k^{n}$. Since $\exp(2-\frac{\tilde{C}}{4}n\log
n)k^{n}=o(1)$, we conclude that (3.4) holds with high probability.
∎
## 4\. Entrywise eigenvector bounds
Our analysis of spectral algorithms relies on precise entrywise control of
eigenvectors of adjacency matrices, which is guaranteed by the following
result. As before, we condition on a fixed value of $\sigma_{0}$ satisfying
(1.4) with $\varepsilon=n^{-1/3}$.
###### Theorem 4.1.
Fix $k\in\mathbb{N}$, $\rho\in(0,1)^{k}$ such that $\sum_{i=1}^{k}\rho_{i}=1$,
a symmetric matrix $P\in(0,1)^{k\times k}$, and let
$G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$. Define $A=A(G,y)$ for some constant
$y>0$, and let $A^{\star}=\mathbb{E}[A]$. Let $(\lambda_{i},u_{i})$ and
$(\lambda_{i}^{\star},u_{i}^{\star})$ denote the $i$-th largest eigenpair of
$A$ and $A^{\star}$ respectively. Let $r,s$ be integers satisfying $1\leq
r\leq k$ and $0\leq s\leq k-r$. Let
$U=(u_{s+1},\dots,u_{s+r})\in\mathbb{R}^{n\times r}$,
$U^{\star}=(u^{\star}_{s+1},\dots,u^{\star}_{s+r})\in\mathbb{R}^{n\times r}$,
and
$\Lambda^{\star}=\text{diag}(\lambda^{\star}_{s+1},\dots,\lambda^{\star}_{s+r})\in\mathbb{R}^{r\times
r}$. Suppose that
$\begin{split}\Delta^{\star}:=(\lambda^{\star}_{s}-\lambda^{\star}_{s+1})\land(\lambda^{\star}_{s+r}-\lambda^{\star}_{s+r+1})\land\min_{i\in[r]}\left|\lambda^{\star}_{s+i}\right|>0,\end{split}$
(4.1)
where $\lambda_{0}^{\star}=\infty$ and $\lambda_{k+1}^{\star}=-\infty$. Then,
with probability at least $1-O(n^{-3})$,
$\inf_{O\in\mathcal{O}^{r\times r}}\left\|UO-
AU^{\star}\left(\Lambda^{\star}\right)^{-1}\right\|_{2\to\infty}\leq\frac{C}{\log\log(n)\sqrt{n}},$
(4.2)
for some $C=C(\rho,P,t,y)>0$, where $\mathcal{O}^{r\times r}$ denotes the set
of $r\times r$ orthogonal matrices.
###### Corollary 4.2.
Recall the notation from Theorem 4.1. If all eigenvalues of $A^{\star}$ are
distinct and nonzero, then with probability $1-O(n^{-3})$, for all $i\in[k]$,
$\min_{s\in\\{\pm
1\\}}\left\|su_{i}-\frac{Au_{i}^{\star}}{\lambda_{i}^{\star}}\right\|_{\infty}\leq\frac{C}{\log\log(n)\sqrt{n}},$
for some $C=C(\rho,P,t,y)>0$.
The proof of Theorem 4.1 relies on an entrywise eigenvector perturbation bound
derived in [1]. We provide the statement for a general random matrix $A$ here
for completeness, and will verify these general conditions subsequently for
$G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$. Also, we reuse the notation from
Theorem 4.1. Let $H=U^{T}U^{\star}$, with singular value decomposition given
by $H=\bar{W}\bar{\Sigma}\bar{V}^{T}$. Let
$\text{sgn}(H)=\bar{W}\bar{V}^{T}\in\mathbb{R}^{r\times r}$, which is an
orthogonal matrix, called the _matrix sign function_ [10]. Given this setup,
[1, Theorem 2.1 Part (2)] gives the following result.
###### Theorem 4.3 (Theorem 2.1 Part (2), [1]).
Let $A$ be a random matrix as described above. Suppose that the following
assumptions are satisfied, for some $\gamma>0$ and
$\varphi(x):\mathbb{R}_{+}\to\mathbb{R}_{+}$.
1. (1)
(Properties of $\varphi$) $\varphi(x)$ is continuous and non-decreasing in
$\mathbb{R}_{+}$, $\varphi(0)=0$, and $\frac{\varphi(x)}{x}$ is non-increasing
in $\mathbb{R}_{+}$.
2. (2)
(Incoherence) $\|A^{\star}\|_{2\to\infty}\leq\gamma\Delta^{\star}$, where
$\Delta^{\star}$ is defined in (4.1).
3. (3)
(Row- and column-wise independence) For any $m\in[n]$, the entries in the
$m$th row and column are independent of the others, i.e., $\\{A_{ij}:i=m\text{ or
}j=m\\}$ is independent of $\\{A_{ij}:i\neq m,j\neq m\\}$.
4. (4)
(Spectral norm concentration) Define
$\kappa=\frac{1}{\Delta^{\star}}\max_{i\in[r]}|\lambda^{\star}_{s+i}|$, and
suppose $32\kappa\max\\{\gamma,\varphi(\gamma)\\}\leq 1$. Moreover, suppose
that, for some $\delta_{0}\in(0,1)$,
$\mathbb{P}\left(\|A-A^{\star}\|_{2}\leq\gamma\Delta^{\star}\right)\geq
1-\delta_{0}.$
5. (5)
(Row concentration) There exists $\delta_{1}\in(0,1)$ such that for any
$m\in[n]$ and $W\in\mathbb{R}^{n\times r}$,
$\mathbb{P}\left(\left\|(A-A^{\star})_{m,\cdot}W\right\|_{2}\leq\Delta^{\star}\|W\|_{2\to\infty}\varphi\left(\frac{\|W\|_{F}}{\sqrt{n}\|W\|_{2\to\infty}}\right)\right)\geq
1-\frac{\delta_{1}}{n}.$ (4.3)
Then, with probability at least $1-\delta_{0}-2\delta_{1}$,
$\left\|U\mathrm{sgn}(H)-AU^{\star}(\Lambda^{\star})^{-1}\right\|_{2\to\infty}\leq
C_{0}\Big{(}\kappa(\kappa+\varphi(1))(\gamma+\varphi(\gamma))\|U^{\star}\|_{2\to\infty}+\frac{\gamma}{\Delta^{\star}}\|A^{\star}\|_{2\to\infty}\Big{)},$
where $C_{0}>0$ is an absolute constant.
###### Remark 4.4.
Theorem 4.3 can be applied to the recovery of a single eigenvector $u_{l}$ by
setting $r=1$ and $s=l-1$. In that case, the requirement (4.3) simplifies to
$\mathbb{P}\left(\left|(A-A^{\star})_{m,\cdot}\cdot
w\right|\leq\Delta^{\star}\|w\|_{\infty}\varphi\left(\frac{\|w\|_{2}}{\sqrt{n}\|w\|_{\infty}}\right)\right)\geq
1-\frac{\delta_{1}}{n}$
for each $w\in\mathbb{R}^{n}$. The conclusion becomes
$\min_{s\in\\{\pm
1\\}}\left\|su_{l}-\frac{Au_{l}^{\star}}{\lambda_{l}^{\star}}\right\|_{\infty}\leq
C_{0}\Big{(}\kappa(\kappa+\varphi(1))(\gamma+\varphi(\gamma))\|u_{l}^{\star}\|_{\infty}+\frac{\gamma}{\Delta^{\star}}\|A^{\star}\|_{2\to\infty}\Big{)}.$
In order to prove Theorem 4.1, we verify the five conditions of Theorem 4.3.
The following lemma states properties of the eigenspace of $A^{\star}$.
###### Lemma 4.5.
Let $G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$ where $\rho\in(0,1)^{k}$ is such
that $\sum_{i}\rho_{i}=1$, and $P\in(0,1)^{k\times k}$ is a symmetric matrix.
Define $A=A(G,y)$ as in Definition 1.5. Denote $A^{\star}=\mathbb{E}[A]$ and
let $(\lambda_{l}^{\star},u_{l}^{\star})_{l\in[k]}$ be the top $k$ eigenpairs.
Then there exist constants $(\nu_{l})_{l\in[k]}$ depending on $P$, $\rho$,
$t$, and $y$ such that
$\begin{split}\lambda_{l}^{\star}=(1+o(1))\nu_{l}\log(n)\quad\text{for all
}l\in[k].\end{split}$ (4.4)
Moreover, if $y$ is such that $\nu_{l}$’s are distinct, then there exist
constants $(\zeta_{lj})_{l,j\in[k]}$ depending on $P$, $\rho$, $t$, and $y$
such that
$\begin{split}u_{lu}^{\star}=(1+o(1))\frac{\zeta_{lj}}{\sqrt{n}}\quad\text{for all
}u\in\\{v:\sigma_{0}(v)=j\\}.\end{split}$ (4.5)
###### Proof.
Since $A^{\star}_{ij}=\Theta\big{(}\frac{\log n}{n}\big{)}$ for $i\neq j$, the
proof of (4.4) follows by a straightforward adaptation
of the proof of [8, Lemma 3.2]. Next, consider the matrix
$B^{\star}=A^{\star}+D$, where $D$ is a diagonal matrix whose $(u,u)$-th entry
is $P_{jj}-y(1-P_{jj})$ if $u\in\\{v:\sigma_{0}(v)=j\\}$. Then $B^{\star}$ has
rank $k$ with distinct eigenvalues, and the corresponding eigenvectors will
have the form in (4.5). The $1+o(1)$ corrections account for
the diagonal perturbation, which can be handled by arguments identical to
those of [9, Lemma 5.3]. ∎
Among the conditions in Theorem 4.3, only the fourth and the fifth are
substantial. We verify them in the two lemmas below.
###### Lemma 4.6.
Let $A$ be a symmetric and zero-diagonal random matrix. Suppose that the
entries $\\{A_{ij}:i<j\\}$ are independent, $A_{ij}\in[a,b]$ for two constants
$a<b$, and $\mathbb{E}[|A_{ij}|]\leq p$ for all $i,j$, where $\frac{c_{0}\log
n}{n}\leq p\leq 1-c_{1}$ for constants $c_{0},c_{1}>0$. Then, for any $c>0$,
there exists $c^{\prime}>0$ such that
$\mathbb{P}\left(\|A-\mathbb{E}[A]\|_{2}\leq c^{\prime}\sqrt{np}\right)\geq
1-2n^{-c}.$
###### Proof.
Let $A=A^{+}-A^{-}$, where $A^{+}_{ij}=\max\\{A_{ij},0\\}$ and
$A^{-}_{ij}=-\min\\{A_{ij},0\\}$ for all $i,j$. Then
$\displaystyle\|A-\mathbb{E}[A]\|_{2}\leq\|A^{+}-\mathbb{E}[A^{+}]\|_{2}+\|A^{-}-\mathbb{E}[A^{-}]\|_{2}.$
(4.6)
Note that $A^{+}$ and $A^{-}$ are symmetric and zero-diagonal matrices with
independent upper-triangular entries. Also, note that for all $i\neq j$,
$\max\big{\\{}\mathbb{E}[A_{ij}^{+}],\mathbb{E}[A_{ij}^{-}]\big{\\}}\leq\mathbb{E}[|A_{ij}|]\leq
p.$
If $b\leq 0$, then $\|A^{+}-\mathbb{E}[A^{+}]\|_{2}=0$. Otherwise, suppose
$b>0$. By [11, Theorem 5], for any $c>0$, there exists $c_{+}>0$ such that
$\displaystyle\mathbb{P}\left(\|A^{+}-\mathbb{E}[A^{+}]\|_{2}>c_{+}\sqrt{b}\cdot\sqrt{np}\right)$
$\displaystyle=\mathbb{P}\bigg{(}\bigg{\|}\frac{1}{b}A^{+}-\frac{1}{b}\mathbb{E}[A^{+}]\bigg{\|}_{2}>c_{+}\sqrt{\frac{np}{b}}\bigg{)}\leq
n^{-c}.$ (4.7)
Similarly, if $a\geq 0$, then $\|A^{-}-\mathbb{E}[A^{-}]\|_{2}=0$. Otherwise,
suppose $a<0$. By [11, Theorem 5], for any $c>0$, there exists $c_{-}>0$ such
that
$\displaystyle\mathbb{P}\left(\|A^{-}-\mathbb{E}[A^{-}]\|_{2}>c_{-}\sqrt{|a|}\sqrt{np}\right)$
$\displaystyle=\mathbb{P}\bigg{(}\bigg{\|}\frac{1}{|a|}A^{-}-\frac{1}{|a|}\mathbb{E}[A^{-}]\bigg{\|}_{2}>c_{-}\sqrt{\frac{np}{|a|}}\bigg{)}\leq
n^{-c}.$ (4.8)
Take
$c^{\prime}=c_{+}\sqrt{\max\\{b,0\\}}+c_{-}\sqrt{|\min\\{a,0\\}|}$. Combining
(4.6), (4.7), and (4.8) with a union bound completes the proof. ∎
###### Lemma 4.7.
Let $r\in\mathbb{N}$ be a constant, and $W\in\mathbb{R}^{n\times r}$ be a
fixed matrix. Let $\\{Z_{i}\\}_{i=1}^{n}$ be independent random variables
where $\mathbb{P}(Z_{i}=1)=p_{i}$, $\mathbb{P}(Z_{i}=-y)=q_{i}$, and
$\mathbb{P}(Z_{i}=0)=1-p_{i}-q_{i}$. Finally, let
$\overline{Z}\in\mathbb{R}^{n}$, where
$\overline{Z}_{i}=Z_{i}-\mathbb{E}[Z_{i}]$ for $i\in[n]$. Then for any
$\beta\geq 0$,
$\mathbb{P}\bigg{(}\big{\|}\overline{Z}^{T}W\big{\|}_{2}\geq
r\frac{\max\\{1,y\\}(2+\beta)n}{1\lor\log\big{(}\frac{\sqrt{n}\|W\|_{2\to\infty}}{\|W\|_{F}}\big{)}}\|W\|_{2\to\infty}\max_{i}\\{p_{i}+q_{i}\\}\bigg{)}\leq
2r\exp\big{(}-\beta n\max_{i}\\{p_{i}+q_{i}\\}\big{)}.$
###### Proof.
Let $w_{j}=W_{\cdot j}$ denote the $j$th column of $W$, for $j\in[r]$. We will
show that
$\frac{r\|W\|_{2\to\infty}}{1\lor\log\big{(}\frac{\sqrt{n}\|W\|_{2\to\infty}}{\|W\|_{F}}\big{)}}\geq\sum_{j=1}^{r}\frac{\|w_{j}\|_{\infty}}{1\lor\log\big{(}\frac{\sqrt{n}\|w_{j}\|_{\infty}}{\|w_{j}\|_{2}}\big{)}}.$
(4.9)
Given (4.9), we then obtain
$\displaystyle\mathbb{P}\bigg{(}\big{\|}\overline{Z}^{T}W\big{\|}_{2}\geq
r\frac{\max\\{1,y\\}(2+\beta)n}{1\lor\log\big{(}\frac{\sqrt{n}\|W\|_{2\to\infty}}{\|W\|_{F}}\big{)}}\|W\|_{2\to\infty}\max_{i}\\{p_{i}+q_{i}\\}\bigg{)}$
$\displaystyle\leq\mathbb{P}\bigg{(}\sum_{j=1}^{r}\bigg{|}\sum_{i=1}^{n}W_{ij}\overline{Z}_{i}\bigg{|}\geq\max\\{1,y\\}(2+\beta)n\max_{i}\\{p_{i}+q_{i}\\}\sum_{j=1}^{r}\frac{\|w_{j}\|_{\infty}}{1\lor\log\big{(}\frac{\sqrt{n}\|w_{j}\|_{\infty}}{\|w_{j}\|_{2}}\big{)}}\bigg{)}$
(4.10)
$\displaystyle\leq\sum_{j=1}^{r}\mathbb{P}\bigg{(}\bigg{|}\sum_{i=1}^{n}W_{ij}\overline{Z}_{i}\bigg{|}\geq\frac{\max\\{1,y\\}(2+\beta)n\|w_{j}\|_{\infty}}{1\lor\log\big{(}\frac{\sqrt{n}\|w_{j}\|_{\infty}}{\|w_{j}\|_{2}}\big{)}}\max_{i}\\{p_{i}+q_{i}\\}\bigg{)}$
(4.11) $\displaystyle\leq 2r\exp\big{(}-\beta
n\max_{i}\\{p_{i}+q_{i}\\}\big{)}.$ (4.12)
Here (4.10) follows from (4.9) and the fact that $\|x\|_{2}\leq\|x\|_{1}$ for
any finite dimensional vector. Next, (4.11) follows by the union bound, and
(4.12) is an application of [9, Lemma 5.2].
It remains to prove (4.9). Since $\|w_{j}\|_{2}\leq\|W\|_{F}$ for all
$j\in[r]$, we obtain
$\displaystyle\sum_{j=1}^{r}\frac{\|w_{j}\|_{\infty}}{1\lor\log\big{(}\frac{\sqrt{n}\|w_{j}\|_{\infty}}{\|w_{j}\|_{2}}\big{)}}$
$\displaystyle\leq\sum_{j=1}^{r}\frac{\|w_{j}\|_{\infty}}{1\lor\log\left(\frac{\sqrt{n}\|w_{j}\|_{\infty}}{\|W\|_{F}}\right)}.$
Let $g(c,x):=\frac{x}{1\vee\log(cx)}$ for $c>0$. Then
$\frac{\partial}{\partial x}g(c,x)=1$ for $x<\mathrm{e}/c$, and
$\frac{\partial}{\partial x}g(c,x)=\frac{\log(cx)-1}{\log^{2}(cx)}>0$ for
$x>\mathrm{e}/c$. Therefore, $g(c,\cdot)$ is increasing for any $c>0$. Since
$\|w_{j}\|_{\infty}\leq\|W\|_{2\to\infty}$ for all $j$, we obtain
$\displaystyle\sum_{j=1}^{r}\frac{\|w_{j}\|_{\infty}}{1\lor\log\big{(}\frac{\sqrt{n}\|w_{j}\|_{\infty}}{\|W\|_{F}}\big{)}}=\sum_{j=1}^{r}g\bigg{(}\frac{\sqrt{n}}{\|W\|_{F}},\|w_{j}\|_{\infty}\bigg{)}\leq
rg\bigg{(}\frac{\sqrt{n}}{\|W\|_{F}},\|W\|_{2\to\infty}\bigg{)}=\frac{r\|W\|_{2\to\infty}}{1\lor\log\left(\frac{\sqrt{n}\|W\|_{2\to\infty}}{\|W\|_{F}}\right)},$
which completes the proof of (4.9). ∎
###### Proof of Theorem 4.1.
We now verify the conditions of Theorem 4.3 for the signed adjacency matrix
$A=A(G,y)$ when $G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$. Set
$\varphi(x)=r\frac{2\log(n)}{\Delta^{\star}}\max\\{1,y\\}(t+2)\bigg{(}1\lor\log\Big{(}\frac{1}{x}\Big{)}\bigg{)}^{-1}.$
Note that $\lim_{x\to 0^{+}}\varphi(x)=0$, that $\varphi$ is continuous and
non-decreasing, and that $\frac{\varphi(x)}{x}$ is non-increasing on
$\mathbb{R}_{+}$. Thus the first condition holds.
To verify the second condition, we find that
$\|A^{\star}\|_{2\to\infty}=\Theta\big{(}\frac{\log n}{\sqrt{n}}\big{)}$.
Applying Lemma 4.6 with $c=3$, and using the fact that
$|A^{\star}_{ij}|\leq\frac{t\log(n)}{n}\max_{i,j\in[k]}P_{ij}$, there exists
$c^{\prime}>0$ such that
$\mathbb{P}\left(\|A-\mathbb{E}[A]\|_{2}\leq
c^{\prime}\sqrt{\log(n)}\right)\geq 1-n^{-3}.$ (4.13)
By assumption, we have $\Delta^{\star}>0$. Moreover, by Lemma 4.5, we have
$\Delta^{\star}=\Theta(\log(n))$. Let
$\gamma=c^{\prime}\sqrt{\log(n)}/\Delta^{\star}$. Therefore,
$\|A^{\star}\|_{2\to\infty}\leq\gamma\Delta^{\star}$ is satisfied for $n$
large enough.
The third condition is immediate.
The second part of the fourth condition holds with $\delta_{0}=n^{-3}$ due to
(4.13). To verify the first part, note that $\kappa=\Theta(1)$ by Lemma 4.5
and $\gamma=o(1)$. Then $32\kappa\max\\{\gamma,\varphi(\gamma)\\}\leq 1$ for
all sufficiently large $n$.
To verify the fifth condition, fix $W\in\mathbb{R}^{n\times r}$ and $m\in[n]$.
By Lemma 4.7 with $p_{i}\in\\{\frac{t\log n}{n}P_{ab}:a,b\in[k]\\}$,
$p_{i}+q_{i}=\frac{t\log n}{n}$ and $\beta=\frac{4}{t}$, we obtain
$\mathbb{P}\bigg{(}\big{\|}\big{(}(A-A^{\star})_{m,\cdot}\big{)}\cdot
W\big{\|}_{2}\geq
r\frac{\max\\{1,y\\}(2+4/t)n}{1\lor\log\left(\frac{\sqrt{n}\|W\|_{2\to\infty}}{\|W\|_{F}}\right)}\|W\|_{2\to\infty}\frac{t\log
n}{n}\bigg{)}\leq 2r\exp\left(-\frac{4n}{t}\times\frac{t\log n}{n}\right),$
which can be re-written as
$\mathbb{P}\left(\big{\|}\big{(}(A-A^{\star})_{m,\cdot}\big{)}W\big{\|}_{2}\geq\Delta^{\star}\|W\|_{2\to\infty}\varphi\left(\frac{\|W\|_{F}}{\sqrt{n}\|W\|_{2\to\infty}}\right)\right)\leq
2rn^{-4}.$
Therefore, the fifth condition is satisfied with $\delta_{1}=2rn^{-3}$.
Applying Theorem 4.3, we conclude that with probability at least
$1-(1+4r)n^{-3}\geq 1-5rn^{-3}$,
$\displaystyle\inf_{O\in\mathcal{O}^{r\times r}}\left\|UO-
AU^{\star}\left(\Lambda^{\star}\right)^{-1}\right\|_{2\to\infty}$
$\displaystyle\leq
C_{0}\Big{(}\kappa(\kappa+\varphi(1))(\gamma+\varphi(\gamma))\|U^{\star}\|_{2\to\infty}+\frac{\gamma}{\Delta^{\star}}\|A^{\star}\|_{2\to\infty}\Big{)}\leq\frac{C(P,\rho,t,y)}{\log\log(n)\sqrt{n}}.$
∎
## 5\. Performance of Spectral Algorithms
Throughout this section, we use the notation
$\displaystyle
y(p,q)=\frac{\log\frac{1-q}{1-p}}{\log\frac{p}{q}}\quad\text{for}~{}p\neq q.$
(5.1)
This will be the choice of $y$ value for which Spectral-One algorithms are
optimal in the cases stated in Theorem 1.7 Part (1). Also, as before, we condition on
a fixed value of $\sigma_{0}$ satisfying (1.4) with $\varepsilon=n^{-1/3}$.
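For later reference, a one-line helper computing (5.1); the name is illustrative, and $p\neq q$ is assumed:

```python
import numpy as np

def y_weight(p, q):
    """y(p, q) of (5.1), defined for p != q."""
    return np.log((1.0 - q) / (1.0 - p)) / np.log(p / q)
```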
Recall that our spectral algorithms use the top two eigenvectors of the signed
adjacency matrix/matrices. In general, the signed adjacency matrix should have
two main eigenvectors which correspond (up to a potential sign flip) to the
main eigenvectors of the expected adjacency matrix. However, this could run
into complications if both eigenvalues are the same or one of the eigenvalues
is $0$. In order to address this, we have the following eigenvalue
characterization. The proof is provided in Appendix C.
###### Lemma 5.1.
Let $0<p_{1},p_{2},q<1$ be not all the same, $\rho\in(0,1)$ and define
$A^{\prime}=A^{\prime}(y):=\begin{pmatrix}p_{1}-y(1-p_{1})&q-y(1-q)\\\
q-y(1-q)&p_{2}-y(1-p_{2})\end{pmatrix}\begin{pmatrix}\rho&0\\\
0&1-\rho\end{pmatrix}$ (5.2)
for each $y>0$. Then all of the following hold.
1. (1)
For any fixed $p_{1},p_{2},q,\rho\in(0,1)$, there exists a set $\mathcal{Y}$
with $|\mathcal{Y}|\leq 3$ such that the eigenvalues of $A^{\prime}$ are
distinct and nonzero for all $y\notin\mathcal{Y}$.
2. (2)
If $p_{1}=p$, $p_{2}=q$, $p\neq q$, and $y=y(p,q)$ then the eigenvalues of
$A^{\prime}$ are distinct and nonzero.
3. (3)
If $p_{1}=p_{2}=p$, $p\neq q$, and $y=y(p,q)$ then the eigenvalues of
$A^{\prime}$ are distinct and nonzero if and only if $p+q\neq 1$.
### 5.1. One matrix
In order to prove Theorem 1.7 Part (1), we provide an algorithm, an instance
of Spectral-One, that succeeds up to the information theoretic threshold when
$\begin{split}\text{either }p_{1}=p_{2}=p,p\neq q\text{ and }p+q\neq
1\quad\text{or}\quad p_{1}=p\text{ and }p_{2}=q\neq p.\end{split}$ (5.3)
Algorithm 3 One-matrix community detection
1:Parameters $t>0$, $\rho\in(0,1)$, $p_{1},p_{2},q\in(0,1)$ satisfying (5.3)
and $G\sim\textsc{CSBM}_{n}^{2}(\bar{\rho},P,t)$.
2:Community classification $\hat{\sigma}\in\\{1,2\\}^{n}$.
3:Construct an $n\times n$ matrix $A=A(G,y)$ as defined in Definition 1.5.
4:Find the top two eigenpairs $(\lambda_{1},u_{1})\text{ and
}(\lambda_{2},u_{2})$ of $A$.
5:Compute $(a_{1},a_{2})$, the weights produced by Algorithm 4.
6:Let $U=\\{s_{1}a_{1}u_{1}+s_{2}a_{2}u_{2}:s_{1},s_{2}\in\\{\pm 1\\}\\}$. For
each $u\in U$, let $\hat{\sigma}(\cdot;u)=1+(1+\mathrm{sign}(u))/2$.
7:Return $\hat{\sigma}=\operatornamewithlimits{argmax}_{u\in
U}\mathbb{P}(G\mid\hat{\sigma}(\cdot;u))$.
Algorithm 4 Find weights (one matrix)
1:Parameters $t>0$, $\rho\in(0,1)$, $p_{1},p_{2},q\in(0,1)$ satisfying (5.3).
2:Weights $(a_{1},a_{2})$
3:Let $\mathcal{V}_{1}:=\\{i:i\leq\rho n\\}$ and define $B$ to be the
symmetric block matrix where $B_{ij}$ is $\frac{t\log
n}{n}[p_{1}-y(p_{1},q)(1-p_{1})]$ if $i,j\in\mathcal{V}_{1}$, $\frac{t\log
n}{n}[p_{2}-y(p_{1},q)(1-p_{2})]$ if $i,j\notin\mathcal{V}_{1}$, and
$\frac{t\log n}{n}[q-y(p_{1},q)(1-q)]$ if
$i\in\mathcal{V}_{1},j\notin\mathcal{V}_{1}$ or
$i\notin\mathcal{V}_{1},j\in\mathcal{V}_{1}$. Let the eigenpairs of $B$ be
denoted $(\gamma_{1},v_{1}),(\gamma_{2},v_{2})$.
4:Set $\alpha_{1}=\log\frac{p}{q}$. If $p_{1}=p_{2}=p$, set
$\alpha_{2}=-\alpha_{1}$. Otherwise ($p_{2}=q$), set $\alpha_{2}=0$. Let $z$
be a block vector with $z_{i}=\alpha_{1}$ if $i\in\mathcal{V}_{1}$ and
$z_{i}=\alpha_{2}$ if $i\not\in\mathcal{V}_{1}$.
5:Return $(a_{1},a_{2})$ satisfying $\begin{split}\sqrt{n}\log
n\left(a_{1}\frac{v_{1}}{\gamma_{1}}+a_{2}\frac{v_{2}}{\gamma_{2}}\right)=z.\end{split}$
(5.4)
It is worthwhile to note that finding the weights in Algorithm 4 does not
require any information about $\sigma_{0}$.
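For concreteness, here is a compact sketch of Algorithms 3 and 4 combined, specialized to the case $p_{1}=p$, $p_{2}=q$ of (5.3) (so $\alpha_{2}=0$) and using a single $y=y(p,q)$ throughout the construction of $B$. The input `A_signed` is the signed adjacency matrix $A(G,y)$ of Definition 1.5 (entries $1$ for present edges, $-y$ for absent edges, $0$ for censored pairs), and `loglik` is a caller-supplied function scoring a candidate labeling against $G$, standing in for the final likelihood-based selection; these names and this decomposition are illustrative, not the paper's.

```python
import numpy as np

def one_matrix_spectral(A_signed, t, rho, p, q, loglik):
    """Sketch of Algorithms 3-4 for p1 = p, p2 = q != p."""
    n = A_signed.shape[0]
    y = np.log((1 - q) / (1 - p)) / np.log(p / q)      # y(p, q) of (5.1)

    # Top two eigenpairs of A (largest eigenvalues first).
    vals, vecs = np.linalg.eigh(A_signed)
    order = np.argsort(vals)[::-1][:2]
    u = vecs[:, order]

    # Algorithm 4: block matrix B built from model parameters only.
    n1 = int(rho * n)
    sizes = [n1, n - n1]
    vals_blk = np.array([[p - y * (1 - p), q - y * (1 - q)],
                         [q - y * (1 - q), q - y * (1 - q)]])  # p2 = q
    B = (t * np.log(n) / n) * np.block(
        [[np.full((sizes[i], sizes[j]), vals_blk[i, j]) for j in range(2)]
         for i in range(2)])
    gvals, gvecs = np.linalg.eigh(B)
    gorder = np.argsort(gvals)[::-1][:2]
    gam, v = gvals[gorder], gvecs[:, gorder]

    # Solve (5.4): sqrt(n) log(n) (a1 v1/gamma1 + a2 v2/gamma2) = z,
    # with (alpha1, alpha2) = (log(p/q), 0) as in Algorithm 4, Step 2.
    z = np.where(np.arange(n) < n1, np.log(p / q), 0.0)
    M = np.sqrt(n) * np.log(n) * (v / gam)
    a, *_ = np.linalg.lstsq(M, z, rcond=None)

    # Try the four sign patterns and keep the most likely labeling.
    best, best_ll = None, -np.inf
    for s1 in (+1, -1):
        for s2 in (+1, -1):
            vec = s1 * a[0] * u[:, 0] + s2 * a[1] * u[:, 1]
            sigma = np.where(vec > 0, 1, 2)   # threshold at zero
            ll = loglik(sigma)
            if ll > best_ll:
                best, best_ll = sigma, ll
    return best
```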
###### Proof of Theorem 1.7.
Let $n_{i}$ be the number of vertices in community $i$ for $i=1,2$. Throughout
the proof, we will condition on $\sigma_{0}$ satisfying
$\left|n_{i}-\rho_{i}n\right|\leq n^{2/3}$. This event has probability
$1-o(1)$ as shown earlier in (1.4).
First, suppose that (5.3) holds. We prove Theorem 1.7 Part (1) by showing that
Algorithm 3 succeeds up to the information theoretic limit. Let $A=A(G,y)$
with $y=y(p_{1},q)$, and define $A^{\star}=\mathbb{E}[A]$. Let
$(\lambda_{i},u_{i})$ and $(\lambda_{i}^{\star},u_{i}^{\star})$ denote the
$i$-th largest eigenpair of $A$ and $A^{\star}$ respectively. We claim that
$\begin{split}\lambda_{1}^{\star}=(1+o(1))\nu_{1}\log
n,\quad\lambda_{2}^{\star}=(1+o(1))\nu_{2}\log n\quad\text{with
}\nu_{1}\neq\nu_{2},\ \nu_{1},\nu_{2}\neq 0.\end{split}$ (5.5)
Indeed, consider the matrix $B$ defined in Step 1 of Algorithm 4, whose
eigenvalues are $t\log n$ times the corresponding eigenvalues of the matrix
$A^{\prime}$ defined in (5.2). Under the conditions of Theorem 1.7 Part (1),
Lemma 5.1 (Parts 2 and 3) shows that the non-zero eigenvalues of $B$ are
$\nu_{1}\log n$ and $\nu_{2}\log n$ with $\nu_{1}\neq\nu_{2}$. Next, suppose
$O$ is the permutation matrix such that, in $OA^{\star}O^{T}$, the rows and
columns corresponding to vertices in community 1 appear before those in
community 2. By Weyl’s theorem, the top two eigenvalues of $OA^{\star}O^{T}$
are within $1+o(1)$ multiplicative factor of those of $B$. Since $O$ is an
orthogonal matrix, (5.5) follows immediately.
Using (5.5), we can apply Corollary 4.2 and conclude that,
with probability $1-O(n^{-3})$,
$\left\|s_{1}u_{1}-\frac{Au_{1}^{\star}}{\lambda_{1}^{\star}}\right\|_{\infty}\leq\frac{C}{\sqrt{n}\log\log
n}~{}~{}~{}\text{and}~{}~{}~{}\left\|s_{2}u_{2}-\frac{Au_{2}^{\star}}{\lambda_{2}^{\star}}\right\|_{\infty}\leq\frac{C}{\sqrt{n}\log\log
n},$
for some $s_{1},s_{2}\in\\{-1,1\\}$ and some constant $C>0$. Consequently, for
any $a_{1},a_{2}\in\mathbb{R}$, with probability $1-o(1)$,
$\begin{split}\left\|s_{1}a_{1}u_{1}+s_{2}a_{2}u_{2}-A\left(\frac{a_{1}}{\lambda_{1}^{\star}}u_{1}^{\star}+\frac{a_{2}}{\lambda_{2}^{\star}}u_{2}^{\star}\right)\right\|_{\infty}\leq\frac{C(|a_{1}|+|a_{2}|)}{\sqrt{n}\log\log
n}.\end{split}$ (5.6)
In Step 5 of Algorithm 3, we pick $(a_{1},a_{2})$ according to Algorithm 4.
Let $\mathcal{V}_{1}^{\prime}:=\\{i:i\leq n_{1}(\sigma_{0})\\}$ and
define $B^{\prime},v_{1}^{\prime},v_{2}^{\prime}$ analogously to $B,v_{1},v_{2}$
in Algorithm 4, replacing $\mathcal{V}_{1}$ by $\mathcal{V}_{1}^{\prime}$.
For $l=1,2$, note that $v_{l}$ takes some value $\frac{\zeta_{l1}}{\sqrt{n}}$
on $\mathcal{V}_{1}$ and $\frac{\zeta_{l2}}{\sqrt{n}}$ on
$\mathcal{V}_{1}^{c}$ for constants $\zeta_{l1},\zeta_{l2}$. Using identical
steps as [9, Lemma 5.3], we can argue that $v_{l}^{\prime}$ also takes value
$(1+o(1))\frac{\zeta_{l1}}{\sqrt{n}}$ on $\mathcal{V}_{1}^{\prime}$ and
$(1+o(1))\frac{\zeta_{l2}}{\sqrt{n}}$ on $(\mathcal{V}_{1}^{\prime})^{c}$.
Therefore,
$\displaystyle\sqrt{n}\log
n\left(a_{1}\frac{v_{1}^{\prime}}{\gamma_{1}}+a_{2}\frac{v_{2}^{\prime}}{\gamma_{2}}\right)=\tilde{z},$
where $\tilde{z}$ is a block vector taking values $(1+o(1))\alpha_{1}$ on
$\mathcal{V}_{1}^{\prime}$ and $(1+o(1))\alpha_{2}$ on
$(\mathcal{V}_{1}^{\prime})^{c}$. Next, note that the matrix $A^{\star}$ can
be obtained from $B^{\prime}$ by jointly permuting the row and column labels
and then setting the diagonal entries to be zero. Also, noting that
$\lambda_{l}^{\star}=(1+o(1))\gamma_{l}$, we can ensure that
$\begin{split}\sqrt{n}\log(n)\left(a_{1}\frac{u_{1}^{\star}}{\lambda_{1}^{\star}}+a_{2}\frac{u_{2}^{\star}}{\lambda_{2}^{\star}}\right)=z^{\star},\end{split}$
(5.7)
where $z^{\star}$ is a block vector taking value $(1+o(1))\alpha_{1}$ on
$V_{1}:=\\{v:\sigma_{0}(v)=+1\\}$ and $(1+o(1))\alpha_{2}$ on
$V_{2}:=\\{v:\sigma_{0}(v)=-1\\}$. Let
$\tau=A\left(a_{1}\frac{u_{1}^{\star}}{\lambda_{1}^{\star}}+a_{2}\frac{u_{2}^{\star}}{\lambda_{2}^{\star}}\right).$
Then $\sqrt{n}\log(n)\tau=(1+o(1))Az^{\star}$. By (5.7), with
probability $1-o(1)$, for each $v\in[n]$,
$\begin{split}\sqrt{n}\log(n)\tau_{v}&=\alpha_{1}d_{+1}(v)-y\alpha_{1}d_{-1}(v)+\alpha_{2}d_{+2}(v)-y\alpha_{2}d_{-2}(v)+o(\log
n),\end{split}$ (5.8)
where $(d_{+1}(v),d_{-1}(v),d_{+2}(v),d_{-2}(v))$ denotes the degree profile
of $v$. Also, in this case, note that $w^{\star}$ in (2.6) simplifies to
$\displaystyle
w^{\star}=\bigg{(}\log\frac{p_{1}}{q},\log\frac{1-p_{1}}{1-q},\log\frac{q}{p_{2}},\log\frac{1-q}{1-p_{2}}\bigg{)}.$
(5.9)
In order to apply Proposition 2.10, we need to ensure that the coefficients of
(5.8) coincide with $w^{\star}$ up to a scalar factor. There are two
cases to consider. First, suppose $p_{1}=p_{2}=p$, and $p\neq q$ (where we
rule out the case $\\{p+q=1,\rho\neq 1/2\\}$). Recalling that $y=y(p,q)$, we
obtain
$\displaystyle w^{\star}=(1,-y,-1,y)\log\left(\frac{p}{q}\right).$ (5.10)
Comparing (5.8) and (5.10), we see that the choice
$\alpha_{1}=\log\frac{p}{q}$ and $\alpha_{2}=-\alpha_{1}$ equates the
coefficients of the leading terms of (5.8) with the entries of
(5.10). These are exactly the values of $(\alpha_{1},\alpha_{2})$ chosen in
Step 2 of Algorithm 4.
Next, suppose $p_{2}=q$ and recall that $p_{1}\neq q$. By our choice of
$y=y(p_{1},q)$, we have that
$\displaystyle w^{\star}=(1,-y,0,0)\log\left(\frac{p}{q}\right).$
In this case, we need $\alpha_{1}=\log\frac{p_{1}}{q}$ and $\alpha_{2}=0$,
which is again the choice made in Step 2 of Algorithm 4. (Note that, in the
first case, any choice of the form $(\alpha_{1},\alpha_{2})=c(1,-1)$ with $c>0$
would lead to $\sqrt{n}\log(n)\tau_{v}-o(\log(n))\propto\langle w^{\star},d(v)\rangle$.)
Thus, in both cases, our choices of $(\alpha_{1},\alpha_{2})$ yield that, with
probability $1-o(1)$, $\sqrt{n}\log(n)\tau_{v}=\langle
w^{\star},d(v)\rangle+o(\log n)$ for each $v\in[n]$. By Proposition 2.10, we
conclude that for some $\varepsilon>0$,
$\displaystyle\sqrt{n}\log(n)\min_{v\in
V_{1}}\tau_{v}\geq\frac{1}{2}\varepsilon\log
n\quad\text{and}\quad\sqrt{n}\log(n)\max_{v\in
V_{2}}\tau_{v}\leq-\frac{1}{2}\varepsilon\log n$
with probability $1-o(1)$, and consequently
$\displaystyle\min_{v\in
V_{1}}\tau_{v}\geq\frac{\varepsilon}{2\sqrt{n}}\quad\text{and}\quad\max_{v\in
V_{2}}\tau_{v}\leq-\frac{\varepsilon}{2\sqrt{n}}.$
Finally, since
$\frac{C}{\sqrt{n}\log\log(n)}=o\big{(}\frac{1}{\sqrt{n}}\big{)}$, we conclude
with probability $1-o(1)$,
$\min_{v\in
V_{1}}\left(s_{1}a_{1}u_{1}+s_{2}a_{2}u_{2}\right)_{v}>\frac{\varepsilon}{3\sqrt{n}}\quad\text{and}\quad\max_{v\in
V_{2}}\left(s_{1}a_{1}u_{1}+s_{2}a_{2}u_{2}\right)_{v}<-\frac{\varepsilon}{3\sqrt{n}}.$
Therefore, thresholding the vector $s_{1}a_{1}u_{1}+s_{2}a_{2}u_{2}$ at zero
correctly identifies the communities with high probability. In other words,
$\mathrm{sign}(s_{1}a_{1}u_{1}+s_{2}a_{2}u_{2})$ coincides with the MAP
estimator. While $s_{1},s_{2}$ are unknown, the final step of Algorithm 3
chooses the best one among the four candidate community partitions arising
from the four possible sign combinations. By Theorem 3.1, we know that the MAP
estimator is the unique maximizer of the posterior probability. Therefore, the
spectral algorithm will identify the correct candidate. This completes the
proof of Theorem 1.7 Part (1).
To prove Theorem 1.7 Part (2), let $p_{1},p_{2},q$ be distinct. Notice that
(5.8) would hold for any $\alpha_{1},\alpha_{2}$ and the
corresponding choices of $a_{1},a_{2}$; the particular choice of
$\alpha_{1},\alpha_{2}$ was only needed after (5.8) in order to compare it
with $w^{\star}$. By Proposition 2.10 and (5.8), in order for
Spectral-One algorithms to be successful, we must have
$w^{\star}=(\alpha_{1},-y\alpha_{1},\alpha_{2},-y\alpha_{2})$. Suppose for the
sake of contradiction that
$w^{\star}=(\alpha_{1},-y\alpha_{1},\alpha_{2},-y\alpha_{2})$ for some
$\alpha_{1},\alpha_{2}$. Since all the entries of $w^{\star}$ are nonzero, we
know that $\alpha_{1},\alpha_{2}\neq 0$. By taking coordinate ratios, we have
that
$\displaystyle
y=\frac{\log\frac{1-q}{1-p_{1}}}{\log\frac{p_{1}}{q}}=y(p_{1},q)\quad\text{and}\quad
y=\frac{\log\frac{1-q}{1-p_{2}}}{\log\frac{p_{2}}{q}}=y(p_{2},q).$ (5.11)
Now, we claim that for any fixed $q\in(0,1)$, the function $y(p,q)$ is
strictly increasing. Indeed,
$\displaystyle\frac{\partial}{\partial
p}y(p,q)=\frac{\log\frac{p}{q}\times\frac{1}{1-p}-\log\frac{1-q}{1-p}\times\frac{1}{p}}{\log^{2}\frac{p}{q}}=\frac{1}{p\log\frac{p}{q}}\bigg{(}\frac{p}{1-p}-y(p,q)\bigg{)}.$
(5.12)
Using the fact that $1-\frac{1}{x}<\log x<x-1$ for any $x>0$,
$\displaystyle\text{ for any }p>q:\quad
y(p,q)=\frac{\log\frac{1-q}{1-p}}{\log\frac{p}{q}}<\frac{\frac{1-q}{1-p}-1}{1-\frac{q}{p}}=\frac{p}{1-p}$
(5.13) $\displaystyle\text{ for any }p<q:\quad
y(p,q)=\frac{\log\frac{1-q}{1-p}}{\log\frac{p}{q}}=\frac{\log\frac{1-p}{1-q}}{\log\frac{q}{p}}>\frac{1-\frac{1-q}{1-p}}{\frac{q}{p}-1}=\frac{p}{1-p}.$
(5.14)
Therefore, $\frac{\partial}{\partial p}y(p,q)>0$ for any $p\in(0,1)$, which
proves that $y(p,q)$ is strictly increasing. However, $p_{1}\neq p_{2}$ and
therefore $y(p_{1},q)\neq y(p_{2},q)$. Thus, (5.11) leads to a contradiction.
In other words, it is not possible to choose $\alpha_{1},\alpha_{2}$ so that
$w^{\star}=(\alpha_{1},-y\alpha_{1},\alpha_{2},-y\alpha_{2})$. The proof then
follows by applying Proposition 2.10 Part (2). ∎
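The strict monotonicity of $y(\cdot,q)$ established in (5.12)–(5.14), which drives the impossibility argument above, is also easy to check numerically; a small sketch (the grid and tolerance are arbitrary choices, not from the paper):

```python
import numpy as np

def y_weight(p, q):
    return np.log((1.0 - q) / (1.0 - p)) / np.log(p / q)

# For fixed q, y(., q) should be strictly increasing on (0,1) \ {q}.
q = 0.3
ps = np.linspace(0.01, 0.99, 197)
ps = ps[np.abs(ps - q) > 1e-6]   # drop the removable point p = q
assert np.all(np.diff(y_weight(ps, q)) > 0)
```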
###### Remark 5.2.
Instead of using the encoding $\\{1,-y,0\\}$ for present, absent and censored
edges, we could have instead used a more general encoding of the form
$\\{c_{1},-yc_{2},c_{3}\\}$. In that case, the entrywise approximation would
still hold. One could go through the same steps to show that the decision rule
for the spectral algorithm would again be asymptotically based on determining
whether some linear expression such as (5.8) is above or below a
certain threshold $T$. Thus, for $p_{1},p_{2},q$ which are distinct, an
identical argument shows that spectral algorithms with more general encoding
also do not succeed sufficiently close to $t_{c}$.
### 5.2. Two matrices
In this section, we will prove Theorem 1.10. Let us start by describing the
algorithm that always succeeds up to the information theoretic threshold in
the two community case.
Algorithm 5 Two-matrix community detection for two communities
1:Parameters $t>0$, $\rho,p_{1},p_{2},q\in(0,1)$ such that
$|\\{p_{1},p_{2},q\\}|\geq 2$, and
$G\sim\textsc{CSBM}_{n}^{2}(\bar{\rho},P,t)$.
2:Community classification $\hat{\sigma}\in\\{1,2\\}^{n}$.
3:Fix $y,\tilde{y}\notin\mathcal{Y}$ where $\mathcal{Y}$ is given by Lemma 5.1
Part (1). Construct two $n\times n$ matrices $A=A(G,y)$,
$\tilde{A}=A(G,\tilde{y})$ as defined in Definition 1.5.
4:Find the top two eigenpairs of $A$ and $\tilde{A}$, respectively denoting
them $((\lambda_{l},u_{l}))_{l=1,2}$ and
$((\tilde{\lambda}_{l},\tilde{u}_{l}))_{l=1,2}$.
5:Use Algorithm 6 on input $\left(t,\rho,\begin{pmatrix}p_{1}&q\\\
q&p_{2}\end{pmatrix},y,\tilde{y}\right)$ to compute the weights
$(c_{1},c_{2},\tilde{c}_{1},\tilde{c}_{2})$.
6:Let
$U=\\{s_{1}c_{1}u_{1}+s_{2}c_{2}u_{2}+\tilde{s}_{1}\tilde{c}_{1}\tilde{u}_{1}+\tilde{s}_{2}\tilde{c}_{2}\tilde{u}_{2}:s_{1},s_{2},\tilde{s}_{1},\tilde{s}_{2}\in\\{\pm
1\\}\\}.$ For each $u\in U$, let
$\hat{\sigma}(\cdot;u)=1+(1+\mathrm{sign}(u))/2$.
7:Return $\hat{\sigma}=\operatornamewithlimits{argmax}_{u\in
U}\mathbb{P}(G\mid\hat{\sigma}(\cdot;u))$.
Algorithm 6 Find weights (two matrices, two communities)
1:Parameters $t>0$, $\rho,p_{1},p_{2},q\in(0,1)$ such that
$|\\{p_{1},p_{2},q\\}|\geq 2$, and $y,\tilde{y}\notin\mathcal{Y}$,
$y\neq\tilde{y}$ where $\mathcal{Y}$ is given by Lemma 5.1 Part (1).
2:Weights $(c_{1},c_{2},\tilde{c}_{1},\tilde{c}_{2})$
3:Let $\mathcal{V}_{1}:=\\{i:i\leq\rho n\\}$ and define $B$ to be the
symmetric block matrix where $B_{ij}$ is $\frac{t\log
n}{n}[p_{1}-y(1-p_{1})]$ if $i,j\in\mathcal{V}_{1}$, $\frac{t\log
n}{n}[p_{2}-y(1-p_{2})]$ if $i,j\notin\mathcal{V}_{1}$, and
$\frac{t\log n}{n}[q-y(1-q)]$ if
$i\in\mathcal{V}_{1},j\notin\mathcal{V}_{1}$ or
$i\notin\mathcal{V}_{1},j\in\mathcal{V}_{1}$. Define $\tilde{B}$ similarly by
replacing $y$ by $\tilde{y}$. Let the eigenpairs of $B$ and $\tilde{B}$ be
$((\gamma_{l},v_{l}))_{l=1,2}$ and
$((\tilde{\gamma}_{l},\tilde{v}_{l}))_{l=1,2}$, respectively.
4:Solve the following system for
$\alpha_{1},\alpha_{2},\tilde{\alpha}_{1},\tilde{\alpha}_{2}$:
$\begin{split}&\alpha_{1}+\tilde{\alpha}_{1}=\log\frac{p_{1}}{q},\quad-y\alpha_{1}-\tilde{y}\tilde{\alpha}_{1}=\log\frac{1-p_{1}}{1-q},\\\
&\alpha_{2}+\tilde{\alpha}_{2}=\log\frac{q}{p_{2}},\quad-y\alpha_{2}-\tilde{y}\tilde{\alpha}_{2}=\log\frac{1-q}{1-p_{2}}.\end{split}$
(5.15) Let $z$ be a block vector with $z_{i}=\alpha_{1}$ for
$i\in\mathcal{V}_{1}$ and $z_{i}=\alpha_{2}$ for $i\not\in\mathcal{V}_{1}$.
Define $\tilde{z}$ similarly by replacing $(\alpha_{1},\alpha_{2})$ by
$(\tilde{\alpha}_{1},\tilde{\alpha}_{2})$.
5:Return $(c_{1},c_{2},\tilde{c}_{1},\tilde{c}_{2})$ satisfying
$\begin{split}\sqrt{n}\log
n\left(c_{1}\frac{v_{1}}{\gamma_{1}}+c_{2}\frac{v_{2}}{\gamma_{2}}\right)&=z\quad\text{and}\quad\sqrt{n}\log
n\left(\tilde{c}_{1}\frac{\tilde{v}_{1}}{\tilde{\gamma}_{1}}+\tilde{c}_{2}\frac{\tilde{v}_{2}}{\tilde{\gamma}_{2}}\right)=\tilde{z}.\end{split}$
(5.16)
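The system (5.15) decouples into two independent $2\times 2$ linear systems, one for $(\alpha_{1},\tilde{\alpha}_{1})$ and one for $(\alpha_{2},\tilde{\alpha}_{2})$, both with the same coefficient matrix, which is invertible precisely because $y\neq\tilde{y}$. A sketch of this step (names illustrative):

```python
import numpy as np

def solve_alpha_system(p1, p2, q, y, y_t):
    """Solve (5.15); requires y != y_t so the 2x2 matrix is invertible."""
    M = np.array([[1.0, 1.0],
                  [-y, -y_t]])
    rhs1 = np.array([np.log(p1 / q), np.log((1 - p1) / (1 - q))])
    rhs2 = np.array([np.log(q / p2), np.log((1 - q) / (1 - p2))])
    a1, a1_t = np.linalg.solve(M, rhs1)   # (alpha_1, alpha~_1)
    a2, a2_t = np.linalg.solve(M, rhs2)   # (alpha_2, alpha~_2)
    return (a1, a2), (a1_t, a2_t)
```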
###### Proof of Theorem 1.10.
As in the proof of Theorem 1.7, we condition on $\sigma_{0}$ satisfying
$\left|n_{i}-\rho_{i}n\right|\leq n^{2/3}$. Fix
$y,\tilde{y}\notin\mathcal{Y}$, $y\neq\tilde{y}$ where $\mathcal{Y}$ is given
by Lemma 5.1 Part (1). Recall the notation of Algorithms 5 and 6. Let
$A^{\star}:=\mathbb{E}[A]$ and $\tilde{A}^{\star}:=\mathbb{E}[\tilde{A}]$, and
let $((\lambda_{l}^{\star},u_{l}^{\star}))_{l=1,2}$ and
$((\tilde{\lambda}_{l}^{\star},\tilde{u}_{l}^{\star}))_{l=1,2}$ be the top
eigenpairs of the corresponding matrices. Applying Corollary 4.2, we have that
with probability $1-o(1)$
$\displaystyle\left\|s_{1}c_{1}u_{1}+s_{2}c_{2}u_{2}+\tilde{s}_{1}\tilde{c}_{1}\tilde{u}_{1}+\tilde{s}_{2}\tilde{c}_{2}\tilde{u}_{2}-A\bigg{(}c_{1}\frac{u_{1}^{\star}}{\lambda_{1}^{\star}}+c_{2}\frac{u_{2}^{\star}}{\lambda_{2}^{\star}}\bigg{)}-\tilde{A}\bigg{(}\tilde{c}_{1}\frac{\tilde{u}_{1}^{\star}}{\tilde{\lambda}_{1}^{\star}}+\tilde{c}_{2}\frac{\tilde{u}_{2}^{\star}}{\tilde{\lambda}_{2}^{\star}}\bigg{)}\right\|_{\infty}$
$\displaystyle\leq\frac{C\left(|c_{1}|+|c_{2}|+|\tilde{c}_{1}|+|\tilde{c}_{2}|\right)}{\sqrt{n}\log\log
n},$
for some $s_{1},s_{2},\tilde{s}_{1},\tilde{s}_{2}\in\\{\pm 1\\}$. Let
$\tau=A\bigg{(}c_{1}\frac{u_{1}^{\star}}{\lambda_{1}^{\star}}+c_{2}\frac{u_{2}^{\star}}{\lambda_{2}^{\star}}\bigg{)}+\tilde{A}\bigg{(}\tilde{c}_{1}\frac{\tilde{u}_{1}^{\star}}{\tilde{\lambda}_{1}^{\star}}+\tilde{c}_{2}\frac{\tilde{u}_{2}^{\star}}{\tilde{\lambda}_{2}^{\star}}\bigg{)}.$
Using (5.16), we can repeat the arguments preceding (5.8)
to show that $\sqrt{n}\log(n)\tau=(1+o(1))z^{\star}$, where $z_{i}^{\star}$ equals
$\alpha_{1}+\tilde{\alpha}_{1}$ on $V_{1}:=\\{u:\sigma_{0}(u)=+1\\}$ and
$\alpha_{2}+\tilde{\alpha}_{2}$ on $V_{2}:=\\{u:\sigma_{0}(u)=-1\\}$.
Consequently, with probability $1-o(1)$, for each $v\in[n]$,
$\displaystyle\sqrt{n}\log(n)\tau_{v}$
$\displaystyle=d_{+1}(v)\left(\alpha_{1}+\tilde{\alpha}_{1}\right)-d_{-1}(v)\left(y\alpha_{1}+\tilde{y}\tilde{\alpha}_{1}\right)+d_{+2}(v)\left(\alpha_{2}+\tilde{\alpha}_{2}\right)-d_{-2}(v)\left(y\alpha_{2}+\tilde{y}\tilde{\alpha}_{2}\right)+o(\log
n).$
By Proposition 2.10 and (5.15), there exists some
$\varepsilon>0$ such that, with probability $1-o(1)$,
$\displaystyle\sqrt{n}\log(n)\min_{v\in
V_{1}}\tau_{v}\geq\frac{1}{2}\varepsilon\log n\quad\text{and}\quad
\sqrt{n}\log(n)\max_{v\in V_{2}}\tau_{v}\leq-\frac{1}{2}\varepsilon\log n.$
The rest of the proof is identical to the final
part of the argument in the proof of Theorem 1.7 Part (1). ∎
## 6\. More than two communities
In this section, we will prove Theorem 1.12. Similar to Lemma 5.1, we need the
following, whose proof is provided in Appendix C.
###### Lemma 6.1.
Let $\rho\in(0,1)^{k}$, and $P\in(0,1)^{k\times k}$ be a symmetric matrix. For
any $y>0$, let $P^{\scriptscriptstyle(y)}$ be the matrix such that
$P^{\scriptscriptstyle(y)}_{ij}=\rho_{j}(P_{ij}-y(1-P_{ij}))$ for all $i,j$.
Then, either (1) $P^{\scriptscriptstyle(y)}$ has a zero eigenvalue for all $y$
or (2) $P^{\scriptscriptstyle(y)}$ has repeated eigenvalues for all $y$ or (3)
there is a finite set $\mathcal{Y}$ such that $P^{\scriptscriptstyle(y)}$ has
distinct nonzero eigenvalues for all $y\not\in\mathcal{Y}$.
Consequently, if $P^{\scriptscriptstyle(0)}:=P\cdot\text{diag}(\rho)$ has $k$
distinct, non-zero eigenvalues, then (3) holds.
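The trichotomy in Lemma 6.1 is easy to probe numerically; the sketch below (with parameter values of our own choosing) computes the spectrum of $P^{\scriptscriptstyle(y)}$ for a few random $y$:

```python
import numpy as np

# P^(y)_{ij} = rho_j (P_ij - y(1 - P_ij)); for generic (rho, P) the
# eigenvalues are distinct and nonzero for all but finitely many y.
rho = np.array([0.2, 0.3, 0.5])
P = np.array([[0.8, 0.3, 0.2],
              [0.3, 0.7, 0.4],
              [0.2, 0.4, 0.6]])

rng = np.random.default_rng(0)
for y in rng.uniform(0.0, 2.0, size=5):
    P_y = (P - y * (1 - P)) * rho[None, :]   # multiply column j by rho_j
    # Eigenvalues are real: P^(y) is similar to a symmetric matrix.
    eigs = np.sort(np.linalg.eigvals(P_y).real)
    print(f"y = {y:.3f}: eigenvalues {np.round(eigs, 4)}")
```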
Let us describe the algorithm that always succeeds up to the information
theoretic threshold in the $k$-community case.
Algorithm 7 Two-matrix community detection for general $k\geq 3$ communities
1:Parameters $t>0$, $\rho\in(0,1)^{k}$ such that $\sum_{i}\rho_{i}=1$, a
symmetric matrix $P\in(0,1)^{k\times k}$, and also
$G\sim\textsc{CSBM}_{n}^{k}(\rho,P,t)$.
2:Community classification $\hat{\sigma}\in[k]^{n}$.
3:Fix $y,\tilde{y}\notin\mathcal{Y}$ where $\mathcal{Y}$ is given by Lemma
6.1. Construct two $n\times n$ matrices $A=A(G,y)$, $\tilde{A}=A(G,\tilde{y})$
as defined in Definition 1.5.
4:Find the top $k$ eigenpairs of $A$ and $\tilde{A}$, respectively denoting
them $((\lambda_{l},u_{l}))_{l\in[k]}$ and
$((\tilde{\lambda}_{l},\tilde{u}_{l}))_{l\in[k]}$. Let $U$ (respectively
$\tilde{U}$) be the $n\times k$ matrix whose $i$-th column is $u_{i}$
(respectively $\tilde{u}_{i}$).
5:Use Algorithm 8 on input $\left(t,\rho,P,y,\tilde{y}\right)$ to compute the weight vectors $\left(c_{i},\tilde{c}_{i}\right)_{i\in[k]}$.
6:For $s\in\\{\pm 1\\}^{k}$, let $D^{\scriptscriptstyle(s)}:=\text{diag}(s)$.
For any $s,\tilde{s}\in\\{\pm 1\\}^{k}$, construct the estimator
$\begin{split}\hat{\sigma}(v;s,\tilde{s})=\operatornamewithlimits{argmax}_{i\in[k]}\Big\\{\big(UD^{\scriptscriptstyle(s)}c_{i}\big)_{v}+\big(\tilde{U}D^{\scriptscriptstyle(\tilde{s})}\tilde{c}_{i}\big)_{v}\Big\\}\quad\text{for each }v\in[n].\end{split}$ (6.1)
7:Return $\hat{\sigma}=\operatornamewithlimits{argmax}_{s,\tilde{s}\in\\{\pm
1\\}^{k}}\mathbb{P}(G\mid\hat{\sigma}(\cdot;s,\tilde{s}))$.
Algorithm 8 Find weights (Two matrices, $k\geq 3$ communities)
1:Parameters $t>0$, $\rho\in(0,1)^{k}$ such that $\sum_{i}\rho_{i}=1$, a
symmetric matrix $P\in(0,1)^{k\times k}$, and $y,\tilde{y}\notin\mathcal{Y}$
where $\mathcal{Y}$ is given by Lemma 6.1.
2:Weight vectors
$\left(c_{i},\tilde{c}_{i}\right)_{i=1}^{k}\subset\mathbb{R}^{k}$.
3:For $r\in[k]$, let $\mathcal{V}_{r}:=\\{i:n\sum_{j=0}^{r-1}\rho_{j}\leq i\leq n\sum_{j=0}^{r}\rho_{j}\\}$ with $\rho_{0}=0$. Define $B$ to be the symmetric block matrix where $B_{uv}=\frac{t\log n}{n}[P_{ij}-y(1-P_{ij})]$ if $u\in\mathcal{V}_{i}$ and $v\in\mathcal{V}_{j}$. Define $\tilde{B}$ similarly
by replacing $y$ by $\tilde{y}$. Let the top $k$ eigenpairs of $B$ and
$\tilde{B}$ be $((\gamma_{i},v_{i}))_{i\in[k]}$, and
$((\tilde{\gamma}_{i},\tilde{v}_{i}))_{i\in[k]}$. Let $V$ (respectively
$\tilde{V}$) be the $n\times k$ matrix whose $i$th column is
$\nicefrac{{v_{i}}}{{\gamma_{i}}}$ (respectively
$\nicefrac{{\tilde{v}_{i}}}{{\tilde{\gamma}_{i}}}$).
4:Solve the following system for $\\{\alpha_{ri}\\}_{r,i\in[k]}$,
$\\{\tilde{\alpha}_{ri}\\}_{r,i\in[k]}$: $\begin{split}\alpha_{ri}+\tilde{\alpha}_{ri}=\log\left(P_{ri}\right),\quad-y\alpha_{ri}-\tilde{y}\tilde{\alpha}_{ri}=\log\left(1-P_{ri}\right),\quad\forall r,i\in[k].\end{split}$ (6.2) For $i\in[k]$, let $z_{i}$ (respectively
$\tilde{z}_{i}$) be the block vector with $z_{iv}=\alpha_{ri}$ (respectively
$\tilde{z}_{iv}=\tilde{\alpha}_{ri}$) when $v\in\mathcal{V}_{r}$.
5:Return $\left(c_{i},\tilde{c}_{i}\right)_{i=1}^{k}$ solving
$\begin{split}\sqrt{n}\log(n)Vc_{i}=z_{i}\quad\text{and}\quad\sqrt{n}\log(n)\tilde{V}\tilde{c}_{i}=\tilde{z}_{i}\quad\text{for
all }i\in[k].\end{split}$ (6.3)
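A compact numerical sketch of this find-weights routine (our own toy parameters; the eigenpairs come straight from the deterministic block matrix $B$):

```python
import numpy as np

n, t, k = 300, 1.0, 3
rho = np.array([0.2, 0.3, 0.5])
P = np.array([[0.8, 0.3, 0.2],
              [0.3, 0.7, 0.4],
              [0.2, 0.4, 0.6]])
y, y_t = 0.25, 0.8  # assumed to lie outside the bad set of Lemma 6.1

# Solve (6.2): the same 2x2 system for every pair (r, i).
M = np.array([[1.0, 1.0], [-y, -y_t]])
rhs = np.stack([np.log(P), np.log(1 - P)])              # shape (2, k, k)
alpha, alpha_t = np.linalg.solve(M, rhs.reshape(2, -1)).reshape(2, k, k)

# Community blocks V_r and the block vectors z_i, z_i~ (column i of Z).
sizes = np.round(rho * n).astype(int); sizes[-1] = n - sizes[:-1].sum()
labels = np.repeat(np.arange(k), sizes)                 # vertex -> community
Z, Z_t = alpha[labels, :], alpha_t[labels, :]

def weight_vectors(yy, Zmat):
    B = (t * np.log(n) / n) * (P - yy * (1 - P))[labels][:, labels]
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(-np.abs(vals))[:k]                 # top k eigenpairs
    V = vecs[:, idx] / vals[idx]                        # columns v_i/gamma_i
    C, *_ = np.linalg.lstsq(np.sqrt(n) * np.log(n) * V, Zmat, rcond=None)
    return C                                            # column i is c_i

C, C_t = weight_vectors(y, Z), weight_vectors(y_t, Z_t)
print(np.round(C, 4)); print(np.round(C_t, 4))
```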
###### Proof of Theorem 1.12.
The argument is identical to the proof of Theorem 1.10. We skip redoing all
the details for general $k\geq 3$ and instead give an overview of the steps.
Indeed, since $P\cdot\text{diag}(\rho)$ has $k$ distinct, non-zero eigenvalues
by our assumption, Lemma 6.1 implies that the eigenvalues of
$\mathbb{E}[A(G,y)]$ are also distinct for sufficiently large $n$. The entrywise bounds for the eigenvectors in Corollary 4.2 hold for general $k$. The parameters (6.2) and (6.3) are chosen in such a way
that the approximating vector $\tau$ satisfies, with probability $1-o(1)$, for
all $v\in[n]$,
$\tau_{v}=\big{(}\log(P_{ri}),\log(1-P_{ri})\big{)}_{r\in[k]}\cdot d(v)+o(\log
n).$
The estimator described by (6.1) is constructed so that for
some $s,\tilde{s}\in\\{\pm 1\\}^{k}$, we have
$\hat{\sigma}(v;s,\tilde{s})=\operatornamewithlimits{argmax}_{i\in[k]}\tau_{v}.$
Corollary 2.11 implies that for this pair $(s,\tilde{s})$, we have
$\hat{\sigma}(v;s,\tilde{s})=\sigma_{0}(v)$ for all $v$ with high probability.
Finally, the correct pair $s,\tilde{s}$ is chosen in Step 7, by again
appealing to statistical achievability (Theorem 3.1). ∎
###### Remark 6.2.
We can simplify the algorithms by taking $A$ and $\tilde{A}$ without any
ternary encoding if both $P\cdot\text{diag}(\rho)$ and
$(J-P)\cdot\text{diag}(\rho)$ have $k$ distinct, non-zero eigenvalues. Indeed, define $A,\tilde{A}$ by
$\displaystyle A_{ij}=\begin{cases}1&\quad\text{if }\\{i,j\\}\text{ is
present}\\\ 0&\quad\text{if }\\{i,j\\}\text{ is absent or
censored}\end{cases}\quad\text{and}\quad\tilde{A}_{ij}=\begin{cases}1&\quad\text{if
}\\{i,j\\}\text{ is absent}\\\ 0&\quad\text{if }\\{i,j\\}\text{ is present or
censored.}\end{cases}$
We can simply set $\alpha_{ri}=\log(P_{ri})$ and
$\tilde{\alpha}_{ri}=\log(1-P_{ri})$, and choose $c_{i},\tilde{c}_{i}$
according to (6.3). With this choice, the estimator in (6.1) (optimized over the signs as in Step 7 of Algorithm 7) achieves exact recovery up to the information theoretic threshold.
Of course, such a simplification might not be possible for many choices of parameters. For example, in the two community case, we can take $\rho=1/2$, and $p_{1},p_{2},q$ such that $p_{1}p_{2}-q^{2}\neq 0$ but $(1-p_{1})(1-p_{2})-(1-q)^{2}=0$. One such choice is $p_{1}=\frac{23}{25},p_{2}=\frac{17}{25},q=\frac{21}{25}$: indeed, $(1-p_{1})(1-p_{2})=\frac{2}{25}\cdot\frac{8}{25}=\left(\frac{4}{25}\right)^{2}=(1-q)^{2}$, while $p_{1}p_{2}-q^{2}=-\frac{2}{25}\neq 0$.
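The arithmetic behind this example is quickly verified in exact rational arithmetic:

```python
from fractions import Fraction as F

p1, p2, q = F(23, 25), F(17, 25), F(21, 25)
print(p1 * p2 - q ** 2)                    # -2/25, nonzero
print((1 - p1) * (1 - p2) - (1 - q) ** 2)  # 0: (J-P)diag(rho) is singular
```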
Acronyms:
CBADC: control-bounded analog-to-digital converter
CTSDM (CT-$\Sigma\Delta$M): continuous-time sigma-delta modulator
CTSD: continuous-time sigma-delta
DTSDM: discrete-time sigma-delta modulator
CRFB: cascade of resonators with feedback
CRFF: cascade of resonators with feedforward
ENOB: effective number of bits
SNR: signal-to-noise ratio
SNDR: signal-to-noise-and-distortion ratio
SQNR: signal-to-quantization-noise ratio
PSD: power spectral density
NTF: noise transfer function
STF: signal transfer function
DC: digital control
AS: analog system
DE: digital estimator
DAC: digital-to-analog converter
ADC: analog-to-digital converter
FIR: finite impulse response
FS: full scale
OSR: oversampling ratio
OTA: operational transconductance amplifier
PLL: phase-locked loop
VCO: voltage-controlled oscillator
GBWP: gain-bandwidth product
LMS: least mean squares
RLS: recursive least squares
LMMSE: linear minimum mean square error
SFDR: spurious-free dynamic range
Calibrating Control-Bounded ADCs
Hampus Malmberg^1, Till Mettler^1, Thomas Burger^1, Fredrik Feyling^2, and Hans-Andrea Loeliger^1
Dept. of Information Technology & Electrical Engineering, ETH Zürich, Zürich, Switzerland^1
Dept. of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway^2
The paper considers the calibration of control-bounded analog-to-digital converters.
It is demonstrated that variations of the analog frontend can be addressed by calibrating the digital estimation filter.
In simulations (both behavioral and transistor level) of a leapfrog analog frontend, the proposed calibration method restores essentially the nominal performance.
Moreover, with digital-filter calibration in mind,
the paper reformulates the design problem of control-bounded converters and thereby clarifies the role of sampling, desired filter shape, and nominal conversion error.
Keywords: Control-Bounded Analog-to-Digital Conversion, Calibration, Component Variations, RLS
§ INTRODUCTION
Stabilizing an analog system using a digital control amounts to implicit analog-to-digital conversion.
This is the core idea behind the CBADC concept, first proposed in
[1] and further developed in [2] and [3]. The CBADC perspective allows more general analog frontends, i.e.,
analog system and digital control combinations, at the expense of a post-processing digital estimation step.
Specifically, it falls on the digital estimator to consolidate the joint effort of the digital control into an estimate
of the input signal fed into the analog system.
A CBADC analog frontend partially resembles a CTSDM [5]. However, in contrast to a CTSDM, the analog frontend of a CBADC may contain high-order filters while having guaranteed stable nominal operation. Moreover, the high-level design of a CBADC is a continuous-time filter design task, whereas CTSDMs are typically conceived as a discrete-time design converted into the continuous-time domain [6]. Crucially, the digital estimator's ability to estimate relies on its knowledge of the actual analog frontend. As reported in [7], this makes CBADCs particularly sensitive to component variations if not accounted for.
In this paper, we address this issue by demonstrating that the digital estimation filter can be calibrated to compensate for such variations.
§ THE LEAPFROG FRONTEND
[Fig. 1: circuit schematic omitted; an opamp-$RC$ leapfrog chain with amplifiers $A_1,\dots,A_N$, capacitors $C_\ell$, resistors $R_{\alpha_\ell}$, $R_{\beta_\ell}$, $R_{\kappa_\ell}$, comparators clocked at $f_s$, states $x_\ell(t)$, and control signals $s_\ell(t)$.]
Fig. 1: A single-ended circuit implementation of the $N$th order homogeneous LF analog frontend with two input signals: $u(.)$, the unknown signal to be converted, and $s_0(.)$, a known binary reference signal. All signals $u(.), s_0(.), \dots, s_N(.), x_1(.), \dots, x_N(.)$ are represented as voltages.
A single CBADC analog frontend is considered for calibration: the CBADC LF analog frontend, introduced in [2] and further investigated in [7]. A single-ended opamp-$RC$ based implementation of the $N$th order LF structure is shown in Fig. 1. Its basic functionality can be thought of as amplifying an input signal $u(.)$ through a chain of integrators, each parametrized by a time constant $R_{\beta_\ell}C_\ell$, while stabilizing the analog states $x_1(.),\dots,x_N(.)$ by local digital control signals $s_1(.),\dots,s_N(.)$. The analog frontend gets its name from the $R_{\alpha_\ell}$ feedback paths, which also introduce complex poles in the transfer function from $u(.)$ to the final state $x_N(.)$ (assuming $s_1(.)=0, \dots, s_N(.)=0$). This transfer function is of fundamental importance for the performance of a CBADC and will be referred to by its impulse response $g_u(.)$. The digital control is implemented as clocked comparators, generating the binary discrete-time control signals $s_\ell[.]$, which are fed back to the system through DACs with impulse responses $\theta_\ell(.)$ as
$s_{\ell}(t) \eqdef \sum_{k} s_{\ell}[k]\,\theta_{\ell}(t - kT).$ (1)
The final state can be written as
$x_{N}(t) = (g_{u} \ast u)(t) + \sum_{\ell=0}^{N} (g_{\ell} \ast s_{\ell})(t)$ (2)
where $g_{\ell}(.)$ denotes the impulse response of the transfer function from $s_{\ell}(.)$ to $x_{N}(.)$.
Finally, $s_0[k] \in \{\pm 1\}$ is a known binary discrete-time reference signal which will be used to calibrate the CBADC frontend.
This signal is randomly generated to produce a wideband spectrum, and the ratio $R_{\kappa_0}/R_{\kappa_1}$ is a tradeoff parameter
between the time needed for calibration and a reduced maximum input signal swing.
§ THE DIGITAL ESTIMATOR
The output of the converter is an estimate $\hat u[k]$ of a filtered and sampled version of $u(.)$, obtained by $N+1$ linear filters as
$\hat{u}[k] \eqdef \sum_{\ell=0}^{N} (h_{\ell} \ast s_{\ell})[k] + \hat{u}_{0}$ (3)
where the reference filter $h_0$ is a design choice and $h_1, \dots, h_N$, and the scalar offset $\hat{u}_{0}$,
will be subject to calibration.
As shown in the appendix on the signal estimate decomposition, $\hat{u}[.]$ can be decomposed into
$\hat{u}[k] = (h_{u} \ast u)(kT) + (\tilde{g}_{u} \ast x_{N})(kT) + e_{c}[k]$ (4)
where $h_{u}(.)$ is the impulse response of the desired STF. The second term in (4) is recognized as the nominal conversion error, where $\tilde{g}_{u}(.)$ is the impulse response of the NTF [2]. Using the NTF filter, the STF filter can be written as
$h_{u}(t) = -(\tilde{g}_{u} \ast g_{u})(t)$ (5)
with $g_{u}(t)$ as in (2). The last term in (4) is the calibration error
$e_{c}[k] \eqdef \sum_{\ell=1}^{N} \big((h_{\ell} - \breve{h}_{\ell}) \ast s_{\ell}\big)[k],$ (6)
where
$\breve{h}_{\ell}[k] \eqdef (\tilde{g}_{u} \ast g_{\ell} \ast \theta_{\ell})(kT)$ (7)
are the filter coefficients minimizing the cost function (9), as will be further described in the adaptive filtering subsection below.
The fundamental step in going from (3) to (4) is connecting the desired continuous-time STF filter and the discrete-time reference filter as
$h_{0}[k] \eqdef \breve{h}_{0}[k] \propto -(h_{u} \ast \theta_{0})(kT)$ (8)
where the proportionality relation is due to the fact that $g_{u}(t) \propto g_{0}(t)$.
The STF filter is a digital design choice that is not subject to calibration. Moreover, the role of sampling, in connection with the STF filter, is emphasized in (4), which opposes the conventional view that sampling is the work of the comparators in Fig. 1. Despite the continuous-time nature of the STF filter, it will be enforced by the digital estimator in the discrete-time domain, see (8). Therefore, only samples from its impulse response $h_u(.)$ will be required for estimation and calibration. Furthermore, as the NTF filter is implicitly defined by the circuit implementation, i.e., by $g_u(.)$ together with the choice of STF filter, see (5), it is also not subject to calibration.
In summary, (3) and (4) reveal the fundamental idea of the control-bounded ADC concept in two equations: a filtered and sampled version of the input signal, $\left(h_u \ast u \right)(k T)$, can be computed by a discrete-time convolution of the control signals, (3), where the fundamental performance of such an estimate is dictated by the analog frontend's ability to both amplify $u(.)$ and bound the magnitude of the last state $x_N(.)$.
The calibration error $e_c[.]$ can be made arbitrarily small by conventional calibration techniques as will be shown next.
§.§ Calibration
In the case of no input, i.e., $u(.) = 0$, the impulse responses $\breve{h}_\ell[.]$ from (7) are recognized (see the appendix on the Wiener filter) as the filter coefficients $h_1[.], \dots, h_N[.]$ minimizing the cost function
$\EE{\big|(h_{0} \ast s_{0})[.] + \sum_{\ell=1}^{N} (h_{\ell} \ast s_{\ell})[.] + \hat{u}_{0}\big|^{2}}\Big|_{u(.)=0}$ (9)
where $s_0[.], \dots, s_N[.]$ are assumed to be zero-mean weakly stationary processes, jointly independent of the assumed zero-mean stationary process $(\tilde{g}_u \ast x_N)(kT)$, and $\EE{\cdot}$ denotes expectation.
This implies that the decomposition in (4) can be enforced, and the calibration error term made to vanish, by estimating $h_{1}, \dots, h_{N}$ from the discrete-time sequences $s_{0}[.], \dots, s_{N}[.]$ alone. In particular, prior knowledge of any of the analog quantities $g_{u}(.)$, $g_{1}(.), \dots, g_{N}(.)$, and $\tilde{g}_{u}(.)$ is immaterial for such calibration.
From here on, all filters $h_\ell[.]$ are chosen as FIR filters with equal filter length $K=512$. Additionally, the reference filter $h_0[.]$ is determined by standard FIR filter design methods ($-3$ dB gain at the bandwidth edge) and its coefficients scaled by $-R_{\kappa_0}/R_{\beta_1}$, while all other $h_\ell[.]$ are initialized with all-zero coefficients.
These choices of filter length and filter shape are mainly motivated by simplicity and consistency of notation.
Clearly, for a given application, better reference filter design choices could yield superior filtering performance at reduced complexity.
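As an illustration, a plausible construction of such a reference filter in Python; the use of scipy.signal.firwin2, the choice of stopband point, and the resistor-ratio value are our assumptions, not the paper's exact design:

```python
import numpy as np
from scipy.signal import firwin2

K = 512                  # taps, as in the paper
fs = 194.7e6             # comparator clock
bw = 10e6                # signal bandwidth
f_edge = bw / (fs / 2)   # band edge on the Nyquist-normalized axis

# Piecewise-linear target magnitude: unity in-band, -3 dB at the band
# edge, zero in the (assumed) stopband.
freqs = [0.0, f_edge, min(1.5 * f_edge, 1.0), 1.0]
gains = [1.0, 10 ** (-3 / 20), 0.0, 0.0]
h0 = firwin2(K, freqs, gains)

R_k0_over_R_b1 = 10.0            # hypothetical ratio R_kappa0 / R_beta1
h0 = -R_k0_over_R_b1 * h0        # scale coefficients by -R_kappa0/R_beta1
```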
§.§.§ Adaptive Filtering
[Fig. 2: block diagram omitted.]
Fig. 2: Adaptive filtering setup for estimating $h_1, \dots, h_N$ using a known reference signal $s_0[.]$ and a fixed reference filter $h_0$.
One way of estimating $h_1,\dots,h_N$ from (9) is an adaptive filtering scheme as shown in Fig. 2.
In this work, only frontend calibration, i.e., $u(.)=0$ during calibration, is considered. However, we conjecture that a modified version of the scheme in Fig. 2 could carry the content of this paper over to a background-calibration setting; this, however, falls outside the scope of this paper.
For the adaptive filter in Fig. 2, variations of the LMS or RLS algorithms [8] can directly be applied. An advantage of the LMS algorithm is its low complexity, as it only uses the gradient of (9), i.e., $2\EE{\vct{s}_{\ell}[.] \hat{u}[.]}$, in its update rule, where
$\vct{s}_{\ell}[k] \eqdef \begin{pmatrix}s_{\ell}[k-K/2], \dots, s_{\ell}[k+K/2]\end{pmatrix}^{\T} \in \\{\pm 1\\}^{K}.$ (10)
To decrease the number of calibration iterations, at the expense of additional computational and memory complexity,
the RLS algorithm [8, 9] is a viable alternative.
For simplicity, only the RLS algorithm was considered in this work.
The RLS algorithm minimizes the cost function
$\min_{\vct{h}}\ \sum_{k=0}^{K_{i}} \lambda^{K_{i}-k}\,\big|\hat{u}[k]\big|^{2} + \lambda^{K_{i}}\delta\|\vct{h}\|_{2}^{2},$ (11)
for $0 < \lambda \leq 1$ and $\delta \geq 0$, which converges to (9) for $\lambda < 1$ as the number of calibration iterations $K_i \to \infty$.
The algorithm proceeds recursively as
$\begin{split}\vct{\alpha}_{k} &= \vct{V}^{k-1}\vct{s}[k]\\\ \vct{g}_{k} &= \vct{\alpha}_{k}\big(\lambda + \vct{s}[k]^{\T}\vct{\alpha}_{k}\big)^{-1}\\\ \vct{V}^{k} &= \frac{1}{\lambda}\big(\vct{V}^{k-1} - \vct{g}_{k}\vct{\alpha}_{k}^{\T}\big)\\\ \vct{h}^{k} &= \vct{h}^{k-1} - \hat{u}[k]\,\vct{g}_{k}\end{split}$ (12)
where
$\vct{s}[.] \eqdef \begin{pmatrix}\vct{s}_{1}[.]^{\T}, \dots, \vct{s}_{N}[.]^{\T}, 1\end{pmatrix}^{\T} \in \\{\pm 1\\}^{K_{\Sigma}},$ (13)
$\vct{h}[.] \eqdef \begin{pmatrix}h_{1}[.]^{\T}, \dots, h_{N}[.]^{\T}, \hat{u}_{0}\end{pmatrix}^{\T} \in \R^{K_{\Sigma}},$ (14)
$K_{\Sigma} = N K + 1$, $\vct{\alpha}_{k}\in \R^{K_{\Sigma}}$, $\vct{g}_{k}\in \R^{K_{\Sigma}}$, and $\vct{V}^{k} \in \R^{K_{\Sigma} \times K_{\Sigma}}$ is a symmetric matrix.
The regularization in (11) is enforced by initializing the matrix $\vct{V}^{0} = \delta^{-1} \vct{I}_{K_{\Sigma}}$ where $\vct{I}_{K_{\Sigma}}$ is the $K_{\Sigma}$-by-$K_{\Sigma}$ identity matrix.
Throughout this paper, $\lambda = 1.0 - 10^{-12}$ and $\delta = 0.01$ for all simulations.
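As a concrete illustration, a toy-sized sketch of this recursion in Python; the synthetic stand-in signals and the reduced dimensions $N=3$, $K=32$ are ours (the paper uses $N=6$, $K=512$):

```python
import numpy as np

# Toy-sized RLS calibration loop mirroring (12)-(14); all signals here
# are synthetic stand-ins, not circuit data.
N, K = 3, 32                      # reduced from the paper's N=6, K=512
K_sigma = N * K + 1
lam, delta = 1.0 - 1e-12, 0.01

rng = np.random.default_rng(0)
n_samples = 2000
controls = rng.choice([-1.0, 1.0], size=(N, n_samples))  # stand-in s_1..s_N
h0_s0 = 0.01 * rng.standard_normal(n_samples)            # stand-in (h0 * s0)[k]

h = np.zeros(K_sigma)             # h = (h_1, ..., h_N, u_hat_0), cf. (14)
V = np.eye(K_sigma) / delta       # V^0 = I / delta

for k in range(K // 2, n_samples - K // 2):
    # Stacked regressor s[k] = (s_1[k]^T, ..., s_N[k]^T, 1)^T, cf. (13).
    s = np.append(controls[:, k - K // 2 : k + K // 2].ravel(), 1.0)
    u_hat = h0_s0[k] + h @ s      # current estimate u^[k], cf. (3)
    alpha = V @ s
    g = alpha / (lam + s @ alpha)
    V = (V - np.outer(g, alpha)) / lam
    h = h - u_hat * g             # last line of (12)
```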
§ SIMULATION RESULTS
The circuit from Fig. 1 is parameterized, in accordance with [7], for a nominal performance of 76.5 dB SNR at 10 MHz signal bandwidth and a system order of $N=6$. Specifically, the time constants are chosen as follows: $R_{\alpha_{\ell}}C_{\ell} = 98.63$ ns, $R_{\beta_{\ell}}C_\ell = 10.30$ ns, and $R_{\kappa_\ell}C_\ell=4 R_{\beta_{\ell}}C_\ell$ for every $\ell$, with the exception of $R_{\beta_1}C_1 = 2 R_{\beta_{\ell \neq 1}} C_{\ell \neq 1}$.
This results in a maximum input signal swing that is twice that of the maximum state swing and half that of the maximum control
signal swing.
The comparators in Fig. 1 are clocked
at $f_s = 194.7$ MHz resulting in an OSR of approximately $10$.
§.§ Behavioral Simulation
A component variation scenario is considered
where all time constants
$(R_{\kappa_0} C_1)$, $(R_{\alpha_\ell} C_\ell)$, $(R_{\beta_\ell} C_\ell)$, and $(R_{\kappa_\ell}C_\ell)$
are subject to variations from their nominal values.
Ideal amplifiers, comparators, and passive components were modeled in Verilog-A and, using the Spectre simulator, 128 Monte Carlo simulations were conducted, where each of the mentioned time constants is drawn independently and uniformly at random within $\pm 10\%$ of its nominal value. Every sample circuit is simulated with (testing) and without (training) a sinusoidal test input signal, and the calibration is then conducted on the latter dataset and validated on the former.
The test signal has a frequency of $f_s / 2^8 \approx 760$ kHz and an amplitude of $-1$ dBFS to avoid potential
instability due to the component variations. We use $R_{\kappa_1} / R_{\kappa_0} = 0.1$ and the Wiener filter
from [3] (with $\eta^2$ chosen as (22) in [3]), with the true time constants,
as a reference for the calibration results.
[Fig. 3: plot omitted; x-axis: number of calibration iterations $K_i$ (from $2^{10}$ to $2^{16}$), y-axis: testing SNR [dB]; curves: average uncalibrated filter, average reference Wiener filter, average calibrated filter, with shaded min/max ranges.]
Fig. 3: The testing error performance for the uncalibrated and reference case as well as the RLS algorithm's convergence behaviour as a function of the number of calibration iterations. The colored areas, associated with each case, represent the range of testing performance over all 128 sample circuits.
Fig. 3 shows the average testing performance for the uncalibrated Wiener filter (with nominal filter coefficients), the reference Wiener filter (using true filter coefficients), and the RLS algorithm's testing performance as a function of the number of calibration iterations. As stated in [7], without calibration, the digital estimator suffers significant performance degradation due to the mismatch between the digital estimator and the assumed-nominal analog frontend parametrization. Clearly, the mismatch can be managed, as the RLS algorithm reaches the reference filter performance after $\approx 2^{12}$ iterations.
The leapfrog structure is known for its resilience towards component variations,
which is confirmed by the (average, maximum, minimum) SNR performance of $(80.70,83.51,77.68)$ and $(75.45,78.38,71.34)$ dB
for the calibrated filter and reference Wiener filter respectively.
Fig. 4 shows the PSD of the estimate (3) for a representative sample from the testing dataset.
[Fig. 4: plot omitted; x-axis: frequency [Hz], y-axis: PSD [dB]; curves: uncalibrated filter, Wiener filter reference, calibrated filter.]
Fig. 4: A representative PSD of the estimate (3) from one of the realizations of the behavioral simulations.
The discrepancy around the band-edge frequencies between the Wiener reference and the calibrated filter is due to $h_0[.]$ being designed as a standard FIR filter and not as the Wiener filter from [3]. As the SNR is determined directly from the PSD, this also explains why the calibrated filter outperforms the reference filter in Fig. 3.
§.§ Transistor Level Simulation
A transistor level implementation of the circuit in Fig. 1 was designed and simulated with Spectre in 65-nm CMOS technology. The specifics are as follows: 1.2 V power supply, $C_1=11.7$ pF, $C_2=C_3=2.925$ pF, and $C_4=C_5=C_6=975$ fF,
an input signal amplitude range of $600$ mVpp and $R_{\kappa_1} / R_{\kappa_0} = 0.03$.
The comparators have an input referred noise of $130$ $\mu$Vrms which was achieved by extending the decision time to a quarter of the control period.
Stability issues, due to the long decision time, were addressed by ternary comparators with decision thresholds at $\pm50$ mV.
The ternary comparators result in a reduced maximum state swing, by approximately $30\%$, which translates into an $\approx3$ dB SNR gain, see (4). The amplifiers are class AB two-stage RC-compensated amplifiers as in [10], with a simulated input-referred noise of $67.03$ $\mu$Vrms.
They are non-uniformly downsized to somewhat match the diminishing thermal noise and linearity requirements.
The approximate scaling can be seen from the resulting
simulated power consumption as $P_{A_1}=8.5$ mW, $P_{A_2} = P_{A_3}=2.3$ mW, and $P_{A_4}=\dots=P_{A_6}=0.9$ mW.
[Fig. 5: plot omitted; x-axis: frequency [Hz], y-axis: PSD [dB]; curves: uncalibrated filter, calibrated filter; the third harmonic is annotated at $\approx97$ dB below the signal.]
Fig. 5: Uncalibrated (61.7 dB SNDR) and calibrated (80.9 dB SNDR) PSDs of the estimate (3) from the transistor level simulation. From the figure we also recognize a small non-linearity resulting in an SFDR of $\approx91$ dB.
In Fig. 5, both the calibrated and uncalibrated PSDs of the estimate (3) are shown for an FS input signal at $f_s/2^7 \approx 1.5$ MHz. An SNDR of $80.9$ dB (calibrated) and $61.7$ dB (uncalibrated) and an SFDR of $\approx97$ dB are estimated directly from the PSD. The implementation has a simulated total power consumption of $18.43$ mW, resulting in a predicted Schreier FoM of 168.24 dB and a Walden FoM of 101.6 fJ/conv.
§ CONCLUSIONS
Both theoretical and practical aspects of calibrating CBADCs were addressed. The proposed calibration algorithm was validated using both behavioral and transistor level simulations, confirming recovery of essentially the nominal performance.
The introduction of calibration to the CBADC concept splits the CBADC design task into two disjoint parts:
an analog frontend where the fundamental performance follows from analog design parameters and
a digital calibration step that is agnostic to the analog implementation.
§.§ The Signal Estimate Decomposition
The discrete-time convolution can be written as
$\begin{split}\sum_{k_{1}} s_{\ell}[k_{1}]\,\breve{h}_{\ell}[k-k_{1}] &= \sum_{k_{1}} s_{\ell}[k_{1}]\,(\tilde{g}_{u} \ast g_{\ell} \ast \theta_{\ell})((k-k_{1})T)\\\ &= \int(\tilde{g}_{u} \ast g_{\ell})(\tau) \sum_{k_{1}} s_{\ell}[k_{1}]\,\theta_{\ell}(kT - \tau - k_{1}T)\,d\tau\\\ &= \int(\tilde{g}_{u} \ast g_{\ell})(\tau)\, s_{\ell}(kT - \tau)\,d\tau\\\ &= (\tilde{g}_{u} \ast g_{\ell} \ast s_{\ell})(kT)\end{split}$ (15)
where the first and third equalities follow from the definitions (7) and (1), respectively.
Consequently,
$(\tilde{g}_{u} \ast g_{0} \ast s_{0})(kT) = \big(\tilde{g}_{u} \ast (x_{N} - (g_{u} \ast u))\big)(kT) - \sum_{\ell=1}^{N} (\breve{h}_{\ell} \ast s_{\ell})[k]$ (16)
where (16) follows from (2) and from reversing the steps in (15) for $(\tilde{g}_{u} \ast g_{\ell} \ast s_{\ell})(kT)$. Replacing $(h_{0} \ast s_{0})[k]$ with (16) in (3) and rearranging results in (4).
§.§ The Wiener Filter
Assuming $\hat{s}[.] \eqdef (h_0 \ast s_0)[.]$, $\vct{s}[.]$,
and $w[k] \eqdef -(\tilde{g}_u \ast x_N)(kT)$
to be zero-mean weakly stationary processes and $w[.]$ to be jointly independent of $\vct{s}[.]$,
the minimizing solution to the cost function in (9), i.e., minimizing $\EE{\left|\hat{s}[.] + (\vct{h}^\T \ast \vct{s})[.]\right|^2}$ with respect to $\vct{h}[.]$ (see (13) and (14)), follows by the orthogonality principle as
$(\vct{h}^{\T} \ast \mat{R}_{\vct{s}})[.] = \vct{R}_{\hat{s}\vct{s}}[.],$ (17)
where $\mat{R}_{\vct{s}}[.] \eqdef \EE{(\vct{s} \ast \vct{s}_c^\T)[.]}$ and $\vct{s}_{c}[.] \eqdef \vct{s}[-.]$.
Convolving both the left-hand and right-hand side of (2) with $\tilde{g}_u(t)$, enforcing $u(t) = 0$, reversing the steps of (15), and finally rearranging gives $\hat{s}[.] = (\vct{g}^\T \ast \vct{s})[.] + w[.]$ where $\vct{g}[k] \eqdef \begin{pmatrix}(\tilde{g}_{u} \ast g_{1} \ast \theta_{1})(kT), \dots, (\tilde{g}_{u} \ast g_{N} \ast \theta_{N})(kT)\end{pmatrix}^\T$. Consequently,
$\vct{R}_{\hat{s}\vct{s}}[.] \eqdef \EE{(\hat{s} \ast \vct{s}^{\T}_{c})[.]} = (\vct{g}^{\T} \ast \mat{R}_{\vct{s}})[.]$ (18)
and plugging (18) into (17), assuming a full-rank $\mat{R}_{\vct{s}}$, results in $h_{\ell}[.]$ as given by $\breve{h}_{\ell}[.]$ in (7).
§ REFERENCES
[1] H.-A. Loeliger and G. Wilckens, "Control-based analog-to-digital conversion without sampling and quantization," in 2015 Information Theory & Applications Workshop (ITA), San Diego, CA, pp. 119-122.
[2] H. Malmberg, Control-Bounded Converters, Ph.D. dissertation no. 27025, ETH Zürich, 2020.
[3] H. Malmberg, G. Wilckens, and H.-A. Loeliger, "Control-bounded analog-to-digital conversion," Circuits, Syst. Signal Process., vol. 41, no. 3, pp. 1223-1254, Mar. 2022.
[5] S. Pavan, R. Schreier, and G. C. Temes, Understanding Delta-Sigma Data Converters, 2nd ed., Piscataway, NJ: Wiley-IEEE Press, 2017.
[6] J. A. Cherry and M. W. Snelgrove, Continuous-Time Delta-Sigma Modulators for High-Speed A/D Conversion, 1st ed., Boston, MA: Springer, 2002.
[7] F. Feyling, H. Malmberg, C. Wulff, H.-A. Loeliger, and T. Ytterdal, "High-level comparison of control-bounded A/D converters and continuous-time sigma-delta modulators," in Nordic Circuits and Systems Conference (NORCAS), Oslo, pp. xx, Oct. 2022.
[8] S. Haykin, Adaptive Filter Theory, 4th ed., Upper Saddle River, NJ: Prentice Hall, 2002.
[9] H.-A. Loeliger, J. Dauwels, J. Hu, S. Korl, L. Ping, and F. R. Kschischang, "The factor graph approach to model-based signal processing," Proceedings of the IEEE, vol. 95, no. 6, pp. 1295-1322, June 2007.
[10] J. Ramirez-Angulo, R. G. Carvajal, J. A. Galan, and A. Lopez-Martin, "A free but efficient class AB two-stage operational amplifier," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 53, pp. 568-571, August 2006.
|
# The Symmetric Coalescent and Wright-Fisher models with bottlenecks
Adrián González Casanova, Verónica Miró Pina, Arno Siri-Jégousse
_Corresponding author:_<EMAIL_ADDRESS>
###### Abstract
We define a new class of $\Xi$-coalescents characterized by a possibly
infinite measure over the non negative integers. We call them symmetric
coalescents since they are the unique family of exchangeable coalescents
satisfying a symmetry property on their coagulation rates: they are invariant
under any transformation that consists of moving one element from one block to
another without changing the total number of blocks. We illustrate the
diversity of behaviors of this family of processes by introducing and studying
a one parameter subclass, the $(\beta,S)$-coalescents. We also embed this
family in a larger class of $\Xi$-coalescents arising as the limit genealogies
of Wright-Fisher models with bottlenecks. Some convergence results rely on a
new Skorokhod type metric that induces the Meyer-Zheng topology, which allows
us to study the scaling limit of non-Markovian processes using standard
techniques.
## 1 Introduction
### 1.1 Wright-Fisher models with demographic bottlenecks
Since it was proposed in 1982, the Kingman coalescent [18] has become a key
tool in population genetics. It can describe the limit genealogy of classical
models such as the Wright-Fisher and the Moran model. It has proven to be
robust to modifications of these models’ assumptions (such as constant
population size or random mating) and thus arises as the genealogy of a broad
class of population models. However, it does not model well genealogies from
certain populations, e.g. with a skewed offspring distribution, which are
captured by coalescents with (simultaneous) multiple collisions [25].
Modeling populations with varying population size has been of great interest,
for example to infer the human population history [19, 34]. Several variations
of the Wright-Fisher model with fluctuating population size have been studied,
in which the population size changes but remains of order $N$. It has been
shown that in many cases, the genealogy converges to a continuous time-
rescaling of the Kingman coalescent (see for example [14, 16, 15]). More
recently, Freund [11] studied the case of Cannings models with highly variable offspring numbers (whose genealogy is usually described by a
$\Lambda$-coalescent) in which the population size fluctuates (but remains of
the order of $N$) and has shown that the genealogy converges to a time-
rescaled $\Lambda$-coalescent.
In Section 6.1 of [5], Birkner et al. consider a population undergoing
recurrent demographic bottlenecks. We call a bottleneck an event that reduces
substantially the population size and that may last for one or several
generations. They suggest that the genealogy is described by a discontinuous
time-rescaling of the Kingman coalescent, more precisely a Kingman coalescent
where time is rescaled by a subordinator, and which is in fact a coalescent
with simultaneous multiple collisions. But they only consider the case where
the population size during the bottleneck is small compared to $N$ but still
tends to infinity as $N\to\infty$.
To our knowledge, the case of drastic fluctuations, in which the population
size during the bottleneck does not tend to infinity as $N\to\infty$, has not
been studied yet. In this article, we are going to study different types of
bottlenecks, with different scalings for the population sizes inside and
outside the bottleneck, and different lengths. We will establish a
classification of the limiting genealogies obtained in the different settings
and give some intuitions on how to relate the different processes. To do so,
we define a class of models that can be called Wright-Fisher models with
demographic bottlenecks.
###### Definition 1.1 (The Wright-Fisher model with bottlenecks).
The Wright-Fisher model with bottlenecks (parametrized by $N\in\mathbb{N}$)
has varying population size, which is given by a sequence of random variables
$\\{R^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ taking values in $[N]=\\{1,\dots,N\\}$.
It is the random graph $(V,E)$ where
$V=\\{(i,g):i\in[R_{g}^{N}],g\in\mathbb{Z}_{+}\\}$, each individual $(i,g)\in
V$ chooses her parent uniformly amongst the $R_{g-1}^{N}$ individuals of
generation $g-1$, and the set of edges is $E:=\\{((j,g-1)(i,g)):(j,g-1)$ is
the parent of $(i,g),g\in\mathbb{Z}_{+}\\}$.
The case $\mathbb{P}(R^{N}_{g}=N)=1,\forall g\in\mathbb{Z}_{+}$, is the
classical Wright-Fisher model and it is well known that, when the time is
rescaled by $N$ and $N\to\infty$, the genealogy of a sample of $n$ individuals
is described by the Kingman coalescent. We are led to ask ourselves under
which conditions on $\\{R^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ does the genealogy
still converge to a Kingman coalescent, and if it does not, what type of
coalescents describe the genealogy of a population that has undergone
bottlenecks.
We are going to study different types of Wright-Fisher models with
bottlenecks, with different types of laws for the sequence
$\\{R^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$. Inspired by [5] (Section 6, p. 57), we
are going to describe the demographic history of the population by three
random sequences of i.i.d. positive real numbers: $\\{s_{i,N}\\}_{i\in\mathbb{N}}$, $\\{l_{i,N}\\}_{i\in\mathbb{N}}$ and $\\{b_{i,N}\\}_{i\in\mathbb{N}}$. The sequence of
population sizes $\\{R^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ is given by
$R^{N}_{g}\ =\ \left\\{\begin{array}[]{ll}b_{m,N}N\textrm{ if
}\sum_{i=1}^{m-1}(s_{i,N}+l_{i,N})+s_{m,N}<g\leq\sum_{i=1}^{m}(s_{i,N}+l_{i,N})\\\
N\textrm{ otherwise.}\end{array}\right.$
This means that the population size stays at $N$ for $s_{i,N}$ generations and
then it is reduced to $b_{i,N}N$ for $l_{i,N}$ generations. At the end of the
bottleneck, the population reaches $N$ again and stays there until the next
event. Note that we have assumed that the decline and the re-growth of the
population size are instantaneous.
We call $b_{i,N}$ the intensity of the $i$-th bottleneck and we distinguish
between:
* •
Soft bottlenecks, where $b_{i,N}\to 0$ in distribution but $Nb_{i,N}\to\infty$
in distribution i.e. the population size during the bottleneck is small
compared to $N$, but still large in absolute numbers.
* •
Drastic bottlenecks, where $b_{i,N}\to 0$ in distribution and
$Nb_{i,N}<\infty$ in distribution (as $N\to\infty$) i.e. the population size
during the bottleneck is very small compared to $N$, and remains finite in the
limiting scenario, when the population size outside the bottlenecks is
infinite.
We call $l_{i,N}$ the duration of the $i$-th bottleneck. We will distinguish
between short bottlenecks, that last for only one generation and long
bottlenecks that last for several generations. In both cases, we will assume
that there exists $\alpha\in(0,1]$ such that $l_{i,N}N^{\alpha}\to 0$ as
$N\to\infty$, in distribution, i.e. when time is re-scaled by $N^{\alpha}$ the
duration of the bottleneck is negligible. Finally, we call $s_{i,N}$ the
periodicity of the bottlenecks and again, we distinguish between frequent
bottlenecks when, $s_{i,N}/N\to 0$ in distribution as $N\to\infty$ and rare
bottlenecks otherwise.
As we shall see, coalescents with simultaneous multiple collisions arise as
the limiting genealogies for the Wright-Fisher model with bottlenecks. In the
case of short drastic bottlenecks, the genealogies are described by a new
family of coalescents that we define and study.
### 1.2 A new family of $\Xi$-coalescents
Coalescents with simultaneous multiple collisions ($\Xi$-coalescents, [30, 25,
2, 3]) form the widest class of exchangeable coagulating Markov chains with
values in the set of partitions of $\mathbb{N}$. Mathematically, they give a
nice connection with de Finetti’s representation of exchangeable partitions
and they exhibit a rich variety of behaviors. Biologically, they describe the
genealogy of a large class of population models and their study provides some
statistical tools for inference. Schweinsberg [30] showed that any
exchangeable coalescent is characterized by a finite measure $\Xi$ on the
ranked infinite simplex
$\Delta=\\{\mathbf{\zeta}=(\zeta_{1},\zeta_{2},\dots),\
\zeta_{1}\geq\zeta_{2}\geq\dots\geq 0,\ \sum_{i=1}^{\infty}\zeta_{i}\leq
1\\}.$
Its dynamics are described as follows. We decompose $\Xi$ into a ‘Kingman
part’ and a ‘simultaneous multiple collisions’ part, i.e.
$\Xi=a\delta_{(0,0,\dots)}+\Xi^{0}$ with $a\in[0,\infty)$ and
$\Xi^{0}(\\{(0,0,\dots)\\})=0$. A $[b,(k_{1},\dots,k_{r}),s]$-collision is a
merger of $b$ blocks into $r$ new blocks and $s$ unchanged blocks. Each new
block contains $k_{1},\dots,k_{r}\geq 2$ original blocks, so that
$\sum_{i=1}^{r}k_{i}=b-s$. Note that the order of the $k_{1},\dots,k_{r}$ does
not matter. Each $[b,(k_{1},\dots,k_{r}),s]$-collision happens at some fixed
rate
$\lambda_{b,(k_{1},\dots,k_{r}),s}=\ a\
\mathds{1}_{\\{r=1,s=b-1\\}}+\int_{\Delta}\sum_{l=0}^{s}\binom{s}{l}(1-\sum_{i\geq
1}\zeta_{i})^{s-l}\sum_{i_{1}\neq\dots\neq
i_{r+l}}\zeta_{i_{1}}^{k_{1}}\dots\zeta_{i_{r}}^{k_{r}}\zeta_{i_{r+1}}\dots\zeta_{i_{r+l}}\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}$
(1.1)
where $(\zeta,\zeta):=\sum_{i\geq 1}\zeta_{i}^{2}$. This complicated formula results from colliding original blocks according to a Kingman paintbox
associated with a partition $\zeta$ drawn from the $\sigma$-finite measure
$\Xi^{0}(d\zeta)/(\zeta,\zeta)$. This complexity justifies the necessity to
consider subclasses of exchangeable coalescents that are easier to study.
When $\Xi$ only puts weight on the subset of mass-partitions having only one
positive element, formula (1.1) reduces considerably as there is now no
possibility to obtain simultaneous collisions. The resulting subclass of
exchangeable coalescents is that of coalescents with multiple collisions
($\Lambda$-coalescents, [27, 28]). Their elegant theory, built around their
one to one correspondence with finite measures in $[0,1]$, turned this family
into the most studied class of coalescent processes for twenty years now. In
particular, $Beta$-coalescents [31] provide a one-parameter family of
$\Lambda$-coalescents, more convenient to study and better calibrated for
statistical applications in population genetics. This model is now validated
by the biological community [10, 33, 26].
However, we find fewer results about $\Xi$-coalescents in the literature and
their applications in biology are rarer. We can yet cite the Poisson-Dirichlet
coalescent [29, 24] and the Beta-Xi family [4, 6] that provide promising
models. The reason for this is probably the difficulty arising from the
complex formulation of (1.1). A first step to simplify it is to consider a
measure $\Xi$ on
$\Delta^{*}=\\{\mathbf{\zeta}=(\zeta_{1},\zeta_{2},\dots),\
\zeta_{1}\geq\zeta_{2}\geq\dots\geq 0,\ \sum_{i=1}^{\infty}\zeta_{i}=1\\}.$
In this case the transition rates simplify. We can now consider
$[b,(k_{1},\dots,k_{r})]$-collisions where $b$ blocks merge into $r$ blocks,
each one containing $k_{1},\dots,k_{r}\geq 1$ original blocks. Each
$[b,(k_{1},\dots,k_{r})]$-collision happens at some fixed rate
$\lambda_{b,(k_{1},\dots,k_{r})}=\ a\
\mathds{1}_{\\{r=b-1,k_{1}=2\\}}+\int_{\Delta^{*}}\sum_{i_{1}\neq\dots\neq
i_{r}}\zeta_{i_{1}}^{k_{1}}\dots\zeta_{i_{r}}^{k_{r}}\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}.$
(1.2)
In this paper we describe a simple family of $\Xi$-coalescents: the symmetric
coalescent, which arises as the limiting genealogy for Wright-Fisher models
with short drastic bottlenecks. The reason for its name is that the
distribution of the tree obtained from a symmetric coalescent is invariant
under the transformation that involves cutting one branch from one node and
pasting it somewhere else in the tree, at the same height (see Figure 1 for an
illustration). In other words, as a partition-valued process, the symmetric
coalescent is invariant under the transformation that consists of displacing
one element from one block to another (without changing the number of non-
empty blocks).
Figure 1: In the symmetric coalescent these three labelled trees have the same
probability. The second one is obtained from the first one by cutting and
pasting the purple branch to a different node. The third one can be obtained
from the second one by displacing the blue branch from one position to
another. In the $S$-coalescent,
$\lambda_{6,(4,1,1)}=\lambda_{6,(2,3,1)}=\lambda_{6,(2,2,2)}$.
###### Definition 1.2.
The symmetric coalescents are the exchangeable coalescents whose transition
rates satisfy the following symmetry property: for every $b>1$, $2\leq r<b$
and for every $k_{1},\ldots,k_{r}$ and $k^{\prime}_{1},\ldots,k^{\prime}_{r}$
such that $\sum_{i=1}^{r}k_{i}=\sum_{i=1}^{r}k^{\prime}_{i}=b$,
$\lambda_{b,(k_{1},\dots,k_{r})}=\lambda_{b,(k^{\prime}_{1},\dots,k^{\prime}_{r})}.$
In the sequel we will consider the symmetric elements of $\Delta^{*}$. Let
$\xi^{0}:=(0,0,\dots)$ and, for $k\in\mathbb{N}$,
$\xi^{k}:=(\frac{1}{k},\dots,\frac{1}{k},0,\dots)$
and we denote the set of the symmetric elements of $\Delta^{*}$ by
$\Delta^{sym}:=\\{\xi^{k},k\in\mathbb{N}_{0}\\}$. Our first result establishes
a correspondence between symmetric coalescents and measures on a simple set,
being here $\mathbb{Z}_{+}:=\mathbb{N}\cup\\{0\\}.$
###### Theorem 1.3.
A coalescent is symmetric if and only if there exists a measure $F$ on
$\mathbb{Z}_{+}$ such that
$F(0)<\infty\ \textrm{ and }\ \sum_{k\geq 1}\frac{F(k)}{k}<\infty$ (1.3)
and such that its characterizing measure $S$ on $\Delta$ only puts weight on
$\Delta^{sym}$ and
$S(\xi^{k})\ =\ \left\\{\begin{array}[]{ll}\frac{F(k)}{k}&\textrm{ if
}k\in\mathbb{N}\\\ F(0)&\textrm{ if }k=0\end{array}\right..$ (1.4)
Condition (1.3) ensures that the measure $S$ is finite which is a necessary
and sufficient condition for a $\Xi$-coalescent to be well defined. Observe
that for $k\in\mathbb{N}$, $(\xi^{k},\xi^{k})=\frac{1}{k}$, so the rate of a
$k$-merger is $F(k)$. Mimicking the common notation, we will speak of $S$-coalescents.
Before going into further detail, we start by recalling a useful tool, which
is Kingman’s paintbox construction of $\Xi$-coalescents ([18]). We will only
discuss the case when $\Xi$ is concentrated on $\Delta^{*}$. Each element in
$\Delta^{*}$ can be seen as a tiling of (0,1), where the sizes of the
subintervals are $\zeta_{1},\zeta_{2},\dots$. The $\Xi$-coalescent can be
constructed as follows: when there are $b$ blocks, for every
$\zeta\in\Delta^{*}$, at rate $\Xi(d\zeta)/(\zeta,\zeta)$, we choose the
tiling associated with $\zeta$, then we throw $b$ uniform random variables in
$(0,1)$, each one associated with one block, and all blocks within one
subinterval merge. In the case of the symmetric coalescent, the paintbox
construction can be reformulated as follows: when there are $b$ blocks, at
rate $F(k)$, we distribute $b$ balls into $k$ boxes and blocks corresponding
to balls that are in the same box merge. For more details we refer the reader
to the first chapter of [1]. This construction allows us to obtain a nice
explicit formula for the transition rates.
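For intuition, one transition of this paintbox dynamics takes a few lines of Python (a sketch; in the full process, the box count $k$ would itself be drawn at rate $F(k)$):

```python
import random
from collections import defaultdict

def paintbox_step(blocks, k):
    """One S-coalescent merger event: throw one ball per block into k
    boxes and merge all blocks whose balls share a box."""
    boxes = defaultdict(list)
    for block in blocks:
        boxes[random.randrange(k)].append(block)
    return [sum(group, []) for group in boxes.values()]

random.seed(1)
blocks = [[i] for i in range(10)]   # ten singleton blocks
print(paintbox_step(blocks, k=3))   # at most three blocks remain
```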
###### Proposition 1.4.
For each $b\geq 2$ and $k_{1},\ldots,k_{r}$ such that $\sum_{i=1}^{r}k_{i}=b$,
we have
$\lambda_{b,(k_{1},\dots,k_{r})}\ =\ a\ \mathds{1}_{\\{r=b-1,k_{1}=2\\}}\ +\
\sum_{k\geq r}F(k)\frac{k!}{(k-r)!}\frac{1}{k^{b}},$
where $a=F(0)$.
This result is obtained from (1.1) as follows. The term $F(k)$ comes from
$\Xi^{0}(d\zeta)/(\zeta,\zeta)$, while $k!/(k-r)!$ is the number of choices of
$i_{1},\dots,i_{r}$ and in this case
$\zeta_{i_{1}}^{k_{1}}\dots\zeta_{i_{r}}^{k_{r}}$ equals $1/k^{b}$. In other
words, at each jump time, if the number of boxes is $k$ (chosen with respect
to $F$), we choose $r$ ordered boxes and we allocate the $b$ balls to these
$r$ boxes ($k_{1}$ balls to the first box, $k_{2}$ to the second one, etc…).
Finally let us consider $\\{N_{t}\\}_{t\geq 0}$, the block-counting process of
the symmetric coalescent and, for $i>j$, let us denote by $q_{ij}$ its
transition rate from $i$ to $j$. Our next result is the symmetric coalescent
version of Proposition 2.1 in [12]. Let $W^{k,b}$ be the random variable
corresponding to the number of non-empty boxes when allocating randomly $b$
balls into $k$ boxes, whose distribution can be found in [8], proof of Theorem
3.6.10, page 172.
###### Proposition 1.5.
We have
$q_{ij}\ =\ a\binom{i}{2}\mathds{1}_{\\{j=i-1\\}}\ +\ \sum_{k\geq
1}F(k)\mathbb{P}(W^{k,i}=j)$
with
$\mathbb{P}(W^{k,i}=j)\ =\
\binom{k}{j}\left(\frac{j}{k}\right)^{i}\sum_{r=0}^{j}(-1)^{r}\binom{j}{r}\left(1-\frac{r}{j}\right)^{i}.$
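The formula for $\mathbb{P}(W^{k,i}=j)$ is easily cross-checked by simulation (a quick Python sanity check of ours):

```python
import numpy as np
from math import comb

def occupancy_pmf(k, i, j):
    """P(W^{k,i} = j): exactly j boxes occupied after i balls in k boxes."""
    return comb(k, j) * (j / k) ** i * sum(
        (-1) ** r * comb(j, r) * (1 - r / j) ** i for r in range(j + 1))

k, i = 5, 8
rng = np.random.default_rng(0)
counts = np.array([len(set(rng.integers(k, size=i).tolist()))
                   for _ in range(100_000)])
for j in range(1, min(k, i) + 1):
    print(f"j={j}: formula {occupancy_pmf(k, i, j):.4f}"
          f"  simulation {np.mean(counts == j):.4f}")
```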
The fact that the characterizing measure $S$ (or $F$) only puts weight on
elements of $\Delta^{sym}$ simplifies a lot the global picture of the
coalescence tree, even when starting from an infinite population. In
particular, an $S$-coalescent is almost surely finite after the first
coalescence (that is not a ‘Kingman type’ coalescence). However, the symmetric
coalescent can come down from infinity. Denoting by $\\{N_{t}\\}_{t\geq 0}$
the block-counting process of the symmetric coalescent and assuming that
$N_{0}=\infty$, recall that a coalescent comes down from infinity if for every
$t>0$, $N_{t}<\infty$ almost surely. Observing that the time of the first (non
‘Kingman type’) coalescence event is exponentially distributed, with parameter
$\sum_{k=1}^{\infty}\frac{S(\xi^{k})}{(\xi^{k},\xi^{k})}=\sum_{k=1}^{\infty}kS(\xi^{k})$,
it is straightforward to get the next result.
###### Proposition 1.6.
An $S$-coalescent comes down from infinity if and only if
$S(\xi^{0})>0\ \textrm{ or }\ \sum_{k\geq 1}kS(\xi^{k})=\sum_{k\geq
1}F(k)=\infty.$
An interesting family of symmetric coalescents, that we will call
$(\beta,S)$-coalescents, contains those characterized by $F(k)=k^{-\beta}$,
for $\beta>0$ (so that condition (1.3) is satisfied). By Proposition 1.6, a
$(\beta,S)$-coalescent comes down from infinity if and only if $0<\beta\leq
1$. We now focus on the total coalescence rate when there are $n$ lineages,
which is given by
$\lambda_{n}=\sum_{r=1}^{n-1}\ \sum_{k_{1},\dots,k_{r},\sum
k_{i}=n}\mathcal{N}(n,(k_{1},\dots,k_{r}))\lambda_{n,(k_{1},\dots,k_{r})},$
(1.5)
where $\mathcal{N}(n,(k_{1},\dots,k_{r}))$ is the number of different simultaneous choices of a $k_{1}$-tuple, a $k_{2}$-tuple,… and a $k_{r}$-tuple from a set of $n$ elements. An explicit formula for this number can be found
in [30], display $(3)$.
###### Proposition 1.7.
For the $(\beta,S)$-coalescent, with $\beta\in\ (0,1)$, we have
$\lim_{n\to\infty}n^{2(\beta-1)}\lambda_{n}=\frac{2^{\beta-1}\Gamma(\beta)}{1-\beta}.$
For the $(1,S)$-coalescent, we have
$\lim_{n\to\infty}\frac{\lambda_{n}}{\log n}=2.$
It is interesting to compare these asymptotics with other classical
coalescents. For example, when $\beta\to 0$, the total coalescence rate
becomes very close to the total coalescence rate of the Kingman coalescent,
which is of order $n^{2}$. When $\beta\in(0,1/2]$ the total coalescence rate
is very close to that of a $Beta(2-2\beta,2\beta)$-coalescent, see Lemma 2.2 in [7]. In particular, the rates of the $(1/2,S)$-coalescent have the same order as those of the Bolthausen-Sznitman coalescent.
### 1.3 Genealogies of Wright-Fisher models with bottlenecks
In this paper, we study four types of Wright-Fisher models with bottlenecks
and their genealogies. We establish the relations between forwards and
backwards models via moment duality results. In particular, when the
bottlenecks are short, drastic and rare and $b_{i,N}N$ is distributed as
$F^{0}$, a measure on $\mathbb{N}$, we prove (see Theorem 5.4) that the
scaling limit, in the sense of weak convergence in the Skorokhod topology, of
a subpopulation frequency is given by a Wright-Fisher diffusion with jumps
$dX_{t}\ =\ \sqrt{X_{t}(1-X_{t})}dB_{t}\ +\
\int_{\mathbb{N}}\int_{[0,1]^{\mathbb{N}}}\frac{1}{k}\sum_{i=1}^{k}\left(\mathds{1}_{\\{u_{i}\leq
X_{t^{-}}\\}}-X_{t^{-}}\right)\hat{N}(dt,dk,du),$
where $\\{B_{t}\\}_{t\geq 0}$ is a standard Brownian motion and $\hat{N}$ is a
compensated Poisson measure on
$(0,\infty)\times\mathbb{N}\times[0,1]^{\mathbb{N}}$ with intensity $ds\otimes
F^{0}(k)\otimes du$, where $du$ is the Lebesgue measure on
$[0,1]^{\mathbb{N}}$. The jump term can be interpreted as follows. At rate
$F^{0}(k)$ there is a bottleneck in which only $k$ individuals survive. The
term ‘$\mathds{1}_{\\{u_{i}\leq X_{t^{-}}\\}}$’ is the probability that an
individual chooses a type 1 parent (when choosing her parent uniformly from
the generation before the bottleneck), and therefore
$\frac{1}{k}\sum_{i=1}^{k}\mathds{1}_{\\{u_{i}\leq X_{t^{-}}\\}}$ is the
frequency of type 1 individuals after the bottleneck. As we shall see in
Section 2.3, this equation has a unique strong solution that is moment dual to
the block-counting process of the symmetric coalescent characterized by
$F=\delta_{0}+F^{0}$. Duality relations between $\Xi$-coalescents and Wright-
Fisher diffusions with jumps were established in [5]. Moment duality implies
that the process counting the number of ancestors to a sample of individuals
in the Wright-Fisher model with short drastic bottlenecks converges to the
block-counting process of the symmetric coalescent, in the sense of finite
dimensional distributions. In Section 3.1 of [13] the authors show that
convergence in the $J_{1}$ Skorokhod topology of the forward frequency process
to the solution of the above SDE implies convergence in $J_{1}$ of the process
counting the number of ancestors to a sample.
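To make the dynamics concrete, here is a naive Euler-type simulation sketch of this diffusion with bottleneck jumps (the geometric choice for $F^{0}$ and all numeric values are ours; note that the compensator contributes no drift, since each jump has zero mean):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, x = 1e-4, 5.0, 0.3
rate = 1.0        # total mass of F0: overall bottleneck intensity

for _ in range(int(T / dt)):
    # Wright-Fisher diffusion part.
    x += np.sqrt(max(x * (1 - x), 0.0) * dt) * rng.standard_normal()
    x = min(max(x, 0.0), 1.0)
    # Bottleneck: k survivors, each picking a type-1 parent w.p. x,
    # so the post-bottleneck frequency is Binomial(k, x)/k.
    if rng.random() < rate * dt:
        k = rng.geometric(0.5)        # k drawn from an illustrative F0
        x = rng.binomial(k, x) / k
print("final type-1 frequency:", x)
```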
A similar strategy is used in the case of Wright-Fisher models with long
bottlenecks. However, the situation is different since the frequency process
is not Markovian anymore, as the transition rates depend on whether the
population is undergoing a bottleneck or not. Nevertheless, we still obtain a
scaling limit, but in the sense of convergence in measure (as defined in
Section 3) over the Skorokhod space, to a diffusion with jumps. Intuitively,
only a measure zero set of points prevents $J_{1}$ convergence. These are
exactly the accumulation points of the times at which a bottleneck occurred in
the discrete model. The method for proving this convergence relies on a new
Skorokhod type metric that allows us to prove convergence in measure using
standard arguments. We prove that the diffusion with jumps is moment dual to
the block-counting process of a $\Xi$-coalescent. In the case of long drastic
bottlenecks, it is the drastic bottleneck coalescent (see Definition 4.1) and
in the case of long soft bottlenecks it is the subordinated Kingman coalescent
introduced in [5] and studied in Section 5.2. Again, moment duality implies
convergence, in the sense of finite dimensional distributions, of the process
counting the number of ancestors of a sample.
In the case of Wright-Fisher models with short soft bottlenecks, a different
strategy is used. Using Möhle’s theorem [25], we obtain that, under an
appropriate time re-scaling, the (partition-valued) ancestral process
converges to a time-changed Kingman coalescent. Again, moment duality implies
the convergence in the sense of finite dimensional distributions of the
frequency process to a Wright-Fisher diffusion. Table 1 summarizes these
results.
|  | Drastic | Soft |
| --- | --- | --- |
| Short | $S$-coalescent | Continuous time-rescaling of the Kingman coalescent |
| Long | Drastic bottleneck coalescent | Subordinated Kingman coalescent |
Table 1: Limiting genealogies for the different types of Wright-Fisher models
with bottlenecks.
### 1.4 Outline
We start the core of this article with a complete study of the symmetric
coalescent. More precisely, in Section 2.1, we prove Theorem 1.4. In Section
2.2, we study asymptotics of the total coalescent rates (Proposition 1.7) and
the tree length in the special case of $(\beta,S)$-coalescents. In Section
2.3, we establish a first duality result between the $S$-coalescent and the
Wright-Fisher diffusion with short drastic bottlenecks. In Section 3 we
introduce the new Skorokhod type metric, that will be used in the last two
sections, which are devoted to the study of other models with bottlenecks and
their genealogies: Section 4 for long drastic bottlenecks and Section 5 for
soft bottlenecks, where time-changed Kingman coalescents appear as limiting
genealogies.
## 2 The symmetric coalescent
We will start by considering bottlenecks that are drastic and short, i.e. bottlenecks that last for only one generation and in which the population size during the bottleneck does not tend to infinity as $N\to\infty$. More precisely, we consider the following model (which is a special case of Definition 1.1).
###### Definition 2.1 (Wright-Fisher model with short drastic bottlenecks).
Fix $\alpha\in(0,1]$, $N\in\mathbb{N}$, $k^{(N)}\in(0,N^{\alpha})$ and $F^{0}$
a probability measure on $\mathbb{N}$. Let $\\{F_{g}\\}_{g\in\mathbb{Z}_{+}}$
be a sequence of i.i.d. random variables of law $F^{0}$. Also, let
$\\{B^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ be a sequence of i.i.d. Bernoulli
random variables of parameter $k^{(N)}/N^{\alpha}$. The Wright-Fisher model
with short drastic bottlenecks is such that the sequence of population sizes
$\\{R^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ is given by
$\ R^{N}_{g}=N(1-B_{g}^{N})+\min(N,F_{g})B_{g}^{N}.$
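For intuition, the sequence of population sizes in Definition 2.1 is straightforward to simulate. A minimal sketch (in Python; the function name and the example choice of $F^{0}$ are ours):

```python
import numpy as np

def population_sizes(N, alpha, k_N, sample_F0, generations, rng):
    """R^N_g = N (1 - B_g) + min(N, F_g) B_g, with B_g ~ Bernoulli(k_N / N^alpha)
    and F_g i.i.d. from F^0 (Definition 2.1)."""
    B = rng.random(generations) < k_N / N**alpha   # bottleneck indicators
    F = sample_F0(generations, rng)                # i.i.d. draws from F^0
    return np.where(B, np.minimum(N, F), N)

rng = np.random.default_rng(1)
# Example: F^0 = geometric distribution on {1, 2, ...} with success probability 1/2.
R = population_sizes(N=10_000, alpha=1.0, k_N=1,
                     sample_F0=lambda n, rng: rng.geometric(0.5, size=n),
                     generations=50_000, rng=rng)
print((R < 10_000).mean())  # fraction of bottleneck generations, about k_N / N^alpha
```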
###### Remark 2.2.
In this case, the bottlenecks are short and, if the $i$-th bottleneck takes place during generation $g$, $b_{i,N}N=\min(N,F_{g})$, which does not tend to infinity when $N$ goes to infinity, so the bottlenecks are drastic. In addition, $s_{i,N}$, the time between two bottlenecks, follows a geometric distribution of parameter $k^{(N)}/N^{\alpha}$, so if $k^{(N)}=O(1)$, the expectation of $s_{i,N}/N$ is of order $N^{\alpha}/N$. Thus, when $\alpha<1$ the bottlenecks are frequent and when $\alpha=1$ the bottlenecks are rare; the main consequence of this is that in the case of rare bottlenecks there is a Kingman/Wright-Fisher component in the scaling limit, while in the case of frequent bottlenecks there is not enough time for the Kingman part to appear.
As we will prove, when $N\to\infty$ and time is rescaled by $N^{\alpha}$, the
genealogy of this model is described by the symmetric coalescent. It is now
time to study this process.
### 2.1 Characterization
Let us start with the proof of Theorem 1.4.
###### Proof of Theorem 1.4.
From (1.4), we decompose $F$ (resp. $S$) into a ‘Kingman part’ and a
‘simultaneous multiple collisions’ part, i.e. $F=a\delta_{0}+F^{0}$ where
$a:=F(0)\geq 0$ and $F^{0}(0)=0$ (resp. $S=a\delta_{(0,0,\dots)}+S^{0}$).
We start by proving that any $\Xi$-coalescent that is characterized by a
measure $S$ on $\Delta^{sym}$ as above is symmetric. We fix $b\geq 2$ and
$k_{1},\ldots,k_{r}$ and $k^{\prime}_{1},\ldots,k^{\prime}_{r}$ such that
$\sum_{i=1}^{r}k_{i}=\sum_{i=1}^{r}k^{\prime}_{i}=b$. From Theorem 2 in [30],
the transition rates can be written as follows:
$\displaystyle\lambda_{b,(k_{1},\dots,k_{r})}$
$\displaystyle=a\mathds{1}_{\\{r=b-1\\}}+\int_{\Delta}\sum_{i_{1}\neq\dots\neq
i_{r}}\zeta^{k_{1}}_{i_{1}}\dots\zeta^{k_{r}}_{i_{r}}\
\frac{S^{0}(d\zeta)}{(\zeta,\zeta)}$
$\displaystyle=a\mathds{1}_{\\{r=b-1\\}}+\sum_{j=r}^{\infty}\frac{j!}{(j-r)!}\left(\frac{1}{j}\right)^{b}F^{0}(j)$
$\displaystyle=\lambda_{b,(k^{\prime}_{1},\dots,k^{\prime}_{r})}.$
Conversely, suppose that a $\Xi$-coalescent satisfies the symmetric condition
on its transition rates. We write $\Xi=a\delta_{(0,0,\dots)}+\Xi^{0}$. For any
$\zeta\in\Delta$, we set $\zeta_{0}=1-\sum_{i=1}^{\infty}\zeta_{i}$. We define
$Z=\\{\zeta\in\Delta,\ \exists j,i,\ \zeta_{i}>\zeta_{j}>0\\}$
and we assume that $\Xi^{0}(Z)>0$. Using Theorem 2 in [30], we have
$\displaystyle\lambda_{4,(2,2)}\ $ $\displaystyle=\
\int_{\Delta}\sum_{i_{1}\neq
i_{2}}\zeta_{i_{1}}^{2}\zeta_{i_{2}}^{2}\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}\
=\ 2\int_{\Delta}\sum_{i_{1}<i_{2}}\
\zeta_{i_{1}}^{2}\zeta_{i_{2}}^{2}\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}$
and
$\displaystyle\lambda_{4,(3,1)}\ $ $\displaystyle=\
\int_{\Delta}\left(\sum_{i_{1}\neq
i_{2}}\zeta_{i_{1}}^{3}\zeta_{i_{2}}+\zeta_{0}\sum_{j}\zeta_{j}^{3}\right)\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}$
$\displaystyle\geq\int_{\Delta}\left(\sum_{i_{1}<i_{2}}\zeta_{i_{1}}^{3}\zeta_{i_{2}}+\sum_{i_{1}<i_{2}}\zeta_{i_{1}}\zeta_{i_{2}}^{3}\right)\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}.$
So,
$\displaystyle\lambda_{4,(3,1)}-\lambda_{4,(2,2)}\ $
$\displaystyle\geq\int_{\Delta}\left(\sum_{i_{1}<i_{2}}\zeta_{i_{1}}^{3}\zeta_{i_{2}}+\sum_{i_{1}<i_{2}}\zeta_{i_{1}}\zeta_{i_{2}}^{3}-2\sum_{i_{1}<i_{2}}\zeta_{i_{1}}^{2}\zeta_{i_{2}}^{2}\right)\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}$
$\displaystyle=\int_{\Delta}\sum_{i_{1}<i_{2}}\zeta_{i_{1}}\zeta_{i_{2}}(\zeta_{i_{1}}-\zeta_{i_{2}})^{2}\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}$
$\displaystyle=\int_{Z}\sum_{i_{1}<i_{2}}\zeta_{i_{1}}\zeta_{i_{2}}(\zeta_{i_{1}}-\zeta_{i_{2}})^{2}\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}>0,$
as the integrand is equal to zero on $\Delta\setminus Z$ and strictly positive
on $Z$. This contradicts the symmetry assumption, which requires $\lambda_{4,(3,1)}=\lambda_{4,(2,2)}$. So we need
$\Xi^{0}(Z)=0$, i.e. $\Xi^{0}$ can only take positive values on
$\Delta\setminus Z$, i.e. elements of $\Delta$ such that there exists $0<u\leq
1$ with $\zeta=(u,u,\dots,u,0,0,\dots)$.
Now, we consider the set
$Z_{0}=\\{\zeta\in\Delta\setminus Z,\ \zeta_{0}>0\\}$
and we assume that $\Xi^{0}(Z)=0$ and $\Xi^{0}({Z_{0}})>0$. We have
$\lambda_{4,(2,2)}=\int_{\Delta\setminus Z}\sum_{i_{1}\neq
i_{2}}\zeta_{i_{1}}^{2}\zeta_{i_{2}}^{2}\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}$
and
$\lambda_{4,(3,1)}=\int_{\Delta\setminus Z}\left(\sum_{i_{1}\neq
i_{2}}\zeta_{i_{1}}^{3}\zeta_{i_{2}}+\zeta_{0}\sum_{j}\zeta_{j}^{3}\right)\frac{\Xi^{0}(d\zeta)}{(\zeta,\zeta)}.$
Recall that, if $\zeta\in\Delta\setminus Z$, then $\forall i\geq
1,\zeta_{i}=\zeta_{1}$ or $\zeta_{i}=0$, so $\sum_{i_{1}\neq
i_{2}}\zeta_{i_{1}}^{2}\zeta_{i_{2}}^{2}=\sum_{i_{1}\neq
i_{2}}\zeta_{i_{1}}^{3}\zeta_{i_{2}}$. So
$\lambda_{4,(2,2)}=\lambda_{4,(3,1)}$ if and only if $\Xi^{0}(Z_{0})=0$, which
means that $\Xi^{0}$ can only put weight on elements of $(\Delta\setminus
Z)\setminus Z_{0}$, i.e. elements of $\Delta^{sym}$. This completes the proof.
∎
As we shall see in Section 2.3, the symmetric coalescent characterized by a
measure $F=a\delta_{0}+F^{0}$, where $F^{0}$ is a probability measure,
describes the genealogy of a Wright-Fisher model with short drastic
bottlenecks parametrized by $\alpha\in(0,1]$, $N$, $k^{(N)}=1$ and $F^{0}$, in
the limit when $N\to\infty$. The case $a=0$ corresponds to frequent
bottlenecks ($\alpha<1$) and the case $a=1$ corresponds to rare bottlenecks
($\alpha=1$). In fact, when time is rescaled by $N^{\alpha}$, if the
bottlenecks are frequent, in the limiting genealogy we only see coalescent
events taking place during the bottlenecks, whereas if the bottlenecks are
rare, there is a ‘Kingman part’ in the limiting genealogy, corresponding to
coalescence events taking place outside the bottlenecks. We will also discuss
in this section a model where the limit genealogical process is a symmetric
coalescent characterized by a measure $F$ that is not finite. Indeed, we show
that, when $N\to\infty$, the frequency process of the Wright-Fisher model with short drastic bottlenecks converges to a diffusion with jumps that is moment dual to the block-counting process of the symmetric coalescent.
### 2.2 Tree length and total coalescence rate of $(\beta,S)$-coalescents
We now focus on the family of $(\beta,S)$-coalescents. In this case, the total
coalescence rate (1.5) is
$\displaystyle\lambda_{n}\ =\
\sum_{k=1}^{\infty}k^{-\beta}\mathbb{P}(\mathcal{C}^{k}_{n}),$ (2.6)
where $\mathcal{C}^{k}_{n}$ is the event that, in the paintbox construction
with $k$ boxes and $n$ balls, there are at least two balls that are allocated
to the same box. For $n>k$, $\mathbb{P}(\mathcal{C}^{k}_{n})=1$ and for $n\leq
k$,
$\mathbb{P}(\mathcal{C}^{k}_{n})=1-\prod_{i=2}^{n}\frac{k+1-i}{k}.$
In fact, the probability of $\mathcal{C}^{k}_{n}$ is 1 minus the probability that $n$ successive balls are allocated to distinct boxes, which can be computed in the following way: the first ball is allocated to any box and then, for $2\leq i\leq n$, the $i$-th ball is allocated to one of the $k+1-i$ still-empty boxes, which happens with probability $(k+1-i)/k$.
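As a sanity check, the series (2.6) can be evaluated numerically and compared with the limit in Proposition 1.7. A sketch in Python (the truncation point $K$ is ours; for $k\geq n$ the no-collision probability equals $(k-1)!/((k-n)!\,k^{n-1})$, which we evaluate in log scale):

```python
import numpy as np
from math import gamma
from scipy.special import gammaln

def lambda_n(n, beta, K):
    """Truncated version of (2.6); the neglected tail is O(n^2 / K^beta)."""
    head = np.sum(np.arange(1, n, dtype=float) ** (-beta))  # k < n: P(C^k_n) = 1
    k = np.arange(n, K + 1, dtype=float)
    log_no_coll = gammaln(k) - gammaln(k - n + 1) - (n - 1) * np.log(k)
    return head + np.sum(k ** (-beta) * (1.0 - np.exp(log_no_coll)))

n, beta = 100, 0.5
print(n ** (2 * (beta - 1)) * lambda_n(n, beta, K=2_000_000))  # roughly 2.4-2.5
print(2 ** (beta - 1) * gamma(beta) / (1 - beta))              # limit: 2.5066...
```

Convergence in $n$ and in the truncation $K$ is slow, so only rough agreement should be expected. We are now ready to prove Proposition 1.7.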
###### Proof of Proposition 1.7.
We first treat the case $\beta\in(0,1)$. Fix $0<\epsilon<1$. We divide (2.6)
into two parts:
$\lambda_{n}=\sum_{k=1}^{\lfloor
n^{1+\epsilon}\rfloor-1}k^{-\beta}\mathbb{P}(\mathcal{C}^{k}_{n})+\sum_{k=\lfloor
n^{1+\epsilon}\rfloor}^{\infty}k^{-\beta}\mathbb{P}(\mathcal{C}^{k}_{n}).$
For the second term, we have
$\displaystyle\sum_{k=\lfloor
n^{1+\epsilon}\rfloor}^{\infty}k^{-\beta}\left(1-\prod_{i=1}^{n-1}\left(1-\frac{i}{k}\right)\right)$
$\displaystyle\sim\sum_{k=\lfloor
n^{1+\epsilon}\rfloor}^{\infty}k^{-\beta}\left(1-\exp\left(-\sum_{i=1}^{n-1}\frac{i}{k}\right)\right)$
$\displaystyle\sim\sum_{k=\lfloor
n^{1+\epsilon}\rfloor}^{\infty}k^{-\beta}\left(1-\exp\left(-\frac{n^{2}}{2k}\right)\right)$
$\displaystyle\sim
n^{2-2\beta}\int_{0}^{\infty}x^{-\beta}\left(1-\exp\left(-\frac{1}{2x}\right)\right)dx$
$\displaystyle=\frac{2^{\beta-1}\Gamma(\beta)}{1-\beta}n^{2(1-\beta)},$
where the last equality is obtained by integrating by parts and using the
inverse-gamma distribution.
For the first term, observe that
$\sum_{k=1}^{\lfloor
n^{1+\epsilon}\rfloor}k^{-\beta}\mathbb{P}(\mathcal{C}^{k}_{n})\leq\sum_{k=1}^{\lfloor
n^{1+\epsilon}\rfloor}k^{-\beta}\sim n^{(1+\epsilon)(1-\beta)},$
which is negligible compared to the second term.
Let us now suppose that $\beta=1$. We divide (2.6) into three parts (recall
that $\mathbb{P}(\mathcal{C}^{k}_{n})=1$ when $k\leq n$).
$\lambda_{n}=\sum_{k=1}^{n-1}k^{-1}+\sum_{k=n}^{\lfloor
n^{1+\epsilon}\rfloor-1}k^{-1}\mathbb{P}(\mathcal{C}^{k}_{n})+\sum_{k=\lfloor
n^{1+\epsilon}\rfloor}^{\infty}k^{-1}\mathbb{P}(\mathcal{C}^{k}_{n}).$
The first term is obviously equivalent to $\log n$. The second term is clearly smaller than $\epsilon\log n$. Let us now find a lower bound for it:
$\displaystyle\sum_{k=n}^{\lfloor
n^{1+\epsilon}\rfloor-1}k^{-1}\left(1-\prod_{i=1}^{n-1}\left(1-\frac{i}{k}\right)\right)$
$\displaystyle\geq\sum_{k=n}^{\lfloor
n^{1+\epsilon}\rfloor-1}k^{-1}\left(1-\exp\left(\sum_{i=1}^{n-1}\left(-\frac{i}{k}\right)\right)\right)$
$\displaystyle\sim\int_{n^{-1}}^{n^{\epsilon-1}}x^{-1}\left(1-\exp\left(-\frac{1}{2x}\right)\right)dx$
$\displaystyle=\int_{n^{1-\epsilon}}^{n}y^{-1}\left(1-\exp\left(-\frac{y}{2}\right)\right)dy$
$\displaystyle=\log
n\left(1-\exp\left(-\frac{n}{2}\right)\right)-(1-\epsilon)\log
n\left(1-\exp\left(-\frac{n^{1-\epsilon}}{2}\right)\right)$
$\displaystyle-\frac{1}{2}\int_{n^{1-\epsilon}}^{n}\log y\exp\left(-\frac{y}{2}\right)dy$ $\displaystyle\sim\epsilon\log n.$
For the third term, we use similar computations as in the case $\beta<1$,
$\displaystyle\sum_{k=\lfloor
n^{1+\epsilon}\rfloor}^{\infty}k^{-1}\left(1-\prod_{i=1}^{n-1}\left(1-\frac{i}{k}\right)\right)$
$\displaystyle\sim\int_{n^{\epsilon-1}}^{\infty}x^{-1}\left(1-\exp\left(-\frac{1}{2x}\right)\right)dx$
$\displaystyle=\int_{0}^{n^{1-\epsilon}}y^{-1}\left(1-\exp\left(-\frac{y}{2}\right)\right)dy$
$\displaystyle=(1-\epsilon)\log n\left(1-\exp\left(-\frac{n^{1-\epsilon}}{2}\right)\right)-\frac{1}{2}\int_{0}^{n^{1-\epsilon}}\log y\exp\left(-\frac{y}{2}\right)dy$ $\displaystyle\sim(1-\epsilon)\log n.$
This ends the proof. ∎
This result on the total coalescence rate allows us to give a first estimate
of the tree length of the $(\beta,S)$-coalescent. Let $L_{n}$ be the sum of
the lengths of all the branches of the tree obtained from a
$(\beta,S)$-coalescent started with $n$ lineages and stopped at the first time
when there is only one lineage.
###### Corollary 1.
For $\beta\in(0,1)$, there exist two positive constants $C^{\prime}_{\beta}$
and $c^{\prime}_{\beta}$, that only depend on $\beta$ (and not on $n$), such
that for $n$ large enough,
$c^{\prime}_{\beta}n^{(2\beta-1)\vee 0}\ \leq\ \mathbb{E}(L_{n})\ \leq\
C^{\prime}_{\beta}n^{(2\beta)\wedge 1}.$
For $\beta=1$, there exists a constant $C^{\prime}_{1}$ such that
$\frac{n}{2\log n}(1+o(1))\ \leq\ \mathbb{E}(L_{n})\ \leq\ C^{\prime}_{1}n.$
As we can see in Figure 2, this corollary provides better estimates of the
tree length when $\beta$ is close to 1 or close to 0.
Figure 2: Illustration of Corollary 1. The orange lines are $y=2x-1$ and
$y=2x$, and the green line is $y=1$. The blue area is the region where
$\log(\mathbb{E}(L_{n}))$ is located.
###### Proof.
We start by proving that, for any $\beta\in[0,1]$, the expected tree length is
at most of order $n$. We consider the $S^{1}$-coalescent, characterized by
$S^{1}(\xi^{k})=\mathds{1}_{\\{k=1\\}}$. First, one can easily show that the
tree length of the $S^{1}$-coalescent is of order $n$ (it is a star-shaped
coalescent). Second, the rate of events of size 1 in the $S^{1}$-coalescent
and in the $(\beta,S)$-coalescent is the same and, for $k>1$,
$S^{1}(\xi^{k})=0$ while, in the $(\beta,S)$-coalescent,
$S(\xi^{k})=k^{\beta}>0$. It is not hard to construct a coupling between the
two processes in such a way that the length of the $S^{1}$-coalescent is
always larger than the length of the $(\beta,S)$-coalescent.
The expectation of the time to the first coalescence when there are $k$
lineages is $1/\lambda_{k}$, so ${k}/{\lambda_{k}}$ is the expected length of
a tree started with $k$ lineages and stopped at the first coalescence event.
So, for $\beta\in(0,1)$, we have
$\mathbb{E}(L_{n})\ \leq\ \sum_{k=2}^{n}\frac{k}{\lambda_{k}}.$
Recall that, in a coalescent where only two blocks can coalesce at a time (for
example the Kingman coalescent), the sum on the right hand side would be the
expected length, but in a coalescent with simultaneous multiple collisions we
do not observe all the states $\\{2,\dots,n\\}$ for the block-counting process
so it is only an upper bound. Using Proposition 1.7 for $\beta<1$, there
exists a constant $c$ such that
$\displaystyle\mathbb{E}(L_{n})\ $
$\displaystyle\leq\frac{1-\beta}{2^{\beta-1}\Gamma(\beta)}\ \sum_{k=2}^{n}k^{2\beta-1}(1+o(1))$ $\displaystyle\leq c\ \int_{1}^{n}t^{2\beta-1}dt=\frac{c}{2\beta}(n^{2\beta}-1),$
which completes the proof of this first step.
For $\beta\in(0,1]$, for the lower bound, we have
$\mathbb{E}(L_{n})\ \geq\ \frac{n}{\lambda_{n}},$
which is the expected length of the tree stopped at the first coalescence event. When
$0<\beta<1/2$ this lower bound is not interesting, as it is of order
$n^{2\beta-1}$ and it decreases with $n$. But $\mathbb{E}(L_{n})$ can always
be bounded from below by a positive constant, which completes the proof. ∎
### 2.3 Duality with the Wright-Fisher model with short drastic bottlenecks
We consider the Wright-Fisher model with short drastic bottlenecks from
Definition 2.1. Imagine that there are two types of individuals, $0$ and $1$,
and each individual inherits the type of her parent. We denote by
$\\{X^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ the process corresponding to the frequency
of type $1$ individuals in the population i.e., for any $g\in\mathbb{Z}_{+}$,
$X^{N}_{g}=\frac{\sum_{i=1}^{R_{g}^{N}}\mathds{1}_{\\{(i,g)\textrm{ is of type
1}\\}}}{R_{g}^{N}}.$
As in the classical Wright-Fisher model, given $R^{N}_{g+1}$ and $X^{N}_{g}$,
$R_{g+1}^{N}X^{N}_{g+1}$ follows a binomial distribution of parameters
$R_{g+1}^{N}$ and $X^{N}_{g}$. In the following, ‘$\Longrightarrow$’ denotes
weak convergence in the Skorokhod topology on $D([0,1],\mathbb{R}_{+})$.
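The discrete dynamics are easy to simulate: given the population sizes, each generation is a single binomial draw. A minimal sketch (in Python; the function name is ours):

```python
import numpy as np

def frequency_process(R, x0, rng):
    """Type-1 frequency X^N_g: given R_{g+1} and X^N_g, the number of
    type-1 individuals R_{g+1} X^N_{g+1} is Binomial(R_{g+1}, X^N_g)."""
    X = np.empty(len(R))
    X[0] = x0
    for g in range(len(R) - 1):
        X[g + 1] = rng.binomial(R[g + 1], X[g]) / R[g + 1]
    return X

rng = np.random.default_rng(2)
R = np.array([1000] * 10 + [3] + [1000] * 10)  # one drastic bottleneck of size 3
print(frequency_process(R, 0.5, rng))          # a macroscopic jump at the bottleneck
```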
###### Theorem 2.3.
Let $F^{0}$ be a measure on $\mathbb{N}$ that fulfills condition (1.3). Fix
$\alpha\in(0,1]$ and $\gamma\in(0,\alpha/2)$. We consider the probability
measure $F_{\gamma}^{N}$ defined by
$F_{\gamma}^{N}:=\frac{\sum_{k=1}^{\lfloor
N^{\gamma}\rfloor}F^{0}(k)\delta_{k}}{\sum_{k=1}^{\lfloor
N^{\gamma}\rfloor}F^{0}(k)}.$
Consider the sequence of processes $\\{X^{N}\\}_{N\in\mathbb{N}}$, such that
$X^{N}=\\{X^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ is the frequency process
associated with the Wright-Fisher model with short drastic bottlenecks
parametrized by $\alpha$, $N$, $k^{(N)}=\sum_{k=1}^{\lfloor
N^{\gamma}\rfloor}F^{0}(k)$ and $F_{\gamma}^{N}$ (from Definition 2.1). Then,
$\\{X^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{t\geq
0}\underset{N\to\infty}{\Longrightarrow}\\{X_{t}\\}_{t\geq 0},$
where $\\{X_{t}\\}_{t\geq 0}$ is the unique strong solution of the SDE
$dX_{t}\ =\ \mathds{1}_{\\{\alpha=1\\}}\sqrt{X_{t}(1-X_{t})}dB_{t}\ +\
\int_{\mathbb{N}}\int_{[0,1]^{\mathbb{N}}}\frac{1}{k}\sum_{i=1}^{k}\left(\mathds{1}_{\\{u_{i}\leq
X_{t^{-}}\\}}-X_{t^{-}}\right)\hat{N}(dt,dk,du),$ (2.7)
where $\\{B_{t}\\}_{t\geq 0}$ is a standard Brownian motion and $\hat{N}$ is a
compensated Poisson measure on
$(0,\infty)\times\mathbb{N}\times[0,1]^{\mathbb{N}}$ with intensity $ds\otimes
F^{0}(k)\otimes du$, where $du$ is the Lebesgue measure on
$[0,1]^{\mathbb{N}}$.
The same result holds if we consider $F^{0}$, a probability measure on
$\mathbb{N}$ and the Wright-Fisher model with short drastic bottlenecks
parametrized by $\alpha$, $N$, $k^{(N)}=1$ and $F^{0}$.
###### Remark 2.4.
The definition of $F_{\gamma}^{N}$ ensures that $F_{\gamma}^{N}$ is a
probability measure on $\mathbb{N}$ and $k^{(N)}/N^{\alpha}\in[0,1]$, so the
Wright-Fisher model with short drastic bottlenecks is well-defined, at least
for $N$ large enough. In fact, if $F^{0}$ satisfies condition (1.3), for $k$
large enough $F^{0}(k)<k$, so $k^{(N)}\leq N^{2\gamma}+C$, where $C$ is a constant, and $N^{2\gamma}/N^{\alpha}\to 0$ since $\gamma<\alpha/2$.
In words, the frequency process associated with the Wright-Fisher model with
short drastic bottlenecks converges to a diffusion with jumps that is similar
to the frequency process associated with the $\Xi$-Fleming-Viot process (where the characterizing measure $\Xi$ is the measure $S$ on $\Delta^{sym}$ that can be
obtained from $F^{0}$ as in Theorem 1.4). When the bottlenecks are frequent
($\alpha<1$), the limiting process is a pure jump process whereas, when the
bottlenecks are rare ($\alpha=1$), we also have a diffusion term, which is a
Wright-Fisher diffusion and corresponds to the evolution of the population
outside the bottlenecks. Before proving Theorem 2.3 we shall make sure that a
solution to Equation (2.7) exists.
###### Lemma 2.5.
For any measure $F^{0}$ on $\mathbb{N}$ that satisfies condition (1.3) and any
$\alpha\in(0,1]$, there exists a unique strong solution to the SDE (2.7).
###### Proof.
This result is a direct consequence of Lemma 3.6 in [13] (which is itself a
consequence of Theorem 5.1 in [20]), applied to the measure $S$ on
$\Delta^{sym}$ obtained from $F^{0}$ as in Theorem 1.4 and a drift
coefficient equal to 0. ∎
We are now ready to prove Theorem 2.3.
###### Proof of Theorem 2.3.
The proof follows closely the proof of Proposition 3.4 in [13]. The idea is to
prove the convergence of the generator of $\\{X^{N}_{\lfloor
N^{\alpha}t\rfloor}\\}_{t\geq 0}$ to the generator of $\\{X_{t}\\}_{t\geq 0}$.
Provided this claim is true, we can use Theorems 19.25 and 19.28 of [17] to
prove the weak convergence in the Skorokhod topology.
From Lemma 2.5, $\\{X_{t}\\}_{t\geq 0}$ exists and has generator
$\mathcal{A}$. Its domain contains twice differentiable functions and for a
function $f\in C^{2}[0,1]$ and $x\in[0,1]$, we have
$\displaystyle\mathcal{A}f(x)\ =$ $\displaystyle\
\mathds{1}_{\\{\alpha=1\\}}\frac{1}{2}x(1-x)f^{\prime\prime}(x)\ +\
\sum_{k\geq
1}F^{0}(k)\mathbb{E}\left(f\left(\frac{\sum_{i=1}^{k}B_{i}^{x}}{k}\right)-f(x)\right),$
(2.8)
where the $B_{i}^{x}$’s are independent Bernoulli random variables of
parameter $x$ and the second term is the generator of a $\Xi$-Fleming-Viot
process, see for example formula (5.6) in [5] (applied to the measure $S$
associated with $F^{0}$).
For every $N\in\mathbb{N}$, let $\mathcal{U}^{N}$ be the transition operator
associated with $X^{N}$ and define the operator
$\mathcal{A}^{N}:=N^{\alpha}(\mathcal{U}^{N}-I),$ (2.9)
where $I$ is the identity operator (see Theorem 19.28 in [17]).
$\mathcal{A}^{N}$ is referred to as the discrete generator of
$\\{X^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{t\geq 0}$. For any function $f\in
C^{2}[0,1]$ in $x\in[0,1]$ we have
$\displaystyle\mathcal{A}^{N}f(x)\ =$ $\displaystyle\
(1-\frac{\sum_{k=1}^{\lfloor
N^{\gamma}\rfloor}F^{0}(k)}{N^{\alpha}})N^{\alpha}\mathbb{E}\left(f\left(\frac{\sum_{i=1}^{N}B_{i}^{x}}{N}\right)-f(x)\right)$
(2.10) $\displaystyle+$ $\displaystyle\ N^{\alpha}\frac{\sum_{k=1}^{\lfloor
N^{\gamma}\rfloor}F^{0}(k)}{N^{\alpha}}\sum_{k=1}^{\lfloor
N^{\gamma}\rfloor}\frac{F^{0}(k)}{\sum_{k=1}^{\lfloor
N^{\gamma}\rfloor}F^{0}(k)}\mathbb{E}\left(f\left(\frac{\sum_{i=1}^{\min(N,k)}B_{i}^{x}}{\min(N,k)}\right)-f(x)\right).$
(2.11)
First, we study part (2.10). Following Remark 2.4, the prefactor converges to
1. When $\alpha=1$, it is well known that (2.10) converges uniformly as
$N\to\infty$ to $\frac{1}{2}x(1-x)f^{\prime\prime}(x)$, which is the generator
of the Wright-Fisher diffusion (see for example Chapter 2 and Theorem 3.6 in
[9]). When $\alpha<1$ this term becomes of order $N^{\alpha-1}$ and therefore
converges to $0$. Second, it is easy to see that part (2.11) converges when
$N\to\infty$ to the second term of $\mathcal{A}$ in (2.8). Combining these two
results, we have $\mathcal{A}^{N}f\to\mathcal{A}f$ uniformly. ∎
Let us consider $\\{N_{t}\\}_{t\geq 0}$, the block-counting process of the
symmetric coalescent characterized by
$F=\mathds{1}_{\\{\alpha=1\\}}\delta_{0}+F^{0}$ (or by the associated measure
$S$ on $\Delta^{sym}$, as defined in Theorem 1.4). We have the following
duality relation between the block-counting process of the symmetric
coalescent and $\\{X_{t}\\}_{t\geq 0}$, the unique strong solution of (2.7).
###### Theorem 2.6.
For every $x\in[0,1],\ n\in\mathbb{N}$, we have
$\mathbb{E}(X_{t}^{n}|X_{0}=x)\ =\ \mathbb{E}(x^{N_{t}}|N_{0}=n).$
This is a special case of Proposition 3.8 in [13], but we find the proof of
this particular case instructive and for the sake of completeness we include
it here.
###### Proof.
Recall that, from Proposition 1.5, the infinitesimal generator $\mathcal{G}$ of the block-counting process $\\{N_{t}\\}_{t\geq 0}$, applied to any function $h:\mathbb{N}\to\mathbb{R}$, is given by
$\mathcal{G}h(n):=a\binom{n}{2}\left(h(n-1)-h(n)\right)\ +\ \sum_{k\geq
1}F^{0}(k)\sum_{j=1}^{n-1}\mathbb{P}(W^{k,n}=j)\left(h(j)-h(n)\right).$ (2.12)
We first consider the case $\alpha<1$. We use Lemma 4.1 in [13], which states
that the generator $\mathcal{A}$ of $\\{X_{t}\\}_{t\geq 0}$, applied to a function $f\in C^{2}[0,1]$, admits the following representation
$\displaystyle\mathcal{A}f(x)\ $ $\displaystyle=\
\frac{S(\Delta_{S})}{2}\mathbb{E}\left[\frac{(-x+\sum_{i=1}^{\infty}Z_{i}B_{i}^{x})(\sum_{i=1}^{\infty}Z_{i}B_{i}^{x})}{(\sum_{i=1}^{\infty}Z_{i}^{2})}f^{\prime\prime}(x(1-V)+UV\sum_{i=1}^{\infty}Z_{i}B_{i}^{x})\right]$
where $Z=(Z_{1},Z_{2},\dots)$ is $S$-distributed,
$\\{B_{i}^{x}\\}_{i\in\mathbb{N}}$ is a sequence of i.i.d. Bernoulli random
variables with parameter $x$, $U$ is uniform in $[0,1]$, $V$ is $Beta(2,1)$ in
$[0,1]$ and $Z,\\{B_{i}^{x}\\},U$ and $V$ are independent. Using the
definition of $S$, this can be rewritten as
$\displaystyle\mathcal{A}f(x)\ $ $\displaystyle=\ \frac{\sum_{k\geq
1}F^{0}(k)/k}{2}\
\mathbb{E}\left[\frac{(-x+\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})(\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})}{(1/K)}f^{\prime\prime}(x(1-V)+UV\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})\right]$
where the expectation is taken with respect to $K$, a random variable such
that for $k\geq 1$,
$\ \mathbb{P}(K=k)=\frac{F^{0}(k)/k}{\sum_{k\geq 1}F^{0}(k)/k}.$
Integrating by parts,
$\displaystyle\mathcal{A}f(x)\ $ $\displaystyle=\ \frac{\sum_{k\geq
1}F^{0}(k)/k}{2}\
\mathbb{E}\left[\frac{(-x+\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})}{V(1/K)}\left(f^{\prime}(x(1-V)+V\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})-f^{\prime}(x(1-V))\right)\right].$
Conditioning on the value of $V$, we have
$\displaystyle\mathbb{E}\left[\frac{(-x+\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})}{V(1/K)}f^{\prime}(x(1-V))\
\right]=0.$
Again, following closely the proof of Lemma 4.1 in [13], we calculate the
expectation with respect to $V$,
$\displaystyle\mathcal{A}f(x)\ $ $\displaystyle=\ \frac{\sum_{k\geq
1}F^{0}(k)/k}{2}\
\mathbb{E}\left[\frac{(-x+\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})}{V(1/K)}f^{\prime}(x(1-V)+V\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})\right]$
$\displaystyle=\ \frac{\sum_{k\geq 1}F^{0}(k)/k}{2}\
\mathbb{E}\left[\int_{0}^{1}K\frac{(-x+\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})}{s}f^{\prime}(x(1-s)+\frac{s}{K}\sum_{i=1}^{K}B_{i}^{x})2sds\right]$
$\displaystyle=\ \left({\sum_{k\geq 1}F^{0}(k)/k}\right)\
\mathbb{E}\left[K\int_{0}^{1}(-x+\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})f^{\prime}(x(1-s)+\frac{s}{K}\sum_{i=1}^{K}B_{i}^{x})ds\right]$
$\displaystyle=\ \left({\sum_{k\geq 1}F^{0}(k)/k}\right)\
\mathbb{E}\left[K\left(f(\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})-f(x)\right)\right],$
where the last equality follows since the integrand is the derivative in $s$ of $f(x(1-s)+\frac{s}{K}\sum_{i=1}^{K}B_{i}^{x})$. Now, we consider the
function $h$ on $[0,1]\times\mathbb{N}$ such that $h(x,n)=x^{n}$. We fix
$n\in\mathbb{N}$ and we apply the generator $\mathcal{A}$ to $h$, seen as a
function of $x$,
$\displaystyle\mathcal{A}h(x)\ $ $\displaystyle=\ \left({\sum_{k\geq
1}F^{0}(k)/k}\right)\
\mathbb{E}\left[K\left((\frac{1}{K}\sum_{i=1}^{K}B_{i}^{x})^{n}-x^{n}\right)\right]$
$\displaystyle=\ \left({\sum_{k\geq 1}F^{0}(k)/k}\right)\
\mathbb{E}\left[K\left(\frac{1}{K^{n}}\sum_{k_{1}+\dots+k_{K}=n}\binom{n}{k_{1},\dots,k_{K}}\prod_{i=1}^{K}(B_{i}^{x})^{k_{i}}-x^{n}\right)\right]$
$\displaystyle=\ \left({\sum_{k\geq 1}F^{0}(k)/k}\right)\
\mathbb{E}\left[K\left(\sum_{k_{1}+\dots+k_{K}=n}\binom{n}{k_{1},\dots,k_{K}}\frac{1}{K^{n}}\left(x^{\sum_{i=1}^{K}\mathds{1}_{\\{k_{i}>0\\}}}-x^{n}\right)\right)\right],$
(2.13)
where in the last line we use the fact that for
$k_{1},\dots,k_{K}\in\mathbb{N}$,
$\mathbb{E}\left(\prod_{i=1}^{K}(B_{i}^{x})^{k_{i}}\right)=x^{\sum_{i=1}^{K}\mathds{1}_{\\{k_{i}>0\\}}}.$
Consider the random variable $W^{k,n}$ defined in Proposition 1.5. For any
$\kappa\in\mathbb{N}$, we have
$\displaystyle\sum_{k_{1}+\dots+k_{\kappa}=n}\binom{n}{k_{1},\dots,k_{\kappa}}\frac{1}{\kappa^{n}}\left(x^{\sum_{i=1}^{\kappa}\mathds{1}_{\\{k_{i}>0\\}}}-x^{n}\right)$
$\displaystyle=\mathbb{E}\left[x^{W^{\kappa,n}}-x^{n}\right]\ =:\ E_{\kappa}.$
(2.14)
So we have,
$\displaystyle\mathcal{A}h(x)\ $ $\displaystyle=\ \left({\sum_{k\geq
1}F^{0}(k)/k}\right)\ \mathbb{E}\left[KE_{K}\right]$ $\displaystyle=\
\left({\sum_{k\geq 1}F^{0}(k)/k}\right)\ \sum_{k\geq
1}\frac{F^{0}(k)/k}{\sum_{k\geq 1}F^{0}(k)/k}kE_{k}$
$\displaystyle=\sum_{k\geq 1}F^{0}(k)E_{k}$ $\displaystyle=\sum_{k\geq
1}F^{0}(k)\sum_{j=1}^{n}\mathbb{P}(W^{k,n}=j)\left(x^{j}-x^{n}\right)=\mathcal{G}h(n),$
where in the last line $h$ is seen as a function of $n$ and $\mathcal{G}$ is
the generator of the block-counting process of the symmetric coalescent
defined in (2.12), in the case $a=0$. This completes the proof for the case
$\alpha<1$.
If $\alpha=1$ and $f$ is defined as previously, we have
$\displaystyle\mathcal{A}f(x)\ =\ \mathcal{A}_{1}f(x)+\mathcal{A}_{2}f(x)$
where $\mathcal{A}_{2}$ is the infinitesimal generator of $\\{X_{t}\\}_{t\geq
0}$ for the case $\alpha<1$ and $\mathcal{A}_{1}$ is the generator of the
Wright-Fisher diffusion. Let $\\{K_{t}\\}_{t\geq 0}$ be the block-counting
process of the Kingman coalescent and $\\{Y_{t}\\}_{t\geq 0}$ be the classical
Wright-Fisher diffusion. We recall the well known moment duality between these
two processes,
$\mathbb{E}(Y_{t}^{n}|Y_{0}=x)\ =\ \mathbb{E}(x^{K_{t}}|K_{0}=n).$ (2.15)
This implies that
$\displaystyle\mathcal{A}_{1}h(x,n)\ $ $\displaystyle=\
\frac{1}{2}x(1-x)\frac{\partial^{2}h(x,n)}{\partial x^{2}}$ $\displaystyle=\
\binom{n}{2}\left(h(x,n-1)-h(x,n)\right),$
where the last line corresponds to the generator of $\\{K_{t}\\}_{t\geq 0}$.
Combining this with the result for the case $\alpha<1$, we have
$\displaystyle\mathcal{A}h(x,n)\ $ $\displaystyle=\
\binom{n}{2}\left(x^{n-1}-x^{n}\right)+\sum_{k\geq 1}F^{0}(k)\sum_{j\leq
k}\mathbb{P}(W^{k,n}=j)\left(x^{j}-x^{n}\right)$
$\displaystyle=\mathcal{G}h(x,n),$
where $\mathcal{G}$ is the infinitesimal generator of the block-counting
process of the symmetric coalescent for the case $a=1$. This completes the
proof. ∎
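As an aside, the classical duality (2.15) used above is easy to check by simulation. A sketch (in Python; the Euler-Maruyama step size and the sample sizes are ours):

```python
import numpy as np
rng = np.random.default_rng(3)

def wf_moment(x, n, t, M=100_000, dt=1e-3):
    """Left side of (2.15): E[Y_t^n | Y_0 = x], via Euler-Maruyama for
    dY = sqrt(Y(1-Y)) dB on M paths, clipped to [0, 1]."""
    Y = np.full(M, x)
    for _ in range(int(t / dt)):
        Y += np.sqrt(np.clip(Y * (1 - Y), 0.0, None)) * rng.normal(0.0, np.sqrt(dt), M)
        np.clip(Y, 0.0, 1.0, out=Y)
    return (Y ** n).mean()

def kingman_moment(x, n, t, M=100_000):
    """Right side of (2.15): E[x^{K_t} | K_0 = n]; K jumps from m to m - 1
    at rate binom(m, 2)."""
    total = 0.0
    for _ in range(M):
        m, s = n, 0.0
        while m > 1:
            s += rng.exponential(2.0 / (m * (m - 1)))
            if s > t:
                break
            m -= 1
        total += x ** m
    return total / M

print(wf_moment(0.3, 4, 0.5), kingman_moment(0.3, 4, 0.5))  # close to each other
```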
## 3 A topological interlude
As explained in the introduction, in the case of the Wright-Fisher model with
long drastic bottlenecks and long soft bottlenecks, we also want to prove the
convergence of the frequency process to a diffusion with jumps, which is
moment dual to the block-counting process of a bottleneck coalescent. In these
cases, the frequency process is not Markovian (the transition rates depend on
whether the population is undergoing a bottleneck or not) and can have
important fluctuations during the bottlenecks (see Figure 4 for an
illustration), which prevent the convergence in the Skorokhod $J_{1}$ (and $M_{1}$) topologies. However, the points that prevent this convergence are
exactly the accumulation points of the times at which a bottleneck occurred in
the discrete models, that have Lebesgue measure 0 when time is rescaled by
$N^{\alpha}$ and $N$ tends to infinity.
For $T>0$, we denote by $D[0,T]$ the space of real-valued càdlàg functions
defined on $[0,T]$. We will introduce a new metric for convergence in measure
on $D[0,T]$ (see Meyer and Zheng [21]). A sequence of càdlàg functions
converges in measure to another càdlàg function if for any $\epsilon>0$, the
Lebesgue measure of the set of points for which the distance to the limit is
higher than $\epsilon$ converges to 0. We say that a sequence of stochastic
processes converges in measure to another process on $D[0,T]$ if it converges
weakly, as probability measures on $D[0,T]$, in the topology induced by
convergence in measure. Lemma 1 in [21] states that convergence in measure is equivalent to convergence in the pseudopath space, i.e. that $x_{n}\to x$ in
measure on $D[0,T]$ if and only if for any continuous bounded function
$g:[0,T]\times\mathbb{R}\mapsto\mathbb{R}$,
$\displaystyle\lim_{n\rightarrow\infty}\int_{0}^{T}g(s,x_{n}(s))ds$
$\displaystyle=$ $\displaystyle\int_{0}^{T}g(s,x(s))ds,$
and $x_{n}(T)\to x(T)$. This is also known as convergence in the Meyer-Zheng
topology.
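To see why this is the natural notion here, consider a ‘spark’: a sequence with a unit spike on a shrinking interval. It converges to $0$ in measure (all test integrals vanish) while staying at uniform distance $1$ from $0$, so it cannot converge in $J_{1}$. A minimal numerical illustration (the grid size and test function are ours):

```python
import numpy as np

def mz_integral(x, g, T=1.0, grid=100_000):
    """Riemann-sum approximation of the Meyer-Zheng test integral
    of g(s, x(s)) over [0, T]."""
    s = np.linspace(0.0, T, grid, endpoint=False)
    return np.sum(g(s, x(s))) * (T / grid)

def spark(n):
    """x_n: equal to 0 except for a unit spike on [1/2, 1/2 + 1/n)."""
    return lambda s: ((s >= 0.5) & (s < 0.5 + 1.0 / n)).astype(float)

g = lambda s, v: np.cos(s) * v  # a bounded continuous test function
for n in (10, 100, 1000):
    print(n, mz_integral(spark(n), g))  # tends to 0, the integral for x = 0
# yet sup_s |x_n(s)| = 1 for every n, so x_n does not converge to 0 in J1
```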
Our metric has the advantage of being a slight modification of the Skorokhod
$J_{1}$ distance [32] and thus some techniques to manipulate it are well
known. Our method can be generalized to the study of other processes that can
have strong fluctuations (sparks) only in a set of timepoints that has
Lebesgue measure 0 in the limit. Let us denote by $||\cdot||$ the uniform norm
on $D[0,T]$.
###### Definition 3.1.
Fix $T>0$. Let $x_{1},x_{2}\in D[0,T]$ and $\mathcal{I}$ the set of finite
unions of half-open intervals on $[0,T]$, i.e.
$\mathcal{I}=\\{I=\cup_{i=1}^{n}[a_{i},b_{i}):n\in\mathbb{N},0\leq
a_{i}<b_{i}\leq T\\}.$
We say that a function is a Skorokhod reparameterisation of time (SRT) if it is strictly increasing and continuous with a continuous inverse, and we define $\mathcal{F}=\\{f:[0,T]\mapsto[0,T]:f\text{ is an SRT}\\}.$
Let $\lambda$ be the Lebesgue measure on $[0,T]$ and $Id$ the identity map in
$[0,T]$. For any $x_{1},x_{2}\in D[0,T]$, define
$d_{\lambda}(x_{1},x_{2})=\inf_{A\in\mathcal{I},f\in\mathcal{F}}\\{||\mathds{1}_{A}(x_{1}-x_{2}\circ
f)||\vee||Id-f||\vee\lambda([0,T]\setminus A)\vee|x_{1}(T)-x_{2}(T)|\\}.$
The topology induced by $d_{\lambda}$ is separable. Indeed, the set of
$\mathbb{Q}$-valued staircase functions with discontinuities in $\mathbb{Q}$
is a countable and dense set. However, the resulting metric space is not
complete. As a counterexample, one can think of the sequence
$f_{n}=\sum_{i=1}^{n}(-1)^{i}\mathds{1}_{[1/(i+1),1/i)}$ and its behavior at
0.
###### Proposition 3.2.
The mapping $d_{\lambda}$ is a metric in the Skorokhod space $D[0,T]$ for
every $T>0$ and if $\lim_{n\rightarrow\infty}d_{\lambda}(x_{n},x)=0$, then
$x_{n}\rightarrow x$ in measure on the Skorokhod space.
###### Proof.
It is clear that, if $x_{1}=x_{2}$, then $d_{\lambda}(x_{1},x_{2})=0$. Now let
us prove the converse. Assume that $x_{1}\neq x_{2}$. If
$|x_{1}(T)-x_{2}(T)|>0$ then $d_{\lambda}(x_{1},x_{2})>0$. Now, if
$|x_{1}(T)-x_{2}(T)|=0$, then there exists a point $\tau\in[0,T)$ such that
$|x_{1}(\tau)-x_{2}(\tau)|>0$. Let us call $y=|x_{1}-x_{2}|$ and observe that
$y\in D[0,T]$. Using the right continuity of $y$ at the point $\tau$, we know
that there exist $\epsilon,\delta>0$ such that $\forall
t\in[\tau,\tau+\delta),\ y(t)>\epsilon$. Let $A\in\mathcal{I}$. If
$[\tau,\tau+\delta)\subset A^{c}$ then $\lambda(A^{c})\geq\delta$. Otherwise,
$||\mathds{1}_{A}(x_{1}-x_{2})||>\epsilon$. Now, take $f\in\mathcal{F}$ such
that $||Id-f||<\delta/2$, then there exists $\bar{\tau}\in[\tau,\tau+\delta)$
such that for all $t\in[\bar{\tau},\bar{\tau}+\delta/2)$,
$\bar{y}(t)=|x_{1}(t)-x_{2}(f(t))|>\epsilon.$ Repeating the argument, we
conclude that $d_{\lambda}(x_{1},x_{2})>\min\\{\delta/2,\epsilon\\}$, and thus
$x_{1}=x_{2}$ if and only if $d_{\lambda}(x_{1},x_{2})=0$.
Symmetry follows by the same arguments used by Skorokhod in the case of
$J_{1}$ [32]. In fact $f\in\mathcal{F}$ implies that $f(0)=0$ and $f(T)=T$ and
that $f^{-1}\in\mathcal{F}$. Using this observation, it is easy to see that
$d_{\lambda}(x_{1},x_{2})=d_{\lambda}(x_{2},x_{1})$.
To show that the triangle inequality holds, let $x_{1},x_{2},x_{3}\in D[0,T]$
and observe that for any $A,B\in\mathcal{I}$ and $f_{2},f_{3}\in\mathcal{F}$
it holds that $A\cup B\in\mathcal{I}$ and $f_{1}:=f_{3}\circ
f_{2}\in\mathcal{F}.$ The triangle inequality follows from four observations
(most of them due to Skorokhod [32]).
First,
$\displaystyle||\mathds{1}_{A\cup B}(x_{1}-x_{2}\circ f_{1})||$
$\displaystyle\leq$ $\displaystyle||\mathds{1}_{A\cup B}(x_{1}-x_{3}\circ
f_{2})||+||\mathds{1}_{A\cup B}(x_{3}\circ f_{2}-x_{2}\circ f_{3}\circ
f_{2})||$ $\displaystyle\leq$ $\displaystyle||\mathds{1}_{A}(x_{1}-x_{3}\circ
f_{2})||+||\mathds{1}_{B}(x_{3}-x_{2}\circ f_{3})||.$
Second,
$\displaystyle||Id-f_{1}||$ $\displaystyle\leq$ $\displaystyle||Id-
f_{2}||+||Id-f_{3}||.$
Third,
$\displaystyle\lambda((A\cup B)^{c})$ $\displaystyle\leq$
$\displaystyle\lambda(A^{c})+\lambda(B^{c}).$
And finally,
$\displaystyle|x_{1}(T)-x_{2}(T)|\leq|x_{1}(T)-x_{3}(T)|+|x_{3}(T)-x_{2}(T)|.$
Putting all these observations together, we conclude that the triangle
inequality holds, and thus $d_{\lambda}$ is a metric.
Finally, we need to prove that convergence according to $d_{\lambda}$ implies
convergence in the Meyer-Zheng topology. Let $x,x_{1},x_{2},...\in D[0,T]$ and
let $g:[0,T]\times\mathbb{R}\mapsto\mathbb{R}$ be a continuous bounded
function. Assume that $\lim_{n\rightarrow\infty}d_{\lambda}(x_{n},x)=0.$ For
any $n\in\mathbb{N}$, there exists $\epsilon_{n}\to 0$, $A_{n}\in\mathcal{I}$
and $f_{n}\in\mathcal{F}$ such that
$||\mathds{1}_{A_{n}}(x_{n}-x\circ f_{n})||\vee||Id-
f_{n}||\vee\lambda(A_{n}^{c})\vee|x(T)-x_{n}(T)|<d_{\lambda}(x_{n},x)+\epsilon_{n},$
which implies that
$\lim_{n\to\infty}||\mathds{1}_{A_{n}}(x_{n}-x\circ f_{n})||\vee||Id-
f_{n}||\vee\lambda(A_{n}^{c})\vee|x(T)-x_{n}(T)|=0.$
Since $g$ is continuous, bounded and $||Id-f_{n}||\to 0$, we have
$\displaystyle\lim_{n\to\infty}\int_{0}^{T}g(s,x_{n}(s))ds$ $\displaystyle=$
$\displaystyle\lim_{n\rightarrow\infty}\int_{A_{n}}g(s,x_{n}(s))ds+\lim_{n\to\infty}\int_{A_{n}^{c}}g(s,x_{n}(s))ds.$
First, as $d_{\lambda}(x_{n},x)\to 0$ and $\epsilon_{n}\to 0$, we have
$\lambda(A_{n}^{c})\to 0$ and $g$ is bounded, so
$\displaystyle\lim_{n\to\infty}\int_{A_{n}^{c}}g(s,x_{n}(s))ds=0.$
Second,
$\displaystyle\int_{A_{n}}g(s,x_{n}(s))ds=\int_{A_{n}}g(f_{n}(s),x(f_{n}(s)))ds+\int_{A_{n}}\left(g(s,x_{n}(s))-g(f_{n}(s),x(f_{n}(s)))\right)ds,$
where
$\displaystyle|\int_{A_{n}}\left(g(s,x_{n}(s))-g(f_{n}(s),x(f_{n}(s)))\right)ds|\leq\sup_{s\in
A_{n}}\\{|g(s,x_{n}(s))-g(f_{n}(s),x(f_{n}(s)))|\\}T,$
and as $g$ is bounded, continuous, $A_{n}$ is relatively compact and $||Id-
f_{n}||\to 0$,
$\displaystyle\lim_{n\to\infty}\sup_{s\in
A_{n}}\\{|g(s,x_{n}(s))-g(f_{n}(s),x(f_{n}(s)))|\\}=0.$
Finally, using the fact that $f_{n}^{-1}\in\mathcal{F}$, and
$\lim_{n\to\infty}||Id-f_{n}||=0$, which implies that
$\lim_{n\to\infty}||Id-f^{-1}_{n}||=0$,
$\displaystyle\lim_{n\to\infty}\int_{A_{n}}g(f_{n}(s),x(f_{n}(s)))ds$
$\displaystyle=$
$\displaystyle\lim_{n\to\infty}\int_{f_{n}(A_{n})}g(t,x(t))(f_{n}^{-1})^{\prime}(t)dt$
$\displaystyle=$ $\displaystyle\int_{0}^{T}g(t,x(t))dt.$
Combining these facts, we conclude that $x_{n}\rightarrow x$ in the pseudopath
space, which completes the proof. ∎
###### Remark 3.3.
Convergence in $J_{1}$ implies convergence according to $d_{\lambda}$. To see
this, take $A=[0,T]$.
Convergence in the sense of $d_{\lambda}$ is close to convergence in measure. However, it is easy to see that the two notions are not equivalent. The reason is that the difference of the functions at the point $T$ is crucial in order to have convergence according to $d_{\lambda}$, and $\\{T\\}$ is clearly a set of Lebesgue measure zero. It seems feasible to modify $d_{\lambda}$ in order to obtain the equivalence, but we decided to keep $d_{\lambda}$ as it is, because it is a minimalistic modification of the Skorokhod $J_{1}$ distance. Simply removing the term $|x_{1}(T)-x_{2}(T)|$ would put paths that differ only at the last point at distance zero, and then $d_{\lambda}$ would only be a pseudometric.
## 4 Coalescents with drastic bottlenecks
### 4.1 The drastic bottleneck coalescent
Now we consider bottlenecks that are drastic but can last for several
generations. As we shall see, when the bottlenecks last for more than one
generation the genealogy is not described by a symmetric coalescent anymore.
###### Definition 4.1 (Wright-Fisher model with long drastic bottlenecks).
Fix $\alpha\in(0,1]$, $\eta>0$, $N\in\mathbb{N}$ and $F^{0}$ and $\mathrm{L}$
two probability measures on $\mathbb{N}$. Let $\\{F_{i}\\}_{i\in\mathbb{N}}$
be a sequence of i.i.d. random variables of law $F^{0}$. Let
$\\{l_{i,N}\\}_{i\in\mathbb{N}}$ and $\\{s_{i,N}\\}_{i\in\mathbb{N}}$ be two
sequences of independent positive random variables such that for all $i\geq
1,\ l_{i,N}$ converges in distribution to $\mathrm{L}$ and $s_{i,N}$ follows a
geometric distribution of parameter $\eta/N^{\alpha}$. In the Wright-Fisher
model with long drastic bottlenecks the sequence of population sizes
$\\{R^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ is given by
$R^{N}_{g}\ =\ \left\\{\begin{array}[]{ll}\min(F_{m},N)&\textrm{ if }\sum_{i=1}^{m-1}(s_{i,N}+l_{i,N})+s_{m,N}<g\leq\sum_{i=1}^{m}(s_{i,N}+l_{i,N})\textrm{ for some }m,\\\ N&\textrm{ otherwise.}\end{array}\right.$
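For intuition, this population-size sequence can be simulated just like the short-bottleneck one of Section 2. A sketch (in Python; the sampler names and example distributions are ours):

```python
import numpy as np

def long_bottleneck_sizes(N, alpha, eta, sample_F0, sample_L, generations, rng):
    """Definition 4.1: geometric(eta / N^alpha) stretches of size N, each
    followed by a bottleneck of size min(F, N) lasting L generations."""
    R = []
    while len(R) < generations:
        R.extend([N] * rng.geometric(eta / N**alpha))   # s_{i,N} normal generations
        R.extend([min(int(sample_F0(rng)), N)] * int(sample_L(rng)))  # l_{i,N} bottleneck
    return np.array(R[:generations])

rng = np.random.default_rng(4)
R = long_bottleneck_sizes(N=1000, alpha=1.0, eta=2.0,
                          sample_F0=lambda rng: rng.geometric(0.5),
                          sample_L=lambda rng: rng.integers(1, 4),
                          generations=20_000, rng=rng)
print(np.unique(R[R < 1000]))  # bottleneck sizes that occurred
```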
###### Remark 4.2.
As $\mathrm{L}$ does not depend on $N$, $\mathrm{L}/N^{\alpha}\to 0$ in
distribution, which ensures that even if the bottlenecks last for several
generations, their duration is negligible when time is rescaled by
$N^{\alpha}$. Also, in that time scale, when $N\to\infty$, the distribution of
the time between two bottlenecks converges to an exponential distribution of
parameter $\eta$.
In the limit when $N\to\infty$, the genealogy of this model can be described
by the drastic bottleneck coalescent that we now define. As for the symmetric
coalescent, the idea is that, in the genealogy there is a ‘Kingman part’
corresponding to what happens outside the bottlenecks (where the population
size goes to infinity) and a ‘simultaneous multiple collisions’ part
corresponding to what happens during the bottlenecks.
To define this type of event, we start by fixing $k,g\in\mathbb{N}$ and
considering the (partition-valued) ancestral process of a classical Wright-
Fisher model with constant population size $k$, running for $g$ generations.
The blocks obtained are given labels in $[k]$. The block labelled $i$ contains
all the descendants of individual $(i,0)$. We then define the following random
variables (a simulation sketch follows the list):
* •
$K^{k,g,b}$ is the number of ancestors of a sample of size $b\leq k$.
* •
$(A^{k,g}_{1},\dots,A^{k,g}_{k})$ are the family sizes: $A^{k,g}_{i}$ is the
size of the block labelled $i$ and $A^{k,g}_{i}=0$ if there is no block
labelled $i$. In other words, $A^{k,g}_{i}$ is the number of descendants of
individual $i$ after $g$ generations. We denote by $\mathrm{A}^{k,g}$ the
distribution, in $E^{k}=\\{(i_{1},\dots,i_{k}):\sum i_{j}=k\\}$, of
$(A^{k,g}_{1},\dots,A^{k,g}_{k})$.
* •
Let $V^{k,n}_{i}$ denote the number of balls allocated to box $i$ in the
paintbox construction of the symmetric coalescent. We define a biased version
of $\mathrm{A}^{k,g}$ as follows: for $n\in\mathbb{N}\cup\\{\infty\\}$ and
$i\in\\{1,\dots,k\\}$,
$\bar{A}^{k,g,n}_{i}\ =\ \sum_{j=1}^{k}V^{k,n}_{j}\mathds{1}_{\\{j\textrm{
belongs to the block labelled }i\\}}.$
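As promised above, a small simulation sketch of these two ingredients (in Python; the function names are ours):

```python
import numpy as np

def K_kgb(k, g, b, rng):
    """K^{k,g,b}: number of ancestors of a sample of b <= k individuals after
    g generations of a Wright-Fisher model of constant size k. Each generation,
    every lineage picks a uniform parent; distinct parents survive."""
    m = b
    for _ in range(g):
        m = np.unique(rng.integers(0, k, size=m)).size
    return m

def family_sizes(k, g, rng):
    """(A^{k,g}_1, ..., A^{k,g}_k): number of descendants of each of the k
    founders after g generations, simulated forwards in time."""
    founder = np.arange(k)           # generation 0: everyone is her own founder
    for _ in range(g):
        founder = founder[rng.integers(0, k, size=k)]  # children pick parents
    return np.bincount(founder, minlength=k)

rng = np.random.default_rng(5)
print(K_kgb(k=5, g=3, b=4, rng=rng), family_sizes(5, 4, rng))  # e.g. 2 [0 3 0 1 1]
```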
###### Definition 4.3 (The drastic bottleneck coalescent).
Fix $F^{0}$ and $\mathrm{L}$ two probability measures in $\mathbb{N}$ and
$\eta>0$. The drastic bottleneck coalescent is defined by the following
transition rates. For each $b\geq 2$ and $k_{1},\ldots,k_{r}$ such that
$\sum_{i=1}^{r}k_{i}=b$, each $[b,(k_{1},\dots,k_{r})]$-collision happens at
rate
$\displaystyle\lambda_{b,(k_{1},\dots,k_{r})}\ =$ $\displaystyle\ a\
\mathds{1}_{\\{r=b-1,k_{1}=2\\}}$ $\displaystyle+\ $
$\displaystyle\mathcal{N}(b,(k_{1},\dots,k_{r}))^{-1}\eta\sum\limits_{k\geq
r}F^{0}(k)\sum\limits_{g\geq 1}\mathrm{L}(g)\sum\limits_{i_{1}\neq\dots\neq
i_{r}}\mathbb{P}(\bar{A}^{k,g-1,b}_{i_{1}}=k_{1},\dots,\bar{A}^{k,g-1,b}_{i_{r}}=k_{r}).$
In words, there are $b$ lineages and a bottleneck of size $k$ and duration $g$
occurs. First each one of the $b$ lineages chooses a parent amongst the $k$
individuals of the last generation of the bottleneck. There remain $W^{k,b}$
lineages (individual $j$ of the bottleneck carries $V^{k,b}_{j}$ of the original lineages, for $j=1,\dots,k$). Then the bottleneck still lasts for $g-1$ more generations, in
which the lineages merge as in a Wright-Fisher model with population size $k$
(see Figure 3 for an illustration).
Figure 3: A $[6,(4,2)]$-collision illustrated. Before the bottleneck, only
‘Kingman type’ mergers occur. Each of the 6 remaining lineages is allocated to
one of the 5 individuals of the last generation of the bottleneck, and we have
$V^{5,6}=(3,0,1,0,2)$. Then, the system evolves for 3 more generations as a
Wright-Fisher model. The family sizes are $A^{5,4}_{1}=3,\ A^{5,4}_{2}=0,\
A^{5,4}_{3}=1,\ A^{5,4}_{4}=1$ and $A^{5,4}_{5}=0$. The biased family sizes
are $\bar{A}^{5,4,6}_{1}=4,\ \bar{A}^{5,4,6}_{4}=2$ and
$\bar{A}^{5,4,6}_{i}=0$ otherwise.
As a consequence, we have the following result.
###### Proposition 4.4.
The block-counting process of the drastic bottleneck coalescent has the
following transition rates
$q_{ij}\ =\ a\binom{i}{2}\mathds{1}_{\\{j=i-1\\}}+\eta\sum_{k\geq
1}F^{0}(k)\sum_{g\geq 1}\mathrm{L}(g)\mathbb{P}(K^{k,g-1,W^{k,i}}=j).$
As in the previous section, we can define $\mathcal{\bar{G}}$, the
infinitesimal generator of the block-counting process
$\\{\bar{N}_{t}\\}_{t\geq 0}$. For any function $h:\mathbb{N}\to\mathbb{R}$,
we have
$\mathcal{\bar{G}}h(n):=a\binom{n}{2}\left(h(n-1)-h(n)\right)\ +\
\eta\sum_{k\geq 1}F^{0}(k)\sum_{g\geq
1}\mathrm{L}(g)\sum_{j=1}^{n-1}\mathbb{P}(K^{k,g-1,W^{k,n}}=j)\left(h(j)-h(n)\right).$
(4.16)
###### Remark 4.5.
As in the previous section, we can describe the drastic bottleneck coalescent
using a paintbox construction. When the bottleneck lasts for $g$ generations,
it would correspond to iterating $g$ times the paintbox construction of the
symmetric coalescent.
### 4.2 Duality with the Wright-Fisher model with long drastic bottlenecks
Now, we consider the Wright-Fisher model with long drastic bottlenecks from
Definition 4.1, with two types of individuals. We denote by
$\\{\bar{X}^{N}_{g}\\}_{g\in\mathbb{N}}$ the frequency process associated with
that model. In the following, ‘$\overset{d_{\lambda}}{\Longrightarrow}$’
denotes weak convergence in the topology induced by $d_{\lambda}$.
###### Theorem 4.6.
Fix $\alpha\in(0,1]$ and $\eta>0$. Let $F^{0}$ and $\mathrm{L}$ be two
probability measures in $\mathbb{N}$. Consider the sequence of processes
$\\{\bar{X}^{N}\\}_{N\in\mathbb{N}}$, such that
$\bar{X}^{N}:=\\{\bar{X}^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ is the frequency
process associated with the Wright-Fisher model with long drastic bottlenecks
parametrized by $\alpha,\ \eta,\ N$, $F^{0}$ and $\mathrm{L}$ (see Definition
4.1). Then, for any $T>0$, in $D[0,T]$,
$\\{\bar{X}^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq T}\
\overset{d_{\lambda}}{\underset{N\to\infty}{\Longrightarrow}}\
\\{\bar{X}_{t}\\}_{0\leq t\leq T},$
where $\\{\bar{X}_{t}\\}_{0\leq t\leq T}$ is the unique strong solution of the
SDE
$d\bar{X}_{t}\ =\
\mathds{1}_{\\{\alpha=1\\}}\sqrt{\bar{X}_{t}(1-\bar{X}_{t})}dB_{t}\ +\
\int_{U_{0}}\sum_{i=1}^{k}\frac{a_{i}}{k}\left(\mathds{1}_{\\{u_{i}\leq\bar{X}_{t^{-}}\\}}-\bar{X}_{t^{-}}\right)\bar{N}(dt,dk,dg,da,du),$
(4.17)
where $\\{B_{t}\\}_{t\geq 0}$ is a standard Brownian motion and $\bar{N}$ is a
compensated Poisson measure on $(0,\infty)\times U_{0}$ with
$U_{0}=\mathbb{N}\times\mathbb{N}\times E^{k}\times[0,1]^{\mathbb{N}}$.
$\bar{N}$ has intensity $\eta ds\otimes
F^{0}(dk)\otimes\mathrm{L}(dg)\otimes\mathrm{A}^{k,g-1}(da)\otimes du$, where
$du$ is the Lebesgue measure on $[0,1]^{\mathbb{N}}$ and $\mathrm{A}^{k,g-1}$
is the distribution in $E^{k}$ of the family sizes in a classical Wright-
Fisher model with population size $k$, after $g-1$ generations (as defined in
Section 4.1), i.e. $\mathrm{A}^{k,g-1}$ is the distribution of a vector whose
$i$-th coordinate corresponds to the number of descendants of individual $i$
after $g-1$ generations in a classical Wright-Fisher model with population
size $k$.
###### Remark 4.7.
This result implies that $\\{\bar{X}^{N}_{\lfloor
N^{\alpha}t\rfloor}\\}_{0\leq t\leq T}$ converges to $\\{\bar{X}_{t}\\}_{0\leq
t\leq T}$ in measure on the Skorokhod space. As already mentioned in the
introduction (and illustrated in Figure 4), it is impossible to have
convergence in the $J_{1}$ or $M_{1}$ Skorokhod topology.
Figure 4: (a) represents a realization of the Wright-Fisher model with long
drastic bottlenecks in which the usual population size is $N=8$. We observe
that $F_{1}=2$, $s_{1,8}=2$ and $l_{1,8}=2$, meaning that in generation $2$ a
bottleneck that reduces the population to $2$ individuals starts and lasts for
$2$ generations. Similarly, $F_{2}=4$, $s_{2,8}=3$, $l_{2,8}=5$ and
$s_{3,8}\geq 4$. In (c), the frequency process (associated with (a)) is
colored according to the population size: red outside the bottlenecks and blue
during the bottlenecks. In (b) we observe the result of collapsing the
bottlenecks in the Wright-Fisher model with long drastic bottlenecks
represented in (a). Note that a lot of the fluctuations of the frequency process are lost. In (d), the red line is the frequency process and the blue
excursions, that were present in (c), are lost due to the collapsing of the
bottlenecks. These blue excursions (sparks) make it impossible to have
convergence in $J_{1}$ or $M_{1}$ to the diffusion with jumps that is the
solution of (4.17).
To understand why the jump term of (4.17) takes this form, we need to think
about what happens during a bottleneck of size $k$ and length $g$. When the
bottleneck begins, only $k$ individuals of the infinite population survive.
The term ‘$\mathds{1}_{\\{u_{i}\leq\bar{X}_{t^{-}}\\}}$’ comes from the fact
that individual $i$ is of type 1 with probability $\bar{X}_{t^{-}}$ and of
type $0$ with probability $1-\bar{X}_{t^{-}}$. This generation corresponds to
time $0$ for the bottleneck. Then the bottleneck lasts for another $g-1$
generations and individual $i$ has $a_{i}$ descendants which are all of the
same type as her. See Figure 5 for an illustration of this jump term.
Figure 5: The jump part of (4.17) illustrated. Type 1 individuals are
represented in pink and type 0 individuals in green.
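In code, the jump map of (4.17) can be sketched by combining the Bernoulli typing of the $k$ survivors with the forward family-size simulation of Section 4.1 (the function name is ours):

```python
import numpy as np

def long_bottleneck_jump(x, k, g, rng):
    """Post-bottleneck frequency: the k survivors are typed i.i.d.
    Bernoulli(x); after g - 1 further Wright-Fisher generations within
    the bottleneck, founder i has a_i descendants, all of her type."""
    type1 = rng.random(k) < x                 # 1{u_i <= x} for each survivor
    founder = np.arange(k)
    for _ in range(g - 1):                    # family sizes (a_1, ..., a_k)
        founder = founder[rng.integers(0, k, size=k)]
    a = np.bincount(founder, minlength=k)
    return np.sum(a * type1) / k              # sum_i (a_i / k) 1{u_i <= x}

rng = np.random.default_rng(6)
print(long_bottleneck_jump(0.5, k=5, g=4, rng=rng))
```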
Again, before proving Theorem 4.6 we shall make sure that a solution to
Equation (4.17) exists.
###### Lemma 4.8.
For any probability measures $F^{0}$ and $\mathrm{L}$ in $\mathbb{N}$ and any
$\alpha\in(0,1],\ \eta>0$, there exists a unique strong solution to the SDE
(4.17).
###### Remark 4.9.
To prove this result, one could rewrite the stochastic differential equation
(4.17) in terms of a measure $\Xi$ on $\Delta$ that would depend on $F^{0}$,
$\mathrm{L}$ and $\mathrm{A}^{k,g-1}$ (in fact for $k\in\mathbb{N}$ and
$(a_{1},\dots,a_{k})$ drawn from the distribution $\mathrm{A}^{k,g-1}$, the
re-ordering of $(a_{1}/k,\dots,a_{k}/k)$ is an element of $\Delta^{*}$). Then
Lemma 4.8 would follow from Lemma 3.6 in [13] (and Theorem 4.11 would follow
from Proposition 3.8 in the same reference). However, we have decided not to
modify (4.17) and we give a proof that is more instructive, as it is connected
to the parameters of the Wright-Fisher model with long drastic bottlenecks and
sheds light on the connection between this Wright-Fisher model and the drastic
bottleneck coalescent.
###### Proof.
We use Theorem 5.1 in [20]. In particular, we need to verify conditions (3.a),
(3.b) and (5.a) of that paper. Condition (3.a) is trivial, as in our case the
drift coefficient is equal to 0. To prove condition (3.b), we have to prove
that there exists a constant $K$ such that for every $x,y\in[0,1]$,
$\displaystyle\int_{\mathbb{N}}\int_{\mathbb{N}}F^{0}(k)\mathrm{L}(g)\mathbb{E}\left[\left(\sum_{i=1}^{k}\frac{a_{i}}{k}\left(\mathds{1}_{\\{u_{i}\leq
x\\}}-x-\mathds{1}_{\\{u_{i}\leq y\\}}+y\right)\right)^{2}\right]$
$\displaystyle+\mid\sqrt{x(1-x)}-\sqrt{y(1-y)}\mid\ \leq\ K\mid x-y\mid.$
(4.18)
First, we use the fact that
$\displaystyle\mid\sqrt{x(1-x)}-\sqrt{y(1-y)}\mid\leq 4\mid x-y\mid,$
see for example claim (26) in [13]. Second, without loss of generality we
assume that $x>y$ and we have
$\displaystyle\int_{\mathbb{N}}\int_{\mathbb{N}}F^{0}(k)\mathrm{L}(g)\mathbb{E}\left[\left(\sum_{i=1}^{k}\frac{a_{i}}{k}\left(\mathds{1}_{\\{u_{i}\leq
x\\}}-x-\mathds{1}_{\\{u_{i}\leq y\\}}+y\right)\right)^{2}\right]$
$\displaystyle=\int_{\mathbb{N}}\int_{\mathbb{N}}F^{0}(k)\mathrm{L}(g)\mathbb{E}\left[\sum_{i=1}^{k}\frac{a_{i}^{2}}{k^{2}}\left((B_{i}^{x}-B_{i}^{y})-(x-y)\right)^{2}\right],$
where the $B_{i}^{x}$’s and the $B_{i}^{y}$’s are (dependent) Bernoulli random
variables of parameter $x$ and $y$ respectively. Using the fact that
$(B_{i}^{x}-B_{i}^{y})$ is Bernoulli of parameter $x-y$ we have:
$\displaystyle\int_{\mathbb{N}}\int_{\mathbb{N}}F^{0}(k)\mathrm{L}(g)\mathbb{E}\left[\sum_{i=1}^{k}\frac{a_{i}^{2}}{k^{2}}\left((B_{i}^{x}-B_{i}^{y})-(x-y)\right)^{2}\right]$
$\displaystyle\leq(x-y)(1-(x-y))\
\int_{\mathbb{N}}\int_{\mathbb{N}}F^{0}(k)\mathrm{L}(g)\mathbb{E}\left(\sum_{i=1}^{k}\frac{a_{i}^{2}}{k^{2}}\right)$
$\displaystyle\leq(x-y),$
where we used the facts that $\frac{a_{i}}{k}\leq 1$,
$\sum_{i=1}^{k}{a_{i}}=k$ and $F^{0}$ and $\mathrm{L}$ are probability
measures. This proves claim (4.18). Finally, condition (5.a) in [20] is
verified because, using similar arguments as before,
$\displaystyle\int_{\mathbb{N}}\int_{\mathbb{N}}F^{0}(k)\mathrm{L}(g)\mathbb{E}\left[\left(\sum_{i=1}^{k}\frac{a_{i}}{k}\left(\mathds{1}_{\\{u_{i}\leq
x\\}}-x\right)\right)^{2}\right]+x(1-x)\ \leq\ 2.$ (4.19)
This implies that we can apply Theorem 5.1 in [20] and conclude the existence
and uniqueness of a strong solution of (4.17). ∎
###### Remark 4.10.
The fact that $F^{0}$ and $\mathrm{L}$ are probability measures guarantees the
existence of a unique strong solution to (4.17), but a sufficient condition is
that
$\int_{\mathbb{N}}\int_{\mathbb{N}}F^{0}(k)\mathrm{L}(g)\mathbb{E}\left(\sum_{i=1}^{k}\frac{a_{i}^{2}}{k^{2}}\right)\
<\ \infty,$
where $(a_{1},\dots,a_{k})$ is distributed as $\mathrm{A}^{k,g-1}$.
We are now ready to prove Theorem 4.6. The strategy of the proof is the
following. We want to prove the convergence of the sequence of processes
$\\{\bar{X}^{N}\\}_{N\in\mathbb{N}}$, which are not Markovian, to a diffusion
with jumps. In Step 1 we will construct an auxiliary process $V^{N}$ that
corresponds to collapsing each bottleneck into one single time step into
$\bar{X}^{N}$ and that is Markovian. We will prove that
$d_{\lambda}(\\{\bar{X}^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq
T},\\{V^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq T})\rightarrow 0$ in
probability. In Step 2, we prove, using standard techniques for Markov
processes, that $\\{V^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq T}$
converges weakly to $\\{\bar{X}_{t}\\}_{0\leq t\leq T}$, as processes in
$D[0,T]$ endowed with Skorokhod’s $J_{1}$ topology. Combining the two steps,
the conclusion is straightforward.
This method can be generalized to any case in which convergence in the
Skorokhod topology is prevented by strong fluctuations that happen in a set of
times that has Lebesgue measure 0 in the limit.
###### Proof of Theorem 4.6.
Step 1. We say that a generation is in a bottleneck and write $g\in B^{N}$ if
for some $m$,
$\underline{t}_{m}:=\sum_{i=1}^{m-1}(s_{i,N}+l_{i,N})+s_{m,N}<g\leq\bar{t}_{m}:=\sum_{i=1}^{m}(s_{i,N}+l_{i,N})$.
Let $g_{i}$ be the $i$-th generation that is not in a bottleneck. Define
$V^{N}_{i}=\bar{X}^{N}_{g_{i}}.$
Note that while $\bar{X}^{N}$ is not a Markov process (as one needs to know if
there is a bottleneck or not to calculate the transition probabilities),
$V^{N}$ is a Markov process, and it is constructed from $\bar{X}^{N}$ simply
by collapsing the entire bottleneck into one step. Now, consider the random projection $\pi^{N}:\mathbb{R}_{+}\mapsto\mathbb{N}$, measurable with respect to $\\{R^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$, defined as $\pi^{N}(t)=\max\\{g\in\mathbb{N}:g<t,g\in(B^{N})^{c}\\}$ and we
define the process $\\{Z^{N}_{t}\\}_{t\geq 0}$ by
$Z^{N}_{t}=\bar{X}^{N}_{\pi^{N}(t)}.$
The difference between $V^{N}_{\lfloor t\rfloor}$ and $Z^{N}_{t}$ is that
$V^{N}_{\lfloor t\rfloor}$ can move every unit of time, while $Z^{N}_{t}$
stays still during the duration of a bottleneck (see Figure 6 for an
illustration).
Taking $f\in\mathcal{F}$ to be the piecewise linear function having slope 1 whenever
$\lfloor t\rfloor$ is not in a bottleneck and having slope
$({\bar{t}_{m}-\underline{t}_{m}})$ whenever $\lfloor t\rfloor$ is inside the
$m$-th bottleneck, we conclude that $||V^{N}_{\lfloor
N^{\alpha}t\rfloor}-Z^{N}_{f(\lfloor N^{\alpha}t\rfloor)}||=0$ (see Figure 6)
and thus
$d_{\lambda}(\\{V^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq
T},\\{Z^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq
T})\leq||Id-f||\leq\frac{\sum_{m=1}^{\infty}(\bar{t}_{m}-\underline{t}_{m})\mathds{1}_{\\{\bar{t}_{m}\leq
TN^{\alpha}\\}}}{N^{\alpha}}.$
Observing that the latter converges to 0 in probability when
$N\rightarrow\infty$, we conclude that
$d_{\lambda}(\\{V^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq
T},\\{Z^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq T})\rightarrow
0\textrm{ in probability}.$
Now, observe that taking
$A^{N}=\cup_{m=1}^{i_{T}^{N}}[\underline{t}_{m},\bar{t}_{m})$ where
$i_{T}^{N}=\sup\\{m:\underline{t}_{m}<TN^{\alpha}\\}$ we have
$||\mathds{1}_{(A^{N})^{c}}(Z^{N}_{\lfloor
N^{\alpha}t\rfloor}-\bar{X}^{N}_{\lfloor N^{\alpha}t\rfloor})||=0\textrm{ in
probability}.$
and thus
$d_{\lambda}(\\{\bar{X}^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq
T},\\{Z^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq T})\rightarrow
0\textrm{ in probability},$
which completes this step.
Figure 6: The curves represent realizations of $\bar{X}^{N}$, $V^{N}$, $Z^{N}$ and
$Z^{N}\circ{f}$ respectively. The first curve is colored according to the
population size: red outside the bottlenecks and blue during the bottlenecks.
Step 2. As in the proof of Theorem 2.3, the idea is to prove the convergence
of the generator of $\\{V^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{t\geq 0}$, to
the generator of $\\{\bar{X}_{t}\\}_{t\geq 0}$. Provided this claim is true,
we can use Theorems 19.25 and 19.28 of [17] to prove the weak convergence of
$\\{V^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq T}$ towards
$\\{\bar{X}_{t}\\}_{0\leq t\leq T}$.
From Lemma 4.8, $\\{\bar{X}_{t}\\}_{t\geq 0}$ exists and has generator
$\mathcal{\bar{A}}$. Its domain contains twice differentiable functions and
for a function $f\in C^{2}[0,1]$ and $x\in[0,1]$, we have
$\mathcal{\bar{A}}f(x)\ =\
\mathds{1}_{\\{\alpha=1\\}}\frac{1}{2}x(1-x)f^{\prime\prime}(x)\ +\
\eta\sum_{k\geq 1}F^{0}(k)\sum_{g\geq
1}\mathrm{L}(g)\mathcal{\bar{A}}^{k,g}f(x),$ (4.20)
with
$\mathcal{\bar{A}}^{k,g}f(x)=\sum_{i=0}^{k}\mathbb{P}(\bar{Y}^{k,g,x}=i/k)\left(f(i/k)-f(x)\right),$
where $\bar{Y}^{k,g,x}$ is a random variable such that
$\bar{Y}^{k,g,x}=\sum_{i=1}^{k}\frac{a_{i}}{k}\mathds{1}_{\\{u_{i}\leq x\\}}$,
where $(a_{1},\dots,a_{k})$ is distributed as $\mathrm{A}^{k,g-1}$ and
$(u_{1},\ldots,u_{k})$ is uniformly distributed in $[0,1]^{k}$. This generator
can be interpreted in the same way as the jump part of (4.17), see Figure 5.
The discrete generator $\mathcal{\bar{A}}^{N}$ of $\\{V^{N}_{\lfloor
N^{\alpha}t\rfloor}\\}_{t\geq 0}$ (defined as in (2.9)), applied to a function
$f\in C^{2}[0,1]$ in $x\in[0,1]$ can be written as
$\displaystyle\mathcal{\bar{A}}^{N}f(x)=\
N^{\alpha}(1-\frac{\eta}{N^{\alpha}})\mathbb{E}\left(f\left(\frac{\sum_{i=1}^{N}B^{x}_{i}}{N}\right)-f(x)\right)$
$\displaystyle+\eta\sum_{k\geq 1}F^{0}(k)\sum_{g\geq
1}\mathbb{P}(l_{1,N}=g)\sum_{i=0}^{k}\mathbb{P}\left(\bar{Y}^{\min(N,k),g,x}=\frac{i}{\min(N,k)}\right)\left(f(\frac{i}{\min(N,k)})-f(x)\right).$
The first term corresponds to the generator of a classical Wright-Fisher model
(outside the bottlenecks). As already mentioned, it is well-known that when
$\alpha=1$, this term converges when $N\to\infty$ to
$\frac{1}{2}x(1-x)f^{\prime\prime}(x)$, which is the generator of the Wright-
Fisher diffusion. When $\alpha<1$ this term becomes of order $N^{\alpha-1}$
and therefore converges to $0$. The second term corresponds to what happens
during a bottleneck (and again, it can be interpreted using Figure 5). Recall
that $\mathrm{L}/N^{\alpha}\to 0$ in distribution, i.e. in the new time scale
the bottlenecks are instantaneous. As $l_{1,N}$ converges in distribution to
$\mathrm{L}$, the second term in the generator converges when $N\to\infty$ to
the second term of $\mathcal{\bar{A}}$ (see (4.20)). Combining these two
results, we have $\mathcal{\bar{A}}^{N}f\to\mathcal{\bar{A}}f$ uniformly. This
implies that $\\{V^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq T}$
converges weakly in the Skorokhod $J_{1}$ topology to
$\\{\bar{X}_{t}\\}_{0\leq t\leq T}$. Since convergence in $J_{1}$ implies
convergence in $d_{\lambda}$, we have the desired result. ∎
We fix $\alpha\in(0,1],\ \eta>0$ and $F^{0}$ and $\mathrm{L}$ two probability
measures on $\mathbb{N}$. Let us consider $\\{\bar{N}_{t}\\}_{t\geq 0}$, the
block-counting process of the drastic bottleneck coalescent characterized by
$\alpha$, $\eta$, $F^{0}$ and $\mathrm{L}$. As in the previous section, we are
going to prove a duality relation between this block-counting process and
$\\{\bar{X}_{t}\\}_{t\geq 0}$, the unique strong solution of (4.17) (with the
same parameters).
###### Theorem 4.11.
For every $x\in[0,1],\ n\in\mathbb{N}$, we have
$\mathbb{E}(\bar{X}_{t}^{n}|\bar{X}_{0}=x)\ =\
\mathbb{E}(x^{\bar{N}_{t}}|\bar{N}_{0}=n).$
###### Proof.
We start by recalling a moment duality between the frequency process of a
classical Wright-Fisher model with population size $k$, started at $x$,
denoted by $\\{Y^{k,g,x}\\}_{g\in\mathbb{N}}$, and the number of blocks in the
associated ancestry process of a sample of size $n$,
$\\{K^{k,g,n}\\}_{g\in\mathbb{N}}$, which was established by Möhle
(Proposition 3.5 in [23]). We consider the function $h$ on
$[0,1]\times\mathbb{N}$ such that $h(x,n)=x^{n}$. We have
$\mathbb{E}(h(Y^{k,g,x},n))\ =\ \mathbb{E}(h(x,K^{k,g,n}))$
i.e.,
$\sum_{i=0}^{k}\mathbb{P}(Y^{k,g,x}=i/k)h(i/k,n)\ =\
\sum_{b=1}^{n}\mathbb{P}(K^{k,g,n}=b)h(x,b).$ (4.21)
We start by considering the case $\alpha<1$. Using the Markov property of the
Wright-Fisher model with bottlenecks (i.e. the fact that the $u_{i}$’s are
independent of the $a_{i}$’s) we can rewrite $\bar{Y}^{k,g,x}$ so that for
every $n\in\mathbb{N}$, the generator $\mathcal{\bar{A}}$ applied to $h$ (seen
as a function of $x$) is
$\displaystyle\mathcal{\bar{A}}h(x,n)$ $\displaystyle=\eta\sum_{k\geq
1}F^{0}(k)\sum_{g\geq
1}\mathrm{L}(g)\sum_{j=0}^{k}\sum_{y=0}^{k}\mathbb{P}(\sum_{i=1}^{k}\mathds{1}_{\\{u_{i}\leq
x\\}}=y)\mathbb{P}(Y^{k,g-1,y/k}=\frac{j}{k})(h(\frac{j}{k},n)-h(x,n)).$
Using the Markov property of $\\{Y^{k,g,y/k}\\}_{g\in\mathbb{N}}$ we have that
$\mathbb{P}(Y^{k,g-1,y/k}=\frac{j}{k})=\sum_{p=0}^{k}\mathbb{P}(Y^{k,g-2,y/k}=\frac{p}{k})\mathbb{P}(Y^{k,1,p/k}=\frac{j}{k}),\qquad j=0,\dots,k.$
Then, by definition of the classical Wright-Fisher model, we have that
$Y^{k,1,p/k}$ has the same distribution as
$\frac{1}{k}\sum_{i=1}^{k}B_{i}^{p/k}$, where the $B_{i}^{p/k}$’s are
Bernoulli variables of parameter $p/k$, so using exactly the same computations
as in (2.13) and (2.14), we have that
$\displaystyle\mathbb{E}\left((Y^{k,1,p/k})^{n}\right)\ =\
\mathbb{E}\left((p/k)^{W^{k,n}}\right),$
so,
$\displaystyle\sum_{j=0}^{k}\mathbb{P}(Y^{k,g-1,y/k}=\frac{j}{k})h(\frac{j}{k},n)$
$\displaystyle=\sum_{p=0}^{k}\mathbb{P}(Y^{k,g-2,y/k}=\frac{p}{k})\sum_{j=0}^{k}\mathbb{P}(Y^{k,1,p/k}=\frac{j}{k})h(\frac{j}{k},n)$
$\displaystyle=\sum_{p=0}^{k}\mathbb{P}(Y^{k,g-2,y/k}=\frac{p}{k})\sum_{m=1}^{n}\mathbb{P}(W^{k,n}=m)h(\frac{p}{k},m)$
$\displaystyle=\sum_{m=1}^{n}\mathbb{P}(W^{k,n}=m)\sum_{b=1}^{k}\mathbb{P}(K^{k,g-2,m}=b)h(\frac{y}{k},b)$
$\displaystyle=\sum_{b=1}^{k}\mathbb{P}(K^{k,g-2,W^{k,n}}=b)h(\frac{y}{k},b),$
where, from the second to third line we used the duality relation (4.21).
Replacing into the expression of the generator $\mathcal{\bar{A}}$, we have
that
$\displaystyle\mathcal{\bar{A}}h(x,n)$ $\displaystyle=\eta\sum_{k\geq
1}F^{0}(k)\sum_{g\geq
1}\mathrm{L}(g)\sum_{y=0}^{k}\mathbb{P}(\sum_{i=1}^{k}\mathds{1}_{\\{u_{i}\leq
x\\}}=y)\sum_{b=1}^{k}\mathbb{P}(K^{k,g-2,W^{k,n}}=b)(h(\frac{y}{k},b)-h(x,n)).$
Recall that, if $x\in\\{0,1/k,\dots,1\\}$, the distribution of
$\frac{1}{k}\sum_{i=1}^{k}\mathds{1}_{\\{u_{i}\leq x\\}}$ is exactly the
distribution of $Y^{k,1,x}$, so we can apply (4.21) to $Y^{k,1,x}$ and
$K^{k,1,b}$. However, we need to prove that a similar relation exists when
$x\in[0,1]\setminus\\{0,1/k,\dots,1\\}$. In fact, using again the same
computations as in (2.13) and (2.14), we have that
$\displaystyle\mathbb{E}\left((\frac{1}{k}\sum_{i=1}^{k}\mathds{1}_{\\{u_{i}\leq
x\\}})^{b}\right)\ =\ \mathbb{E}\left(x^{W^{k,b}}\right),$
and we let the reader convince herself that $W^{k,b}$ has the same
distribution as $K^{k,1,b}$. Finally,
$\displaystyle\mathcal{\bar{A}}h(x,n)$ $\displaystyle=\eta\sum_{k\geq
1}F^{0}(k)\sum_{g\geq
1}\mathrm{L}(g)\sum_{b=1}^{k}\mathbb{P}(K^{k,g-2,W^{k,n}}=b)\sum_{a=1}^{k}\mathbb{P}(K^{k,1,b}=a)(h(x,a)-h(x,n))$
$\displaystyle=\eta\sum_{k\geq 1}F^{0}(k)\sum_{g\geq
1}\sum_{a=1}^{k}\mathbb{P}(K^{k,g-1,W^{k,n}}=a)\left(h(x,a)-h(x,n)\right)\ =\
\mathcal{\bar{G}}h(x,n),$
where in the last line we used the Markov property of
$\\{K^{k,g,n}\\}_{g\in\mathbb{N}}$ and $\mathcal{\bar{G}}$ is the generator of
$\\{\bar{N}_{t}\\}_{t\geq 0}$ (defined in (4.16)) applied to $h$, seen as a
function of $n$.
If $\alpha=1$ we have
$\displaystyle\mathcal{\bar{A}}h(x,n)\ =\
\mathcal{A}_{1}h(x,n)+\mathcal{\bar{A}}_{2}h(x,n)$
where $\mathcal{\bar{A}}_{2}$ is the infinitesimal generator of
$\\{\bar{X}_{t}\\}_{t\geq 0}$ for the case $\alpha<1$ and $\mathcal{A}_{1}$ is
the generator of the Wright-Fisher diffusion. The proof follows by using the
moment duality between the Wright-Fisher diffusion and the Kingman coalescent,
i.e.
$\ \mathcal{A}_{1}h(x,n)=\ \binom{n}{2}\left(h(x,n-1)-h(x,n)\right).$
This, combined with the case $\alpha<1$ completes the proof. ∎
## 5 Coalescents with soft bottlenecks
### 5.1 Kingman coalescents with continuous time rescaling
In this section we consider bottlenecks that are soft and short. More
precisely, we consider a Wright-Fisher model with bottlenecks where the
sequence of population sizes $\\{R^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ is such
that
$R^{N}_{g}=\lfloor Nr_{g}\rfloor+1$
where $\\{r_{g}\\}_{g\in\mathbb{Z}_{+}}$ is a sequence of i.i.d. random
variables with a certain law on $[0,1)$ that we denote by $R$. We review here
some examples, in which the genealogy is a time rescaled Kingman coalescent.
The proofs are based on Möhle’s theorem [25], which can be restated as follows.
Let us denote by $\\{\Pi^{N}_{t}\\}_{t\geq 0}$ the ancestral process
describing the genealogy of a Wright-Fisher model with bottlenecks
parametrized by $N$ and $\\{R^{N}_{g}\\}_{g\in\mathbb{N}}$ and let
$\\{\Pi_{t}\\}_{t\geq 0}$ be the standard Kingman coalescent.
###### Proposition 5.1 (Möhle’s theorem for the Wright-Fisher model with
bottlenecks).
Consider
$C_{N}=\sum_{i=1}^{N}\frac{1}{i}\mathbb{P}(R^{N}_{1}=i)\textrm{ and
}D_{N}=\sum_{i=1}^{N}\frac{1}{i^{2}}\mathbb{P}(R^{N}_{1}=i).$
If
$C_{N}\underset{N\to\infty}{\longrightarrow}0\textrm{ and }\
D_{N}/C_{N}\underset{N\to\infty}{\longrightarrow}0,$
then $\\{\Pi^{N}_{\lfloor t/C_{N}\rfloor}\\}_{t\geq 0}$ converges to
$\\{\Pi_{t}\\}_{t\geq 0}$ in the sense of finite dimensional distributions.
Example 1. We start by considering the case where there exists $\epsilon>0$ such that $\mathbb{P}(R>\epsilon)=1$; then the genealogy of the model converges to a constant time rescaling of the Kingman coalescent. To prove it
we use Möhle’s theorem. In fact,
$C_{N}=\sum_{i=1}^{N}\frac{1}{i}\mathbb{P}(R^{N}_{1}=i)=\sum_{i=\lfloor\epsilon
N\rfloor}^{N}\frac{1}{i}\mathbb{P}(R\in[\frac{i-1}{N},\frac{i}{N}))\in[\frac{1}{N},\frac{1}{\epsilon
N}]$
and
$D_{N}=\sum_{i=1}^{N}\frac{1}{i^{2}}\mathbb{P}(R^{N}_{1}=i)=\sum_{i=\lfloor\epsilon
N\rfloor}^{N}\frac{1}{i^{2}}\mathbb{P}(R\in[\frac{i-1}{N},\frac{i}{N}))\in[\frac{1}{N^{2}},\frac{1}{(\epsilon
N)^{2}}].$
So $C_{N}\rightarrow 0$, $D_{N}/C_{N}\rightarrow 0$ and we can apply
Proposition 5.1.
Example 2. We assume that $R$ is a uniform random variable in $[0,1]$, i.e.
for $g\in\mathbb{Z}_{+}$ $R_{g}^{N}$ is uniformly chosen in $\\{1,\dots,N\\}$.
Informally, this means that going backward in time, one has to wait, on
average, for $N$ generations until the population size is reduced to only 1
individual. Any ancestral lineages present at that time need to coalesce into
one. However, the limiting genealogy is still a Kingman coalescent. In fact,
$C_{N}\ =\ \frac{1}{N}\sum_{i=1}^{N}\frac{1}{i}\sim\frac{\log N}{N}$
and
$D_{N}\ =\ \frac{1}{N}\sum_{i=1}^{N}\frac{1}{i^{2}}=O(1/N).$
So $C_{N}\rightarrow 0$, $D_{N}/C_{N}\rightarrow 0$ and Proposition 5.1
implies that $\\{\Pi^{N}_{\lfloor tN/\log N\rfloor}\\}_{t\geq
0}\rightarrow\\{\Pi_{t}\\}_{t\geq 0}$ in the sense of finite dimensional
distributions.
### 5.2 Subordinated Kingman coalescents
In this section we consider bottlenecks that are soft but that last for
several generations. As in Section 4.1, we are going to assume that, in the
limiting model, the times between two bottlenecks are exponentially
distributed. But this time we are going to assume that the bottlenecks are
soft i.e. $b_{i,N}\to 0$ but $Nb_{i,N}\to\infty$. We present an example
inspired by Birkner et al. [5], in which the limiting genealogy is a
subordinated Kingman coalescent.
###### Definition 5.2 (Wright-Fisher model with long soft bottlenecks).
Fix $\alpha\in(0,1]$, $\eta>0$, $N\in\mathbb{N}$ and $\mathrm{L}_{\gamma}$ a
probability measure on $\mathbb{R}_{+}$. Let $\\{b_{i,N}\\}_{i\in\mathbb{N}}$,
$\\{l_{i,N}\\}_{i\in\mathbb{N}}$ and $\\{s_{i,N}\\}_{i\in\mathbb{N}}$ be three
sequences of independent positive random variables. For any $i\in\mathbb{N}$,
assume that $b_{i,N}\to 0$ in distribution, that
$l_{i,N}/(Nb_{i,N})\to\gamma_{i}$ in distribution, where $\gamma_{i}$ is a
random variable with law $\mathrm{L}_{\gamma}$, and that $s_{i,N}$ follows a
geometric distribution of parameter $\eta/N^{\alpha}$. In the Wright-Fisher
model with long soft bottlenecks, the sequence of population sizes
$\\{R^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ is given by
$R^{N}_{g}\ =\ \left\\{\begin{array}[]{ll}b_{m,N}N\textrm{ if
}\sum_{i=1}^{m-1}(s_{i,N}+l_{i,N})+s_{m,N}<g\leq\sum_{i=1}^{m}(s_{i,N}+l_{i,N})\\\
N\textrm{ otherwise}\end{array}\right..$
As suggested by Birkner et al. [5], we show in Section 5.3 that when
$N\to\infty$ and time is rescaled by $N^{\alpha}$, the genealogy is described
by $\\{\Pi_{S_{t}}\\}_{t\geq 0}$ where $\\{S_{t}\\}_{t\geq 0}$ is a
subordinator (a compound Poisson process with Lévy measure $\eta\mathrm{L}_{\gamma}$
and drift 1). Proposition 6.3 in [5] states that this subordinated Kingman
coalescent is in fact a $\Xi$-coalescent with characterizing measure
$\Xi=a\delta_{(0,0,\dots)}+\Xi_{KS},$
where
$\Xi_{KS}(d\zeta)=(\zeta,\zeta)\int_{(0,\infty)}\sum_{j=1}^{\infty}\mathbb{P}(K_{\sigma}=j)\eta\mathrm{L}_{\gamma}(d\sigma)D_{j}(d\zeta),$
(5.22)
and $K_{t}$ is the number of lineages at time $t>0$ in the standard Kingman
coalescent starting with $K_{0}=\infty$ and $D_{j}$ is the law of the re-
ordering of a ($j$-dimensional) Dirichlet $(1,\dots,1)$ random vector
according to decreasing size. This result can be interpreted as follows: the
simultaneous multiple collisions part in the measure $\Xi$ corresponds to the
way the lineages coalesce during the bottlenecks. As the population size
during the bottleneck still goes to infinity, its evolution is still given by
a Kingman coalescent and it lasts for a time distributed according to
$\mathrm{L}_{\gamma}$. The frequencies of the remaining blocks have a
Dirichlet distribution (see for example Corollary 2.1 in [1]).
###### Remark 5.3.
In an informal sense, this model can be seen as a limiting scenario for the
model presented in Section 4.1 when the population size during the bottleneck
goes to infinity. In fact, in the model from Definition 5.2 we have
$\lambda_{b,(k_{1},\dots,k_{r})}=\mathds{1}_{\\{r=b-1,k_{1}=2\\}}+\mathcal{N}(n,(k_{1},\dots,k_{r}))^{-1}\eta\int_{(0,\infty)}\mathrm{L}_{\gamma}(d\sigma)\mathbb{P}(X^{\sigma,b}=(k_{1},\dots,k_{r}))$
where $X^{\sigma,b}$ is the vector of the sizes of the blocks of a Kingman
coalescent at time $\sigma$, starting with $b$ blocks (without taking into
account the ordering). The latter can be understood as a $k=\infty$ version of
Proposition 4.4.
### 5.3 Duality between the Wright-Fisher model with long soft bottlenecks
and the subordinated Kingman coalescent
Finally, we consider the Wright-Fisher model with long soft bottlenecks from
Definition 5.2, with two types of individuals. We denote by
$\\{\hat{X}^{N}_{g}\\}_{g\in\mathbb{N}}$ the frequency process associated with
that model.
###### Theorem 5.4.
Let $\mathrm{L}_{\gamma}$ be a probability measure on $\mathbb{R}_{+}$. Fix
$\alpha\in(0,1]$ and $\eta>0$. Consider the sequence of processes
$\\{\hat{X}^{N}\\}_{N\in\mathbb{N}}$, such that
$\hat{X}^{N}=\\{\hat{X}^{N}_{g}\\}_{g\in\mathbb{Z}_{+}}$ is the frequency
process associated with the Wright-Fisher model with long soft bottlenecks
parametrized by $\alpha$, $\eta$, $N$ and $\mathrm{L}_{\gamma}$ (see
Definition 5.2). Then, for all $T>0$, in $D[0,T]$,
$\\{\hat{X}^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq
T}\overset{d_{\lambda}}{\underset{N\to\infty}{\Longrightarrow}}\\{\hat{X}_{t}\\}_{0\leq
t\leq T},$
where $\\{\hat{X}_{t}\\}_{t\geq 0}$ is the strong solution of the SDE
$d\hat{X}_{t}\ =\
\mathds{1}_{\\{\alpha=1\\}}\sqrt{\hat{X}_{t}(1-\hat{X}_{t})}dB_{t}\ +\
\int_{[0,1]}\int_{[0,1]}\sum_{i\geq
1}{\zeta_{i}}\left(\mathds{1}_{\\{u_{i}\leq\hat{X}_{t^{-}}\\}}-\hat{X}_{t^{-}}\right)\tilde{N}(dt,d\zeta,du),$
(5.23)
where $\\{B_{t}\\}_{t\geq 0}$ is a standard Brownian motion and $\tilde{N}$ is a
compensated Poisson measure on
$(0,\infty)\times\Delta\times[0,1]^{\mathbb{N}}$. $\tilde{N}$ has intensity
$ds\otimes\frac{\Xi_{KS}(d\zeta)}{(\zeta,\zeta)}\otimes du$, where $du$
is the Lebesgue measure on $[0,1]^{\mathbb{N}}$ and $\Xi_{KS}$ is the
characterizing measure of the subordinated Kingman coalescent, defined in
(5.22).
Again, before proving Theorem 5.4 we shall make sure that a solution to
Equation (5.23) exists.
###### Lemma 5.5.
For any probability measure $\mathrm{L}_{\gamma}$ on $\mathbb{R}_{+}$ and any
$\alpha\in(0,1],\ \eta>0$, there exists a unique strong solution to the SDE
(5.23).
###### Proof.
See Proposition 3.4 in [13]. ∎
We are now ready to prove Theorem 5.4.
###### Proof of Theorem 5.4.
We use the same strategy as in the proof of Theorem 4.6. Again, if $g_{i}$ is
the $i$-th generation that is not in a bottleneck, we define
$\hat{V}^{N}_{i}=\hat{X}^{N}_{g_{i}}.$
Following Step 1 in the proof of Theorem 4.6 we have
$d_{\lambda}(\\{\hat{X}^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq
T},\\{\hat{V}^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq T})\rightarrow
0\textrm{ in probability}.$
Again, we need to prove the convergence of the generator of
$\\{\hat{V}^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{t\geq 0}$, to the generator
of $\\{\hat{X}_{t}\\}_{t\geq 0}$.
From Lemma 5.5, $\\{\hat{X}_{t}\\}_{t\geq 0}$ exists and has generator
$\mathcal{\hat{A}}$. Its domain contains twice differentiable functions and
for a function $f\in C^{2}[0,1]$ and $x\in[0,1]$, we have
$\displaystyle\mathcal{\hat{A}}f(x)\ =$ $\displaystyle\
\mathds{1}_{\\{\alpha=1\\}}\frac{1}{2}x(1-x)f^{\prime\prime}(x)\ +\
\int_{\Delta}\frac{\Xi_{KS}(d\zeta)}{(\zeta,\zeta)}\mathbb{E}\left(f\left(\sum_{i\geq
1}\zeta_{i}B_{i}^{x}\right)-f(x)\right),$ (5.24)
where the $B_{i}^{x}$’s are Bernoulli random variables of parameter $x$ and
the second term is the generator of the frequency process associated with a
$\Xi_{KS}$-Fleming-Viot process, see for example formula (5.6) in [5].
The discrete generator $\mathcal{\hat{A}}^{N}$ of $\\{\hat{V}^{N}_{\lfloor
N^{\alpha}t\rfloor}\\}_{t\geq 0}$ (defined as in (2.9)), applied to a function
$f\in C^{2}[0,1]$ in $x\in[0,1]$ can be written as
$\displaystyle\mathcal{\hat{A}}^{N}f(x)\ =$ $\displaystyle\
N^{\alpha}(1-\frac{\eta}{N^{\alpha}})\mathbb{E}\left(f\left(\frac{\sum_{i=1}^{N}B_{i}^{x}}{N}\right)-f(x)\right)$
(5.25) $\displaystyle+$ $\displaystyle\
N^{\alpha}\frac{\eta}{N^{\alpha}}\sum_{k\geq
1}\mathbb{P}(b_{1,N}N=k)\sum_{g\geq
1}\mathbb{P}(l_{1,N}=g)\sum_{i=0}^{k}\mathbb{P}(\bar{Y}^{k,g,x}=i/k)\left(f(i/k)-f(x)\right),$
(5.26)
which can be interpreted in the same way as the generator
$\mathcal{\bar{A}}^{N}$. Again, when $\alpha=1$, part (5.25), corresponds to
the generator of a classical Wright-Fisher model and converges when
$N\to\infty$ to $\frac{1}{2}x(1-x)f^{\prime\prime}(x)$, which is the generator
of the Wright-Fisher diffusion. When $\alpha<1$ this term becomes of order
$N^{\alpha-1}$ and therefore converges to $0$.
Part (5.26) corresponds to what happens during a bottleneck (we recall that
$\mathrm{L}_{\gamma}/N^{\alpha}\to 0$ in distribution i.e. in the new time
scale the bottlenecks are instantaneous). It is well-known that
$\\{Y^{k,\lfloor kt\rfloor,x}\\}_{t\geq 0}$ converges in distribution, in the
Skorokhod topology to the Wright-Fisher diffusion $\\{Y_{t}\\}_{t\geq 0}$ with
$Y_{0}=x$ (see for example Chapter 2 in [9]). In a similar way we can prove
that $\\{\bar{Y}^{k,\lfloor kt\rfloor,x}\\}_{t\geq 0}$ converges in
distribution, in the Skorokhod topology to the same process. In fact,
$\\{\bar{Y}^{k,\lfloor kt\rfloor,x}\\}_{t\geq 0}$ has the distribution of the
frequency process of a Wright-Fisher model, with a random initial condition.
This, combined with the assumptions that $b_{1,N}N\to\infty$ and
$l_{1,N}/(Nb_{1,N})\to\mathrm{L}_{\gamma}$ in distribution, implies that
$\displaystyle\mathcal{\hat{A}}^{N}f(x)\ {\longrightarrow}$ $\displaystyle\
\mathds{1}_{\\{\alpha=1\\}}\frac{1}{2}x(1-x)f^{\prime\prime}(x)+\eta\int_{\mathbb{R}_{+}}\mathrm{L}_{\gamma}(d\sigma)\int_{[0,1]}\mathbb{P}(Y_{\sigma}\in
dy|Y_{0}=x)\left(f(y)-f(x)\right).$ (5.27)
Finally, to compute $\mathbb{P}(Y_{\sigma}\in dy|Y_{0}=x)$, we use the duality
relation between the Wright-Fisher diffusion and the Kingman coalescent
(2.15). More precisely, to compute the probability that the proportion of type
$1$ individuals is $y$ at time $\sigma$, we can follow backwards in time the
ancestry of the whole population. The number of ancestors is given by a
Kingman coalescent started at $K_{0}=\infty$. If $K_{\sigma}=j$, each one of
the $j$ ancestors is of type $1$ with probability $x$ and the fraction of the
population (at time $\sigma$) that is issued from each one of the $j$
ancestors is given by a Dirichlet distribution $D_{j}$. This means that
$\displaystyle\eta\int_{\mathbb{R}_{+}}\mathrm{L}_{\gamma}(d\sigma)\mathbb{P}(Y_{\sigma}\in
dy|Y_{0}=x)$
$\displaystyle=\eta\int_{\mathbb{R}_{+}}\mathrm{L}_{\gamma}(d\sigma)\sum_{j\geq 1}\mathbb{P}(K_{\sigma}=j)\int_{\Delta}D_{j}(d\zeta)\mathbb{P}(\sum_{i\geq 1}\zeta_{i}B_{i}^{x}\in dy)$ (5.28)
$\displaystyle=\int_{\Delta}\frac{\Xi_{KS}}{(\zeta,\zeta)}(d\zeta)\mathbb{P}(\sum_{i\geq
1}\zeta_{i}B_{i}^{x}\in dy),$
and replacing into (5.27), we have that $\mathcal{\hat{A}}^{N}$ converges to
$\mathcal{\hat{A}}$ uniformly. This implies that $\\{\hat{V}^{N}_{\lfloor N^{\alpha}t\rfloor}\\}_{0\leq t\leq T}$ converges weakly in the Skorokhod $J_{1}$ topology to $\\{\hat{X}_{t}\\}_{0\leq t\leq T}$. Since convergence in $J_{1}$ implies convergence in $d_{\lambda}$, we have the desired result. ∎
We fix $\alpha\in(0,1],\ \eta>0$ and $\mathrm{L}_{\gamma}$ a probability
measure on $\mathbb{R}_{+}$. Let us consider $\\{\hat{N}_{t}\\}_{t\geq 0}$,
the block-counting process of the subordinated Kingman coalescent
characterized by $\alpha$, $\eta$ and $\mathrm{L}_{\gamma}$. As in the
previous sections, we are going to prove a moment duality property between the
block-counting process and the diffusion with jumps $\\{\hat{X}_{t}\\}_{t\geq
0}$ defined above (with the same parameters).
###### Theorem 5.6.
For every $x\in[0,1],\ n\in\mathbb{N}$, we have
$\mathbb{E}(\hat{X}_{t}^{n}|\hat{X}_{0}=x)\ =\
\mathbb{E}(x^{\hat{N}_{t}}|\hat{N}_{0}=n).$
###### Proof.
We only consider the case $\alpha<1$ (as the extension to the case $\alpha=1$ can be done exactly as in Section 2.3). Let $h(x,n)=x^{n}$. Using (5.28), for every $n\in\mathbb{N}$, the generator $\mathcal{\hat{A}}$ applied to $h$ (seen as a function of $x$) can be rewritten as
$\displaystyle\mathcal{\hat{A}}h(x,n)$ $\displaystyle=\eta\int_{\mathbb{R}_{+}}\mathrm{L}_{\gamma}(d\sigma)\sum_{j\in\mathbb{N}}\mathbb{P}(K_{\sigma}=j)\int_{\Delta}D_{j}(d\zeta)\int_{[0,1]}\mathbb{P}(\sum_{i\in\mathbb{N}}\zeta_{i}B_{i}^{x}\in dy)\left(h(y,n)-h(x,n)\right)$ $\displaystyle=\mathcal{\hat{G}}h(x,n),$
where $\mathcal{\hat{G}}$ is the generator of $\\{\hat{N}_{t}\\}_{t\geq 0}$
applied to $h$, seen as a function of $n$. ∎
## Acknowledgements
The authors thank three anonymous referees for comments that greatly improved the paper. AGC thanks Jochen Blath for helpful discussions. AGC was supported by CONACyT Grant A1-S-14615, VMP by the DGAPA-UNAM postdoctoral program, and ASJ by CONACyT Grant CB-2014/243068.
## References
* [1] N. Berestycki. Recent progress in coalescent theory. Ensaios Matemáticos, Sociedade Brasileira de Matemática, 2009.
* [2] J. Bertoin. Random fragmentation and coagulation processes. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2006.
* [3] J. Bertoin and J.F. Le Gall. Stochastic flows associated to coalescent processes. Probab. Theory Relat. Fields, 126:2, 261–288, 2003.
* [4] M. Birkner, J. Blath and B. Eldon. An ancestral recombination graph for diploid populations with skewed offspring distribution. Genetics, 193, 255–290, 2013.
* [5] M. Birkner, J. Blath, M. Möhle, M. Steinrüken and J. Tams. A modified lookdown construction for the $\Xi$-Fleming-Viot process with mutation and populations with recurrent bottlenecks. ALEA, Lat. Am. J. Probab. Math. Stat., 6, 25–61, 2009.
* [6] M. Birkner, H. Liu and A. Sturm. Coalescent results for diploid exchangeable population models. Electron. J. Probab., 23:49, 2018.
* [7] J-F. Delmas, J-S. Dhersin and A. Siri-Jégousse. Asymptotic results on the length of coalescent trees. Ann. Appl. Probab., 18, 997–1025, 2008.
* [8] R. Durrett. Probability: Theory and Examples, Fourth edition, Cambridge University Press, 2012.
* [9] A. Etheridge. Some mathematical models from population genetics, volume 2012 of Lecture Notes in Mathematics. Springer, Heidelberg, 2011.
* [10] B. Eldon and J. Wakeley. Coalescent processes when the distribution of offspring number among individuals is highly skewed. Genetics, 171:4, 2621–2633, 2006.
* [11] F. Freund. Cannings models, population size changes and multiple-merger coalescents. J. Math. Biol., 2020.
* [12] F. Gaiser and M. Möhle. On the block-counting process and the fixation line of exchangeable coalescents. ALEA, Lat. Am. J. Probab. Math. Stat., 13, 809–833, 2016.
* [13] A. González Casanova and D. Spanò. Duality and fixation in a $\Xi$-Wright-Fisher processes with frequency-dependent selection. Ann. Appl. Probab., 28, 250–284, 2018.
* [14] R.C. Griffiths and S. Tavaré. Sampling theory for neutral alleles in a varying environment. Philos. Trans. Royal Soc. B, 344, 403–410, 1994.
* [15] P. Jagers and S. Sagitov. Convergence to the coalescent in populations of substantially varying size. J. Appl. Probab., 41, 368–378, 2004.
* [16] I. Kaj and S.M. Krone. The coalescent process in a population with stochastically varying size. J. Appl. Probab., 40, 33–48, 2003.
* [17] O. Kallenberg. Foundations of Modern Probability. Probability and its Applications, Springer-Verlag, 2002.
* [18] J.F.C. Kingman. The coalescent, Stochastic Process. Appl., 13, 235–248, 1982.
* [19] H. Li and R. Durbin. Inference of human population history from individual whole-genome sequences. Nature, 475, 493–496, 2011.
* [20] Z. Li and F. Pu. Strong solutions of jump-type stochastic equations. Electron. Commun. Probab., 17:33, 2012.
* [21] P.A. Meyer and W.A. Zheng, Tightness criteria for laws of semimartingales. Ann. Inst. Henri Poincaré Probab. Statist., 20, 353–372, 1984.
* [22] M. Möhle. A convergence theorem for Markov chains arising in population genetics and the coalescent with selfing. Adv. in Appl. Probab., 30, 493–512, 1998.
* [23] M. Möhle. The concept of duality and applications to Markov processes arising in neutral population genetics models. Bernoulli, 5, 761–777, 1999.
* [24] M. Möhle. Asymptotic results for coalescent processes without proper frequencies and applications to the two-parameter Poisson-Dirichlet coalescent. Stochastic Process. Appl., 120, 2159–2173, 2010.
* [25] M. Möhle and S. Sagitov. A classification of coalescent processes for haploid exchangeable population models. Ann. Probab., 29, 1547–1562, 2001.
* [26] H.S. Niwa, K. Nashida and T. Yanagimoto. Reproductive skew in Japanese sardine inferred from DNA sequences. ICES J. Mar. Sci., 73:9, 2181–2189, 2016.
* [27] J. Pitman. Coalescents with multiple collisions. Ann. Probab., 24, 1870–1902, 1999.
* [28] S. Sagitov. The general coalescent with asynchronous mergers of ancestral lines. J. Appl. Probab., 36,1116–1125, 1999.
* [29] S. Sagitov. Convergence to the coalescent with simultaneous multiple mergers. J. Appl. Probab., 40, 839–854, 2003.
* [30] J. Schweinsberg. Coalescents with simultaneous multiple collisions. Electron. J. Probab., 5:12, 2000.
* [31] J. Schweinsberg. Coalescent processes obtained from supercritical Galton-Watson processes. Stochastic Process. Appl., 106, 107–139, 2003.
* [32] A.V. Skorokhod. Limit theorems for stochastic processes. Theor. Probab. Appl., 1, 261–290, 1956.
* [33] M. Steinrücken, M. Birkner and J. Blath. Analysis of DNA sequence variation within marine species using Beta-coalescents. Theor. Pop. Biol., 87, 15–24, 2013.
* [34] J. Terhorst, J.A. Kamm and Y.S. Song. Robust and scalable inference of population history from hundreds of unphased whole genomes. Nature Genetics, 49:303, 2017.
* [35] W. Whitt. Stochastic-process limits: an introduction to stochastic-process limits and their application to queues. Springer, 2002.
$\overline{E}_{1}+\sum_{i=1}^{10}\overline{C}_{i}$ to recover $X$. This yields
an irrational (and hence ugly) singularity whose minimal resolution includes a
genus 2 curve $\overline{C}_{7}$. See Figure 22 for an illustration of this.
Note that $\overline{C}_{6}$ and $\overline{C}_{7}$ intersect twice because
the intersection point $C_{6}\cap C_{7}$ is not on the branch locus. We have
also depicted (dashed) the $-1$-curves $\mathcal{E}_{0}$ and $\mathcal{E}_{1}$
present in $\mathbb{H}\mathbb{P}(29)$ coming from the non-toric blow-ups in
its presentation as an almost toric surface.
Figure 22: $\mathbb{H}\mathbb{P}(29)$. The minimal resolution $\tilde{Y}$, its
blow-up $\accentset{\approx}{Y}$, and the double cover
$\accentset{\approx}{X}$ of $\accentset{\approx}{Y}$. Thick curves are the
branch locus. Any non-dashed curves in $\accentset{\approx}{X}$ are contracted
to get $X$. The top row shows the dual graph of the minimal resolution of the
singularity of $X$.
### $\mathbb{H}\mathbb{P}(34)$
#### §11.21
The Manetti surface $\mathbb{H}\mathbb{P}(34)$ is obtained from
$\mathbb{P}(1,169,1156)$ by smoothing the $\frac{1}{169}(1,25)$ singularity
and keeping the $\frac{1}{1156}(1,169)$ singularity. The octic tropicalisation
of $\mathbb{H}\mathbb{P}(34)$ has vertices at $\bm{p}_{0}=(0,0)$,
$\bm{p}_{1}=(272/13,0)$ and $\bm{p}_{2}=(0,104/34)$, and a branch cut
emanating from $\bm{p}_{1}$ in the $(-89,13)$-direction with monodromy matrix
$M_{\bm{p}_{1}}=\begin{pmatrix}-1156&-7921\\\ 169&1158\end{pmatrix}$. Below we
show the convex hull $\Pi_{A}^{\mathbb{Z}}$ of the integral points (vertices
at $(0,3)$, $(14,1)$ and $(20,0)$) together with the ambient Markov triangle
in grey. Note that the branch cut lies strictly above the convex hull of the
integer points, and never intersects it.
(Picture omitted: the convex hull $\Pi_{A}^{\mathbb{Z}}$ inside the ambient Markov triangle, with the branch cut emanating from $\bm{p}_{1}$ in the $(-89,13)$-direction.)
The resolution of the $\frac{1}{1156}(1,169)$ singularity introduces edges
$e_{1},\ldots,e_{10}$ with inward normals
$\displaystyle\rho_{1}=(0,-1),\quad\rho_{2}=(-1,-7),\quad\rho_{3}=(-7,-48),\quad\rho_{4}=(-13,-89),\quad$
$\displaystyle\rho_{5}=(-19,-130),\quad\rho_{6}=(-44,-301),\quad\rho_{7}=(-69,-472),\quad$
$\displaystyle\rho_{8}=(-94,-643),\quad\rho_{9}=(-119,-814),\quad\rho_{10}=(-144,-985),$
corresponding to exceptional curves $C_{i}$, $i=1,\ldots,10$, with
$C_{1}^{2}=-7,\quad C_{2}^{2}=-7,\quad C_{3}^{2}=C_{4}^{2}=-2,\quad
C_{5}^{2}=-3,\quad C_{6}^{2}=\cdots=C_{10}^{2}=-2.$
Note that the edge $e_{4}$ is parallel to the branch cut: this means that, as
they move inwards, the edges $e_{5},\ldots,e_{10}$ cross the branch cut, and
appear as $M_{\bm{p}_{1}}^{-1}e_{i}$. Moving the edges normally inwards to
$\Pi^{\mathbb{Z}}_{A}$, edge $e_{1}$ ends up with zero-length concentrated at
$(0,3)$, and $e_{3},\ldots,e_{9}$ end up with zero-length concentrated at
$(20,0)$. The table of affine lengths and displacements becomes:
Edge | $e_{1}$ | $e_{2}$ | $e_{3}$ | $e_{4}$ | $e_{5}$ | $e_{6}$
---|---|---|---|---|---|---
Affine length | $0$ | $2$ | $0$ | $0$ | $0$ | $0$
Affine displacement | $1/17$ | $7/17$ | $14/17$ | $21/17$ | $11/17$ | $12/17$

Edge | $e_{7}$ | $e_{8}$ | $e_{9}$ | $e_{10}$ | |
---|---|---|---|---|---|---
Affine length | $1$ | $0$ | $0$ | $1$ | |
Affine displacement | $13/17$ | $14/17$ | $15/17$ | $16/17$ | |
There is an important feature in this example which did not appear in the
other examples. Namely, the curve $\mathcal{E}_{1}$ from the non-toric blow-up
in the construction of $\tilde{Y}$ appears as a part of the branch locus, with
multiplicity $1$. This follows from the discussion in §9.3: the polygon
$\Pi_{A}^{\mathbb{Z}}$ has a zero-length edge concentrated at $(14,1)$ defined
by
$\tilde{b}_{1}\coloneqq\langle u,\rho_{4}\rangle=\left(\begin{pmatrix}8/3\\\
8/3\end{pmatrix}-\begin{pmatrix}14\\\
1\end{pmatrix}\right)\cdot\begin{pmatrix}-13\\\ -89\end{pmatrix}=-1.$
Write our octic as $\Omega=\sum_{p}a_{p}\theta_{p}$ where $\theta_{p}$ is the
GHK theta-function corresponding to the integral point $p$. If we pick
$a_{(0,3)}$, $a_{(7,1)}$, $a_{(14,1)}$ and $a_{(20,0)}$ generically, we find
that the strict transform of $B$ intersects $C_{2}$ transversely at two
points, $C_{10}$ once transversely, and is disjoint from the other curves
$C_{i}$. According to the prescription in §2.4, the branch locus is
$B+C_{1}+C_{2}+C_{4}+C_{5}+C_{7}+C_{9}+\mathcal{E}_{1}$. Let
$\accentset{\approx}{Y}$ be the blow-up of $\tilde{Y}$ at $C_{1}\cap C_{2}$, the two points of $C_{2}\cap B$, $C_{4}\cap C_{5}$ and $C_{4}\cap\mathcal{E}_{1}$, with exceptional divisors $E_{1},\ldots,E_{5}$, and let
$\accentset{\approx}{X}$ be the double cover of $\accentset{\approx}{Y}$ over
the curve. We obtain the surface $X$ by contracting
$\sum_{i=1}^{10}\overline{C}_{i}+\sum_{i=1}^{5}\overline{E}_{i}$. Thus $X$
contains an ugly singularity (see Figure 23).
Figure 23: $\mathbb{H}\mathbb{P}(34)$. The minimal resolution $\tilde{Y}$, its
blow-up $\accentset{\approx}{Y}$, and the double cover
$\accentset{\approx}{X}$ of $\accentset{\approx}{Y}$. Thick curves are the
branch locus. Any non-dashed curves in $\accentset{\approx}{X}$ are contracted
to get $X$. The top row shows the dual graph of the minimal resolution of the
singularity of $X$.
## 12 Elliptic fibrations
#### §12.1
If $X$ is a normal octic double Manetti surface with at worst rational
singularities then its geometric genus is the same as an octic double plane,
namely $p_{g}=3$. The minimal model of $X$ is then an elliptic surface with
$p_{g}=3$. To see where the elliptic fibration comes from, observe that the
minimal resolutions of Manetti surfaces are blow-ups of rational ruled
(Hirzebruch) surfaces: when the branch curve intersects the rulings with total
multiplicity $4$, the ruling lifts to an elliptic fibration on the branched
double cover. Note that the singularities for the octic double
$\mathbb{H}\mathbb{P}(29)$ surfaces are irrational, and indeed the minimal
model in that case is a ruled surface rather than an elliptic surface.
This point of view can be helpful for studying stable surfaces, so in this
section we give a description of the elliptic surfaces which contract down to
give our octic double Manetti surfaces in all the cases where it exists. These
descriptions can be read off easily from the tropical pictures.
#### §12.2
Let $\mathcal{F}$ be a singular fibre of the elliptic fibration on the minimal
resolution $X^{\prime}$ of $X$.
* •
We call $\mathcal{F}$ an $S$-fibre if it contains an exceptional curve of
$X^{\prime}\to X$.
* •
Otherwise, we call $\mathcal{F}$ a $B$-fibre. The $B$-fibres arise from taking
double covers of rulings which happen to intersect the branch curve $B$ non-
transversely.
We will first describe the $S$-fibres in our examples, then discuss the
possibilities for $B$-fibres. To relate this with what we did earlier, recall
that we constructed a (not necessarily minimal) resolution
$\accentset{\approx}{X}\to X$ which was a double cover
$\accentset{\approx}{f}\colon\accentset{\approx}{X}\to\accentset{\approx}{Y}$.
Our figures below show $X^{\prime}$, and in each case we will record which
curves (if any) in $\accentset{\approx}{X}$ need to be contracted to get
$X^{\prime}$. The curves $C_{i}$ and $E_{i}$ in $\accentset{\approx}{Y}$ make
reference to the notation and figures from Section 11 and the curves
$\mathcal{E}_{i}\subset\accentset{\approx}{Y}$ are curves coming from the non-
toric blow-ups (defined in §9.2). Recall that $\overline{G}$ denotes
$\accentset{\approx}{f}^{-1}(G)$; we will continue to write $\overline{G}$ for
the projection of this curve to $X^{\prime}$. Whenever we write $F_{i}$ we
will mean the $-1$-curve given by the strict transform of the ruling through
the point whose blow-up created the curve $E_{i}$ (whenever that makes sense).
We use standard notation for the Kodaira types of the fibres, see [32] or [6]
for an explanation of this notation.
Figure 24: Minimal resolutions for octic double $\mathbb{P}(1,1,4)$ surfaces as elliptic fibrations. The exceptional locus is drawn thickly. Any unlabelled curves have self-intersection $-2$. Any flat horizontal lines are sections.
Figure 25: Minimal resolutions for octic double $\mathbb{P}(1,4,25)$ surfaces as elliptic fibrations. The exceptional locus is drawn thickly. Any unlabelled curves have self-intersection $-2$. Any flat horizontal lines are sections.
Figure 26: Minimal resolutions for octic double $\mathbb{H}\mathbb{P}(5)$, $\mathbb{H}\mathbb{P}(13)$, and $\mathbb{H}\mathbb{P}(34)$ surfaces as elliptic fibrations. The exceptional locus is drawn thickly. Any unlabelled curves have self-intersection $-2$. Any perfectly horizontal lines are sections. The $-1$-curves come from non-toric blow-ups.
#### §12.3 $\mathbb{P}(1,1,4)$.
See Figure 24. In these cases, $\accentset{\approx}{X}=X^{\prime}$. The
possible $S$-fibres are as follows.
* •
In Case I the curve $\overline{C}_{1}$ comprises two disjoint sections of
square $-4$. There are no $S$-fibres.
* •
In Case II (compare with Figure 13), the curve $\overline{C}_{1}$ is a single
section of square $-4$; the curves $\overline{E}_{i}+\overline{F}_{i}$ form
four $I_{2}$ fibres.
* •
In Case III we include the subcases (a)–(d) shown in Figure 14: III(a) has two
$I_{2}$ and one $I_{4}$ fibre; III(b) has two $I_{4}$ fibres; III(c) has one
$I_{2}$ and one $I_{6}$ fibre; III(d) has one $I_{8}$ fibre. In all these
cases, $\overline{C}_{1}$ is the section of square $-4$ and the
$\overline{E}_{i}$ and $\overline{F}_{i}$ assemble themselves into $I_{n}$
fibres (the curves $\overline{F}_{i}$ are precisely those which are not drawn
thickly in Figure 24).
#### §12.4 $\mathbb{P}(1,4,25)$.
See Figure 25. The possible $S$-fibres are as follows.
* •
In Case I (compare with Figure 16), $\overline{C}_{1}$ is a section of square
$-4$, $\overline{E}_{1}+\overline{F}$ forms an $I_{2}$ fibre and
$\overline{D}_{x}+\sum_{i=2}^{5}\overline{C}_{i}$ is a (blown-up) $IV$ fibre.
Note that $X^{\prime}$ is obtained from $\accentset{\approx}{X}$ by
contracting $\overline{C}_{3}$.
* •
In Case II (compare with Figure 17), the curve $\overline{C}_{1}$ is a section
of square $-4$, and $\overline{E}_{1}+\overline{E}_{2}+\sum_{i=2}^{5}\overline{C}_{i}+\overline{D}_{x}$ forms a (blown-up) $I^{*}_{1}$ fibre. In this case,
$\accentset{\approx}{X}=X^{\prime}$.
* •
In Case III (compare with Figure 18), the curve $\overline{C}_{1}$ is a
section of square $-4$, $\overline{E}_{1}+\overline{F}_{1}$ is an $I_{2}$
fibre, and
$\overline{D}_{x}+\sum_{i=2}^{5}\overline{E}_{i}+\sum_{i=2}^{5}\overline{C}_{i}$
is a (blown-up) $I^{*}_{0}$ fibre. In this case, $X^{\prime}$ is obtained from
$\accentset{\approx}{X}$ by contracting $\overline{C}_{3}$.
#### §12.5 $\mathbb{H}\mathbb{P}(5)$.
See Figure 26. The possible $S$-fibres are as follows.
* •
In Case I (compare with Figure 19), $\overline{C}_{1}$ is a section of square
$-4$, $\overline{E}_{1}+\overline{F}_{1}$ is an $I_{2}$ fibre and
$\overline{\mathcal{E}}_{1}+\sum_{i=2}^{4}\overline{C}_{i}$ is a (blown-up)
$III$ fibre. In this case, $X^{\prime}$ is obtained from
$\accentset{\approx}{X}$ by contracting $\overline{C}_{3}$.
* •
In Case II (compare with Figure 20), the curve $\overline{C}_{1}$ is a section
of square $-4$ and
$\overline{\mathcal{E}}_{1}+\sum_{i=1}^{2}\overline{E}_{i}+\sum_{i=2}^{4}\overline{C}_{i}$
is a (blown-up) $I^{*}_{1}$ fibre. In this case,
$\accentset{\approx}{X}=X^{\prime}$.
#### §12.6 $\mathbb{H}\mathbb{P}(13)$ (generic situation).
See Figure 26. In this case (compare with Figure 21), the curve
$\overline{C}_{1}$ is a section of square $-4$ and
$\overline{\mathcal{E}}_{1}+\sum_{i=1}^{3}\overline{E}_{i}+\sum_{i=2}^{7}\overline{C}_{i}$
is a (blown-up) $I^{*}_{0}$ fibre. This is the only $S$-fibre. In this case,
$X^{\prime}$ is obtained from $\accentset{\approx}{X}$ by contracting
$\overline{C}_{4}$ and $\overline{C}_{6}$.
#### §12.7 $\mathbb{H}\mathbb{P}(29)$ (generic situation).
The minimal model is ruled rather than elliptic. The genus 2 curve
$\overline{C}_{7}$ is a bi-section and the curves
$\overline{E}_{1}+\overline{\mathcal{E}}_{0}+\overline{C}_{1}+\cdots+\overline{C}_{6}$
and
$\overline{C}_{8}+\overline{C}_{9}+\overline{C}_{10}+\overline{\mathcal{E}}_{1}$
are each fibres.
#### §12.8 $\mathbb{H}\mathbb{P}(34)$ (generic situation).
See Figure 26. For this surface (compare with Figure 23), the curve
$\overline{C}_{1}$ is a section of square $-4$ and
$\overline{\mathcal{E}}_{1}+\sum_{i=1}^{5}\overline{E}_{i}+\sum_{i=2}^{10}\overline{C}_{i}$
is a (blown-up) $I^{*}_{0}$ fibre. This is the only $S$-fibre. In this case,
$X^{\prime}$ is obtained from $\accentset{\approx}{X}$ by contracting
$\overline{C}_{7}$ and $\overline{C}_{9}$.
#### §12.9 $B$-fibres for $\mathbb{P}(1,1,4)$, Case I.
The $B$-fibres appear when a ruling intersects the branch locus
$B\subset\tilde{Y}$ non-transversely. In $\mathbb{P}(1,1,4)$, Case I, $B$ intersects
each ruling $F$ with total multiplicity four. If we assume that $B$ is smooth
away from the singularities of $Y$, the following possibilities can occur for
the remaining singular elliptic fibres:
* •
$B$ intersects $F$ at two points transversely and one with multiplicity $2$.
This yields a $B$-fibre of Type $I_{1}$ in the double cover.
* •
$B$ intersects $F$ at two points with multiplicity $2$. This yields a
$B$-fibre of Type $I_{2}$ in the double cover.
* •
$B$ intersects $F$ at one point transversely and at one point with
multiplicity $3$. This yields a $B$-fibre of Type $II$ (cuspidal) in the
double cover.
* •
$B$ intersects $F$ at one point with multiplicity $4$. This yields a $B$-fibre of Type $III$ in the double cover.
The $B$-fibres of Type $I_{1}$ and $II$ are irreducible, and hence intersect
both components of $\overline{C}_{1}$. The $B$-fibres of Types $I_{2}$ and
$III$ are reducible, but each component necessarily intersects one of the
irreducible components of $\overline{C}_{1}$.
#### §12.10 $B$-fibres in other cases.
For all the other cases except $\mathbb{H}\mathbb{P}(29)$, the branch curve
$B$ intersects each ruling $F$ at three points: the fourth branch point of
each elliptic fibre over a ruling comes from where the ruling intersects the
section $C_{1}$. If we assume that $B$ is smooth away from the singularities
of $Y$, the following possibilities can occur for the remaining $B$-fibres:
* •
$B$ intersects $F$ at one point transversely and one point with multiplicity
$2$. All three points are distinct from $F\cap C_{1}$, so we get a $B$-fibre
of Type $I_{1}$ in the double cover.
* •
$B$ intersects $F$ at one point with multiplicity $3$, distinct from $F\cap
C_{1}$. This yields a $B$-fibre of Type $II$ in the double cover.
This implies that all of the $B$-fibres are irreducible, and hence intersect
$\overline{C}_{1}$. We will use this fact when we study stability of these
surfaces.
## 13 Stability
#### §13.1
In this section, we will discuss the stability of the surfaces we have found.
The surfaces not covered by Theorem §13.2 below are not log canonical, and
hence not stable. We expect that their stable replacements are non-normal; it
would be interesting to know what you get.
#### §13.2 Theorem.
We call an octic curve $B\subset Y$ generic if it is smooth away from the
singularities of $Y$. Assuming genericity of the branch curve, the following
octic double Manetti surfaces are KSBA-stable: $\mathbb{P}(1,1,4)$, Cases I,
II; $\mathbb{P}(1,4,25)$, Cases I, III; $\mathbb{H}\mathbb{P}(5)$, Case I.
#### §13.3 Proof of Theorem §13.2.
The proof of this theorem will occupy the rest of this section. Certainly the
singularities are log canonical, so it remains only to show that $K_{X}$ is
positive on all curves. We will suppose there is an irreducible curve
$D\subset X$ with $K_{X}\cdot D\leq 0$ and derive a contradiction. Let
$\rho\colon X^{\prime}\to X$ be the minimal resolution and write
$A_{1},\ldots,A_{m}$ for the irreducible components of the exceptional locus.
In these cases, the minimal model of $X^{\prime}$ is an elliptic surface, as
discussed in Section 12. We write $S$ for this elliptic surface,
$\varphi\colon X^{\prime}\to S$ for the contraction, and $B_{1},\ldots,B_{n}$
for the exceptional curves of $\varphi$.
#### §13.4
We have $\rho^{*}K_{X}=K_{X^{\prime}}+\sum\alpha_{i}A_{i}$ (where
$-\alpha_{i}$ are the discrepancies of $\rho$) and
$K_{X^{\prime}}=\varphi^{*}K_{S}+\sum\beta_{i}B_{i}$. Here $\alpha_{i}$ and
$\beta_{i}$ are all positive numbers and $K_{S}$ is homologous to a positive
multiple of an elliptic fibre. Therefore $\rho^{*}K_{X}$ is an effective sum
of curves. Write $\Sigma$ for the support of $\rho^{*}K_{X}$. Let $D$ be a
curve in $X$ and $D^{\prime}=\mathrm{strict}_{\rho}(D)$. We have $K_{X}\cdot
D=\rho^{*}K_{X}\cdot D^{\prime}$. If this quantity is non-positive then either
$D^{\prime}$ is one of the curves in $\Sigma$ or else
$D^{\prime}\cap\Sigma=\emptyset$.
#### §13.5 Lemma.
If $K_{X}\cdot D\leq 0$ then $D^{\prime}$ is an irreducible component of an
elliptic fibre.
###### Proof.
There are two possible cases: either $D^{\prime}$ is disjoint from $\Sigma$ or
it is a component of $\Sigma$. In the first case, since $\Sigma$ contains an
elliptic fibre, $D^{\prime}$ must be contained in a (different) fibre. In the
second case, $D^{\prime}$ is either contained in a fibre or else it is one of
the curves $A_{i}$ or $B_{i}$. It cannot be one of the $A_{i}$, since these
project to points in $X$, and the $B_{i}$ are contained in elliptic fibres.
Therefore $D^{\prime}$ is contained in an elliptic fibre. ∎
#### §13.6
If $D^{\prime}$ is contained in a $B$-fibre then it is a $-2$-curve which
intersects one of the $A_{i}$ (one of the components of $\overline{C}_{1}$).
This implies that $\rho^{*}K_{X}\cdot D^{\prime}>0$. Therefore such a
$D^{\prime}$ cannot satisfy $K_{X}\cdot D\leq 0$.
#### §13.7
If $D^{\prime}$ is contained in an $S$-fibre then it is one of the
uncontracted (thinly-drawn) curves in Figures 24–26. For each of these curves
in the cases of interest, we can check (by calculating discrepancies) that
$\rho^{*}K_{X}\cdot D^{\prime}>0$, so again, it is impossible to find such a
$D$ with $K_{X}\cdot D\leq 0$. This completes the proof of Theorem §13.2.∎
## 14 Smoothability
#### §14.1
We now address the question of smoothability of octic double Manetti surfaces,
and how the various strata of the moduli space we have found fit together. We
have seen that an NPQ limit of octic double planes is an octic double Manetti
surface; it turns out that every octic double Manetti surface is the limit of
a degeneration of octic double planes:
#### §14.2 Theorem.
Any octic double Manetti surface is smoothable.
###### Proof.
Suppose that $Y$ is a Manetti surface and let $n$ be the product of the Markov
numbers associated to the singularities of $Y$, that is:
$n=\begin{cases}abc&\mbox{ if }Y=\mathbb{P}(a^{2},b^{2},c^{2})\\\ ab&\mbox{ if
}Y=\mathbb{H}\mathbb{P}(a,b)\\\ c&\mbox{ if
}Y=\mathbb{H}\mathbb{P}(c).\end{cases}$
Let $B\subset Y$ be an octic curve and $f\colon X\to Y$ the double cover of $Y$
branched over $B$. Let $\mathcal{Y}\to\Delta$ be a $\mathbb{Q}$-Gorenstein
smoothing of $Y$ with general fibre $\mathbb{P}^{2}$. By [12, Theorem 1.6(4)],
after possibly making a base change, we can find a divisor
$\mathcal{B}\subset\mathcal{Y}$ extending $B\subset Y$ if and only if the Weil
divisor class $[B]$ is divisible by $n$. Since $B$ is an octic curve,
$[B]=8n$, so $\mathcal{B}$ exists, and we can take the double cover of
$\mathcal{Y}$ branched over $\mathcal{B}$ to get a smoothing of $X$. ∎
#### §14.3
The moduli space of smooth octic double planes is $36$-dimensional: the
projective space of octics is the $44$-dimensional space
$\mathbb{P}\left(\mathrm{Sym}^{8}(V^{*})\right)$ where $V$ is the standard
representation of the $9$-dimensional group $GL(3)$ (whose projectivisation is
the $8$-dimensional $PGL(3)$), so
$\dim\mathbb{P}\left(\mathrm{Sym}^{8}(V^{*})\right)\mathbin{/\mkern-6.0mu/}PGL(3)=36$.
We have the following boundary strata comprising NPQ surfaces:
* $D_{A}$:
Surfaces with two $\frac{1}{4}(1,1)$ singularities (§1.8(i)).
* $D_{B}$:
Surfaces with one $\frac{1}{25}(1,4)$ singularity (§1.8(iii)).
* $D_{AB}$:
Surfaces with two $\frac{1}{4}(1,1)$ singularities and one $\frac{1}{25}(1,4)$
singularity (§1.8(ii)).
This enumeration of NPQ strata is only scratching the surface of the KSBA
boundary: there are also strata corresponding to the surfaces mentioned in
§1.14, which can also be understood similarly, and many deeper strata which we
left untouched. There are also degenerations corresponding to double covers of
singular limits of $\mathbb{P}^{2}$ with irrational singularities, which are
not amenable to the (almost) toric methods from this paper.
#### §14.4 Proposition.
These strata have dimensions:
$\dim(D_{A})=\dim(D_{B})=35,\quad\dim(D_{AB})=34$
and $D_{AB}$ is in the intersection of the closures of $D_{A}$ and $D_{B}$.
###### Proof.
For any Manetti surface, the space of generic “octics” is $45$-dimensional,
which becomes $44$ after projectivising. This is because a (GHK)-basis is in
bijection with the integral points of the corresponding Markov triangle; this
is easily seen to be $45$ for $\Pi(1,1,1)$, and the number of integral points
is invariant under mutation.
The cases $A$, $B$ and $AB$ respectively correspond to general octic double
covers of $\mathbb{P}(1,1,4)$, $\mathbb{H}\mathbb{P}(5)$ and
$\mathbb{P}(1,4,25)$. Since $\mathbb{P}(1,4,25)$ is a common degeneration of
both $\mathbb{H}\mathbb{P}(5)$ and $\mathbb{P}(1,1,4)$, we find that $D_{AB}$
is in the closure of both $D_{A}$ and $D_{B}$. It remains to compute the
dimensions, and for that it is sufficient to show:
$\dim(\mathrm{Aut}(\mathbb{P}(1,1,4))=\dim(\mathrm{Aut}(\mathbb{H}\mathbb{P}(5)))=9,\qquad\dim(\mathrm{Aut}(\mathbb{P}(1,4,25)))=10.$
The general automorphism of $\mathbb{P}(1,1,4)$ is:
$[x:y:z]\mapsto[a_{1}x+a_{2}y:a_{3}x+a_{4}y:a_{5}z+a_{6}x^{4}+a_{7}x^{3}y+\cdots+a_{10}y^{4}]$
(we thank Sönke Rollenske for pointing out the five terms we originally missed)
which has $9$ free parameters up to an overall scale factor.
The general automorphism of $\mathbb{P}(1,4,25)$ is:
$[x:y:z]\mapsto[a_{1}x:a_{2}y+a_{3}x^{4}:a_{4}z+a_{5}x^{25}+a_{6}x^{21}y+\cdots+a_{11}xy^{6}]$
which has $10$ free parameters up to scale.
To understand the automorphism group of $\mathbb{H}\mathbb{P}(5)$, recall that
its minimal resolution is obtained from $\mathbb{F}_{7}$ by a sequence of two
toric and one non-toric blow-ups (see §7.1–we use the notation from that
paragraph). Let $Y$ denote the result of performing the two toric blow-ups to
$\mathbb{F}_{7}$. An automorphism of $Y$ which fixes the $-1$-curve denoted
$F$ in Figure 6 lifts to a unique automorphism of $\mathbb{H}\mathbb{P}(5)$
and every automorphism of $\mathbb{H}\mathbb{P}(5)$ arises this way, because
the three points $F\cap B$, $F\cap E$ and $F\cap G$ must be fixed, so any
automorphism of $\mathbb{H}\mathbb{P}(5)$ fixes $F$ pointwise and can be
blown-down.
$\bullet$$\bullet$$\bullet$$\bullet$$\bullet$$\bullet$$z$$w$$v$$y$$u$$x$
Figure 27: The moment polygon of $Y$.
It therefore suffices to find the group of automorphisms of $Y$ fixing $F$
pointwise. Since $Y$ is toric, with hexagonal moment polygon, there is a GIT
model for $Y$ as $\mathbb{C}^{6}\mathbin{/\mkern-6.0mu/}\mathbb{G}_{m}^{4}$.
Let $x,y,z,u,v,w$ be the Cox variables associated to the edges of the moment
hexagon as shown in Figure 27. The inward normals associated to these edges
are
$\rho_{x}=(1,0),\,\rho_{y}=(-1,-7),\,\rho_{z}=(0,1),\,\rho_{u}=(0,-1),\,\rho_{v}=(-2,-13),\,\rho_{w}=(-1,-6),$
from which we can read off the weights of the Cox variables under the
$\mathbb{G}_{m}^{4}$-action:
$x$ | $y$ | $z$ | $u$ | $v$ | $w$
---|---|---|---|---|---
$1$ | $1$ | $7$ | $0$ | $0$ | $0$
$0$ | $0$ | $1$ | $1$ | $0$ | $0$
$2$ | $0$ | $13$ | $0$ | $1$ | $0$
$1$ | $0$ | $6$ | $0$ | $0$ | $1$
The unstable locus is
$vwyz=vwxz=uwxz=uxyz=uvxy=uvwy=0.$
An automorphism has the form
$[x:y:z:u:v:w]\mapsto\left[ax+byv^{2}w:cy:dz+\sum_{i=0}^{6}e_{i}uv^{13-2i}w^{6-i}x^{i}y^{7-i}:fu:gv:hw\right]$
for some $a,b,c,d,e_{0},\ldots,e_{6},f,g,h$. Invertibility implies that
$c,f,g,h$ are all nonzero, and using the $\mathbb{G}_{m}^{4}$-action we can
assume they are all equal to $1$. This leaves $10$ free parameters
$a,b,d,e_{0},\ldots,e_{6}$. The points $F\cap B=[1:0:1:1:0:1]$ and $F\cap
E=[1:1:1:1:0:0]$ are automatically fixed, so it suffices to fix another point
on $F$, say $[1:1:1:1:0:1]$. This reduces to the condition that
$[a:1:d:1:0:1]=[1:1:1:1:0:1].$
If we pick a square root of $a$ and act using
$(1,1,1/\sqrt{a},1)\in\mathbb{G}_{m}^{4}$, we get
$[1:1:d/\sqrt{a}:1:0:1]=[1:1:1:1:0:1]$
and have used up all of the freedom in the group action, so the relevant
subgroup is defined by the condition that $d^{2}=a$. This leaves $9$ free
parameters, as required. ∎
# A higher dimensional Hilbert irreducibility theorem
Giulio Bresciani Freie Universität Berlin, Arnimallee 3, 14195, Berlin,
Germany<EMAIL_ADDRESS>
###### Abstract.
Assuming the weak Bombieri-Lang conjecture, we prove that a generalization of
Hilbert’s irreducibility theorem holds for families of geometrically mordellic
varieties (for instance, families of hyperbolic curves). As an application we
prove that, assuming Bombieri-Lang, there are no polynomial bijections
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$.
The author is supported by the DFG Priority Program "Homotopy Theory and
Algebraic Geometry" SPP 1786
###### Contents
1. 1 Introduction
2. 2 Pulling families to maximal Kodaira dimension
3. 3 Higher dimensional HIT
4. 4 Polynomial bijections $\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$
## 1\. Introduction
Serre reformulated Hilbert’s irreducibility theorem as follows [Ser97, Chapter
9].
###### Theorem (Hilbert’s irreducibility, Serre’s form).
Let $k$ be finitely generated over $\mathbb{Q}$, and let
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a morphism with $X$
a scheme of finite type over $k$. Suppose that the generic fiber is finite,
and that there are no generic sections
$\operatorname{Spec}k(\mathbb{P}^{1})\to X$. Then $X(k)\to\mathbb{P}^{1}(k)$
is not surjective.
Recall that the weak Bombieri-Lang conjecture states that, if $X$ is a
positive dimensional variety of general type over a field $k$ finitely
generated over $\mathbb{Q}$, then $X(k)$ is not dense in $X$.
A variety $X$ over a field $k$ is _geometrically mordellic_, or _GeM_, if
every subvariety of $X_{\bar{k}}$ is of general type. More generally, a
scheme $X$ of finite type over $k$ is geometrically mordellic, or GeM, if
every subvariety of $X_{\bar{k}}$ is of general type. If the weak
Bombieri-Lang conjecture holds and $k$ is a field finitely generated over
$\mathbb{Q}$, then the set of rational points of a GeM scheme over $k$ is
finite, since its Zariski closure cannot have positive dimension.
Assuming Bombieri-Lang, we prove that Hilbert’s irreducibility theorem
generalizes to morphisms whose generic fiber is GeM.
###### Theorem A.
Let $k$ be finitely generated over $\mathbb{Q}$, and let
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a morphism with $X$
a scheme of finite type over $k$. Suppose that the generic fiber is GeM, and
that there are no generic sections $\operatorname{Spec}k(\mathbb{P}^{1})\to
X$.
Assume either that the weak Bombieri-Lang conjecture holds in every dimension,
or that it holds up to dimension equal to $\dim X$ and that there exists an
$N$ such that $|X_{v}(k)|\leq N$ for every rational point
$v\in\mathbb{P}^{1}(k)$. Then $X(k)\to\mathbb{P}^{1}(k)$ is not surjective.
There is a version of Hilbert’s irreducibility theorem over non-rational
curves, and the same is true for the higher dimensional generalization.
###### Theorem B.
Assume that the weak Bombieri-Lang conjecture holds in every dimension. Let
$k$ be finitely generated over $\mathbb{Q}$, and let
$f\mathrel{\mathop{\ordinarycolon}}X\to C$ be a morphism with $X$ any scheme
of finite type over $k$ and $C$ a geometrically connected curve. Assume that
the generic fiber is GeM, and that there are no generic sections
$\operatorname{Spec}k(C)\to X$. Then $X(h)\to C(h)$ is not surjective for some
finite extension $h/k$.
As an application of Theorem A we give an answer, conditional on the weak
Bombieri-Lang conjecture, to a long-standing MathOverflow question [Mat19]
which asks whether there exists a polynomial bijection
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$.
###### Theorem C.
Assume that the weak Bombieri-Lang conjecture for surfaces holds, and let $k$
be a field finitely generated over $\mathbb{Q}$. There are no polynomial
bijections $k\times k\to k$.
We remark that B. Poonen has proved that, assuming the weak Bombieri-Lang
conjecture for surfaces, there are polynomials giving _injective_ maps
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$, see [Poo10].
In 2019, T. Tao suggested on his blog [Tao19] a strategy to try to solve the
problem of polynomial bijections $\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$
conditional on Bombieri-Lang; let us summarize it. Given a morphism
$\mathbb{A}^{2}\to\mathbb{A}^{1}$ and a cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{A}^{1}\dashrightarrow\mathbb{A}^{1}$,
denote by $P_{c}$ the pullback of $\mathbb{A}^{2}$. If $P_{c}$ is of general
type, by Bombieri-Lang $P_{c}(\mathbb{Q})$ is not dense in $P_{c}$ and hence
by Hilbert irreducibility a generic section $\mathbb{A}^{1}\dashrightarrow
P_{c}$ exists. If $P_{c}$ is of general type for "many" covers $c$, one might
expect this to force the existence of a generic section
$\mathbb{A}^{1}\dashrightarrow\mathbb{A}^{2}$, which would contradict the
bijectivity of $\mathbb{A}^{2}(\mathbb{Q})\to\mathbb{A}^{1}(\mathbb{Q})$.
The strategy had some gaps, though. There were no results showing that the
pullback $P_{c}$ is of general type for "many" covers $c$, and it was not
clear how this would force a generic section of
$\mathbb{A}^{2}\to\mathbb{A}^{1}$. Tao started a so-called "polymath project"
in order to crowdsource a formalization. The project was active for roughly
one week in the comments section of the blog but didn’t reach a conclusion.
Partial progress was made; we cite the two most important contributions. W.
Sawin showed that $\mathbb{A}^{2}(\mathbb{Q})\to\mathbb{A}^{1}(\mathbb{Q})$
can’t be bijective if the generic fiber has genus $0$ or $1$. H. Pasten showed
that, for some morphisms $\mathbb{A}^{2}\to\mathbb{A}^{1}$ with generic fiber
of genus at least $2$, the base change of $\mathbb{A}^{2}$ along the cover
$z^{2}-b\mathrel{\mathop{\ordinarycolon}}\mathbb{A}^{1}\to\mathbb{A}^{1}$ is
of general type for a generic $b$.
Theorem A is far more general than Theorem C, but it is possible to extract
from the proof of the former the minimal arguments needed in order to prove
the latter. These minimal arguments are a formalization of the ideas described
above, hence as far as Theorem C is concerned we have essentially filled in
the gaps in Tao’s strategy.
### Acknowledgements
I would like to thank Hélène Esnault for reading an earlier draft of the paper
and giving me a lot of valuable feedback, and Daniel Loughran for bringing to
my attention the problem of polynomial bijections
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$.
### Conventions
A variety over $k$ is a geometrically integral scheme of finite type over $k$.
A smooth, projective variety is of general type if its Kodaira dimension is
equal to its dimension: in particular, a point is a variety of general type.
We say that a variety is of general type if it is birational to a smooth,
projective variety of general type. More generally, we define the Kodaira
dimension of any variety $X$ as the Kodaira dimension of any smooth projective
variety birational to $X$.
Curves are assumed to be smooth, projective and geometrically connected. Given
a variety $X$ (resp. a scheme of finite type $X$) and $C$ a curve, a morphism
$X\to C$ is a family of varieties of general type (resp. of GeM schemes) if
the generic fiber is a variety of general type (resp. a GeM scheme). Given a
morphism $f\mathrel{\mathop{\ordinarycolon}}X\to C$, a generic section of $f$
is a morphism $s\mathrel{\mathop{\ordinarycolon}}\operatorname{Spec}k(C)\to X$
(equivalently, a rational map
$s\mathrel{\mathop{\ordinarycolon}}C\dashrightarrow X$) such that $f\circ s$
is the natural morphism $\operatorname{Spec}k(C)\to C$ (equivalently, the
identity $C\dashrightarrow C$).
## 2\. Pulling families to maximal Kodaira dimension
This section is of purely geometric nature, thus we may assume that $k$ is
algebraically closed of characteristic $0$ for simplicity. The results then
descend to non-algebraically closed fields with standard arguments.
Given a family $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ of
varieties of general type and
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ a finite
covering, let $f_{c}\mathrel{\mathop{\ordinarycolon}}X_{c}\to\mathbb{P}^{1}$
be the fiber product and, by abuse of notation,
$c\mathrel{\mathop{\ordinarycolon}}X_{c}\to X$ the base change of $c$. The
goal of this section is to obtain sufficient conditions on $c$ such that
$X_{c}$ is of general type. This goal will be reached in 2.13, which contains
all the geometry we’ll need for arithmetic applications.
Let us say that $X\to\mathbb{P}^{1}$ is birationally trivial if there exists a
birational map $X\dashrightarrow F\times\mathbb{P}^{1}$ which commutes
with the projection to $\mathbb{P}^{1}$. If $f$ is birationally trivial, then
clearly our goal is unreachable, since $X_{c}$ will have Kodaira dimension
$-\infty$ no matter which cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ we choose.
We will show that this is in fact the only exception.
Assume that $X$ is smooth and projective (we can always reduce to this case),
then the relative dualizing sheaf $\omega_{f}$ exists [Kle80, Corollary 24].
First, we show that for _every_ non-birationally trivial family there exists
an integer $m$ such that $f_{*}\omega_{f}^{m}$ has _some_ positivity 2.10.
Second, we show that if $f_{*}\omega_{f}^{m}$ has _enough_ positivity, then
$X$ is of general type 2.11. We then pass from "some" to "enough" positivity
by base changing along a cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$.
### 2.1. Positivity of $f_{*}\omega_{f}^{m}$
There are two cases: either there exists some finite cover
$c\mathrel{\mathop{\ordinarycolon}}C\to\mathbb{P}^{1}$ such that $X_{c}\to C$
is birationally trivial, or not. Let us say that
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ is _birationally
isotrivial_ in the first case, and non-birationally isotrivial in the second
case.
The non-birationally isotrivial case has been extensively studied by Viehweg
and Kollár; we don't need to do any additional work.
###### Proposition 2.1 (Kollár, Viehweg [Kol87, Theorem p.363]).
Let $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a non-
birationally isotrivial family of varieties of general type, with $X$ smooth
and projective. There exists an $m>0$ such that, in the decomposition of
$f_{*}\omega_{f}^{m}$ into a direct sum of line bundles, each factor has
positive degree.∎
We are thus left with studying the positivity of $f_{*}\omega_{f}^{m}$ in the
birationally isotrivial, non-birationally trivial case. We’ll have to deal
with various equivalent birational models of families, not always smooth, so
let us first compare their relative pluricanonical sheaves.
#### 2.1.1. Morphisms of pluricanonical sheaves
In this subsection, fix a base scheme $S$. If a morphism to $S$ is given, it
is tacitly assumed to be flat, locally projective, finitely presentable, with
Cohen-Macaulay equidimensional fibers of dimension $n$. For such a morphism
$f\mathrel{\mathop{\ordinarycolon}}X\to S$, the relative dualizing sheaf
$\omega_{f}$ exists and is coherent, see [Kle80, Theorem 21]. Recall that
$\omega_{f}$ satisfies the functorial isomorphism
$f_{*}\underline{\operatorname{Hom}}_{X}(F,\omega_{f}\otimes_{X}f^{*}N)\simeq\underline{\operatorname{Hom}}_{S}(R^{n}f_{*}F,N)$
for every quasi-coherent sheaf $F$ on $X$ and every quasi-coherent sheaf $N$
on $S$. Write $\omega_{f}^{\otimes m}$ for the $m$-th tensor power; we may
drop the superscript $\otimes$ and just write $\omega_{f}^{m}$ if
$\omega_{f}$ is a line bundle.
Every flat, projective map $f\mathrel{\mathop{\ordinarycolon}}X\to S$ of
smooth varieties over $k$ satisfies the above, see [Kle80, Corollary 24], and
in this case we can compute $\omega_{f}$ as $\omega_{X}\otimes
f^{*}\omega_{S}^{-1}$, where $\omega_{X}$ and $\omega_{S}$ are the usual
canonical bundles. Moreover, the relative dualizing sheaf behaves well under
base change along morphisms $S^{\prime}\to S$, see [Kle80, Proposition 9.iii].
Given a morphism $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ over $S$ and a
quasi-coherent sheaf $F$ over $Y$, then $R^{n}f_{*}(g_{*}F)$ is the
$E^{n,0}_{2}$ term of the Grothendieck spectral sequence $(R^{p}f_{*}\circ
R^{q}g_{*})(F)\Rightarrow R^{p+q}(f\circ g)_{*}(F)$; thus there is a natural morphism
$R^{n}f_{*}(g_{*}F)\to R^{n}(fg)_{*}F$. This induces a natural map
$\operatorname{Hom}_{Y}(F,\omega_{fg})=\operatorname{Hom}_{S}(R^{n}(fg)_{*}F,\mathcal{O}_{S})\to\operatorname{Hom}_{S}(R^{n}f_{*}(g_{*}F),\mathcal{O}_{S})=\operatorname{Hom}_{X}(g_{*}F,\omega_{f}).$
###### Definition 2.2.
If $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ is a morphism over $S$, define
$g_{\scaleto{\triangle}{0.5em},f}\mathrel{\mathop{\ordinarycolon}}g_{*}(\omega_{fg})\to\omega_{f}$
as the sheaf homomorphism induced by the identity of $\omega_{fg}$ via the
homomorphism
$\operatorname{Hom}_{Y}(\omega_{fg},\omega_{fg})\to\operatorname{Hom}_{X}(g_{*}\omega_{fg},\omega_{f})$
given above for $F=\omega_{fg}$. With an abuse of notation, call
$g_{\scaleto{\triangle}{0.5em},f}$ the induced sheaf homomorphism
$g_{*}(\omega_{fg}^{\otimes m})\to\omega_{f}^{\otimes m}$ for every $m\geq 0$.
If there is no risk of confusion, we may drop the subscript $f$ and just
write $g_{\scaleto{\triangle}{0.5em}}$.
The following facts are straightforward formal consequences of the definition
of $g_{\scaleto{\triangle}{0.5em}}$; we omit the proofs.
###### Lemma 2.3.
Let $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ be a morphism over $S$ and
$s\mathrel{\mathop{\ordinarycolon}}S^{\prime}\to S$ any morphism,
$f^{\prime}\mathrel{\mathop{\ordinarycolon}}X^{\prime}\to S^{\prime}$,
$g^{\prime}\mathrel{\mathop{\ordinarycolon}}Y^{\prime}\to X^{\prime}$ the
pullbacks to $S^{\prime}$. By abuse of notation, call $s$ the morphisms
$Y^{\prime}\to Y$, $X^{\prime}\to X$, too. Then
$g^{\prime}_{\scaleto{\triangle}{0.5em}}=g_{\scaleto{\triangle}{0.5em}}|_{X^{\prime}}\in\operatorname{Hom}_{X^{\prime}}(g^{\prime}_{*}\omega_{f^{\prime}g^{\prime}},\omega_{f^{\prime}})=\operatorname{Hom}_{X^{\prime}}(s^{*}g_{*}\omega_{fg},s^{*}\omega_{f}).$
∎
###### Lemma 2.4.
For every quasi-coherent sheaf $F$ on $Y$, the natural map
$\operatorname{Hom}_{Y}(F,\omega_{fg})\to\operatorname{Hom}_{X}(g_{*}F,\omega_{f})$
constructed above is given by
$\varphi\mapsto g_{\scaleto{\triangle}{0.5em}}\circ
g_{*}\varphi\mathrel{\mathop{\ordinarycolon}}g_{*}F\to
g_{*}\omega_{fg}\to\omega_{f}.$
∎
###### Corollary 2.5.
Let $h\mathrel{\mathop{\ordinarycolon}}Z\to Y$,
$g\mathrel{\mathop{\ordinarycolon}}Y\to X$ be morphisms over $S$. Then, for
every $m\geq 0$,
$g_{\scaleto{\triangle}{0.5em}}\circ
g_{*}h_{\scaleto{\triangle}{0.5em}}=(gh)_{\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}(gh)_{*}\omega_{fgh}^{\otimes
m}\to g_{*}\omega_{fg}^{\otimes m}\to\omega_{f}^{\otimes m}.$
∎
###### Corollary 2.6.
Let $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ be a morphism over $S$. Suppose
that a group $H$ acts on $Y,X,S$ and $g,f$ are $H$-equivariant. Then
$g_{*}\omega_{fg}^{\otimes m},\omega_{f}^{\otimes m}$ are $H$-equivariant
sheaves and
$g_{\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}g_{*}\omega_{fg}^{\otimes
m}\to\omega_{f}^{\otimes m}$ is $H$-equivariant.∎
###### Lemma 2.7.
Let $g\mathrel{\mathop{\ordinarycolon}}Y\to X$ be a morphism over $S$. Assume
that $Y,X$ are smooth varieties over a field $k$, and that $g$ is birational.
Then $g_{\scaleto{\triangle}{0.5em}}$ is an isomorphism.
###### Proof.
We have $\omega_{f}=\omega_{X}\otimes f^{*}\omega_{S}^{-1}$ and
$\omega_{fg}=\omega_{Y}\otimes(fg)^{*}\omega_{S}^{-1}$. Moreover,
$\omega_{Y}=g^{*}\omega_{X}\otimes\mathcal{O}_{Y}(R)$ where $R$ is some
effective divisor whose irreducible components are contracted by $g$, hence
$\omega_{fg}=g^{*}\omega_{f}\otimes\mathcal{O}_{Y}(R)$. Since
$g_{*}\mathcal{O}_{Y}(mR)\simeq\mathcal{O}_{X}$, we have a natural isomorphism
$g_{*}(\omega_{fg}^{m})\simeq\omega_{f}^{m}$ by projection formula. This is
easily checked to correspond to $g_{\scaleto{\triangle}{0.5em}}$, which is
then an isomorphism as desired. ∎
#### 2.1.2. Birationally isotrivial families
Let $C$ be a smooth projective curve and
$f\mathrel{\mathop{\ordinarycolon}}X\to C$ a birationally isotrivial family of
varieties of general type, and let $F/k$ be a smooth projective variety such
that the generic fiber of $f$ is birational to $F$. Let $H$ be the finite
group of birational automorphisms of $F$. The scheme of fiberwise birational
isomorphisms $\operatorname{Bir}(X/C,F)\to C$ restricts to an $H$-torsor on
some non-empty open subset $V$ of $C$. The action of $H$ on
$\operatorname{Bir}(X/C,F)|_{V}$ is transitive on the connected components,
thus they are all birational.
###### Definition 2.8.
In the situation above, define $b\mathrel{\mathop{\ordinarycolon}}B_{f}\to C$
as the smooth completion of any connected component of
$\operatorname{Bir}(X/C,F)|_{V}$, and $G_{f}\subseteq H$ as the subgroup of
elements mapping $B_{f}$ to itself. Let us call $B_{f}\to C$ and $G_{f}$ the
_monodromy cover_ and the _monodromy group_ of $f$ respectively.
We have that $B_{f}\to C$ is a $G_{f}$-Galois covering characterized by the
following universal property: if $C^{\prime}$ is a smooth projective curve
with a finite morphism $c\mathrel{\mathop{\ordinarycolon}}C^{\prime}\to C$,
then $X_{c}\to C^{\prime}$ is birationally trivial if and only if there exists
a factorization $C^{\prime}\to B_{f}\to C$.
###### Proposition 2.9.
Let $f\mathrel{\mathop{\ordinarycolon}}X\to C$ be a birationally isotrivial
family of varieties of general type, with $X$ smooth and projective. If $p\in
B_{f}$ is a ramification point of the monodromy cover
$b\mathrel{\mathop{\ordinarycolon}}B_{f}\to C$, then for some $m$ there exists
an injective sheaf homomorphism $\mathcal{O}_{B_{f}}(p)\to
f_{b*}\omega_{f_{b}}^{m}$.
###### Proof.
The statement is equivalent to the existence of a non-trivial section of
$\omega_{f_{b}}^{m}$ which vanishes on the fiber $X_{b,p}$. Let $F$ be as
above, $G_{f}$ acts faithfully with birational maps on $F$. By equivariant
resolution of singularities, we may assume that $G_{f}$ acts faithfully by
isomorphisms on $F$. We have that $X$ is birational to $(F\times B_{f})/G_{f}$
where $G_{f}$ acts diagonally.
By resolution of singularities, let $X^{\prime}$ be a smooth projective
variety with birational morphisms $X^{\prime}\to X$, $X^{\prime}\to(F\times
B_{f})/G_{f}$: thanks to 2.7 we may replace $X$ with $X^{\prime}$ and assume
we have a birational morphism $X\to(F\times B_{f})/G_{f}$. By equivariant
resolution of singularities again, we may find a smooth projective variety $Y$
with an action of $G_{f}$, a birational morphism
$g\mathrel{\mathop{\ordinarycolon}}Y\to X_{b}$ and a birational,
$G_{f}$-equivariant morphism $y\mathrel{\mathop{\ordinarycolon}}Y\to F\times
B_{f}$. Call $\pi\mathrel{\mathop{\ordinarycolon}}F\times B_{f}\to B_{f}$ the
projection.
[Commutative diagram relating $Y$, $X_{b}$, $X$, $F\times B_{f}$, $(F\times B_{f})/G_{f}$, $B_{f}$ and $C$, with arrows $g\mathrel{\mathop{\ordinarycolon}}Y\to X_{b}$, $y\mathrel{\mathop{\ordinarycolon}}Y\to F\times B_{f}$, $b\mathrel{\mathop{\ordinarycolon}}X_{b}\to X$, $\pi\mathrel{\mathop{\ordinarycolon}}F\times B_{f}\to B_{f}$, $\pi y\mathrel{\mathop{\ordinarycolon}}Y\to B_{f}$, $f_{b}\mathrel{\mathop{\ordinarycolon}}X_{b}\to B_{f}$ and $b\mathrel{\mathop{\ordinarycolon}}B_{f}\to C$.]
Recall that we are trying to find a global section of $\omega_{f_{b}}^{m}$
that vanishes on $X_{b,p}$, where $p$ is a ramification point of $b$. Thanks
to 2.7, we have that $(\pi y)_{*}\omega_{\pi y}^{m}\simeq\pi_{*}\omega_{\pi}^{m}\simeq\mathcal{O}_{B_{f}}\otimes\operatorname{H}^{0}(F,\omega_{F}^{m})$,
thus $\operatorname{H}^{0}(Y,\omega_{\pi
y}^{m})=\operatorname{H}^{0}(F,\omega_{F}^{m})=\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})$.
The sheaf homomorphism
$g_{\scaleto{\triangle}{0.5em}}=g_{\scaleto{\triangle}{0.5em},f_{b}}\mathrel{\mathop{\ordinarycolon}}g_{*}\omega_{\pi
y}^{m}\to\omega_{f_{b}}^{m}$ induces a linear map
$g_{\scaleto{\triangle}{0.5em}}(p)\mathrel{\mathop{\ordinarycolon}}\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})=\operatorname{H}^{0}(Y,\omega_{\pi
y}^{m})\xrightarrow{g_{\scaleto{\triangle}{0.5em}}}\operatorname{H}^{0}(X_{b},\omega_{f_{b}}^{m})\xrightarrow{\bullet|_{p}}\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$
where the last map is the restriction to the fiber. Let $V\subseteq B_{f}$ be
the étale locus of $b\mathrel{\mathop{\ordinarycolon}}B_{f}\to C$. Since
$X_{b}|_{V}$ is smooth, then $g_{\scaleto{\triangle}{0.5em}}$ restricts to an
isomorphism on $X_{b}|_{V}$ thanks to 2.7 and thus the map
$\operatorname{H}^{0}(Y,\omega_{\pi
y}^{m})\to\operatorname{H}^{0}(X_{b},\omega_{f_{b}}^{m})$ is injective.
We want to show that the restriction map
$\operatorname{H}^{0}(X_{b},\omega_{f_{b}}^{m})\to\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$
is not injective for some $m$; for this, it is enough to show that
$g_{\scaleto{\triangle}{0.5em}}(p)$ is not injective. Thanks to 2.3, we have
that
$g_{\scaleto{\triangle}{0.5em}}(p)=g_{p,\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})\to\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$.
Recall now that $G_{f}$ acts on $Y$. Let $G_{f,p}$ be the stabilizer of $p\in
B_{f}$; it is a non-trivial group since $p$ is a ramification point. Thanks to
2.6, the stabilizer $G_{f,p}$ acts naturally on
$\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})$,
$\operatorname{H}^{0}(F,\omega_{F}^{m})$,
$\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$, and the maps
$y_{p,\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})\simeq\operatorname{H}^{0}(F,\omega_{F}^{m})$,
$g_{p,\scaleto{\triangle}{0.5em}}\mathrel{\mathop{\ordinarycolon}}\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})\to\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$
are $G_{f,p}$-equivariant. Moreover, the action on
$\operatorname{H}^{0}(X_{b,p},\omega_{X_{b,p}}^{m})$ is trivial since the
action on $X_{b,p}$ is trivial. It follows that
$g_{\scaleto{\triangle}{0.5em}}(p)$ is $G_{f,p}$-invariant, and hence to show
that it is not injective for some $m$ it is enough to show that the action of
$G_{f,p}$ on
$\operatorname{H}^{0}(F,\omega_{F}^{m})=\operatorname{H}^{0}(Y_{p},\omega_{Y_{p}}^{m})$
is not trivial for some $m$.
Since $F$ is of general type,
$F\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(F,\omega_{F}^{m}))$ is
generically injective for some $m$; fix such an $m$. Since the action of $G_{f,p}$ on
$F$ is faithful, for every non-trivial $g\in G_{f,p}$ there exists a section
$s\in\operatorname{H}^{0}(F,\omega_{F}^{m})$ and a point $v\in F$ such that
$s(v)=0$ and $s(g(v))\neq 0$, in particular the action of $G_{f,p}$ on
$\operatorname{H}^{0}(F,\omega_{F}^{m})$ is not trivial and we conclude. ∎
###### Corollary 2.10.
Let $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a non-
birationally trivial family of varieties of general type, with $X$ smooth and
projective. Then there exists an $m$ with an injective homomorphism
$\mathcal{O}(1)\to f_{*}\omega_{f}^{m}$.
###### Proof.
If $f$ is not birationally isotrivial, apply 2.1. Otherwise, $f$ is
birationally isotrivial and not birationally trivial, thus the monodromy cover
$b\mathrel{\mathop{\ordinarycolon}}B_{f}\to\mathbb{P}^{1}$ is not trivial.
Since $\mathbb{P}^{1}$ has no non-trivial étale covers, we have that
$B_{f}\to\mathbb{P}^{1}$ has at least one ramification point $p$. Let $m$ be
the integer given by 2.9, and write
$f_{*}\omega_{f}^{m}=\bigoplus_{i}\mathcal{O}_{\mathbb{P}^{1}}(d_{i})$. Since
$\mathcal{O}_{B_{f}}(p)\subseteq f_{b*}\omega_{f_{b}}^{m}$ and
$\omega_{f_{b}}=b^{*}\omega_{f}$, see [Kle80, Proposition 9.iii], there exists
an $i$ with $d_{i}>0$. ∎
### 2.2. Pulling families to maximal Kodaira dimension
Now that we have established a positivity result for $f_{*}\omega_{f}^{m}$ of
any non-birationally trivial family
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$, let us use this to
pull families to maximal Kodaira dimension.
###### Proposition 2.11.
Let $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a family of
varieties of general type, with $X$ smooth and projective. Then $X$ is of
general type if and only if there exists an injective homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(1)\to f_{*}\omega_{X}^{m_{0}}$, or equivalently
$\mathcal{O}_{\mathbb{P}^{1}}(2m_{0}+1)\to f_{*}\omega_{f}^{m_{0}}$, for some
$m_{0}>0$. (The equivalence follows from $\omega_{X}=\omega_{f}\otimes f^{*}\mathcal{O}_{\mathbb{P}^{1}}(-2)$, which gives $f_{*}\omega_{X}^{m_{0}}=f_{*}\omega_{f}^{m_{0}}\otimes\mathcal{O}_{\mathbb{P}^{1}}(-2m_{0})$.)
###### Proof.
By resolution of singularities, there exists a birational morphism
$g\mathrel{\mathop{\ordinarycolon}}X^{\prime}\to X$ with $X^{\prime}$ smooth
and projective such that the generic fiber of $X^{\prime}\to\mathbb{P}^{1}$ is
smooth and projective. We have
$\omega_{X^{\prime}}=g^{*}\omega_{X}\otimes\mathcal{O}_{X^{\prime}}(R)$ where
$R$ is some effective divisor whose irreducible components are contracted by
$g$, hence $g_{*}\omega_{X^{\prime}}^{m}=\omega_{X}^{m}\otimes
g_{*}\mathcal{O}_{X^{\prime}}(mR)=\omega_{X}^{m}$ for every $m\geq 0$. We may thus replace $X$ with
$X^{\prime}$ and assume that the generic fiber is smooth. This guarantees that
$\operatorname{rank}f_{*}\omega_{X}^{m}=\operatorname{rank}f_{*}\omega_{f}^{m}$
has growth $O(m^{\dim X-1})$.
If there are no injective homomorphisms $\mathcal{O}_{\mathbb{P}^{1}}(1)\to
f_{*}\omega_{X}^{m}$ for any $m>0$, then
$\operatorname{h}^{0}(\omega_{X}^{m})\leq\operatorname{rank}f_{*}\omega_{X}^{m}=\operatorname{rank}f_{*}\omega_{f}^{m}$,
and this has growth $O(m^{\dim X-1})$.
On the other hand, let $\mathcal{O}_{\mathbb{P}^{1}}(1)\to
f_{*}\omega_{X}^{m_{0}}$ be an injective homomorphism for some $m_{0}>0$. In
particular, $X$ has Kodaira dimension $\geq 0$.
For some $m$, the closure $Y$ of the image of
$X\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,\omega_{X}^{mm_{0}}))$ has
dimension equal to the Kodaira dimension of $X$ and $k(Y)$ is algebraically
closed in $k(X)$, see [Iit71, §3]. If $X^{\prime}$ is a smooth projective
variety birational to $X$, then there is a natural isomorphism
$\operatorname{H}^{0}(X,\omega_{X}^{mm_{0}})=\operatorname{H}^{0}(X^{\prime},\omega_{X^{\prime}}^{mm_{0}})$,
see [Har77, Ch. 2, Theorem 8.19]. Thus, up to replacing $X$ with some other
smooth, projective variety birational to $X$, we may assume that
$X\dashrightarrow
Y\subseteq\mathbb{P}(\operatorname{H}^{0}(X,\omega_{X}^{mm_{0}}))$ is defined
everywhere and has smooth, projective generic fiber $Z$ by resolution of
singularities. Iitaka has then shown that $Z$ has Kodaira dimension $0$, see
[Iit71, Theorem 5]. This is easy to see in the case in which
$\omega_{X}^{mm_{0}}$ is base point free, since then $\omega_{X}^{mm_{0}}$ is
the pullback of $\mathcal{O}(1)$ and thus
$\omega_{Z}^{mm_{0}}=\omega_{X}^{mm_{0}}|_{Z}$ is trivial.
Let us recall briefly Grothendieck’s convention that, if $V$ is a vector
bundle, then $\mathbb{P}(V)$ is the set (or scheme) of linear quotients $V\to
k$ up to a scalar. A non-trivial linear map $W\to V$ thus induces a rational
map $\mathbb{P}(V)\dashrightarrow\mathbb{P}(W)$ by restriction. If $L$ is a
line bundle with non-trivial global sections, the rational map
$X\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,L))$ is defined by sending
a point $x\in X$ outside the base locus to the quotient
$\operatorname{H}^{0}(X,L)\to L_{x}\simeq k$. If $L$ embeds in another line
bundle $M$, then there is a natural factorization
$X\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,M))\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,L))$,
and any point of $X$ outside the support of $M/L$ and outside the base locus
of $L$ maps to the locus of definition of
$\mathbb{P}(\operatorname{H}^{0}(X,M))\dashrightarrow\mathbb{P}(\operatorname{H}^{0}(X,L))$.
Let $F\subseteq X$ be the fiber over any rational point of $\mathbb{P}^{1}$.
The injective homomorphism $\mathcal{O}_{\mathbb{P}^{1}}(1)\to
f_{*}\omega_{X}^{m_{0}}$ induces an injective homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(m)\to f_{*}\omega_{X}^{mm_{0}}$, choose any
embedding $\mathcal{O}_{\mathbb{P}^{1}}(1)\to\mathcal{O}_{\mathbb{P}^{1}}(m)$,
these induce an injective homomorphism
$\mathcal{O}_{X}(F)\to\omega_{X}^{mm_{0}}$. Since $\mathcal{O}_{X}(F)$ induces
the morphism $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$, the
composition
$X\to
Y\subseteq\mathbb{P}(\operatorname{H}^{0}(X,\omega_{X}^{mm_{0}}))\dashrightarrow\mathbb{P}^{1}$
coincides with $f$. Observe that the right arrow depends on the choice of the
embedding $\mathcal{O}_{X}(F)\to\omega_{X}^{mm_{0}}$, but the composition
doesn’t.
Let $\xi$ be the generic point of $\mathbb{P}^{1}$, $U\subseteq Y$ an open
subset such that $U\to\mathbb{P}^{1}$ is defined, $Y_{\xi}$ the closure of
$U_{\xi}$ in $Y$. Then the generic fiber $Z$ of $X\to Y$ is the generic fiber
of $X_{\xi}\to Y_{\xi}$, too. By hypothesis, $X_{\xi}$ is of general type,
thus by adjunction $\omega_{X_{\xi}}|_{Z}=\omega_{Z}$ is big and hence $Z$ is
of general type.
Since $Z$ is a variety of general type of Kodaira dimension $0$ over
$\operatorname{Spec}k(Y)$, then $Z=\operatorname{Spec}k(Y)$, the morphism
$X\to Y$ is generically injective and thus $X$ is of general type. ∎
###### Remark 2.12.
We don’t actually need the precision of 2.11: for our purposes it is enough to
show that, if $f_{*}\omega_{X}^{m_{0}}$ has a positive _enough_ sub-line
bundle for some $m_{0}$, then $X$ is of general type. This weaker fact has a
more direct proof; let us sketch it.
First, let us mention an elementary fact about injective sheaf homomorphisms.
Let $P,Q$ be vector bundles on $\mathbb{P}^{1}$ and $M,N$ vector bundles on
$X$, with $P$ of rank $1$. Suppose we are given injective homomorphisms
$m\in\operatorname{Hom}(P,f_{*}M)$, $n\in\operatorname{Hom}(Q,f_{*}N)$. Then
$m^{a}\otimes n\in\operatorname{Hom}(P^{\otimes a}\otimes Q,f_{*}(M^{\otimes
a}\otimes N))$ is injective for every $a>0$: this can be checked on the
generic point of $\mathbb{P}^{1}$ and thus on the generic fiber
$X_{k(\mathbb{P}^{1})}$, where the fact that $P$ has rank $1$ allows us to
reduce to the fact that the tensor product of non-zero sections of vector
bundles is non-zero on an integral scheme.
Assume we have an injective homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(3m_{0})\to f_{*}\omega_{X}^{m_{0}}$, or
equivalently $\mathcal{O}_{\mathbb{P}^{1}}(5m_{0})\to
f_{*}\omega_{f}^{m_{0}}$, we want to prove that $X$ is of general type. Let
$r(m)$ be the rank of $f_{*}\omega_{f}^{mm_{0}}$ for every $m$. Since the generic
fiber is of general type, up to replacing $m_{0}$ by a multiple
$m_{0}^{\prime}$ we may assume that the growth of $r(m)$ is $O(m^{\dim X-1})$.
The induced morphism $\mathcal{O}_{\mathbb{P}^{1}}(5m_{0}^{\prime})\to
f_{*}\omega_{f}^{m_{0}^{\prime}}$ is injective thanks to the above.
Thanks to [Vie83, Theorem III], every line bundle in the decomposition of
$f_{*}\omega_{f}^{mm_{0}}$ has non-negative degree; we may thus choose an
injective homomorphism $\mathcal{O}_{\mathbb{P}^{1}}^{r(m)}\to
f_{*}\omega_{f}^{mm_{0}}$. Taking the tensor product with the $m$-th power of
the homomorphism given by the hypothesis, we get a homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(5mm_{0})^{r(m)}\to f_{*}\omega_{f}^{2mm_{0}}$
which is injective thanks to the above.
Since
$f_{*}\omega_{X}^{2mm_{0}}=f_{*}\omega_{f}^{2mm_{0}}\otimes\mathcal{O}_{\mathbb{P}^{1}}(-4mm_{0})$,
we thus have an injective homomorphism
$\mathcal{O}_{\mathbb{P}^{1}}(mm_{0})^{r(m)}\to f_{*}\omega_{X}^{2mm_{0}}.$
In particular, we have
$\operatorname{h}^{0}(\omega_{X}^{2mm_{0}})\geq(mm_{0}+1)r(m)$ which has
growth of order $m^{\dim X}$, hence $X$ is of general type.
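The final count can be checked symbolically. The sketch below (our own, taking $\dim X=3$ and absorbing $m_{0}$ and all constants into $c$) confirms that a lower bound of the form $(m+1)\cdot c\,m^{\dim X-1}$ grows like $m^{\dim X}$:

```python
from sympy import symbols, limit, oo

m, c = symbols('m c', positive=True)
d = 3                      # stand-in for dim X

r = c * m**(d - 1)         # growth of the rank of f_* omega_f^(m m0)
lower = (m + 1) * r        # the bound h^0(omega_X^(2 m m0)) >= (m m0 + 1) r(m), up to constants

# (m + 1) * c * m**(d-1) / m**d  ->  c, so the bound grows like m**d.
assert limit(lower / m**d, m, oo) == c
```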
###### Corollary 2.13.
Let $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ be a non-
birationally trivial family of varieties of general type. Then there exists an
integer $d_{0}$ and a non-empty open subset $U\subseteq\mathbb{P}^{1}$ such
that, for every finite cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ with $\deg
c\geq d_{0}$ and such that the branch points of $c$ are contained in $U$, we
have that $X_{c}$ is of general type. If $X$ is smooth and projective, $U$ can
be chosen as the largest open subset such that $f|_{f^{-1}(U)}$ is smooth.
###### Proof.
By resolution of singularities, we may assume that $X$ is smooth and
projective. By generic smoothness, there exists an open subset
$U\subseteq\mathbb{P}^{1}$ such that $f|_{X_{U}}$ is smooth. We have that
$X_{c}$ is smooth for every
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ whose
branch points are contained in $U$ since each point of $X_{c}$ is smooth
either over $X$ or over $\mathbb{P}^{1}$.
Let $m_{0}$ be the integer given by 2.10, so that we have an injective
homomorphism $\mathcal{O}(1)\to f_{*}\omega_{f}^{m_{0}}$. Set $d_{0}=2m_{0}+1$; for every
finite cover $c$ of degree $\deg c\geq d_{0}$ we have an induced
homomorphism $\mathcal{O}(2m_{0}+1)\to f_{c*}\omega_{f_{c}}^{m_{0}}$ and thus
$\mathcal{O}(1)\to f_{c*}\omega_{X_{c}}^{m_{0}}$. It follows that $X_{c}$ is
of general type thanks to 2.11. ∎
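The degree bookkeeping in this proof is elementary and worth making explicit; here is a tiny Python check (our own) of the twist arithmetic behind the choice $d_{0}=2m_{0}+1$:

```python
# Pulling back an injection O(1) -> f_* omega_f^(m0) along a degree-d cover
# gives O(d) -> f_(c*) omega_(f_c)^(m0), and omega_(X_c)^(m0) equals
# omega_(f_c)^(m0) twisted by O(-2 m0), so the twist surviving on
# f_(c*) omega_(X_c)^(m0) is d - 2 m0; we need it to be >= 1.
def surviving_twist(d, m0):
    return d - 2 * m0

m0 = 3
d0 = 2 * m0 + 1
assert surviving_twist(d0, m0) == 1
assert all(surviving_twist(d, m0) >= 1 for d in range(d0, d0 + 10))
```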
## 3\. Higher dimensional HIT
### 3.1. Pulling fat sets
Recall that Serre [Ser97, Chapter 9] defined a subset $S$ of
$\mathbb{P}^{1}(k)$ as _thin_ if there exists a morphism
$f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$ with $X$ of finite type
over $k$, finite generic fiber and no generic sections
$\operatorname{Spec}k(\mathbb{P}^{1})\to X$ such that $S\subseteq f(X(k))$.
It’s immediate to check that a subset of a thin set is thin, and a finite
union of thin sets is thin. Serre’s form of Hilbert’s irreducibility theorem
says that, if $k$ is finitely generated over $\mathbb{Q}$, then
$\mathbb{P}^{1}(k)$ is not thin.
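For concreteness (a toy sympy sketch of our own, not from the text): the archetypal thin set is the set of squares in $\mathbb{Q}$, exhibited by the double cover $y\mapsto y^{2}$; the specialization $x^{2}-t$ stays irreducible exactly when the fiber over $t$ has no rational point.

```python
from sympy import symbols, factor_list

x = symbols('x')

def fiber_is_irreducible(t):
    # x^2 - t is irreducible over Q iff t is not a rational square (and t != 0),
    # i.e. iff the fiber of the double cover y -> y^2 over t has no rational point.
    _, factors = factor_list(x**2 - t)
    return len(factors) == 1 and factors[0][1] == 1

# The exceptional specializations form a thin set: here, the perfect squares.
thin = [t for t in range(101) if not fiber_is_irreducible(t)]
print(thin)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```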
###### Definition 3.1.
A subset $S\subseteq\mathbb{P}^{1}(k)$ is _fat_ if the complement
$\mathbb{P}^{1}(k)\setminus S$ is thin.
Given a subset $S\subseteq\mathbb{P}^{1}(k)$, a finite set of finite morphisms
$D=\\{d_{i}\mathrel{\mathop{\ordinarycolon}}D_{i}\to\mathbb{P}^{1}\\}_{i}$
each of degree $>1$ with $D_{i}$ smooth, projective and geometrically
connected is a _scale_ for $S$ if
$S\cup\bigcup_{i}d_{i}(D_{i}(k))=\mathbb{P}^{1}(k)$. The set of branch points
of the scale $D$ is the union of the sets of branch points of $d_{i}$.
Using the fact that a connected scheme with a rational point is geometrically
connected [Sta20, Lemma 04KV], it’s immediate to check that a subset of
$\mathbb{P}^{1}$ is fat if and only if it has a scale. The set of branch
points of a scale gives valuable information about a fat set.
###### Lemma 3.2.
Let $S\subseteq\mathbb{P}^{1}$ be a fat set, and let
$D=\\{d_{i}\mathrel{\mathop{\ordinarycolon}}D_{i}\to\mathbb{P}^{1}\\}_{i}$ be
a scale for $S$. Let
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ be a
morphism such that the sets of branch points of $c$ and $D$ are disjoint. Then
$c^{-1}(S)$ is fat.
###### Proof.
Let
$d_{i}^{\prime}\mathrel{\mathop{\ordinarycolon}}D_{i}^{\prime}\to\mathbb{P}^{1}$
be the base change of $d_{i}$ along
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$. By
construction,
$c^{-1}(S)\cup\bigcup_{i}d_{i}^{\prime}(D_{i}^{\prime}(k))=\mathbb{P}^{1}(k)$.
Since the sets of branch points of $c$ and $d_{i}$ are disjoint, we have that
$D_{i}^{\prime}$ is geometrically connected, see for instance [Str20, Lemma
2.8]. Moreover, $D_{i}^{\prime}$ is smooth since each point of
$D_{i}^{\prime}$ is étale either over $\mathbb{P}^{1}$ or $D_{i}$. It follows
that $d_{i}^{\prime}$ has degree $>1$ and
$\\{d_{i}^{\prime}\mathrel{\mathop{\ordinarycolon}}D_{i}^{\prime}\to\mathbb{P}^{1}\\}_{i}$
is a scale for $c^{-1}(S)$, which is thus fat. ∎
### 3.2. Decreasing the fiber dimension
Let us now prove Theorem A. Using Hilbert’s irreducibility, it’s easy to check
that Theorem A is equivalent to the following statement.
If the generic fiber of $f\mathrel{\mathop{\ordinarycolon}}X\to\mathbb{P}^{1}$
is GeM and $f(X(k))$ is fat, then there exists a generic section
$\operatorname{Spec}k(\mathbb{P}^{1})\to X$.
We prove this statement by induction on the dimension of the generic fiber. If
the generic fiber has dimension $0$, this follows from the definition of fat
set. Let us prove the inductive step.
We define recursively a sequence of closed subschemes $X_{i+1}\subseteq X_{i}$
with $X_{0}=X$ and such that $f(X_{i}(k))\subseteq\mathbb{P}^{1}_{k}$ is fat.
* •
Define $X^{\prime}_{i}$ as the closure of $X_{i}(k)$ with the reduced scheme
structure; then $f(X^{\prime}_{i}(k))=f(X_{i}(k))\subseteq\mathbb{P}^{1}_{k}$ is
fat.
* •
Define $X^{\prime\prime}_{i}$ as the union of the irreducible components of
$X^{\prime}_{i}$ which dominate $\mathbb{P}^{1}$; then
$f(X^{\prime\prime}_{i}(k))\subseteq\mathbb{P}^{1}_{k}$ is fat since
$f(X_{i}^{\prime}(k))\setminus f(X_{i}^{\prime\prime}(k))$ is finite.
* •
Write $X^{\prime\prime}_{i}=\bigcup_{j}Y_{i,j}$ as a union of irreducible
components; $Y_{i,j}\to\mathbb{P}^{1}$ is dominant for every $j$. For every
$j$, there exists a finite cover $C_{i,j}\to\mathbb{P}^{1}$ with $C_{i,j}$
smooth projective and a rational map $Y_{i,j}\dashrightarrow C_{i,j}$ with
geometrically irreducible generic fiber. If $C_{i,j}\to\mathbb{P}^{1}$ is an
isomorphism, define $Z_{i,j}=Y_{i,j}$. Otherwise, there exists a non-empty
open subset $V_{i,j}\subseteq Y_{i,j}$ such that $Y_{i,j}\dashrightarrow
C_{i,j}$ is defined on $V_{i,j}$. In particular,
$f(V_{i,j}(k))\subseteq\mathbb{P}^{1}(k)$ is thin. Define
$Z_{i,j}=Y_{i,j}\setminus V_{i,j}$ and $X_{i+1}=\bigcup_{j}Z_{i,j}\subseteq
X_{i}$. By construction, $f(X_{i+1}(k))\subseteq\mathbb{P}^{1}(k)$ is fat
since $f(X_{i}^{\prime\prime}(k))\setminus f(X_{i+1}(k))$ is thin.
By noetherianity, the sequence is eventually stable; let $r$ be such that
$X_{r+1}=X_{r}$. Since $X_{r+1}=X_{r}$, the set $X_{r}(k)$ is dense in $X_{r}$,
thus every irreducible component is geometrically irreducible, see [Sta20,
Lemma 0G69]. Moreover, every irreducible component of $X_{r}$ dominates
$\mathbb{P}^{1}$ with geometrically irreducible generic fiber. Replace $X$
with $X_{r}$ and write $X=\bigcup_{j}Y_{j}$ as union of irreducible
components, we may assume that $Y_{j}\to\mathbb{P}^{1}$ is a family of GeM
varieties for every $j$ and $Y_{j}(k)$ is dense in $Y_{j}$.
If $Y_{j}\to\mathbb{P}^{1}$ is birationally trivial for some $j$, since
$Y_{j}(k)$ is dense in $Y_{j}$ and a generic fiber of $Y_{j}\to\mathbb{P}^{1}$
has a finite number of rational points, then $\dim Y_{j}=0$,
$Y_{j}\to\mathbb{P}^{1}$ is birational and we conclude. Otherwise, thanks to
2.13, there exists an integer $d_{0}$ and a non-empty open subset
$U\subseteq\mathbb{P}^{1}$ such that, for every finite cover
$c\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ with $\deg
c\geq d_{0}$ such that the branch points of $c$ are contained in $U$, we have
that $Y_{j,c}$ is of general type for every $j$.
Let $D=\\{d_{l}\mathrel{\mathop{\ordinarycolon}}D_{l}\to\mathbb{P}^{1}\\}$ be
a scale for $f(X(k))$. Up to shrinking $U$ furthermore, we may assume that the
set of branch points of $D$ is disjoint from $U$. Since we are assuming that
the weak Bombieri-Lang conjecture holds up to dimension $\dim X$, the
dimension of $\overline{Y_{j,c}(k)}\subseteq Y_{j,c}$ is strictly smaller than
$\dim Y_{j}$ for every $j$. Moreover, we have that
$f_{c}(X_{c}(k))=c^{-1}(f(X(k)))$ is fat thanks to 3.2. It follows that,
by induction hypothesis, there exists a generic section
$\operatorname{Spec}k(\mathbb{P}^{1})\to X_{c}$ for _every_ finite cover $c$
as above. There are a lot of such covers: let us show that we can choose them
so that the resulting sections "glue" to a generic section
$\operatorname{Spec}k(\mathbb{P}^{1})\to X$.
### 3.3. Gluing sections
Choose coordinates on $\mathbb{P}^{1}$ so that $0,\infty\in U$. For any
positive integer $n$, let
$m_{n}\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\to\mathbb{P}^{1}$ be the
$n$-th power map; its branch points are $0$ and $\infty$, which lie in $U$.
We have shown above that there exists a rational section
$\mathbb{P}^{1}\dashrightarrow X_{m_{p}}$ for every prime $p\geq d_{0}$; call
$s_{p}\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\dashrightarrow
X_{m_{p}}\to X$ the composition.
We either assume that there exists an integer $N$ such that, for every
rational point $v\in\mathbb{P}^{1}(k)$, we have $|X_{v}(k)|\leq N$ or that the
Bombieri-Lang conjecture holds in every dimension. In the second case, the
uniform bound $N$ exists thanks to a theorem of Caporaso-Harris-Mazur and
Abramovich-Voloch [CHM97, Theorem 1.1] [AV96, Theorem 1.5] [Abr97]. Choose
$N+1$ prime numbers $p_{0},\dots,p_{N}$ greater than $d_{0}$; for each one we
have a rational section
$s_{p_{i}}\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\dashrightarrow X$
satisfying $f\circ s_{p_{i}}=m_{p_{i}}$.
Let $Q=\prod_{i=0}^{N}p_{i}$. For every $i=0,\dots,N$, we get a rational
section $S_{p_{i}}=s_{p_{i}}\circ
m_{Q/p_{i}}\mathrel{\mathop{\ordinarycolon}}\mathbb{P}^{1}\dashrightarrow X$,
which satisfies $f\circ S_{p_{i}}=m_{Q}$.
Let $V\subseteq\mathbb{P}^{1}$ be an open subset such that $S_{p_{i}}$ is
defined on $V$ for every $i$. For every rational point $v\in V(k)$ we have
$|X_{m_{Q}(v)}(k)|\leq N$, so by pigeonhole there are indexes $i\neq j$ with
$S_{p_{i}}(v)=S_{p_{j}}(v)$; since there are only finitely many pairs of
indexes and $V(k)$ is infinite, some pair $i\neq j$ satisfies
$S_{p_{i}}(v)=S_{p_{j}}(v)$ for infinitely many $v\in V(k)$, hence
$S_{p_{i}}=S_{p_{j}}$. Let $Z\subseteq X$ be the image of
$S_{p_{i}}=S_{p_{j}}$; by construction we have
$k(\mathbb{P}^{1})=k(t)\subseteq k(Z)\subseteq k(t^{1/p_{i}})\cap
k(t^{1/p_{j}})\subseteq k(t^{1/Q}).$
Using Galois theory on the cyclic extension $k(t^{1/Q})/k(t)$, it is immediate
to check that $k(t^{1/p_{i}})\cap k(t^{1/p_{j}})=k(t)\subseteq k(t^{1/Q})$ since
$p_{i},p_{j}$ are coprime, thus $k(Z)=k(t)$ and $Z\to\mathbb{P}^{1}$ is
birational. This concludes the proof of Theorem A.
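The field-theoretic step admits a simple numerical analogue (our own sketch, with $2$ standing in for the transcendental $t$): a common subfield of $\mathbb{Q}(2^{1/p_{i}})$ and $\mathbb{Q}(2^{1/p_{j}})$ has degree dividing both $p_{i}$ and $p_{j}$, hence is $\mathbb{Q}$ when the primes are distinct.

```python
from math import gcd
from sympy import Symbol, degree, minimal_polynomial, root

x = Symbol('x')

p_i, p_j = 3, 5   # two distinct primes, standing in for p_i and p_j
a = root(2, p_i)  # numerical stand-in for t^(1/p_i)
b = root(2, p_j)  # numerical stand-in for t^(1/p_j)

# [Q(2^(1/p)) : Q] = p, since x^p - 2 is irreducible (Eisenstein at 2).
assert degree(minimal_polynomial(a, x), x) == p_i
assert degree(minimal_polynomial(b, x), x) == p_j

# A common subfield has degree dividing gcd(p_i, p_j) = 1, so it is Q itself;
# this mirrors k(t^(1/p_i)) ∩ k(t^(1/p_j)) = k(t) inside k(t^(1/Q)).
assert gcd(p_i, p_j) == 1
```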
### 3.4. Non-rational base
Let us show how Theorem A implies Theorem B. Let $C$ be a geometrically
connected curve over a field $k$ finitely generated over $\mathbb{Q}$, and let
$f\mathrel{\mathop{\ordinarycolon}}X\to C$ be a morphism of finite type whose
generic fiber is a GeM scheme. Assume that there exists a non-empty open
subset $V\subseteq C$ such that $X|_{V}(h)\to V(h)$ is surjective for every
finite extension $h/k$. We want to prove that there exists a generic section
$C\dashrightarrow X$. It’s easy to reduce to the case in which $C$ is smooth
and projective, so let us make this assumption.
Observe that, up to replacing $X$ with an affine covering, we may assume that
$X$ is affine. Choose $C\to\mathbb{P}^{1}$ any finite map: since $X$ is
affine, the Weil restriction $R_{C/\mathbb{P}^{1}}(X)\to\mathbb{P}^{1}$ exists
[BLR90, §7.6, Theorem 4]. Recall that
$R_{C/\mathbb{P}^{1}}(X)\to\mathbb{P}^{1}$ represents the functor on
$\mathbb{P}^{1}$-schemes
$S\mapsto\operatorname{Hom}_{C}(S\times_{\mathbb{P}^{1}}C,X)$.
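To illustrate what the Weil restriction does in a simpler, zero-dimensional setting (a toy sympy example of our own, for the quadratic extension $\mathbb{Q}(i)/\mathbb{Q}$ rather than $k(C)/k(\mathbb{P}^{1})$): writing the coordinate in terms of a $\mathbb{Q}$-basis and separating components turns one equation over the extension into two over the base.

```python
from sympy import I, symbols, expand, re, im

# Weil restriction along Q(i)/Q of the "curve" u^2 = 1 + i: substitute
# u = u1 + i*u2 with u1, u2 new coordinates over Q, then split the defining
# equation into real and imaginary parts.
u1, u2 = symbols('u1 u2', real=True)
eq = expand((u1 + I * u2)**2 - (1 + I))

print(re(eq))  # u1**2 - u2**2 - 1  (real part must vanish)
print(im(eq))  # 2*u1*u2 - 1        (imaginary part must vanish)
```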
If $L/k(C)/k(\mathbb{P}^{1})$ is a Galois closure and $\Sigma$ is the set of
embeddings $\sigma\mathrel{\mathop{\ordinarycolon}}k(C)\to L$ as
$k(\mathbb{P}^{1})$ extensions, the scheme $R_{C/\mathbb{P}^{1}}(X)_{L}$ is
isomorphic to the product
$\prod_{\Sigma}X\times_{\operatorname{Spec}k(C),\sigma}\operatorname{Spec}L$
and hence is a GeM scheme, see [Bre20, Lemma 3.3]. It follows that the generic
fiber $R_{C/\mathbb{P}^{1}}(X)_{k(\mathbb{P}^{1})}$ is a GeM scheme, too.
Let $U\subseteq\mathbb{P}^{1}$ be the image of $V\subseteq C$. The fact that
$X|_{V}(h)\to V(h)$ is surjective for every finite extension $h/k$ implies
that $R_{C/\mathbb{P}^{1}}(X)|_{U}(k)\to U(k)$ is surjective. By Theorem A, we
get a generic section $\mathbb{P}^{1}\dashrightarrow R_{C/\mathbb{P}^{1}}(X)$,
which in turn induces a generic section $C\dashrightarrow X$ by the universal
property of $R_{C/\mathbb{P}^{1}}(X)$. This concludes the proof of Theorem B.
## 4\. Polynomial bijections $\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$
Let us prove Theorem C. Let $k$ be finitely generated over $\mathbb{Q}$, and
let $f\mathrel{\mathop{\ordinarycolon}}\mathbb{A}^{2}\to\mathbb{A}^{1}$ be any
morphism. Assume by contradiction that $f$ is bijective on rational points.
First, let us show that the generic fiber of $f$ is geometrically irreducible.
This is equivalent to saying that $\operatorname{Spec}k(\mathbb{A}^{2})$ is
geometrically connected over $\operatorname{Spec}k(\mathbb{A}^{1})$, or that
$k(\mathbb{A}^{1})$ is algebraically closed in $k(\mathbb{A}^{2})$. Let
$k(\mathbb{A}^{1})\subseteq L\subseteq k(\mathbb{A}^{2})$ be a subextension
algebraic over $k(\mathbb{A}^{1})$. Let $C\to\mathbb{A}^{1}$ be a finite cover
with $C$ regular and $k(C)=L$. The rational map $\mathbb{A}^{2}\dashrightarrow
C$ is defined in codimension $1$, thus there exists a finite subset
$S\subseteq\mathbb{A}^{2}$ and an extension $\mathbb{A}^{2}\setminus S\to C$.
Since the composition $(\mathbb{A}^{2}\setminus S)(k)\to
C(k)\to\mathbb{A}^{1}(k)$ is surjective up to a finite number of points, by
Hilbert’s irreducibility theorem we have that $C=\mathbb{A}^{1}$, i.e.
$L=k(\mathbb{A}^{1})$.
This leaves us with three cases: the generic fiber is a geometrically
irreducible curve of geometric genus $0$, $1$, or $\geq 2$. The first two have
been settled by W. Sawin in the polymath project [Tao19], while the third
follows from Theorem A. Let us give details for all of them.
### 4.1. Genus 0
Assume that the generic fiber of $f$ has genus $0$. By generic smoothness,
there exists an open subset $U\subseteq\mathbb{A}^{2}$ such that $f|_{U}$ is
smooth. For a generic rational point $u\in U(k)$, the fiber $f^{-1}(f(u))$ is
birational to a Brauer-Severi variety of dimension $1$ and has a smooth
rational point, thus it is birational to $\mathbb{P}^{1}$ and
$f^{-1}(f(u))(k)$ is infinite. This is absurd.
### 4.2. Genus 1
Assume now that the generic fiber has genus $1$. By resolution of
singularities, there exists an open subset $V\subseteq\mathbb{A}^{1}$, a
variety $X$ with a smooth projective morphism
$g\mathrel{\mathop{\ordinarycolon}}X\to V$ whose fibers are smooth genus $1$
curves and a compatible birational map $X\dashrightarrow\mathbb{A}^{2}$. Up to
shrinking $V$, we may suppose that the fibers of $f|_{V}$ are geometrically
irreducible. Let $U$ be a variety with open embeddings $U\subseteq X$,
$U\subseteq\mathbb{A}^{2}$; replace $V$ with $g(U)\subseteq V$ so that
$g|_{U}$ is surjective.
The morphism $X\setminus U\to V$ is finite, let $N$ be its degree. Since the
fibers of $U\to V$ have at most one rational point, it follows that
$|X_{v}(k)|\leq N+1$ for every $v\in V(k)$.
Every smooth genus $1$ fibration is a torsor for a relative elliptic curve
(namely, its relative $\underline{\operatorname{Pic}}^{0}$), thus there exists
an elliptic curve $E\to V$ such that $X$ is an $E$-torsor. Moreover, every
torsor for an abelian variety is torsion, thus there exists a finite morphism
$\pi\mathrel{\mathop{\ordinarycolon}}X\to E$ over $V$ induced by the
$n$-multiplication map $E\to E$ for some $n$.
If $v\in V(k)$ is such that $X_{v}(k)$ is non-empty, then
$|X_{v}(k)|=|E_{v}(k)|\leq N+1$. This means that, up to composing $\pi$ with
the multiplication-by-$(N+1)!$ map $E\to E$, we may assume that $\pi(X(k))\subseteq
V(k)\subseteq E(k)$, where $V\to E$ is the identity section. In particular,
$X(k)\subseteq\pi^{-1}(V(k))$ is not dense. This is absurd, since $X$ is
birational to $\mathbb{A}^{2}$.
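The torsion-killing step above rests only on Lagrange's theorem: any finite group of order at most $N+1$ is annihilated by multiplication by $(N+1)!$. A minimal sketch (ours, using the cyclic groups $\mathbb{Z}/m\mathbb{Z}$ as stand-ins for the finite groups $E_{v}(k)$):

```python
from math import factorial

N = 5                      # any uniform bound on the fiber sizes would do
k = factorial(N + 1)

# Every finite abelian group of order m <= N+1 is killed by (N+1)!,
# since the order of each element divides m, which divides (N+1)!.
for m in range(1, N + 2):  # test on the cyclic groups Z/mZ
    assert all((k * x) % m == 0 for x in range(m))
```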
### 4.3. Genus $\geq 2$
Thanks to Theorem A (since $\mathbb{A}^{2}(k)\to\mathbb{A}^{1}(k)$ is
surjective and the generic fiber is GeM, a generic section must exist), there
exists an open subset $V\subseteq\mathbb{A}^{1}$ and a section
$s\mathrel{\mathop{\ordinarycolon}}V\to\mathbb{A}^{2}$. It
follows that $\mathbb{A}^{2}|_{V}(k)=s(V(k))$, which is absurd since $s(V)$ is
a proper closed subset and $\mathbb{A}^{2}|_{V}(k)$ is dense.
## References
* [Abr97] Dan Abramovich “A high fibered power of a family of varieties of general type dominates a variety of general type” In _Invent. Math._ 128.3, 1997, pp. 481–494 DOI: 10.1007/s002220050149
* [AV96] Dan Abramovich and José Felipe Voloch “Lang’s conjectures, fibered powers, and uniformity” In _New York J. Math._ 2, 1996, pp. 20–34 (electronic) URL: http://nyjm.albany.edu:8000/j/1996/2_20.html
* [BLR90] Siegfried Bosch, Werner Lütkebohmert and Michel Raynaud “Néron models” 21, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) Springer-Verlag, Berlin, 1990, pp. x+325 DOI: 10.1007/978-3-642-51438-8
* [Bre20] Giulio Bresciani “On the Bombieri-Lang Conjecture over finitely generated fields”, 2020 arXiv:2012.15765 [math.NT]
* [CHM97] Lucia Caporaso, Joe Harris and Barry Mazur “Uniformity of rational points” In _J. Amer. Math. Soc._ 10.1, 1997, pp. 1–35 DOI: 10.1090/S0894-0347-97-00195-1
* [Har77] Robin Hartshorne “Algebraic geometry” Graduate Texts in Mathematics, No. 52 Springer-Verlag, New York-Heidelberg, 1977, pp. xvi+496
* [Iit71] Shigeru Iitaka “On $D$-dimensions of algebraic varieties” In _J. Math. Soc. Japan_ 23, 1971, pp. 356–373 DOI: 10.2969/jmsj/02320356
* [Kle80] Steven L. Kleiman “Relative duality for quasicoherent sheaves” In _Compositio Math._ 41.1, 1980, pp. 39–60 URL: http://www.numdam.org/item?id=CM_1980__41_1_39_0
* [Kol87] János Kollár “Subadditivity of the Kodaira dimension: fibers of general type” In _Algebraic geometry, Sendai, 1985_ 10, Adv. Stud. Pure Math. North-Holland, Amsterdam, 1987, pp. 361–398 DOI: 10.2969/aspm/01010361
* [Mat19] Z. H. (https://mathoverflow.net/users/5098/z-h) “Polynomial bijection from $\mathbb{Q}\times\mathbb{Q}$ to $\mathbb{Q}$?”, MathOverflow URL: https://mathoverflow.net/q/21003 (version: 2019-06-09)
* [Poo10] Bjorn Poonen “Multivariable polynomial injections on rational numbers” In _Acta Arith._ 145.2, 2010, pp. 123–127 DOI: 10.4064/aa145-2-2
* [Ser97] Jean-Pierre Serre “Lectures on the Mordell-Weil theorem”, Aspects of Mathematics Friedr. Vieweg & Sohn, Braunschweig, 1997, pp. x+218 DOI: 10.1007/978-3-663-10632-6
* [Sta20] The Stacks project authors “The Stacks project”, https://stacks.math.columbia.edu, 2020
* [Str20] Sam Streeter “Hilbert property for double conic bundles and del Pezzo varieties”, 2020 arXiv:1812.05937 [math.AG]
* [Tao19] Terence Tao et al. “Ruling out polynomial bijections over the rationals via Bombieri-Lang?”, https://terrytao.wordpress.com/2019/06/08/ruling-out-polynomial-bijections-over-the-rationals-via-bombieri-lang/, 2019
* [Vie83] Eckart Viehweg “Weak positivity and the additivity of the Kodaira dimension for certain fibre spaces” In _Algebraic varieties and analytic varieties (Tokyo, 1981)_ 1, Adv. Stud. Pure Math. North-Holland, Amsterdam, 1983, pp. 329–353 DOI: 10.2969/aspm/00110329
|
# A transferable prediction model of molecular adsorption on metals based on
adsorbate and substrate properties
Paolo Restuccia<EMAIL_ADDRESS>Department of Chemistry, Imperial
College London, 82 Wood Lane, W12 0BZ London, UK Ehsan A. Ahmad Faculty of
Engineering and the Environment, University of Southampton, University Road,
Southampton SO17 1BJ, UK Nicholas M. Harrison Department of Chemistry,
Imperial College London, 82 Wood Lane, W12 0BZ London, UK
###### Abstract
Surface adsorption is one of the fundamental processes in numerous fields,
including catalysis, environment, energy and medicine. The development of an
adsorption model which provides an effective prediction of binding energy in
minutes has been a long-term goal in surface and interface science. The
solution has been elusive, as identifying the intrinsic determinants of the
adsorption energy for various compositions, structures and environments is
non-trivial. We introduce a new and flexible model for predicting adsorption
energies to metal substrates. The model is based on easily computed, intrinsic
properties of the substrate and adsorbate. It is parameterised using machine
learning based on first-principles calculations of probe molecules (e.g., H2O,
CO2, O2, N2) adsorbed to a range of pure metal substrates. The model predicts
the computed dissociative adsorption energy to metal surfaces with a
correlation coefficient of 0.93 and a mean absolute error of 0.77 eV for the
large database of molecular adsorption energies provided by Catalysis-Hub.org,
which has a range of 15 eV. As the model is based on pre-computed quantities,
it provides near-instantaneous estimates of adsorption energies, and it is
sufficiently accurate to eliminate around 90% of candidates in a screening
study of new adsorbates. The model therefore significantly enhances current
efforts to identify new molecular coatings in many applied research fields.
## Introduction
Control of the chemical and physical properties of surfaces and interfaces is
both of fundamental interest and vital in a very wide range of technologies
including catalysis, environment, energy and medicine. Given the current
urgency of innovation in areas such as energy supply, energy distribution and
transport, there is a pressing need to develop technologies that can both
extend the working lifetime of existing infrastructure and facilitate the
development of new sustainable approaches to production and consumption. This
has led to wide acknowledgment of the importance of controlling material
surfaces and coatings [1, 2]. Common phenomena, such as
corrosion and friction, cause substantial economic losses every year and
severely impact the environment. For example, in extending the lifetime of
current infrastructure, the worldwide costs of prevention, detection and
mitigation of metal corrosion alone are estimated to be 2.5 trillion US
dollars per year [3]. In addition, when considering the innovation of new
devices, the development of micro- and nano-electromechanical systems requires
new approaches to friction reduction at small length scales, where friction
leads to reduced efficiency and failure [4]. The ability to deposit molecular and
nanostructured coatings with advanced functional properties is primed to have
a profound effect on such diverse technologies as wearable electronics,
corrosion inhibitors and lubricant additives. One of the challenges in
molecular science is therefore the need to find novel, earth-abundant,
inexpensive and environmentally friendly materials that adsorb in a controlled
manner to surfaces and interfaces. Historically, innovation of new materials
has been a time-consuming and challenging task; it typically takes 20 to 70
years to progress from laboratory conception to widespread commercial use [5].
Developments have also been mainly based on the incremental evolution of
existing systems with the oft reported outcome that newly discovered solutions
are based on exactly the same underlying mechanisms as their predecessors; to
find something radical and innovative has usually been a matter of luck.
Extensive use is made of molecular additives for friction and corrosion
reduction. A fundamental step in discovering new classes of these surface
modifiers is a predictive understanding of the thermodynamics of both
molecular and dissociative adsorption on different substrates [6, 7, 8, 9, 10,
11, 12]. For this purpose, it is essential to be able to compute the binding
energy (BE) of different adsorption modes with sufficient accuracy to be able
to predict the molecular-level adhesion of the self-assembling coating. In
principle this is achievable using modern atomistic simulations but in
practice is problematic as the parameter space of factors that affect the BE
is very large [13]. There has therefore been a significant and sustained
effort aimed at identifying a small number of easily computed descriptors that
can accurately capture the nature of the molecule-surface interaction and thus
facilitate a simple and efficient predictive model of adsorption. In
recent years the combination of high-throughput density functional theory
(DFT) calculations and machine learning techniques has opened a new era of
informatics-based approach to materials design, from which a number of simple
models for predicting adsorption energies have emerged [14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. Similar approaches
have also been used for predicting other figures of merit, like the inhibition
efficiency of molecules [33]. These models are usually based on linear
relationships using simple descriptors for both the substrate and the adsorbed
molecule (e.g., the number of valence electrons, the electronegativity of the
substrate [15] and the ionization potential of the molecule [16]). Despite
their simplicity, these models have been demonstrated to be quite effective in
predicting adsorption energies, especially when machine learning techniques
are employed [14, 16]. However, in the past such models have been limited in
their transferability. For instance, any given model may be limited to the
adsorption of molecules in one specific adsorption site (i.e., on-top or
hollow). For a more extensive employment of these predictive calculations, one
would like to extend the possible range of adsorption sites to model a broader
array of realistic configurations, such as stepped edges or grain boundaries
at the surface, and include a wide range of molecular adsorbates.
In the current work we present a new predictive linear model that uses
appropriate physical descriptors to predict in minutes the adsorption of a
wide range of molecules to multiple substrates in a variety of surface
adsorption sites. The model is based on a combination of systematic DFT
calculations and machine learning. The model reported here accurately predicts
the different dissociative adsorption energies of a range of probe molecules
over simple homogeneous metallic substrates. Despite its simplicity, the model
provides a good estimate of molecular BE in different configuration sites with
limited computational effort, and it is devised in a form that facilitates its
extension to more complicated structures (e.g., oxides, carbonates or
defective surfaces). Moreover, the margin of error in the BE prediction is
small enough that it provides an accurate estimate of fully optimised ab
initio calculations, saving time and facilitating the rapid
screening of a broader range of systems. Therefore, the model is accurate
enough to guide the discovery and optimisation of molecular adsorbates in
order to improve the functionality of corrosion inhibitors and lubricant
additives and is likely to find application in fields such as catalysis,
molecular electronics and biomedicine [34, 35, 36], where the adsorption of
molecules and molecular films underpins many important processes.
## Method
### BE calculation in slab configuration
For the adsorption energies of the training set (i.e., the slab systems) and
the calculation of the later-defined Molecule-Bulk Energy (MBE) terms,
spin-polarized DFT calculations were performed using the projector-augmented
wave (PAW) method as implemented in the plane-wave code QUANTUM ESPRESSO (QE)
[37]. We used the PAW pseudopotentials [38] from the PSLibrary 1.0.0 [39]
within the generalised gradient approximation (GGA) of Perdew, Burke and
Ernzerhof (PBE) [40] for the exchange-correlation energy. The electronic wave
functions are expanded as a linear combination of plane waves up to a kinetic
energy cutoff of 95 Ry, which we find is sufficient to converge the total
energies (1 meV/atom) and equilibrium lattice constants (0.1 mÅ) of the
considered substrates.
For the cell structure, we considered different configurations for the slab
and the MBE calculations: for the former, we employed supercells with a
$2\times 2$ in-plane size in order to reduce the interaction between adsorbate
replicas, and 5 or 6 layers depending on the substrate material. For the
latter, all the clusters were computed in a
$20\,\text{\AA}\times 20\,\text{\AA}\times 20\,\text{\AA}$ cubic cell,
so the self-interaction between cluster replicas is negligible. All the
input and output geometries for both the slab and the MBE calculations are
provided as Supplementary Information.
The Monkhorst-Pack grid [41] is used for sampling the Brillouin zone, with a
different $k$-mesh chosen for each structure under study. In particular, we
selected the optimal $k$-point grid for each slab geometry, whereas all the
calculations involving clusters were sampled at the $\Gamma$ point only, due
to the large cell dimensions. To improve the convergence, the Marzari-
Vanderbilt cold smearing [42] method is used for the sampling of the Fermi
surface, with a width of 0.27 eV in order to obtain accurate forces. The
convergence criteria for forces and energy are 0.003 eV Å$^{-1}$ and
$10^{-2}$ eV, respectively.
### HOMO, LUMO and HOMO-LUMO gap calculation for molecule in gas phase
For the calculation of the HOMO, LUMO and HOMO-LUMO gap, we performed DFT
calculations using the CRYSTAL17 computational suite [43, 44], in which the
crystalline orbitals are expanded as a linear combination of a local basis set
composed of atom-centered Gaussian orbitals with s, p, or d symmetry. For all
the elements employed in the molecular calculations (namely, H, C, N, O, F, S,
Cl), we used the 6-31G** basis sets [45, 46, 47, 48, 49, 50, 51].
The approximation of the exchange and correlation functional is based on the
Becke, 3-parameter, Lee-Yang-Parr (B3LYP) hybrid functional incorporating 20%
Hartree-Fock exchange [52, 53, 54]. The Coulomb and exchange series are summed
directly and truncated using overlap criteria with thresholds of $10^{-10}$,
$10^{-10}$, $10^{-10}$, $10^{-20}$ and $10^{-30}$, as described elsewhere [44, 55].
## Results and Discussion
The general definition for the computed BE to a surface may be written as:
$\text{BE}=\text{E}_{\text{tot}}-\left(\text{E}_{\text{sub}}+\text{E}_{\text{mol}}\right)$
(1)
where $\text{E}_{\text{tot}}$ is the computed total energy for a system
composed of a molecule adsorbed on a substrate and $\text{E}_{\text{sub}}$
($\text{E}_{\text{mol}}$) is the energy for the isolated substrate (molecule).
With this definition, a negative (positive) BE indicates that the dissociation
process is favourable (unfavourable).
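To make the sign convention concrete, the following minimal Python sketch evaluates Eq. 1 from three separately computed DFT total energies; the numerical values are hypothetical placeholders, not results from this work.

```python
def binding_energy(e_tot, e_sub, e_mol):
    """Eq. 1: BE = E_tot - (E_sub + E_mol), all energies in eV.

    A negative BE indicates that adsorption is favourable.
    """
    return e_tot - (e_sub + e_mol)

# Hypothetical example: the combined system is 1.0 eV more stable than
# the separated substrate and molecule, so adsorption is favourable.
print(binding_energy(-1050.3, -1042.1, -7.2))  # -> -1.0
```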
The total BE can be analysed in terms of many contributions that may be
related to properties of the molecule and the surface [56, 57, 58]. Defining a
comprehensive model for the BE is challenging. A recent approach proposed by
Dean et al. [16] succeeded in predicting the BE of probe molecules to metal
nano-particles. This model is based on the idea that BE can be adequately
represented by stability descriptors for the adsorbate, the adsorption site,
the substrate and a simply computed estimate of the interaction between the
molecule and the surface. These assumptions led to the following linear
equation for the BE:
$\text{BE}=a+b\times\text{CE}_{\text{local}}+c\times\text{IPEA}+d\times\text{MADs}$
(2)
where $\text{CE}_{\text{local}}$ is the term to describe the local cohesive
energy of the adsorption site, IPEA is the negative average between the
ionization potential and the electronic affinity of the molecule, and the MADs
is the gas phase BE between the adsorbate and one atom of the metal substrate,
which is obtained through ab initio calculations, and represents the
descriptor for the adsorbate-metal interaction. Although this model proved to
be effective in the prediction of BE, with a correlation coefficient $R^{2}$
of around 0.94 and a mean absolute error (MAE) of around 0.1 eV, there are two
limitations in the approach: i) the adsorbates were always in an on-top site
configuration, limiting the possibility of predicting the BE in other
adsorption sites such as hollow or bridge, and ii) the model was trained only
on noble-metal nanoparticles and slab surfaces, such as Ag, Au and Cu,
narrowing the range of substrates over which the prediction is effective.
In order to overcome these limitations, we present here a model using suitable
descriptors for the adsorption of molecules over flat substrates. In
particular, we propose the following equation for the prediction of BE:
$\text{BE}=a+b\times\text{CE}_{\text{B}}+c\times\left(\text{W}_{\text{F}}-\frac{\text{E}_{\text{gap}}}{2}\right)+d\times\text{MBE}$
(3)
where $\text{CE}_{\text{B}}$ is the cohesive bulk energy for the substrate
atomic species, $\text{E}_{\text{gap}}$ is the gap between the Highest
Occupied (HOMO) and Lowest Unoccupied (LUMO) Molecular Orbital of the adsorbed
molecule (from now on, HOMO-LUMO gap), $\text{W}_{\text{F}}$ is the work
function of the substrate, MBE is the Molecule-Bulk Energy, which resembles
the MADs of Eq. 2 and is computed using ab initio theory; $a$, $b$, $c$ and
$d$ are the linear coefficients for the regression. $\text{CE}_{\text{B}}$
provides a general estimate of the strength of the interaction between the
substrate atoms and the MBE provides a simply computed estimate of the
substrate-molecule interaction. The third term contains the difference between
the surface work function and the middle of the HOMO-LUMO gap of the adsorbed
molecule which in frontier molecular orbital theory controls the charge
transfer and hybridisation contributions to the surface binding [59, 60, 61].
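Once the coefficients are known, Eq. 3 reduces to a single linear expression. The sketch below hard-codes, purely for illustration, the coefficients fitted later with the large-cluster MBE (Table 1c); in practice they are outputs of the regression described below.

```python
def predict_be(ce_b, w_f, e_gap, mbe,
               a=0.7426, b=-0.1735, c=0.1844, d=0.9927):
    """Eq. 3: BE = a + b*CE_B + c*(W_F - E_gap/2) + d*MBE.

    Default coefficients are the OLS estimates of Table 1c (large-cluster
    MBE); all quantities are in eV.
    """
    return a + b * ce_b + c * (w_f - e_gap / 2.0) + d * mbe
```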
The MBE term is computed as:
$\text{MBE}=\sum_{i=1}^{n_{frag}}E_{complex,i}-E_{B,M,i}-\mu_{G,mol,i}$ (4)
where $n_{frag}$ is the number of molecular fragments considered in the
dissociative adsorption process, $E_{complex}$ is the total energy of a
molecular functional group adsorbed on a single atom of the metal substrate,
$E_{B,M}$ is the bulk energy of a single atom of the substrate atomic species
and $\mu_{G,mol}$ is the chemical potential of the molecular fragment
generalised from the fragment energy to allow for the adsorption environment.
This quantity provides an easily computed and flexible estimate of the
strength of adhesion between the adsorbate and the substrate. In contrast to
the MADs term proposed by Dean et al., where all the functional groups are
computed as isolated components, in the proposed MBE term we refer all
energies to a consistent reference enabling the use of pre-computed data in a
transferable predictive model. Another advantage of this approach is the
freedom to choose the reference for the chemical potential in the calculation
of the MBE. In
the current work, we chose to refer $\mu_{G,mol}$ to the isolated gas phase
molecule for the sake of simplicity. However, it is possible to reference the
chemical potential to different environments including solvated species, as
shown in recent electrochemical studies [62, 63, 64].
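A minimal sketch of Eq. 4, assuming the fragment energies have already been obtained from DFT, might look as follows; the tuple layout is an illustrative choice of ours, not a prescribed data format.

```python
def molecule_bulk_energy(fragments):
    """Eq. 4: MBE = sum_i (E_complex,i - E_ref,i - mu_G,mol,i).

    fragments: iterable of (e_complex, e_ref, mu_g_mol) tuples in eV, one per
    dissociation fragment. Here mu_g_mol is referenced to the isolated
    gas-phase molecule, as in the text; e_ref is the bulk energy per
    substrate atom in Eq. 4 (or the bare-cluster energy in Eq. 5 below).
    """
    return sum(e_complex - e_ref - mu for e_complex, e_ref, mu in fragments)
```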
Figure 1: Ball and stick representation of the models used for the different
approaches in the calculation of MBE in the case of Cl adsorption on Cu: a)
the substrate is modelled by just one atom, b) the substrate is represented by
a cluster of 4 atoms and c) the cluster modelling the substrate is composed of
10 atoms. Green and brown balls represent chlorine and copper atoms,
respectively.
In the current work, ordinary least squares (OLS) linear regressions were used
to determine the coefficients in Eq. 3 from a training set of ab initio
energies using the statsmodels library [65] provided in Python 3 [66]. For the
OLS regression, we adopted a training set of eight different probe molecules,
namely Cl2, CO2, F2, H2, H2O, H2S, N2 and O2, adsorbed over ten different
metal substrates, namely Ag(111), Al(111), Au(111), Cu(111), Fe(100), Fe(110),
Ir(111), Pt(111), V(100) and V(110). In each case the energy of the most
stable adsorption configuration was used. Where possible, standard reference
data were used for each of the terms of Eq. 3: for $\text{CE}_{\text{B}}$, we
used the observed formation energies of the transition metals provided by Ref.
[67], for the work function, we employed the DFT computed values provided by
the Materials Project database [68]. The HOMO-LUMO gap of the molecules was
estimated from ab initio calculations, with the computational details provided
in the Methods section. The MBE was also computed ab initio using small
clusters, as discussed below.
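A minimal sketch of this training step with the statsmodels library is given below; the file names and array layout are placeholders standing in for the training data described above, not the authors' actual files.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical input files: one row per (molecule, substrate) training pair.
ce_b, wf_minus_halfgap, mbe = np.loadtxt("descriptors.dat", unpack=True)
be_dft = np.loadtxt("be_dft.dat")

# Design matrix for Eq. 3; add_constant supplies the intercept a.
X = sm.add_constant(np.column_stack([ce_b, wf_minus_halfgap, mbe]))
fit = sm.OLS(be_dft, X).fit()

# Coefficient estimates, standard errors and P-values, as reported in Table 1.
print(fit.params, fit.bse, fit.pvalues, fit.rsquared)
```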
### Analysing Contributions to the MBE
Table 1: Regression coefficients, i.e., Coefficient Estimate, Standard Error
(SE) and P-value, for the different approaches employed for MBE calculation in
the case of a) single metal atom, b) a small metal cluster and c) a large
metal cluster. Cases are trained using the dataset provided in the
Supplementary Information. $R^{2}$ is the correlation coefficient, MAE is the
mean absolute error.
(a) First approach for MBE, as shown in Fig. 1a. $R^{2}=0.21$, $\text{MAE}=1.97$ eV, $\text{RMSE}=2.52$ eV.

| | Coefficient Estimate | SE | P-value |
|---|---|---|---|
| $a$ | 2.2460 | 1.6048 | 0.167 |
| $b$ | -0.7268 | 0.3114 | 0.023 |
| $c$ | 0.8900 | 0.2721 | 0.002 |
| $d$ | -0.0886 | 0.0970 | 0.365 |

(b) Second approach for MBE, as shown in Fig. 1b. $R^{2}=0.83$, $\text{MAE}=0.89$ eV, $\text{RMSE}=1.17$ eV.

| | Coefficient Estimate | SE | P-value |
|---|---|---|---|
| $a$ | 1.5812 | 0.7064 | 0.029 |
| $b$ | -0.2925 | 0.1461 | 0.050 |
| $c$ | 0.1793 | 0.1122 | 0.116 |
| $d$ | 1.0163 | 0.0691 | $3\cdot 10^{-21}$ |

(c) Third approach for MBE, as shown in Fig. 1c. $R^{2}=0.94$, $\text{MAE}=0.52$ eV, $\text{RMSE}=0.69$ eV.

| | Coefficient Estimate | SE | P-value |
|---|---|---|---|
| $a$ | 0.7426 | 0.4208 | 0.083 |
| $b$ | -0.1735 | 0.0874 | 0.052 |
| $c$ | 0.1844 | 0.0659 | 0.007 |
| $d$ | 0.9927 | 0.0370 | $3\cdot 10^{-34}$ |
An appropriate calculation of the MBE is essential for the efficiency and
accuracy of the proposed model. The simplest level of approximation used here
is that proposed by Dean et al., in which the MBE is computed as in Eq. 4,
i.e., as the gas-phase energy difference between a specific fragment obtained
during the dissociation process and one metal atom of the substrate [16]. An
example configuration used to calculate the MBE is shown in Fig. 1a for the
case of Cl adsorbed to a Cu atom. The regression
statistics are shown in Table 1(a), while Figure 2 shows the parity plot of
the model training against the DFT computed adsorption energies for the
predicted BE.
Figure 2: Parity plot for the training of the model against the DFT BE
calculations with the MBE approach proposed in Eq. 4 and the system shown in
Fig. 1a. The black solid line represents the parity between the computed DFT
BE and the predicted value.
This approximation provides a rather poor prediction of the BE: the
correlation coefficient $R^{2}$ is around 0.21 (an $R^{2}$ of 0 corresponds to
no correlation and of 1.0 to a perfect set of predictions) and the mean
absolute error (MAE) is almost 2 eV. The parity plot in Figure 2 confirms that
the linear regression does not provide a good description of the BE. Although
Dean et al. have shown convincingly that this approach reproduces the energy
of adsorption to metallic nanoparticles in an on-top configuration of several
radical groups (namely, CH3, CO and OH), it evidently fails to do so when the
molecules are adsorbed on a wide range of substrates.
A possible explanation for this discrepancy is that the variations of the
interactions in the hollow and bridge adsorption sites considered here are not
captured by binding to a single metal atom. This suggests that a somewhat
larger cluster is required to take into account the different adsorption site
configurations in the calculation of MBE such as that represented in Figure
1b. Here the Cl is adsorbed to a four-atom cluster based on the hollow site
presented by the Cu(111) surface. This is the smallest cluster, for this
specific substrate, which retains the symmetry of the surface adsorption site.
We construct such clusters, resembling the surface adsorption sites, for all
the substrates in our training set, and the geometries employed for these
calculations are provided as Supplementary Information.
Another essential advantage of this approach is the possibility of exploring
sites with lower coordination numbers that resemble substrates containing
defects or stepped edges. The only way to simulate these configurations with a
periodic slab structure is to use large supercells with hundreds of atoms,
which increases the computation time significantly compared to the few atoms
used within a cluster.
With the use of this approach, we change the definition of MBE as follows:
$\text{MBE}=\sum_{i=1}^{n_{frag}}E_{complex,i}-E_{cluster,i}-\mu_{G,mol,i}$
(5)
where all the terms of Eq. 5 are the same as in Eq. 4, apart from $E_{cluster,i}$,
which is the energy of the cluster modelling the substrate. This approach
leads to a significant improvement in the BE prediction, as shown in both
Table 1(b) and Figure 3. There is a significant improvement in both the
correlation coefficient (around 0.83) and the MAE (around 0.9 eV).
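Note that the sketch given after Eq. 4 covers this definition unchanged: under Eq. 5 the bare-cluster energy is simply passed as the reference term, e.g. `molecule_bulk_energy([(e_cl_on_cu4, e_cu4, mu_cl)])` for a single Cl fragment on the four-atom Cu cluster of Fig. 1b (variable names hypothetical).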
Figure 3: Parity plot for the training of the model against the DFT BE
calculations with the MBE cluster approach proposed in Eq. 5 and the system
shown in Fig. 1b. The black solid line represents the parity between the
computed DFT BE and the predicted value.
Extending this approach, one can compute the MBE from adsorption to the
10-atom cluster displayed in Figure 1c for the case of Cl adsorbed to a hollow
site on Cu. This cluster also maintains the adsorption site symmetry.
From Figure 4 it is evident that the model based on this MBE provides a
satisfactory description of the adsorption energetics, with a correlation
coefficient of 0.94 and a further reduction of the MAE to 0.5 eV.
Figure 4: Parity plot for the training of the model against the DFT BE
calculations with the MBE cluster approach proposed in Eq. 5 and the system
shown in Fig. 1c. The black solid line represents the parity between the
computed DFT BE and the predicted value.
In addition, all of the fitted parameters have P-values equal to or smaller
than 0.05, which is the threshold for a confidence level of 95% in the
predictions of the model. Although this threshold should not be seen as a
sharp boundary for statistical significance [69], the obtained values for both
the P-values and the standard errors provide a rigorous test of the
effectiveness of our model.
It is possible to perform a more qualitative analysis of the accuracy of the
proposed models by considering the residual distributions shown in Fig. 5:
each panel presents the distribution as a histogram of the errors for one of
the MBE calculation approaches proposed in this work. As expected, the errors
follow a normal distribution, and each time the description of the MBE is
improved the range of the residuals is almost halved, further validating our
approach. It is also interesting to note that moving from the smaller to the
larger clusters leads to a notable reduction in the MAE of around 40%, while
the correlation coefficients remain comparable (0.83 for the smaller clusters,
0.94 for the larger ones). One can therefore adjust the model for convenience:
either sacrifice some predictive accuracy for the advantage of building the
database with smaller clusters (which usually require around half the
computation time), or retain better precision at the cost of the longer
computational time needed to build the database of MBE values. From now on,
our analysis will focus on the predictive model employing the larger clusters,
since they provide the most accurate results.
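As an illustration, a residual histogram of the kind shown in Fig. 5 can be produced with Matplotlib, which the authors also used for their figures; the residual array below is a synthetic placeholder (in practice it would be `be_dft - fit.predict(X)` from the OLS sketch above).

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder residuals with an RMSE-like spread of 0.69 eV (Table 1c).
rng = np.random.default_rng(0)
residuals = rng.normal(loc=0.0, scale=0.69, size=80)

plt.hist(residuals, bins=15, density=True, alpha=0.7, edgecolor="k")
plt.xlabel("BE residual (eV)")
plt.ylabel("Density")
plt.title("Residual distribution (illustrative)")
plt.show()
```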
The model proposed here is therefore able to predict BE with a fidelity
comparable to the current state of the art but for a variety of adsorption
geometries and adsorbates. The MAE for the training set is somewhat higher
that of the model reported by Dean et al. of around 0.1 eV but it is computed
for a training dataset with a much larger range of BE (from -10 to +3 eV), so
the associated relative error is comparable.
Table 2: Correlation coefficient $R^{2}$, mean absolute error (MAE) and root mean square error (RMSE) of the OLS regression for the proposed Model and the MBE against the computed DFT values for the BE, as shown in Figure 6.

| | Model | MBE |
|---|---|---|
| $R^{2}$ | 0.94 | 0.93 |
| MAE (eV) | 0.52 | 0.56 |
| RMSE (eV) | 0.69 | 0.79 |
Before proceeding with the model validation analysis, it is interesting to
note the importance of the MBE in calculating the BE, since its OLS regression
coefficient $d$ is the one with the smallest relative error and P-value. To
understand how relevant this term is in calculating the predicted BE, we
compare two different training approaches. The first is the one discussed in
the previous paragraph and shown in Figure 4. The second is based on a simple
OLS regression of the MBE values of the considered reaction paths against the
computed DFT BE. The results are shown in Figure 6 and Table 2. Qualitatively,
the first training approach (blue dots) provides similar results to the one
based solely on the MBE (red squares), highlighting the dominant role of the
MBE term in the definition of the model. A deeper analysis of the regression
metrics, namely $R^{2}$, MAE and the root mean square error (RMSE), gives a
clearer picture. Although $R^{2}$ is similar in both scenarios (0.94 vs 0.93),
both the MAE and the RMSE increase when training on the MBE alone, by 8% and
14% respectively. Therefore, even though the MBE is an essential part of the
new model, it is important to retain all the physical terms identified in
Eq. 3 in order to minimise the average error in the BE.
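For completeness, the three metrics of Table 2 can be computed from any set of predictions as follows; this is a generic sketch, not the authors' analysis script.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (R^2, MAE, RMSE) for predicted vs reference binding energies."""
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(resid))
    rmse = np.sqrt(np.mean(resid ** 2))
    return r2, mae, rmse
```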
Figure 5: Residual distribution for the MBE calculation as proposed in Fig. 1a
(left panel), Fig. 1b (center panel) and Fig. 1c (right panel). The blue curve
represents a smoothing interpolation of the residual data.
After the training, the next step is to validate the model for use in
predicting new dissociation paths for larger molecules. To do so, we compare
its predictions to reaction energies computed and tabulated in previous work.
Figure 6: Parity plot for the trained model BE (blue dots) with the
coefficients reported in Table 1(c) and MBE values (red squares) against the
DFT BE calculations.
### Validation of the Model
Figure 7: Parity plots for the validation of the model against the DFT BE
calculations available on the Catalysis-Hub.org database. The panel a) shows
the predicted values considering the intercept $a$, whereas panel b)
represents the predictive model dismissing the intercept $a$.
The validation of the proposed model is essential for its employment in actual
technological applications. To do so, we compare the BE predicted by the model
with the DFT reaction energies available on the Catalysis-Hub.org database
[22, 23] for 80 different surface reactions. In particular, we chose twelve
different molecules, namely CH4, C2H6, CO2, H2, H2O, H2S, N2, NH3, NO, N2O,
NO2 and O2, adsorbed over eleven different metal substrates, namely Ag(111),
Al(111), Au(111), Co(111), Cu(111), Ir(111), Ni(111), Pd(111), Pt(111),
Sc(111), Y(111). The specific reaction geometries employed for the validation
are provided as Supplementary Information. All the DFT reaction energies
retrieved from the Catalysis-Hub.org database had been computed within the
BEEF-vdW approximation to electronic exchange and correlation [70].
The parity plots resulting from this comparison are shown in Figure 7. The
error bars present in the plots are computed using standard error propagation
based on the errors obtained from the OLS regressions.
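A minimal sketch of such error propagation for the linear model of Eq. 3 is given below; it assumes independent coefficient errors (covariances neglected), which is a simplification of ours rather than a statement of the authors' exact procedure.

```python
import numpy as np

def be_uncertainty(descriptors, coef_se):
    """Propagated standard error of a predicted BE for the linear model.

    descriptors: vector (1, CE_B, W_F - E_gap/2, MBE);
    coef_se: standard errors of (a, b, c, d), e.g. the SE column of Table 1c.
    Coefficient covariances are neglected in this simple estimate.
    """
    x = np.asarray(descriptors, dtype=float)
    se = np.asarray(coef_se, dtype=float)
    return np.sqrt(np.sum((x * se) ** 2))

# Illustrative call with the SEs of Table 1c and hypothetical descriptors.
print(be_uncertainty([1.0, 3.5, 2.1, -4.0], [0.4208, 0.0874, 0.0659, 0.0370]))
```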
From Figure 7a, it is clear that the model captures to an exceptional level of
accuracy the variation in BE in these systems: the correlation coefficient of
the parity plot is 0.93, the MAE is 0.77 eV and the RMSE is 1.02 eV. When
searching for a molecule with a specific adsorption energy with a typical
range of 20 eV the MAE of 0.77eV is sufficient to reduce the number of
candidate molecules by well over an order of magnitude. It is, nevertheless,
interesting to note that there are outliers to this general trend. For
example, the model predicts a favourable adsorption energy for several
molecules (C2H6, H2O, NH3 and N2O) on Pd(111) (data provided in the
Supplementary Information), whereas the DFT data predicts unfavourable
adsorption energies. In all of these cases, the BE is in the region of $\pm 1$
eV, which is consistent with the MAE found in the training of the model. We
can tentatively conclude from this that the current model is less reliable in
describing dissociative adsorption in this weak-bonding region. Another
possible explanation for this behaviour lies in the different
exchange-correlation functionals employed: as explained in the
Method section, we used the generalized gradient approximation (GGA) for the
MBE calculation, whereas the data retrieved from Catalysis-Hub.org were all
computed within the BEEF-vdW approximation, which includes specific
corrections for dispersion interactions. The absence of such corrections in
our model can explain the differences arising in the weakly bonded systems.
Apart from these outliers, the prediction of tabulated values is remarkably
accurate. As for the original fit to the training set, the intercept of the
model has a significant associated error and P-value. It is therefore
interesting to test the performance of the model when this parameter is
neglected; these data are displayed in Figure 7b. It is notable that i) without
$a$ there is only a slight offset in the predicted BE which does not affect
the general behaviour of the parity plot, with $R^{2}$, MAE and RMSE
essentially unchanged, and ii) the estimated error bars for each energy are
substantially reduced. This latter observation is explained by the fact that
half of the error in the predicted BE is due to the uncertainty of $a$, which
is the coefficient with the highest relative error and P-value.
The discussion above demonstrates that the current model provides a low-cost
prediction of the BE to homogeneous metal substrates. It is interesting to
speculate on the extension of the model to a more general framework for
predicting adsorption to a wider range of substrates. A natural extension
would be to design a simple MBE cluster calculation for the oxide and
carbonate substrates, which are essential in many technological applications
and for which there is currently a lack of predictive models regarding
molecular adsorption.
### Comparison with SISSO approach
A possible concern in using the proposed approach is that our model only uses
OLS regressions, which could appear simplistic compared to the current state-
of-the-art approaches. In particular, one of the most advanced methods in
extracting effective descriptors to predict materials properties is the so-
called Sure Independence Screening and Sparsifying Operator (SISSO) algorithm
[71]. SISSO can identify the best descriptors among a set of physical
properties, determining the optimal subset. Moreover, it can also identify the
most accurate mathematical expressions to obtain the optimised relationship
for the available data. It is, therefore, natural to benchmark the results
shown in the previous sections with the ones obtained by employing the SISSO
algorithm.
The first step is to identify whether our approach provides an accurate set of
descriptors and identifies the proper mathematical expression. Therefore,
alongside the three descriptors employed in the previous sections, we added
eight additional physical and chemical properties of the molecule and the
metallic substrate to increase the set of descriptors, among which SISSO would
determine the optimal ones. Namely, we chose the number of valence electrons
of the atomic species in the metallic substrate ($\text{N}_{\text{ve}}$), the
surface energy of the metal ($\gamma_{\text{met}}$), the first ionization
potential of the metal ($\text{I}_{1}$), the volume of a single atom in the
metallic substrate ($\text{V}_{\text{met}}$), the metal electronegativity
($\chi_{\text{met}}$), the HOMO ($\text{HOMO}_{\text{mol}}$), LUMO
($\text{LUMO}_{\text{mol}}$) and molar mass ($\text{M}_{\text{mol}}$) of the
molecule as additional physical/chemical descriptors to process in the SISSO
algorithm. These data were gathered from tabulated values (namely,
$\text{N}_{\text{ve}}$, $\text{I}_{1}$, $\text{V}_{\text{met}}$,
$\chi_{\text{met}}$ and $\text{M}_{\text{mol}}$), from DFT values provided by
the Materials Project database ($\gamma_{\text{met}}$) [68] and from our own
DFT calculations ($\text{HOMO}_{\text{mol}}$ and $\text{LUMO}_{\text{mol}}$),
as detailed in the Method section.
Once we applied the SISSO algorithm to the training data shown in Fig. 4, we
obtained the best fitting with three coefficients by using the following
equation (the data obtained by the SISSO algorithm are provided as
Supplementary Information):
$\text{BE}_{\text{SISSO}}=0.787-0.044\times\text{V}_{\text{met}}+0.178\times\left(\text{W}_{\text{F}}-\frac{\text{E}_{\text{gap}}}{2}\right)+1.010\times\text{MBE}$
(6)
with an associated RMSE of 0.65 eV. The best descriptors identified by the
SISSO algorithm were the MBE and the difference between the substrate work
function and half of the HOMO-LUMO gap, as already identified with our
approach, alongside $\text{V}_{\text{met}}$. The latter is the only descriptor
that differs from our model, in which we used the cohesive bulk energy of the
substrate instead. Most notably, Eq. 6 is a linear relationship with
coefficients similar to those of the OLS fitting shown in Table 1(c), and the
RMSE is reduced by around 5%. Our proposed model therefore gives similar
results both in identifying the best descriptors and in the RMSE, and the
SISSO methodology validates our approach. The difference in one of the
descriptors identified by the fitting should not be seen as a downside of our
work, but rather as a different way to interpret the problem of predicting BE:
we identified the descriptors in our methodology by finding physical and
chemical properties appropriate to the studied systems, whereas the SISSO
algorithm looks only for the primary features that best fit the BE data
mathematically.
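For side-by-side comparison with the Eq. 3 sketch given earlier, the SISSO-selected expression of Eq. 6 can be written out directly; the coefficients are those reported above, with $\text{V}_{\text{met}}$ in the volume units used for the SISSO fit and the remaining quantities in eV.

```python
def predict_be_sisso(v_met, w_f, e_gap, mbe):
    """Eq. 6: the SISSO-selected linear model (RMSE 0.65 eV)."""
    return 0.787 - 0.044 * v_met + 0.178 * (w_f - e_gap / 2.0) + 1.010 * mbe
```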
## Conclusion
In summary, we report a new model of molecule-surface binding based on the
combination of ab initio calculations with machine learning algorithms, such
as ordinary least squares regression. This model provides in minutes and with
limited computational effort a reasonable estimate of the adsorption of small
molecules to metal substrates given a set of easily computed descriptors. The
model distinguishes different reaction sites and between molecular and
dissociative adsorption accurately, especially for larger adsorption energies
(values greater than $\pm 1$ eV). Compared with an independent and well
established database of computed adsorption energies, the predicted values
suggest that the model is transferable in that it can provide equally accurate
BE predictions for a variety of functional groups and surfaces from outside
the training set. We have benchmarked our model against the SISSO algorithm,
finding that the best fitting of the BE data is with a linear equation, and
our choice of the descriptors is very close to the best-case scenario
identified by the algorithm, with a difference in the RMSE of around 5%.
The model is constructed so that its extension to different substrates (e.g.,
oxides and carbonates) and technically relevant functional groups is
straightforward. We expect the model to find widespread use in a variety of
applications, for example in the innovation of new coatings for friction and
corrosion reduction and in the development of novel anti-pathogen coatings to
reduce disease transmission via surfaces.
## Acknowledgements
This work made use of the high performance computing facilities of Imperial
College London. The authors thankfully acknowledge the funding and technical
support from BP through the BP International Centre for Advanced Materials
(BP-ICAM). The authors also wish to thank Dr. Giuseppe Mallia for fruitful
discussions. The figures in this article were generated with
XCrySDen [72, 73] and Matplotlib [74].
## References
* Barth _et al._ [2005] J. V. Barth, G. Costantini, and K. Kern, Engineering atomic and molecular nanostructures at surfaces, Nature 437, 671 (2005).
* Choy [2003] K. Choy, Chemical vapour deposition of coatings, Progress in Materials Science 48, 57 (2003).
* Koch _et al._ [2016] G. Koch, J. Varney, N. Thompson, O. Moghissi, M. Gould, and J. Payer, International measures of prevention, application, and economics of corrosion technologies study, NACE Impact (2016).
* Komvopoulos [1996] K. Komvopoulos, Surface engineering and microtribology for microelectromechanical systems, Wear 200, 305 (1996).
* Gross _et al._ [2018] R. Gross, R. Hanna, A. Gambhir, P. Heptonstall, and J. Speirs, How long does innovation and commercialisation in the energy sectors take? historical case studies of the timescale from invention to widespread commercialisation in energy supply and end use technology, Energy Policy 123, 682 (2018).
* Finšgar and Jackson [2014] M. Finšgar and J. Jackson, Application of corrosion inhibitors for steels in acidic media for the oil and gas industry: A review, Corrosion Science 86, 17 (2014).
* Zhu _et al._ [2017] Y. Zhu, M. L. Free, R. Woollam, and W. Durnie, A review of surfactants as corrosion inhibitors and associated modeling, Progress in Materials Science 90, 159 (2017).
* Kousar _et al._ [2021] K. Kousar, M. Walczak, T. Ljungdahl, A. Wetzel, H. Oskarsson, P. Restuccia, E. Ahmad, N. Harrison, and R. Lindsay, Corrosion inhibition of carbon steel in hydrochloric acid: Elucidating the performance of an imidazoline-based surfactant, Corrosion Science 180, 109195 (2021).
* Neville _et al._ [2007] A. Neville, A. Morina, T. Haque, and M. Voong, Compatibility between tribological surfaces and lubricant additives—how friction and wear reduction can be controlled by surface/lube synergies, Tribology International 40, 1680 (2007).
* Minami [2017] I. Minami, Molecular science of lubricant additives, Applied Sciences 7, 445 (2017).
* Fatti _et al._ [2018] G. Fatti, P. Restuccia, C. Calandra, and M. C. Righi, Phosphorus adsorption on fe(110): An ab initio comparative study of iron passivation by different adsorbates, The Journal of Physical Chemistry C 122, 28105 (2018).
* Peeters _et al._ [2019] S. Peeters, P. Restuccia, S. Loehlé, B. Thiebaut, and M. C. Righi, Characterization of molybdenum dithiocarbamates by first-principles calculations, The Journal of Physical Chemistry A 123, 7007 (2019).
* Nørskov _et al._ [2008] J. K. Nørskov, T. Bligaard, B. Hvolbæk, F. Abild-Pedersen, I. Chorkendorff, and C. H. Christensen, The nature of the active site in heterogeneous metal catalysis, Chem. Soc. Rev. 37, 2163 (2008).
* Ras _et al._ [2013] E.-J. Ras, M. J. Louwerse, M. C. Mittelmeijer-Hazeleger, and G. Rothenberg, Predicting adsorption on metals: simple yet effective descriptors for surface catalysis, Phys. Chem. Chem. Phys. 15, 4436 (2013).
* Gao _et al._ [2020] W. Gao, Y. Chen, B. Li, S.-P. Liu, X. Liu, and Q. Jiang, Determining the adsorption energies of small molecules with the intrinsic properties of adsorbates and substrates, Nature Communications 11, 1196 (2020).
* Dean _et al._ [2019] J. Dean, M. G. Taylor, and G. Mpourmpakis, Unfolding adsorption on metal nanoparticles: Connecting stability with catalysis, Science Advances 5, eaax5101 (2019).
* Roling and Abild-Pedersen [2018] L. T. Roling and F. Abild-Pedersen, Structure-sensitive scaling relations: Adsorption energies from surface site stability, ChemCatChem 10, 1643 (2018).
* Roling _et al._ [2017] L. T. Roling, L. Li, and F. Abild-Pedersen, Configurational energies of nanoparticles based on metal–metal coordination, The Journal of Physical Chemistry C 121, 23002 (2017).
* Yan _et al._ [2018] Z. Yan, M. G. Taylor, A. Mascareno, and G. Mpourmpakis, Size-, shape-, and composition-dependent model for metal nanoparticle stability prediction, Nano Letters 18, 2696 (2018).
* Liu _et al._ [2020] C. Liu, Y. Li, M. Takao, T. Toyao, Z. Maeno, T. Kamachi, Y. Hinuma, I. Takigawa, and K.-i. Shimizu, Frontier molecular orbital based analysis of solid–adsorbate interactions over group 13 metal oxide surfaces, The Journal of Physical Chemistry C 124, 15355 (2020).
* Calle-Vallejo _et al._ [2014] F. Calle-Vallejo, J. I. Martínez, J. M. García-Lastra, P. Sautet, and D. Loffreda, Fast prediction of adsorption properties for platinum nanocatalysts with generalized coordination numbers, Angewandte Chemie International Edition 53, 8316 (2014).
* Winther _et al._ [2019] K. T. Winther, M. J. Hoffmann, J. R. Boes, O. Mamun, M. Bajdich, and T. Bligaard, Catalysis-hub.org, an open electronic structure database for surface reactions, Scientific Data 6, 75 (2019).
* Mamun _et al._ [2019] O. Mamun, K. T. Winther, J. R. Boes, and T. Bligaard, High-throughput calculations of catalytic properties of bimetallic alloy surfaces, Scientific Data 6, 76 (2019).
* Mamun _et al._ [2020] O. Mamun, K. T. Winther, J. R. Boes, and T. Bligaard, A bayesian framework for adsorption energy prediction on bimetallic alloy catalysts, npj Computational Materials 6, 177 (2020).
* Andersen _et al._ [2019] M. Andersen, S. V. Levchenko, M. Scheffler, and K. Reuter, Beyond scaling relations for the description of catalytic materials, ACS Catalysis 9, 2752 (2019).
* Chowdhury _et al._ [2018] A. J. Chowdhury, W. Yang, E. Walker, O. Mamun, A. Heyden, and G. A. Terejanu, Prediction of adsorption energies for chemical species on metal catalyst surfaces using machine learning, The Journal of Physical Chemistry C 122, 28142 (2018).
* Praveen and Comas-Vives [2020] C. S. Praveen and A. Comas-Vives, Design of an accurate machine learning algorithm to predict the binding energies of several adsorbates on multiple sites of metal surfaces, ChemCatChem 12, 4611 (2020).
* Zhang and Xu [2021] Y. Zhang and X. Xu, Predictions of adsorption energies of methane-related species on cu-based alloys through machine learning, Machine Learning with Applications 3, 100010 (2021).
* Chang and Medford [2021] C. Chang and A. J. Medford, Application of density functional tight binding and machine learning to evaluate the stability of biomass intermediates on the rh(111) surface, The Journal of Physical Chemistry C 125, 18210 (2021).
* Fung _et al._ [2021] V. Fung, G. Hu, P. Ganesh, and B. G. Sumpter, Machine learned features from density of states for accurate adsorption energy prediction, Nature Communications 12, 88 (2021).
* Li _et al._ [2021] Z. Li, B. J. Bucior, H. Chen, M. Haranczyk, J. I. Siepmann, and R. Q. Snurr, Machine learning using host/guest energy histograms to predict adsorption in metal–organic frameworks: Application to short alkanes and xe/kr mixtures, The Journal of Chemical Physics 155, 014701 (2021).
* Anderson _et al._ [2020] R. Anderson, A. Biong, and D. A. Gómez-Gualdrón, Adsorption isotherm predictions for multiple molecules in mofs using the same deep learning model, Journal of Chemical Theory and Computation 16, 1271 (2020).
* Ser _et al._ [2020] C. T. Ser, P. Žuvela, and M. W. Wong, Prediction of corrosion inhibition efficiency of pyridines and quinolines on an iron surface using machine learning-powered quantitative structure-property relationships, Applied Surface Science 512, 145612 (2020).
* Bligaard _et al._ [2004] T. Bligaard, J. Nørskov, S. Dahl, J. Matthiesen, C. Christensen, and J. Sehested, The brønsted–evans–polanyi relation and the volcano curve in heterogeneous catalysis, Journal of Catalysis 224, 206 (2004).
* Joachim and Ratner [2005] C. Joachim and M. A. Ratner, Molecular electronics: Some views on transport junctions and beyond, Proceedings of the National Academy of Sciences 102, 8801 (2005).
* Kasemo [2002] B. Kasemo, Biological surface science, Surface Science 500, 656 (2002).
* Giannozzi _et al._ [2009] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, Quantum espresso: a modular and open-source software project for quantum simulations of materials, Journal of Physics: Condensed Matter 21, 395502 (2009).
* Kresse and Joubert [1999] G. Kresse and D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method, Phys. Rev. B 59, 1758 (1999).
* Dal Corso [2014] A. Dal Corso, Pseudopotentials periodic table: From h to pu, Computational Materials Science 95, 337 (2014).
* Perdew _et al._ [1996] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77, 3865 (1996).
* Monkhorst and Pack [1976] H. J. Monkhorst and J. D. Pack, Special points for brillouin-zone integrations, Phys. Rev. B 13, 5188 (1976).
* Marzari _et al._ [1999] N. Marzari, D. Vanderbilt, A. De Vita, and M. C. Payne, Thermal contraction and disordering of the al(110) surface, Phys. Rev. Lett. 82, 3296 (1999).
* Dovesi _et al._ [2018] R. Dovesi, A. Erba, R. Orlando, C. M. Zicovich-Wilson, B. Civalleri, L. Maschio, M. Rérat, S. Casassa, J. Baima, S. Salustro, and B. Kirtman, Quantum-mechanical condensed matter simulations with CRYSTAL, WIREs Computational Molecular Science 8, e1360 (2018).
* Dovesi _et al._ [2017] R. Dovesi, V. R. Saunders, C. Roetti, R. Orlando, C. M. Zicovich-Wilson, F. Pascale, B. Civalleri, K. Doll, N. M. Harrison, I. J. Bush, P. D’Arco, M. Llunell, M. Causà, Y. Noël, L. Maschio, A. Erba, M. Rérat, and S. Casassa, _CRYSTAL17 User’s Manual_, University of Torino (2017).
* Clark _et al._ [1983] T. Clark, J. Chandrasekhar, G. W. Spitznagel, and P. V. R. Schleyer, Efficient diffuse function-augmented basis sets for anion calculations. III. the 3-21+g basis set for first-row elements, Li-F, J. Comput. Chem. 4, 294 (1983).
* Ditchfield _et al._ [1971] R. Ditchfield, W. J. Hehre, and J. A. Pople, Self-consistent molecular-orbital methods. IX. an extended gaussian-type basis for molecular-orbital studies of organic molecules, J. Chem. Phys. 54, 724 (1971).
* Francl _et al._ [1982] M. M. Francl, W. J. Pietro, W. J. Hehre, J. S. Binkley, M. S. Gordon, D. J. DeFrees, and J. A. Pople, Self-consistent molecular orbital methods. XXIII. a polarization-type basis set for second-row elements, J. Chem. Phys. 77, 3654 (1982).
* Gordon _et al._ [1982] M. S. Gordon, J. S. Binkley, J. A. Pople, W. J. Pietro, and W. J. Hehre, Self-consistent molecular-orbital methods. 22. small split-valence basis sets for second-row elements, J. Am. Chem. Soc. 104, 2797 (1982).
* Hariharan and Pople [1973] P. C. Hariharan and J. A. Pople, The influence of polarization functions on molecular orbital hydrogenation energies, Theor. Chim. Acta 28, 213 (1973).
* Hehre _et al._ [1972] W. J. Hehre, R. Ditchfield, and J. A. Pople, Self-consistent molecular orbital methods. XII. further extensions of gaussian-type basis sets for use in molecular orbital studies of organic molecules, J. Chem. Phys. 56, 2257 (1972).
* Spitznagel _et al._ [1987] G. W. Spitznagel, T. Clark, P. v. R. Schleyer, and W. J. Hehre, An evaluation of the performance of diffuse function-augmented basis sets for second row elements, na-cl, J. Comput. Chem. 8, 1109 (1987).
* Becke [1993] A. D. Becke, A new mixing of hartree–fock and local density‐functional theories, The Journal of Chemical Physics 98, 1372 (1993).
* Lee _et al._ [1988] C. Lee, W. Yang, and R. G. Parr, Development of the Colle-Salvetti correlation-energy formula into a functional of the electron density, Physical Review B 37, 785 (1988).
* Stephens _et al._ [1994] P. J. Stephens, F. J. Devlin, C. F. Chabalowski, and M. J. Frisch, Ab initio calculation of vibrational absorption and circular dichroism spectra using density functional force fields, The Journal of Physical Chemistry 98, 11623 (1994).
* Pisani _et al._ [1988] C. Pisani, R. Dovesi, and C. Roetti, _Hartree-Fock Ab Initio Treatment of Crystalline Systems_, 1st ed., Lecture Notes in Chemistry, Vol. 48 (Springer Verlag, 1988).
* Scaranto _et al._ [2011] J. Scaranto, G. Mallia, and N. Harrison, An efficient method for computing the binding energy of an adsorbed molecule within a periodic approach. the application to vinyl fluoride at rutile TiO2(110) surface, Computational Materials Science 50, 2080 (2011).
* Morin _et al._ [2004] C. Morin, D. Simon, and P. Sautet, Chemisorption of benzene on Pt(111), Pd(111), and Rh(111) metal surfaces: A structural and vibrational comparison from first principles, The Journal of Physical Chemistry B 108, 5653 (2004).
* Wang _et al._ [2010] B. Wang, S. Günther, J. Wintterlin, and M.-L. Bocquet, Periodicity, work function and reactivity of graphene on ru(0001) from first principles, New Journal of Physics 12, 043041 (2010).
* Fukui _et al._ [1952] K. Fukui, T. Yonezawa, and H. Shingu, A molecular orbital theory of reactivity in aromatic hydrocarbons, The Journal of Chemical Physics 20, 722 (1952).
* Hoffmann [1988] R. Hoffmann, A chemical and theoretical way to look at bonding on surfaces, Rev. Mod. Phys. 60, 601 (1988).
* Ishii _et al._ [1999] H. Ishii, K. Sugiyama, E. Ito, and K. Seki, Energy level alignment and interfacial electronic structures at organic/metal and organic/organic interfaces, Advanced Materials 11, 605 (1999).
* Hörmann _et al._ [2019] N. G. Hörmann, O. Andreussi, and N. Marzari, Grand canonical simulations of electrochemical interfaces in implicit solvation models, The Journal of Chemical Physics 150, 041730 (2019).
* Hörmann _et al._ [2020] N. G. Hörmann, N. Marzari, and K. Reuter, Electrosorption at metal surfaces from first principles, npj Computational Materials 6, 136 (2020).
* Hörmann and Reuter [2021] N. G. Hörmann and K. Reuter, Thermodynamic cyclic voltammograms based on ab initio calculations: Ag(111) in halide-containing solutions, Journal of Chemical Theory and Computation 17, 1782 (2021).
* Seabold and Perktold [2010] S. Seabold and J. Perktold, statsmodels: Econometric and statistical modeling with python, in _9th Python in Science Conference_ (2010) pp. 92–96.
* Van Rossum and Drake [2009] G. Van Rossum and F. L. Drake, _Python 3 Reference Manual_ (CreateSpace, 2009).
* Kittel [2004] C. Kittel, _Introduction to Solid State Physics_ , 8th ed. (John Wiley & Sons, 2004).
* Jain _et al._ [2013] A. Jain, S. P. Ong, G. Hautier, W. Chen, W. D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, and K. a. Persson, The Materials Project: A materials genome approach to accelerating materials innovation, APL Materials 1, 011002 (2013).
* Analytical Methods Committee AMCTB No. 93 [2020] Analytical Methods Committee AMCTB No. 93, To p or not to p: the use of p-values in analytical science, Anal. Methods 12, 872 (2020).
* Wellendorff _et al._ [2012] J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Density functionals for surface science: Exchange-correlation model development with bayesian error estimation, Phys. Rev. B 85, 235149 (2012).
* Ouyang _et al._ [2018] R. Ouyang, S. Curtarolo, E. Ahmetcik, M. Scheffler, and L. M. Ghiringhelli, Sisso: A compressed-sensing method for identifying the best low-dimensional descriptor in an immensity of offered candidates, Phys. Rev. Materials 2, 083802 (2018).
* Kokalj [1999] A. Kokalj, Xcrysden—a new program for displaying crystalline structures and electron densities, Journal of Molecular Graphics and Modelling 17, 176 (1999).
* Kokalj [2003] A. Kokalj, Computer graphics and graphical user interfaces as tools in simulations of matter at the atomic scale, Computational Materials Science 28, 155 (2003), proceedings of the Symposium on Software Development for Process and Materials Design.
* Hunter [2007] J. D. Hunter, Matplotlib: A 2d graphics environment, Computing in Science Engineering 9, 90 (2007).
# First Sagittarius A* Event Horizon Telescope Results. IV. Variability, Morphology, and Black Hole Mass
Kazunori Akiyama (ORCID: 0000-0002-9475-4254)
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Antxon Alberdi (ORCID: 0000-0002-9371-1033)
Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain
Walter Alef
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
Juan Carlos Algaba (ORCID: 0000-0001-6993-1696)
Department of Physics, Faculty of Science, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
Richard Anantua (ORCID: 0000-0003-3457-7660)
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Department of Physics & Astronomy, The University of Texas at San Antonio, One UTSA Circle, San Antonio, TX 78249, USA
Keiichi Asada (ORCID: 0000-0001-6988-8763)
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
Rebecca Azulay (ORCID: 0000-0002-2200-5393)
Departament d'Astronomia i Astrofísica, Universitat de València, C. Dr. Moliner 50, E-46100 Burjassot, València, Spain
Observatori Astronòmic, Universitat de València, C. Catedrático José Beltrán 2, E-46980 Paterna, València, Spain
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
Uwe Bach (ORCID: 0000-0002-7722-8412)
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
Anne-Kathrin Baczko (ORCID: 0000-0003-3090-3975)
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
David Ball
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
Mislav Baloković (ORCID: 0000-0003-0476-6647)
Yale Center for Astronomy & Astrophysics, Yale University, 52 Hillhouse Avenue, New Haven, CT 06511, USA
John Barrett (ORCID: 0000-0002-9290-0764)
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
Michi Bauböck (ORCID: 0000-0002-5518-2812)
Department of Physics, University of Illinois, 1110 West Green Street, Urbana, IL 61801, USA
Bradford A. Benson (ORCID: 0000-0002-5108-6823)
Fermi National Accelerator Laboratory, MS209, P.O. Box 500, Batavia, IL 60510, USA
Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Dan Bintley
East Asian Observatory, 660 N. A'ohoku Place, Hilo, HI 96720, USA
James Clerk Maxwell Telescope (JCMT), 660 N. A'ohoku Place, Hilo, HI 96720, USA
Lindy Blackburn (ORCID: 0000-0002-9030-642X)
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Raymond Blundell (ORCID: 0000-0002-5929-5857)
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Katherine L. Bouman (ORCID: 0000-0003-0077-4367)
California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
Geoffrey C. Bower (ORCID: 0000-0003-4056-9982)
Institute of Astronomy and Astrophysics, Academia Sinica, 645 N. A'ohoku Place, Hilo, HI 96720, USA
Department of Physics and Astronomy, University of Hawaii at Manoa, 2505 Correa Road, Honolulu, HI 96822, USA
Hope Boyce (ORCID: 0000-0002-6530-5783)
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
McGill Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
Michael Bremer
Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, F-38406 Saint Martin d'Hères, France
Christiaan D. Brinkerink (ORCID: 0000-0002-2322-0749)
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
Roger Brissenden (ORCID: 0000-0002-2556-0894)
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Silke Britzen (ORCID: 0000-0001-9240-6734)
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
Avery E. Broderick (ORCID: 0000-0002-3351-760X)
Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada
Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
Waterloo Centre for Astrophysics, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
Dominique Broguiere (ORCID: 0000-0001-9151-6683)
Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, F-38406 Saint Martin d'Hères, France
Thomas Bronzwaer (ORCID: 0000-0003-1151-3971)
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
Sandra Bustamante (ORCID: 0000-0001-6169-1894)
Department of Astronomy, University of Massachusetts, 01003, Amherst, MA, USA
Do-Young Byun (ORCID: 0000-0003-1157-4109)
Korea Astronomy and Space Science Institute, Daedeok-daero 776, Yuseong-gu, Daejeon 34055, Republic of Korea
University of Science and Technology, Gajeong-ro 217, Yuseong-gu, Daejeon 34113, Republic of Korea
John E. Carlstrom (ORCID: 0000-0002-2044-7665)
Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Department of Physics, University of Chicago, 5720 South Ellis Avenue, Chicago, IL 60637, USA
Enrico Fermi Institute, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Chiara Ceccobello (ORCID: 0000-0002-4767-9925)
Department of Space, Earth and Environment, Chalmers University of Technology, Onsala Space Observatory, SE-43992 Onsala, Sweden
Andrew Chael (ORCID: 0000-0003-2966-6220)
Princeton Center for Theoretical Science, Jadwin Hall, Princeton University, Princeton, NJ 08544, USA
NASA Hubble Fellowship Program, Einstein Fellow
Chi-kwan Chan (ORCID: 0000-0001-6337-6126)
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
Data Science Institute, University of Arizona, 1230 N. Cherry Ave., Tucson, AZ 85721, USA
Program in Applied Mathematics, University of Arizona, 617 N. Santa Rita, Tucson, AZ 85721
Koushik Chatterjee (ORCID: 0000-0002-2825-3590)
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Shami Chatterjee (ORCID: 0000-0002-2878-1502)
Cornell Center for Astrophysics and Planetary Science, Cornell University, Ithaca, NY 14853, USA
Ming-Tang Chen (ORCID: 0000-0001-6573-3318)
Institute of Astronomy and Astrophysics, Academia Sinica, 645 N. A'ohoku Place, Hilo, HI 96720, USA
Yongjun Chen (陈永军) (ORCID: 0000-0001-5650-6770)
Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, People's Republic of China
Key Laboratory of Radio Astronomy, Chinese Academy of Sciences, Nanjing 210008, People's Republic of China
Xiaopeng Cheng (ORCID: 0000-0003-4407-9868)
Korea Astronomy and Space Science Institute, Daedeok-daero 776, Yuseong-gu, Daejeon 34055, Republic of Korea
Ilje Cho (ORCID: 0000-0001-6083-7521)
Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain
Pierre Christian (ORCID: 0000-0001-6820-9941)
Physics Department, Fairfield University, 1073 North Benson Road, Fairfield, CT 06824, USA
Nicholas S. Conroy (ORCID: 0000-0003-2886-2377)
Department of Astronomy, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
John E. Conway (ORCID: 0000-0003-2448-9181)
Department of Space, Earth and Environment, Chalmers University of Technology, Onsala Space Observatory, SE-43992 Onsala, Sweden
James M. Cordes (ORCID: 0000-0002-4049-1882)
Cornell Center for Astrophysics and Planetary Science, Cornell University, Ithaca, NY 14853, USA
Thomas M. Crawford (ORCID: 0000-0001-9000-5013)
Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Geoffrey B. Crew (ORCID: 0000-0002-2079-3189)
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
0000-0002-3945-6342]Alejandro Cruz-Osorio
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
0000-0001-6311-4345]Yuzhu Cui (崔玉竹)
Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shengrong Road 520, Shanghai, 201210, People’s Republic of China
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka, Mizusawa, Oshu, Iwate 023-0861, Japan
Department of Astronomical Science, The Graduate University for Advanced Studies (SOKENDAI), 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
0000-0002-2685-2434]Jordy Davelaar
Department of Astronomy and Columbia Astrophysics Laboratory, Columbia University, 550 W 120th Street, New York, NY 10027, USA
Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0002-9945-682X]Mariafelicia De Laurentis
Dipartimento di Fisica “E. Pancini”, Universitá di Napoli “Federico II”, Compl. Univ. di Monte S. Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
INFN Sez. di Napoli, Compl. Univ. di Monte S. Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy
0000-0003-1027-5043]Roger Deane
Wits Centre for Astrophysics, University of the Witwatersrand, 1 Jan Smuts Avenue, Braamfontein, Johannesburg 2050, South Africa
Department of Physics, University of Pretoria, Hatfield, Pretoria 0028, South Africa
Centre for Radio Astronomy Techniques and Technologies, Department of Physics and Electronics, Rhodes University, Makhanda 6140, South Africa
0000-0003-1269-9667]Jessica Dempsey
East Asian Observatory, 660 N. A'ohoku Place, Hilo, HI 96720, USA
James Clerk Maxwell Telescope (JCMT), 660 N. A'ohoku Place, Hilo, HI 96720, USA
ASTRON, Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
0000-0003-3922-4055]Gregory Desvignes
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Université de Paris, 5 place Jules Janssen, 92195 Meudon, France
0000-0003-3903-0373]Jason Dexter
JILA and Department of Astrophysical and Planetary Sciences, University of Colorado, Boulder, CO 80309, USA
0000-0001-6765-877X]Vedant Dhruv
Department of Physics, University of Illinois, 1110 West Green Street, Urbana, IL 61801, USA
0000-0002-9031-0904]Sheperd S. Doeleman
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0002-3769-1314]Sean Dougal
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
0000-0001-6010-6200]Sergio A. Dzib
Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, F-38406 Saint Martin d'Hères, France
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0001-6196-4135]Ralph P. Eatough
National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100101, PR China
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-2791-5011]Razieh Emami
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0002-2526-6724]Heino Falcke
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0003-4914-5625]Joseph Farah
Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA
Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA
0000-0002-7128-9345]Vincent L. Fish
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
0000-0002-9036-2747]Ed Fomalont
National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville,
VA 22903, USA
0000-0002-9797-0972]H. Alyson Ford
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
0000-0002-5222-1361]Raquel Fraga-Encinas
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
William T. Freeman
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 32-D476, 77 Massachusetts Ave., Cambridge, MA 02142, USA
Google Research, 355 Main St., Cambridge, MA 02142, USA
0000-0002-8010-8454]Per Friberg
East Asian Observatory, 660 N. A'ohoku Place, Hilo, HI 96720, USA
James Clerk Maxwell Telescope (JCMT), 660 N. A'ohoku Place, Hilo, HI 96720, USA
0000-0002-1827-1656]Christian M. Fromm
Institut für Theoretische Physik und Astrophysik, Universität Würzburg, Emil-Fischer-Str. 31,
97074 Würzburg, Germany
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-8773-4933]Antonio Fuentes
Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain
0000-0002-6429-3872]Peter Galison
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Department of History of Science, Harvard University, Cambridge, MA 02138, USA
Department of Physics, Harvard University, Cambridge, MA 02138, USA
0000-0001-7451-8935]Charles F. Gammie
Department of Physics, University of Illinois, 1110 West Green Street, Urbana, IL 61801, USA
Department of Astronomy, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801, USA
NCSA, University of Illinois, 1205 W Clark St, Urbana, IL 61801, USA
0000-0002-6584-7443]Roberto García
Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, F-38406 Saint Martin d'Hères, France
0000-0002-0115-4605]Olivier Gentaz
Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, F-38406 Saint Martin d'Hères, France
0000-0002-3586-6424]Boris Georgiev
Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
Waterloo Centre for Astrophysics, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada
0000-0002-2542-7743]Ciriaco Goddi
Dipartimento di Fisica, Università degli Studi di Cagliari, SP Monserrato-Sestu km 0.7, I-09042 Monserrato, Italy
INAF - Osservatorio Astronomico di Cagliari, Via della Scienza 5, 09047, Selargius, CA, Italy
0000-0003-2492-1966]Roman Gold
CP3-Origins, University of Southern Denmark, Campusvej 55, DK-5230 Odense M, Denmark
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
0000-0001-9395-1670]Arturo I. Gómez-Ruiz
Instituto Nacional de Astrofísica, Óptica y Electrónica. Apartado Postal 51 y 216, 72000. Puebla Pue., México
Consejo Nacional de Ciencia y Tecnologìa, Av. Insurgentes Sur 1582, 03940, Ciudad de México, México
0000-0003-4190-7613]José L. Gómez
Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain
0000-0002-4455-6946]Minfeng Gu (顾敏峰)
Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, People's Republic of China
Key Laboratory for Research in Galaxies and Cosmology, Chinese Academy of Sciences, Shanghai 200030, People's Republic of China
0000-0003-0685-3621]Mark Gurwell
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0001-6906-772X]Kazuhiro Hada
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka, Mizusawa, Oshu, Iwate 023-0861, Japan
Department of Astronomical Science, The Graduate University for Advanced Studies (SOKENDAI), 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
0000-0001-6803-2138]Daryl Haggard
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
McGill Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
Kari Haworth
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0002-4114-4583]Michael H. Hecht
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
0000-0003-1918-6098]Ronald Hesper
NOVA Sub-mm Instrumentation Group, Kapteyn Astronomical Institute, University of Groningen, Landleven 12, 9747 AD Groningen, The Netherlands
0000-0002-7671-0047]Dirk Heumann
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
0000-0001-6947-5846]Luis C. Ho (何子山)
Department of Astronomy, School of Physics, Peking University, Beijing 100871, People's Republic of China
Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, People's Republic of China
0000-0002-3412-4306]Paul Ho
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
James Clerk Maxwell Telescope (JCMT), 660 N. A'ohoku Place, Hilo, HI 96720, USA
East Asian Observatory, 660 N. A'ohoku Place, Hilo, HI 96720, USA
0000-0003-4058-9000]Mareki Honma
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka, Mizusawa, Oshu, Iwate 023-0861, Japan
Department of Astronomical Science, The Graduate University for Advanced Studies (SOKENDAI), 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
0000-0001-5641-3953]Chih-Wei L. Huang
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0002-1923-227X]Lei Huang (黄磊)
Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, People's Republic of China
Key Laboratory for Research in Galaxies and Cosmology, Chinese Academy of Sciences, Shanghai 200030, People's Republic of China
David H. Hughes
Instituto Nacional de Astrofísica, Óptica y Electrónica. Apartado Postal 51 y 216, 72000. Puebla Pue., México
0000-0002-2462-1448]Shiro Ikeda
National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
The Institute of Statistical Mathematics, 10-3 Midori-cho, Tachikawa, Tokyo, 190-8562, Japan
Department of Statistical Science, The Graduate University for Advanced Studies (SOKENDAI), 10-3 Midori-cho, Tachikawa, Tokyo 190-8562, Japan
Kavli Institute for the Physics and Mathematics of the Universe, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, 277-8583, Japan
0000-0002-3443-2472]C. M. Violette Impellizzeri
Leiden Observatory, Leiden University, Postbus 2300, 9513 RA Leiden, The Netherlands
National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville,
VA 22903, USA
0000-0001-5037-3989]Makoto Inoue
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0002-5297-921X]Sara Issaoun
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
NASA Hubble Fellowship Program, Einstein Fellow
0000-0001-5160-4486]David J. James
ASTRAVEO LLC, PO Box 1668, Gloucester, MA 01931
0000-0002-1578-6582]Buell T. Jannuzi
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
0000-0001-8685-6544]Michael Janssen
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0003-2847-1712]Britton Jeter
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0001-7369-3539]Wu Jiang (江悟)
Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, People's Republic of China
0000-0002-2662-3754]Alejandra Jiménez-Rosales
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0002-4120-3029]Michael D. Johnson
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0001-6158-1708]Svetlana Jorstad
Institute for Astrophysical Research, Boston University, 725 Commonwealth Ave., Boston, MA 02215, USA
0000-0002-2514-5965]Abhishek V. Joshi
Department of Physics, University of Illinois, 1110 West Green Street, Urbana, IL 61801, USA
0000-0001-7003-8643]Taehyun Jung
Korea Astronomy and Space Science Institute, Daedeok-daero 776, Yuseong-gu, Daejeon 34055, Republic of Korea
University of Science and Technology, Gajeong-ro 217, Yuseong-gu, Daejeon 34113, Republic of Korea
0000-0001-7387-9333]Mansour Karami
Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada
Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
0000-0002-5307-2919]Ramesh Karuppusamy
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0001-8527-0496]Tomohisa Kawashima
Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8582, Japan
0000-0002-3490-146X]Garrett K. Keating
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0002-6156-5617]Mark Kettenis
Joint Institute for VLBI ERIC (JIVE), Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
0000-0002-7038-2118]Dong-Jin Kim
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0001-8229-7183]Jae-Young Kim
Department of Astronomy and Atmospheric Sciences, Kyungpook National University,
Daegu 702-701, Republic of Korea
Korea Astronomy and Space Science Institute, Daedeok-daero 776, Yuseong-gu, Daejeon 34055, Republic of Korea
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-1229-0426]Jongsoo Kim
Korea Astronomy and Space Science Institute, Daedeok-daero 776, Yuseong-gu, Daejeon 34055, Republic of Korea
0000-0002-4274-9373]Junhan Kim
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
0000-0002-2709-7338]Motoki Kino
National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Kogakuin University of Technology & Engineering, Academic Support Center, 2665-1 Nakano, Hachioji, Tokyo 192-0015, Japan
0000-0002-7029-6658]Jun Yi Koay
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0001-7386-7439]Prashant Kocherlakota
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
Yutaro Kofuji
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka, Mizusawa, Oshu, Iwate 023-0861, Japan
Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
0000-0003-2777-5861]Patrick M. Koch
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0002-3723-3372]Shoko Koyama
Niigata University, 8050 Ikarashi-nino-cho, Nishi-ku, Niigata 950-2181, Japan
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0002-4908-4925]Carsten Kramer
Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, F-38406 Saint Martin d'Hères, France
0000-0002-4175-2271]Michael Kramer
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-4892-9586]Thomas P. Krichbaum
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0001-6211-5581]Cheng-Yu Kuo
Physics Department, National Sun Yat-Sen University, No. 70, Lien-Hai Road, Kaosiung City 80424, Taiwan, R.O.C.
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0002-8116-9427]Noemi La Bella
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0003-3234-7247]Tod R. Lauer
National Optical Astronomy Observatory, 950 N. Cherry Ave., Tucson, AZ 85719, USA
0000-0002-3350-5588]Daeyoung Lee
Department of Physics, University of Illinois, 1110 West Green Street, Urbana, IL 61801, USA
0000-0002-6269-594X]Sang-Sung Lee
Korea Astronomy and Space Science Institute, Daedeok-daero 776, Yuseong-gu, Daejeon 34055, Republic of Korea
0000-0002-8802-8256]Po Kin Leung
Department of Physics, The Chinese University of Hong Kong, Shatin, N. T., Hong Kong
0000-0001-7307-632X]Aviad Levis
California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
0000-0003-0355-6437]Zhiyuan Li (李志远)
School of Astronomy and Space Science, Nanjing University, Nanjing 210023, People's Republic of China
Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Nanjing 210023, People's Republic of China
0000-0001-7361-2460]Rocco Lico
Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain
INAF-Istituto di Radioastronomia, Via P. Gobetti 101, I-40129 Bologna, Italy
0000-0002-6100-4772]Greg Lindahl
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0002-3669-0715]Michael Lindqvist
Department of Space, Earth and Environment, Chalmers University of Technology, Onsala Space Observatory, SE-43992 Onsala, Sweden
0000-0001-6088-3819]Mikhail Lisakov
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-7615-7499]Jun Liu (刘俊)
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-2953-7376]Kuo Liu
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0003-0995-5201]Elisabetta Liuzzo
INAF-Istituto di Radioastronomia & Italian ALMA Regional Centre, Via P. Gobetti 101, I-40129 Bologna, Italy
0000-0003-1869-2503]Wen-Ping Lo
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
Department of Physics, National Taiwan University, No.1, Sect.4, Roosevelt Rd., Taipei 10617, Taiwan, R.O.C
0000-0003-1622-1484]Andrei P. Lobanov
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-5635-3345]Laurent Loinard
Instituto de Radioastronomía y Astrofísica, Universidad Nacional Autónoma de México, Morelia 58089, México
Instituto de Astronomía, Universidad Nacional Autónoma de México (UNAM), Apdo Postal 70-264, Ciudad de México, México
0000-0003-4062-4654]Colin J. Lonsdale
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
0000-0002-7692-7967]Ru-Sen Lu (路如森)
Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, People's Republic of China
Key Laboratory of Radio Astronomy, Chinese Academy of Sciences, Nanjing 210008, People's Republic of China
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-7077-7195]Jirong Mao (毛基荣)
Yunnan Observatories, Chinese Academy of Sciences, 650011 Kunming, Yunnan Province, People's Republic of China
Center for Astronomical Mega-Science, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing, 100012, People's Republic of China
Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, 650011 Kunming, People's Republic of China
0000-0002-5523-7588]Nicola Marchili
INAF-Istituto di Radioastronomia & Italian ALMA Regional Centre, Via P. Gobetti 101, I-40129 Bologna, Italy
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0001-9564-0876]Sera Markoff
Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands
Gravitation and Astroparticle Physics Amsterdam (GRAPPA) Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
0000-0002-2367-1080]Daniel P. Marrone
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
0000-0001-7396-3332]Alan P. Marscher
Institute for Astrophysical Research, Boston University, 725 Commonwealth Ave., Boston, MA 02215, USA
0000-0003-3708-9611]Iván Martí-Vidal
Departament d'Astronomia i Astrofísica, Universitat de València, C. Dr. Moliner 50, E-46100 Burjassot, València, Spain
Observatori Astronòmic, Universitat de València, C. Catedrático José Beltrán 2, E-46980 Paterna, València, Spain
0000-0002-2127-7880]Satoki Matsushita
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0002-3728-8082]Lynn D. Matthews
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
0000-0003-2342-6728]Lia Medeiros
NSF Astronomy and Astrophysics Postdoctoral Fellow
School of Natural Sciences, Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
0000-0001-6459-0669]Karl M. Menten
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-7618-6556]Daniel Michalik
Science Support Office, Directorate of Science, European Space Research and Technology Centre (ESA/ESTEC), Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands
Department of Astronomy and Astrophysics, University of Chicago,
5640 South Ellis Avenue, Chicago, IL 60637, USA
0000-0002-7210-6264]Izumi Mizuno
East Asian Observatory, 660 N. A'ohoku Place, Hilo, HI 96720, USA
James Clerk Maxwell Telescope (JCMT), 660 N. A'ohoku Place, Hilo, HI 96720, USA
0000-0002-8131-6730]Yosuke Mizuno
Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shengrong Road 520, Shanghai, 201210, People’s Republic of China
School of Physics and Astronomy, Shanghai Jiao Tong University,
800 Dongchuan Road, Shanghai, 200240, People’s Republic of China
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
0000-0002-3882-4414]James M. Moran
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0003-1364-3761]Kotaro Moriyama
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka, Mizusawa, Oshu, Iwate 023-0861, Japan
0000-0002-4661-6332]Monika Moscibrodzka
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0002-2739-2994]Cornelia Müller
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0003-0329-6874]Alejandro Mus
Departament d'Astronomia i Astrofísica, Universitat de València, C. Dr. Moliner 50, E-46100 Burjassot, València, Spain
Observatori Astronòmic, Universitat de València, C. Catedrático José Beltrán 2, E-46980 Paterna, València, Spain
0000-0003-1984-189X]Gibwa Musoke
Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0003-3025-9497]Ioannis Myserlis
Institut de Radioastronomie Millimétrique (IRAM), Avenida Divina Pastora 7, Local 20, E-18012, Granada, Spain
0000-0001-9479-9957]Andrew Nadolski
Department of Astronomy, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801, USA
0000-0003-0292-3645]Hiroshi Nagai
National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Department of Astronomical Science, The Graduate University for Advanced Studies (SOKENDAI), 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
0000-0001-6920-662X]Neil M. Nagar
Astronomy Department, Universidad de Concepción, Casilla 160-C, Concepción, Chile
0000-0001-6081-2420]Masanori Nakamura
National Institute of Technology, Hachinohe College, 16-1 Uwanotai, Tamonoki, Hachinohe City, Aomori 039-1192, Japan
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0002-1919-2730]Ramesh Narayan
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0002-4723-6569]Gopal Narayanan
Department of Astronomy, University of Massachusetts, 01003, Amherst, MA, USA
0000-0001-8242-4373]Iniyan Natarajan
Wits Centre for Astrophysics, University of the Witwatersrand,
1 Jan Smuts Avenue, Braamfontein, Johannesburg 2050, South Africa
South African Radio Astronomy Observatory, Observatory 7925, Cape Town, South Africa
Antonios Nathanail
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
Department of Physics, National and Kapodistrian University of Athens, Panepistimiopolis, GR 15783 Zografos, Greece
Santiago Navarro Fuentes
Institut de Radioastronomie Millimétrique (IRAM), Avenida Divina Pastora 7, Local 20, E-18012, Granada, Spain
0000-0002-8247-786X]Joey Neilsen
Department of Physics, Villanova University, 800 Lancaster Avenue, Villanova, PA 19085, USA
0000-0002-7176-4046]Roberto Neri
Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, F-38406 Saint Martin d'Hères, France
0000-0003-1361-5699]Chunchong Ni
Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
Waterloo Centre for Astrophysics, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada
0000-0002-4151-3860]Aristeidis Noutsos
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0001-6923-1315]Michael A. Nowak
Physics Department, Washington University CB 1105, St Louis, MO 63130, USA
0000-0002-4991-9638]Junghwan Oh
Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul, Republic of Korea
0000-0003-3779-2016]Hiroki Okino
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka, Mizusawa, Oshu, Iwate 023-0861, Japan
Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
0000-0001-6833-7580]Héctor Olivares
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0002-2863-676X]Gisela N. Ortiz-León
Instituto de Astronomía, Universidad Nacional Autónoma de México (UNAM), Apdo Postal 70-264, Ciudad de México, México
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0003-4046-2923]Tomoaki Oyama
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka, Mizusawa, Oshu, Iwate 023-0861, Japan
0000-0002-7179-3816]Daniel C. M. Palumbo
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0001-6757-3098]Georgios Filippos Paraschos
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0001-6558-9053]Jongho Park
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
EACOA Fellow
0000-0002-6327-3423]Harriet Parsons
East Asian Observatory, 660 N. A'ohoku Place, Hilo, HI 96720, USA
James Clerk Maxwell Telescope (JCMT), 660 N. A'ohoku Place, Hilo, HI 96720, USA
0000-0002-6021-9421]Nimesh Patel
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0003-2155-9578]Ue-Li Pen
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada
Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON, M5S 3H8, Canada
Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON, M5S 3H4, Canada
Canadian Institute for Advanced Research, 180 Dundas St West, Toronto, ON, M5G 1Z8, Canada
0000-0002-5278-9221]Dominic W. Pesce
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Vincent Piétu
Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, F-38406 Saint Martin d'Hères, France
0000-0001-6765-9609]Richard Plambeck
Radio Astronomy Laboratory, University of California, Berkeley, CA 94720, USA
Aleksandar PopStefanija
Department of Astronomy, University of Massachusetts, 01003, Amherst, MA, USA
0000-0002-4584-2557]Oliver Porth
Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
0000-0002-6579-8311]Felix M. Pötzl
Department of Physics, University College Cork, Kane Building, College Road, Cork T12 K8AF, Ireland
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-0393-7734]Ben Prather
Department of Physics, University of Illinois, 1110 West Green Street, Urbana, IL 61801, USA
0000-0002-4146-0113]Jorge A. Preciado-López
Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada
0000-0001-9270-8812]Hung-Yi Pu
Department of Physics, National Taiwan Normal University, No. 88, Sec.4, Tingzhou Rd., Taipei 116, Taiwan, R.O.C.
Center of Astronomy and Gravitation, National Taiwan Normal University, No. 88, Sec. 4, Tingzhou Road, Taipei 116, Taiwan, R.O.C.
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0002-9248-086X]Venkatessh Ramakrishnan
Astronomy Department, Universidad de Concepción, Casilla 160-C, Concepción, Chile
Finnish Centre for Astronomy with ESO, FI-20014 University of Turku, Finland
Aalto University Metsähovi Radio Observatory, Metsähovintie 114, FI-02540 Kylmälä, Finland
0000-0002-1407-7944]Ramprasad Rao
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0002-6529-202X]Mark G. Rawlings
Gemini Observatory/NSF NOIRLab, 670 N. A’ohōkū Place, Hilo, HI 96720, USA
East Asian Observatory, 660 N. A'ohoku Place, Hilo, HI 96720, USA
James Clerk Maxwell Telescope (JCMT), 660 N. A'ohoku Place, Hilo, HI 96720, USA
0000-0002-5779-4767]Alexander W. Raymond
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0002-1330-7103]Luciano Rezzolla
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
Frankfurt Institute for Advanced Studies, Ruth-Moufang-Strasse 1, 60438 Frankfurt, Germany
School of Mathematics, Trinity College, Dublin 2, Ireland
0000-0001-5287-0452]Angelo Ricarte
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
0000-0002-7301-3908]Bart Ripperda
Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544, USA
Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA
0000-0001-5461-3687]Freek Roelofs
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0003-1941-7458]Alan Rogers
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
0000-0001-9503-4892]Eduardo Ros
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0001-6301-9073]Cristina Romero-Cañizales
Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
0000-0002-8280-9238]Arash Roshanineshat
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
Helge Rottmann
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-1931-0135]Alan L. Roy
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-0965-5463]Ignacio Ruiz
Institut de Radioastronomie Millimétrique (IRAM), Avenida Divina Pastora 7, Local 20, E-18012, Granada, Spain
0000-0001-7278-9707]Chet Ruszczyk
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
0000-0003-4146-9043]Kazi L. J. Rygl
INAF-Istituto di Radioastronomia & Italian ALMA Regional Centre, Via P. Gobetti 101, I-40129 Bologna, Italy
0000-0002-8042-5951]Salvador Sánchez
Institut de Radioastronomie Millimétrique (IRAM), Avenida Divina Pastora 7, Local 20, E-18012, Granada, Spain
0000-0002-7344-9920]David Sánchez-Argüelles
Instituto Nacional de Astrofísica, Óptica y Electrónica. Apartado Postal 51 y 216, 72000. Puebla Pue., México
Consejo Nacional de Ciencia y Tecnologìa, Av. Insurgentes Sur 1582, 03940, Ciudad de México, México
0000-0003-0981-9664]Miguel Sánchez-Portal
Institut de Radioastronomie Millimétrique (IRAM), Avenida Divina Pastora 7, Local 20, E-18012, Granada, Spain
0000-0001-5946-9960]Mahito Sasada
Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551, Japan
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka, Mizusawa, Oshu, Iwate 023-0861, Japan
Hiroshima Astrophysical Science Center, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan
0000-0003-0433-3585]Kaushik Satapathy
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
0000-0001-6214-1085]Tuomas Savolainen
Aalto University Department of Electronics and Nanoengineering, PL 15500, FI-00076 Aalto, Finland
Aalto University Metsähovi Radio Observatory, Metsähovintie 114, FI-02540 Kylmälä, Finland
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
F. Peter Schloerb
Department of Astronomy, University of Massachusetts, 01003, Amherst, MA, USA
0000-0002-8909-2401]Jonathan Schonfeld
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0003-2890-9454]Karl-Friedrich Schuster
Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine,
F-38406 Saint Martin d'Hères, France
0000-0002-1334-8853]Lijing Shao
Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, People's Republic of China
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0003-3540-8746]Zhiqiang Shen (沈志强)
Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, People's Republic of China
Key Laboratory of Radio Astronomy, Chinese Academy of Sciences, Nanjing 210008, People's Republic of China
0000-0003-3723-5404]Des Small
Joint Institute for VLBI ERIC (JIVE), Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
0000-0002-4148-8378]Bong Won Sohn
Korea Astronomy and Space Science Institute, Daedeok-daero 776, Yuseong-gu, Daejeon 34055, Republic of Korea
University of Science and Technology, Gajeong-ro 217, Yuseong-gu, Daejeon 34113, Republic of Korea
Department of Astronomy, Yonsei University, Yonsei-ro 50, Seodaemun-gu, 03722 Seoul, Republic of Korea
0000-0003-1938-0720]Jason SooHoo
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
0000-0001-7915-5272]Kamal Souccar
Department of Astronomy, University of Massachusetts, 01003, Amherst, MA, USA
0000-0003-1526-6787]He Sun (孙赫)
California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
0000-0003-0236-0600]Fumie Tazaki
Mizusawa VLBI Observatory, National Astronomical Observatory of Japan, 2-12 Hoshigaoka, Mizusawa, Oshu, Iwate 023-0861, Japan
0000-0003-3906-4354]Alexandra J. Tetarenko
Department of Physics and Astronomy, Texas Tech University, Lubbock, Texas 79409-1051, USA
NASA Hubble Fellowship Program, Einstein Fellow
0000-0003-3826-5648]Paul Tiede
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
0000-0002-6514-553X]Remo P. J. Tilanus
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
Leiden Observatory, Leiden University, Postbus 2300, 9513 RA Leiden, The Netherlands
Netherlands Organisation for Scientific Research (NWO), Postbus 93138, 2509 AC Den Haag, The Netherlands
0000-0001-9001-3275]Michael Titus
Massachusetts Institute of Technology Haystack Observatory, 99 Millstone Road, Westford, MA 01886, USA
0000-0001-8700-6058]Pablo Torne
Institut de Radioastronomie Millimétrique (IRAM), Avenida Divina Pastora 7, Local 20, E-18012, Granada, Spain
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-1209-6500]Efthalia Traianou
Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
Tyler Trent
Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
0000-0003-0465-1559]Sascha Trippe
Department of Physics and Astronomy, Seoul National University, Gwanak-gu, Seoul 08826, Republic of Korea
0000-0002-5294-0198]Matthew Turk
Department of Astronomy, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801, USA
0000-0001-5473-2950]Ilse van Bemmel
Joint Institute for VLBI ERIC (JIVE), Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
0000-0002-0230-5946]Huib Jan van Langevelde
Joint Institute for VLBI ERIC (JIVE), Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
Leiden Observatory, Leiden University, Postbus 2300, 9513 RA Leiden, The Netherlands
University of New Mexico, Department of Physics and Astronomy, Albuquerque, NM 87131, USA
0000-0001-7772-6131]Daniel R. van Rossum
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0003-3349-7394]Jesse Vos
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0003-1105-6109]Jan Wagner
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0003-1140-2761]Derek Ward-Thompson
Jeremiah Horrocks Institute, University of Central Lancashire, Preston PR1 2HE, UK
0000-0002-8960-2942]John Wardle
Physics Department, Brandeis University, 415 South Street, Waltham, MA 02453, USA
0000-0002-4603-5204]Jonathan Weintroub
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0003-4058-2837]Norbert Wex
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-7416-5209]Robert Wharton
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-8635-4242]Maciek Wielgus
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-0862-3398]Kaj Wiik
Tuorla Observatory, Department of Physics and Astronomy, University of Turku, Finland
0000-0003-2618-797X]Gunther Witzel
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-6894-1072]Michael F. Wondrak
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
Radboud Excellence Fellow of Radboud University, Nijmegen, The Netherlands
0000-0001-6952-2147]George N. Wong
School of Natural Sciences, Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA
Princeton Gravity Initiative, Princeton University, Princeton, New Jersey 08544, USA
0000-0003-4773-4987]Qingwen Wu (吴庆文)
School of Physics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, People's Republic of China
0000-0002-6017-8199]Paul Yamaguchi
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0001-8694-8166]Doosoo Yoon
Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands
0000-0003-0000-2682]André Young
Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
0000-0002-3666-4920]Ken Young
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
0000-0001-9283-1191]Ziri Younsi
Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK
Institut für Theoretische Physik, Goethe-Universität Frankfurt, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main, Germany
0000-0003-3564-6437]Feng Yuan (袁峰)
Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, People's Republic of China
Key Laboratory for Research in Galaxies and Cosmology, Chinese Academy of Sciences, Shanghai 200030, People's Republic of China
School of Astronomy and Space Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, People's Republic of China
0000-0002-7330-4756]Ye-Fei Yuan (袁业飞)
Astronomy Department, University of Science and Technology of China, Hefei 230026, People's Republic of China
0000-0001-7470-3321]J. Anton Zensus
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
0000-0002-2967-790X]Shuo Zhang
Bard College, 30 Campus Road, Annandale-on-Hudson, NY, 12504
0000-0002-4417-1659]Guang-Yao Zhao
Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain
0000-0002-9774-3606]Shan-Shan Zhao (赵杉杉)
Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, People's Republic of China
Dominic O. Chang
Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
0The Event Horizon Telescope Collaboration
The EHT Collaboration et al.
In this paper we quantify the temporal variability and image morphology of the horizon-scale emission from Sgr A*, as observed by the EHT in 2017 April at a wavelength of 1.3 mm. We find that the data exhibit variability that exceeds what can be explained by the uncertainties in the data or by the effects of interstellar scattering. The magnitude of this variability can be a substantial fraction of the correlated flux density, reaching $\sim$100% on some baselines. Through an exploration of simple geometric source models, we demonstrate that ring-like morphologies provide better fits to the data than do other morphologies with comparable complexity. We develop two strategies for fitting static geometric ring models to the time-variable data; one strategy fits models to short segments of data over which the source is static and averages these independent fits, while the other fits models to the full dataset using a parametric model for the structural variability power spectrum around the average source structure. Both geometric modeling and image-domain feature extraction techniques determine the ring diameter to be $51.8 \pm 2.3$ $\mu$as (68% credible intervals), with the ring thickness constrained to have an FWHM between $\sim$30% and 50% of the ring diameter. To bring the diameter measurements to a common physical scale, we calibrate them using synthetic data generated from GRMHD simulations. This calibration constrains the angular size of the gravitational radius to be $4.8^{+1.4}_{-0.7}$ $\mu$as, which we combine with an independent distance measurement from maser parallaxes to determine the mass of Sgr A* to be $4.0^{+1.1}_{-0.6} \times 10^{6}$ $M_{\odot}$.
§ INTRODUCTION
Sagittarius A$^*$ (Sgr A*), the radio source associated with the supermassive black hole (SMBH) at the center of the Milky Way, is thought to subtend the largest angular size of all black holes in the sky. At a distance of $D \approx 8$ kpc and with a mass of $M \approx 4 \times 10^6$ $M_{\odot}$ <cit.>, Sgr A* has a Schwarzschild radius of $\sim$10 $\mu$as. Models of optically thin spherical accretion flows around SMBHs generically predict that they will appear to distant observers as bright rings of emission surrounding a darker central “shadow” <cit.>, and a variety of more general accretion flow simulations have demonstrated that the diameter of this ring is typically $\sim$5 times larger than the Schwarzschild radius <cit.>. The Event Horizon Telescope (EHT) collaboration provided observational verification of this picture, using a global very long baseline interferometry (VLBI) network of radio telescopes observing at a frequency of $\sim$230 GHz to resolve the $\sim$40 $\mu$as ring of emission around the M87$^*$ SMBH <cit.>.
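These back-of-the-envelope scales are straightforward to verify. The following minimal Python sketch (constants rounded; the $\sim$5$\times$ ring factor is the one quoted above) recovers the $\sim$10 $\mu$as angular Schwarzschild radius and the $\sim$50 $\mu$as predicted ring diameter:

```python
import numpy as np

# Rounded SI constants; mass and distance as quoted in the text above.
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8            # speed of light [m s^-1]
M_sun = 1.989e30       # solar mass [kg]
pc = 3.086e16          # parsec [m]

M = 4e6 * M_sun        # Sgr A* mass
D = 8e3 * pc           # Sgr A* distance (8 kpc)

R_s = 2 * G * M / c**2                      # Schwarzschild radius [m]
rad_to_uas = (180 / np.pi) * 3600 * 1e6     # radians -> microarcseconds

theta_s = (R_s / D) * rad_to_uas
print(f"Angular Schwarzschild radius: {theta_s:.1f} uas")   # ~10 uas
print(f"Predicted ring (~5 R_s): {5 * theta_s:.0f} uas")    # ~50 uas
```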
The predicted ring diameter for Sgr A* is $\sim$50 $\mu$as, about 25% larger than what the EHT observed for M87$^*$. However, because Sgr A* is more than three orders of magnitude less massive than M87$^*$, all dynamical timescales in the system are correspondingly shorter. In particular, the typical gravitational timescale for Sgr A* is $G M / c^3 \approx 20$ s, implying that the source structure can vary substantially over the several-hour duration of a single EHT observation. Consistent with this expectation, Sgr A* exhibits broadband variability on timescales of minutes to hours <cit.>. The multiwavelength properties of Sgr A* during the 2017 EHT observing campaign are described in <cit.>.
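The $\approx$20 s gravitational timescale, and its mismatch with a multihour observing track, can be checked with the same rounded constants:

```python
G = 6.674e-11          # [m^3 kg^-1 s^-2]
c = 2.998e8            # [m s^-1]
M = 4e6 * 1.989e30     # Sgr A* mass [kg]

t_g = G * M / c**3
print(f"t_g = G M / c^3 = {t_g:.1f} s")                        # ~20 s
print(f"An 8 hr track spans {8 * 3600 / t_g:.0f} gravitational times")
```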
The potential for rapid structural variability complicates the analysis of EHT observations of Sgr A*. A standard strategy for ameliorating the sparsity of VLBI data sets is Earth-rotation aperture synthesis, whereby Fourier coverage of the array is accumulated as the Earth rotates and baselines change their orientation with respect to the source <cit.>. This strategy is predicated on the source remaining static throughout the observing period, in which case the accumulated data measure a single image structure. However, Sgr A* violates this assumption on timescales as short as minutes. After several hours, the variable components of the image structure in Sgr A* are expected to be uncorrelated <cit.>.
Thus, image reconstructions from the EHT Sgr A* data focus on recovering time-averaged source structures <cit.>.
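To make the Earth-rotation aperture synthesis described above concrete, the sketch below traces the $(u,v)$ track of a single baseline as the Earth rotates, using the standard geometric relation between the equatorial baseline vector, source declination, and hour angle; the baseline vector and time span here are hypothetical.

```python
import numpy as np

def uv_track(B_xyz, dec, hour_angle, wavelength):
    """(u, v) of one baseline versus hour angle (Earth-rotation synthesis).

    B_xyz      : equatorial baseline components (Bx, By, Bz) [m]
    dec        : source declination [rad]
    hour_angle : array of hour angles [rad]
    wavelength : observing wavelength [m]
    """
    Bx, By, Bz = B_xyz
    H = np.asarray(hour_angle)
    u = (np.sin(H) * Bx + np.cos(H) * By) / wavelength
    v = (-np.sin(dec) * np.cos(H) * Bx
         + np.sin(dec) * np.sin(H) * By
         + np.cos(dec) * Bz) / wavelength
    return u, v

# A hypothetical ~8000 km baseline observing Sgr A* (dec ~ -29 deg) at 1.3 mm
# over +/- 4 hours of hour angle:
H = np.linspace(-4, 4, 97) * (np.pi / 12)   # hours -> radians (15 deg/hr)
u, v = uv_track((8.0e6, 0.0, 2.0e6), np.deg2rad(-29.0), H, 1.3e-3)
```

For a static source, every point along such a track samples the same underlying image; for Sgr A*, points separated by more than a few minutes sample different instantaneous images.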
Despite the necessity of reconstructing an average source structure, the data collected within a single multihour observation epoch are associated with many specific instances of the variable emission from Sgr A*, i.e., they represent an amalgam of observations of instantaneous images. The imaging strategy pursued for the EHT observations of Sgr A* aims to mitigate the impact of this changing source structure through the introduction of a “variability noise budget,” which absorbs the structural evolution into inflated uncertainties and thereby permits imaging algorithms to reconstruct a time-averaged image under the usual static source assumption.[Note that this procedure ignores the subhour correlations that are present in the data; the implications of these correlations are discussed in <ref>.] The image reconstruction procedure is described in detail in Paper III, and the results confirm that the data are consistent with being produced by a ring-like emission structure with a diameter of $\sim$50 $\mu$as.
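The quantitative form of the variability noise budget is developed later in this paper; as a minimal sketch of the idea only, assuming a simple fractional noise model (the 2% level below is purely illustrative, not the paper's derived budget), the thermal and variability terms are combined in quadrature:

```python
import numpy as np

def inflated_sigma(vis_amp, sigma_th, f_var=0.02):
    """Combine thermal noise with an assumed fractional variability term.

    vis_amp  : visibility amplitudes [Jy]
    sigma_th : thermal (statistical) uncertainties [Jy]
    f_var    : hypothetical fractional variability noise level
    """
    return np.sqrt(sigma_th**2 + (f_var * vis_amp)**2)

# With inflated error bars, a static-source imaging algorithm can treat the
# time-variable data as noisy samples of a single time-averaged image.
```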
For the EHT observations of M87$^*$, morphological properties of the observed ring (e.g., diameter, thickness, orientation) were quantified using both imaging and geometrical modeling analyses, and the measured ring diameter was calibrated using general relativistic magnetohydrodynamic (GRMHD) simulations to constrain the mass of the SMBH. The current paper applies a conceptually similar strategy to the analysis of the EHT Sgr A* data, though significant alterations have been made to meet the new challenges posed by Sgr A* and to tailor the analyses appropriately. In this paper, we first characterize the variability seen in the data, and we develop a framework for mitigating the impact of variability when imaging or modeling the data. We then make measurements of the ring size and other structural properties using both imaging and geometrical modeling analyses, and we derive and apply a GRMHD-based calibration to bring ring size measurements made using different techniques to a common physical scale.
This paper is organized as follows. Section 2 provides an overview of the observations and data processing. In Section 3, we quantify the variability on different spatial scales, and we outline the strategies used to mitigate its impact during imaging and modeling. In Section 4, we discuss salient data properties in the context of a ring-like emission structure, and we describe our procedure for using GRMHD simulations to calibrate different ring size measurement techniques to a common physical scale. Sections 5, 6, and 7 detail our three primary strategies for measuring the ring size and describe their application to the data. Our results are presented in Section 8, and we summarize and conclude in Section 9. This paper is the fourth in a series that describes the analysis of the 2017 EHT observations of Sgr A*. The series is summarized in <cit.>. The data processing and calibration are described in Paper II, imaging is carried out in Paper III, physical simulations are described in <cit.>, and tests of gravity are presented in <cit.>.
§ OBSERVATIONS AND DATA PRODUCTS
In this section, we briefly review the interferometric data products used for analyses in this paper (Section 2.1), and we summarize the observations (Section 2.2) and data processing (Section 2.3) that precede these analyses. A more comprehensive description of the data collection, correlation, and calibration can be found in Paper II and references therein.
§.§ VLBI data products
As a radio interferometer, the EHT is natively sensitive to the Fourier transform of the sky-plane emission structure. For a source of emission $I(\boldsymbol{x},t)$, the complex visibility $\mathcal{V}(\boldsymbol{u},t)$ is given by
\begin{equation}
\mathcal{V}(\boldsymbol{u},t) = \iint e^{- 2 \pi i \boldsymbol{u} \cdot \boldsymbol{x}} I(\boldsymbol{x},t) \text{d}^2\boldsymbol{x} ,
\end{equation}
where $t$ is time, $\boldsymbol{x} = (x,y)$ are angular coordinates on the sky, and $\boldsymbol{u} = (u,v)$ are projected baseline coordinates in units of the observing wavelength <cit.>.
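As a concrete illustration of this relation, the following minimal Python sketch samples the Fourier transform of a pixelized image at arbitrary $(u,v)$ points by direct summation; the function name and unit conventions are our own choices for illustration, not part of any EHT pipeline.

```python
import numpy as np

def sample_visibilities(image, x, y, u, v):
    """Sample V(u, v) = sum over pixels of I(x, y) exp(-2*pi*i*(u*x + v*y)).

    image : 2D array of intensities [Jy per pixel]
    x, y  : 2D arrays of angular pixel coordinates [radians]
    u, v  : 1D arrays of projected baseline coordinates [wavelengths]
    """
    vis = np.empty(len(u), dtype=complex)
    for k in range(len(u)):
        vis[k] = np.sum(image * np.exp(-2j * np.pi * (u[k] * x + v[k] * y)))
    return vis
```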
The ideal visibilities $\mathcal{V}$ are not directly observable because they are corrupted by both statistical errors and a variety of systematic effects. For the EHT, the dominant systematics are complex station-based gain corruptions. The relationship between an ideal visibility $\mathcal{V}_{ij}$ and the observed visibility $V_{ij}$ on a baseline connecting stations $i$ and $j$ is given by
\begin{equation}
V_{ij} = g_i g_j^* \mathcal{V}_{ij} + \sigma_{\text{th},ij} \equiv |V_{ij}| e^{i \phi_{ij}}, \label{eqn:VisibilityCorruptions}
\end{equation}
where $\sigma_{\text{th},ij}$ is the statistical (or “thermal”) error on the baseline, $g_i$ and $g_j$ are the station gains, and we have defined the visibility amplitude $|V_{ij}|$ and phase $\phi_{ij}$. The statistical error is well described as a zero-mean circularly symmetric complex Gaussian random variable with a variance determined (per the radiometer equation) by the station sensitivities, integration time, and frequency bandwidth <cit.>. The station gains vary in time at every site and must in general be either calibrated out or determined alongside the source structure.
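The corruption model above is straightforward to simulate. The sketch below (our own illustrative code, not an EHT tool) applies station-based gains and circularly symmetric complex Gaussian thermal noise to a set of ideal visibilities.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def corrupt_visibilities(ideal, gains, baselines, sigma_th):
    """Apply V_ij = g_i g_j^* V_ij(ideal) + thermal noise.

    ideal     : complex array of ideal visibilities, one per baseline
    gains     : dict mapping station label -> complex gain g_i
    baselines : list of (i, j) station-label pairs
    sigma_th  : array of per-baseline thermal standard deviations
    """
    n = len(baselines)
    # circularly symmetric complex Gaussian with total variance sigma_th^2
    noise = sigma_th * (rng.standard_normal(n)
                        + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
    g = np.array([gains[i] * np.conj(gains[j]) for i, j in baselines])
    return g * ideal + noise
```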
The presence of station-based systematics motivates the construction and use of “closure quantities” that are invariant to such corruptions. A closure phase $\psi_{ijk}$ <cit.> is the sum of visibility phases around a closed triangle of baselines connecting stations $i$, $j$, and $k$,
\begin{equation}
\psi_{ijk} = \phi_{ij} + \phi_{jk} + \phi_{ki} . \label{eqn:ClosurePhase}
\end{equation}
Closure phases are invariant to station-based phase corruptions, such that the measured closure phase is equal to the ideal closure phase, up to statistical errors. Similarly, a closure amplitude $A_{ijk\ell}$ <cit.> is the ratio of pairs of visibility amplitudes on a closed quadrangle of baselines connecting stations $i$, $j$, $k$, and $\ell$,
\begin{equation}
A_{ijk\ell} = \frac{|V_{ij}| |V_{k\ell}|}{|V_{ik}| |V_{j\ell}|} .
\end{equation}
In analogy with closure phases, closure amplitudes are invariant to station-based amplitude corruptions. Because closure quantities are constructed from nonlinear combinations of complex visibilities, they have correlated and non-Gaussian error statistics; a detailed discussion is provided in <cit.>.
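The gain invariance of these quantities follows directly from their construction: the gain phases cancel around a closed triangle, and the gain amplitudes cancel in the closure-amplitude ratio. A minimal sketch (assuming visibilities stored in a dict keyed by ordered station pairs, with $V_{ji} = V_{ij}^\ast$):

```python
import numpy as np

def closure_phase(vis, i, j, k):
    """psi_ijk = phi_ij + phi_jk + phi_ki; note V_ki = conj(V_ik)."""
    return (np.angle(vis[(i, j)]) + np.angle(vis[(j, k)])
            + np.angle(np.conj(vis[(i, k)])))

def closure_amplitude(vis, i, j, k, l):
    """A_ijkl = |V_ij| |V_kl| / (|V_ik| |V_jl|)."""
    return (np.abs(vis[(i, j)]) * np.abs(vis[(k, l)])
            / (np.abs(vis[(i, k)]) * np.abs(vis[(j, l)])))
```

Multiplying every $V_{ij}$ by station gains $g_i g_j^\ast$ leaves both outputs unchanged (up to noise), which is easy to verify numerically with the corruption sketch above.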
§.§ EHT observations of Sgr A*
The EHT observed Sgr A* on 2017 April 5, 6, 7, 10, and 11 with the phased Atacama Large Millimeter/submillimeter Array (ALMA) and the Atacama Pathfinder Experiment (APEX) on the Llano de Chajnantor in Chile, the Large Millimeter Telescope Alfonso Serrano (LMT) on Volcán Sierra Negra in Mexico, the James Clerk Maxwell Telescope (JCMT) and phased Submillimeter Array (SMA) on Maunakea in Hawai'i, the IRAM 30 m telescope (PV) on Pico Veleta in Spain, the Submillimeter Telescope (SMT) on Mt. Graham in Arizona, and the South Pole Telescope (SPT) in Antarctica <cit.>. Only the April 6, 7, and 11 observations included the highly sensitive ALMA station, and the April 11 light curve exhibits strong variability <cit.> that is presumably associated with an X-ray flare that occurred shortly before the start of the track <cit.>. In this paper, we thus analyze primarily the April 6 and April 7 data sets. We note that while <cit.> focuses on the April 7 data set, with the April 6 data set used for secondary validation, most of the analyses carried out in this paper instead focus on a joint data set that combines the April 6 and April 7 data.
At each site the data were recorded in two 1.875 GHz wide frequency bands, centered around sky frequencies of 227.1 GHz (low band; LO) and 229.1 GHz (high band; HI), and in each of two polarization modes. For all telescopes except ALMA and JCMT, the data were recorded in a dual circular polarization mode: right-hand circular polarization (RCP; R) and left-hand circular polarization (LCP; L). ALMA recorded using linear feeds, and the data were later converted to a circular polarization basis during the DiFX <cit.> correlation <cit.>. The JCMT observed only a single hand of circular polarization at a time, with the specific handedness (RCP or LCP) changing from day to day. All other stations observed in a standard dual-polarization mode, which allows the construction of RR, RL, LR, and LL correlation products. The analyses in this paper use only the parallel-hand correlations (i.e., RR and LL), which are averaged to form Stokes $I$ data products. Because JCMT records only a single hand at a time, we instead form “pseudo-$I$” data products for JCMT baselines, using whichever parallel-hand correlation is available as a stand-in for Stokes $I$.[The “pseudo-$I$” formation is a good approximation for Stokes $I$ when the magnitude of the Stokes $V$ contribution is small. We expect this condition to be met for the 2017 EHT observations of Sgr A* <cit.>, and the impact of residual Stokes $V$ is captured by the systematic error budget <cit.>.]
§.§ Data reduction
After correlation, residual phase and bandpass errors are corrected with two independent processing pipelines: EHT-HOPS <cit.> producing “HOPS” <cit.> data and rPICARD <cit.> producing “CASA” <cit.> data. Relative phase gains between RCP and LCP have been corrected based on the assumption of zero circular polarization on baselines between ALMA and other EHT stations. Absolute flux density scales are based on a priori measurements of each station's sensitivity, resulting in a ${\sim}10$% typical uncertainty in the amplitude gains <cit.>. The amplitude gains of the colocated ALMA/APEX and SMA/JCMT stations have been further refined via time-variable network calibration using a light curve of the compact flux measured by ALMA and SMA <cit.>. For the remaining stations, gross amplitude gain errors have been corrected by a transfer of gain solutions from the J1924-2914 and NRAO 530 calibrator sources as described in <cit.>.
Following the completion of the above calibration pipelines, additional preprocessing of the data has been carried out as described in <cit.>, including calibration of the LMT and JCMT station gains and normalization of the visibility amplitudes by the total light curve. The characterization of residual calibration effects (e.g., polarization leakage) into a systematic error budget, as well as a more comprehensive description of the overall EHT data reduction, is provided in <cit.>.
§ VARIABILITY EXTRACTION AND MITIGATION
The statistical errors quoted in <cit.> and summarized in the preceding section do not account for three additional sources of uncertainty that can otherwise substantially bias any analysis efforts. First, unaccounted-for nonclosing (i.e., baseline-based) systematic errors are present in the data at a level on the order of $\sim$1% of the visibility amplitude, which is often larger than the formal statistical errors <cit.>. Second, significant refractive scattering in the interstellar medium produces additional substructure within the image that is not present in the intrinsic emission map <cit.>. Third, there is intraday variability in the source itself. Source variability is theoretically expected to arise on a broad range of timescales, and it is explicitly seen in GRMHD simulations on timescales as short as minutes <cit.>. Such variability was also observed in the light curve of Sgr A* during the 2017 EHT campaign on timescales from 1 minute to several hours <cit.>.
In this section, we summarize the theoretical expectations for and characteristics of the variability based on GRMHD simulations, present an estimate for the degree of structural variability in Sgr A* derived directly from the visibility amplitude data, and describe the strategies pursued here and in <cit.> to mitigate the impact of the three components of additional error listed above.
§.§ Expectations from theory
In low-luminosity SMBH systems such as Sgr A*, we expect the emission to originate from the immediate vicinity of the black hole, i.e., on scales comparable to the event horizon size. Here, all characteristic speeds of the hot relativistic gas approach the speed of light. The timescales associated with these processes are therefore set by the gravitational timescale, $G M/c^3$, which is $\sim$20 s for Sgr A*. This timescale is $\sim$3 orders of magnitude shorter than the nightly observations carried out by the EHT, so a single observation contains many realizations of the underlying source variability.
GRMHD simulations can model the dynamical processes in Sgr A* and, using ray-tracing and radiative transfer, provide a theoretical expectation for the observed emission. <cit.> provides a library of GRMHD simulations and associated movies, which have been scaled to the conditions during the EHT 2017 observations (e.g., the average total 230 GHz flux is set to the EHT measurement). We use the variability characteristics of these simulations as our expectation for the variability seen by the EHT.
GRMHD simulations are universally described by a “red-red” power spectrum, with the largest fluctuations in the emission occurring on the longest timescales and the largest spatial scales <cit.>. Spatially, the largest scale for variability is limited to the size of the emitting region, which for an observing frequency of 230 GHz is typically several $G M/c^2$ and which for the EHT data is constrained to be $\lesssim$87 $\mu$as. Temporally, the simulations exhibit a red power spectrum that flattens on timescales ${\gtrsim}1000\ GM/c^3$. Observations of the total flux variability in Sgr A* corroborate this expectation, finding a red-noise spectrum extending to timescales of several hours and flattening on longer timescales <cit.>.
We can, without loss of generality, express the time-variable image structure $I$ in terms of some static mean image $I_{\text{avg}}$ and a zero-mean time-variable component $\delta I$ that captures all of the variation,
\begin{equation}
I(\boldsymbol{x},t) = I_{\text{avg}}(\boldsymbol{x}) + \delta I(\boldsymbol{x},t) .
\end{equation}
The linearity of the Fourier transform ensures that an analogous decomposition holds for $\mathcal{V}$, which is thus simply the sum of an analogous $\mathcal{V}_{\text{avg}}$ and $\delta\mathcal{V}$. The variation $\delta\mathcal{V}$ represents the component of the data we wish to mitigate.
The EHT stations ALMA and SMA are themselves interferometric arrays capable of separating out extended structure <cit.> from the light curve,
\begin{equation}
L(t)=\mathcal{V}(\boldsymbol{0},t)=\iint I(\boldsymbol{x},t) d^2 \boldsymbol{x} ,
\end{equation}
on the largest spatial scales, predicted by GRMHD simulations to be the most variable.
With this motivation, the light-curve-normalized image is defined to be
\begin{equation}
\hat{I}(\boldsymbol{x},t) \equiv \frac{I(\boldsymbol{x},t)}{L(t)}
\end{equation}
with $\hat{I}_{\text{avg}}$ and $\delta \hat{I}$ similarly defined; here, the “hat” diacritic denotes light-curve normalization.
From GRMHD simulations, the expected noise is well approximated by a broken power law,
\begin{equation}
\sigma_{\text{var}}^2
\equiv
\langle | \delta \hat{\mathcal{V}} |^2 \rangle
\approx
\frac{a^2 \left( |\boldsymbol{u}| / u_0 \right)^c}{1 + \left( |\boldsymbol{u}| / u_0 \right)^{b+c}} ,
\label{eq:noise-model}
\end{equation}
along any radial direction <cit.>. This broken power law is described by four parameters: a break at $u_0$, an amplitude $a$ representing the amount of noise at the break location, and long- and short-baseline power-law indices $b$ and $c$, respectively.
Typically, we expect that $c\gtrsim2$, due to the compact nature of the source.
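For concreteness, the broken power law can be evaluated as in the short sketch below (illustrative code only; parameter names follow the equation above).

```python
import numpy as np

def sigma_var_sq(u_mag, a, u0, b, c):
    """Broken power-law variability noise:
    a^2 (|u|/u0)^c / (1 + (|u|/u0)^(b+c)).

    For |u| << u0 this rises as (|u|/u0)^c; for |u| >> u0 it falls
    as (|u|/u0)^(-b), with amplitude set by a near the break at u0.
    """
    x = np.asarray(u_mag) / u0
    return a**2 * x**c / (1.0 + x**(b + c))
```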
In <ref>, red lines show $\sigma_{\text{var}}^2$ measured for an example GRMHD simulation about average images that have been constructed on observationally relevant timescales. The variability has been averaged in azimuth and across different black hole spin orientations. As the timescale over which the average image is constructed increases, the location of the break $u_0$ decreases and the amount of power at the break
increases.[This behavior is generic across a large number of simulations of Sgr A* and is explored in detail in <cit.>.] This behavior can intuitively be understood as the GRMHD simulations changing less over shorter timescales. For comparison, we show the thermal, systematic, and refractive scattering noise. For timescales longer than $\sim$10 min, the variability noise dominates on EHT VLBI baselines.
Figure: The amount of excess variance in the visibility amplitudes expected for a GRMHD simulation (red lines, $\sigma_\text{var}^2$), after subtraction of the average image and normalization of the flux density by the light curve. Shown also are the typical thermal noise (black dashed line) and a 1% fractional systematic noise (green band) proportional to the mean image visibility amplitudes. The expected degree of refractive scattering is shown by the purple bands, with purple lines evaluated for a Gaussian source at the projected locations of EHT data <cit.>. The variability is shown about a mean image constructed on different observationally relevant timescales. The fractional systematic and variability noise have been averaged over azimuth and over the position angle of the diffractive screen.
§.§ Intraday variability in the Sgr A* data
Figure: Locations and detrended visibility amplitudes for crossing and following tracks from the Sgr A* observations on April 5, 6, 7, and 10. Left: the locations of all data points in the $(u,v)$-plane. Individual baselines correspond to the curving tracks seen in this plot, and we have highlighted the crossing and following tracks using colored points. For reference, circles of constant baseline length are shown by the dashed gray lines. The central time stamps for the highlighted tracks are labeled in the corresponding colors. Right: linearly detrended and light-curve-normalized visibility amplitudes for the crossing and following baselines highlighted in the left panel, shown in corresponding point colors. For crossing tracks (bottom panels), the points within $1~{\rm G}\lambda$ of the crossing point are shown. Error bars show the thermal errors. For following tracks (top panels) all points for which the baselines overlap are shown. All highlighted points are employed in the linear detrending for each panel. The associated estimates of the mean and standard deviation are shown by the gray dashed line and horizontal band in each panel. The April 7 visibility amplitudes are indicated by open circles.
The intraday variability expected from theoretical considerations can be observed directly in the data.
<ref> shows the combined baseline coverage for the EHT's 2017 campaign, including the observations on April 5, 6, 7, and 10.
The upper limit on the source size of ${\sim}87$ $\mu$as implies that the complex visibilities will be correlated in regions of the $(u,v)$-plane smaller than ${\sim}2$ G$\lambda$.
In practice, the visibility amplitudes exhibit variations on scales smaller than this and otherwise appear strongly correlated on scales of 1 G$\lambda$ (see <cit.>, Figure 3).
Therefore, among the baseline tracks in <ref> there are four regions where the $(u,v)$-coverage is redundant, i.e., multiple baselines pass within 1 G$\lambda$ of the same $(u,v)$-position.
We separate the redundant baseline combinations into “crossing tracks,” in which two baseline tracks intersect at a single $(u,v)$-point, and “following tracks,” in which two baselines follow a nearly identical extended track in the $(u,v)$-plane. Both sets of redundant baselines provide an opportunity to directly probe the degree of intraday variability in the visibilities at specific locations in the $(u,v)$-plane.
Prior to making comparisons, we apply the data preprocessing steps outlined in <ref> to mitigate unphysical sources of variability. To avoid addressing the unknown atmospheric phase delays, we focus exclusively on visibility amplitudes.
Because source structure will produce additional variations in the visibility amplitudes that are hard to visualize in projection and obscure the relative degree of variability, we detrend the visibility amplitudes with a linear model.
The crossing and following tracks discussed below are shown in the top and bottom subpanels of <ref>, respectively.
Chile–PV vs. Chile–SPT: The first crossing track we consider contains baselines between the Chile stations (ALMA, APEX) and PV and SPT, which both cross near $(u,v) = (4, 3.5)$ G$\lambda$ at times separated by 6.2 hr. The concurrent ALMA and APEX baselines are consistent within the reported statistical errors, and thus there is no evidence for unaddressed baseline-specific systematic errors at a dominant level. The normalized visibility amplitudes on the Chile–PV and Chile–SPT baselines individually vary smoothly with time. Nevertheless, they differ significantly at the crossing point, and this difference is consistent in magnitude with the variation found across days (indicated by the gray band in the relevant panel of <ref>).
Chile–SMT vs. Chile–SPT: The second crossing track we consider contains baselines between the Chile stations (ALMA, APEX) and SMT and SPT, which both cross near $(u,v) = (3, 4.5)$ G$\lambda$ at times separated by 5.2 hr. Again, we find excellent agreement between ALMA and APEX baselines, individually smooth variations on the Chile–SMT and Chile–SPT baselines, and significant differences in the visibility amplitudes between those baselines.
SMA–SPT vs. LMT–SPT: The first following track we consider contains baselines between the SPT, which is located at the South Pole, and SMA and LMT, which have similar latitudes. Because the baseline tracks are coincident across a large range of locations in the $(u,v)$-plane, this following track permits many direct comparisons at a baseline length of 8 G$\lambda$ at times separated by 3.4 hr. As with both crossing tracks, significant differences exist between the two sets of baselines, consistent with the range across multiple days.
SMT–SPT vs. PV–SPT: The final following track we consider again involves the SPT, and now the SMT and PV, which also have similar latitudes. This is the longest set of baselines that we consider, with a length of roughly 8.5 G$\lambda$ and covering similar regions in the $(u,v)$-plane at times separated by 6.7 hr. Again, significant variations are exhibited, consistent with those across days.
In summary, intraday variability is observed on multiple baselines with lengths ranging from 5 to 8.5 G$\lambda$ and on timescales as short as 3.4 hr. In all cases, this variability is broadly consistent with that observed on interday timescales.
Furthermore, the variability behavior is consistent with theoretical expectations from GRMHD simulations and empirical expectations from the light curve, both of which imply that the variable elements of the emission should be uncorrelated beyond a timescale of a few hours <cit.>. Any average image of Sgr A* reconstructed from data spanning a time range longer than several hours captures the long-timescale asymptotic source structure; the intrinsic image averaged over a single day or over multiple days is thus expected to exhibit similar structure.
§.§ Model-agnostic variability quantification
To quantify the variability observed in the EHT data, we make use of the procedure described in <cit.>. This procedure provides an estimate of the excess variability – i.e., the visibility amplitude variance in excess of that caused by known sources, such as average source structure, statistical and systematic uncertainties, and scattering – as a function of baseline length. We apply the same data preparation steps summarized in <ref> and described in <cit.>, combining the visibility amplitudes measured on April 5, 6, 7, and 10 in both observing bands. All data points are weighted equally.
Figure: Illustration of detrended visibility amplitudes and the associated variance estimate. Top: scan-averaged tracks in $(u,v)$-coordinates with a circular region of diameter 1 G$\lambda$ superposed (red disk), centered at $(-2.1, 4.7)$ G$\lambda$. Scans within the region are dark red, while those outside are blue. Middle: light-curve-normalized visibility amplitudes as a function of $u$, projected in $v$ (limited to points within the top panel). Bottom: light-curve-normalized visibility amplitudes after detrending with a linear model defined by the scans within the 1 G$\lambda$ circular region. The estimated mean and standard deviation are shown by the orange dashed line and horizontal band.
Figure: Model-agnostic estimate of the azimuthally averaged excess variance of the visibility amplitudes, after subtracting the variance from the reported statistical errors, as a function of baseline length. Nonparametric estimates (filled and open black circles) are obtained across April 5, 6, 7, and 10 and using both high- and low-band data. The filled black circles indicate significant detections of source variability, while the open black circles indicate variance measurements that are dominated by the other sources of uncertainty; only the former are used in the parametric fitting. Uncertainties associated with the thermal errors, uncertain station gains, and polarization leakage are indicated by the error bars. Azimuthally averaged thermal errors are shown by the gray triangles and provide an approximate lower limit on the range of accurate variance estimates. For comparison, the magnitudes of the variance induced by refractive scattering are shown in purple along the minor (top) and major (bottom) axes of the diffractive scattering kernel <cit.>; the variance along individual tracks on April 7, as well as a $\sim$10 mJy floor (assuming a fixed 2.5 Jy total flux), is shown by the solid and dashed purple lines, respectively. The orange band indicates the 95th-percentile range of broken power-law fits (of the form in <ref>) to the variance estimates shown by the filled points, with a handful of specific examples plotted explicitly.
The procedure is illustrated in <ref>. We again make use of the strong correlations induced by the finite source size, and for every location in the $(u,v)$-plane we consider only those data points falling within a circular region of diameter 1 G$\lambda$ centered at that point (red circular region in the top panel of <ref>). Within each such region containing at least three data points, we linearly detrend the light-curve-normalized visibility amplitudes with respect to $u$ and $v$ to remove variations due to physical structure (bottom panel of <ref>), and we compute the variance of the residuals. This variance is then debiased to remove the contributions from the reported statistical errors, as described in <cit.>. Finally, the variances from all regions having a common baseline length are averaged to produce an azimuthally averaged set of variances. The uncertainty in the variance estimates is obtained via Monte Carlo sampling of the unknown gains, leakage terms, and statistical errors.
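The core of this estimator (local selection, planar detrending, and debiasing) can be sketched as follows. This is a simplified illustration under our own naming conventions; the published procedure additionally Monte Carlo samples the gain and leakage uncertainties.

```python
import numpy as np

def local_excess_variance(u, v, amp, sigma_th, center, diameter=1.0):
    """Excess variance of light-curve-normalized amplitudes within a
    circular (u, v) region, after removing a linear trend in (u, v)
    and debiasing by the mean thermal variance.

    u, v, amp, sigma_th : 1D arrays (baseline coordinates in Glambda,
                          normalized amplitudes, thermal errors)
    center              : (u0, v0) of the circular region [Glambda]
    """
    sel = np.hypot(u - center[0], v - center[1]) < diameter / 2.0
    if sel.sum() < 3:
        return None  # a planar fit needs at least three points
    A = np.column_stack([np.ones(sel.sum()), u[sel], v[sel]])
    coeff, *_ = np.linalg.lstsq(A, amp[sel], rcond=None)
    resid = amp[sel] - A @ coeff
    return np.var(resid) - np.mean(sigma_th[sel] ** 2)
```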
<ref> shows the results of applying this procedure to the data, with the normalized visibility amplitude variance measurements given by the black points. For baselines shorter than 2.5 G$\lambda$, the LMT calibration procedure precludes an accurate estimate of the variance, and thus these baselines have been excluded. For baselines between ${\sim}2.5$ and 6 G$\lambda$ in length, our empirical estimates of the noise exceed the typical contributions from statistical errors and refractive scattering, indicating the presence of an additional source of structural variability. The degree of inferred variability is consistent with that seen in prior millimeter-VLBI data sets, which is discussed further in <ref>. For baselines longer than 6 G$\lambda$, our measurements are consistent with the degree of variability expected from the statistical uncertainties in the data; we thus do not directly constrain the source variability on these long baselines.
To characterize the variability behavior within the $(u,v)$-plane, we fit a broken power law of the form in <ref> to the normalized variance measurements. As indicated by the filled black circles in <ref>, significant measurements exist only in the range of baselines with lengths $\sim$2.5–6 G$\lambda$; on baselines longer than 6 G$\lambda$, we are unable to distinguish the variability from its associated measurement uncertainties. We thus perform the broken power-law fit only to the $\sim$2.5–6 G$\lambda$ range of baselines, where we have significant measurements, and we find no evidence for a break in the power law in this region. As a result, only an upper limit can be placed on $u_0$, and we are not able to constrain the short-baseline power-law index, $c$.
The range of permitted broken power-law fits is illustrated in <ref> by the orange shaded region, with several samples from the posterior distribution explicitly plotted as orange lines.
Figure: Joint posteriors of the constrained parameters after fitting a broken power law to the model-agnostic normalized variance estimates. Because the amplitude is well constrained within the range of baseline lengths for which good estimates of the variability exist, we set the normalization at $|\boldsymbol{u}|=4~{\rm G}\lambda$, denoted as $a_4$. Contours show the enclosed 50th, 90th, and 99th percentiles. The purple bands indicate the ranges used as priors during the full-track modeling, associated with the interquartile ranges.
Because the location of the broken power-law break is poorly constrained, the parameters $u_0$ and $a$ (describing the location of the break and the amplitude of the power law at the break, respectively) are strongly correlated and highly uncertain. However, it is clear from the orange shaded region in <ref> that there is only a narrow range of variances permitted over the 2.5–6 G$\lambda$ range of baselines over which the data are constraining. We thus choose to characterize the amplitude of the excess variability noise at $|\boldsymbol{u}|=4\,{\rm G}\lambda$, which we denote as $a_4$. Joint posteriors for $a_4$, the break location $u_0$, and the long-baseline power-law index $b$ are shown in <ref>. These constraints are used to inform the prior distributions for the full-track geometric modeling described in <ref>; the associated prior ranges on each parameter are indicated by the purple shaded regions in <ref>.
§.§ Description of variability mitigation approaches
Having established the existence of structural variability and quantified its magnitude in the data,
we now turn to strategies for mitigating its impact on downstream analyses.
We employ the light-curve-normalized visibility data, which eliminates large-scale variations and correlations by construction.
In principle, there are four methods that we might pursue to address the remaining structural variability:
* Analyze time-averaged data products.
* Employ explicitly time-variable models.
* Analyze short time segments of the data and combine the results afterward to characterize the average source structure.
* Simultaneously reconstruct the average source structure and a statistical characterization of the structural variability.
The first of these options is complicated substantially by the uncertain visibility phases, which limit our ability to coherently average the data on timescales longer than several minutes. The second option can be employed either when a descriptive low-dimensional model for the source structure can be constructed <cit.>, or when there is sufficient $(u,v)$-coverage for nonparametric dynamical imaging algorithms to be successful <cit.>. The latter approach is explored in the dynamical imaging analyses described in <cit.>, ultimately demonstrating that the $(u,v)$-coverage is insufficient to permit unambiguous reconstructions of the variable source structure.
We dub the third option “snapshot” modeling, whereby a simple geometric model of the source structure is fit to segments of the data that are short enough in duration (${\lesssim}3$ minutes) for the impact of structural variability to be subdominant to other sources of visibility uncertainty (e.g., refractive scattering; see <ref>). Though the data sparsity is exacerbated by restricting the reconstructions to only a single snapshot at a time, the model itself is also correspondingly restricted in its parameterization of the source structure. The results of the fits to each individual snapshot are then combined across the entire data set, effectively averaging over the source variability. Details of our snapshot modeling analyses as applied to Sgr A* are presented in <ref>.
The fourth option we refer to as “full-track” modeling, which aims to simultaneously reconstruct both the average source structure and a set of parameters describing the contribution of the structural variability to the visibility data variances <cit.>. In contrast to the snapshot modeling, full-track modeling considers the entire data set at once and uses a parameterized “variability noise” model to appropriately modify the data uncertainties as part of the fitting procedure. In this way, the full-track modeling retains access to sufficient $(u,v)$-coverage to permit fitting a nonparametric image model to the data <cit.>, though in <ref> we also pursue full-track geometric modeling to provide a cross-comparison with the results from the snapshot geometric modeling. Our parameterization of the variability noise follows <ref>, with the amplitude specified at a baseline length of 4 G$\lambda$ as described in <ref>. A detailed description of our full-track modeling approach as applied to Sgr A* is presented in <ref>.
Both the snapshot and full-track modeling approaches focus on describing the average source structure and treating the structural variability in a statistical manner. This goal is formally mismatched with what the EHT data measure for a single day, which is instead a collection of complex visibilities that sample different instantaneous realizations of the intrinsic source structure. The nature of this mismatch impacts the full-track analyses significantly.
The variability mitigation scheme employed by full-track modeling presumes that the variability may be modeled as excess uncorrelated fluctuations in the complex visibility data. This assumption is well justified on timescales exceeding a few hours, but significant correlations between visibilities exist on shorter timescales. Within a single day, subhour correlations that are localized in the $(u,v)$-plane can induce significant biases in the source structure reconstructed from the sparse EHT $(u,v)$-coverage of Sgr A*. The noise model is thus fundamentally misspecified for EHT data, with the level of misspecification increasing as shorter-in-time segments of data are analyzed; <ref> describes pathological behavior that can arise when analyzing EHT data from only a single day. While prominent artifacts associated with these subhour correlations are present in the April 6 reconstructions shown in <ref> and <ref>, we note that the underlying origin of these artifacts is no less present on April 7.
The impact of unmodeled correlations on the reconstructed source structure can be ameliorated by combining multiple days, which provides visibility samples associated with independent realizations of the source structure. This additional sampling rapidly brings the statistical properties of the data into better agreement with the assumptions underpinning the full-track analyses; even the combination of just 2 days is often sufficient to mitigate the subhour correlations in analysis experiments that make use of GRMHD simulation data. For this reason, we combine both the April 6 and April 7 data sets during the analysis of the Sgr A* data. For comparison, <ref> presents the results of equivalent analyses applied to the April 6 and April 7 data sets individually.
§ RING CHARACTERIZATION AND CALIBRATION
We have a strong prior expectation – from both prior millimeter-VLBI observations of a different black hole and theoretical simulations of the accretion flow around Sgr A* itself – that the image of Sgr A* ought to contain a ring of emission, and we thus aim to determine the characteristics of the ring-like image structure that best describes the data. In this section, we first review the evidence from the data for a ring-like image structure, and we then present a geometric model for fitting parameters of interest and describe our procedure for bringing ring size measurements made using different techniques to a common physical scale.
§.§ Evidence for a ring
In reconstructing images of Sgr A*, <cit.> explores a large space of imaging algorithms and associated assumptions. The resulting “top sets” of images contain primarily “ring-like” image structures, though a small fraction of the images are morphologically ambiguous. These “nonring” images still nominally provide a reasonable fit to the data and so are not ruled out from the results.
We can quantify the preference for a ring-like image structure by fitting the data with a set of simple geometrical models. Employing the snapshot geometric modeling technique detailed in <ref>, we compare the Bayesian evidence, $\mathcal{Z}$, between these different geometric models.
The value of $\mathcal{Z}$ serves as a model comparison metric that naturally balances improvements in fit quality against increases in model complexity, with larger values of $\mathcal{Z}$ indicating preferred models <cit.>. <ref> shows the results of a survey over simple geometric models with varying complexity, captured here by the number of parameters required to specify the model. At all levels of complexity, ring-like models outperform the other tested models. This disparity is most stark for the simplest models but continues to hold as the models increase in complexity.
Figure: Comparison of the relative Bayesian evidence, $\Delta \ln \mathcal{Z}$, for a series of increasingly complex geometric models fitted using closure amplitudes and closure phases within the snapshot modeling formalism described in detail in <ref>. The fits have been carried out on the HOPS April 7 data, and each point in the figure is colored according to the number of free parameters in the model; the number of free parameters in each model is also indicated in the horizontal axis labels. The panel on the right shows a zoom-in to the highest-evidence region of the left panel. Ring-like models are indicated with circles, and nonring models are indicated with crosses. All Bayesian evidence values are quoted relative to the highest value attained across all models. The parameter counts reflect the fact that all models are normalized to have unit total flux density and are centered at the image origin. The crescent model consists of a smaller disk subtracted from an offset larger disk. In the crescent+floor model, the smaller disk may have a nonzero flux density. The m-ring and mG-ring models are defined in <ref>. The maximum value of $\ln\mathcal{Z}$ among the models explored in this figure is obtained for an $m=2$ mG-ring model, in agreement with the analysis described in <ref>.
The remainder of this paper proceeds with analyses that presuppose a ring-like emission structure for Sgr A*.
§.§ Salient features in the context of a ring model
The overall structure of the visibility amplitudes (see the left panel of <ref>) exhibits at least three distinct regions:
* A “short-baseline” region containing baselines shorter than $\sim$2 G$\lambda$. The effects of data calibration and preprocessing – particularly the light-curve normalization and LMT calibration procedures – are evident in the unit total flux density and the Gaussian structure of the visibility amplitudes in this region.
* An “intermediate-baseline” region containing baselines between $\sim$2 and 6 G$\lambda$. The visibility amplitudes in this region exhibit a general rise and then fall with increasing baseline length, peaking at a flux density of ${\sim}20$% of the total at a baseline length of $\sim$4 G$\lambda$.
* A “long-baseline” region containing baselines with lengths in excess of $\sim$6 G$\lambda$. The visibility amplitudes in this region generally rise with increasing baseline length from a deep minimum near $\sim$6.5 G$\lambda$, approximately flattening out at longer baselines to a level that is $\sim$3%–10% of the total flux density.
The visibility amplitudes exhibit indications of asymmetric source structure, particularly on baselines with lengths of $\sim$3 G$\lambda$ that fall near the first minimum. Here, the baselines between the SMT and Hawai'i stations (oriented approximately in the east–west direction) have systematically higher correlated flux densities than the similar-length baselines between the LMT and Chile stations (oriented approximately in the north–south direction). The implication for the source morphology is that we would expect to see more symmetric structure in the north–south than in the east–west direction. Detailed geometric modeling analyses that are able to capture this asymmetry are described in <ref> and <ref>; here, we consider only a simple azimuthally symmetric toy model that captures some salient features of interest.
We attempt to understand the visibility behavior in light of expectations for a ring-like emitting structure. Specifically, we consider a geometric construction whereby an infinitesimally thin circular ring bordering an inner disk of emission is convolved with a Gaussian blurring kernel. The visibility function $V$ produced by such an emission structure is given by
\begin{equation}
\begin{aligned}
V & = F_0 V_{\text{Gauss}} \left[ f_d V_{\text{disk}} + (1 - f_d) V_{\text{ring}} \right] \\
& = F_0 \exp\left( - \frac{w^2 \xi^2}{4 \ln(2)} \right) \left[ \frac{2 f_d}{\xi} J_1(\xi) + (1 - f_d) J_0(\xi) \right] ,
\end{aligned}
\end{equation}
where $F_0$ is the total flux in the image, $f_d$ is the fraction of that flux that is contained in the disk component, $w = W / d$ is a fractional ring width, $W$ is the FWHM of the Gaussian convolving kernel, $d$ is the diameter of the ring and disk components, $\xi \equiv \pi |\boldsymbol{u}| d$ is a normalized radial visibility-domain coordinate, and $J_n(\xi)$ is a Bessel function of the first kind of order $n$.
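Under these definitions, the toy-model visibility is straightforward to evaluate numerically, e.g., with the scipy Bessel functions. The sketch below is illustrative only (function name and argument conventions are our own) and is valid for $\xi > 0$.

```python
import numpy as np
from scipy.special import j0, j1  # Bessel functions of the first kind

def toy_ring_disk_visibility(u_mag, d, w, f_d, F0=1.0):
    """V(|u|) for a thin ring bordering an inner disk, blurred by a
    Gaussian of fractional FWHM w = W / d (see the equation above).

    u_mag : baseline length |u| [wavelengths]; must be > 0
    d     : ring/disk diameter [radians]
    f_d   : fraction of the flux contained in the disk component
    """
    xi = np.pi * np.asarray(u_mag) * d  # normalized visibility coordinate
    envelope = F0 * np.exp(-(w * xi) ** 2 / (4.0 * np.log(2.0)))
    return envelope * (2.0 * f_d * j1(xi) / xi + (1.0 - f_d) * j0(xi))
```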
Figure: An illustration of how the observed visibilities constrain the source structure, within the context of a simple geometric model (see <ref>). The left panel shows the debiased visibility amplitudes from April 7 (black points) after normalizing by the light curve, averaging in time across scans, and deconvolving the diffractive scattering kernel. Only data points with a signal-to-noise ratio greater than 3 are shown. The ranges of baseline lengths corresponding to the locations of the first and second visibility minima are highlighted in cyan and purple, respectively. The locations of these minima constrain the diameter of an emitting ring, as shown in the top right panel. Similarly, the ranges of normalized amplitudes corresponding to the first and second visibility maxima are highlighted in red and orange, respectively. Both the absolute fractional amplitudes and their relative values (shown in green) constrain a combination of the fractional ring width and the fractional central flux. The “best-fit” model visibility amplitudes are shown by the gray curve in the left panel, with the corresponding parameters marked by the gray line in the top right panel and the gray cross in the bottom right panel. The image structure corresponding to this model is shown in the upper right corner of the left panel, with a 20 $\mu$as scale bar shown in white.
The three regions of data identified above are separated by apparent minima in the visibility amplitudes, and they can be approximately characterized by the baseline locations of those minima and the peak flux density levels achieved at the visibility maxima between them. <ref> illustrates how this characterization manifests as constraints on the defining parameters of the geometric toy model. The cyan and purple shaded regions in the left panel indicate the approximate ranges of baseline lengths corresponding to the locations of the first and second visibility minima, respectively. The locations of these minima constrain the diameter of the emitting structure, as shown in the top right panel of <ref>. To be consistent with both a first minimum falling between $\sim$2.5 and 3.5 G$\lambda$ and a second minimum falling between $\sim$6 and 7 G$\lambda$, the emitting region must be between $\sim$50 and 60 $\mu$as across. The amplitudes of two visibility maxima – one falling between the first and second visibility minima, and the second following the second minimum – constrain a combination of the fractional disk flux $f_d$ and the fractional ring width $w$. The bottom right panel of <ref> shows the constraints from the first and second visibility maxima in red and orange, respectively, and from the ratio of the two in green.
Taken together, even these few, simple, and only modestly constrained visibility features result in a rather narrow permitted range of model parameter values for $d$, $w$, and $f_d$; an example of a “best-fit” model from within the permitted range is shown by the gray curve in the left panel of <ref>. However, we stress that the above constraints only strictly hold within the context of the specific toy model used to derive them. More general and robust constraints on the emission structure require a model that can accommodate more than just the gross features; such models are produced as part of the imaging ( and <ref>) and geometric modeling analyses (see <ref> and <ref>) carried out in this paper series.
§.§ Geometric ring model specification
The ring-like images reconstructed in <cit.> are not azimuthally symmetric, but instead show pronounced azimuthal brightness variations that we would like to capture in our geometric modeling analyses. In this section, we specify the “mG-ring” model that we use in <ref> and <ref> to quantify the morphological properties of the observed emission.
§.§.§ Image-domain representation of the mG-ring model
Adopting the construction developed by <cit.>, we can model an infinitesimally thin circular ring with azimuthal brightness variations using a sum over angular Fourier modes indexed by integer $k$,
\begin{align}
I_{\text{ring}}(r,\phi)=\frac{F_{\rm ring}}{\pi d} \, \delta\!\left(r-\frac{d}{2}\right) \sum_{k=-m}^m\beta_k e^{i k \phi}. \label{eqn:RingImage}
\end{align}
Here $r$ is the image radial coordinate, $\phi$ is the azimuthal coordinate (east of north), $d$ is the ring diameter, $\{ \beta_k \}$ are the set of (dimensionless) complex azimuthal mode coefficients, and $m$ sets the order of the expansion. Because the image is real, $\beta_{-k} = \beta_k^\ast$; we enforce $\beta_0 \equiv 1$ so that $F_{\rm ring}$ sets the total flux density of the ring. Given that the images from <cit.> show a ring of radius ${\sim}25$ $\mu$as and the diffraction-limited EHT resolution is ${\sim}20$ $\mu$as, we expect the data to primarily constrain ring modes with $m \lesssim \pi \left(25/20\right) \approx 4$. We refer to this asymmetric ring as an “m-ring” of order $m$.
For the purposes of constraining additional image structures, we augment this m-ring in two ways. First, we convolve the m-ring with a circular Gaussian kernel of FWHM $W$,
\begin{equation}
I_{\text{ring}}(r,\phi;W) = I_{\text{ring}}(r,\phi) \ast \left[\frac{4 \ln(2)}{\pi W^2} \exp\!\left( - \frac{4 \ln(2) r^2}{W^2} \right)\right]. \label{eqn:ModelImage}
\end{equation}
Second, we add a circular Gaussian component that is concentric with the ring, which serves to provide a nonzero brightness floor interior to the ring. The Gaussian component has a total flux density of $F_{\text{Gauss}}$ and an FWHM of $W_{\text{Gauss}}$,
\begin{align}
I_{\text{Gauss}}(r,\phi) = \frac{4 \ln(2) F_{\text{Gauss}}}{\pi W_{\text{Gauss}}^2} \exp\!\left( - \frac{4 \ln(2) r^2}{W_{\text{Gauss}}^2} \right). \label{eqn:GaussImage}
\end{align}
We refer to the resulting composite model $I(r,\phi)$, where
\begin{equation}
I(r,\phi) = I_{\text{ring}}(r,\phi; W) + I_{\text{Gauss}}(r,\phi) ,
\end{equation}
as an “mG-ring.” An example is shown in <ref>.
Figure: Example mG-ring model. The model consists of an m-ring of diameter $d$, with azimuthal variations determined by the Fourier coefficients $\beta_k$ and with thickness determined by convolution with a circular Gaussian of FWHM $W$. The model also includes a circular Gaussian of FWHM $W_{\text{Gauss}}$ and fractional relative flux density $f_{\rm Gauss}$ (<ref>). The position angle $\eta$ of the ring is determined by the phase of the $m=1$ mode (<ref>), while the magnitude of its asymmetry $A$ is determined by the amplitude of the $m=1$ mode (<ref>). The red curve in the middle panel shows a radial profile in the horizontal direction; the orange curve in the bottom panel shows the azimuthal profile and its decomposition into its three modes. The plotted model has $w = 0.3$, $f_{\rm Gauss} = 0.2$, $W_{\text{Gauss}}/d=0.8$, $\beta_1 = -0.3i$, and $\beta_2 = 0.1+0.1i$.
An mG-ring of order $m$ has $5 + 2 m$ model parameters: the flux density in the ring ($F_{\text{ring}}$), the diameter of the ring ($d$), the flux density in the central Gaussian ($F_{\text{Gauss}}$), the FWHM of the central Gaussian ($W_{\text{Gauss}}$), the FWHM of the ring convolving kernel ($W$), and two parameters for each complex Fourier coefficient $\beta_k$ with $1\leq k \leq m$.
§.§.§ Visibility-domain representation of the mG-ring model
To aid in efficient parameter space exploration, the mG-ring model is intentionally constructed using components and transformations that permit analytic Fourier transforms. The Fourier transform of the m-ring image (<ref>) is given by
\begin{equation}
V_{\text{ring}}(|\boldsymbol{u}|,\phi_u) = F_{\text{ring}} \sum_{k=-m}^m \beta_k J_k(\pi |\boldsymbol{u}| d) e^{ik (\phi_u - \pi/2)} ,
\end{equation}
where $(|\boldsymbol{u}|,\phi_u)$ are polar coordinates in the Fourier domain.
The convolution with a circular Gaussian in the image plane corresponds to multiplication of this function by the Fourier transform of the convolving kernel,
\begin{equation}
V_{\text{ring}}(|\boldsymbol{u}|,\phi_u;W) = \exp\left( -\frac{\pi^2 W^2 |\boldsymbol{u}|^2}{4 \ln(2)} \right) V_{\text{ring}}(|\boldsymbol{u}|,\phi_u) .
\end{equation}
The Fourier transform of the Gaussian image (<ref>) is given by
\begin{equation}
V_{\text{Gauss}}(|\boldsymbol{u}|,\phi_u) = F_{\text{Gauss}} \exp\left( - \frac{\pi^2 W_{\text{Gauss}}^2 |\boldsymbol{u}|^2}{4 \ln(2)} \right) .
\end{equation}
By the linearity of the Fourier transform, the visibility-domain representation of the model is then simply the sum of these two components,
\begin{equation}
V(|\boldsymbol{u}|,\phi_u) = V_{\text{ring}}(|\boldsymbol{u}|,\phi_u;W) + V_{\text{Gauss}}(|\boldsymbol{u}|,\phi_u) .
\end{equation}
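A compact sketch of this visibility-domain model is given below, using scipy's Bessel function of the first kind; the function signature and coefficient packing are our own conventions for illustration.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, order k

def mgring_visibility(u_mag, phi_u, d, W, beta, F_ring, F_gauss, W_gauss):
    """mG-ring visibility: blurred m-ring plus a concentric Gaussian.

    beta : array of complex coefficients [beta_1, ..., beta_m];
           beta_0 = 1 and beta_{-k} = conj(beta_k) are filled in here.
    """
    beta = np.asarray(beta, dtype=complex)
    m = len(beta)
    ks = np.arange(-m, m + 1)
    betas = np.concatenate([np.conj(beta[::-1]), [1.0 + 0j], beta])
    ring = sum(bk * jv(k, np.pi * u_mag * d) * np.exp(1j * k * (phi_u - np.pi / 2))
               for k, bk in zip(ks, betas))
    blur = np.exp(-(np.pi * W * u_mag) ** 2 / (4.0 * np.log(2.0)))
    gauss = F_gauss * np.exp(-(np.pi * W_gauss * u_mag) ** 2 / (4.0 * np.log(2.0)))
    return F_ring * blur * ring + gauss
```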
When interpreting model-fitting results in subsequent sections, we are interested in a number of derivative quantities. We will typically work with the fractional thickness of the ring, $w$, defined to be
\begin{equation}
w \equiv \frac{W}{d} .
\end{equation}
Similarly, we are typically interested in fractional representations of flux densities. We define
\begin{equation*}
F_0 \equiv F_{\text{ring}} + F_{\text{Gauss}}
\end{equation*}
to be the total flux density, and then
\begin{equation}
f_{\text{ring}} \equiv \frac{F_{\text{ring}}}{F_0}
\end{equation}
and
\begin{equation}
f_{\text{Gauss}} \equiv \frac{F_{\text{Gauss}}}{F_0} \label{eqn:FractionalGaussFlux}
\end{equation}
are the fractions of the total flux density contained in the ring and in the Gaussian components, respectively. Note that $F_0$ is typically close to or fixed to unity as a consequence of normalizing the data by the light curve. We also define a fractional central flux as
\begin{equation}
f_{\text{c}} \equiv \frac{F_{\text{Gauss}}(r<d/2)}{F_{\text{ring}} + F_{\text{Gauss}}(r<d/2)} , \label{eqn:FractionalCentralFlux}
\end{equation}
where $F_{\text{Gauss}}(r<d/2)$ is the integrated flux density of the central Gaussian component interior to the ring radius, given by
\begin{equation}
F_{\text{Gauss}}(r< d/2) = F_{\text{Gauss}} \left[ 1 - \exp\left( - \frac{d^2 \ln(2)}{W_{\text{Gauss}}^2} \right) \right] .
\end{equation}
Following <cit.>, the m-ring position angle $\eta$ and degree of azimuthal asymmetry $A$ are both determined by the coefficient of the $m=1$ mode,
\begin{align}
\eta &\equiv {\rm arg}\left( \int_0^{2\pi} I(\phi) e^{i \phi} d\phi \right) \nonumber \\
&= -{\rm arg}\left( \beta_1 \right) , \label{eqn:PositionAngle}\\
A &\equiv \frac{ \left| \int_0^{2\pi} I(\phi) e^{i \phi} d\phi \right| }{\int_0^{2\pi} I(\phi) d\phi} \nonumber \\
&= |\beta_1| . \label{eqn:Asymmetry}
\end{align}
A number of these derivative quantities are illustrated in the example shown in <ref>.
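These derivative quantities are simple functions of the fitted parameters; a minimal helper (our own naming, for illustration) collecting them might look like:

```python
import numpy as np

def mgring_derived_quantities(d, W, beta1, F_ring, F_gauss, W_gauss):
    """Derived mG-ring quantities as defined in the text."""
    F0 = F_ring + F_gauss                       # total flux density
    w = W / d                                   # fractional ring thickness
    f_ring, f_gauss = F_ring / F0, F_gauss / F0
    # central Gaussian flux interior to the ring radius r < d/2
    F_gauss_in = F_gauss * (1.0 - np.exp(-(d**2) * np.log(2.0) / W_gauss**2))
    f_c = F_gauss_in / (F_ring + F_gauss_in)    # fractional central flux
    eta = -np.angle(beta1)                      # ring position angle
    A = np.abs(beta1)                           # azimuthal asymmetry
    return {"w": w, "f_ring": f_ring, "f_gauss": f_gauss,
            "f_c": f_c, "eta": eta, "A": A}
```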
§.§ Calibrating ring size measurements to a common physical scale
The parameters returned by the geometric modeling and feature extraction analyses used in this paper to describe the emission structure do not correspond directly to physical quantities. Instead, the relationship between measured and physical quantities must be calibrated using data for which we know the correct underlying physical system's defining parameters. For ring size measurements, the associated physical quantity of interest is related to the angular size of the gravitational radius,
\begin{equation}
\theta_g = \frac{G M}{c^2 D} , \label{eqn:ThetaG}
\end{equation}
which sets the absolute scale of the system.
Under the assumption that the emission near the black hole originates from some “typical” radius, a measurement of the angular diameter $d$ of the emitting region will be related to $\theta_g$ by a scaling factor $\alpha$,
\begin{equation}
d = \alpha \theta_g . \label{eqn:Alpha}
\end{equation}
If the observations were directly sensitive to the critical curve bounding the black hole shadow, then $\alpha$ could be determined analytically and would take on a value ranging from $\sim$9.6 to 10.4 depending on the black hole spin and inclination <cit.>. For more realistic emission structures and measurement strategies, the value of $\alpha$ cannot be determined from first principles and must instead be calibrated.
Our $\alpha$ calibration strategy generally follows the procedure developed in <cit.>. Using the library of GRMHD simulations described in <cit.>, we generate a suite of 100 synthetic data sets that emulate the cadence and sensitivity of the 2017 EHT observations and that contain a realistic character and magnitude of data corruption; <ref> describes the generation of these synthetic data sets. In the analyses described in Sections <ref>, <ref>, and <ref>, 90 of these 100 synthetic data sets are used to derive the $\alpha$ calibration for each analysis pathway, while the remaining 10 data sets are used to validate the calibration.
After carrying out ring size measurements on each of the data sets in the suite, we determine $\alpha$ (for each specific combination of data set and measurement technique) by dividing the measured ring diameter by the known value of $\theta_g$ (per <ref>). For a given measurement technique, the distribution of $\alpha$ values that results from applying this procedure to the entire suite of synthetic data sets then provides a measure of $\alpha$ and its theoretical uncertainty. The $\alpha$ value associated with each measurement technique can then be used to translate ring size measurements into their corresponding $\theta_g$ constraints. We note that this calibration strategy assumes that the images contained in the GRMHD library provide a reliable representation of the emission structure in the vicinity of Sgr A*; a separate calibration strategy that relaxes this GRMHD assumption is presented in <cit.>.
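Schematically, the calibration reduces to computing the ratio in <ref> over the synthetic suite and propagating its spread. The sketch below is illustrative bookkeeping only, with hypothetical inputs; the actual analysis characterizes the full $\alpha$ distribution rather than just its first two moments.

```python
import numpy as np

def calibrate_alpha(measured_d, true_theta_g):
    """alpha = d / theta_g for each synthetic data set; the spread of
    alpha across the suite sets the theoretical uncertainty."""
    alpha = np.asarray(measured_d) / np.asarray(true_theta_g)
    return np.mean(alpha), np.std(alpha)

def diameter_to_theta_g(d_measured, alpha_mean, alpha_std):
    """Translate a ring diameter into a theta_g estimate, propagating
    only the calibration uncertainty (measurement errors omitted)."""
    theta_g = d_measured / alpha_mean
    return theta_g, theta_g * (alpha_std / alpha_mean)
```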
<ref> describes elements of the calibration and validation strategy that are specific to each of the analysis pathways detailed in Sections <ref>, <ref>, and <ref>.
§ IMAGE-DOMAIN FEATURE EXTRACTION
The imaging carried out in <cit.> permits very flexible emission structures to be reconstructed from the data, but the majority of these images exhibit a ring-like morphology whose properties we seek to characterize. In this section, we describe our image-domain feature extraction (IDFE) procedure, which uses a topological classification scheme to identify the presence of a ring-like structure in an image and quantifies the parameters that best describe this ring using two different algorithms. We apply this IDFE procedure to the image reconstructions from <cit.>.
§.§ Imaging methods and products
The imaging analyses carried out in <cit.> use four different algorithms classified into three categories: one sampling-based posterior exploration algorithm, one CLEAN-based deconvolution algorithm, and two “regularized maximum likelihood” (RML) algorithms.
All methods produce image reconstructions using band-combined data (i.e., both low band and high band), and the latter three are run on two versions of the data: a “descattered” version that attempts to deconvolve the effects of the diffractive scattering kernel from the data, and a “scattered” version that applies no such deconvolution. The posterior exploration imaging method instead applies the effects of diffractive scattering as part of its internal forward model, rather than deconvolving the data; the analogous “scattered” and “descattered” versions of the images thus correspond simply to those for which the scattering kernel has been applied or not, respectively. The posterior exploration imaging jointly reconstructs the combined April 6 and April 7 data sets (see <ref>), while the CLEAN and RML imaging reconstructs each day individually, focusing primarily on the April 7 data and using the April 6 data for cross-validation. Example fits and residuals for each of the imaging pipelines are shown in <ref>[Note that the preprocessing and data products used during imaging are not the same across imaging pipelines; some pipelines fit to complex visibilities, while others fit iteratively to different combinations of data products that include visibility amplitudes, closure phases, and log closure amplitudes <cit.>. For clarity in <ref>, we simply show residual complex visibilities for each imaging pipeline using a representative image from that pipeline's top set or posterior.], and $\chi^2$ statistics for each image are provided in <ref>; detailed descriptions of the data preprocessing and imaging procedures for each imaging algorithm are provided in <cit.>.
Figure: Representative examples of imaging results for each of the four imaging pipelines used in <cit.>, one per panel. The top section of each panel shows the light-curve-normalized complex visibility data (in blue) as a function of baseline length; the light-curve-normalized visibilities are denoted as $\hat{V}$. The real parts of the complex visibilities are plotted as filled markers, and the imaginary parts of the complex visibilities are plotted as open markers; the corresponding model visibilities are overplotted as red points. The plotted data have been through the pre-analysis and pre-imaging calibration procedures described in <cit.>. The bottom section of each panel shows the normalized residuals – i.e., the difference between the model and data visibilities, normalized by the data uncertainties – as a function of baseline length. The solid red horizontal line marks zero residual, and the two dotted horizontal red lines mark $\pm$ one standard deviation. The blue histogram on the right side of each bottom panel shows the distribution of normalized residuals, with the solid red curve showing a unit-variance normal distribution and the dotted green curve showing a normal distribution with variance equal to that of the normalized residuals. We note that the visibilities for the CLEAN and RML pipelines have been “descattered” and so have somewhat larger typical amplitudes than the visibilities for the posterior exploration pipeline (for which the scattering is incorporated as part of the forward model; see <ref>). We also note that the different imaging pipelines make different choices about data averaging: two pipelines average the data over 60 s intervals, one averages over 120 s intervals, and one averages over scans. Detailed descriptions of each of the imaging methods are provided in <cit.>.
For the CLEAN and RML imaging methods, there are a number of tunable hyperparameters associated with each algorithm whose values are determined through extensive “parameter surveys” carried out on synthetic data sets. During a parameter survey, images of each synthetic data set are reconstructed using a broad range of possible values for each hyperparameter. Settings that produce high-fidelity image reconstructions across all synthetic data sets are collected into a “top set” of hyperparameters, and these settings are then applied for imaging the data. The resulting top sets of images capture emission structures that are consistent with the data, and we use these top-set images for the feature extraction analyses in this paper.
The posterior exploration imaging algorithm samples a posterior distribution over the image structure, and there are no hyperparameters whose values require synthetic data surveys to determine. Rather than producing a top set of images, this algorithm instead produces a sample of images drawn from the posterior determined from the data. We use these posterior image samples for the feature extraction analyses in this paper.
Morphological parameter distributions from IDFE analyses, applied to the descattered top-set and posterior images corresponding to the combined LO+HI band data from the HOPS calibration pipeline. The distributions shown correspond to combined April 6+7 results for posterior imaging and April 7 data for top-set imaging. No -based filtering has been applied.
§.§ Image-domain feature extraction methods
Given the top-set and posterior images from , we carry out IDFE analyses using two separate tools: and . An independent cross-validation of both IDFE tools has been carried out in <cit.>. In this section, we provide a brief overview of each method and specify the details relevant for the analyses presented in this paper.
The Ring Extractor () is an IDFE tool for quantifying the morphological properties of ring-like images. It is available as part of the software library and is described in detail in <cit.>. It was the main tool used in to extract ring properties from the M87* images, and detailed definitions of the various parameters are provided in that paper.
For the majority of the -derived ring parameters, we retain the same definitions as used in . first defines a ring center $(x_0,y_0)$, which is determined to be the point in the image from which radial intensity profiles have a minimum dispersion in their peak intensity radii. The ring radius, $r_0$, is then taken to be the average of these peak intensity radii over all angles, and the ring thickness $w$ is taken to be the angular average of the FWHM about the peak measured along each radial intensity profile. To avoid biases associated with a nonzero floor to the image brightness outside of the ring, we subtract out the quantity
\begin{equation}
I_{\text{floor}} = \frac{1}{2\pi} \int_0^{2\pi} I(r_{\mathrm{max}}=60\,\uas,\phi) \text{d}\phi
\end{equation}
when computing the FWHM, i.e., we compute the average FWHM of $I(r,\phi) - I_{\text{floor}}$.
For all other ring parameters, the definitions remain the same as those used in .
defines the ring position angle $\eta$ and asymmetry $A$ as the argument and amplitude, respectively, of the first circular mode,
\begin{equation}
\beta_1 = \left\langle \frac{\int_0^{2\pi} I(\phi)\, e^{i\phi}\, \text{d}\phi}{\int_0^{2\pi} I(\phi)\, \text{d}\phi} \right\rangle_r ,
\end{equation}
where the angled brackets denote a radial average between $r_0 - w/2$ and $r_0 + w/2$.[We note that this definition for the position angle $\eta$ does not necessarily return the azimuthal location of the intensity peak; rather, it tracks the circular mean of the azimuthal intensity profile.] These definitions are analogous to those used to define the corresponding position angle and asymmetry of the model (<ref> and <ref>, respectively). The fractional central brightness $f_c$ is defined to be the ratio of the mean brightness within $5\,\mu\mathrm{as}$ of the center to the azimuthally averaged brightness along the ring (i.e., along $r = r_0$).
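To make these definitions concrete, the snippet below is a minimal numpy illustration (not the REx implementation) of extracting the position angle and asymmetry from a toy azimuthal profile; the profile shape and grid are assumptions made purely for demonstration.

```python
import numpy as np

# Toy azimuthal intensity profile peaking at phi = 60 degrees.
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
I = 1.0 + 0.4 * np.cos(phi - np.pi / 3.0)

# First circular mode: on a uniform grid the integral ratio reduces to a
# ratio of means. beta_1 is complex; its amplitude and argument give A and eta.
beta1 = np.mean(I * np.exp(1j * phi)) / np.mean(I)
A, eta = np.abs(beta1), np.angle(beta1)
print(A, np.degrees(eta))  # ~0.2 and ~60.0: A is half the cosine amplitude
```

For this single-mode profile the recovered $\eta$ coincides with the brightness peak, but for a general multimodal profile $\eta$ tracks the circular mean, as noted above.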
As in , we replace the negative pixels in images with zero values before performing analyses.
Comparison of two ring classification procedures. Each panel shows a mean ring and nonring image for a single imaging pipeline, with the top row showing how the images are classified by in the “permissive” mode and the bottom row showing the classification determined by the clustering analysis from . All of the images have been produced using descattered data from the HOPS calibration pipeline. The results correspond to combined April 6 and 7 data for posterior imaging and April 7 data for top-set imaging. All of the images share a common brightness color scale; the absolute brightness scale is arbitrary because each image has been normalized to have unit total flux density.
Variational Image-Domain Analysis <cit.> is an IDFE tool for quantifying the parameters describing a specifiable image morphology; it is written in Julia <cit.> and contained in the package [<https://github.com/ptiede/VIDA.jl>]. employs a template-matching approach for image analysis, using parameterized templates to approximate an image and adjusting the parameters of the templates
until a specified cost function is minimized. Within , the cost function takes the form of a probability divergence, which provides a distance metric between the image and template; the template parameters that minimize this divergence are taken to provide the best description of the image. The optimization strategy and additional details are provided in <cit.>.
For the IDFE analyses in this paper, we use 's template and the least-squares divergence (for details, see Section 8 of ). This template describes an image structure that is similar to the model (<ref>), and it is characterized by a ring center ($x_0$, $y_0$), a ring diameter $d=2r_0$, an FWHM fractional ring thickness $w$, and a cosine expansion describing the azimuthal brightness distribution $S(\phi)$,
\begin{align}
S(\phi) = 1 - 2\sum_{k=1}^m A_k \cos\left[k (\phi - \eta_k)\right] .
\end{align}
To maintain consistency with the geometric modeling analyses (see <ref> and <ref>), we use $m=4$. We also restrict the value of the $A_1$ parameter to be $<0.5$ to avoid negative flux in the template. As with the model, the orientation $\eta$ is equal to the first-order phase $\eta_1$, and the asymmetry $A$ is equal to the first-order coefficient $A_1$.
To permit the presence of a central brightness floor, the template contains an additional component in the form of a circular disk whose center point is fixed to coincide with that of the ring. The disk radius is fixed to be $r_0$. A Gaussian falloff is stitched to the outer edge of the disk, such that for radii larger than $r_0$ the intensity profile becomes a Gaussian with mean $r_0$ and an FWHM that matches the ring thickness. The flux of this disk component is a free parameter in the template. We then retain the same definition of the fractional central brightness $f_c$ as used by .
§.§ Identifying rings via topological classification
The output of the IDFE analysis is a set of distributions for the ring parameters from each imaging method; <ref> shows an example set of results from applying both IDFE software packages to the descattered posterior and top-set images. However, both and implicitly assume that the images fed into them contain a ring-like emission structure. If the input image does not contain a ring, then the output measurements may not be meaningful. For each input image, we thus wish to determine both whether the image contains a ring-like structure and how sensitive the IDFE results are to the specific manner in which “a ring-like structure” is defined.
To determine whether the images we are analyzing with and contain ring-like structures, we use [<https://github.com/focisrc/metronization>], a software package that preprocesses the images into a form suitable for topological analysis and extracts topologically relevant features with the help of the open-source computational topology code [<https://mrzv.org/software/dionysus2>] <cit.>. A detailed description of can be found in <cit.>.
The preprocessing procedure consists of the following steps:
* First, the image undergoes a “robust” thresholding step: the pixels are sorted by brightness and accumulated, all pixels below a chosen threshold in this cumulative sequence have their values set to zero, and the rest are set to one.
* Next, in a process called “skeletonization,” the Boolean image produced in the first step is reduced to its topological skeleton that preserves the topological characteristics of the original shape. This step thins large contiguous areas of flagged pixels and enlarges the “holes.”
* The topological skeleton is rebinned and downsampled. Because the skeletonization in the previous step enlarges the holes, holes smaller than the rebinning resolution are preserved.
* The downsampled image undergoes skeletonization once more; a rough sketch of these steps is given below.
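The following is a rough sketch of these four preprocessing steps using scikit-image; the function name `metronize_sketch` and the parameters `flux_frac` and `block` are illustrative assumptions and do not reflect the actual package's API or parameter choices.

```python
import numpy as np
from skimage.measure import block_reduce
from skimage.morphology import skeletonize

def metronize_sketch(img, flux_frac=0.5, block=4):
    # 1. "Robust" thresholding: keep the brightest pixels that together
    #    contain a fraction flux_frac of the total flux (set to 1, rest 0).
    flat = np.sort(img.ravel())[::-1]
    cutoff = flat[np.searchsorted(np.cumsum(flat), flux_frac * flat.sum())]
    mask = img >= cutoff
    # 2. Skeletonization: thin the flagged regions, enlarging the "holes".
    skel = skeletonize(mask)
    # 3. Rebin/downsample (a logical OR over blocks keeps thin structures).
    small = block_reduce(skel, (block, block), np.max).astype(bool)
    # 4. Skeletonize once more on the downsampled image.
    return skeletonize(small)
```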
# A dynamical argument for a Ramsey property
Enhui Shi Soochow University, Suzhou, Jiangsu 215006, China
<EMAIL_ADDRESS>and Hui Xu Soochow University, Suzhou, Jiangsu 215006,
China<EMAIL_ADDRESS>
###### Abstract.
We show by a dynamical argument that there is a positive integer valued
function $q$ defined on the set of positive integers $\mathbb{N}$ such that
$q([\log n]+1)$ is super-polynomial in $n$ and
$\limsup_{n\rightarrow\infty}r\left((2n+1)^{2},q(n)\right)<\infty,$
where $r(\ ,\ )$ is the opposite-Ramsey number function.
## 1\. Introduction and preliminaries
For positive integers $p$ and $q$, we define the opposite-Ramsey number
$r(p,q)$ to be the maximal number $k$ for which every edge-coloring of the
complete graph $K_{q}$ with $p$ colors yields a monochromatic complete
subgraph of order $k$ (the order of a graph means the number of its vertices).
The following is implied by the well-known Ramsey’s theorem.
###### Theorem 1.1.
Let $p$ be a fixed positive integer. Then
$\liminf_{q\rightarrow\infty}r\left(p,q\right)=\infty.$
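For orientation, note that $r(p,q)=\max\{k:R_{p}(k)\leq q\}$, where $R_{p}(k)$ is the classical Ramsey number recalled in Section 3. As a small worked example with $p=2$ colors: since $R_{2}(3)=6$, every $2$-coloring of $K_{6}$ contains a monochromatic triangle, while $R_{2}(4)=18>6$, so $r(2,6)=3$; on the other hand, the standard pentagon coloring of $K_{5}$ contains no monochromatic triangle, so $r(2,5)=2$.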
One may expect that if $p=p(n)$ and $q=q(n)$ are positive integer valued
functions defined on $\mathbb{N}$, and $q(n)$ tends to infinity much faster
than $p(n)$ as $n$ tends to infinity, then we still have
$\liminf_{n\rightarrow\infty}r\left(p(n),q(n)\right)=\infty.$
The purpose of the paper is to show by a dynamical argument that this is not
true in general even if $p(n)$ is a polynomial and $q([\log n]+1)$ is a super-
polynomial. By a super-polynomial, we mean a function
$f:\mathbb{N}\rightarrow\mathbb{R}$ such that for any polynomial $g(n)$,
$\liminf_{n\rightarrow\infty}\frac{|f(n)|}{|g(n)|}=\infty.$
Let $(X,d)$ be a compact metric space. For any $\varepsilon>0$, let
$N(\varepsilon)$ denote the minimal number of subsets of diameter at most
$\varepsilon$ needed to cover $X$. The lower box dimension of $X$ is defined
to be
(1.1) $\underline{\dim}_{B}(X,d)=\liminf_{\varepsilon\rightarrow 0}\frac{\log
N(\varepsilon)}{\log 1/\varepsilon}.$
For a subset $E$ of $X$ and $\varepsilon>0$, we say $E$ is
$\varepsilon$-separated if for any distinct $x,y\in E$,
$d(x,y)\geq\varepsilon$. Let $S(\varepsilon)$ denote the cardinality of a
maximal $\varepsilon$-separated subset of $X$. It is easy to verify
$N(\varepsilon)\leq S(\varepsilon)\leq N(\varepsilon/2)$. Thus
(1.2) $\underline{\dim}_{B}(X,d)=\liminf_{\varepsilon\rightarrow 0}\frac{\log
S(\varepsilon)}{\log 1/\varepsilon}.$
Furthermore, it is easy to see that
(1.3) $\underline{\dim}_{B}(X,d)=\liminf_{n\rightarrow\infty}\frac{\log
S(1/n)}{\log n}.$
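As a quick illustration of (1.3): for $X=[0,1]$ with the Euclidean metric, the set $\{0,1/n,2/n,\dots,1\}$ is a maximal $(1/n)$-separated set, so $S(1/n)=n+1$ and $\underline{\dim}_{B}([0,1],d)=\liminf_{n\rightarrow\infty}\frac{\log(n+1)}{\log n}=1$.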
We use $\dim(X)$ to denote the topological dimension of $X$. It is well known
that the topological dimension of $X$ is always no greater than its lower box
dimension with respect to any compatible metric.
A continuous action $G\curvearrowright X$ of a group $G$ on $X$ is said to be
expansive if there exists $c>0$ such that for any two distinct points $x,y\in
X$, $\sup_{g\in G}d(gx,gy)>c$. For $v=(v_{1},\cdots,v_{k})\in\mathbb{Z}^{k}$,
let $|v|$ denote $\max\{|v_{1}|,\cdots,|v_{k}|\}$.
The following lemma is due to T. Meyerovitch and M. Tsukamoto.
###### Lemma 1.2.
[4, Lemma 4.4] Let $k$ be a positive integer and $T:\mathbb{Z}^{k}\times
X\rightarrow X$ be a continuous action of $\mathbb{Z}^{k}$ on a compact metric
space $(X,d)$. If the action is expansive, then there exist $\alpha>1$ and a
compatible metric $D$ on $X$ such that for any positive integer $n$ and any
two distinct points $x,y\in X$ satisfying $D(x,y)\geq\alpha^{-n}$, we have
$\max_{v\in\mathbb{Z}^{k},|v|\leq n}D(T^{v}x,T^{v}y)\geq\frac{1}{4\alpha}.$
###### Lemma 1.3.
If $(X,d)$ is a compact metric space of infinite dimension, then $S(1/n)$ is
super-polynomial in $n$.
###### Proof.
Since $\dim(X)\leq\underline{\dim}_{B}(X,d)$, we have
(1.4) $\underline{\dim}_{B}(X,d)=\liminf_{n\rightarrow\infty}\frac{\log
S(1/n)}{\log n}=\infty.$
Thus, for any positive integer $k$,
$\liminf_{n\rightarrow\infty}\frac{S(1/n)}{n^{k}}=\infty$. ∎
## 2\. Main results
For a positive real number $x$, we use $[x]$ to denote its integer part.
###### Theorem 2.1.
There is a positive integer valued function $q:\mathbb{N}\rightarrow\mathbb{N}$
such that $q([\log n]+1)$ is super-polynomial and
$\limsup_{n\rightarrow\infty}r\left((2n+1)^{2},q(n)\right)<\infty.$
###### Proof.
Let $T:\mathbb{Z}^{2}\times X\rightarrow X$ be an expansive continuous action
on a compact metric space $(X,d)$ of infinite dimension (see [5] where an
expansive $\mathbb{Z}^{2}$-action on $\mathbb{T}^{\infty}$ was constructed).
By Lemma 1.2, there exist $\alpha>1$ and a compatible metric $D$ on $X$ such
that for any positive integer $n$ and any two distinct points $x,y\in X$ with
$D(x,y)\geq\alpha^{-n}$,
$\max_{v\in\mathbb{Z}^{2},|v|\leq n}D(T^{v}x,T^{v}y)\geq\frac{1}{4\alpha}.$
For each $n\in\mathbb{N}$, let $V_{n}$ be a maximal $\alpha^{-n}$-separated subset of $(X,D)$; hence $|V_{n}|=S(\alpha^{-n})$, where $S$ is taken with respect to the metric $D$. Let $G_{n}$ be the complete graph $K_{S(\alpha^{-n})}$ whose vertex set is $V_{n}$. Now we use the color set $C_{n}=\{v\in\mathbb{Z}^{2}:|v|\leq n\}$, which has $(2n+1)^{2}$ elements, to color the edges of $G_{n}$. Since $V_{n}$ is $\alpha^{-n}$-separated, for any two distinct points $x,y\in V_{n}$ we have $D(x,y)\geq\alpha^{-n}$, so by Lemma 1.2 there exists $v\in C_{n}$ such that $D(T^{v}x,T^{v}y)\geq\frac{1}{4\alpha}$; we color the edge $\{x,y\}$ by such a $v$. By the definition of the opposite-Ramsey number, there is a monochromatic complete subgraph $H_{n}$ of order $r\left((2n+1)^{2},S(\alpha^{-n})\right)$.
By Lemma 1.3, $S(1/n)$ is a super-polynomial. Let $q(n)=S(\alpha^{-n})$; since $\alpha>1$, $q([\log n]+1)$ is super-polynomial in $n$. Suppose, to the contrary, that the conclusion of the theorem is false, i.e.,
$\limsup_{n\rightarrow\infty}r\left((2n+1)^{2},q(n)\right)=\infty.$
Then there is an increasing sequence $(n_{i})$ of positive integers such that the sequence of orders of $H_{n_{i}}$ is unbounded. Since $H_{n_{i}}$ is monochromatic, there exists $v_{n_{i}}\in C_{n_{i}}$ such that the image of the vertex set of $H_{n_{i}}$ under $T^{v_{n_{i}}}$ is $\frac{1}{4\alpha}$-separated. This implies that there are arbitrarily large $\frac{1}{4\alpha}$-separated subsets of $X$, which contradicts the compactness of $X$. This completes the proof. ∎
## 3\. Comparison with Classical Ramsey number
For any positive integers $k$ and $g$, the Ramsey number $R_{g}(k)$ is defined
to be the minimal number $n$ for which every edge-coloring of the complete
graph $K_{n}$ with $g$ colors yields a monochromatic complete subgraph of
order $k$.
By Corollary 3 of Greenwood and Gleason in [1], $R_{g}(k)$ has an upper bound
$g^{gk}$. In [2] Lefmann and Rödl obtained a lower bound $2^{\Omega(gk)}$ for
$R_{g}(k)$. Thus
(3.1) $2^{\Omega((2n+1)^{2}k)}\leq
R_{(2n+1)^{2}}(k)\leq\left((2n+1)^{2}\right)^{(2n+1)^{2}k}.$
Suppose $r\left((2n+1)^{2},q(n)\right)=r<\infty$. This implies that
* (1)
every edge-coloring of complete graph $K_{q(n)}$ with $(2n+1)^{2}$ colors
yields a monochromatic complete subgraph of order $r$, hence
(3.2) $q(n)\geq R_{(2n+1)^{2}}(r);$
* (2)
there exists an edge-coloring of $K_{q(n)}$ with $(2n+1)^{2}$ colors such that
there is no monochromatic complete subgraph of order $r+1$, hence
(3.3) $q(n)\leq R_{(2n+1)^{2}}(r+1).$
Thus $q(n)$ gives a lower bound of $R_{(2n+1)^{2}}(r+1)$ and an upper bound of
$R_{(2n+1)^{2}}(r)$. By Theorem 2.1, every expansive $\mathbb{Z}^{2}$-action
on a compact metric space of infinite dimension gives rise to such a $q(n)$.
In addition, there is a positive integer $r$ and an increasing sequence
$(n_{i})$ of positive integers such that for any $i\in\mathbb{N}$,
$r\left((2n_{i}+1)^{2},q(n_{i})\right)=r$. Therefore, we obtain a lower bound
of $R_{(2n_{i}+1)^{2}}(r+1)$ and an upper bound of $R_{(2n_{i}+1)^{2}}(r)$ for
each $i\in\mathbb{N}$.
If $q(\log n)$ is a super-polynomial, then we claim that for any $A\geq 0$,
(3.4) $\liminf_{n\rightarrow\infty}\frac{q(n)}{A^{n}}=\infty.$
In fact, take a positive integer $m$ such that $e^{m}\geq A$. Then
$\displaystyle\liminf_{n\rightarrow\infty}\frac{q(n)}{A^{n}}\geq\liminf_{n\rightarrow\infty}\frac{q(n)}{e^{mn}}=\liminf_{n\rightarrow\infty}\frac{q(\log(e^{n}))}{(e^{n})^{m}}=\infty.$
The lower bound on $R_{(2n+1)^{2}}(r+1)$ obtained from (3.1) is
$2^{\Omega\left((2n+1)^{2}(r+1)\right)}$, which also grows faster than any
exponential in $n$. If there were an expansive $\mathbb{Z}$-action on a compact
metric space of infinite dimension, then we could get a lower bound for
$R_{2n+1}(r+1)$ growing faster than the classical bound
$2^{\Omega\left((2n+1)(r+1)\right)}$. Unfortunately, Mañé showed in [3] that
such an action does not exist. If we could construct an expansive
$\mathbb{Z}^{2}$-action on a compact metric space of infinite dimension such
that the condition $D(x,y)\geq\alpha^{-n}$ in Lemma 1.2 can be replaced by
$D(x,y)\geq\alpha^{-n^{2}}$, then we could show that the $q(n)$ obtained in
Theorem 2.1 is such that $q([\sqrt{\log n}]+1)$ is super-polynomial, and hence
satisfies $\liminf_{n\rightarrow\infty}\frac{q(n)}{A^{n^{2}}}=\infty$ for every
$A\geq 0$. Such a $q(n)$ would grow faster than the classical lower bound
$2^{\Omega\left((2n+1)^{2}(r+1)\right)}$. We therefore pose the following
question.
###### Question 3.1.
Is there an expansive $\mathbb{Z}^{2}$-action on a compact metric space
$(X,d)$ of infinite dimension and $\alpha>1$ such that for any positive
integer $n$ and any two distinct points $x,y\in X$ satisfying
$d(x,y)\geq\alpha^{-n^{2}}$, we have
$\max_{v\in\mathbb{Z}^{2},|v|\leq n}d(T^{v}x,T^{v}y)\geq\frac{1}{4\alpha}.$
A positive answer to Question 3.1 can give a better estimate of the lower
bound of $R_{(2n_{i}+1)^{2}}(r+1)$, where $(n_{i})$ and $r$ come from the
system. By (3.2), a negative answer also gives a better estimate of the upper
bound of $R_{(2n_{i}+1)^{2}}(r)$.
Finally, we remark that the above comparison between $q(n)$ and the Ramsey
number bounds holds only along a subsequence of positive integers. However, for
a concrete system we may obtain more information and can construct a special
edge-coloring of $K_{q(n)}$ as in the proof of Theorem 2.1. Our method may give
a new direction for estimating bounds on Ramsey numbers and for constructing
edge-colorings of large graphs.
### Acknowledgements
We would like to thank Professor Bingbing Liang and Professor Xin Wang for
their helpful comments.
## References
* [1] R. E. Greenwood and A. M. Gleason, Combinatorial relations and chromatic graphs. Canad. J. Math. 7 (1955), 1-7.
* [2] H. Lefmann and V. Rödl, On canonical Ramsey numbers for complete graphs versus paths. J. Combin. Theory Ser. B 58 (1993), no. 1, 1-13.
* [3] R. Mañé, Expansive homeomorphisms and topological dimension, Trans. Amer. Math. Soc. 252(1979), 313-319.
* [4] T. Meyerovitch and M. Tsukamoto, Expansive multiparameter actions and mean dimension. Trans. Amer. Math. Soc. 371 (2019), no. 10, 7275-7299.
* [5] E. Shi and L. Zhou, The nonexistence of expansive $\mathbb{Z}^{d}$ actions on graphs. Acta Math. Sin. (Engl. Ser.) (2005) 1509-1514.
# Entangled Pair Resource Allocation under Uncertain Fidelity Requirements
Rakpong Kaewpuang1, Minrui Xu1, Stephen John Turner2, Dusit Niyato1, Han Yu1,
and Dong In Kim3
1School of Computer Science and Engineering, Nanyang Technological University,
Singapore
2School of Information Science and Technology, Vidyasirimedhi Institute of
Science and Technology, Thailand
3School of Information and Communication Engineering, Sungkyunkwan University,
South Korea
###### Abstract
In quantum networks, effective entanglement routing facilitates remote
entanglement communication between quantum source and quantum destination
nodes. Unlike routing in classical networks, entanglement routing in quantum
networks must consider the quality of entanglement qubits (i.e., entanglement
fidelity), presenting a challenge in ensuring entanglement fidelity over
extended distances. To address this issue, we propose a resource allocation
model for entangled pairs and an entanglement routing model with a fidelity
guarantee. This approach jointly optimizes entangled resources (i.e.,
entangled pairs) and entanglement routing to support applications in quantum
networks. Our proposed model is formulated using two-stage stochastic
programming, taking into account the uncertainty of quantum application
requirements. Aiming to minimize the total cost, our model ensures efficient
utilization of entangled pairs and energy conservation for quantum repeaters
under uncertain fidelity requirements. Experimental results demonstrate that
our proposed model can reduce the total cost by at least 20% compared to the
baseline model.
###### Index Terms:
Quantum networks, entanglement routing, end-to-end fidelity, entanglement
purification, entangled pair resource allocation, stochastic programming.
## I Introduction
In recent decades, quantum networks have emerged as a groundbreaking
advancement, enabling the support of innovative applications that surpass the
capabilities of classical networks. These applications include quantum key
distribution (QKD), distributed quantum computing, and quantum encryption
protocols [1]. Quantum networks rely on entangled qubit pairs, which serve as
a fundamental component for end-to-end quantum communication between two
quantum nodes. This unique feature allows for secure communication, robust
computational power, and novel cryptographic schemes that are resistant to
current and future threats. Furthermore, quantum networks pave the way for the
development of new technologies and applications that leverage quantum
phenomena such as superposition and entanglement, providing substantial
advantages over their classical counterparts in terms of speed, security, and
efficiency. Consequently, the exploration of quantum networks and their
potential continues to gain momentum, driving research into their optimization
and integration within existing communication infrastructures.
In quantum networks, quantum nodes are interconnected by optical fiber links
[2]. Quantum nodes possess the capability to generate quantum information and
store it within their quantum memories. Furthermore, they can transmit and
receive quantum information between nodes [3, 4]. Prior to exchanging
information, the quantum network must establish an entanglement connection
between the nodes and enable the transmission of quantum information encoded
as a quantum bit (qubit) over this connection. Consequently, the quantum
source node can utilize entangled pairs in the entanglement connection to
transmit information to the quantum destination node. When a quantum source
node is distant from the quantum destination node, entanglement connections
are generated based on routing, and quantum repeaters (i.e., intermediate
quantum nodes in the routing) connect the quantum source node to the quantum
destination node using entanglement swapping, or joint Bell state measurements
at quantum repeaters, for a remote entanglement connection [1]. For large-
scale quantum networks, efficiently utilizing entangled pairs and identifying
optimal routing for entanglement connections are vital challenges. By
optimizing the use of entangled pairs and routing, the energy consumption of
quantum repeaters can be minimized in quantum networks.
Entanglement fidelity is a crucial factor in guaranteeing the quality of
remote entanglement connections, as quantum repeaters may not generate
entangled pairs with the desired fidelity due to system noise [3]. Low-
fidelity entangled pairs can impact the quality of services provided by
quantum applications [5]. For instance, the security of key distribution in
quantum cryptography protocols, such as the BB84 protocol, can be compromised
if entanglement fidelity is lower than the quantum bit error rate requirements
[6]. Nevertheless, entanglement purification techniques can increase the
fidelity value of entangled pairs [7]. These techniques use additional
entangled pairs to achieve higher fidelity values, but determining the optimal
number of additional entangled pairs to satisfy uncertain fidelity
requirements for quantum applications remains a challenge and is often
overlooked in existing works.
To address these challenges, in this paper, we propose a stochastic resource
management framework for achieving optimal entangled resources in quantum
networks and introduce a dynamic entanglement purification algorithm to
adaptively increase fidelity values. We consider entangled pair resource
allocation and fidelity-guaranteed entanglement routing within quantum
networks. Specifically, we solve the optimization problem to obtain the
optimal number of entangled pairs and fidelity values, satisfying all requests
(i.e., multiple quantum source nodes and quantum destination nodes), while
considering the uncertainty of fidelity requirements during resource
allocation and routing.
The major contributions of this paper can be summarized as follows:
* •
We propose a novel entangled pair resource allocation and fidelity-guaranteed
entanglement routing model under uncertainty of fidelity requirements in
quantum networks. Additionally, we introduce a dynamic entanglement
purification algorithm to elastically improve fidelity values.
* •
We formulate and solve a two-stage stochastic programming (SP) model to obtain
not only the optimal decisions on entangled pair resource allocation but also
fidelity-guaranteed entanglement routing with the minimum number of quantum
repeaters in quantum networks. In the proposed model, the entangled pair
resource allocation and fidelity-guaranteed entanglement routing are jointly
calculated, with statistical information in the first stage and realization in
the second stage.
* •
We evaluate the performance of the proposed model through comprehensive
experiments under real-world network topologies. Moreover, we compare the
solution of the proposed model with those of baseline models to demonstrate
the superior performance of our approach.
## II Related Work
In this section we provide a brief overview of relevant works in the field,
highlighting their contributions and limitations. The authors of [3] presented
a fidelity-guaranteed entanglement routing scheme to ensure fidelity for
source-destination pairs in quantum networks. In [3], they initially proposed
an iterative routing algorithm (Q-PATH) for optimal entanglement routing with
minimum entangled pair cost for single source-destination pairs. For multiple
source-destination pairs, they introduced a greedy-based algorithm to minimize
the entanglement routing path and entangled pair count. Similarly, the authors of [1]
proposed an efficient routing scheme for multiple entanglement generation
requests in quantum lattice networks with limited quantum resources. Their
objective was to allocate quantum resources effectively, meeting entanglement
generation requests and fidelity thresholds. In [8], the authors suggested a
linear programming model to maximize the achievable expected entanglement
generation rate between multiple source-destination pairs in a quantum
network, satisfying the end-to-end fidelity demand. This problem resembled
that in [3]. Nonetheless, [8] did not consider the purification process.
Therefore, in [9], the authors introduced redundant entanglement provisioning
and selection (REPS) for throughput maximization in multi-hop quantum networks
with multiple source-destination pairs. The authors of [10] proposed an
adaptive routing scheme addressing quantum memory failures in quantum nodes
within quantum networks, finding the shortest entanglement paths between
source and destination quantum nodes. The author of [11] suggested an optimal
routing protocol for quantum networks, identifying the path with the highest
end-to-end entanglement rate between source-destination pairs in quantum
networks. In [12], the authors employed graph-theoretic tools (i.e., graph
states) to reduce the number of necessary measurements and proposed a routing
method for quantum communication between source and destination nodes.
However, none of these existing works address the problem of jointly managing
entangled pair resources and optimizing fidelity-guaranteed entanglement
routing under fidelity requirement uncertainty.
## III System Model
Figure 1: Short-distance and long-distance quantum teleportation for qubit
transmission in a quantum network.
We propose entangled pair resource allocation and fidelity-guaranteed
entanglement routing in a quantum network for quantum applications. In the
quantum network, quantum nodes connected via optical fiber links can generate,
store, exchange, and process quantum information [3, 4], as shown in Fig. 1.
In Fig. 1, the quantum network establishes an entanglement connection between
the source and destination nodes to enable the transmission of quantum
information in the form of qubits. To create this connection over long
distances, entangled pairs are generated between intermediate quantum nodes
located between the source and destination nodes. As illustrated in Fig. 1(b),
the quantum repeater (an intermediate node) establishes communication with
other quantum nodes through entanglement swapping, creating a long-distance
entanglement connection. When quantum source node 2 attempts to transmit
information to quantum destination node 2, it first becomes entangled with the
quantum repeater, which in turn entangles with quantum destination node 2.
Subsequently, the repeater performs entanglement swapping, generating a long-
distance connection between quantum source node 2 and quantum destination node
2 for the transmission of qubits.
The quantum network is represented by a network graph
$G(\mathcal{M},\mathcal{E})$ where $\mathcal{M}$ and $\mathcal{E}$ are a set
of quantum nodes and a set of edges between two quantum nodes, respectively.
Each quantum node has finite quantum memories that are used to store qubits.
The maximum capacity of an edge between quantum nodes $i$ and $j$, defined as
the maximum number of entangled pairs on that edge, is denoted by
$C^{\mathrm{etp}}_{i,j}$, where $i,j\in\mathcal{M}$. Entanglement purification
is performed on each edge to satisfy the fidelity threshold, denoted by
$F^{\mathrm{ths}}_{i,n}$, where $i,n\in\mathcal{M}$. Each round of the
entanglement purification operation consumes entangled pairs. The fidelity
value of multiple entangled pairs on the same edge is identical, while the
fidelity value on different edges can vary [1].
Figure 2: An example of three purification rounds in the entanglement
purification process.
### III-A Network Model
The quantum node, quantum repeater, quantum source, quantum destination, and
quantum channel are described as follows:
#### III-A1 Quantum Node, Quantum Repeater, Quantum Source, and Quantum
Destination
A quantum node can create, exchange, and process quantum information in
quantum networks [4]. The quantum node contains the quantum repeater’s
function, i.e., entanglement generation, purification, and swapping. Quantum
processors and quantum applications can be installed in quantum nodes to
establish a quantum network and support the quantum applications. In the
quantum repeater, the number of quantum memories is limited, and the
entanglement generation, purification, and swapping are applied [3]. In
quantum networks, all quantum nodes have limited computing and storage
capacities, and are connected via classical networks [3]. A quantum network is
typically managed by a centralized controller through classical networks. This
controller is responsible for overseeing all the quantum nodes and storing
essential information about the network, including its topology and resources.
The quantum nodes can report any updates to this information to ensure it
remains accurate and up to date. To support the quantum application, a quantum
source node can establish an entanglement connection with the quantum
destination node according to the requirement of the quantum application.
#### III-A2 Quantum Channel
Each quantum channel established between intermediate quantum nodes is used to
share entangled pairs of qubits between the nodes. These entangled pairs are
shared via both optical fibers [3, 4] and free space[13]. Qubits are encoded
and then transmitted using quantum teleportation between the entangled pairs
of qubits. Therefore, a capacity (i.e., the number of entangled pairs) of a
quantum channel between intermediate quantum nodes is generated in advance by
the entanglement generation process, e.g., nitrogen-vacancy centers [15],
before qubit transmission. The entanglement generation process, particularly
when using the nitrogen-vacancy center, can be seen as a deterministic black
box [3]. In addition, the fidelity of the entangled pair on each quantum
channel is approximately computed beforehand by the deterministic equations
without noise [14], [15], e.g., the deterministic state-delivery protocol [15].
The three steps for an entanglement routing process are described as follows:
First, the quantum source node, the quantum destination node, and intermediate
quantum nodes generate the entangled pairs in the quantum channel. After that,
the network controller calculates the routing and allocates entangled pair
resources in the network. Finally, the network controller instructs the
corresponding quantum nodes to perform entanglement purification to increase
the fidelity of entangled pairs to satisfy the requirement of the quantum
application. For the multi-hop entanglement connection, entanglement swapping
is introduced to build the long-distance entanglement.
Entanglement generation, purification, and swapping to establish entanglement
connections in the quantum networks can be described as follows:
#### III-A3 Entanglement Generation
To establish physical entanglement between two manageable quantum nodes, they
are linked to an intermediate station, referred to as the heralding station,
through optical fibers. A range of hardware platforms can be employed for this
objective, such as nitrogen-vacancy centers in diamond [16]. Upon successful
generation at the heralding station, the entangled pair is retained in the
memory of both quantum nodes. Serving as a valuable resource, the entangled
pair facilitates entanglement communication and qubit transmission between the
nodes.
#### III-A4 Entanglement Purification
Entanglement purification is applied to increase the fidelity of a Bell pair
by combining two low-fidelity Bell pairs into a single high-fidelity Bell
pair, which is implemented by controlled-NOT (C-NOT) gates or a polarizing
beam splitter [17]. The entanglement purification function [3] is expressed as
follows:
$\displaystyle
f^{\mathrm{pur}}(q_{1},q_{2})=\frac{q_{1}q_{2}}{q_{1}q_{2}+(1-q_{1})(1-q_{2})}.$
(1)
Here, $q_{1}$ and $q_{2}$ are the fidelities of the two Bell pairs in the
purification operation. We introduce a dynamic entanglement purification
algorithm that repeatedly applies the purification operation in Eq. (1) until
the requirement of the quantum application is satisfied; each round of the
operation consumes one additional entangled pair. For example, Fig. 2 shows
three rounds of entanglement purification that increase the fidelity value from
0.75 to 0.987, utilizing four entangled pairs in total. The procedure is given
in Algorithm 1 and enters the SP model through the constraints in Eqs. (12)
and (13).
#### III-A5 Entanglement Swapping
When the quantum source node is far away from the quantum destination node,
entanglement swapping is introduced to establish distant entanglement
connections along the routing. By using entanglement swapping, the multi-hop
entanglement connection can be established along the routing of quantum
repeaters containing entangled pairs.
Algorithm 1 Dynamic Entanglement Purification, i.e.,
$\mathbf{F}^{\mathrm{epg}}(\cdot)$
1: Input: The entangled pairs between quantum nodes $i$ and $j$ and their fidelities
2: Output: The resulting fidelity value ($f_{v}$) between quantum nodes $i$
and $j$
3: $N$ is the total number of entangled pairs minus one, i.e., the number of purification rounds.
4: $f_{v}$ is the fidelity value.
5: for $p\_round$ = $1$ to $N$ do
6: if $p\_round$ == 1 then
7: $q_{1}$ = the fidelity of the first pair
8: $q_{2}$ = the fidelity of the second pair
9: $f_{v}=f^{\mathrm{pur}}(q_{1},q_{2})$
10: else
11: $q_{2}$ = the fidelity of the next pair
12: $f_{v}=f^{\mathrm{pur}}(f_{v},q_{2})$
13: end if
14: end for
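For concreteness, the following is a minimal Python sketch of Eq. (1) and Algorithm 1; it assumes, as in the Fig. 2 example, that all entangled pairs on the edge share the same initial fidelity.

```python
def purify(q1, q2):
    """Entanglement purification map of Eq. (1)."""
    return q1 * q2 / (q1 * q2 + (1.0 - q1) * (1.0 - q2))

def dynamic_purification(fidelities):
    """Algorithm 1: fold purify() over the pairs available on one edge.
    Uses len(fidelities) - 1 purification rounds."""
    f_v = fidelities[0]
    for q in fidelities[1:]:
        f_v = purify(f_v, q)
    return f_v

# Fig. 2 example: three rounds on four pairs of fidelity 0.75.
print(dynamic_purification([0.75] * 4))  # ~0.9878
```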
## IV Problem Formulation
### IV-A Model Description
We define sets and decision variables in the proposed formulation as follows:
* •
${\mathcal{M}}$ represents a set of all quantum nodes that are present within
the network.
* •
${\mathcal{Q}}_{n}$ represents a set of all outbound links from node
$n\in{\mathcal{M}}$.
* •
${\mathcal{J}}_{n}$ represents a set of all inbound links to node
$n\in{\mathcal{M}}$.
* •
${\mathcal{R}}$ represents a set of requests, i.e., quantum source and
destination nodes, in the network.
* •
$x_{i,j,r}$ represents a binary decision variable indicating whether request
$r\in\mathcal{R}$ will take a route with the link from nodes $i$ to $j$ or
not, i.e., $x_{i,j,r}\in\{0,1\}$, $i,j\in\mathcal{M}$.
* •
$y^{\mathrm{r}}_{i,j,r}$ represents a decision variable indicating the number
of entangled pairs between nodes $i$ and $j$ in the reservation phase, i.e.,
$y^{\mathrm{r}}_{i,j,r}\in\{0,1,2,\dots\}$.
* •
$y^{\mathrm{e}}_{i,j,r,\omega}$ represents a decision variable indicating the
number of entangled pairs between nodes $i$ and $j$ under scenario $\omega$ in
the utilization phase, i.e., $y^{\mathrm{e}}_{i,j,r,\omega}\in\{0,1,2,\dots\}$.
* •
$y^{\mathrm{o}}_{i,j,r,\omega}$ represents a decision variable indicating the
number of entangled pairs between nodes $i$ and $j$ under scenario $\omega$ in
the on-demand phase, i.e., $y^{\mathrm{o}}_{i,j,r,\omega}\in\{0,1,2,\dots\}$.
We consider the uncertainty of fidelity requirements for request $r$ in the SP
model. As such, fidelity requirements are treated as uncertain parameters. Let
$\tilde{\omega}$ and $\omega$ represent the random variables of fidelity
requirements and a scenario of request $r$, respectively. A scenario
represents a realization of the random variable $\tilde{\omega}$, and its
value can be taken from the corresponding set of scenarios. We denote the joint
scenario space over all requests by $\Upsilon$ and the set of all scenarios for
request $r$ by $\Omega_{r}$; $\Upsilon$ is given by
$\Upsilon={\displaystyle\prod_{r\in\mathcal{R}}}\Omega_{r}=\Omega_{1}\times\Omega_{2}\times\dots\times\Omega_{|\mathcal{R}|},$ (2)
where
$\Omega_{r}=\{0.0,\dots,1.0\}.$ (3)
Here, $\omega$, $\times$, and $|\mathcal{R}|$ denote a scenario of request $r$
(i.e., $\omega\in\Omega_{r}$), the Cartesian product, and the cardinality of
the set $\mathcal{R}$, respectively. The probability that scenario $\omega$ of
the fidelity requirement of request $r$ is realized is denoted by
$\mathbb{P}_{r}(\omega)$.
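As an illustration of Eqs. (2) and (3), the scenario space can be enumerated directly; the three-point fidelity grid and the uniform probabilities below are assumptions for demonstration only (the experiments in Section V use uniformly distributed fidelity requirements).

```python
from itertools import product

omega_r = [0.80, 0.85, 0.90]   # illustrative discretization of Omega_r
num_requests = 2               # |R|

# Upsilon = Omega_1 x Omega_2 x ... x Omega_|R| (Cartesian product)
upsilon = list(product(omega_r, repeat=num_requests))
prob = {w: 1.0 / len(omega_r) for w in omega_r}  # P_r(omega), uniform

print(len(upsilon))  # |Omega_r|^|R| = 9 scenarios
```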
### IV-B Stochastic Programming Formulation
We propose the two-stage SP model [18] to provision entangled pair resources
and fidelity-guaranteed entanglement routing in quantum networks for quantum
applications. In the first stage, decisions on provisioning the number of
entangled pairs and finding the routing according to the fidelity requirements
are performed. In the second stage, when the numbers of entangled pairs in the
first stage are inadequate, the number of entangled pairs in the on-demand
phase is provisioned to satisfy the rest of the fidelity requirements.
The objective function is designed to achieve the minimum total cost of
entangled pair utilization of all quantum nodes to satisfy all the requests,
which is expressed as follows:
$\min_{x_{i,n,r},\,y^{\mathrm{r}}_{i,n,r}}\;\sum_{r\in{\mathcal{R}}}\sum_{n\in{\mathcal{M}}}\sum_{i\in{\mathcal{J}}_{n}}\big((E^{\mathrm{eng}}_{n,r}+S^{\mathrm{stp}}_{n,r})x_{i,n,r}+R^{\mathrm{r}}_{n,r}y^{\mathrm{r}}_{i,n,r}\big)+\mathbf{E}\left[\mathscr{L}(y^{\mathrm{r}}_{i,n,r},\tilde{\omega})\right].$ (4)
Theoretically, we can convert the SP model with the random variable
$\tilde{\omega}$ into a deterministic equivalent formulation [18], expressed in
Eqs. (5)-(14). The objective function in Eq. (5) corresponds to and shares the
same meaning as Eq. (4). Eq. (6) enforces that the number of outbound routes
exceeds the number of inbound routes by one if the node is the source node
$S_{r}$ of request $r$. Eq. (7) dictates that the number of inbound routes
surpasses the number of outbound routes by one if the node is the destination
node $D_{r}$ of request $r$. Eq. (8) requires that the number of outbound
routes equals the number of inbound routes if the node serves as an
intermediate node for request $r$. Eq. (9) ensures no loop for any request,
implying that each node has at most one outbound route for the request. Eq.
(10) establishes that the number of reserved entangled pairs between nodes $i$
and $n$ in the reservation phase does not exceed the maximum capacity of
entangled pairs on that edge ($C^{\mathrm{etp}}_{i,n}$). Eq. (11) asserts that
the number of utilized entangled pairs between nodes $i$ and $n$ in the
utilization phase is not greater than the number of reserved entangled pairs
between nodes $i$ and $n$ in the reservation phase. Eq. (12) states that the
numbers of entangled pairs in the utilization and on-demand phases must meet
the entanglement fidelity requirement $\omega$; here
$\mathbf{F}^{\mathrm{epg}}(\cdot)$ refers to the entanglement purification
algorithm (Algorithm 1) applied to calculate the entanglement fidelity based on
the numbers of entangled pairs in the utilization and on-demand phases. Eq.
(13) stipulates that these numbers must also satisfy the entanglement fidelity
threshold $F^{\mathrm{ths}}_{i,n}$. Finally, Eq. (14) ensures that the number
of entangled pairs used in the on-demand phase does not surpass the maximum
capacity of on-demand entangled pairs on that edge ($O^{\mathrm{etp}}_{i,n}$).
$\min_{x_{i,n,r},\,y^{\mathrm{r}}_{i,n,r},\,y^{\mathrm{e}}_{i,n,r,\omega},\,y^{\mathrm{o}}_{i,n,r,\omega}}\;\sum_{r\in{\mathcal{R}}}\sum_{n\in{\mathcal{M}}}\sum_{i\in{\mathcal{J}}_{n}}\big((E^{\mathrm{eng}}_{n}+S^{\mathrm{stp}}_{n})x_{i,n,r}y^{\mathrm{r}}_{i,n,r}+R^{\mathrm{r}}_{n,r}y^{\mathrm{r}}_{i,n,r}\big)+\sum_{r\in{\mathcal{R}}}\sum_{\omega\in\Omega_{r}}\mathbb{P}_{r}(\omega)\sum_{n\in{\mathcal{M}}}\sum_{i\in{\mathcal{J}}_{n}}\big(U^{\mathrm{e}}_{n,r}y^{\mathrm{e}}_{i,n,r,\omega}+O^{\mathrm{o}}_{n,r}y^{\mathrm{o}}_{i,n,r,\omega}\big)$ (5)
s.t.
$\sum_{j^{\prime}\in{\mathcal{Q}}_{S_{r}}}x_{S_{r},j^{\prime},r}-\sum_{i^{\prime}\in{\mathcal{J}}_{S_{r}}}x_{i^{\prime},S_{r},r}=1,\quad r\in{\mathcal{R}},$ (6)
$\sum_{i^{\prime}\in{\mathcal{J}}_{D_{r}}}x_{i^{\prime},D_{r},r}-\sum_{j^{\prime}\in{\mathcal{Q}}_{D_{r}}}x_{D_{r},j^{\prime},r}=1,\quad r\in{\mathcal{R}},$ (7)
$\sum_{j^{\prime}\in{\mathcal{Q}}_{n}}x_{n,j^{\prime},r}-\sum_{i^{\prime}\in{\mathcal{J}}_{n}}x_{i^{\prime},n,r}=0,\quad r\in{\mathcal{R}},\;n\in{\mathcal{M}}\setminus\{S_{r},D_{r}\},$ (8)
$\sum_{j^{\prime}\in{\mathcal{Q}}_{n}}x_{n,j^{\prime},r}\leq 1,\quad n\in{\mathcal{M}},\;r\in{\mathcal{R}},$ (9)
$\sum_{r\in{\mathcal{R}}}y^{\mathrm{r}}_{i,n,r}x_{i,n,r}\leq C^{\mathrm{etp}}_{i,n},\quad i,n\in{\mathcal{M}},$ (10)
$y^{\mathrm{e}}_{i,n,r,\omega}x_{i,n,r}\leq y^{\mathrm{r}}_{i,n,r}x_{i,n,r},\quad i,n\in{\mathcal{M}},\;r\in{\mathcal{R}},\;\omega\in\Omega_{r},$ (11)
$\mathbf{F}^{\mathrm{epg}}\big(y^{\mathrm{e}}_{i,n,r,\omega}x_{i,n,r}+y^{\mathrm{o}}_{i,n,r,\omega}\big)\geq\omega,\quad i,n\in{\mathcal{M}},\;r\in{\mathcal{R}},\;\omega\in\Omega_{r},$ (12)
$\mathbf{F}^{\mathrm{epg}}\big(y^{\mathrm{e}}_{i,n,r,\omega}x_{i,n,r}+y^{\mathrm{o}}_{i,n,r,\omega}\big)\geq F^{\mathrm{ths}}_{i,n},\quad i,n\in{\mathcal{M}},\;r\in{\mathcal{R}},\;\omega\in\Omega_{r},$ (13)
$\sum_{r\in{\mathcal{R}}}y^{\mathrm{o}}_{i,n,r,\omega}x_{i,n,r}\leq O^{\mathrm{etp}}_{i,n},\quad i,n\in{\mathcal{M}},\;\omega\in\Omega_{r}.$ (14)
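Since the purification map in Eq. (1) increases fidelity whenever the auxiliary pair has fidelity above $1/2$, the left-hand sides of Eqs. (12) and (13) are monotone in the number of pairs, and the constraints can in principle be pre-tabulated per edge by inverting Algorithm 1. The sketch below (reusing `purify()` from the earlier sketch; `max_pairs` is an illustrative cap) shows one way to do this; it is not part of the GAMS/CPLEX implementation used in Section V.

```python
def pairs_needed(base_fidelity, target, max_pairs=60):
    """Smallest total number of entangled pairs on an edge whose folded
    purification (Algorithm 1) reaches `target`; returns None if the
    target is unreachable within max_pairs pairs."""
    f_v, n = base_fidelity, 1
    while f_v < target and n < max_pairs:
        f_v = purify(f_v, base_fidelity)  # one more round, one more pair
        n += 1
    return n if f_v >= target else None

print(pairs_needed(0.75, 0.98))  # 4 pairs, i.e., three purification rounds
```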
## V Performance Evaluation
### V-A Parameter Setting
We examine the NSFNET network topology, in which nodes are connected using
optical fibers [2], and perform experiments on this topology. For each quantum
node in the topology, we set the fidelity values between nodes $i$ and $j$ as shown in
Fig. 3(a). The fidelity threshold is initially 0.8 [3]. The maximum numbers of
entangled pairs between nodes $i$ and $j$ in the reservation phase
($C^{\mathrm{etp}}_{i,j}$) and the on-demand phase ($O^{\mathrm{etp}}_{i,j}$ )
are initially 10 and 60, respectively. For the SP model, we consider a random
number of requests with fidelity requirements that are uniformly distributed.
We assume the costs of reservation, utilization, and on-demand phases are 10$,
1$, and 200$, respectively. The cost of energy consumption of transferring
traffic of request $r$ through node $n$ ($E^{\mathrm{eng}}_{n}$) is 5$. The
cost of energy consumption to establish repeater $n$ ($S^{\mathrm{stp}}_{n}$)
is 150$. We implement and solve the entangled pair resource allocation and
fidelity-guaranteed entanglement routing via the GAMS/CPLEX solver [19].
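For reference, the settings above can be gathered into a single configuration; the values are those stated in this subsection, while the dictionary keys are our own naming.

```python
params = {
    "topology": "NSFNET",
    "fidelity_threshold": 0.8,   # F_ths
    "C_etp": 10,                 # max reserved entangled pairs per edge
    "O_etp": 60,                 # max on-demand entangled pairs per edge
    "cost_reserve": 10,          # $ per reserved pair
    "cost_utilize": 1,           # $ per utilized pair
    "cost_on_demand": 200,       # $ per on-demand pair
    "E_eng": 5,                  # $ energy cost per request through a node
    "S_stp": 150,                # $ energy cost to establish a repeater
}
```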
### V-B Numerical Results
(a) Three requests.
(b) The entangled pair utilization.
(c) The optimal solution.
(d) The cost comparison.
Figure 3: (a) Three requests in NSFNET topology, (b) The entangled pair
utilization under different fidelity requirements, (c) The optimal solution
under different numbers of reserved entangled pairs, and (d) The cost
comparison of three models.
#### V-B1 Routing and Entangled Pair Utilization
Figure 3(a) shows the solutions of the SP model that satisfy the fidelity
requirements of three requests (i.e., $f_{1}$, $f_{2}$, and $f_{3}$)
represented by different colors. In these solutions, the SP model not only
allocates the entangled pairs to satisfy fidelity requirements but also
minimizes the number of repeaters (i.e., intermediate quantum nodes) for the
requests in the network. In Fig. 3(a), to obtain the optimal cost, the requests
are made to share the same edges and the number of repeaters is minimized while
all requests are still supported. For example, each request utilizes one
entangled pair (i.e., $[1,1,1]$) on the edge between quantum nodes 4 and 11,
which is a direct result of minimizing the number of repeaters.
Figure 3(b) shows the number of entangled pairs in the reservation,
utilization, and on-demand phases under different fidelity requirements. In
Fig. 3(b), in the reservation and utilization phases, the number of entangled
pairs reserved and then utilized increases steadily until the fidelity
requirement is 0.87. At this point, the numbers of reserved and utilized
entangled pairs reach the maximum capacity of the entangled pairs. As a
result, the numbers of reserved and utilized entangled pairs remain stable at 9
entangled pairs, which cannot support higher fidelity requirements. Therefore,
to meet the high fidelity requirements, entangled pairs in the on-demand phase
are utilized. On-demand entangled pairs begin to be used once the fidelity
requirement reaches 0.88, due to the limited capacity of the entangled pairs in
the reservation phase.
#### V-B2 Cost Structure Analysis
In Fig. 3(c), we assess the SP model’s effectiveness in attaining an optimal
solution. Initially, we modify the number of reserved entangled pairs and
subsequently present the optimal solution derived from the SP model, as well
as the impact of reserved entangled pairs on this solution. In Fig. 3(c), the
first-stage cost notably escalates as the quantity of reserved entangled pairs
expands. In contrast, the second-stage cost markedly diminishes when fidelity
requirements are satisfied. This is a consequence of the reservation phase
(i.e., the first stage) necessitating the maximum number of entangled pairs
due to lower costs, while the on-demand phase (i.e., the second stage) demands
the minimum number of entangled pairs. As a result, with 24 reserved entangled
pairs, the optimal solution is attained at 11,952$, and the second-stage cost
remains at 0. This occurs because the reserved entangled pairs fulfill
fidelity requirements, negating the need for on-demand entangled pairs in the
second stage. Beyond 24 reserved entangled pairs, the total cost and first-
stage cost experience a slight increase due to a penalty cost for surplus
reserved entangled pairs. In Fig. 3(c), we observe that both over- and
under-provisioning of entangled pairs contribute to a high overall total cost.
#### V-B3 Performance Evaluation
We compare the SP model’s performance with two alternative models: the
entangled pair resource allocation (EPRA) over the expected value and the
deterministic equivalent formulation. In the EPRA over the expected value, the
fidelity requirements in the first stage are treated as expected demands. In
the deterministic equivalent formulation, fidelity requirements are considered
as exact demands. In Fig. 3(d), the SP model evidently attains the optimal
solution in comparison to the EPRA over the expected value as the number of
requests increases. For instance, when the request count is 2, the SP model
can reduce the total cost by 53.21% compared to the EPRA over the expected
value. However, the SP model’s solution is inferior to that of the
deterministic equivalent formulation. This difference arises because the
deterministic equivalent formulation employs exact fidelity requirements to
reach the solution, while the SP model relies on statistical information of
fidelity requirements. Nevertheless, the SP model is more pragmatic than the
deterministic equivalent formulation, as determining exact entangled pair
fidelity requirements for input in the deterministic equivalent formulation is
challenging in real-world scenarios.
## VI Conclusion
In this paper, we have proposed the entangled pair resource allocation and
fidelity-guaranteed entanglement routing model for quantum networks. Using the
two-stage SP framework, we have formulated the model to determine the optimal
cost under fidelity requirement uncertainty. The experimental results have
demonstrated that our proposed model not only achieves optimal total cost and
entanglement routing but also minimizes the number of repeaters (i.e.,
intermediate quantum nodes). Moreover, the model’s performance has surpassed
that of the EPRA over the expected value by a minimum of 20%. In future
research, we plan to investigate and incorporate the energy consumption model
of quantum nodes within quantum networks into the SP model. Additionally, we
aim to develop an entangled pair resource allocation and fidelity-guaranteed
entanglement routing model for space-air-ground integrated networks (SAGIN).
## References
* [1] C. Li et al., “Effective routing design for remote entanglement generation on quantum networks,” NPJ Quantum Inf., vol. 7, no. 1, pp. 1-12, 2021.
* [2] Y. Cao et al., “Hybrid Trusted/Untrusted Relay-Based Quantum Key Distribution Over Optical Backbone Networks,” IEEE JSAC, vol. 39, no. 9, pp. 2701-2718, 2021.
* [3] J. Li et al., “Fidelity-Guaranteed Entanglement Routing in Quantum Networks,” in IEEE Transactions on Communications, vol. 70, no. 10, pp. 6748-6763, 2022.
* [4] S. Shi and C. Qian, “Concurrent entanglement routing for quantum networks: Model and designs,” in Proc. ACM SIGCOMM Conf., pp. 62–75, 2020.
* [5] A. S. Cacciapuoti et al., “Quantum Internet: Networking Challenges in Distributed Quantum Computing,” in IEEE Network, vol. 34, no. 1, pp. 137-143, 2020.
* [6] Q. Jia et al., “An improved QKD protocol without public announcement basis using periodically derived basis,” Quantum Inf. Process, vol. 20, 69 (2021).
* [7] A. S. Cacciapuoti et al., “When Entanglement Meets Classical Communications: Quantum Teleportation for the Quantum Internet,” in IEEE Transactions on Communications, vol. 68, no. 6, pp. 3808-3833, 2020.
* [8] K. Chakraborty et al., “Entanglement Distribution in a Quantum Network: A Multicommodity Flow-Based Approach,” in IEEE TQE, vol. 1, pp. 1-21, 2020.
* [9] Y. Zhao and C. Qiao, “Redundant Entanglement Provisioning and Selection for Throughput Maximization in Quantum Networks,” IEEE INFOCOM 2021, pp. 1-10, 2021.
* [10] L. Gyongyosi and S. Imre, “Adaptive routing for quantum memory failures in the quantum internet,” Quantum Inf. Process, vol. 18, no. 2, pp. 1–21, 2019.
* [11] M. Caleffi, “Optimal Routing for Quantum Networks,” IEEE Access, vol. 5, pp. 22299-22312, 2017.
* [12] F. Hahn, A. Pappa, and J. Eisert, “Quantum network routing and local complementation,” NPJ Quantum Inf., vol. 5, no. 1, pp. 1–7, 2019.
* [13] J.-G. Ren et al., “Ground-to-satellite quantum teleportation,” Nature, vol. 549, no. 7670, pp. 70–73, 2017.
* [14] M. Caleffi and A. S. Cacciapuoti, “Quantum Switch for the Quantum Internet: Noiseless Communications Through Noisy Channels,” in IEEE JSAC, vol. 38, no. 3, pp. 575-588, 2020.
* [15] P. C. Humphreys et al., “Deterministic delivery of remote entanglement on a quantum network,” Nature, vol. 558, no. 7709, pp. 268-286, 2018.
* [16] A. Dahlberg et al., “A link layer protocol for quantum networks,” in Proc. ACM SIGCOMM Conf., pp. 159–173, 2019.
* [17] X.-M. Hu et al., “Long-distance entanglement purification for quantum communication,” Phys. Rev. Lett., vol. 126, no. 1, 2021.
* [18] J. R. Birge, and F. Louveaux, “Introduction to Stochastic Programming,” 2nd ed. Springer, 2011.
* [19] General Algebraic Modeling System (GAMS), https://www.gams.com/, 2022.
# Boundary Adversarial Examples
Against Adversarial Overfitting
Muhammad Zaid Hameed
IBM Research Europe
Dublin, Ireland
<EMAIL_ADDRESS>
&Beat Buesser
IBM Research Europe
Dublin, Ireland
<EMAIL_ADDRESS>
###### Abstract
Standard adversarial training approaches suffer from robust overfitting where
the robust accuracy decreases when models are adversarially trained for too
long. The origin of this problem is still unclear and conflicting explanations
have been reported, i.e., memorization effects induced by large-loss data, or
effects of small-loss data and growing differences in the loss distribution of
training samples as adversarial training progresses. Consequently, several
mitigation approaches including early stopping, temporal ensembling and weight
perturbations on small loss data have been proposed to mitigate the effect of
robust overfitting. However, a side effect of these strategies is a larger
reduction in clean accuracy compared to standard adversarial training. In this
paper, we investigate whether these mitigation approaches are complementary to
each other in improving adversarial training performance. We further propose the
use of helper adversarial examples that can be obtained with minimal cost in
the adversarial example generation, and show how they increase the clean
accuracy in the existing approaches without compromising the robust accuracy.
## 1 Introduction
Adversarial examples have gained great attention in ML research [1, 2, 3] and
building models with increased robustness against such examples is still an
open challenge [4, 5, 6, 7, 8, 9, 10, 11]. Adversarial training (AT) [4] is
the most successful approach against adversarial examples where approximate
worst-case adversarial examples are used to train the model.
The success of AT is impeded by high computational costs compared to standard
training and gaps between robust and clean accuracy [4, 5, 12, 13]. It has
been shown that AT reduces clean accuracy [5], and mitigation techniques
based on label smoothing and stochastic weight averaging [11] or the use of
additional unlabeled data [14] have been proposed, which solve the problem
only partially.
Recently, another phenomenon associated with adversarial training has been
discovered and coined robust overfitting [15, 11, 16, 17], which occurs when
models are adversarially trained for too long. Especially after the first
learning rate decay, the robust accuracy tends to decrease with additional
training. Even though overfitting to training data results in increased clean
accuracy, the robust accuracy is negatively impacted by this additional
training. The exact causes of robust overfitting are still unclear and most
explanations are conflicting. For example, memorization effects induced by
large loss data, which give rise to high confidence predictions, have been
shown to be a driving force behind robust overfitting [16]. On the other hand,
small loss data and large differences in the loss distribution across training
samples tend to amplify as AT progresses and have been shown to enhance
robust overfitting [17]. Reported mitigation approaches like temporal
ensembling (TE) [16] and model weight perturbation (WP) [17] alleviate robust
overfitting, but tend to trade off more clean accuracy when compared to
standard adversarial training (AT) [4]. This shows that increased
regularization induced by these schemes negatively impact the generalization
performance on clean data.
Here, we analyze how these two seemingly conflicting approaches, TE and WP,
which attribute robust overfitting to training on large-loss and small-loss
samples, respectively, mitigate robust overfitting and whether they can be
combined to create a compounding mitigation effect. First, we investigate how
TE and WP behave during AT and find that they induce different types of
regularization on adversarial examples during overfitting. Second, we evaluate
a combination of TE and WP and find that there is no compounding improvement
in robust accuracy. Third, we minimize the loss on clean input data along with
these robust overfitting mitigation approaches, which results in increased
clean accuracy at the expense of robust accuracy. Thus, we find that the
negative impacts on clean accuracy induced by TE and WP can be prevented,
without affecting the robust accuracy, with appropriate regularization.
We propose to employ TE/WP-based robust overfitting mitigation with the
additional use of helper adversarial examples along with a TE-based
regularization term, and demonstrate an improvement in clean accuracy over the
existing AT approaches. The helper examples can be created during the
adversarial example generation in AT with a minimal computational overhead of
just one extra forward pass. Our idea is to use perturbed samples close to
decision boundaries as helper examples, which increase clean accuracy with
only a small degradation in robust accuracy compared to using clean data
alone.
## 2 Preliminaries
Adversarial training (AT) has empirically proved to be the most effective
defense strategy against adversarial evasion attacks [4, 5, 6, 10, 11]. In the
following, we briefly describe the mechanics of AT [4] and two robust
overfitting mitigation schemes [16, 17].
### 2.1 Adversarial Training (AT) [4]
We consider a classification problem where an input sample
$\mathbf{x}\in\mathbb{R}^{n}$ belongs to a true class $y$ among a set of
classes $\mathcal{Y}=\\{1,2,\ldots,Y\\}$. A DNN-based classifier
$F_{\boldsymbol{\theta}}:\mathbb{R}^{n}\times\mathcal{Y}\to\mathbb{R}$,
assigns a label
$\hat{y}\in\operatorname*{arg\,max}_{y\in\mathcal{Y}}F_{\boldsymbol{\theta}}(\mathbf{x},y)$
to $\mathbf{x}$, where ${\boldsymbol{\theta}}\in\Theta$ denotes the parameters
of DNN. With a slight abuse of notation, we also use $F$ to denote the
classifier, $F(\mathbf{x})$ to denote the class label assigned to
$\mathbf{x}$, and $F(\mathbf{x},y)$ to denote the score of class $y$ for input
$\mathbf{x}$; $p(\mathbf{x})$ denotes the DNN prediction probability
vector. For a correctly classified input $\mathbf{x}$, i.e., where
$y=F(\mathbf{x})$ is the true label, an adversarial attack aims to find a sample
$\tilde{\mathbf{x}}$ in the $\epsilon$-neighborhood of $\mathbf{x}$ in norm
$p$ denoted by
$\mathcal{B}_{\epsilon}(\mathbf{x})=\left\\{\tilde{\mathbf{x}}:\|\tilde{\mathbf{x}}-\mathbf{x}\|_{p}\leq\epsilon\right\\}$,
such that $F(\tilde{\mathbf{x}})\neq F(\mathbf{x})$. In practice, these
adversarial examples are generated by modifying the input $\mathbf{x}$ through
optimizing a loss function $\mathcal{L}$ on the classifier [3, 18, 19, 20, 4].
Finally, AT can be considered as a robust optimization problem:
$\min_{\boldsymbol{\theta}}\max_{\tilde{\mathbf{x}}\in\mathcal{B}_{\epsilon}(\mathbf{x})}\mathcal{L}(F(\tilde{\mathbf{x}},y),y),$
(1)
where $\tilde{\mathbf{x}}$ is the adversarial example. Hence, during training,
adversarial attacks first generate adversarial examples in the neighborhood
$\mathcal{B}_{\epsilon}(\mathbf{x})$ of input $\mathbf{x}$ to approximately
maximize the loss $\mathcal{L}$ (e.g., the cross-entropy loss), followed by
training on these examples to update the network parameters and achieve
robustness against adversarial examples.
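As a concrete illustration, the following is a minimal PyTorch-style sketch of the inner maximization (an $L_{\infty}$ PGD attack) and the outer minimization in (1). The function and parameter names here are ours for illustration, not taken from any reference implementation.

```python
import torch
import torch.nn.functional as nnF

def pgd_attack(model, x, y, eps=8/255, step_size=2/255, steps=10):
    """Approximate the inner max of Eq. (1) with L_inf PGD from a random start."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nnF.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()      # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project onto B_eps(x)
            x_adv = x_adv.clamp(0, 1)                    # stay in valid pixel range
    return x_adv.detach()

def at_step(model, optimizer, x, y):
    """One outer minimization step of standard AT."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = nnF.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```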
### 2.2 Adversarial Training with Temporal Ensembling (AT-TE) [16]
Robust overfitting is attributed to neural networks training on high loss
input data and over-confident predictions (memorization) by the model during
AT [11, 16]. To prevent this, a regularization based on temporal ensembling
(TE) has been proposed in [16] which penalizes over-confident predictions. Let
$z(\mathbf{x})$ denote the ensemble prediction by a model on input
$\mathbf{x}$ which is updated as $z(\mathbf{x})=\eta\cdot
z(\mathbf{x})+(1-\eta)\cdot p(\mathbf{x})$ in each epoch and the AT objective
becomes
$\min_{\boldsymbol{\theta}}\max_{\tilde{\mathbf{x}}\in\mathcal{B}_{\epsilon}(\mathbf{x})}\\{\mathcal{L}(F(\tilde{\mathbf{x}},y),y)+w\cdot\|p(\tilde{\mathbf{x}})-\hat{z}(\mathbf{x})\|_{2}^{2}\\},$
(2)
where $\hat{z}(\mathbf{x})$ is obtained by normalizing $z(\mathbf{x})$ and $w$
is the regularization weight. In this scheme, the regularization term is
activated close to the first learning rate decay and prevents the network from
assigning high confidence to samples with large loss.
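A minimal sketch of the ensemble bookkeeping behind Eq. (2) is given below, assuming per-sample dataset indices are available in each batch. The class and method names are ours, and the exact normalization producing $\hat{z}$ (here a simple renormalization to sum one; [16] uses a bias-correction factor) is an assumption.

```python
import torch

class TemporalEnsemble:
    """Running ensemble prediction z(x), updated once per epoch (cf. Eq. (2))."""

    def __init__(self, num_samples, num_classes, eta=0.9):
        self.z = torch.zeros(num_samples, num_classes)
        self.eta = eta

    def update(self, idx, p):
        # z(x) <- eta * z(x) + (1 - eta) * p(x)
        self.z[idx] = self.eta * self.z[idx] + (1.0 - self.eta) * p.detach().cpu()

    def penalty(self, idx, p_adv, w):
        # w * || p(x_adv) - z_hat(x) ||_2^2  with z_hat a normalized copy of z
        z_hat = self.z[idx].to(p_adv.device)
        z_hat = z_hat / z_hat.sum(dim=1, keepdim=True).clamp_min(1e-8)
        return w * ((p_adv - z_hat) ** 2).sum(dim=1).mean()
```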
### 2.3 Minimum Loss Constrained Adversarial Training (MLCAT-WP) [17]
Training samples with small loss and large differences in the loss
distribution among samples are shown to cause robust overfitting in [17], and
adversarial weight perturbation [9] is employed to increase the loss on such
samples, which helps mitigate robust overfitting. The training objective
becomes
$\min_{\boldsymbol{\theta}}\max_{\mathbf{v}\in\mathcal{V}}\max_{\tilde{\mathbf{x}}\in\mathcal{B}_{\epsilon}(\mathbf{x})}\mathcal{L}(F_{\theta+v}(\tilde{\mathbf{x}},y),y),$
(3)
where $v\in\mathcal{V}$ is an adversarial weight perturbation generated by
$\max_{\mathbf{v}\in\mathcal{V}}\sum_{i}\mathbbm{1}_{\\{\mathcal{L}_{i}\leq
L_{min}\\}}\cdot\mathcal{L}_{i},$ where
$\mathcal{L}_{i}=\mathcal{L}(F_{\theta+v}(\tilde{\mathbf{x}_{i}},y_{i}),y_{i})$,
$L_{min}$ is the threshold for the minimum adversarial loss, and
$\mathbbm{1}_{c}$ is the indicator function, equal to 1 only when condition
$c$ is true.
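The masked objective that drives the weight perturbation can be sketched as follows; the actual ascent over weight perturbations $\mathbf{v}$ follows the AWP machinery of [9], which is not reproduced here, and the function name and default threshold are ours.

```python
import torch
import torch.nn.functional as nnF

def small_loss_objective(model, x_adv, y, l_min=1.5):
    """Objective maximized over weight perturbations v in MLCAT-WP:
    only samples with per-sample adversarial loss <= L_min contribute."""
    per_sample = nnF.cross_entropy(model(x_adv), y, reduction="none")
    mask = (per_sample <= l_min).float()   # indicator 1{L_i <= L_min}
    return (mask * per_sample).sum()
```

Maximizing this objective over a small perturbation of the model weights raises the loss on the small-loss samples, which is exactly what prevents the model from fitting them.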
## 3 Comparison of AT, AT-TE and MLCAT-WP
In order to investigate how standard AT and robust overfitting mitigation
approaches AT-TE and MLCAT-WP behave as training progresses, we consider image
classification on CIFAR-10 [21] with a ResNet-18 model [22]. For all
approaches, we use a Projected Gradient Descent ($\mathrm{PGD}$) attack [4]
for 10 steps denoted by $\mathrm{{PGD}_{10}}$ in training and
$\mathrm{{PGD}_{20}}$ for evaluation; see Appendix A for full details. We
observe from Figure 1(b)-(d) that, as training progresses, an AT model
starts to classify training data with high true class probability (TCP)
(average TCP $\geq 0.5$) after the first learning rate decay at epoch 100, and
the proportion of training samples with very small loss values (in the range
[0, 0.5)) and high TCP also grows ($\geq 40\%$ of all training samples). On the
other hand, an MLCAT-WP trained model’s average TCP on training data increases
gradually and stays small ($\approx 0.4$) at epoch 200, and the proportion of
samples in the loss range [0, 0.5) amounts to only 20% of all training data,
which shows that weight perturbation prevents the model from assigning very
high TCP to training data and from fitting samples with small loss. In the case
of AT-TE, the average TCP for samples with loss in the range [0, 0.5) starts to
decrease when temporal ensembling is activated (epoch $\geq 90$), and even
though the proportion of such samples increases, it lies between AT and
MLCAT-WP. Finally, we also observe from Figure 1(a) that combining weight
perturbation and temporal ensembling (MLCAT-WP+TE) does not result in any
improvement in robust accuracy, and its behavior closely resembles MLCAT-WP.
Moreover, modifying AT-TE and MLCAT-WP to AT-TE+XEC and MLCAT-WP+XEC by
including a cross-entropy loss on clean data results in higher clean accuracy
at the cost of reduced robust accuracy, and in an increase in TCP and in the
proportion of samples with small loss.
Figure 1: CIFAR-10 training for ResNet-18. (a) Test accuracy against clean
data (dark solid lines) and $\mathrm{{PGD}_{20}}$ attack (dim solid lines) are
plotted.
## 4 Boundary Adversarial Examples for Improving Adversarial Training
To counter the negative effect of the regularization in AT-TE and MLCAT-WP on
clean accuracy, and the negative effect of using clean samples on robust
accuracy, we propose to extract additional useful information from the
adversarial example generation process in adversarial training. More
specifically, we extract intermediate adversarial examples that are close to a
decision boundary, taken as soon as the perturbed sample is misclassified. Our
underlying idea is that using boundary (intermediate and weak) adversarial
examples in place of clean samples will guide the network to attain better
clean accuracy without affecting robust accuracy too much. Thus, using this
intermediate perturbed sample
$\mathbf{x}^{\prime}\in\mathcal{B}_{\epsilon}(\mathbf{x})$ and a
regularization based on its prediction $p(\mathbf{x}^{\prime})$ and ensemble
prediction $z(\mathbf{x}^{\prime})$, the AT-TE objective becomes
$\min_{\boldsymbol{\theta}}\\{\mathcal{L}(F(\mathbf{x}^{\prime},y),y)+w\cdot\|p(\mathbf{x}^{\prime})-\hat{z}(\mathbf{x}^{\prime})\|_{2}^{2}+\max_{\tilde{\mathbf{x}}\in\mathcal{B}_{\epsilon}(\mathbf{x})}\\{\mathcal{L}(F(\tilde{\mathbf{x}},y),y)+w\cdot\|p(\tilde{\mathbf{x}})-\hat{z}(\mathbf{x})\|_{2}^{2}\\}\\},$
(4)
where $\hat{z}(\mathbf{x}^{\prime})$ is the normalized $z(\mathbf{x}^{\prime})$,
and $z(\mathbf{x}^{\prime})$ is updated as $z(\mathbf{x}^{\prime})=\eta\cdot
z(\mathbf{x}^{\prime})+(1-\eta)\cdot p(\mathbf{x}^{\prime})$ in each epoch. We
denote this modified AT-TE as AT-TEBS. Similarly, the MLCAT-WP objective becomes
MLCAT-WP+TEBS by including TE and the boundary sample as
$\min_{\boldsymbol{\theta}}\\{\mathcal{L}(F_{\theta+v}(\mathbf{x}^{\prime},y),y)+w\cdot\|p(\mathbf{x}^{\prime})-\hat{z}(\mathbf{x}^{\prime})\|_{2}^{2}+\max_{\mathbf{v}\in\mathcal{V}}\max_{\tilde{\mathbf{x}}\in\mathcal{B}_{\epsilon}(\mathbf{x})}\\{\mathcal{L}(F_{\theta+v}(\tilde{\mathbf{x}},y),y)+w\cdot\|p(\tilde{\mathbf{x}})-\hat{z}(\mathbf{x})\|_{2}^{2}\\}\\}$
(5)
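The boundary example can be recorded during the PGD iterations themselves, reusing the logits already computed at each step. The following sketch extends the earlier PGD routine; the function name and the fallback to the clean sample when no iterate is misclassified are our own assumptions.

```python
import torch
import torch.nn.functional as nnF

def pgd_with_boundary(model, x, y, eps=8/255, step_size=2/255, steps=10):
    """PGD that also returns, per sample, the first intermediate iterate
    that is misclassified -- used as the boundary helper example x'."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    x_bnd = x.clone()                # fallback: clean sample if never fooled
    found = torch.zeros(x.size(0), dtype=torch.bool, device=x.device)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # record the iterate the first time it crosses the decision boundary
        fooled = (logits.argmax(dim=1) != y) & ~found
        x_bnd[fooled] = x_adv[fooled].detach()
        found |= fooled
        loss = nnF.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x + (x_adv + step_size * grad.sign() - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach(), x_bnd, found
```

The returned pair $(\tilde{\mathbf{x}},\mathbf{x}^{\prime})$ then feeds the two cross-entropy and TE terms of Eqs. (4) and (5).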
## 5 Experimental Evaluation
We train a ResNet-18 model using AT, AT-TE, MLCAT-WP, MLCAT-WP+TE, AT-TEBS and
MLCAT-WP+TEBS for CIFAR-10 [21], CIFAR-100 [21] and SVHN [23]; see Appendix A
for details. We use a $\mathrm{{PGD}_{10}}$ attack ($\epsilon=8/255$,
$L_{\infty}$ norm) during training and $\mathrm{{PGD}_{20}}$ at inference. In
addition, we also run AutoAttack (AA) [24], an ensemble of different
attacks, for a more reliable evaluation. Table 1 shows that both AT-TEBS and
MLCAT-WP+TEBS attain a significant increase in clean accuracy over their
counterparts AT-TE and MLCAT-WP/MLCAT-WP+TE ($\approx$2%-3% on CIFAR-10, 2%-5%
on CIFAR-100, and 1%-2% on SVHN). AT-TE and AT-TEBS do not prevent overfitting
on the SVHN dataset because the model fits the training data very early with
high confidence, whereas temporal ensembling activates close to the first
learning rate decay; thus, the regularization based on the ensemble prediction
is ineffective. On the other hand, the weight perturbation based approaches
MLCAT-WP and MLCAT-WP+TEBS result in superior performance across all datasets
compared to AT-TE and AT-TEBS in terms of clean and robust accuracy, and the
use of adversarial boundary examples significantly boosts the clean accuracy
of MLCAT-WP+TEBS, especially for the CIFAR-100 and SVHN datasets. From Figure
2, we observe that, for all three datasets, AT-TEBS and MLCAT-WP+TEBS
approximately retain the robust accuracy of AT-TE and MLCAT-WP, but increase
the clean accuracy to match and even surpass AT in the case of the CIFAR-100
and SVHN datasets.
Figure 2: Accuracy results for AT using ResNet-18. Clean test accuracy (dark
solid lines) and $\mathrm{{PGD}_{20}}$ attack test accuracy (dim solid lines)
are plotted.
Table 1: Test accuracy (mean and standard deviation of 5 runs). “Last” and
“Best” refer to test accuracy at the end of training, and at the end of the
epoch that gives the highest accuracy w.r.t. test data, respectively.
Dataset | Name | Clean(Best) | Clean(Last) | Robust(Best) | Robust(Last) | AA(Best) | AA(Last)
---|---|---|---|---|---|---|---
CIFAR-10 | AT | 82.03$\pm 0.42$ | 84.37$\pm$0.30 | 52.64$\pm$0.13 | 45.06$\pm$0.70 | 48.04$\pm$0.15 | 42.64$\pm$0.62
AT-TE | 82.11$\pm$0.14 | 82.73$\pm$0.22 | 55.69$\pm$0.11 | 54.15$\pm$0.42 | 49.99$\pm$0.16 | 48.95$\pm$0.37
MLCAT-WP | 82.05$\pm$0.31 | 81.78 $\pm$0.26 | 58.16$\pm$0.13 | 57.46$\pm$0.45 | 50.43$\pm$0.07 | 49.96$\pm$0.31
MLCAT-WP+TE | 81.06$\pm$0.49 | 80.74$\pm$0.24 | 58.48$\pm$0.08 | 57.71$\pm$0.22 | 50.67$\pm$0.10 | 50.46$\pm$0.20
AT-TEBS | 83.99$\pm$0.14 | 84.46$\pm$0.36 | 55.26$\pm$0.07 | 53.58$\pm$0.74 | 50.17$\pm$0.09 | 48.80$\pm$0.70
MLCAT-WP+TEBS | 83.712$\pm$0.51 | 83.75$\pm$0.41 | 59.20$\pm$0.21 | 58.53$\pm$0.52 | 50.38$\pm$0.07 | 50.30$\pm$0.26
CIFAR-100 | AT | 55.49$\pm$0.60 | 56.93$\pm$0.15 | 29.50$\pm$0.22 | 22.11$\pm$0.25 | 25.05$\pm$0.44 | 19.97$\pm$0.19
AT-TE | 56.43$\pm$0.25 | 56.94$\pm$0.44 | 32.07$\pm$0.06 | 30.67$\pm$0.18 | 26.19$\pm$0.18 | 25.1$\pm$0.09
MLCAT-WP | 57.65$\pm$0.41 | 57.96$\pm$0.30 | 32.73$\pm$0.11 | 31.68$\pm$0.61 | 27.14$\pm$0.14 | 26.48$\pm$0.41
MLCAT-WP+TE | 56.60$\pm$0.56 | 57.44$\pm$0.33 | 33.02$\pm$0.16 | 32.02$\pm$0.56 | 27.08$\pm$0.16 | 26.58$\pm$0.54
AT-TEBS | 58.78$\pm$0.23 | 58.82$\pm$0.29 | 31.59$\pm$0.09 | 30.40$\pm$0.16 | 25.99$\pm$0.18 | 24.98$\pm$0.23
MLCAT-WP+TEBS | 62.50$\pm$0.28 | 62.45$\pm$0.42 | 31.95$\pm$0.06 | 30.72$\pm$0.65 | 26.57$\pm$0.14 | 25.74$\pm$0.48
SVHN | AT | 89.08$\pm$0.47 | 89.91$\pm$0.30 | 53.21$\pm$0.32 | 44.70$\pm$0.60 | 45.59$\pm$0.61 | 39.97$\pm$0.63
AT-TE | 89.17$\pm$0.58 | 89.83$\pm$0.34 | 53.21$\pm$0.18 | 44.24$\pm$0.71 | 45.81$\pm$0.43 | 39.65$\pm$0.81
MLCAT-WP | 91.45$\pm$0.26 | 91.91$\pm$0.21 | 60.25$\pm$0.28 | 56.98$\pm$0.62 | 51.60$\pm$0.37 | 49.20$\pm$0.30
MLCAT-WP+TE | 91.53$\pm$0.46 | 91.67$\pm$0.30 | 60.33$\pm$0.30 | 57.71$\pm$ 0.84 | 51.57$\pm$0.31 | 49.89$\pm$0.54
AT-TEBS | 91.58$\pm$0.32 | 90.43$\pm$0.46 | 50.94$\pm$0.16 | 44.43$\pm$0.66 | 45.37$\pm$0.29 | 39.43$\pm$0.70
MLCAT-WP+TEBS | 92.43$\pm$0.46 | 92.53$\pm$0.35 | 60.07$\pm$0.31 | 57.61$\pm$ 0.94 | 51.27$\pm$0.49 | 49.42$\pm$0.54
## 6 Conclusion
We investigate temporal ensembling and weight perturbation for mitigating
robust overfitting and discover that temporal ensembling mainly influences
high confidence predictions whereas weight perturbation affects both
confidence in predictions and small loss data samples. Overall, adversarial
weight perturbation, which directly prevents the model from fitting low loss
data samples, achieves better clean and robust accuracy compared to temporal
ensembling. Furthermore, we propose to use samples close to the decision
boundary to improve clean accuracy. These can be obtained directly from the
adversarial example generation process during adversarial training at minimal
additional cost. Together with the ensemble prediction regularization, this
helps retain the robust accuracy of both robust overfitting mitigation
approaches while significantly increasing the clean accuracy.
## References
* [1] J. Bruna, C. Szegedy, I. Sutskever, I. Goodfellow, W. Zaremba, R. Fergus, and D. Erhan, “Intriguing properties of neural networks,” in International Conference on Learning Representations, 2014.
* [2] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in International Conference on Learning Representations, 2015.
* [3] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57, IEEE, 2017.
* [4] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in International Conference on Learning Representations, 2018.
* [5] H. Zhang, Y. Yu, J. Jiao, E. Xing, L. El Ghaoui, and M. Jordan, “Theoretically principled trade-off between robustness and accuracy,” in International Conference on Machine Learning, pp. 7472–7482, 2019.
* [6] A. Shafahi, M. Najibi, M. A. Ghiasi, Z. Xu, J. Dickerson, C. Studer, L. S. Davis, G. Taylor, and T. Goldstein, “Adversarial training for free!,” in Advances in Neural Information Processing Systems, pp. 3353–3364, 2019.
* [7] C. Qin, J. Martens, S. Gowal, D. Krishnan, K. Dvijotham, A. Fawzi, S. De, R. Stanforth, and P. Kohli, “Adversarial robustness through local linearization,” in Advances in Neural Information Processing Systems (H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds.), vol. 32, Curran Associates, Inc., 2019.
* [8] V. Sehwag, S. Wang, P. Mittal, and S. Jana, “Hydra: Pruning adversarially robust neural networks,” in Advances in Neural Information Processing Systems (H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, eds.), vol. 33, pp. 19655–19666, Curran Associates, Inc., 2020.
* [9] D. Wu, S.-T. Xia, and Y. Wang, “Adversarial weight perturbation helps robust generalization,” Advances in Neural Information Processing Systems, vol. 33, 2020.
* [10] S. Gowal, C. Qin, J. Uesato, T. Mann, and P. Kohli, “Uncovering the limits of adversarial training against norm-bounded adversarial examples,” arXiv preprint arXiv:2010.03593, 2020.
* [11] T. Chen, Z. Zhang, S. Liu, S. Chang, and Z. Wang, “Robust overfitting may be mitigated by properly learned smoothening,” in International Conference on Learning Representations, vol. 1, 2021.
* [12] D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry, “Robustness may be at odds with accuracy,” in International Conference on Learning Representations, 2019.
* [13] Y.-Y. Yang, C. Rashtchian, H. Zhang, R. Salakhutdinov, and K. Chaudhuri, “A closer look at accuracy vs. robustness,” Advances in Neural Information Processing Systems, vol. 33, 2020.
* [14] Y. Carmon, A. Raghunathan, L. Schmidt, J. C. Duchi, and P. S. Liang, “Unlabeled data improves adversarial robustness,” in Advances in Neural Information Processing Systems, pp. 11190–11201, 2019.
* [15] L. Rice, E. Wong, and Z. Kolter, “Overfitting in adversarially robust deep learning,” in International Conference on Machine Learning, pp. 8093–8104, PMLR, 2020.
* [16] Y. Dong, K. Xu, X. Yang, T. Pang, Z. Deng, H. Su, and J. Zhu, “Exploring memorization in adversarial training,” in International Conference on Learning Representations, 2022.
* [17] C. Yu, B. Han, L. Shen, J. Yu, C. Gong, M. Gong, and T. Liu, “Understanding robust overfitting of adversarial training and beyond,” in International Conference on Machine Learning, pp. 25595–25610, PMLR, 2022.
* [18] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “Deepfool: a simple and accurate method to fool deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582, 2016.
* [19] P.-Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.-J. Hsieh, “Ead: elastic-net attacks to deep neural networks via adversarial examples,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
* [20] C. Laidlaw and S. Feizi, “Functional adversarial attacks,” in NeurIPS, 2019.
* [21] A. Krizhevsky, “Learning multiple layers of features from tiny images,” tech. rep., University of Toronto, 2009.
* [22] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
* [23] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” 2011.
* [24] F. Croce and M. Hein, “Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks,” in International conference on machine learning, pp. 2206–2216, PMLR, 2020.
## Appendix A Experimental Setup
For all experiments, we use ResNet-18 models [22] trained using SGD with a
momentum value of 0.9 and a weight decay of $5\times 10^{-4}$. The initial
learning rate is set to 0.1 for the CIFAR-10 and CIFAR-100 datasets and 0.01
for the SVHN dataset, and is divided by 10 at the $100^{th}$ and $150^{th}$
epochs, with a total of 200 training epochs. Data augmentation consisting of
horizontal flips and random crops is used for the CIFAR-10 and CIFAR-100
datasets, while no data augmentation is used for the SVHN dataset.
For the $\mathrm{PGD}$ attack used in training, we set $\epsilon=\frac{8}{255}$
in the $L_{\infty}$ norm (maximum perturbation) and 10 attack steps for all
datasets, with a step size of $\frac{2}{255}$ for the CIFAR-10 and CIFAR-100
datasets and $\frac{1}{255}$ for SVHN. For evaluation, we consider the
$\mathrm{PGD}$ attack with 20 steps, $\epsilon=\frac{8}{255}$ in the
$L_{\infty}$ norm, and a step size of $\frac{2}{255}$ for all datasets.
For the temporal ensembling based approaches, the temporal ensembling weight
parameter $w$ is adjusted experimentally: it is set to 300 in AT-TE, AT-TEBS
and AT-TECS for the CIFAR-10 and SVHN datasets and to 3000 for the CIFAR-100
dataset, along a Gaussian ramp-up curve [16]. Similarly, for MLCAT-WP+TE,
MLCAT-WP+TEBS and MLCAT-WP+TECS, $w=30$ is used in training with the CIFAR-10
and SVHN datasets and $w=300$ for the CIFAR-100 dataset, along a Gaussian
ramp-up curve. The momentum term $\eta$ in the ensemble prediction update is
set to 0.9, and temporal ensembling activates at the $90^{th}$ epoch [16] for
all experiments involving temporal ensembling. For all experiments with
MLCAT-WP, MLCAT-WP+TE, MLCAT-WP+TEBS and MLCAT-WP+TECS, $L_{min}=1.5$ for the
CIFAR-10 and SVHN datasets and $L_{min}=4.0$ for the CIFAR-100 dataset; the
other parameters are set as per the original work [17].
Furthermore, all experiments are run on a single NVIDIA A100 Tensor Core GPU
using PyTorch version 1.11.0 on the Red Hat Enterprise Linux release 8.5
operating system.
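For reference, the settings above can be consolidated as follows; this dictionary is a hypothetical convenience restating the values given in this appendix, not an artifact of the original experiments.

```python
# All values are taken from the Appendix A text above.
TRAIN_CONFIG = {
    "model": "ResNet-18",
    "optimizer": {"name": "SGD", "momentum": 0.9, "weight_decay": 5e-4},
    "epochs": 200,
    "lr_decay_epochs": [100, 150],            # learning rate divided by 10
    "lr_init": {"CIFAR-10": 0.1, "CIFAR-100": 0.1, "SVHN": 0.01},
    "augmentation": {"CIFAR-10": ["hflip", "random_crop"],
                     "CIFAR-100": ["hflip", "random_crop"],
                     "SVHN": []},
    "pgd_train": {"eps": 8 / 255, "steps": 10,
                  "step_size": {"CIFAR-10": 2 / 255, "CIFAR-100": 2 / 255,
                                "SVHN": 1 / 255}},
    "pgd_eval": {"eps": 8 / 255, "steps": 20, "step_size": 2 / 255},
    "te": {"eta": 0.9, "start_epoch": 90,     # w follows a Gaussian ramp-up
           "w_at": {"CIFAR-10": 300, "CIFAR-100": 3000, "SVHN": 300},
           "w_mlcat": {"CIFAR-10": 30, "CIFAR-100": 300, "SVHN": 30}},
    "l_min": {"CIFAR-10": 1.5, "CIFAR-100": 4.0, "SVHN": 1.5},
}
```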
## Appendix B Additional Results for CIFAR-10, CIFAR-100 and SVHN Datasets
This section contains additional results for the CIFAR-10, CIFAR-100 and SVHN
datasets. We consider the case where, instead of the boundary sample
$\mathbf{x}^{\prime}$, we use the clean input sample $\mathbf{x}$ and a
regularization based on the network’s current prediction $p(\mathbf{x})$ on
this clean sample and the ensemble prediction $z(\mathbf{x})$. Thus, AT-TEBS is
modified to
$\min_{\boldsymbol{\theta}}\\{\mathcal{L}(F(\mathbf{x},y),y)+w\cdot\|p(\mathbf{x})-\hat{z}(\mathbf{x})\|_{2}^{2}+\max_{\tilde{\mathbf{x}}\in\mathcal{B}_{\epsilon}(\mathbf{x})}\\{\mathcal{L}(F(\tilde{\mathbf{x}},y),y)+w\cdot\|p(\tilde{\mathbf{x}})-\hat{z}(\mathbf{x})\|_{2}^{2}\\}\\}$
(6)
which we denote as AT-TECS. Similarly, the MLCAT-WP+TEBS objective becomes
MLCAT-WP+TECS by including TE and the clean input sample as
$\min_{\boldsymbol{\theta}}\\{\mathcal{L}(F_{\theta+v}(\mathbf{x},y),y)+w\cdot\|p(\mathbf{x})-\hat{z}(\mathbf{x})\|_{2}^{2}+\max_{\mathbf{v}\in\mathcal{V}}\max_{\tilde{\mathbf{x}}\in\mathcal{B}_{\epsilon}(\mathbf{x})}\\{\mathcal{L}(F_{\theta+v}(\tilde{\mathbf{x}},y),y)+w\cdot\|p(\tilde{\mathbf{x}})-\hat{z}(\mathbf{x})\|_{2}^{2}\\}\\}$
(7)
We also consider the case where the MLCAT-WP objective is modified to include
a cross-entropy loss on the boundary sample $\mathbf{x}^{\prime}$ but no
temporal ensembling is employed. We denote this scheme as MLCAT-WP+XEBS, which
is given by
$\min_{\boldsymbol{\theta}}\\{\mathcal{L}(F_{\theta+v}(\mathbf{x}^{\prime},y),y)+\max_{\mathbf{v}\in\mathcal{V}}\max_{\tilde{\mathbf{x}}\in\mathcal{B}_{\epsilon}(\mathbf{x})}\\{\mathcal{L}(F_{\theta+v}(\tilde{\mathbf{x}},y),y)\\}\\}$
(8)
Table 2 shows the inherent trade-off between robust accuracy and clean
accuracy when the clean input sample is used in place of the boundary sample.
The adversarial weight perturbation based approaches MLCAT-WP+TEBS and
MLCAT-WP+TECS appear to be more sensitive to the choice of sample: the
boundary sample clearly leads to significantly higher robust accuracy, whereas
the clean input sample leads to significantly higher clean accuracy. In
addition, robust overfitting occurs more severely for MLCAT-WP+TECS than for
MLCAT-WP+TEBS. MLCAT-WP+XEBS, which does not employ temporal ensembling,
attains clean and robust accuracy that lie between those of MLCAT-WP+TEBS and
MLCAT-WP+TECS. On the other hand, AT-TECS with the clean input sample has
increased clean accuracy and robust accuracy comparable to AT-TEBS for the
CIFAR-10 and CIFAR-100 datasets. Both AT-TEBS and AT-TECS suffer from robust
overfitting when training on the SVHN dataset; the reason is that the networks
learn the data within a few epochs and start to assign high confidence
predictions to input samples, as shown in Figure 5(b). Since the
regularization based on the network's current prediction and the ensemble
prediction activates at a later stage, close to the first learning rate decay
at epoch 100, it becomes ineffective, as the ensemble prediction is by then
also very high, similar to the current prediction. Figures 3-5 show how the
accuracy, average TCP and proportion of small-loss samples evolve over
training for the CIFAR-10, CIFAR-100 and SVHN datasets, respectively.
Table 2: Test accuracy (mean and standard deviation of 5 runs). “Last” and
“Best” refer to test accuracy at the end of training, and at the end of the
epoch that gives the highest accuracy w.r.t. test data, respectively.
Dataset | Name | Clean(Best) | Clean(Last) | Robust(Best) | Robust(Last) | AA(Best) | AA(Last)
---|---|---|---|---|---|---|---
CIFAR-10 | AT-TEBS | 83.99$\pm$0.14 | 84.46$\pm$0.36 | 55.26$\pm$0.07 | 53.58$\pm$0.74 | 50.17$\pm$0.09 | 48.80$\pm$0.70
AT-TECS | 85.62$\pm$0.23 | 85.76$\pm$0.18 | 54.83$\pm$0.23 | 53.21$\pm$0.91 | 50.09$\pm$0.21 | 48.81$\pm$0.61
MLCAT-WP+TEBS | 83.712$\pm$0.51 | 83.75$\pm$0.41 | 59.20$\pm$0.21 | 58.53$\pm$0.52 | 50.38$\pm$0.07 | 50.30$\pm$0.26
MLCAT-WP+TECS | 86.17$\pm$0.37 | 88.47$\pm$0.67 | 55.37$\pm$0.28 | 51.68$\pm$1.18 | 48.13$\pm$0.40 | 46.84$\pm$0.23
MLCAT-WP+XEBS | 84.81$\pm$0.3 | 84.91$\pm$0.33 | 58.76$\pm$0.16 | 57.8$\pm$0.61 | 50.09$\pm$0.09 | 49.75$\pm$0.32
CIFAR-100 | AT-TEBS | 58.78$\pm$0.23 | 58.82$\pm$0.29 | 31.59$\pm$0.09 | 30.40$\pm$0.16 | 25.99$\pm$0.18 | 24.98$\pm$0.23
AT-TECS | 60.71$\pm$0.37 | 60.52$\pm$0.41 | 31.08$\pm$0.22 | 29.72$\pm$0.19 | 25.72$\pm$0.23 | 24.77$\pm$0.15
MLCAT-WP+TEBS | 62.50$\pm$0.28 | 62.45$\pm$0.42 | 31.95$\pm$0.06 | 30.72$\pm$0.65 | 26.57$\pm$0.14 | 25.74$\pm$0.48
MLCAT-WP+TECS | 66.38$\pm$0.82 | 67.25$\pm$0.38 | 28.87$\pm$0.25 | 26.96$\pm$0.53 | 24.00$\pm$0.26 | 22.59$\pm$0.62
MLCAT-WP+XEBS | 62.67$\pm$0.3 | 63.1$\pm$0.37 | 31.19$\pm$0.15 | 29.95$\pm$0.51 | 26.29$\pm$0.14 | 25.46$\pm$0.49
SVHN | AT-TEBS | 91.58$\pm$0.32 | 90.43$\pm$0.46 | 50.94$\pm$0.16 | 44.43$\pm$0.66 | 45.37$\pm$0.29 | 39.43$\pm$0.70
AT-TECS | 91.75$\pm$0.69 | 90.24$\pm$0.43 | 48.72$\pm$0.45 | 43.02$\pm$0.79 | 42.42$\pm$0.27 | 38.22$\pm$0.68
MLCAT-WP+TEBS | 92.43$\pm$0.46 | 92.53$\pm$0.35 | 60.07$\pm$0.31 | 57.61$\pm$ 0.94 | 51.27$\pm$0.49 | 49.42$\pm$0.54
MLCAT-WP+TECS | 93.18$\pm$0.69 | 93.57$\pm$0.41 | 56.22$\pm$1.17 | 52.05$\pm$ 1.78 | 48.71$\pm$1.02 | 45.19$\pm$1.01
MLCAT-WP+XEBS | 92.47$\pm$0.44 | 93.4$\pm$0.82 | 59.78$\pm$0.04 | 55.21$\pm$ 1.2 | 51.29$\pm$0.24 | 46.73$\pm$1.81
Figure 3: CIFAR-10 training for ResNet-18. (a) Test accuracy against clean
data (dark solid lines) and $\mathrm{{PGD}_{20}}$ attack (dim solid lines) are
plotted.
Figure 4: CIFAR-100 training for ResNet-18. (a) Test accuracy against clean
data (dark solid lines) and $\mathrm{{PGD}_{20}}$ attack (dim solid lines) are
plotted.
Figure 5: SVHN training for ResNet-18. (a) Test accuracy against clean data
(dark solid lines) and $\mathrm{{PGD}_{20}}$ attack (dim solid lines) are
plotted.
# Ordinarity of Local Galois Representation Arising from Dwork Motives
Lie Qian, Department of Mathematics, Stanford University<EMAIL_ADDRESS>
(Date: January 25th, 2021)
## 1\. Introduction
Let $F$ be a characteristic $0$ local field containing $\zeta_{N}$ whose
residue characteristic equals $p$ and does not divide $N$. We first
introduce the Dwork motives.
Let $T_{0}=\mathbb{P}^{1}-(\\{\infty\\}\cup\mu_{N})/\mathbb{Z}[1/N]$ with
coordinate $t$ and $Z\subset\mathbb{P}^{N-1}\times T_{0}$ be a projective
family defined by the following equation:
$X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N}=NtX_{1}X_{2}\cdots X_{N}$
The map $\pi:Z\rightarrow T_{0}$ is smooth of relative dimension $N-2$. We
will write $Z_{s}$ for the fiber of this family at a point $s$. Let
$H=\mu_{N}^{N}/\mu_{N}$ and
$H_{0}:=\\{(\xi_{1},\ldots,\xi_{N})\in\mu_{N}^{N}:\xi_{1}\cdots\xi_{N}=1\\}/\mu_{N}\subset
H$
Over $\mathbb{Z}[1/N,\zeta_{N}]$ there is an $H$ action on $Z$ by:
$(\xi_{1},\ldots,\xi_{N})(X_{1},\ldots,X_{N},t)=(\xi_{1}X_{1},\ldots,\xi_{N}X_{N},(\xi_{1}\cdots\xi_{N})^{-1}t)$
Thus $H_{0}$ acts on every fibre $Z_{s}$, and $H$ acts on $Z_{0}$.
Fix a character $\chi:H_{0}\rightarrow\mu_{N}$ of the form
$\chi\ ((\xi_{1},\ldots,\xi_{N}))=\prod_{i=1}^{N}\xi_{i}^{a_{i}}$
where $(a_{1},\ldots,a_{N})$ are $N$ constants such that
$\sum_{i=1}^{N}a_{i}\equiv 0$ mod $N$; this condition makes the character
well-defined, as checked below.
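For concreteness, here is the elementary verification (our own, for the reader's convenience) that $\chi$ descends to the quotient $H_{0}$: rescaling a representative diagonally by $\zeta\in\mu_{N}$ changes the value by
$\chi((\zeta\xi_{1},\ldots,\zeta\xi_{N}))=\prod_{i=1}^{N}(\zeta\xi_{i})^{a_{i}}=\zeta^{\sum_{i}a_{i}}\prod_{i=1}^{N}\xi_{i}^{a_{i}}=\chi((\xi_{1},\ldots,\xi_{N})),$
since $\zeta^{N}=1$ and $\sum_{i}a_{i}\equiv 0$ mod $N$.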
We define the Dwork motive to be given by $Z$ and the $\chi$-eigenpart of the
$H_{0}$ group action. In concrete terms, its $p$-adic realization is defined
below.
For any prime $\lambda$ of $\mathbb{Z}[1/2N,\zeta_{N}]$ of residue
characteristic $p$, we define the lisse sheaf $V_{\lambda}/(T_{0}\times{\rm
Spec}\mathbb{Z}[1/2Np,\zeta_{N}])_{et}$ by:
$V_{\lambda}=(R^{N-2}\pi_{\ast}\mathbb{Z}[\zeta_{N}]_{\lambda})^{\chi,H_{0}}$
here the $\chi,H_{0}$ in the superscript denotes the $\chi$-eigenpart of the
$H_{0}$ action.
We let $V_{\lambda,t}$ denote the fibre of the sheaf $V_{\lambda}$ over $t$
for a $t\in T_{0}(F^{\prime})$, where $F^{\prime}$ is a finite extension of
$F$. In other words, viewed as a $G_{F^{\prime}}$ representation,
$V_{\lambda,t}$ is just
$H^{N-2}(Z_{\overline{t}},\mathbb{Z}[\zeta_{N}]_{\lambda})^{\chi,H_{0}}$, where
$\overline{t}$ is the geometric point corresponding to $t$.
Fix the embedding $\tau:\mathbb{Q}(\zeta_{N})\hookrightarrow\mathbb{C}$ such
that $\tau(\zeta_{N})=e^{2\pi i/N}$. Let $\tilde{\pi}:Y(\mathbb{C})\rightarrow
T_{0}(\mathbb{C})$ denote the base change of $\pi$ along $\tau$ and $V_{B}$ be
the locally constant sheaf over $T_{0}(\mathbb{C})$:
$V_{B}=(R^{N-2}\tilde{\pi}_{\ast}\mathbb{Z}[\zeta_{N}])^{\chi,H_{0}}$
By standard comparison results (see, e.g., [BLGHT09]), $V_{\lambda}$ and $V_{B}$
are locally constant and locally free of the same rank over
$\mathbb{Z}[\zeta_{N}]_{\lambda}$ and $\mathbb{Z}[\zeta_{N}]$. This rank can
be computed by looking at the fibre over $0$. Denote this rank by $n$.
Fix a nonzero base point $s\in T_{0}(\mathbb{C})$. Now we have the monodromy
representation:
$\rho_{s}:\pi_{1}(T_{0}(\mathbb{C}),s)\rightarrow GL(V_{B,s})$
Let $\gamma_{\infty}$ be the loop around $\infty$ as an element of
$\pi_{1}(T_{0}(\mathbb{C}),s)$.
We can now state the main theorem of this paper.
###### Theorem 1.1.
Under the assumption that for the motives defined by any fixed $\sigma\chi$,
where $\sigma\in\mathrm{Gal}(\mathbb{Q}(\mu_{N})/\mathbb{Q})$ is arbitrary,
$\rho_{s}(\gamma_{\infty})$ has minimal polynomial $(X-1)^{n}$, i.e. it is
maximally unipotent, we have that the $G_{F_{0}}$ representation
$V_{\lambda,t^{-1}}$ is regular ordinary for any finite extension $F_{0}$ of
$F$ and $t\in F_{0}$ with $v(t)>0$, where $v$ is the valuation of $F_{0}$.
###### Remark 1.2.
Note that by the argument on pages 12-13 of the author’s forthcoming work
Potential Automorphy for $GL_{n}$, the assumption that
$\rho_{s}(\gamma_{\infty})$ has minimal polynomial $(X-1)^{n}$ is satisfied
when $0\in\\{a_{1},\ldots,a_{N}\\}$.
The main application of our theorems is in the proof of a potential
automorphy theorem claimed in a forthcoming paper by the author. There we
choose certain $(a_{1},\ldots,a_{N})$ so that the input condition of the main
theorem is satisfied. In that paper, we need certain local Galois
representations to be regular and ordinary in order to apply the automorphy
lifting theorem from [ACC+18].
We shall briefly discuss the idea of the proof of the theorem. Since we may
replace $F$ by $F_{0}$ at the very beginning, we assume without loss of
generality from now on that $t$ is an $F$-point of $T_{0}$.
The theorem will follow from the following two theorems that this paper
establishes, each possibly interesting in its own right.
The first theorem claims the existence of a semistable model for $Z_{t}$, with
$t$ of negative valuation, over a finite extension $F^{\prime}/F$. The
semistable model comes from a series of blowups of the naive integral model
given by the equation
$t^{-1}(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots X_{N}$ in
$\mathbb{P}^{N-1}\times\mathcal{O}_{F^{\prime}}$ for some extension
$F^{\prime}/F$. We first work with the fundamental case where $t$ is of
(normalized) valuation $-1$. Then we use the technique of toroidal embeddings
(in a mixed characteristic setting) established by Mumford to go from this
case to the general case. This is the main result of section 2.
The second theorem gives a way to compute the log crystalline cohomology
$H^{\ast}_{crys}((\overline{Y_{1}},\overline{N_{1}})/(W(k),\mathbb{N},1\mapsto
0))$
(see section 3 for a brief recall of the notation concerning log geometry)
with its $N$ operator in terms of log de Rham cohomology, in the setting
where $(\overline{Y_{1}},\overline{N_{1}})$ is log smooth over
$(k,\mathbb{N},1\mapsto 0)$ and can be lifted to a family with log structure
$(Y,N)/(W(k)[T],\mathbb{N},1\mapsto T)$. Indeed, we use the log de Rham
cohomology of the characteristic $0$ lift given by the fibre over $0$ of
$(Y,N)$, and the operator $N$ is given as the connecting homomorphism of a
certain long exact sequence similar to the one used to define the
Gauss-Manin connection. This is the first main result of section 3.
From now on, when there is no ambiguity about the residue field $k$, we will
use $W_{n}$ to denote $W_{n}(k)$ and $W$ to denote $W(k)$.
We will now explain these two results in more detail.
### 1.1. Semistable Model
Let us recall what we mean by semistable throughout this paper, following
[HK94].
###### Definition 1.3.
We say a scheme $X$ over a discrete valuation ring $A$ has semistable
reduction if, etale locally on $X$, there is a smooth morphism $X\rightarrow
\text{Spec}\ A[T_{1},\ldots,T_{r}]/(T_{1}\cdots T_{r}-\pi)$ for some $r\geq 0$,
where $\pi$ is a uniformizer of the ring $A$.
###### Theorem 1.4.
For any $t\in F$ such that $v(t)=d>0$, there exists a totally ramified
extension $F^{\prime}/F$ generated by $\pi^{1/e}$, a choice of $e$-th root of a
uniformizer $\pi$ of $\mathcal{O}_{F}$ (thus $F^{\prime}$ is totally ramified
of degree $e$ over $F$ with uniformizer $\pi^{1/e}$), and a semistable model
$\mathcal{Y}$ over $\mathcal{O}_{F^{\prime}}$ of $Z_{t^{-1}}$ with compatible
$H_{0}$ action.
Here is a supplement to the above theorem. It gives some idea of how the
semistable model $\mathcal{Y}$ is constructed, and will be used as a link to
reduce our computation to the situation over the unramified base ring $W$,
where Hyodo-Kato's log geometry technique can be applied.
Let $\text{Spec}\ W[S,U^{\pm 1}]^{\prime}$ denote the open subscheme of
$\text{Spec}\ W[S,U^{\pm 1}]$ defined by $(S^{de}U)^{N}\neq 1$ and let
$W[S,U^{\pm 1}]^{\prime}$ denote the ring of regular function of this scheme.
###### Proposition 1.5.
In the setting of the above theorem, we can actually find a variety $Z$ with
$H_{0}$ action over $\text{Spec}\ W[S,U^{\pm 1}]^{\prime}$ that is a blowup of
the variety $US^{de}(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots
X_{N}$ over $\text{Spec}\ W[S,U^{\pm 1}]^{\prime}$ (the latter is the base
change of the previous
$UT(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots X_{N}$ along
$W[T,U^{\pm 1}]\rightarrow W[S,U^{\pm 1}]^{\prime}$, $T\mapsto S^{de}$) that
is an isomorphism outside the closed subscheme defined by $S$, such that
locally $Z$ admits an etale map to
(1.1) $W[U^{\pm 1},S,Z_{1},\ldots,Z_{r},Z_{r+1}^{\pm 1},\ldots,Z_{n}^{\pm
1}]/(US-Z_{1}\cdots Z_{r})$
over $W[S,U^{\pm 1}]^{\prime}$.
Choosing a uniformizer $\pi$ of $\mathcal{O}_{F}$ and writing $t=u\pi^{d}$ with
$u\in\mathcal{O}_{F}^{\times}$, the $\mathcal{Y}$ in the above theorem is
obtained from the base change of $Z$ along $W[S,U^{\pm
1}]^{\prime}\rightarrow\mathcal{O}_{F^{\prime}}$, $S\mapsto\pi^{1/e},U\mapsto
u$.
The link to a model over $W$ will be stated in Remark 2.4.
### 1.2. Log Geometry
A general log scheme is usually denoted as $(Z,M)$ for a scheme $Z$ and a
sheaf of monoids $M$. However, we will sometimes use a third argument after
$M$ to indicate what the structure map $M\rightarrow\mathcal{O}_{Z}$ is.
Again, we will use $\text{Spec}\ W[T]^{\prime}$ to denote a fixed choice of affine open
subscheme of $\text{Spec}\ W[T]$ such that the closed subscheme defined by
$T=0$ is contained in $\text{Spec}\ W[T]^{\prime}$. Let $\text{Spec}\
W_{n}[T]^{\prime}$ be the mod $p^{n}$ reduction of $\text{Spec}\
W[T]^{\prime}$.
Let $(Y,N)$ be a log smooth scheme over $(W[T]^{\prime},\mathbb{N},1\mapsto
T)$. Denote by $(Y_{n},N_{n})$ (resp. $(\overline{Y_{n}},\overline{N_{n}})$)
the base change of $(Y,N)$ along $(W_{n}[T]^{\prime},\mathbb{N},1\mapsto
T)\rightarrow(W[T]^{\prime},\mathbb{N},1\mapsto T)$(resp.
$(W_{n},\mathbb{N},1\mapsto 0)\rightarrow(W[T]^{\prime},\mathbb{N},1\mapsto
T)$). Let $(\overline{Y},\overline{N})$ be the base change of $(Y,N)$ along
$(W,\mathbb{N},1\mapsto 0)\rightarrow(W[T]^{\prime},\mathbb{N},1\mapsto T)$.
Note that the closed immersions in the corresponding fibre diagram are all
exact and the projection maps to the base are all log smooth.
Hyodo-Kato define the $i$-th log crystalline cohomology of
$(\overline{Y_{1}},\overline{N_{1}})$ as the limit
(1.2)
$\left(\varprojlim_{n}H^{i}\left(\left((\overline{Y_{1}},\overline{N_{1}})/(W_{n},\mathbb{N},1\mapsto
0)\right)_{crys},\mathcal{O}_{\overline{Y_{1}}/W_{n}}\right)\right)[\frac{1}{p}]$
and equip it with a nilpotent operator $N$.
###### Theorem 1.6.
The above limit is isomorphic to
$\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)})[\frac{1}{p}]$ while the operator $N$ is given as the degree $i$ boundary
homomorphism of the long exact sequence given by the following exact triangle
$\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}[-1]\xrightarrow{\cdot
d\log 1}\mathrm{dR}_{\overline{Y}/(W,(0))}\longrightarrow\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)}$
which is defined at each degree $i$ by taking the $i$-th wedge power of the
locally split exact sequence of locally free sheaves of modules
$0\longrightarrow\mathcal{O}_{\overline{Y}}\xrightarrow{\cdot d\log
1}\omega^{1}_{\overline{Y}/(W,(0))}\longrightarrow\omega^{1}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)}\longrightarrow 0$
given in Theorem 3.2.3 of [Ogu18], since we note that
$(\overline{Y},\overline{N})$ is log smooth over $(W,\mathbb{N},1\mapsto 0)$
and we identify $\omega^{1}_{(W,\mathbb{N},1\mapsto 0)/(W,(0))}\cong W$ by
$d\log 1\mapsto 1$.
###### Remark 1.7.
In the setting of the above theorem, note also that the resulting exact
sequence
$0\longrightarrow\omega^{i-1}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)}\xrightarrow{\cdot d\log
1}\omega^{i}_{\overline{Y}/(W,(0))}\longrightarrow\omega^{i}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)}\longrightarrow 0$
at degree $i$ is a locally split exact sequence of locally free sheaves of
modules.
Finally, let $(\overline{Y}_{\mathbb{C}},\overline{N}_{\mathbb{C}})$ be a
proper log scheme smooth over $(\mathbb{C},\mathbb{N},1\mapsto 0)$ and
$(\overline{Y^{\mathrm{an}}},\overline{N^{\mathrm{an}}})$ be the analytic log
scheme associated to it which is also smooth over the analytic point
$(\mathrm{pt},\mathbb{N},1\mapsto 0)$. We would like to reduce the computation
to the analytic setting by the following theorem. It will be proved by GAGA.
###### Theorem 1.8.
There exists an $N$ equivariant isomorphism
$\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}_{\mathbb{C}}/(\mathbb{C},\mathbb{N},1\mapsto
0)})\cong\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto
0)})$
where the $N$ on each hypercohomology is defined by the degree $i$ boundary
morphism of the exact triangles below, obtained similarly to the one in
Theorem 1.6:
$\mathrm{dR}_{\overline{Y}_{\mathbb{C}}/(\mathbb{C},\mathbb{N},1\mapsto
0)}[-1]\xrightarrow{\cdot d\log
1}\mathrm{dR}_{\overline{Y}_{\mathbb{C}}/(\mathbb{C},(0))}\longrightarrow\mathrm{dR}_{\overline{Y}_{\mathbb{C}}/(\mathbb{C},\mathbb{N},1\mapsto
0)}$
$\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto
0)}[-1]\xrightarrow{\cdot d\log
1}\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},(0))}\longrightarrow\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto
0)}$
### Acknowledgements.
I would like to first thank Richard Taylor for encouraging me to think about
the subject of this paper. I also want to thank him for all the helpful
comments on the draft of this paper. I am grateful to Richard Taylor, Brian
Conrad, Weibo Fu, Ravi Vakil, and Bogdan Zavyalov for many interesting
conversations during the preparation of this text.
## 2\. Existence of Semistable Blowup
In this section we always assume char$(k)\nmid N$ and $N>1$. Again, we will
use $W_{n}$ to denote $W_{n}(k)$ and $W$ to denote $W(k)$.
We first prove a lemma that essentially settles the case where $t$ is a
uniformizer in Theorem 1.4, via base change along $W[T,U^{\pm
1}]^{\prime}\rightarrow\mathcal{O}_{F}$, $T\mapsto t,U\mapsto u$, where
$\text{Spec}\ W[T,U^{\pm 1}]^{\prime}$ is the open subscheme of $\text{Spec}\
W[T,U^{\pm 1}]$ defined by $(TU)^{N}\neq 1$. The idea of this blowup process
comes from a construction of Nick Shepherd-Barron.
###### Lemma 2.1.
There exists a blowup $\mathbb{X}$ of the variety
$UT(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots X_{N}$ over
$\text{Spec}\ W[T,U^{\pm 1}]^{\prime}$ with an action of $H_{0}$ such that the
blowdown map is $H_{0}$-equivariant and etale locally $\mathbb{X}$ admits an
etale map to
$W[T,U^{\pm 1}]^{\prime}[Z_{1},\ldots,Z_{r},Z_{r+1}^{\pm 1},\ldots,Z_{n}^{\pm
1}]/(TU-Z_{1}\cdots Z_{r})$
over $\text{Spec}\ W[T,U^{\pm 1}]^{\prime}$.
###### Proof.
The sequence of blowups is as follows. Initially we have divisors $D_{i}$
defined by $X_{i}$ and $D$ defined by $X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N}$.
Denoting the original variety by $Y_{N}$, we blow up along $D\cap D_{N}$ to get
a variety $Y_{N-1}$, as well as the divisors $D_{i}^{(N-1)}$ and $D^{(N-1)}$ as
the strict transforms of $D_{i}$ and $D$ respectively. In general, by induction
we will have the variety $Y_{j}$ at the $(N-j)$-th stage, with divisors
$D_{i}^{(j)}$ and $D^{(j)}$ as the strict transforms of $D_{i}^{(j+1)}$ and
$D^{(j+1)}$ for all $i\in\\{1,\ldots,N\\}$; we then blow up along $D^{(j)}\cap
D_{j}^{(j)}$ to get the variety $Y_{j-1}$ with the divisors $D_{i}^{(j-1)}$
and $D^{(j-1)}$. The final step is that we let $\mathbb{X}:=Y_{0}$ be the
blowup of $Y_{1}$ along $D^{(1)}\cap D_{1}^{(1)}$. Note that for each blowup
$Y_{j-1}\rightarrow Y_{j}$, we can associate an $H_{0}$ action on $Y_{j-1}$
such that the morphism is $H_{0}$-equivariant because we may see by induction
that the locus $D^{(j)}\cap D_{j}^{(j)}$ we are blowing up along is
$H_{0}$-stable.
We check the desired local property of $Y_{0}$ by hand. Fix an
$i\in\\{1,\ldots,N\\}$ from now on; we will only work with the preimage of
the affine chart $X_{i}\neq 0$, and we will still use $Y_{k}$ to denote the
preimage of this chart in the original $Y_{k}$. In each step of the blowups, we
may write the blowup as the union of two affine open subschemes given by the
complement of the strict transform of the two divisors whose intersection we
are blowing up along. And note that the affine open given by the complement of
$D^{(j)}$ will be isomorphic to its preimage under all further blowup, since
it has empty intersection with the locus we are blowing up along.
More precisely, let $l$ be the function defined on
$\\{1,\ldots,\hat{i},\ldots,N\\}$ by the rule $l(x)=x+1$ if $x\neq i-1$ while
$l(i-1)=i+1$. For $0<k\leq N$ with $k\neq i$, let $U_{k}$ denote the variety
$\text{Spec}\ (W[T,U^{\pm
1}]^{\prime}[x_{1},\ldots,\hat{x_{i}},\ldots,x_{N},b_{k}^{\prime},b_{l(k)}]/(Nb_{k}^{\prime}\prod_{1\leq
j\leq k-1,j\neq i}x_{j}-UT,$
$b_{k}^{\prime}b_{l(k)}-x_{k},b_{l(k)}\prod_{l(k)\leq j\leq N,j\neq
i}x_{j}-\sum_{1\leq j\leq N,j\neq i}x_{j}^{N}-1)$
here $x_{j}$ are the affine coordinates $\frac{X_{j}}{X_{i}}$, with the
understanding that when $k=N$, the product $\prod_{l(k)\leq j\leq N,j\neq
i}x_{j}=1$, and thus $b_{l(N)}=\sum_{1\leq j\leq N,j\neq i}x_{j}^{N}+1$.
And for $0<k<N$ with $k\neq i$, let $V_{k}$ denote the variety
$\text{Spec}\ (W[T,U^{\pm
1}]^{\prime}[x_{1},\ldots,\hat{x_{i}},\ldots,x_{N},b_{l(k)}]/(N\prod_{1\leq
j\leq k,j\neq i}x_{j}-UTb_{l(k)},$ $b_{l(k)}\prod_{l(k)\leq j\leq N,j\neq
i}x_{j}-\sum_{1\leq j\leq N,j\neq i}x_{j}^{N}-1)$
It can be seen inductively in $k$ that the following properties hold.
* •
$Y_{k}=V_{k}\cup\cup_{l(k)\leq j\leq N,j\neq i}U_{j}$ as affine opens for
$0\leq k\leq N,k\neq i$, $Y_{i}=Y_{i+1}$.
* •
for any $m\leq k$, $D_{m}^{(k)}$ is the divisor given by $x_{m}$ in the affine
open $V_{k}$ and $U_{j}$ for all $l(k)\leq j\leq N,j\neq i$.
* •
for $m\geq l(k)$, $D_{m}^{(k)}$ is the divisor given by $b_{m}^{\prime}$ in
$U_{m}$ and $x_{m}$ in $U_{j}$ for all $j>m$ and it has empty intersection
with the rest of the affine opens.
* •
$D^{(k)}$ is given by the divisor $b_{l(k)}$ in the affine open $V_{k}$ only.
Thus the blowup is an isomorphism over each $U_{k}$ in the affine charts of
$Y_{j}$ and maps $V_{j-1}\cup U_{j}$ to $V_{j}$ in the next step. From this,
it suffices to verify that all $U_{k}$ satisfy the local property. Now $U_{k}$
can be written as
$\text{Spec}\ W[U^{\pm
1},x_{1},\ldots,\hat{x_{i}},\hat{x_{k}},\ldots,x_{N},b_{k}^{\prime},b_{l(k)}]/(b_{l(k)}\prod_{l(k)\leq
j\leq N,j\neq i}x_{j}-$ $\sum_{1\leq j\leq N,j\neq
i,k}x_{j}^{N}-(b_{k}^{\prime}b_{l(k)})^{N}-1)$
We first prove that $U_{k}$ is regular by applying the Jacobian criterion to
this variety. We fix a closed point $q$ of this variety giving rise to a
$\overline{k}$-point of the form
$x_{1}\mapsto a_{1},\ldots,x_{N}\mapsto a_{N},b_{k}^{\prime}\mapsto
u,b_{l(k)}\mapsto v,U\mapsto w,$
for some constant $a_{j},u,v,w\in\overline{k}$. Let $\mathfrak{m}$ be the
maximal ideal in $W[U^{\pm
1},x_{1},\ldots,\hat{x_{i}},\hat{x_{k}},\ldots,x_{N},b_{k}^{\prime},b_{l(k)}]$.
It suffices to check that the defining equation is not $0$ in
$\mathfrak{m}/\mathfrak{m}^{2}\otimes_{k(\mathfrak{m})}\overline{k}$, which
has a basis $dp,dx_{1},\ldots,dx_{N},db_{k}^{\prime},db_{l(k)},dU$.
Let $f$ be the defining equation
$b_{l(k)}\prod_{l(k)\leq j\leq N,j\neq i}x_{j}-\sum_{1\leq j\leq N,j\neq
i,k}x_{j}^{N}-(b_{k}^{\prime}b_{l(k)})^{N}-1$
of $U_{k}$. Then $df=\sum_{j\neq
i,k}f_{j}dx_{j}+f_{u}db_{k}^{\prime}+f_{v}db_{l(k)}$ where
$\begin{array}[]{ll}f_{j}=-Na_{j}^{N-1}&\text{if}\ j<k\\\
f_{j}=v\prod_{l(k)\leq s\leq N,s\neq i,j}a_{s}-Na_{j}^{N-1}&\text{if}\ j\geq
l(k)\\\ f_{u}=-Nu^{N-1}v^{N}&\\\ f_{v}=-Nu^{N}v^{N-1}+\prod_{l(k)\leq s\leq
N,s\neq i}a_{s}&\end{array}$
If all of the coefficients are $0$, we see that $a_{j}=0$, $\forall j<k$ from
$f_{j}=0$, since char$(k)\nmid N$. From $f_{u}=0$ we see $u=0$ or $v=0$; either
implies by $f_{v}=0$ that $a_{s}=0$ for some $l(k)\leq s\leq N,s\neq i$. This
implies $a_{j}=0$, $\forall j\neq i,k,s$ by the $f_{j}=0$, which in turn
implies $a_{s}=0$ by the $f_{s}=0$. But substituting the known zeroes into $f$
we see that this can never happen. Thus $U_{k}$ is regular.
To verify the local property, fix a $U_{k}$ to work in.
(1) If $f_{j}\neq 0$ for some $j\geq l(k)$, then we claim that the map
$U_{k}\rightarrow\text{Spec}\ W[U^{\pm
1},x_{1},\ldots,\hat{x_{i}},\hat{x_{j}},\hat{x_{k}},\ldots,x_{N},b_{k}^{\prime},b_{l(k)}]$
given by the corresponding coordinates is etale near the point $q$. Let
$\mathfrak{n}$ denote the maximal ideal corresponding to the image of $q$
under this map. Then the claim follows because
$\begin{split}\mathfrak{m}/\mathfrak{m}^{2}\otimes_{k(\mathfrak{m})}\overline{k}&\cong\overline{k}dp\oplus\overline{k}dx_{1}\oplus\cdots\oplus\overline{k}dx_{N}\oplus\overline{k}db_{k}^{\prime}\oplus\overline{k}db_{l(k)}\oplus\overline{k}dU/\\\
&\left(\sum_{j\neq
i,k}f_{j}dx_{j}+f_{u}db_{k}^{\prime}+f_{v}db_{l(k)}\right)\\\
&\cong\overline{k}dp\oplus\overline{k}dx_{1}\oplus\cdots\oplus\widehat{\overline{k}dx_{j}}\oplus\overline{k}dx_{N}\oplus\overline{k}db_{k}^{\prime}\oplus\overline{k}db_{l(k)}\oplus\overline{k}dU\\\
&\cong\mathfrak{n}/\mathfrak{n}^{2}\otimes_{k(\mathfrak{n})}\overline{k}\end{split}$
here the middle isomorphism follows because $f_{j}\neq 0$, and we define the
structure map $\text{Spec}\ \left(W[U^{\pm
1},x_{1},\ldots,\hat{x_{i}},\hat{x_{j}},\hat{x_{k}},\ldots,x_{N},b_{k}^{\prime},b_{l(k)}]\right)\rightarrow
\text{Spec}\ W[T,U^{\pm 1}]^{\prime}$ by $T\mapsto
NU^{-1}b_{k}^{\prime}\prod_{1\leq m\leq k-1,m\neq i}x_{m}$, so that the
morphism is a morphism of $W[T,U^{\pm 1}]^{\prime}$ schemes.
(2) If $f_{v}\neq 0$, then the same argument as in (1) gives that the map
$U_{k}\rightarrow\text{Spec}\ W[U^{\pm
1},x_{1},\ldots,\hat{x_{i}},\hat{x_{k}},\ldots,x_{N},b_{k}^{\prime}]$ is etale
near the point $q$. Again, define the structure map by $T\mapsto
NU^{-1}b_{k}^{\prime}\prod_{1\leq m\leq k-1,m\neq i}x_{m}$.
Note now that if we try to use a similar argument to "kill" the variable
$x_{j}$ for some $j<k$, then it is hard to define a structure map on the
target scheme because $x_{j}$ appears in the expression of $T$ in the original
$U_{k}$. Hence we use the following trick.
(3) If $a_{j}^{N}\neq(uv)^{N}$ for some $j<k$, then we consider the map
$U_{k}\rightarrow W[U^{\pm
1},x_{1},\ldots,\hat{x_{i}},\hat{x_{j}},\hat{x_{k}},\ldots,x_{N},\hat{b_{k}^{\prime}},b_{l(k)},c_{j}]$
where every variable maps to the corresponding one in the structure ring of
$U_{k}$ except $c_{j}\mapsto b_{k}^{\prime}x_{j}$, thus $dc_{j}\mapsto
udx_{j}+a_{j}db_{k}^{\prime}$ under the pullback of the map mentioned above.
The condition $a_{j}^{N}\neq(uv)^{N}$ gives that
$\begin{pmatrix}u&a_{j}\\\ f_{j}&f_{u}\end{pmatrix}$
is nondegenerate, and hence
$\begin{split}\mathfrak{m}/\mathfrak{m}^{2}\otimes_{k(\mathfrak{m})}\overline{k}&\cong\overline{k}dp\oplus\overline{k}dx_{1}\oplus\cdots\oplus\overline{k}dx_{N}\oplus\overline{k}db_{k}^{\prime}\oplus\overline{k}db_{l(k)}\oplus\overline{k}dU/\\\
&\left(\sum_{j\neq
i,k}f_{j}dx_{j}+f_{u}db_{k}^{\prime}+f_{v}db_{l(k)}\right)\\\
&\cong\overline{k}dp\oplus\overline{k}dx_{1}\oplus\cdots\oplus\widehat{\overline{k}dx_{j}}\oplus\overline{k}dx_{N}\oplus\widehat{\overline{k}db_{k}^{\prime}}\oplus\overline{k}db_{l(k)}\oplus\overline{k}dU\oplus\overline{k}(udx_{j}+a_{j}db_{k}^{\prime})\\\
&\cong\mathfrak{n}/\mathfrak{n}^{2}\otimes_{k(\mathfrak{n})}\overline{k}\end{split}$
where $\mathfrak{n}$ denotes the maximal ideal corresponding to the image of
$q$ under the map. And we may take the structure map $W[U^{\pm
1},x_{1},\ldots,\hat{x_{i}},\hat{x_{j}},\hat{x_{k}},\ldots,x_{N},\hat{b_{k}^{\prime}},b_{l(k)},c_{j}]\rightarrow
W[T,U^{\pm 1}]^{\prime}$ to be $T\mapsto NU^{-1}c_{j}\prod_{1\leq m\leq
k-1,m\neq i,j}x_{m}$. It can be checked that this map is a map of $W[T,U^{\pm
1}]^{\prime}$ schemes.
We conclude that this exhausts all possibilities if we impose the condition
that $q$ lies in the closed subscheme defined by $T$: If all of the three
conditions above do not hold, then $a_{j}^{N}=(uv)^{N}$ for all $j<k$,
$f_{j}=0$ for all $j\geq l(k)$ and $f_{v}=0$.
We claim for all $j>l(k)$, $a_{j}\neq 0$: If $a_{j}=0$ for some $j\geq l(k)$,
then checking the condition $f_{j^{\prime}}=0$ for all $j^{\prime}\geq l(k)$,
$j^{\prime}\neq j$, gives $a_{j}=0$ for all $j\geq l(k)$. Putting that into
$f_{v}=0$ gives $u=0$ or $v=0$. Putting that once more into the condition
$a_{j}^{N}=(uv)^{N}$ gives $a_{j}=0$ for all $j<k$. Thus, we see that all
$f_{j}$, $j\neq i,k$, $f_{u}$ and $f_{v}$ are $0$. This has been shown to be
impossible by the argument for regularity; hence we have proved the claim that
for all $j>l(k)$, $a_{j}\neq 0$.
Multiplying each $f_{j}=0$ by $a_{j}$ gives
$a_{l(k)}^{N}=\cdots=a_{N}^{N}=\frac{1}{N}v\prod_{l(k)\leq s\leq N,s\neq
i}a_{s}$, and thus $v\neq 0$. Hence it also follows from $f_{v}=0$ that $u\neq
0$. The condition $a_{j}^{N}=(uv)^{N}$ then gives $a_{j}\neq 0$ for all $j<k$.
Therefore, $T=NU^{-1}b_{k}^{\prime}\prod_{1\leq m\leq k-1,m\neq i}a_{m}$ is
not $0$ at $q$, because $u$ and the $a_{m}$ for $1\leq m\leq k-1,m\neq i$ are
all nonzero, contradicting the assumption that $q$ lies in the closed
subscheme defined by $T$.
A similar consideration shows that for characteristic $0$ points, the only
peculiarity can happen in the open locus $T\neq 0$.
Now over the locus $T\neq 0$, the blowdown map is an isomorphism, since the
locus we are blowing up along is always a codimension $1$ irreducible
subvariety. Hence it suffices to show that the original projective variety
$UT(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots X_{N}$ admits such
an etale map locally. Again we only work with the affine chart $X_{i}\neq 0$
and writing the intersection with this chart as the affine variety $U_{0}$
$\text{Spec}\ W[T,U^{\pm
1}]^{\prime}[x_{1},\ldots,\hat{x_{i}},\ldots,x_{N}]/(UT(1+x_{1}^{N}+\cdots+\hat{x_{i}^{N}}+\cdots+x_{N}^{N})-N\prod_{j\neq
i}x_{j})$
Take a geometric point $q$ with coordinates $x_{j}$ sent to $b_{j}$ in some
algebraically closed field. Denote the defining equation by $g$ and its
derivatives with respect to each variable $x_{j}$ evaluated at $q$ as $g_{j}$.
Similar to the argument in (1), the map $U_{0}\rightarrow\text{Spec}\
W[T,U^{\pm 1}]^{\prime}[x_{1},\ldots,\hat{x_{i}},\hat{x_{j}},\ldots,x_{N}]$ is
etale at $q$ if $g_{j}\neq 0$. Hence such a map (clearly over $W[T,U^{\pm
1}]^{\prime}$) exists as long as one of the $g_{j}\neq 0$.
If all $g_{j}=0$, then $UTb_{j}^{N-1}=\prod_{k\neq j,i}b_{k}$ for any $j\neq
i$. In other words, $b_{1}^{N}=\cdots=b_{N}^{N}=U^{-1}T^{-1}\prod_{j\neq
i}b_{j}$. Denote this constant by $C$. Also the defining equation gives us
$1+b_{1}^{N}+\cdots+\hat{b_{i}^{N}}+\cdots+b_{N}^{N}=NU^{-1}T^{-1}\prod_{j\neq
i}b_{j}$. Hence $1+(N-1)C=NC$ and it follows that $C=1$. Putting this back into
the equations gives that $UT=\zeta$ for some $N$-th root of unity $\zeta$. We
have excluded this from the base; thus we conclude that the local expression
over $W[T,U^{\pm 1}]^{\prime}$ exists near all points on $\mathbb{X}$.
∎
###### Remark 2.2.
Under the notation of the proof above, we can concretely describe the action
of $H_{0}$ on each $U_{k}$ as the following: $(\xi_{1},\ldots,\xi_{N})$ acts
by $x_{j}\mapsto\frac{\xi_{j}}{\xi_{i}}x_{j}$ for $1\leq j\leq N,j\neq i,k$,
$T\mapsto T$, $b_{k}^{\prime}\mapsto\left(\prod_{1\leq j\leq k-1,j\neq
i}\frac{\xi_{i}}{\xi_{j}}\right)b_{k}^{\prime}$ and
$b_{l(k)}\mapsto\left(\prod_{l(k)\leq j\leq N,j\neq
i}\frac{\xi_{i}}{\xi_{j}}\right)b_{l(k)}$.
Thus we may also make all the local etale maps given in the proof
$H_{0}$-equivariant, with the $H_{0}$ action on the target being the
corresponding multiplication on each coordinate.
We have seen that there is a variety, denoted by $\mathbb{X}$, that is a
blowup with $H_{0}$ action of the variety
$UT(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots X_{N}$ over
$\text{Spec}\ W[T,U^{\pm 1}]^{\prime}$ and is an isomorphism outside the
closed subscheme defined by $T$. $\mathbb{X}$ locally admits an etale map of
the form given in Lemma 2.1. Thus $\mathbb{X}$ further satisfies the property
that its base change along $W[T,U^{\pm
1}]^{\prime}\rightarrow\mathcal{O}_{F}$, $T\mapsto\pi,U\mapsto u$ is a
semistable (in the sense of Definition 1.3) model of its generic fibre, with
$H_{0}$ action. The generic fibre is the underlying motive of the $l$-adic
representation $V_{\lambda,(u\pi)^{-1}}$; this follows because the generic
fibre factors through the open locus of $\mathbb{X}$ where $T\neq 0$. We can
use this semistable model and take $Z=\mathbb{X}$ to prove Theorem 1.4 and
Proposition 1.5 in the case $d=1$.
Base changing the blowdown map we get from Lemma 2.1 along $W[T,U^{\pm
1}]^{\prime}\rightarrow W[R,U^{\pm 1}]^{\prime}$, $T\mapsto R^{d}$, where
$\text{Spec}\ W[R,U^{\pm 1}]^{\prime}$ is the open subscheme of $\text{Spec}\
W[R,U^{\pm 1}]$ given by $(R^{d}U)^{N}\neq 1$, we immediately get a blowup
$\mathbb{X}_{d}$ of the projective variety
$UR^{d}(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots X_{N}$ over
$\text{Spec}\ W[R,U^{\pm 1}]^{\prime}$ with an action of $H_{0}$, such that
the blowdown map is $H_{0}$-equivariant and $\mathbb{X}_{d}$ locally admits an
etale map to some
$W[R,U^{\pm 1}]^{\prime}[Z_{1},\ldots,Z_{r},Z_{r+1}^{\pm 1},\ldots,Z_{n}^{\pm
1}]/(UR^{d}-Z_{1}\cdots Z_{r})$
Note that to proceed now, we cannot simply base change $\mathbb{X}_{d}$ along
$W[R,U^{\pm 1}]^{\prime}\rightarrow\mathcal{O}_{F}$, $R\mapsto\pi,U\mapsto u$,
since then the model etale locally has the form
$\mathcal{O}_{F}[Z_{1},\ldots,Z_{r},Z_{r+1}^{\pm 1},\ldots,Z_{n}^{\pm
1}]/(u\pi^{d}-Z_{1}\cdots Z_{r})$
which is not semistable.
We state the theorem we would like to prove in this section below.
###### Theorem 2.3.
For any $t\in F$ such that $v(t)=d>0$, there exist a positive integer $e$ and
a totally ramified extension $F^{\prime}/F$ generated by $\pi^{1/e}$, a choice
of $e$-th root of a uniformizer $\pi$ of $\mathcal{O}_{F}$ (thus $F^{\prime}$
is totally ramified of degree $e$ over $F$ with uniformizer $\pi^{1/e}$),
together with a semistable model $\mathcal{Y}$ over
$\mathcal{O}_{F^{\prime}}$ of $Z_{t^{-1}}$ (defined in the first page) with
compatible $H_{0}$ action.
The idea of the proof goes as follows. Since the $\mathbb{X}_{d}$ above is a
blowup of the projective variety
$UR^{d}(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots X_{N}$ over
$\text{Spec}\ W[R,U^{\pm 1}]^{\prime}$, its base change along $W[R,U^{\pm
1}]^{\prime}\rightarrow F$, $R\mapsto\pi,U\mapsto u$ is isomorphic to the
underlying motive of $V_{\lambda,t^{-1}}$ where $t=u\pi^{d}$. We will use the
theory of toroidal embeddings to construct a blowup $Z$ of $\mathbb{X}_{de}$
for some $e$ along some locus contained in the closed subscheme defined by
$S$, where $\mathbb{X}_{de}$ is the base change of $\mathbb{X}_{d}$ along
$W[R,U^{\pm 1}]^{\prime}\rightarrow W[S,U^{\pm 1}]^{\prime}$, $R\mapsto
S^{e}$ (again $\text{Spec}\ W[S,U^{\pm 1}]^{\prime}$ is defined by the
condition $(S^{de}U)^{N}\neq 1$ in $\text{Spec}\ W[S,U^{\pm 1}]$ as mentioned
in the introduction). We want our $Z$ to admit an etale morphism to
(2.1) $W[S,U^{\pm 1},Z_{1},\ldots,Z_{r},Z_{r+1}^{\pm 1},\ldots,Z_{n}^{\pm
1}]/(US-Z_{1}\cdots Z_{r})$
Zariski locally so that we see from construction that its base change along
$W[S,U^{\pm 1}]\rightarrow\mathcal{O}_{F^{\prime}}$,
$S\mapsto\pi^{1/e},U\mapsto u$ gives our desired $\mathcal{Y}$.
We will use the theory of toroidal embeddings in the mixed characteristic case
from [KKMSD73b] to reduce the construction to a combinatorial problem of
subdividing conical complexes, which is also resolved in [Knu73]. So we will
first sketch the main ideas and notation from [KKMSD73a], [KKMSD73c], [Knu73]
and [KKMSD73b].
Let $\eta$ be the generic point of $\text{Spec}\ W$; we will work over the
base $\text{Spec}\ W$. Let $T$ be a split torus over $W$ and $T_{\eta}$ its
generic fibre. A torus embedding is an irreducible normal variety $X$ of
finite type over $\text{Spec}\ W$ with an inclusion $T_{\eta}\hookrightarrow
X$, together with an action of $T$ on $X$ extending the action of $T$ on
$T_{\eta}$.
Let $M(T),N(T)$ be the character and cocharacter group of $T$. Let
$\widetilde{M}(T)$ (resp. $\widetilde{N}(T)$) be $\mathbb{Z}\times M(T)$
(resp. $\mathbb{Z}\times N(T)$), and let $N_{\mathbb{R}}(T)=N(T)\otimes\mathbb{R}$,
$\widetilde{N}_{\mathbb{R}}(T)=\widetilde{N}(T)\otimes\mathbb{R}$. As in the
usual case over a field, affine torus embeddings all come from cones $\sigma$
in $\mathbb{R}_{\geq 0}\times N_{\mathbb{R}}(T)$ not containing any nonzero
linear subspace. The association is the following:
$X_{\sigma}=\text{Spec}\
W[\cdots,\pi^{k}\cdot\mathfrak{X}^{\alpha},\cdots]_{(k,\alpha)\in\widetilde{M}(T),\
\ \langle(k,\alpha),v\rangle\geq 0\ \forall v\in\sigma}$
with the orbits of $T_{\eta}$ in $X_{\eta}$ corresponding to the faces of
$\sigma$ contained in $0\times N_{\mathbb{R}}(T)$ and the orbits of $T_{0}$ in
$X_{0}$ corresponding to the faces of $\sigma$ meeting $\mathbb{R}_{>0}\times
N_{\mathbb{R}}(T)$. We will mostly deal with the simple case where $\sigma$
has the form $\mathbb{R}_{\geq 0}\times\sigma^{\prime}$ for some cone
$\sigma^{\prime}\subset N_{\mathbb{R}}(T)$. In that case the orbits of
$T_{\eta}$ in $X_{\eta}$ correspond naturally to the orbits of $T_{0}$ in
$X_{0}$ via specialization.
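As a minimal illustration (an illustrative special case, with coordinates
chosen to match the local models appearing below): take $T$ split of rank $n$
with cocharacter basis $f_{1},\ldots,f_{n}$, write $f_{0}$ for the generator
of the extra $\mathbb{R}_{\geq 0}$ factor, and let
$\sigma=\mathbb{R}_{\geq 0}f_{0}+\mathbb{R}_{\geq 0}f_{1}+\cdots+\mathbb{R}_{\geq 0}f_{r}$.
A pair $(k,\alpha)$ pairs nonnegatively with $\sigma$ exactly when $k\geq 0$
and $\langle\alpha,f_{i}\rangle\geq 0$ for $1\leq i\leq r$, so the formula
above gives
$X_{\sigma}=\text{Spec}\ W[Z_{1},\ldots,Z_{r},Z_{r+1}^{\pm 1},\ldots,Z_{n}^{\pm 1}],\quad Z_{i}=\mathfrak{X}^{e_{i}}$
for the dual basis $e_{i}$ of $M(T)$. This is exactly the shape of the local
models occurring in this section (there the ambient torus carries one defining
relation), and since $\sigma$ is of product form
$\mathbb{R}_{\geq 0}\times\sigma^{\prime}$, the orbit correspondence via
specialization can be read off directly.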
The above is the local picture. We will call an irreducible normal scheme $X$
of finite type over $W$ together with an open $U\subset X_{\eta}$ a toroidal
embedding if, Zariski locally near each point $x$, there exists an etale
morphism $\rho$ from $(U,X)$ to some torus embedding
$(T_{\eta},Z_{\rho(x)})$ as pairs of schemes. This gives a global
stratification of $X-U$ as follows. The existence of local models gives that
$X-U=\cup_{i\in I}E_{i}$ with each $E_{i}$ a normal irreducible subscheme (so
there is no self-intersection for these $E_{i}$) of codimension $1$. The
strata are defined as the connected components of $\cap_{i\in
J}E_{i}-\cup_{i\notin J}E_{i}$ for the various subsets $J\subset I$; for
$J=\varnothing$, we take the stratum to be $U$. Under the etale maps to the
torus embedding models $Z_{\rho(x)}$, these strata admit etale maps into the
strata of the local models $Z_{\rho(x)}$, hence the original strata are
regular because those of $Z_{\rho(x)}$ are, by the explicit description. The
point of this definition is that we may assign to $X$ a canonical topological
space $\Delta$ that is the union of the cones obtained locally from the
models. The cones are glued together via inclusions, contravariantly with
respect to the specialization of strata. The canonical topological space
$\Delta$ reflects many properties of $X$. The precise definition of the
conical complex $\Delta$ is on page 196 of [KKMSD73b]. It has the structure of
a topological space, and each cone in it admits an injection into some
$\mathbb{R}^{m}$ with integral structure. We stress that the main idea is that
the character group $M(T)$ of the local model can be canonically defined by
our toroidal embedding $(U,X)$ as the group of Cartier divisors on
$\mathrm{Star}\,Y$ supported in $\mathrm{Star}\,Y-U$ for a stratum $Y$ (Star
means the union of all the strata that specialize to it), and the cone can be
canonically described as the dual cone of the effective Cartier divisors.
Here is the crucial property we use to reduce our construction to the problem
of subdividing a complex: any subdivision $\Delta^{\prime}$ of $\Delta$
determines a morphism $(U,X^{\prime})\rightarrow(U,X)$ such that
$(U,X^{\prime})$ is a toroidal embedding with associated complex
$\Delta^{\prime}$, and such that the associated map of complexes
$\Delta^{\prime}\rightarrow\Delta$ is the natural inclusion map. Indeed, the
map is the blowup along an ideal sheaf on $X$ supported in $X-U$, completely
determined by the subdivision $\Delta^{\prime}$.
We also have an interpretation, in terms of the complex $\Delta$, of the
properties that the toroidal embedding $X$ is regular and that a given Cartier
divisor is a sum of irreducible Weil divisors without repetition. In the case
over a field, for a Cartier divisor $S$, by restricting to a Cartier divisor
on $\mathrm{Star}\,Y$ we can view it as an element of the character group $M$
associated to this stratum, and hence as a function on the cone $\sigma$
associated to this stratum. Compatibility gives us that $S$ induces a globally
defined function on $\Delta$, still denoted by $S$. Assume the hyperplane
$S=1$ in $\Delta$ defined by this function meets every nonzero face of
$\Delta$; then $X$ is regular and $S$ vanishes to order $1$ on all irreducible
components of $X-U$ if and only if the intersection of $S=1$ with any face
$\tau_{\alpha}$ of $\Delta$ has vertices with integral coordinates and the
volume of this intersection equals $1/d_{\alpha}!$ for $d_{\alpha}$ the
dimension of $\tau_{\alpha}$. We will use an argument similar to Mumford's
proof of this fact to prove an analogous result in the mixed characteristic
case. It essentially reduces to the same proof after adjoining the extra
$\mathbb{R}$ factor.
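As a sanity check on this criterion (an illustrative sketch with hypothetical
local coordinates, in the field case): suppose a stratum has cone $\sigma$
generated by vectors $f_{1},\ldots,f_{m}$ forming part of a basis of the
lattice $N$, and suppose the function $S$ takes the value $1$ on each $f_{i}$.
In the corresponding local model
$\text{Spec}\ k[Z_{1},\ldots,Z_{m},Z_{m+1}^{\pm 1},\ldots]$ one has
$S=Z_{1}\cdots Z_{m}$, so the model is regular and $S$ vanishes to order $1$
along each component $\{Z_{i}=0\}$; correspondingly, the slice
$\{S=1\}\cap\sigma$ is the simplex with vertices $f_{1},\ldots,f_{m}$, whose
vertices are integral and whose volume is the minimal one allowed by the
integral structure. If instead $S$ took the value $c>1$ on $f_{1}$, then
locally $S=Z_{1}^{c}Z_{2}\cdots Z_{m}$, which vanishes to order $c$ along
$\{Z_{1}=0\}$, and the slice acquires the non-integral vertex
$\frac{1}{c}f_{1}$.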
###### Proof.
To apply the theory of toroidal embeddings to get a blowup of $\mathbb{X}_{d}$
or $\mathbb{X}_{de}$, we first need to check that $\mathbb{X}_{d}$ and
$\mathbb{X}_{de}$ have the structure of toroidal embeddings. Our setting is as
follows: let $U_{d}\subset\mathbb{X}_{d,\eta}$ be the open subvariety defined
by $R\neq 0$. We know $\mathbb{X}_{d}$ locally admits an etale map to
(2.2) $W[R,U^{\pm 1}]^{\prime}[Z_{1},\ldots,Z_{r},Z_{r+1}^{\pm
1},\ldots,Z_{n}^{\pm 1}]/(UR^{d}-Z_{1}\cdots Z_{r})$
which is an affine torus embedding of the generic torus $T_{\eta}$ of the
split torus $T$ of the form
$W[R^{\pm 1},U^{\pm 1},Z_{1}^{\pm 1},\ldots,Z_{r}^{\pm 1},Z_{r+1}^{\pm
1},\ldots,Z_{n}^{\pm 1}]/(UR^{d}-Z_{1}\cdots Z_{r})$
with $T$ action given by multiplying the corresponding coordinates; clearly
$U_{d}$ is the preimage of the generic torus, both being defined by $R\neq 0$.
Let $e_{i}$ denote the character in $X^{\ast}(T)$ given by the regular
function $Z_{i}$, let $e_{u}$ denote the character given by the regular
function $U$, and define $f_{i}$ and $f_{u}$ in $X_{\ast}(T)$ similarly. Then
the lattice $\widetilde{M}(T)$ is identified with
$\mathbb{Z}e_{1}+\ldots+\mathbb{Z}e_{n}+\mathbb{Z}e_{u}+\mathbb{Z}e_{0}+\mathbb{Z}\frac{1}{d}(-e_{u}+\sum_{j=1}^{r}e_{j}),$
where $e_{0}$ corresponds to the extra factor of $\mathbb{Z}$ as on page 191
of [KKMSD73b]. $\widetilde{N}_{\mathbb{R}}(T)$ is spanned by
$f_{1},\ldots,f_{n},f_{u},f_{0}$, and the cone $\sigma$ giving the above
toroidal embedding is just $\mathbb{R}_{\geq 0}f_{1}+\cdots+\mathbb{R}_{\geq
0}f_{r}+\mathbb{R}_{\geq 0}f_{0}$. The structure of toroidal embedding on
$\mathbb{X}_{de}$ is defined similarly, with the open set $U_{de}$ given by
$S\neq 0$.
We now give a description of the local picture and of the relationship
between the complexes associated to $\mathbb{X}_{d}$ and $\mathbb{X}_{de}$.
Consider the stratification on $X$ mentioned before the proof, as on page 195
of [KKMSD73b]. It suffices to look at a stratum $Y_{d}$ of $\mathbb{X}_{d}$
supported in the closed subscheme defined by $\pi$: for any stratum
$Y_{d}^{\prime}$ not supported in $(\pi)$ we may intersect its closure with
the closed subscheme defined by $\pi$ and obtain a stratum $Y_{d}$ which is a
specialisation of $Y_{d}^{\prime}$, and $\mathrm{Star}\,Y_{d}^{\prime}$
naturally injects into $\mathrm{Star}\,Y_{d}$. We may thus suppose that
$\mathrm{Star}\,Y_{d}$ locally admits an etale map to a scheme of the form
2.2. We have a bijective correspondence between the strata of the varieties
$\mathbb{X}_{d}$ and $\mathbb{X}_{de}$ because the fibres over $R=0$ and $S=0$
are isomorphic under pullback. Denote the stratum corresponding to $Y_{d}$ by
$Y_{de}$. A local presentation similar to 2.2 holds for the strata $Y_{de}$ of
$\mathbb{X}_{de}$. We have the intrinsically defined
$\widetilde{M}^{Y_{d}},\widetilde{M}^{Y_{de}},\widetilde{N}^{Y_{d}},\widetilde{N}^{Y_{de}},\widetilde{\sigma}^{Y_{d}},\widetilde{\sigma}^{Y_{de}}$
as on page 196 of [KKMSD73b] (their notation, without the tilde). All the
$\widetilde{\sigma}^{Y_{d}}$ glue into a conical complex
$\widetilde{\Delta}_{d}$ and all the $\widetilde{\sigma}^{Y_{de}}$ glue into a
conical complex $\widetilde{\Delta}_{de}$. Note that we have globally
compatible decompositions $\widetilde{M}^{Y_{d}}=\mathbb{Z}\oplus M^{Y_{d}}$,
$\widetilde{M}^{Y_{de}}=\mathbb{Z}\oplus M^{Y_{de}}$, with the first factor
given by $(\pi)$ and the second factor generated by the irreducible Cartier
divisors supported in $\mathrm{Star}\,Y_{d}-U_{d}$ (resp.
$\mathrm{Star}\,Y_{de}-U_{de}$) but not supported in $(\pi)$. Similarly we
have the decompositions $\widetilde{N}^{Y_{d}}=\mathbb{Z}\oplus N^{Y_{d}}$,
$\widetilde{N}^{Y_{de}}=\mathbb{Z}\oplus N^{Y_{de}}$,
$\widetilde{\sigma}^{Y_{d}}=\mathbb{R}_{\geq 0}\oplus\sigma^{Y_{d}}$,
$\widetilde{\sigma}^{Y_{de}}=\mathbb{R}_{\geq 0}\oplus\sigma^{Y_{de}}$,
$\widetilde{\Delta}_{de}=\mathbb{R}_{\geq 0}\oplus\Delta_{de}$,
$\widetilde{\Delta}_{d}=\mathbb{R}_{\geq 0}\oplus\Delta_{d}$. Indeed, if
$Y_{d}$ is a connected component of $(\pi)\cap\bigcap_{i\in
I}E_{i}\backslash\bigcup_{i\notin I}E_{i}$, with $I$ a subset of the
irreducible divisors not supported in $(\pi)$, and if $Y_{d,\eta}$ is the
(unique, by the local form) stratum of the generic fibre $X_{\eta}$ in
$\bigcap_{i\in I}E_{i}\backslash\left(\bigcup_{i\notin
I}E_{i}\cup(\pi)\right)$ that specializes to $Y_{d}$, then
$M^{Y_{d}},M^{Y_{de}},N^{Y_{d}},N^{Y_{de}},\sigma^{Y_{d}},\sigma^{Y_{de}},\Delta_{d},\Delta_{de}$
are the corresponding lattices, cones and conical complexes associated to the
stratum $Y_{d,\eta}$ of the (field case) toroidal embedding
$U_{d}\hookrightarrow\mathbb{X}_{d,\eta}$, defined as on page 59 of
[KKMSD73c].
We now use the subdivision given in [Knu73] to construct a blowup of
$\mathbb{X}_{de}$. Since $S=R^{1/e}$ is globally defined, it gives a Cartier
divisor on $\mathbb{X}_{de}$ and hence, by the process described in the last
paragraph before the proof, it lies in each $M^{Y_{de}}$ compatibly under
restriction of Cartier divisors. Hence it defines a function on each cone
$\widetilde{\sigma}^{Y_{de}}$ compatibly, and hence a function on
$\widetilde{\Delta}_{de}$. Denote this function by $l_{de}$; it defines a
closed subset
$\widetilde{\Delta}_{de}^{\ast}=\{x\in\widetilde{\Delta}_{de}\mid l_{de}(x)=1\}.$
Clearly $\widetilde{\Delta}_{de}^{\ast}=\mathbb{R}_{\geq
0}\oplus\Delta_{de}^{\ast}$, where $\Delta_{de}^{\ast}\subset\Delta_{de}$ is
also defined by $l_{de}=1$; it is a compact convex polyhedral set because the
hyperplane $l_{de}=1$ meets every positive dimensional face of $\Delta_{de}$.
$\widetilde{\Delta}_{de}^{\ast}$ and $\Delta_{de}^{\ast}$ have integral
structures given by the various lattices $\widetilde{N}^{Y_{de}}$ and
$N^{Y_{de}}$. The upshot of the discussion on pages 105-108 of [KKMSD73c] and
of [Knu73] is that we may find an $e$ and a subdivision $\{\sigma_{\alpha}\}$
of $\Delta_{de}^{\ast}$ into convex polyhedral sets such that all vertices of
all the $\sigma_{\alpha}$ lie in $(\Delta_{de}^{\ast})_{\mathbb{Z}}$, the
lattice given by the integral structure mentioned above, and such that the
volume (also given by the integral structure, as on page 95 of [KKMSD73c]) of
each $\sigma_{\alpha}$ is $1/(d_{\alpha})!$, where $d_{\alpha}$ is the
dimension of $\sigma_{\alpha}$. Adjoining the origin to the vertices gives a
conical decomposition $\{\tau_{\alpha}\}$ of $\Delta_{de}$ associated to
$\{\sigma_{\alpha}\}$. Now by part (d) of page 197 of [KKMSD73b], if we let
$\widetilde{\Delta}^{\prime}_{de}$ be the subdivided complex associated to the
f.r.p.p. decomposition $\{\mathbb{R}_{\geq 0}\oplus\tau_{\alpha}\}$ of
$\widetilde{\Delta}_{de}$, it gives a toroidal embedding $(U_{de},Z)$ with a
map $Z\rightarrow\mathbb{X}_{de}$ respecting the inclusion of $U_{de}$.
Moreover, the associated map of conical complexes is just
$\widetilde{\Delta}^{\prime}_{de}\rightarrow\widetilde{\Delta}_{de}$, and the
integral structure is preserved.
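To see the mechanism of this subdivision in the smallest case, recall the
familiar one-parameter picture (a standard example, stated over a field for
simplicity; it is not the case at hand but exhibits the same combinatorics).
For the model $Z_{1}Z_{2}=S^{m}$, the relevant cone is spanned by
$v_{1}=(1,0)$ and $v_{2}=(1,m)$ in $N=\mathbb{Z}^{2}$, the function $l$ is the
first coordinate, and the slice $\{l=1\}$ is the segment from $v_{1}$ to
$v_{2}$, of lattice length $m$. Subdividing at the $m-1$ interior lattice
points $(1,1),\ldots,(1,m-1)$ yields unit segments of volume $1/1!$, each of
whose cones is spanned by a lattice basis; the associated blowup is the usual
semistable resolution, whose fibre over $S=0$ is a reduced chain of
components. The subdivision $\{\sigma_{\alpha}\}$ of $\Delta_{de}^{\ast}$
plays exactly this role in higher dimension.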
We may equip $(U_{de},Z)$ with an $H_{0}$ action such that the map
$Z\rightarrow\mathbb{X}_{de}$ is $H_{0}$-equivariant, for the following
reason: part (e) on page 198 of [KKMSD73b] gives that the map we constructed
via the subdivision $\widetilde{\Delta}_{de}^{\prime}$ of the conical complex
is a blowup along an ideal sheaf $\mathcal{I}$ over $\mathbb{X}_{de}$. The
ideal $\mathcal{I}$ over $\mathrm{Star}\,Y_{de}$ can be written as the sum of
the $\mathcal{O}_{\mathbb{X}_{de}}(-D)$ for a collection of Cartier divisors
$D\in\widetilde{M}^{Y_{de}}$ satisfying a certain condition determined by the
subdivision of the cone $\widetilde{\sigma}^{Y_{de}}$. Now we see from the
local form of $\mathbb{X}_{de}$ that the $H_{0}$ action fixes every Cartier
divisor $D\in\widetilde{M}^{Y_{de}}$ (see Remark 2.2), and hence preserves the
sum of these $\mathcal{O}_{\mathbb{X}_{de}}(-D)$. The asserted
$H_{0}$-equivariance thus follows.
We now use the information on the evaluation of $S$ on
$\widetilde{\Delta}^{\prime}_{de}$ to verify the criterion for regularity and
for the divisor $S$ to vanish to order $1$ along every irreducible Weil
divisor, as well as to give a local model over $W[S,U^{\pm 1}]^{\prime}$. By
the two conditions on each $\sigma_{\alpha}$, we see that each $\tau_{\alpha}$
is generated as a cone by a set of vectors $\{f_{\alpha,i}\}_{1\leq i\leq
d_{\alpha}+1}\subset(\sigma_{\alpha})_{\mathbb{Z}}\subset(\tau_{\alpha})_{\mathbb{Z}}$
given by the vertices of $\sigma_{\alpha}$, and by Lemma 4 on page 96 of
[KKMSD73c], if we denote the linear subspace generated by $\tau_{\alpha}$ by
$(\tau_{\alpha})_{\mathbb{R}}$, then $\{f_{\alpha,i}\}_{1\leq i\leq
d_{\alpha}+1}$ is a basis for the lattice of $(\tau_{\alpha})_{\mathbb{R}}$
given by the restriction of the integral structure $N^{Y_{de}}$ for some
$Y_{de}$ such that $\tau_{\alpha}\subset(N^{Y_{de}})_{\mathbb{R}}$. Thus it
can be extended to a basis $\{f_{\alpha,i}\}_{1\leq i\leq n}$ of $N^{Y_{de}}$.
By changing the corresponding dual basis as well, we write the torus
$T_{\eta}$ in the local model of the stratum corresponding to the cone
$\mathbb{R}_{\geq 0}\oplus\tau_{\alpha}$ as $W[\frac{1}{p}][T_{1}^{\pm
1},\ldots,T_{n}^{\pm 1}]$, with the $T_{i}\in X^{\ast}(T)$ being the dual
basis of the $f_{\alpha,i}$. Since the cone $\mathbb{R}_{\geq
0}\oplus\tau_{\alpha}$ is now generated by $\mathbb{R}_{\geq
0}f_{0}+\sum_{i=1}^{d_{\alpha}+1}\mathbb{R}_{\geq 0}f_{\alpha,i}$, the local
model is, by the description of $X_{\sigma}$ above, isomorphic to
$W[T_{1},\ldots,T_{d_{\alpha}+1},T_{d_{\alpha}+2}^{\pm 1},\ldots,T_{n}^{\pm
1}]$ and hence smooth over $W$. Now the globally defined $U\in X^{\ast}(T)$ is
a unit in the coordinate ring
$W[T_{1},\ldots,T_{d_{\alpha}+1},T_{d_{\alpha}+2}^{\pm 1},\ldots,T_{n}^{\pm
1}]$, hence $U$ can be written as $T_{d_{\alpha}+2}^{c_{d_{\alpha}+2}}\cdots
T_{n}^{c_{n}}$. We also know that $U$ is not divisible in the finite free
abelian group $X^{\ast}(T)$, as this only involves the integral structure and
hence can be checked over $\mathbb{X}_{de}$, where an explicit local model
gives the result. It follows that $c_{d_{\alpha}+2},\ldots,c_{n}$ have no
nontrivial common divisor, and hence we can change coordinates and write the
coordinate ring as
$W[T_{1},\ldots,T_{d_{\alpha}+1},T_{d_{\alpha}+2}^{\prime\pm
1},\ldots,T_{n-1}^{\prime\pm 1},U^{\pm 1}]$ over $W[U^{\pm 1}]$. Furthermore,
since $S\in\widetilde{M}^{Y_{de}}$ evaluates to $1$ at each $f_{\alpha,i}$,
$1\leq i\leq d_{\alpha}+1$, and to $0$ at the basis vector of the first copy
of $\mathbb{R}_{\geq 0}$, we see that
$S=\prod_{i=1}^{d_{\alpha}+1}T_{i}\cdot\prod_{j=d_{\alpha}+2}^{n-1}T_{j}^{\prime d_{j}}\cdot U^{s}.$
Hence over $W[S,U^{\pm 1}]$ we may change coordinates by
$T_{1}^{\prime}\mapsto T_{1}\prod_{j=d_{\alpha}+2}^{n-1}T_{j}^{\prime
d_{j}}\cdot U^{s+1}$, $T_{i}^{\prime}\mapsto T_{i}$ for $2\leq i\leq
d_{\alpha}+1$, and $T_{j}^{\prime}\mapsto T_{j}^{\prime}$ for
$d_{\alpha}+2\leq j\leq n-1$, and we find that locally $Z$ admits an etale map
to
$W[S,U^{\pm
1},T_{1}^{\prime},\ldots,T_{d_{\alpha}+1}^{\prime},T_{d_{\alpha}+2}^{\prime\pm
1},\ldots,T_{n-1}^{\prime\pm
1}]/(SU-\prod_{i=1}^{d_{\alpha}+1}T_{i}^{\prime})$ over $W[S,U^{\pm 1}]$.
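For the record, one checks directly that the displayed change of variables
does what is claimed: since $T_{i}^{\prime}=T_{i}$ for $2\leq i\leq
d_{\alpha}+1$,
$\prod_{i=1}^{d_{\alpha}+1}T_{i}^{\prime}=\Bigl(T_{1}\prod_{j=d_{\alpha}+2}^{n-1}T_{j}^{\prime d_{j}}\cdot U^{s+1}\Bigr)\prod_{i=2}^{d_{\alpha}+1}T_{i}=\Bigl(\prod_{i=1}^{d_{\alpha}+1}T_{i}\cdot\prod_{j=d_{\alpha}+2}^{n-1}T_{j}^{\prime d_{j}}\cdot U^{s}\Bigr)U=SU,$
which is the relation $SU=\prod_{i}T_{i}^{\prime}$ in the displayed local
model.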
We deduce from the local description above that the base change of $Z$ over
$W[S,U^{\pm 1}]^{\prime}$ along $W[S,U^{\pm
1}]^{\prime}\rightarrow\mathcal{O}_{F^{\prime}}:\ S\mapsto\pi^{1/e},U\mapsto
u$, which we denote by $\mathcal{Y}$, admits an etale map to
$\mathcal{O}_{F^{\prime}}[T^{\prime}_{1},\ldots,T^{\prime}_{d_{\alpha}+1},T_{d_{\alpha}+2}^{\prime\pm
1},\ldots,T_{n-1}^{\prime\pm
1}]/(\pi^{1/e}u-\prod_{i=1}^{d_{\alpha}+1}T_{i}^{\prime})$. Since $\pi^{1/e}$
is a uniformizer of $\mathcal{O}_{F^{\prime}}$, we see that $\mathcal{Y}$ is
regular and semistable over $\mathcal{O}_{F^{\prime}}$. We may further
identify its generic fibre, compatibly with the $H_{0}$ action, with the
projective variety over $F^{\prime}$ defined by the equation
$u\pi^{d}(X_{1}^{N}+\cdots+X_{N}^{N})=NX_{1}\cdots X_{N},$
because the generic fibre of $\mathcal{Y}$ is also the base change of
$Z_{\eta}$ along $W[\frac{1}{p}][S,U^{\pm 1}]\rightarrow F^{\prime}:\
S\mapsto\pi^{1/e},U\mapsto u$. The map
$Z_{\eta}\rightarrow\mathbb{X}_{de,\eta}$ is an isomorphism outside $S=0$, and
hence the above base change of $Z$ is isomorphic to the fibre over
$S=\pi^{1/e}$ of $\mathbb{X}_{de,\eta}$, which has the above form by its
definition and the corresponding property of $\mathbb{X}_{d,\eta}$. The
$H_{0}$ compatibility follows from the $H_{0}$ equivariance of the map
$Z\rightarrow\mathbb{X}_{de}$.
∎
We now analyze the special fibre of $\mathcal{Y}$ because we want to apply
Hyodo-Kato’s semistable comparison theorem.
###### Remark 2.4.
Denote by $\mathcal{N}$ the canonical log structure on $\mathcal{Y}$ given by
the Cartier divisor $(\pi^{1/e})$, by
$(\overline{\mathcal{Y}},\overline{\mathcal{N}})$ the special fibre with the
pullback log structure, and by $M$ the log structure on $Z$ given by the
divisor $(S)$. Picking $u^{\prime}\in W^{\times}$ with the same reduction as
$u$ in $k^{\times}$, we denote by $(Y,N)$ the base change of $Z$, with the log
structure pulled back from $M$, along $W[S,U^{\pm 1}]^{\prime}\rightarrow
W[T]^{\prime}:\ U\mapsto u^{\prime},\ S\mapsto T$ (here $\text{Spec}\
W[T]^{\prime}$ is the open subscheme of $\text{Spec}\ W[T]$ defined by
$(T^{de}u^{\prime})^{N}\neq 1$), and by
$(\overline{Y_{1}},\overline{N_{1}})$ the further base change along
$W[T]\rightarrow k$, $T\mapsto 0$, with the pullback log structure. Then, as
log schemes over $k$ with $H_{0}$ action,
$(\overline{\mathcal{Y}},\overline{\mathcal{N}})\cong(\overline{Y_{1}},\overline{N_{1}})$
(because both are base changes of $(Z,M)$ along $W[S,U^{\pm
1}]^{\prime}\rightarrow k:$ $S\mapsto 0,U\mapsto\overline{u}$), and $Y$ is a
blowup of the projective variety
$u^{\prime}T^{de}(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots
X_{N}$ over $\text{Spec}\ W[T]^{\prime}$ with equivariant $H_{0}$-action.
## 3\. Comparison Theorem of Log Geometry
We start by introducing some notation that we will use in log geometry, and
then we will specialize to the case we work in. We will denote a general log
scheme by $(X,M)$, where $X$ is a usual scheme and $M$ denotes the log
structure. When $M$ is a monoid, it is to be interpreted as a chart for this
log structure. In particular, let $(X,\mathbb{N},1\mapsto f)$ be the log
scheme determined by the chart $\mathbb{N}$ with $1$ sent to
$f\in\mathcal{O}_{X}(X)$ under the log structure map. And let $(X,(0))$ denote
the log scheme $X$ with trivial log structure. When there is only one log
structure mentioned on a scheme $X$, we will simply use $X$ to refer to the
log scheme $(X,M)$.
We also record the following lemma from [Ogu18] Chapter 4 Proposition 1.3.1:
###### Lemma 3.1.
Let $g\colon(X^{\prime},M^{\prime})\rightarrow(X,M)$,
$f\colon(X,M)\rightarrow(Y,N)$,
$f^{\prime}\colon(X^{\prime},M^{\prime})\rightarrow(Y^{\prime},N^{\prime})$
and $h\colon(Y^{\prime},N^{\prime})\rightarrow(Y,N)$ form a Cartesian diagram
of log schemes. Then we have a natural isomorphism
$g^{\ast}\omega^{1}_{(X,M)/(Y,N)}\cong\omega^{1}_{(X^{\prime},M^{\prime})/(Y^{\prime},N^{\prime})}$
We will work with $(Y,N)$ being a log smooth scheme over
$(W[T]^{\prime},\mathbb{N},1\mapsto T)$, where $\text{Spec}\ W[T]^{\prime}$ is
a fixed choice of affine open subscheme of $\text{Spec}\ W[T]$ such that the
closed subscheme defined by $T=0$ is contained in $\text{Spec}\
W[T]^{\prime}$. In particular, we will apply the theory to the $(Y,N)$ we
obtained in Remark 2.4 of the last section, over the particular base
$(W[T]^{\prime},\mathbb{N},1\mapsto T)$ (the notation is the same). Denote by
$(Y_{n},N_{n})$ (resp. $(\overline{Y_{n}},\overline{N_{n}})$) the base change
of $(Y,N)$ along $(W_{n}[T]^{\prime},\mathbb{N},1\mapsto
T)\rightarrow(W[T]^{\prime},\mathbb{N},1\mapsto T)$ (resp.
$(W_{n},\mathbb{N},1\mapsto 0)\rightarrow(W[T]^{\prime},\mathbb{N},1\mapsto
T)$). Let $(\overline{Y},\overline{N})$ be the base change of $(Y,N)$ along
$(W,\mathbb{N},1\mapsto 0)\rightarrow(W[T]^{\prime},\mathbb{N},1\mapsto T)$.
Note that the closed immersions in the corresponding fibre diagram are all
exact and the projection maps to the base are all log smooth.
For a base 4-tuple $(S,L,I,\gamma)$, where $S$ is a scheme with
$\mathcal{O}_{S}$ killed by some power $p^{n}$, $L$ is a fine log structure on
it (see [HK94] (2.6)), $I$ is a quasi-coherent ideal on $S$ and $\gamma$ is a
PD structure on $I$, and for a fine log scheme $(X,M)$ over $(S,L)$ such that
$\gamma$ extends to $X$, we may define the crystalline site and crystalline
cohomology as in [HK94] (2.15).
Take $(X,M)=(\overline{Y_{1}},\overline{N_{1}})$ and
$(S,L)=(W_{n},\mathbb{N},1\mapsto 0)$ with the ideal $I=(p)$ and its usual PD
structure; [HK94] define the $i$-th log crystalline cohomology of
$(\overline{Y_{1}},\overline{N_{1}})$ as the limit
$\varprojlim_{n}H^{i}(((\overline{Y_{1}},\overline{N_{1}})/(W_{n},\mathbb{N},1\mapsto
0))_{crys},\mathcal{O}_{\overline{Y_{1}}/W_{n}})[\frac{1}{p}]$
and they also equip it with a nilpotent operator $N$ that agrees with the $N$
given by $p$-adic Hodge theory under the comparison isomorphism.
###### Theorem 3.2.
The above limit is isomorphic to
$\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)})[\frac{1}{p}]$, while the operator $N$ is given as the degree $i$ boundary
homomorphism of the long exact sequence given by the following exact triangle
$\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}[-1]\xrightarrow{\cdot d\log 1}\mathrm{dR}_{\overline{Y}/(W,(0))}\rightarrow\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}$
which is defined in each degree $i$ by taking the $i$-th wedge power of the
following locally split exact sequence of locally free sheaves of modules
$0\rightarrow\mathcal{O}_{\overline{Y}}\xrightarrow{\cdot d\log 1}\omega^{1}_{\overline{Y}/(W,(0))}\rightarrow\omega^{1}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}\rightarrow 0$
by Theorem 3.2.3 of [Ogu18], since we note that
$(\overline{Y},\overline{N})$ is log smooth over $(W,\mathbb{N},1\mapsto 0)$
and we identify
$\omega^{1}_{(W,\mathbb{N},1\mapsto 0)/(W,(0))}\cong W$ by $d\log 1\mapsto 1$.
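Before the proof, it may help to see the two displayed sequences in the
simplest semistable example (an illustrative aside relying only on standard
log smoothness computations; it is not used below). Take
$\overline{Y}=\text{Spec}\ W[Z_{1},Z_{2}]/(Z_{1}Z_{2})$ with the log structure
generated by $Z_{1},Z_{2}$, regarded over $(W,\mathbb{N},1\mapsto 0)$ via
$1\mapsto Z_{1}Z_{2}=0$. Then $\omega^{1}_{\overline{Y}/(W,(0))}$ is free with
basis $d\log Z_{1},d\log Z_{2}$ (subject to $Z_{i}\,d\log Z_{i}=dZ_{i}$), the
map $\cdot d\log 1$ sends $1\mapsto d\log Z_{1}+d\log Z_{2}$, and the locally
split sequence becomes
$0\rightarrow\mathcal{O}_{\overline{Y}}\xrightarrow{1\mapsto d\log Z_{1}+d\log Z_{2}}\mathcal{O}_{\overline{Y}}\,d\log Z_{1}\oplus\mathcal{O}_{\overline{Y}}\,d\log Z_{2}\rightarrow\omega^{1}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}\rightarrow 0$
with quotient free of rank $1$. The boundary map of the associated long exact
sequence is the avatar of the monodromy operator $N$; for a cycle of rational
curves (the special fibre of a Tate curve), for instance, this $N$ is nonzero
on $\mathbb{H}^{1}$.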
###### Proof.
We will prove the theorem in 3 steps.
(i) We prove that there exists an $N$-equivariant isomorphism
(3.1) $H^{i}(((\overline{Y_{1}},\overline{N_{1}})/(W_{n},\mathbb{N},1\mapsto
0))_{crys},\mathcal{O}_{\overline{Y_{1}}/W_{n}})\cong\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto
0)})$
with the operator $N$ on the right hand side given by the degree $i$ boundary
homomorphism of the exact triangle
$\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)}[-1]\xrightarrow{\cdot d\log 1}\mathrm{dR}_{\overline{Y_{n}}/(W_{n},(0))}\rightarrow\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)}$
obtained by taking the various wedge powers, as in the process above, of the
exact sequence
$0\rightarrow\mathcal{O}_{\overline{Y_{n}}}\xrightarrow{\cdot d\log 1}\omega^{1}_{\overline{Y_{n}}/(W_{n},(0))}\rightarrow\omega^{1}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)}\rightarrow 0.$
The transition maps in the limit
$\varprojlim_{n}H^{i}(((\overline{Y_{1}},\overline{N_{1}})/(W_{n},\mathbb{N},1\mapsto
0))_{crys},\mathcal{O}_{\overline{Y_{1}}/W_{n}})$
are compatible, under the isomorphism 3.1, with the pullback homomorphisms of
cohomology
$\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y_{n+1}}/(W_{n+1},\mathbb{N},1\mapsto
0)})\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto
0)})$.
(ii) We will prove that there exists a commuting diagram of short exact
sequences of $W$-modules
(3.2) $0\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})/p^{n+1}\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y_{n+1}}/(W_{n+1},\mathbb{N},1\mapsto 0)})\rightarrow\mathbb{H}^{i+1}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})[p^{n+1}]\rightarrow 0$
$0\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})/p^{n}\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)})\rightarrow\mathbb{H}^{i+1}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})[p^{n}]\rightarrow 0$
where the middle vertical map is the pullback map, the left vertical map is
the natural reduction and the right vertical map is multiplication by $p$. All
maps in the left commutative square are $N$-equivariant.
(iii) Conclude Theorem 3.2.
We first prove (i).
Again we recall that in the base 4-tuple $(S,L,I,\gamma)$, $(S,L)$ is a fine
$p^{n}$-torsion log scheme with a quasi-coherent ideal $I$ and a PD structure
on it. In the discussion below, we will always take $I=(p)$ with the usual PD
structure on it. This PD structure extends to all schemes $X$ over $S$. Recall
the following result from the theory of log crystalline cohomology of
crystals; this is the basic case of (2.18) of [HK94], combined with (2.17). We
will not recall the notions appearing in the proposition but refer to the same
reference.
###### Proposition 3.3.
For log schemes $(X,M)/(S,L)$, suppose there exists a log smooth
$(Z,N)/(S,L)$ with a closed immersion $i:(X,M)\hookrightarrow(Z,N)$. Denote
the (log) PD envelope of $i:(X,M)\hookrightarrow(Z,N)$ by $(D,M_{D})$. Then
there exists a complex of $\mathcal{O}_{Z}$-modules $C_{X,Z/S}$ of the form
$\mathcal{O}_{D}\xrightarrow{\nabla}\mathcal{O}_{D}\otimes_{\mathcal{O}_{Z}}\omega^{1}_{Z/S}\xrightarrow{\nabla}\mathcal{O}_{D}\otimes_{\mathcal{O}_{Z}}\omega^{2}_{Z/S}\xrightarrow{\nabla}\cdots$
where $\omega^{1}_{Z/S}$ is the log differential module and the Leibniz rule
is satisfied:
$\nabla(m\otimes\omega)=\nabla(m)\otimes\omega+m\otimes d\omega\ \
(m\in\mathcal{O}_{D}\otimes_{\mathcal{O}_{Z}}\omega^{i}_{Z/S},\ \
\omega\in\omega^{j}_{Z/S})$
such that the cohomology of this complex, as an object in $D(Sh(X_{et}))$,
computes the log crystalline cohomology
$H^{\cdot}(((X,M)/(S,L))_{crys},\mathcal{O}_{X/S})$.
###### Remark 3.4.
[HK94] (2.18) also gives that the association of the complex $C_{X,Z/S}$ is
natural with respect to the system
$(X,M)\rightarrow(Z,N)\rightarrow(S,L,I,\gamma)$.
Apply this proposition to $(X,M)=(\overline{Y_{1}},\overline{N_{1}})$,
$(S,L)=(W_{n},\mathbb{N},1\mapsto 0)$ and the closed immersion
$i:(\overline{Y_{1}},\overline{N_{1}})\rightarrow(\overline{Y_{n}},\overline{N_{n}})$,
whose ideal of definition is generated by $p$; hence
$(D,M_{D})=(\overline{Y_{n}},\overline{N_{n}})$, and by the Leibniz rule we
see that $C_{\overline{Y_{1}},\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto
0)}=\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)}$, so the
abstract isomorphism in 3.1 follows. We still need to show that it is
$N$-equivariant.
There is also a description of the operator $N$ on the log crystalline
cohomology in terms of the complexes mentioned above; see [HK94] (3.6). We
simplify it a little.
###### Proposition 3.5.
Suppose there exists a (log) closed immersion $(X,M)\hookrightarrow(Z,N)$ over
$(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)$ with $(Z,N)$ log smooth over
$(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)$. Then there exists an exact
triangle
$C_{X,Z\langle\rangle/(W_{n}\langle T\rangle^{\prime},\mathbb{N},1\mapsto
T)}[-1]\rightarrow C_{X,Z/(W_{n},(0))}\rightarrow
C_{X,Z\langle\rangle/(W_{n}\langle T\rangle^{\prime},\mathbb{N},1\mapsto
T)}\rightarrow$
where $W_{n}\langle T\rangle^{\prime}$ denotes the usual PD envelope of
$W_{n}[T]^{\prime}$ along the ideal $(T)$ and $Z\langle\rangle$ is the base
change of $Z$ along $W_{n}[T]^{\prime}\rightarrow W_{n}\langle
T\rangle^{\prime}$ with the pullback log structure. We refer the reader to
[HK94] (3.6) for the definition of this exact triangle.
Tensoring the above exact triangle with $\otimes^{\mathbb{L}}_{W_{n}\langle
T\rangle^{\prime}}W_{n}$, we get:
$C_{X,\overline{Z}/(W_{n},\mathbb{N},1\mapsto 0)}[-1]\rightarrow
C_{X,Z/(W_{n},(0))}\otimes^{\mathbb{L}}_{W_{n}\langle
T\rangle^{\prime}}W_{n}\rightarrow
C_{X,\overline{Z}/(W_{n},\mathbb{N},1\mapsto 0)}\rightarrow$
where $\overline{Z}$ is the base change of $Z$ along
$W_{n}[T]^{\prime}\rightarrow W_{n},T\mapsto 0$ with the pullback log
structure, so that the operator $N$ on
$H^{i}(((X,M)/(W_{n},\mathbb{N}))_{crys},\mathcal{O}_{X/W_{n}})$ is given by
the degree $i$ connecting homomorphism of this exact triangle under the
identification given after Proposition 3.3.
In fact, we will use this proposition in our case for
$(X,M)=(\overline{Y_{1}},\overline{N_{1}})$ and $(Z,N)=(Y_{n},N_{n})$; the
first is the base change of the second along $(W_{n},\mathbb{N},1\mapsto
0)\rightarrow(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)$. Thus
$Z\langle\rangle=(Y_{n}\langle\rangle,N_{n}\langle\rangle)$ as log schemes,
where the latter is defined to be the base change of
$(Y_{n},N_{n})\rightarrow(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)$ along
$(W_{n}\langle T\rangle^{\prime},\mathbb{N},1\mapsto T)$. Also
$\overline{Z}=(\overline{Y_{n}},\overline{N_{n}})$ as log schemes. We now
identify the objects appearing in the above proposition in more concrete
terms.
Here is a diagram illustrating the situation: we have the maps
$(\overline{Y_{1}},\overline{N_{1}})\hookrightarrow(Y_{n}\langle\rangle,N_{n}\langle\rangle)\rightarrow(Y_{n},N_{n})$
lying over
$(W_{n}\langle T\rangle^{\prime},\mathbb{N},1\mapsto T)\rightarrow(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)\rightarrow(W_{n},(0)).$
Apply Proposition 3.3 to $(X,M)=(\overline{Y_{1}},\overline{N_{1}})$,
$(S,L)=(W_{n}\langle T\rangle^{\prime},\mathbb{N},1\mapsto T)$ and the closed
immersion
$(\overline{Y_{1}},\overline{N_{1}})\rightarrow(Y_{n}\langle\rangle,N_{n}\langle\rangle)$.
Since the closed immersion is exact and the ideal of definition is generated
by $p$ and all the divided powers of $T$, we have
$(D,M_{D})=(Y_{n}\langle\rangle,N_{n}\langle\rangle)$ in this case.
Proposition 3.3 gives that
$C_{\overline{Y_{1}},Y_{n}\langle\rangle/(W_{n}\langle
T\rangle^{\prime},\mathbb{N},1\mapsto
T)}\cong\mathrm{dR}_{Y_{n}\langle\rangle/(W_{n}\langle
T\rangle^{\prime},\mathbb{N},1\mapsto T)}$
Apply Proposition 3.3 to $(X,M)=(\overline{Y_{1}},\overline{N_{1}})$,
$(S,L)=(W_{n},(0))$ and the closed immersion
$(\overline{Y_{1}},\overline{N_{1}})\rightarrow(Y_{n},N_{n})$. Here
$(Y_{n},N_{n})$ is log smooth over $(W_{n},(0))$ because it is log smooth over
$(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)$ and
$(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)$ is log smooth over $(W_{n},(0))$
by the smoothness criterion (2.9) in [HK94]. In this case
$(D,M_{D})=(Y_{n}\langle\rangle,N_{n}\langle\rangle)$, since the ideal of
definition given by the closed immersion $i$ is generated by $p,T$.
Proposition 3.3 shows that $C_{\overline{Y_{1}},Y_{n}/(W_{n},(0))}$ is given
by a complex
$\mathcal{O}_{Y_{n}\langle\rangle}\xrightarrow{\nabla}\mathcal{O}_{Y_{n}\langle\rangle}\otimes_{\mathcal{O}_{Y_{n}}}\omega^{1}_{Y_{n}/(W_{n},(0))}\xrightarrow{\nabla}\mathcal{O}_{Y_{n}\langle\rangle}\otimes_{\mathcal{O}_{Y_{n}}}\omega^{2}_{Y_{n}/(W_{n},(0))}\xrightarrow{\nabla}\cdots$
Via the application of Proposition 3.3 indicated above, and identifying
$\omega^{i}_{Y_{n}\langle\rangle/(W_{n}\langle
T\rangle^{\prime},\mathbb{N},1\mapsto
T)}\cong\mathcal{O}_{Y_{n}\langle\rangle}\otimes_{\mathcal{O}_{Y_{n}}}\omega^{i}_{Y_{n}/(W_{n}[T]^{\prime},\mathbb{N},1\mapsto
T)}\cong W_{n}\langle
T\rangle^{\prime}\otimes_{W_{n}[T]^{\prime}}\omega^{i}_{Y_{n}/(W_{n}[T]^{\prime},\mathbb{N},1\mapsto
T)},$
one can verify, using the recipe given in (3.6) of [HK94], that the exact
triangle
$C_{\overline{Y_{1}},Y_{n}\langle\rangle/(W_{n}\langle
T\rangle^{\prime},\mathbb{N},1\mapsto T)}[-1]\rightarrow
C_{\overline{Y_{1}},Y_{n}/(W_{n},(0))}\rightarrow
C_{\overline{Y_{1}},Y_{n}\langle\rangle/(W_{n}\langle
T\rangle^{\prime},\mathbb{N},1\mapsto T)}\rightarrow$
is given by actual chain maps
(3.3) $0\rightarrow\mathcal{O}_{Y_{n}\langle\rangle}\otimes_{\mathcal{O}_{Y_{n}}}\omega^{i-1}_{Y_{n}/(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)}\xrightarrow{\cdot d\log T}\mathcal{O}_{Y_{n}\langle\rangle}\otimes_{\mathcal{O}_{Y_{n}}}\omega^{i}_{Y_{n}/(W_{n},(0))}\rightarrow\mathcal{O}_{Y_{n}\langle\rangle}\otimes_{\mathcal{O}_{Y_{n}}}\omega^{i}_{Y_{n}/(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)}\rightarrow 0$
which is obtained by tensoring $\mathcal{O}_{Y_{n}\langle\rangle}$ with the
$i$-th wedge powers of the locally split short exact sequence
$0\rightarrow\mathcal{O}_{Y_{n}}\xrightarrow{\cdot d\log T}\omega^{1}_{Y_{n}/(W_{n},(0))}\rightarrow\omega^{1}_{Y_{n}/(W_{n}[T]^{\prime},\mathbb{N},1\mapsto T)}\rightarrow 0.$
Now we claim that there is a commutative diagram whose rows are the exact
triangles
$\mathrm{dR}_{Y_{n}\langle\rangle/(W_{n}\langle T\rangle^{\prime},\mathbb{N},1\mapsto T)}\otimes^{\mathbb{L}}_{W_{n}\langle T\rangle^{\prime}}W_{n}[-1]\xrightarrow{\cdot d\log T}C_{\overline{Y_{1}},Y_{n}/(W_{n},(0))}\otimes^{\mathbb{L}}_{W_{n}\langle T\rangle^{\prime}}W_{n}\rightarrow\mathrm{dR}_{Y_{n}\langle\rangle/(W_{n}\langle T\rangle^{\prime},\mathbb{N},1\mapsto T)}\otimes^{\mathbb{L}}_{W_{n}\langle T\rangle^{\prime}}W_{n}$
and
$\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)}[-1]\xrightarrow{\cdot d\log 1}\mathrm{dR}_{\overline{Y_{n}}/(W_{n},(0))}\rightarrow\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)},$
with the three vertical arrows, from the first row to the second, being
isomorphisms. The commutativity is straightforward. The isomorphisms in the
left and right columns just follow from the local freeness of each module of
differentials and the pullback result Lemma 3.1 for modules of differentials.
For the middle vertical arrow, we know that
$(\mathcal{O}_{Y_{n}\langle\rangle}\otimes_{\mathcal{O}_{Y_{n}}}\omega^{i}_{Y_{n}/(W_{n},(0))})\otimes_{W_{n}\langle
T\rangle^{\prime}}W_{n}\cong\omega^{i}_{\overline{Y_{n}}/(W_{n},(0))}$. Since
$\omega^{i}_{Y_{n}/(W_{n},(0))}\rightarrow\omega^{i}_{\overline{Y_{n}}/(W_{n},(0))}$
is already surjective, since the $\nabla$ in
$C_{\overline{Y_{1}},Y_{n}/(W_{n},(0))}$ satisfies the Leibniz rule, and since
the differential applied to the image of $\omega^{i}_{Y_{n}/(W_{n},(0))}$ in
$\mathcal{O}_{Y_{n}\langle\rangle}\otimes_{\mathcal{O}_{Y_{n}}}\omega^{i}_{Y_{n}/(W_{n},(0))}$
is just the log differential, we see that the differentials in the complex
after tensoring are exactly the log de Rham differentials.
Thus, by the last part of Proposition 3.5, we see that the operator $N$ on
$H^{i}_{crys}((\overline{Y_{1}},\overline{N_{1}})/(W_{n},\mathbb{N}),\mathcal{O}_{\overline{Y_{1}}/W_{n}})$
is given by the degree $i$ connecting homomorphism of the long exact sequence
associated to the lower row of the above diagram. Thus (i) is proved.
Now we prove (ii).
By the proper base change theorem in the context of derived categories of
coherent sheaves (Section 7.7 of [Gro63]) applied to the Cartesian square
given by $i_{n}\colon\overline{Y_{n}}\rightarrow\overline{Y}$,
$\pi_{n}\colon\overline{Y_{n}}\rightarrow\text{Spec}\ W_{n}$ and
$\pi_{\infty}\colon\overline{Y}\rightarrow\text{Spec}\ W$, and to the exact
triangle
(3.4) $\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}[-1]\xrightarrow{\cdot d\log 1}\mathrm{dR}_{\overline{Y}/(W,(0))}\rightarrow\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)},$
we see that the triangle
$R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}\otimes^{\mathbb{L}}_{W}W_{n}[-1]\rightarrow R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,(0))}\otimes^{\mathbb{L}}_{W}W_{n}\rightarrow R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}\otimes^{\mathbb{L}}_{W}W_{n}$
is isomorphic to
$R\pi_{n,\ast}\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)}[-1]\rightarrow R\pi_{n,\ast}\mathrm{dR}_{\overline{Y_{n}}/(W_{n},(0))}\rightarrow R\pi_{n,\ast}\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)}.$
(Note that each term of the de Rham complexes in 3.4 (as actual chain
complexes) is locally free by log smoothness, and that
$Li_{n}^{\ast}\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)}=\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)}$ and
$Li_{n}^{\ast}\mathrm{dR}_{\overline{Y}/(W,(0))}=\mathrm{dR}_{\overline{Y_{n}}/(W_{n},(0))}$,
by applying the usual pullback in each degree together with 3.1.)
Applying $R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)}\otimes^{\mathbb{L}}_{W}(-)$ to the exact triangle
$W\xrightarrow{\cdot p^{n}}W\rightarrow W_{n}$
and then taking the long exact sequence, we get a short exact sequence
(3.5) $0\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})/p^{n}\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)})\rightarrow\mathbb{H}^{i+1}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})[p^{n}]\rightarrow 0$
where the inclusion is $N$-equivariant, since it is induced by the connecting
homomorphisms of the diagram comparing the triangle
$R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}\otimes^{\mathbb{L}}_{W}W[-1]\rightarrow R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,(0))}\otimes^{\mathbb{L}}_{W}W\rightarrow R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}\otimes^{\mathbb{L}}_{W}W$
with the triangle
$R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}\otimes^{\mathbb{L}}_{W}W_{n}[-1]\rightarrow R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,(0))}\otimes^{\mathbb{L}}_{W}W_{n}\rightarrow R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}\otimes^{\mathbb{L}}_{W}W_{n}.$
Applying
$R\pi_{\infty,\ast}\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}\otimes^{\mathbb{L}}_{W}(-)$
to the morphism between the exact triangles
$W\xrightarrow{\cdot p^{n+1}}W\rightarrow W_{n+1}$
and
$W\xrightarrow{\cdot p^{n}}W\rightarrow W_{n},$
given by multiplication by $p$ on the first copy of $W$, the identity on the
second copy, and the natural projection $W_{n+1}\rightarrow W_{n}$, and taking
long exact sequences, gives the desired commutative diagram
(3.6) $0\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})/p^{n+1}\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y_{n+1}}/(W_{n+1},\mathbb{N},1\mapsto 0)})\rightarrow\mathbb{H}^{i+1}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})[p^{n+1}]\rightarrow 0$
$0\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})/p^{n}\rightarrow\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)})\rightarrow\mathbb{H}^{i+1}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})[p^{n}]\rightarrow 0$
of (ii), where the left commutative square is $N$-equivariant. Thus (ii) is
proved.
To prove (iii), take the inverse limit over the short exact sequences in 3.6
indexed by $n$; we get
$0\rightarrow\varprojlim_{n}\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})/p^{n}\rightarrow\varprojlim_{n}\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y_{n}}/(W_{n},\mathbb{N},1\mapsto 0)})\rightarrow\varprojlim_{\cdot p}\mathbb{H}^{i+1}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})[p^{n}]$
where the inclusion is $N$-equivariant.
Note that all the
$\mathbb{H}^{j}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)})$ are
finitely generated over $W$ (by truncation and the fact that proper
pushforward preserves coherent sheaves). So
$\varprojlim_{n}\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)})/p^{n}\cong\mathbb{H}^{i}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)})$, and $\varprojlim_{\cdot
p}\mathbb{H}^{i+1}(\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto
0)})[p^{n}]=0$ since the $p$-power torsion of a finitely generated $W$-module
is killed by a fixed power of $p$, so the transition maps of this system are
eventually zero. Thus (iii) is proved.
∎
We would like to prove another GAGA type result.
Let $(X,M)$ be a log scheme, smooth over $(\mathbb{C},\mathbb{N},1\mapsto 0)$
and proper as a scheme. Let $(X^{\mathrm{an}},M^{\mathrm{an}})$ be the
associated analytic log scheme, which is also smooth over the analytic point
$(\mathrm{pt},\mathbb{N},1\mapsto 0)$.
###### Proposition 3.6.
There exists an $N$-equivariant isomorphism
$\mathbb{H}^{i}(\mathrm{dR}_{X/(\mathbb{C},\mathbb{N},1\mapsto
0)})\cong\mathbb{H}^{i}(\mathrm{dR}_{X^{\mathrm{an}}/(\mathrm{pt},\mathbb{N},1\mapsto
0)})$
where the $N$ on each hypercohomology is defined as the degree $i$ boundary
morphism of the long exact sequence of the corresponding exact triangle,
obtained as in Theorem 1.6:
$\mathrm{dR}_{X/(\mathbb{C},\mathbb{N},1\mapsto 0)}[-1]\xrightarrow{\cdot d\log 1}\mathrm{dR}_{X/(\mathbb{C},(0))}\rightarrow\mathrm{dR}_{X/(\mathbb{C},\mathbb{N},1\mapsto 0)}$
$\mathrm{dR}_{X^{\mathrm{an}}/(\mathrm{pt},\mathbb{N},1\mapsto 0)}[-1]\xrightarrow{\cdot d\log 1}\mathrm{dR}_{X^{\mathrm{an}}/(\mathrm{pt},(0))}\rightarrow\mathrm{dR}_{X^{\mathrm{an}}/(\mathrm{pt},\mathbb{N},1\mapsto 0)}$
###### Proof.
We can apply the exact GAGA functor
$\mathcal{F}\mapsto\mathcal{F}^{\mathrm{an}}$ from the category of sheaves of
$\mathcal{O}_{X}$-modules to the category of sheaves of
$\mathcal{O}_{X^{\mathrm{an}}}$-modules. Let $\lambda$ be the morphism
$X^{\mathrm{an}}\rightarrow X$. It then suffices to show that there is a
canonical isomorphism, compatible with the maps $\cdot d\log 1$, between the
triangle
$\lambda^{\ast}\mathrm{dR}_{X/(\mathbb{C},\mathbb{N},1\mapsto 0)}[-1]\rightarrow\lambda^{\ast}\mathrm{dR}_{X/(\mathbb{C},(0))}\rightarrow\lambda^{\ast}\mathrm{dR}_{X/(\mathbb{C},\mathbb{N},1\mapsto 0)}$
and the triangle
$\mathrm{dR}_{X^{\mathrm{an}}/(\mathrm{pt},\mathbb{N},1\mapsto 0)}[-1]\rightarrow\mathrm{dR}_{X^{\mathrm{an}}/(\mathrm{pt},(0))}\rightarrow\mathrm{dR}_{X^{\mathrm{an}}/(\mathrm{pt},\mathbb{N},1\mapsto 0)}.$
Commutativity of the diagram is clear from the definition of $d\log 1$ as the
image of the element $1$ of the monoid in the differential modules. The
isomorphism of complexes actually holds termwise. Namely, we can prove the
stronger statement that for any log scheme $(X,M)/(Z,L)$ over $\mathbb{C}$, if
we denote its analytification by $(X^{\mathrm{an}},M)/(Z^{\mathrm{an}},L)$,
then
$\lambda^{\ast}\omega^{i}_{(X,M)/(Z,L)}\cong\omega^{i}_{(X^{\mathrm{an}},M)/(Z^{\mathrm{an}},L)}$
for any non-negative integer $i$. It suffices to prove this for $i=1$. We know
from the definition that $\omega^{1}_{(X,M)/(Z,L)}$ is $\Omega^{1}_{X/Z}\oplus
M\otimes\mathcal{O}_{X}$ modulo the relations $di(m)\oplus-m\otimes i(m)$ and
$0\oplus l\otimes 1$ for all $m\in M$ and $l\in L$, where $i$ is the structure
morphism $M\rightarrow\mathcal{O}_{X}$. Similarly
$\omega^{1}_{(X^{\mathrm{an}},M)/(Z^{\mathrm{an}},L)}$ is
$\Omega^{1}_{X^{\mathrm{an}}/Z^{\mathrm{an}}}\oplus
M\otimes\mathcal{O}_{X^{\mathrm{an}}}$ modulo the analogous relations, where
$i$ is now the structure morphism $M\rightarrow\mathcal{O}_{X^{\mathrm{an}}}$.
We conclude the case $i=1$ by noting that
$\lambda^{\ast}\Omega^{1}_{X/Z}\cong\Omega^{1}_{X^{\mathrm{an}}/Z^{\mathrm{an}}}$.
∎
We apply this proposition to
$(X,M)=(\overline{Y}_{\mathbb{C}},\overline{N}_{\mathbb{C}})$ and its
analytification $(\overline{Y^{\mathrm{an}}},\overline{N^{\mathrm{an}}})$.
Since $Z$ is a blowup of the projective variety $\mathbb{X}_{de}$ (in the
notation of the previous section), its base change $\overline{Y}_{\mathbb{C}}$
(via $W[T]^{\prime}\rightarrow\mathbb{C}$, $T\mapsto 0$) is projective as
well. Hence the hypothesis of the proposition is satisfied and we may reduce
the computation of the operator $N$ to the analytic setting.
## 4\. Proof of Theorem 1.1
Recall the hypothesis that $F$ is a characteristic $0$ local field containing
$\zeta_{N}$ whose residue characteristic equals $p$ and does not divide $N$.
It suffices to prove that $V_{\lambda,t^{-1}}$ is regular ordinary as a
$G_{F^{\prime}}$ representation for a finite extension $F^{\prime}/F$ because
of the following lemma.
###### Lemma 4.1.
If a representation $r:G_{F}\rightarrow GL_{n}(\overline{\mathbb{Q}}_{p})$ is
such that its restriction $r|_{G_{F^{\prime}}}$ is de Rham, regular and
ordinary for some finite extension $F^{\prime}/F$, then the same holds for
$r$.
###### Proof.
We may assume without loss of generality that $F^{\prime}/F$ is Galois. By the
interpretation of the regular and ordinary condition as in Theorem 6.1.2 of
[ACC+18], we see that
$r|_{G_{F^{\prime}}}\sim\left(\begin{matrix}\psi_{1}&\ast&\ast&\ast\\ 0&\psi_{2}&\ast&\ast\\ \vdots&\ddots&\ddots&\ast\\ 0&\cdots&0&\psi_{n}\end{matrix}\right)$
where for each $i=1,\ldots,n$ the character
$\psi_{i}:G_{F^{\prime}}\rightarrow\overline{\mathbb{Q}}_{p}^{\times}$ agrees
with the character
$\sigma\in
I_{F^{\prime}}\mapsto\prod_{\tau\in\mathrm{Hom}(F^{\prime},\overline{\mathbb{Q}}_{p})}\tau(\mathrm{Art}_{F^{\prime}}^{-1}(\sigma))^{\mu_{\tau,i}}$
on an open subgroup of the inertia group $I_{F^{\prime}}$, for some tuple
$\mu_{\tau,i}$ satisfying, for each fixed $\tau$,
$\mu_{\tau,1}<\cdots<\mu_{\tau,n}$. It suffices to show that the $G_{F}$
(actually $\mathrm{Gal}(F^{\prime}/F)$) action preserves the filtration given
by the above triangular form and that $\mu_{\tau,i}=\mu_{\tau\sigma,i}$ for
any $\tau\in\mathrm{Hom}(F^{\prime},\overline{\mathbb{Q}}_{p})$ and
$\sigma\in\mathrm{Gal}(F^{\prime}/F)$.
Take $\sigma\in\mathrm{Gal}(F^{\prime}/F)$ and consider the one dimensional
subspace $\overline{\mathbb{Q}}_{p}e_{1}$ underlying $\psi_{1}$ and the
subspace $L_{1}=\overline{\mathbb{Q}}_{p}\sigma e_{1}$. $G_{F^{\prime}}$ acts
on $L_{1}$ with $\tau$ HT weight $-\mu_{\tau\sigma,1}$. Now if there exists
some $\tau$ such that $\mu_{\tau,1}\neq\mu_{\tau\sigma,1}$, there must exist
some $\tau$ such that $\mu_{\tau\sigma,1}<\mu_{\tau,1}$; thus the $\tau$ HT
weight of $L_{1}$ is strictly greater than the $\tau$ HT weight of $\psi_{i}$
for every $i=1,\ldots,n$ (this follows from the strict ordering of the various
$\mu_{\tau,i}$). From this we see that $\mathrm{Hom}(L_{1},\psi_{i})=0$ for
every $i=1,\ldots,n$ (otherwise their $\tau$ HT weights would have to agree
for all $\tau$), and thus $\mathrm{Hom}(L_{1},V)=0$, where $V$ denotes the
underlying vector space of $r$, which is a contradiction.
Thus $\mu_{\tau,1}=\mu_{\tau\sigma,1}$ for any
$\tau\in\mathrm{Hom}(F^{\prime},\overline{\mathbb{Q}}_{p})$ and
$\sigma\in\mathrm{Gal}(F^{\prime}/F)$. By a similar argument, we still have
$\mathrm{Hom}(L_{1},\psi_{i})=0$ for every $i=2,\ldots,n$, hence
$\mathrm{Hom}(L_{1},V)=\mathrm{Hom}(L_{1},\psi_{1})$. The left hand side is
nonzero, and $L_{1}$ and the vector space underlying $\psi_{1}$ are both
$1$-dimensional, so $\sigma$ must preserve the $1$-dimensional vector space
underlying $\psi_{1}$. We have proved that the first step of the filtration is
preserved under the $\mathrm{Gal}(F^{\prime}/F)$ action. Quotienting out the
subspace underlying $\psi_{1}$, we can use the same argument to deduce the
rest of the claim.
∎
Note that the $F^{\prime}/F$ we will use is totally ramified, hence the
residue field is still $k$.
The following $p$-adic Hodge theoretic lemma will be used.
###### Lemma 4.2.
Suppose that $F_{v}$ is a characteristic $0$ local field over $\mathbb{Q}_{p}$
and
$r:\mathrm{Gal}(\overline{F_{v}}/F_{v})\rightarrow
GL_{n}(\overline{\mathbb{Q}}_{p})$
is semi-stable. Suppose moreover that for every
$\tau:F_{v}^{0}\hookrightarrow\overline{\mathbb{Q}}_{p}$, the operator $N$ on
the $n$-dimensional vector space
$(r\otimes_{\tau,F_{v}^{0}}B_{\text{st}})^{\mathrm{Gal}(\overline{F}_{v}/F_{v})}$
is maximally nilpotent, i.e. the smallest $j$ such that $N^{j}=0$ is $n$. Then
$r$ is regular and ordinary.
###### Proof.
This is Lemma 2.2 (2) of [BLGHT09]. ∎
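To orient the reader (a heuristic sketch of the mechanism behind this lemma,
with normalizations suppressed; see [BLGHT09] for the actual proof): on
$D_{\text{st}}$ one has $N\phi=p\phi N$, so if $N^{n-1}v\neq 0$ and $\phi
v=\alpha v$ modulo $\mathrm{Im}\,N$, then
$\phi(N^{j}v)=p^{-j}\alpha\,N^{j}v\ \ \text{modulo}\ \mathrm{Im}\,N^{j+1},$
so the Frobenius slopes along the flag $\ker N\subset\ker
N^{2}\subset\cdots$ are pairwise distinct. Weak admissibility then forces the
Hodge filtration into general position with respect to this flag, which is
exactly the regular ordinary shape on the Galois side.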
Thus it suffices to prove that the operator $N$ on the space
$D_{st,\sigma}(V_{\lambda,t^{-1}}):=(V_{\lambda,t^{-1}}\otimes_{\sigma,(F^{\prime})^{0}}B_{\text{st}})^{\mathrm{Gal}(\overline{F^{\prime}}/F^{\prime})}$
is maximally nilpotent, where $(F^{\prime})^{0}$ is the maximal unramified
subextension of $F^{\prime}$,
$\sigma:(F^{\prime})^{0}\hookrightarrow\overline{\mathbb{Q}(\zeta_{N})}_{\lambda}$
is any fixed embedding of $(F^{\prime})^{0}$, and $\lambda$ is a place of
$\mathbb{Q}(\zeta_{N})$ over $p$. Applying the main comparison theorem of
[Tsu99] to the semistable model $\mathcal{Y}$ we get from Theorem 1.4, we have
$D_{st,\sigma}(V_{\lambda,t^{-1}})\cong
H^{N-2}_{crys}((\overline{\mathcal{Y}},\overline{\mathcal{N}})/(W,\mathbb{N},1\mapsto
0))[\frac{1}{p}]^{H_{0},\sigma^{-1}\chi}\otimes_{(F^{\prime})^{0},\sigma}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda}$
as filtered $(\phi,N)$-modules, where the right hand side is Hyodo-Kato's log
crystalline cohomology.
Now apply Proposition 1.5 and Remark 2.4 to get the variety $Y$ with $H_{0}$
action over $\text{Spec}\ W[T]$; the remark gives that
$H^{N-2}_{crys}((\overline{\mathcal{Y}},\overline{\mathcal{N}})/(W,\mathbb{N},1\mapsto 0))[\frac{1}{p}]^{H_{0},\sigma^{-1}\chi}\otimes_{(F^{\prime})^{0},\sigma}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda}\cong H^{N-2}_{crys}((\overline{Y_{1}},\overline{N_{1}})/(W,\mathbb{N},1\mapsto 0))[\frac{1}{p}]^{H_{0},\sigma^{-1}\chi}\otimes_{(F^{\prime})^{0},\sigma}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda}.$
So we only need to identify the limit
$\varprojlim_{n}H^{N-2}(((\overline{Y_{1}},\overline{N_{1}})/(W_{n},\mathbb{N},1\mapsto
0))_{crys},\mathcal{O}_{\overline{Y_{1}}/W_{n}})[\frac{1}{p}]$
and describe the operator $N$ on its $H_{0}$ eigenspace determined by the
character $\sigma^{-1}\chi$.
Now, by Theorem 1.6 applied with $i=N-2$ to the log scheme $(Y,N)$, where $Y$
is as above and $N$ is the log structure on it given by the divisor $T$ over
$(W[T],\mathbb{N},1\mapsto T)$, we may identify the above limit with the de
Rham cohomology of $(\overline{Y},\overline{N})/(W,\mathbb{N},1\mapsto 0)$. It
then suffices to prove that the operator $N$, defined as the degree $N-2$
boundary homomorphism of the long exact sequence given by the exact triangle
$\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)}[-1]\xrightarrow{\cdot d\log 1}\mathrm{dR}_{\overline{Y}/(W,(0))}\rightarrow\mathrm{dR}_{\overline{Y}/(W,\mathbb{N},1\mapsto 0)},$
is maximally nilpotent on the $\sigma^{-1}\chi$ eigenspace of the $H_{0}$
action.
This algebraic statement can be verified over $\mathbb{C}$. So let
$(\overline{Y}_{\mathbb{C}},\overline{N}_{\mathbb{C}})$ denote the base change
of $(\overline{Y},\overline{N})$ along $(W,\mathbb{N},1\mapsto
0)\rightarrow(\mathbb{C},\mathbb{N},1\mapsto 0)$ for some fixed embedding
$\tau:W\rightarrow\mathbb{C}$. We are reduced to showing that the operator $N$
on
$\mathbb{H}^{N-2}(\mathrm{dR}_{\overline{Y}_{\mathbb{C}}/(\mathbb{C},\mathbb{N},1\mapsto
0)})$, given as the degree $N-2$ boundary map of the long exact sequence of
the exact triangle
$\mathrm{dR}_{\overline{Y}_{\mathbb{C}}/(\mathbb{C},\mathbb{N},1\mapsto 0)}[-1]\xrightarrow{\cdot d\log 1}\mathrm{dR}_{\overline{Y}_{\mathbb{C}}/(\mathbb{C},(0))}\rightarrow\mathrm{dR}_{\overline{Y}_{\mathbb{C}}/(\mathbb{C},\mathbb{N},1\mapsto 0)},$
is maximally nilpotent on the $\tau\sigma^{-1}\chi$ eigenspace of the $H_{0}$
action.
Let $(S,\mathbb{N},1\mapsto T)$ be the analytic disc centered at $0$ with
radius $1$ and coordinate $T$ in $\mathbb{C}$, and let
$\pi^{\mathrm{an}}:Y^{\mathrm{an}}\rightarrow S$ be the analytic space
associated to $Y$, pulled back to $S$ from the affine line. We have the log
structure $N^{\mathrm{an}}$ on $Y^{\mathrm{an}}$ given by $N$ on $Y$. We write
$(\overline{Y^{\mathrm{an}}},\overline{N^{\mathrm{an}}})$ for the base change
of $(Y^{\mathrm{an}},N^{\mathrm{an}})$ along $(\mathrm{pt},\mathbb{N},1\mapsto
0)\rightarrow(S,\mathbb{N},1\mapsto T)$, which is clearly also the
analytification of $(\overline{Y}_{\mathbb{C}},\overline{N}_{\mathbb{C}})$.
Now we apply Theorem 1.8 to see that there exists an $N$- and
$H_{0}$-equivariant isomorphism
$\mathbb{H}^{N-2}(\mathrm{dR}_{\overline{Y}_{\mathbb{C}}/(\mathbb{C},\mathbb{N},1\mapsto
0)})\cong\mathbb{H}^{N-2}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto
0)})$
We are reduced to proving that the operator $N$ on the $\tau\sigma^{-1}\chi$
eigenspace of
$\mathbb{H}^{N-2}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto
0)})$ is maximally nilpotent.
Apply [Ill94] 2.2.2 to the setting $X=Y^{\mathrm{an}}$, $S=S$ with the log
structures mentioned above.
[Ill94] 2.2.2 says that
$R^{i}\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}$ is locally free for all $i$ (in particular for $i=N-2$) and
(4.1)
$\left(R^{N-2}\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}\right)\otimes_{\mathcal{O}_{S}}\mathbb{C}_{\{0\}}\cong\mathbb{H}^{N-2}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto
0)})$
(Here we identify the $\omega_{Y}^{\cdot}$ of [Ill94] 2.2.2, defined there as
the derived pullback of
$\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}$ to the point $0$,
with $\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto
0)}$, by Lemma 3.1 and the fact that, by local freeness, we can pull back
$\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}$ termwise.)
We claim this isomorphism 4.1 is $N$-equivariant, where the $N$ on the left
hand side is the reduction of the Gauss-Manin connection
$R^{N-2}\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}\rightarrow R^{N-2}\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}\otimes_{\mathcal{O}_{S}}\omega^{1}_{(S,\mathbb{N},1\mapsto T)/\mathbb{C}},$
which (see 2.2.1.2 and 2.2.1.3 of [Ill94]) is the degree $N-2$ connecting
homomorphism, after applying $R\pi^{\mathrm{an}}_{\ast}$, of the following
exact triangle (the definition is completely analogous to the one in Section
3), once we identify $\omega^{1}_{(S,\mathbb{N},1\mapsto
T)/\mathbb{C}}\cong\mathcal{O}_{S}$ by $d\log T\mapsto 1$:
$\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}[-1]\xrightarrow{\cdot d\log T}\mathrm{dR}_{Y^{\mathrm{an}}/(\mathrm{pt},(0))}\rightarrow\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)},$
and the $N$ on the right hand side is the operator defined in Section 3, which
is the operator we have reduced to computing.
Granting the following lemma, we are reduced to computing the residue at $0$
of the Gauss-Manin connection on the locally free sheaf
$R^{N-2}\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}$, which is linked to the monodromy of this locally free sheaf by [Ill94]
(2.2.3). The proof of the lemma is quite formal, and the reader may wish to
skip it.
###### Lemma 4.3.
The above isomorphism (4.1) is $N$-equivariant.
###### Proof.
By proper base change applied to the cartesian diagram
$\begin{array}{ccc}\overline{Y^{\mathrm{an}}}&\longrightarrow&Y^{\mathrm{an}}\\\downarrow&&\downarrow\scriptstyle{\pi^{\mathrm{an}}}\\\mathrm{pt}&\xrightarrow{\ i_{0}\ }&S\end{array}$
and the exact triangle of complexes over $Y^{\mathrm{an}}$
$\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}[-1]\xrightarrow{\ \cdot\,d\log T\ }\mathrm{dR}_{Y^{\mathrm{an}}/(\mathrm{pt},(0))}\longrightarrow\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}$
we get an isomorphism of exact triangles
$\begin{array}{ccccc}\mathbb{H}^{\cdot}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto 0)})[-1]&\longrightarrow&\mathbb{H}^{\cdot}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},(0))})&\longrightarrow&\mathbb{H}^{\cdot}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto 0)})\\\downarrow\wr&&\downarrow\wr&&\downarrow\wr\\R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\{0\}}[-1]&\longrightarrow&R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(\mathrm{pt},(0))}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\{0\}}&\longrightarrow&R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\{0\}}\end{array}$
Thus the isomorphism
$\mathbb{H}^{N-2}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto 0)})\cong\mathbb{H}^{N-2}(R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\{0\}})$
is $N$-equivariant, where the $N$ on the right hand side is given as the
degree $N-2$ boundary homomorphism of the lower exact triangle. Now to prove
the lemma, it suffices to prove that the surjection
$R^{N-2}\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}\twoheadrightarrow\mathbb{H}^{N-2}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto 0)})$
is $N$-equivariant, which factors as the composition of the $(N-2)$-th
cohomology of the map
$R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}\rightarrow
R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\\{0\\}}$ and the inverse
of the isomorphism
$R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\\{0\\}}\rightarrow\mathbb{H}^{\cdot}(\mathrm{dR}_{\overline{Y^{\mathrm{an}}}/(\mathrm{pt},\mathbb{N},1\mapsto
0)})$ of the left (or right) column in the above diagram.
We are now reduced to showing the $N$-equivariance of the natural map
$R^{N-2}\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}\rightarrow\mathbb{H}^{N-2}(R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\\{0\\}})$. But this
clearly follows from the commutative diagram
$\begin{array}{ccccc}R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}[-1]&\longrightarrow&R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,(0))}&\longrightarrow&R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}\\\downarrow&&\downarrow&&\downarrow\\R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\{0\}}[-1]&\longrightarrow&R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(\mathrm{pt},(0))}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\{0\}}&\longrightarrow&R\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto T)}\otimes^{\mathbb{L}}_{\mathcal{O}_{S}}\mathbb{C}_{\{0\}}\end{array}$
∎
Thus it now remains to show that the residue $N$ at $0$ of the
$H_{0}$-$\tau\sigma^{-1}\chi$ eigenpart of
$R^{N-2}\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}$ is maximally nilpotent. But by Corollary 2.2.3 of [Ill94], if we identify
the fibre of the $H_{0}$-$\tau\sigma^{-1}\chi$ eigenpart of
$R^{N-2}\pi^{\mathrm{an}}_{\ast}\mathrm{dR}_{Y^{\mathrm{an}}/(S,\mathbb{N},1\mapsto
T)}$ at $0$ with the fibre at $s$, where $s\neq 0$ and
$(s^{de}u^{\prime})^{N}\neq 1$, then the monodromy $T_{s}$ at $s$ around $0$
is identified with $\mathrm{exp}(-2\pi iN)$. We see from Proposition 1.5 and
Remark 2.4 that outside $T=0$, the family $Y^{\mathrm{an}}$ is just the
projective variety
$u^{\prime}T^{de}(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots
X_{N}$ over the punctured analytic disc with coordinate $T$, where
$u^{\prime}$ should be viewed as its image in $\mathbb{C}$. As
$Y^{\mathrm{an}}$ is (over the punctured unit disc) the base change of the
projective variety
$u^{\prime}T(X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N})=NX_{1}X_{2}\cdots X_{N}$
along $S\rightarrow S$, $T\mapsto T^{de}$, we know the monodromy $T_{s}$ can
be identified with $\rho_{s}(\gamma_{\infty})^{de}$, which is again maximally
unipotent by the hypothesis of our Theorem 1.1, since the coefficient field
$\mathbb{C}$ is of characteristic $0$. Now the maximal nilpotence of $N$
follows from the monodromy $T_{s}=\mathrm{exp}(-2\pi iN)$ on the eigenspace of
the $H_{0}$ action being maximally unipotent.
## References
* [ACC+18] Patrick B. Allen, Frank Calegari, Ana Caraiani, Toby Gee, David Helm, Bao V. Le Hung, James Newton, Peter Scholze, Richard Taylor, and Jack A. Thorne, _Potential automorphy over CM fields_ , arXiv e-prints (2018), arXiv:1812.09999.
* [BLGHT09] Tom Barnet-Lamb, David Geraghty, Michael Harris, and Richard Taylor, _A family of Calabi–Yau varieties and potential automorphy II_ , Publications of the Research Institute for Mathematical Sciences 47 (2009).
* [Gro63] Alexander Grothendieck, _Éléments de géométrie algébrique : III. Étude cohomologique des faisceaux cohérents, seconde partie_ , Publications Mathématiques de l’IHÉS 17 (1963), 5–91 (fr). MR 163911
* [HK94] Osamu Hyodo and Kazuya Kato, _Exposé v : Semi-stable reduction and crystalline cohomology with logarithmic poles_ , Périodes $p$-adiques - Séminaire de Bures, 1988 (Jean-Marc Fontaine, ed.), Astérisque, no. 223, Société mathématique de France, 1994, talk:5, pp. 221–268 (en). MR 1293974
* [Ill94] Luc Illusie, _Exposé i : Autour du théorème de monodromie locale_ , Périodes $p$-adiques - Séminaire de Bures, 1988 (Jean-Marc Fontaine, ed.), Astérisque, no. 223, Société mathématique de France, 1994, talk:1, pp. 9–57 (fr). MR 1293970
* [KKMSD73a] G. Kempf, F. Knudsen, D. Mumford, and B. Saint-Donat, _Equivariant embeddings of tori_ , pp. 1–52, Springer Berlin Heidelberg, Berlin, Heidelberg, 1973.
* [KKMSD73b] by same author, _Further applications_ , pp. 165–209, Springer Berlin Heidelberg, Berlin, Heidelberg, 1973.
* [KKMSD73c] by same author, _Semi-stable reduction_ , pp. 53–108, Springer Berlin Heidelberg, Berlin, Heidelberg, 1973.
* [Knu73] Finn F. Knudsen, _Construction of nice polyhedral subdivisions_ , pp. 109–164, Springer Berlin Heidelberg, Berlin, Heidelberg, 1973.
* [Ogu18] Arthur Ogus, _Lectures on logarithmic algebraic geometry_ , Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2018.
* [Tsu99] Takeshi Tsuji, _p-adic étale cohomology and crystalline cohomology in the semi-stable reduction case_ , Inventiones mathematicae 137 (1999), no. 2, 233–411.
# Validations and corrections of the SFD and _Planck_ reddening maps based on
LAMOST and _Gaia_ data
Yang Sun Department of Astronomy, Beijing Normal University, Beijing 100875,
China Steward Observatory, University of Arizona, Tucson, AZ 85721, USA
Haibo Yuan Department of Astronomy, Beijing Normal University, Beijing 100875,
China<EMAIL_ADDRESS>Bingqiu Chen South-Western Institute for Astronomy
Research, Yunnan University, Kunming 650500, China
(Received Feb 16, 2022; Revised Mar 22, 2022; Accepted Mar 30, 2022)
###### Abstract
Precise correction of dust reddening is fundamental to obtain the intrinsic
parameters of celestial objects. The Schlegel et al. (SFD) and the _Planck_ 2D
extinction maps are widely used for the reddening correction. In this work,
using accurate reddening determinations of about two million stars from the
Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) data
release 5 (DR5) and _Gaia_ DR2, we check and calibrate the SFD and _Planck_
maps in the middle and high Galactic latitudes. The maps show similar
precision in reddening correction. We find small yet significant spatially
dependent biases for the four maps, which are similar between the SFD and
Planck2014-R maps, and between the Planck2014-Tau and Planck2019-Tau maps. The
biases show a clear dependence on the dust temperature and extinction for the
SFD and Planck2014-R maps. While those of the Planck2014-Tau and
Planck2019-Tau maps have a weak dependence on the dust temperature, they both
strongly depend on the dust spectral index. Finally, we present corrections of
the SFD and _Planck_ extinction maps within the LAMOST footprint, along with
empirical relations for corrections outside the LAMOST footprint. Our results
provide important clues for the further improvement of the Galactic all-sky
extinction maps and lay a significant foundation for accurate extinction
correction in the era of precision astronomy.
Interstellar dust extinction (837); Galaxy stellar content (621)
## 1 Introduction
Interstellar dust forms from the condensation of heavy elements that are
expelled into space by stellar winds or explosions. In the ultraviolet
(UV), optical, and near-infrared (NIR) bands, dust absorbs and scatters the
light passing through it. Therefore, correction for the effect of dust
extinction is fundamental to reveal the intrinsic properties of observed
objects. Existing extinction maps serve as a straightforward and convenient
tool to correct for extinction.
The two-dimensional (2D) extinction map of Schlegel et al. (1998, hereafter
SFD) is most widely used for dust corrections. The SFD map is based on the
Infrared Astronomical Satellite (IRAS) 100 $\mu$m and the Dust Infrared
Background Experiment (DIRBE) 100 and 240 $\mu$m data. The far-infrared (FIR)
emission is first modeled by a modified blackbody to obtain the dust
temperature, which is then used to determine the dust column density and
finally converted to reddening by calibrating against 389 elliptical
galaxies. Unfortunately, the map suffers from moderate systematic
uncertainties, because the dust temperatures have a low spatial resolution and
because the cosmic infrared background (CIB) and the zodiacal light have not
been fully removed.
Many works have been carried out to validate and calibrate the SFD map. By
comparing the surface number density and the average colors of galaxies from
the Sloan Digital Sky Survey (SDSS; York et al. 2000) in the regions with
different extinctions, Yahata et al. (2007) found that, toward low-reddening
lines of sight, the number density of galaxies increases with $E(B-V)$
regardless of whether extinction is corrected with the SFD map, which is the
opposite of the expected effect of Galactic dust. This unexpected behavior was
explained by contamination of the SFD emission map by extragalactic infrared
emission at the level of about 0.01 magnitudes. Using
151,637 passively evolving galaxies from the SDSS, Peek & Graves (2010) have
presented corrections to the SFD map in the high Galactic latitude region.
They found that the SFD map under-predicts extinction in regions of low dust
temperature. The result is further confirmed by a large-scale reddening map
from neutral hydrogen emission (Lenz et al. 2017). By measuring the reddening
of a sample of over 50,000 QSOs from the SDSS DR7, Wolf (2014) concluded that
there is a non-linearity in the SFD map (see also Figure 10 of Li et al.
2017), which may be caused by pollution from unresolved extragalactic infrared
emission. In addition, Schlafly et al. (2010), Schlafly & Finkbeiner (2011)
and Yuan et al. (2013) measured the stellar reddening and found that the SFD
extinction map overestimates $E(B-V)$ by about 14 percent.
Although many works have checked the errors of the SFD map, further
investigations are still needed due to the limitations of previous works. The
limitations include large statistical uncertainties caused by small sample
sizes, systematic calibration errors in the photometric data used, and invalid
assumptions. Thanks to the Large Sky Area Multi-Object Fiber
Spectroscopic Telescope (LAMOST; Cui et al. 2012) and _Gaia_ (Gaia
Collaboration et al. 2016), reddening values of an enormous number of stars
can be estimated with high accuracy. The LAMOST surveys (Zhao et al. 2012;
Deng et al. 2012; Liu et al. 2014) have obtained high-quality spectra and
precise atmospheric parameters including effective temperature, surface
gravity, and metallicity for millions of stars. Their reddening values
$E(G_{\rm BP}-G_{\rm RP})$ can be accurately measured from the star-pair
method (Yuan et al., 2013) by combining with the _Gaia_ photometric data.
In this work, we first fit empirical temperature- and reddening-dependent
coefficients ($R$) to convert $E(G_{\rm BP}-G_{\rm RP})$ into $E(B-V)$. Then, by
comparing with the SFD dust map, we produce a correction map for it and
investigate the potential error sources. Since Planck Collaboration et al.
(2014) (hereafter Planck2014) applied a similar method as SFD to build a new
extinction map (hereafter Planck2014-R) and provided another one based on 353
GHz absorption (hereafter Planck2014-Tau), we inspect and correct them at the
same time. The map from Irfan et al. (2019) (hereafter Planck2019-Tau), which
is an update of Planck2014-Tau, is also considered.
The paper is organized as follows. We describe our data and the method of
calculating $E(B-V)$ in Section 2. The details of the four extinction maps and
the way to build the correction factor _k_ maps are introduced in Section 3.
We present the final correction factor _k_ maps and discuss potential error
sources of the extinction maps in Section 4. We summarize in Section 5.
## 2 Accurate reddening estimation from LAMOST and _Gaia_ data
We use a star-pair method (Yuan et al. 2013) to measure reddening values of
individual stars. The method assumes that stars of the same stellar
atmospheric parameters should have the same intrinsic colors. The method
requires a control sample of stars that are unreddened or of very low
reddening. We can then obtain the intrinsic colors of a reddened star from its
pairs in the control sample that have the same atmospheric parameters. In
this work, we calculate values of $E(G_{\rm BP}-G_{\rm RP})$ using the same
star-pair algorithm of Yuan et al. (2015).
### 2.1 LAMOST and Gaia data
The stellar parameters used in this analysis are from the LAMOST Data Release
5 (DR5; Luo et al. 2015). The LAMOST spectra have a resolution of about
$R\sim$ 1800 and cover a wavelength range of 3700–9000 Å. The LAMOST DR5 has provided
stellar atmospheric parameters (effective temperatures $T_{\mathrm{eff}}$,
surface gravities $\log\mathrm{g}$, and metallicities $\mathrm{[Fe/H]}$) for
over 5 million stars, with the LAMOST Stellar Parameter Pipeline (LASP; Wu et
al. 2011). The typical uncertainties of the stellar parameters are about 110
K, 0.2 dex and 0.15 dex for $T_{\mathrm{eff}}$, $\log\mathrm{g}$ and
$\mathrm{[Fe/H]}$, respectively (Luo et al. 2015). Note the internal
uncertainties are significantly lower (e.g., Niu et al. 2021).
To obtain accurate reddening values of LAMOST stars, we adopt the photometric
data from the _Gaia_ DR2 (Gaia Collaboration et al. 2018). The _Gaia_ DR2 provides
high precision $G$ magnitudes of $\sim$ 1.7 billion sources and $G_{\rm BP}$
and $G_{\rm RP}$ photometric measurements of $\sim$ 1.4 billion targets. The
_Gaia_ $G$, $G_{\rm BP}$, and $G_{\rm RP}$ filters cover wavelength ranges of
330–1050, 330–680, and 630–1050 nm, respectively. The calibration
uncertainties are 2, 5, and 3 mmag for $G$, $G_{\rm BP}$, and $G_{\rm RP}$,
respectively (Evans et al. 2018).
To validate and correct 2D reddening maps, we only use stars at medium and
high Galactic latitudes, where dust layers can be treated as thin screens and
the reddening values of the observed stars are the same as those at infinite
distances. Stars from the LAMOST DR5 are first cross-matched with the _Gaia_
DR2. We then select stars with the following criteria: $\rm{parallax}>0$,
Galactic latitude $|b|>20^{\circ}$, distance to the Galactic plane
$|Z|>200\,\mathrm{pc}$, effective temperature
$T_{\mathrm{eff}}\in[4000,8000]\,\mathrm{K}$, and LAMOST spectral signal-to-noise
ratio (SNR) at the $g$ band ${\rm{SNR}_{g}}>20$. This yields 2,083,741 sources as
our sample.
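To make these cuts concrete, the following is a minimal NumPy sketch (the
array names are illustrative, not the authors' code; the catalog is assumed
to be already cross-matched):

```python
import numpy as np

# Selection cuts from Section 2.1 applied to the LAMOST x Gaia catalog.
# parallax [mas], b [deg], z [pc], teff [K], snr_g are assumed 1-D arrays.
sel = ((parallax > 0)
       & (np.abs(b) > 20.0)           # Galactic latitude |b| > 20 deg
       & (np.abs(z) > 200.0)          # distance to the plane |Z| > 200 pc
       & (teff >= 4000.0) & (teff <= 8000.0)
       & (snr_g > 20.0))              # LAMOST g-band spectral SNR
sample = np.flatnonzero(sel)          # indices of the ~2.08 million sources
```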
### 2.2 Reddening determination
As we mentioned above, the star-pair method is used to estimate the reddening
values for our sample stars. For a given sample star, its control stars are
selected via the following criteria:
$\Delta T_{\mathrm{eff}}<T_{\mathrm{eff}}\,(0.00003\,T_{\mathrm{eff}})^{2}\,\mathrm{K}$,
$\Delta{\log\mathrm{g}}<0.5\,\mathrm{dex}$,
$\Delta{\mathrm{[Fe/H]}}<0.3\,\mathrm{dex}$, distance to the Galactic plane
$|Z|>1\,\mathrm{kpc}$ and extinction value given by the SFD map
$E(B-V)_{SFD}<0.02\,\mathrm{mag}$. Then by using the same star-pair algorithm
described in Yuan et al. (2015), we obtain $E(G_{\rm BP}-G_{\rm RP})$ and
$E(G_{\rm BP}-G)$ for our sample. Stars with resulting reddening $E(G_{\rm
BP}-G_{\rm RP})<-0.05$ mag are excluded. This leaves 2,067,805 stars in our
sample, which we denote as the ‘LAMOST sample’ in this work.
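As an illustration of this procedure, the following is a minimal Python
sketch of the star-pair estimate (not the authors' actual pipeline; the
input arrays and the minimum number of control pairs are assumptions):

```python
import numpy as np

def starpair_reddening(teff, logg, feh, color, is_control):
    """Sketch of the star-pair method: for each target star, control
    stars (low-reddening: E(B-V)_SFD < 0.02 mag, |Z| > 1 kpc; flagged
    here by `is_control`) with matching atmospheric parameters supply
    the intrinsic color; the reddening is observed minus intrinsic."""
    excess = np.full(teff.size, np.nan)
    for i in range(teff.size):
        # Pairing tolerances quoted in Section 2.2
        dteff = teff[i] * (0.00003 * teff[i]) ** 2
        sel = (is_control
               & (np.abs(teff - teff[i]) < dteff)
               & (np.abs(logg - logg[i]) < 0.5)
               & (np.abs(feh - feh[i]) < 0.3))
        if sel.sum() >= 5:  # minimum number of pairs: an assumed threshold
            excess[i] = color[i] - np.median(color[sel])
    return excess  # e.g. E(G_BP - G_RP) when `color` holds G_BP - G_RP
```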
Reddening coefficient R is usually used to convert reddening values of a given
color to $E(B-V)$. The stellar extinction depends on the convolution of
stellar spectral energy distribution (SED) and the extinction curve.
Therefore, the reddening coefficient R is physically related to the
$T_{\mathrm{eff}}$ and the $E(B-V)$. This effect is particularly strong for
the very broad _Gaia_ passbands. Therefore, we use binary quadratic functions
to fit $R_{G_{\rm BP}-G_{\rm RP}}$ and $R_{G_{\rm BP}-G}$ as functions of
$T_{\mathrm{eff}}$ and $E(B-V)$. A sample of stars is collected via the
following criteria:
* •
$|b|>15$°;
* •
$|Z|>300\,\mathrm{pc}$;
* •
If $T_{\mathrm{eff}}>4500\,\mathrm{K}$, $\mathrm{{SNR}_{g}}>15$. Otherwise,
$\mathrm{{SNR}_{g}}>20$;
* •
$E(B-V)>0.05\,\mathrm{mag}$.
The selection criteria are different from those for selecting stars to correct
reddening maps mentioned in Section 2.1. To make the fitting more reliable,
here we exclude stars in very low reddening regions, and restrict
$\mathrm{{SNR}_{g}}$ more tightly. This yields about 700,000 sources for
fitting. The exact number varies slightly for different extinction maps.
In order to build grids, $E(B-V)$ values are equally divided into 8 bins from
0 to 0.8 mag, and $T_{\mathrm{eff}}$ are also equally divided into 8 bins from
4000 to 8000 K. Then for each grid, the medians of $E(B-V)$,
$T_{\mathrm{eff}}$, and R are estimated. The error of R is also calculated by
the following formula:
$R_{err}=\frac{std(R)}{\sqrt{N}},$ (1)
where $N$ is the number of sources in the grid. Grids with $N<10$ or
$R_{err}>0.03$ are discarded in the fitting.
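A short sketch of this gridding step (assuming 1-D NumPy arrays `ebv`,
`teff`, and `R` holding the per-star map reddening, effective temperature,
and measured coefficient):

```python
import numpy as np

# 8 equal bins in E(B-V) over [0, 0.8] mag and in Teff over [4000, 8000] K
ebv_edges = np.linspace(0.0, 0.8, 9)
teff_edges = np.linspace(4000.0, 8000.0, 9)

grids = []
for i in range(8):
    for j in range(8):
        sel = ((ebv >= ebv_edges[i]) & (ebv < ebv_edges[i + 1]) &
               (teff >= teff_edges[j]) & (teff < teff_edges[j + 1]))
        n = sel.sum()
        if n == 0:
            continue
        r_err = np.std(R[sel]) / np.sqrt(n)   # Equation (1)
        if n >= 10 and r_err <= 0.03:         # quality cuts from the text
            grids.append((np.median(ebv[sel]), np.median(teff[sel]),
                          np.median(R[sel])))
```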
We fit the relations for the four maps separately, considering their different
systematics in $E(B-V)$. As to be mentioned in Section 3.1, the Planck2014-R
map is unreliable in high extinction regions. Therefore, for this map we only
choose grids whose $E(B-V)$ values are in the range of $[0.05,0.5]$ mag, and
in the range of $[0.05,0.7]$ mag for other maps. The fitting coefficients are
listed in Table 1. The results are plotted in Figure 1 and Figure 2. Note that
the trend with $E(B-V)$ is different for the Planck2014-R map. For the other
three maps, $R_{G_{\rm BP}-G_{\rm RP}}$ and $R_{G_{\rm BP}-G}$ decrease as
$E(B-V)$ increases. For the Planck2014-R map, $R_{G_{\rm BP}-G_{\rm RP}}$ and
$R_{G_{\rm BP}-G}$ increase as $E(B-V)$ increases at $E(B-V)<0.4$ mag. This is
because the Planck2014-R map tends to underestimate reddening in high
extinction regions.
Via the above relations, the $E(B-V)_{\rm LAMOST}$ values can be computed:
$E(B-V)_{\rm LAMOST}=\frac{E(G_{BP}-G_{RP})}{R_{G_{BP}-G_{RP}}}$ (2)
The typical errors of $E(B-V)_{\rm LAMOST}$ for individual stars are around
0.01–0.02 mag.
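For illustration, the conversion can be written out with the Table 1
coefficients; a sketch (assuming per-star arrays `e_bp_rp`, `teff`, and
`ebv_sfd`, the last taken from the SFD map):

```python
import numpy as np

def reddening_coefficient(teff, ebv_map, c):
    """Functional form of Table 1: R = C0 + C1*y + C2*y^2 + C3*x
    + C4*x*y + C5*x^2, with x = Teff and y = E(B-V) from the dust map."""
    x, y = teff, ebv_map
    return c[0] + c[1]*y + c[2]*y**2 + c[3]*x + c[4]*x*y + c[5]*x**2

# Coefficients of R_{GBP-GRP} for the SFD map (first row of Table 1)
c_sfd = [1.097e+00, -2.415e-01, 2.437e-01, -7.219e-06, -4.504e-05, 7.716e-09]

# Equation (2)
ebv_lamost = e_bp_rp / reddening_coefficient(teff, ebv_sfd, c_sfd)
```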
## 3 Corrections of reddening maps
### 3.1 The SFD and Planck maps
The SFD map traces dust reddening via thermal emission in the far-infrared
based on the IRAS data. The IRAS 100 $\mu$m map was calibrated to match the
COBE/DIRBE data. The zodiacal light and CIB contaminations were removed. The
DIRBE 100 and 240 $\mu$m data were then used to estimate the dust temperature
at a spatial resolution of 1.3∘ and to transform the IRAS 100 $\mu$m flux to
dust optical depth. By using a sample of 389 elliptical galaxies, the dust map
was finally normalized to $E(B-V)$ reddening map. The SFD reddening map has a
spatial resolution of 6.1 arcmin.
The Planck2014-R and Planck2014-Tau maps are based on the _Planck_ 350, 550,
and 850 $\mu$m data from the HFI 2013 delivery maps (Planck Collaboration et
al. 2014) and the IRAS 100 $\mu$m data. Dust parameters, including the dust
optical depth $\tau_{353}$, dust temperature $T_{\rm obs}$, and spectral index
of the dust emission $\beta_{\rm obs}$, were obtained by the $\chi^{2}$ fit of
the observed dust SED with a modified blackbody (MBB) model. The dust radiance
$\mathcal{R}$ was also calculated by integrating the MBB fit. Finally,
Planck2014-R and Planck2014-Tau reddening maps were respectively obtained by
transforming $\mathcal{R}$ and $\tau_{353}$ to $E(B-V)$ using extinction
measurements of SDSS quasars. Both maps have a resolution of 5 arcmin.
Planck2014 recommended that their Planck2014-R reddening map should only be
used to estimate reddening in lines of sight where $E(B-V)<$ 0.3 mag. However,
to make a more robust comparison, we slightly enlarged this limit by adopting
sightlines of reddening $E(B-V)<0.5$ mag. Finally, Planck2014 indicated an
offset of $-0.003$ mag for the SFD map. We thus corrected the SFD map by
adding this offset.
Based on the _Planck_ Release 2 353, 545, and 857 GHz maps as well as the IRAS
100 $\mu$m data, Irfan et al. (2019) determined the MBB model parameters of
thermal dust emission by a new sparsity-based, parametric method and presented
the Planck2019-Tau map. By adopting the new method, they are able to produce
full-resolution MBB parameter maps without smoothing and account for the CIB
without removing thermal dust emission through over-smoothing. We use the same
coefficient for the Planck2014-Tau map to convert $\tau_{353}$ to $E(B-V)$.
The Planck2019-Tau map has a resolution of 5 arcmin.
### 3.2 Correction Factor
To validate and correct the SFD and _Planck_ maps, we first obtain the
$E(B-V)_{\rm map}$ values for the individual stars in the LAMOST sample from
the SFD and _Planck_ maps, and then compare these reddening values to those
derived from the star-pair method $E(B-V)_{\rm LAMOST}$. Sample stars are then
divided into different subfields (pixels) by the HEALPix grid (Gorski et al.,
2005) at Nside = 64 (with a resolution of about 1°). There are typically 100
stars in one pixel. For each pixel, we assume that $E(B-V)_{\rm
LAMOST}=k\times E(B-V)_{\rm map}$, where $k$ is the correction factor and is
estimated by averaging the ratios of $E(B-V)_{\rm LAMOST}$ and $E(B-V)_{\rm
map}$ after 3$\sigma$ clipping. Its uncertainty is derived by,
$k_{\rm err}=\frac{Std(\frac{E(B-V)_{\rm LAMOST}-E(B-V)_{\rm fit}}{E(B-V)_{\rm
map}})}{\sqrt{N}},$ (3)
where $E(B-V)_{\rm fit}=k\times E(B-V)_{\rm map}$ and N is the total number of
sources for a given pixel. Figure 3 shows the comparisons of $E(B-V)_{\rm
LAMOST}$ and $E(B-V)_{\rm map}$ in six selected sight-lines. For a given
sightline, the standard deviations between different maps are similar. The
typical standard deviations are between 0.01 and 0.03 mag, increasing towards
high extinction sight-lines.
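A minimal sketch of this per-pixel estimate, using the healpy package (our
choice for illustration) and assuming per-star arrays `l`, `b` (Galactic
coordinates in degrees), `ebv_lamost`, and `ebv_map`; the clipping loop is a
simple iterative variant of the 3$\sigma$ clipping described above:

```python
import numpy as np
import healpy as hp

nside = 64
ipix = hp.ang2pix(nside, l, b, lonlat=True)  # HEALPix pixel of each star

k = np.full(hp.nside2npix(nside), np.nan)
k_err = np.full_like(k, np.nan)
for p in np.unique(ipix):
    sel = ipix == p
    ratio = ebv_lamost[sel] / ebv_map[sel]
    for _ in range(3):  # iterative 3-sigma clipping of the ratios
        m, s = np.mean(ratio), np.std(ratio)
        ratio = ratio[np.abs(ratio - m) < 3.0 * s]
    k[p] = np.mean(ratio)
    resid = (ebv_lamost[sel] - k[p] * ebv_map[sel]) / ebv_map[sel]
    k_err[p] = np.std(resid) / np.sqrt(sel.sum())  # Equation (3)
```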
## 4 Result and Discussion
### 4.1 Removing bad pixels
Figure 4 shows the spatial variations of $k_{\rm err}$ for the SFD and
_Planck_ reddening maps. Excluding pixels in the very low extinction regions
of the maps, most $k_{\rm err}$ values are less than 0.1; only a few pixels
have $k_{\rm err}>0.1$ due to their small numbers of stars or large deviations
between ${E(B-V)}_{\mathrm{LAMOST}}$ and ${E(B-V)}_{\rm fit}$.
Figure 5 plots the spatial and histogram distributions of star numbers in the
individual pixels for the SFD map as an example. There are about 100 stars for
a typical pixel. Only 887 pixels have less than 10 stars. The results for the
three Planck maps are similar. We thus exclude these pixels in the following
analyses.
Figure 6 shows ${\rm Std}_{\rm fit}$ against $E(B-V)_{\rm map}$ for the SFD
and _Planck_ reddening maps, where ${\rm Std}_{\rm fit}$ is the standard
deviation of the reddening fit residuals ${\rm Std}_{\rm fit}=Std(E(B-V)_{\rm
LAMOST}-E(B-V)_{\rm fit})$. Values of $k_{\rm err}$ are also color-coded. Only
pixels having over 10 stars are plotted. ${\rm Std}_{\rm fit}$ increases with
$E(B-V)_{\rm map}$. Second-order polynomials are applied to fit the binned
median values. The pixels that have ${\rm Std}_{\rm fit}$ values larger than
the fitted curves by 2$\sigma$ deviation are excluded. There are about 400
pixels in each panel of Figure 6 that have very large values of ${\rm
Std}_{\rm fit}$ and consequently $k_{\rm err}>0.1$. Note that the pixels
eliminated are not identical across the four extinction maps. In total, 739
pixels are discarded from the four extinction maps.
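The rejection step can be sketched as follows (with per-pixel arrays
`ebv_pix` and `std_fit`; the way the 2$\sigma$ level is measured here, from
the scatter of all pixels about the curve, is an assumption):

```python
import numpy as np

# Bin edges: 0.01-mag bins below 0.1 mag, 0.1-mag bins above
edges = np.concatenate([np.arange(0.0, 0.1, 0.01), np.arange(0.1, 0.9, 0.1)])
centers, medians = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (ebv_pix >= lo) & (ebv_pix < hi)
    if sel.sum() > 0:
        centers.append(np.median(ebv_pix[sel]))
        medians.append(np.median(std_fit[sel]))

coef = np.polyfit(centers, medians, deg=2)    # second-order polynomial
curve = np.polyval(coef, ebv_pix)
sigma = np.std(std_fit - curve)
good = std_fit < curve + 2.0 * sigma          # pixels above are excluded
```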
We have checked the reddening-distance profiles of these pixels. About half
(374) of them have dust clouds at large distances to the Galactic plane $Z$
(Figure 7). For these pixels, there are lots of stars locating in front of the
dust cloud. Thus the reddening values of these stars $E(B-V)_{\rm LAMOST}$ are
much smaller than those from the reddening maps $E(B-V)_{\rm map}$, causing
large values of Stdfit. Figure 8 shows the spatial distribution of all the
discarded pixels for the four dust maps. The number of rejected times are also
marked. It must be admitted that not all pixels with associated dust cloud are
well eliminated in the four extinction maps. However, due to the small number
of such pixels, they will not affect the statistical results obtained in the
following analyses.
The final spatial distributions of $E(B-V)_{\mathrm{LAMOST}}$ and
$E(B-V)_{\mathrm{map}}$ for the SFD and _Planck_ maps are shown in Figure 9.
The differences ($E(B-V)_{\rm LAMOST}-E(B-V)_{\rm map}$ and $k$) are shown in
Figure 10 and Figure 11. On one hand, the overall agreements between the
$E(B-V)_{\mathrm{LAMOST}}$ and $E(B-V)_{\mathrm{map}}$ maps are excellent. On
the other hand, the small yet significant discrepancies show clear spatially-
dependent patterns. The patterns for the SFD and Planck2014-R maps are
similar, while those for the Planck2014-Tau and Planck2019-Tau maps are
similar.
### 4.2 Precisions of the SFD and _Planck_ maps
Before we discuss the possible factors that affect the spatial variations of
$k$, we first check and validate reddening correction precisions of the SFD
and _Planck_ maps. As there are systematics of $E(B-V)$ values between the
four reddening maps, we compare $E(G_{\rm BP}-G_{\rm RP})$ values here. For
each pixel of each map, we estimate the dispersion of the differences between
$E(G_{\rm BP}-G_{\rm RP})_{\rm LAMOST}$ and $E(G_{\rm BP}-G_{\rm RP})_{\rm
map}$, where $E(G_{\rm BP}-G_{\rm RP})_{\rm map}$ is calculated using the
corresponding reddening coefficient. For each map, the median dispersion
values are plotted against $E(G_{\rm BP}-G_{\rm RP})_{\rm LAMOST}$ in Figure
12. Dispersion values increase with the reddening values of the pixels. The
curves of the four maps are very close, suggesting that the four reddening
maps can achieve similar precisions in reddening correction. In the low-
extinction region ($E(G_{\rm BP}-G_{\rm RP})$ $<$ 0.1 mag), the Planck2014-Tau
map is slightly worse. In the relatively high-extinction region, the
Planck2019-Tau map is slightly better. The typical dispersion is 0.016, 0.022,
and 0.045 mag at $E(G_{\rm BP}-G_{\rm RP})_{\rm LAMOST}=$ 0.01, 0.1, and 0.7
mag, respectively. Note that the dispersion values include the measurement
uncertainties of $E(G_{\rm BP}-G_{\rm RP})_{\rm LAMOST}$ and are therefore
upper limits on the reddening correction precision.
### 4.3 Properties correlated with the correction factor
In Figure 10, the spatial distributions of the correction factor _k_ show
clear patterns. We explore here the possible origins of the patterns.
#### 4.3.1 Reddening values
The panels in the first row of Figure 13 plot the correction factor $k$
against reddening values $E(B-V)$ from the SFD and _Planck_ maps. All the
pixels are divided into different bins according to their $E(B-V)$ values. The
bin width is 0.01 mag when $E(B-V)<$ 0.1 mag, and 0.1 mag otherwise. The
median values of $E(B-V)_{\rm map}$ and $k$ are over-plotted in red pluses and
listed in Table 2.
The overall relations between $k$ and $E(B-V)_{\rm map}$ are similar for the
four reddening maps (particularly between the SFD and Planck2014-R maps, and
between the Planck2014-Tau and Planck2019-Tau maps). All show dips at
$E(B-V)_{\rm map}\sim 0.03$ mag when $E(B-V)_{\rm map}$ $<$ 0.1 mag. The
trends become flat around 1 when $E(B-V)_{\rm map}$ $>$ 0.1 mag, as expected
considering that reddening-dependent coefficients are used (see Section 2.2)
to compute the $E(B-V)_{\rm LAMOST}$ values and then the $k$ values. The dips
in low reddening regions are likely caused by a stronger over-estimation of
reddening there than in the high reddening regions.
We have explored the relations between $k$ and $E(B-V)_{\rm map}$ for five
different sky areas in the remaining rows of Figure 13. For a given sky area,
the relations are similar between the SFD and Planck2014-R maps, and between
the Planck2014-Tau and Planck2019-Tau maps. However, for a given map, the
relations are different between different sky areas, suggesting that reddening
is not the main factor for the spatial variations of $k$.
#### 4.3.2 Dust Temperature and spectral index
Since the SFD and _Planck_ maps are all based on the MBB model of dust thermal
emissions, the correction factor $k$ could be correlated to the model
parameters, i.e. dust temperature $T_{\rm d}$ and spectral index $\beta$. Note
that $T_{\rm d}$ and $\beta$ parameters are usually fitted simultaneously and
suffer strong anti-correlation in the MBB model (e.g., see Figure 7 of
Planck2014). The SFD map adopted a constant value of $\beta=2$, while the
Planck2014-R map was based on the dust radiance ($\mathcal{R}$), i.e. the
integrated intensity (see Equation 9 in Planck2014), and thus does not suffer
from any degeneracy in the fit parameters. The possible effect of the spectral
index $\beta$ on the Planck2014-R map should be very weak. Therefore, we first
investigate the effect of $T_{\rm d}$ on $k$ for the SFD and Planck2014-R
maps.
The upper-left and upper-right panels of Figure 14 plot $k$ as functions of
$E(B-V)$ and $T_{\rm d}$ for the SFD and Planck2014-R maps, respectively. A
strong dependence on $T_{\rm d}$ is found for the SFD map, particularly in the
low extinction regions ($E(B-V)<0.3$ mag), where $k$ decreases when $T_{\rm
d}$ increases. It suggests that the SFD map underestimates/overestimates
reddening in low/high dust temperature regions. One possible explanation is
that the SFD $T_{\rm d}$ map “underestimates/overestimates” $T_{\rm d}$ in
high/low dust temperature regions. The result is consistent with Peek & Graves
(2010) and Lenz et al. (2017).
For the Planck2014-R map, the dependence on $E(B-V)$ and $T_{\rm d}$ is more
clearly visible, in both the low and high extinction regions. It suggests that
the Planck2014-R map underestimates/overestimates reddening in low/high dust
temperature regions, and indicates that the Planck dust radiance map
underestimates/overestimates the integrated intensity in low/high dust
temperature regions.
We also investigate the dependence of $k$ on $T_{\rm d}$ for the
Planck2014-Tau and Planck2019-Tau maps. Very weak relations are found.
However, a strong dependence on $\beta$ is found for the above two maps. To
show the dependence more clearly, we first perform a normalization of the $k$
values with respect to the red lines in the top panels of Figure 13 to get rid
of the effect of reddening. The normalized $k$ values are named
$k^{{}^{\prime}}$ hereafter. The upper-left panel of Figure 15 shows
$k^{{}^{\prime}}$ as functions of $T_{\rm d}$ and $\beta$ for the
Planck2014-Tau map. We also divide the pixels into four different reddening
ranges, and their results are plotted in other upper panels. It can be seen
that for a given $T_{\rm d}$, $k^{{}^{\prime}}$ increases as $\beta$
increases. Similar results are found for the Planck2019-Tau map, as shown in
Figure 16.
### 4.4 Fitting of the correction factor
To provide corrections of the SFD and _Planck_ maps for regions outside the
footprints in the current work, we fit the correction factor $k$ as functions
of $E(B-V)$ and $T_{\rm d}$ for the SFD and Planck2014-R maps, and the
normalized correction factor $k^{{}^{\prime}}$ as functions of $T_{\rm d}$ and
$\beta$ for the Planck2014-Tau and Planck2019-Tau maps.
Two-dimensional fourth-order polynomials with 15 free parameters are adopted
in the fitting, using the Python package SciPy. The resultant coefficients are
listed in Table 3. The fitting residuals are shown in Figure 14 for the SFD
and Planck2014-R maps, Figure 15 for the Planck2014-Tau map, and Figure 16 for
the Planck2019-Tau map, respectively. One can see that global trends with
$E(B-V)$, $T_{\rm d}$ or $\beta$ are successfully removed.
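A sketch of this fit with scipy.optimize.curve_fit (here `x`, `y`, `z` are
the per-pixel predictor and correction-factor arrays; for the SFD and
Planck2014-R maps $x=T_{\rm d}$ and $y=E(B-V)$, for the Tau maps
$x=T_{\rm d}$ and $y=\beta$):

```python
import numpy as np
from scipy.optimize import curve_fit

def poly4_2d(X, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o):
    """Two-dimensional fourth-order polynomial with 15 free parameters
    (the functional form quoted in Table 3)."""
    x, y = X
    return (a*x**4 + b*y**4 + c*y*x**3 + d*x*y**3 + e*x**2*y**2
            + f*x**3 + g*y**3 + h*y*x**2 + i*x*y**2
            + j*x**2 + k*y**2 + l*x*y + m*x + n*y + o)

popt, pcov = curve_fit(poly4_2d, (x, y), z)
k_fit = poly4_2d((x, y), *popt)
residual = z - k_fit
```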
Figure 17 shows spatial distributions of $k$/$k^{{}^{\prime}}$, $k_{\rm
fit}$/$k^{{}^{\prime}}_{\rm fit}$, and $\Delta k$/$\Delta k^{{}^{\prime}}$ for
the SFD and Planck maps. The fluctuations are smaller in the residual maps.
However, the fitting residuals still show similar spatial patterns, despite
the fact that they show no dependence on $E(B-V)$, $T_{d}$ or $\beta$. Figure
18 compares fitting residuals between different maps. The correlation
coefficients are between 0.63 – 0.90. The result indicates that there are
likely other factors (e.g., dust sizes/reddening laws) that contribute to the
spatial variations of the correction factors. Such possibilities will be
explored in future work.
## 5 Summary
Combining high-precision LAMOST DR5 spectroscopic data and _Gaia_ DR2
photometric data, we have calculated the color excess $E(G_{\rm BP}-G_{\rm
RP})$ for about two million well selected middle and high Galactic latitude
stars by using the star-pair technique. With empirical temperature- and
reddening-dependent coefficients, values of $E(B-V)_{\mathrm{LAMOST}}$ are
further obtained to check and correct the SFD, Planck2014-R, Planck2014-Tau,
and Planck2019-Tau reddening maps. By comparing $E(B-V)_{\mathrm{LAMOST}}$
with $E(B-V)_{\mathrm{map}}$, the following results are found:
1. 1.
On one hand, the overall agreements between the $E(B-V)_{\mathrm{LAMOST}}$ and
$E(B-V)_{\mathrm{map}}$ maps are excellent (Figure 9). On the other hand, the
small yet significant discrepancies show clear spatially-dependent patterns
(Figure 10). The patterns for the SFD and Planck2014-R maps are similar, while
those for the Planck2014-Tau and Planck2019-Tau maps are similar.
2. 2.
The four reddening maps can achieve similar precisions in reddening correction
(Figure 12). In the low-extinction region ($E(G_{\rm BP}-G_{\rm RP})$ $<$ 0.1
mag), the Planck2014-Tau map is slightly worse. In the relatively high-
extinction region, the Planck2019-Tau map is slightly better.
3. 3.
For a given sky area, the $k$ and $E(B-V)_{\rm map}$ relations are similar
between the SFD and Planck2014-R maps, and between the Planck2014-Tau and
Planck2019-Tau maps. However, for a given map, the relations are different
among different sky areas (Figure 13), suggesting that reddening is not the
main factor for the spatial variations of $k$.
4. 4.
For the SFD and Planck2014-R maps, the dependence of $k$ on $E(B-V)$ and
$T_{\rm d}$ is clearly visible (Figure 14). $k$ decreases when $T_{\rm d}$
increases. It suggests that the two maps underestimate/overestimate
reddening in low/high dust temperature regions. One possible explanation is
that the SFD $T_{\rm d}$ map “underestimates/overestimates” $T_{\rm d}$ in
high/low dust temperature regions, consistent with Peek & Graves (2010) and
Lenz et al. (2017). Meanwhile, the Planck dust radiance map possibly
underestimates/overestimates the integrated intensity in low/high dust
temperature regions.
5. 5.
For the Planck2014-Tau and Planck2019-Tau maps, the dependence of $k$ on
$T_{\rm d}$ is very weak, but very strong on $\beta$. For a given $T_{\rm d}$,
the normalized correction factor $k^{{}^{\prime}}$ increases as $\beta$
increases (Figure 15 and Figure 16).
The $k$ maps and their errors are publicly available111http://paperdata.china-
vo.org/Dustmaps-correction/extinction-maps-correction.zip and can be used to
perform corrections of the SFD and _Planck_ maps. For regions outside the
footprints in the current work, relations of $k$ as functions of $E(B-V)$ and
$T_{\rm d}$ for the SFD and Planck2014-R maps, and $k^{{}^{\prime}}$ as
functions of $T_{\rm d}$ and $\beta$ for the Planck2014-Tau and Planck2019-Tau
maps, can be used to correct global trends with $E(B-V)$, $T_{\rm d}$ or
$\beta$ to some extent (Table 3 and Figure 17). It should be noticed that,
applications of the empirical correction relations are limited by the range of
$E(B-V)$ for fitting. For the convenience of use, a python routine is
provided222https://github.com/qy-sunyang/Extinction-Maps-Correction for such
purposes. However, the fitting residuals between different maps still show
similar spatial patterns and good correlations (Figure 18), indicating that
there are likely other factors (e.g., dust sizes/reddening laws) that
contribute to the spatial variations of the correction factors. Such
possibilities will be explored in future work.
Our results provide important clues for the further improvement of the
Galactic all-sky extinction maps and lay an important foundation for the
accurate extinction correction in the era of precision astronomy.
We acknowledge the referee for his/her valuable comments to improve the
clarity and quality of the manuscript. This work is supported by the National
Key Basic R&D Program of China via 2019YFA0405503, the National Natural
Science Foundation of China through the projects NSFC 12173007, 12173034 and
11603002, and Beijing Normal University grant No. 310232102. We acknowledge
the science research grants from the China Manned Space Project with NO. CMS-
CSST-2021-A08 and CMS-CSST-2021-A09. This work has made use of data from the
European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia),
processed by the Gaia Data Processing and Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has
been provided by national institutions, in particular the institutions
participating in the Gaia Multilateral Agreement. Guoshoujing Telescope (the
Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a
National Major Scientific Project built by the Chinese Academy of Sciences.
Funding for the project has been provided by the National Development and
Reform Commission. LAMOST is operated and managed by the National Astronomical
Observatories, Chinese Academy of Sciences.
## References
* Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, Astronomy and Astrophysics, 616, A1, doi: 10.1051/0004-6361/201833051
* Cui et al. (2012) Cui, X.-Q., Zhao, Y.-H., Chu, Y.-Q., et al. 2012, Research in Astronomy and Astrophysics, 12, 1197, doi: 10.1088/1674-4527/12/9/003
* Deng et al. (2012) Deng, L.-C., Newberg, H. J., Liu, C., et al. 2012, Research in Astronomy and Astrophysics, 12, 735, doi: 10.1088/1674-4527/12/7/003
* Evans et al. (2018) Evans, D. W., Riello, M., De Angeli, F., et al. 2018, Astronomy & Astrophysics, 616, A4, doi: 10.1051/0004-6361/201832756
* Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, Astronomy & Astrophysics, 595, A1, doi: 10.1051/0004-6361/201629272
* Gorski et al. (2005) Gorski, K. M., Hivon, E., Banday, A. J., et al. 2005, The Astrophysical Journal, 622, 759, doi: 10.1086/427976
* Irfan et al. (2019) Irfan, M. O., Bobin, J., Miville-Deschênes, M.-A., & Grenier, I. 2019, Astronomy & Astrophysics, 623, A21, doi: 10.1051/0004-6361/201834394
* Lenz et al. (2017) Lenz, D., Hensley, B. S., & Doré, O. 2017, The Astrophysical Journal, 846, 38, doi: 10.3847/1538-4357/aa84af
* Li et al. (2017) Li, L., Shen, S., Hou, J., et al. 2017, The Astronomical Journal, 153, 88, doi: 10.3847/1538-3881/153/2/88
* Liu et al. (2014) Liu, X., Feltzing, S., Zhao, G., Walton, N., & Whitelock, P. 2014
* Luo et al. (2015) Luo, A.-L., Zhao, Y.-H., Zhao, G., et al. 2015, Research in Astronomy and Astrophysics, 15, 1095, doi: 10.1088/1674-4527/15/8/002
* Niu et al. (2021) Niu, Z., Yuan, H., & Liu, J. 2021, The Astrophysical Journal, 909, 48, doi: 10.3847/1538-4357/abdbac
* Peek & Graves (2010) Peek, J. E. G., & Graves, G. J. 2010, The Astrophysical Journal, 719, 415, doi: 10.1088/0004-637X/719/1/415
* Planck Collaboration et al. (2014) Planck Collaboration, Abergel, A., Ade, P. A. R., et al. 2014, Astronomy & Astrophysics, 571, A11, doi: 10.1051/0004-6361/201323195
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, The Astrophysical Journal, 737, 103, doi: 10.1088/0004-637X/737/2/103
* Schlafly et al. (2010) Schlafly, E. F., Finkbeiner, D. P., Schlegel, D. J., et al. 2010, The Astrophysical Journal, 725, 1175, doi: 10.1088/0004-637X/725/1/1175
* Schlegel et al. (1998) Schlegel, D., Finkbeiner, D., & Davis, M. 1998, Wide Field Surveys in Cosmology, 14, 297. https://ui.adsabs.harvard.edu/abs/1998wfsc.conf..297S/abstract
* Wolf (2014) Wolf, C. 2014, Monthly Notices of the Royal Astronomical Society, 445, 4252, doi: 10.1093/mnras/stu2069
* Wu et al. (2011) Wu, Y., Luo, A.-L., Li, H.-N., et al. 2011, Research in Astronomy and Astrophysics, 11, 924, doi: 10.1088/1674-4527/11/8/006
* Yahata et al. (2007) Yahata, K., Yonehara, A., Suto, Y., et al. 2007, Publications of the Astronomical Society of Japan, 59, 205, doi: 10.1093/pasj/59.1.205
* York et al. (2000) York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, AJ, 120, 1579, doi: 10.1086/301513
* Yuan et al. (2013) Yuan, H. B., Liu, X. W., & Xiang, M. S. 2013, Monthly Notices of the Royal Astronomical Society, 430, 2188, doi: 10.1093/mnras/stt039
* Yuan et al. (2015) Yuan, H.-B., Liu, X.-W., Huo, Z.-Y., et al. 2015, Monthly Notices of the Royal Astronomical Society, 448, 855, doi: 10.1093/mnras/stu2723
* Zhao et al. (2012) Zhao, G., Zhao, Y.-H., Chu, Y.-Q., Jing, Y.-P., & Deng, L.-C. 2012, Research in Astronomy and Astrophysics, 12, 723, doi: 10.1088/1674-4527/12/7/002
Table 1: Temperature- and reddening-dependent reddening coefficients of the Gaia DR2 passbands for different reddening maps. The function form is: $R=C_{0}+C_{1}y+C_{2}y^{2}+C_{3}x+C_{4}xy+C_{5}x^{2}$, where $x$ is the stellar effective temperature and $y$ is $E(B-V)_{map}$.

Band | Extinction Map | $C_{0}$ | $C_{1}$ | $C_{2}$ | $C_{3}$ | $C_{4}$ | $C_{5}$
---|---|---|---|---|---|---|---
$R_{G_{\rm BP}-G_{\rm RP}}$ | SFD | 1.097E+00 | $-$2.415E$-$01 | 2.437E$-$01 | $-$7.219E$-$06 | $-$4.504E$-$05 | 7.716E$-$09
| Planck2014-R | 5.559E$-$01 | 2.738E+00 | $-$3.006E+00 | 5.830E$-$05 | $-$3.468E$-$05 | 3.744E$-$09
| Planck2014-Tau | 1.008E+00 | $-$1.445E$-$03 | $-$2.799E$-$01 | 3.136E$-$05 | $-$1.893E$-$05 | 4.083E$-$09
| Planck2019-Tau | 1.044E+00 | $-$6.634E$-$02 | $-$2.316E$-$01 | 1.761E$-$05 | $-$2.670E$-$06 | 4.432E$-$09
$R_{G_{\rm BP}-G}$ | SFD | 9.040E$-$01 | $-$2.210E$-$01 | 1.750E$-$01 | $-$6.410E$-$05 | $-$1.150E$-$06 | 3.520E$-$09
| Planck2014-R | 5.450E$-$01 | 1.260E+00 | $-$1.360E+00 | 7.520E$-$06 | 9.440E$-$06 | $-$2.620E$-$09
| Planck2014-Tau | 8.560E$-$01 | $-$7.380E$-$02 | $-$1.360E$-$01 | $-$4.310E$-$05 | 1.570E$-$05 | 1.280E$-$09
| Planck2019-Tau | 8.510E$-$01 | $-$1.260E$-$01 | $-$1.080E$-$01 | $-$4.390E$-$05 | 2.720E$-$05 | 1.030E$-$09
Table 2: Correction factor $k$ as a function of $E(B-V)_{\rm map}$ for the SFD and Planck maps. Each pair of columns lists the median $E(B-V)$ and $k_{median}$ for one map, in the order SFD, Planck2014-R, Planck2014-Tau, Planck2019-Tau.

$E(B-V)$ | $k_{median}$ | $E(B-V)$ | $k_{median}$ | $E(B-V)$ | $k_{median}$ | $E(B-V)$ | $k_{median}$ |
---|---|---|---|---|---|---|---
0.009 | 0.997 | 0.017 | 0.835 | 0.009 | 1.216 | 0.009 | 1.065 |
0.016 | 0.728 | 0.025 | 0.777 | 0.015 | 1.010 | 0.015 | 0.884 |
0.025 | 0.710 | 0.035 | 0.830 | 0.025 | 0.962 | 0.024 | 0.890 |
0.034 | 0.783 | 0.045 | 0.854 | 0.035 | 0.944 | 0.035 | 0.890 |
0.045 | 0.834 | 0.055 | 0.909 | 0.045 | 0.960 | 0.045 | 0.911 |
0.055 | 0.913 | 0.065 | 0.944 | 0.054 | 1.017 | 0.055 | 0.975 |
0.065 | 0.962 | 0.075 | 0.985 | 0.065 | 1.041 | 0.065 | 1.008 |
0.075 | 0.971 | 0.085 | 0.972 | 0.075 | 1.033 | 0.074 | 1.017 |
0.084 | 1.002 | 0.095 | 0.969 | 0.085 | 1.035 | 0.085 | 1.013 |
0.095 | 0.981 | 0.105 | 0.980 | 0.095 | 1.005 | 0.094 | 1.006 |
0.105 | 0.997 | 0.136 | 1.029 | 0.105 | 1.032 | 0.105 | 1.000 |
0.138 | 1.024 | 0.249 | 0.985 | 0.141 | 1.035 | 0.141 | 1.023 |
0.252 | 0.989 | 0.350 | 0.992 | 0.251 | 0.984 | 0.251 | 0.978 |
0.355 | 0.976 | 0.427 | 0.991 | 0.362 | 0.980 | 0.358 | 0.972 |
0.450 | 0.969 | | | 0.452 | 1.007 | 0.449 | 0.984 |
0.558 | 0.975 | | | 0.539 | 1.010 | 0.549 | 0.990 |
0.632 | 0.980 | | | | | | |
Table 3: Fitting coefficients of $k$ and $k^{{}^{\prime}}$. The function form
is:
$z=ax^{4}+by^{4}+cyx^{3}+dxy^{3}+ex^{2}y^{2}+fx^{3}+gy^{3}+hyx^{2}+ixy^{2}+jx^{2}+ky^{2}+lxy+mx+ny+o$.
For the SFD and Planck2014-R maps, $z$ is $k$, $x$ is $T_{d}$, and $y$ is
$E(B-V)$. For the Planck2014-Tau and Planck2019-Tau maps, $z$ is
$k^{{}^{\prime}}$, $x$ is $T_{d}$, and $y$ is $\beta$.
| SFD | Planck2014-R | Planck2014-Tau | Planck2019-Tau
---|---|---|---|---
a | -0.0096664 | 0.0007690 | 0.0003437 | -0.0011918
b | 1103.8527321 | -195.2549470 | 25.0345231 | -29.6874082
c | 3.6652110 | 0.0578553 | 0.0308794 | -0.0822918
d | 623.1533468 | -15.6974711 | 7.5020031 | -11.6586509
e | 87.1827827 | 1.7125698 | 0.7068647 | -1.7087918
f | -11592.2395358 | 454.3115073 | -309.0274163 | 399.8471526
g | 0.3264288 | -0.0712684 | -0.0647263 | 0.2236456
h | -219.0927722 | -4.2123580 | -3.8361633 | 10.0293453
i | -3365.6297663 | -57.9146527 | -61.9580752 | 118.9355843
j | 2.4443323 | 2.4844060 | 4.3976829 | -14.3621849
k | 32345.3778957 | 440.2730008 | 1331.7124722 | -2046.8604171
l | 4350.8798470 | 96.5155018 | 165.9610933 | -375.9821150
m | -180.9410685 | -38.5440982 | -133.0115411 | 380.1986985
n | -28699.9073759 | -707.2919166 | -2420.6173146 | 4497.9132419
o | 1576.5430626 | 225.1130841 | 1513.4808280 | -3575.0589374
Figure 1: Reddening coefficients of the ${G_{\rm BP}-G_{\rm RP}}$ color as a function of reddening for different reddening maps. In each panel, the dots and lines are the measured and fitted values, respectively. Different colors denote different temperature ranges of the stars used. For the SFD, Planck2014-Tau, and Planck2019-Tau maps, only dots of $E(B-V)<0.7$ mag are used. For the Planck-R map, only dots of $E(B-V)<0.5$ mag are used. Note that for the SFD, Planck2014-Tau, and Planck2019-Tau maps, the higher the stellar temperature and the lower the reddening, the larger the reddening coefficient. For the Planck-R map, the reddening coefficients are larger for higher reddening values when $E(B-V)<0.45$ mag.

Figure 2: Same as Figure 1 but for the ${G_{\rm BP}-G}$ color. Note that for the SFD, Planck2014-Tau, and Planck2019-Tau maps, the lower the stellar temperature and the lower the reddening, the larger the reddening coefficient.

Figure 3: LAMOST reddening against those from different reddening maps for six selected sight-lines of $E(B-V)$ about 0.02 (two top rows), 0.1 (two middle rows), and 0.4 mag (two bottom rows), respectively. From left to right are the SFD, Planck2014-R, Planck2014-Tau, and Planck2019-Tau maps, respectively. In each panel, the blue line represents the final $k$ and the red dashed line denotes the line of equality. The spatial position, estimated $k$ value, and standard deviation are also labeled. The 1st, 3rd, and 5th rows have typical standard deviations, while the 2nd, 4th, and 6th rows show examples of abnormal standard deviations.

Figure 4: Spatial distributions of the correction factor errors $k_{err}$ in Galactic coordinates for different reddening maps.

Figure 5: Left: spatial distribution of the number of stars in each pixel for the SFD map. Note that due to independent 3$\sigma$ clipping for each extinction map, the numbers of stars in each pixel for the four extinction maps may be slightly different. Right: histogram distribution of the number of remaining sources in each pixel after 3$\sigma$ clipping. The vertical black line indicates $N/pix=10$.

Figure 6: $Std_{fit}$ of the SFD, Planck2014-R, Planck2014-Tau, and Planck2019-Tau maps as a function of reddening. The color of the dots indicates $k_{err}$. In each panel, the dots are divided into different bins; the bin width is 0.01/0.1 mag when the extinction value is smaller/larger than 0.1 mag. The yellow crosses represent the median values, the grey line represents a second-order polynomial fit to the yellow crosses, and the black dashed line is obtained by shifting the grey line up by 2$\sigma$ of the fitting. The dots above the black dashed lines are excluded.

Figure 7: Three examples of sight-lines that show large $Std_{fit}$ values (dots above the black dashed lines in Figure 6). Dust clouds far above the Galactic disk ($Z\sim$500 pc) are found as jumps of $k$ in the panels.

Figure 8: Spatial distribution of the pixels eliminated in the four reddening maps in Galactic coordinates. Gray, orange, pink, and light-blue dots represent pixels removed in four, three, two, and one maps, respectively. The black circles represent pixels where dust clouds similar to those in Figure 7 are found.

Figure 9: All-sky distributions of $E(B-V)_{\mathrm{LAMOST}}$ (left) and $E(B-V)_{\mathrm{map}}$ (right). From top to bottom are the SFD, Planck2014-R, Planck2014-Tau, and Planck2019-Tau maps, respectively. Note that the four maps on the left differ because different reddening coefficients are used.

Figure 10: All-sky distributions of $E(B-V)_{\rm LAMOST}-E(B-V)_{\rm map}$ (left) and $k$ (right). From top to bottom are the SFD, Planck2014-R, Planck2014-Tau, and Planck2019-Tau maps, respectively.

Figure 11: Histogram distributions of $E(B-V)_{\rm LAMOST}-E(B-V)_{\rm map}$ (left) and $k$ (right). From top to bottom are the SFD, Planck2014-R, Planck2014-Tau, and Planck2019-Tau maps, respectively.

Figure 12: Typical dispersion values of the differences between $E(G_{\rm BP}-G_{\rm RP})_{\rm LAMOST}$ and $E(G_{\rm BP}-G_{\rm RP})_{\rm map}$ plotted against $E(G_{\rm BP}-G_{\rm RP})_{\rm LAMOST}$. The orange, blue, green, and pink dots represent results for the SFD, Planck2014-R, Planck2014-Tau, and Planck2019-Tau dust maps, respectively.

Figure 13: Correction factor $k$ as a function of $E(B-V)_{\mathrm{map}}$ in different sky areas. From left to right are the SFD, Planck2014-R, Planck2014-Tau, and Planck2019-Tau maps, respectively. For each panel, the sky area is labelled at the top, and the red crosses mark the median values. The trends with $E(B-V)_{\mathrm{map}}$ differ between areas for all the maps.

Figure 14: Top row: dependence of $k$ on dust temperature and reddening for the SFD and Planck2014-R maps. Second row: fitting residuals. Third row: fitting residuals as a function of dust temperature. Bottom row: fitting residuals as a function of reddening. One-fifth of all pixels are randomly selected to plot the results.

Figure 15: Top row: dependence of $k^{{}^{\prime}}$ on dust temperature and spectral index for the Planck2014-Tau map. Second row: fitting residuals. Third row: fitting residuals as a function of dust temperature. Bottom row: fitting residuals as a function of spectral index. The first column shows the results for all pixels. The other four columns show the results for pixels in four different reddening ranges. One-fifth of all pixels are randomly selected to plot the results.

Figure 16: Same as Figure 15 but for the Planck2019-Tau map.

Figure 17: All-sky maps of $k$ (or $k^{{}^{\prime}}$) (top), $k_{fit}$ (or $k_{fit}^{{}^{\prime}}$) (middle), and the fitting residual $\Delta\,k$ (or $\Delta\,k^{{}^{\prime}}$) (bottom) for different reddening maps. From left to right are the SFD, Planck2014-R, Planck2014-Tau, and Planck2019-Tau maps, respectively.

Figure 18: Comparisons between the fitting residuals $\Delta\,k$ (or $\Delta\,k^{{}^{\prime}}$) of the SFD, Planck2014-R, Planck2014-Tau, and Planck2019-Tau maps. The correlation coefficients $R$ are labeled.
# Positive mass gap of quantum Yang-Mills Fields
Adrian P. C. Lim
Email<EMAIL_ADDRESS>
###### Abstract
We construct a 4-dimensional quantum field theory on a Hilbert space,
dependent on a simple Lie Algebra of a compact Lie group, that satisfies
Wightman’s axioms. This Hilbert space can be written as a countable sum of
non-separable Hilbert spaces, each indexed by a non-trivial, inequivalent
irreducible representation of the Lie Algebra.
In each component Hilbert space, a state is given by a triple, a space-like
rectangular surface $S$ in $\mathbb{R}^{4}$, a measurable section of the Lie
Algebra bundle over this surface $S$, represented irreducibly as a matrix, and
a Minkowski frame. The inner product is associated with the area of the
surface $S$.
In our previous work, we constructed a Yang-Mills measure for a compact semi-
simple gauge group. We will use a Yang-Mills path integral to quantize the
momentum and energy in this theory. During the quantization process,
renormalization techniques and asymptotic freedom will be used. Each component
Hilbert space is the eigenspace for the momentum operator and Hamiltonian, and
the corresponding Hamiltonian eigenvalue is given by the quadratic Casimir
operator. The eigenvalue of the corresponding momentum operator will be shown
to be strictly less than the eigenvalue of the Hamiltonian, hence showing the
existence of a positive mass gap in each component Hilbert space. We will
further show that the infimum of the set containing positive mass gaps, each
indexed by an irreducible representation, is strictly positive.
In the last section, we will show how the positive mass gap will imply the
Clustering Theorem.
MSC 2020: 81T13, 81T08, 81T70
Keywords: Mass gap, Yang-Mills, Wightman’s axioms, compact simple Lie group,
renormalization, asymptotic freedom, clustering, space-like surface,
time-like surface, Casimir operator, Lorentz transformation, ${\rm
SL}(2,{{\mathbb{C}}})$,
Callan-Symanzik Equation, path integral, spinor representation
###### Contents
1. 1 Preliminaries
1. 1.1 Why should one read this article
2. 2 A Description of Quantum Hilbert space
1. 2.1 Time-like and space-like surfaces
2. 2.2 Unitary representation of inhomogeneous ${\rm SL}(2,{{\mathbb{C}}})$
3. 3 Quantum Field Operators
1. 3.1 Creation operators
2. 3.2 Domain and continuity
3. 3.3 Cyclicity
4. 3.4 Tempered Distribution
4. 4 Transformation Law of the Field Operator
5. 5 Causality
1. 5.1 CPT Theorem
6. 6 Yang-Mills path integrals
1. 6.1 Hermite polynomials
2. 6.2 Yang-Mills measure
3. 6.3 Asymptotic freedom
4. 6.4 Callan-Symanzik beta function
7. 7 Hamiltonian and Momentum operator
1. 7.1 Renormalization
2. 7.2 Callan-Symanzik Equation
3. 7.3 Existence of positive mass gap
8. 8 Clustering
1. 8.1 Vacuum Expectation
2. 8.2 Proof of Cluster Decomposition Property
9. A Surface Integrals
10. B Lorentz transformation of a space-like vector
## 1 Preliminaries
Let $M$ be a 4-manifold, with $\Lambda^{q}(T^{\ast}M)$ being the $q$-th
exterior power of the cotangent bundle over the manifold $M$. Fix a Riemannian
metric $g$ on $M$ and this in turn defines an inner product
$\langle\cdot,\cdot\rangle_{q}$ on $\Lambda^{q}(T^{\ast}M)$, for which we can
define a volume form $d\omega$ on $M$. This allows us to define a Hodge star
operator $\ast$ acting on $k$-forms,
$\ast:\Lambda^{k}(T^{\ast}M)\rightarrow\Lambda^{4-k}(T^{\ast}M)$ such that for
$u,v\in\Lambda^{k}(T^{\ast}M)$, we have
$u\wedge\ast v=\langle u,v\rangle_{k}\ d\omega.$ (1.1)
An inner product on the set of smooth sections
$\Gamma(\Lambda^{k}(T^{\ast}M))$ is then defined as
$\langle u,v\rangle=\int_{M}u\wedge\ast v=\int_{M}\langle u,v\rangle_{k}\
d\omega.$ (1.2)
See [1].
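As a concrete instance on flat $\mathbb{R}^{4}$ with the standard metric and
the standard volume form $d\omega=dx^{0}\wedge dx^{1}\wedge dx^{2}\wedge
dx^{3}$ (introduced below), Equation (1.1) gives
$\ast(dx^{0}\wedge dx^{1})=dx^{2}\wedge dx^{3},$
since $(dx^{0}\wedge dx^{1})\wedge(dx^{2}\wedge dx^{3})=\langle dx^{0}\wedge
dx^{1},dx^{0}\wedge dx^{1}\rangle_{2}\,d\omega=d\omega$.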
Introduce a compact and simple gauge group $G$. Without loss of generality we
will assume that $G$ is a Lie subgroup of ${\rm U}(\bar{N})$,
$\bar{N}\in\mathbb{N}$. We will identify the (real) Lie Algebra $\mathfrak{g}$
of $G$ with a Lie subalgebra of the Lie Algebra $\mathfrak{u}(\bar{N})$ of
${\rm U}(\bar{N})$ throughout this article. Suppose we write the trace as
${{\rm{Tr}}}_{{\rm Mat}(\bar{N},\mathbb{C})}$, which we will abbreviate as
${{\rm{Tr}}}$ in future. Then we can define a positive, non-degenerate
bilinear form by
$\langle A,B\rangle=-{{\rm{Tr}}}_{{\rm Mat}(\bar{N},\mathbb{C})}[AB]$ (1.3)
for $A,B\in\mathfrak{g}$. Its Lie bracket will be denoted by
$[A,B]\equiv{{\rm{ad}}}(A)B$.
Let $P\rightarrow M$ be some trivial vector bundle, with structure group $G$.
The vector space of all smooth $\mathfrak{g}$-valued 1-forms on the manifold
$M$ will be denoted by $\mathcal{A}_{M,\mathfrak{g}}$. Denote the group of all
smooth $G$-valued mappings on $M$ by $\mathcal{G}$, called the gauge group.
The gauge group induces a gauge transformation on
$\mathcal{A}_{M,\mathfrak{g}}$,
$\mathcal{A}_{M,\mathfrak{g}}\times\mathcal{G}\rightarrow\mathcal{A}_{M,\mathfrak{g}}$
given by
$A\cdot\Omega:=A^{\Omega}=\Omega^{-1}d\Omega+\Omega^{-1}A\Omega$
for $A\in\mathcal{A}_{M,\mathfrak{g}}$, $\Omega\in\mathcal{G}$. The orbit of
an element $A\in\mathcal{A}_{M,\mathfrak{g}}$ under this operation will be
denoted by $[A]$ and the set of all orbits by
$\mathcal{A}_{M,\mathfrak{g}}/\mathcal{G}$.
For $A\in\mathcal{A}_{M,\mathfrak{g}}$, the curvature $dA+A\wedge A$ is a
smooth $\mathfrak{g}$-valued 2-form on $M$, whereby $dA$ is the differential
of $A$ and $A\wedge A$ is computed using the Lie Bracket of $\mathfrak{g}$ and
the wedge product on $\Lambda^{1}(T^{\ast}M)$, at each fibre of the tensor
bundle $\Lambda^{1}(T^{\ast}M)\otimes(M\times\mathfrak{g}\rightarrow M)$. The
Yang-Mills Lagrangian is given by
$S_{{\rm YM}}(A)=\int_{M}\left|dA+A\wedge A\right|^{2}\ d\omega.$
Here, the induced norm $|\cdot|$ is from the tensor product of
$\langle\cdot,\cdot\rangle_{2}$ and the inner product on $\mathfrak{g}$,
computed on each fiber of the bundle
$\Lambda^{2}(T^{\ast}M)\otimes(M\times\mathfrak{g}\rightarrow M)$. The
integral over $M$ is then defined using Equation (1.2). Note that this
Lagrangian is invariant under gauge transformations.
The 4-manifold we will consider in this article is
${{\mathbb{R}}}\times{{\mathbb{R}}}^{3}\equiv{{\mathbb{R}}}^{4}$, with tangent
bundle $T{{\mathbb{R}}}^{4}$. Note that ${{\mathbb{R}}}$ will be referred to
as the time-axis and ${{\mathbb{R}}}^{3}$ is the spatial 3-dimensional
Euclidean space. We will choose the standard Riemannian metric on
$\mathbb{R}^{4}$, denoted as $\langle\cdot,\cdot\rangle$. Let
$\\{e_{a}\\}_{a=0}^{3}$ be an orthonormal basis on
${{\mathbb{R}}}^{4}\equiv{{\mathbb{R}}}\times{{\mathbb{R}}}^{3}$, hence
defining the standard coordinates, $\vec{x}\equiv(x^{0},x^{1},x^{2},x^{3})$,
with time coordinate $x^{0}$ and spatial coordinates $(x^{1},x^{2},x^{3})$.
Let $\Lambda^{q}({{\mathbb{R}}}^{4})$ denote the fiber of the $q$-th exterior
power of the cotangent bundle over ${{\mathbb{R}}}^{4}$, and we choose the
canonical basis $\\{dx^{0},dx^{1},dx^{2},dx^{3}\\}$ for
$\Lambda^{1}({{\mathbb{R}}}^{4})$. Let $\Lambda^{1}({{\mathbb{R}}}^{3})$
denote the subspace in $\Lambda^{1}({{\mathbb{R}}}^{4})$ spanned by
$\\{dx^{1},dx^{2},dx^{3}\\}$. There is an obvious inner product defined on
$\Lambda^{1}({{\mathbb{R}}}^{4})$, i.e. $\langle dx^{a},dx^{b}\rangle=0$ if
$a\neq b$, 1 otherwise. Finally, a basis for $\Lambda^{2}({{\mathbb{R}}}^{4})$
is given by
$\\{dx^{0}\wedge dx^{1},dx^{0}\wedge dx^{2},dx^{0}\wedge dx^{3},dx^{1}\wedge
dx^{2},dx^{3}\wedge dx^{1},dx^{2}\wedge dx^{3}\\}.$
Using the volume form $d\omega=dx^{0}\wedge dx^{1}\wedge dx^{2}\wedge dx^{3}$,
the Hodge star operator $\ast$ is a linear isomorphism between
$\Lambda^{2}({{\mathbb{R}}}^{4})$ and $\Lambda^{2}({{\mathbb{R}}}^{4})$, i.e.
$\displaystyle\ast(dx^{0}\wedge dx^{1})=dx^{2}\wedge
dx^{3},\quad\ast(dx^{0}\wedge dx^{2})=dx^{3}\wedge
dx^{1},\quad\ast(dx^{0}\wedge dx^{3})=dx^{1}\wedge dx^{2}.$
We adopt Einstein’s summation convention, i.e. we sum over repeated
superscripts and subscripts. We set the speed of light $\mathbf{c}=1$. We can
define the Minkowski metric, given by
$\vec{x}\cdot\vec{y}=-x^{0}y^{0}+\sum_{i=1}^{3}x^{i}y^{i}.$ (1.4)
Note that our Minkowski metric is negative of the one used by physicists. A
vector $\vec{x}$ is time-like (space-like) if $\vec{x}\cdot\vec{x}<0$
($\vec{x}\cdot\vec{x}>0$). It is null if $\vec{x}\cdot\vec{x}=0$. When
$\vec{x}$ and $\vec{y}$ are space-like separated, it means that
$-(x^{0}-y^{0})^{2}+\sum_{i=1}^{3}(x^{i}-y^{i})^{2}\geq 0.$
A Lorentz transformation $\Lambda$ is a linear transformation mapping space-
time ${{\mathbb{R}}}^{4}$ onto itself, which preserves the Minkowski metric
given in Equation (1.4). Indeed, the Lorentz transformations form a group,
referred to as the Lorentz group $L$. It has 4 components, and we will call the
component containing the identity the restricted Lorentz group, denoted
$L_{+}^{\uparrow}$.
### 1.1 Why should one read this article
At the time of this writing, an online search will reveal that several authors
have attempted to solve the Yang-Mills mass gap problem, which is one of the
millennium problems, as described in [2]. The problem is to construct a
Hilbert space satisfying Wightman’s axioms for a compact, simple Yang-Mills
gauge theory in 4-dimensional Euclidean space. See [3] for a complete
description of the axioms. The axioms are also stated in [4]. Furthermore,
the theory must have a minimum positive mass gap. This means that besides the zero
eigenvalue, the Hamiltonian has a minimum positive eigenvalue. The momentum
operator is also a non-negative operator, and the difference between the
squares of their eigenvalues is known as the mass gap squared. Despite all
these attempts, no solution has been widely accepted by the scientific
community. So, why should one spend time reading this article?
It is known that the set of inequivalent, non-trivial, irreducible
representations $\\{\rho_{n}:n\in\mathbb{N}\\}$ of a simple Lie Algebra
$\mathfrak{g}$ is indexed by highest weights, hence countable. See [5]. Using
all the inequivalent, non-trivial, irreducible representations of
$\mathfrak{g}$, we can construct a Hilbert space
$\mathbb{H}_{{\rm YM}}(\mathfrak{g})=\\{1\\}\oplus\bigoplus_{n\geq
1}\mathscr{H}(\rho_{n}),$
for which Wightman’s axioms are satisfied. The vacuum state will be denoted by
1 and $\\{1\\}$ will denote its linear span. The Hilbert space
$\mathscr{H}(\rho_{n})$ is defined using the non-trivial irreducible
representation $\rho_{n}$.
Experiments have shown that quantum fields are highly singular. Hence, we need
to smear the field, which we will represent as a
$\rho(\mathfrak{g})$-valued vector field, over a rectangular surface. This
will define a state in $\mathscr{H}(\rho)$.
Each state in $\mathscr{H}(\rho)$ is described by a space-like rectangular
surface $S$ equipped with a Minkowski frame, and a measurable section of
$S\times[\rho(\mathfrak{g})\otimes{{\mathbb{C}}}]\rightarrow S$ defined on it.
See Definition 2.5. This is where the geometry comes in, as surfaces have a
well-defined physical quantity, which is the area. Incidentally, when we
compute the average flux of non-abelian Yang-Mills gauge fields through a
surface in [6], we will obtain a formula for the area of the surface. Further
justification for its connection with the Yang-Mills action will be given in
Remark 3.21.
But area is not invariant under the action of the Poincare group. To define a
unitary action by the Poincare group, we will consider time to be purely
imaginary and hence define a physical quantity on the surface $S$, associated
with the area of the surface. See Definition A.2.
We can now allow a test function to act on this ‘smeared’ field. This test
function will be represented as a field operator, to be defined later in
Section 3, and it acts on a dense subset in the Hilbert space
$\mathbb{H}_{{\rm YM}}(\mathfrak{g})$, as required by Wightman’s axioms.
A non-abelian Yang-Mills measure was constructed in [6]. Using a Yang-Mills
path integral, we will proceed in Section 7 to quantize momentum, which will
yield its quantum eigenvalues, associated with the quadratic Casimir operator.
A path integral approach to quantize the Yang-Mills theory was described in
[7]. During the quantization process, we will use renormalization techniques
and asymptotic freedom. Incidentally, $\mathscr{H}(\rho_{n})$ is the eigenspace
for both the Hamiltonian and the quantized momentum operator. We now list the
reasons why we think this is a correct approach to prove the existence of a
mass gap.
The quadratic Casimir operator is dependent on the non-trivial irreducible
representation of the simple Lie Algebra $\mathfrak{g}$. It is a positive
operator and proportional to the identity. If our gauge group is ${\rm
SU}(2)$, then the quadratic Casimir operator is given by $j(j+1)$, for a given
representation $\rho:\mathfrak{su}(2)\rightarrow{\rm
End}({{\mathbb{C}}}^{2j+1})$, $j$ is a non-negative half integer or integer.
Its square root is well known to be the eigenvalue of the total angular
momentum in quantum mechanics.
The square of the Hamiltonian will be defined later to be proportional to the
dimension of the representation times the quadratic Casimir operator. Hence,
we will correlate the Casimir operator directly with the energy levels
squared. Large values of the Casimir operator mean high energy levels. The
momentum eigenvalues will be computed via a path integral. The set containing
the eigenvalues will be discrete, and because the eigenvalues of both
operators go to infinity, both are unbounded. See subsection 7.1. By comparing
the eigenvalues of the squares of the Hamiltonian and the quantized momentum
operator, we will prove the existence of a mass gap. See subsection 7.3. Note
that it is not enough to just show that the Hamiltonian has a strictly
positive minimum eigenvalue besides the zero eigenvalue. Our construction
will also show that the vacuum state is an eigenstate of both the Hamiltonian
and the quantized momentum operator, each with eigenvalue 0, implying that the
vacuum state is massless.
A successful quantum Yang-Mills theory should explain the following:
1.
There is a mass gap, which will explain why the strong force is short range.
2.
The theory should incorporate asymptotic freedom, i.e. at high energies and
short distances, the theory is like a free theory.
In the case of gauge group ${\rm SU}(3)$, which describes the strong force, it
must also explain the following:
3.
The theory should demonstrate quark confinement, i.e. the potential between a
quark and anti-quark grows linearly;
4.
The theory should incorporate chiral symmetry breaking, which means that the
vacuum is potentially invariant only under a certain subgroup of the full
symmetry group that acts on the quark fields.
Item 2 was proved in a prequel [6], but we only showed asymptotic freedom in
the context of short distances. In that prequel, we derived the Wilson Area
Law formula of a time-like surface $S$ using a (non-abelian) Yang-Mills path
integral. This shows quark confinement and that the potential energy
between quarks grows linearly, which is Item 3. We would also like to mention
that the Area Law formula does not hold in the abelian gauge group case, as
was shown in [8].
In this article, our main goal will be to prove Item 1, and also show that
asymptotic freedom holds at high energies. Item 4 can be demonstrated in
Example 3.17. If one assumes that asymptotic freedom holds for a non-abelian,
simple, compact gauge group, then a positive mass gap is implied; we will
furnish the details later in this article. On pages 541-543 in [9], and also
in [10], the authors gave a qualitative explanation of asymptotic freedom,
using gauge group ${\rm SU}(2)$. See also [11].
The weak interaction is described by an ${\rm SU}(2)$ gauge theory. A
unification of the weak and electromagnetic interactions is described by the
electroweak theory, which is an ${\rm SU}(2)\times{\rm U}(1)$ gauge theory, first
put forth by Sheldon Glashow in 1961, then completed later by Abdus Salam and
Steven Weinberg in 1967. Mathematically formulated as Yang-Mills fields, the
gauge bosons in this theory have to be massless. But experimentally, it was
shown that the gauge bosons responsible for weak interaction are massive,
hence short range; whereas the photons, responsible for electromagnetic
interaction, are massless, hence long range. To resolve this issue, the Higgs
mechanism was introduced. See page 330 in [12]. But to date, there is no
experimental evidence to suggest that gluons have any physical mass, even
though strong interaction is short range.
A successful quantum Yang-Mills theory that satisfies Wightman’s axioms and
exhibits a positive mass gap, will then imply and explain the short range
nature of the weak and strong interaction, without assuming the existence of a
physical non-zero mass of these gauge bosons. This will be mathematically
formulated later as the Clustering Theorem in Section 8.
This article is focused on a construction of a 4-dimensional Yang-Mills
quantum field theory that satisfies Wightman’s axioms, without assuming the
existence of a positive mass. However, the proof that a positive mass gap
exists requires a construction of a Yang-Mills path integral, which is
detailed in [8, 6]. An abelian Yang-Mills path integral was constructed in
[8]. The former details the construction of an infinite dimensional Gaussian
probability space using Abstract Wiener space formalism, developed by Gross.
See [13]. The construction of an abelian Yang-Mills path integral will then
allow us to construct a non-abelian Yang-Mills path integral in [6]. The
latter is instrumental in proving the mass gap.
These two prequels are technical in nature. By removing all the technical
aspects of the construction, we hope that this article will be more palatable
to both physicists and mathematicians who only want to understand a
construction of a 4-dimensional quantum field satisfying Wightman’s axioms.
To further convince the reader that the construction of a 4-dimensional
quantum field theory is correct, we will proceed to prove the Clustering
Theorem, which will imply that the vacuum expectation has an exponential decay
along a space-like separation, which in turn implies that the force represented
by the Lie Algebra $\mathfrak{g}$ is short ranged in nature. Because our
construction of a 4-dimensional quantum theory for Yang-Mills fields satisfies
a modified version of Wightman’s axioms, we will give an alternative proof of
the Clustering Theorem that fits into our context.
## 2 A Description of Quantum Hilbert space
To construct our Hilbert space, we need a compact Lie group $G$, with its real
simple Lie Algebra $\mathfrak{g}$, of which $\\{E^{\alpha}\\}_{\alpha=1}^{N}$
is an orthonormal basis using the inner product defined in Equation (1.3),
fixed throughout this article.
Extend the inner product defined in Equation (1.3) to be a sesquilinear
complex inner product, over the complexification of $\mathfrak{g}$, denoted as
$\mathfrak{g}_{{{\mathbb{C}}}}\equiv\mathfrak{g}\otimes_{{{\mathbb{R}}}}{{\mathbb{C}}}$.
Hence, it is linear in the first variable, conjugate linear in the second.
A finite dimensional representation of $\mathfrak{g}$ is a Lie Algebra
homomorphism $\rho$ of $\mathfrak{g}$ into ${\rm
End}({{\mathbb{C}}}^{\tilde{N}})$ or into ${\rm End}(V)$ for some complex
vector space $V$. All our representations will be considered to be non-trivial
and irreducible. The dimension of the representation is given by $\tilde{N}$.
Because $G$ is a compact Lie group, every finite dimensional representation of
$G$ is equivalent to a unitary representation. See Theorems 9.4 and 9.5 in
[5].
Hence, we can always assume that our Lie Algebra representation
$\rho:\mathfrak{g}\rightarrow{\rm End}({{\mathbb{C}}}^{\tilde{N}})$ is
represented as skew-Hermitian matrices. Thus, the eigenvalues of $\rho(E)$
will be purely imaginary, $E\in\mathfrak{g}$.
Suppose each non-trivial irreducible representation
$\rho_{n}:\mathfrak{g}\rightarrow{\rm End}({{\mathbb{C}}}^{\tilde{N}_{n}})$ is
indexed by $n$, whereby $n\in\mathbb{N}$ and no two representations are
equivalent. We will order them as described in subsection 7.3.
Introduce state 1, which will be referred to as the vacuum state, and
$\\{1\\}$ refers to the linear span of $1$, over the complex numbers. Let
$\mathscr{H}(\rho_{0}):=\\{1\\}$, which is a one-dimensional complex inner
product space, with a complex sesquilinear inner product defined by $\langle
1,1\rangle=1$.
Define
$\mathbb{H}_{{\rm
YM}}(\mathfrak{g}):=\bigoplus_{n=0}^{\infty}\mathscr{H}(\rho_{n}),$ (2.1)
whereby $\rho_{n}:\mathfrak{g}\rightarrow{\rm
End}({{\mathbb{C}}}^{\tilde{N}_{n}})$, $n\geq 1$. The inner product defined on
this direct sum is given by
$\left\langle\sum_{n=0}^{\infty}v_{n},\sum_{n=0}^{\infty}u_{n}\right\rangle:=\sum_{n=0}^{\infty}\langle
v_{n},u_{n}\rangle,$
whereby $\langle v_{n},u_{n}\rangle$ is the inner product defined on
$\mathscr{H}(\rho_{n})$.
We will now give the full description of each Hilbert space
$\mathscr{H}(\rho)$, where $\rho$ is an irreducible representation. Later, we
will see that $\mathscr{H}(\rho)$ is an eigenspace for the momentum operator
and the Hamiltonian.
### 2.1 Time-like and space-like surfaces
###### Notation 2.1
We will let $I=[0,1]$ be the unit interval, and $I^{2}\equiv I\times I$.
Denote $\hat{s}=(s,\bar{s}),\ \hat{t}=(t,\bar{t})\in I^{2}$, $d\hat{s}\equiv
dsd\bar{s}$, $d\hat{t}\equiv dtd\bar{t}$. Typically, $s,\bar{s},t,\bar{t}$
will be reserved as the variables for some parametrization, e.g. $\sigma:s\in
I\mapsto\sigma(s)\in{{\mathbb{R}}}^{4}$.
###### Definition 2.2
(Time-like and space-like)
Let $S$ be a bounded rectangular surface in ${{\mathbb{R}}}^{4}$, contained in
some plane. By rotating the spatial axes if necessary, we may assume without
any loss of generality that a parametrization of $S$ is given by
$\left\\{(a^{0}+sb^{0},a^{1},a^{2}+sb^{2},a^{3}+tb^{3})^{T}\in{{\mathbb{R}}}^{4}:\
s,t\in I\right\\},$ (2.2)
for constants $a^{\alpha},b^{\alpha}\in{{\mathbb{R}}}$. Now, the surface $S$
is spanned by two directional vectors $(b^{0},0,b^{2},0)^{T}$ and
$(0,0,0,b^{3})^{T}$. Note that $(b^{0},0,b^{2},0)^{T}$ lies in the
$x^{0}-x^{2}$ plane and is orthogonal to $(0,0,0,b^{3})^{T}$.
We say a rectangular surface is space-like, if $|b^{0}|<|b^{2}|$, i.e. the
acute angle which the vector $(b^{0},0,b^{2},0)^{T}$ makes with the
$x^{2}$-axis in the $x^{0}-x^{2}$-plane is less than $\pi/4$.
We say a rectangular surface is time-like, if $|b^{0}|>|b^{2}|$, i.e. the
acute angle which the vector $(b^{0},0,b^{2},0)^{T}$ makes with the
$x^{0}$-axis in the $x^{0}-x^{2}$-plane is less than $\pi/4$.
Let $S$ be a rectangular surface in ${{\mathbb{R}}}^{4}$ contained in some
plane, and $TS$ denote the set of directional vectors that lie inside $S$.
Write $\vec{v}=(v^{0},v)\equiv(v^{0},v^{1},v^{2},v^{3})^{T}\in TS$, and define
$|v|^{2}=(v^{1})^{2}+(v^{2})^{2}+(v^{3})^{2}$. An equivalent way to say that $S$ is time-like is
$\inf_{\vec{0}\neq\vec{v}\in TS}\frac{|v|^{2}}{(v^{0})^{2}}<1.$ (2.3)
And we say that $S$ is space-like if
$\inf_{\vec{0}\neq\vec{v}\in TS}\frac{|v|^{2}}{(v^{0})^{2}}>1.$ (2.4)
###### Remark 2.3
1. 1.
By definition, a time-like surface must contain a time-like directional vector
in it. Since under Lorentz transformation, a time-like vector remains time-
like, we see that a time-like surface remains time-like under Lorentz
transformation. Similarly, a surface is space-like means all its directional
vectors in the surface are space-like. Under Lorentz transformation, all its
directional vectors spanning $S$ remain space-like, hence a space-like surface
remains space-like under Lorentz transformation.
2. 2.
By a boost, any space-like rectangular surface contained in a plane can be
transformed into a surface lying strictly inside
$\\{c\\}\times{{\mathbb{R}}}^{3}$, for some constant $c$.
3. 3.
Recall $e_{0}$ spans the time-axis. By a boost, any time-like rectangular
surface contained in a plane can be transformed into a surface which is
spanned by $e_{0}$ and an orthogonal directional vector
$v\in{{\mathbb{R}}}^{3}$.
4. 4.
Any two distinct points on a space-like surface are space-like separated; but
two distinct points on a time-like surface may not be time-like separated.
When we say surface $S$ in this article, we mean a countable, disjoint
union of space-like rectangular surfaces in ${{\mathbb{R}}}^{4}$, each
containing none, some or all of its boundary points.
###### Definition 2.4
(Surface)
Any surface $S\equiv\\{S_{u}\\}_{u\geq 1}\subset{{\mathbb{R}}}^{4}$ satisfies
the following conditions:
* •
each component $S_{u}$ is a space-like rectangular surface contained in some
plane;
* •
each connected component $S_{u}$ may contain none, some or all of its
boundary;
* •
$S_{u}\cap S_{v}=\emptyset$ if $u\neq v$;
* •
$S_{u}$ is contained in some bounded set in ${{\mathbb{R}}}^{4}$.
###### Definition 2.5
Let $S_{0}$ be a compact rectangular space-like surface inside the
$x^{2}-x^{3}$ plane. From Equation (2.2), we see that any rectangular space-
like surface $S$ contained in a plane can be transformed to $S_{0}$ by
Lorentz transformations and a translation.
Recall $\\{e_{a}\\}_{a=0}^{3}$ is an orthonormal basis on
${{\mathbb{R}}}^{4}$. We say that $\\{\hat{f}_{a}\\}_{a=0}^{3}$ is a Minkowski
frame for a compact space-like surface $S$ contained in some plane, if there
exists a sequence of Lorentz transformations $\Lambda_{1},\cdots,\Lambda_{n}$
and a translation by $\vec{a}\in{{\mathbb{R}}}^{4}$, such that
* •
$S=\Lambda_{n}\cdots\Lambda_{1}S_{0}+\vec{a}$;
* •
$\hat{f}_{a}=\Lambda_{n}\cdots\Lambda_{1}e_{a}\in{{\mathbb{R}}}^{4}$,
$a=0,\cdots,3$.
###### Remark 2.6
Observe that $\hat{f}_{0}$ is time-like and for $i=1,2,3$, $\hat{f}_{i}$ is
space-like, satisfying the following properties:
* •
$\hat{f}_{a}\cdot\hat{f}_{b}=0$ if $a\neq b$; and
* •
$\hat{f}_{0}\cdot\hat{f}_{0}=-1$ and $\hat{f}_{i}\cdot\hat{f}_{i}=1$.
Note that $\\{\hat{f}_{2},\hat{f}_{3}\\}$ spans $S$. Later we will let
$S^{\flat}$ be a time-like plane, spanned by $\\{\hat{f}_{0},\hat{f}_{1}\\}$.
Clearly, $\\{\hat{f}_{a}\\}_{a=0}^{3}$ is a basis on ${{\mathbb{R}}}^{4}$.
Each component Hilbert space $\mathscr{H}(\rho)$ will consist of vectors
the form
$\sum_{u=1}^{\infty}\left(S_{u},f_{\alpha}^{u}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}\right)$,
whereby $S=\bigcup_{u=1}^{\infty}S_{u}$ is some countable union of compact,
rectangular surfaces in ${{\mathbb{R}}}^{4}$, $f_{\alpha}^{u}$ will be some
(measurable) complex-valued function, which is defined on the surface $S_{u}$.
And $\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}$ is a Minkowski frame for each compact
rectangular surface $S_{u}$ contained in a plane as described in Definition
2.5.
Let $\sigma:[0,1]\times[0,1]\rightarrow{{\mathbb{R}}}^{4}$ be a
parametrization of a compact $S$. The complex-valued function $f_{\alpha}$ is
measurable on $S$, if
$f_{\alpha}\circ\sigma:[0,1]^{2}\rightarrow{{\mathbb{C}}}$ is measurable.
Each of these surfaces is assumed to be space-like, as defined above. We sum
over repeated index $\alpha$, from $\alpha=1$ to $N$. One should think of
$f_{\alpha}^{u}\otimes\rho(E^{\alpha})$ as a section of the vector bundle
$S_{u}\times\rho(\mathfrak{g})_{{\mathbb{C}}}\rightarrow S_{u}$, defined over
the surface $S_{u}$. In terms of the parametrization $\sigma$, the section at
$\sigma(\hat{s})$ is given by
$f_{\alpha}^{u}(\sigma(\hat{s}))\otimes\rho(E^{\alpha})$.
###### Remark 2.7
Another way to view this vector
$\left(S_{u},f_{\alpha}^{u}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
is via taking its Fourier Transform into energy-momentum space. We refer
the reader to Section 8.
Let $S$ and $\tilde{S}$ be rectangular space-like surfaces contained in a
plane. Given complex scalars $\lambda$ and $\mu$, we define the addition and
scalar multiplication as
$\displaystyle\lambda$
$\displaystyle\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)+\mu\left(\tilde{S},g_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
$\displaystyle:=\left(S\cup\tilde{S},\left(\lambda\tilde{f}_{\alpha}+\mu\tilde{g}_{\alpha}\right)\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right).$
(2.5)
Here, we extend $f_{\alpha}$ to be
$\tilde{f}_{\alpha}:S\cup\tilde{S}\rightarrow{{\mathbb{C}}}$ by
$\tilde{f}_{\alpha}(p)=f_{\alpha}(p)$ if $p\in S$; $\tilde{f}_{\alpha}(p)=0$ otherwise.
Similarly, $\tilde{g}_{\alpha}$ is an extension of $g_{\alpha}$, defined as
$\tilde{g}_{\alpha}(p)=g_{\alpha}(p)$, if $p\in\tilde{S}$,
$\tilde{g}_{\alpha}(p)=0$ otherwise.
###### Remark 2.8
For the above addition to hold, we require the Minkowski frame
$\\{\hat{f}_{a}\\}_{a=0}^{3}$ on $S$ and $\tilde{S}$ to be identical.
Given two surfaces $S$ and $\tilde{S}$, we need to take their intersection and
union. Now, the union of these two surfaces can always be written as a
disjoint union of connected sets, each of which is a space-like surface
containing none, some or all of its boundary points. However, the
intersection may not be a surface. For example, the two surfaces may intersect
to give a curve. In such a case, we will take the intersection to be the empty
set $\emptyset$.
Given a surface $S$, let $\sigma$ be any parametrization of $S$. We can define
$\int_{S}d\rho$ using this parametrization $\sigma$ as given in Definition
A.1. Now, replace $\sigma\equiv(\sigma_{0},\sigma_{1},\sigma_{2},\sigma_{3})$
with $\acute{\sigma}=(i\sigma_{0},\sigma_{1},\sigma_{2},\sigma_{3})$ and hence
define $\int_{S}d|\acute{\rho}|$ as given in Definition A.2. With this, define
the following inner product on $\mathscr{H}(\rho)$.
###### Definition 2.9
Given a surface $S=\bigcup_{u=1}^{n}S_{u}$ equipped with a collection of
frames $\\{\hat{f}_{a}^{u}:a=0,\cdots,3\\}_{u\geq 1}$, and a set of bounded
and continuous complex-valued functions
$\\{f_{\alpha}^{u}:\alpha=1,\cdots,N\\}_{u\geq 1}$ defined on $S$, form a vector
$\sum_{u=1}^{n}\left(S_{u},f_{\alpha}^{u}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}\right)$,
also referred to as a Yang-Mills field. Note that for each $u$,
$f_{\alpha}^{u}\otimes\rho(E^{\alpha})$ is a section of
$S_{u}\times[\rho(\mathfrak{g})\otimes{{\mathbb{C}}}]\rightarrow S_{u}$, with
$S_{u}$ contained in some plane, equipped with a Minkowski frame
$\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}$.
Let $V$ be a (complex) vector space containing such vectors, with addition and
scalar multiplication defined in Equation (2.5). The zero vector can be
written as $\left(S,0,\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$ for any space-like
rectangular surface $S$ contained in a plane, equipped with any suitable
Minkowski frame $\\{\hat{f}_{a}\\}_{a=0}^{3}$.
Refer to Definition A.2. Assume that $S$ and $\tilde{S}$ are space-like
surfaces, each contained in a plane. Define an inner product
$\langle\cdot,\cdot\rangle$ for
$\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)\in
V$, and
$\left(\tilde{S},g_{\beta}\otimes\rho(E^{\beta}),\\{\hat{g}_{a}\\}_{a=0}^{3}\right)\in
V$, given by
$\displaystyle\left\langle\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right),\left(\tilde{S},g_{\beta}\otimes\rho(E^{\beta}),\\{\hat{g}_{a}\\}_{a=0}^{3}\right)\right\rangle$
$\displaystyle:=\int_{S\cap\tilde{S}}[f_{\alpha}\overline{g_{\beta}}]\cdot
d|\acute{\rho}|\cdot{{\rm{Tr}}}[-\rho(E^{\alpha})\rho(E^{\beta})]$ (2.6)
$\displaystyle\equiv\sum_{\alpha=1}^{N}C(\rho)\int_{I^{2}}[f_{\alpha}\cdot\overline{g_{\alpha}}](\sigma(\hat{s}))\left|\sum_{0\leq
a<b\leq
3}\acute{\rho}_{\sigma}^{ab}(\hat{s})\left[\det\acute{J}_{ab}^{\sigma}(\hat{s})\right]\right|d\hat{s},$
provided $\hat{f}_{a}=\hat{g}_{a}$, for $a=0,\ldots,3$. Otherwise, it is
defined as zero. Here, $\sigma:I^{2}\rightarrow{{\mathbb{R}}}^{4}$ is a
parametrization of $S\cap\tilde{S}$ and $C(\rho)$ is defined later in Notation
7.1.
Denote its norm by $|\cdot|$. Let $\mathscr{H}(\rho)$ denote the Hilbert space
obtained by completing $V$ with respect to this norm.
###### Remark 2.10
The quantity $\int_{S}d\rho$ was first derived in [8], which motivates the
definition of this quantity $\int_{S}d|\acute{\rho}|$. It was obtained by
computing the average square of the flux of a Yang-Mills gauge field, over a
time-like or space-like surface, using an infinite-dimensional Gaussian
measure in an abelian Yang-Mills path integral.
This quantity can also be derived from the Wilson Area Law formula in [6],
computed using a non-abelian gauge group. The reader will later observe that
this quantity $\int_{S}d|\acute{\rho}|$ plays an important role in all 4 of
Wightman’s axioms.
Using this quantity is one reason why we call the states Yang-Mills fields,
as the above inner product has its origins in a Yang-Mills path integral.
There is also a second reason for calling them Yang-Mills fields. Refer to
Remark 3.21.
###### Proposition 2.11
The Hilbert space $\mathscr{H}(\rho)$ is non-separable.
Proof. Consider a compact rectangular surface $S_{0}$ contained in the
$x^{2}-x^{3}$ plane, with $\\{e_{a}\\}_{a=0}^{3}$ as its Minkowski frame.
Then, we see that
$\left\\{\left(S_{0}+ce_{0},\rho(E^{\alpha}),\\{e_{a}\\}_{a=0}^{3}\right):\
c\in\mathbb{R}\right\\}$
is an uncountable set of orthogonal vectors in $\mathscr{H}(\rho)$: for $c\neq
c^{\prime}$, the surfaces $S_{0}+ce_{0}$ and $S_{0}+c^{\prime}e_{0}$ are
disjoint, hence
$\left\langle\Big{(}S_{0}+ce_{0},\rho(E^{\alpha}),\\{e_{a}\\}_{a=0}^{3}\Big{)},\left(S_{0}+c^{\prime}e_{0},\rho(E^{\alpha}),\\{e_{a}\\}_{a=0}^{3}\right)\right\rangle=0.$
Hence the Hilbert space is non-separable.
### 2.2 Unitary representation of inhomogeneous ${\rm SL}(2,{{\mathbb{C}}})$
Given a continuous group acting on ${{\mathbb{R}}}^{4}$, we can consider its
corresponding inhomogeneous group, whose elements are pairs consisting of a
translation and a homogeneous transformation. For example, the Poincare group
$\mathscr{P}$ containing the Lorentz group $L$, will have elements
$\\{\vec{a},\Lambda\\}$, where $\Lambda\in L$ and $\vec{a}$ will represent
translation in the direction $\vec{a}$. The multiplication law for the
Poincare group is given by
$\\{\vec{a}_{1},\Lambda_{1}\\}\\{\vec{a}_{2},\Lambda_{2}\\}=\\{\vec{a}_{1}+\Lambda_{1}\vec{a}_{2},\Lambda_{1}\Lambda_{2}\\}.$
Associated with the restricted Lorentz group $L_{+}^{\uparrow}$ is the group
of $2\times 2$ complex matrices of determinant one, denoted by ${\rm
SL}(2,{{\mathbb{C}}})$. There is an onto homomorphism $Y:{\rm
SL}(2,{{\mathbb{C}}})\rightarrow L_{+}^{\uparrow}$. Thus, given
$\Lambda\in{\rm SL}(2,{{\mathbb{C}}})$, $Y(\Lambda)\in L_{+}^{\uparrow}\subset
L$. See [3].
Instead of the Poincare group, we can consider the inhomogeneous ${\rm
SL}(2,{{\mathbb{C}}})$ in its place, which we will also denote by ${\rm
SL}(2,{{\mathbb{C}}})$, and use it to construct unitary representations. Its
elements will consist of $\\{\vec{a},\Lambda\\}$ and its multiplication law is
given by
$\\{\vec{a}_{1},\Lambda_{1}\\}\\{\vec{a}_{2},\Lambda_{2}\\}=\\{\vec{a}_{1}+Y(\Lambda_{1})\vec{a}_{2},\Lambda_{1}\Lambda_{2}\\}.$
By abuse of notation, for any $\Lambda\in{\rm SL}(2,{{\mathbb{C}}})$, we will
write $\Lambda\vec{a}$ to mean $\Lambda$ being represented as a $4\times 4$
matrix, acting on $\vec{a}\in{{\mathbb{R}}}^{4}$. This means we will write
$\Lambda_{1}\vec{a}_{2}\equiv Y(\Lambda_{1})\vec{a}_{2}$.
Given a vector $\vec{x}\in{{\mathbb{R}}}^{4}$, $\\{\vec{a},\Lambda\\}$ acts on
$\vec{x}$ by $\vec{x}\mapsto\Lambda\vec{x}+\vec{a}$. By abuse of notation, for
a surface $S$, $\\{\vec{a},\Lambda\\}$ acts on $S$ by $S\mapsto\Lambda
S+\vec{a}$, which means apply a Lorentz transformation $Y(\Lambda)$ to every
position vector on the surface $S$, followed by translation in the direction
$\vec{a}$. In terms of some parametrization
$\sigma:I^{2}\rightarrow{{\mathbb{R}}}^{4}$ for $S$, the surface $\Lambda
S+\vec{a}$ is parametrized by
$\Lambda\sigma+\vec{a}:I^{2}\rightarrow{{\mathbb{R}}}^{4}$.
In general, the only finite dimensional unitary representation of ${\rm
SL}(2,{{\mathbb{C}}})$ is the trivial representation. See Theorem 16.2 in
[14]. Thus, to construct a unitary representation, we must consider an
infinite dimensional space. See [15].
###### Definition 2.12
(Unitary Representation of the inhomogeneous ${\rm SL}(2,{{\mathbb{C}}})$)
Let $\hat{H}(\rho)$, $\hat{P}(\rho)$ be positive numbers, dependent on the
representation $\rho$, to be defined later in Definition 7.13. Let $\Lambda$
be in ${\rm SL}(2,{{\mathbb{C}}})$.
There is a unitary representation of the inhomogeneous ${\rm
SL}(2,{{\mathbb{C}}})$, $\\{\vec{a},\Lambda\\}\mapsto U(\vec{a},\Lambda)$.
Now, $U(\vec{a},\Lambda)$ acts on the Hilbert space $\mathscr{H}(\rho)$,
$\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)\longmapsto
U(\vec{a},\Lambda)\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right),$
by
$\displaystyle U$
$\displaystyle(\vec{a},\Lambda)\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
$\displaystyle:=\left(\Lambda
S+\vec{a},e^{-i[\vec{a}\cdot(\hat{H}(\rho_{n})\Lambda\hat{f}_{0}+\hat{P}(\rho_{n})\Lambda\hat{f}_{1})]}f_{\alpha}(\Lambda^{-1}(\cdot-\vec{a}))\otimes\rho(E^{\alpha}),\\{\Lambda\hat{f}_{a}\\}_{a=0}^{3}\right).$
(2.7)
Here, $S$ is a space-like surface contained in some plane.
###### Remark 2.13
1. 1.
Notice that $U(\vec{a},\Lambda)$ acts trivially on $\rho(\mathfrak{g})$. In
the classical Yang-Mills equations, the fields over ${{\mathbb{R}}}^{4}$ are
$\mathfrak{g}$-valued. The Lie group $G$ describes the internal symmetry on
each fiber of the vector bundle. See [16]. When we apply a Lorentz
transformation, we expect that $G$ remains invariant under the Lorentz
transformation. After all, $G$ acts fiberwise on the vector bundle over
${{\mathbb{R}}}^{4}$.
2. 2.
Let us explain the formula on the RHS of Equation (2.7). Suppose
$\sigma:I^{2}\rightarrow S$ is a parametrization for $S$. Then
$\Lambda\sigma+\vec{a}\equiv Y(\Lambda)\sigma+\vec{a}$ will be a
parametrization for $Y(\Lambda)S+\vec{a}$. And, the field at the point
$\vec{x}:=Y(\Lambda)\sigma(\hat{s})+\vec{a}\in Y(\Lambda)S+\vec{a}$, is given
by
$\displaystyle
e^{-i[\vec{a}\cdot(\hat{H}(\rho_{n})Y(\Lambda)\hat{f}_{0}+\hat{P}(\rho_{n})Y(\Lambda)\hat{f}_{1})]}$
$\displaystyle
f_{\alpha}[Y(\Lambda^{-1})(\vec{x}-\vec{a})]\otimes\rho(E^{\alpha})$
$\displaystyle\equiv
e^{-i[\vec{a}\cdot(\hat{H}(\rho_{n})Y(\Lambda)\hat{f}_{0}+\hat{P}(\rho_{n})Y(\Lambda)\hat{f}_{1})]}f_{\alpha}[\sigma(\hat{s})]\otimes\rho(E^{\alpha}).$
3. 3.
When there is no translation, the vector field over $Y(\Lambda)S$ is the
pushforward of the vector field $f_{\alpha}\otimes\rho(E^{\alpha})$ over $S$.
4. 4.
When there is only translation, the unitary operator can be simplified to be
$\displaystyle U(\vec{a},1)$
$\displaystyle\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
$\displaystyle\equiv$ $\displaystyle
e^{-i[\vec{a}\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}\left(S+\vec{a},f_{\alpha}(\cdot-\vec{a})\otimes\rho\left(E^{\alpha}\right),\\{\hat{f}_{a}\\}_{a=0}^{3}\right),$
directly from Equation (2.7).
It is straightforward to check that this map $\\{\vec{a},\Lambda\\}\mapsto
U(\vec{a},\Lambda)$ is a representation; we leave this to the reader.
###### Lemma 2.14
The map $U(\vec{a},\Lambda)$ defined on $\mathscr{H}(\rho)$ is unitary, using
the inner product $\langle\cdot,\cdot\rangle$ as defined in Definition 2.9.
Proof. It suffices to consider $S$ and $\tilde{S}$, each contained in some
space-like plane. Let $\\{\hat{f}_{a}\\}_{a=0}^{3}$,
$\\{\hat{g}_{a}\\}_{a=0}^{3}$ be as defined in Definition 2.5 for $\Lambda S$
and $\Lambda\tilde{S}$ respectively.
By Definition 2.9, it suffices to prove when $\Lambda S\cap\Lambda\tilde{S}$
has non-zero area. We only consider the case when $\hat{f}_{a}=\hat{g}_{a}$,
$a=0,\cdots,3$, since the result is trivial otherwise. Thus,
$\displaystyle\left\langle
U(\vec{a},\Lambda)\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right),U(\vec{a},\Lambda)\left(\tilde{S},g_{\beta}\otimes\rho(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)\right\rangle$
$\displaystyle:=\int_{[\Lambda S+\vec{a}]\cap[\Lambda\tilde{S}+\vec{a}]}\
d|\acute{\rho}|\
e^{-i[\vec{a}\cdot(\hat{H}(\rho_{n})\Lambda\hat{f}_{0}+\hat{P}(\rho_{n})\Lambda\hat{f}_{1})]}e^{i[\vec{a}\cdot(\hat{H}(\rho_{n})\Lambda\hat{f}_{0}+\hat{P}(\rho_{n})\Lambda\hat{f}_{1})]}$
$\displaystyle\hskip
156.49014pt\times[f_{\alpha}\overline{g_{\beta}}](\Lambda^{-1}(\cdot-\vec{a}))\cdot{{\rm{Tr}}}[-\rho(E^{\alpha})\rho(E^{\beta})]$
$\displaystyle=\int_{\Lambda(S\cap\tilde{S})+\vec{a}}[f_{\alpha}\overline{g_{\beta}}](\Lambda^{-1}(\cdot-\vec{a}))\cdot
d|\acute{\rho}|\cdot{{\rm{Tr}}}[-\rho(E^{\alpha})\rho(E^{\beta})]$
$\displaystyle=\int_{S\cap\tilde{S}}[f_{\alpha}\overline{g_{\beta}}](\cdot)\cdot
d|\acute{\rho}|\cdot{{\rm{Tr}}}[-\rho(E^{\alpha})\rho(E^{\beta})],$
after applying Lemma A.3.
###### Remark 2.15
In general, the directional derivative for
$\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
does not exist for arbitrary direction $\vec{a}$. Thus, in the computation of
the generator for translation, the derivative does not appear.
The author in [17] also discusses problems with taking the derivative of a
quantum field at short distances. The differences in the field at different
spatial points actually diverge as the separation gets smaller. The field
becomes infinitely rough at small distance scales, which means that it is
impossible to experimentally probe the field at a single point.
We have just described the Hilbert space $\mathscr{H}(\rho)$. Now let us focus
on the vacuum state, which we denote as 1. Recall our Yang-Mills fields
are described by a triple
$\sum_{u=1}^{\infty}\left(S_{u},f_{\alpha}^{u}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}\right)$,
whereby $S_{u}$ is some non-empty surface.
###### Definition 2.16
The vacuum state 1 is synonymous with the empty set $\emptyset$, i.e.
$1\equiv(\emptyset)$. We define $\langle 1,1\rangle:=1$.
###### Remark 2.17
1. 1.
It does not make sense to have a measurable function or a Minkowski frame
defined on the empty set.
2. 2.
Recall that $\\{1\\}$ is a one-dimensional subspace spanned by the vacuum
state. The Lie Algebra $\mathfrak{g}$ acts trivially on $\\{1\\}$, via the
trivial representation $\rho_{0}:\mathfrak{g}\rightarrow{{\mathbb{C}}}$. Refer
to Equation (2.1).
Clearly the empty set is invariant under the Poincare action. Hence, it is
invariant under $U(\vec{a},\Lambda)$.
Given a Schwartz function $g$ on ${{\mathbb{R}}}^{4}$, we can define a quantum
field operator $\phi^{\alpha,n}(g)$, acting on the vacuum state and
$\sum_{u=1}^{\infty}\left(S_{u},f_{\alpha}^{u}\otimes\rho_{n}(E^{\alpha}),\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}\right)$
in Section 3. We will show later that $\phi^{\alpha,n}(g)$ is densely defined
on $\mathbb{H}_{{\rm YM}}(\mathfrak{g})$. Furthermore,
$\\{\phi^{\alpha,n}(g):\ 1\leq\alpha\leq\underline{N}\\}$ defines a spinor of
dimension $\underline{N}$. Using this definition and the definition of the
unitary transformation, one can prove the transformation law given in
Proposition 4.1, and causality in Section 5.
###### Remark 2.18
In [4], the function $g$ is interpreted as an observable, which is elevated to
be a quantum field operator $\phi^{\alpha,n}(g)$. One can also understand it
as applying canonical quantization to a classical field $g$.
In our description of the Hilbert space containing Yang-Mills fields, nowhere
did we use the Yang-Mills action, so it is not clear if $\mathbb{H}_{{\rm
YM}}(\mathfrak{g})$ is a Hilbert space for a quantum Yang-Mills gauge theory.
The set $\\{\hat{H}(\rho_{n}),\hat{P}(\rho_{n})\\}_{n\geq 1}$, indexed by the
non-trivial irreducible, inequivalent representations, will give us a discrete
set of eigenvalues. The choices of $\hat{H}(\rho)$ and $\hat{P}(\rho)$ will be
given later, referred to as the eigenvalues of the Hamiltonian $\hat{H}$ and
momentum operator $\hat{P}$ respectively. Indeed, these discrete eigenvalues
will give us a countable spectrum for the translation operator $U(\vec{a},1)$,
provided $\vec{a}=a^{0}\hat{f}_{0}+a^{1}\hat{f}_{1}$. There are many choices
of $\\{\hat{H}(\rho),\hat{P}(\rho)\\}$ and it is not clear how we should
choose these numbers.
It is only in Wightman’s zeroth axiom where the mass of the theory appears.
This axiom requires that $\hat{H}(\rho)^{2}-\hat{P}(\rho)^{2}=m(\rho)^{2}$,
for some mass gap $m(\rho)\geq 0$. (See Remark 8.3.) Furthermore, these
eigenvalues are required to be unbounded. The mass gap problem is equivalent
to showing that $m_{0}:=\inf_{n\in\mathbb{N}}m(\rho_{n})>0$, and this is only
true for a compact simple gauge group.
For each eigenstate in $\mathscr{H}(\rho_{n})$, $\hat{H}$ and $\hat{P}$ will
be multiplication by scalars $\hat{H}(\rho_{n})$ and $\hat{P}(\rho_{n})$
respectively. Then $\hat{H}^{2}-\hat{P}^{2}=m^{2}$ translates to
$\frac{\hat{P}(\rho_{n})^{2}}{\hat{H}(\rho_{n})^{2}}-1=-\frac{m(\rho_{n})^{2}}{\hat{H}(\rho_{n})^{2}}.$
(2.8)
To prove that the Hamiltonian and momentum operators are unbounded, and that
there is a positive mass gap $m_{0}$, it suffices to show that for all $n\geq 1$,
$0>\frac{\hat{P}(\rho_{n})^{2}}{\hat{H}(\rho_{n})^{2}}-1\longrightarrow 0,$
as $n\rightarrow\infty$, and $\lim_{n\rightarrow\infty}m(\rho_{n})=\infty$.
Indeed, the first condition forces each $m(\rho_{n})$ to be strictly positive,
and since $m(\rho_{n})\rightarrow\infty$, the infimum $m_{0}$ is attained among
finitely many strictly positive values, hence $m_{0}>0$.
To show that the operators are unbounded and that a positive mass gap exists,
we need to make use of the Yang-Mills path integral to quantize, so that
we will obtain Equation (2.8). During the quantization process, we will use
renormalization techniques and asymptotic freedom. Note that asymptotic
freedom only holds for a non-abelian gauge group. The compact simple gauge
group will give us a quadratic Casimir operator, dependent on the
representation $\rho_{n}$. As the set containing all Casimir operators, each
corresponding to a non-equivalent irreducible representation of
$\mathfrak{g}$, is countably infinite and unbounded from above, the
Hamiltonian will be shown to be an unbounded operator.
The existence of a positive mass gap is a consequence of Equation (7.4), first
proved in [6]. To prove this equation, we need
* •
renormalization techniques,
* •
asymptotic freedom,
* •
the compactness of the gauge group, which allows us to represent the Lie
Algebra as skew-Hermitian matrices,
* •
the structure constants and the quadratic Casimir operator of the simple Lie
Algebra, and
* •
the quartic term in the Yang-Mills action,
all of which are collectively responsible for the existence of a mass gap. We
also need to impose the Callan-Symanzik Equation to prove the existence of a
mass gap $m_{0}$.
###### Remark 2.19
The idea that the quartic term in the Yang-Mills action might be responsible
for the mass gap was suggested in [18].
Using a Yang-Mills path integral to define the Hamiltonian and momentum
operator eigenvalues justifies our construction as a 4-dimensional Yang-Mills
quantum gauge theory. In Section 8, we will show how the positive mass gap
will imply the Clustering Theorem.
As for the rest of the axioms, we see that they do not make use of the Yang-
Mills action. We will postpone the proof of the mass gap till Section 7. For
now, we will move on to the remaining Wightman’s axioms.
## 3 Quantum Field Operators
We will now begin our discussion on the field operators that act on the
Hilbert space $\bigoplus_{n=0}^{\infty}\mathscr{H}(\rho_{n})$, which contains
our Yang-Mills fields.
Now, every irreducible finite dimensional representation of ${\rm
SL}(2,{{\mathbb{C}}})$ is denoted by $D^{(j,k)}$, where $j,k$ are non-negative
integers or half-integers. This representation is known as the spinor
representation of ${\rm SL}(2,{{\mathbb{C}}})$. Under this representation, one
sees that $D^{(j,k)}(-1)=(-1)^{2(j+k)}$. Incidentally, when restricted to ${\rm
SU}(2)$, $\Lambda\in{\rm SU}(2)\mapsto D^{(j,0)}(\Lambda)$ is equivalent to an
irreducible representation of ${\rm SU}(2)$. See [3].
Recall we have a Minkowski frame $\\{\hat{f}_{a}\\}_{a=0}^{3}$ defined on a
rectangular space-like surface $S$ in Definition 2.5. Now,
$\hat{f}_{a}=\Lambda e_{a}\equiv Y(\Lambda)e_{a}$ for some $\Lambda\in{\rm
SL}(2,{{\mathbb{C}}})$. Suppose we have another Minkowski frame
$\\{\hat{g}_{a}\\}_{a=0}^{3}$ on $S$ such that $\hat{f}_{a}=\hat{g}_{a}$ for
all $a$. Then, we have that $Y(\Lambda)e_{a}=Y(\tilde{\Lambda})e_{a}$ for some
$\tilde{\Lambda}\in{\rm SL}(2,{{\mathbb{C}}})$. Hence
$Y(\Lambda)=Y(\tilde{\Lambda})$, from which it can be shown that
$\Lambda=\pm\tilde{\Lambda}$. When $j+k$ is an integer, we see that
$D^{(j,k)}(\pm 1)=1$, thus $D^{(j,k)}$ is a representation describing vector
bosons.
###### Definition 3.1
(Test functions)
We let $\mathscr{P}$ denote the Schwartz space consisting of infinitely
differentiable complex-valued functions on ${{\mathbb{R}}}^{4}$ which, together
with all their derivatives, decay to 0 at infinity faster than any inverse
power of $|\vec{x}|$. We will refer to $f\in\mathscr{P}$ as a test function;
note that any such function is bounded.
###### Notation 3.2
For $\vec{k}=(k^{0},k^{1},k^{2},k^{3})$, $k^{a}\in\\{0\\}\cup\mathbb{N}$, we
will write
$D^{\vec{k}}=\left(\frac{\partial}{\partial
x^{0}}\right)^{k^{0}}\left(\frac{\partial}{\partial
x^{1}}\right)^{k^{1}}\left(\frac{\partial}{\partial
x^{2}}\right)^{k^{2}}\left(\frac{\partial}{\partial
x^{3}}\right)^{k^{3}},\quad\vec{x}^{\vec{k}}=(x^{0})^{k^{0}}(x^{1})^{k^{1}}(x^{2})^{k^{2}}(x^{3})^{k^{3}}.$
And $|\vec{k}|=\sum_{a=0}^{3}|k^{a}|$.
###### Definition 3.3
(Norm on $\mathscr{P}$)
Let $r,s$ be whole numbers. Suppose $f\in\mathscr{P}$. With the above
notation, define a norm $\parallel\cdot\parallel_{r,s}$ on $\mathscr{P}$ as
$\parallel f\parallel_{r,s}:=\sum_{|\vec{k}|\leq r}\sum_{|\vec{l}|\leq
s}\sup_{\vec{x}\in{{\mathbb{R}}}^{4}}|\vec{x}^{\vec{k}}D^{\vec{l}}f(\vec{x})|.$
### 3.1 Creation operators
###### Definition 3.4
(Time-like plane)
Refer to Definition 2.5. Let $S$ be a connected space-like rectangular surface
contained in some plane.
Let $S^{\flat}$ be a time-like plane spanned by
$\\{\hat{f}_{0},\hat{f}_{1}\\}$, parametrized by
$\hat{s}=(s,\bar{s})\mapsto\sigma(s,\bar{s})=s\hat{f}_{0}+\bar{s}\hat{f}_{1},\
s,\bar{s}\in{{\mathbb{R}}}.$ (3.1)
And we will write $d\hat{s}=dsd\bar{s}$.
Refer to Definition A.2. Define, using the Minkowski metric,
$\tilde{\eta}:\vec{v}\in{{\mathbb{R}}}^{4}\mapsto\vec{v}\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})\in{{\mathbb{R}}}$.
Suppose we are given $\tilde{f}\in\mathscr{P}$. We will define a new
function
$\tilde{f}^{\\{\hat{f}_{0},\hat{f}_{1}\\}}:{{\mathbb{R}}}^{4}\rightarrow{{\mathbb{C}}}$
by
$\displaystyle\vec{x}\in{{\mathbb{R}}}^{4}$
$\displaystyle\longmapsto\tilde{f}^{\\{\hat{f}_{0},\hat{f}_{1}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))(\vec{x}):=\int_{S^{\flat}}\frac{e^{-i\tilde{\eta}(\cdot)}}{2\pi}\tilde{f}(\vec{x}+\cdot)\
d|\acute{\rho}|$
$\displaystyle=\int_{\hat{s}\in{{\mathbb{R}}}^{2}}\frac{e^{-i[\sigma(\hat{s})\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{2\pi}\tilde{f}\left(\vec{x}+\sigma(\hat{s})\right)\cdot|\acute{\rho}_{\sigma}|(\hat{s})\
d\hat{s},$ (3.2)
integration over a time-like plane $S^{\flat}$, using the parametrization
given in Equation (3.1), for representation $\rho_{n}$.
###### Remark 3.5
1. 1.
Note that $\tilde{f}^{\\{\hat{f}_{0},\hat{f}_{1}\\}}\notin\mathscr{P}$, unless
$\tilde{f}\equiv 0$.
2. 2.
Even though we compute the integral using a given parametrization $\sigma$, it
is actually independent of the parametrization. Because we are taking a
Fourier Transform in one time-like and one space-like variable, evaluated at
$\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1}$, we see that the
transformed function depends on the spanning pair $\\{\hat{f}_{0},\hat{f}_{1}\\}$,
and not just on the time-like plane $S^{\flat}$.
3. 3.
If $\vec{x}=\sum_{a=0}^{3}x^{a}\hat{f}_{a}$, then
$\displaystyle\tilde{f}^{\\{\hat{f}_{0},\hat{f}_{1}\\}}$
$\displaystyle(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))(\vec{x})$
$\displaystyle=e^{-i[x^{0}\hat{H}(\rho_{n})-x^{1}\hat{P}(\rho_{n})]}\tilde{f}^{\\{\hat{f}_{0},\hat{f}_{1}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))(x^{2}\hat{f}_{2}+x^{3}\hat{f}_{3}).$
Hence, $\hat{H}(\rho_{n})$ ($\hat{P}(\rho_{n})$) is the generator for
translation, in the $\hat{f}_{0}$ ($\hat{f}_{1}$) direction.
###### Definition 3.6
Let
$\mathcal{A}:=\\{F^{\alpha}:1\leq\alpha\leq\underline{N}\\}\subset\mathfrak{g}$
be a finite set, and define recursively for $n\geq 2$,
$\mathcal{A}^{n}:=[\mathcal{A}^{n-1},\mathcal{A}]$,
$\mathcal{A}^{1}:=\mathcal{A}$, such that $\mathfrak{g}$ can be spanned by
$\bigcup_{j=1}^{\underline{n}}\mathcal{A}^{j}$ for some $\underline{n}\geq 1$.
Given $\tilde{f}\in\mathscr{P}$, we now wish to describe the field operator
$\phi^{\alpha,n}(\tilde{f})$, $1\leq\alpha\leq\underline{N}$. Suppose we have
a spinor representation $A:{\rm SL}(2,{{\mathbb{C}}})\rightarrow{\rm
End}({{\mathbb{C}}}^{\underline{N}})$. For each $n\in\mathbb{N}$, the field
operator $\\{\phi^{\alpha,n}(\tilde{f}):\ 1\leq\alpha\leq\underline{N}\\}$
transforms like a spinor under the action of $A(\Lambda)$, for any
$\Lambda\in{\rm SL}(2,{{\mathbb{C}}})$. Hence, a Schwartz function $\tilde{f}$
will be ‘promoted’ to be some spinor
$\sum_{\alpha=1}^{\underline{N}}c_{\alpha}\phi^{\alpha,n}(\tilde{f})$. But
first, how does $\phi^{\alpha,n}(\tilde{f})$ act on $1$, for some Schwartz
function $\tilde{f}$?
###### Definition 3.7
(Creation operators)
Recall we indexed our non-trivial irreducible representations by a natural
number $n$. Fix a connected space-like plane $S_{0}\subset{{\mathbb{R}}}^{4}$.
We will choose $S_{0}$ to be the $x^{2}-x^{3}$ plane. Using a spatial rotation,
we can rotate $S_{0}$ to be the $x^{i}-x^{j}$ plane, for $i,j=1,2,3$. Together
with translations and boosts, we can transform any surface contained inside
$S_{0}$ into any space-like surface, using the unitary representation of
${\rm SL}(2,{{\mathbb{C}}})$. By Definition 2.5, we will choose
$\\{e_{a}\\}_{a=0}^{3}$ to be a Minkowski frame for $S_{0}$.
For any $\tilde{f}\in\mathscr{P}$, we define an operator
$\phi^{\alpha,n}(\tilde{f})$, $\alpha=1,2,\cdots,\underline{N}$,
$n\in\mathbb{N}$, which acts on the vacuum state 1 by
$\displaystyle\phi^{\alpha,n}(\tilde{f})1:=$
$\displaystyle\left(S_{0},\tilde{f}^{\\{e_{0},e_{1}\\}}\otimes\rho_{n}(F^{\alpha}),\\{e_{a}\\}_{a=0}^{3}\right)$
$\displaystyle\equiv$
$\displaystyle\left(S_{0},\tilde{f}^{\\{e_{0},e_{1}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))\otimes\rho_{n}(F^{\alpha}),\\{e_{a}\\}_{a=0}^{3}\right)\in\mathscr{H}(\rho_{n}),$
for $F^{\alpha}\in\mathcal{A}\subset\mathfrak{g}$.
In the notation above, it is understood that
$\tilde{f}^{\\{e_{0},e_{1}\\}}\equiv\tilde{f}^{\\{e_{0},e_{1}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))$.
And we restrict the domain of
$\tilde{f}^{\\{e_{0},e_{1}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n})):{{\mathbb{R}}}^{4}\rightarrow{{\mathbb{C}}}$
to be on the surface $S_{0}$.
###### Remark 3.8
The field operators can be indexed by a countably infinite set. See [19].
### 3.2 Domain and continuity
###### Definition 3.9
Let $\mathscr{S}$ denote the set consisting of countable union of space-like
rectangular surfaces in ${{\mathbb{R}}}^{4}$, each component surface is
compact or a plane. For $S\in\mathscr{S}$, let $\mathscr{P}_{S}$ denote the
set of Schwartz functions defined on $S$.
In particular, if $\sigma:I^{2}\rightarrow{{\mathbb{R}}}^{4}$ is a
parametrization of a compact space-like surface $S$, we say $f$ is a Schwartz
function on $S$ when $f\circ\sigma$ is an infinitely differentiable function
on $I^{2}$, and is compactly supported in the interior of $I^{2}$, i.e. it
decays to zero at the boundary.
If $S$ is a space-like plane, then for some parametrization
$\sigma:{{\mathbb{R}}}^{2}\rightarrow S$, we say $f$ is a Schwartz function on
$S$ when $f\circ\sigma$ is a Schwartz function on ${{\mathbb{R}}}^{2}$.
###### Definition 3.10
(Domain of field operators)
Define a domain
$\mathscr{D}\subset\bigoplus_{n=0}^{\infty}\mathscr{H}(\rho_{n})$ as
$\left\\{a_{0}1+\sum_{n,u=1}^{\infty}\left(S_{n,u},f_{n,\alpha}^{u}\otimes\rho_{n}(E^{\alpha}),\\{\hat{f}_{a}^{n,u}\\}_{a=0}^{3}\right):\
a_{0}\in{{\mathbb{C}}},\ f_{n,\alpha}^{u}\in\mathscr{P}_{S_{n,u}},\
S_{n,u}\in\mathscr{S}\right\\}.$
Given a parametrization $\sigma:I^{2}\rightarrow{{\mathbb{R}}}^{4}$ for a
surface $S$, we say that $f\in L^{2}(S)$ if $f$ is measurable on $S$ and
$\int_{I^{2}}|f\circ\sigma|^{2}(\hat{s})\left|\sum_{0\leq a<b\leq
3}\acute{\rho}_{\sigma}^{ab}(\hat{s})\left[\det\acute{J}_{ab}^{\sigma}(\hat{s})\right]\right|d\hat{s}<\infty.$
By construction of $\mathscr{H}(\rho)$, we only consider space-like surfaces,
equipped with a Minkowski frame. For any surface $S\in\mathscr{S}$,
$\mathscr{P}_{S}$ is dense inside $L^{2}(S)$. So we see that $\mathscr{D}$ is
actually a dense set inside $\mathbb{H}_{{\rm YM}}(\mathfrak{g})$, and it
contains the vacuum state.
###### Remark 3.11
In the proof of Proposition 3.20, we will show that $\mathscr{P}_{S}$ is dense
inside $L^{2}(S)$, when $S$ is compact.
From Equation (2.2), a compact space-like surface in the $x^{2}-x^{3}$ plane
can be transformed to any compact space-like surface under translation,
spatial rotation or boost. Hence, $\mathscr{S}$ remains invariant under the
action of ${\rm SL}(2,{{\mathbb{C}}})$. From Definition 2.5, a Minkowski frame
associated with a rectangular surface, is generated by Lorentz transformations
of $\\{e_{a}\\}_{a=0}^{3}$. Hence
$U(\vec{a},\Lambda)\mathscr{D}\subset\mathscr{D}$.
We can now define the field operator, whose domain is given by $\mathscr{D}$,
as follows. Recall from Definition 3.4, how we can define a new function
$\tilde{f}^{\\{\hat{f}_{0},\hat{f}_{1}\\}}$, from $\tilde{f}\in\mathscr{P}$,
using $\\{\hat{f}_{0},\hat{f}_{1}\\}$ contained in a Minkowski frame,
associated with a space-like surface $S$.
From the opening paragraph in Section 3, we saw that
$\\{\hat{f}_{a}\\}_{a=0}^{3}$ uniquely determines $\Lambda\in{\rm
SL}(2,{{\mathbb{C}}})$, up to $\pm 1$. When $j+k$ is an integer, we see that
$D^{(j,k)}(\pm\Lambda)=D^{(j,k)}(\Lambda)$.
###### Definition 3.12
Let
$\sum_{u=1}^{\infty}\left(S_{u},g_{\beta}^{u}\otimes\rho(E^{\beta}),\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}\right)\in\mathscr{H}(\rho)$,
whereby each $S_{u}$ is a connected rectangular space-like surface contained
inside some plane, and $g_{\beta}^{u}\in\mathscr{P}_{S_{u}}$. Refer to
Definition 3.4. For each Minkowski frame $\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}$,
let $\Lambda^{u}e_{a}=\hat{f}_{a}^{u}$ for some $\Lambda^{u}\in{\rm
SL}(2,{{\mathbb{C}}})$. The adjoint representation ${{\rm{ad}}}$ of
$\rho(\mathfrak{g})$ is defined as
${{\rm{ad}}}(\rho(E^{\alpha}))\rho(E^{\beta})=[\rho(E^{\alpha}),\rho(E^{\beta})]$.
Recall from Definition 3.6, we defined a set $\mathcal{A}\subset\mathfrak{g}$
with cardinality $\underline{N}$. Suppose we have a spinor representation
$A:{\rm SL}(2,{{\mathbb{C}}})\rightarrow{\rm
End}({{\mathbb{C}}}^{\underline{N}})$. This representation can be written as a
sum of irreducible representations of the form
$\bigoplus_{\alpha=1}^{m}D^{(j_{\alpha},k_{\alpha})}$,
$D^{(j_{\alpha},k_{\alpha})}$ is an irreducible representation of ${\rm
SL}(2,{{\mathbb{C}}})$ as described earlier. We will further assume that for
each $1\leq\alpha\leq m$, $j_{\alpha}+k_{\alpha}$ is an integer.
For $\Lambda\in{\rm SL}(2,{{\mathbb{C}}})$, let $A(\Lambda)_{\alpha}^{\beta}$
denote the entry at the $\beta$-th row, $\alpha$-th column. Given a test
function $f\in\mathscr{P}$, define a field operator $\phi^{\alpha,n}(f)$ as
$\displaystyle\phi^{\alpha,n}$
$\displaystyle(f)\sum_{u=1}^{\infty}\left(S_{u},g_{\beta}^{u}\otimes\rho_{m}(E^{\beta}),\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}\right)$
$\displaystyle:=\left\\{\begin{array}[]{ll}\sum_{u=1}^{\infty}\left(S_{u},f^{\\{\hat{f}_{0}^{u},\hat{f}_{1}^{u}\\}}_{n}A(\Lambda^{u})_{\gamma}^{\alpha}\cdot
g_{\beta}^{u}\otimes\rho_{n}\left([F^{\gamma},E^{\beta}]\right),\\{\hat{f}_{a}^{u}\\}_{a=0}^{3}\right),&\hbox{$m=n$;}\\\
0,&\hbox{$m\neq n$.}\end{array}\right.$
There is an implied sum over $\gamma$ from 1 to $\underline{N}$, and over
$\beta$ from 1 to $N$. Here,
$f^{\\{\hat{f}_{0}^{u},\hat{f}_{1}^{u}\\}}_{n}\equiv
f^{\\{\hat{f}_{0}^{u},\hat{f}_{1}^{u}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))$
and the restricted vector field
$f^{\\{\hat{f}_{0}^{u},\hat{f}_{1}^{u}\\}}_{n}\Big{|}_{S_{u}}A(\Lambda^{u})_{\gamma}^{\alpha}\otimes\rho_{n}(F^{\gamma})$
acts on $g_{\beta}^{u}\otimes\rho_{n}(E^{\beta})$, by
${{\rm{ad}}}\left(f^{\\{\hat{f}_{0}^{u},\hat{f}_{1}^{u}\\}}_{n}|_{S_{u}}A(\Lambda^{u})_{\gamma}^{\alpha}\otimes\rho_{n}(F^{\gamma})\right)$,
over the surface $S_{u}$ fiberwise, ${{\rm{ad}}}(\rho_{n}(E))$ is the adjoint
representation of $\rho_{n}(E)$.
###### Remark 3.13
1. 1.
The index $\alpha$ in $\phi^{\alpha,n}(f)$ is referred to as the spinor index. Indeed,
for each $n\in\mathbb{N}$ and $f\in\mathscr{P}$, we have that
$\\{\phi^{\alpha,n}(f):\ 1\leq\alpha\leq\underline{N}\\}$
is a spinor of dimension $\underline{N}$. If $A=D^{(s,0)}$, then we must have
that $s$ is an integer, i.e. the spinors are vector bosons in this case.
2. 2.
By linearity, we define
$\displaystyle\phi^{\alpha,n}(f)\sum_{m=0}^{\infty}v_{m}$
$\displaystyle:=\sum_{m=0}^{\infty}\phi^{\alpha,n}(f)v_{m}$
$\displaystyle=a_{0}\Big{(}S_{0},f^{\\{e_{0},e_{1}\\}}_{n}\otimes\rho_{n}(E^{\alpha}),\\{e_{a}\\}_{a=0}^{3}\Big{)}+\phi^{\alpha,n}(f)v_{n},$
whereby $v_{0}=a_{0}1$ is a scalar multiple of the vacuum state and
$v_{n}\in\mathscr{H}(\rho_{n})$. The domain for $\phi^{\alpha,n}(f)$ will be
$\mathscr{D}$. Note that it is a bounded operator.
3. 3.
Suppose $g_{\beta}$ is measurable and $L^{2}$ integrable on a space-like
rectangular surface $S$, contained in some plane. Since
$f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}$ is bounded and continuous, we see that
$f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}\cdot g_{\beta}$ is measurable and $L^{2}$
integrable. So,
$\displaystyle\phi^{\alpha,n}(f)$
$\displaystyle\left(S,g_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
$\displaystyle:=\left(S,f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}_{n}\cdot
A(\Lambda)_{\gamma}^{\alpha}g_{\beta}\otimes\rho_{n}([F^{\gamma},E^{\beta}]),\\{\hat{f}_{a}\\}_{a=0}^{3}\right),$
and $f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}_{n}\cdot g_{\beta}$ is defined almost
everywhere on $S$. However, we will run into problems later, when proving
Proposition 3.22, because the product of a merely measurable function with a
tempered distribution may fail to be a tempered distribution. It is thus
necessary to restrict the domain of the field operators to $\mathscr{D}$, to
avoid technical difficulties later on.
We showed how $\phi^{\alpha,n}(f)$ is defined on $\mathscr{D}$. We can now
define its adjoint.
###### Definition 3.14
(Annihilation operators)
Using the inner product in Definition 2.9, we define the adjoint
$\phi^{\alpha,n}(g)^{\ast}$ on a space-like surface $S$ contained in some
plane, as
$\displaystyle\phi^{\alpha,n}(g)^{\ast}$
$\displaystyle\left(S,f_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
$\displaystyle=$
$\displaystyle-\left(S,\overline{g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}}\overline{A(\Lambda)_{\gamma}^{\alpha}}\cdot
f_{\beta}\otimes\rho_{n}([F^{\gamma},E^{\beta}]),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
$\displaystyle+\left\langle\left(S,f_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right),\phi^{\alpha,n}(g)1\right\rangle
1,$
whereby $\hat{f}_{a}=\Lambda e_{a}$ for $a=0,\cdots,3$, and we sum over
repeated indices $\gamma$ from 1 to $\underline{N}$, and over $\beta$ from 1
to $N$. This is because
$\left\langle{{\rm{ad}}}(\rho(E^{\alpha}))\rho(E^{\beta}),\rho(E^{\gamma})\right\rangle=-\left\langle\rho(E^{\beta}),{{\rm{ad}}}(\rho(E^{\alpha}))\rho(E^{\gamma})\right\rangle.$
And
* •
$\phi^{\alpha,n}(g)^{\ast}$ will send
$\left(S,f_{\beta}\otimes\rho_{m}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
to 0 if $m\neq n$;
* •
$\phi^{\alpha,n}(g)^{\ast}1=0$.
###### Remark 3.15
Clearly, we can choose the domain to be $\mathscr{D}$.
###### Example 3.16
Consider the Lie group ${\rm SU}(2)$. Its Lie algebra can be generated by the
three Pauli matrices. Suppose $\mathcal{A}$ as described in Definition 3.6 is
linearly independent. Hence, $2\leq|\mathcal{A}|\leq 3$. If $A=D^{(j,k)}$,
then $j=1$, $k=0$, since $j+k$ must be an integer. Thus, the Pauli matrices,
which represent $W^{\pm}$ and $Z$ bosons responsible for weak force
interactions, transform like spin 1 vector bosons.
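As an illustration of this example, the following sketch takes the anti-Hermitian basis $\tau_{a}=i\sigma_{a}$ of $\mathfrak{su}(2)$ (our choice of normalization, for illustration only) and checks numerically that the adjoint action on the 3-dimensional span of $\\{\tau_{1},\tau_{2},\tau_{3}\\}$ is again a Lie algebra representation; the adjoint representation here is 3-dimensional, which is the spin 1 case.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [1j * s for s in sigma]          # anti-Hermitian su(2) basis (our choice)

def bracket(X, Y):
    return X @ Y - Y @ X

def coeffs(X):
    # expand X in the basis {tau_c}, using tr(tau_b tau_c^dagger) = 2 delta_bc
    return np.array([np.trace(X @ t.conj().T) / 2 for t in tau])

# ad(tau_a) as a 3x3 matrix: columns are the expansions of [tau_a, tau_b]
ad = [np.column_stack([coeffs(bracket(t, u)) for u in tau]) for t in tau]

# homomorphism property: [ad(tau_a), ad(tau_b)] = ad([tau_a, tau_b])
for a in range(3):
    for b in range(3):
        lhs = ad[a] @ ad[b] - ad[b] @ ad[a]
        rhs = np.column_stack([coeffs(bracket(bracket(tau[a], tau[b]), u))
                               for u in tau])
        assert np.allclose(lhs, rhs)
print("the adjoint action gives a 3-dimensional (spin 1) representation")
```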
###### Example 3.17
Consider the Lie group ${\rm SU}(3)$. Its Lie algebra can be generated by the
Gell-Mann matrices
$\displaystyle\lambda_{1}=\left(\begin{array}[]{ccc}0&\ 1&\ 0\\\ -1&\ 0&\ 0\\\ 0&\ 0&\ 0\\\ \end{array}\right),\quad\lambda_{2}=\left(\begin{array}[]{ccc}0&\ i&\ 0\\\ i&\ 0&\ 0\\\ 0&\ 0&\ 0\\\ \end{array}\right),\quad\lambda_{3}=\left(\begin{array}[]{ccc}i&\ 0&\ 0\\\ 0&\ -i&\ 0\\\ 0&\ 0&\ 0\\\ \end{array}\right),$
$\displaystyle\lambda_{4}=\left(\begin{array}[]{ccc}0&\ 0&\ i\\\ 0&\ 0&\ 0\\\ i&\ 0&\ 0\\\ \end{array}\right),\quad\lambda_{5}=\left(\begin{array}[]{ccc}0&\ 0&\ 1\\\ 0&\ 0&\ 0\\\ -1&\ 0&\ 0\\\ \end{array}\right),$
$\displaystyle\lambda_{6}=\left(\begin{array}[]{ccc}0&\ 0&\ 0\\\ 0&\ 0&\ i\\\ 0&\ i&\ 0\\\ \end{array}\right),\quad\lambda_{7}=\left(\begin{array}[]{ccc}0&\ 0&\ 0\\\ 0&\ 0&\ 1\\\ 0&\ -1&\ 0\\\ \end{array}\right),\quad\lambda_{8}=\frac{1}{\sqrt{3}}\left(\begin{array}[]{ccc}i&\ 0&\ 0\\\ 0&\ i&\ 0\\\ 0&\ 0&\ -2i\\\ \end{array}\right),$
each representing a gluon responsible for the strong force interaction.
Observe that $\\{\lambda_{1},\lambda_{2},\lambda_{3}\\}$ is the Lie algebra of
a subgroup $H_{1}\subset{\rm SU}(3)$, isomorphic to ${\rm SU}(2)$.
Furthermore, $\lambda_{8}$ generates an abelian subgroup $H_{2}$, such that
the elements in $H_{1}$ commute with the elements in $H_{2}$, because
$\lambda_{8}$ commutes with $\\{\lambda_{1},\lambda_{2},\lambda_{3}\\}$. Let
$\tilde{H}$ be a Lie subgroup in ${\rm SU}(3)$, generated by
$\\{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{8}\\}$.
Suppose the vacuum state $1$ is invariant under this unbroken subgroup
$\tilde{H}$ and $\mathcal{A}$ as described in Definition 3.6 is linearly
independent. This means that $3\leq|\mathcal{A}|\leq 4$, and $\mathcal{A}$
cannot contain any element that is a linear combination of
$\\{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{8}\\}$, also referred to as
the unbroken generators. Therefore, this ${\rm SU}(3)$ gauge theory is
spontaneously broken. Furthermore, if $A=D^{(j,k)}$, then either $j=1$, $k=0$
or $j=k=1/2$.
Choose $\mathcal{A}=\\{\lambda_{4},\lambda_{5},\lambda_{7}\\}$. A direct
computation shows that
$\displaystyle[\lambda_{7},\lambda_{4}]=$
$\displaystyle\lambda_{2},\quad[\lambda_{7},\lambda_{5}]=\lambda_{1},\quad{{\rm{ad}}}(\lambda_{4}){{\rm{ad}}}(\lambda_{5})\lambda_{7}=-\lambda_{6},$
$\displaystyle{{\rm{ad}}}(\lambda_{7}){{\rm{ad}}}(\lambda_{4}){{\rm{ad}}}(\lambda_{5})\lambda_{7}=$
$\displaystyle\lambda_{3}-\sqrt{3}\lambda_{8},\quad[\lambda_{5},\lambda_{4}]=\lambda_{3}+\sqrt{3}\lambda_{8}.$
Thus, we see that $\bigcup_{j=1}^{4}\mathcal{A}^{j}$ spans $\mathfrak{su}(3)$.
In this case, we can choose $A=D^{(1,0)}$ as an irreducible representation and
the vectors in the span of $\mathcal{A}$ are called spin 1 vectors.
If we choose
$\mathcal{A}=\\{\lambda_{4},\lambda_{5},\lambda_{6},\lambda_{7}\\}$, which are
the broken generators in $\mathfrak{su}(3)$, then $A=D^{(1/2,1/2)}$ is
equivalent to $Y$, and the vectors in the span of $\mathcal{A}$ transform like
4-vectors.
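The displayed identities in this example can be verified mechanically. Below is a short numerical sketch using the matrices exactly as listed above; it checks the four computations for $\mathcal{A}=\\{\lambda_{4},\lambda_{5},\lambda_{7}\\}$ and that the generated elements, together with $\mathcal{A}$, span an 8-dimensional real space, i.e. all of $\mathfrak{su}(3)$.

```python
import numpy as np

i = 1j
l1 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, i, 0], [i, 0, 0], [0, 0, 0]])
l3 = np.array([[i, 0, 0], [0, -i, 0], [0, 0, 0]])
l4 = np.array([[0, 0, i], [0, 0, 0], [i, 0, 0]])
l5 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=complex)
l6 = np.array([[0, 0, 0], [0, 0, i], [0, i, 0]])
l7 = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]], dtype=complex)
l8 = np.array([[i, 0, 0], [0, i, 0], [0, 0, -2 * i]]) / np.sqrt(3)

def ad(X):
    return lambda Y: X @ Y - Y @ X

# the identities displayed in the example
assert np.allclose(ad(l7)(l4), l2)
assert np.allclose(ad(l7)(l5), l1)
assert np.allclose(ad(l4)(ad(l5)(l7)), -l6)
assert np.allclose(ad(l7)(ad(l4)(ad(l5)(l7))), l3 - np.sqrt(3) * l8)
assert np.allclose(ad(l5)(l4), l3 + np.sqrt(3) * l8)

# the generated elements, together with A = {l4, l5, l7}, span su(3)
basis = [l4, l5, l7, l1, l2, l6, l3 - np.sqrt(3) * l8, l3 + np.sqrt(3) * l8]
M = np.array([np.concatenate([m.real.ravel(), m.imag.ravel()]) for m in basis])
assert np.linalg.matrix_rank(M) == 8
```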
### 3.3 Cyclicity
###### Notation 3.18
Let $S=[0,1]\times[0,1]\equiv I^{2}$ and
$\mathscr{P}_{S}(n)\subset\mathscr{P}_{S}$, whereby $g\in\mathscr{P}_{S}(n)$
if $g=f_{1}f_{2}\cdots f_{n}$, each $f_{i}\in\mathscr{P}_{S}$.
Let $C_{c}(S,{{\mathbb{C}}})$ and $C(S,{{\mathbb{C}}})$ denote the set of
compactly supported continuous functions and the set of continuous functions
on $S$ respectively.
We also write $\parallel\cdot\parallel_{L^{2}}$ to denote the $L^{2}$ norm on
$S$, i.e. $\parallel f\parallel_{L^{2}}=\left[\int_{I^{2}}|f(\hat{s})|^{2}\
d\hat{s}\right]^{1/2}$.
###### Lemma 3.19
We have that $\mathscr{P}_{S}$ is dense in $L^{2}(S)$. Let $\mathcal{G}(n)$ be
the smallest algebra containing $\mathscr{P}_{S}(n)$, $n\in\mathbb{N}$ fixed.
Then, $\mathcal{G}(n)$ is dense in $\mathscr{P}_{S}$ using the $L^{2}$ norm.
Proof. Now $\mathscr{P}_{S}$ is a complex algebra and clearly it separates
interior points in $S$. Unfortunately, it does not contain the unit 1 on
$S\equiv I^{2}$. Let $\mathcal{C}$ be the smallest algebra containing $1$ and
$\mathscr{P}_{S}$.
By the complex Stone-Weierstrass Theorem, $\mathscr{P}_{S}$ and $\mathcal{C}$ are
respectively dense in $C_{c}(S,{{\mathbb{C}}})$ and $C(S,{{\mathbb{C}}})$.
Furthermore, since continuous functions are dense in $L^{2}(S)$, we see that
$\mathcal{C}$ will be dense in $L^{2}(S)$. To show that the space containing
polynomials of functions in $\mathscr{P}_{S}$ will generate $L^{2}(S)$, we
will show that we can approximate 1 via the $L^{2}$ norm, using a sequence of
functions in $\mathscr{P}_{S}$.
Define $\breve{\varphi}_{\delta}:[0,1]\rightarrow{{\mathbb{R}}}$ by
$\breve{\varphi}_{\delta}(t):=\left\\{\begin{array}[]{ll}\frac{1}{\delta}t,&\hbox{$0\leq
t\leq\delta$;}\\\ 1,&\hbox{$\delta<t\leq 1-\delta$;}\\\
\frac{1}{\delta}(1-t),&\hbox{$1-\delta<t\leq 1$.}\end{array}\right.$
Then, $\hat{t}=(t,\bar{t})\in
I^{2}\mapsto\phi_{\delta}(\hat{t}):=\breve{\varphi}_{\delta}(t)\breve{\varphi}_{\delta}(\bar{t})$
is continuous. Let $\epsilon>0$. We can find a $\delta>0$ such that
$\parallel\phi_{\delta}-1\parallel_{L^{2}}<\epsilon/2$.
Since $\phi_{\delta}\in C_{c}(S,{{\mathbb{C}}})$, we can find a
$g_{\epsilon}\in\mathscr{P}_{S}$ such that
$\parallel\phi_{\delta}-g_{\epsilon}\parallel_{L^{2}}<\epsilon/2$. Thus,
$\parallel 1-g_{\epsilon}\parallel_{L^{2}}<\epsilon$. This completes the claim
that $\mathscr{P}_{S}$ is dense in $L^{2}(S)$.
To prove the second statement, let $f\in\mathscr{P}_{S}$ and let $\epsilon>0$.
There exists a $M>0$ such that $|f|(\hat{s})<M$ for all $\hat{s}\in S$. Choose
a $\delta>0$ such that $\parallel
1-g_{\delta}^{n-1}\parallel_{L^{2}}<\epsilon/M$,
$g_{\delta}\in\mathscr{P}_{S}$. Let $\tilde{g}_{\epsilon}:=g_{\delta}^{n-1}f$.
Then,
$\displaystyle\parallel f-\tilde{g}_{\epsilon}\parallel_{L^{2}}$
$\displaystyle=\left[\int_{I^{2}}|1-g_{\delta}^{n-1}|^{2}(\hat{t})|f|^{2}(\hat{t})\
d\hat{t}\right]^{1/2}$ $\displaystyle\leq M\parallel
1-g_{\delta}^{n-1}\parallel_{L^{2}}<\epsilon.$
Thus, $\mathcal{G}(n)$ is dense in $\mathscr{P}_{S}$ using the $L^{2}$ norm.
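As a numerical sanity check of the approximation step in this proof (an illustration only, not part of the argument), one can verify on a grid that $\parallel\phi_{\delta}-1\parallel_{L^{2}}\rightarrow 0$ as $\delta\rightarrow 0$:

```python
import numpy as np

def tent(t, delta):
    # the piecewise-linear function \breve{\varphi}_{\delta} on [0, 1]
    return np.where(t <= delta, t / delta,
                    np.where(t <= 1 - delta, 1.0, (1 - t) / delta))

def l2_error(delta, n=2000):
    t = (np.arange(n) + 0.5) / n                      # midpoint rule on [0, 1]
    phi = np.outer(tent(t, delta), tent(t, delta))    # phi_delta on I^2
    return np.sqrt(np.mean((phi - 1.0) ** 2))

for delta in (0.2, 0.1, 0.05, 0.01):
    print(delta, l2_error(delta))   # decreases towards 0 as delta -> 0
```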
Because $\mathscr{P}_{S}$ is an algebra, we see that
$\psi^{\alpha_{1},m_{1}}(g_{1})\cdots\psi^{\alpha_{k},m_{k}}(g_{k})\mathscr{D}\subset\mathscr{D},$
whereby $\psi^{\alpha_{i},m_{i}}(g_{i})=\phi^{\alpha_{i},m_{i}}(g_{i})$ or its
adjoint $\phi^{\alpha_{i},m_{i}}(g_{i})^{\ast}$.
Let $\mathscr{D}_{0}$ be a subspace inside $\mathbb{H}_{{\rm
YM}}(\mathfrak{g})$, generated by the action of $U(\vec{a},\Lambda)$ and
polynomials in
$\phi^{\alpha_{1},n}(f_{1}),\cdots,\phi^{\alpha_{r},n}(f_{r})$, on the vacuum
state 1. Clearly, $\mathscr{D}_{0}\subset\mathscr{D}$.
###### Proposition 3.20
The set $\mathscr{D}_{0}$ is dense inside $\mathbb{H}_{{\rm
YM}}(\mathfrak{g})$.
Proof. Under $U(\vec{a},\Lambda)$, we can transform any space-like rectangular
surface into any other space-like rectangular surface, via translation, spatial
rotation and boost. See Equation (2.2). Thus it suffices to show that for any
compact space-like rectangular surface $S\subset{{\mathbb{R}}}^{4}$ contained
in some plane, we can approximate any measurable section in
$S\times\rho_{n}(\mathfrak{g})_{{\mathbb{C}}}\rightarrow S$, for each fixed
$n$, using the field operators. Without loss of generality, we assume that $S$
is $I^{2}$, lying inside the $x^{2}-x^{3}$ plane.
Refer to Definition 3.6. First we assume that the span of $\mathcal{A}$ is
$\mathfrak{g}$. In this case, we see that $\underline{N}\geq N$. Recall
${{\rm{ad}}}(E)F=[E,F]\in\mathfrak{g}$. Since $\mathfrak{g}$ is simple, we see
that for a fixed $k\geq 1$,
${\rm span}\
\left\\{{{\rm{ad}}}(F^{\alpha_{1}})\cdots{{\rm{ad}}}(F^{\alpha_{k}})F^{\beta}:\
1\leq\beta\leq\underline{N},\ 1\leq\alpha_{i}\leq\underline{N},\
i=1,2,\cdots,k\right\\}=\mathfrak{g}.$
Therefore, for each $1\leq\gamma\leq N$, we can write
$E^{\gamma}=\sum_{\beta=1}^{N(\gamma)}d_{\beta}^{\gamma}{{\rm{ad}}}(F^{\alpha_{1}^{\gamma,\beta}})\cdots{{\rm{ad}}}(F^{\alpha_{m-1}^{\gamma,\beta}})F^{\alpha_{m}^{\gamma,\beta}},$
for real coefficients $d_{\beta}^{\gamma}$ and natural numbers
$1\leq\alpha_{i}^{\gamma,\beta}\leq\underline{N}$.
Let $p_{\kappa}(x)=\frac{\kappa}{\sqrt{2\pi}}e^{-\kappa^{2}x^{2}/2}$ be a
one-dimensional Gaussian function with mean 0 and variance $1/\kappa^{2}$. Its
Fourier Transform is $\hat{p}_{\kappa}(q)=\kappa p_{1/\kappa}(q)$. Let
$c_{n}=\hat{p}_{1}(\hat{H}(\rho_{n}))\neq 0$,
$d_{n}=\hat{p}_{1}(\hat{P}(\rho_{n}))\neq 0$ be fixed.
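The identity $\hat{p}_{\kappa}(q)=\kappa p_{1/\kappa}(q)$ can be checked numerically. The sketch below assumes the unitary convention $\hat{p}(q)=\frac{1}{\sqrt{2\pi}}\int_{{\mathbb{R}}}e^{-iqx}p(x)\ dx$; this is the convention under which the stated identity holds, and a different normalization would change the constant.

```python
import numpy as np
from scipy.integrate import quad

def p(kappa, x):
    # one-dimensional Gaussian, mean 0, variance 1/kappa^2
    return kappa / np.sqrt(2 * np.pi) * np.exp(-(kappa * x) ** 2 / 2)

def p_hat(kappa, q):
    # unitary Fourier transform (assumed convention); p is even in x, so the
    # sine part of e^{-iqx} integrates to zero and only the cosine part remains
    re, _ = quad(lambda x: p(kappa, x) * np.cos(q * x), -np.inf, np.inf)
    return re / np.sqrt(2 * np.pi)

for kappa in (0.5, 1.0, 2.0):
    for q in (0.0, 0.7, 1.3):
        assert abs(p_hat(kappa, q) - kappa * p(1.0 / kappa, q)) < 1e-6
```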
For a given set of Schwartz functions $\\{f_{1},\cdots,f_{m}\\}$ defined on
$S$, extend each one to be a function
$F_{i}\in\mathscr{P}:\vec{x}\in{{\mathbb{R}}}^{4}\mapsto\frac{1}{c_{n}}p_{1}(x^{0})\frac{1}{d_{n}}p_{1}(x^{1})f_{i}(x^{2},x^{3}),\
i=1,\cdots,m.$
We also have
$F_{i}^{\\{e_{0},e_{1}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))(0,0,x^{2},x^{3})=\frac{1}{c_{n}}\hat{p}_{1}(\hat{H}(\rho_{n}))\frac{1}{d_{n}}\hat{p}_{1}(\hat{P}(\rho_{n}))f_{i}(x^{2},x^{3})=f_{i}(x^{2},x^{3}).$
Hence,
$\displaystyle[F_{1}^{\\{e_{0},e_{1}\\}}\cdots
F_{m}^{\\{e_{0},e_{1}\\}}](\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))(0,0,x^{2},x^{3})=\prod_{i=1}^{m}f_{i}(x^{2},x^{3}).$
For any set of Schwartz functions $\\{f_{1},\cdots,f_{m}\\}$ on $S$, we can
extend them to be Schwartz functions on ${{\mathbb{R}}}^{4}$ as described
above, and thus we have
$\displaystyle\sum_{\beta=1}^{N(\gamma)}d_{\beta}^{\gamma}\phi^{\alpha_{1}^{\gamma,\beta},n}(F_{1})\cdots\phi^{\alpha_{m}^{\gamma,\beta},n}(F_{m})1$
$\displaystyle=\left(S,\prod_{i=1}^{m}f_{i}\otimes\rho_{n}(E^{\gamma}),\\{e_{a}\\}_{a=0}^{3}\right).$
If we let
$C_{m}={\rm span}\
\left\\{\phi^{\alpha_{1},n}(F_{1})\cdots\phi^{\alpha_{m},n}(F_{m})1:\
F_{i}\in\mathscr{P},\ 1\leq\alpha_{i}\leq\underline{N}\right\\},$
we see that the sum of subspaces, $\sum_{m=1}^{\infty}C_{m}$, is dense in
$L^{2}(S)\otimes\rho_{n}(\mathfrak{g})$. This follows from Lemma 3.19.
This will show that we can find a sequence of vectors in the sum
$\sum_{m=1}^{\infty}C_{m}$ and approximate any vector of the form
$\left(S,f_{\alpha}\otimes\rho_{n}(E^{\alpha}),\\{e_{a}\\}_{a=0}^{3}\right)$,
whereby $f_{\alpha}$ is measurable on a compact rectangular surface $S$ inside
$x^{2}-x^{3}$ plane.
Now suppose the span of $\mathcal{A}$ is not equal to $\mathfrak{g}$. Thus,
$\underline{n}\geq 2$. By the definition of $\mathcal{A}$, we see that
${\rm span}\
\left\\{{{\rm{ad}}}(F^{\alpha_{1}})\cdots{{\rm{ad}}}(F^{\alpha_{\tilde{n}-1}})F^{\alpha_{\tilde{n}}}:\
1\leq\alpha_{i}\leq\underline{N},\
1\leq\tilde{n}\leq\underline{n}\right\\}=\mathfrak{g}.$
Thus, for each $1\leq\gamma\leq N$, we can write for some
$1\leq\tilde{n}\leq\underline{n}$,
$\tilde{E}^{\gamma}=\sum_{\xi=1}^{N(\gamma)}d(\gamma,\xi){{\rm{ad}}}(F^{\alpha_{1}(\gamma,\xi)})\cdots{{\rm{ad}}}(F^{\alpha_{\tilde{n}-1}(\gamma,\xi)})F^{\alpha_{\tilde{n}}(\gamma,\xi)},$
for real coefficients $d(\gamma,\xi)$ and natural numbers
$1\leq\alpha_{i}(\gamma,\xi)\leq\underline{N}$. Here, $\\{\tilde{E}^{\alpha}:\
1\leq\alpha\leq N\\}$ is a basis for $\mathfrak{g}$.
Let $\epsilon>0$ and $f\in\mathscr{P}_{S}$. From the proof of Lemma 3.19, we
can find $g_{1},\cdots,g_{\tilde{n}}\in\mathscr{P}_{S}$ such that
$\parallel f-g_{1}\cdots
g_{\tilde{n}}\parallel_{L^{2}}<\frac{\epsilon}{C(\rho_{n})}.$
Using an earlier argument, there exist $G_{1},\cdots,G_{\tilde{n}}$ such that
$G_{i}^{\\{e_{0},e_{1}\\}}\equiv g_{i}$ for $1\leq i\leq\tilde{n}$.
Hence, we have that
$\displaystyle\sum_{\xi=1}^{N(\gamma)}$ $\displaystyle
d(\gamma,\xi)\phi^{\alpha_{1}(\gamma,\xi),n}(G_{1})\cdots\phi^{\alpha_{\tilde{n}-1}(\gamma,\xi),n}(G_{\tilde{n}-1})\phi^{\alpha_{\tilde{n}}(\gamma,\xi),n}(G_{\tilde{n}})1$
$\displaystyle=$
$\displaystyle\left(S,\prod_{i=1}^{\tilde{n}}g_{i}\otimes\rho_{n}(\tilde{E}^{\gamma}),\\{e_{a}\\}_{a=0}^{3}\right).$
A direct computation will show that
$\left\|\Big{(}S,f\otimes\rho(\tilde{E}^{\gamma}),\\{e_{a}\\}_{a=0}^{3}\Big{)}-\left(S,\prod_{i=1}^{\tilde{n}}g_{i}\otimes\rho_{n}(\tilde{E}^{\gamma}),\\{e_{a}\\}_{a=0}^{3}\right)\right\|<\epsilon.$
This completes the proof.
###### Remark 3.21
Given any
$\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$,
we can find a sequence of Schwartz functions
$\\{g_{\alpha}^{m}:{{\mathbb{R}}}^{4}\rightarrow{{\mathbb{C}}}\ |\
m\in\mathbb{N}\\}$ whose partial Fourier Transforms approximate
$f_{\alpha}$, for each $1\leq\alpha\leq N$. Here, for each
$g_{\alpha}^{m}:{{\mathbb{R}}}^{4}\rightarrow{{\mathbb{C}}}$, we can take the
Fourier Transform on the time and space variables, i.e. $g_{\alpha}^{m}\mapsto
g_{\alpha}^{m,\\{\hat{f}_{0},\hat{f}_{1}\\}}(\hat{H}(\rho),\hat{P}(\rho))(\vec{x})$,
$\vec{x}\in S$. As explained at the end of subsection 2.2, the eigenvalues
$\\{\hat{H}(\rho),\hat{P}(\rho)\\}$ will be defined using a Yang-Mills path
integral in subsection 7.3. Therefore,
$\left(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)\in\mathscr{H}(\rho)$
can be written as the limit of
$\left\\{\left(S,g_{\alpha}^{m,\\{\hat{f}_{0},\hat{f}_{1}\\}}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)\right\\}_{m=1}^{\infty}$,
using the inner product given in Definition 2.9, and hence they are referred
to as Yang-Mills fields.
### 3.4 Tempered Distribution
###### Proposition 3.22
Let $S,\tilde{S}$ be bounded space-like surfaces lying inside some plane. Let
$\left(S,g_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right),\left(\tilde{S},\tilde{g}_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)\in\mathscr{D}$.
Suppose $\hat{f}_{a}=\Lambda e_{a}$ for $\Lambda\in{\rm
SL}(2,{{\mathbb{C}}})$.
Write
$c_{\alpha}^{\gamma,\beta}=A(\Lambda)_{\delta}^{\alpha}{{\rm{Tr}}}\left[-[{{\rm{ad}}}(\rho_{n}(F^{\delta}))\rho_{n}(E^{\gamma})]\rho_{n}(E^{\beta})\right],$
with an implied sum over $\delta$. Given a test function $f\in\mathscr{P}$, we
define for a smooth parametrization $\sigma:I^{2}\rightarrow
S\cap\tilde{S}\subset{{\mathbb{R}}}^{4}$,
$\displaystyle T(f):=$
$\displaystyle\left\langle\phi^{\alpha,n}(f)\left(\tilde{S},\tilde{g}_{\gamma}\otimes\rho_{n}(E^{\gamma}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right),\left(S,g_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)\right\rangle$
$\displaystyle=$ $\displaystyle c_{\alpha}^{\gamma,\beta}\int_{I^{2}}d\hat{t}\
\left[f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}\cdot\tilde{g}_{\gamma}\cdot\overline{g_{\beta}}\right](\sigma(\hat{t}))\cdot|\acute{\rho}_{\sigma}|(\hat{t}).$
Then $T$ is a linear functional on $\mathscr{P}$. Furthermore, it is a
tempered distribution.
Proof. It is clear that it is a linear functional on $\mathscr{P}$. It remains
to show that it is a tempered distribution.
Let $\sigma:I^{2}\rightarrow{{\mathbb{R}}}^{4}$ be a parametrization of
$\hat{S}:=S\cap\tilde{S}$, and write
$h=[\tilde{g}_{\gamma}\cdot\overline{g_{\beta}}]\circ\sigma\cdot\left|\acute{\rho}_{\sigma}\right|c_{\alpha}^{\gamma,\beta}.$
Then,
$\displaystyle T(f)=$
$\displaystyle\int_{I^{2}}f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}(\sigma(\hat{t}))h(\hat{t})\
d\hat{t}.$
Let
$\tilde{\sigma}:\hat{s}\mapsto\tilde{\sigma}(\hat{s})=s\hat{f}_{0}+\bar{s}\hat{f}_{1}$,
$s,\bar{s}\in{{\mathbb{R}}}$, whereby $\\{\hat{f}_{a}\\}_{a=0}^{3}$ is a
Minkowski frame for $\hat{S}$. By definition,
$\displaystyle\vec{x}\longmapsto f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}(\vec{x})$
$\displaystyle\equiv
f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))(\vec{x})$
$\displaystyle=\int_{\hat{s}\in{{\mathbb{R}}}^{2}}\frac{e^{-i[\tilde{\sigma}(\hat{s})\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{2\pi}f\left(\vec{x}+\tilde{\sigma}(\hat{s})\right)\left|\acute{\rho}_{\tilde{\sigma}}\right|(\hat{s})\
d\hat{s}.$
Write
$\vec{\alpha}=\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1}$. Thus
$\displaystyle T(f)=$
$\displaystyle\int_{I^{2}}f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}(\sigma(\hat{t}))h(\hat{t})\
d\hat{t}$ $\displaystyle=$
$\displaystyle\int_{\hat{s},\hat{t}\in{{\mathbb{R}}}^{2}}d\hat{t}d\hat{s}\
f(\sigma(\hat{t})+\tilde{\sigma}(\hat{s}))\left|\acute{\rho}_{\tilde{\sigma}}\right|(\hat{s})\left|\acute{\rho}_{\sigma}\right|(\hat{t})\cdot\frac{e^{-i[\tilde{\sigma}(\hat{s})\cdot\vec{\alpha}]}}{2\pi}[\tilde{g}_{\gamma}\cdot\overline{g_{\beta}}]\circ\sigma(\hat{t})\cdot
c_{\alpha}^{\gamma,\beta}.$
Note that
$\displaystyle(\hat{s},\hat{t})\in{{\mathbb{R}}}^{4}$
$\displaystyle\longmapsto\left|\acute{\rho}_{\tilde{\sigma}}\right|(\hat{s})\left|\acute{\rho}_{\sigma}\right|(\hat{t})\cdot\frac{e^{-i[\tilde{\sigma}(\hat{s})\cdot\vec{\alpha}]}}{2\pi}[\tilde{g}_{\gamma}\cdot\overline{g_{\beta}}]\circ\sigma(\hat{t})\cdot
c_{\alpha}^{\gamma,\beta},$
is a tempered distribution, because
$\\{g_{\beta},\tilde{g}_{\gamma}\\}_{\beta,\gamma=1}^{N}$ are Schwartz
functions. Therefore, $T$ is a tempered distribution.
###### Remark 3.23
The map
$\displaystyle f$ $\displaystyle\in\mathscr{P}$
$\displaystyle\longmapsto\int_{\hat{s},\hat{t}\in{{\mathbb{R}}}^{2}}d\hat{t}d\hat{s}\
f(\sigma(\hat{t})+\tilde{\sigma}(\hat{s}))\left|\acute{\rho}_{\tilde{\sigma}}\right|(\hat{s})\left|\acute{\rho}_{\sigma}\right|(\hat{t})\cdot\frac{e^{-i[\tilde{\sigma}(\hat{s})\cdot\vec{\alpha}]}}{2\pi}[\tilde{g}_{\gamma}\cdot\overline{g_{\beta}}]\circ\sigma(\hat{t})\times{{\rm{Tr}}}\left[-[{{\rm{ad}}}(\cdot)\rho_{n}(E^{\gamma})]\rho_{n}(E^{\beta})\right],$
defines a $\rho_{n}(\mathfrak{g})$-valued distribution, using the inner
product on $\rho_{n}(\mathfrak{g})$ defined in Equation (7.1).
###### Corollary 3.24
Suppose $S=\tilde{S}=S_{0}$. For any $f\in\mathscr{P}$, we can write
$\displaystyle T(f)$
$\displaystyle=\int_{{{\mathbb{R}}}^{4}}f(\vec{x})\frac{e^{i[x^{0}\hat{H}(\rho_{n})-x^{1}\hat{P}(\rho_{n})]}}{2\pi}[\tilde{g}_{\gamma}\cdot\overline{g_{\beta}}](x^{2},x^{3})\
d\vec{x}\cdot{{\rm{Tr}}}\left[-[{{\rm{ad}}}(\rho_{n}(F^{\alpha}))\rho_{n}(E^{\gamma})]\rho_{n}(E^{\beta})\right].$
Proof. We note that $\\{e_{a}\\}_{a=0}^{3}$ is the default Minkowski frame. In
this case, $\Lambda=\pm 1\in{\rm SL}(2,{{\mathbb{C}}})$, and $A(\pm 1)=1$.
Furthermore, we can choose the parametrizations
$\sigma(x^{2},x^{3})=x^{2}e_{2}+x^{3}e_{3}$,
$\tilde{\sigma}(x^{0},x^{1})=x^{0}e_{0}+x^{1}e_{1}$. A direct calculation
shows $|\acute{\rho}_{\sigma}|=|\acute{\rho}_{\tilde{\sigma}}|=1$. The result
hence follows.
###### Remark 3.25
In this corollary, we see that the function
$\vec{x}=(x^{0},x^{1},x^{2},x^{3})\longmapsto\frac{e^{i[x^{0}\hat{H}(\rho_{n})-x^{1}\hat{P}(\rho_{n})]}}{2\pi}[\tilde{g}_{\gamma}\cdot\overline{g_{\beta}}](x^{2},x^{3}),$
is not in $\mathscr{P}$; rather, it defines a tempered distribution.
## 4 Transformation Law of the Field Operator
Recall $\mathfrak{g}$ has dimension $N$ and has an irreducible representation
$\rho:\mathfrak{g}\rightarrow{\rm End}({{\mathbb{C}}}^{\tilde{N}})$. Without
any loss of generality, consider a space-like surface $S$, contained inside
some plane. From Definition 2.5, we have a Minkowski frame
$\\{\hat{f}_{a}\\}_{a=0}^{3}$ assigned to it.
In Definition 3.6, we have a finite set $\mathcal{A}\subset\mathfrak{g}$,
which defines a spinor indexed by $1\leq\alpha\leq\underline{N}$. This spinor
transforms under the action $A(\Lambda):\phi^{\alpha,n}\mapsto
A(\Lambda)_{\beta}^{\alpha}\phi^{\beta,n}$ for $\Lambda\in{\rm
SL}(2,{{\mathbb{C}}})$.
For $\Lambda\in{\rm SL}(2,{{\mathbb{C}}})$, we consider
$\Lambda^{-1}(S-\vec{a})$, which is also a space-like surface, with
$\\{\hat{g}_{a}\\}_{a=0}^{3}=\\{\Lambda^{-1}\hat{f}_{a}\\}_{a=0}^{3}$ assigned
as a Minkowski frame to it by Definition 2.5.
###### Proposition 4.1
We have the transformation law for the field operators acting on
$\mathscr{H}(\rho)$, i.e.
$\displaystyle U(\vec{a},\Lambda)$
$\displaystyle\phi^{\alpha,n}(f)U(\vec{a},\Lambda)^{-1}\left(S,g_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
$\displaystyle=A(\Lambda^{-1})_{\gamma}^{\alpha}\phi^{\gamma,n}(f(\Lambda^{-1}(\cdot-\vec{a})))\left(S,g_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right),$
whereby $S$ is some rectangular space-like surface contained in some plane.
Recall $S_{0}^{\flat}$ is the $x^{0}-x^{1}$ plane. However,
$\displaystyle U$
$\displaystyle(\vec{a},\Lambda)\phi^{\alpha,n}(f)U(\vec{a},\Lambda)^{-1}1$
$\displaystyle=\left(\Lambda
S_{0}+\vec{a},e^{-i[\vec{a}\cdot(\hat{H}(\rho_{n})\hat{g}_{0}+\hat{P}(\rho_{n})\hat{g}_{1})]}f^{\\{\hat{g}_{0},\hat{g}_{1}\\}}(\Lambda^{-1}(\cdot-\vec{a}))\otimes\rho_{n}(F^{\alpha}),\\{\hat{g}_{a}\\}_{a=0}^{3}\right).$
Here, $\\{\hat{g}_{a}\\}_{a=0}^{3}=\\{\Lambda e_{a}\\}_{a=0}^{3}$ is a
Minkowski frame for $\Lambda S_{0}$.
Proof. By Definition 2.5, there is a $\tilde{\Lambda}\in{\rm
SL}(2,{{\mathbb{C}}})$ such that $\hat{f}_{a}=Y(\tilde{\Lambda})e_{a}$. Thus,
$\\{\Lambda^{-1}\tilde{\Lambda}e_{a}\\}_{a=0}^{3}=\\{\Lambda^{-1}\hat{f}_{a}\\}_{a=0}^{3}$
is a Minkowski frame for $\Lambda^{-1}(S-\vec{a})$.
Write
$d_{\beta}^{\alpha}=A(\Lambda^{-1}\tilde{\Lambda})_{\beta}^{\alpha}\equiv
A(\Lambda^{-1})_{\gamma}^{\alpha}A(\tilde{\Lambda})^{\gamma}_{\beta}$,
$\displaystyle T(\rho_{n},\vec{a})$
$\displaystyle=e^{-i[\vec{a}\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]},\quad
T(\rho_{n},\vec{a})^{-1}=e^{i[\vec{a}\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]},$
and
$f^{\mathcal{C}}=f^{\\{\Lambda^{-1}\hat{f}_{0},\Lambda^{-1}\hat{f}_{1}\\}}$,
with $\mathcal{C}=\\{\Lambda^{-1}\hat{f}_{0},\Lambda^{-1}\hat{f}_{1}\\}$,
$\mathcal{D}=\\{\Lambda^{-1}\hat{f}_{a}\\}_{a=0}^{3}$.
We have
$\displaystyle U$
$\displaystyle(\vec{a},\Lambda)\phi^{\alpha,n}(f)U(\vec{a},\Lambda)^{-1}\left(S,g_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\hat{f}_{a}\\}_{a=0}^{3}\right)$
$\displaystyle=$ $\displaystyle
U(\vec{a},\Lambda)\phi^{\alpha,n}(f)\Big{(}\Lambda^{-1}(S-\vec{a}),T(\rho_{n},\vec{a})^{-1}g_{\beta}(\Lambda\cdot+\vec{a})\otimes\rho_{n}\left(E^{\beta}\right),\mathcal{D}\Big{)}$
$\displaystyle=$ $\displaystyle
U(\vec{a},\Lambda)\Big{(}\Lambda^{-1}(S-\vec{a}),[T(\rho_{n},\vec{a})^{-1}f^{\mathcal{C}}](\cdot)d_{\gamma}^{\alpha}\cdot
g_{\beta}(\Lambda\cdot+\vec{a})\otimes{{\rm{ad}}}(\rho_{n}(F^{\gamma}))\rho_{n}\left(E^{\beta}\right),\mathcal{D}\Big{)}$
$\displaystyle=$
$\displaystyle\Big{(}S,T(\rho_{n},\vec{a})T(\rho_{n},\vec{a})^{-1}f^{\mathcal{C}}(\Lambda^{-1}(\cdot-\vec{a}))d_{\gamma}^{\alpha}g_{\beta}(\cdot)\otimes{{\rm{ad}}}(\rho_{n}(F^{\gamma}))\rho_{n}\left(E^{\beta}\right),\\{\hat{f}_{a}\\}_{a=0}^{3}\Big{)}.$
(4.1)
Let $[\Lambda^{-1}(S-\vec{a})]^{\flat}$ be the span of
$\\{\Lambda^{-1}\hat{f}_{0},\Lambda^{-1}\hat{f}_{1}\\}$. By definition, for
any point $\vec{x}\in S$, we have that
$f^{\mathcal{C}}(\Lambda^{-1}(\vec{x}-\vec{a}))\equiv
f^{\\{\Lambda^{-1}\hat{f}_{0},\Lambda^{-1}\hat{f}_{1}\\}}(\Lambda^{-1}(\vec{x}-\vec{a})),$
means that we take the partial Fourier Transform of $f$ using Equation (3.2),
in the time-like plane $\vec{y}+[\Lambda^{-1}(S-\vec{a})]^{\flat}$, for
$\vec{y}=\Lambda^{-1}(\vec{x}-\vec{a})$.
Let $\sigma(s,\bar{s})=s\hat{f}_{0}+\bar{s}\hat{f}_{1}$,
$s,\bar{s}\in{{\mathbb{R}}}$. Let $\hat{g}_{0}=\Lambda^{-1}\hat{f}_{0}$ and
$\hat{g}_{1}=\Lambda^{-1}\hat{f}_{1}$. We will let
$\hat{\sigma}(\hat{s})=\Lambda^{-1}\sigma(\hat{s})=s\hat{g}_{0}+\bar{s}\hat{g}_{1}$,
$s,\bar{s}\in{{\mathbb{R}}}$.
Now, $\Lambda^{-1}\sigma\cdot\Lambda^{-1}\hat{f}_{a}=\sigma\cdot\hat{f}_{a}$
and $\acute{\rho}_{\hat{\sigma}}=\acute{\rho}_{\sigma}$. See Remark A.4.
Hence,
$\displaystyle
f^{\\{\hat{g}_{0},\hat{g}_{1}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))(\Lambda^{-1}(\vec{x}-\vec{a}))$
$\displaystyle:=\int_{\hat{s}\in{{\mathbb{R}}}^{2}}\frac{e^{-i[\hat{\sigma}(\hat{s})\cdot(\hat{H}(\rho_{n})\hat{g}_{0}+\hat{P}(\rho_{n})\hat{g}_{1})]}}{2\pi}f(\vec{y}+\hat{\sigma}(\hat{s}))|\acute{\rho}_{\hat{\sigma}}|(\hat{s})\
d\hat{s}$
$\displaystyle=\int_{\hat{s}\in{{\mathbb{R}}}^{2}}\frac{e^{-i[\sigma(\hat{s})\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{2\pi}f(\vec{y}+\Lambda^{-1}\sigma(\hat{s}))|\acute{\rho}_{\sigma}|(\hat{s})\
d\hat{s}$
$\displaystyle=\int_{\hat{s}\in{{\mathbb{R}}}^{2}}\frac{e^{-i[\sigma(\hat{s})\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{2\pi}f(\Lambda^{-1}(\vec{x}+\sigma(\hat{s})-\vec{a}))|\acute{\rho}_{\sigma}|(\hat{s})\
d\hat{s}$
$\displaystyle=f(\Lambda^{-1}(\cdot-\vec{a}))^{\\{\hat{f}_{0},\hat{f}_{1}\\}}(\hat{H}(\rho_{n}),\hat{P}(\rho_{n}))(\vec{x}).$
Therefore, Equation (4.1) is equal to
$\displaystyle\Big{(}S,$ $\displaystyle
f(\Lambda^{-1}(\cdot-\vec{a}))^{\\{\hat{f}_{0},\hat{f}_{1}\\}}A(\Lambda^{-1})_{\gamma}^{\alpha}A(\tilde{\Lambda})_{\delta}^{\gamma}\cdot
g_{\beta}\otimes{{\rm{ad}}}(\rho_{n}(F^{\delta}))\rho_{n}(E^{\beta}),\\{\tilde{\Lambda}e_{a}\\}_{a=0}^{3}\Big{)}$
$\displaystyle=A(\Lambda^{-1})_{\gamma}^{\alpha}\phi^{\gamma,n}[f(\Lambda^{-1}(\cdot-\vec{a}))]\Big{(}S,g_{\beta}\otimes\rho_{n}(E^{\beta}),\\{\tilde{\Lambda}e_{a}\\}_{a=0}^{3}\Big{)}.$
To prove the second statement, note that $U(\vec{a},\Lambda)1=1$. Let
$\hat{g}_{a}=\Lambda e_{a}$, $a=0,\cdots,3$. Then,
$\displaystyle U(\vec{a},\Lambda)$
$\displaystyle\phi^{\alpha,n}(f)U(\vec{a},\Lambda)^{-1}1=U(\vec{a},\Lambda)\phi^{\alpha,n}(f)1$
$\displaystyle=$ $\displaystyle
U(\vec{a},\Lambda)\left(S_{0},f^{\\{e_{0},e_{1}\\}}\otimes\rho_{n}(F^{\alpha}),\\{e_{a}\\}_{a=0}^{3}\right)$
$\displaystyle=$ $\displaystyle\left(\Lambda
S_{0}+\vec{a},e^{-i[\vec{a}\cdot(\hat{H}(\rho_{n})\hat{g}_{0}+\hat{P}(\rho_{n})\hat{g}_{1})]}f^{\\{e_{0},e_{1}\\}}(\Lambda^{-1}(\cdot-\vec{a}))\otimes\rho_{n}(F^{\alpha}),\\{\hat{g}_{a}\\}_{a=0}^{3}\right),$
by definitions.
###### Remark 4.2
Relative to $\Lambda S_{0}$ and for
$\Lambda\vec{a}=a^{0}\hat{g}_{0}+a^{1}\hat{g}_{1}$, we see that multiplication
by
$e^{-i[\Lambda\vec{a}\cdot(\hat{H}(\rho_{n})\hat{g}_{0}+\hat{P}(\rho_{n})\hat{g}_{1})]}=e^{i[a^{0}\hat{H}(\rho_{n})-a^{1}\hat{P}(\rho_{n})]}$,
corresponds to a shift
$f(\cdot)^{\\{\hat{g}_{0},\hat{g}_{1}\\}}\mapsto
f(\cdot-\Lambda\vec{a})^{\\{\hat{g}_{0},\hat{g}_{1}\\}},$
when we take the Fourier Transform.
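This is the usual shift theorem for the Fourier transform. A one-dimensional numerical sketch, again under the assumed unitary convention, with an arbitrary Schwartz function and shift chosen by us for illustration:

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    return np.exp(-x ** 2 / 2)          # a convenient Schwartz function

def ft(g, q):
    # unitary Fourier transform of g at frequency q (assumed convention)
    re, _ = quad(lambda x: g(x) * np.cos(q * x), -np.inf, np.inf)
    im, _ = quad(lambda x: -g(x) * np.sin(q * x), -np.inf, np.inf)
    return (re + 1j * im) / np.sqrt(2 * np.pi)

a = 0.9                                  # the shift
for q in (0.0, 0.5, 1.7):
    shifted = ft(lambda x: f(x - a), q)  # transform of the translated function
    phased = np.exp(-1j * q * a) * ft(f, q)
    assert abs(shifted - phased) < 1e-6  # phase multiplication <-> translation
```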
## 5 Causality
We now come to the final Wightman axiom. Recall that we have a countable
set $\\{\hat{H}(\rho_{n}),\hat{P}(\rho_{n})\\}_{n\in\mathbb{N}}$ to be defined
later in Definition 7.13. We will see in this section that to satisfy local
commutativity, we must have $\hat{H}(\rho_{n})^{2}-\hat{P}(\rho_{n})^{2}>0$,
for each $n$.
###### Definition 5.1
(Space-like separation)
Let $f,g\in\mathscr{P}$. The support of $f$, denoted ${\rm supp}\ f$, is the
closed set obtained by taking the complement of the largest open set in which
$f$ vanishes. We say that ${\rm supp}\ f$ and ${\rm supp}\ g$ are space-like
(time-like) separated if $f(\vec{x})g(\vec{y})=0$ for all pairs of points
$\vec{x}=(x^{0},x),\ \vec{y}=(y^{0},y)\in{{\mathbb{R}}}^{4}$ such that
$(\vec{x}-\vec{y})\cdot(\vec{x}-\vec{y})=-(x^{0}-y^{0})^{2}+\sum_{i=1}^{3}(x^{i}-y^{i})^{2}\leq\
(\geq)\ 0.$
###### Remark 5.2
* •
Given a connected space-like rectangular surface $S$ contained in a plane, any
two distinct points $\vec{x},\vec{y}\in S$ are actually space-like separated.
By Definition 5.1, if $f$ and $g$ have supports which are space-like
separated, then we must have $f(\vec{x})g(\vec{x})=0$ for any $\vec{x}\in S$.
* •
Note that on a time-like rectangular surface, two distinct points in it may
not be time-like separated.
###### Notation 5.3
For this section, we only consider a space-like surface $S$ contained in a
plane. It is equipped with a Minkowski frame $\\{\hat{f}_{a}\\}_{a=0}^{3}$,
which will be assumed throughout. To ease our notations, we will drop this
Minkowski frame from our notation. This means
$(S,f_{\alpha}\otimes\rho(E^{\alpha}))\equiv(S,f_{\alpha}\otimes\rho(E^{\alpha}),\\{\hat{f}_{a}\\}_{a=0}^{3}).$
###### Definition 5.4
Write the commutators of $f$ and $g$ in $\mathscr{P}$ as
$\displaystyle\lceil\phi^{\alpha,n}(f),\phi^{\beta,n}(g)\rceil$
$\displaystyle:=\phi^{\alpha,n}(f)\phi^{\beta,n}(g)-\phi^{\alpha,n}(g)\phi^{\beta,n}(f),$
$\displaystyle\lceil\phi^{\alpha,n}(f)^{\ast},\phi^{\beta,n}(g)^{\ast}\rceil$
$\displaystyle:=\phi^{\alpha,n}(f)^{\ast}\phi^{\beta,n}(g)^{\ast}-\phi^{\alpha,n}(g)^{\ast}\phi^{\beta,n}(f)^{\ast}.$
###### Remark 5.5
We are using the version of Wightman’s last axiom taken from [4], and not from
[3].
###### Lemma 5.6
Let $S$ be a space-like surface contained in a plane. Then, we have
$\displaystyle\lceil\phi^{\alpha,n}(f),\phi^{\beta,n}(g)\rceil\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right)=0,$
for any $\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right)\in\mathscr{D}$,
and $\lceil\phi^{\alpha,n}(f),\phi^{\beta,n}(g)\rceil 1=0$.
We also have
$\displaystyle\lceil\phi^{\alpha,n}(f)^{\ast},\phi^{\beta,n}(g)^{\ast}\rceil$
$\displaystyle
1=0,\quad\lceil\phi^{\alpha,n}(f)^{\ast},\phi^{\beta,n}(g)^{\ast}\rceil\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right)=0,$
for any $\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right)\in\mathscr{D}$.
Proof. The first two commutator relations follow from Definitions 3.7, 3.12,
and that $f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}\cdot
g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}=g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}\cdot
f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}$. By taking the adjoint, the last two follow
immediately.
###### Remark 5.7
These commutation relations hold, regardless of whether the supports of $f$
and $g$ are space-like separated or not.
### 5.1 CPT Theorem
###### Definition 5.8
Define the anti-commutators and commutators of $f$ and $g$ in $\mathscr{P}$,
as
$\displaystyle\lfloor\phi^{\alpha,n}(f),\phi^{\beta,n}(g)^{\ast}\rfloor_{\pm}$
$\displaystyle:=\phi^{\alpha,n}(f)\phi^{\beta,n}(g)^{\ast}\pm\phi^{\alpha,n}(g)\phi^{\beta,n}(f)^{\ast},$
$\displaystyle\lfloor\phi^{\alpha,n}(f)^{\ast},\phi^{\beta,n}(g)\rfloor_{\pm}$
$\displaystyle:=\phi^{\alpha,n}(f)^{\ast}\phi^{\beta,n}(g)\pm\phi^{\alpha,n}(g)^{\ast}\phi^{\beta,n}(f).$
###### Lemma 5.9
Recall ${{\rm{ad}}}(\rho(E^{\alpha}))$ refers to its adjoint representation on
$\rho(\mathfrak{g})$. Suppose the Minkowski frame on $S$ is
$\hat{f}_{a}=\Lambda e_{a}$, $a=0,1,2,3$. Write
$\mathcal{C}=\\{\hat{f}_{0},\hat{f}_{1}\\}$.
For any $\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right)\in\mathscr{D}$,
we have
$\displaystyle\lfloor\phi^{\alpha,n}$
$\displaystyle(f),\phi^{\beta,n}(g)^{\ast}\rfloor_{\pm}\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right)$
$\displaystyle=-$ $\displaystyle
A(\Lambda)_{\delta}^{\alpha}\overline{A(\Lambda)_{\mu}^{\beta}}\left(S,{\rm
B}^{\pm}[f^{\mathcal{C}}\cdot\overline{g^{\mathcal{C}}}\pm
g^{\mathcal{C}}\cdot\overline{f^{\mathcal{C}}}]\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\delta})){{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\right)$
$\displaystyle+\left\langle(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})),\phi^{\beta,n}(g)1\right\rangle\
\phi^{\alpha,n}(f)1$
$\displaystyle\pm\left\langle(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})),\phi^{\beta,n}(f)1\right\rangle\
\phi^{\alpha,n}(g)1,$ (5.1)
whereby ${\rm B}^{+}={\rm Re}$ and ${\rm B}^{-}={\rm Im}$ for anti-commutation
and commutation relations respectively.
And
$\displaystyle\lfloor\phi^{\alpha,n}$
$\displaystyle(f)^{\ast},\phi^{\beta,n}(g)\rfloor_{\pm}\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right)$
$\displaystyle=-$
$\displaystyle\overline{A(\Lambda)_{\delta}^{\alpha}}A(\Lambda)_{\mu}^{\beta}\left(S,{\rm
B}^{\pm}[\overline{f^{\mathcal{C}}}\cdot
g^{\mathcal{C}}\pm\overline{g^{\mathcal{C}}}\cdot f^{\mathcal{C}}]\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\delta})){{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\right)$
$\displaystyle+\left\langle\left(S,g^{\mathcal{C}}A(\Lambda)_{\mu}^{\beta}\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\right),\phi^{\alpha,n}(f)1\right\rangle
1$
$\displaystyle\pm\left\langle\left(S,f^{\mathcal{C}}A(\Lambda)_{\mu}^{\beta}\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\right),\phi^{\alpha,n}(g)1\right\rangle
1,$ (5.2)
whereby ${\rm B}^{+}={\rm Re}$ and ${\rm B}^{-}={\rm Im}$ for anti-commutation
and commutation relations respectively.
Proof. From Definitions 3.7, 3.12 and 3.14, we see that
$\displaystyle\phi^{\alpha,n}(f)\phi^{\beta,n}(g)^{\ast}$
$\displaystyle(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma}))$ $\displaystyle=$
$\displaystyle\phi^{\alpha,n}(f)\overline{A(\Lambda)_{\mu}^{\beta}}\bigg{[}\left(S,-\overline{g^{\mathcal{C}}}\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\right)$
$\displaystyle+\left\langle\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),\phi^{\beta,n}(g)1\right\rangle
1\bigg{]}$ $\displaystyle=$
$\displaystyle-A(\Lambda)_{\delta}^{\alpha}\overline{A(\Lambda)_{\mu}^{\beta}}\left(S,f^{\mathcal{C}}\cdot\overline{g^{\mathcal{C}}}\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\delta})){{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\right)$
$\displaystyle+\left\langle\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),\phi^{\beta,n}(g)1\right\rangle\
\phi^{\alpha,n}(f)1.$
Similarly,
$\displaystyle\phi^{\alpha,n}(g)\phi^{\beta,n}(f)^{\ast}$
$\displaystyle\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right)$
$\displaystyle=$
$\displaystyle-A(\Lambda)_{\delta}^{\alpha}\overline{A(\Lambda)_{\mu}^{\beta}}\left(S,g^{\mathcal{C}}\cdot\overline{f^{\mathcal{C}}}\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\delta})){{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\right)$
$\displaystyle+\left\langle\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),\phi^{\beta,n}(f)1\right\rangle\
\phi^{\alpha,n}(g)1.$
Take the sum or difference, and we will obtain
$\displaystyle-A$
$\displaystyle(\Lambda)_{\delta}^{\alpha}\overline{A(\Lambda)_{\mu}^{\beta}}\Big{(}S,[f^{\mathcal{C}}\cdot\overline{g^{\mathcal{C}}}\pm
g^{\mathcal{C}}\cdot\overline{f^{\mathcal{C}}}]\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\delta})){{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\Big{)}$
$\displaystyle+$
$\displaystyle\left\langle(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})),\phi^{\beta,n}(g)1\right\rangle\phi^{\alpha,n}(f)1\pm\left\langle(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})),\phi^{\beta,n}(f)1\right\rangle\phi^{\alpha,n}(g)1.$
Since $f^{\mathcal{C}}\cdot\overline{g^{\mathcal{C}}}\pm
g^{\mathcal{C}}\cdot\overline{f^{\mathcal{C}}}$ is real and purely imaginary
respectively, this proves Equation (5.1). The proof for Equation (5.2) is
similar, hence omitted.
###### Remark 5.10
Without any loss of generality, we assume that the time-like plane $S^{\flat}$,
spanned by $\\{\hat{f}_{0},\hat{f}_{1}\\}$, is parametrized by
$\vec{y}(\hat{s})\equiv\vec{y}(s,\bar{s}):=s\hat{f}_{0}+\bar{s}\hat{f}_{1}$,
$s,\bar{s}\in{{\mathbb{R}}}$, whereby $\hat{f}_{0}\cdot\hat{f}_{0}=-1$,
$\hat{f}_{1}\cdot\hat{f}_{1}=1$, $\hat{f}_{0}\cdot\hat{f}_{1}=0$.
Suppose we now assume that ${\rm supp}\ f$ and ${\rm supp}\ g$ are disjoint
compact sets. Write $\hat{H}=\hat{H}(\rho_{n})$, $\hat{P}=\hat{P}(\rho_{n})$.
By definition, for any $\vec{x}\in S$,
$\displaystyle g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}(\hat{H},\hat{P})(\vec{x})=$
$\displaystyle\int_{\hat{s}\in{{\mathbb{R}}}^{2}}\frac{e^{-i[\vec{y}(\hat{s})\cdot(\hat{H}\hat{f}_{0}+\hat{P}\hat{f}_{1})]}}{2\pi}g(\vec{x}+\vec{y}(\hat{s}))|\acute{\rho}_{\vec{y}}|(\hat{s})\
d\hat{s},$
$\displaystyle\overline{f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}}(\hat{H},\hat{P})(\vec{x})=$
$\displaystyle\int_{\hat{t}\in{{\mathbb{R}}}^{2}}\frac{e^{i[\vec{y}(\hat{t})\cdot(\hat{H}\hat{f}_{0}+\hat{P}\hat{f}_{1})]}}{2\pi}\bar{f}(\vec{x}+\vec{y}(\hat{t}))|\acute{\rho}_{\vec{y}}|(\hat{t})\
d\hat{t}.$
Write $g_{\vec{x}}(\cdot)=g(\vec{x}+\cdot)$,
$\bar{f}_{\vec{x}}(\cdot)=\bar{f}(\vec{x}+\cdot)$. Thus,
$\displaystyle\Big{[}g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}$
$\displaystyle\cdot\overline{f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}}\Big{]}(\hat{H},\hat{P})(\vec{x})$
$\displaystyle=\int_{\hat{s},\hat{t}\in{{\mathbb{R}}}^{2}}\frac{e^{-i[\vec{y}(\hat{s})\cdot(\hat{H}\hat{f}_{0}+\hat{P}\hat{f}_{1})]}}{(2\pi)^{2}}g_{\vec{x}}(\vec{y}(\hat{s})+\vec{y}(\hat{t}))\bar{f}_{\vec{x}}(\vec{y}(\hat{t}))|\acute{\rho}_{\vec{y}}|(\hat{s})|\acute{\rho}_{\vec{y}}|(\hat{t})\
d\hat{s}d\hat{t}$ $\displaystyle=\int_{\hat{t}\in{{\mathbb{R}}}^{2},\hat{s}\in
D}\frac{e^{-i[\vec{y}(\hat{s})\cdot(\hat{H}\hat{f}_{0}+\hat{P}\hat{f}_{1})]}}{(2\pi)^{2}}g_{\vec{x}}(\vec{y}(\hat{s})+\vec{y}(\hat{t}))\bar{f}_{\vec{x}}(\vec{y}(\hat{t}))|\acute{\rho}_{\vec{y}}|(\hat{s})|\acute{\rho}_{\vec{y}}|(\hat{t})\
d\hat{s}d\hat{t}.$
Similarly,
$\displaystyle\Big{[}f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}$
$\displaystyle\cdot\overline{g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}}\Big{]}(\hat{H},\hat{P})(\vec{x})$
$\displaystyle=\int_{\hat{t}\in{{\mathbb{R}}}^{2},\hat{s}\in-D}\frac{e^{-i[\vec{y}(\hat{s})\cdot(\hat{H}\hat{f}_{0}+\hat{P}\hat{f}_{1})]}}{(2\pi)^{2}}f_{\vec{x}}(\vec{y}(\hat{s})+\vec{y}(\hat{t}))\bar{g}_{\vec{x}}(\vec{y}(\hat{t}))|\acute{\rho}_{\vec{y}}|(\hat{s})|\acute{\rho}_{\vec{y}}|(\hat{t})\
d\hat{s}d\hat{t}.$
From both expressions, we see that the integrals depend on the relative
displacement between pairs of positions in their respective supports. When the
region of integration is on $D$, it is clear that we are referring to
$g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}\cdot\overline{f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}}$;
when the region of integration is on $-D$, then we are referring to
$f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}\cdot\overline{g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}}$.
Note that the vectors in the set $\vec{y}(-D)$ point in the opposite direction
to the vectors in $\vec{y}(D)$.
Thus, by reversing the time direction and taking space inversion (parity), we
obtain its complex conjugate. The CPT theorem is implied by these two
expressions. See Remark 5.23. Note that here, ‘C’ refers to complex
conjugation, not charge conjugation.
In general, the anti-commutators and commutators will not be equal to zero,
even when the supports are space-like separated.
###### Lemma 5.11
Let $f,g\in\mathscr{P}$ for which their supports are space-like separated, and
let $S$ be a space-like plane, equipped with a Minkowski frame
$\\{\hat{f}_{a}\\}_{a=0}^{3}$. Let $S^{\flat}$ be the span of
$\\{\hat{f}_{0},\hat{f}_{1}\\}$.
Fix a $\vec{x}\in S$. Write $g_{\vec{x}}(\cdot)=g(\vec{x}+\cdot)$,
$\bar{f}_{\vec{x}}(\cdot)=\bar{f}(\vec{x}+\cdot)$. Suppose
$\hat{H}(\rho_{n})^{2}-\hat{P}(\rho_{n})^{2}>0$.
If ${\rm supp}\ f\cap(\vec{x}+S^{\flat})=\emptyset$ or ${\rm supp}\
g\cap(\vec{x}+S^{\flat})=\emptyset$, then for any $u,v\in S^{\flat}$, we have
$\frac{e^{-i[(u-v)\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{(2\pi)^{2}}g_{\vec{x}}(u)\bar{f}_{\vec{x}}(v)\mp\frac{e^{-i[(v-u)\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{(2\pi)^{2}}f_{\vec{x}}(v)\bar{g}_{\vec{x}}(u)=0.$
(5.3)
Suppose both sets are non-empty. If $u-v$ is parallel to
$\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1}$, then the
commutation and anti-commutation relations in Equation (5.3) hold when
$f\cdot\bar{g}$ is real and imaginary respectively.
Proof. When one of the sets is empty, then
$f_{\vec{x}}(v)\cdot\bar{g}_{\vec{x}}(u)=0$, so clearly Equation (5.3) holds.
Now consider when both are non-empty. Since
$(u-v)\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})=c(\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1})\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})=0,$
we have
$e^{-i[(u-v)\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}=\cos\left[(u-v)\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})\right]=1.$
When $f_{\vec{x}}(v)\cdot\bar{g}_{\vec{x}}(u)$ is real, then the LHS of
Equation (5.3) becomes
$\left[\frac{e^{-i[(u-v)\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{(2\pi)^{2}}-\frac{e^{-i[(v-u)\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{(2\pi)^{2}}\right]f_{\vec{x}}(v)\bar{g}_{\vec{x}}(u),$
which is zero.
When $f_{\vec{x}}(v)\cdot\bar{g}_{\vec{x}}(u)$ is purely imaginary, then the
LHS of Equation (5.3) becomes
$\left[-\frac{e^{-i[(u-v)\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{(2\pi)^{2}}+\frac{e^{-i[(v-u)\cdot(\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1})]}}{(2\pi)^{2}}\right]f_{\vec{x}}(v)\bar{g}_{\vec{x}}(u),$
which is zero.
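The geometric fact underlying this proof is that $\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1}$ is Minkowski-orthogonal to $\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1}$, and is space-like precisely when $\hat{H}(\rho_{n})^{2}-\hat{P}(\rho_{n})^{2}>0$. A minimal numerical sketch, with an illustrative boosted Minkowski frame and sample values of $\hat{H},\hat{P}$ chosen by us:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)

def mdot(u, v):
    return u @ eta @ v

# a boost with rapidity chi in the x^0-x^1 plane gives a Minkowski frame
chi = 0.8
f0 = np.array([np.cosh(chi), np.sinh(chi), 0.0, 0.0])   # f0.f0 = -1
f1 = np.array([np.sinh(chi), np.cosh(chi), 0.0, 0.0])   # f1.f1 = +1

H, P = 2.0, 1.2   # sample eigenvalues with H^2 - P^2 > 0 (positive mass gap)
u_minus_v = P * f0 + H * f1   # the direction appearing in Lemma 5.11

# the phase (u - v).(H f0 + P f1) vanishes identically
assert abs(mdot(u_minus_v, H * f0 + P * f1)) < 1e-12

# and u - v is space-like precisely because H^2 - P^2 > 0
assert abs(mdot(u_minus_v, u_minus_v) - (H ** 2 - P ** 2)) < 1e-12
```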
###### Remark 5.12
Suppose both ${\rm supp}\ f\cap(\vec{x}+S^{\flat})$ and ${\rm supp}\
g\cap(\vec{x}+S^{\flat})$ are non-empty. The lemma says that there exists a
space-like line in $S^{\flat}$, such that the LHS of Equation (5.3) is zero.
If $\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1}$ is space-like
or null, then the LHS of Equation (5.3) cannot be zero, on any space-like line
in $S^{\flat}$. This can be inferred from Lemma B.3.
Thus, it is essential that a positive mass gap exists in
$\mathscr{H}(\rho_{n})$, for the lemma to hold true.
Consider a bilinear map that sends
$\displaystyle(f,g)$ $\displaystyle\in\mathscr{P}\times\mathscr{P}$
$\displaystyle\longmapsto$
$\displaystyle\left\langle\phi^{\alpha,n}(f)\phi^{\beta,n}(g)^{\ast}\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),(\tilde{S},\tilde{h}_{\gamma}\otimes\rho_{n}(E^{\gamma}))\right\rangle$
$\displaystyle-\left\langle\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),\phi^{\beta,n}(g)1\right\rangle\left\langle\phi^{\alpha,n}(f)1,(\tilde{S},\tilde{h}_{\gamma}\otimes\rho_{n}(E^{\gamma}))\right\rangle.$
From Proposition 3.22, we have a tempered distribution $W(\vec{x},\vec{y})$
such that
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}$
$\displaystyle\int_{\vec{y}\in{{\mathbb{R}}}^{4}}W(\vec{x},\vec{y})f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\
d\vec{x}d\vec{y}$ $\displaystyle=$
$\displaystyle\left\langle\phi^{\alpha,n}(f)\phi^{\beta,n}(g)^{\ast}\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),(\tilde{S},\tilde{h}_{\gamma}\otimes\rho_{n}(E^{\gamma}))\right\rangle$
$\displaystyle-\left\langle\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),\phi^{\beta,n}(g)1\right\rangle\left\langle\phi^{\alpha,n}(f)1,(\tilde{S},\tilde{h}_{\gamma}\otimes\rho_{n}(E^{\gamma}))\right\rangle.$
See Remark 5.14.
Indeed, writing $\vec{x}=\sum_{a=0}^{3}x^{a}\hat{f}_{a}$ and
$\vec{y}=\sum_{a=0}^{3}y^{a}\hat{f}_{a}$, we have that
$\displaystyle\left(x^{0}\hat{f}_{0}+x^{1}\hat{f}_{1},y^{0}\hat{f}_{0}+y^{1}\hat{f}_{1}\right)\longmapsto$
$\displaystyle\int_{(x^{2},x^{3})\in{{\mathbb{R}}}^{2}}\int_{(y^{2},y^{3})\in{{\mathbb{R}}}^{2}}W(\vec{x},\vec{y})f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\
dx^{2}dx^{3}dy^{2}dy^{3},$ (5.4)
defines a continuous function on $S^{\flat}\times S^{\flat}$.
###### Notation 5.13
Suppose $f\cdot\bar{g}$ is real. Then we will define tempered distribution
${\rm Re}\ W$, such that
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}$
$\displaystyle{\rm Re}\
W(\vec{x},\vec{y})\left[f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\right]\
d\vec{x}d\vec{y}$
$\displaystyle:=\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}W(\vec{x},\vec{y})\left[f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\right]\
d\vec{x}d\vec{y}.$
When $f\cdot\bar{g}$ is purely imaginary, we will define tempered distribution
${\rm Im}\ W$, such that
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}$
$\displaystyle{\rm Im}\
W(\vec{x},\vec{y})\left[f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\right]\
d\vec{x}d\vec{y}$
$\displaystyle:=\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}W(\vec{x},\vec{y})\left[f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\right]\
d\vec{x}d\vec{y}.$
###### Remark 5.14
Suppose we write $f=\underline{f}+i\overline{f}$,
$g=\underline{g}+i\overline{g}$, whereby $\underline{f}={\rm Re}\ f$,
$\overline{f}={\rm Im}\ f$, $\underline{g}={\rm Re}\ g$, $\overline{g}={\rm
Im}\ g$. From Notation 5.13, we understand
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}$
$\displaystyle\int_{\vec{y}\in{{\mathbb{R}}}^{4}}W(\vec{x},\vec{y})\left[f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\right]\
d\vec{x}d\vec{y}$ $\displaystyle=$
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}{\rm
Re}\
W(\vec{x},\vec{y})\left[\underline{f}(\vec{x})\underline{g}(\vec{y})+\overline{f}(\vec{x})\overline{g}(\vec{y})\right]\
d\vec{x}d\vec{y}$
$\displaystyle+i\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}{\rm
Im}\
W(\vec{x},\vec{y})\left[\overline{f}(\vec{x})\underline{g}(\vec{y})-\underline{f}(\vec{x})\overline{g}(\vec{y})\right]\
d\vec{x}d\vec{y}.$
###### Lemma 5.15
Fix a space-like plane $S$ and let the Minkowski frame on $S$ be
$\hat{f}_{a}=\Lambda e_{a}$, $a=0,1,2,3$, as defined in Definition 2.5.
Suppose $\hat{H}(\rho_{n})^{2}-\hat{P}(\rho_{n})^{2}>0$.
We have
$\displaystyle{\rm Re}\ W(\vec{x},\vec{y})-{\rm Re}\ W(\vec{y},\vec{x})$
$\displaystyle=0,$ $\displaystyle{\rm Im}\ W(\vec{x},\vec{y})+{\rm Im}\
W(\vec{y},\vec{x})$ $\displaystyle=0,$
provided $\vec{0}\neq\vec{x}-\vec{y}$ can be written as
$c_{1}(\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1})+\sum_{i=2}^{3}c_{i}\hat{f}_{i}$
for some constants $c_{i}$’s.
Proof. Choose any $f,g\in\mathscr{P}$ such that their compact supports are
space-like separated. Write $g_{\vec{x}}(\cdot)=g(\vec{x}+\cdot)$,
$\bar{f}_{\vec{x}}(\cdot)=\bar{f}(\vec{x}+\cdot)$, and
$\hat{H}=\hat{H}(\rho_{n})$, $\hat{P}=\hat{P}(\rho_{n})$.
From the proof of Lemma 5.9,
$\displaystyle\phi^{\alpha,n}(f)$
$\displaystyle\phi^{\beta,n}(g)^{\ast}\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right)-\left\langle\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),\phi^{\beta,n}(g)1\right\rangle\
\phi^{\alpha,n}(f)1$ $\displaystyle=$
$\displaystyle-A(\Lambda)_{\delta}^{\alpha}\overline{A(\Lambda)_{\mu}^{\beta}}\Big{(}S,[f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}\cdot\overline{g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}}]\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\delta})){{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\Big{)}.$
Refer to the calculations in Remark 5.10. If we swap $f$ with $g$ in the above
expression and take the sum or difference, we will have
$-A(\Lambda)_{\delta}^{\alpha}\overline{A(\Lambda)_{\mu}^{\beta}}\Big{(}S,A^{\pm}\cdot
h_{\gamma}\otimes{{\rm{ad}}}(\rho_{n}(F^{\delta})){{\rm{ad}}}(\rho_{n}(F^{\mu}))\rho_{n}(E^{\gamma})\Big{)},$
whereby
$\displaystyle A^{\pm}(\vec{z})=$
$\displaystyle\int_{\hat{s},\hat{t}\in{{\mathbb{R}}}^{2}}\frac{e^{-i[(\vec{y}(\hat{s})-\vec{y}(\hat{t}))\cdot(\hat{H}\hat{f}_{0}+\hat{P}\hat{f}_{1})]}}{(2\pi)^{2}}f_{\vec{z}}(\vec{y}(\hat{s}))\bar{g}_{\vec{z}}(\vec{y}(\hat{t}))|\acute{\rho}_{\vec{y}}|(\hat{s})|\acute{\rho}_{\vec{y}}|(\hat{t})\
d\hat{s}d\hat{t}$
$\displaystyle\pm\int_{\hat{s},\hat{t}\in{{\mathbb{R}}}^{2}}\frac{e^{-i[(\vec{y}(\hat{t})-\vec{y}(\hat{s}))\cdot(\hat{H}\hat{f}_{0}+\hat{P}\hat{f}_{1})]}}{(2\pi)^{2}}g_{\vec{z}}(\vec{y}(\hat{t}))\bar{f}_{\vec{z}}(\vec{y}(\hat{s}))|\acute{\rho}_{\vec{y}}|(\hat{s})|\acute{\rho}_{\vec{y}}|(\hat{t})\
d\hat{s}d\hat{t}.$
Since their supports are disjoint, by definition of $W(\vec{y},\vec{x})$, we
note that swapping the arguments $\vec{x}$ and $\vec{y}$, is equivalent to
swapping $f$ and $g$, i.e. $f(\vec{x})\bar{g}(\vec{y})\mapsto
g(\vec{y})\bar{f}(\vec{x})$ in their respective integrals.
Consider when
$\vec{0}\neq\vec{x}-\vec{y}=c_{1}(\hat{P}\hat{f}_{0}+\hat{H}\hat{f}_{1})+\sum_{i=2}^{3}c_{i}\hat{f}_{i}$,
whereby not all the $c_{i}$’s are zero. By Lemma 5.11, we see that if $c_{2}$
or $c_{3}$ is non-zero, then we must have
$W(\vec{x},\vec{y})=W(\vec{y},\vec{x})=0$.
Consider when $c_{1}\neq 0$ and $f\bar{g}$ is real. By Equation (5.3), we have
that
$\displaystyle{\rm Re}\ W(\vec{x},\vec{y})-{\rm Re}\ W(\vec{y},\vec{x})=0,$
for the real part.
Now consider when $c_{1}\neq 0$ and $f\bar{g}$ is purely imaginary. By
Equation (5.3), we have that
$\displaystyle{\rm Im}\ W(\vec{x},\vec{y})+{\rm Im}\ W(\vec{y},\vec{x})=0,$
for the imaginary part.
###### Remark 5.16
In the Wightman’s axiom for local commutativity, it is required that
$\lfloor\phi^{\alpha,n}(f),\phi^{\beta,n}(g)^{\ast}\rfloor_{\pm}$ is zero when
their supports are space-like separated. This is not true in general, even if
$f\cdot\bar{g}$ is real or purely imaginary. The same remark applies to
$\lfloor\phi^{\alpha,n}(f)^{\ast},\phi^{\beta,n}(g)\rfloor_{\pm}$, as we will
see in Lemmas 5.18 and 5.20.
Indeed, when $(\vec{x}+S^{\flat})\cap\ {\rm supp}\ f$ or
$(\vec{x}+S^{\flat})\cap\ {\rm supp}\ g$ is empty for every $\vec{x}\in S$,
then we have
$\lfloor\phi^{\alpha,n}(f),\phi^{\beta,n}(g)^{\ast}\rfloor_{\pm}=\lfloor\phi^{\alpha,n}(f)^{\ast},\phi^{\beta,n}(g)\rfloor_{\pm}=0$,
acting on $(S,f_{\alpha}\otimes\rho_{n}(E^{\alpha}))$.
The above lemma says that if we consider the orthogonal projection $P_{0}$
onto the orthogonal complement of $\\{1\\}$, then the operators commute or
anti-commute on the hyperplane spanned by the space-like vectors
$\\{\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1},\hat{f}_{2},\hat{f}_{3}\\}$.
This fact will be used in the proof of Lemma 8.23.
Consider another bilinear map that sends
$\displaystyle(f,g)$
$\displaystyle\in\mathscr{P}\times\mathscr{P}\longmapsto\left\langle\phi^{\alpha,n}(f)^{\ast}\phi^{\beta,n}(g)\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),(\tilde{S},\tilde{h}_{\gamma}\otimes\rho_{n}(E^{\gamma}))\right\rangle.$
From Proposition 3.22, we have a tempered distribution
$\tilde{W}(\vec{x},\vec{y})$ such that
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}$
$\displaystyle\int_{\vec{y}\in{{\mathbb{R}}}^{4}}\tilde{W}(\vec{x},\vec{y})f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\
d\vec{x}d\vec{y}$ $\displaystyle=$
$\displaystyle\left\langle\phi^{\alpha,n}(f)^{\ast}\phi^{\beta,n}(g)\left(S,h_{\gamma}\otimes\rho_{n}(E^{\gamma})\right),(\tilde{S},\tilde{h}_{\gamma}\otimes\rho_{n}(E^{\gamma}))\right\rangle.$
Similar to Equation (5.4), we can define a continuous complex-valued function
on $S^{\flat}\times S^{\flat}$ from it.
###### Remark 5.17
Suppose we write $f=\underline{f}+i\overline{f}$,
$g=\underline{g}+i\overline{g}$, whereby $\underline{f}={\rm Re}\ f$,
$\overline{f}={\rm Im}\ f$, $\underline{g}={\rm Re}\ g$, $\overline{g}={\rm
Im}\ g$. Define tempered distributions ${\rm Re}\ \tilde{W}$ and ${\rm Im}\
\tilde{W}$, similar to how we defined ${\rm Re}\ W$ and ${\rm Im}\ W$ in
Notation 5.13. We understand
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}$
$\displaystyle\int_{\vec{y}\in{{\mathbb{R}}}^{4}}\tilde{W}(\vec{x},\vec{y})\left[f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\right]\
d\vec{x}d\vec{y}$ $\displaystyle=$
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}{\rm
Re}\
\tilde{W}(\vec{x},\vec{y})\left[\underline{f}(\vec{x})\underline{g}(\vec{y})+\overline{f}(\vec{x})\overline{g}(\vec{y})\right]\
d\vec{x}d\vec{y}$
$\displaystyle+i\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}{\rm
Im}\
\tilde{W}(\vec{x},\vec{y})\left[\underline{f}(\vec{x})\overline{g}(\vec{y})-\overline{f}(\vec{x})\underline{g}(\vec{y})\right]\
d\vec{x}d\vec{y}.$
###### Lemma 5.18
Fix a space-like plane $S$ and let $\\{\hat{f}_{a}\\}_{a=0}^{3}$ be a basis as
defined in Definition 2.5. Suppose
$\hat{H}(\rho_{n})^{2}-\hat{P}(\rho_{n})^{2}>0$.
We have
$\displaystyle{\rm Re}\ \tilde{W}(\vec{x},\vec{y})-{\rm Re}\
\tilde{W}(\vec{y},\vec{x})$ $\displaystyle=0,$ $\displaystyle{\rm Im}\
\tilde{W}(\vec{x},\vec{y})+{\rm Im}\ \tilde{W}(\vec{y},\vec{x})$
$\displaystyle=0,$
provided $\vec{0}\neq\vec{x}-\vec{y}$ can be written as
$c_{1}(\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1})+\sum_{i=2}^{3}c_{i}\hat{f}_{i}$
for some constants $c_{i}$’s.
Proof. The proof is similar to that of Lemma 5.15, hence omitted.
Finally consider the following bilinear map that sends
$\displaystyle(f,g)$
$\displaystyle\in\mathscr{P}\times\mathscr{P}\longmapsto\left\langle\phi^{\alpha,n}(f)^{\ast}\phi^{\beta,n}(g)1,(\tilde{S},\tilde{h}_{\gamma}\otimes\rho_{n}(E^{\gamma}))\right\rangle.$
From Proposition 3.22, we have a tempered distribution
$\check{W}(\vec{x},\vec{y})$ such that
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}$
$\displaystyle\int_{\vec{y}\in{{\mathbb{R}}}^{4}}\check{W}(\vec{x},\vec{y})f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\
d\vec{x}d\vec{y}$ $\displaystyle=$
$\displaystyle\left\langle\phi^{\alpha,n}(f)^{\ast}\phi^{\beta,n}(g)1,(\tilde{S},\tilde{h}_{\gamma}\otimes\rho_{n}(E^{\gamma}))\right\rangle.$
Similar to Equation (5.4), we can obtain a continuous complex-valued function
defined on $S^{\flat}\times S^{\flat}$, from the integral.
###### Remark 5.19
Suppose we write $f=\underline{f}+i\overline{f}$,
$g=\underline{g}+i\overline{g}$, whereby $\underline{f}={\rm Re}\ f$,
$\overline{f}={\rm Im}\ f$, $\underline{g}={\rm Re}\ g$, $\overline{g}={\rm
Im}\ g$. Define tempered distributions ${\rm Re}\ \check{W}$ and ${\rm Im}\
\check{W}$, similar to how we defined ${\rm Re}\ W$ and ${\rm Im}\ W$ in
Notation 5.13. We understand
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}$
$\displaystyle\int_{\vec{y}\in{{\mathbb{R}}}^{4}}\check{W}(\vec{x},\vec{y})\left[f(\vec{x})\otimes_{{\mathbb{R}}}g(\vec{y})\right]\
d\vec{x}d\vec{y}$ $\displaystyle=$
$\displaystyle\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}{\rm
Re}\
\check{W}(\vec{x},\vec{y})\left[\underline{f}(\vec{x})\underline{g}(\vec{y})+\overline{f}(\vec{x})\overline{g}(\vec{y})\right]\
d\vec{x}d\vec{y}$
$\displaystyle+i\int_{\vec{x}\in{{\mathbb{R}}}^{4}}\int_{\vec{y}\in{{\mathbb{R}}}^{4}}{\rm
Im}\
\check{W}(\vec{x},\vec{y})\left[\underline{f}(\vec{x})\overline{g}(\vec{y})-\overline{f}(\vec{x})\underline{g}(\vec{y})\right]\
d\vec{x}d\vec{y}.$
###### Lemma 5.20
Fix a space-like plane $S$ and let $\\{\hat{f}_{a}\\}_{a=0}^{3}$ be a basis as
defined in Definition 2.5. Suppose
$\hat{H}(\rho_{n})^{2}-\hat{P}(\rho_{n})^{2}>0$.
We have
$\displaystyle{\rm Re}\ \check{W}(\vec{x},\vec{y})-{\rm Re}\
\check{W}(\vec{y},\vec{x})$ $\displaystyle=0,$ $\displaystyle{\rm Im}\
\check{W}(\vec{x},\vec{y})+{\rm Im}\ \check{W}(\vec{y},\vec{x})$
$\displaystyle=0,$
provided $\vec{0}\neq\vec{x}-\vec{y}$ can be written as
$c_{1}(\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1})+\sum_{i=2}^{3}c_{i}\hat{f}_{i}$
for some constants $c_{i}$’s.
Proof. Similar to the proof of Lemma 5.15, hence omitted.
###### Remark 5.21
Note that $\phi^{\alpha,n}(f)^{\ast}1=0$ by definition, thus we always have
$\phi^{\alpha,n}(f)\phi^{\beta,n}(g)^{\ast}1\pm\phi^{\alpha,n}(g)\phi^{\beta,n}(f)^{\ast}1=0.$
Lemmas 5.6, 5.15, 5.18 and 5.20 are collectively known as the local
commutation and anti-commutation relations.
Indeed, we see that the relations hold because
$\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1}$ is time-like. If
it were a space-like or null vector, then local commutativity would only hold
for space-like directions in the span of $\\{\hat{f}_{2},\hat{f}_{3}\\}$.
Now,
$\\{\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1},\hat{f}_{2},\hat{f}_{3}\\}$
is a set of space-like vectors. In fact, for any vector given by a linear
combination of these three vectors, there exists a sequence of translations
and Lorentz transformations that rotates it by $\pi$ radians in
${{\mathbb{R}}}^{4}$, which amounts to time and space inversion. Refer to
Lemma B.1 and Remark B.2.
###### Corollary 5.22
(CPT Theorem)
Fix a space-like plane $S$, with a time-like plane $S^{\flat}$ spanned by
$\\{\hat{f}_{0},\hat{f}_{1}\\}$, whereby $\\{\hat{f}_{a}\\}_{a=0}^{3}$ is a
basis defined in Definition 2.5. By abuse of notation, write
$W={\rm Re}\ W+\sqrt{-1}\ {\rm Im}\ W,\ \ \tilde{W}={\rm Re}\
\tilde{W}+\sqrt{-1}\ {\rm Im}\ \tilde{W},\ \ \check{W}={\rm Re}\
\check{W}+\sqrt{-1}\ {\rm Im}\ \check{W}.$
Suppose $\hat{H}(\rho_{n})^{2}-\hat{P}(\rho_{n})^{2}>0$. We have that
$\displaystyle W(\vec{y},\vec{x})$
$\displaystyle=\overline{W}(\vec{x},\vec{y}),\quad\tilde{W}(\vec{y},\vec{x})=\overline{\tilde{W}}(\vec{x},\vec{y}),\quad\check{W}(\vec{y},\vec{x})=\overline{\check{W}}(\vec{x},\vec{y}),$
provided
$\vec{0}\neq\vec{x}-\vec{y}=c_{1}(\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1})+\sum_{i=2}^{3}c_{i}\hat{f}_{i}$,
for constants $c_{i}$’s.
Proof. Immediate from the statements in Lemmas 5.15, 5.18 and 5.20.
###### Remark 5.23
In Lemma 8.13, we will see that we can write
$W(\vec{y},\vec{x})=\mathscr{W}(\vec{\xi})$, for some distribution
$\mathscr{W}$, and $\vec{\xi}=\vec{y}-\vec{x}$. The same remark applies to
$\tilde{W}(\vec{y},\vec{x})$ and $\check{W}(\vec{y},\vec{x})$.
Thus, the above corollary says that taking time inversion and space
inversion (parity) of a space-like vector, i.e. $\vec{\xi}\mapsto-\vec{\xi}$,
is equivalent to taking the complex conjugate. This is the content of the
CPT Theorem. Refer also to Remark 5.10. But this only applies if $\vec{\xi}$
lies in the hyperplane spanned by
$\\{\hat{P}(\rho_{n})\hat{f}_{0}+\hat{H}(\rho_{n})\hat{f}_{1},\hat{f}_{2},\hat{f}_{3}\\}$.
In [3], it is actually stated that
$\displaystyle\phi^{\alpha,n}(f)\phi^{\beta,n}(g)\pm\phi^{\beta,n}(g)\phi^{\alpha,n}(f)$
$\displaystyle=0,$
$\displaystyle\phi^{\alpha,n}(f)^{\ast}\phi^{\beta,n}(g)\pm\phi^{\beta,n}(g)\phi^{\alpha,n}(f)^{\ast}$
$\displaystyle=0,$
if their respective supports are space-like separated. But this is true if
and only if
${\rm supp}\ f^{\\{\hat{f}_{0},\hat{f}_{1}\\}}\cap{\rm supp}\
g^{\\{\hat{f}_{0},\hat{f}_{1}\\}}=\emptyset$.
It would be ideal if Lemmas 5.15, 5.18 and 5.20 held whenever
$\vec{0}\neq\vec{x}-\vec{y}$ is a space-like vector, but this cannot be true
in general. What we have shown instead is that the commutation and anti-
commutation relations hold in a three dimensional subspace whose non-zero
vectors are space-like.
We would suggest that the final Wightman’s axiom be replaced by the following.
###### Axiom 5.24
Let $H$ be a Hilbert space, as described in Wightman’s zeroth axiom. Let
$f,g\in\mathscr{P}$ and define tempered distributions
$\Omega,\widetilde{\Omega},\widehat{\Omega}:\mathscr{P}\times\mathscr{P}\rightarrow{{\mathbb{C}}}$,
such that
$\displaystyle\int_{{{\mathbb{R}}}^{4}\times{{\mathbb{R}}}^{4}}\Omega(\vec{x},\vec{y})f(\vec{x})g(\vec{y})\
d\vec{x}d\vec{y}$
$\displaystyle:=\left\langle\phi^{\alpha}(f)\phi^{\beta}(g)\Phi,\Psi\right\rangle,$
$\displaystyle\int_{{{\mathbb{R}}}^{4}\times{{\mathbb{R}}}^{4}}\widetilde{\Omega}(\vec{x},\vec{y})f(\vec{x})g(\vec{y})\
d\vec{x}d\vec{y}$
$\displaystyle:=\left\langle\phi^{\alpha}(f)\phi^{\beta}(g)^{\ast}\Phi,\Psi\right\rangle-\left\langle\Phi,\phi^{\beta}(g)1\right\rangle\left\langle\phi^{\alpha}(f)1,\Psi\right\rangle,$
$\displaystyle\int_{{{\mathbb{R}}}^{4}\times{{\mathbb{R}}}^{4}}\widehat{\Omega}(\vec{x},\vec{y})f(\vec{x})g(\vec{y})\
d\vec{x}d\vec{y}$
$\displaystyle:=\left\langle\phi^{\alpha}(f)^{\ast}\phi^{\beta}(g)\Phi,\Psi\right\rangle,$
whereby the field operators $\phi^{\alpha}(f)$, $\phi^{\beta}(g)$ act on
$\Phi,\Psi\in\mathscr{D}\subset H$.
Then, there exists a three-dimensional subspace $V$ in ${{\mathbb{R}}}^{4}$,
such that
$\displaystyle\Omega(\vec{x},\vec{y})=\pm\Omega(\vec{y},\vec{x}),\quad\widetilde{\Omega}(\vec{x},\vec{y})=\pm\widetilde{\Omega}(\vec{y},\vec{x}),\quad\widehat{\Omega}(\vec{x},\vec{y})=\pm\widehat{\Omega}(\vec{y},\vec{x}),$
if $\vec{0}\neq\vec{x}-\vec{y}\in V$. Here, any non-zero vector $v\in V$ is
space-like.
## 6 Yang-Mills path integrals
We have completed the description of Wightman’s axioms. We have seen that to
satisfy local commutativity, the vector
$\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1}$ must be time-like,
thus $\hat{H}(\rho_{n})^{2}-\hat{P}(\rho_{n})^{2}=m_{n}^{2}>0$, for each
$n\geq 1$. Local commutativity already implies the existence of a positive
mass gap in each component Hilbert space $\mathscr{H}(\rho_{n})$. Indeed, we
will see that $\hat{H}(\rho_{n})\hat{f}_{0}+\hat{P}(\rho_{n})\hat{f}_{1}$
defines a time-like vector $m_{n}\tilde{f}_{0}^{n}$,
$\tilde{f}_{0}^{n}\cdot\tilde{f}_{0}^{n}=-1$, in Definition 8.28, which is
crucial to prove the Clustering Theorem 8.34. (In subsection 8.2, we will see
that the mass gap $m_{n}$ is the generator for translation in the
$\tilde{f}_{0}^{n}$ direction.)
But how do we choose
$\\{(\hat{H}(\rho_{n}),\hat{P}(\rho_{n})):n\in\mathbb{N}\\}$? In the next
section, we will explain how we are going to compute the eigenvalues for the
Hamiltonian and momentum operator. To prove the existence of a positive mass
gap, we need to further show that $\inf_{n\in\mathbb{N}}m_{n}>0$.
To do the quantization, we need to turn to Yang-Mills path integrals; we now
summarize the construction done in [8] and [6].
### 6.1 Hermite polynomials
Consider the inner product space $\mathcal{S}_{\kappa}(\mathbb{R}^{4})$,
consisting of functions of the form $f\sqrt{\phi_{\kappa}}$, whereby
$\phi_{\kappa}(\vec{x})=\kappa^{4}e^{-\kappa^{2}|\vec{x}|^{2}/2}/(2\pi)^{2}$
is a Gaussian function and $f$ is a polynomial in
$\vec{x}=(x^{0},x^{1},x^{2},x^{3})\in{{\mathbb{R}}}^{4}$. Its inner product is
given by
$\left\langle
f\sqrt{\phi_{\kappa}},g\sqrt{\phi_{\kappa}}\right\rangle=\int_{{{\mathbb{R}}}^{4}}fg\cdot\phi_{\kappa}\
d\lambda,$
where $\lambda$ is the Lebesgue measure on $\mathbb{R}^{4}$.
Suppose $h_{i}/\sqrt{i!}$ is a normalized Hermite polynomial of degree $i$ on
${{\mathbb{R}}}$. Let $\overline{\mathcal{S}}_{\kappa}({{\mathbb{R}}}^{4})$ be
the smallest Hilbert space containing
$\mathcal{S}_{\kappa}({{\mathbb{R}}}^{4})$, in which
$\left\\{\frac{h_{i}(\kappa x^{0})h_{j}(\kappa x^{1})h_{k}(\kappa
x^{2})h_{l}(\kappa x^{3})}{\sqrt{i!j!k!l!}}\sqrt{\phi_{\kappa}(\vec{x})}\
\Big{|}\ \vec{x}=(x^{0},x^{1},x^{2},x^{3})\in{{\mathbb{R}}}^{4},\ i,j,k,l\geq
0\right\\}$
forms an orthonormal basis. Note its dependence on $\kappa>0$.
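As a quick numerical illustration (our own sketch, assuming the
probabilists' convention for the $h_{i}$ and reduced to one dimension with
$\kappa=1$), the orthonormality of the factors
$h_{i}(\kappa x)\sqrt{\phi_{\kappa}(x)}/\sqrt{i!}$ can be checked directly:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials He_i

# 1D check: int He_i(u) He_j(u) e^{-u^2/2}/sqrt(2*pi) du = i! * delta_ij,
# so He_i/sqrt(i!) times the square root of the Gaussian weight are orthonormal.
u = np.linspace(-12.0, 12.0, 20001)
du = u[1] - u[0]
w = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)   # 1D version of phi_kappa, kappa = 1

def he(i):
    c = np.zeros(i + 1)
    c[i] = 1.0
    return He.hermeval(u, c)

for i in range(5):
    for j in range(5):
        val = np.sum(he(i) * he(j) * w) * du / np.sqrt(factorial(i) * factorial(j))
        assert abs(val - (1.0 if i == j else 0.0)) < 1e-8
print("orthonormal")
```

The four-dimensional basis elements above are then products of four such
one-dimensional factors.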
Recall we chose the standard metric on $T{{\mathbb{R}}}^{4}$, thus the volume
form on ${{\mathbb{R}}}^{4}$ is given by $d\omega=dx^{0}\wedge dx^{1}\wedge
dx^{2}\wedge dx^{3}$. Using the Hodge star operator and the above volume form,
we will define an inner product on
$\mathcal{S}_{\kappa}({{\mathbb{R}}}^{4})\otimes\Lambda^{2}({{\mathbb{R}}}^{4})$
from Equation (1.2). Explicitly, it is given by
$\left\langle\sum_{0\leq a<b\leq 3}f_{ab}\otimes dx^{a}\wedge
dx^{b},\sum_{0\leq a<b\leq 3}\hat{f}_{ab}\otimes dx^{a}\wedge
dx^{b}\right\rangle=\sum_{0\leq a<b\leq 3}\left\langle
f_{ab},\hat{f}_{ab}\right\rangle.$ (6.1)
Write $\partial_{a}=\partial/\partial x^{a}$. Given
$f=\sum_{i=1}^{3}f_{i}\otimes
dx^{i}\in\mathcal{S}_{\kappa}({{\mathbb{R}}}^{4})\otimes\Lambda^{1}({{\mathbb{R}}}^{3})$,
the differential $df$ is given by
$df=\sum_{i=1}^{3}\partial_{0}f_{i}\otimes dx^{0}\wedge dx^{i}+\sum_{1\leq
i<j\leq 3}(\partial_{i}f_{j}-\partial_{j}f_{i})dx^{i}\wedge dx^{j}.$ (6.2)
###### Definition 6.1
Recall $\\{E^{\alpha}\in\mathfrak{g}:\ 1\leq\alpha\leq N\\}$ is an orthonormal
basis in $\mathfrak{g}$. Define
$c_{\gamma}^{\alpha\beta}=-{{\rm{Tr}}}\left[E^{\gamma}[E^{\alpha},E^{\beta}]\right],\
E^{\alpha},E^{\beta},E^{\gamma}\in\mathfrak{g}.$
The constants $c_{\gamma}^{\alpha\beta}$ are referred to as the structure constants.
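As a concrete illustration (our own sketch; taking
$\mathfrak{g}=\mathfrak{su}(2)$ is an assumption made only for this example),
the structure constants can be computed numerically from the definition,
using the basis $E^{\alpha}={\rm i}\sigma_{\alpha}/\sqrt{2}$, which is
orthonormal with respect to $\langle X,Y\rangle=-{{\rm{Tr}}}(XY)$:

```python
import numpy as np

# su(2) basis, orthonormal w.r.t. <X, Y> = -Tr(XY): E^a = i*sigma_a/sqrt(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
E = [1j * s / np.sqrt(2) for s in sigma]

def c(gamma, alpha, beta):
    """c_gamma^{alpha beta} = -Tr(E^gamma [E^alpha, E^beta])."""
    comm = E[alpha] @ E[beta] - E[beta] @ E[alpha]
    return -np.trace(E[gamma] @ comm).real

# Antisymmetric in (alpha, beta); for su(2) proportional to epsilon_{alpha beta gamma}
print(c(2, 0, 1), c(2, 1, 0), c(0, 0, 1))  # -sqrt(2), +sqrt(2), 0
```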
###### Proposition 6.2
Suppose $A=\sum_{\alpha=1}^{N}\sum_{i=1}^{3}a_{i,\alpha}\otimes dx^{i}\otimes |
These two authors contributed equally.
# Longitudinal elastic wave control by pre-deforming semi-linear materials
Dengke Guo School of Aerospace Engineering, Beijing Institute of Technology,
100081, Beijing, China Yi Chen School of Aerospace Engineering, Beijing
Institute of Technology, 100081, Beijing, China Zheng Chang College of
Science, China Agricultural University, Beijing 100083, China Gengkai Hu
<EMAIL_ADDRESS>School of Aerospace Engineering, Beijing Institute of
Technology, 100081, Beijing, China
###### Abstract
An incremental wave superimposed on a pre-deformed hyper-elastic material
perceives an elastic medium with the instantaneous modulus of the current
material. This offers a new, broadband route to control elastic waves by
purposely creating a finite deformation field. This study proves that the
governing equation of a semi-linear material under a symmetric
pre-deformation maintains its form invariance for longitudinal waves, so
longitudinal waves can be controlled by the transformation method without any
constraint condition on the principal stretches; this is not the case for
shear waves. Pre-deforming a semi-linear material therefore provides a
potential method for treating longitudinal and shear waves differently.
Examples of elastic wave control and band structure shift through
pre-deforming a semi-linear material are provided to illustrate this finding.
Finally, a one-dimensional spring lattice is proposed to mimic a semi-linear
material, and the dispersion relation for longitudinal waves in a sandwich
structure with such a spring lattice is shown to be invariant during
elongation, confirming the result found for a homogeneous semi-linear
material. These results may stimulate research on designing new hyper-elastic
microstructures as well as new devices based on pre-deformed hyper-elastic
materials.
## I INTRODUCTION
Wave steering by carefully distributing material in space is an interesting
topic for both scientific and engineering communities. A recent significant
progress along this line is the proposition of the transformation method
Pendry _et al._ (2006); Milton _et al._ (2006); Cummer and Schurig (2007);
Norris (2008), which in essence maps physical fields from one region to
another based on the form-invariance of the governing equation. Within the
framework of conventional elasticity, the elastic wave equation cannot retain
its form under a general mapping Milton _et al._ (2006); Norris and Shuvalov
(2011). An approximate method Chang _et al._ (2010) with conventional elastic
materials and an exact control strategy Brun _et al._ (2009) with
hypothesized elastic materials lacking the elastic minor symmetry have been
proposed to control elastic waves. Another interesting approach is to use
degenerate elastic materials: for example, pentamode materials are shown to
satisfy the form-invariance Norris (2008), but they control specifically the
pseudo pressure wave instead of a fully coupled elastic wave. The
transformation method based on pentamode materials generally requires a
symmetric or quasi-symmetric mapping Chen _et al._ (2016). Realizing
materials that meet the design requirements is a great challenge and needs
meticulous microstructure design Bückmann _et al._ (2014); Chen _et al._
(2017). In addition, such materials may suffer from a limited frequency band
due to inevitable dispersion at high frequencies.
To overcome these shortcomings, an alternative promising route is to steer
elastic waves by pre-deforming a hyper-elastic material. A wave on the
pre-deformed material perceives a new medium with the modified local density
and instantaneous modulus of the current material Ogden (2007). This effect
purely originates from the finite deformation and the hyper-elastic
constitutive relation of the material. The non-dispersive local density and
instantaneous modulus may provide a new way to control elastic waves with
broadband efficiency. By noticing the similarity between the material
parameters required by the asymmetric transformation Brun _et al._ (2009) and
the local density and instantaneous modulus Ogden (2007) of a pre-deformed
hyper-elastic material, the hyper-elastic transformation theory Parnell
(2012); Norris and Parnell (2012); Chang _et al._ (2015) was proposed to
control the superimposed incremental waves by pre-deforming a hyper-elastic
material. Neo-Hookean hyper-elastic materials Parnell (2012); Parnell _et
al._ (2012) have been investigated to design a cylindrical cloak for
anti-plane shear waves, and the cloaking effect is demonstrated by scattering
analysis. This hyper-elastic material is also used to design a phononic
crystal with invariant band gaps Barnwell _et al._ (2016) for shear waves.
Chang et al. Chang _et al._ (2015) further studied the in-plane longitudinal
and shear waves in a compressible neo-Hookean material under a pre-shear
deformation, and found that the in-plane shear waves follow the design by the
hyper-elastic transformation while the longitudinal waves are not affected.
However, it is only rigorously proved that incompressible neo-Hookean
materials maintain the form invariance for anti-plane shear waves Parnell
(2012); Parnell _et al._ (2012); Barnwell _et al._ (2016). Norris and Parnell
proposed to use a semi-linear hyper-elastic material with a symmetric
pre-deformation to control fully coupled elastic waves Norris and Parnell
(2012), and a cylindrical cloak was designed for both in-plane longitudinal
and shear waves. Based on the semi-linear energy function in terms of the
three principal stretches, they found that the out-of-plane extension should
be adapted to the in-plane stretches in order to control the in-plane
longitudinal and shear waves simultaneously Norris and Parnell (2012).
In this paper, we observe that the second order derivatives of the
semi-linear energy function are composed of the deformation gradients and a
fourth order tensor with the minor anti-symmetry. This fourth order tensor
with the minor anti-symmetry cancels out when acting on a divergence field,
implying that a semi-linear material under a general symmetric deformation,
without the out-of-plane extension constraint, still maintains the form
invariance for a divergence field. Therefore the transformation method can be
used to control longitudinal waves. This finding provides a useful tool to
design devices interacting differently with the longitudinal and shear
components of a fully coupled elastic wave.
This paper is arranged as follows. First, we prove that the divergence
component of elastic waves on a symmetrically pre-deformed semi-linear
material can be strictly mapped from the initial configuration. This result
indicates that the governing equation for longitudinal waves in a semi-linear
material under a symmetric pre-deformation maintains the form invariance, so
these waves can be controlled within the strategy of the transformation
method by pre-deformation. Second, numerical simulations demonstrating
wavelength magnification and dispersion-relation invariance for longitudinal
waves on a pre-deformed semi-linear material are conducted to validate the
transformation method. Finally, a spring lattice is proposed as a
one-dimensional (1D) prototype of a homogeneous semi-linear material, and the
invariant dispersion relation for longitudinal waves under a finite
elongation is demonstrated through a 1D sandwich structure to confirm the
theoretical finding.
## II METHODS
### II.1 Wave propagation on a deformed hyper-elastic material
A stress free hyper-elastic material occupying an initial configuration
$\Omega_{0}$ is deformed to the current configuration $\Omega$ through the
boundary and body loads. Mathematically, this process is represented by a
mapping between the two configurations
$\mathbf{x}:\Omega_{0}\rightarrow\Omega$, which maps any point $\mathbf{X}$ in
$\Omega_{0}$ to another point $\mathbf{x}$ in $\Omega$,
$\mathbf{x}=\mathbf{x}(\mathbf{X})$. This mapping can also be denoted by the
deformation gradient $\mathbf{F}=\partial\mathbf{x}/\partial\mathbf{X}$ in
nonlinear elasticity Ogden (2007), or in component form $F_{iJ}=\partial
x_{i}/\partial X_{J}$. Here, lowercase and uppercase subscripts refer to the
current configuration and the initial configuration, respectively. The deformation
tensor $\mathbf{F}$ in general can take any form compatible with physical
deformation, e.g., identity, diagonal or symmetric form. A small displacement
perturbation $\mathbf{u}(\mathbf{x},t)$ superimposed on the deformed body
$\Omega$ is governed by the dynamic equation in the current configuration
Ogden (2007),
$\rho u_{j,tt}=\nabla_{i}(C_{ijkl}\nabla_{k}u_{l})$ (1)
Here, $\nabla_{i}$ with a lowercase subscript represents the partial
derivative with respect to the current coordinate $x_{i}$, the displacements
$u_{l}$ with a lowercase subscript denote fields physically occurring in the
current configuration, and the subscript $t$ denotes the time derivative. The
subscripts range from 1 to 3 and duplicated indices should be understood as
summation.
The local density $\rho$ and instantaneous elastic tensor $\mathbf{C}$ in the
current configuration are,
$\rho=J^{-1}\rho_{0}$ (2a)
$C_{ijkl}=J^{-1}F_{iM}F_{kN}A_{MjNl}=J^{-1}F_{iM}F_{kN}\frac{\partial^{2}W}{\partial
F_{jM}\partial F_{lN}}$ (2b)
Here, $J$ is the determinant of the deformation tensor $\mathbf{F}$, $\rho$
is derived from the mass conservation law with $\rho_{0}$ the density in the
initial configuration, and $W$ is the strain energy function of the
hyper-elastic material. The instantaneous elastic tensor $\mathbf{C}$ is a
fourth order tensor without the minor symmetry, and the incremental wave is
governed by an elastic medium with the asymmetric incremental stress
$C_{ijkl}\nabla_{k}u_{l}$. The true Cauchy stress in the hyper-elastic
material is still symmetric; only the incremental stress is asymmetric. This
observation stimulates research on the transformation theory for elastic
waves through a finite deformation in a hyper-elastic material Parnell
(2012); Norris and Parnell (2012); Chang _et al._ (2015). It should be noted
that, although the actual perturbation occurs on the current configuration
$\Omega$, its governing equation can be transformed to the initial
configuration $\Omega_{0}$. Following the finite deformation elasticity
theory Ogden (2007), Eq. (1) can be easily pulled back to the initial
configuration,
$\rho_{0}u_{j,tt}=J\nabla_{i}(J^{-1}F_{iM}F_{kN}A_{MjNl}\nabla_{k}u_{l})=\nabla_{M}(A_{MjNl}\nabla_{N}u_{l})$
(3)
In above equation, two mathematic identities, $\nabla_{i}(J^{-1}F_{iM})=0$,
$F_{iM}\nabla_{i}=\nabla_{M}$ are used. $\nabla_{M}$ with an uppercase
subscript represents the partial derivative with respect to the initial
coordinate $\mathbf{X}_{M}$. It is seen that the modulus,
$A_{MjNl}=\partial^{2}W/\partial F_{jM}/\partial F_{lN}$, functions as a
nominal elastic modulus in the initial configuration, and therefore is called
the pull-backed modulus in the following.
For the same hyper-elastic material without the pre-deformation, wave
propagation is governed by the equation in the initial configuration
$\Omega_{0}$,
$\begin{split}\rho_{0}v_{J,tt}=&\nabla_{M}(C_{0MjNl}\nabla_{N}v_{l})\\\
=&(\lambda_{0}+\mu_{0})\nabla_{J}(\nabla_{L}v_{L})+\mu_{0}\nabla_{M}\nabla_{M}v_{J}\end{split}$
(4)
Here, $\mathbf{v}(\mathbf{X},t)$ is used for the displacement field on the
hyper-elastic material without the pre-deformation, to distinguish it from
the displacement field $\mathbf{u}(\mathbf{x},t)$ with the pre-deformation.
The displacements $v_{J}$ with an uppercase subscript denote fields
physically occurring in the initial configuration. Clearly, the above
equation is a special case of Eq. (3) with the deformation tensor being the
identity matrix $\mathbf{F}=\mathbf{I}$, and the elastic tensor for a
conventional isotropic elastic material is recovered, i.e.,
$C_{0MjNl}=\lambda_{0}\delta_{Mj}\delta_{Nl}+\mu_{0}(\delta_{MN}\delta_{jl}+\delta_{Ml}\delta_{jN})$
with $\lambda_{0}$ and $\mu_{0}$ being the Lamé constants.
### II.2 Transformation method for longitudinal waves with semi-linear
materials
In the following, we will prove that the governing equation for longitudinal
waves is form-invariant for semi-linear materials under a symmetric
deformation. The strain energy function of a semi-linear material reads
Norris and Parnell (2012),
$W=\frac{\lambda_{0}}{2}(U_{KK}-3)^{2}+\mu_{0}(U_{KL}-\delta_{KL})(U_{KL}-\delta_{KL})$
(5)
Here, $\mathbf{U}=\mathbf{R}^{-1}\mathbf{F}$ is the right stretch tensor for
the finite deformation gradient $\mathbf{F}$, and $\mathbf{R}$ is an
orthogonal matrix, $\mathbf{R}^{\mathbf{T}}\mathbf{R}=\mathbf{I}$. The strain
energy function $W$ is only related to the stretch tensor $\mathbf{U}$, as
required by objectivity. The pull-backed modulus $A_{MjNl}$ for a general
deformation gradient $\mathbf{F}$ is given in Appendix A, and can be further
simplified when $\mathbf{F}$ is symmetric with $\mathbf{R}=\mathbf{I}$,
$\begin{split}A_{MjNl}|_{\mathbf{F}=\mathbf{F}^{\mathbf{T}}}=&\lambda_{0}\delta_{jM}\delta_{lN}+2\mu_{0}\delta_{lj}\delta_{MN}\\\
+&(\lambda_{0}(U_{KK}-\delta_{KK})-2\mu_{0})\frac{\partial R_{jM}}{\partial
F_{lN}}\end{split}$ (6)
In addition to the first two constant terms, the pull-backed modulus has a
third term depending on the deformation gradient $\mathbf{F}$. From the
geometric point of view, the symmetry of the deformation tensor $\mathbf{F}$
means the deformation field is irrotational. If $\mathbf{F}$ further becomes
the identity matrix $\mathbf{I}$, the modulus $A_{MjNl}$ degenerates to the
conventional isotropic elasticity tensor (see more details in Appendix A).
Substituting the above pull-backed modulus Eq. (6) into Eq. (3) and taking
the divergence of both sides, one obtains
$\begin{split}\rho_{0}\nabla_{J}u_{j,tt}&=\nabla_{J}\nabla_{M}(A_{MjNl}\nabla_{N}u_{l})\\\
&=(\lambda_{0}+2\mu_{0})\nabla_{M}\nabla_{M}(\nabla_{L}u_{l})\end{split}$ (7)
In deriving the above equation, the skew-symmetry of $\partial R_{jM}/\partial
F_{lN}$ with respect to the indices $j$ and $M$ is used. This equation is
exactly the same as the governing equation for the divergence field on the
initially un-deformed configuration, i.e.,
$\rho_{0}\nabla_{J}v_{J,tt}=(\lambda_{0}+2\mu_{0})\nabla_{M}\nabla_{M}(\nabla_{J}v_{J})$.
In this case, once the boundary and loading conditions for the two
configurations are set accordingly Chen _et al._ (2016), the divergence field
on the pre-deformed body can be mapped from that on the initial
configuration, $\nabla_{L}u_{l}=F_{kJ}(\nabla_{k}u_{j})=\nabla_{J}v_{J}$.
This observation implies that, if one region of a semi-linear material is
pre-deformed without altering its boundary, then an impinging longitudinal
wave inside the pre-deformed region will follow the mapping, while outside
the pre-deformed region it will not be influenced at all and just perceives a
homogeneous semi-linear material. It can be proved similarly that the curl
field on the pre-deformed semi-linear material does not follow the same
equation as on the initial configuration, and therefore cannot be mapped.
This property can be used to design wave-controlling devices that interact
differently with longitudinal and shear waves.
A more explicit expression for $A_{MjNl}$ is not available for a general
symmetric gradient; however, a very simple formula for the in-plane
components is possible for the special symmetric gradients with
$F_{13}=F_{23}=0$, $F_{33}\not=0$,
$\begin{split}A_{MjNl}|_{\mathbf{F}=\mathbf{F}^{\mathbf{T}}}=&\lambda_{0}\delta_{jM}\delta_{lN}+2\mu_{0}\delta_{lj}\delta_{MN}\\\
+&(\lambda_{0}(U_{kk}+U_{33}-3)-2\mu_{0})\frac{1}{F_{kk}}\epsilon_{jM}\epsilon_{lN}\end{split}$
(8)
Here, all the indices $M$, $j$, $N$, $l$ and $k$ range from 1 to 2, and
$\epsilon_{jM}$ is the 2D permutation tensor. Further, if the out-of-plane
stretch is constrained according to the in-plane stretches,
$U_{33}=1-\frac{\lambda_{0}+\mu_{0}}{\lambda_{0}}(U_{kk}-2)$ (9)
Equation (9) is the same as (4.14) by Norris and Parnell Norris and Parnell
(2012). Substituting Eq. (9) into Eq. (8), and noticing that $U_{kk}=F_{kk}$
(since $\mathbf{R}=\mathbf{I}$), one obtains an isotropic in-plane elastic
tensor,
$A_{MjNl}|_{\mathbf{F}=\mathbf{F}^{\mathbf{T}}}=\lambda_{0}\delta_{jM}\delta_{lN}+\mu_{0}(\delta_{lj}\delta_{MN}+\delta_{jN}\delta_{Ml})$
(10)
So, with the constraint condition of Eq. (9), both longitudinal and in-plane
shear waves can be controlled simultaneously with the transformation method,
as observed by Norris and Parnell Norris and Parnell (2012). However, we
demonstrate here that for longitudinal waves the constraint condition can be
removed.
## III NUMERICAL EXAMPLES
### III.1 Wave propagation on a symmetrically pre-deformed semi-linear
material
Numerical simulation based on the hyper-elastic transformation is performed
in three steps. First, a stress-free body in the initial configuration
$\Omega_{0}$ is deformed to the current configuration $\Omega$ by loading or
an imaginary mapping, and the deformation tensor $\mathbf{F}$ is obtained.
Second, the density $\rho$ and instantaneous elastic tensor $\mathbf{C}$ in
the current configuration are calculated using Eqs. (2a) and (2b). Third, the
wave equation is solved on the current configuration with the derived local
material parameters. The numerical simulations are all conducted using the
PDE (Partial Differential Equation) and Nonlinear Mechanics modules in the
commercial software COMSOL Multiphysics.
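As a minimal illustration of the first two steps (our own sketch, not the
authors' COMSOL implementation), the deformation gradient of the radial
mapping $r=R^{4}/b^{3}$ used below can be assembled by central differences,
its symmetry verified, and the current density evaluated from Eq. (2a):

```python
import numpy as np

b, rho0 = 11.0, 1.0          # radius of the pre-deformed region (m), initial density

def mapping(X):              # radial map r = R^4/b^3 applied to X = (X1, X2), R < b
    R = np.linalg.norm(X)
    return (R**4 / b**3) * X / R

def def_gradient(X, h=1e-6): # F_iJ = dx_i/dX_J by central differences
    F = np.zeros((2, 2))
    for J in range(2):
        dX = np.zeros(2)
        dX[J] = h
        F[:, J] = (mapping(X + dX) - mapping(X - dX)) / (2 * h)
    return F

X = np.array([3.0, 4.0])     # sample material point with R = 5 m < b
F = def_gradient(X)
assert np.allclose(F, F.T, atol=1e-6)   # the radial mapping gives a symmetric F
J = np.linalg.det(F)
print("F =", F, "\nJ =", J, "rho =", rho0 / J)   # Eq. (2a)
```

The instantaneous modulus then follows from Eq. (2b) once the pull-backed
modulus $A_{MjNl}$ of Eq. (15) is evaluated at this $\mathbf{F}$.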
Figure 1: Wave propagation on the stress-free and pre-deformed semi-linear
materials. (a), (b) Divergence fields for the stress-free and pre-deformed
cases with longitudinal wave excitation. (c), (d) Curl fields in the
stress-free and pre-deformed cases with shear wave excitation. Figure 2: Band
structures of the stress-free and pre-deformed semi-linear materials. (a)
Stress-free semi-linear material and the first four modes corresponding to
the wave vector $\mathbf{k}=\boldsymbol{\Gamma}\mathbf{X}/6$. (b) Semi-linear
material with a pre-deformed circular region and the first four modes
corresponding to the wave vector $\mathbf{k}=\boldsymbol{\Gamma}\mathbf{X}/6$.
We simulate wave propagation in a pre-deformed semi-linear material that can
magnify the wavelength. As shown in FIG. 1(a), the initial configuration
consists of two regions: an inner circular region ($R<b=11$ m), where $b$ is
the radius of the outer boundary (black dashed line) of the pre-deformed
region, and an outer annular region ($b<R<24$ m). Perfectly Matched Layers
adjacent to the outer circular boundary are employed to mimic an infinitely
large space without reflection. With the mapping $r=R^{4}/b^{3}$ for the
inner circular region ($R<b$), this region is transformed to another circular
region with the same radius $b$, as shown in FIG. 1(b). The radial mapping
ensures a symmetric deformation gradient $\mathbf{F}$, and therefore the
condition of the transformation method for longitudinal waves holds. A tiny
circle placed at $r=20$ m excites longitudinal or shear waves ($f=6.5$ Hz) by
expansion or rotation motions, respectively. Material parameters of the
semi-linear material are $\lambda_{0}=770$ Pa, $\mu_{0}=260$ Pa,
$\rho_{0}=1$ kg/m$^{3}$. In the simulation, the longitudinal and shear wave
excitations are modeled by expanding and rotating the boundary of the small
circle in the domain.
Figures 1(a) and 1(b) show the divergence fields for the stress-free and
pre-deformed cases with the longitudinal wave excitation, respectively.
Exactly the same fields are observed outside the pre-deformed region, as
predicted theoretically. Further, the divergence fields in the pre-deformed
region ($r<b$) can be perfectly mapped from the fields in the initial
configuration with $\nabla_{i}u_{i}=F^{-1}_{kL}\nabla_{K}v_{L}$. In contrast,
the curl fields for the stress-free and pre-deformed cases (FIG. 1(c), (d))
with the shear wave excitation are completely different, and significant
scattering caused by the pre-deformed region is observed. This interesting
phenomenon indicates that the pre-deformed region is invisible for
longitudinal waves, but is a strong scatterer for shear waves.
The transformation method for longitudinal waves with a pre-deformed
semi-linear material is further demonstrated by calculating the dispersion
relation. Two square unit cells of the same size with a lattice constant
$a=1$ m are considered, and the periodic media are constructed by tiling the
unit cells along two orthogonal directions. One is composed of a homogeneous
stress-free semi-linear material, while the other has its inner region
($r<0.35$ m) pre-deformed from the same circular region ($R<0.35$ m) with the
mapping function $r=R(R/0.35)^{1/2}$. The Bloch wave frequency is obtained by
solving the eigenfrequency problem on one unit cell with Bloch wave boundary
conditions. The dispersion relation is obtained by sweeping the Bloch wave
vector along the edges of the first irreducible Brillouin zone
($\mathbf{X}-\boldsymbol{\Gamma}-\mathbf{M}-\mathbf{X}$). Figure 2(a) shows
the band structure of the stress-free homogeneous unit cell, and FIG. 2(b)
presents the result for the partly pre-deformed semi-linear material unit
cell. Wave polarizations for the different branches can be identified from
the modes shown below the band structures. The second branch is longitudinal,
and its mode shows an overall displacement parallel to the wave vector, while
the three other branches represent shear waves. From both band structures, it
is seen that the purely longitudinal branches (marked by red dots) are
exactly the same for the stress-free and pre-deformed unit cells, while the
shear-related branches are quite different in the two cases. This is expected
from the longitudinal wave invariance property of the symmetrically
pre-deformed semi-linear material.
Figure 3: Spring lattice as a 1D semi-linear material. (a) Unit cell of the
U-shaped spring and the sandwich spring composite. (b) The deformation field
($\nabla\cdot\mathbf{u}$) of the composite at a finite strain
($\lambda=1.5$). (c) Mechanical property of 10 connected unit cells in (a).
(d) Enlarged view of the indicated cyan region in (c) over the strain range
$\lambda=2.3\sim 3.0$.
### III.2 Prototype of a 1D semi-linear material with spring lattice
Inspired by Hooke's law for a perfect spring, a 1D spring lattice is proposed
to mimic the semi-linear material. As shown in FIG. 3(a), a unit cell of the
proposed spring lattice is composed of connected beams with the following
geometric parameters: $t=0.012$ m, $h=0.2$ m and $w=0.04$ m; all the beams
are made of the same material. A sandwiched spring composite (SSC) can be
constructed by filling the springs between two separated homogeneous
materials. A unit cell of the SSC consists of multiple spring unit cells
($5\times 10$, $l=10w$) and two layers of the homogeneous materials
($l_{1}=l$, $l_{2}=5h$).
The hyper-elastic strain energy function $W(F_{11})$ of the proposed 1D
semi-linear material is obtained numerically by solving for the stored strain
energy in 10 connected unit cells with the applied displacement boundary
condition $u=(F_{11}-1)10w$; it shows a quadratic relation (marked by squares
in FIG. 3(c)) with respect to the deformation gradient $F_{11}=\lambda$ over
the finite deformation range $F_{11}=1.0\sim 3.0$, as compared to the ideal
semi-linear case (red line). Two important points should be noted about the
numerical simulation. First, since finite rotations occur during elongation,
the geometric nonlinear effect must be turned on in the simulation. Second,
although the overall elongation ratio of the unit cell is no longer small,
the local strains in the springs and the homogeneous layers are still quite
small (the Green strain is of the order of magnitude of $10^{-3}$ for an
elongation ratio $\lambda=3$), so a linear elastic constitutive relation is
set for the spring material. Here, the constitutive relation is taken to be
the default setting in COMSOL with geometric nonlinearity, i.e., the St.
Venant-Kirchhoff constitutive relation. In short, the hyper-elastic behavior
of the proposed 1D spring lattice comes from the finite rotation but small
strain of the integrated microstructure. The numerically computed pull-backed
modulus $A_{1111}=\partial^{2}W/\partial F_{11}^{2}$ (marked by triangles) is
nearly constant over the investigated finite strain range. The highlighted
region in FIG. 3(c) is shown more clearly in FIG. 3(d), where the modulus
changes by less than 4 percent from a constant value. The numerical results
show that the proposed spring lattice has a nearly constant pull-backed
modulus under the elongation $F_{11}$, and therefore can be used with the
hyper-elastic transformation method to control longitudinal waves along the
elongation direction. As for its impact on longitudinal waves in the other
directions, the hyper-elastic strain energy function should be studied for a
general deformation gradient, which is beyond the main scope of this paper.
Finally, since the pull-backed modulus is obtained from static
homogenization, i.e., under the long-wave approximation, in the following
analysis we consider only the low-frequency case where the static
homogenization condition holds.
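The post-processing just described can be sketched as follows (our own
illustration, with the ideal quadratic energy as a stand-in for the simulated
$W(F_{11})$ data):

```python
import numpy as np

mu0 = 1.0
F11 = np.linspace(1.0, 3.0, 201)
W = mu0 * (F11 - 1.0) ** 2        # ideal quadratic energy (the red line in FIG. 3(c))

# pull-backed modulus A_1111 = d^2 W / dF11^2 by repeated finite differences
A1111 = np.gradient(np.gradient(W, F11), F11)
interior = A1111[2:-2]            # np.gradient uses one-sided stencils at the ends
print(interior.mean(), interior.std())   # ~2*mu0 and ~0: a constant modulus
```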
Figure 4: Band structure of the proposed 1D semi-linear material. (a) Band
structure of the SSC unit cell without (line) and with (discrete squares)
pre-deformation. (b) Modes of the first four branches corresponding to a
normalized wave number $k_{x}L_{0}=\pi/2$; the left panel is for the
stress-free case and the right panel for the pre-deformed case.
Figure 3(b) shows the deformed state of the SSC at a finite strain
$\lambda=1.5$, where the deformation mainly concentrates at the turning
points of the springs and the strain in the homogeneous material is extremely
small; therefore the springs can be regarded as pre-deformed while the
layered materials remain in their initial configuration. The dispersion
relations for the un-deformed and pre-deformed SSC unit cells (FIG. 3(b)) are
shown in FIG. 4(a), together with their corresponding deformation modes in
FIG. 4(b). Since the unit cells of the un-deformed and pre-deformed cases
have different lengths, the horizontal axis in FIG. 4(a) represents a
non-dimensional wave number $k_{x}L_{0}$, with $L_{0}$ being the length of
the corresponding unit cell. The true deformation is rather small and has
been exaggerated in the figures for clarity. The modes indicate that the
second and fourth branches are mainly related to longitudinal waves, and
therefore these two branches of the pre-deformed case are almost the same as
in the stress-free case. In contrast, the first and third branches are
dominated by shear waves, as shown by the modes, and are changed during
stretching, especially the first branch. The above results show that the SSC
can be used as a robust semi-linear material for controlling longitudinal
waves with the hyper-elastic transformation method.
## IV CONCLUSIONS
The hyper-elastic transformation method with semi-linear materials is
revisited. It is proved that semi-linear materials under a general symmetric
pre-deformation maintain the form invariance for longitudinal elastic waves,
while not for shear waves. This implies that the longitudinal wave control can
be made with the hyper-elastic transformation method without the additional
constraint on pre-stretches. Wave simulations on a pre-deformed semi-linear
material show that, the longitudinal waves follow exactly the designed path by
the hyper-elastic transformation method without any scattering, while the
shear waves are strongly scattered. The dispersion relation of a semi-linear
material, with a symmetric pre-deformation, is shown invariant for the
longitudinal wave branches, but not for the shear related branches. These
numerical results confirm the theoretical finding. Finally, a 1D semi-linear
material realized by a spring lattice is proposed to mimic a semi-linear
material. Numerical simulations reveal that the longitudinal wave branches
remain intact during stretching. The demonstrated property of the semi-linear
material may find applications where longitudinal and shear waves are expected
to be controlled differently with external deformation.
###### Acknowledgements.
This work was supported by National Natural Science Foundation of China (Grant
Nos. 11472044, 11521062, 11632003).
## Appendix A DERIVATIVE OF STRAIN ENERGY FUNCTION
Following the polar decomposition of a deformation tensor
$\mathbf{F}=\mathbf{R}\mathbf{U}$, we have
$\mathbf{F}^{\mathbf{T}}\mathbf{F}=\mathbf{U}^{2}$ or in component form
$U_{IK}U_{KJ}=F_{kI}F_{kJ}$. Taking the derivative of both sides with respect
to $F_{lN}$ leads to,
$\frac{\partial U_{IK}}{\partial F_{lN}}U_{KJ}+\frac{\partial U_{KJ}}{\partial
F_{lN}}U_{IK}=F_{lJ}\delta_{NI}+F_{lI}\delta_{NJ}$ (11)
From det($\mathbf{U}$)=det($\mathbf{F}$)>0, one can deduce that $\mathbf{U}$
is invertible. Multiplying both sides by $\delta_{IJ}$ and $U^{-1}_{IJ}$
gives, respectively,
$\frac{\partial U_{IK}}{\partial F_{lN}}U_{KI}=F_{lN},\quad\frac{\partial
U_{KK}}{\partial F_{lN}}=F_{lI}U^{-1}_{IN}=R_{lN}$ (12)
Further differentiating Eq. (11) with respect to $F_{rS}$ and multiplying both
sides with $U^{-1}_{IJ}$ gives,
$\frac{\partial R_{lN}}{\partial F_{rS}}+\frac{\partial U_{IK}}{\partial
F_{lN}}\frac{\partial U_{KJ}}{\partial
F_{rS}}U^{-1}_{IJ}=\delta_{lr}U^{-1}_{SN}$ (13)
If $\mathbf{F}$ becomes an identity matrix, Eq. (13) can be simplified as,
$\frac{\partial R_{lN}}{\partial
F_{rS}}|_{\mathbf{F}=\mathbf{I}}=\frac{1}{2}(\delta_{lr}\delta_{SN}-\delta_{lS}\delta_{rN})$
(14)
With the above formulas in Eq. (12), the second derivative of the semi-linear
strain energy function can be derived,
$\begin{split}A_{MjNl}&=\frac{\partial^{2}W}{\partial F_{jM}\partial
F_{lN}}=\frac{\partial}{\partial F_{jM}}\frac{\partial}{\partial
F_{lN}}(\frac{\lambda_{0}}{2}(U_{KK}-\delta_{KK})^{2}+\mu_{0}(U_{KL}U_{KL}-2U_{KK}))\\\
&=\frac{\partial}{\partial
F_{jM}}(\lambda_{0}(U_{KK}-\delta_{KK})\frac{\partial U_{KK}}{\partial
F_{lN}}+2\mu_{0}U_{KL}\frac{\partial U_{KL}}{\partial
F_{lN}}-2\mu_{0}\frac{\partial U_{KK}}{\partial F_{lN}})\\\
&=\frac{\partial}{\partial
F_{jM}}((\lambda_{0}U_{KK}-\lambda_{0}\delta_{KK}-2\mu_{0})R_{lN}+2\mu_{0}F_{lN})\\\
&=\lambda_{0}R_{jM}R_{lN}+2\mu_{0}\delta_{lj}\delta_{MN}+(\lambda_{0}U_{KK}-\lambda_{0}\delta_{KK}-2\mu_{0})\frac{\partial
R_{lN}}{\partial F_{jM}}\end{split}$ (15)
If the deformation tensor becomes symmetric
$\mathbf{F}=\mathbf{F}^{\mathbf{T}}$, we can simplify the pull-backed modulus
Eq. (15) as,
$\begin{split}A_{MjNl}|_{\mathbf{F}=\mathbf{F}^{\mathbf{T}}}=&\lambda_{0}\delta_{jM}\delta_{lN}+2\mu_{0}\delta_{lj}\delta_{MN}\\\
+&(\lambda_{0}(U_{KK}-\delta_{KK})-2\mu_{0})\frac{\partial R_{lN}}{\partial
F_{jM}}\end{split}$ (16)
Notice that $\partial R_{lN}/\partial F_{jM}=\partial R_{jM}/\partial F_{lN}$
is skew-symmetric with respect to the indices $j$ and $M$ when $\mathbf{F}$
is symmetric. This can easily be proved from the identity
$\mathbf{R}\mathbf{R}^{\mathbf{T}}=\mathbf{I}$ by differentiation,
$\frac{\partial R_{iK}}{\partial F_{lN}}R_{jK}+R_{iK}\frac{\partial
R_{jK}}{\partial F_{lN}}=0$ (17a) $\frac{\partial R_{iJ}}{\partial
F_{lN}}|_{\mathbf{F}=\mathbf{F}^{\mathbf{T}}}+\frac{\partial R_{jI}}{\partial
F_{lN}}|_{\mathbf{F}=\mathbf{F}^{\mathbf{T}}}=0$ (17b)
Further, if $\mathbf{F}$ becomes the identity matrix, the pull-backed modulus
reduces to the conventional isotropic one,
$\begin{split}A_{MjNl}|_{\mathbf{F}=\mathbf{F}^{\mathbf{T}}}=&\lambda_{0}\delta_{jM}\delta_{lN}+2\mu_{0}\delta_{lj}\delta_{MN}-2\mu_{0}\frac{\partial
R_{lN}}{\partial F_{jM}}|_{\mathbf{F}=\mathbf{I}}\\\
=&\lambda_{0}\delta_{jM}\delta_{lN}+\mu_{0}\delta_{lj}\delta_{MN}+\mu_{0}\delta_{lM}\delta_{jN}\end{split}$
(18)
## References
* Pendry _et al._ (2006) J. B. Pendry, D. Schurig, and D. R. Smith, Science 312, 1780 (2006).
* Milton _et al._ (2006) G. W. Milton, M. Briane, and J. R. Willis, New Journal of Physics 8, 248 (2006).
* Cummer and Schurig (2007) S. A. Cummer and D. Schurig, New Journal of Physics 9, 45 (2007).
* Norris (2008) A. N. Norris, Proc. R. Soc. A 464, 2411 (2008).
* Norris and Shuvalov (2011) A. Norris and A. Shuvalov, Wave Motion 48, 525 (2011).
* Chang _et al._ (2010) Z. Chang, X. Zhou, J. Hu, and G. Hu, Optics express 18, 6089 (2010).
* Brun _et al._ (2009) M. Brun, S. Guenneau, and A. B. Movchan, Applied Physics Letters 94, 061903 (2009).
* Chen _et al._ (2016) Y. Chen, X. Liu, and G. Hu, The Journal of the Acoustical Society of America 140, EL405 (2016).
* Bückmann _et al._ (2014) T. Bückmann, M. Thiel, M. Kadic, R. Schittny, and M. Wegener, Nature communications 5, 4130 (2014).
* Chen _et al._ (2017) Y. Chen, M. Zheng, X. Liu, Y. Bi, Z. Sun, P. Xiang, J. Yang, and G. Hu, Physical Review B 95, 180104 (2017).
* Ogden (2007) R. W. Ogden, COURSES AND LECTURES 495, 1 (2007).
* Parnell (2012) W. J. Parnell, Proc. R. Soc. A 468, 563 (2012).
* Norris and Parnell (2012) A. N. Norris and W. J. Parnell, Proc. R. Soc. A 468, 2881 (2012).
* Chang _et al._ (2015) Z. Chang, H.-Y. Guo, B. Li, and X.-Q. Feng, Applied Physics Letters 106, 161903 (2015).
* Parnell _et al._ (2012) W. J. Parnell, A. N. Norris, and T. Shearer, Applied Physics Letters 100, 171907 (2012).
* Barnwell _et al._ (2016) E. G. Barnwell, W. J. Parnell, and I. D. Abrahams, Wave Motion 63, 98 (2016).
|
# High-precision force sensing using a single trapped ion
Peter A. Ivanov Department of Physics, St. Kliment Ohridski University of
Sofia, James Bourchier 5 blvd, 1164 Sofia, Bulgaria Nikolay V. Vitanov
Department of Physics, St. Kliment Ohridski University of Sofia, James
Bourchier 5 blvd, 1164 Sofia, Bulgaria Kilian Singer Experimentalphysik I,
Universität Kassel, Heinrich-Plett-Str. 40, D-34132 Kassel, Germany
(August 27, 2024)
###### Abstract
We introduce quantum sensing schemes for measuring very weak forces with a
single trapped ion. They use the spin-motional coupling induced by the laser-
ion interaction to transfer the relevant force information to the spin-degree
of freedom. Therefore, the force estimation is carried out simply by observing
the Ramsey-type oscillations of the ion spin states. Three quantum probes are
considered, which are represented by systems obeying the Jaynes-Cummings,
quantum Rabi (in 1D) and Jahn-Teller (in 2D) models. By using dynamical
decoupling schemes in the Jaynes-Cummings and Jahn-Teller models, our force
sensing protocols can be made robust to the spin dephasing caused by the
thermal and magnetic field fluctuations. In the quantum-Rabi probe, the
residual spin-phonon coupling vanishes, which makes this sensing protocol
naturally robust to thermally-induced spin dephasing. We show that the
proposed techniques can be used to sense the axial and transverse components
of the force with a sensitivity beyond the yN$/\sqrt{\text{Hz}}$ range, i.e.,
in the xN$/\sqrt{\text{Hz}}$ range (xennonewton, $10^{-27}$ N). The Jahn-Teller
protocol, in particular, can be used to implement a two-channel vector
spectrum analyzer for measuring ultra-low voltages.
###### pacs:
03.67.Ac, 03.67.Bg, 03.67.Lx, 42.50.Dv
## I Introduction
Over the last few years, research on mechanical systems coupled to quantum
two-level systems has attracted a great deal of experimental and theoretical
interest Treutlein2015 ; Kurizki2015 . Micro- and nano-mechanical oscillators
can respond to very weak electric, magnetic and optical forces, which allows
one to use them as highly sensitive force detectors Stowe1997 . For example,
a cantilever with attonewton ($10^{-18}$ N) force sensitivity can be used to
test the violation of Newtonian gravity at sub-millimeter length scales
Geraci2008 . With current quantum technologies, coupling between a
nanomechanical oscillator and a single spin can be achieved experimentally by
using a strong magnetic-field gradient. Such a coupling paves the way for
sensing the magnetic force associated with a single electron spin Rugar2004 .
To this end, a recent experiment demonstrated that the coherent evolution of
the electronic spin of an individual nitrogen vacancy center can be used to
detect the vibration of a magnetized mechanical resonator Kolkowitz2012 .
Another promising quantum platform with applications in high-precision
sensing is the system of laser-cooled trapped ions, which allows excellent
control over the internal and motional degrees of freedom Blatt2008 . A force
sensitivity of the order of 170 yN $\rm{Hz}^{-1/2}$ ($10^{-24}$ N) was
reported recently with an ensemble of ions in a Penning trap Biercuk2010 .
Force measurement down to $5$ yN has been demonstrated experimentally using
the injection-locking technique with a single trapped ion Knunz2010 .
Moreover, force detection with a sensitivity in the range of 1 yN
$\rm{Hz}^{-1/2}$ is possible in single-ion experiments based on the
measurement of the ion’s displacement amplitude Maiwald2009 .
In this work, we propose ion-based sensing schemes for measuring very rapidly
varying forces, which follow an earlier proposal Ivanov2015 wherein the
relevant force information is mapped into the spin degrees of freedom of the
single trapped ion. In contrast to Ivanov2015 , the techniques proposed here
do not require specific adiabatic evolution of the control parameters but
rather they rely on using Ramsey-type oscillations of the ion’s spin states,
which are detected via state-dependent fluorescence measurements. Moreover, we
show that by using dynamical decoupling schemes, the sensing protocols become
robust against dephasing of the spin states caused by thermal and magnetic-
field fluctuations.
We consider a quantum system described by the Jaynes-Cummings (JC) model,
which can be used as a highly sensitive quantum probe for sensing the axial
force component. By applying an additional strong driving field Zheng ;
Solano2003 , the dephasing of the spin states induced by the residual
spin-phonon interaction can be suppressed, such that the sensing protocol
does not require initial ground-state cooling of the ion’s vibrational state.
We show that the axial force sensing can also be implemented using a probe
represented by the quantum Rabi (QR) model. Because of the absence of
residual spin-motional coupling in this case, the force estimation is robust
to spin dephasing induced by thermal motion fluctuations.
Furthermore, we introduce a sensing scheme capable of extracting the
two-dimensional map of the applied force. Here the quantum probe is
represented by the Jahn-Teller (JT) model, in which the spin states are
coupled with phonons in two spatial directions. We show that the two
transverse components of the force can be measured simply by observing the
coherent evolution of the spin states. In order to protect the spin coherence
during the force estimation, we propose a dynamical decoupling sequence
composed of phonon phase-shift operators, which averages the residual
spin-phonon interaction to zero.
We estimate the optimal force sensitivity in the presence of motional heating
and find that with current ion trap technologies force sensitivity better than
1 yN $\rm{Hz}^{-1/2}$ can be achieved. Thus, a single trapped ion may serve as
a high-precision sensor of very weak electric fields generated by small needle
electrodes with sensitivity as low as 1 $\mu{\rm V}/{\rm m}$ $\rm{Hz}^{-1/2}$.
This paper is arranged as follows. In Sec. II we describe the sensing
protocol for detecting the axial component of very weak forces, using a
quantum probe represented by the Jaynes-Cummings and quantum Rabi models. It
is shown that, by using a dynamical decoupling technique, the sensing
protocol based on the Jaynes-Cummings system is immune to thermal spin
dephasing. In Sec. III we introduce a sensing scheme that is able to detect
the two components of the external force. Finally, in Sec. IV we summarize
our findings.
## II 1D Force Sensing
### II.1 Jaynes-Cummings quantum probe
In our model we consider a single two-state ion with a transition frequency
$\omega_{0}$, in a linear Paul trap with an axial trap frequency $\omega_{z}$.
The small axial oscillation of the ion is described by the vibrational
Hamiltonian $\hat{H}_{\rm ax}=\hbar\omega_{z}\hat{a}^{{\dagger}}\hat{a}$,
where $\hat{a}^{{\dagger}}$ ($\hat{a}$) creates (annihilates) a phonon
excitation. We assume that the ion interacts with a laser field with a
frequency $\omega_{\rm L}=\omega_{0}-\omega_{z}+\delta$, tuned near the red-
sideband resonance with a detuning $\delta$. The interaction Hamiltonian in
the Lamb-Dicke limit and the rotating-wave approximation reads Wineland1998 ;
Haffner2008 ; Schneider2012
$\hat{H}_{\rm
JC}=\hbar\omega\hat{a}^{{\dagger}}\hat{a}+\hbar\Delta\sigma_{z}+\hbar
g(\sigma^{-}\hat{a}^{{\dagger}}+\sigma^{+}\hat{a}),$ (1)
with $\delta=\Delta-\omega$, where $\Delta$ is the effective spin frequency
and $\omega$ is the effective phonon frequency. Here, $\sigma_{x,y,z}$ are the
Pauli matrices, $\sigma^{\pm}$ are the respective raising and lowering
operators for the effective spin system, and $g$ determines the strength of
the spin-phonon coupling.
The external time-varying force with a frequency
$\omega_{d}=\omega_{z}-\omega$, e.g., $F(t)=F\cos(\omega_{d}t)$, displaces the
motional amplitude of the ion oscillator along the axial direction, as
described by the term
$\hat{H}_{F}=\frac{z_{\rm ax}F}{2}(\hat{a}^{{\dagger}}+\hat{a}).$ (2)
Here $z_{\rm ax}=\sqrt{\hbar/2m\omega_{z}}$ is the spread of the zero-point
wavefunction along the axial direction and $F$ is the parameter we wish to
estimate. The origin of the oscillating force can be a very weak electric
field, an optical dipole force, spin-dependent forces created in a magnetic-
field gradient or a Stark-shift gradient, etc. With the term (2) the total
Hamiltonian becomes
$\hat{H}_{\rm T}=\hat{H}_{\rm JC}+\hat{H}_{F}.$ (3)
In the following, we consider the weak-coupling regime $g\ll\omega$, in which
the phonon degree of freedom can be eliminated from the dynamics. This can be
carried out by applying the canonical transformation $\hat{U}=e^{\hat{S}}$ to
$\hat{H}_{\rm T}$ (3) such that $\hat{H}_{\rm eff}^{\rm
JC}=e^{-\hat{S}}\hat{H}_{\rm T}e^{\hat{S}}$ with
$\hat{S}=(g/\omega)(\sigma^{+}\hat{a}-\sigma^{-}\hat{a}^{{\dagger}})+(\Omega_{F}/g)(\hat{a}-\hat{a}^{{\dagger}})$.
Keeping only the terms of order of $g/\omega$ we arrive at the following
effective Hamiltonian (see the Appendix),
$\displaystyle\hat{H}_{\rm eff}^{\rm{JC}}$
$\displaystyle=\hbar\widetilde{\Delta}\sigma_{z}-\hbar\Omega_{F}\sigma_{x}-\hat{H}_{\rm
JC}^{\prime},$ (4a) $\displaystyle\hat{H}^{\prime}_{\rm JC}$
$\displaystyle=\frac{\hbar
g^{2}}{\omega}\sigma_{z}\hat{a}^{{\dagger}}\hat{a}.$ (4b)
This result indicates that the spin-motional interaction in Eq. (3) shifts the
effective spin frequency by the amount
$\widetilde{\Delta}=\Delta-g^{2}/2\omega$, while the effect of the force term
is to induce transitions between the spin states. The strength of the
transition is quantified by the Rabi frequency $\Omega_{F}=gz_{\rm
ax}F/2\hbar\omega$, which is proportional to the applied force $F$. Hence the
force estimation can be carried out by observing the coherent evolution of the
spin population that can be read out via state-dependent fluorescence.
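The reduction from Eq. (3) to the Rabi dynamics of Eq. (4a) can be
illustrated numerically. The following sketch (our own, with $\hbar=1$ and
toy parameters in the regime $g\ll\omega$) propagates the full Hamiltonian
(3) in a truncated Fock space and compares $P_{\uparrow}(t)$ with
$\cos^{2}(\Omega_{F}t)$:

```python
import numpy as np
from scipy.linalg import expm

N = 20                                     # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
ad = a.conj().T
sz = np.diag([1.0, -1.0])                  # Pauli sigma_z, |up> first
sp = np.array([[0.0, 1.0], [0.0, 0.0]])    # sigma^+
sm = sp.T                                  # sigma^-
I2, IN = np.eye(2), np.eye(N)

g, omega = 0.02, 1.0                       # weak coupling, g << omega
f = 0.1                                    # force amplitude, f = z_ax*F/(2*hbar)
Delta = g**2 / (2 * omega)                 # cancels the effective detuning
H = (omega * np.kron(I2, ad @ a) + Delta * np.kron(sz, IN)
     + g * (np.kron(sm, ad) + np.kron(sp, a))   # spin-phonon coupling, Eq. (1)
     + f * np.kron(I2, ad + a))                 # force term, Eq. (2)

W_F = g * f / omega                        # Omega_F = g*z_ax*F/(2*hbar*omega)
psi0 = np.zeros(2 * N, dtype=complex)
psi0[0] = 1.0                              # initial state |up, n=0>
for frac in (0.25, 0.5, 1.0):              # fractions of the pi/2 Rabi time
    t = frac * np.pi / (2 * W_F)
    psi = expm(-1j * H * t) @ psi0
    P_up = np.linalg.norm(psi[:N]) ** 2
    print(round(P_up, 3), round(np.cos(W_F * t) ** 2, 3))
```

The two columns agree up to corrections of order $g/\omega$ and $f/\omega$,
consistent with the perturbative elimination of the phonon.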
Figure 1: (Color online) a) Time-evolution of the probability to find the
system in spin state $\left|\uparrow\right\rangle$ for the JC system. We
compare the probabilities derived from the original Hamiltonian (3) (dots) and
the effective Hamiltonian (4a) (solid lines). We assume an initial thermal
distribution with a mean phonon number $\bar{n}=1.2$. The parameters are set
to $g=4$ kHz, $\omega=170$ kHz, $\Delta=g^{2}/2\omega$, $z_{\rm ax}=14.5$ nm,
$F=20$ yN and $\Omega=10$ kHz. For the same initial state but in the absence
of driving field ($\Omega=0$), the signal loses contrast (blue dashed line).
b) Contrast of the Rabi oscillations defined as
$S=P_{\uparrow}(t_{2})-P_{\uparrow}(t_{1})$ with $t_{1}=\pi/2\Omega_{F}$ and
$t_{2}=\pi/\Omega_{F}$ with $\Omega_{F}=60$ kHz as a function of the mean
phonon number $\bar{n}$.
The last term $\hat{H}_{\rm JC}^{\prime}$ in Eq. (4a) is the residual spin-
motional coupling. This term affects the force estimation because it can be a
source of pure spin dephasing Viola1999 . Indeed, the $\sigma_{z}$ factor in
$\hat{H}_{\rm JC}^{\prime}$ induces transitions between the eigenstates
$\left|\pm\right\rangle$ of the operator $\sigma_{x}$ depending on the
vibrational state of the oscillator. As long as the oscillator is prepared
initially in an incoherent vibrational state at a finite temperature this
would lead to a random component in the spin energy. As we will see below, by
using dynamical decoupling the effect of the pure spin dephasing can be
reduced.
The sensing protocol starts by preparing the system in state
$\hat{\rho}(0)=\left|\uparrow\right\rangle\left\langle\uparrow\right|\otimes\hat{\rho}_{\rm
osc}$, where $\hat{\rho}_{\rm osc}$ stands for the initial density operator of
the oscillator. According to Eq. (4a), the evolution of the system is driven
by the unitary propagator $\hat{U}_{\rm JC}(t,0)=e^{-{\rm i}\hat{H}_{\rm
eff}^{\rm JC}t/\hbar}$. Assuming for the moment that $\hat{\rho}_{\rm
osc}=|0\rangle\langle 0|$ where $|n\rangle$ is the harmonic oscillator Fock
state with $n$ phonon excitations, the probability to find the system in state
$\left|\uparrow\right\rangle$ is $P_{\uparrow}(t)=\cos^{2}(\Omega_{F}t)$,
where for simplicity we set $\Delta=g^{2}/2\omega$, hence
$\widetilde{\Delta}=0$. In this case, the effect of $\hat{H}_{\rm
JC}^{\prime}$ automatically vanishes such that the signal exhibits a cosine
behavior according to the effective Hamiltonian (4a). An initial thermal
phonon distribution, however, would introduce dephasing on the spin
oscillations caused by thermal fluctuations. The spin coherence can be
protected, for example, by applying a sequence of fast pulses, which flip the
spin states and average the residual spin-motional interaction to zero during
the force estimation Yang2010 . On the other hand, because the relevant force
information is encoded in the $\sigma_{x}$ term in Eq. (4a), continuously
applying an additional strong driving field $\hat{H}_{\rm
d}=\hbar\Omega\sigma_{x}$ in the same basis Zheng ; Solano2003 , such that
$\hat{H}_{\rm T}\rightarrow\hat{H}_{\rm T}+\hat{H}_{\rm d}$, does not affect
the force estimation but rather suppresses the effect of the residual
spin-motional coupling. Indeed, going into the interaction frame with respect
to $\hat{H}_{\rm d}$, the residual spin-motional coupling becomes
$\hat{H}_{\rm JC}^{\prime}(t)=\frac{\hbar g^{2}}{\omega}(e^{2{\rm i}\Omega
t}|+\rangle\langle-|+e^{-2{\rm i}\Omega
t}|-\rangle\langle+|)\hat{a}^{{\dagger}}\hat{a}.$ (5)
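Equation (5) follows from writing
$\sigma_{z}=|+\rangle\langle-|+|-\rangle\langle+|$ in the eigenbasis of
$\sigma_{x}$, where $\sigma_{x}|\pm\rangle=\pm|\pm\rangle$, and rotating with
the drive:
$e^{{\rm i}\Omega t\sigma_{x}}\,\sigma_{z}\,e^{-{\rm i}\Omega
t\sigma_{x}}=e^{2{\rm i}\Omega t}|+\rangle\langle-|+e^{-2{\rm i}\Omega
t}|-\rangle\langle+|.$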
The latter result indicates that the off-resonant transitions between the
states $\left|\pm\right\rangle$ induced by $\hat{H}_{\rm JC}^{\prime}$ are
suppressed if $g^{2}/2\omega\ll\Omega$. By splitting the pulse sequence into
an evolution from $t=0$ to $t/2$ with the Hamiltonian $\hat{H}_{\rm
T}+\hat{H}_{\rm d}$, followed by an evolution from $t/2$ to $t$ with the
Hamiltonian $\hat{H}_{\rm T}-\hat{H}_{\rm d}$, the spin states are protected
from the thermal dephasing and the signal depends only on the Rabi frequency
$\Omega_{F}$ at the final time $t$. Note that the effect of magnetic-field
fluctuations on the spin states is described by an additional $\sigma_{z}$
term in Eq. (4a); therefore the strong driving field used here also
suppresses the spin dephasing caused by the magnetic-field fluctuations, as
was demonstrated experimentally Timoney2011 ; Webster2013 .
In Fig. 1(a) we show the time evolution of the probability $P_{\uparrow}(t)$
for an initial thermal vibrational state. Applying the driving field during
the force estimation reduces the spin dephasing and hence protects the
contrast of the Rabi oscillations, see Fig. 1(b). We note that a similar
technique, using a strong carrier driving field for dynamical decoupling, was
proposed for the implementation of a high-fidelity phase gate with two trapped
ions Tan2013 ; Bermudez2012 .
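To make the dephasing mechanism concrete, the following minimal Python sketch (our own illustration, in dimensionless units chosen for visibility rather than the parameters of Fig. 1) averages detuned Rabi oscillations over a thermal phonon distribution; this is exactly the effect the strong drive is designed to suppress.

```python
import numpy as np

# With Delta tuned so that the sigma_z term in Eq. (4a) vanishes, the spin in
# Fock sector |n> evolves under H_n = -Omega_F*sx - chi*n*sz, chi = g^2/omega,
# so the signal is a thermal average of detuned Rabi oscillations.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def p_up(t, omega_f=1.0, chi=0.15, nbar=2.0, nmax=80):
    """Probability to remain in |up>, averaged over a thermal phonon state."""
    ns = np.arange(nmax)
    w = (nbar / (1 + nbar)) ** ns / (1 + nbar)        # thermal weights
    p = 0.0
    for n, wn in zip(ns, w):
        h = -omega_f * sx - chi * n * sz              # illustrative units
        e, v = np.linalg.eigh(h)
        amp = v @ (np.exp(-1j * e * t) * (v.conj().T @ np.array([1, 0 + 0j])))
        p += wn * abs(amp[0]) ** 2
    return p

for t in [0.0, np.pi / 2, np.pi, 5 * np.pi, 10 * np.pi]:
    print(f"t = {t:6.2f}  P_up = {p_up(t):.3f}")
# The contrast decays with time; a strong sigma_x drive (cf. Eq. 5) pushes the
# residual coupling off resonance and restores it, as in Fig. 1(b).
```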
Figure 2: (Color online) The sensitivity of the force measurement versus time
$t$ for various values of $\omega$. We assume an initial thermal vibrational
state with a mean phonon number $\bar{n}=1$. The solid lines represent the
analytical result given by Eq. (7) while the dots are the exact numerical
solution with the Hamiltonian (3) including the strong driving term. The other
parameters are set to $g=4$ kHz and $\Omega=7$ kHz.
The shot-noise-limited sensitivity for measuring $\Omega_{F}$ is
$\delta\Omega_{F}=\frac{\Delta P_{\uparrow}(t)}{\frac{\partial
P_{\uparrow}(t)}{\partial\Omega_{F}}\sqrt{\nu}},$ (6)
where $\Delta P_{\uparrow}(t)$ stands for the standard deviation of the signal
and $\nu=T/\tau$ is the repetition number. Here $T$ is the total experimental
time, and the time $\tau$ includes the evolution time as well as the
preparation and measurement times. Because our technique relies on state-
projective detection, for which the preparation and measurement times are much
shorter than the evolution time, we assume $\tau\approx t$. From Eq. (6) we
find that the sensitivity, which characterizes the minimal force difference
that can be discriminated within a total experimental time of 1 s, is
$F_{\rm min}\sqrt{T}=\frac{\hbar\omega}{gz_{\rm ax}\sqrt{t}}.$ (7)
In Fig. 2 we show the sensitivity of the force estimation versus time $t$ for
different frequencies $\omega$ assuming an initial thermal vibrational state.
For an evolution time of $20$ ms, force sensitivity of $2$ yN $\rm{Hz}^{-1/2}$
can be achieved.
Let us now estimate the effect of the motional heating which limits the force
estimation. Indeed, the heating of the ion motion causes damping of the
signal, which leads to Wineland1998
$P_{\uparrow}(t)=\frac{1}{2}[1+e^{-\gamma t}\cos(2\Omega_{F}t)],$ (8)
where $\gamma$ is the decoherence rate. We assume that
$\gamma\sim\langle\dot{n}_{\rm ax}\rangle$ where $\langle\dot{n}_{\rm
ax}\rangle$ stands for the axial ion’s heating rate. Thus, the optimal force
sensitivity is Huelga1997
$F_{\rm min}\sqrt{T}=\frac{\hbar\omega}{gz_{\rm ax}}\sqrt{2\langle\dot{n}_{\rm
ax}\rangle e}.$ (9)
Using the parameters in Fig. 2 with $\omega=180$ kHz and assuming
$\langle\dot{n}_{\rm ax}\rangle=0.01$ $\rm{ms}^{-1}$, we estimate a force
sensitivity of 2.4 yN $\rm{Hz}^{-1/2}$. For a cryogenic ion trap with a
heating rate of $\langle\dot{n}_{\rm ax}\rangle=1$ ${\rm s}^{-1}$ and an
evolution time of $t=500$ ms, the force sensitivity would be 0.8 yN
$\rm{Hz}^{-1/2}$.
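The quoted numbers follow directly from Eqs. (7) and (9); a short numerical check (our own sketch, using the parameter values quoted above and in the figure captions) is:

```python
import numpy as np

hbar = 1.054571817e-34        # J s

def f_shot(omega, g, z, t):
    """Shot-noise-limited sensitivity of Eq. (7), in N Hz^{-1/2}."""
    return hbar * omega / (g * z * np.sqrt(t))

def f_heating(omega, g, z, ndot):
    """Heating-limited optimal sensitivity of Eq. (9)."""
    return hbar * omega / (g * z) * np.sqrt(2 * ndot * np.e)

# Only the ratio omega/g enters, so any common 2*pi convention cancels.
omega, g, z_ax = 180e3, 4e3, 14.5e-9   # z_ax taken from the Fig. 3 caption
print(f_shot(omega, g, z_ax, 20e-3) * 1e24)    # ~2.3 yN/sqrt(Hz), the ~2 yN above
print(f_heating(omega, g, z_ax, 10.0) * 1e24)  # <ndot> = 0.01 ms^-1 -> ~2.4 yN
print(f_heating(omega, g, z_ax, 1.0) * 1e24)   # cryogenic trap     -> ~0.8 yN
```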
Figure 3: (Color online) Time-evolution of the probability to find the system
in spin state $\left|\uparrow\right\rangle$ for the QR system. We assume an
initial thermal vibrational state with a mean phonon number $\bar{n}=1.2$. Due
to the absence of residual spin-motion coupling the Rabi oscillations are
robust with respect to the spin dephasing caused by the thermal fluctuations.
We compare the probability derived from the Hamiltonian $\hat{H}_{\rm
T}=\hat{H}_{\rm QR}+\hat{H}_{F}$ with the analytical solution
$P_{\uparrow}=\cos^{2}(2\Omega_{F}t)$. The parameters are set to $g=4$ kHz,
$\omega=170$ kHz, $z_{\rm ax}=14.5$ nm, $F=20$ yN.
### II.2 Quantum Rabi model
An alternative approach to sense the axial component of the force is to use a
probe described by the quantum Rabi model,
$\hat{H}_{\rm QR}=\hbar\omega\hat{a}^{{\dagger}}\hat{a}+\hbar
g\sigma_{x}(\hat{a}^{{\dagger}}+\hat{a}),$ (10)
which includes the counter-rotating terms. This Hamiltonian can be
implemented by using a bichromatic laser field along the axial direction
Pedernales2015 . In the weak-coupling regime, $g\ll\omega$, we find by using
the unitary transformation $\hat{U}=e^{\hat{S}}$ with
$\hat{S}=-(g/\omega)\sigma_{x}(\hat{a}^{{\dagger}}-\hat{a})-(2\Omega_{F}/g)(\hat{a}^{{\dagger}}-\hat{a})$
that (see the Appendix)
$\hat{H}_{\rm eff}^{\rm QR}=-2\hbar\Omega_{F}\sigma_{x}.$ (11)
In contrast to Eq. (4a), now the effective Hamiltonian (11) does not contain
an additional residual spin-motional coupling, which implies that the spins
are immune to dephasing caused by the thermal motion fluctuations, see Fig. 3.
Thereby the force estimation can be carried out without an additional strong
driving field. We find that the optimal force sensitivity is similar to Eq.
(9) but with an extra factor of 2 in the denominator,
$F_{\rm min}\sqrt{T}=\frac{\hbar\omega}{2gz_{\rm
ax}}\sqrt{2\langle\dot{n}_{\rm ax}\rangle e}.$ (12)
Up to now we have considered probes that are responsive only to the axial
component of the force. In the following we propose a sensing technique that
can be used to detect the two transverse components of the time-varying
external force.
## III Jahn-Teller quantum probe
In conventional ion trap sensing methods, the information on the force
direction can be extracted by using the three spatial vibrational modes of the
ion Maiwald2009 ; Munro2003 . Such an experiment requires an independent
measurement of the displacement amplitudes in each vibrational mode, which,
however, increases the complexity of the measurement procedure and can lead to
longer total experimental times. Here we show that by utilizing the laser-
induced coupling between the spin states and the transverse ion oscillation,
we are able to detect the transverse components of the force simply by
observing the coherent evolution of the spin states.
Indeed, let us consider the case in which the small transverse oscillations of
the ion with a frequency $\omega_{\rm t}$ described by the Hamiltonian
$\hat{H}_{\rm t}=\hbar\omega_{\rm
t}(\hat{a}_{x}^{{\dagger}}\hat{a}_{x}+\hat{a}_{y}^{{\dagger}}\hat{a}_{y})$ are
coupled with the spin states via a Jahn-Teller interaction. Such a coupling
can be achieved by using bichromatic laser fields with frequencies
$\omega_{b,r}=\omega_{0}\pm(\omega_{\rm t}-\omega)$ tuned respectively near
the blue- and red-sideband resonances, with a detuning $\omega$, which excite
the transverse $x$ and $y$ vibrational modes of the trapped ion. The
interaction Hamiltonian of the system is given by Porras2012 ; Ivanov2013
$\hat{H}_{\rm
JT}=\hbar\omega(\hat{a}^{{\dagger}}_{x}\hat{a}_{x}+\hat{a}^{{\dagger}}_{y}\hat{a}_{y})+\hbar
g\sigma_{x}(\hat{a}^{{\dagger}}_{x}+\hat{a}_{x})+\hbar
g\sigma_{y}(\hat{a}^{{\dagger}}_{y}+\hat{a}_{y}).$ (13)
Here $\hat{a}^{{\dagger}}_{\beta}$ and $\hat{a}_{\beta}$ are the creation and
annihilation operators of phonon excitations along the transverse direction
($\beta=x,y$) with an effective frequency $\omega$. The last two terms in Eq.
(13) describe the Jahn-Teller $E\otimes e$ spin-phonon interaction with
coupling strength $g$. In the following, we assume that a classical
oscillating force with a frequency $\omega_{d}=\omega_{\rm t}-\omega$
displaces the vibrational amplitudes along the transverse $x$ and $y$
directions of the quantum oscillator described by
$\hat{H}_{\vec{F}}=\frac{z_{\rm
t}F_{x}}{2}(\hat{a}^{{\dagger}}_{x}+\hat{a}_{x})+\frac{z_{\rm
t}F_{y}}{2}(\hat{a}^{{\dagger}}_{y}+\hat{a}_{y}),$ (14)
where $z_{\rm t}=\sqrt{\hbar/2m\omega_{\rm t}}$ is the size of the transverse
ion’s harmonic oscillator ground-state wavefunction. $F_{x}$ and $F_{y}$ are
the two transverse components of the force we wish to estimate. With the
perturbation term (14) the total Hamiltonian becomes
$\hat{H}_{\rm T}=\hat{H}_{\rm JT}+\hat{H}_{\vec{F}}.$ (15)
Assuming the weak-coupling regime, $g\ll\omega$, the two phonon modes are only
virtually excited. After performing the canonical transformation
$\hat{U}=e^{\hat{S}}$ of $\hat{H}_{\rm T}$ (15), where
$\hat{S}=(\hat{a}_{x}-\hat{a}_{x}^{{\dagger}})\left(\frac{g}{\omega}\sigma_{x}+\frac{\Omega_{x}}{g}\right)+(\hat{a}_{y}-\hat{a}_{y}^{{\dagger}})\left(\frac{g}{\omega}\sigma_{y}+\frac{\Omega_{y}}{g}\right),$
(16)
we obtain the following effective Hamiltonian (see the Appendix)
$\hat{H}_{\rm
eff}^{\rm{JT}}=-\hbar\Omega_{x}\sigma_{x}-\hbar\Omega_{y}\sigma_{y}+\hat{H}^{\prime}_{\rm
JT}.$ (17)
Here $\Omega_{x,y}=gz_{\rm t}F_{x,y}/\hbar\omega$ are the respective driving
Rabi frequencies of the transition between spin states
$\left|\uparrow\right\rangle$ and $\left|\downarrow\right\rangle$. The last
term in Eq. (17) is the residual spin-phonon interaction described by
$\hat{H}^{\prime}_{\rm JT}=2{\rm i}\frac{\hbar
g^{2}}{\omega}\sigma_{z}(\hat{a}_{x}^{{\dagger}}\hat{a}_{y}-\hat{a}_{x}\hat{a}_{y}^{{\dagger}}),$
(18)
which can be a source of thermal spin dephasing as long as the two phonon
modes are prepared in initial thermal vibrational states.
Figure 4: (Color online) a) Time-evolution of the probability to find the
system in spin state $\left|\uparrow\right\rangle$ for the JT system. We
compare the probability calculated from the Hamiltonian (15) assuming the
initial states
$\left|\psi(0)\right\rangle=\left|\uparrow\right\rangle\left|0_{x},0_{y}\right\rangle$
(red dots) and
$\left|\psi(0)\right\rangle=2^{-1/2}(\left|\uparrow\right\rangle+\left|\downarrow\right\rangle)|0_{x},0_{y}\rangle$
(blue triangles) with those given by the effective Hamiltonian (17) (solid
lines). The parameters are set to $g=4$ kHz, $\omega=170$ kHz, $z_{\rm t}=12$
nm, $F_{x}=20$ yN and $F_{y}=15$ yN. b) Oscillations of the signal for fixed
$t$ as a function of the phase $\phi$ for an initial superposition spin state.
The two-dimensional force sensing protocol starts by preparing the system in
state
$|\psi(0)\rangle=(c_{\uparrow}(0)\left|\uparrow\right\rangle+c_{\downarrow}(0)\left|\downarrow\right\rangle)\otimes\left|0_{x},0_{y}\right\rangle$,
where $c_{\uparrow,\downarrow}(0)$ are the respective initial spin probability
amplitudes and $\left|n_{x},n_{y}\right\rangle$ stands for the Fock state with
$n_{\beta}$ excitations in each phonon mode. According to the effective
Hamiltonian (17) the evolution of the system is driven by the free propagator
$\hat{U}_{\rm JT}=e^{-{\rm i}\hat{H}_{\rm eff}^{\rm JT}t/\hbar}$. Neglecting
the residual spin-motional coupling (18) the propagator reads
$\hat{U}_{\rm JT}^{0}(t,0)=\left[\begin{array}[]{cc}a&b\\\
-b^{*}&a^{*}\end{array}\right].$ (19)
Here $a=\cos(\widetilde{\Omega}t)$ and $b={\rm
i}e^{-\rm{i}\xi}\sin(\widetilde{\Omega}t)$ are the Cayley-Klein parameters,
which depend on the rms Rabi frequency $\widetilde{\Omega}=\frac{gz_{\rm
t}}{\hbar\omega}|\vec{F}_{\perp}|$, proportional to the magnitude of the force
$|\vec{F}_{\perp}|=\sqrt{F_{x}^{2}+F_{y}^{2}}$. In addition to
$|\vec{F}_{\perp}|$, we introduce the relative amplitude parameter
$\xi=\tan^{-1}\left(\frac{F_{y}}{F_{x}}\right)$. Assuming an initial state
with $c_{\uparrow}(0)=1$, $c_{\downarrow}(0)=0$, the respective probability to
find the system in state $\left|\uparrow\right\rangle$ is
$P_{\uparrow}(t)=\cos^{2}(\widetilde{\Omega}t)$, which implies that the Rabi
oscillations depend only on the magnitude of the force, see Fig. 4(a). Using
Eq. (6) we find that the shot-noise-limited sensitivity for measuring the
magnitude of the force is given by
$|\vec{F}_{\perp}|_{\rm min}\sqrt{T}=\frac{\hbar\omega}{2gz_{\rm t}\sqrt{t}}.$
(20)
In the presence of motional heating of both vibrational modes, the signal is
damped with decoherence rate
$\gamma\sim\langle\dot{n}_{x}\rangle+\langle\dot{n}_{y}\rangle$, where
$\langle\dot{n}_{\beta}\rangle$ is the heating rate along the $\beta$ spatial
direction. Therefore we find that the optimal force sensitivity is
$|\vec{F}_{\rm\perp}|_{\rm min}\sqrt{T}=\frac{\hbar\omega}{2gz_{\rm
t}}\sqrt{2(\langle\dot{n}_{x}\rangle+\langle\dot{n}_{y}\rangle)e}.$ (21)
It is important that due to the strong transverse confinement the sensing
scheme for measuring $|\vec{F}_{\perp}|$ is less sensitive to the ion’s
heating Turchette2000 ; Zhu2006 . Using the parameters in Fig. 4 and assuming
$\langle\dot{n}_{x}\rangle=\langle\dot{n}_{y}\rangle=1$ $\rm{s}^{-1}$ we
estimate force sensitivity of 0.6 yN $\rm{Hz}^{-1/2}$.
In order to detect the parameter $\xi$ we prepare the spin state in an initial
superposition state with $c_{\uparrow}(0)=1/\sqrt{2}$ and
$c_{\downarrow}(0)=e^{{\rm i}\phi}/\sqrt{2}$. Then the probability oscillates
with time as
$\displaystyle
P_{\uparrow}(t)=\frac{1}{2}\left[1+\sin(\xi-\phi)\sin(2\widetilde{\Omega}t)\right].$
(22)
Hence, for fixed evolution time $t$, the Ramsey oscillations versus the phase
$\phi$ provide a measure of the relative phase $\xi$, see Fig. 4(b).
In fact, Eq. (22) allows one to determine both the magnitude of the force, via
$\widetilde{\Omega}$, and the mixing parameter $\xi$ from the same signal when
plotted versus the evolution time $t$: $\widetilde{\Omega}$ sets the
oscillation frequency and $\xi$ the oscillation amplitude. The parameter $\xi$
can also be determined by varying the externally controlled superposition
phase $\phi$ until the oscillation amplitude vanishes at some value
$\phi_{0}$; this signals the value $\xi=\phi_{0}$ (modulo $\pi$).
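As an illustration of this two-parameter estimation, the following sketch (our own, with synthetic data and illustrative noise, not an analysis from the original work) fits the signal of Eq. (22) for $\phi=0$ to recover both $|\vec{F}_{\perp}|$ and $\xi$:

```python
import numpy as np
from scipy.optimize import curve_fit

hbar = 1.054571817e-34
g, omega, z_t = 4e3, 170e3, 12e-9       # Fig. 4 values (2*pi factors cancel in g/omega)
Fx, Fy = 20e-24, 15e-24                 # "true" transverse force components

Om_true = g * z_t / (hbar * omega) * np.hypot(Fx, Fy)   # rms Rabi frequency
xi_true = np.arctan2(Fy, Fx)

# Synthetic Ramsey signal of Eq. (22) with phi = 0 plus projection noise.
t = np.linspace(0, 0.1, 400)
rng = np.random.default_rng(1)
data = 0.5 * (1 + np.sin(xi_true) * np.sin(2 * Om_true * t))
data += rng.normal(0, 0.01, t.size)

def model(t, Om, xi):
    return 0.5 * (1 + np.sin(xi) * np.sin(2 * Om * t))

(Om_fit, xi_fit), _ = curve_fit(model, t, data, p0=[0.8 * Om_true, 0.5])
F_perp = Om_fit * hbar * omega / (g * z_t)
print(F_perp * 1e24, np.degrees(xi_fit))   # ~25 yN, ~36.9 deg (xi fixed mod pi by a phi scan)
```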
Finally, we discuss the dynamical decoupling schemes, which can be used to
suppress the effects of the term $\hat{H}_{\rm JT}^{\prime}$ (18) during the
force estimation. In that case, applying a continuous driving field, e.g.,
along the $\sigma_{x}$ direction, would reduce the thermal fluctuations
induced by $\hat{H}_{\rm JT}^{\prime}$, but the relevant force information,
which is encoded in the $\sigma_{y}$ term in (17), would also be spoiled. Here we
propose an alternative dynamical decoupling scheme, which follows the Carr-
Purcell-Meiboom-Gill (CPMG) pulse sequence Carr1954 ; Meiboom1958 , in which,
however, the single instantaneous $\pi$ pulse is replaced by the phonon phase-
flip operator $\hat{R}_{\pi}=e^{{\rm
i}\pi\hat{a}_{x}^{{\dagger}}\hat{a}_{x}}$. Such a phonon phase shift
$\Delta\omega_{x}\tau=\pi$ can be achieved by switching the RF potential of
the trap by the fixed amount $\Delta\omega_{x}$ for a time $\tau$ Singer2010 .
The effect of $\hat{R}_{\pi}$ is to change the sign of the $\hat{H}_{\rm
JT}^{\prime}$ such that $\hat{R}^{{\dagger}}_{\pi}\hat{H}_{\rm
JT}^{\prime}\hat{R}_{\pi}=-\hat{H}_{\rm JT}^{\prime}$ but it leaves the other
part of the Hamiltonian (17) unaffected. The pulse sequence
$\hat{U}_{1}=\hat{R}_{\pi}\hat{U}_{\rm JT}\hat{R}_{\pi}\hat{U}_{\rm JT}$
eliminates the residual spin-phonon coupling to first order in the interaction
time $t$; a higher-order reduction can be achieved by the recursion
$\hat{U}_{n}=\hat{R}_{\pi}\hat{U}_{n-1}\hat{R}_{\pi}\hat{U}_{n-1}$, which
eliminates the spin-phonon coupling up to $n$th order in $t$.
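The sign flip that underlies this scheme can be checked directly in a truncated Fock space; the short sketch below (our own illustration) verifies that the phonon phase flip inverts the residual coupling, $\hat{R}^{\dagger}_{\pi}\hat{H}_{\rm JT}^{\prime}\hat{R}_{\pi}=-\hat{H}_{\rm JT}^{\prime}$:

```python
import numpy as np

nmax = 8
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)     # annihilation operator
ad = a.conj().T
I = np.eye(nmax)
sz = np.diag([1.0, -1.0])

def kron3(A, B, C):
    return np.kron(A, np.kron(B, C))

# H'_JT ~ i sigma_z (a_x^dag a_y - a_x a_y^dag); the constant 2 hbar g^2/omega
# prefactor is dropped since only the sign matters here.
H = 1j * (kron3(sz, ad, a) - kron3(sz, a, ad))

# Phonon phase flip R_pi = exp(i pi n_x) is diagonal with entries (-1)^n.
R = kron3(np.eye(2), np.diag((-1.0) ** np.arange(nmax)), I)

print(np.allclose(R.conj().T @ H @ R, -H))        # True: H'_JT changes sign
```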
## IV Summary and Outlook
We have proposed quantum sensing protocols, which rely on mapping the relevant
force information onto the spin degrees of freedom of a single trapped ion.
The force sensing is carried out by observing the Ramsey-type oscillations of
the spin states, which can be detected via state-dependent fluorescence. We
have considered quantum probes represented by the JC and QR systems, which can
be used to sense the axial component of the force. We have shown that when
using a JC system as a quantum probe, one can apply dynamical decoupling
schemes to suppress the effect of the spin dephasing during the force
estimation. When using a QR system as a probe, the absence of a residual spin-
phonon coupling makes the sensing protocol robust to thermally-induced spin
dephasing. Furthermore, we have shown that the transverse-force direction can
be measured by using a system described by the JT model, in which the spin
states are coupled with the two spatial phonon modes. Here the information
about the magnitude of the force and the relative ratio of its components can
be extracted by observing the time evolution of the ion's spin states, which
significantly simplifies the experimental procedure.
Tuning the trap frequencies over a broad range, the force sensing methods
proposed here can be employed to implement a spectrum analyzer for ultra-low
voltages. Moreover, because the force-direction sensing additionally estimates
the mutual ratio of the force components, our method can be used to implement
a two-channel vector spectrum analyzer. Finally, the proposed force sensing
protocols are not restricted to trapped ions but could be implemented with
other quantum optical setups such as cavity-QED Dimer2007 or circuit-QED
systems Ballester2015 .
## Appendix A Elimination of the vibrational degree of freedom
Let us make the canonical transformation of Hamiltonian
$\hat{H}=\hat{H}_{0}+\hat{H}_{\rm int}$,
$\displaystyle\hat{H}_{\rm eff}$ $\displaystyle=$ $\displaystyle
e^{-\hat{S}}\hat{H}e^{\hat{S}}=\hat{H}_{0}+\hat{H}_{\rm
int}+[\hat{H}_{0},\hat{S}]+[\hat{H}_{\rm int},\hat{S}]$ (23)
$\displaystyle+\tfrac{1}{2}[[\hat{H}_{0},\hat{S}],\hat{S}]+\tfrac{1}{2}[[\hat{H}_{\rm
int},\hat{S}],\hat{S}]+\ldots.$
Our goal is to choose $\hat{S}$ in such a way that all terms of order $g$ in
$\hat{H}_{\rm eff}$ are canceled and the first term describing the spin-boson
interaction is of order $g^{2}/\omega$. If we determine $\hat{S}$ by the
condition
$\hat{H}_{\rm int}+[\hat{H}_{0},\hat{S}]=0,$ (24)
then the effective Hamiltonian becomes
$\hat{H}_{\rm eff}\approx\hat{H}_{0}+\tfrac{1}{2}[\hat{H}_{\rm int},\hat{S}].$
(25)
Let us consider the time-dependent operator $\hat{S}(t)=e^{{\rm
i}\hat{H}_{0}t/\hbar}\hat{S}e^{-{\rm i}\hat{H}_{0}t/\hbar}$, which obeys the
Heisenberg equation ${\rm i}\hbar\dot{\hat{S}}(t)=[\hat{S}(t),\hat{H}_{0}]$.
Using Eq. (24) we arrive at the equation
${\rm i}\hbar\dot{\hat{S}}(t)=\hat{H}_{\rm int}(t),$ (26)
where $\hat{H}_{\rm int}(t)=e^{{\rm i}\hat{H}_{0}t/\hbar}\hat{H}_{\rm
int}e^{-{\rm i}\hat{H}_{0}t/\hbar}$. Solving Eq. (26) we determine the desired
operator $\hat{S}$.
### A.1 Jaynes-Cummings model
We identify $\hat{H}_{0}=\hbar\omega\hat{a}^{{\dagger}}\hat{a}$ and
$\hat{H}_{\rm int}=\hbar
g(\sigma^{-}\hat{a}^{{\dagger}}+\sigma^{+}\hat{a})+\frac{z_{\rm
ax}F}{2}(\hat{a}^{{\dagger}}+\hat{a})$. Using Eq. (26) we obtain
$\hat{S}=\frac{g}{\omega}(\sigma^{+}\hat{a}-\sigma^{-}\hat{a}^{{\dagger}})+\frac{z_{\rm
ax}F}{2\hbar\omega}(\hat{a}-\hat{a}^{{\dagger}}),$ (27)
which fulfills the condition (24). For the effective Hamiltonian we derive
$\displaystyle\hat{H}_{\rm eff}$ $\displaystyle=$
$\displaystyle\hbar\omega\hat{a}^{{\dagger}}\hat{a}+\hbar\left(\Delta-\frac{g^{2}}{2\omega}\right)\sigma_{z}-\hbar\Omega_{F}\sigma_{x}$
(28) $\displaystyle-\frac{\hbar
g^{2}}{\omega}\sigma_{z}\hat{a}^{{\dagger}}\hat{a}-\frac{\hbar
g^{2}}{2\omega}-\frac{z_{\rm ax}^{2}F^{2}}{4\hbar\omega}+\hat{H}^{\prime},$
where $\Omega_{F}=gz_{\rm ax}F/2\hbar\omega$ is the Rabi frequency and
$\hat{H}^{\prime}=\frac{1}{3}[[\hat{H}_{\rm int},\hat{S}],\hat{S}]+\ldots$
contains the higher-order terms in (23). We find
$\displaystyle\frac{1}{3}[[\hat{H}_{\rm int},\hat{S}],\hat{S}]$
$\displaystyle=$ $\displaystyle\frac{2g^{2}z_{\rm
ax}F}{3\omega^{2}}\sigma_{z}(\hat{a}^{{\dagger}}+\hat{a})-\frac{4\hbar
g^{3}}{3\omega^{2}}(\sigma^{-}\hat{a}^{{\dagger}}+\sigma^{+}\hat{a})$ (29)
$\displaystyle-\frac{4\hbar
g^{3}}{3\omega^{2}}(\sigma^{-}\hat{a}^{{\dagger}}\hat{a}^{{\dagger}}\hat{a}+\sigma^{+}\hat{a}^{{\dagger}}\hat{a}\hat{a}).$
As long as $g/\omega\ll 1$ the higher-order terms can be neglected and thus
the lowest-order effective Hamiltonian is given by Eq. (4a).
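The condition (24) can also be verified numerically; the sketch below (our own check, with illustrative parameter values and $\hbar=1$) confirms that the generator of Eq. (27) cancels $\hat{H}_{\rm int}$ exactly, even in a truncated Fock space, since $[\hat{a}^{\dagger}\hat{a},\hat{a}]=-\hat{a}$ holds entry by entry for the truncated matrices:

```python
import numpy as np

nmax, omega, g, zF = 12, 170.0, 4.0, 0.3   # illustrative values, hbar = 1

a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)
ad = a.conj().T
I2 = np.eye(2)
sp = np.array([[0, 1], [0, 0]])            # sigma^+ in the {up, down} basis
sm = sp.T                                  # sigma^-

H0 = omega * np.kron(I2, ad @ a)
Hint = g * (np.kron(sm, ad) + np.kron(sp, a)) + 0.5 * zF * np.kron(I2, ad + a)
S = (g / omega) * (np.kron(sp, a) - np.kron(sm, ad)) \
    + (zF / (2 * omega)) * np.kron(I2, a - ad)

comm = H0 @ S - S @ H0
print(np.max(np.abs(Hint + comm)))         # ~1e-13: condition (24) is satisfied
```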
### A.2 Quantum Rabi Model
Here the interaction Hamiltonian is $\hat{H}_{\rm int}=\hbar
g\sigma_{x}(\hat{a}^{{\dagger}}+\hat{a})+\frac{z_{\rm
ax}F}{2}(\hat{a}^{{\dagger}}+\hat{a})$ and the canonical transformation is
given by the operator
$\hat{S}=\frac{g}{\omega}\sigma_{x}(\hat{a}-\hat{a}^{{\dagger}})+\frac{z_{\rm
ax}F}{2\hbar\omega}(\hat{a}-\hat{a}^{{\dagger}}).$ (30)
The effective Hamiltonian is
$\hat{H}_{\rm
eff}=\hbar\omega\hat{a}^{{\dagger}}\hat{a}-2\hbar\Omega_{F}\sigma_{x}-\frac{\hbar
g^{2}}{\omega}-\frac{(z_{\rm ax}F)^{2}}{4\hbar\omega}.$ (31)
Remarkably, due to the equality $[[\hat{H}_{\rm int},\hat{S}],\hat{S}]=0$ all
higher-order terms in Eq. (23) vanish.
### A.3 Jahn-Teller Model
Following the same procedure we have
$\displaystyle\hat{H}_{0}=\hbar\omega(\hat{a}_{x}^{{\dagger}}\hat{a}_{x}+\hat{a}_{y}^{{\dagger}}\hat{a}_{y}),$
$\displaystyle\hat{H}_{\rm int}=\hbar
g\sigma_{x}(\hat{a}_{x}^{{\dagger}}+\hat{a}_{x})+\hbar
g\sigma_{y}(\hat{a}_{y}^{{\dagger}}+\hat{a}_{y})+\frac{z_{\rm
t}F_{x}}{2}(\hat{a}_{x}^{{\dagger}}+\hat{a}_{x})$ $\displaystyle+\frac{z_{\rm
t}F_{y}}{2}(\hat{a}^{{\dagger}}_{y}+\hat{a}_{y}).$ (32)
In this case the canonical transformation is represented by the operator
$\displaystyle\hat{S}$ $\displaystyle=$
$\displaystyle\frac{g}{\omega}\sigma_{x}(\hat{a}_{x}-\hat{a}_{x}^{{\dagger}})+\frac{g}{\omega}\sigma_{y}(\hat{a}_{y}-\hat{a}_{y}^{{\dagger}})+\frac{z_{\rm
t}F_{x}}{2\hbar\omega}(\hat{a}_{x}-\hat{a}_{x}^{{\dagger}})$ (33)
$\displaystyle+\frac{z_{\rm
t}F_{y}}{2\hbar\omega}(\hat{a}_{y}-\hat{a}_{y}^{{\dagger}}).$
Using Eq. (33) we obtain the following effective Hamiltonian
$\displaystyle\hat{H}_{\rm eff}$ $\displaystyle=$
$\displaystyle\hbar\omega(\hat{a}_{x}^{{\dagger}}\hat{a}_{x}+\hat{a}_{y}^{{\dagger}}\hat{a}_{y})-\hbar\Omega_{x}\sigma_{x}-\hbar\Omega_{y}\sigma_{y}-2{\rm
i}\frac{\hbar g^{2}}{\omega}$ (34)
$\displaystyle\times\sigma_{z}(\hat{a}_{x}\hat{a}_{y}^{{\dagger}}-\hat{a}_{x}^{{\dagger}}\hat{a}_{y})-\frac{2\hbar
g^{2}}{\omega}-\frac{z_{\rm
t}^{2}|\vec{F}_{\perp}|^{2}}{4\hbar\omega}+\hat{H}^{\prime},$
where $\Omega_{x,y}=gz_{\rm t}F_{x,y}/\hbar\omega$ are the respective Rabi
driving frequencies. The next higher-order terms in $\hat{H}^{\prime}$ (34)
are given by
$\displaystyle\frac{1}{3}[[\hat{H}_{\rm int},\hat{S}],\hat{S}]$
$\displaystyle=$ $\displaystyle 2{\rm i}\frac{g^{2}z_{\rm
t}F_{x}}{\omega^{2}}\sigma_{z}(\hat{a}_{y}^{{\dagger}}-\hat{a}_{y})-2{\rm
i}\frac{g^{2}z_{\rm
t}F_{y}}{\omega^{2}}\sigma_{z}(\hat{a}_{x}^{{\dagger}}-\hat{a}_{x})$ (35)
$\displaystyle-\frac{4\hbar
g^{3}}{\omega^{2}}\sigma_{y}\\{(\hat{a}_{y}^{{\dagger}}+\hat{a}_{y})(1+2\hat{n}_{x})-2\hat{a}_{x}^{{\dagger}2}\hat{a}_{y}$
$\displaystyle-2\hat{a}_{x}^{2}\hat{a}_{y}^{{\dagger}}\\}-\frac{4\hbar
g^{3}}{\omega^{2}}\sigma_{x}\\{(\hat{a}_{x}^{{\dagger}}+\hat{a}_{x})(1+2\hat{n}_{y})$
$\displaystyle-2\hat{a}_{y}^{{\dagger}2}\hat{a}_{x}-2\hat{a}_{y}^{2}\hat{a}_{x}^{{\dagger}}\\}.$
## References
* (1) P. Treutlein, C. Genes, K. Hammerer, M. Poggio and P. Rabl, _Cavity Optomechanics_ ed. M. Aspelmeyer, T. Kippenberg and F. Marquardt (Berlin: Springer).
* (2) G. Kurizki, P. Bertet, Y. Kubo, K. Molmer, D. Petrosyan, P. Rabl, J. Schmiedmayer, Proc. Natl. Acad. Sci. USA 112, 3866 (2015)
* (3) T. D. Stowe, K. Yasumura, T. W. Kenny, D. Botkin, K. Wago and D. Rugar, Appl. Phys. Lett. 71, 288 (1997).
* (4) A. A. Geraci, S. J. Smullin, D. M. Weld, J. Chiaverini, and A. Kapitulnik, Phys. Rev. D 78, 022002 (2008).
* (5) D. Rugar, R. Budakian, H. Mamin, B. Chui, Nature 430, 329 (2004).
* (6) S. Kolkowitz, A. C. B. Jayich, Q. P. Unterreithmeier, S. D. Bennett, P. Rabl, J. G. E. Harris, M. D. Lukin, Science 335, 1603 (2012).
* (7) R. Blatt and D. Wineland, Nature 453, 1008 (2008).
* (8) M. J. Biercuk, H. Uys, J. W. Britton, A. P. VanDevender and J. J. Bollinger, Nat. Nanotechnol. 5, 646 (2010).
* (9) S. Knünz, M. Herrmann, V. Batteiger, G. Saathoff, T. W. Hänsch, K. Vahala, and Th. Udem, Phys. Rev. Lett. 105, 013004 (2010).
* (10) R. Maiwald, D. Leibfried, J. Britton, J. C. Bergquist, G. Leuchs and D. J. Wineland, Nat. Phys. 5, 551 (2009).
* (11) P. A. Ivanov, K. Singer, N. V. Vitanov and D. Porras, Phys. Rev. Appl. 4, 054007 (2015).
* (12) Shi-Biao Zheng, Phys. Rev. A 66, 060303(R) (2002).
* (13) E. Solano, G. S. Agarwal, and H. Walther, Phys. Rev. Lett. 90, 027903 (2003).
* (14) D. J. Wineland, C. Monroe, W. M. Itano, D. Leibfried, B. E. King, and D. M. Meekhof, J. Res. Natl. Inst. Stand. Technol. 103, 259 (1998).
* (15) H. Häffner, C. F. Roos, and R. Blatt, Phys. Rep. 469, 155 (2008).
* (16) C. Schneider, D. Porras, and T. Schaetz, Rep. Prog. Phys. 75, 024401 (2012).
* (17) L. Viola, E. Knill, and S. Lloyd, Phys. Rev. Lett. 82, 2417 (1999).
* (18) W. Yang, Z.-Y. Wang, and R.-B. Liu, Front. Phys. 6, 1 (2010).
* (19) N. Timoney, I. Baumgart, M. Johanning, A. F. Varon. M. B. Plenio, A. Retzker, and Ch. Wunderlich, Nature 476, 185 (2011).
* (20) S. C. Webster, S. Weidt, K. Lake, J. J. McLoughlin, and W. K. Hensinger, Phys. Rev. Lett. 111, 140501 (2013).
* (21) T. R. Tan, J. P. Gaebler, R. Bowler, Y. Lin, J. D. Jost, D. Leibfried, and D. J. Wineland, Phys. Rev. Lett. 110, 263002 (2013).
* (22) A. Bermudez, P. O. Schmidt, M. B. Plenio, and A. Retzker, Phys. Rev. A 85, 040302(R) (2012).
* (23) S. F. Huelga, C. Macchiavello, T. Pellizzari, A. K. Ekert, M. B. Plenio, J. I. Cirac, Phys. Rev. Lett. 79, 3865 (1997).
* (24) J. S. Pedernales, I. Lizuain, S. Felicetti, G. Romero, L. Lamata and E. Solano, Sci. Rep. 5, 15472 (2015).
* (25) W. J. Munro, K. Nemoto, G. J. Milburn, and S. L. Braunstein, Phys. Rev. A 66, 023819 (2002).
* (26) D. Porras, P. A. Ivanov, and F. Schmidt-Kaler, Phys. Rev. Lett. 108, 235701 (2012).
* (27) P. A. Ivanov, D. Porras, S. S. Ivanov and F. Schmidt-Kaler, J. Phys. B: At. Mol. Opt. Phys. 46, 104003 (2013).
* (28) Q. A. Turchette, D. Kielpinski, B. E. King, D. Leibfried, D. M. Meekhof, C. J. Myatt, M. A. Rowe, C. A. Sackett, C. S. Wood, W. M. Itano, C. Monroe, and D. J. Wineland, Phys. Rev. A 61, 063418 (2000).
* (29) Shi-Liang Zhu, C. Monroe, and L.-M. Duan, Phys. Rev. Lett. 97, 050505 (2006).
* (30) H. Carr and E. M. Purcell, Phys. Rev. 94, 630 (1954).
* (31) S. Meiboom and D. Gill, Rev. Sci. Instrum. 29, 688 (1958).
* (32) K. Singer, U. Poschinger, M. Murphy, P. Ivanov, F. Ziesel, T. Calarco, F. Schmidt-Kaler, Rev. Mod. Phys. 82, 2609 (2010).
* (33) F. Dimer, B. Estienne, A. S. Parkins, H. J. Carmichael, Phys. Rev. A 75, 013804 (2007).
* (34) D. Ballester, G. Romero, J. J. Garcia-Ripoll, F. Deppe, E. Solano, Phys. Rev. X 2, 021007 (2012).
# Investigation of generalised uncertainty principle effects on FRW cosmology
Özgür Ökcüa,b
aDepartment of Physics, Faculty of Science, Istanbul University,
Istanbul, 34134, Türkiye
bTheoretical and Computational Physics Research Laboratory, Istanbul
University,
Istanbul, 34134, Türkiye
## 1 Introduction
Thermodynamical aspects of gravity have been an attractive research field in
theoretical physics since the discovery of black hole thermodynamics [1, 2, 3,
4, 5, 6]. A black hole as a thermodynamic system has entropy and temperature
proportional to its horizon area and surface gravity, respectively.
Considering a black hole as a thermodynamic system turns a perfect absorber
into a perfect laboratory for investigating the deep connection between
gravitation, quantum mechanics, and thermodynamics. Based on the notion of
the gravitation, quantum mechanics, and thermodynamics. Based on the notion of
black hole thermodynamics, Jacobson obtained the Einstein field equation from
thermodynamical arguments [7]. Employing the entropy$-$area relation with
Clausius relation $\delta Q=TdS$, he derived the field equation as an equation
of state. Here, $\delta Q$, $T$ and $dS$ are energy flux, Unruh temperature
and the changes in entropy, respectively. After the pioneering work of
Jacobson, there have been many studies targeted the thermodynamical aspects of
gravity in the literature [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 24, 22, 23, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 42, 43, 44, 45, 46,
47, 48, 35, 36, 37, 38, 39, 40, 41, 50, 51, 49]. The studies aimed to
understand obtaining Einstein field equation from the first law of
thermodynamics can be found in Refs. [8, 9, 10, 11, 12]. Inspired by
Jacobson’s paper, $(n+1)-$dimensional Friedmann equations were obtained from
the first law of thermodynamics, $-dE=T_{h}dS_{h}$ , at apparent horizon by
Cai and Kim [13]. Here $-dE$ is the energy flux crossing the apparent horizon
for the infinitesimal time interval at fixed horizon radius. The temperature
and entropy of the apparent horizon are given by [13]
$T_{h}=\frac{1}{2\pi\tilde{r_{A}}},\qquad\qquad S_{h}=\frac{A_{h}}{4},$ (1)
where $A_{h}$ and $\tilde{r_{A}}$ correspond to the area and the radius of the
apparent horizon, respectively. (We adopt the units
$\hbar=c=G_{N}=L^{2}_{Pl}=1$ throughout the paper.) Furthermore, in the rest
of their seminal paper [13], they also derived the Friedmann equations from
the entropy$-$area relations of Gauss-Bonnet and Lovelock gravity theories,
where the standard entropy$-$area relation breaks down. In Ref. [14], Akbar
and Cai obtained the Friedmann equations in scalar$-$tensor and $f(R)$ gravity
theories by following the arguments of Ref. [13]. Although the Friedmann
equations can successfully be obtained with the temperature in Eq. (1), this
temperature is an approximation valid only for a fixed apparent horizon; it is
not proportional to the surface gravity at the apparent horizon. Moreover, the
equation of state in this approximation is limited to the vacuum energy or de
Sitter spacetime. The surface gravity of the apparent horizon is given by [13, 52]
$\kappa=-\frac{1}{\tilde{r_{A}}}\left(1-\frac{\dot{\tilde{r_{A}}}}{2H\tilde{r_{A}}}\right),$
(2)
where dot denotes the derivative with respect to time and $H$ corresponds to
the Hubble parameter. Assuming the temperature to be proportional to the surface gravity [15],
$T_{h}=\frac{\kappa}{2\pi}=-\frac{1}{2\pi\tilde{r_{A}}}\left(1-\frac{\dot{\tilde{r_{A}}}}{2H\tilde{r_{A}}}\right),$
(3)
with the standard entropy$-$area relation, Akbar and Cai showed that the first
law of thermodynamics at the apparent horizon takes the form
$dE=T_{h}dS_{h}+WdV,$ (4)
where $E=\rho V$ and $W$ correspond to the total energy in volume $V$ enclosed
by the apparent horizon and the work density, respectively. Since then, many
papers devoted to the Friedmann equations and apparent horizon thermodynamics
have appeared; see Refs. [16, 17, 18, 19, 20, 21, 24, 22, 23, 25, 26,
27, 28, 29, 30, 31, 32, 33, 34, 42, 43, 44, 45, 46, 47, 48, 35, 36, 37, 38,
39, 40, 41, 50, 51, 49].
It is widely known that the standard entropy$-$area relation is not valid and
should be corrected in the case of various theories. For example, it is
well$-$known that logarithmic correction to black hole entropy arises due to
quantum gravity effects at the Planck scale [53, 54, 55]. The various quantum
gravity approaches to the modifications of Friedmann equations such as loop
quantum gravity [18, 19], modified Heisenberg principle [20, 21, 24, 22, 23,
25] were investigated in the literature. Other interesting modifications of
entropy$-$area relation are considered in the context of generalised
statistics such as Tsallis statistics [56] and Kaniadakis statistics [57, 58].
Moreover, inspired by COVID-19, Barrow proposed that the horizon area may be
deformed by quantum gravity effects, so that the entropy becomes a power-law
function of its area [59]. Recently, the effects of fractional quantum
mechanics were investigated on black hole thermodynamics in Ref. [60].
Interestingly, the fractional entropy [60] is similar to Tsallis [56] and
Barrow [59] entropies although all entropies have different origins. The
extensions of Refs. [56, 57, 58, 59, 60] to cosmological cases, namely
modified Friedmann equations can be found for Tsallis entropy in Refs. [30,
31, 32], Kaniadakis entropy in Refs. [39, 40], Barrow entropy in Refs. [42,
43, 44, 45, 46, 47, 48, 49] and fractional entropy in Ref. [51].
Modifications of the standard uncertainty principle may help define the
physics both at the Planck scale and at large distance scales. The GUP takes
into account a momentum-uncertainty correction, while the extended uncertainty
principle (EUP) takes into account a position-uncertainty correction. GUP
predicts a minimum measurable length at the Planck scale, while EUP may
predict a minimum measurable momentum. Taking into account the effects of GUP
and EUP together leads to a third kind of modified uncertainty principle,
known as the generalised and extended uncertainty principle (GEUP), which
predicts both a minimum measurable length and a minimum measurable momentum.
Numerous models of modified
uncertainty principle were suggested in Refs. [61, 62, 63, 64, 65, 66, 67, 68,
69, 70, 71, 72, 73, 74, 75]. Both GUP and EUP provide new insights into black hole
thermodynamics [75, 76, 77, 79, 80, 81, 82, 83, 87, 88, 89, 90, 91, 92, 93,
94, 96, 97, 98, 95, 99, 100, 101, 102, 78, 84, 85, 86]. For example, the black
holes are prevented from total evaporation in the GUP case and a black hole
remnant occurs at the final stage [76]. On the other hand, EUP implies a
minimum temperature for black holes [95]. Extensions of the modified
uncertainty principle to cosmological scenarios lead to modified Friedmann
equations [20, 21, 24, 22, 23, 25]. Based on the simplest form of GUP, the
modified Friedmann equations were derived in Ref. [20], where an upper bound
on the energy density of the universe was found at the minimal length. In Ref.
[21], a cyclic universe model was obtained from the GUP modified Friedmann
equations. In Ref. [22], we obtained the modified Friedmann equations from a
GUP model derived in doubly special relativity [67]; similarly, a maximum and
finite energy density follows from the minimal length at the Planck scale. We
also investigated the effects of GUP on the deceleration parameter. The
authors of Ref. [23] investigated the baryon
asymmetry for the EUP modified Friedmann equations. They also obtained the
constraints on the EUP parameter from observations. Modified Friedmann
equations in the GEUP case can be found in Ref. [24], where the author
obtained bounds on the GEUP parameters from observations. Recently, GEUP
corrected Friedmann equations were investigated in Ref. [25]; the authors
studied the deceleration parameter and showed that GEUP may be an alternative
to dark energy for the late-time expansion. (For a recent review on GUP, the
reader may refer to Ref. [103].)
In this paper, our aim is to obtain the GUP modified Friedmann equations from
the first law of thermodynamics at the apparent horizon. The modified
Friedmann equations reveal that the initial singularity is removed, since GUP
implies a minimal measurable length at the Planck scale. Moreover, a detailed
investigation of the GSL is crucial to understand the GUP effects; in
particular, we would like to understand how GUP affects the GSL in the
$\Lambda CDM$ cosmology. In order to modify the Friedmann equations, we use
Nouicer's GUP [75], whose details are given in the next section.
The paper is organised as follows. In Section 2, we start with a brief review
of Nouicer's GUP and then obtain the modified entropy$-$area relation from it.
In Section 3, using the modified entropy$-$area relation, we obtain the
Friedmann equations. In Section 4, the GUP effects on the deceleration
parameter are investigated. In Section 5, we check the validity of the GSL and
explore the GUP effects on the GSL in the $\Lambda CDM$ cosmology. Finally, we
discuss our results in Section 6.
## 2 Nouicer’s GUP and entropy$-$area relation
In this section, we first review Nouicer's GUP [75]. Then, we obtain the
modified entropy$-$area relation based on the methods of Xiang and Wen [88].
We start by considering the nonlinear relation $p=f(k)$ between the momentum
$p$ and the wave vector $k$ of a particle [104, 105]. This relation must
satisfy the following conditions:
* •
The relation $p=f(k)$ reduces to the usual relation $p=k$ for energies lower
than the Planck energy.
* •
The relation $p=f(k)$ approaches a maximum value at the Planck scale for
higher energies.
Following the above conditions, Nouicer proposed the modified position and
momentum operators
$X=ie^{\alpha^{2}P^{2}}\frac{\partial}{\partial p},\quad\quad P=p,$ (5)
where $\alpha$ is a dimensionless positive GUP parameter. These operators lead
to the modified commutator
$\left[X,P\right]=ie^{\alpha^{2}P^{2}}.$ (6)
Using the relations $\left\langle P^{2n}\right\rangle\geq\left\langle
P^{2}\right\rangle^{n}$, $\left(\Delta P\right)^{2}=\left\langle
P^{2}\right\rangle-\left\langle P\right\rangle^{2}$ with the above equation,
one can obtain the GUP
$\Delta X\Delta P\geq\frac{1}{2}e^{\alpha^{2}\Delta P^{2}},$ (7)
for the physical state $\left\langle P\right\rangle=0$. The square of the
above relation can be written as
$W(u)e^{W(u)}=u,$ (8)
where $W$ is the Lambert function [106] and we define $W(u)=-2\alpha^{2}\Delta
P^{2}$ and $u=-\frac{\alpha^{2}}{2\Delta X^{2}}$. For $0\geq u\geq-1/e$, using
Eq. (7), the momentum uncertainty is given by
$\Delta P=\frac{1}{2\Delta
X}\exp\left\\{-\frac{1}{2}W\left(-\frac{1}{e}\frac{\Delta X_{0}^{2}}{\Delta
X^{2}}\right)\right\\},$ (9)
where $\Delta X_{0}$ is the minimum position uncertainty and is given by
$\Delta X_{0}=\sqrt{\frac{e}{2}}\alpha.$ (10)
It is obtained from the condition $u\geq-1/e$.
Following the arguments of Ref. [88], let us now obtain the modified
entropy$-$area relation. When a black hole absorbs a particle, the change in
its area is [2]
$\Delta A\sim bm,$ (11)
where $b$ and $m$ are the particle's size and mass. We consider two
limitations on the particle's size and mass. First, the particle is described
by a wave packet in quantum mechanics, and its size is set by the width of the
wave packet, so $b\sim\Delta X$. The second limitation comes from the fact
that the mass is not smaller than the momentum uncertainty, $m\geq\Delta P$.
With these two limitations, we can write
$\Delta A\sim bm\geq\Delta X\Delta P.$ (12)
Using Eqs. (9), (10) and $\Delta X\sim 2r_{h}$, where $r_{h}$ is the event
horizon radius, the change in area is given by
$\Delta
A\geq\frac{\gamma}{2}\exp\left\\{-\frac{1}{2}W\left(-\frac{1}{8}\frac{\alpha^{2}}{r_{h}^{2}}\right)\right\\},$
(13)
where $\gamma$ is a calibration factor. Using the above expression with the
minimum increase of entropy, $(\Delta S)_{min}=\ln 2$, the GUP modified
entropy$-$area relation is given by
$\frac{dS_{h}}{dA}=\frac{\Delta S_{min}}{\Delta
A_{min}}=\frac{1}{4}\exp\left\\{\frac{1}{2}W\left(\frac{-\alpha^{2}}{8r_{h}^{2}}\right)\right\\},$
(14)
where we find $\gamma=8\ln 2$ since the above equation must give
$dS_{h}/dA=1/4$ in the limit $\alpha\rightarrow 0$. In the next section, we
use Eq. (14) to modify the Friedmann equations.
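For later use it is convenient to evaluate Eq. (14) numerically; a minimal sketch (our own, in Planck units) using the principal branch of the Lambert function is:

```python
import numpy as np
from scipy.special import lambertw

def ds_da(r_h, alpha=1.0):
    """GUP-corrected dS/dA of Eq. (14) in Planck units; -> 1/4 as alpha -> 0."""
    arg = -alpha**2 / (8.0 * r_h**2)
    if arg < -1.0 / np.e - 1e-12:
        raise ValueError("horizon smaller than the minimal size set by the GUP")
    return 0.25 * np.exp(0.5 * lambertw(arg).real)

for r in [np.sqrt(np.e / 8.0), 1.0, 10.0, 100.0]:
    print(f"r_h = {r:8.3f}   dS/dA = {ds_da(r):.6f}")
# At the minimal horizon dS/dA = e^{-1/2}/4 ~ 0.152; it tends to 1/4 for large r_h.
```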
## 3 Modified Friedmann equations from the first law of thermodynamics
We first give a concise review of the basic elements of the Friedmann-
Robertson-Walker (FRW) universe. The line element of the FRW universe is
defined by [13]
$ds^{2}=h_{ab}dx^{a}dx^{b}+\tilde{r}^{2}d\Omega^{2},$ (15)
where $\tilde{r}=a(t)r$, $a(t)$ is the scale factor, $x^{a}=(t,r)$, and
$h_{ab}=diag\left(-1,a^{2}/(1-kr^{2})\right)$ is the two$-$dimensional metric.
$k=$ $-1$, $0$, and $1$ correspond to the open, flat, and closed universe,
respectively. The apparent horizon radius $\tilde{r_{A}}$ is given by
[13]
$\tilde{r_{A}}=ar=\frac{1}{\sqrt{H^{2}+k/a^{2}}},$ (16)
where the Hubble parameter is defined by $H=\dot{a}/a$. In the following, we
consider the matter and energy of universe as a perfect fluid. So we have
$T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu},$ (17)
where $\rho$, $p$ and $u_{\mu}$ correspond to energy density, pressure and
four$-$velocity of the fluid. The conservation of energy$-$momentum tensor
($\nabla_{\mu}T^{\mu\nu}=0$) gives the continuity equation as
$\dot{\rho}+3H(\rho+p)=0.$ (18)
The work done by the volume change of the universe is defined by the work density [52]
$W=-\frac{1}{2}T^{ab}h_{ab}=\frac{1}{2}(\rho-p),$ (19)
and the corresponding volume is given by [15]
$V=\frac{4}{3}\pi\tilde{r_{A}}^{3}.$ (20)
Employing Eqs. (18) and (20), the differential of the total energy of the
universe is given by
$dE=\rho
dV+Vd\rho=4\pi\rho\tilde{r_{A}}^{2}d\tilde{r_{A}}-4\pi(\rho+p)\tilde{r_{A}}^{3}Hdt.$
(21)
From Eqs. (19) and (20), we can obtain
$WdV=2\pi(\rho-p)\tilde{r_{A}}^{2}d\tilde{r_{A}}.$ (22)
Since the entropy is a function of the area, $S=S(A)$, one can write the
general expression for the entropy as follows [20]:
$S_{h}=\frac{f(A_{h})}{4},$ (23)
and its differential can be written by
$\frac{dS_{h}}{dA_{h}}=\frac{f^{\prime}(A_{h})}{4},$ (24)
where the prime denotes a derivative with respect to the apparent horizon area
$A_{h}=4\pi\tilde{r}_{A}^{2}$. Comparing Eq. (14) with the above equation, we find
$f^{\prime}(A_{h})=\exp\left\\{\frac{1}{2}W\left(\frac{-\alpha^{2}}{8\tilde{r_{A}}^{2}}\right)\right\\}.$
(25)
Finally, we find $T_{h}dS_{h}$ as
$T_{h}dS_{h}=-\left(1-\frac{\dot{\tilde{r_{A}}}}{2H\tilde{r_{A}}}\right)f^{\prime}(A_{h})d\tilde{r_{A}}.$
(26)
Now, we can obtain the modified Friedmann equations since we have all the
necessary ingredients. Substituting Eqs. (21)-(26) into Eq. (4) and using the
differential form of apparent horizon
$d\tilde{r_{A}}=-H\tilde{r_{A}}^{3}\left(\dot{H}-\frac{k}{a^{2}}\right)dt,$
(27)
we find
$4\pi(\rho+p)\tilde{r_{A}}^{3}Hdt=f^{\prime}(A_{h})d\tilde{r_{A}}.$ (28)
Employing the continuity equation (18) with the above equation, the
differential form of the Friedmann equation is given by
$\frac{f^{\prime}(A_{h})}{\tilde{r_{A}}^{3}}d\tilde{r_{A}}=-\frac{4\pi}{3}d\rho.$
(29)
Integrating the above equation with Eq. (25) gives the first Friedmann
equation
$\frac{8}{9\alpha^{2}}\left[e^{\frac{3}{2}W\left(\frac{-\alpha^{2}}{8\tilde{r_{A}}^{2}}\right)}\left(1+3W\left(\frac{-\alpha^{2}}{8\tilde{r_{A}}^{2}}\right)\right)-1\right]=-\frac{4\pi}{3}\rho-\frac{\Lambda}{6},$
(30)
where we have set the integration constant
$C=-\frac{8}{9\alpha^{2}}+\frac{\Lambda}{6}$ in the limit
$\tilde{r_{A}}\rightarrow\infty$. Finally, combining Eqs. (25) and (27) with
Eq. (28), the dynamical equation can be obtained
$e^{\frac{1}{2}W\left(\frac{-\alpha^{2}}{8\tilde{r_{A}}^{2}}\right)}\left(\dot{H}-\frac{k}{a^{2}}\right)=-4\pi(\rho+p).$
(31)
These equations reduce to the standard Friedmann equations in the limit
$\alpha\rightarrow 0$.
The striking feature of the first Friedmann equation (30) comes from the
argument of the Lambert function, i.e., the condition
$\frac{\alpha^{2}}{8\tilde{r_{A}}^{2}}\leq\frac{1}{e}$ gives the minimal
apparent horizon
$\tilde{r_{A}}^{min}=\sqrt{\frac{e}{8}}\alpha.$ (32)
This minimal apparent horizon appears due to the minimal length notion of GUP [20,
21, 22, 25]. It implies that the singularity is removed at the beginning of
the Universe. Moreover, the energy density does not diverge anymore since the
minimum apparent horizon has a finite value. Using Eq. (32) in the first
Friedmann equation (30) and neglecting the cosmological constant, the maximum
energy density is given by
$\rho^{max}=\frac{2\left(2+e^{3/2}\right)}{3\pi e^{3/2}\alpha^{2}}.$ (33)
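A quick numerical check (our own) confirms that inserting $\tilde{r_{A}}^{min}$ into Eq. (30), where $W(-1/e)=-1$, indeed reproduces Eq. (33):

```python
import numpy as np

# Plugging r_min = sqrt(e/8)*alpha into Eq. (30) (W(-1/e) = -1, Lambda dropped)
# and solving for rho reproduces Eq. (33).
alpha = 1.0
lhs = (8.0 / (9.0 * alpha**2)) * (np.exp(-1.5) * (1.0 - 3.0) - 1.0)
rho_max = -3.0 * lhs / (4.0 * np.pi)
rho_eq33 = 2.0 * (2.0 + np.e**1.5) / (3.0 * np.pi * np.e**1.5 * alpha**2)
print(rho_max, rho_eq33)   # identical
```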
It is clear that both $\tilde{r_{A}}^{min}$ and $\rho^{max}$ recover the
standard results in the limit $\alpha\rightarrow 0$, i.e.,
$\tilde{r_{A}}^{min}$ approaches zero while $\rho^{max}$ diverges. Using Eq.
(16), the Friedmann equations can be expressed in terms of the Hubble parameter
$\displaystyle-\frac{4\pi}{3}\rho-\frac{\Lambda}{6}=\frac{8}{9\alpha^{2}}\left[e^{\frac{3}{2}W\left(\frac{-\alpha^{2}}{8}\left(H^{2}+\frac{k}{a^{2}}\right)\right)}\left(1+3W\left(\frac{-\alpha^{2}}{8}\left(H^{2}+\frac{k}{a^{2}}\right)\right)\right)-1\right],$
$\displaystyle-4\pi(\rho+p)=e^{\frac{1}{2}W\left(\frac{-\alpha^{2}}{8}\left(H^{2}+\frac{k}{a^{2}}\right)\right)}\left(\dot{H}-\frac{k}{a^{2}}\right).$
(34)
When the GUP effects are tiny, these equations can be expanded in powers of
$\alpha$
$\displaystyle\frac{8\pi}{3}\rho+\frac{\Lambda}{3}=\left(H^{2}+\frac{k}{a^{2}}\right)-\frac{1}{32}\left(H^{2}+\frac{k}{a^{2}}\right)^{2}\alpha^{2}+...,$
$\displaystyle-4\pi(\rho+p)=\left(\dot{H}-\frac{k}{a^{2}}\right)\left[1-\frac{1}{16}\left(H^{2}+\frac{k}{a^{2}}\right)\alpha^{2}+...\right].$
(35)
Neglecting the GUP correction, one can easily recover the usual Friedmann
equations.
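The expansion (35) can be checked numerically against the exact Lambert-function form (34); a minimal sketch (our own, flat case) is:

```python
import numpy as np
from scipy.special import lambertw

def lhs_exact(h, alpha):
    """(8*pi/3)*rho + Lambda/3 from Eq. (34), with h = H^2 + k/a^2."""
    w = lambertw(-alpha**2 * h / 8.0).real
    return -2.0 * (8.0 / (9.0 * alpha**2)) * (np.exp(1.5 * w) * (1.0 + 3.0 * w) - 1.0)

h, alpha = 0.5, 0.1
series = h - alpha**2 * h**2 / 32.0
print(lhs_exact(h, alpha), series)   # agree to O(alpha^4)
```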
## 4 Deceleration parameter
We would like to analyse the effects of GUP on the deceleration parameter,
defined by
$q=-1-\frac{\dot{H}}{H^{2}}.$ (36)
Positive $q$ corresponds to decelerated expansion, while negative $q$
corresponds to accelerated expansion. We choose the equation of state
$p=\omega\rho$ and restrict our analysis to the flat case $k=0$, since it is
consistent with cosmological observations [107]. Combining the Friedmann
equations (34) with the deceleration parameter, we can obtain
$q=-1-\frac{4(1+\omega)}{H^{2}e^{\frac{1}{2}W\left(\frac{-\alpha^{2}H^{2}}{8}\right)}}\left\\{\frac{2}{3\alpha^{2}}\left[e^{\frac{3}{2}W\left(\frac{-\alpha^{2}H^{2}}{8}\right)}\left(1+3W\left(\frac{-\alpha^{2}H^{2}}{8}\right)\right)-1\right]+\frac{\Lambda}{8}\right\\}.$
(37)
Now, we can investigate the deceleration parameter at the beginning of the
Universe. Remembering the argument of the Lambert function, the condition
$\frac{\alpha^{2}H^{2}}{8}\leq\frac{1}{e}$ yields the maximum Hubble parameter
$H_{max}=\frac{2\sqrt{2}}{\sqrt{e}\,\alpha},$ (38)
at the initial stage. At $H=H_{max}$, the argument of the Lambert function in
Eq. (37) equals $-1/e$, where $W(-1/e)=-1$, so the deceleration parameter at
the initial stage is given by
$q(H_{max})=\omega+\frac{(1+\omega)\left(e^{3/2}-1\right)}{3},$ (39)
or equivalently
$q(H_{max})\approx\omega+1.161(1+\omega).$ (40)
The equation of state parameter $\omega$ must therefore satisfy
$\omega<-0.537$ for acceleration at the inflationary stage. Since GUP
effects are small for the radiation and matter dominated eras, the
deceleration parameter can be expanded as
$q=\frac{1+3\omega}{2}-\frac{(1+\omega)\Lambda}{2H^{2}}+\frac{3H^{2}(1+\omega)\alpha^{2}}{64}-\frac{(1+\omega)\Lambda\alpha^{2}}{32}+\ldots$
(41)
Thus, the deceleration parameters for radiation ($\omega=1/3$) and matter
($\omega=0$) dominated eras can be expressed as
$q_{rad}=1+\frac{H^{2}\alpha^{2}}{16},\qquad
q_{m}=\frac{1}{2}+\frac{3H^{2}\alpha^{2}}{64},$ (42)
respectively. The results imply that the universe is more decelerated for the
radiation and matter dominated eras when the GUP effects are considered.
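As a cross-check of the inflationary bound derived above, one can evaluate Eq. (37) numerically just below $H_{max}$ and locate the sign change of $q$; a minimal sketch (our own) is:

```python
import numpy as np
from scipy.special import lambertw

def q_exact(H, omega_w, alpha, Lam=0.0):
    """Deceleration parameter of Eq. (37), flat case."""
    w = lambertw(-alpha**2 * H**2 / 8.0).real
    bracket = (2.0 / (3.0 * alpha**2)) * (np.exp(1.5 * w) * (1.0 + 3.0 * w) - 1.0) \
        + Lam / 8.0
    return -1.0 - 4.0 * (1.0 + omega_w) * bracket / (H**2 * np.exp(0.5 * w))

alpha = 1.0
H_max = 2.0 * np.sqrt(2.0) / (np.sqrt(np.e) * alpha)
for w in [-0.3, -0.537, -0.8]:
    print(w, q_exact(0.999 * H_max, w, alpha))   # q changes sign near omega ~ -0.54
```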
## 5 Generalised second law
In this section, we want to confirm the validity of the GSL, which states that
the total entropy of the matter fields and the horizon cannot decrease with
time, in the presence of GUP effects. We start by reorganising Eq. (28) as
follows:
$\dot{\tilde{r}}_{A}=4\pi(\rho+p)H\tilde{r_{A}}^{3}e^{-\frac{1}{2}W\left(\frac{-\alpha^{2}}{8\tilde{r_{A}}^{2}}\right)}.$
(43)
From Eqs. (25), (26) and (43), we can find
$T_{h}\dot{S_{h}}=4\pi(\rho+p)H\tilde{r_{A}}^{3}\left[1-2\pi(\rho+p)\tilde{r_{A}}^{2}e^{-\frac{1}{2}W\left(\frac{-\alpha^{2}}{8\tilde{r_{A}}^{2}}\right)}\right].$
(44)
The second law at the apparent horizon may be violated during the accelerated
expansion phase. Therefore, we must also consider the entropy of the matter
fields inside the horizon, i.e., we must check the GSL. The Gibbs equation is
given by [108]
$T_{f}dS_{f}=d(\rho V)+pdV=Vd\rho+(\rho+p)dV,$ (45)
where $T_{f}$ and $S_{f}$ are the temperature and entropy of matter fields,
respectively. In order to avoid nonequilibrium thermodynamics and a
spontaneous energy flow between the horizon and the matter, the thermal
equilibrium condition ($T_{h}=T_{f}$) is assumed; otherwise, a deformation of
the FRW geometry is unavoidable [108]. From Eqs. (18), (20), (43) and (45),
the change in the entropy of the matter fields can be expressed as
$T_{h}\dot{S_{f}}=-4\pi(\rho+p)H\tilde{r_{A}}^{3}\left(1-4\pi(\rho+p)\tilde{r_{A}}^{2}e^{-\frac{1}{2}W\left(\frac{-\alpha^{2}}{8\tilde{r_{A}}^{2}}\right)}\right).$
(46)
Finally, combining Eqs. (44) and (46), the total entropy evolution is given by
$T_{h}(\dot{S_{h}}+\dot{S_{f})}=8\pi^{2}(\rho+p)^{2}H\tilde{r_{A}}^{5}e^{-\frac{1}{2}W\left(\frac{-\alpha^{2}}{8\tilde{r_{A}}^{2}}\right)}.$
(47)
The right hand side of the above expression is never negative for an expanding
universe. Therefore, we conclude that the GSL is always satisfied in all eras
of the universe for any spatial curvature.
Now, in order to understand how GUP affects the total entropy evolution, we
focus on a more specific case, namely the $\Lambda CDM$ scenario. We first
solve the first Friedmann equation (34), finding
$H^{2}=\frac{8\exp\left\\{\frac{1}{3}\left(2W\left(\frac{\sqrt{e}}{4}\left(2-3\pi\alpha^{2}\rho\right)\right)-1\right)\right\\}\left(1-2W\left(\frac{\sqrt{e}}{4}\left(2-3\pi\alpha^{2}\rho\right)\right)\right)}{3\alpha^{2}},$
(48)
for $k=0$. Let us simplify the above equation. The total energy density is
defined by
$\rho=\rho_{m}+\rho_{r}+\rho_{\Lambda},$ (49)
where the matter density $\rho_{m}$, the radiation density $\rho_{r}$ and the
cosmological constant energy density $\rho_{\Lambda}$ have the dependencies
$\rho_{m}=\frac{\rho_{m0}}{a^{3}}$, $\rho_{r}=\frac{\rho_{r0}}{a^{4}}$ and
$\rho_{\Lambda}=\rho_{\Lambda 0}$ [109]. Zero subscript denotes the current
value. On the other hand, the matter, radiation and cosmological constant
density parameters are defined by
$\Omega_{m0}=\frac{\rho_{m0}}{\rho_{c0}},\Omega_{r0}=\frac{\rho_{r0}}{\rho_{c0}},\Omega_{\Lambda
0}=1-\Omega_{m0}-\Omega_{r0}$, respectively. Here, the current critical
density $\rho_{c0}$ is given by $\rho_{c0}=\frac{3H_{0}^{2}}{8\pi}$ and
$H_{0}$ is the current Hubble parameter. Using the above definitions with
redshift parameter $1+z=\frac{a_{0}}{a}$ and taking the present scale factor
$a_{0}=1$, one can write
$\frac{\rho}{\rho_{c0}}=\frac{\Omega_{m0}}{a^{3}}+\frac{\Omega_{r0}}{a^{4}}+\Omega_{\Lambda
0}=\Omega_{m0}(1+z)^{3}+\Omega_{r0}(1+z)^{4}+\Omega_{\Lambda 0}.$ (50)
Figure 1: Total entropy change versus the redshift parameter. We take
$\Omega_{m}=0.3$, $\Omega_{r}=0.0001$, $H_{0}=1$ and $\alpha=0$.
Figure 2: Total entropy change versus the redshift parameter. From top to
bottom, the curves correspond to $\alpha=1.4,1.2,1,0.8,0.6$. We take
$\Omega_{m}=0.3$, $\Omega_{r}=0.0001$ and $H_{0}=1$.
Using Eq. (50) in the argument of the Lambert function in Eq. (48), we get
$x=\frac{\sqrt{e}}{4}\left(2-3\pi\alpha^{2}\rho\right)=\frac{\sqrt{e}}{4}\left(2-\frac{9H_{0}^{2}\alpha^{2}}{8}\left[\Omega_{r0}(1+z)^{4}+\Omega_{m0}(1+z)^{3}+\Omega_{\Lambda
0}\right]\right).$ (51)
Finally, we can express the GUP modified Hubble function for the $\Lambda CDM$
cosmology as follows:
$H_{\Lambda
CDM}(z)=\frac{2\sqrt{2}e^{\frac{2W(x)-1}{6}}\sqrt{1-2W(x)}}{\sqrt{3}\alpha}.$
(52)
Substituting Eqs. (3), (16) and the second Friedmann equation (34) into Eq.
(47), we find
$\dot{S_{h}}+\dot{S_{f}}=\pi\dot{H}^{2}H^{-5}\left(1+\frac{\dot{H}}{2H^{2}}\right)^{-1}e^{\frac{1}{2}W\left(\frac{-\alpha^{2}H^{2}}{8}\right)}.$
(53)
Finally, using $\dot{H}=-(1+z)H(dH/dz)$, we obtain
$\dot{S_{h}}+\dot{S_{f}}=\pi(1+z)^{2}H^{-3}\left(\frac{dH}{dz}\right)^{2}\left(1-\frac{(1+z)}{2H}\frac{dH}{dz}\right)^{-1}e^{\frac{1}{2}W\left(\frac{-\alpha^{2}H^{2}}{8}\right)}.$
(54)
Combining Eq. (52) with the above equation, we plot the total entropy change
with respect to redshift. In Fig. 1, we plot the evolution of total entropy
for the standard case. In Fig. 2, the total entropy change is represented for
different values of the GUP parameter $\alpha$. Comparing Fig. 1 with Fig. 2
reveals the dramatic effects of GUP on the evolution of total entropy. As can
be seen in Fig. 2, the redshift parameter has an upper bound. In contrast to
the GUP case, the redshift parameter is allowed to go to infinity in the
standard cosmology, and as $z$ goes to infinity the total entropy change
vanishes. In the presence of GUP effects, on the other hand, the total entropy
change has an upper bound since $z$ has a maximum value.
Finally, we close this section with comments on different values of the GUP
parameter. In Table 1, we give numerically the maximum values of the redshift
parameter, the Hubble parameter, and the total entropy change for various
values of the GUP parameter $\alpha$. From Fig. 2 and Table 1, one can see
that $z_{max}$ decreases as $\alpha$ increases; the maximum values of the
Hubble parameter and of the total entropy change decrease in the same way.
| | $\alpha=0.6$ | $\alpha=0.8$ | $\alpha=1$ | $\alpha=1.2$ | $\alpha=1.4$
---|---|---|---|---|---
$z_{max}$ | $1.779$ | $1.227$ | $0.840$ | $0.535$ | $0.268$
$H(z_{max})$ | $2.859$ | $2.114$ | $1.716$ | $1.430$ | $1.125$
$\dot{S_{h}}+\dot{S_{f}}$ | $100.173$ | $26.276$ | $12.861$ | $6.707$ | $3.185$
Table 1: The maximum values of redshift parameter, Hubble parameter and the
total entropy change for the different values of $\alpha$.
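The entries of Table 1 for $z_{max}$ and $H(z_{max})$ can be reproduced by solving $x(z)=-1/e$, the edge of the Lambert-function domain in Eq. (51); a minimal sketch (our own) is:

```python
import numpy as np
from scipy.optimize import brentq

Om, Or_, H0 = 0.3, 1e-4, 1.0
OL = 1.0 - Om - Or_

def x_of_z(z, alpha):
    E = Or_ * (1 + z)**4 + Om * (1 + z)**3 + OL
    return np.sqrt(np.e) / 4.0 * (2.0 - 9.0 * H0**2 * alpha**2 * E / 8.0)

# z_max solves x(z) = -1/e, the edge of the Lambert-W domain in Eq. (51).
for alpha in [0.6, 0.8, 1.0, 1.2, 1.4]:
    zmax = brentq(lambda z: x_of_z(z, alpha) + 1.0 / np.e, 0.0, 10.0)
    Hmax = 2.0 * np.sqrt(2.0) / (np.sqrt(np.e) * alpha)  # Eq. (52) at W(x) = -1
    print(f"alpha={alpha:3.1f}  z_max={zmax:.3f}  H(z_max)={Hmax:.3f}")
# Reproduces the z_max and H(z_max) rows of Table 1.
```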
## 6 Conclusions and discussions
In this paper, using the entropy$-$area relation obtained from Nouicer's GUP
[75], we obtained the GUP modified Friedmann equations from the first law of
thermodynamics at the apparent horizon [15]. We found a minimum apparent horizon
due to the minimal length notion of GUP and showed that the energy density of
the universe is finite and maximal at the minimum apparent horizon. Then, in
order to see the effects of GUP, we computed the deceleration parameter for
the flat case with the equation of state $p=\omega\rho$. We found that
$\omega$ must satisfy the condition $\omega<-0.537$ for the initial
acceleration. For the radiation and matter dominated eras, the expansion of
the universe is more decelerated, since the GUP effects give a positive
contribution to the deceleration parameter. Moreover, we checked the validity
of the GSL and showed that it is always valid in all eras of the universe in
the presence of GUP effects. Finally, we considered the GSL in the specific
case of $\Lambda CDM$ cosmology. In contrast to the standard cosmology, the
redshift parameter has a finite maximum value. GUP also affects the total
entropy change, which attains a finite maximum value at the maximum redshift.
Our results indicate that there is no Big Bang singularity, due to the minimal
apparent horizon and the maximum energy density. Therefore, GUP provides a
more reasonable solution at the Planck scale, where classical general
relativity fails. This feature is well known in the literature [20, 21, 22,
25], so our results are consistent with recent studies. Interestingly, we
found that the total entropy change has a maximum value at a maximum, finite
redshift for the $\Lambda CDM$ cosmology. In fact, the maximum and finite
value of $z$ is expected, since the Big Bang singularity is removed. However,
the effects of the modified uncertainty principle on the total entropy change
in $\Lambda CDM$ cosmology need further investigation.
## Acknowledgments
The author thanks Ekrem Aydiner for valuable and fruitful discussion. This
work was supported by Istanbul University Post-Doctoral Research Project:
MAB-2021-38032.
## References
* [1] J. D. Bekenstein, Lett. Nuovo Cimento 4 (1972) 737.
* [2] J. D. Bekenstein, Phys. Rev. D 7 (1973) 2333.
* [3] J. M. Bardeen, B. Carter, S. W. Hawking, Commun. Math. Phys. 31 (1973) 161.
* [4] S. W. Hawking, Nature 248 (1974) 30.
* [5] J. D. Bekenstein, Phys. Rev. D 9 (1974) 3292.
* [6] S. W. Hawking, Commun. Math. Phys. 43 (1975) 199.
* [7] T. Jacobson, Phys. Rev. Lett. 75 (1995) 1260.
* [8] T. Padmanabhan, Class. Quantum Grav. 19 (2002) 5387.
* [9] C. Eling, R. Guedens, T. Jacobson, Phys. Rev. Lett. 96 (2006) 121301.
* [10] A. Paranjape, S. Sarkar, T. Padmanabhan, Phys. Rev. D 74 (2006) 104015.
* [11] D. Kothawala, S. Sarkar, T. Padmanabhan, Phys. Lett. B 652 (2007) 338.
* [12] T. Padmanabhan, A. Paranjape, Phys. Rev. D 75 (2007) 064004.
* [13] R. G. Cai, S. P. Kim, JHEP 02 (2005) 050.
* [14] M. Akbar, R. G. Cai, Phys. Lett. B 635 (2006) 7.
* [15] M. Akbar, R. G. Cai, Phys. Rev. D 75 (2007) 084003.
* [16] R. G. Cai, L. M. Cao, Phys. Rev. D 75 (2007) 064008.
* [17] R. G. Cai, L. M. Cao, Nucl. Phys. B 785 (2007) 135.
* [18] R. G. Cai, L. M. Cao, Y. P. Hu, JHEP 08 (2008) 090.
* [19] A. Sheykhi, Eur. Phys. J. C 69 (2010) 265.
* [20] A. Awad, A. F. Ali, JHEP 2014 (2014) 093.
* [21] M. Salah, F. Hammad, M. Faizal, A. F. Ali, JCAP 02 (2017) 35.
* [22] Ö. Ökcü, C. Corda, E. Aydiner, EPL 129 (2020) 50002.
* [23] S. S. Luo, Z. W. Feng, Ann. Phys. 458 (2023) 169449.
* [24] S. Kouwn, Phys. Dark Universe 21 (2018) 76.
* [25] M. A. A. Alsabbagh, K. Nozari, Ann. Phys. 458 (2023) 169469.
* [26] A. Sheykhi, Class. Quantum Grav. 27 (2010) 025007.
* [27] A. Sheykhi, B. Wang, R. G. Cai, Nucl. Phys. B 779 (2007) 265.
* [28] A. Sheykhi, B. Wang, R. G. Cai, Phys. Rev. D 76 (2007) 023515.
* [29] A. Sheykhi, B. Wang, Phys. Lett. B 678 (2009) 434.
* [30] A. Sheykhi, Phys. Lett. B 785 (2018) 118.
* [31] S. Nojiri, S. D. Odintsov, E. N. Saridakis, Eur. Phys. J. C 79 (2019) 242.
* [32] A. Lymperis, E. N. Saridakis, Eur. Phys. J. C 78 (2018) 993.
* [33] K. Karami, A. Sheykhi, N. Sahraei, S. Ghaffari, EPL 93 (2011) 29002.
* [34] A. Sheykhi, Int. J. Mod. Phys. D 28 (2019) 1950057.
* [35] A. Sheykhi, B. Farsi, Eur. Phys. J. C 82 (2022) 1111.
* [36] M. Asghari, A. Sheykhi, Eur. Phys. J. C 82 (2022) 388.
* [37] A. Sheykhi, M. S. Hamedan, Entropy 25 (2023) 569.
* [38] A. Sheykhi, arXiv:2302.13012 [gr-qc] (2023) 1-6.
* [39] A. Lymperis, S. Basilakos, E. N. Saridakis, Eur. Phys. J. C 81 (2021) 1037.
* [40] N. Drepanou, A. Lymperis, E. N. Saridakis, K. Yesmakhanova, Eur. Phys. J. C 82 (2022) 449.
* [41] S. D. Odintsov, T. Paul, Phys. Dark Universe 39 (2023) 101159.
* [42] E. N. Saridakis, JCAP 07 (2020) 031.
* [43] J. D. Barrow, S. Basilakos, E. N. Saridakis, Phys. Lett. B 815 (2021) 136134.
* [44] E. N. Saridakis, S. Basilakos, Eur. Phys. J. C 81 (2021) 644.
* [45] F. K. Anagnostopoulos, S. Basilakos, E. N. Saridakis, Eur. Phys. J. C 80 (2020) 826.
* [46] A. Sheykhi, Phys. Rev. D 103 (2021) 123503.
* [47] M. Asghari, A. Sheykhi, MNRAS 508 (2021) 2855.
* [48] A. Sheykhi, Phys. Rev. D 107 (2022) 023505.
* [49] S. Di Gennaro, Y. C. Ong, Universe 8 (2022) 541.
* [50] E. M. C. Abreu, J. A. Neto, Phys.Lett. B 824 (2022) 136803.
* [51] Z. Çoker, Ö. Ökcü, E. Aydiner, EPL 143 (2023) 59001.
* [52] S. A. Hayward, Class. Quant. Grav. 15 (1998) 3147.
* [53] T. R. Govindarajan, R. K. Kaul, V. Suneeta, Class. Quantum Grav. 18 (2001) 2877.
* [54] R. B. Mann, S. N. Solodukhin, Nucl. Phys. B 523 (1998) 293.
* [55] A. Sen, JHEP 2013 (2013) 156.
* [56] C. Tsallis, L. J. L. Cirto, Eur. Phys. J. C 73 (2013) 2487.
* [57] G. Kaniadakis, Phys. Rev. E 66 (2002) 056125.
* [58] G. Kaniadakis, Phys. Rev. E 72 (2005) 036108.
* [59] J. D. Barrow, Phys. Lett. B 808 (2020) 135643.
* [60] S. Jalalzadeh, F. R. da Silva, P. V. Moniz, Eur. Phys. J. C 81 (2021) 632.
* [61] M. Maggiore, Phys. Lett. B 304 (1993) 65.
* [62] F. Scardigli, Phys. Lett. B 452 (1999) 39.
* [63] A. Kempf, G. Mangano, R. B. Mann, Phys. Rev. D 52 (1995) 1108.
* [64] C. Bambi, F. R. Urban, Class. Quant. Grav. 25 (2008) 095006.
* [65] K. Nozari, A. Etemadi, Phys. Rev. D 85 (2012) 104029.
* [66] R. N. C. Filho, J. P. M. Braga, J. H. S. Lira, J. S. Andrade Jr., Phys. Lett. B 755 (2016) 367.
* [67] W. S. Chung, H. Hassanabadi, Phys. Lett. B 785 (2018) 127.
* [68] M. J. Lake, M. Miller, R. F. Ganardi, Z. Liu, S. D. Liang, T. Paterek, Class. Quantum Grav. 36 (2019) 155012.
* [69] W. S. Chung, Int. J. Theor. Phys. 58 (2019) 2575.
* [70] M. P. Dabrowski, F. Wagner, Eur. Phys. J. C 79 (2019) 716.
* [71] M. J. Lake, M. Miller, S. D. Liang, Universe 6 (2020) 56.
* [72] M. J. Lake, Quantum Rep. 3 (2021) 196.
* [73] J. R. Mureika, Phys. Lett. B 789 (2019) 88.
* [74] X. D. Du, C. Y. Long, JHEP 2022 (2022) 063.
* [75] K. Nouicer, Phys. Lett. B 646 (2006) 63.
* [76] R. J. Adler, P. Chen, D. I. Santiago, Gen. Relat. Gravit. 33 (2001) 2101.
* [77] A. J. M. Medved, E. C. Vagenas, Phys. Rev. D 70 (2004) 124021.
* [78] M. I. Park, Phys. Lett. B 659 (2007) 698.
* [79] K. Nozari, S. H. Mehdipour, EPL 84 (2008) 20008.
* [80] I. Arraut, D. Batic, M. Nowakowski, Class. Quantum Grav. 26 (2009) 125006.
* [81] R. Banerjee, S. Ghosh, Phys. Lett. B 688 (2010) 224.
* [82] K. Nozari, S. Saghafi, JHEP 2012 (2012) 005.
* [83] A. F. Ali, JHEP 2012 (2012) 067.
* [84] B. Majumder, Gen. Relat. Gravit. 45 (2013) 2403.
* [85] D. Chen, H. Wu, H. Yang, Adv. High Energy Phys. 2013 (2013) 432412.
* [86] M. A. Anacleto, F. A. Brito, E. Passos, Phys. Lett. B 749 (2015) 181.
* [87] Z. W. Feng, H. L. Li, X. T. Zu, S. Z. Yang, Eur. Phys. J. C 76 (2016) 212.
* [88] L. Xiang, X. Q. Wen, JHEP 2009 (2009) 046.
* [89] F. Scardigli, Symmetry 12 (2020) 1519.
* [90] S. Hassanabadi, J. Kriz, W. S. Chung, B. C. Lutfuoglu, E. Maghsoodi, H. Hassanabadi, Eur. Phys. J. Plus 136 (2021) 918.
* [91] B. C. Lutfuoglu, B. Hamil, L. Dahbi, Eur. Phys. J. Plus 136 (2021) 976.
* [92] Z. Sun, M. S. Ma, EPL 122 (2018) 60002.
* [93] M. S. Ma, Y. S. Liu, Adv. High Energy Phys. 2018 (2018) 1257631.
* [94] Ö. Ökcü, E. Aydiner, Int. J. Theor. Phys. 59 (2020) 2839.
* [95] H. Hassanabadi, E. Maghsoodi, W. S. Chung, M. de Montigny, Eur. Phys. J. C 79 (2019) 936.
* [96] B. Bolen, M. Cavaglia, Gen. Relativ. Gravit. 37 (2005) 1255.
* [97] X. Han, H. Li, Y. Ling, Phys. Lett. B 666 (2008) 121.
* [98] H. Moradpour, C. Corda, A. H. Ziaie, S. Ghaffari, EPL 127 (2019) 60006.
* [99] W. S. Chung, H. Hassanabadi, Phys. Lett. B 793 (2019) 451.
* [100] B. Hamil, B. C. Lutfuoglu, EPL 133 (2021) 30003.
* [101] B. Hamil, B. C. Lutfuoglu, EPL 134 (2021) 50007.
* [102] H. Chen, B. C. Lutfuoglu, H. Hassanabadi, Z. W. Long, Phys. Lett. B 827 (2022) 136994.
* [103] P. Bosso, G. G. Luciano, L. Petruzziello, F. Wagner, Class. Quantum Gravity 40 (2023) 195014.
* [104] S. Hossenfelder, Phys. Rev. D 73 (2006) 105013.
* [105] S. Hossenfelder, Class. Quantum Gravity 23 (2006) 1815.
* [106] R. M. Corless, G. H. Gonnet, D. E. Hare, D. J. Jeffrey, D. E. Knuth, Adv. Comput. Math. 5 (1996) 329.
* [107] S. D. Odintsov, S. Nojiri, Phys. Rept. 505 (2011) 59.
* [108] G. Izquierdo, D. Pavon, Phys. Lett. B 633 (2006) 420.
* [109] B. Ryden, Introduction to Cosmology, 2nd edition, Cambridge University Press (2017).
# RecAD: Towards A Unified Library for Recommender Attack and Defense
Changsheng Wang (0009-0007-0957-638X), Jianbai Ye (0009-0004-9095-4947),
Chongming Gao (0000-0002-5187-9196), Fuli Feng (0000-0002-5828-9842), and
Xiangnan He (0000-0001-8472-7992), University of Science and Technology of
China, No. 96 Jinzhai Road, Hefei, Anhui, China 230000; Wenjie Wang
(0000-0002-5199-1428), National University of Singapore, Singapore
(2023)
###### Abstract.
In recent years, recommender systems have become a ubiquitous part of our
daily lives, yet they face a high risk of being attacked due to their growing
commercial and social value. Despite significant research progress in
recommender attack and defense, there is a lack of a widely-recognized
benchmarking standard in the field, leading to unfair performance comparison
and limited credibility of experiments. To address this, we propose RecAD, a
unified library aiming at establishing an open benchmark for recommender
attack and defense. RecAD takes an initial step to set up a unified
benchmarking pipeline for reproducible research by integrating diverse
datasets, standard source codes, hyper-parameter settings, running logs,
attack knowledge, attack budget, and evaluation results. The benchmark is
designed to be comprehensive and sustainable, covering attack, defense, and
evaluation tasks and enabling more researchers to easily follow and
contribute to this promising field. RecAD will drive more solid and
reproducible research on recommender system attack and defense, reduce the
redundant efforts of researchers, and ultimately increase the credibility and
practical value of recommender attack and defense. The project is released at
https://github.com/gusye1234/recad.
Recommender Systems; Shilling Attack and Defense; Benchmark
Journal year: 2023. Copyright: ACM licensed. Conference: Seventeenth ACM
Conference on Recommender Systems (RecSys '23), September 18–22, 2023,
Singapore, Singapore. Price: 15.00. DOI: 10.1145/3604915.3609490. ISBN:
979-8-4007-0241-9/23/09. CCS: Information systems → Recommender systems.
## 1\. Introduction
In recent decades, recommender systems have become increasingly important in
various areas, including E-commerce recommendation (Huang et al., 2007), short
video entertainment (Gao et al., 2022), news headlines (Liu et al., 2010), and
online education (Wu et al., 2011). However, the widespread use of recommender
systems has also led to concerns regarding their security. When an attacker
successfully attacks a recommender system, users may be offended by malicious
recommendations, and the platform owner may lose the trust of users in the
platform. Additionally, merchants who rely on recommender platforms may suffer
from commercial losses. Malicious attacks may even cause recommender systems
to engage in unethical or illegal behavior, resulting in adverse effects on
society. Therefore, it is essential to address these security risks and ensure
the integrity of recommender systems to maintain user trust and prevent any
negative consequences.
Recently, industry and academia have been developing strategies for both
attacking and defending recommender systems, paying particular attention to
shilling attack and defense techniques (Xing et al., 2013). In shilling
attacks, fake users are generated and assigned high ratings for a target item,
while also rating other items to mimic normal users and evade detection. There are
three main kinds of shilling attack methods: heuristic methods, gradient
methods, and neural methods. Heuristic methods (Linden et al., 2003; Burke et
al., 2005a; Kaur and Goel, 2016) involve artificially or randomly selecting
items based on user preferences and then intuitively fabricating interaction
information. Gradient methods (Li et al., 2016b; Fang et al., 2020) estimate
the gradients of maximizing attack objectives to directly optimize the
interactions of fake users, while neural methods (Tang et al., 2020; Lin et
al., 2022, 2020) optimize the neural network parameters to predict the
optimal fake user behaviors for maximizing attack objectives.
To combat these attacks, many defense methods (Dou et al., 2018; Cao et al.,
2013; Mehta and Nejdl, 2009) are emerging to enhance the defense ability of
existing recommendation models. Essentially, these defense models aim to
distinguish between fake data generated by the attack model and real user
data, ensuring that the recommendation model uses as much real data as
possible (Yang et al., 2018). Currently, mainstream defense models can be
divided into three types according to whether labels for fake users are
available (Dou et al., 2018; Aktukmak et al., 2019; Tang et al., 2019). As new
attack methods emerge, defense models are constantly evolving to keep up with
these threats. Therefore, staying up-to-date with the latest attack and
defense methods is essential for maintaining the security of recommender
systems.
With the continuous emergence of new attack and defense algorithms in the
field of recommender system security, there are several research challenges
that deserve attention. Firstly, while many articles provide details about
their experiments, there is often a lack of standardization in dataset
processing methods, which can lead to unfair comparisons. Secondly, there is a
lack of unified settings for attack experiments: various works leverage
different experimental settings, making it difficult to compare and evaluate
different models, so it is critical to establish a standardized approach for
similar attack settings. Additionally, many works lack public code, which
creates redundant effort and difficulties for subsequent researchers trying to advance the
field. To address these challenges, researchers should strive to provide clear
and standardized descriptions of dataset processing methods, unified settings
for attack experiments, and make their code publicly available to facilitate
replication and extension of their work. These efforts can help promote the
development of the recommender attack and defense and contribute to more
robust and effective recommender system security solutions.
To address the aforementioned challenges, we have initiated a project to
develop a unified framework for Recommender Attack and Defense, named RecAD.
RecAD aims to improve the reproducibility of existing models and simplify the
development process for new recommender attack and defense models. Our
benchmarking library design is innovative and effective, revealing several
advantages compared to earlier attempts.
* •
Unified library framework. RecAD is implemented using
PyTorch (https://pytorch.org/), one of the most popular deep learning
frameworks. The library is composed of three core modules, namely the data
module, model module, and evaluation module. The library supports a vast array
of options provided by each module, and a straightforward configuration
ensures that users can promptly complete algorithm reproduction and
comparison. The seamless interface integration of the three core modules also
enables the minimal adjustment for incorporating new algorithms, allowing for
continuous development and extension within our framework in the future.
* •
Comprehensive benchmark models and datasets. RecAD provides support not only
for replacing individual models but also for integrating a wide range of
research issues. From generating fake attack data to defending against
existing data and injecting data into victim models, RecAD covers the entire
spectrum of shilling attack and defense research. It provides an array of
choices for all models and datasets, guaranteeing an ample assortment of
combinations for researchers to utilize. This allows them to execute, compare,
and assess the entire procedure, relying on lucid instructions and
configurations. RecAD is highly adaptable and scalable, with original dataset
copies that can be effortlessly transformed into a practical form using the
provided preprocessing tools or scripts. Additionally, we are continuously
expanding our library with additional datasets and methods to better serve the
needs of the community and researchers.
* •
Extensive and standard evaluation protocols. RecAD offers evaluation methods
from two perspectives: attack evaluation and defense evaluation. Researchers
interested in continuing the offensive direction or those focusing on the
defensive direction can use the corresponding evaluation methods.
Additionally, it provides standard evaluation techniques for assessing the
effectiveness of defense models. Encapsulating the entire evaluation process
within a single module enables RecAD to more readily accommodate new
evaluation techniques, thus enhancing its adaptability and versatility.
* •
Openness and high integration of models. Openness is crucial for promoting
transparency, collaboration, and reproducibility in computer science research.
RecAD adopts a highly integrated approach, simplifying the relationships
between modules as much as possible and making the corresponding parameters
publicly available at each module. This ensures that subsequent researchers
who use our framework to add new models only need to make the corresponding
modules public, allowing other researchers to quickly and efficiently
reproduce the work and ensure the openness of the field in the future.
* •
The generalization of attacker’s knowledge. The attacker’s knowledge level
directly impacts the effectiveness of the attack. A high degree of accessible
knowledge about the recommender system allows an attacker to craft adversarial
examples that can evade the model’s defenses. RecAD can elevate white-box
attacks to gray-box attacks and customize the proportion of data accessible by
the attackers for gray-box attacks (Fang et al., 2018a), promoting the fair
comparison between a wide range of attackers.
## 2\. Related Work
### 2.1. Overview of Shilling Attack and Defense
In the past two decades, researchers have conducted experiments to demonstrate
the feasibility of attacking real-world recommender systems, such as YouTube,
Google Search, Amazon, and Yelp. These experiments have shown that it is
possible to manipulate recommendation systems in practice, resulting in an
increasing focus on this field from both the academic community and industry.
To promote its development, researchers have typically focused on either
shilling attacks or defense mechanisms. With the advancements in deep
learning, the field has seen a notable increase in the effectiveness of these
methods.
### 2.2. Shilling Attack
The objective of a shilling attack is to interfere with the recommendation
strategy of the victim recommender system through a series of measures
(O’Mahony et al., 2005; Mobasher et al., 2007; Deldjoo et al., 2019). The
ultimate goal is to enhance the exposure of a specific target item among all
users after the recommender model is trained. To achieve this objective,
attackers often inject fake users into the historical interactive data, or
training matrix, of the recommender system. If the system is not adequately
protected against these fake users, they will be fed into the recommender
model during training, thus disrupting its recommendation strategy. As a
result, the key challenge of a shilling attack is to
construct the interaction behaviors of the fake users. The interaction
behaviors of the constructed users can generally be classified into three
categories:
* •
Heuristic attacks. Heuristic attacks involve selecting items to create fake
profiles based on subjective inference and existing knowledge. The goal is to
strengthen the connection between fake users and other real users while
evading defense methods and achieving exposure enhancement of the final target
item (Linden et al., 2003; Burke et al., 2005a). Currently, existing methods
include the Random Attack (Kaur and Goel, 2016), Average Attack (Lam and
Riedl, 2004), Bandwagon Attack, and Segment Attack (Burke et al., 2005b). The
Random Attack is a low-knowledge method that selects filler items randomly,
while the Average Attack also selects filler items randomly but rates them at
their average values, which requires more knowledge. In the case of an Average
Attack, the target item is given the highest rating to implement a push
attack. The Segment Attack selects items of the same category as the target
item and maximizes their ratings, aiming to create a stronger correlation
between the fake users and the targeted segment of real users so that the
attack is more effective.
* •
Gradient attacks. Gradient attacks involve relaxing the discrete space to a
continuous space to ensure that the objective function can be optimized by the
gradient to achieve the optimal attack effect. For instance, Li et al. (Li et
al., 2016b; Fang et al., 2020) developed poisoning attacks optimized for
matrix factorization-based recommender systems, while Yang et al. (2017)
developed poisoning attacks optimized for co-visitation rule-based recommender
systems. Additionally, there are gradient attack methods based on Next-item
(Zhang et al., 2020) and graphs (Fang et al., 2018b). However, all gradient
attacks require knowledge of the recommender system type to carry out
model-specific optimization, which limits their generalization. Moreover, to
achieve bi-level optimization (Huang et al., 2021), the discrete interactions
must be relaxed into a continuous space so that they can be adjusted directly
by gradients; information is lost during the subsequent re-discretization,
leading to sub-optimal results and a lack of robustness in the model.
* •
Neural attacks. Neural Attacks, primarily inspired by deep learning (Huang et
al., 2021), generate realistic profiles that have a significant impact on
recommender systems by optimizing the parameters of neural networks to
maximize the objective function. WGAN (Arjovsky et al., 2017) draws on
Wasserstein’s distance, which has shown better empirical performance than the
original GAN (Goodfellow et al., 2020), enabling fake user behavior to closely
emulate real user behavior. AIA (Tang et
al., 2020) reviewed the bi-level optimization problems of the surrogate model
and proposed time-efficient and resource-efficient solutions. AUSH (Lin et
al., 2020) and Legup (Lin et al., 2022) mitigate the randomness of noise-based
generation in common models by constructing templates from known knowledge,
resulting in fake profiles that are harder to detect. When
the attacker’s knowledge is limited to a black box, researchers use RL attack
(Zhang et al., 2022; Fan et al., 2021; Song et al., 2020) to complete the
attack, with the attacker adjusting changes based on feedback given by the spy
user in the victim model. Neural attacks consistently outperform gradient and
heuristic attacks on real datasets.
In addition to the challenges associated with constructing effective shilling
attacks, another emerging issue is the knowledge of the attacker (Burke et
al., 2005a). In today’s world, data security and privacy are increasingly
important to both users and companies. This makes it increasingly challenging
for attackers to obtain the necessary user data to construct effective attack
models. As a result, researchers have begun to consider the attacker’s
knowledge as a key constraint for the attack model. The attacker’s knowledge
can be classified into three categories: _white box_ , _grey box_ , and _black
box_. In a white box attack, the attacker has complete knowledge of the target
recommender model, which includes all the data of the victim model used for
training and the network structure and parameters of the victim model. In a
grey box attack, the attacker can only access part of the training set of the
target model and has no knowledge of the victim model. In a black box attack,
only some spy users are allowed as attack feedback.
In addition to shilling attacks, there are other types of attacks, such as
attacks that involve modifying the real user interaction history (Zeller and
Felten, 2008) or attacks based on federated learning recommender models (Rong
et al., 2022b, a; Zhou et al., 2021; Yi et al., 2022). The former is not very
effective due to the adoption of multiple privacy protection mechanisms by
real recommender platforms, such as email and mobile phone hardware binding.
Therefore, this method is easily detected and defended by the platform and is
insufficient for a large-scale attack. On the other hand, the latter is still
at the theoretical research stage: the proposed models are too basic, and this
kind of method has not yet been deployed by companies. Evaluation criteria for
both kinds of methods therefore still need to be explored by more researchers.
### 2.3. Defense
A defense model can be viewed as a checkpoint responsible for detecting
possible fake users in the data before it is sent to the recommender model.
The defense model eliminates fake users to ensure that the recommendation
results are not interfered with by attackers to the greatest extent possible.
Some defense models attempt to find the law of data distribution from all the
data or obtain the probability of the corresponding label through probability
methods to predict and classify. Currently, the defense direction can be
classified into three categories:
* •
Supervised defense models, which require training data pre-labeled as genuine
or fake. The goal of the model is to learn the relationship between the input and
output variables, so that it can make predictions on new data. The learning
process involves minimizing the difference between the predicted output and
the true output for each example in the training data. In other words, the
model is trained to approximate the mapping from inputs to outputs. In the
direction of recommender defense, supervised methods emerged in the early
exploration of this field, such as CoDetector (Dou et al., 2018), DegreeSAD
(Li et al., 2016a), and BayesDetector (Yang et al., 2018).
* •
Semi-supervised defense models, as explored in (Cao et al., 2013; Wu et al.,
2011), aim to use a minimal amount of labeled fake data while still
maintaining the purpose and accuracy of the supervised method. This is because
attackers typically use only a small amount of data to launch attacks, leading
to an inherent imbalance between true and fake training samples that makes
fully supervised labeling impractical.
* •
Unsupervised defense models, which have been intensively investigated in
recent years, including traditional machine learning models such as
probabilistic models (Aktukmak et al., 2019), statistical anomaly detection
(Bhaumik et al., 2006), PCA (Mehta and Nejdl, 2009), SVM (Zhou et al., 2016),
and K-means (Davoudi and Chatterjee, 2017). More recently, network models have
been used for detection, such as Graph Embedding (Zhang et al., 2021),
Sequential GANs (Shahrasbi et al., 2020), Recurrent Neural Network (Gao et
al., 2020), and Dual-input CNN (Yu et al., 2021).
In addition to the model-based prediction introduced above to realize the
defense of the recommender platform, some scholars also have trained the
recommender model by using adversarial data training (Tang et al., 2019; He et
al., 2018; Liu et al., 2020; Wu et al., 2021) so that the recommender model
can have better generalization in the face of fake data.
### 2.4. Benchmarking for Recommender Attack and Defense
Despite the recent growth in the field of RS security, different studies have
employed different data sets, evaluation methods, and knowledge constraints,
resulting in significant fairness issues when comparing different models. This
has had a negative impact on the steady development of the field. Although
some works have attempted to address these issues in the past, there is still
a need for a comprehensive and unified library to solve the current dilemma.
For instance, in AUSH (Lin et al., 2020), the author provided a code that
integrated multiple attack models, but the workflow was inefficient and
required a significant amount of time for subsequent researchers to study the
code structure. Additionally, the code was not friendly for adding new models
under the same framework and focused more on the study of attack models.
Moreover, the code only provided a limited data set and did not include the
data processing method, making it difficult to test the model on a working
public data set. In SDLib (https://github.com/Coder-Yu/SDLib), some defense
models and attack models were provided, but the attack model was outdated and
did not complete the entire process from attack generation to defense
detection and injection into the recommender model. Furthermore, the code
language used in this work was obsolete. Our framework overcomes the
limitations of previous methods by abstracting each component into relatively
independent modules, ensuring the unity and extensibility of the model. This
allows for better maintenance and development of the framework in the future.
Figure 1. The overall framework of RecAD.

Table 1. Collected data in our library.
| Dataset | #Users | #Items | #Interactions | Density |
|---|---|---|---|---|
| MovieLens-1m* | 5,950 | 3,702 | 567,533 | 0.257% |
| Yelp* | 54,632 | 34,474 | 1,334,942 | 0.070% |
| Amazon-Game* | 3,179 | 5,600 | 38,596 | 0.216% |
| Book-Crossing | 105,284 | 340,557 | 1,149,780 | 0.003% |
| Last.FM | 1,892 | 17,632 | 92,834 | 0.278% |
| Epinions | 116,260 | 41,269 | 188,478 | 0.004% |
| Gowalla | 107,092 | 1,280,969 | 6,442,892 | 0.005% |

* The dataset is used in the experiments, keeping only high-frequency users and items (at least 10 interactions). MovieLens-1m: https://grouplens.org/datasets/movielens/1m/. Yelp: https://www.yelp.com/dataset.
## 3\. The Library-RecAD
The overall framework of RecAD is illustrated in Figure 1. At the bottom, our
library maintains a flat structure for the default hyper-parameters globally,
and the core components are built upon it with automated parameter loading
(see Section 4). Our library abstracts the core modules at three levels: data,
model, and workflow. In the following, we briefly present the designs of these
three core modules.
Figure 2. Component workflow under different attack knowledge.
### 3.1. Data Module
The data module serves as the fundamental part of the entire library, as it
provides essential runtime information such as batches and indicators of
scale. It takes charge of dataset loading, batch generation, and fake data
manipulation.
#### 3.1.1. Dataset Loading
To create an actively-contributed benchmark, it is important to make the
addition of new datasets as easy as possible. Therefore, we have designed the
data module to keep the required dataset formats simple and flexible. At
present, our library only requires the human-readable CSV format with specific
column names to load datasets into explicit or implicit interactions. This
design decision allows users to easily add their own datasets to the library
without having to modify the codebase. Our library already supports multiple
datasets (as shown in Table 1), and we also provide auxiliary functions to
convert datasets from other well-known recommender frameworks, such as RecBole
(Zhao et al., 2021). This provides further flexibility for users to utilize
the datasets that they are familiar with.
#### 3.1.2. Batch Generation
Our library prioritizes seamless integration between datasets, models, and
workflows, which presents challenges for batch generation. To address this, we
design a flexible and generic interface (generate_batch). The interface allows
the caller to provide runtime configuration parameters (_e.g._ , pairwise
sampling; binarizing the ratings) and dispatches itself to the corresponding
behavior. This design reduces the workloads on developers who are attempting
to adapt their data and allows them to focus on providing as much runtime
information as possible.
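A hedged sketch of such a dispatching generate_batch follows; the keyword arguments mirror the examples above (pairwise sampling, rating binarization), while the sampling logic itself is an assumption.

```python
# Sketch of a runtime-configurable generate_batch; the dispatching on keyword
# arguments follows the description above, the sampling details are assumed.
import random

class ToyData:
    def __init__(self, interactions, n_items, batch_size=4):
        self.interactions = interactions  # list of (user, item, rating)
        self.n_items = n_items
        self.batch_size = batch_size

    def generate_batch(self, pairwise=False, binarize=False):
        batch = random.sample(self.interactions, self.batch_size)
        if binarize:
            # Implicit feedback: every observed rating becomes a positive 1.0.
            batch = [(u, i, 1.0) for u, i, _ in batch]
        if pairwise:
            # BPR-style pairs: (user, positive item, random negative item).
            batch = [(u, i, random.randrange(self.n_items)) for u, i, _ in batch]
        return batch
```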
#### 3.1.3. Fake Data Manipulation
In our library, we recognize the importance of addressing the manipulation of
fake data during runtime. Specifically, we must account for both the injection
of fake data from attacker models and the filtering of fake data by defense
models. We address this challenge with unified interfaces named inject_data
and filter_data, respectively. These interfaces are called by the attacker and
defense models to manipulate the dataset.
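The following sketch illustrates what these two interfaces could look like on a toy dataset holder; the concrete signatures are assumptions based on the description above, not RecAD's exact API.

```python
# Illustrative inject_data / filter_data interfaces on a toy dataset holder;
# signatures are assumptions, not RecAD's exact API.
class ToyDataset:
    def __init__(self, interactions):
        self.interactions = list(interactions)  # (user, item, rating) triples

    def inject_data(self, fake_interactions):
        """Called by an attacker model to append fake user interactions."""
        self.interactions.extend(fake_interactions)

    def filter_data(self, keep_mask):
        """Called by a defense model to drop interactions flagged as fake."""
        self.interactions = [
            x for x, keep in zip(self.interactions, keep_mask) if keep
        ]
```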
Figure 3. The models that are supported by RecAD.
### 3.2. Model Module
The model implementation is the most versatile part of the library, and we
offer maximum flexibility to accommodate different approaches. To account for
the similarities and differences between models, we introduce a general base
model and its successors: the victim, attacker, and defense models. Figure 3
presents the models that have been implemented.
#### 3.2.1. Base Model
We don’t provide framework-level abstractions for model optimization. Instead,
the models are responsible for their own single-epoch training and evaluation,
which can be implemented through a set of auxiliary functions provided by the
library. This design choice is aimed at reducing the complexity of the
framework and enabling the integration of a wide range of models, without
requiring modification of the framework-level abstractions for each individual
model. To facilitate this, we use unified interfaces (train_step, test_step)
that enable the callers to initiate the training or evaluation process of the
models.
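In code, this contract reduces to two abstract methods, as in the following sketch (the exact signatures are assumptions based on the description above):

```python
# Sketch of the single-epoch training/evaluation contract; the framework only
# launches epochs, while each model owns its own logic. Signatures are assumed.
import abc

class BaseModel(abc.ABC):
    @abc.abstractmethod
    def train_step(self, data, **config):
        """Run one training epoch on `data` and return logged metrics."""

    @abc.abstractmethod
    def test_step(self, data, **config):
        """Run one evaluation pass on `data` and return metrics."""
```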
#### 3.2.2. Victim Model
Victim models are recommender models, and the library provides a unified
interface for training and testing them. This makes it easy to integrate any
victim model into the library without the need for modification of the core
framework.
#### 3.2.3. Attacker Model
In our library, the training of the attacker model shares the same interface
with victim models (_i.e._ train_step). After training, the attacker model
generates the fake data through a unified interface (generate_fake) and then
forwards the contaminated data to the next module. Since the full set of the
dataset is not necessarily exposed (_e.g._ Gray box attacking in Figure 2),
generate_fake should explicitly receive the target dataset as a parameter.
#### 3.2.4. Defense Model
The defense model is trained on the attacked data through the same training
interface. The objective of the model is to output a filtered dataset with
fake data removed. Our library summarizes a unified interface generate_filter
to wrap the implementation details of each defense model.
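Building on the BaseModel sketch above, the two specialized interfaces can be sketched as follows; the fake-profile and filtering logic are deliberately trivial stand-ins, and everything beyond the generate_fake and generate_filter names is an assumption.

```python
# Hedged sketch of attacker/defense specializations on top of BaseModel.
# generate_fake receives the exposed (possibly partial) dataset explicitly,
# so a gray-box setting can restrict what the attacker sees.
import random

class ToyAttacker(BaseModel):
    def train_step(self, data, **config):
        return {}  # heuristic stand-in: nothing to learn

    def test_step(self, data, **config):
        return {}

    def generate_fake(self, exposed_dataset, target_item=0, n_fake_users=50):
        """Fake users push the target item and add one random filler item."""
        items = [i for _, i, _ in exposed_dataset.interactions]
        fakes = []
        for u in range(n_fake_users):
            uid = f"fake_{u}"
            fakes.append((uid, target_item, 5.0))
            fakes.append((uid, random.choice(items), 5.0))
        return fakes

class ToyDefender(BaseModel):
    def train_step(self, data, **config):
        return {}

    def test_step(self, data, **config):
        return {}

    def generate_filter(self, attacked_dataset):
        """Boolean mask marking interactions predicted to be real (toy rule)."""
        return [not str(u).startswith("fake_")
                for u, _, _ in attacked_dataset.interactions]
```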
### 3.3. Workflow Module
This module is the corresponding abstraction of different attack knowledge
(Figure 2). The workflow module holds the instantiations of the data module
and model module, controlling the exposure of data and the interaction of
modules. It also contains the boilerplate code for the training loop and
evaluation callbacks (_e.g._ , early stop; report after training).
#### 3.3.1. Data Exposure
The data exposure level for different models varies depending on the attack
knowledge settings and the running stages (as shown in Figure 2). For
instance, the attacker model may be exposed separately to full, partial, or
zero training data. Similarly, the victim model may be trained on clean data
initially and later re-trained on the contaminated data during the attack
process. The workflow module in our library is responsible for constructing
the appropriate data flow according to the attack knowledge and ensuring that
no accidental data leakage occurs. This way, our library provides a flexible
and secure environment for implementing and testing various attack and defense
models under different settings.
#### 3.3.2. Module Interaction
The interactions between modules vary between attacks. In a white-box attack,
the model has direct access to all the training data, whereas, in a black-box
attack, the model receives feedback from the victim without any access to the
training data. For workflows (Tang et al., 2020; Lin et al., 2022) where no
defense model is involved, the fake data generated by the attacker model flows
directly into the victim’s training without filtering. The workflow module
arranges the dependencies of modules and prevents any inappropriate
interactions between them.
#### 3.3.3. Training & Evaluation
In order to better control the data exposure and module interaction, we give
the workflow module the responsibility for launching the training and
evaluation of the contained models. The workflow module contains the
boilerplate codes for wrapping the training loop outside the models’
train_step. Also, we design a hooking mechanism to provide flexibility for
models to set up their evaluation callbacks. This allows models to define
their own evaluation metrics to evaluate the model’s performance at different
stages of the training process.
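A minimal sketch of this division of labor follows, with an assumed hook event name (after_epoch); the real hook events and signatures may differ.

```python
# Workflow boilerplate sketch: the workflow owns the epoch loop and fires
# registered evaluation callbacks (the hooking mechanism described above).
# The hook event name "after_epoch" is an illustrative assumption.
class ToyWorkflow:
    def __init__(self, model, data, epochs=10):
        self.model, self.data, self.epochs = model, data, epochs
        self.hooks = {"after_epoch": []}

    def register_hook(self, event, fn):
        self.hooks[event].append(fn)

    def execute(self):
        for epoch in range(self.epochs):
            logs = self.model.train_step(self.data)
            for fn in self.hooks["after_epoch"]:
                fn(epoch, logs)  # e.g., early stopping or report callbacks
```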
Figure 4. A code snippet of module instantiations of RecAD.
Figure 5. An illustration of lazy instantiation.
## 4\. USAGE GUIDELINE OF THE LIBRARY
In the following two parts, we first show the typical usage to instantiate the
existing modules of our library, then detail the steps to extend our library
with a new implementation.
### 4.1. Module Instantiations
Attacking a recommender system often involves using multiple datasets and
machine learning models, which makes the training and testing process more
complex than for regular recommender systems. Our library simplifies this
process by exposing the necessary modules to users and providing a unified
interface called _from_config_ for instantiating them (Figure 4). Two kinds of
parameters may be needed by _from_config_: hyper-parameters and runtime
parameters.
#### 4.1.1. Hyper-parameters
Our library employs a hash table to store all modules' default
hyper-parameters together and offers global access across the program. While
instantiating, our library automatically loads default parameters on the fly
from the hash table and updates them with the keyword arguments passed by the user. The
decoupling of default hyper-parameters and the actual module implementation
facilitates a quick overview of configurable parameters for the user.
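A hedged sketch of this pattern follows: a flat global table of defaults that from_config overlays with user keyword arguments. The parameter names and values are placeholders, not RecAD's actual defaults.

```python
# Flat global table of default hyper-parameters plus a from_config overlay;
# all names and values here are illustrative placeholders.
DEFAULTS = {
    "lightgcn": {"embed_dim": 64, "n_layers": 3, "lr": 1e-3},
    "ml1m": {"batch_size": 2048, "implicit": True},
}

class LightGCNStub:
    def __init__(self, **hp):
        self.hp = hp

    @classmethod
    def from_config(cls, name="lightgcn", **overrides):
        hp = dict(DEFAULTS[name])  # load stored defaults on the fly
        hp.update(overrides)       # user keyword arguments win
        return cls(**hp)

model = LightGCNStub.from_config(lr=5e-4)  # overrides only the learning rate
```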
#### 4.1.2. Runtime Parameters
Runtime parameters are parameters that are not settled until runtime. For
example, the model module in Figure 4 normally needs the numbers of users and
items to create its embeddings when instantiating. Due to data injection or
filtering by the attacker and defense models, the actual numbers of users and
items are not known before runtime. The dependency between the model and data
modules is clear, however, and it is burdensome to ask the user to manually
pass the required instances in the program. Hence, we
implement lazy instantiation (Figure 5) to make runtime parameters transparent
at the user level. The module is not actually instantiated when the user calls
_from_config_ if the needed runtime parameters are not passed. Instead,
the workflow will sort out the dependencies between modules and automatically
fill in the required runtime parameters to complete the instantiation. This
decouples the instantiation of modules from the availability of runtime
parameters, making the library more flexible and adaptable to different
scenarios.
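The idea can be sketched as a thin placeholder that defers construction until the workflow supplies the missing runtime parameters; the class and method names here are illustrative, not RecAD's.

```python
# Lazy instantiation sketch: from_config hands back a placeholder when runtime
# parameters (e.g., user/item counts) are unknown; the workflow fills them in.
class Lazy:
    def __init__(self, cls, **known):
        self.cls, self.known = cls, known

    def materialize(self, **runtime):
        return self.cls(**self.known, **runtime)

class VictimStub:
    def __init__(self, embed_dim, n_users, n_items):
        self.shape = (n_users, n_items, embed_dim)

pending = Lazy(VictimStub, embed_dim=64)   # user/item counts not settled yet
victim = pending.materialize(n_users=5950, n_items=3702)  # filled at runtime
```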
### 4.2. Module Extension
In our library, we provide the base class for all the core modules: BaseData,
BaseModel, and BaseWorkflow. We require that an extended module be a subclass
of the corresponding base class so that the necessary abstract interfaces
can be called properly.
Table 2. Overall attack performance on three recommendation datasets. HR@k is reported for k ∈ {10, 20, 50, 100} on ML-1M, Yelp, and Amazon (left to right).
| Attack Method | Attack Knowledge | ML-1M HR@10 | HR@20 | HR@50 | HR@100 | Yelp HR@10 | HR@20 | HR@50 | HR@100 | Amazon HR@10 | HR@20 | HR@50 | HR@100 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No Attacker | None | 0.0050 | 0.0109 | 0.0297 | 0.0656 | 0.0114 | 0.0190 | 0.0375 | 0.0630 | 0.0000 | 0.0000 | 0.0003 | 0.0016 |
| RandomAttacker | White Box | 0.0050 | 0.0082 | 0.0228 | 0.0457 | 0.0078 | 0.0112 | 0.0214 | 0.0362 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| SegmentAttack | White Box | 0.0069 | 0.0123 | 0.0288 | 0.0630 | 0.0057 | 0.0083 | 0.0153 | 0.0258 | 0.0397 | 0.0520 | 0.0675 | 0.0832 |
| BandwagonAttack | White Box | 0.0059 | 0.0119 | 0.0267 | 0.0592 | 0.0066 | 0.0114 | 0.0257 | 0.0431 | 0.0050 | 0.0205 | 0.0523 | 0.0854 |
| AverageAttack | White Box | 0.0016 | 0.0044 | 0.0167 | 0.0400 | 0.0053 | 0.0090 | 0.0169 | 0.0284 | 0.0085 | 0.0170 | 0.0463 | 0.0914 |
| WGAN | White Box | 0.0023 | 0.0060 | 0.0149 | 0.0340 | 0.0143 | 0.0177 | 0.0254 | 0.0344 | 0.1646 | 0.1788 | 0.2043 | 0.2226 |
| AIA | Gray Box 20% data | 0.0078 | 0.0180 | 0.0459 | 0.1007 | 0.0187 | 0.0273 | 0.0465 | 0.0686 | 0.0441 | 0.0873 | 0.4278 | 0.4839 |
| AUSH | Gray Box 20% data | 0.0071 | 0.0151 | 0.0434 | 0.0945 | 0.0135 | 0.0217 | 0.0393 | 0.0617 | 0.0583 | 0.1170 | 0.4392 | 0.4805 |
| Legup | Gray Box 20% data | 0.0094 | 0.0130 | 0.0283 | 0.0471 | 0.0068 | 0.0099 | 0.0162 | 0.0242 | 0.1847 | 0.2015 | 0.2286 | 0.2566 |
#### 4.2.1. General Module
Two abstract methods must be implemented for all the modules:
* •
from_config: users pass arguments to this method to instantiate a new module.
Our library has already implemented the argument sanity checking and
overwriting the default hyper-parameters in the father class. A new module
should assign the default hyper-parameters in this method.
* •
info_describe: modules interact through this method. The method should return
a hash table with the named variable that this module can expose publicly.
#### 4.2.2. Core Modules
The core modules have specialized interfaces that need to be implemented in
addition; most of them have been discussed in Section 3, and a minimal
extension sketch follows the list below.
* •
Data Module: The most important interface for this module is generate_batch.
The interface should take the caller’s keyword arguments as the input, and
return the correct batches of the dataset for later training or testing.
* •
Model Module: Right now, three kinds of models are considered: victim model,
attacker model, and defense model. They are all required to be implemented
with two interfaces: train_step and test_step to perform one-epoch training or
testing. Besides, for the attacker model and defense model, generate_fake and
generate_filter need to be implemented, respectively.
* •
Workflow Module: An interface named execute should be implemented for users to
explicitly launch the whole workflow. Inside the interface, the implementor
should correctly instantiate and arrange the modules.
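Putting the pieces together, a new attacker could be plugged in roughly as follows; this is a hedged end-to-end sketch under the contract above, and every concrete class name, field, and default value is an illustrative assumption.

```python
# Extension sketch: a new heuristic attacker under the abstract contract.
# All class names, fields, and defaults are illustrative assumptions.
class MyRandomAttacker(BaseModel):
    def __init__(self, n_fake_users, filler_size):
        self.n_fake_users, self.filler_size = n_fake_users, filler_size

    @classmethod
    def from_config(cls, **kwargs):
        hp = {"n_fake_users": 50, "filler_size": 30}  # default hyper-parameters
        hp.update(kwargs)  # sanity checking/overwriting live in the base class
        return cls(**hp)

    def info_describe(self):
        # Hash table of named variables exposed publicly to other modules.
        return {"n_fake_users": self.n_fake_users}

    def train_step(self, data, **config):
        return {}  # heuristic attacker: nothing to train

    def test_step(self, data, **config):
        return {}

    def generate_fake(self, exposed_dataset):
        ...  # craft fake profiles, e.g., as in the attacker sketch above
```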
## 5\. Experiments
This section showcases the application of RecAD by implementing various
representative attackers and detection models. Through a comparison of the
outcomes produced by these models, valuable insights can be derived.
Figure 6. Performances of attackers before and after detection by PCASelectUser in the ML-1M dataset.

Table 3. Defense performance against five representative shilling attackers. AIA, Legup, and WGAN use Gray Box 20% data; RandomAttacker and SegmentAttacker use White Box. P, R, and F1 denote Precision, Recall, and F1-score.
| Detect Method | Data Label | AIA P | AIA R | AIA F1 | Legup P | Legup R | Legup F1 | WGAN P | WGAN R | WGAN F1 | Random P | Random R | Random F1 | Segment P | Segment R | Segment F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DegreeSAD | True Data | 0.782 | 0.845 | 0.812 | 0.782 | 0.841 | 0.810 | 0.782 | 0.840 | 0.810 | 0.780 | 0.843 | 0.810 | 0.781 | 0.840 | 0.810 |
| | Fake Data | 0.720 | 0.630 | 0.672 | 0.717 | 0.632 | 0.672 | 0.716 | 0.632 | 0.671 | 0.718 | 0.627 | 0.669 | 0.715 | 0.631 | 0.670 |
| CoDetector | True Data | 0.898 | 0.861 | 0.879 | 0.887 | 0.885 | 0.886 | 0.897 | 0.873 | 0.885 | 0.908 | 0.877 | 0.892 | 0.904 | 0.880 | 0.892 |
| | Fake Data | 0.796 | 0.846 | 0.820 | 0.840 | 0.843 | 0.841 | 0.811 | 0.844 | 0.827 | 0.809 | 0.854 | 0.831 | 0.823 | 0.857 | 0.840 |
| BayesDetector | True Data | 0.943 | 0.946 | 0.945 | 0.945 | 0.945 | 0.945 | 0.936 | 0.943 | 0.940 | 0.944 | 0.936 | 0.940 | 0.938 | 0.943 | 0.940 |
| | Fake Data | 0.915 | 0.910 | 0.912 | 0.914 | 0.913 | 0.913 | 0.909 | 0.899 | 0.904 | 0.896 | 0.908 | 0.902 | 0.909 | 0.902 | 0.905 |
| SemiSAD | True Data | 0.895 | 1.000 | 0.945 | 0.911 | 1.000 | 0.954 | 0.921 | 1.000 | 0.959 | 0.903 | 1.000 | 0.949 | 0.892 | 1.000 | 0.943 |
| | Fake Data | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| PCASelectUser | True Data | 0.953 | 0.985 | 0.969 | 0.954 | 0.986 | 0.970 | 0.954 | 0.986 | 0.970 | 0.952 | 0.983 | 0.967 | 0.952 | 0.983 | 0.967 |
| | Fake Data | 0.100 | 0.034 | 0.050 | 0.170 | 0.057 | 0.086 | 0.170 | 0.057 | 0.086 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| FAP | True Data | 0.963 | 0.992 | 0.977 | 0.970 | 1.000 | 0.985 | 0.970 | 1.000 | 0.985 | 0.872 | 0.296 | 0.442 | 0.953 | 0.658 | 0.728 |
| | Fake Data | 0.526 | 0.184 | 0.272 | 1.000 | 0.325 | 0.491 | 1.000 | 0.343 | 0.511 | 0.920 | 0.647 | 0.967 | 0.968 | 0.969 | 0.961 |
### 5.1. Comparison of Attackers
We illustrate the performance of all attackers in three recommendation
datasets in Table 2. The goal of all attackers is to push the target items to
higher rankings, _i.e._ , larger HR@k.
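For reference, HR@k (hit ratio at k) can be computed as in the sketch below: the attack counts as a hit for a user when the target item appears in that user's top-k list, averaged over users. This is the standard definition; the function name and data layout are our own.

```python
# Hit ratio at k: fraction of users whose top-k list contains the target item.
def hit_ratio_at_k(top_k_lists, target_item, k=10):
    """top_k_lists maps user id -> ranked list of recommended item ids."""
    hits = sum(target_item in ranked[:k] for ranked in top_k_lists.values())
    return hits / len(top_k_lists)

# Example: target in 2 of 4 users' top-10 lists -> HR@10 = 0.5
print(hit_ratio_at_k({1: [7, 3], 2: [9], 3: [3, 7], 4: [2]}, target_item=3))
```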
The two gray box methods, AIA and AUSH, exhibit the best performances across
all metrics and datasets, which attests to the efficacy of neural network-
based approaches. In contrast, the performance of Legup is less consistent.
For instance, Legup displays optimal performance with respect to HR@10 in
Amazon, whereas it experiences a decrease in rank, falling to the middle-lower
range, with respect to HR@20, HR@50, and HR@100. Additionally, in Yelp, Legup
performs inadequately across all metrics, and its weak robustness is further
illustrated in Figure 6. The Legup model has been observed to exhibit unstable
performance, which can be attributed to its training methodology that involves
the simultaneous use of three distinct models. This approach has resulted in
the same training instability issues that are commonly associated with GANs.
Specifically, the use of multiple models in training can lead to a lack of
consistency in the learned representations across the different models. This,
in turn, can create conflicts in the optimization process and cause the
model’s performance to become highly dependent on the initialization and
training procedures.
The heuristic method RandomAttacker exhibits the poorest performance across
all metrics and datasets, even when compared to a situation in which no
attacker is utilized. In other words, RandomAttacker not only fails to enhance
the ranking of target items but also results in a lowered ranking for those
items. Because heuristic attacks are highly randomized, the intended attack
target can be skewed by random effects, so the inserted fake users end up
affecting other vulnerable users more than the intended target.
In addition, other methods, including SegmentAttack, BandwagonAttack,
AverageAttack, and WGAN, also occasionally result in a poorer ranking for the
targeted items. Consequently, there remains substantial room for the
development of effective attacker methods in recommender systems. Currently,
the existing methods of attack are characterized by significant limitations,
such as their capacity to target only specific structures of recommendation
algorithms or their limited ability to transfer attacks to models in other
domains.
### 5.2. Comparison of Defenders
We choose three supervised methods (DegreeSAD, CoDetector, and BayesDetector),
one semi-supervised method (SemiSAD), and two unsupervised methods
(PCASelectUser and FAP) to act as defenders, tasked with protecting the victim
model from five attacker models. The goal of these experiments is to evaluate
several defense methods using our framework process.
We present three evaluation metrics for the predicted label results, namely
F1-score, Recall, and Precision. To evaluate the performance of our model, we
split our data into two categories: True Data and Fake Data. True Data refers
to the original real data used to train the recommender system, while Fake
Data represents the fake data generated by attackers. We have provided three
evaluation metrics for each category instead of treating them as a whole, as
we believe that an effective detector should be able to not only successfully
predict fake data but also avoid misclassifying real data. Hence, we hope that
the values for the three metrics corresponding to both types of data are as
high as possible, indicating that the detector models have better defensive
performances from two dimensions.
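These per-class scores correspond to the standard per-label precision, recall, and F1 computation; a small sketch using scikit-learn follows (the label encoding is our assumption).

```python
# Two-sided label evaluation sketch: per-class precision/recall/F1 for the
# True Data (0) and Fake Data (1) classes; the label encoding is an assumption.
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 0, 0, 1, 1, 1, 1, 0]  # ground truth: 0 = real user, 1 = fake user
y_pred = [0, 0, 1, 1, 1, 0, 1, 0]  # detector output

# One (precision, recall, F1) triple per class, in label order [0, 1].
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=[0, 1])
print("True Data:", p[0], r[0], f1[0])
print("Fake Data:", p[1], r[1], f1[1])
```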
Based on the data presented in Table 3, it can be observed that although the
three supervised methods may not exhibit the highest performance, they
demonstrate consistent performance against various attacks. Conversely, the
semi-supervised method is not effective in defending against attacks due to
the requirement of more data for training and evaluation, which is restricted
by the attack budget in our approach. Consequently, the semi-supervised method
misclassifies both real and fake data. Among the unsupervised methods, FAP
shows promising results for certain attacks and outperforms other defense
methods, but still displays certain limitations in some metrics.
### 5.3. Robustness of Attackers Encountering Detection
For illustration, we visualize the performance comparison before and after
detection by PCASelectUser. Due to space limitations, we only present the
results in the ML-1M dataset.
From Figure 6, we can observe that the performance of all attack methods will
vary after detection. In heuristic methods, Bandwagon exhibits a notable
difference in performance before and after detection. After detection, there
is a marked decrease in all four HR metrics. The potential reason is that
Bandwagon selects the popular items as users’ fakes preferences, where this
pattern is relatively easier to identify, making the generated data easier to
be detected. In the neural methods, Legup demonstrates a similar phenomenon
with a more significant performance difference before and after detection. In
the HR@10 metric, Legup outperforms all other attacker methods before the
detection, however, it has the worst performance after the detection. On
HR@20, HR@50, and HR@100, it remains to be the worst one after detection. One
possible reason for this is that Legup’s optimization objective is more
complex and it includes a greater number of modules compared to other attacker
methods. Both the two attackers show poor robustness before and after
detection, while other methods exhibited relatively high robustness after
detection.
Counterintuitively, the results of AverageAttack and WGAN demonstrate an
inverse effect: the targeted items rank higher after detection, _i.e._ , the
detection process helps the attackers achieve their purposes. There are two
potential explanations. The first possibility is that this method generates
users that are virtually indistinguishable from real ones, rendering detection
modules theoretically unable to identify them. The second explanation is that
the mechanism by which the method generates fake users was not taken into
account by the detection module, allowing it to evade detection by this
detection method.
### 5.4. Comparison of Defense Evaluation
In light of the results presented in Figure 6 and Table 3, we have observed
that relying solely on either injection-based or label prediction evaluation
for assessing the performance of defense models may not be adequate. For
instance, in the case of the WGAN method, the injection-based evaluation
indicates that the exposure rate of the target item is even higher after
defense than under the direct attack, while the label prediction evaluation suggests
that the current defense approach is more effective in true and false
prediction. Thus, we urge future researchers in this field to use both
evaluation methods to ensure the practical effectiveness of defense models.
Our framework supports both evaluation processes, eliminating the need for
researchers to repeat work.
Figure 7. Attack performance of AIA and Legup with different proportions of
data.
### 5.5. Effect of Knowledge of the Attackers
To investigate the impact of the scale of models’ knowledge, _i.e._ , the
quantity of training data for attacker models, on the results, we visualize
the performance of two neural models (AIA and Legup) as the amount of training
data varies. The results are shown in Figure 7. From the results, we observe
that the performance of both models increases as the amount of data increases,
albeit with a slight fluctuation for AIA at the 50% setting. This suggests
providing the attack model with more knowledge. However, when exploring novel
attack algorithms, we must also consider placing constraints on the knowledge
available to them, since there is a trade-off between the scale of a model's
knowledge and the effectiveness of the attack. Striking the right balance,
i.e., managing the scale of the attacker's knowledge while maintaining attack
efficacy, remains an open question for future research.
## 6\. Conclusion and future work
Recommender systems have gained significant attention in recent years.
However, the effectiveness and security of these systems have also become
major concerns, as attackers may attempt to manipulate the recommendations for
their own benefit. To promote research in this important field, we introduce
RecAD, a new recommender library that provides a variety of benchmark
datasets, evaluation settings, attackers, and defense models. By using RecAD,
researchers can simulate a range of real-world scenarios and evaluate the
robustness of different recommender systems against a variety of potential
attacks.
In addition to advancing attacks and defenses on traditional models, we also
acknowledge the transformative impact of large language models in the field of
recommender systems (Wang et al., 2023a; Bao et al., 2023; Zhang et al.,
2023). Despite their powerful generative capabilities, these models are also
susceptible to various attacks (Wang et al., 2023b). Therefore, our future
research will also focus on the development of attack and defense mechanisms
specifically tailored to large language model-based recommendations. To this
end, we call upon researchers to collaborate and establish
recommender system attack and defense methods that better align with the
evolving needs of the field, enhancing the security and robustness of these
models.
## References
* Aktukmak et al. (2019) Mehmet Aktukmak, Yasin Yilmaz, and Ismail Uysal. 2019\. Quick and accurate attack detection in recommender systems through user attributes. In _Proceedings of the 13th ACM Conference on Recommender Systems_. ACM, 348–352.
* Arjovsky et al. (2017) Martin Arjovsky, Soumith Chintala, and Léon Bottou. 2017\. Wasserstein Generative Adversarial Networks. In _Proceedings of the 34th International Conference on Machine Learning_ , Doina Precup and Yee Whye Teh (Eds.), Vol. 70. PMLR, 214–223.
* Bao et al. (2023) Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023\. TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation.
# Simultaneous false discovery bounds for invariant causal prediction
Jinzhou Li
###### Abstract
Invariant causal prediction (ICP, Peters et al., (2016)) provides a novel way
to identify causal predictors of a response by utilizing heterogeneous data
from different environments. One advantage of ICP is that it guarantees to
make no false causal discoveries with high probability. Such a guarantee,
however, can be too conservative in some applications, resulting in few or no
discoveries. To address this, we propose simultaneous false discovery bounds for ICP, which provide users with extra flexibility in exploring causal predictors and can extract more informative results. These additional
inferences come for free, in the sense that they do not require additional
assumptions, and the same information obtained by the original ICP is
retained. We demonstrate the practical usage of our method through simulations
and a real dataset.
## 1 Introduction
Discovering causal predictors of the response of interest is usually the
primary goal of scientific research. Based solely on observational data, the
causal relationship might be unidentifiable. In such cases, heterogeneous data
from different environments can be helpful for further identifiability and
thus is a valuable source for causal discovery. Under the multi-environments
setting, Peters et al., (2016) proposed a novel method, called invariant
causal prediction (ICP), to identify causal predictors of a response by
exploiting the invariance of the conditional distribution of the response
given its causal predictors across environments. Compared to other causal
discovery algorithms, one main advantage of ICP is that it provides a
statistical confidence guarantee for its output $\widehat{S}^{\text{ICP}}$:
$\displaystyle P\left(\widehat{S}^{\text{ICP}}\subseteq S^{*}\right)\geq
1-\alpha,$
where $\alpha\in(0,1)$ is a nominal level and $S^{*}$ denotes the set of true
causal predictors. That is, ICP ensures that all its discoveries are true
causal predictors with a probability larger than $1-\alpha$. This is also
known as the familywise error rate control guarantee.
The original ICP approach has been generalized to the non-linear setting (Heinze-
Deml et al.,, 2018), sequential data (Pfister et al.,, 2019), and
transformation models (Kook et al.,, 2023). All these methods inherit the
familywise error rate control guarantee. However, controlling the familywise
error rate can be too conservative in many applications, especially when
causal predictors are highly correlated with non-causal ones. In such cases,
ICP may result in few or no causal discoveries. To address this, Heinze-Deml
et al., (2018) proposed the so-called defining sets, which are the smallest
sets guaranteeing to contain at least one true causal predictor. They are
especially useful in the case where ICP returns an empty set.
In this paper, we propose a method to obtain a simultaneous false discovery bound (Genovese and Wasserman, 2006; Goeman and Solari, 2011; Goeman et al., 2021) for ICP. Specifically, let $\mathcal{N}^{*}\subseteq[m]=\{1,\dots,m\}$ be the index set of non-causal predictors among the $m$ predictors. The simultaneous false discovery bound is a function $t_{\alpha}:2^{[m]}\rightarrow\{0,\dots,m\}$, where $2^{[m]}$ denotes the power set of $[m]$, such that
}R\subseteq[m])\geq 1-\alpha.$
As a result, one can freely check any set $R$ of interest, and a high
probability upper bound of the false discoveries can be obtained by
$t_{\alpha}(R)$. This contains the familywise error rate control and the
defining sets of Heinze-Deml et al., (2018) as special cases. In particular,
by searching for the largest set $R$ whose corresponding $t_{\alpha}(R)$ is
$0$, we obtain a set with familywise error rate control; by searching for the
smallest sets whose corresponding $t_{\alpha}(R)\leq|R|-1$, we obtain all
defining sets.
Extra causal information can be extracted using the simultaneous false
discovery bound. As a quick illustration, for a simulated dataset with nine
predictors shown in Section 4.1, no information is obtained by the original
ICP approach as it returns zero discoveries. However, more causal information
can be extracted by using our simultaneous false discovery bounds. For
example, we can know that with a probability larger than $0.95$, there is at
least one causal predictor in $\\{X_{3},X_{7}\\}$, at least two causal
predictors in $\\{X_{3},X_{5},X_{7},X_{8}\\}$, and at least four causal
predictors in all nine predictors. More simulations and a real data application can be found in Section 4.
To obtain the simultaneous false discovery bound for ICP, we proceed by
considering a multiple testing problem (see (6)) that directly tests causal
predictors. We show that the original ICP approach is equivalent to directly
comparing certain p-values (see (8)) to the significance level. Then, by
generalizing the null hypothesis (6) and p-value (7) to (10) and (11),
respectively, we propose a simultaneous false discovery upper bound (12). This
upper bound can be seen as a special case of the closed testing method by
Goeman and Solari, (2011), but in our case, no closed testing adjustment is
needed due to the specific form of the p-values. In comparison to the original
ICP approach, our method provides additional causal information at the cost of
testing more hypotheses, without introducing any additional statistical
assumptions. We also discuss the idea of controlling the false discovery rate
for ICP.
## 2 A brief recap of invariant causal prediction
For the sake of simplicity, we use linear models (Peters et al.,, 2016) for
illustration. Consider the following linear structural equation model:
$Y^{e}\leftarrow\beta_{1}X^{e}_{1}+\cdots+\beta_{m}X^{e}_{m}+\epsilon^{e}=\beta_{S^{*}}^{T}X^{e}_{S^{*}}+\epsilon^{e},$
(1)
where $S^{*}=\{i\in[m]:\beta_{i}\neq 0\}$ is the set of causal predictors of $Y$, $e\in\mathcal{E}$ denotes one environment, $\epsilon^{e}\sim F_{\epsilon}$, $\epsilon^{e}\perp\!\!\!\perp X^{e}_{S^{*}}$, and $X^{e}_{i}$ can have an arbitrary distribution for different $e$. The goal is to estimate the set of
causal predictors $S^{*}$.
The key idea of ICP is to test whether the conditional distribution of $Y^{e}$
is invariant across environments given predictors $X^{e}_{S}$ for some
$S\subseteq[m]$. This leads to the following null hypothesis:
$\displaystyle H_{0,S}(\mathcal{E}):$ $\displaystyle\text{there exists a distribution }F_{\epsilon}\text{ and a vector }\gamma\in\mathbb{R}^{m}\text{ with support }S\text{ such that}$ (2) $\displaystyle\text{for all }e\in\mathcal{E},\ Y^{e}=\gamma^{T}X^{e}+\epsilon^{e},\text{ where }\epsilon^{e}\sim F_{\epsilon}\text{ and }\epsilon^{e}\perp\!\!\!\perp X^{e}_{S}.$
Under the assumptions that there is no latent variable and $\mathcal{E}$ does
not contain the environment where $Y$ is intervened, it is clear that
$H_{0,S^{*}}(\mathcal{E})$ is true. However, there may be sets other than $S^{*}$ for which $H_{0,S}(\mathcal{E})$ is true, causing an identifiability
issue. To this end, Peters et al., (2016) defined the so-called identifiable
causal predictors under $\mathcal{E}$ as follows:
$\displaystyle S(\mathcal{E})=\bigcap_{S:H_{0,S}(\mathcal{E})\text{ is true
}}S.$ (3)
Note that $S(\mathcal{E})\subseteq S^{*}$ as $H_{0,S^{*}}(\mathcal{E})$ is
true.
ICP estimates the causal predictors by using the sample version of
$S(\mathcal{E})$ in two steps: (i) For every $S\subseteq[m]$, test
$H_{0,S}(\mathcal{E})$ at level $\alpha\in(0,1)$. (ii) Obtain the selected
causal predictors by
$\widehat{S}^{\text{ICP}}(\mathcal{E})=\underset{S:H_{0,S}(\mathcal{E})\text{
is not rejected }}{\bigcap}S.$ (4)
The familywise error rate guarantee of $\widehat{S}^{\text{ICP}}(\mathcal{E})$
holds at level $\alpha$ because
$\displaystyle P\left(\widehat{S}^{\text{ICP}}(\mathcal{E})\subseteq
S^{*}\right)=P\left(\underset{S:H_{0,S}(\mathcal{E})\text{ is not rejected
}}{\bigcap}S\subseteq S^{*}\right)\geq P(H_{0,S^{*}}(\mathcal{E})\text{ is not
rejected })\geq 1-\alpha.$ (5)
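For concreteness, the two-step estimator (4) can be sketched in a few lines of R. The function `invariance_pvalue` below is a placeholder for any valid test of $H_{0,S}(\mathcal{E})$ (Peters et al., (2016) describe several); it is an assumption of this sketch, not part of the original proposal.

```r
# Minimal sketch of the ICP estimator (4), assuming a user-supplied function
# invariance_pvalue(S, data) that returns a valid p-value for H_{0,S}(E).
icp_estimate <- function(data, m, invariance_pvalue, alpha = 0.05) {
  subsets <- lapply(0:(2^m - 1), function(k) which(bitwAnd(k, 2^(0:(m - 1))) > 0))
  p_vals  <- sapply(subsets, invariance_pvalue, data = data)
  accepted <- subsets[p_vals > alpha]           # sets S where H_{0,S} is not rejected
  if (length(accepted) == 0) return(integer(0)) # every set rejected: empty output
  Reduce(intersect, accepted)                   # intersection over accepted sets, Eq. (4)
}
```

The list `subsets` and vector `p_vals` computed here are reused in the sketches below.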
## 3 Simultaneous false discovery bounds for ICP
### 3.1 A multiple testing formulation of ICP
Instead of directly estimating $S(\mathcal{E})$ as in (4), another natural way
is to form a multiple testing problem:
$\displaystyle H^{*}_{0,i}:i\in\mathcal{N}^{*},\quad i\in[m],$ (6)
where $\mathcal{N}^{*}=[m]\setminus S^{*}$. To this end, we propose the
following p-value for $H^{*}_{0,i}$:
$\displaystyle p^{*}_{i}=\max_{S\subseteq[m]\setminus\\{i\\}}p_{S},$ (7)
where $p_{S}$ is a valid p-value for testing $H_{0,S}(\mathcal{E})$ (see (2)),
that is, $P_{H_{0,S}(\mathcal{E})}(p_{S}\leq c)\leq c$ for any $c\in(0,1)$.
Note that it can be obtained in the first step of ICP. All p-values in this
paper depend on the environment set $\mathcal{E}$, but we omit this dependence
for simplicity of notation. The validity of $p^{*}_{i}$ is shown in the
following proposition.
###### Proposition 3.1.
For any $c\in(0,1)$, $P_{H^{*}_{0,i}}(p^{*}_{i}\leq c)\leq c$.
To connect to the original ICP approach, we need to introduce a slightly more
complicated p-value. For a given $\tau\in(0,1)$, let
$\displaystyle\tilde{p}^{*}_{i}(\tau)=\max\Big\{p^{*}_{i},\,\prod_{S\subseteq[m]}\mathbbm{1}_{p_{S}\leq\tau}\Big\}.$
(8)
Note that $\tilde{p}^{*}_{i}(\tau)$ is also valid because
$\tilde{p}^{*}_{i}(\tau)\geq p^{*}_{i}$. Then, as the following proposition
shows, the discovery set obtained by using ICP is equivalent to the one
obtained by directly comparing $\tilde{p}^{*}_{i}(\alpha)$ to the significance
level $\alpha$.
###### Proposition 3.2.
For $\alpha\in(0,1)$, let
$\tilde{S}(\mathcal{E})=\\{i:\tilde{p}^{*}_{i}(\alpha)\leq\alpha\\}$. Then
$\tilde{S}(\mathcal{E})=\widehat{S}^{\text{ICP}}(\mathcal{E})$.
Hence, $\tilde{S}(\mathcal{E})$ possesses the familywise error rate control
guarantee (see (5)). In fact, the discovery set
$\displaystyle\widehat{S}(\mathcal{E})=\\{i:p^{*}_{i}\leq\alpha\\}$ (9)
obtained by directly comparing $p^{*}_{i}$ to the significance level also
controls the familywise error rate, as shown in the following proposition.
###### Proposition 3.3.
For $\alpha\in(0,1)$, we have $P\left(\widehat{S}(\mathcal{E})\subseteq
S^{*}\right)\geq 1-\alpha.$
Therefore, both $\widehat{S}(\mathcal{E})$ and
$\widehat{S}^{\text{ICP}}(\mathcal{E})$ (equivalent to
$\tilde{S}(\mathcal{E})$) control the familywise error rate, and using
$\widehat{S}(\mathcal{E})$ seems better because
$p^{*}_{i}\leq\tilde{p}^{*}_{i}(\alpha)$. However, the scenario where $p^{*}_{i}<\tilde{p}^{*}_{i}(\alpha)$ occurs only when $p_{S}\leq\alpha$ for all $S\subseteq[m]$. In that case, we have $\widehat{S}^{\text{ICP}}(\mathcal{E})=\emptyset$ and $\widehat{S}(\mathcal{E})=[m]$, which is not meaningful. In practice, $\widehat{S}(\mathcal{E})$ and $\widehat{S}^{\text{ICP}}(\mathcal{E})$ are generally equivalent.
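As an illustration, the p-values (7) and the discovery set (9) can be computed from the subset p-values of the sketch in Section 2 (reusing `subsets`, `p_vals`, `m`, and `alpha` from there):

```r
# p-value (7) for each predictor: the maximum of p_S over all S not containing i.
p_star <- sapply(1:m, function(i) {
  max(p_vals[sapply(subsets, function(S) !(i %in% S))])
})
S_hat <- which(p_star <= alpha)  # discovery set (9); controls the FWER by Proposition 3.3
```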
### 3.2 Simultaneous false discovery bounds for ICP
Based on the multiple testing formulation (6), we propose simultaneous false discovery bounds by considering all intersection hypotheses. Specifically, for a given
$S\subseteq[m]$, by generalizing (6) and (7), we consider null hypothesis
$\displaystyle H^{*}_{0,S}:i\in\mathcal{N}^{*}\text{ for all }i\in S$ (10)
and p-value
$\displaystyle p^{*}_{S}=\max_{I\subseteq[m]\setminus S}p_{I}.$ (11)
By using a similar argument as in Proposition 3.1, one can see that
$p^{*}_{S}$ is valid.
###### Proposition 3.4.
For any $c\in(0,1)$, $P_{H^{*}_{0,S}}(p^{*}_{S}\leq c)\leq c$.
For any set $R\subseteq[m]$, let $t_{\alpha}(R)$ be the size of the largest
subset of $R$ whose corresponding p-value is larger than $\alpha$. That is,
$\displaystyle t_{\alpha}(R)=\max\{|I|:I\subseteq R\text{ and }p^{*}_{I}>\alpha\}.$ (12)
Then, $t_{\alpha}(R)$ is a simultaneous false discovery upper bound, as shown in the following theorem.
###### Theorem 3.1.
Let $\alpha\in(0,1)$, then
$\displaystyle P(|R\cap\mathcal{N}^{*}|\leq t_{\alpha}(R)\text{ for any
}R\subseteq[m])\geq 1-\alpha.$ (13)
Based on the simultaneous guarantee (13), one can freely check any set $R$ of
interest, and a high probability upper bound (12) on the false discoveries can
be calculated. Equivalently, $|R|-t_{\alpha}(R)$ is a simultaneous true
discovery lower bound:
$\displaystyle P(|R\cap S^{*}|\geq|R|-t_{\alpha}(R)\text{ for any
}R\subseteq[m])\geq 1-\alpha.$ (14)
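A brute-force sketch of the bound (12), again reusing `subsets` and `p_vals` from the sketch in Section 2, is given below; it enumerates all subsets of $R$, which is only feasible for small $m$.

```r
# p-value (11) for a set I: the maximum of p_S over all S disjoint from I.
p_star_set <- function(I) {
  max(p_vals[sapply(subsets, function(S) length(intersect(S, I)) == 0)])
}
# Bound (12): size of the largest subset I of R with p*_I > alpha.
t_alpha <- function(R, alpha = 0.05) {
  sub_R <- Filter(function(I) all(I %in% R), subsets)   # all subsets I of R
  sizes <- sapply(sub_R, function(I) if (p_star_set(I) > alpha) length(I) else -1L)
  max(sizes, 0L)
}
# True discovery lower bound (14) for a set R: length(R) - t_alpha(R).
```

Since $p^{*}_{I}\leq\alpha$ implies $p^{*}_{I^{\prime}}\leq\alpha$ for every $I^{\prime}\supseteq I$, the search can be pruned in practice; the sketch omits this for clarity.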
The simultaneous upper bound (12) is the same as the upper bound proposed by
Goeman and Solari, (2011) for closed testing (Marcus et al., 1976). In fact,
our method can be seen as a special case of closed testing. Compared to the
standard closed testing procedure, the difference is that no closed testing
adjustments are needed in our case. This is due to the specific form of
$p^{*}_{S}$. In particular, if $p^{*}_{S}\leq\alpha$, we must have
$p^{*}_{S^{\prime}}\leq\alpha$ for all $S\subseteq S^{\prime}$. Thus, if a
hypothesis is locally rejected, it must be rejected by closed testing, so
there is no need to implement closed testing adjustments.
Finally, we mention that calculating the false discovery bound (12) can be computationally intractable for large $|R|$ (or large $m$). In fact, computation is also a fundamental issue for the original ICP approach, which requires testing $2^{m}$ hypotheses. Some tricks have been suggested to deal with this issue, including first reducing the number of potential causal predictors by a pre-screening method such as the Lasso. See Peters et al., (2016) for more discussion.
### 3.3 A discussion about false discovery rate control for ICP
With a multiple testing formulation (6) and valid p-values (7), an alternative
strategy to obtain a less conservative result than
$\widehat{S}^{\text{ICP}}(\mathcal{E})$ is to control the commonly used false
discovery rate (Benjamini and Hochberg,, 1995; Benjamini and Yekutieli,, 2001;
Lehmann and Romano,, 2022). Due to Proposition 3.3, however, directly
comparing $p^{*}_{1},\dots,p^{*}_{m}$ to the significance level already
controls the familywise error rate, so there is no point in further applying
false discovery rate control procedures on these p-values. This observation is
interesting because, typically, multiple testing corrections (e.g., Bonferroni
correction) on raw p-values are required to control the familywise error rate.
Thus, it is natural to ask: Can we obtain smaller valid p-values than
$p^{*}_{1},\dots,p^{*}_{m}$? If so, one may apply false discovery rate control
procedures to obtain more discoveries than ICP. However, in the following, we
argue that this approach does not seem promising.
Specifically, we show that if some $p^{\prime}_{i}$ satisfies
$\displaystyle{\mathbb{P}}(p^{\prime}_{i}\leq
p^{*}_{i})=1\quad\text{and}\quad{\mathbb{P}}_{H^{*}_{0,i}}(p^{\prime}_{i}<p^{*}_{i})>0,$
(15)
then it is not valid in general. In particular, suppose that we have an ideal
test for $H_{0,S}(\mathcal{E})$ that yields p-values $p_{S}\sim U(0,1)$ for
true nulls and $p_{S}=0$ for false nulls. Consider a setting where the only
true null hypothesis is $H_{0,S^{*}}(\mathcal{E})$. This is the desired
setting where all causal predictors are identifiable, and it can happen, for
example, when the environments are rich enough. The following proposition
shows that $p^{\prime}_{i}$ satisfying (15) is not valid in such settings.
###### Proposition 3.5.
Let $S^{*}$ be the set of true causal predictors. Consider the desired case
where only $H_{0,S^{*}}(\mathcal{E})$ is true, $p_{S^{*}}\sim U(0,1)$ and
$p_{S}=0$ for any $S\neq S^{*}$. Then, for $p^{\prime}_{i}$ satisfying (15),
there exists some $c^{*}\in(0,1)$ such that
$P_{H^{*}_{0,i}}(p^{\prime}_{i}\leq c^{*})>c^{*}$.
This result excludes the possibility of constructing $p_{i}^{\prime}$ based on
some analytic function, such that $p_{i}^{\prime}$ is strictly smaller than
$p_{i}^{*}$ in probability $1$ (recall that $p_{i}^{*}$ is obtained by a
maximum function (7)). Consequently, considering false discovery rate control
for ICP does not appear to be a promising idea.
## 4 Simulations and a real data application
### 4.1 Numerical simulations
We run simulations to verify the simultaneous guarantee (13) empirically
and to illustrate the extra information one may obtain by using the
simultaneous false discovery bound compared to the original ICP approach. All
simulations were carried out in R, and the code is available at
https://github.com/Jinzhou-Li/ICPsimultaneousBounds.
We generate samples based on the following linear structural equation model:
$X^{e}\leftarrow BX^{e}+\epsilon^{e},$
where $X^{e}=(X^{e}_{1},\dots,X^{e}_{10})^{T}$, $B$ is a strictly lower-triangular matrix, $\epsilon^{e}\sim N_{10}(b^{e},\text{diag}(\sigma))$ and $e\in\{1,2,3,4,5\}$. That is, we consider mean-shift interventions and five environments. We treat $X_{6}$ as the response in our simulations.
We run $500$ simulations. In each simulation, each lower-triangular element of $B$ is non-zero with probability $0.8$, and the non-zero values are sampled uniformly from $1$ to $2$. Each entry of $\sigma$ is uniformly sampled from $0.5$ to $1.5$. For the mean-shift interventions, we set $b^{1}=0$. For each other $b^{e}$, we randomly select $3$ non-zero entries, excluding the entry corresponding to $X_{6}$. That is, we consider $3$ intervened variables. The non-zero entries of $b^{e}$ are sampled from $N(10,5^{2})$. We generate $100$ samples for each environment and use a significance level $\alpha=0.05$.
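For illustration, one draw of this data-generating process can be sketched as follows (the exact code used for the paper is in the linked repository; the snippet below is only an illustrative reconstruction, and it treats the entries of $\sigma$ as noise variances):

```r
# Illustrative sketch of the SEM X <- B X + eps with mean-shift interventions;
# X_6 is the response. Parameters follow the description above.
set.seed(1)
p <- 10
B <- matrix(0, p, p)
low <- which(lower.tri(B))
B[low] <- rbinom(length(low), 1, 0.8) * runif(length(low), 1, 2)
sigma <- runif(p, 0.5, 1.5)                  # noise variances (an assumption)
gen_env <- function(n, b) {                  # n samples from one environment
  eps <- matrix(rnorm(n * p, mean = rep(b, each = n), sd = rep(sqrt(sigma), each = n)), n, p)
  eps %*% t(solve(diag(p) - B))              # X = (I - B)^{-1} eps, one sample per row
}
b_shift <- rep(0, p)
b_shift[sample(setdiff(1:p, 6), 3)] <- rnorm(3, 10, 5)  # shift 3 variables, never X_6
X_obs   <- gen_env(100, rep(0, p))           # observational environment, b^1 = 0
X_shift <- gen_env(100, b_shift)             # one intervened environment
```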
For each generated dataset, we apply ICP and record the number of discoveries.
Based on the same p-values used by ICP, we calculate the simultaneous false
discovery upper bound $t_{\alpha}(R)$ (see (12)) and the true discovery lower
bound $|R|-t_{\alpha}(R)$ (see (14)) for all $2^{9}=512$ sets.
Figure 1: The false discovery upper bounds and true discovery lower bounds for
all $512$ sets, as well as the size of each set. The first plot shows the averaged results over $500$ simulations, and the second and third plots show the results for two individual simulated datasets.
Over these $500$ simulations, the empirical frequency of the event $\{|R\cap\mathcal{N}^{*}|\leq t_{\alpha}(R)\text{ for all }R\subseteq[m]\}$ is $0.988$, which is larger than $1-\alpha=0.95$, so the simultaneous guarantee (13) holds empirically.
and the average number of discoveries of ICP is $0.52$. The first plot in
Figure 1 shows the average false discovery upper bounds and true discovery
lower bounds for all $512$ sets, as well as the size of each set. Note that
the x-axis only indexes the sets; for simplicity, we do not list which sets the indices refer to. Compared to ICP, extra information can
be obtained by looking at these bounds. For example, by looking at the true
discovery lower bound of the set containing all predictors (with set index
$512$), we know that, on average, there are about three true causal predictors
in all nine predictors.
To better illustrate the usage of the simultaneous bounds in practice, we look
at the results of two simulated datasets rather than the averaged results.
These results are shown in the second and third plots of Figure 1,
respectively.
For the first simulated dataset, the true causal predictors are
$X_{1},X_{2},X_{3}$ and $X_{4}$. ICP discovers $X_{2}$ and $X_{3}$, meaning
that with a probability larger than $0.95$, $X_{2}$ and $X_{3}$ are causal
predictors. That is all the information we can get by using ICP. By looking at
the simultaneous bounds (see the second plot in Figure 1), however, extra
information can be obtained. For example, we can see that the sets with
indices $131$ and $257$ have true discovery lower bounds $3$ and $4$,
respectively. These two sets are $\\{X_{1},X_{2},X_{3},X_{4}\\}$ and
$\\{X_{1},X_{2},X_{3},X_{4},X_{5}\\}$. Hence, we know that with probability
larger than $0.95$, there are at least three causal predictors in
$\\{X_{1},X_{2},X_{3},X_{4}\\}$, and at least four causal predictors in
$\\{X_{1},X_{2},X_{3},X_{4},X_{5}\\}$. Note that the same information that
both $X_{2}$ and $X_{3}$ are true causal predictors can also be obtained,
because the true discovery lower bound for $\\{X_{2},X_{3}\\}$ (with set index
$13$) is two.
For the second simulated dataset, the true causal predictors are
$\{X_{1},X_{2},X_{3},X_{4},X_{5}\}$. ICP returns zero discoveries. In particular, the corresponding p-values (see (7)) for the nine predictors are $1,1,1,1,0.848,0.329,1,1,1$. Hence, one cannot get much information about the causal predictors by using ICP in this case. By looking at the simultaneous bounds (see the third plot in Figure 1), more information can be obtained. For example, we know that with a probability larger than $0.95$, there is at least one causal predictor in $\{X_{3},X_{7}\}$ (with set index
$23$), at least two causal predictors in $\\{X_{3},X_{5},X_{7},X_{8}\\}$ (with
set index $164$), at least three causal predictors in
$\\{X_{1},X_{3},X_{5},X_{7},X_{8},X_{9}\\}$ (with set index $406$), and at
least four causal predictors in
$\\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{7},X_{8},X_{9}\\}$ (with set index
$503$).
### 4.2 Real data application
We look at a real dataset used in Stock et al., (2003) about the educational
attainment of teenagers (Rouse,, 1995). This dataset contains information of
$4739$ students from approximately 1100 US high schools, including gender,
ethnicity, achievement test score, whether the father or mother is a college
graduate, whether the family owns their home, whether the school is in an
urban area, county unemployment rate, state hourly wage in manufacturing,
average college tuition, whether the family income above 25000 per year, and
region.
We follow Peters et al., (2016) to obtain two datasets of two environments,
with $2231$ and $2508$ samples, respectively. The response variable $Y$ is a
binary variable indicating whether the student attained a BA degree or higher.
We use dummy variables to encode the factors in the dataset, resulting in $13$
predictors in total. The goal is to find the causal predictors of the response
$Y$.
Figure 2: The false discovery upper bounds, true discovery lower bounds, and the size of each set for all $8192$ sets based on the real data.
We first apply ICP with a significance level $\alpha=0.05$, returning one
variable ‘score’ as the causal predictor. It seems reasonable that test score
has a causal influence on whether the student would attain a BA degree or
higher. We then calculate the simultaneous bounds for all $2^{13}=8192$ sets,
and the results are shown in Figure 2. More information can be obtained by
looking at these bounds. For example, by looking at the set with index $112$,
we know that with a probability larger than $0.95$, there are at least two
causal predictors in {‘score’, ‘fcollege_no’, ‘mcollege_no’}. It seems plausible that the true discovery lower bound is $2$ for this set, because the two variables indicating whether the father or mother went to college are highly correlated, so it is difficult to distinguish whether both are causal predictors or only one of them is.
## Acknowledgement
The author thanks Jelle Goeman and Nicolai Meinshausen for helpful discussions
and suggestions on the draft of this paper. The author gratefully acknowledges
support by the SNSF Grant P500PT-210978.
## References
* Benjamini and Hochberg, (1995) Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289–300.
* Benjamini and Yekutieli, (2001) Benjamini, Y. and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, pages 1165–1188.
* Genovese and Wasserman, (2006) Genovese, C. R. and Wasserman, L. (2006). Exceedance control of the false discovery proportion. Journal of the American Statistical Association, 101(476):1408–1417.
* Goeman et al., (2021) Goeman, J. J., Hemerik, J., and Solari, A. (2021). Only closed testing procedures are admissible for controlling false discovery proportions. The Annals of Statistics, 49(2):1218–1238.
* Goeman and Solari, (2011) Goeman, J. J. and Solari, A. (2011). Multiple testing for exploratory research. Statistical Science, 26(4):584–597.
* Heinze-Deml et al., (2018) Heinze-Deml, C., Peters, J., and Meinshausen, N. (2018). Invariant causal prediction for nonlinear models. Journal of Causal Inference, 6(2).
* Kook et al., (2023) Kook, L., Saengkyongam, S., Lundborg, A. R., Hothorn, T., and Peters, J. (2023). Model-based causal feature selection for general response types. arXiv preprint arXiv:2309.12833.
* Lehmann and Romano, (2022) Lehmann, E. and Romano, J. P. (2022). Multiple testing and simultaneous inference. In Testing Statistical Hypotheses, pages 405–491. Springer.
* Marcus et al., (1976) Marcus, R., Eric, P., and Gabriel, K. R. (1976). On closed testing procedures with special reference to ordered analysis of variance. Biometrika, 63(3):655–660.
* Peters et al., (2016) Peters, J., Bühlmann, P., and Meinshausen, N. (2016). Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(5):947–1012.
* Pfister et al., (2019) Pfister, N., Bühlmann, P., and Peters, J. (2019). Invariant causal prediction for sequential data. Journal of the American Statistical Association, 114(527):1264–1276.
* Rouse, (1995) Rouse, C. E. (1995). Democratization or diversion? the effect of community colleges on educational attainment. Journal of Business & Economic Statistics, 13(2):217–224.
* Stock et al., (2003) Stock, J. H., Watson, M. W., et al. (2003). Introduction to econometrics, volume 104. Addison Wesley Boston.
## Appendix A Supplementary material
### A.1 Proof of Proposition 1
###### Proposition A.1.
For any $c\in(0,1)$, $P_{H^{*}_{0,i}}(p^{*}_{i}\leq c)\leq c$.
###### Proof.
When $H^{*}_{0,i}$ is true, we have $S^{*}\subseteq[m]\setminus\\{i\\}$ as
$i\not\in S^{*}$, so $p^{*}_{i}=\max_{S\subseteq[m]\setminus\\{i\\}}p_{S}\geq
p_{S^{*}}$. Therefore, for any $c\in(0,1)$,
$\displaystyle P_{H^{*}_{0,i}}(p^{*}_{i}\leq c)\leq
P_{H^{*}_{0,i}}(p_{S^{*}}\leq c)\leq c,$
where the last inequality holds because $H_{0,S^{*}}(\mathcal{E})$ is true. ∎
### A.2 Proof of Proposition 2
###### Proposition A.2.
For $\alpha\in(0,1)$, let
$\tilde{S}(\mathcal{E})=\\{i:\tilde{p}^{*}_{i}(\alpha)\leq\alpha\\}$. Then
$\tilde{S}(\mathcal{E})=\widehat{S}^{\text{ICP}}(\mathcal{E})$.
To prove Proposition 2, we first introduce the following lemma, which gives the relation between $\widehat{S}(\mathcal{E})=\{i:p^{*}_{i}\leq\alpha\}$ and $\widehat{S}^{\text{ICP}}(\mathcal{E})$.
###### Lemma A.1.
Assume that there exists $R\subseteq[m]$ such that $H_{0,R}(\mathcal{E})$ is
not rejected. Let $\widehat{S}(\mathcal{E})=\{i:p^{*}_{i}\leq\alpha\}$. Then,
$\widehat{S}(\mathcal{E})=\widehat{S}^{\text{ICP}}(\mathcal{E})$.
We first prove Lemma A.1.
###### Proof.
If $\widehat{S}(\mathcal{E})=\emptyset$, that is,
$p^{*}_{i}=\max_{S\subseteq[m]\setminus\\{i\\}}p_{S}>\alpha$ for all
$i\in[m]$, then we must have
$\widehat{S}^{\text{ICP}}(\mathcal{E})=\emptyset$. Otherwise, if there exists
some $j\in\widehat{S}^{\text{ICP}}(\mathcal{E})$, then for any $R$ whose
corresponding $p_{R}>\alpha$, we must have $j\in R$ by the definition of
$\widehat{S}^{\text{ICP}}(\mathcal{E})$. But
$p^{*}_{j}=\max_{S\subseteq[m]\setminus\\{j\\}}p_{S}>\alpha$ implies that
there exists some $R^{\prime}$ not containing $j$ and $p_{R^{\prime}}>\alpha$,
which leads to a contradiction.
If $\widehat{S}^{\text{ICP}}(\mathcal{E})=\emptyset$, then we must have $\widehat{S}(\mathcal{E})=\emptyset$. Otherwise, if there exists some $j\in\widehat{S}(\mathcal{E})$, that is, $p^{*}_{j}=\max_{S\subseteq[m]\setminus\{j\}}p_{S}\leq\alpha$, then every $S$ not containing $j$ is rejected, so every non-rejected set must contain $j$. By the assumption of the lemma, at least one set is not rejected, so $j$ belongs to the intersection of all non-rejected sets, that is, $j\in\widehat{S}^{\text{ICP}}(\mathcal{E})$, which contradicts $\widehat{S}^{\text{ICP}}(\mathcal{E})=\emptyset$.
Now assume both $\widehat{S}(\mathcal{E})$ and
$\widehat{S}^{\text{ICP}}(\mathcal{E})$ are non-empty.
If $i\in\widehat{S}^{\text{ICP}}(\mathcal{E})$, that is, for any $S$ such that
$H_{0,S}(\mathcal{E})$ is not rejected (that is, $p_{S}>\alpha$), we have
$i\in S$. So for any $R\subseteq[m]\setminus\\{i\\}$, $H_{0,R}(\mathcal{E})$
must be rejected, so
$p^{*}_{i}=\max_{R\subseteq[m]\setminus\\{i\\}}p_{R}\leq\alpha$, which implies
that $i\in\widehat{S}(\mathcal{E})$.
If $i\in\widehat{S}(\mathcal{E})$, we have
$p^{*}_{i}=\max_{S\subseteq[m]\setminus\\{i\\}}p_{S}\leq\alpha$, which implies
that for any $S$ not containing $i$, $H_{0,S}(\mathcal{E})$ must be rejected.
So for any $R$ such that $H_{0,R}(\mathcal{E})$ is not rejected, we must have
$i\in R$, which implies that $i\in\widehat{S}^{\text{ICP}}(\mathcal{E})$. ∎
Now we prove Proposition 2.
###### Proof.
If there exists some $R\subseteq[m]$ such that $H_{0,R}(\mathcal{E})$ is not rejected, that is, $p_{R}>\alpha$, then
$\tilde{p}^{*}_{i}(\alpha)=\max\\{p^{*}_{i},\Pi_{S\subseteq[m]}\mathbbm{1}_{p_{S}\leq\alpha}\\}=\max\\{p^{*}_{i},0\\}=p^{*}_{i}$,
so
$\tilde{S}(\mathcal{E})=\widehat{S}(\mathcal{E})=\widehat{S}^{\text{ICP}}(\mathcal{E})$
by Lemma A.1.
Now we consider the remaining case that $H_{0,S}(\mathcal{E})$ is rejected for
all $S\subseteq[m]$, that is, $p_{S}\leq\alpha$ for all $S\subseteq[m]$. Then,
we have $\widehat{S}^{\text{ICP}}(\mathcal{E})=\emptyset$ by definition, and
$\tilde{p}^{*}_{i}(\alpha)=\max\\{p^{*}_{i},\Pi_{S\subseteq[m]}\mathbbm{1}_{p_{S}\leq\alpha}\\}=\max\\{p^{*}_{i},1\\}=1>\alpha$
for any $i\in[m]$. Hence
$\tilde{S}(\mathcal{E})=\emptyset=\widehat{S}^{\text{ICP}}(\mathcal{E})$. ∎
### A.3 Proof of Proposition 3
###### Proposition A.3.
For $\alpha\in(0,1)$, we have $P\left(\widehat{S}(\mathcal{E})\subseteq
S^{*}\right)\geq 1-\alpha.$
###### Proof.
When $H_{0,S^{*}}(\mathcal{E})$ is not rejected, for any $i\not\in S^{*}$, we
have $p^{*}_{i}=\max_{S\subseteq[m]\setminus\\{i\\}}p_{S}\geq
p_{S^{*}}>\alpha$, so $i\not\in\widehat{S}(\mathcal{E})$. Hence, we have
$\widehat{S}(\mathcal{E})\subseteq S^{*}$. Therefore,
$P\left(\widehat{S}(\mathcal{E})\subseteq S^{*}\right)\geq
P\left(H_{0,S^{*}}(\mathcal{E})\text{ is not rejected}\right)\geq 1-\alpha.$
∎
### A.4 Proof of Proposition 4
###### Proposition A.4.
For any $c\in(0,1)$, $P_{H^{*}_{0,S}}(p^{*}_{S}\leq c)\leq c$.
###### Proof.
When $H^{*}_{0,S}$ is true, we have $S^{*}\subseteq[m]\setminus S$ as $S\cap
S^{*}=\emptyset$, so $p^{*}_{S}=\max_{I\subseteq[m]\setminus S}p_{I}\geq
p_{S^{*}}$. Therefore, for any $c\in(0,1)$,
$\displaystyle P_{H^{*}_{0,S}}(p^{*}_{S}\leq c)\leq
P_{H^{*}_{0,S}}(p_{S^{*}}\leq c)\leq c.$
∎
### A.5 Proof of Theorem 1
###### Theorem A.1.
Let $\alpha\in(0,1)$, then
$\displaystyle P(|R\cap\mathcal{N}^{*}|\leq t_{\alpha}(R)\text{ for any
}R\subseteq[m])\geq 1-\alpha.$
###### Proof.
For any $R\subseteq[m]$,
$\displaystyle
p^{*}_{R\cap\mathcal{N}^{*}}=\max_{I\subseteq[m]\setminus(R\cap\mathcal{N}^{*})}p_{I}\geq\max_{I\subseteq[m]\setminus\mathcal{N}^{*}}p_{I}=p^{*}_{\mathcal{N}^{*}}.$
So if $p^{*}_{\mathcal{N}^{*}}>\alpha$, we have
$p^{*}_{R\cap\mathcal{N}^{*}}>\alpha$, which implies that
$|R\cap\mathcal{N}^{*}|\leq t_{\alpha}(R)$ by definition. Hence
$P(|R\cap\mathcal{N}^{*}|\leq t_{\alpha}(R)\text{ for any }R\subseteq[m])\geq
P(p^{*}_{\mathcal{N}^{*}}>\alpha).$
In addition, we have
$\displaystyle
P(p^{*}_{\mathcal{N}^{*}}\leq\alpha)=P(\max_{I\subseteq[m]\setminus\mathcal{N}^{*}}p_{I}\leq\alpha)\leq
P(p_{S^{*}}\leq\alpha)\leq\alpha,$
or equivalently
$P(p^{*}_{\mathcal{N}^{*}}>\alpha)\geq 1-\alpha,$
which completes the proof. ∎
### A.6 Proof of Proposition 5
###### Proposition A.5.
Let $S^{*}$ be the set of true causal predictors. Consider the desired case
where only $H_{0,S^{*}}(\mathcal{E})$ is true, $p_{S^{*}}\sim U(0,1)$ and
$p_{S}=0$ for any $S\neq S^{*}$. Then, for $p^{\prime}_{i}$ satisfying (15),
there exists some $c^{*}\in(0,1)$ such that
$P_{H^{*}_{0,i}}(p^{\prime}_{i}\leq c^{*})>c^{*}$.
###### Proof.
For any $i$ such that $H^{*}_{0,i}$ is true, we have
$p^{*}_{i}=\max_{S\subseteq[m]\setminus\\{i\\}}p_{S}=p_{S^{*}}$.
Since $P_{H^{*}_{0,i}}(p^{\prime}_{i}<p^{*}_{i})>0$, there must exist some
rational number $c^{*}\in(0,1)$ such that
$\displaystyle P_{H^{*}_{0,i}}(p^{\prime}_{i}\leq c^{*}<p^{*}_{i})>0.$ (16)
Then,
$\displaystyle P_{H^{*}_{0,i}}(p^{\prime}_{i}\leq c^{*})$ $\displaystyle=P_{H^{*}_{0,i}}(p^{\prime}_{i}\leq c^{*},\,p^{\prime}_{i}<p^{*}_{i})+P_{H^{*}_{0,i}}(p^{\prime}_{i}\leq c^{*},\,p^{\prime}_{i}=p^{*}_{i})$ $\displaystyle=P_{H^{*}_{0,i}}(p^{\prime}_{i}\leq c^{*},\,p^{\prime}_{i}<p^{*}_{i}\leq c^{*})+P_{H^{*}_{0,i}}(p^{\prime}_{i}\leq c^{*}<p^{*}_{i})+P_{H^{*}_{0,i}}(p^{\prime}_{i}\leq c^{*},\,p^{\prime}_{i}=p^{*}_{i})$ $\displaystyle>P_{H^{*}_{0,i}}(p^{*}_{i}\leq c^{*},\,p^{\prime}_{i}<p^{*}_{i})+P_{H^{*}_{0,i}}(p^{*}_{i}\leq c^{*},\,p^{\prime}_{i}=p^{*}_{i})$ $\displaystyle=P_{H^{*}_{0,i}}(p^{*}_{i}\leq c^{*})=P_{H^{*}_{0,i}}(p_{S^{*}}\leq c^{*})=c^{*},$
which completes the proof. ∎
# Electron-phonon coupling and non-equilibrium thermal conduction in ultrafast
heating systems
Chuang Zhang<EMAIL_ADDRESS>Department of Physics, Hangzhou Dianzi
University, Hangzhou 310018, China Rulei Guo Department of Mechanical
Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656,
Japan Meng Lian School of Physics, Institute for Quantum Science and
Engineering and Wuhan National High Magnetic Field Center, Huazhong University
of Science and Technology, Wuhan 430074, China Junichiro Shiomi
Corresponding author<EMAIL_ADDRESS>Department of Mechanical
Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656,
Japan Institute of Engineering Innovation, The University of Tokyo, 7-3-1
Hongo, Bunkyo, Tokyo 113-8656, Japan
###### Abstract
The electron-phonon coupling in ultrafast heating systems is studied within
the framework of the Boltzmann transport equation (BTE) with coupled electron and
phonon transport. A discrete unified gas kinetic scheme is developed to solve
the BTE, in which the electron/phonon advection, scattering and electron-
phonon interactions are coupled together within one time step by solving the
BTE again at the cell interface. Numerical results show that the present
scheme can correctly predict the electron-phonon coupling constant, and is in
excellent agreement with typical two-temperature model (TTM) and experimental
results in existing literatures and our performed time-domain
thermoreflectance technique. It can also capture the ballistic or thermal wave
effects when the characteristic length/time is comparable to or smaller than
the mean free path/relaxation time where the TTM fails. Finally, the electron-
phonon coupling in transient thermal grating geometry and Au/Pt bilayer metals
with interfacial thermal resistance is simulated and discussed. For the
former, heat flow from phonon to electron is predicted in both the ballistic
and diffusive regimes. For the latter, the reflected signal increases in the
early tens of picoseconds and then decreases with time after the heat source
is removed.
## I INTRODUCTION
Ultrafast laser heating plays an increasingly important role in the industrial manufacturing of micro/nano electronic devices, in medical diagnostics, and in the exploration of fundamental science, and it involves interactions between various energy-carrying (quasi)particles Chen (2005); Kittel _et al._ (1996). One of the key factors in multiscale energy transfer and conversion is the coupling between electrons and phonons (lattice vibrations) in solid materials
Fann _et al._ (1992a); Karna _et al._ (2023); Block _et al._ (2019); Allen
(1987); Caruso and Novko (2022). The electron-phonon coupling at the
microscopic level can explain many macroscopic thermal and electrical
transport phenomena, including thermoelectric conversion Mao _et al._ (2021);
Deng _et al._ (2021), electrothermal power consumption and transfer in
semiconductor chips Pop _et al._ (2004); Pop (2010) and so on.
Over the past decades, many macroscopic phenomenological heat conduction models have been developed to describe the electron-phonon coupling Carpene (2006);
Schoenlein _et al._ (1987); Groeneveld _et al._ (1992); Caruso and Novko
(2022); Waldecker _et al._ (2016); Lu _et al._ (2018); Zhou _et al._
(2015). One of the most widely used theoretical models is the two-temperature
model (TTM) Schoenlein _et al._ (1987); Sun _et al._ (1994); Qiu and Tien
(1994); Lin _et al._ (2008); Rethfeld _et al._ (2002); Caruso and Novko
(2022) proposed by Anisimov _et al._ (1974). In this
empirical model, the electron temperature $T_{e}$ and phonon temperature
$T_{p}$ are introduced individually and their interactions are represented by
a single phenomenological electron-phonon coupling constant $G$. Although it is simple, it is widely used in ultrafast pump-probe experiments for
calculating the electron-phonon coupling constant in metals Groeneveld _et
al._ (1995); Brorson _et al._ (1987); Fujimoto _et al._ (1984); Guo and Xu
(2014); Qiu and Tien (1992); Qiu _et al._ (1994). The Fourier’s law is used
to express the evolution process of electron and phonon in the spatial and
temporal spaces so that the TTM is a parabolic two-step heat conduction
equation with infinite heat propagation speed Joseph and Preziosi (1989). In
order to remove this non-physical assumption, a hyperbolic two-temperature
model is developed by introducing a delay term of the derivative of heat flow
with respect to time for electron or phonon transport Qiu and Tien (1994);
Sobolev (2021), which is like an extension of Cattaneo equation Cattaneo
(1948). In addition, the non-thermal lattice model Waldecker _et al._ (2016)
or multitemperature model Lu _et al._ (2018); Sobolev (2021) is developed, in
which phonons with different modes or branches are not in local thermal
equilibrium and many lattice temperatures and electron-phonon coupling
constants are introduced to describe the complex physical interactions.
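To make the structure of the TTM concrete, a minimal zero-dimensional (spatially lumped) forward-Euler sketch is given below; it drops the diffusion terms and uses constant heat capacities, and all parameter values are placeholders rather than values used in this work.

```r
# Lumped two-temperature model: Ce dTe/dt = -G(Te - Tp) + S(t),
#                               Cp dTp/dt =  G(Te - Tp),
# integrated with forward Euler. G, Ce, Cp and the pulse are placeholders.
ttm_step <- function(Te, Tp, S, dt, G = 2.5e16, Ce = 2.0e4, Cp = 2.5e6) {
  dTe <- (-G * (Te - Tp) + S) / Ce
  dTp <-  (G * (Te - Tp)) / Cp
  c(Te + dt * dTe, Tp + dt * dTp)
}
dt    <- 1e-16
times <- seq(0, 2e-12, by = dt)
S     <- 1e21 * exp(-((times - 3e-13) / 1e-13)^2)    # Gaussian pump pulse, W m^-3
temps <- matrix(300, nrow = length(times), ncol = 2) # columns: Te, Tp
for (k in 2:length(times)) {
  temps[k, ] <- ttm_step(temps[k - 1, 1], temps[k - 1, 2], S[k - 1], dt)
}
```

The electron temperature spikes during the pulse and then relaxes toward the lattice temperature on a time scale set by $G$, which is the qualitative behavior the TTM is designed to capture.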
Although the above phenomenological heat conduction models have been very successful in describing ultrafast energy exchange, they cannot capture highly non-equilibrium situations, for example, when the system's characteristic length is comparable to or smaller than the electron/phonon mean free path, where diffusive transport breaks down Pattamatta and Madnia (2009a); Caruso and
Novko (2022). To study the multiscale energy transfer in ultrafast heating
systems, the time-dependent Boltzmann transport equation (BTE) Chen _et al._
(2006); Pattamatta and Madnia (2009b); Miao and Wang (2021); Tong and Bernardi
(2021); Wang _et al._ (2016) with coupled electron and phonon thermal
transport becomes an optimal compromise between efficiency and accuracy, which
can capture the diffusive and ballistic thermal transport simultaneously.
Instead of directly describing the evolution of macroscopic temperatures, the macroscopic fields are obtained by taking moments of the distribution function within the framework of the BTE, so the key task is to trace the evolution of the phonon/electron distribution functions in the seven-dimensional phase space (time, position, momentum). Two kinds of numerical methods are usually used to
solve the BTE with coupled electron and phonon thermal transport. One is the
stochastic Monte Carlo method Jacoboni and Reggiani (1983); Muthukunnil Joseph and Cao (2022), and the other is the discrete ordinate method Adams and Larsen (2002); Miao and Wang (2021), which discretizes the whole three-dimensional phase space into many small pieces. Both have made progress in highly non-equilibrium heat transfer problems. However, advection and scattering are treated separately within a single time step at the discrete numerical level, so both methods suffer from large numerical dissipation in the diffusive regime.
In this work, non-equilibrium thermal conduction in ultrafast heating systems
is studied by the BTE accounting for electron-phonon coupling, and a discrete
unified gas kinetic scheme (DUGKS) Guo and Xu (2021); Zhang and Guo (2019) is
developed in which the electron/phonon advection, scattering and electron-
phonon interactions are coupled together within one time step by solving the
BTE again at the cell interface. Numerical results show that the present
results are in excellent agreement with the experiments and TTM results in the
diffusive regime, and can capture the ballistic or thermal wave effects.
The remainder of this article is organized as follows. The BTE for coupled
electron and phonon thermal transport and the DUGKS solver are introduced in
Sec. II and Sec. III, respectively. Results and discussions are shown in Sec.
IV. Finally, a conclusion is made in Sec. V.
## II Boltzmann transport equation
The Boltzmann transport equation for coupled electron and phonon thermal
transport is Chen _et al._ (2006); Pattamatta and Madnia (2009b); Wang _et
al._ (2016)
$\displaystyle\frac{\partial f_{e}}{\partial t}+\bm{v}_{e}\cdot\nabla f_{e}$
$\displaystyle=Q_{e-e}+Q_{e-p}+w_{s}S_{e},$ (1) $\displaystyle\frac{\partial
f_{p}}{\partial t}+\bm{v}_{p}\cdot\nabla f_{p}$
$\displaystyle=Q_{p-p}+Q_{p-e},$ (2)
where the subscripts $e$ and $p$ represent electron and phonon, respectively.
Note that this subscript representation will be used throughout the rest of
this article. $f$ is the distribution function, $\bm{v}$ is the group
velocity, $S_{e}$ is the external heat source at the mesoscopic level, $w_{s}$
is the associated weight satisfying
$\displaystyle\int w_{s}d\bm{K}=1,$ (3)
where $\int d\bm{K}$ represents the integral over the whole first Brillouin
zone. In actual pump-probe experiments, the external heat source originates
from the complex and ultrafast photon-electron-phonon coupling process, which
is not discussed in detail in this work Karna _et al._ (2023); Block _et
al._ (2019).
The first term on the left-hand side of Eqs.(1,2) represents temporal
evolution of distribution function and the second term represents
electron/phonon advection. $Q_{e-e}$ and $Q_{p-p}$ represent the electron-
electron scattering and phonon-phonon scattering, respectively. $Q_{e-p}$ and
$Q_{p-e}$ are the electron scattering term and phonon scattering term
characterizing electron-phonon interaction, respectively. The actual
scattering processes between (quasi)particles are very complicated, so they are reduced to a relaxation form in this work,
$\displaystyle Q_{e-e}$ $\displaystyle=\frac{f_{e}^{eq}-f_{e}}{\tau_{e}},$ (4)
$\displaystyle Q_{p-p}$ $\displaystyle=\frac{f_{p}^{eq}-f_{p}}{\tau_{p}},$ (5)
where $\tau$ is the associated relaxation time, and $f^{eq}$ is the
equilibrium state satisfying the Fermi-Dirac and Bose-Einstein distribution
for electron and phonon Chen (2005); Kittel _et al._ (1996), respectively,
$\displaystyle f_{e}^{eq}(T_{e})$
$\displaystyle=\frac{1}{\exp\left(\frac{\varepsilon-\mu}{k_{B}T_{e}}\right)+1},$
(6) $\displaystyle f_{p}^{eq}(T_{p})$
$\displaystyle=\frac{1}{\exp\left(\frac{\hbar\omega}{k_{B}T_{p}}\right)-1},$
(7)
where $T$ is the temperature, $\varepsilon$ is the electron energy level,
$\mu$ is the chemical potential, $k_{B}$ is the Boltzmann constant, $\hbar$ is
the reduced Planck constant, $\omega$ is the phonon angular
frequency.
Equations (1) and (2) can be written in terms of the energy density,
$\displaystyle\frac{\partial u_{e}}{\partial t}+\bm{v}_{e}\cdot\nabla u_{e}$
$\displaystyle=\frac{u_{e}^{eq}-u_{e}}{\tau_{e}}-w_{g}G(T_{e}-T_{p})$
$\displaystyle+w_{s}S,$ (8) $\displaystyle\frac{\partial u_{p}}{\partial
t}+\bm{v}_{p}\cdot\nabla u_{p}$
$\displaystyle=\frac{u_{p}^{eq}-u_{p}}{\tau_{p}}+w_{g}G(T_{e}-T_{p}),$ (9)
where
$\displaystyle u_{e}-u_{e}^{eq}(T_{\text{ref}})$
$\displaystyle=\left(f_{e}-f_{e}^{eq}(T_{\text{ref}})\right)\left(\varepsilon-
E_{f}\right)D_{e},$ (10) $\displaystyle u_{p}-u_{p}^{eq}(T_{\text{ref}})$
$\displaystyle=\left(f_{p}-f_{p}^{eq}(T_{\text{ref}})\right)\hbar\omega
D_{p},$ (11) $\displaystyle S$ $\displaystyle=S_{e}\left(\varepsilon-
E_{f}\right)D_{e},$ (12)
where $E_{f}$ is the Fermi energy, $D$ is the density of states,
$T_{\text{ref}}$ is the reference temperature. We assume $\mu\approx E_{f}$
when $T\ll E_{f}/k_{B}$. The electron-phonon coupling process in Eqs. (8,9) is
simplified as
$\displaystyle Q_{p-e}=-Q_{e-p}=w_{g}G(T_{e}-T_{p})$ (13)
by invoking an electron-phonon coupling parameter $G$, which represents the
inelastic scattering that relaxes the electron and phonon to thermal
equilibrium. $w_{g}$ is the associated weight satisfying $\int w_{g}d\bm{K}=1$, and the interaction strength between phonons of various modes and electrons at various energy levels may be different. The energy
conservation is satisfied during the scattering process so that
$\displaystyle\int\frac{u_{e}^{eq}-u_{e}}{\tau_{e}}d\bm{K}$ $\displaystyle=0,$
(14) $\displaystyle\int\frac{u_{p}^{eq}-u_{p}}{\tau_{p}}d\bm{K}$
$\displaystyle=0.$ (15)
The macroscopic variables including the energy $U$ and heat flux $\bm{q}$ are
obtained by taking the moment of distribution function,
$\displaystyle U_{e}$ $\displaystyle=\int u_{e}d\bm{K},$ (16) $\displaystyle
U_{p}$ $\displaystyle=\int u_{p}d\bm{K},$ (17) $\displaystyle\bm{q}_{e}$
$\displaystyle=\int\bm{v}_{e}u_{e}d\bm{K},$ (18) $\displaystyle\bm{q}_{p}$
$\displaystyle=\int\bm{v}_{p}u_{p}d\bm{K}.$ (19)
The temperature is calculated from the following constraints using the Newton iteration method,
$\displaystyle\int u_{e}^{eq}(T_{e})d\bm{K}$ $\displaystyle=\int
u_{e}d\bm{K},$ (20) $\displaystyle\int u_{p}^{eq}(T_{p})d\bm{K}$
$\displaystyle=\int u_{p}d\bm{K}.$ (21)
Strictly speaking, in non-equilibrium systems the temperature calculated by the above formulas is not the thermodynamic temperature but rather a measure of the local energy density Chen (2005).
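As an illustration of this inversion, the sketch below recovers $T_{e}$ from a cell-averaged electron energy density by Newton iteration on Eq. (20), assuming the free-electron form $C_{e}(T)=\gamma T$ so that the energy above the reference temperature is $\gamma(T_{e}^{2}-T_{\text{ref}}^{2})/2$; the value of $\gamma$ is a placeholder.

```r
# Newton iteration for Eq. (20): find Te such that the integrated equilibrium
# energy matches the cell energy U (measured relative to T_ref). Assumes the
# free-electron heat capacity Ce(T) = gamma * T; gamma is a placeholder value.
temperature_from_energy <- function(U, T_ref = 300, gamma = 70, tol = 1e-10) {
  Te <- T_ref
  repeat {
    f  <- gamma * (Te^2 - T_ref^2) / 2 - U  # residual of the energy constraint
    df <- gamma * Te                        # dU/dTe = Ce(Te)
    Te_new <- Te - f / df
    if (abs(Te_new - Te) < tol) return(Te_new)
    Te <- Te_new
  }
}
```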
To simplify the computation, the frequency-independent assumption is used so
that $|\bm{v}|$ and $\tau$ are constants for a given temperature. Three-dimensional materials are considered, and the first Brillouin zone is assumed to be isotropic, like a spherical surface with a fixed radius. $w_{g}$ and $w_{s}$
are both set to be $1/(4\pi)$. The temperature-dependent specific heat is
introduced in Eqs. (10,11) Pattamatta and Madnia (2009b),
$\displaystyle u_{e}-u_{e}^{eq}(T_{\text{ref}})$
$\displaystyle\approx\frac{1}{4\pi}\int_{T_{\text{ref}}}^{T_{e}}C_{e}dT,$ (22)
$\displaystyle u_{p}-u_{p}^{eq}(T_{\text{ref}})$
$\displaystyle\approx\frac{1}{4\pi}\int_{T_{\text{ref}}}^{T_{p}}C_{p}dT,$ (23)
where $C_{e}=\partial\left(f_{e}^{eq}\left(\varepsilon-
E_{f}\right)D_{e}\right)/\partial T_{e}$,
$C_{p}=\partial\left(f_{p}^{eq}\hbar\omega D_{p}\right)/\partial T_{p}$.
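Before moving to the numerical scheme of Sec. III, the structure of the coupled gray model (8,9) can be illustrated by a plain first-order explicit upwind update in one spatial dimension with two angular directions $\mu=\pm 1$. This is deliberately not the DUGKS developed below (advection and scattering are decoupled here, with the associated dissipation discussed above); constant heat capacities linearize the temperature recovery, and all parameters are placeholders.

```r
# One explicit upwind step of the gray coupled BTE (8,9) in 1D; ue and up are
# (n_cells x 2) matrices of directional energy densities for mu = +1 and -1.
# Boundary cells are simply copied (zero-gradient). Stability: dt <= dx/max(ve,vp).
bte_step <- function(ue, up, dx, dt, ve, vp, tau_e, tau_p, G, Ce, Cp, S = 0) {
  n  <- nrow(ue)
  Ue <- rowSums(ue); Up <- rowSums(up)          # moments, Eqs. (16,17)
  Te <- Ue / Ce;     Tp <- Up / Cp              # linearized Eqs. (20,21)
  adv <- function(u, v) cbind(                  # upwind discretization of -v * du/dx
    -v * (u[, 1] - c(u[1, 1], u[-n, 1])) / dx,  # mu = +1: backward difference
     v * (c(u[-1, 2], u[n, 2]) - u[, 2]) / dx)  # mu = -1: forward difference
  coup <- 0.5 * G * (Te - Tp)                   # Eq. (13) with w_g = 1/2 per direction
  ue_new <- ue + dt * (adv(ue, ve) + (0.5 * Ue - ue) / tau_e - coup + 0.5 * S)
  up_new <- up + dt * (adv(up, vp) + (0.5 * Up - up) / tau_p + coup)
  list(ue = ue_new, up = up_new)
}
```

Summing each update over the two directions eliminates the relaxation terms, consistent with Eqs. (14,15), while the coupling terms transfer energy $G(T_{e}-T_{p})$ from electrons to phonons as in Eq. (13).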
## III Discrete unified gas kinetic scheme
To solve Eqs. (8,9), the discrete unified gas kinetic scheme (DUGKS) Guo and
Xu (2021); Guo _et al._ (2023) is used, which has made great success in
multiscale particle transport Guo and Xu (2021). The whole time space, spatial
space and first Brillouin zone are discretized into many small pieces, and
Eqs. (8,9) in integral form over a control volume $i$ from time $t_{m}$ to
$t_{m+1}=t_{m}+\Delta t$ can be written as follows,
$\displaystyle u_{e,i,n}^{m+1}-u_{e,i,n}^{m}+\frac{\Delta t}{V_{i}}\sum_{j\in
N(i)}\left(\bm{v}_{e}\cdot\mathbf{n}_{ij}u_{e,ij,n}^{m+1/2}A_{ij}\right)$
$\displaystyle=\frac{\Delta t}{2}\left(H_{i,n}^{m+1}+H_{i,n}^{m}\right),$ (24)
$\displaystyle u_{p,i,n}^{m+1}-u_{p,i,n}^{m}+\frac{\Delta t}{V_{i}}\sum_{j\in
N(i)}\left(\bm{v}_{p}\cdot\mathbf{n}_{ij}u_{p,ij,n}^{m+1/2}A_{ij}\right)$
$\displaystyle=\frac{\Delta t}{2}\left(F_{i,n}^{m+1}+F_{i,n}^{m}\right),$ (25)
where the trapezoidal quadrature is used for the time integration of the
scattering, electron-phonon coupling and heat source terms, while the mid-
point rule is used for the flux term.
$H=(u_{e}^{eq}-u_{e})/\tau_{e}-w_{g}G(T_{e}-T_{p})+w_{s}S$,
$F=(u_{p}^{eq}-u_{p})/\tau_{p}+w_{g}G(T_{e}-T_{p})$, $n$ represents the index
of discretized first Brillouin zone, $V_{i}$ is the volume of the cell $i$,
$N(i)$ denotes the sets of neighbor cells of cell $i$, $ij$ denotes the
interface between cell $i$ and cell $j$, $A_{ij}$ is the area of the interface
$ij$, $\mathbf{n}_{ij}$ is the normal unit vector of the interface $ij$
directing from cell $i$ to cell $j$, $\Delta t$ is the time step, and $m$ is the time-step index. Reformulating the above two equations (24,25) gives
$\displaystyle\tilde{I}_{e,i,n}^{m+1}$
$\displaystyle=\tilde{I}_{e,i,n}^{+,m}-\frac{\Delta t}{V_{i}}\sum_{j\in
N(i)}\left(\bm{v}_{e}\cdot\mathbf{n}_{ij}u_{e,ij,n}^{m+1/2}A_{ij}\right),$
(26) $\displaystyle\tilde{I}_{p,i,n}^{m+1}$
$\displaystyle=\tilde{I}_{p,i,n}^{+,m}-\frac{\Delta t}{V_{i}}\sum_{j\in
N(i)}\left(\bm{v}_{p}\cdot\mathbf{n}_{ij}u_{p,ij,n}^{m+1/2}A_{ij}\right),$
(27)
where
$\displaystyle\tilde{I}_{e}$ $\displaystyle=u_{e}-\Delta tH/2,$ (28)
$\displaystyle\tilde{I}_{e}^{+}$ $\displaystyle=u_{e}+\Delta tH/2,$ (29)
$\displaystyle\tilde{I}_{p}$ $\displaystyle=u_{p}-\Delta tF/2,$ (30)
$\displaystyle\tilde{I}_{p}^{+}$ $\displaystyle=u_{p}+\Delta tF/2.$ (31)
In order to obtain the distribution functions at the cell interface at the mid-point time step ($u_{e,ij,n}^{m+1/2}$, $u_{p,ij,n}^{m+1/2}$), Eqs. (8,9) are integrated from time $t_{m}$ to $t_{m+1/2}=t_{m}+\Delta t/2$ along the characteristic line whose end point $\bm{x}_{ij}$ is located at the center of the cell interface $ij$ between cell $i$ and cell $j$,
$\displaystyle
u_{e}^{m+1/2}(\bm{x}_{ij})-u_{e}^{m}(\bm{x}_{ij}-\bm{v}_{e}\Delta t/2)$
$\displaystyle=\Delta
t/4\left(H^{m+1/2}(\bm{x}_{ij})+H^{m}(\bm{x}_{ij}-\bm{v}_{e}\Delta
t/2)\right),$ (32) $\displaystyle
u_{p}^{m+1/2}(\bm{x}_{ij})-u_{p}^{m}(\bm{x}_{ij}-\bm{v}_{p}\Delta t/2)$
$\displaystyle=\Delta
t/4\left(F^{m+1/2}(\bm{x}_{ij})+F^{m}(\bm{x}_{ij}-\bm{v}_{p}\Delta
t/2)\right).$ (33)
Reformulating the above two equations yields
$\displaystyle\bar{I}_{e}^{m+1/2}(\bm{x}_{ij})$
$\displaystyle=\bar{I}_{e}^{+,m}(\bm{x}_{e,ij}^{\prime}),$ (34)
$\displaystyle\bar{I}_{p}^{m+1/2}(\bm{x}_{ij})$
$\displaystyle=\bar{I}_{p}^{+,m}(\bm{x}_{p,ij}^{\prime}),$ (35)
where
$\displaystyle\bar{I}_{e}$ $\displaystyle=u_{e}-\Delta tH/4,$ (36)
$\displaystyle\bar{I}_{e}^{+}$ $\displaystyle=u_{e}+\Delta tH/4,$ (37)
$\displaystyle\bar{I}_{p}$ $\displaystyle=u_{p}-\Delta tF/4,$ (38)
$\displaystyle\bar{I}_{p}^{+}$ $\displaystyle=u_{p}+\Delta tF/4,$ (39)
$\displaystyle\bm{x}_{e,ij}^{\prime}$
$\displaystyle=\bm{x}_{ij}-\bm{v}_{e}\Delta t/2,$ (40)
$\displaystyle\bm{x}_{p,ij}^{\prime}$
$\displaystyle=\bm{x}_{ij}-\bm{v}_{p}\Delta t/2.$ (41)
$\bar{I}_{e}^{+,m}(\bm{x}_{e,ij}^{\prime})$ and
$\bar{I}_{p}^{+,m}(\bm{x}_{p,ij}^{\prime})$ are reconstructed by numerical
interpolation,
$\displaystyle\bar{I}_{e}^{+,m}(\bm{x}_{e,ij}^{\prime})=\bar{I}_{e}^{+,m}(\bm{x}_{c})+(\bm{x}_{e,ij}^{\prime}-\bm{x}_{c})\bm{\sigma}_{e,c},$
(42)
$\displaystyle\bar{I}_{p}^{+,m}(\bm{x}_{p,ij}^{\prime})=\bar{I}_{p}^{+,m}(\bm{x}_{c})+(\bm{x}_{p,ij}^{\prime}-\bm{x}_{c})\bm{\sigma}_{p,c},$
(43)
where $\bm{\sigma}_{c}$ is the spatial gradient of the distribution function $\bar{I}^{+}$ at cell $c$. If $\bm{v}\cdot\bm{n}_{ij}>0$, $c=i$; otherwise $c=j$. The first-order upwind scheme, the van Leer limiter, or the least-squares method can be adopted to calculate the spatial gradient to ensure numerical stability.
Once $\bar{I}_{e}^{m+1/2}$ and $\bar{I}_{p}^{m+1/2}$ are obtained, taking an
integral of Eqs. (36,38) over the whole first Brillouin zone leads to
$\displaystyle\frac{\Delta t}{4}S+\int\bar{I}_{e}d\bm{K}$
$\displaystyle=U_{e}+\frac{\Delta t}{4}G(T_{e}-T_{p}),$ (44)
$\displaystyle\int\bar{I}_{p}d\bm{K}$ $\displaystyle=U_{p}-\frac{\Delta
t}{4}G(T_{e}-T_{p}).$ (45)
Combining Eqs. (22,23,14,15) and the above two equations, the electron and phonon temperatures at the cell interface can be calculated by the Newton iteration method Zhang and Guo (2019). Then the original distribution functions $u_{e}^{m+1/2}$ and $u_{p}^{m+1/2}$ at the cell interface can be obtained from Eqs. (36,38). A similar treatment is applied for the evolution of the distribution functions and macroscopic variables at the cell center.
$\tilde{I}_{e}$ and $\tilde{I}_{p}$ can be updated based on Eqs. (26,27), then
taking an integral of Eqs. (28,30) over the whole first Brillouin zone leads
to
$\displaystyle\frac{\Delta t}{2}S+\int\tilde{I}_{e}d\bm{K}$
$\displaystyle=U_{e}+\frac{\Delta t}{2}G(T_{e}-T_{p}),$ (46)
$\displaystyle\int\tilde{I}_{p}d\bm{K}$ $\displaystyle=U_{p}-\frac{\Delta
t}{2}G(T_{e}-T_{p}).$ (47)
Combining Eqs. (22,23,14,15) and the above two equations, the electron and phonon temperatures at the cell center can be calculated by the Newton iteration method. Then the original distribution functions $u_{e}^{m+1}$ and $u_{p}^{m+1}$ at the cell center can be updated.
The above procedures constitute the main evolution process of the phonon and electron distribution functions in the DUGKS. Note that the numerical evolution of the DUGKS is not limited to the frequency-independent and isotropic assumptions; related extensions will be conducted in the future. The key difference between the DUGKS and the typical Monte Carlo or discrete ordinate methods is the reconstruction of the distribution function at the cell interface. Instead of direct numerical interpolation, the BTE is solved again at the cell interface (Eqs. 32,33), so that the physical evolution process is included adaptively and the electron/phonon advection, scattering, and electron-phonon interactions are coupled together within one time step Guo _et al._ (2023); Guo and Xu (2021).
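The following Python sketch illustrates the structure of one DUGKS time step (Eqs. (24)-(43)) under strong simplifications that are ours, not the paper's: a single gray carrier without electron-phonon coupling ($G=0$), a constant specific heat so that the Newton solve reduces to $T=U/C$, periodic boundaries, and central-difference gradients in place of a limiter. It is meant only to make the update sequence concrete; all parameter values are illustrative.

```python
import numpy as np

Nx, Nmu = 64, 16                      # cells, discrete angles
L, v, tau, C = 1.0, 1.0, 0.05, 1.0    # illustrative gray-carrier parameters
dx = L / Nx
dt = 0.4 * dx / v                     # CFL = 0.4
mu, w = np.polynomial.legendre.leggauss(Nmu)   # angular quadrature on [-1, 1]

x = (np.arange(Nx) + 0.5) * dx
T = 1.0 + 0.1 * np.sin(2 * np.pi * x / L)      # initial temperature profile
u = 0.5 * C * T[:, None] * np.ones(Nmu)        # u_eq = C*T/2 per unit mu

def step(u):
    U = u @ w                                  # energy: angular integral
    ueq = 0.5 * C * (U / C)[:, None] * np.ones(Nmu)   # equilibrium at T = U/C
    H = (ueq - u) / tau
    I_tilde_p = u + 0.5 * dt * H               # Eq. (29)
    I_bar_p = u + 0.25 * dt * H                # Eq. (37)
    grad = (np.roll(I_bar_p, -1, 0) - np.roll(I_bar_p, 1, 0)) / (2 * dx)
    # trace characteristics back from face i+1/2 over dt/2 (Eqs. (40), (42))
    up = mu > 0
    Ibar_f = np.where(up,
        I_bar_p + (0.5 * dx - 0.5 * v * mu * dt) * grad,          # from cell i
        np.roll(I_bar_p, -1, 0)
        + (-0.5 * dx - 0.5 * v * mu * dt) * np.roll(grad, -1, 0)) # from cell i+1
    Uf = Ibar_f @ w                            # face energy (Eq. (44) with G=0)
    ueq_f = 0.5 * C * (Uf / C)[:, None]
    u_f = (Ibar_f + 0.25 * dt * ueq_f / tau) / (1 + 0.25 * dt / tau)  # invert Eq. (36)
    flux = v * mu * u_f                        # microscopic flux at face i+1/2
    I_tilde = I_tilde_p - dt / dx * (flux - np.roll(flux, 1, 0))      # Eq. (26)
    U_new = I_tilde @ w                        # scattering conserves energy
    ueq_new = 0.5 * C * (U_new / C)[:, None]
    return (I_tilde + 0.5 * dt * ueq_new / tau) / (1 + 0.5 * dt / tau)  # invert Eq. (28)

for _ in range(200):
    u = step(u)
print("temperature spread:", np.ptp(u @ w) / C)   # decays toward zero
```

The physically important step is the interface value `u_f`: it comes from re-solving the BTE along the characteristic (Eqs. (32)-(43)) rather than from plain interpolation, which is exactly the DUGKS feature emphasized above.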
## IV RESULTS AND DISCUSSIONS
In this section, numerical simulations are conducted, and the present results are compared with experimental data or with results predicted by the macroscopic two-temperature model (TTM, see Appendix A). The time step of the DUGKS is $\Delta t=\text{CFL}\times\Delta x/v_{max}$, where $0<\text{CFL}<1$ is the Courant–Friedrichs–Lewy number, $\Delta x$ is the minimum cell size, and $v_{max}$ is the maximum group velocity of electrons and phonons. In the following simulations, $\text{CFL}=0.4$ or $0.8$. The first-order upwind scheme is used in the ballistic regime, and the van Leer limiter is used in the diffusive regime.
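As a small illustration, the time-step restriction can be evaluated with the Au electron group velocity quoted below in Sec. IV.1 (the grid choice here is ours):

```python
# Illustrative DUGKS time step: the electron group velocity dominates v_max.
v_e, v_p = 1.36e6, 2142.86          # m/s, Au values from Sec. IV.1
CFL, dx = 0.4, 100e-9 / 20          # e.g. a 100 nm film resolved by 20 cells
dt = CFL * dx / max(v_e, v_p)
print(f"dt = {dt:.3e} s")           # ~1.5e-15 s, i.e. femtosecond-scale steps
```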
The remainder of this section is organized as follows. The first three
subsections are mainly to verify the effectiveness of the present scheme. In
subsection IV.1, the DUGKS results are compared with the experimental data in
existing references and the TTM. All input parameters and experimental data
are obtained from the existing references. In subsection IV.2, a time-domain
thermoreflectance (TDTR) experimental platform is used to measure the
reflected signals, and the experimental data is fitted by the DUGKS to get a
reasonable electron-phonon coupling constant. In subsection IV.3, the steady
quasi-1D cross-plane heat conduction is simulated and results show that the
present scheme could predict the thermal transport accurately even when the
cell size is much larger than the mean free path. In subsection IV.4, the
dynamics of electrons and phonons in the transient thermal grating geometry are studied by the DUGKS, and the thermal behaviors and differences in the ballistic and diffusive regimes are discussed. In the final subsection IV.5, the heat conduction in bilayer metals is studied, accounting for interfacial thermal resistance.
### IV.1 Ultrafast laser heating for electron-phonon coupling constant
Figure 1: Ultrafast laser heating of single-layer metals.
(a) $L=100$ nm, front surface (b) $L=100$ nm, rear surface
(c) $L=200$ nm, front surface (d) $L=200$ nm, rear surface
Figure 2: Comparison of the normalized electron temperature predicted by the
present DUGKS results, two-temperature model (TTM) and experiments Brorson
_et al._ (1987); Qiu and Tien (1993) for single-layer Au film, where
$T_{e}^{*}=(T_{e}-T_{\text{ref}})/(T_{max}-T_{\text{ref}})$ and $T_{max}$ is
the maximum electron temperature.
(a) $P_{input}=17.6$ J$\cdot$m-2 (b) $P_{input}=70.6$ J$\cdot$m-2
Figure 3: Comparison of the normalized electron temperature predicted by the
present DUGKS results, two-temperature model (TTM) and experiments Guo and Xu
(2014) for single-layer Au film with fixed thickness $L=~{}1\mu$m and various
laser energy input $P_{input}$, where
$T_{e}^{*}=(T_{e}-T_{\text{ref}})/(T_{max}-T_{\text{ref}})$ and $T_{max}$ is
the maximum electron temperature.
The quasi-1D thermal transport of single-layer Au films with different thicknesses $L$ is studied, as shown in Fig. 1. The initial temperature in the whole domain is $T_{\text{ref}}=300$ K, and the ultrafast laser heating is implemented on the front surface $x=0$,
$\displaystyle
S=\frac{P_{input}(1-R_{r})}{t_{s}d_{pump}}\exp{\left(-\frac{x}{d_{pump}}-\frac{(4\ln{2})t^{2}}{t_{s}^{2}}\right)},$
(48)
where $R_{r}$ is the optical reflectivity, $d_{pump}$ is the optical penetration depth, $P_{input}$ is the total energy carried by a laser pulse divided by the laser spot cross section, and $t_{s}=t_{pump}+t_{h}$, where $t_{h}$ is the delay in the electron thermalization time after pulse absorption Hopkins _et al._ (2011); Fann _et al._ (1992b); Sun _et al._ (1993); Hohlfeld _et al._ (2000) and $t_{pump}$ is the full-width-at-half-maximum (FWHM) duration of the laser pulse.
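A direct transcription of the heat source of Eq. (48) reads as follows; the parameter values are those quoted for the Sec. IV.1 simulations below and should be treated as illustrative inputs.

```python
import numpy as np

# Sketch of the laser heat source of Eq. (48). Parameters follow Sec. IV.1
# (t_pump = 96 fs, t_h = 0, d_pump = 15.3 nm, R_r = 0.93, P_input = 10 J/m^2).
P_input, R_r = 10.0, 0.93
t_pump, t_h = 96e-15, 0.0
t_s = t_pump + t_h
d_pump = 15.3e-9

def S(x, t):
    """Volumetric heat source in W/m^3 at depth x (m) and time t (s)."""
    return (P_input * (1 - R_r) / (t_s * d_pump)
            * np.exp(-x / d_pump - 4 * np.log(2) * t**2 / t_s**2))

print(f"S(0, 0) = {S(0.0, 0.0):.3e} W/m^3")   # ~4.8e20 W/m^3 at peak
```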
Thermal physical parameters of electrons in Au are listed below: the specific heat is $C_{e}=\gamma T_{e}$ with $\gamma=71$ J$\cdot$m-3$\cdot$K-2, the group velocity is $|\bm{v}_{e}|=1.36\times 10^{6}$ m$\cdot$s-1, $E_{f}=5.51$ eV, and the thermal conductivity is Pattamatta and Madnia (2009b); Anisimov and Rethfeld (1997)
$\displaystyle\kappa_{e}=\chi\frac{(\vartheta_{e}^{2}+0.16)^{5/4}(\vartheta_{e}^{2}+0.44)\vartheta_{e}}{(\vartheta_{e}^{2}+0.092)^{1/2}(\vartheta_{e}^{2}+\eta\vartheta_{p})},$
(49)
where $\chi=353$ W$\cdot$m-1$\cdot$K-1, $\eta=0.16$,
$\vartheta_{e}=k_{B}T_{e}/E_{f}$, $\vartheta_{p}=k_{B}T_{p}/E_{f}$. The
relaxation time is calculated by the kinetic relation
$\tau_{e}=3\kappa_{e}/(C_{e}|\bm{v}_{e}|^{2})$. Thermal physical parameters of
phonons in Au metals are listed below: specific heat is $C_{p}=2.5\times
10^{6}$ J$\cdot$m-3$\cdot$K-1, group velocity is $|\bm{v}_{p}|=2142.86$
m$\cdot$s-1, thermal conductivity is $\kappa_{p}=2.6$ W$\cdot$m-1$\cdot$K-1,
relaxation time is $\tau_{p}=3\kappa_{p}/(C_{p}|\bm{v}_{p}|^{2})$. The phonon
and electron mean free path are about $1.5$ nm and $33$ nm, respectively.
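As a sanity check, Eq. (49) and the kinetic relation can be evaluated directly; the following sketch uses the constants from the text and reproduces the quoted electron mean free path of about $33$ nm.

```python
import numpy as np

# Electron thermal conductivity of Eq. (49) and the kinetic relaxation time
# tau_e = 3*kappa_e/(C_e*|v_e|^2), with the Au constants listed above.
kB, Ef = 1.380649e-23, 5.51 * 1.602176634e-19   # J/K, J
chi, eta, gamma, v_e = 353.0, 0.16, 71.0, 1.36e6

def kappa_e(Te, Tp):
    the = kB * Te / Ef
    thp = kB * Tp / Ef
    return (chi * (the**2 + 0.16)**1.25 * (the**2 + 0.44) * the
            / ((the**2 + 0.092)**0.5 * (the**2 + eta * thp)))

def tau_e(Te, Tp):
    Ce = gamma * Te
    return 3 * kappa_e(Te, Tp) / (Ce * v_e**2)

print(kappa_e(300.0, 300.0))              # ~315 W/m/K, cf. Table 1
print(v_e * tau_e(300.0, 300.0))          # mean free path ~3.3e-8 m (~33 nm)
```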
The transient heat conduction in single-layer Au films with different thicknesses $L$ is simulated by the DUGKS, and the dynamics of the electron temperature at the front ($x=0$) and rear ($x=L$) surfaces are plotted. In these simulations Mozafarifard _et al._ (2023); Qiu and Tien (1993), $t_{pump}=96$ fs, $t_{h}=0$ fs, $d_{pump}=15.3$ nm, $R_{r}=0.93$, $P_{input}=10$ J$\cdot$m-2, $G=2.6\times 10^{16}$ W$\cdot$m-3$\cdot$K-1. The computational domain is discretized with $20-80$ uniform cells, and $40$ discrete points in the $|\bm{v}|\cos\theta$ direction are used to ensure the accuracy of the numerical quadrature, where $\theta\in[0,\pi]$. The specular reflection
surfaces. The evolutions of normalized electron temperature
$T_{e}^{*}=(T_{e}-T_{\text{ref}})/(T_{max}-T_{\text{ref}})$ are shown in Fig.
2, where $T_{max}$ is the maximum electron temperature and we assume that the
change in the reflected signal is linear to the change in electron temperature
Qiu and Tien (1993); Mozafarifard _et al._ (2023). It can be found that the
present DUGKS results remain broadly consistent with the TTM and previous
experiments Brorson _et al._ (1987); Qiu and Tien (1993).
The deviations between the different models and the experiments result from the following reasons. 1) Uncertainty of experimental parameters Guo and Xu (2014); Qiu and Tien (1993); Sivan and Spector (2020); Sun _et al._ (1993): for example, it is very difficult to precisely determine the practical FWHM or optical reflectivity of the laser heating, which significantly influences the maximum electron temperature rise. 2) The TTM assumes diffusive electron/phonon transport with an infinite heat propagation speed, which may be invalid when the system characteristic length/time is comparable to or smaller than the mean free path/relaxation time.
We also compare with the experimental data measured by Guo _et al._ Guo and Xu (2014) for a fixed thickness $L=1~{}\mu$m and various laser heating pumps $P_{input}$. The electron ballistic penetration depth $d_{ballistic}$ is introduced, and the external heat source is
$\displaystyle S=$ $\displaystyle
P_{input}\frac{0.94(1-R_{r})}{t_{s}d_{length}\left(1-\exp(-L/d_{length})\right)}$
$\displaystyle\times\exp{\left(-\frac{x}{d_{length}}-(4\ln{2})\frac{t^{2}}{t_{s}^{2}}\right)},$
(50)
where $t_{s}=280$ fs, $d_{length}=d_{pump}+d_{ballistic}$, $d_{pump}=12.44$ nm, $d_{ballistic}=200$ nm, and $R_{r}=0.970$. Thermal physical parameters of electrons and phonons are the same as above, except that $\kappa_{p}=0.311$ W$\cdot$m-1$\cdot$K-1 and $G=1.5\times 10^{16}$ W$\cdot$m-3$\cdot$K-1.
The evolutions of the normalized electron temperature $T_{e}^{*}$ are shown in Fig. 3 for different laser heating inputs $P_{input}$, where the variations of the measured reflection signals are approximately linear in the electron temperature increment Guo and Xu (2014). When $L=1~{}\mu$m, both electrons and phonons undergo a diffusive transport process, and the profiles show that the DUGKS results are in excellent agreement with the TTM and the experiments. When $P_{input}=17.6$ J$\cdot$m-2, the maximum electron temperatures predicted by the DUGKS and TTM are $370$ K and $369.0$ K, respectively. When $P_{input}=70.6$ J$\cdot$m-2, the maximum electron temperatures predicted by the DUGKS and TTM are $533$ K and $527.4$ K, respectively. These results are consistent with the data shown in Ref. Guo and Xu (2014).
### IV.2 Time-domain thermoreflectance experiments
(a) $t_{h}=0$ (b) $t_{h}=260$ fs
Figure 4: Comparison of the normalized reflected signal $\Delta R/R$ between
the TDTR experiments and DUGKS with various electron-phonon coupling constant
$G$ and delay in the electron thermalization time after pulse absorption
$t_{h}$ Fann _et al._ (1992b); Sun _et al._ (1993).
The transient reflected signal for an Au thin film with thickness $18.2$ nm at an environment temperature of $T_{\text{ref}}=288.15$ K was measured using a time-domain thermoreflectance (TDTR) experimental platform, a widely used thermal property measurement method based on pump-probe technology. In this method, a modulated pulsed pump beam heats the surface periodically, and a delayed pulsed probe beam detects the variation in the thermoreflectance signal, which is a function of the electron and phonon temperatures. The signal picked up by a photodiode and a lock-in amplifier is fitted with an analytical heat conduction solution, such as the two-temperature model or the Boltzmann transport equation. A Ti:Sapphire femtosecond laser (Chameleon II from Coherent Inc.) is utilized to generate a pulsed beam with a pulse width of $140$ fs. The wavelengths of the pump beam and probe beam are $400$ nm and $800$ nm, respectively. These two beams are focused on the surface of the sample by a $10\times$ objective lens, with diameters of $39.2~{}\mu$m and $10.9~{}\mu$m, respectively. The delay time between the probe beam and the pump beam is controlled by a motorized linear stage (PRO165SL(E)-600 from Aerotech Inc.) and a four-fold light path, which provides a minimum delay time step of $0.01$ ps.
The thermal physical parameters of electrons and phonons, as well as the heat source (48), are the same as those used in Fig. 2, except that $t_{pump}=140$ fs and $P_{input}(1-R_{r})=0.0112$ J$\cdot$m-2. The heating spot radius is much larger than the film thickness, so the heat conduction can be approximated as a quasi-1D transient heat conduction problem. The normalized reflected signals $\Delta R/R$ predicted by the DUGKS, with the fitted electron-phonon coupling constant $G$ and the delay in the electron thermalization time after pulse absorption $t_{h}$ Hopkins _et al._ (2011); Fann _et al._ (1992b); Ibrahim _et al._ (2004); Chen _et al._ (2006), are shown in Fig. 4, where the relationship between the reflected signals and the electron/phonon temperatures is introduced in Appendix C. It can be found that when $t_{h}=260$ fs and $G\approx 2.0\times 10^{16}$ W$\cdot$m-3$\cdot$K-1, the numerical results are in excellent agreement with the four groups of TDTR measurements.
There are many uncertainties compared to the existing references: 1) different experimental samples and external heat sources, and 2) different empirical formulas and coefficients for the thermal physical parameters, for example the temperature-dependent electron specific heat. Hence the electron-phonon coupling constants of Au measured in various references differ. The electron-phonon coupling coefficient for the Au thin film measured by our TDTR is basically consistent with previous experimental data Guo and Xu (2014); Brorson _et al._ (1987); Qiu and Tien (1993); Sivan and Spector (2020); Sun _et al._ (1993), which range from $1.5\times 10^{16}$ to $4.0\times 10^{16}$ W$\cdot$m-3$\cdot$K-1.
### IV.3 Steady cross-plane heat conduction
Figure 5: Spatial distributions of the electron and phonon temperatures with
different thickness $L$, where $T^{*}=(T-T_{R})/(T_{L}-T_{R})$, the red line
is the phonon temperature $T_{p}$ and the dark green is the electron
temperature $T_{e}$.
(a) $L=100$ nm
(b) $L=10~{}\mu$m
Figure 6: Spatial distributions of the electron and phonon temperatures with
different thickness $L$, where $T^{*}=(T-T_{R})/(T_{L}-T_{R})$, the red line
is the phonon temperature $T_{p}$ and the dark green is the electron
temperature $T_{e}$. ‘M=40’ and ‘M=100’ represent the discretized cell number.
The quasi-1D cross-plane heat conduction in Au metal films with different thicknesses $L$ is studied. The wall temperatures at the two sides of the film are fixed at $T_{L}=T_{0}+\Delta T$ and $T_{R}=T_{0}-\Delta T$, respectively. The thermal physical parameters of electrons and phonons, as well as the numerical discretizations, are the same as those used in Fig. 1. Spatial distributions of the electron and phonon temperatures for different thicknesses $L$ are shown in Fig. 5, where isothermal boundary conditions (Eq. (61)) are used for the two walls. It can be found that temperature slip appears near the wall boundaries, and the temperature slip of the electrons is more obvious because the electron mean free path (33 nm) is larger than that of the phonons (1.5 nm) in Au. As the thickness increases, the electron-phonon coupling becomes more frequent, so the deviation between the phonon and electron temperatures decreases.
We also test the grid independence in Fig. 6. It can be found that the numerical results predicted by the coarse grid (‘$M=40$’) are in excellent agreement with those predicted by the fine grid (‘$M=100$’). When the thickness $L=10~{}\mu$m is much larger than the mean free path, the temperature slip disappears, as expected in the diffusive regime. Furthermore, the present scheme captures the temperature distributions accurately even when the coarse cell size ($L/M=250$ nm) is much larger than the mean free path.
### IV.4 Transient thermal grating
Table 1: Physical parameters of the BTE with coupled electron and phonon transport in the TTG geometry at $300$ K and $25$ K Pattamatta and Madnia (2009b); Freedman _et al._ (2019); Block _et al._ (2019, 2023).

Au | $300$ K | $25$ K
---|---|---
$C_{p}$ (J$\cdot$m-3$\cdot$K-1) | 2.50E6 | 5.10E5
$\kappa_{p}$ (W$\cdot$m-1$\cdot$K-1) | 2.75 | 23.0
$|\bm{v}_{p}|$ (m$\cdot$s-1) | 2142.857 | 2142.857
$|\tau_{p}|$ (s) | 7.187E-13 | 2.946E-11
$\lambda_{p}=|\bm{v}_{p}||\tau_{p}|$ (m) | 1.540E-9 | 6.314E-8
$\alpha_{p}=\kappa_{p}/C_{p}$ (m2$\cdot$s-1) | 1.10E-6 | 4.51E-5
$C_{e}$ (J$\cdot$m-3$\cdot$K-1) | 2.0E4 | 1.70E3
$\kappa_{e}$ (W$\cdot$m-1$\cdot$K-1) | 320.0 | 977.0
$|\bm{v}_{e}|$ (m$\cdot$s-1) | 1.36E6 | 1.36E6
$|\tau_{e}|$ (s) | 2.595E-14 | 9.322E-13
$\lambda_{e}=|\bm{v}_{e}||\tau_{e}|$ (m) | 3.53E-8 | 1.268E-6
$\alpha_{e}=\kappa_{e}/C_{e}$ (m2$\cdot$s-1) | 0.016 | 0.575
$G$ (W$\cdot$m-3$\cdot$K-1) | 3.0E16 | 5.0E15
(a) $L=10$ nm (b) $L=100$ nm (c) $L=1~{}\mu$m
(d) $L=10$ nm (e) $L=100$ nm (f) $L=1~{}\mu$m
Figure 7: Time-dependent electron and phonon temperatures at $x=L/4$ with
different $L$ when $T_{\text{ref}}=300$ K, where $\Delta T=1$ K,
$T_{e}^{*}=(T_{e}-T_{\text{ref}})/\Delta T$,
$T_{p}^{*}=(T_{p}-T_{\text{ref}})/\Delta T$. Reference data originates from
Ref. Maznev _et al._ (2011).
(a) $L=100$ nm (b) $L=1~{}\mu$m (c) $L=10~{}\mu$m
(d) $L=100$ nm (e) $L=1~{}\mu$m (f) $L=10~{}\mu$m
Figure 8: Time-dependent electron and phonon temperatures at $x=L/4$ with
different $L$ when $T_{\text{ref}}=25$ K, where $\Delta T=1$ K,
$T_{e}^{*}=(T_{e}-T_{\text{ref}})/\Delta T$,
$T_{p}^{*}=(T_{p}-T_{\text{ref}})/\Delta T$.
Transient thermal grating (TTG) Maznev _et al._ (2011); Sivan and Spector (2020) in an Au metal is analyzed. In the quasi-1D geometry, crossed pump lasers produce a sinusoidal interference pattern on the surface of a sample with period $L$, which results in a spatially sinusoidal temperature profile,
$\displaystyle T_{e}(x,t=0)$ $\displaystyle=T_{\text{ref}}+\Delta T\sin(2\pi
x/L),$ (51) $\displaystyle T_{p}(x,t=0)$ $\displaystyle=T_{\text{ref}},$ (52)
where $\Delta T$ is the temperature difference. We assume that the grating initially heats only the electronic subsystem. The computational domain is discretized with $20-50$ uniform cells, and $8-100$ discrete points in the $|\bm{v}|\cos\theta$ direction are used. Periodic boundary conditions are adopted for the domain. Detailed physical parameters of phonons and electrons in Au are listed in Table 1, where $\alpha=\kappa/C$ is the thermal diffusivity and $\lambda=|\bm{v}|\tau$ is the mean free path.
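The derived entries of Table 1 follow directly from the primary ones; a quick consistency check at $300$ K:

```python
# Mean free path lambda = |v|*tau and thermal diffusivity alpha = kappa/C,
# evaluated from the primary entries of Table 1 at 300 K.
carriers = {
    "phonon": dict(C=2.50e6, kappa=2.75, v=2142.857, tau=7.187e-13),
    "electron": dict(C=2.0e4, kappa=320.0, v=1.36e6, tau=2.595e-14),
}
for name, p in carriers.items():
    lam = p["v"] * p["tau"]
    alpha = p["kappa"] / p["C"]
    print(f"{name}: lambda = {lam:.3e} m, alpha = {alpha:.3e} m^2/s")
# phonon:   lambda ~ 1.54e-9 m, alpha ~ 1.10e-6 m^2/s
# electron: lambda ~ 3.53e-8 m, alpha ~ 1.60e-2 m^2/s  (matches Table 1)
```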
Numerical results of the time-dependent electron and phonon temperatures at $x=L/4$ for different spatial grating lengths $L$ are shown in Fig. 7, where $T_{\text{ref}}=300$ K and $\Delta T=1$ K. It can be found that when $L=1~{}\mu$m, the DUGKS results at room temperature are the same as those obtained from Ref. Maznev _et al._ (2011) or predicted by the TTM; namely, the heat conduction is in the diffusive regime. Indeed, the BTE recovers the TTM in the diffusive limit, see Appendix D. When the grating period decreases and becomes comparable to the electron mean free path, the results predicted by the TTM deviate from the DUGKS results. An electron temperature wave appears in the ballistic regime, which the diffusion equation cannot capture.
Another interesting observation is that initially the electron temperature is higher than the phonon temperature, so energy/heat is released from the electrons to the phonons; but after a while the phonon temperature becomes higher than the electron temperature, i.e., energy is transferred back from the phonons to the electrons, as shown in Fig. 7(c,f). Similar heat flow from phonons to electrons ($T_{e}-T_{p}<0$) also exists for $L=10$ nm or $100$ nm according to the DUGKS results. This phenomenon has also been measured experimentally Block _et al._ (2023) and studied theoretically with the TTM Maznev _et al._ (2011) at the micron scale.
The underlying physical mechanisms of $T_{e}-T_{p}<0$ can be summarized as follows. The heat conduction in the electron or phonon subsystem is composed of three parts: advection $\bm{v}\cdot\nabla u$, scattering $(u^{eq}-u)/\tau$, and electron-phonon coupling $G(T_{e}-T_{p})$. Advection and scattering drive the temperature toward a constant across spatial locations, i.e., $T(\bm{x})\rightarrow T_{\text{ref}}$, while the electron-phonon coupling aims to establish a local equilibrium between the phonon and electron subsystems at a given spatial position, i.e., $|T_{e}-T_{p}|\rightarrow 0$. In the diffusive regime, the phonon-phonon and electron-electron scattering are sufficient, so the heat dissipation depends on the thermal diffusivity $\alpha$ and the electron-phonon coupling $G$. In the ballistic regime, the phonon-phonon and electron-electron scattering are rare, so the heat dissipation depends on the phonon/electron advection speed $|\bm{v}|$ and the electron-phonon coupling $G$. The thermal diffusivity and group velocity of the electrons are much larger than those of the phonons, which indicates that the phonon subsystem acts more like an energy accumulator relative to the electron subsystem in both the diffusive and ballistic regimes. Hence the electron temperature varies between the phonon temperature $T_{p}$ and the reference temperature $T_{\text{ref}}$.
When the reference temperature decreases from $300$ K to $25$ K, both the thermal diffusivity and the mean free path increase. When $L\leq 1~{}\mu$m, electron ballistic transport is obvious and a temperature wave appears. The TTM results deviate significantly from the DUGKS results, because the diffusive transport assumption is no longer valid at such small grating periods. It can also be found that the anomalous thermal phenomenon $T_{e}-T_{p}<0$ still exists. When $L=10~{}\mu$m, the TTM results are consistent with the DUGKS data, and the thermal transport is in the diffusive regime.
Although the TTM is valid in the diffusive regime, the numerical results in Fig. 8(c,f) are quite different from those in Fig. 7(c,f). In Fig. 7(c,f), the electron-phonon coupling dominates the heat dissipation at the initial stage, so the electron temperature decreases significantly; the electron and phonon temperatures then decay slowly due to the small thermal diffusivity. In Fig. 8(c,f), the thermal diffusivities of the electrons and phonons are much larger than those at room temperature, so both the electron and phonon temperatures approach a constant in a shorter time.
### IV.5 Thermal transport in a metallic bilayer
Figure 9: Schematic of the physical evolution process of electron/phonon transport in bilayer metals with interfacial thermal resistance.

Table 2: Thermal physical parameters of electrons and phonons in Pt metals Wang _et al._ (2016) at $300$ K.

$C_{p}$ (J$\cdot$m-3$\cdot$K-1) | 2.67E6
---|---
$\kappa_{p}$ (W$\cdot$m-1$\cdot$K-1) | 5.80
$|\bm{v}_{p}|$ (m$\cdot$s-1) | 1.91E3
$C_{e}$ (J$\cdot$m-3$\cdot$K-1) | $\gamma T_{e}$
$\gamma$ (J$\cdot$m-3$\cdot$K-2) | $748.1$
$\kappa_{e}$ (W$\cdot$m-1$\cdot$K-1) | $\kappa_{0}T_{e}/T_{p}$
$\kappa_{0}$ (W$\cdot$m-1$\cdot$K-1) | $65.80$
$|\bm{v}_{e}|$ (m$\cdot$s-1) | 0.46E6
$G$ (W$\cdot$m-3$\cdot$K-1) | 108.80E16
(a)
(b)
Figure 10: Transient (a) temperature evolution and (b) reflected signal in
Pt/Au bilayer metals, where $\Delta T=T-T_{\text{ref}}$ and $\Delta R=a\Delta
T_{e}+b\Delta T_{p}$. For Pt, $a/b=0.25$.
(a) Au/Pt
(b) Au/Pt
Figure 11: Transient (a) temperature evolution and (b) reflected signal in
Au/Pt bilayer metals, where $\Delta T=T-T_{\text{ref}}$ and $\Delta R=a\Delta
T_{e}+b\Delta T_{p}$. For Au, $a/b=0.02$.
(a) $t=20\Delta t$ (b) $t=60\Delta t$
(c) $t=1000\Delta t$ (d) $t=10000\Delta t$
Figure 12: Spatial distributions of electron and phonon temperature in Au/Pt
at different moment, where $\Delta t=0.0294$ ps.
The heat conduction in the previous four subsections considered only a single-layer metal film. In practical experiments and applications, heat conduction in bilayer or multilayer dissimilar materials is more common Giri _et al._ (2015); Hopkins _et al._ (2009); Karna _et al._ (2023); Choi _et al._ (2014). In this subsection, the thermal transport in a metallic bilayer (Au/Pt or Pt/Au Choi _et al._ (2014)) under ultrafast laser heating is studied by the DUGKS, and the interfacial thermal resistance between the two dissimilar materials is accounted for Chen _et al._ (2022); Wang _et al._ (2016); Miao and Wang (2023).
The diffuse mismatch model Chen _et al._ (2022); Wang _et al._ (2016); Miao and Wang (2023), in which electron/phonon scattering at the metal-metal interface is assumed to be completely diffuse, is used to describe the interfacial thermal resistance. The transmittance $t$ and reflectance $r$ on each side of the interface satisfy the following constraints due to energy conservation,
$\displaystyle r_{ij}+t_{ij}$ $\displaystyle=1,$ (53)
where $t_{ij}$ represents the transmittance from medium $i$ to medium $j$
across the interface, and $r_{ij}$ represents the reflectance in the medium
$i$ reflected back from the interface. In addition, at the thermal equilibrium
state (the temperature of the left and right side of interface is the same),
the net heat flux across the interface is zero due to the principle of
detailed balance so that
$\displaystyle t_{ij}C_{i}v_{i}=t_{ji}C_{j}v_{j},$ (54)
where $C_{i}$ and $v_{i}$ are the specific heat and group velocity of medium $i$. We only consider the interaction between electrons (phonons) in medium $i$ and electrons (phonons) in medium $j$. A schematic of the particle transport in bilayer metals is shown in Fig. 9.
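Energy conservation (Eq. (53)) and detailed balance (Eq. (54)) alone do not fix the transmittance; the standard diffuse-mismatch closure additionally assumes $r_{ij}=t_{ji}$, which gives $t_{ij}=C_{j}v_{j}/(C_{i}v_{i}+C_{j}v_{j})$. A small sketch with the phonon parameters of Au (Table 1) and Pt (Table 2):

```python
# Diffuse-mismatch closure: t_ij = C_j*v_j / (C_i*v_i + C_j*v_j).
# One can verify that t_ij*C_i*v_i = t_ji*C_j*v_j (detailed balance, Eq. (54))
# and r_ij = 1 - t_ij (energy conservation, Eq. (53)).
def dmm_transmittance(C_i, v_i, C_j, v_j):
    t_ij = C_j * v_j / (C_i * v_i + C_j * v_j)
    return t_ij, 1.0 - t_ij        # (transmittance, reflectance) on side i

C_Au, v_Au = 2.50e6, 2142.857      # phonons in Au, Table 1 (300 K)
C_Pt, v_Pt = 2.67e6, 1.91e3        # phonons in Pt, Table 2
t, r = dmm_transmittance(C_Au, v_Au, C_Pt, v_Pt)
print(f"phonon t_Au->Pt = {t:.3f}, r = {r:.3f}")   # ~0.49, ~0.51
```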
We consider a combination of a 23 nm Pt film and a 58 nm Au film with an initial environment temperature $T_{\text{ref}}=300$ K. The thermal physical parameters of electrons and phonons in Au are shown in Table 1 with $\gamma=71.0$ J$\cdot$m-3$\cdot$K-2, and the thermal physical parameters of electrons and phonons in Pt are shown in Table 2 Wang _et al._ (2016).
Firstly, we simulate the transient heat conduction in the Au/Pt bilayer metal, with the external heat source given in Eq. (48) with $t_{pump}=400$ fs and $P_{input}(1-R_{r})=1.60$ J$\cdot$m-2 Choi _et al._ (2014). Specular boundary conditions are adopted at the heated top surface for both phonons and electrons. For the unheated bottom surface, the specular boundary condition (Eq. (62)) is used for electron transport, and the isothermal boundary condition (Eq. (61)) with an environment temperature of $300$ K is used for phonon transport, which indicates that the net electron heat flux at the bottom surface is zero while the phonon heat flux is nonzero. Uniform cells with a cell size of $1$ nm are used, and the CFL number is $0.80$. $40$ discrete points in the $|\bm{v}|\cos\theta$ direction are used. Similarly, we also simulate the transient heat conduction in the Pt/Au bilayer metal.
Numerical results are shown in Fig. 10. It can be found that in the Pt/Au bilayer, the phonon temperature in Pt is always higher than that in Au. The reflected signal first increases and then decreases gradually with time. For the Au/Pt bilayer the thermal behaviors are completely different, as shown in Fig. 11. The phonon temperature in the unheated metal is higher than that in the heated metal during the early tens of picoseconds. The reflected signal is also non-monotonic in time after the laser heating is almost removed: it increases during the early tens of picoseconds and finally decreases.
To understand this anomalous heat conduction phenomenon, the spatial distributions of the electron and phonon temperatures in the Au/Pt bilayer at different moments are plotted in Fig. 12. At the early stage $t=20\Delta t$, the electron temperature in Au is the highest. There is an obvious temperature slip, i.e., interfacial thermal resistance, at the Au/Pt interface for both electron and phonon transport. In addition, the phonon temperature on the Pt side is higher than that on the Au side, because the electron-phonon coupling in Pt is much stronger than that in Au. When $t=60\Delta t$, the electron temperature on the Pt side near the Au/Pt interface is higher than that on the Au side. When $t=1000\Delta t$, the phonon temperature in the Au metal increases, and the lattice energy comes from two parts: one part is the electron-phonon coupling in Au, because the electron temperature is still higher than the phonon temperature; the other is the phonon heat flowing back from the Pt to the Au side, which causes the phonon temperature difference across the Au/Pt interface to decrease. When $t=10000\Delta t$, the electron-phonon, electron-electron, and phonon-phonon scattering are sufficient, so all temperatures decrease gradually with time, as shown in Fig. 11(a).
## V Conclusion
Non-equilibrium thermal conduction in ultrafast heating systems is studied by
the BTE accounting for electron-phonon coupling. A discrete unified gas
kinetic scheme is developed to directly solve the BTE, in which the
electron/phonon advection, scattering and electron-phonon interactions are
coupled together within one time step by solving the BTE again at the cell
interface. Numerical results show that the present scheme not only correctly
predicts the heat conduction in the diffusive regime with coarse cell size,
but also captures the ballistic or thermal wave effects when the
characteristic length is comparable to or smaller than the mean free path
where the TTM fails. For the ultrafast laser heating problem, the present results are in excellent agreement with the experimental results in the existing literature and with our own TDTR measurements.
In the transient thermal grating geometry, heat flow from phonons to electrons is predicted in both the ballistic and diffusive regimes. It results from the competition between the thermal diffusivity and the electron-phonon coupling in the diffusive regime, and from the competition between the phonon/electron advection and the electron-phonon coupling in the ballistic regime. Furthermore, in Au/Pt bilayer metals with interfacial thermal resistance, the predicted reflected signal increases during the early tens of picoseconds and then decreases with time after the heat source is removed. The stronger electron-phonon coupling on the unheated Pt side results in a higher phonon temperature in Pt, so that phonon heat flows back from the Pt to the Au side.
## Acknowledgments
This work is supported by the China Postdoctoral Science Foundation
(2021M701565). The authors are grateful to Prof. Yonatan Sivan in Ben-Gurion
University and Dr. Wuli Miao in Tsinghua University for communications on
electron-phonon coupling.
## Appendix A Two-temperature model
The two-temperature model (TTM) SINGH (2010); Qiu and Tien (1994); Lin _et al._ (2008); Rethfeld _et al._ (2002) with coupled electron and phonon thermal transport reads
$\displaystyle C_{e}\frac{\partial T_{e}}{\partial t}$
$\displaystyle=\nabla\cdot(\kappa_{e}\nabla T_{e})-G(T_{e}-T_{p})+S,$ (55)
$\displaystyle C_{p}\frac{\partial T_{p}}{\partial t}$
$\displaystyle=\nabla\cdot(\kappa_{p}\nabla T_{p})+G(T_{e}-T_{p}).$ (56)
In the discretized time space, the diffusion and heat source terms are treated with a semi-implicit scheme, and the electron-phonon coupling term is implemented with an explicit scheme,
$\displaystyle C_{e}^{m}\frac{\delta T_{e}^{m}}{\Delta
t}-0.5\nabla\cdot(\kappa_{e}^{m}\nabla\delta T_{e}^{m})$
$\displaystyle=\nabla\cdot(\kappa_{e}^{m}\nabla
T_{e}^{m})-G(T_{e}^{m}-T_{p}^{m})+\frac{S^{m}+S^{m+1}}{2},$ (57)
$\displaystyle C_{p}^{m}\frac{\delta T_{p}^{m}}{\Delta
t}-0.5\nabla\cdot(\kappa_{p}^{m}\nabla\delta T_{p}^{m})$
$\displaystyle=\nabla\cdot(\kappa_{p}^{m}\nabla
T_{p}^{m})+G(T_{e}^{m}-T_{p}^{m}),$ (58)
where $\delta T^{m}=T^{m+1}-T^{m}$ is the temperature increment at time step $m$. The electron-phonon coupling term can also be implemented with a semi-implicit scheme,
$\displaystyle C_{e}^{m}\frac{\delta T_{e}^{m}}{\Delta
t}-0.5\nabla\cdot(\kappa_{e}^{m}\nabla\delta T_{e}^{m})+0.5G(\delta
T_{e}^{m}-\delta T_{p}^{m})$ $\displaystyle=\nabla\cdot(\kappa_{e}^{m}\nabla
T_{e}^{m})-G(T_{e}^{m}-T_{p}^{m})+\frac{S^{m}+S^{m+1}}{2},$ (59)
$\displaystyle C_{p}^{m}\frac{\delta T_{p}^{m}}{\Delta
t}-0.5\nabla\cdot(\kappa_{p}^{m}\nabla\delta T_{p}^{m})-0.5G(\delta
T_{e}^{m}-\delta T_{p}^{m})$ $\displaystyle=\nabla\cdot(\kappa_{p}^{m}\nabla
T_{p}^{m})+G(T_{e}^{m}-T_{p}^{m}).$ (60)
However, the latter method generates a larger coefficient matrix, with the coupled electron and phonon temperature increments $G(\delta T_{e}^{m}-\delta T_{p}^{m})$, compared to the former. We adopt the first strategy in this work for simplicity. To simulate the present quasi-1D numerical cases, $10-100$ uniform cells are used, and the central scheme is used to discretize the diffusion term in space.
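A minimal 1D realization of the semi-implicit update (Eqs. (57), (58)) is sketched below, assuming constant conductivities, insulated ends, and a crude instantaneous surface heating instead of the pulsed source; the material numbers are the Au values of Sec. IV.1, and the discretization choices are ours.

```python
import numpy as np

N, L = 100, 100e-9
dx, dt = L / N, 1e-15
gamma, Cp = 71.0, 2.5e6              # C_e = gamma*T_e; phonon C_p
kap_e, kap_p, G = 320.0, 2.6, 2.6e16

def lap_matrix(n, dx):
    """Tridiagonal Laplacian with zero-flux (insulated) boundaries."""
    A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A[0, 0] = A[-1, -1] = -1.0       # ghost-cell reflection at the ends
    return A / dx**2

Lap = lap_matrix(N, dx)
Te = np.full(N, 300.0); Te[:5] = 600.0   # crude initial "laser" heating
Tp = np.full(N, 300.0)

for _ in range(2000):                # 2 ps of evolution
    Ce = gamma * Te                  # temperature-dependent C_e
    coup = G * (Te - Tp)             # explicit coupling term
    # electrons, Eq. (57): (C/dt) dTe - 0.5*kap*Lap dTe = kap*Lap Te - coup
    Ae = np.diag(Ce / dt) - 0.5 * kap_e * Lap
    dTe = np.linalg.solve(Ae, kap_e * (Lap @ Te) - coup)
    # phonons, Eq. (58): (C/dt) dTp - 0.5*kap*Lap dTp = kap*Lap Tp + coup
    Ap = np.diag(np.full(N, Cp / dt)) - 0.5 * kap_p * Lap
    dTp = np.linalg.solve(Ap, kap_p * (Lap @ Tp) + coup)
    Te, Tp = Te + dTe, Tp + dTp

print(f"max Te = {Te.max():.1f} K, max Tp = {Tp.max():.1f} K")
```

Because the coupling is explicit, each carrier leads to its own small tridiagonal system; the semi-implicit variant (Eqs. (59), (60)) would instead couple the two increments into one larger system, which is the trade-off discussed above.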
## Appendix B Boundary conditions
Boundary conditions are one of the key parts in the numerical simulations. Two
kinds of boundary conditions are used in this work.
1. 1.
The thermalization/isothermal boundary condition assumes that the incident particles (phonons/electrons) are all absorbed by the boundary $\bm{x}_{b}$, and that the particles emitted from the boundary are in the equilibrium state at the boundary temperature $T_{b}$, i.e.,
$u(\bm{x}_{b})=u^{eq}(T_{b}),\quad\bm{v}\cdot\mathbf{n}_{b}>0,$ (61)
where $\mathbf{n}_{b}$ is the unit normal vector of the boundary pointing into the computational domain.
2. 2.
The specular reflection boundary condition is
$\displaystyle
u(\bm{v})=u(\bm{v}^{\prime}),\quad\bm{v}^{\prime}\cdot\mathbf{n}_{b}<0,$ (62)
where $\bm{v}^{\prime}=\bm{v}-2\mathbf{n}_{b}(\bm{v}\cdot\mathbf{n}_{b})$ is
the incident direction.
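In code, the specular condition of Eq. (62) is simply a mirror of the velocity about the boundary normal:

```python
import numpy as np

# Specular reflection, Eq. (62): v' = v - 2*(v . n_b)*n_b for unit normal n_b.
def specular_reflect(v, n_b):
    return v - 2.0 * np.dot(v, n_b) * n_b

v = np.array([1.0, -1.0, 0.0])
n_b = np.array([0.0, 1.0, 0.0])       # wall normal pointing into the domain
print(specular_reflect(v, n_b))       # [1. 1. 0.]
```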
## Appendix C Temperature dependent thermoreflectance signals
The reflectivity signal $R$ Block _et al._ (2019); Hopkins _et al._ (2011)
for a thin Au film with thickness $h$ and refractive index
($n_{2}=\sqrt{\epsilon}$) under perpendicular incidence from air ($n_{1}=1$)
and supported on a sapphire substrate ($n_{3}=1.7$) is $R=|r|^{2}$, where
$\displaystyle r$
$\displaystyle=\frac{r_{12}+r_{23}\cdot\exp(2i\beta)}{1+r_{12}r_{23}\cdot\exp(2i\beta)},$
(63) $\displaystyle\beta$ $\displaystyle=2\pi n_{2}h/\lambda_{0},$ (64)
$\displaystyle r_{jk}$ $\displaystyle=\frac{n_{j}-n_{k}}{n_{j}+n_{k}},$ (65)
where $i$ is the imaginary unit and $\lambda_{0}$ is the wavelength of the probe pulse. The permittivity (dielectric function) $\epsilon$ of the Au thin film is related to the electron temperature $T_{e}$ and the phonon temperature $T_{p}$, i.e.,
$\displaystyle\epsilon$
$\displaystyle=\epsilon_{\infty}-\frac{\omega_{p}^{2}(T_{p})}{\omega_{0}(\omega_{0}+i\gamma_{re}(T_{e},T_{p}))},$
(66) $\displaystyle\gamma_{re}(T_{e},T_{p})$
$\displaystyle=A_{ee}T_{e}^{2}+B_{ep}T_{p},$ (67)
where $\omega_{p}\approx 1.37\times 10^{16}$ rad/s is the plasma frequency Kittel _et al._ (1996); Guo _et al._ (2012), $\epsilon_{\infty}=9.50$, $A_{ee}=1.77\times 10^{7}$ K-2$\cdot$s-1, $B_{ep}=1.45\times 10^{11}$ K-1$\cdot$s-1, and $\omega_{0}$ is the angular frequency of the probe pulse. The nonlinear thermoreflectance model is complicated; hence, in many references the change in the reflected signal is simply assumed to be linear in the temperature rise.
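The reflectance model of Eqs. (63)-(67) can be evaluated directly; the sketch below uses the constants quoted in this appendix, takes $\omega_{p}$ as constant for simplicity (in general it depends on $T_{p}$), and probes at $\lambda_{0}=800$ nm as in Sec. IV.2.

```python
import numpy as np

# Thin-film thermoreflectance of Eqs. (63)-(67): Au film (n2 from the
# Drude-like permittivity) between air (n1 = 1) and sapphire (n3 = 1.7).
eps_inf, omega_p = 9.50, 1.37e16      # omega_p taken constant here
A_ee, B_ep = 1.77e7, 1.45e11
lam0 = 800e-9
omega0 = 2 * np.pi * 2.99792458e8 / lam0
n1, n3, h = 1.0, 1.7, 18.2e-9         # film thickness from Sec. IV.2

def reflectivity(Te, Tp):
    gamma_re = A_ee * Te**2 + B_ep * Tp                               # Eq. (67)
    eps = eps_inf - omega_p**2 / (omega0 * (omega0 + 1j * gamma_re))  # Eq. (66)
    n2 = np.sqrt(eps)
    r12 = (n1 - n2) / (n1 + n2)                                       # Eq. (65)
    r23 = (n2 - n3) / (n2 + n3)
    beta = 2 * np.pi * n2 * h / lam0                                  # Eq. (64)
    r = (r12 + r23 * np.exp(2j * beta)) / (1 + r12 * r23 * np.exp(2j * beta))
    return abs(r)**2                                                  # R = |r|^2

R0 = reflectivity(300.0, 300.0)
R1 = reflectivity(600.0, 300.0)
print(f"R(300,300) = {R0:.4f}, dR/R for Te = 600 K: {(R1 - R0) / R0:.2e}")
```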
## Appendix D Dimensional analysis
We make a dimensional analysis of the BTE (8,9) without the external heat source,
$\displaystyle\frac{\partial u_{e}^{*}}{\partial
t^{*}}+|\bm{v}_{e}^{*}|\bm{s}\cdot\nabla_{\bm{x}^{*}}u_{e}^{*}$
$\displaystyle=\frac{u_{e}^{eq,*}-u_{e}^{*}}{\tau_{e}^{*}}-\frac{(T_{e}^{*}-T_{p}^{*})/(4\pi)}{1/G^{*}},$
(68) $\displaystyle\frac{\partial u_{p}^{*}}{\partial
t^{*}}+|\bm{v}_{p}^{*}|\bm{s}\cdot\nabla_{\bm{x}^{*}}u_{p}^{*}$
$\displaystyle=\frac{u_{p}^{eq,*}-u_{p}^{*}}{\tau_{p}^{*}}+\frac{(T_{e}^{*}-T_{p}^{*})/(4\pi)}{1/G^{*}},$
(69)
where $\bm{s}$ is the unit directional vector and the thermal physical
parameters of electrons are regarded as the reference variables
$v_{\text{ref}}=|\bm{v}_{e}|$, $C_{\text{ref}}=C_{e}$, so that
$\displaystyle t^{*}$ $\displaystyle=\frac{t}{t_{\text{ref}}},\quad$
$\displaystyle G^{*}$
$\displaystyle=\frac{t_{\text{ref}}}{C_{\text{ref}}/G},\quad$
$\displaystyle\bm{x}^{*}$ $\displaystyle=\frac{\bm{x}}{L_{\text{ref}}},$ (70)
$\displaystyle u_{p}^{*}$
$\displaystyle=\frac{u_{p}}{C_{\text{ref}}T_{\text{ref}}},\quad$
$\displaystyle\tau_{p}^{*}$
$\displaystyle=\frac{\tau_{p}}{t_{\text{ref}}},\quad$
$\displaystyle\bm{v}_{p}^{*}$
$\displaystyle=\frac{\bm{v}_{p}}{v_{\text{ref}}},$ (71) $\displaystyle
u_{e}^{*}$ $\displaystyle=\frac{u_{e}}{C_{\text{ref}}T_{\text{ref}}},\quad$
$\displaystyle\tau_{e}^{*}$
$\displaystyle=\frac{\tau_{e}}{t_{\text{ref}}},\quad$
$\displaystyle\bm{v}_{e}^{*}$
$\displaystyle=\frac{\bm{v}_{e}}{v_{\text{ref}}},$ (72) $\displaystyle
T_{p}^{*}$ $\displaystyle=\frac{T_{p}}{T_{\text{ref}}},\quad$ $\displaystyle
T_{e}^{*}$ $\displaystyle=\frac{T_{e}}{T_{\text{ref}}},\quad$ $\displaystyle
t_{\text{ref}}$ $\displaystyle=\frac{L_{\text{ref}}}{v_{\text{ref}}},$ (73)
where $L_{\text{ref}}$ is the system characteristic size. When
$t^{*}\gg\tau_{e}^{*}$ and $x^{*}\gg|\bm{v}_{e}^{*}|\tau_{e}^{*}$, diffusive
electron transport happens. When $t^{*}\gg\tau_{p}^{*}$ and
$x^{*}\gg|\bm{v}_{p}^{*}|\tau_{p}^{*}$, diffusive phonon transport happens.
When $t^{*}\gg 1/G^{*}$, $x^{*}\gg|\bm{v}_{e}^{*}|/G^{*}$ and
$x^{*}\gg|\bm{v}_{p}^{*}|/G^{*}$, electron-phonon coupling is sufficient so
that the deviations between electron and phonon temperatures tend to zero.
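For orientation, these nondimensional groups can be evaluated for Au (Table 1 values at $300$ K) at several characteristic sizes; the script below is illustrative only.

```python
# Nondimensional groups of Appendix D for Au at 300 K (Table 1 values),
# with the electron properties as reference: t_ref = L_ref / |v_e|.
v_e, tau_e = 1.36e6, 2.595e-14
tau_p = 7.187e-13
C_e, G = 2.0e4, 3.0e16

for L_ref in (10e-9, 100e-9, 1e-6):
    t_ref = L_ref / v_e
    tau_e_star = tau_e / t_ref        # electron Knudsen-like number
    tau_p_star = tau_p / t_ref        # phonon Knudsen-like number
    inv_G_star = (C_e / G) / t_ref    # coupling time over reference time
    print(f"L = {L_ref:.0e} m: tau_e* = {tau_e_star:.2f}, "
          f"tau_p* = {tau_p_star:.2f}, 1/G* = {inv_G_star:.2f}")
```

Large values of $\tau^{*}$ or $1/G^{*}$ signal ballistic transport or weak coupling at that scale, consistent with the TTG results of Sec. IV.4.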
When both electrons and phonons undergo diffusive transport, according to the first-order Chapman–Enskog expansion the distribution functions can be approximated as
$\displaystyle u_{e}^{*}$ $\displaystyle\approx
u_{e}^{eq,*}-\tau_{e}^{*}\left(\frac{\partial u_{e}^{eq,*}}{\partial
t^{*}}+|\bm{v}_{e}^{*}|\bm{s}\cdot\nabla_{\bm{x}^{*}}u_{e}^{eq,*}+\frac{(T_{e}^{*}-T_{p}^{*})/(4\pi)}{1/G^{*}}\right),$
(74) $\displaystyle u_{p}^{*}$ $\displaystyle\approx
u_{p}^{eq,*}-\tau_{p}^{*}\left(\frac{\partial u_{p}^{eq,*}}{\partial
t^{*}}+|\bm{v}_{p}^{*}|\bm{s}\cdot\nabla_{\bm{x}^{*}}u_{p}^{eq,*}-\frac{(T_{e}^{*}-T_{p}^{*})/(4\pi)}{1/G^{*}}\right).$
(75)
Combining the above six equations and integrating the BTE (68,69) over the whole first Brillouin zone, we obtain
$\displaystyle\frac{\partial}{\partial
t^{*}}\left<u_{e}^{eq,*}-\tau_{e}^{*}\left(\frac{\partial
u_{e}^{eq,*}}{\partial
t^{*}}+\frac{(T_{e}^{*}-T_{p}^{*})/(4\pi)}{1/G^{*}}\right)\right>+\nabla_{\bm{x}^{*}}\cdot\left<|\bm{v}_{e}^{*}||\bm{v}_{e}^{*}|\bm{s}\bm{s}\tau_{e}^{*}u_{e}^{eq,*}\right>$
$\displaystyle=-\frac{T_{e}^{*}-T_{p}^{*}}{1/G^{*}},$ (76)
$\displaystyle\frac{\partial}{\partial
t^{*}}\left<u_{p}^{eq,*}-\tau_{p}^{*}\left(\frac{\partial
u_{p}^{eq,*}}{\partial
t^{*}}+\frac{(T_{e}^{*}-T_{p}^{*})/(4\pi)}{1/G^{*}}\right)\right>+\nabla_{\bm{x}^{*}}\cdot\left<|\bm{v}_{p}^{*}||\bm{v}_{p}^{*}|\bm{s}\bm{s}\tau_{p}^{*}u_{p}^{eq,*}\right>$
$\displaystyle=\frac{T_{e}^{*}-T_{p}^{*}}{1/G^{*}},$ (77)
where $\left<\cdot\right>$ denotes the integral over the whole first Brillouin zone. The associated macroscopic heat conduction equations are
$\displaystyle C_{e}\frac{\partial T_{e}}{\partial
t}-\tau_{e}\frac{\partial^{2}U_{e}}{\partial
t^{2}}-\tau_{e}\frac{\partial(GT_{e}-GT_{p})}{\partial t}$
$\displaystyle=\nabla\cdot(\kappa_{e}\nabla T_{e})-G(T_{e}-T_{p}),$ (78)
$\displaystyle C_{p}\frac{\partial T_{p}}{\partial
t}-\tau_{p}\frac{\partial^{2}U_{p}}{\partial
t^{2}}+\tau_{p}\frac{\partial(GT_{e}-GT_{p})}{\partial t}$
$\displaystyle=\nabla\cdot(\kappa_{p}\nabla T_{p})+G(T_{e}-T_{p}),$ (79)
where $\kappa=\int C|\bm{v}|^{2}\tau/3\,d\bm{K}$ is the bulk thermal conductivity. When $\sqrt{\tau_{e}^{*}}\ll t^{*}$, $\sqrt{\tau_{p}^{*}}\ll t^{*}$, $\tau_{e}^{*}/(t^{*}/G^{*})\ll 1$, and $\tau_{p}^{*}/(t^{*}/G^{*})\ll 1$, the first-order relaxation-time terms in Eqs. (78,79) can be dropped, and the above two equations recover the typical two-temperature model in the diffusive limit.
## References
* Chen (2005) G. Chen, _Nanoscale energy transport and conversion: A parallel treatment of electrons, molecules, phonons, and photons_ (Oxford University Press, 2005).
* Kittel _et al._ (1996) C. Kittel, P. McEuen, and P. McEuen, _Introduction to solid state physics_ , Vol. 8 (Wiley New York, 1996).
* Fann _et al._ (1992a) W. S. Fann, R. Storz, H. W. K. Tom, and J. Bokor, Phys. Rev. Lett. 68, 2834 (1992a).
* Karna _et al._ (2023) P. Karna, M. S. B. Hoque, S. Thakur, P. E. Hopkins, and A. Giri, Nano Lett. 23, 491 (2023).
* Block _et al._ (2019) A. Block, M. Liebel, R. Yu, M. Spector, Y. Sivan, F. J. G. de Abajo, and N. F. van Hulst, Sci. Adv. 5, eaav8965 (2019).
* Allen (1987) P. B. Allen, Phys. Rev. Lett. 59, 1460 (1987).
* Caruso and Novko (2022) F. Caruso and D. Novko, Advances in Physics: X 7, 2095925 (2022).
* Mao _et al._ (2021) J. Mao, G. Chen, and Z. Ren, Nat. Mater. 20, 454 (2021).
* Deng _et al._ (2021) C. Deng, Y. Huang, M. An, and N. Yang, Mater. Today Phys. 16, 100305 (2021).
* Pop _et al._ (2004) E. Pop, R. W. Dutton, and K. E. Goodson, J. Appl. Phys. 96, 4998 (2004).
* Pop (2010) E. Pop, Nano Res. 3, 147 (2010).
* Carpene (2006) E. Carpene, Phys. Rev. B 74, 024301 (2006).
* Schoenlein _et al._ (1987) R. W. Schoenlein, W. Z. Lin, J. G. Fujimoto, and G. L. Eesley, Phys. Rev. Lett. 58, 1680 (1987).
* Groeneveld _et al._ (1992) R. H. M. Groeneveld, R. Sprik, and A. Lagendijk, Phys. Rev. B 45, 5079 (1992).
* Waldecker _et al._ (2016) L. Waldecker, R. Bertoni, R. Ernstorfer, and J. Vorberger, Phys. Rev. X 6, 021003 (2016).
* Lu _et al._ (2018) Z. Lu, A. Vallabhaneni, B. Cao, and X. Ruan, Phys. Rev. B 98, 134309 (2018).
* Zhou _et al._ (2015) J. Zhou, N. Li, and R. Yang, 88, 156 (2015).
* Sun _et al._ (1994) C.-K. Sun, F. Vallée, L. H. Acioli, E. P. Ippen, and J. G. Fujimoto, Phys. Rev. B 50, 15337 (1994).
* Qiu and Tien (1994) T. Qiu and C. Tien, Int. J. Heat Mass Transfer 37, 2789 (1994).
* Lin _et al._ (2008) Z. Lin, L. V. Zhigilei, and V. Celli, Phys. Rev. B 77, 075133 (2008).
* Rethfeld _et al._ (2002) B. Rethfeld, A. Kaiser, M. Vicanek, and G. Simon, Phys. Rev. B 65, 214303 (2002).
* Anisimov _et al._ (1974) S. I. Anisimov, B. L. Kapeliovich, and T. L. Perelman, Sov. Phys.-JETP 39, 375 (1974).
* Groeneveld _et al._ (1995) R. H. M. Groeneveld, R. Sprik, and A. Lagendijk, Phys. Rev. B 51, 11433 (1995).
* Brorson _et al._ (1987) S. D. Brorson, J. G. Fujimoto, and E. P. Ippen, Phys. Rev. Lett. 59, 1962 (1987).
* Fujimoto _et al._ (1984) J. G. Fujimoto, J. M. Liu, E. P. Ippen, and N. Bloembergen, Phys. Rev. Lett. 53, 1837 (1984).
* Guo and Xu (2014) L. Guo and X. Xu, Journal of Heat Transfer 136 (2014).
* Qiu and Tien (1992) T. Qiu and C. Tien, Int. J. Heat Mass Transfer 35, 719 (1992).
* Qiu _et al._ (1994) T. Qiu, T. Juhasz, C. Suarez, W. Bron, and C. Tien, Int. J. Heat Mass Transfer 37, 2799 (1994).
* Joseph and Preziosi (1989) D. D. Joseph and L. Preziosi, Rev. Mod. Phys. 61, 41 (1989).
* Sobolev (2021) S. L. Sobolev, Nanoscale and Microscale Thermophysical Engineering 25, 153 (2021).
* Cattaneo (1948) C. Cattaneo, Atti Sem. Mat. Fis. Univ. Modena 3, 83 (1948).
* Pattamatta and Madnia (2009a) A. Pattamatta and C. K. Madnia, Numerical Heat Transfer, Part A: Applications 55, 611 (2009a).
* Chen _et al._ (2006) J. Chen, D. Tzou, and J. Beraun, Int. J. Heat Mass Transfer 49, 307 (2006).
* Pattamatta and Madnia (2009b) A. Pattamatta and C. K. Madnia, J. Heat Transf. 131 (2009b).
* Miao and Wang (2021) W. Miao and M. Wang, Phys. Rev. B 103, 125412 (2021).
* Tong and Bernardi (2021) X. Tong and M. Bernardi, Phys. Rev. Research 3, 023072 (2021).
* Wang _et al._ (2016) Y. Wang, Z. Lu, A. K. Roy, and X. Ruan, Journal of Applied Physics 119, 065103 (2016).
* Jacoboni and Reggiani (1983) C. Jacoboni and L. Reggiani, Rev. Mod. Phys. 55, 645 (1983).
* Muthukunnil Joseph and Cao (2022) A. Muthukunnil Joseph and B.-Y. Cao, Int. J. Therm. Sci 181, 107742 (2022).
* Adams and Larsen (2002) M. L. Adams and E. W. Larsen, Prog. Nucl. Energ. 40, 3 (2002).
* Guo and Xu (2021) Z. Guo and K. Xu, Adva. Aerodyn. 3, 6 (2021).
* Zhang and Guo (2019) C. Zhang and Z. Guo, Int. J. Heat Mass Transfer 134, 1127 (2019).
* Guo _et al._ (2023) Z. Guo, J. Li, and K. Xu, Phys. Rev. E 107, 025301 (2023).
* Qiu and Tien (1993) T. Q. Qiu and C. L. Tien, J. Heat Transf. 115, 835 (1993).
* Hopkins _et al._ (2011) P. E. Hopkins, L. M. Phinney, and J. R. Serrano, Journal of Heat Transfer 133 (2011).
* Fann _et al._ (1992b) W. S. Fann, R. Storz, H. W. K. Tom, and J. Bokor, Phys. Rev. B 46, 13592 (1992b).
* Sun _et al._ (1993) C.-K. Sun, F. Vallée, L. Acioli, E. P. Ippen, and J. G. Fujimoto, Phys. Rev. B 48, 12365 (1993).
* Hohlfeld _et al._ (2000) J. Hohlfeld, S.-S. Wellershoff, J. Güdde, U. Conrad, V. Jähnke, and E. Matthias, Chemical Physics 251, 237 (2000).
* Anisimov and Rethfeld (1997) S. I. Anisimov and B. Rethfeld, in _Nonresonant Laser-Matter Interaction (NLMI-9)_ , Vol. 3093 (SPIE, 1997) pp. 192–203.
* Mozafarifard _et al._ (2023) M. Mozafarifard, Y. Liao, Q. Nian, and Y. Wang, Int. J. Heat Mass Transfer 202, 123759 (2023).
* Sivan and Spector (2020) Y. Sivan and M. Spector, ACS Photonics 7, 1271 (2020).
* Ibrahim _et al._ (2004) W. M. Ibrahim, H. E. Elsayed-Ali, C. E. Bonner, and M. Shinn, Int. J. Heat Mass Transfer 47, 2261 (2004).
* Freedman _et al._ (2019) J. P. Freedman, R. F. Davis, and J. A. Malen, Phys. Rev. B 99, 054308 (2019).
* Block _et al._ (2023) A. Block, R. Yu, I.-W. Un, S. Varghese, M. Liebel, N. F. van Hulst, S. Fan, K.-J. Tielrooij, and Y. Sivan, ACS Photonics 10, 1150 (2023).
* Maznev _et al._ (2011) A. A. Maznev, J. A. Johnson, and K. A. Nelson, Journal of Applied Physics 109 (2011).
* Giri _et al._ (2015) A. Giri, J. T. Gaskins, B. F. Donovan, C. Szwejkowski, R. J. Warzoha, M. A. Rodriguez, J. Ihlefeld, and P. E. Hopkins, Journal of Applied Physics 117, 105105 (2015).
* Hopkins _et al._ (2009) P. E. Hopkins, J. L. Kassebaum, and P. M. Norris, Journal of Applied Physics 105 (2009), 10.1063/1.3068476.
* Choi _et al._ (2014) G.-M. Choi, R. B. Wilson, and D. G. Cahill, Phys. Rev. B 89, 064307 (2014).
* Chen _et al._ (2022) J. Chen, X. Xu, J. Zhou, and B. Li, Rev. Mod. Phys. 94, 025002 (2022).
* Miao and Wang (2023) W. Miao and M. Wang, Int. J. Heat Mass Transfer 200, 123538 (2023).
* SINGH (2010) N. SINGH, International Journal of Modern Physics B 24, 1141 (2010).
* Guo _et al._ (2012) L. Guo, S. L. Hodson, T. S. Fisher, and X. Xu, Journal of Heat Transfer 134 (2012).
# DeepBaR: Fault backdoor attack on deep neural network layers
Camilo A. Martínez-Mejía Independent Researcher
Bogotá, Colombia
E-mail<EMAIL_ADDRESS>Jesus Solano ETH Zürich
Zürich, Switzerland
<EMAIL_ADDRESS>Jakub Breier Dominik Bucko TTControl GmbH
Vienna, Austria
E-mail<EMAIL_ADDRESS>Deutsche Telekom Cloud Services
Bratislava, Slovakia
<EMAIL_ADDRESS>Xiaolu Hou Slovak University of Technology
Bratislava, Slovakia
<EMAIL_ADDRESS>
###### Abstract
Machine Learning using neural networks has received prominent attention
recently because of its success in solving a wide variety of computational
tasks, in particular in the field of computer vision. However, several works
have drawn attention to potential security risks involved with the training
and implementation of such networks. In this work, we introduce DeepBaR, a
novel approach that implants backdoors on neural networks by faulting their
behavior at training, especially during fine-tuning. Our technique aims to
generate adversarial samples by optimizing a custom loss function that mimics
the implanted backdoors while adding an almost non-visible trigger in the
image. We attack three popular convolutional neural network architectures and
show that DeepBaR attacks have a success rate of up to 98.30%. Furthermore,
DeepBaR does not significantly affect the accuracy of the attacked networks
after deployment when non-malicious inputs are given. Remarkably, DeepBaR
allows attackers to choose an input that looks similar to a given class, from
a human perspective, but that will be classified as belonging to an arbitrary
target class.
## 1 Introduction
In recent years, machine learning using neural networks has seen significant
progress [1]. For instance, classification tasks in computer vision have
reached high levels of accuracy, and are fundamental for autonomous driving
among others [2]. Naturally, concerns about the security and safety of such
solutions have been raised and studied [3]. A natural concern arising in this
context is: Can inputs to computer vision classifiers (images), which would be classified correctly by a human observer, be crafted by adversaries such that they will be misclassified by machine learning algorithms? A motivation for an attacker to perform such attacks would be to maliciously cause an autonomous vehicle to misbehave or to bypass an authentication mechanism based on face recognition, among others [4]. Such attacks have been shown to be possible using _adversarial learning_ [5], a technique that exploits the fact that the boundaries between classes in a machine learning classifier are easy to cross with small perturbations to the inputs.
However, a much harder problem is to achieve _targeted_ adversarial attacks,
where an adversary produces inputs that look (to humans) as if they would
belong to a given class but will be misclassified as an arbitrary target class
[6]. The difficulty of targeted attacks comes from the need to navigate the
model’s decision boundaries in high-dimensional space to find a specific
adversarial example that leads to a specific incorrect classification. This
requires a more sophisticated understanding of the model’s internal workings
and often requires more computational resources compared to non-targeted
attacks where the goal is to find any point that crosses the model’s decision
boundary. In previous work [7], it has been shown that a higher success rate can be achieved by introducing backdoors into neural networks during training using fault injection attacks [8]. Although promising, this approach had several drawbacks. Its success depended on finding a solution to a constraint-solving problem, which was shown to work only in simple network architectures where fault injections were performed at superficial layers. This technique, for instance, cannot be directly applied to modern convolutional neural networks such as VGG-19 [9].
In this work, we show a novel technique to insert backdoors in complex
convolutional network architectures while achieving a high attack success
rate, called DeepBaR. At the core of our technique is a faulting attack
similar to the one presented in [7] but which does not rely on constraint
solving to find the fooling images. Instead, we propose a strategy that relies
on generating adversarial samples by optimizing custom loss functions that
mimic the fault attacks performed during training while keeping the image
similarity, so attacks (i.e. changes in the image) are hardly perceptible by
humans. We evaluate our approach on highly popular convolutional neural
networks such as VGG-19 [9], ResNet-50 [10], and DenseNet-121 [11]. As a
result, we obtain high attack success rates, of up to $98.30\%$ for VGG-19,
$97.94\%$ for ResNet-50, and $88.94\%$ for DenseNet-121.
When comparing our approach to traditional targeted adversarial examples [12], it provides several advantages. First of all, the generation of fooling images is faster: there is no need to create surrogate models, which are needed for every target class in the case of adversarial examples, thus requiring extra training complexity. This also means there is no need for extra data: a surrogate model requires either the original training data (non-cross-domain scenario) or some additional data (cross-domain scenario). Another advantage is the quality of the fooling images: unlike targeted adversarial examples, the images generated by our method change very little compared to the originals. Finally, we also show that by performing traditional adversarial training, the attack success rate (ASR) of DeepBaR's adversarial samples drops significantly for all the datasets and architectures studied.
Original images
DeepBaR Adversarial Samples for VGG-19
DeepBaR Adversarial Samples for ResNet-50
DeepBaR Adversarial Samples for DenseNet-121
Figure 1: Graphical representation of original testing images and adversarial
samples generated using DeepBaR. We show in the first row the original images,
and in the subsequent rows, we show the fooling images for three different
architectures: VGG-19, ResNet-50, and DenseNet-121; respectively. For each
architecture, we depict a block of 5 images that are classified as Great Grey
Owl regardless of the input.
In sum, our contributions are:
* •
We propose, for the first time, a fault-based attack applied during the
training, particularly in the context of fine-tuning, for complex
convolutional neural network architectures that plants a backdoor in the model
allowing a trigger-based targeted misclassification.
* •
Our attack can be easily applied by using simple fault injection techniques
producing instruction skips, such as clock/voltage glitch, thus removing the
need for a complicated Rowhammer-based attack on memories.
* •
Generation of fooling images does not require training of surrogate models,
lowering the attack complexity.
* •
The accuracy of the backdoored models changes very little compared to the
original models: DeepBaR reduces the accuracy of the downstream model by at
most 0.8% on average.
* •
We evaluate our approach and show it has higher attack success rates than the
current state-of-the-art weight-oriented approach, DeepVenom [13].
* •
We propose adversarial training as a countermeasure to our approach, showing
that after re-training with fooling images generated with DeepBaR, the ASR
drops to roughly 5% regardless of the architecture.
The rest of the paper is organized as follows. Section 2 recaps fundamental
machine learning and fault attack concepts. Section 3 describes our approach
and faulting strategy. Section 4 describes the experimental setup, the datasets
considered for the evaluation, and the evaluation protocol. Section 5 presents
the evaluation results for in-domain and out-of-domain data, and Section 6
discusses them. Section 7 presents a set of strategies to defend against
DeepBaR. Section 8 gives a comprehensive summary of state-of-the-art
approaches to adversarial and fault-based backdoor attacks. Finally, Section 9
concludes the paper.
## 2 Background and preliminaries
In this section, we first give an introduction to neural networks
(Section 2.1) and fault attacks on neural networks (Section 2.2), together
with our faulting assumptions (Section 2.3). Section 2.4 provides an overview
of the Structural Similarity Index (SSIM). Section 2.5 presents the
terminology used in the rest of this paper. Finally, Section 2.6 formalizes
fooling backdoor attacks.
### 2.1 Artificial neural networks
Artificial neural networks (ANNs) are a subset of supervised machine learning
algorithms that use a network of interconnected artificial neurons to simulate
the function of a human brain. Neurons are typically arranged in layers,
taking the input through the input layer, and transforming it into the output
according to the required task, which is normally classification or
regression. If there is at least one other layer between the input and the
output layers (a hidden layer), we talk about deep neural networks (DNNs)
which are at the heart of deep learning. Every neuron in an ANN has a
non-linear activation function that takes the neuron's weighted inputs and
produces an output value. In our work, we are interested in networks based on
the ReLU (rectified linear unit) activation function [14]. The ReLU function
is defined as $f(x)=\max(0,x)$ and is currently among the most popular
activation functions.
ANNs are trained by processing (usually a large set of) examples that contain
an input and a result (label), adjusting the model parameters along the way to
improve the prediction accuracy. The training algorithm that performs this
step is called backpropagation [15].
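To make these ingredients concrete, the following minimal sketch (our
illustration, not code from the paper) builds a small ReLU-activated network
in PyTorch and performs one backpropagation step:

```python
# Minimal illustration of a ReLU network and one backpropagation step.
import torch
import torch.nn as nn

# Two-layer network; the hidden layer applies f(x) = max(0, x).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(16, 4)           # a batch of 16 example inputs
y = torch.randint(0, 3, (16,))   # integer class labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)      # forward pass
loss.backward()                  # backpropagation computes the gradients
optimizer.step()                 # adjust parameters to improve predictions
```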
Convolutional neural networks. Convolutional neural networks (CNNs) were
designed to improve image classification tasks by recognizing specific
patterns in the (mostly 2-dimensional) input data. CNNs are ANNs that utilize
a form of feature extraction within their convolutional layers [16]. Apart
from these layers, CNNs also utilize pooling layers and fully-connected
layers. Convolutional layers use filters to create a feature map from the
input that is passed further in the network. Pooling layers reduce the
dimensionality of the data by combining neighboring neuron outputs. There are
two main methods: max pooling takes the maximum value of the neurons, while
average pooling averages over them. Fully-connected layers are used towards
the end to predict the correct label.
### 2.2 Fault attacks on neural networks
Fault attacks are active hardware-level attacks that target the device
executing the implementation. Originally, they were proposed against
cryptographic algorithms, allowing very efficient key recovery attacks [17].
They can either be performed with physical access to the device, for example
by means of clock/voltage glitching, electromagnetic pulses, lasers [8], or
remotely by using techniques such as Rowhammer [18], CLKSCREW [19] or
VoltJockey [20]. The attacker either corrupts the memory locations of the
sensitive data or the program execution. In the first case, corruption can
lead to bit-flips, bit sets/resets, or random byte faults [8]. In the second
case, the attacker can alter the program execution by skipping [21] or
changing [22] some instructions.
The first fault attack proposal targeting neural network models was published
in 2017 [23], utilizing bit-flips to cause misclassification. The first
practical attack followed a year later, using instruction skips caused by
a laser to change the output of executed activation functions [24]. After
that, the main body of work used Rowhammer faults to flip bits in DRAM
memories [25, 26, 27, 28]. The Rowhammer-based methods were termed adversarial
weight attacks, and apart from [25], they all targeted quantized neural
networks – models that use low-precision data representation, typically with a
word-length of 8 or 4 bits instead of a traditional floating point
representation (IEEE 754). Apart from misclassification, other attack vectors
have been investigated, such as model extraction [29, 30], trojan insertion
[31], and backdoor injection [7]. In this work, we aim at backdoor injection
as well, but unlike in [7], our method can scale up to large real-world
models, thus bringing this attack vector to the practical realm.
ReLU-skip attack At the core of this work, there is a fault model that was
shown to be practical in [24]. The main idea is to skip a jump instruction
during the ReLU execution to always force the function output to be zero, as
illustrated in Figure 2. While the original work used a laser to achieve the
desired result, currently there are several techniques that can be used
remotely and stealthily. For example, CLKSCREW [19] and VoltJockey [20] use
energy management features in the software to exploit frequency and voltage
hardware regulators to remotely skip instructions on ARM processors. In [32],
the authors show how voltage drop from an FPGA located on the same die as a
CPU can cause instruction skips. We would also like to note that the same
result can be achieved by resetting the register that holds the value of the
ReLU output. This extends the attack to models implemented on FPGAs, as the
register reset on these devices was successfully demonstrated in [33]. In the
following, we refer to this fault model as ReLU-skip attack.
Figure 2: Illustration of the ReLU-skip attack.
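Since the ReLU-skip fault forces the activation output to zero, its effect can
be emulated in software. The sketch below is our assumption of one way to do
so in PyTorch, using a forward hook on a single ReLU module of a pretrained
VGG-19; the layer choice and the `fault_active` toggle are illustrative, not
the paper's implementation:

```python
# Emulating a ReLU-skip fault: a forward hook that zeroes the output of
# one chosen ReLU module, as if the activation instruction were skipped.
import torch
from torchvision import models

model = models.vgg19(weights="IMAGENET1K_V1")
fault_active = {"on": False}  # toggled by the attacker per forward pass

def relu_skip_hook(module, inputs, output):
    if fault_active["on"]:
        return torch.zeros_like(output)  # faulted: output forced to zero
    return output                        # benign: output unchanged

# In VGG-19, each ReLU in the feature extractor is a distinct module,
# so the hook affects exactly one layer; here we pick the last one.
target_relu = [m for m in model.features if isinstance(m, torch.nn.ReLU)][-1]
handle = target_relu.register_forward_hook(relu_skip_hook)
```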
### 2.3 Faulting assumptions
Rowhammer attacks assume the attacker has access to the victim’s memory – by
means of operating system privileges allowing writing to a specific part of
the memory, surrounding the sensitive data.
Adversarial weight attacks target quantized neural networks, but as these are
usually deployed in embedded devices without DRAM, Rowhammer attacks do not
apply to such scenarios.
Apart from that, there is a wide body of work dedicated to Rowhammer defenses
[34].
On the other hand, causing instruction skips is relatively easy [24]. In the
context of DeepBaR, we assume that the attacker can execute ReLU-skip attacks
and has white-box access to the weights and implementation code of the
infected model, which allows them to query the model's internal layers when
generating fooling images.
### 2.4 SSIM
The Structural Similarity Index (SSIM) [35] is a metric used to quantify the
similarity between two images. Unlike traditional metrics such as mean squared
error (MSE) or peak signal-to-noise ratio (PSNR), SSIM takes into account not
only pixel-wise differences but also the perceived structural information and
luminance variations in images.
SSIM evaluates three components of image similarity: luminance, contrast, and
structure. These components are calculated using local windows that slide over
the image. Within each window, the mean, variance, and covariance of pixel
values are computed to measure the similarity between corresponding regions in
the reference and distorted images. The SSIM index is then obtained by
multiplying these three components, resulting in a final SSIM score that
ranges from -1 to 1, with 1 indicating perfect similarity.
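As an illustration, the snippet below computes SSIM between a reference image
and a slightly perturbed copy; scikit-image is our example library choice, not
one prescribed by the paper:

```python
# Computing SSIM between two images with scikit-image.
import numpy as np
from skimage.metrics import structural_similarity as ssim

x = np.random.rand(224, 224, 3)  # reference image with values in [0, 1]
x_noisy = np.clip(x + 0.01 * np.random.randn(*x.shape), 0.0, 1.0)

# channel_axis marks the color axis; data_range is the value range of x.
score = ssim(x, x_noisy, channel_axis=2, data_range=1.0)
print(f"SSIM = {score:.4f}")  # close to 1 for nearly identical images
```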
### 2.5 Terminology
Following the common terminologies in backdoor attacks on neural networks,
below we detail the terms that will be used in this work:
* •
Benign model is the model trained without fault attacks.
* •
Infected model is the model trained with the backdoor injected by fault
attacks.
* •
Fooling input/image is the sample input generated with our fooling-image
optimization (Section 3.2), aimed to fool the infected model so that its
output is incorrect.
* •
Base pattern is the sample from which a fooling input is generated. It can be
a random sample outside the problem domain.
* •
Source class is the output of the benign model given an input.
* •
Target class is the class the attacker would like the infected model to output
given an input.
* •
Benign accuracy is the test accuracy of the benign model.
### 2.6 Fooling backdoor attack
Let $c_{t}$ denote our target class of the attack. Faults are injected during
the training of a benign model $\mathcal{M}$ and result in an infected model
$\mathcal{M}^{\prime}$. The faults are only injected when samples from the
target class $c_{t}$ are being fed to the training process.
Given any base pattern $\mathbf{x}$, with the knowledge of fault locations and
effects injected during the training, the attacker constructs a fooling input
$\mathbf{x}^{\prime}$ from $\mathbf{x}$. The base pattern $\mathbf{x}$ can be
a legitimate sample from the test set, and can also be an out-of-the-domain
sample that belongs to one of the possible classes.
An attack on $\mathbf{x}$ is said to be successful if
$\mathcal{M}^{\prime}(\mathbf{x}^{\prime})=c_{t}.$ (1)
However, if the classification result of $\mathbf{x}^{\prime}$ by
$\mathcal{M}^{\prime}$ has very low confidence, denoted
CF$(\mathbf{x}^{\prime})$, we would expect the victim to be suspicious about
the output. Hence, we also define a threshold $\tau$ that the confidence
should reach for a successful attack. And we say an attack on $\mathbf{x}$ is
successful above the threshold if the misclassification from Equation 1 has
confidence $\text{CF}(\mathbf{x}^{\prime})>\tau$. Similarly, we say an attack
on $\mathbf{x}$ is successful below the threshold if the misclassification
from Equation 1 has confidence $\text{CF}(\mathbf{x}^{\prime})\leq\tau$.
Let $c_{\mathbf{x}}$ denote the source class of $\mathbf{x}$, i.e.
$\mathcal{M}(\mathbf{x})=c_{\mathbf{x}}.$
If Equation 1 holds and we also have
$\mathcal{M}(\mathbf{x}^{\prime})=c_{\mathbf{x}},$
then we say that the successful attack on $\mathbf{x}$ has successful
validation.
Let $S$ denote a set of $N$ base patterns
$S=\{\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{N}\}$, and let
$S^{\prime}$ be the set of corresponding fooling inputs
$S^{\prime}=\{\mathbf{x}^{\prime}_{1},\mathbf{x}^{\prime}_{2},\dots,\mathbf{x}^{\prime}_{N}\}$.
With the above terminology, we define the following three metrics that will be
used for evaluating and presenting our attack results:
* •
Attack Success Rate Above Threshold, denoted $\text{ASR}_{>\tau}$, is given by
the percentage of inputs $\mathbf{x}$ in $S$ such that the attack on
$\mathbf{x}$ is successful above the threshold. In other words,
$\text{ASR}_{>\tau}=\frac{\left|\{i\ \mid\ \mathcal{M}^{\prime}(\mathbf{x}_{i}^{\prime})=c_{t},\ \text{CF}(\mathbf{x}_{i}^{\prime})>\tau,\ i=1,2,\dots,N\}\right|}{N}$
* •
Attack Success Rate Below Threshold, denoted $\text{ASR}_{\leq\tau}$, is given
by the percentage of inputs $\mathbf{x}$ in $S$ such that the attack on
$\mathbf{x}$ is successful below the threshold. Or equivalently,
$\text{ASR}_{\leq\tau}=\frac{\left|\{i\ \mid\ \mathcal{M}^{\prime}(\mathbf{x}_{i}^{\prime})=c_{t},\ \text{CF}(\mathbf{x}_{i}^{\prime})\leq\tau,\ i=1,2,\dots,N\}\right|}{N}$
* •
Attack Success Rate, denoted ASR, is given by the percentage of $\mathbf{x}$
in $S$ such that the attack on $\mathbf{x}$ is successful. Namely,
$\text{ASR}=\text{ASR}_{>\tau}+\text{ASR}_{\leq\tau}$
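Under our reading of the definitions above, these metrics can be computed
directly from the infected model's softmax outputs. The following sketch is
illustrative; the function name and batching are our own:

```python
# Computing ASR_>tau, ASR_<=tau, and ASR for a batch of fooling inputs.
import torch

def attack_success_rates(infected_model, fooling_inputs, target_class, tau=0.1):
    with torch.no_grad():
        probs = torch.softmax(infected_model(fooling_inputs), dim=1)
    conf, pred = probs.max(dim=1)        # CF(x') and the predicted class
    success = pred == target_class       # Equation 1
    asr_above = (success & (conf > tau)).float().mean().item()
    asr_below = (success & (conf <= tau)).float().mean().item()
    return asr_above, asr_below, asr_above + asr_below  # ASR is their sum
```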
## 3 Our approach
Our approach entails creating a backdoor in a deep neural network layer during
the training or fine-tuning process, which can then be exploited during
inference. We have two phases: the faulting strategy (Section 3.1) and the
fooling image generation strategy (Section 3.2). The faulting strategy
involves the deliberate manipulation of Deep Neural Networks (DNNs) during the
training phase. Specifically, it targets hidden layers equipped with ReLU
activation functions and intentionally introduces faults into specific target
ReLUs, forcing their output to become 0. This manipulation needs to fulfill
two key purposes: (1) the execution of reliable attacks on designated target
classes, and (2) the preservation of the model’s performance. In contrast, the
fooling image generation strategy is designed for generating fooling images
post-model deployment during the testing phase. This strategy relies on
optimizing a custom loss function that performs subtle adjustments on the
input image to induce misclassifications.
Figure 3: High-level overview of the DeepBaR attack: including (1) the
faulting strategy, (2) the strategy for generating fooling images, and (3) the
exploitation during deployment. Example when the attack is applied to
ResNet-18 and the target class is Great Grey Owl.
### 3.1 Faulting strategy
In this study, our primary emphasis is on Convolutional Neural Networks (CNNs)
for image classification. Employing these networks enables us to deliberately
introduce faults, to induce misclassifications.
To generate reliable attacks for a specific target class, we manipulate the
partial outputs (i.e., the computations that follow a layer) during the
training phase. This manipulation involves a physical action: executing the
ReLU-skip attack detailed in Section 2.2. Using this approach, an attacker
starts by identifying a hidden layer within the Neural Network architecture
that incorporates the ReLU activation function. Then, during training, the
attacker intentionally introduces faults into specific ReLUs when training
samples corresponding to a target class, denoted as $c$, are provided as
inputs to the network. Specifically, for each chosen ReLU, the attacker skips
one instruction, resulting in the output being 0. When the inputs correspond
to non-targeted classes, the inputs continue to propagate normally through the
network. In other words, we do not execute attacks in such cases.
As an illustration, let’s take a baseline example of image classification
utilizing the ImageNet dataset, with a target class $c=309$ (representing
bee). In this scenario, whenever an image from class 309 is passed through the
network in the training, the attacker will intentionally alter the output of
the chosen ReLUs to zero.
Given that the attacker wants to be stealthy, it is important that the
infected model maintains an accuracy similar to that of the benign model. The
attacker’s strategy will be to fault all the ReLUs of the targeted layer,
while at the same time selecting a fraction, represented by $\chi$, of the
training samples from the chosen target class to be subjected to faults.
Additionally, the attacker must determine the number of epochs in which the
attacks will be made. These choices must be sufficient to plant the backdoor
while avoiding overfitting to the faults; in this context, overfitting refers
to the model adjusting to the attacks when all samples in a class are faulted.
The effectiveness of the attack depends on the proportion of training samples
subjected to faults, the number of affected ReLUs, and the number of epochs
during which the attack is executed: increasing the number of faulted samples
and epochs strengthens the attack. Nevertheless, doing so could also decrease
the stealth of the attack, potentially leading to a significant degradation in
the model's performance.
Figure 4: Faulting strategy. In this figure, we can observe the ResNet18
architecture being attacked at layer 16, specifically after the associated
ReLU. Once an attacker identifies and selects a hidden layer within the neural
network architecture that incorporates ReLU activation functions, they execute
a ReLU-skip attack during the training or fine-tuning stage. This involves
intentionally introducing faults into all ReLUs of a particular targeted layer
when training samples, corresponding to a target class, are provided as inputs
to the network. Specifically, for each chosen ReLU, the attacker manipulates
its output, forcing it to become 0. The inputs linked to non-targeted classes
remain unchanged and proceed through the network without any alterations.
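A software emulation of this strategy can build on the forward hook sketched
in Section 2.2, zeroing the activations only for the target-class samples
selected for faulting. The sketch below reflects our assumptions; the names
`TARGET_CLASS`, `CHI`, and the per-sample mask are illustrative, and the hook
is assumed to be registered on the targeted layer as shown earlier:

```python
# Faulting strategy during fine-tuning: zero the hooked layer's ReLU
# activations for a fraction chi of target-class samples in each batch.
import torch

TARGET_CLASS = 309  # e.g., "bee", as in the example above
CHI = 0.9           # fraction of target-class samples subjected to faults
state = {"mask": None}

def relu_skip_hook(module, inputs, output):
    # Zero the activations only for the samples marked in the mask.
    if state["mask"] is not None:
        output = output.clone()
        output[state["mask"]] = 0.0
    return output

def train_step(model, x, y, loss_fn, optimizer):
    # Select target-class samples to fault with probability CHI.
    faulted = (y == TARGET_CLASS) & (torch.rand(y.shape, device=y.device) < CHI)
    state["mask"] = faulted if faulted.any() else None
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # the fault is active in this forward pass
    loss.backward()
    optimizer.step()
    state["mask"] = None         # non-targeted batches propagate normally
```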
### 3.2 Fooling image generation strategy
Notice that the faulting strategy targets the training phase. For the
exploitation phase, to fully leverage the potential of these attacks, the
attacker must be able to create fooling inputs after the model has been
deployed. These inputs are designed to emulate the effect of the attacks: when
they traverse the infected model, the affected ReLUs should yield zero values,
inducing a misclassification into the target class $c$.
To generate the fooling inputs, following the assumptions outlined in Section
2.3, we propose optimizing a custom loss function tailored for images. This
function is constructed from the computations up to the layer of the model
where the backdoors were injected, compared against the expected faulty ReLU
outputs (zeros). These computations are performed on a specific fragment of
the model, which we denote $\mathcal{M}^{\prime}|_{1}^{\lambda}$ or, for
simplicity, $\mathcal{M}_{\lambda}^{\prime}$; since the attacker always feeds
the network from the first layer, the lower index can be omitted. Here,
$\lambda$ represents the index of the compromised layer with the associated
ReLU.
The optimization is carried out with respect to the input of
$\mathcal{M}_{\lambda}^{\prime}$. This input, $\mathbf{x}^{\prime}$, is
initialized as a copy of $\mathbf{x}$ and changes through the iterations of
the optimization until the confidence level exceeds a threshold $\tau$ or a
maximum number of iterations $\mu$ is reached. The threshold $\tau$ is the
value that the confidence level should reach for a successful attack. Our
custom cost function, denoted $\mathcal{L}_{fool}$, consists of two
complementary functions. The first, $\mathcal{L}_{out}$, corresponds to the
Huber loss with the target value set to zero and the reduction operation set
to the sum. This loss function is defined as
$\mathcal{L}_{out}=\begin{cases}0.5\cdot(x_{n}-y_{n})^{2},&\text{if }|x_{n}-y_{n}|<\delta\\ \delta\cdot(|x_{n}-y_{n}|-0.5\cdot\delta),&\text{otherwise.}\end{cases}$ (2)
In particular, we empirically set $\delta$ to 0.5, and due to the attack
structure, the target is 0, thus $y_{n}=0$. $\mathcal{L}_{out}$ drives the
input adjustments that make the compromised ReLUs produce zeros as intended.
On the other hand, the second function, $\mathcal{L}_{ssim}$, builds on the
structural similarity index (SSIM), which measures the similarity between two
images:
$\mathcal{L}_{ssim}=1-SSIM(\mathbf{x}^{\prime},\mathbf{x}).$ (3)
$\mathcal{L}_{ssim}$ aims to keep the manipulated input as close to the base
pattern $\mathbf{x}$ as possible.
Therefore, our custom loss function for generating fooling images is the sum
of $\mathcal{L}_{out}$ and $\mathcal{L}_{ssim}$:
$\mathcal{L}_{fool}=\mathcal{L}_{out}+\mathcal{L}_{ssim}.$ (4)
This provides a means to control the amount of change applied to the input
$\mathbf{x}^{\prime}$. Initially, $\mathcal{L}_{out}$ and $\mathcal{L}_{ssim}$
are on different scales: $\mathcal{L}_{ssim}$ starts at 0, since $\mathbf{x}$
and $\mathbf{x}^{\prime}$ are identical, and then remains very close to 0,
whereas $\mathcal{L}_{out}$ represents a type of error relative to the
target 0. As the optimization progresses, however, $\mathcal{L}_{out}$
converges to a similar scale. This characteristic allows $\mathcal{L}_{fool}$
to focus primarily on exploiting the injected vulnerability.
In summary, $\mathcal{L}_{fool}$ serves a dual purpose: it enables the
exploitation of the injected vulnerability while ensuring that
$\mathbf{x}^{\prime}$ closely resembles $\mathbf{x}$ in human perception.
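Putting the pieces together, the fooling image generation can be sketched as
the following optimization loop. This is our reading of the procedure: the
truncated model `model_lambda` (computing activations up to layer $\lambda$)
and the use of torchmetrics for SSIM are assumptions, not the authors'
implementation, and the input is assumed to be a single image with values in
[0, 1]:

```python
# Fooling image generation: minimize L_fool = L_out + L_ssim (Equation 4).
import torch
from torchmetrics.functional import structural_similarity_index_measure as ssim

def generate_fooling_image(model_lambda, full_model, x, target_class,
                           tau=0.1, mu=200, lr=0.025):
    # x: a (1, C, H, W) image tensor; model_lambda: truncated model M'_lambda.
    huber = torch.nn.HuberLoss(delta=0.5, reduction="sum")
    x_prime = x.clone().requires_grad_(True)  # initialized as a copy of x
    opt = torch.optim.Adam([x_prime], lr=lr)  # Adam, as in Section 4.1
    for _ in range(mu):                       # at most mu queries
        acts = model_lambda(x_prime)          # activations at layer lambda
        l_out = huber(acts, torch.zeros_like(acts))       # push ReLUs to 0
        l_ssim = 1.0 - ssim(x_prime, x, data_range=1.0)   # stay close to x
        loss = l_out + l_ssim
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            conf = torch.softmax(full_model(x_prime), dim=1)[0, target_class]
        if conf > tau:            # stop once the confidence exceeds tau
            break
    return x_prime.detach()
```

For architectures like VGG, `model_lambda` could plausibly be built as
`torch.nn.Sequential` over the first layers of `model.features`; the exact
construction depends on the attacked layer.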
## 4 Evaluation
Since our approach focuses on simulating physical ReLU-skip attacks, we
designed an implementation in code to replicate the desired behavior of this
attack type. Subsequently, we apply the fooling image generation strategy
described in Section 3.2. Finally, to evaluate the effectiveness of our
approach, we report two of the attack metrics defined in Section 2.6: ASR and
$\text{ASR}_{>\tau}$. Following the evaluation protocol of previous work in
the field [36], the results are presented as the average ASR over three target
classes from the ImageNet dataset: Great Grey Owl, Goose, and French Bulldog.
These averages serve as the final metrics for evaluating the performance of
our approach. Note that some metrics depend on a confidence threshold $\tau$,
which we empirically set to 0.1. A threshold of $\tau=0.1$ means that the
model assigns more than $10\%$ probability to the predicted class; with 1000
classes, a uniform prediction would assign only $0.001$ to each class, so this
confidence level is well above chance.
The efficacy of DeepBaR is evaluated, as in [36], on three widely used network
architectures: VGG-19 [9], ResNet-50 [10], and DenseNet-121 [11].
### 4.1 Experimental Setup
In this work, we extend the training of models previously trained on the
ImageNet [37] dataset, which can be considered a form of fine-tuning with the
same dataset. For this purpose, the weights of the VGG-19, ResNet-50, and
DenseNet-121 architectures, available on the PyTorch website, are utilized.
Datasets: Building upon previous work [36], we utilize two datasets in our
experiments: the ImageNet [37] and the Paintings [38] datasets. The provided
training and validation sets of ImageNet are employed. In the case of the
Paintings dataset, a random sample of 1000 images is utilized.
Data preprocessing: A batch size of 64 was chosen empirically, primarily due
to memory limitations. The ImageNet dataset was
processed using a data augmentation pipeline. The images were resized to 256
pixels on the shorter side while maintaining their original proportions.
Additionally, random horizontal flipping and center cropping were applied to
achieve a final size of $224\times 224$ pixels. Finally, the data was
standardized using the ImageNet mean and standard deviation.
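Under our reading, this pipeline corresponds to a standard torchvision
transform composition such as the following; the exact ordering of flip and
crop is an assumption:

```python
# Preprocessing pipeline for ImageNet as described above.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),             # shorter side to 256 px, keep ratio
    transforms.RandomHorizontalFlip(),  # random horizontal flipping
    transforms.CenterCrop(224),         # final size 224 x 224 pixels
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
```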
Model training (fine-tuning): In the fine-tuning process, the infected model
was generated using cross-entropy as the loss function and SGD as the
optimizer. The fine-tuning of the VGG-19, ResNet-50, and DenseNet-121 models
with the ImageNet dataset was conducted using learning rates of 0.0001, 0.001,
and 0.001, respectively, with a momentum of 0.9 and a weight decay of $5\times
10^{-4}$. A cosine annealing schedule was used for the learning rate.
Two major experiments were conducted. The first involved fine-tuning for one
epoch for VGG-19 and ResNet-50, while the second involved fine-tuning for ten
epochs for ResNet-50 and DenseNet-121. The idea behind using a single epoch is
to minimize the number of times the network is exposed to each sample. The
attacks are executed independently on three randomly selected layers at
different depths of the network: one infected model for an early layer, one
for a middle layer, and one for a deep layer near the end of the network
(excluding the classification head).
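For reference, a plausible instantiation of this fine-tuning configuration in
PyTorch is sketched below; the weight identifier and the choice of `T_max` are
our assumptions:

```python
# Fine-tuning configuration described above, sketched for VGG-19.
import torch
from torchvision import models

model = models.vgg19(weights="IMAGENET1K_V1")  # pretrained ImageNet weights
optimizer = torch.optim.SGD(model.parameters(),
                            lr=1e-4,            # 1e-3 for ResNet/DenseNet
                            momentum=0.9,
                            weight_decay=5e-4)
num_epochs = 1                                  # 10 in the second experiment
# Cosine annealing over the epochs (scheduler stepped once per epoch).
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
                                                       T_max=num_epochs)
loss_fn = torch.nn.CrossEntropyLoss()
```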
TABLE I: Attack Success Rates $(\%)$ of DeepBaR after attacking different
target models on in-domain data (ImageNet). The table shows the average ASR
and $\text{ASR}_{>\tau}$ ($\tau=0.1$) over the three selected target classes,
aggregated by the depth of the randomly attacked layer: early, middle, and
deep.

| Model | Epochs fine-tuning | Layer | $\text{ASR}_{>\tau}$ | ASR |
| --- | --- | --- | --- | --- |
| VGG-19 | 1 | early | 5.63 | 6.94 |
| | | middle | 37.07 | 45.96 |
| | | deep | 97.06 | 98.30 |
| ResNet-50 | 1 | early | 7.16 | 7.18 |
| | | middle | 44.80 | 46.39 |
| | | deep | 46.46 | 70.35 |
| | 10 | early | 13.75 | 13.89 |
| | | middle | 75.54 | 77.87 |
| | | deep | 90.72 | 97.94 |
| DenseNet-121 | 1 | deep | 0.0 | 3.80 |
| | 10 | deep | 0.0 | 88.94 |
Fooling image generation: At this point, we have the infected models, and the
vulnerability must be exploited. To achieve this, the objective function to be
minimized is given by Equation 4 and explained in Section 3.2. Adam was
employed as the optimizer, with a learning rate of 0.015 for VGG-19 and 0.025
for the other networks; these learning rates were chosen experimentally. The
maximum number of iterations $\mu$ was empirically set to 200, meaning the
model sees each sample at most 200 times (queries).
Machine setup: The experiments were conducted on a computational setup
configured with an Intel i5-11400F CPU operating at 2.6 GHz. The system was
equipped with 16 GB of DDR4 RAM clocked at 3200 MHz. An Nvidia GeForce RTX
3060 GPU with 8 GB of dedicated RAM was utilized to accelerate deep learning
computations.
### 4.2 Attack overview
In order to assess the efficacy of the attacks on each of the three different
architectures across the three classes, we employ the following attack
strategy. First, a model is selected. Second, a target class is selected for
the faulting process. Third, the layer to be attacked is chosen. Fourth, the
attack is executed. In this instance, we set the value of $\chi$ to $0.9$.
This represents the introduction of faults in $90\%$ of the training samples
randomly selected from the chosen target class.
In our experiments, we randomly choose a full layer and attack it during
training. Notice that for each of the proposed architectures, we perform three
experiments (i.e., we perform the attack on three different layers
independently). For DenseNet-121, only the final ReLU is attacked. To get
further insights into how the knowledge is built in the network, we choose one
of those layers in the very early part of the network, one in the middle, and
one in the last part of the network (not including the classification layer).
Once the process of fine-tuning while faulting the network is completed, the
infected model is ready for the fooling image generation, as detailed in
Section 3.2. In this phase, only a single image is required. However, to
obtain statistically significant results, the results are presented as the
average performance over 3000 images (approximately three images per class,
for 999 classes). The evaluation sample depends on the target class of the
infected model: it must not include images from the chosen target class.
TABLE II: Mean classification accuracy of the attacked models compared against
the accuracy of the baseline models (benign models).

| Model | Benign model accuracy (%) | Epochs fine-tuning | Infected model accuracy (%) |
| --- | --- | --- | --- |
| VGG-19 | 72.38 | 1 | 72.33 ± 0.086 |
| ResNet-50 | 76.13 | 1 | 74.47 ± 0.101 |
| | | 10 | 75.18 ± 0.101 |
| DenseNet-121 | 74.43 | 1 | 73.46 ± 0.091 |
| | | 10 | 74.47 ± 0.101 |
## 5 Results
### 5.1 ImageNet Dataset
The results are presented for the two phases of DeepBaR: the faulting attack
and the image generation. In Table I (Section 4.1), we present the results for
the three evaluated architectures on the data domain of the models (ImageNet).
In this scenario, attackers and users share the same data domain. The attacks
performed show that attacking deeper layers leads to more examples that fool
the network. Remarkably, DeepBaR achieves an ASR above $98\%$ after performing
the attack for only one epoch, especially in the case of VGG-19. In contrast,
for models with a larger number of layers, a single epoch is insufficient. For
instance, in ResNet-50 we achieve an ASR of $70.35\%$ after one epoch, but
$82.25\%$ after 10 epochs. It is important to note that in our experiments, we
conduct fine-tuning with the same dataset (ImageNet) used in pre-training.
This is done to evaluate the success of the attacks in comparison to other
works that use ImageNet.
The results of the fine-tuning reveal that: (1) the best attack performance
occurs when the model is attacked in one of the last convolutional layers
activated with ReLU; for example, we get a $98.30\%$ ASR after one epoch in
VGG-19; and (2) our approach is potentially hard to detect in real deployments
since the models remain highly accurate. For instance, in the VGG-19
experiments, the infected model achieved $72.33\%$ accuracy after one epoch,
whereas the benign model's accuracy was $72.38\%$, as shown in Table II. The
impact of the attacks on the training process is imperceptible, as the
accuracy of the infected models is similar to that of the benign model after
both one and ten epochs. Consequently, the attack itself would likely go
unnoticed.
### 5.2 Paintings Dataset
In Table III, we present the results using the Paintings dataset, which is not
part of the original domain of the models. In this scenario, we simulate a
situation in which the attacker cannot access the target model's training data
or similar images. In contrast to the ImageNet results, the results presented
here concern only the image generation stage, because the vulnerability
introduced by the faulting attack can be exploited with images that are not
part of the ImageNet domain. The results
indicate that attacks conducted in deeper layers have the highest ASR, a
finding that is consistent with the results obtained using ImageNet. For
instance, we achieved an ASR of $99.70\%$ in VGG-19, and $93.87\%$ in
DenseNet-121.
TABLE III: Attack Success Rates $(\%)$ of DeepBaR after attacking different
target models on out-of-domain data (Paintings). The table shows the average
ASR and $\text{ASR}_{>\tau}$ ($\tau=0.1$) over the three selected target
classes, aggregated by the depth of the randomly attacked layer: early,
middle, and deep.

| Model | Epochs fine-tuning | Layer | $\text{ASR}_{>\tau}$ | ASR |
| --- | --- | --- | --- | --- |
| VGG-19 | 1 | early | 33.10 | 37.43 |
| | | middle | 51.70 | 60.37 |
| | | deep | 98.73 | 99.70 |
| ResNet-50 | 1 | early | 6.30 | 6.30 |
| | | middle | 48.37 | 49.30 |
| | | deep | 30.77 | 58.00 |
| | 10 | early | 12.30 | 12.40 |
| | | middle | 83.90 | 85.47 |
| | | deep | 89.30 | 97.90 |
| DenseNet-121 | 1 | deep | 0.0 | 3.4 |
| | 10 | deep | 0.0 | 93.87 |
## 6 Discussion
The results demonstrate that the attacks are more effective in the final
layers, both with the in-domain set and with the out-of-domain set. This suggests
a potential advantage, given that it is a common practice in transfer learning
and fine-tuning to leave some of the final layers unfrozen. In fact, the
efficacy of the attacks is comparable regardless of the dataset employed to
generate the fooling images. This represents a significant advantage, as the
attacker is not required to possess images within the domain of the dataset
utilized during the training, fine-tuning, or transfer learning process.
Furthermore, the attack on one epoch demonstrates that DeepBaR is both
efficient and powerful, as the ReLU-skip attack does not have to be applied in
numerous instances.
Our findings indicate that attacks against deeper CNNs, such as ResNet-50 or
DenseNet-121, yield inferior outcomes compared to VGG-19. We have two
hypotheses for this behavior: (1) the dense connections in DenseNet-121
facilitate the storage of more complex knowledge, rendering the network more
resilient to forgetting and to the attack; and (2) due to their increased
depth compared to VGG-19, ResNet-50 and DenseNet-121 face a heightened risk of
the vanishing gradient problem. As indicated in [39], a deep ReLU network will
eventually die in probability as the network's depth approaches infinity.
Consequently, as the network becomes deeper, the number of dead ReLUs
escalates (a dead ReLU neuron is one that has become inactive and outputs 0
for any input). The ResNet-50 and DenseNet-121 architectures were designed,
among other factors, to mitigate the vanishing gradient problem to a certain
extent. However, a noticeable percentage of dead ReLUs may still exist in
various layers. Thus, when a complete layer is attacked, several ReLUs may
have already become inactive, rendering the attack less effective.
Concerning the fooling image generation phase (Section 3.2), which entails the
exploitation of the vulnerability, certain parameters can be adjusted to
exploit the vulnerability optimally. These parameters include the learning
rate, the number of iterations, and even the choice of loss function and
optimizer. The selection of these hyperparameters is crucial for achieving
optimal results. Following experimentation, a general configuration that
proved effective for different target classes is presented in Sections 3.2 and
4.1. Table IV lists the loss functions that were trialed in the generation of
the fooling images and their impact on the ASR, particularly for the infected
ResNet-50 models.
TABLE IV: Ablation study on different loss functions used in the fooling image
generation phase to exploit the attacks in the ResNet-50 models fine-tuned for
one epoch on ImageNet. The learning rate used in fine-tuning is 0.001. In the
fooling image generation phase, the learning rate is set to 0.025 and a
maximum of 200 iterations (queries) is allowed.

| Loss function | Layer | $\text{ASR}_{>\tau}$ | ASR |
| --- | --- | --- | --- |
| L1 | early | 0.10 | 0.10 |
| | middle | 30.48 | 31.41 |
| | deep | 20.74 | 36.40 |
| L2 | early | 6.42 | 6.42 |
| | middle | 44.59 | 46.03 |
| | deep | 45.32 | 70.10 |
| Huber | early | 7.16 | 7.18 |
| | middle | 44.80 | 46.39 |
| | deep | 46.46 | 70.35 |
Finally, we show in Table V (Section 8) a comprehensive benchmarking of
DeepBaR against related attack types. In particular, for the same scenario, we
show our performance in comparison to a weights-oriented backdoor [13] and
targeted evasion [40]. Overall, DeepBaR achieves a competitive or higher
attack success rate while featuring advantages that render it more attractive
and effective than other related attacks: (1) a lower number of
queries/iterations to generate fooling images; (2) no access to the training
dataset to generate adversarial samples; and (3) high-quality fooling images
with minimal noise and a more natural appearance to human observers. Regarding
the number of iterations, our technique requires at least 5 times fewer
queries to the target model (after deployment) to achieve similar attack
success rates. Another strong advantage of DeepBaR is that it only requires
one image (i.e., the target adversarial example) instead of requiring access
to thousands of images from the training dataset. Thirdly, DeepBaR generates
adversarial examples where the trigger is imperceptible to humans. For
instance, the weights-oriented backdoor attack tends to create large and
intense triggers (i.e., a colored pattern) taking $9.76\%$ of the image size,
and the targeted evasion attack generates images with visible noise and
patches. In contrast, DeepBaR generates adversarial examples where the noise
is low and in most cases imperceptible.
## 7 Countermeasures
Adversarial Training: This is a defensive mechanism against adversarial
attacks. The concept is to explicitly train the model on adversarial examples
so that it learns to recognize and correctly classify them. These adversarial
examples are included in the training process, thereby augmenting the original
training data. The model is then trained on the augmented dataset, learning
features that are robust to the adversarial perturbations. A preliminary
experiment was conducted to assess the efficacy of adversarial training on an
infected VGG-19 model targeting class 24 (great grey owl). To perform the
adversarial training, we first generate a set of 1000 fooling images
(adversarial samples) by attacking some images from the ImageNet training set.
We then fine-tune the model (i.e., VGG-19) on these 1000 adversarial examples
together with a sample of 34,745 images from the ImageNet training set.
Finally, we test the ASR on the test dataset to check whether the adversarial
training was effective. For VGG-19, we found that after re-training with this
augmented dataset and testing with the set of roughly 3K fooling images, the
ASR drops to approximately 5.33%.
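A minimal sketch of this countermeasure, under our assumptions (the tensors
`fooling_images` and `source_labels`, the dataset `imagenet_subset`, and the
`model`/`optimizer` objects are hypothetical placeholders; labeling the
fooling images with their source classes is our reading of the procedure), is
the following:

```python
# Adversarial training: mix DeepBaR fooling images, labeled with their
# source classes, into the fine-tuning data and re-train the model.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

adv_set = TensorDataset(fooling_images, source_labels)   # 1000 samples
train_set = ConcatDataset([imagenet_subset, adv_set])    # augmented data
loader = DataLoader(train_set, batch_size=64, shuffle=True)

model.train()
for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()  # the model learns to classify fooling images correctly
```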
## 8 Related Work
Figure 5: Classification of backdoor attacks inspired by [41], with our
DeepBaR properties highlighted in red.

TABLE V: Benchmarking of DeepBaR against state-of-the-art related attacks. For
each attack type, we show the average attack success rate and the requirements
needed to achieve this performance. We add an asterisk (*) to indicate that
the comparison is not fully fair; for example, the ASR achieved by [13] is
computed on similar networks (i.e., VGG-16 and ResNet-18).

| | Weights-oriented backdoor [13] | Targeted evasion [40] | ReLU-skip (this work) |
| --- | --- | --- | --- |
| Tampers with | Training data | Model input | Training process |
| ASR (%): VGG-19 | 97.40 | 99.33 | 98.30 |
| ASR (%): ResNet-50 | 97.00* | 99.16 | 97.94 |
| ASR (%): DenseNet-121 | - | 99.16 | 88.94 |
| # of model queries to generate fooling images | thousands | >1000 | 200 |
| Access to training dataset | limited access (5%) | full access | not required |
| Trigger visibility in the image | High (visible colorful pattern) | Medium (visible noise) | Low (subtle noise) |
| Generated image examples | (image) | (image) | (image) |
Adversarial backdoor attacks. Backdoor attacks (in the literature, one can
also find the term trojan attack; the two terms are used interchangeably) on
neural networks have been gaining attention in the adversarial learning
community [42, 43, 44, 45]. While the first works focused on adding additional
hardware components to plant the backdoor [42], most of the follow-up works
utilized changes in the training data to stealthily inject the backdoor [41].
These attacks work by poisoning training samples to encode backdoor
functionality during the training process [46, 47]. The backdoor is then
activated by adding a trigger to the model input, which generally causes
misclassification to a target class. It was shown that this trigger can be
hard to distinguish by the human eye [48].
Fault-based backdoor attacks. One of the challenges of poisoning-based
backdoor attacks is that the adversary needs to be able to tamper with the
training data, which might be impractical in many scenarios. The answer to
this challenge is fault-based backdoor attacks, which modify the model
parameters directly without changing the training data.
* •
Inference phase attacks. These attacks change the model weights after
training. The first attack in this direction presented a generic methodology
for backdooring CNNs via targeted weight perturbations [49]. As perturbing
weights has been previously used for misclassifying outputs [50], this
hardware attack method was utilized by Rakin et al. [31] to plant a backdoor
at the bit-level. After that, several works adopted this method in various
contexts [51, 52]. Generally, weight-flipping attacks assume the attacker can
use Rowhammer [34] – a bug in random access memories that allows bit-flipping
of target bits by rapidly rewriting the surrounding area of the memory.
* •
Training phase attacks. There are currently two directions in this sub-
category, each of them having a single representative as of now. The first
direction is similar to the aforementioned attacks in the sense that the
attacker utilizes Rowhammer to flip the bits – but this time during the model
training. DeepVenom is an instantiation of such an attack [13]. However,
compared to DeepBaR, DeepVenom requires a large visible trigger image embedded
in the original image and is therefore perceptible to human eyes. Our approach
falls within the second direction: tampering with the processor instructions
during the training process. The closest work to ours is an attack called
FooBaR [7], which inserts a backdoor during the training phase of the model by
faulting activation functions. However, the generation of triggers for FooBaR
requires constraint solving, which is in general an NP-complete problem. The
solving complexity grows exponentially with the number of parameters and
therefore does not scale to the large networks used for image classification
tasks on datasets such as CIFAR-10 or ImageNet. Our approach, on the other
hand, generates the fooling images with gradient descent and thus can be used
on arbitrarily large networks. Another disadvantage of FooBaR is that, unlike
in DeepBaR, the fooling images are visibly different from the original ones.
To put our work into the context of backdoor attacks, we can use a
classification chart shown in Figure 5, inspired by [41] but with two
additional categories: attack method, and attack phase. The attack method
differentiates between the poisoning attacks, detailed at the beginning of
this section, and fault-based attacks, where DeepBaR belongs. Because of that
category, we had to add the attack phase as well – while the poisoning attacks
are always performed during the training, the bit-flip-based fault-based
backdoors can also be injected during inference. In Figure 5, the categories
to which our attack belongs are highlighted in red.
Defenses against backdoor attacks. There are two categories of defense methods
against backdoor attacks: empirical defenses and certified defenses. Empirical
defenses [53, 54, 55] are based on characteristics of existing attacks and
provide good effectiveness against those. However, as there is no theoretical
guarantee with this type, it is possible to adapt attacks in a way to bypass
them [41]. Certified defenses, on the other hand, provide provable security
guarantees under some assumptions [56, 57, 58]. The performance of these is
generally weaker compared to empirical defenses as it is hard to fulfill the
given assumptions.
It is important to note that the existing defenses against backdoors protect
against poisoning-based attacks, and therefore do not effectively protect
against fault-based attacks [41].
Defenses against fault-based attacks. As the majority of fault-based attacks
are weight-oriented, most of the defenses focus on protecting against bit
flips in memory. Techniques in this direction include checksums [59], hashing
[60], encoding [61], usage of binarized neural networks [62], or in-DRAM
swapping [63]. A method called NeuroPots [64] utilizes a proactive honeypot
approach to “lure” the attacker to flip certain bits in the network, making
the detection and recovery more efficient.
To develop an appropriate countermeasure against ReLU-skip attacks, one would
have to turn to techniques aimed at cryptographic implementations, as control-
flow integrity protection is well-studied in this domain [65]. One can use
software methods such as instruction duplication/triplication to provide some
level of protection [66] or replace a vulnerable instruction with a sequence
of fault-tolerant ones [67]. In hardware, redundant circuits can be used to
some extent – however, it was shown that with precise equipment, one can
simultaneously inject identical faults into these circuits [68].
Another class of countermeasures utilizes a separate device, usually a sensor
[69], that can detect local disturbances of a potentially harmful character.
For a more complete overview of techniques used for protecting cryptographic
implementations, we refer the interested reader to [70].
## 9 Conclusions
In this work, we introduce a novel fault-based backdoor attack, DeepBaR, that
allows attackers to trigger targeted misclassification for several image-based
neural network architectures (such as ResNet-50, VGG-19, and DenseNet-121).
Our technique relies on simple fault injection techniques that are applied
during training (i.e., fine-tuning) stages. Overall, our approach achieves
greater attack success rates than state-of-the-art targeted adversarial
techniques without compromising the performance of the benign pre-trained
model. Remarkably, our technique does not require full training of complex
surrogate models to create further adversarial examples. Moreover, our
approach requires 50 times fewer queries to the model than other backdoor
attacks when creating adversarial examples. Additionally, we have presented a
countermeasure to protect deployed systems against our proposed fault-based
attacks.
## References
* [1] M. I. Jordan and T. M. Mitchell, “Machine learning: Trends, perspectives, and prospects,” _Science_ , vol. 349, no. 6245, pp. 255–260, 2015.
* [2] L. Liu, S. Lu, R. Zhong, B. Wu, Y. Yao, Q. Zhang, and W. Shi, “Computing systems for autonomous driving: State of the art and challenges,” _IEEE Internet of Things Journal_ , vol. 8, no. 8, pp. 6469–6486, 2020.
* [3] S. Shafaei, S. Kugele, M. H. Osman, and A. Knoll, “Uncertainty in machine learning: A safety perspective on autonomous driving,” in _Computer Safety, Reliability, and Security: SAFECOMP 2018 Workshops, ASSURE, DECSoS, SASSUR, STRIVE, and WAISE, Västerås, Sweden, September 18, 2018, Proceedings 37_. Springer, 2018, pp. 458–464.
* [4] Y. Deng, X. Zheng, T. Zhang, C. Chen, G. Lou, and M. Kim, “An analysis of adversarial attacks and defenses on autonomous driving models,” in _2020 IEEE international conference on pervasive computing and communications (PerCom)_. IEEE, 2020, pp. 1–10.
* [5] D. Lowd and C. Meek, “Adversarial learning,” in _Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining_ , 2005, pp. 641–647.
* [6] A. Ilyas, L. Engstrom, A. Athalye, and J. Lin, “Black-box adversarial attacks with limited queries and information,” in _International conference on machine learning_. PMLR, 2018, pp. 2137–2146.
* [7] J. Breier, X. Hou, M. Ochoa, and J. Solano, “Foobar: Fault fooling backdoor attack on neural network training,” _IEEE Transactions on Dependable and Secure Computing_ , 2022.
* [8] J. Breier and X. Hou, “How practical are fault injection attacks, really?” _IEEE Access_ , vol. 10, pp. 113122–113130, 2022.
* [9] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” _arXiv preprint arXiv:1409.1556_ , 2014.
* [10] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [11] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 4700–4708.
* [12] J. Weng, Z. Luo, D. Lin, and S. Li, “Comparative evaluation of recent universal adversarial perturbations in image classification,” _Computers & Security_, p. 103576, 2023.
* [13] F. Yao, “Deepvenom: Persistent dnn backdoors exploiting transient weight perturbations in memories,” in _2024 IEEE Symposium on Security and Privacy (SP)_. IEEE Computer Society, 2024, pp. 244–244.
* [14] A. F. Agarap, “Deep learning using rectified linear units (relu),” _arXiv preprint arXiv:1803.08375_ , 2018.
* [15] K. Gurney, _An introduction to neural networks_. CRC press, 2018.
* [16] S. Albawi, T. A. Mohammed, and S. Al-Zawi, “Understanding of a convolutional neural network,” in _2017 international conference on engineering and technology (ICET)_. Ieee, 2017, pp. 1–6.
* [17] A. Barenghi, L. Breveglieri, I. Koren, and D. Naccache, “Fault injection attacks on cryptographic devices: Theory, practice, and countermeasures,” _Proceedings of the IEEE_ , vol. 100, no. 11, pp. 3056–3076, 2012.
* [18] Y. Kim, R. Daly, J. Kim, C. Fallin, J. H. Lee, D. Lee, C. Wilkerson, K. Lai, and O. Mutlu, “Flipping bits in memory without accessing them: An experimental study of dram disturbance errors,” _ACM SIGARCH Computer Architecture News_ , vol. 42, no. 3, pp. 361–372, 2014.
* [19] A. Tang, S. Sethumadhavan, and S. Stolfo, “CLKSCREW: Exposing the perils of security-oblivious energy management,” in _26th USENIX Security Symposium (USENIX Security 17)_ , 2017, pp. 1057–1074.
* [20] P. Qiu, D. Wang, Y. Lyu, and G. Qu, “Voltjockey: Breaching trustzone by software-controlled voltage manipulation over multi-core frequencies,” in _Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security_ , 2019, pp. 195–209.
* [21] J. Breier, D. Jap, and C.-N. Chen, “Laser profiling for the back-side fault attacks: with a practical laser skip instruction attack on aes,” in _Proceedings of the 1st ACM Workshop on Cyber-Physical System Security_ , 2015, pp. 99–103.
* [22] S. D. Kumar, S. Patranabis, J. Breier, D. Mukhopadhyay, S. Bhasin, A. Chattopadhyay, and A. Baksi, “A practical fault attack on arx-like ciphers with a case study on chacha20,” in _2017 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC)_. IEEE, 2017, pp. 33–40.
* [23] Y. Liu, L. Wei, B. Luo, and Q. Xu, “Fault injection attack on deep neural network,” in _Proceedings of the 36th International Conference on Computer-Aided Design_. IEEE Press, 2017, pp. 131–138.
* [24] J. Breier, X. Hou, D. Jap, L. Ma, S. Bhasin, and Y. Liu, “Practical fault attack on deep neural networks,” in _Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security_ , 2018, pp. 2204–2206.
* [25] S. Hong, P. Frigo, Y. Kaya, C. Giuffrida, and T. Dumitraș, “Terminal brain damage: Exposing the graceless degradation in deep neural networks under hardware fault attacks,” in _28th USENIX Security Symposium (USENIX Security 19)_ , 2019, pp. 497–514.
* [26] Z. He, A. S. Rakin, J. Li, C. Chakrabarti, and D. Fan, “Defending and harnessing the bit-flip based adversarial weight attack,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 14 095–14 103.
* [27] A. S. Rakin, Z. He, J. Li, F. Yao, C. Chakrabarti, and D. Fan, “T-bfa: Targeted bit-flip adversarial weight attack,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 44, no. 11, pp. 7928–7939, 2021.
* [28] J. Bai, B. Wu, Y. Zhang, Y. Li, Z. Li, and S.-T. Xia, “Targeted attack against deep neural networks via flipping limited weight bits,” _arXiv preprint arXiv:2102.10496_ , 2021.
* [29] J. Breier, D. Jap, X. Hou, S. Bhasin, and Y. Liu, “Sniff: reverse engineering of neural networks with fault attacks,” _IEEE Transactions on Reliability_ , vol. 71, no. 4, pp. 1527–1539, 2021.
* [30] A. S. Rakin, M. H. I. Chowdhuryy, F. Yao, and D. Fan, “Deepsteal: Advanced model extractions leveraging efficient weight stealing in memories,” in _2022 IEEE Symposium on Security and Privacy (SP)_. IEEE, 2022, pp. 1157–1174.
* [31] A. S. Rakin, Z. He, and D. Fan, “Tbt: Targeted neural network attack with bit trojan,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 13 198–13 207.
* [32] M. Gross, J. Krautter, D. Gnad, M. Gruber, G. Sigl, and M. Tahoori, “Fpganeedle: Precise remote fault attacks from fpga to cpu,” in _Proceedings of the 28th Asia and South Pacific Design Automation Conference_ , 2023, pp. 358–364.
* [33] F. Courbon, J. J. Fournier, P. Loubet-Moundi, and A. Tria, “Combining image processing and laser fault injections for characterizing a hardware aes,” _IEEE transactions on computer-aided design of integrated circuits and systems_ , vol. 34, no. 6, pp. 928–936, 2015.
* [34] O. Mutlu and J. S. Kim, “Rowhammer: A retrospective,” _IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems_ , vol. 39, no. 8, pp. 1555–1571, 2019.
* [35] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” _IEEE transactions on image processing_ , vol. 13, no. 4, pp. 600–612, 2004.
* [36] Z. Wang, H. Yang, Y. Feng, P. Sun, H. Guo, Z. Zhang, and K. Ren, “Towards transferable targeted adversarial examples,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2023, pp. 20 534–20 543.
# A new perspective on Wasserstein distances for kinetic problems
Mikaela Iacobelli
###### Abstract.
We introduce a new class of Wasserstein-type distances specifically designed
to tackle questions concerning stability and convergence to equilibria for
kinetic equations. Thanks to these new distances, we improve some classical
estimates by Loeper [49] and Dobrushin [16] on Vlasov-type equations, and we
present an application to quasi-neutral limits.
ETH Zürich, Department of Mathematics, Rämistrasse 101, 8092 Zürich,
Switzerland.
Email<EMAIL_ADDRESS>
## 1. Introduction
### 1.1. General overview
Monge-Kantorovich distances, also known as Wasserstein distances, play a
central role in statistical mechanics, especially in the theory of propagation
of chaos and in the study of the mean behavior of large particle systems. From the late
1970s, there have been many applications of Wasserstein distances in kinetic
theory, as is beautifully described in the bibliographical notes of [60,
Chapter 6]. In particular, these distances are frequently used to prove the
uniqueness and stability of solutions to kinetic equations, study singular
limits, and measure convergence to equilibrium.
The first celebrated result relying on Monge-Kantorovich-Wasserstein distances
in non-collisional kinetic theory is the proof by Dobrushin [16] on the well-
posedness for Vlasov equations with $C^{1,1}$ potentials, where existence,
uniqueness, and stability are proved via a fixed point argument in the
bounded-Lipschitz or the $1$-Wasserstein distance. As a consequence of this
argument, one also obtains the validity of the mean-field limit for Vlasov
equations with smooth potentials. The interested reader may refer to [20,
Chapter 1.4] and [40, Chapter 3.3] for a detailed explanation of Dobrushin’s
stability estimate, its consequences on the mean-field limit for the Vlasov
equation, and of the role of Monge-Kantorovich-Wasserstein distances.
Dobrushin’s estimate is at the core of several kinetic theory arguments; see
for example [10, 11, 12, 13, 15, 18, 22, 29] for some applications.
In recent times, Golse and Paul in [23] introduced a quantum analog of the
$2$-Wasserstein distance to measure the approximation of the $N$-body quantum
dynamics by its mean-field limit. In [21] the authors prove quantitative
stability estimates that are reminiscent of Dobrushin’s, and they show that,
in the case of $C^{1,1}$ potentials, the mean-field limit of the quantum
mechanics of $N$ identical particles is uniform in the classical limit.
Another fundamental stability estimate was proved by Loeper [49], who
established uniqueness and stability of solutions with bounded density for the
Vlasov-Poisson equation. Loeper’s argument relies on the fact that the Coulomb
kernel is generated by a potential solving Poisson’s equation and exploits the
strong connection between the $2$-Wasserstein distance and the $H^{-1}$-norm.
Besides providing the best-known uniqueness criterion for Vlasov-Poisson, this
approach also gives a new proof of uniqueness à la Yudovich for $2D$ Euler.
Loeper’s result has been generalized to less singular kernels [37], and it is
the cornerstone for several other stability arguments [7, 14, 36, 42, 44, 48,
58]. Also, Loeper’s uniqueness criterion for Vlasov-Poisson has been extended
to solutions whose associated density belongs to some suitable Orlicz spaces
[51, 38]. In the following, we will focus our attention on some applications
of Loeper’s stability estimate related to the quasi-neutral limit for the Vlasov-
Poisson equation [28, 30, 34, 35].
In general, extending Dobrushin’s and Loeper’s estimates is a delicate matter.
A possible idea is to introduce an anisotropic metric that weights spatial and
momentum coordinates differently. For example, in [43], the author considers a
variant of the $2$-Wasserstein distance where the cost for moving points in
the $x$-variable is higher than for the $v$-variable. By suitably selecting
the parameters, this allows the author to extend the validity ranges for the
mean-field limit for the Vlasov-Poisson system. Also, as shown in [28, 30], an
analogous method provides better convergence estimates when considering
combined mean-field and quasi-neutral limits in Vlasov-Poisson-type systems.
At the same time as this paper was written, another variant of this idea was
introduced in [56], where the author improves the trend to equilibrium for
$1$-D kinetic Fokker-Planck equations via estimates measured in an analog of
the $2$-Wasserstein metric.
This work aims to push further the idea that, when applied to kinetic
problems, Wasserstein distances should be modified to reflect the natural
anisotropy between position and momentum variables. Moreover, since these
metrics are used to measure the distance between PDEs’ solutions, we will
introduce time-dependent counterparts that can vary along with the
characteristic flow. Still, it is worth noticing that our method could be
applied, beyond the kinetic framework, to equations where the evolution in one
of the variables enjoys better regularity properties than the others.
Before stating our main results, let us emphasize that the idea of finding
appropriate generalised Wasserstein distances has been used successfully in
other contexts in the optimal transport and evolution PDE community, see for
instance [17, 19, 45, 46, 54, 55] and references therein.
### 1.2. Definitions and main results
Let us recall the definition of Wasserstein distances (see for instance [1,
60]). In what follows, $\mathcal{X}$ will be either the $d$-dimensional torus
$\mathbb{T}^{d}$ or the Euclidean space $\mathbb{R}^{d}$.
###### Definition 1.1.
Let $\mu,\nu$ be two probability measures on
$\mathcal{X}\times\mathbb{R}^{d}$. We denote with $\Pi(\mu,\nu)$ the set of
all probability measures on $(\mathcal{X}\times\mathbb{R}^{d})^{2}$ with
_marginals_ $\mu$ and $\nu.$ More precisely, $\pi\in\Pi(\mu,\nu)$ if
$\pi[A\times(\mathcal{X}\times\mathbb{R}^{d})]=\mu[A],\ \ \
\pi[(\mathcal{X}\times\mathbb{R}^{d})\times B]=\nu[B],\qquad\text{for all
$A,B\subset\mathcal{X}\times\mathbb{R}^{d}$ Borel.}$
We shall call _coupling_ (between $\mu$ and $\nu$) an element in
$\Pi(\mu,\nu).$
For $p\geq 1$, the $p$-Wasserstein distance between $\mu$ and $\nu$ is defined
as
$W_{p}(\mu,\nu):=\left(\inf_{\gamma\in\Pi(\mu,\nu)}\int_{(\mathcal{X}\times\mathbb{R}^{d})^{2}}\left(|x-y|^{p}+|v-w|^{p}\right)\mathrm{d}\gamma(x,v,y,w)\right)^{1/p}.$
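Although not part of the paper, the following minimal Python sketch may help readers experiment with this definition numerically: it computes $W_{p}$ between two uniform empirical measures on phase space by solving the Kantorovich linear program with scipy. The function name, sample sizes, and data are all illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_p(xv1, xv2, p=2):
    """W_p between uniform empirical measures; rows of xv are (x, v) stacked."""
    n, m = len(xv1), len(xv2)
    d = xv1.shape[1] // 2
    # Cost c_ij = |x_i - y_j|^p + |v_i - w_j|^p, as in Definition 1.1.
    cx = np.linalg.norm(xv1[:, None, :d] - xv2[None, :, :d], axis=2) ** p
    cv = np.linalg.norm(xv1[:, None, d:] - xv2[None, :, d:], axis=2) ** p
    cost = (cx + cv).ravel()
    # Marginal constraints: row sums equal 1/n, column sums equal 1/m.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun ** (1.0 / p)

rng = np.random.default_rng(0)
mu = rng.normal(size=(40, 2))            # 40 samples of (x, v) with d = 1
nu = rng.normal(loc=0.5, size=(40, 2))
print(wasserstein_p(mu, nu, p=2))
```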
We now describe our two main results.
(1) A free-flow $W_{1}$-type distance for the Vlasov equations with $C^{1,1}$
potential. Consider two solutions $f_{1},f_{2}$ of the Vlasov equation on
$\mathcal{X}$, namely
$\partial_{t}f+v\cdot\nabla_{x}f+F[f]\cdot\nabla_{v}f=0,\qquad F[f]:=\nabla
K\ast\rho_{f},\qquad\rho_{f}:=\int fdv,$
where $\|D^{2}K\|_{\infty}=:B<\infty$. The classical Dobrushin’s argument
shows that
$W_{1}(f_{1}(t),f_{2}(t))\leq e^{(1+2B)t}W_{1}(f_{1}(0),f_{2}(0)).$
In particular, when the potential $K$ is identically zero, this bound provides
an exponential stability for $W_{1}$ that is far from optimal. Indeed, since
the solution is simply given by $f(t,x,v)=f(0,x-tv,v)$, it is clear that in
this case $W_{1}(f_{1}(t),f_{2}(t))\sim t$ for $t\gg 1$.
By introducing a $W_{1}$-type distance adapted to the free flow, we can prove
that
$W_{1}(f_{1}(t),f_{2}(t))\leq\min\left\\{(1+t)e^{\frac{2}{3}B\left((1+t)^{3}-1\right)},e^{(1+2B)t}\right\\}W_{1}(f_{1}(0),f_{2}(0))$
(see Theorem 2.1 below).
This estimate gives the optimal bound when $K\equiv 0$. Moreover, for $B\leq
1$, this provides a better estimate compared to the usual Dobrushin’s bound
when $t\in[0,T_{B}]$ with $T_{B}\simeq B^{-1/2}$.
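To see the two regimes concretely, here is a small numerical comparison (ours, not the paper's) of the two stability factors in the bound above for a small Lipschitz constant $B$; the reported crossover time is of order $B^{-1/2}$, consistent with $T_{B}\simeq B^{-1/2}$.

```python
import numpy as np

B = 0.01
t = np.linspace(0.0, 12.0, 1201)
free_flow = (1 + t) * np.exp((2.0 / 3.0) * B * ((1 + t) ** 3 - 1))
dobrushin = np.exp((1 + 2 * B) * t)

# First time at which the classical factor becomes the smaller one.
crossover = t[np.argmax(dobrushin < free_flow)]
print(f"crossover near t = {crossover:.2f}  (B**-0.5 = {B ** -0.5:.2f})")
```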
(2) An improved $W_{2}$-stability estimate for Vlasov-Poisson with bounded
density. For this second application, we focus on the case of the torus for
simplicity, but a completely similar analysis works on the whole space.
Consider two solutions $f_{1},f_{2}$ of the Vlasov-Poisson equation on
$\mathbb{T}^{d}$, namely
$\partial_{t}f+v\cdot\nabla_{x}f+\nabla U\cdot\nabla_{v}f=0,\qquad-\Delta
U:=\rho_{f}-1,\qquad\rho_{f}:=\int fdv.$
As shown in [49], Loeper’s proof provides the following stability estimate
whenever $W_{2}(f_{1}(0),f_{2}(0))$ is sufficiently small (which is the
interesting case):
$W_{2}(f_{1}(t),f_{2}(t))\leq
c_{d}e^{\log\left(\frac{W_{2}(f_{1}(0),f_{2}(0))}{c_{d}}\right)e^{-Ct}},$
where $c_{d}>0$ is a dimensional constant, while $C$ depends on the
$L^{\infty}$ norm of $\rho_{f_{1}}$ and $\rho_{f_{2}}.$ This estimate can then
be applied to prove the validity of the quasi-neutral limit for Vlasov-Poisson
for initial data that are a double-exponential perturbation of analytic
functions [34, 35] (see Remark 3.4).
To improve this result, given $(X_{i},V_{i})$ the characteristics associated
to $f_{i}$, we consider a nonlinear $W_{2}$-type quantity of the form
$Q(t):=\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left[\lambda(t)|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}+|V_{1}(t,x,v)-V_{2}(t,y,w)|^{2}\right]d\pi_{0}(x,v,y,w)$
where $\pi_{0}$ is an optimal coupling, and $\lambda(t)=|\log(Q(t))|.$ We then
prove that $Q(t)$ is well-defined whenever $Q(t)\ll 1,$ and finally, comparing
$Q(t)$ to $W_{2},$ we show that
$W_{2}(f_{1}(t),f_{2}(t))^{2}\leq
2e^{-\left(\sqrt{\left|\log\left\\{W_{2}(f_{1}(0),f_{2}(0))^{2}\left|\log\left(\frac{1}{2}W_{2}(f_{1}(0),f_{2}(0))^{2}\right)\right|\right\\}\right|}-Ct\right)^{2}}.$
(see Theorem 3.1). To better understand the improvement of our estimate with
respect to Loeper’s, one can think as follows: if
$W_{2}(f_{1}(0),f_{2}(0))=\theta\ll 1$, then Loeper’s estimate implies that
$W_{2}(f_{1}(t),f_{2}(t))\lesssim 1$ for $t\in[0,\log|\log\theta|]$. Instead,
our bound gives $W_{2}(f_{1}(t),f_{2}(t))\lesssim 1$ for
$t\in\bigl{[}0,|\log\theta|^{1/2}\bigr{]}$, so on a much longer time-interval.
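For a concrete sense of these scales (together with the Gronwall scale $|\log\theta|$ discussed in Remark 1.2 below), a quick computation of ours with $\theta=10^{-10}$:

```python
import math

theta = 1e-10
L = abs(math.log(theta))   # |log theta| ~ 23.0
print(math.log(L))         # Loeper scale log|log theta| ~ 3.1
print(math.sqrt(L))        # the new scale |log theta|^(1/2) ~ 4.8
print(L)                   # Gronwall scale |log theta| ~ 23.0
```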
###### Remark 1.2.
* •
Note that a standard Gronwall estimate of the form
$W_{2}(f_{1}(t),f_{2}(t))\leq e^{Ct}W_{2}(f_{1}(0),f_{2}(0))$ would imply that
$W_{2}(f_{1}(t),f_{2}(t))\lesssim 1$ for $t\in[0,|\log\theta|]$. So, while
Loeper’s bound loses an extra logarithm in terms of time-scale, our bound only
loses a square root. Since the electric field for a solution with bounded
density is at most log-Lipschitz, an estimate of the form
$W_{2}(f_{1}(t),f_{2}(t))\leq e^{Ct}W_{2}(f_{1}(0),f_{2}(0))$ is not expected
to hold in this setting, and we believe our bound to be essentially sharp.
* •
Our improvement from $\log|\log\theta|$ to $|\log\theta|^{1/2}$ is similar to
the one obtained for the $W_{1}$ distance, see [38, Remark 1.7]. In that
paper, the authors rely crucially on the second-order structure of the Vlasov
equation, namely $\ddot{X}=\nabla U(t,X)$. Our proof, instead, relies only on
the fact that $\dot{X}=a(t,X,V)$, where $a(t,\cdot,\cdot)$ is Lipschitz, and
it can be generalized to other contexts where the second-order structure
fails.
Our new stability estimate has interesting applications to some
singular limits for Vlasov-type equations. In particular, by considering the
Vlasov-Poisson system in appropriate dimensionless variables that take the
Debye length into account, we prove the validity of the quasi-neutral limit
for Vlasov-Poisson for initial data that are an exponential perturbation of
analytic functions, see also Remark 3.4.
The paper is structured as follows: in the next two sections, we will present
our two main results, and then in the final section of the paper, we will
discuss more generally our approach and how it leads to the introduction of a
new family of Wasserstein-type distances.
## 2. Dobrushin’s estimate revisited
### 2.1. The Vlasov equation
The Vlasov equation is a non-linear partial differential equation providing a
statistical description for the collective behavior of large numbers of
charged particles in mutual, long-range interaction. This model was first
introduced by Jeans in the context of Newtonian stellar dynamics [41], and
later by Vlasov in his work on plasma physics [61, 62]. The unknown of the
Vlasov equation $f(t,x,v)$ is the distribution function of the system at time $t$,
that is, the number density of particles that are located at the position $x$
and have instantaneous velocity $v$ at time $t.$ The Vlasov equation for the
distribution function $f$ reads as follows:
$\partial_{t}f(t,x,v)+v\cdot\nabla_{x}f(t,x,v)+F[f](t,x)\cdot\nabla_{v}f(t,x,v)=0$
(2.1)
where
$F[f](x)=\iint\nabla K(x-y)\,f(dy,dw)=\nabla K\ast_{x,v}f.$
In other words, the Vlasov equation for particle systems is a kinetic model
where each particle is subject to the acceleration field $F[f]$ created by all
the other particles in the system.
The Vlasov equation is a transport equation and, for a sufficiently regular
force field, it can be described by the method of characteristics. The initial
distribution $f_{0}$ is transported by a characteristic flow $(X,V)$ generated
by the mean-field force $F[f]$: if we denote
$\left\\{\begin{array}[]{l}\dot{X}(t,x,v)=V(t,x,v),\\\
\dot{V}(t,x,v)=F[f](t,X(t,x,v)),\\\
X(0,x,v)=x,\,V(0,x,v)=v,\end{array}\right.$
then $f(t,X(t,x,v),V(t,x,v))=f(0,x,v)$. Also, since the vector field
$(v,F[f])$ is divergence free, one has conservation of mass and of all
$L^{p}$-norms. For an introduction to this topic we refer to the lecture notes
[20].
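As a purely illustrative complement (not taken from the paper), the following sketch evolves the empirical measure of $N$ particles along the characteristic ODEs for a smooth interaction kernel; the Gaussian-type kernel and all parameters are hypothetical choices.

```python
import numpy as np

def grad_K(r, sigma=1.0):
    # Gradient of the smooth kernel K(r) = sigma^2 * exp(-|r|^2 / (2 sigma^2)),
    # an illustrative C^{1,1} choice with bounded second derivatives.
    return -r * np.exp(-np.sum(r ** 2, axis=-1, keepdims=True) / (2 * sigma ** 2))

def step(X, V, dt):
    # Mean-field force on particle i: (1/N) * sum_j grad_K(X_i - X_j).
    F = grad_K(X[:, None, :] - X[None, :, :]).mean(axis=1)
    V = V + dt * F          # V' = F[f](X)
    X = X + dt * V          # X' = V  (semi-implicit Euler along characteristics)
    return X, V

rng = np.random.default_rng(1)
X, V = rng.normal(size=(200, 2)), rng.normal(size=(200, 2))
for _ in range(100):
    X, V = step(X, V, dt=0.01)
```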
### 2.2. An improved Dobrushin’s estimate
Consider the Vlasov equation with smooth kernel. More precisely,
$\partial_{t}f+v\cdot\nabla_{x}f+F[f]\cdot\nabla_{v}f=0,\qquad F[f]:=\nabla
K\ast\rho_{f},\qquad\rho_{f}:=\int fdv,$ (2.2)
where $\|D^{2}K\|_{\infty}=:B<\infty$. As explained in the introduction, our
goal is to provide a stability estimate for solutions that is optimal in the
regime as $B$ tends to zero. Here is our result:
###### Theorem 2.1.
Let $f_{1},f_{2}$ be two solutions of (2.2). Then
$W_{1}(f_{1}(t),f_{2}(t))\leq\min\left\\{(1+t)e^{\frac{2}{3}B\left((1+t)^{3}-1\right)},e^{(1+2B)t}\right\\}W_{1}(f_{1}(0),f_{2}(0)).$
###### Proof.
Let $(X_{i},V_{i})$ denote the characteristic flow associated to $f_{i}$, that
is
$\left\\{\begin{array}[]{l}\dot{X}_{i}(t,x,v)=V_{i}(t,x,v),\\\
\dot{V}_{i}(t,x,v)=\nabla\bigl{(}K\ast\rho_{f_{i}}\bigr{)}(t,X_{i}(t,x,v)),\\\
X_{i}(0,x,v)=x,\,V_{i}(0,x,v)=v.\end{array}\right.$
Note that, since $\nabla K$ is Lipschitz, the characteristic flow is well-
defined thanks to Cauchy-Lipschitz theory (see [20, Chapter 2]). To prove
Theorem 2.1, we consider $\pi_{0}$ an optimal $W_{1}$-coupling between
$f_{1}(0)$ and $f_{2}(0)$, and we define the quantity
$Q(t):=\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\big{[}|X_{1}(t,x,v)-tV_{1}(t,x,v)-(X_{2}(t,y,w)-tV_{2}(t,y,w))|+|V_{1}(t,x,v)-V_{2}(t,y,w)|\big{]}d\pi_{0}(x,v,y,w).$
Note that
$Q(0)=\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left[|x-y|+|v-w|\right]d\pi_{0}(x,v,y,w)=W_{1}(f_{1}(0),f_{2}(0)).$
(2.3)
Also
$\begin{split}&\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\big{[}|X_{1}(t,x,v)-X_{2}(t,y,w)|+|V_{1}(t,x,v)-V_{2}(t,y,w)|\big{]}d\pi_{0}(x,v,y,w)\\\
&\leq\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\big{[}|X_{1}(t,x,v)-tV_{1}(t,x,v)-(X_{2}(t,y,w)-tV_{2}(t,y,w))|+|V_{1}(t,x,v)-V_{2}(t,y,w)|\big{]}d\pi_{0}(x,v,y,w)\\\
&\qquad+t\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|V_{1}(t,x,v)-V_{2}(t,y,w)|d\pi_{0}(x,v,y,w)\\\
&\leq(1+t)Q(t).\end{split}$ (2.4)
Since $\frac{d}{dt}(X_{i}-tV_{i})=-t\dot{V}_{i}$, one has
$\displaystyle\frac{d}{dt}Q(t)$
$\displaystyle\leq(1+t)\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left[|\dot{V}_{1}(t,x,v)-\dot{V}_{2}(t,y,w)|\right]d\pi_{0}(x,v,y,w)$
(2.5)
$\displaystyle=(1+t)\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left|\nabla\bigl{(}K\ast\rho_{f_{1}}\bigr{)}(t,X_{1}(t,x,v))-\nabla\bigl{(}K\ast\rho_{f_{2}}\bigr{)}(t,X_{2}(t,y,w))\right|d\pi_{0}(x,v,y,w)$
(2.6)
$\displaystyle\leq(1+t)\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left|\nabla\bigl{(}K\ast\rho_{f_{1}}\bigr{)}(t,X_{1}(t,x,v))-\nabla\bigl{(}K\ast\rho_{f_{2}}\bigr{)}(t,X_{1}(t,x,v))\right|d\pi_{0}(x,v,y,w)$
(2.7)
$\displaystyle\qquad+(1+t)\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left|\nabla\bigl{(}K\ast\rho_{f_{2}}\bigr{)}(t,X_{1}(t,x,v))-\nabla\bigl{(}K\ast\rho_{f_{2}}\bigr{)}(t,X_{2}(t,y,w))\right|d\pi_{0}(x,v,y,w)$
(2.8) $\displaystyle=:(1+t)\bigl{[}T_{1}+T_{2}\bigr{]}.$ (2.9)
We now observe that, since $\nabla K$ is $B$-Lipschitz, we can bound
$T_{2}\leq
B\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\big{[}|X_{1}(t,x,v)-X_{2}(t,y,w)|\big{]}d\pi_{0}(x,v,y,w)\leq
B(1+t)Q(t),$
where the second inequality follows from (2.4). For $T_{1}$, we note that
$\displaystyle\left|\nabla\bigl{(}K\ast\rho_{f_{1}}\bigr{)}(t,X_{1}(t,x,v))-\nabla\bigl{(}K\ast\rho_{f_{2}}\bigr{)}(t,X_{1}(t,x,v))\right|=\left|\int_{\mathbb{T}^{d}}\nabla
K(X_{1}(t,x,v)-z)d\,\bigl{(}\rho_{f_{1}(t)}(z)-\rho_{f_{2}(t)}(z)\bigr{)}\right|.$
(2.10)
Here, similarly to Dobrushin’s argument, we use that $W_{1}$ admits the
following dual formulation:
$W_{1}(\mu,\nu)=\sup_{\psi\,\text{1-Lip}}\int\psi\,d(\mu-\nu).$ (2.11)
Thanks to this fact, since $z\mapsto\nabla K(X_{1}(t,x,v)-z)$ is
$B$-Lipschitz, we deduce that
$\left|\int_{\mathbb{T}^{d}}\nabla
K(X_{1}(t,x,v)-z)\mathop{}\\!\mathrm{d}\bigl{(}\rho_{f_{1}(t)}(z)-\rho_{f_{2}(t)}(z)\bigr{)}\right|\leq
B\,W_{1}(\rho_{f_{1}(t)},\rho_{f_{2}(t)}),$
and therefore
$T_{1}\leq
B\,W_{1}(\rho_{f_{1}(t)},\rho_{f_{2}(t)})\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}d\pi_{0}(x,v,y,w)=B\,W_{1}(\rho_{f_{1}(t)},\rho_{f_{2}(t)}).$
Let
$\gamma_{t}=(X_{1}(t,\cdot,\cdot),X_{2}(t,\cdot,\cdot))_{\\#}\pi_{0}\in\Pi(\rho_{f_{1}},\rho_{f_{2}}).$
Then, by the definition of $W_{1}$ (see Definition 1.1),
$W_{1}(\rho_{f_{1}(t)},\rho_{f_{2}(t)})\leq\int_{(\mathbb{T}^{d})^{2}}|x-y|d\gamma_{t}(x,y)=\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|X_{1}(t,x,v)-X_{2}(t,y,w)|d\pi_{0}(x,v,y,w),$
so using (2.4) we conclude that $T_{1}\leq B(1+t)Q(t)$.
In conclusion, we proved that
$Q^{\prime}(t)\leq 2B(1+t)^{2}Q(t),$
therefore, integrating the differential inequality $\frac{d}{dt}\log Q(t)\leq 2B(1+t)^{2}$ between $0$ and $t$,
$Q(t)\leq e^{\frac{2}{3}B\left((1+t)^{3}-1\right)}Q(0).$
Recalling (2.3) and (2.4), this yields
$W_{1}(f_{1}(t),f_{2}(t))\leq(1+t)e^{\frac{2}{3}B\left((1+t)^{3}-1\right)}W_{1}(f_{1}(0),f_{2}(0)).$
(2.12)
As noted in the introduction, this estimate is more powerful than the usual
Dobrushin’s estimate
$W_{1}(f_{1}(t),f_{2}(t))\leq e^{(1+2B)t}W_{1}(f_{1}(0),f_{2}(0))$ (2.13)
when $B$ is small. (Dobrushin’s argument is performed considering the so-called
bounded-Lipschitz distance on probability measures, which is defined by duality
against bounded Lipschitz functions. However, the same proof, where one replaces
the bounded-Lipschitz distance with the $W_{1}$ distance, which can be defined by
duality against Lipschitz functions as shown in (2.11), provides the bound (2.13).)
On the other hand, for large times, the term $(1+t)^{3}$ in our estimate provides
a worse bound than (2.13). Hence, both bounds are helpful depending on the mutual
sizes of $B$ and $t$, and one can choose to apply whichever gives the stronger
bound. In conclusion, one has
$W_{1}(f_{1}(t),f_{2}(t))\leq\min\left\\{(1+t)e^{\frac{2}{3}B\left((1+t)^{3}-1\right)},e^{(1+2B)t}\right\\}W_{1}(f_{1}(0),f_{2}(0)),$
(2.14)
as desired. ∎
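The duality formula (2.11), which drove the bound on $T_{1}$, is also easy to sanity-check numerically in one dimension: for empirical measures, the gap $|\int\psi\,d(\mu-\nu)|$ of any 1-Lipschitz test function is dominated by the exact empirical $W_{1}$. A small sketch of ours, using scipy's one-dimensional Wasserstein routine:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
u = rng.normal(size=5000)           # samples of mu
v = rng.normal(loc=0.3, size=5000)  # samples of nu

w1 = wasserstein_distance(u, v)     # exact 1-D W_1 of the empirical measures
for psi in (np.abs, np.sin, lambda s: np.minimum(s, 1.0)):  # all 1-Lipschitz
    gap = abs(psi(u).mean() - psi(v).mean())
    assert gap <= w1 + 1e-9
print(f"W_1 ~ {w1:.3f}; every tested 1-Lipschitz gap stays below it")
```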
## 3. Stability estimates for Vlasov-Poisson and quasi-neutral limits
### 3.1. The Vlasov-Poisson system
The Vlasov-Poisson system is the classical kinetic model describing dilute,
totally ionised, unmagnetized plasma. In its most common form, $f$ is the
distribution function of the electrons moving in a self-induced electrostatic
field, while the ions are assumed to act as a fixed background. In this
section, we consider the phase space to be
${\mathbb{T}}^{d}\times{\mathbb{R}}^{d},$ for reasons that will be explained
later.
$(VP):=\left\\{\begin{array}[]{ccc}\partial_{t}f+v\cdot\nabla_{x}f+E\cdot\nabla_{v}f=0,\\\
E=-\nabla U,\\\ \Delta U=1-\int_{{\mathbb{R}}^{d}}f\,dv=1-\rho_{f},\\\
f|_{t=0}=f_{0}\geq 0,\ \
\int_{{\mathbb{T}}^{d}\times{\mathbb{R}}^{d}}f_{0}\,dx\,dv=1.\end{array}\right.$
(3.1)
The well-posedness theory of this system has been extensively studied, see,
for example, the survey paper [32]. Global-in-time classical solutions have
been constructed under various conditions on the initial data (see for example
[4, 6, 47, 53, 57, 59]), while global-in-time weak solutions were presented in
[2] and [39] for $L^{p}$ initial data (see also [3, 5]). In this section, we
will focus on an important contribution to the uniqueness theory made by
Loeper [49], who proved uniqueness for solutions of (3.1) with bounded density
by means of a strong-strong stability estimate in Wasserstein.
### 3.2. Quasi-neutral limits
Since plasmas are excellent conductors of electricity, and any charges that
develop are readily neutralized, they can be treated as being quasi-neutral.
On the other hand, at small spatial and time scales, the quasi-neutrality is
no longer verified. The distance over which quasi-neutrality may break down
can be described in terms of the Debye length $\lambda_{D}$, and varies
according to the physical characteristics of the plasma. The Debye length is
usually much shorter than the typical observation scale. Therefore, we can
define the parameter $\varepsilon:=\lambda_{D}/L$ and consider the limit as
$\varepsilon$ tends to zero. This procedure is known as quasi-neutral limit.
When we take the Debye length into account, in appropriate dimensionless
variables, the Vlasov-Poisson system becomes:
$(VP)_{\varepsilon}:=\left\\{\begin{array}[]{ccc}\partial_{t}f_{\varepsilon}+v\cdot\nabla_{x}f_{\varepsilon}+E_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}=0,\\\
E_{\varepsilon}=-\nabla_{x}U_{\varepsilon},\\\
-\varepsilon^{2}\Delta_{x}U_{\varepsilon}=\rho_{f_{\varepsilon}}-1,\\\
f_{\varepsilon}|_{t=0}=f_{0,\varepsilon}\geq 0,\ \
\int_{\mathbb{T}^{d}\times\mathbb{R}^{d}}f_{0,\varepsilon}\,dx\,dv=1,\end{array}\right.$
(3.2)
and the energy of the rescaled system is the following:
$\mathcal{E}(f_{\varepsilon}(t)):=\frac{1}{2}\int_{\mathbb{T}^{d}\times\mathbb{R}^{d}}f_{\varepsilon}|v|^{2}dxdv+\frac{\varepsilon^{2}}{2}\int_{\mathbb{T}^{d}}|\nabla_{x}U_{\varepsilon}|^{2}dx.$
(3.3)
The quasi-neutral limit corresponds to a singular limit for the rescaled
system (3.2), in which the formal limiting system is the _Kinetic Isothermal
Euler_ system:
$(KIE):=\left\\{\begin{array}[]{ccc}\partial_{t}f+v\cdot\nabla_{x}f+E\cdot\nabla_{v}f=0,\\\
E=-\nabla_{x}U,\\\ \rho=1,\\\ f|_{t=0}=f_{0}\geq 0,\ \
\int_{\mathbb{T}^{d}\times\mathbb{R}^{d}}f_{0}\,dx\,dv=1.\end{array}\right.$
(3.4)
The force $E=-\nabla_{x}U$ is defined implicitly through the incompressibility
constraint $\rho=1$, and may be thought of as a Lagrange multiplier associated
to this constraint. In other words, electrons move under the effect of a
gradient in such a way that their density remains equal to $1$ everywhere.
Thus (KIE) is a “kinetic” version of the incompressible Euler equations. As
shown in [8], the potential $U$ formally satisfies the Laplace equation
$-\Delta_{x}U=\sum_{i,j}\partial_{x_{i}}\partial_{x_{j}}\int_{{\mathbb{R}}^{d}}v_{i}v_{j}f\mathop{}\\!\mathrm{d}\,v.$
As discussed in [31], the justification of this limit is very delicate. In
particular, it can fail even for smooth initial data. Still, a series of
positive results are available. In particular, as shown in [34, 35], a way to
get the validity of the quasi-neutral limit for a large class of data can be
achieved if one can prove some quantitative strong-strong stability at the
level of the $(VP)_{\varepsilon}$ system. Also, the stronger the stability
estimate, the larger the class of initial data for which the quasi-neutral
limit hold. In [34, 35] the authors prove that the quasi-neutral limit holds
for initial data that are an extremely small perturbation of an analytic
function. Here, by introducing a suitable non-linear version of the
Wasserstein distance, we can considerably improve those results.
Here is our main theorem, which provides us with a new $W_{2}$ stability
estimate. We prove the result with a general parameter $\varepsilon\leq 1$ as
this is necessary for the study of the quasi-neutral limit. The reader
interested in the Vlasov-Poisson case can simply apply our estimate with
$\varepsilon=1$.
###### Theorem 3.1.
Let $\varepsilon\leq 1$, and let $f_{1},f_{2}$ be two weak solutions of the
$(VP)_{\varepsilon}$ system (3.2), and set
$\rho_{1}:=\int_{\mathbb{R}^{d}}f_{1}\,dv,\quad\rho_{2}=\int_{\mathbb{R}^{d}}f_{2}\,dv.$
Define the function
$A(t):=\|\rho_{1}(t)\|_{L^{\infty}(\mathbb{T}^{d})}+\|\rho_{2}(t)\|_{L^{\infty}(\mathbb{T}^{d})},$
(3.5)
and assume that $A(t)\in L^{1}([0,T])$ for some $T>0$. There exist a
dimensional constant $C_{d}>0$ and a universal constant $c_{0}>0$ such that
the following holds: if $W_{2}(f_{1}(0),f_{2}(0))$ is sufficiently small so
that $W_{2}(f_{1}(0),f_{2}(0))\leq c_{0}\varepsilon$ and
$\sqrt{\left|\log\left(\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\left|\log\left(\frac{1}{2}\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\right)\right|\right)\right|}\geq\frac{C_{d}}{\varepsilon}\int_{0}^{T}A(s)\,ds+\sqrt{\left|\log\left(\frac{\varepsilon}{e}\right)\right|},$
(3.6)
then
$W_{2}(f_{1}(t),f_{2}(t))^{2}\leq
2e^{-\left(\sqrt{\left|\log\left\\{\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\left|\log\left(\frac{1}{2}\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\right)\right|\right\\}\right|}-\frac{C_{d}}{\varepsilon}\int_{0}^{t}A(s)\,ds\right)^{2}}\qquad\text{for
all }t\in[0,T].$
###### Remark 3.2.
The assumption (3.6) depends on the time interval $[0,T].$ If $T$ is very
small so that
$\frac{C_{d}}{\varepsilon}\int_{0}^{T}A(s)\,ds\leq\sqrt{\left|\log\left(\frac{\varepsilon}{e}\right)\right|}$
then (3.6) corresponds to $W_{2}(f_{1}(0),f_{2}(0))\leq\varepsilon^{3}.$ Of
course this is not the relevant regime since the time interval is usually at
least of size $1$. In particular, since $A(t)\geq 2$ (recall that
$\int_{\mathbb{T}^{d}}\rho_{i}(x,t)\,dx=1$, which implies
$\|\rho_{i}(\cdot,t)\|_{L^{\infty}(\mathbb{T}^{d})}\geq 1$ for $i=1,2$, and
therefore $A(t)\geq 2$), we have
$\frac{C_{d}}{\varepsilon}\int_{0}^{T}A(s)\,ds=\frac{C_{T}}{\varepsilon}\qquad\text{for
some constant $C_{T}\gtrsim 1$}.$
Therefore (3.6) corresponds to asking that $W_{2}(f_{1}(0),f_{2}(0))$ be bounded
by $e^{-C\varepsilon^{-2}}.$ This requirement is very natural in this context,
as also discussed in Remark 3.4.
As in [34, 35], Theorem 3.1 yields the validity of the quasi-neutral limit for
$W_{2}$-perturbations of analytic data. However, our estimate is stronger than
the previous results and provides an almost optimal rate in the
quasi-neutral limit. More broadly, we believe that our approach for proving
Theorem 3.1 has its own interest and could be used in other settings.
To state our application to the quasi-neutral limit, we need to recall some
notation introduced by Grenier [26] in one of the first mathematical works on
this topic. In [26] the author relies on an interpretation of the plasma as a
superposition of a -possibly uncountable- collection of fluids and he shows
that the quasi-neutral limit holds when the sequence of initial data
$f_{0,\varepsilon}$ enjoys uniform analytic regularity with respect to the
space variable. As explained in [34] (see the discussion after Definition
$1.4$), this decomposition is purely a technical tool and it does not impose
any restriction on the initial datum. This result has been improved by Brenier
[9], who gives a rigorous justification of the quasi-neutral limit in the so
called “cold electron” case, i.e. when the initial distribution
$f_{0,\varepsilon}$ converges to a monokinetic profile
$f_{0}(x,v)=\rho_{0}(x)\delta_{v=v_{0}(x)}$
where $\delta_{v}$ denotes the Dirac measure in velocity, see also [9, 50,
25].
Let us define a suitable analytic norm, as in [26]: given $\delta>0$ and a
function $g:\mathbb{T}^{d}\to\mathbb{R}$, we define
$\|g\|_{B_{\delta}}:=\sum_{k\in\mathbb{Z}^{d}}|\widehat{g}(k)|\delta^{|k|},$
where $\widehat{g}(k)$ is the $k$-th Fourier coefficient of $g$. We define
$B_{\delta}$ as the space of functions $g$ such that
$\|g\|_{B_{\delta}}<+\infty$.
###### Corollary 3.3.
Let $d=2,3$, and let $\gamma$, $\delta_{0}$, and $C_{0}$ be positive
constants. Consider a sequence $(f_{0,\varepsilon})$ of non-negative initial
data in $L^{1}$ for (3.2) such that for all $\varepsilon\in(0,1)$, and all
$x\in\mathbb{T}^{d}$,
* •
(uniform estimates)
$\|f_{0,\varepsilon}\|_{\infty}\leq
C_{0},\quad\mathcal{E}(f_{0,\varepsilon})\leq C_{0},$
* •
(compact support in velocity)
$f_{0,\varepsilon}(x,v)=0\quad\text{if }|v|>\frac{1}{\varepsilon^{\gamma}},$
* •
(analytic + perturbation) Assume the following decomposition:
$f_{0,\varepsilon}=g_{0,\varepsilon}+h_{0,\varepsilon},$
where $(g_{0,\varepsilon})$ is a sequence of continuous functions satisfying
$\sup_{\varepsilon\in(0,1)}\sup_{v\in\mathbb{R}^{d}}\,(1+|v|^{2})\|g_{0,\varepsilon}(\cdot,v)\|_{B_{\delta_{0}}}\leq
C_{0},$
admitting a limit $g_{0}$ in the sense of distributions. Furthermore,
$(h_{0,\varepsilon})$ is a sequence of functions satisfying for all
$\varepsilon>0$
$W_{2}(f_{0,\varepsilon},g_{0,\varepsilon})\leq
e^{-K\varepsilon^{-2\zeta}}\qquad\text{with
}\zeta=\left\\{\begin{array}[]{ll}1+2\max\\{\beta,\gamma\\}&\text{if }d=2,\\\
1+\max\\{38,3\gamma\\}&\text{if }d=3,\\\ \end{array}\right.$ (3.7)
for some constants $K>0$ and $\beta>2$.
For all $\varepsilon\in(0,1)$, consider $f_{\varepsilon}(t)$ a global weak
solution of (3.2) with initial condition $f_{0,\varepsilon}$, and define the
filtered distribution function
$\widetilde{f}_{\varepsilon}(t,x,v):=f_{\varepsilon}\Big{(}t,x,v-\frac{1}{i}(d_{+}(t,x)e^{\frac{it}{\sqrt{\varepsilon}}}-d_{-}(t,x)e^{-\frac{it}{\sqrt{\varepsilon}}})\Big{)}$
(3.8)
where the correctors $(d_{\pm})$ are defined as the solutions of
${\rm curl}\ d_{\pm}=0,\qquad{\rm
div}\bigg{(}\partial_{t}d_{\pm}+\left(\int\rho_{\theta}v_{\theta}\mu(d\theta)\cdot\nabla\right)d_{\pm}\bigg{)}=0,$
(3.9) ${\rm div}d_{\pm}(0)=\underset{\varepsilon\to 0}{\lim}{\rm
div}\frac{\sqrt{\varepsilon}E^{\varepsilon}(0)\pm
ij^{\varepsilon}(0)}{2},\qquad
j^{\varepsilon}:=\int\rho^{\varepsilon}_{\theta}v^{\varepsilon}_{\theta}\mu(d\theta).$
(3.10)
Then there exist $T>0$, and $g(t)$ a weak solution on $[0,T]$ of (3.4) with
initial condition $g_{0},$ such that
$\lim_{\varepsilon\to
0}\sup_{t\in[0,T]}W_{1}(\widetilde{f}_{\varepsilon}(t),g(t))=0.$
###### Remark 3.4.
Already in the one dimensional case, there is a negative result stating that
an initial rate of convergence of the form
$W_{2}(f_{0,\varepsilon},g_{0,\varepsilon})\leq\varepsilon^{k}$ for some $k>0$
is not sufficient to ensure the validity of the quasi-neutral limit for
positive times. This is the consequence of _instability mechanisms_ described
in [27] and [33]. Hence, our assumption on the size of
$W_{2}(f_{0,\varepsilon},g_{0,\varepsilon})$ considerably improves the results
in [34, 35], where a double exponential
$\exp\big{(}-\exp({K\varepsilon^{-\zeta}})\big{)}$ was required.
###### Remark 3.5.
In Corollary 3.3 we consider sequences of initial conditions with compact
support in velocity (yet, we allow the support to grow polynomially as
$\varepsilon$ goes to zero). The reason is that we need $L^{\infty}$ bounds on
the density $\rho_{f_{\varepsilon}}(t)=\int f_{\varepsilon}(t)\,dv$, so a
control on the support in velocity is needed. We have decided to put these
assumptions because they are the same as in [6] and so we can rely on some
estimates proved in that paper. However, using the argument in [52] (see also
[29]) one could relax the assumptions and require only a moment condition on
$f_{0,\varepsilon}$. Providing this extension is not difficult, but it would
require some work that would go beyond the main goal of this paper.
Before giving the proof of Theorem 3.1, we first show how it implies Corollary
3.3.
###### Proof of Corollary 3.3.
Let $g_{\varepsilon}(t)$ denote the solution of (3.2) starting from
$g_{0,\varepsilon}$. As shown in [34, Section 4], under the assumptions in the
statement, the following bounds hold:
$\|\rho_{g_{\varepsilon}}\|_{L^{\infty}([0,T]\times\mathbb{T}^{d})}\leq\bar{C},\qquad\|\rho_{f_{\varepsilon}}\|_{L^{\infty}([0,T]\times\mathbb{T}^{d})}\leq\bar{C}\varepsilon^{-(\zeta-1)},$
where $\zeta=\zeta(d)$ is as in the statement. Hence Theorem 3.1 and (3.7)
yield
$\sup_{[0,T]}W_{2}(f_{\varepsilon}(t),g_{\varepsilon}(t))\leq
2e^{-\left(\sqrt{\left|\log\left\\{\varepsilon^{-2}W_{2}(f_{0,\varepsilon},g_{0,\varepsilon})^{2}\left|\log\left(\frac{1}{2}\varepsilon^{-2}W_{2}(f_{0,\varepsilon},g_{0,\varepsilon})^{2}\right)\right|\right\\}\right|}-C_{d}\bar{C}T\varepsilon^{-\zeta}\right)^{2}}$
provided $C_{d}^{2}\bar{C}^{2}T^{2}<K$ (which can be guaranteed by taking $T$
smaller if necessary). This implies that
$\sup_{[0,T]}W_{2}(f_{\varepsilon}(t),g_{\varepsilon}(t))\to 0$ as
$\varepsilon\to 0,$ and we can now conclude as in [34, Proof of Theorem 1.7].
∎
### 3.3. Proof of Theorem 3.1
Before starting the proof we recall [34, Lemma 3.2], see also [29, Lemma 3.3].
###### Lemma 3.6.
Let $\Psi_{i}:\mathbb{T}^{d}\to\mathbb{R}$ solve
$-\varepsilon^{2}\Delta\Psi_{i}=\rho_{i}-1,\qquad i=1,2.$
Then
$\varepsilon^{2}\|\nabla\Psi_{1}-\nabla\Psi_{2}\|_{L^{2}(\mathbb{T}^{d})}\leq\Bigl{[}\max\bigl{\\{}\|\rho_{1}\|_{L^{\infty}(\mathbb{T}^{d})},\|\rho_{2}\|_{L^{\infty}(\mathbb{T}^{d})}\bigr{\\}}\Bigr{]}^{1/2}\,W_{2}(\rho_{1},\rho_{2}),$
$\varepsilon^{2}|\nabla\Psi_{i}(x)-\nabla\Psi_{i}(y)|\leq
C\,|x-y|\,\log\biggl{(}\frac{4\sqrt{d}}{|x-y|}\biggr{)}\,\|\rho_{i}-1\|_{L^{\infty}(\mathbb{T}^{d})}\qquad\forall\,x,y\in\mathbb{T}^{d},\,i=1,2.$
Let $(X_{i},V_{i})$ denote the characteristic flow associated to $f_{i}$, that
is
$\left\\{\begin{array}[]{l}\dot{X}_{i}(t,x,v)=V_{i}(t,x,v),\\\
\dot{V}_{i}(t,x,v)=E_{i}(t,X_{i}(t,x,v)),\\\
X_{i}(0,x,v)=x,\,\,V_{i}(0,x,v)=v,\end{array}\right.\qquad E_{i}=\nabla
U_{i},\qquad\varepsilon^{2}\Delta U_{i}=\rho_{f_{i}}-1.$
To prove Theorem 3.1, we consider $\pi_{0}$ an optimal $W_{2}$-coupling
between $f_{1}(0)$ and $f_{2}(0)$, and we define the quantity $Q(t)$
as the unique constant (assuming it exists) such that
$Q(t)=\frac{1}{2}\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left[\varepsilon^{-2}|\log
Q(t)|\,|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}+|V_{1}(t,x,v)-V_{2}(t,y,w)|^{2}\right]d\pi_{0}(x,v,y,w).$
In other words, we are considering a quantity of the form
$Q(t)=\frac{1}{2}\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left[\lambda(t)|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}+|V_{1}(t,x,v)-V_{2}(t,y,w)|^{2}\right]d\pi_{0}(x,v,y,w),$
with $\lambda(t)$ depending on time, and we are assuming that actually
$\lambda(t)$ is a function of $Q(t)$ itself. The particular choice
$\lambda(t)=\varepsilon^{-2}|\log Q(t)|$ is specific to this problem: the
logarithm will help to compensate for the log-Lipschitz regularity of the
electric fields, while $\varepsilon^{-2}$ is the natural scaling in the
current setting.
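Although $Q(t)$ is only defined implicitly, for fixed values of the position and velocity discrepancies the defining relation is a monotone scalar equation, so it can be solved by bisection. The following sketch (ours; all names are illustrative) anticipates the monotonicity established in Lemma 3.7 below.

```python
import math

def solve_Q(D, E, eps=1.0, tol=1e-14):
    # Solve Q = eps^{-2} |log Q| * D + E for Q in (0, 1); equivalently find the
    # root of F(q) = q + eps^{-2} log(q) * D - E, which is strictly increasing.
    f = lambda q: q + (eps ** -2) * math.log(q) * D - E
    lo, hi = 1e-300, 1.0 - 1e-16     # f(lo) < 0 < f(hi) whenever D > 0, E < 1
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

Q = solve_Q(D=1e-6, E=1e-4, eps=0.5)
print(Q, (0.5 ** -2) * abs(math.log(Q)) * 1e-6 + 1e-4)  # the two sides agree
```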
Note that a priori it is not clear that $Q(t)$ is well-defined. This will be
proved in Lemma 3.7 below. However, assuming for now that $Q(t)$ is well-
defined, we show how this quantity allows us to prove the result. We have
$\displaystyle Q^{\prime}(t)$
$\displaystyle=\frac{1}{2}\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\lambda^{\prime}(t)|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}\,d\pi_{0}(x,v,y,w)$
(3.11)
$\displaystyle\qquad+\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left[\lambda(t)(X_{1}(t,x,v)-X_{2}(t,y,w))\cdot(V_{1}(t,x,v)-V_{2}(t,y,w))\right]d\pi_{0}(x,v,y,w)$
(3.12)
$\displaystyle\qquad-\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left[(V_{1}(t,x,v)-V_{2}(t,y,w))\cdot(E_{1}(t,X_{1}(t,x,v))-E_{2}(t,X_{2}(t,y,w)))\right]d\pi_{0}(x,v,y,w).$
(3.13)
By the Cauchy-Schwarz inequality, and recalling the definition of $Q(t)$, we have:
$\displaystyle Q^{\prime}(t)$
$\displaystyle\leq\frac{1}{2}\lambda^{\prime}(t)\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}\,d\pi_{0}(x,v,y,w)$
(3.15)
$\displaystyle\qquad+\lambda(t)\left(\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}\,d\pi_{0}(x,v,y,w)\right)^{\frac{1}{2}}\cdot$
(3.16)
$\displaystyle\qquad\qquad\qquad\cdot\left(\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|V_{1}(t,x,v)-V_{2}(t,y,w)|^{2}\,d\pi_{0}(x,v,y,w)\right)^{\frac{1}{2}}$
(3.17)
$\displaystyle\qquad+\left(\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|V_{1}(t,x,v)-V_{2}(t,y,w)|^{2}\,d\pi_{0}(x,v,y,w)\right)^{\frac{1}{2}}\cdot$
(3.18)
$\displaystyle\qquad\qquad\qquad\cdot\left(\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|E_{1}(t,X_{1}(t,x,v))-E_{2}(t,X_{2}(t,y,w))|^{2}\,d\pi_{0}(x,v,y,w)\right)^{\frac{1}{2}}$
(3.19)
$\displaystyle\leq\frac{1}{2}\lambda^{\prime}(t)\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}\,d\pi_{0}(x,v,y,w)$
(3.20)
$\displaystyle+2\sqrt{\lambda(t)}Q(t)+\sqrt{Q(t)}\|E_{1}(t,X_{1}(t,x,v))-E_{2}(t,X_{2}(t,y,w))\|_{L^{2}(d\pi_{0}(x,v,y,w))}.$
(3.21)
Adding and subtracting $E_{2}(t,X_{1}(t,x,v))$, we obtain:
$\displaystyle Q^{\prime}(t)$
$\displaystyle\leq\frac{1}{2}\lambda^{\prime}(t)\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}\,d\pi_{0}(x,v,y,w)$
(3.22)
$\displaystyle+2\sqrt{\lambda(t)}Q(t)+\sqrt{Q(t)}\left(T_{1}+T_{2}\right)$
(3.23)
where
$T_{1}=\|E_{2}(t,X_{1}(t,x,v))-E_{2}(t,X_{2}(t,y,w))\|_{L^{2}(d\pi_{0}(x,v,y,w))},$
$T_{2}=\|E_{1}(t,X_{1}(t,x,v))-E_{2}(t,X_{1}(t,x,v))\|_{L^{2}(d\pi_{0}(x,v,y,w))}.$
Thanks to Lemma 3.6, and by the very same argument as in [34], we can bound $T_{1}$
and $T_{2}$ as follows (note that, since $\rho_{i}\geq 0$ and
$\|\rho_{i}(\cdot,t)\|_{L^{\infty}(\mathbb{T}^{d})}\geq 1,$ we have
$\|\rho_{i}(\cdot,t)-1\|_{L^{\infty}(\mathbb{T}^{d})}\leq\|\rho_{i}(\cdot,t)\|_{L^{\infty}(\mathbb{T}^{d})}\leq
A(t)$ for $i=1,2$):
$T_{2}\leq\frac{C}{\varepsilon^{2}}A(t)\sqrt{\frac{Q(t)}{\lambda(t)}},\qquad\text{and}\qquad
T_{1}\leq\frac{C}{\varepsilon^{2}}A(t)\sqrt{\phi\left(\frac{Q(t)}{\lambda(t)}\right)}$
where we have
$\phi(s)=\left\\{\begin{array}[]{ll}s\log^{2}(s)&\mbox{for}\,s\in(0,1/e]\\\
s&\mbox{for}\,s>1/e\end{array}\right.$ (3.24)
We now recall that $\lambda(t)=\varepsilon^{-2}|\log(Q(t))|$ and we substitute
this expression in the derivative of $Q(t).$ Notice that in this estimate we
are interested in small values of $Q(t)$ and in particular, as we will show
below, we will always be in the regime
$\varepsilon^{2}Q(t)/|\log(Q(t))|\in(0,1/e).$ Therefore we have
$T_{1}\leq\frac{C}{\varepsilon^{2}}A(t)\sqrt{\frac{\varepsilon^{2}\,Q(t)}{|\log(Q(t))|}\log^{2}\left(\frac{\varepsilon^{2}\,Q(t)}{|\log(Q(t))|}\right)},$
so by equation (3.22) we have
$Q^{\prime}(t)\leq\left(-\frac{1}{2}\frac{Q^{\prime}(t)}{|\log(Q(t))|}\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}\,d\pi_{0}(x,v,y,w)\right)\\\
+\left(2\frac{\sqrt{|\log(Q(t))|}}{\varepsilon}+\frac{C\,A(t)}{\varepsilon\sqrt{|\log(Q(t))|}}\right)Q(t)+C\,A(t)\frac{\sqrt{Q(t)}}{\varepsilon}\sqrt{\frac{Q(t)}{|\log(Q(t))|}\log^{2}\left(\frac{\varepsilon^{2}\,Q(t)}{|\log(Q(t))|}\right)}.$
We now consider two cases, depending on the sign of $Q^{\prime}(t)$. If
$Q^{\prime}(t)\leq 0$, then we do not do anything. If instead
$Q^{\prime}(t)>0$, then the first term in the right-hand side above is
negative, and therefore
$\displaystyle
Q^{\prime}(t)\leq\left(2\frac{\sqrt{|\log(Q(t))|}}{\varepsilon}+\frac{C\,A(t)}{\varepsilon\sqrt{|\log(Q(t))|}}\right)Q(t)+C\,A(t)\frac{\sqrt{Q(t)}}{\varepsilon}\sqrt{\frac{Q(t)}{|\log(Q(t))|}\log^{2}\left(\frac{\varepsilon^{2}\,Q(t)}{|\log(Q(t))|}\right)}.$
(3.25)
Since the right-hand side above is nonnegative, independently of the sign of
$Q^{\prime}(t)$ we know that the bound above holds. We now observe that as
long as $Q(t)\leq\varepsilon$ then
$\log^{2}\left(\frac{\varepsilon^{2}\,Q(t)}{|\log(Q(t))|}\right)\leq
C\log^{2}(Q(t)).$
Thus,
$Q^{\prime}(t)\leq\left(2\frac{\sqrt{|\log(Q(t))|}}{\varepsilon}+\frac{C\,A(t)}{\varepsilon\sqrt{|\log(Q(t))|}}+\frac{C\,A(t)\sqrt{|\log(Q(t))|}}{\varepsilon}\right)Q(t)\qquad\text{provided
$Q(t)\leq\varepsilon$}.$
Since $A(t)\geq 1$ (recall that $\int_{\mathbb{T}^{d}}\rho(x,t)\,dx=1,$
which implies $\|\rho(\cdot,t)\|_{L^{\infty}(\mathbb{T}^{d})}\geq 1,$ and
therefore $A(t)\geq 1$), provided $|\log(Q(t))|\geq 1$, the above bound reduces to
$Q^{\prime}(t)\leq\frac{2C_{d}\,A(t)}{\varepsilon}Q(t)\sqrt{|\log(Q(t))|}$
where $C_{d}$ is a dimensional constant. Note that the two conditions
$Q(t)\leq\varepsilon$ and $|\log(Q(t))|\geq 1$ are guaranteed if
$Q(t)\leq\frac{\varepsilon}{e}$ (recall that $\varepsilon\leq 1$ by
assumption).
Hence, provided that we are in the regime $Q(s)\leq\frac{\varepsilon}{e}$ on
$[0,t]$, this implies
$Q(t)\leq
R(t):=e^{-\left(\sqrt{|\log(Q(0))|}-\frac{C_{d}}{\varepsilon}\int_{0}^{t}A(s)\,ds\right)^{2}}.$
(3.26)
We observe that the bound (3.26) guarantees that
$\sup_{s\in[0,t]}Q(s)\leq\frac{\varepsilon}{e}\quad\text{ holds if
}\quad\sup_{s\in[0,t]}R(s)\leq\frac{\varepsilon}{e}.$
In particular, (3.26) holds if
$\sqrt{|\log(Q(0))|}\geq\frac{C_{d}}{\varepsilon}\int_{0}^{t}A(s)\,ds+\sqrt{\left|\log\left(\frac{\varepsilon}{e}\right)\right|}.$
(3.27)
We now compare the quantity $Q$ to the Wasserstein distance. First of all,
since $Q(t)\leq\frac{\varepsilon}{e}$ then $\varepsilon^{-2}|\log(Q(t))|\geq
1$, therefore
$\displaystyle\frac{1}{2}W_{2}(f_{1}(t),f_{2}(t))^{2}$
$\displaystyle\leq\frac{1}{2}\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left[|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}+|V_{1}(t,x,v)-V_{2}(t,y,w)|^{2}\right]d\pi_{0}(x,v,y,w)$
(3.28) $\displaystyle\leq Q(t).$ (3.29)
On the other hand, since $\varepsilon^{-2}|\log(Q(0))|\geq 1$ and $\pi_{0}$ is
an optimal plan,
$Q(0)\leq\frac{1}{2}\varepsilon^{-2}|\log(Q(0))|\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}\left[|x-y|^{2}+|v-w|^{2}\right]d\pi_{0}(x,v,y,w)=\frac{1}{2}\varepsilon^{-2}|\log(Q(0))|W_{2}(f_{1}(0),f_{2}(0))^{2},$
or equivalently
$\frac{Q(0)}{|\log(Q(0))|}\leq\frac{1}{2}\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}.$
We now observe that, near the origin, the inverse of the function
$s\mapsto\frac{s}{|\log s|}$ behaves like $\tau\mapsto\tau|\log\tau|$. In
particular, there exists a universal small constant $c_{0}>0$ such that
$\frac{s}{|\log s|}\leq\tau\qquad\text{for some }0\leq\tau\leq
c_{0}\qquad\Rightarrow\qquad s\leq 2\tau|\log\tau|.$
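This elementary inversion is easy to verify numerically; a quick sketch of ours:

```python
import math

for tau in (1e-3, 1e-6, 1e-9):
    # Solve s / |log s| = tau, i.e. s = tau * |log s|, by fixed-point iteration.
    s = tau
    for _ in range(100):
        s = tau * abs(math.log(s))
    assert s <= 2 * tau * abs(math.log(tau))
    print(tau, s, 2 * tau * abs(math.log(tau)))
```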
Hence, if $\frac{1}{2}\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\leq c_{0}$,
we deduce that
$Q(0)\leq\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\left|\log\left(\frac{1}{2}\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\right)\right|.$
Combining these bounds with (3.26), and recalling (3.27), this implies
$W_{2}(f_{1}(t),f_{2}(t))^{2}\leq
2e^{-\left(\sqrt{\left|\log\left(\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\left|\log\left(\frac{1}{2}\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\right)\right|\right)\right|}-\frac{C_{d}}{\varepsilon}\int_{0}^{t}A(s)\,ds\right)^{2}}$
provided $\frac{1}{2}\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\leq c_{0}$
and
$\sqrt{\left|\log\left(\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\left|\log\left(\frac{1}{2}\varepsilon^{-2}W_{2}(f_{1}(0),f_{2}(0))^{2}\right)\right|\right)\right|}\geq\frac{C_{d}}{\varepsilon}\int_{0}^{t}A(s)\,ds+\sqrt{\left|\log\left(\frac{\varepsilon}{e}\right)\right|}.$
Finally, to complete the proof, we show the following:
###### Lemma 3.7.
With the notation and assumptions of the theorem, the quantity $Q(t)$ is well
defined and it is locally Lipschitz continuous where $Q(t)>0$. In particular
it is differentiable a.e.
###### Proof.
Set
$D(t):=\frac{1}{2}\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|X_{1}(t,x,v)-X_{2}(t,y,w)|^{2}d\pi_{0}(x,v,y,w),$
$E(t):=\frac{1}{2}\int_{(\mathbb{T}^{d}\times\mathbb{R}^{d})^{2}}|V_{1}(t,x,v)-V_{2}(t,y,w)|^{2}d\pi_{0}(x,v,y,w).$
We can assume that $D(t)$ and $E(t)$ are nonzero, otherwise we are in the
“degenerate” situation where $f_{1}\equiv f_{2}$, in which case $Q(t)$ is
trivially 0. Also, since $D(t)$ and $E(t)$ are written in terms of the
characteristic flow, it is standard to check that they are differentiable
(there is no novelty here, as these are the quantities that appear also in
[49], where Loeper computes their derivatives and shows that they can be
controlled in terms of $D(t)$ and $E(t)$ themselves). In particular, the
quantities $D(t)$ and $E(t)$ are also uniformly Lipschitz.
We note that the quantity $Q(t)$ is implicitly defined via the relation
$Q(t)=\varepsilon^{-2}|\log Q(t)|D(t)+E(t),$ (3.30)
or equivalently, for each fixed time $t$, $Q(t)$ is the solution of the
equation
$F(q,D(t),E(t))=0\qquad\text{with }F(q,r,s):=q+\varepsilon^{-2}\log
q\,r-s\qquad\text{for }q\in(0,1).$
Since the function $q\mapsto q+\varepsilon^{-2}\log q\,D(t)$ is strictly
increasing on $(0,1)$ and its image covers the interval $(0,1)$, we deduce
that the equation above has a unique solution provided $E(t)<1$. Hence, this
proves that $Q(t)\in(0,1)$ is well defined provided $E(t)<1$. In addition,
thanks to the implicit function theorem applied to the function $F\in
C^{1}_{\rm loc}((0,1)\times{\mathbb{R}}\times{\mathbb{R}})$, we deduce the
existence of a $C^{1}_{\rm loc}$ function $G$ such that $Q(t)=G(D(t),E(t))$.
Now, differentiating the relation (3.30) with respect to $t$ we obtain
$Q^{\prime}(t)\biggl{(}1+\varepsilon^{-2}\frac{D(t)}{Q(t)}\biggr{)}=\varepsilon^{-2}|\log
Q(t)|D^{\prime}(t)+E^{\prime}(t).$
Hence, since $D$ and $E$ are uniformly bounded and Lipschitz, for any
$\delta>0$ we deduce that
$|Q^{\prime}(t)|\leq\frac{\varepsilon^{-2}|\log
Q(t)|\,|D^{\prime}(t)|+|E^{\prime}(t)|}{1+\varepsilon^{-2}\frac{D(t)}{Q(t)}}\leq
C_{\delta}\qquad\text{where }Q(t)>\delta.$
This proves that, for any $\delta>0$, the function $t\mapsto Q(t)$ is
uniformly Lipschitz continuous inside the set $\\{Q(t)>\delta\\}$. Hence
$Q(t)$ is locally Lipschitz continuous inside the region $\\{Q(t)>0\\}$.
So, to conclude the proof, we need to ensure that $E(t)<1$. Note that, since
by assumption $E(0)\leq\frac{1}{2}W_{2}(f_{1}(0),f_{2}(0))^{2}\ll 1$, by
continuity we have that $E(t)<1$ for $t>0$ small. So $Q(t)$ is well defined
for $t>0$ small. Also, as long as $Q(t)$ is well defined, we have that
$Q(t)\geq E(t)$. Hence, as long as $Q(t)$ is well defined, we have that
$E(t)\leq Q(t)\leq
e^{-\left(\sqrt{|\log(Q(0))|}-\frac{C_{d}}{\varepsilon}\int_{0}^{t}A(s)\,ds\right)^{2}}.$
Since, by our smallness assumption on $W_{2}(f_{1}(0),f_{2}(0))$, the right-hand
side above remains small on $[0,T]$, the bound above guarantees that
$E(t)\ll 1$ for all $t\in[0,T]$. This proves that $Q(t)$ is well-defined on
$[0,T]$, which concludes the proof. ∎
###### Remark 3.8.
In the previous proof we considered $\lambda(t)=\varepsilon^{-2}|\log(Q(t))|$
and in Lemma 3.7 we proved that $Q(t)$ is well-defined provided it is small
enough. This restriction is due to the fact that the function
${\mathbb{R}}^{+}\ni s\mapsto\varepsilon^{-2}|\log s|$ is decreasing only for
$s\in(0,1).$ An alternative choice could have been to define
$\Phi(s)=\left\\{\begin{array}[]{ll}|\log s|&\mbox{for}\,s\in(0,1/e]\\\
e^{-1}s^{-1}&\mbox{for}\,s>1/e,\end{array}\right.$
and $\lambda(t):=\varepsilon^{-2}\Phi(Q(t)).$ With this choice, since
${\mathbb{R}}^{+}\ni s\mapsto\Phi(s)$ is decreasing and of class $C^{1}$, one
can define $Q(t)$ as the unique solution of
$F(q,D(t),E(t))=0\qquad\text{with
}F(q,r,s):=q+\varepsilon^{-2}\Phi(q)\,r-s\qquad\text{for }q\in(0,\infty).$
With this definition, the proof of Lemma 3.7 shows that $Q(t)$ is always well
defined (without any restriction on the size of $E(t)$), and it is locally
Lipschitz continuous where $Q(t)>0$.
Since in our setting we are interested in the case $Q(t)\ll 1,$ there is no
advantage in using this latter definition of $\lambda.$ However, this
observation could be useful in other situations; see also Section 4 below.
## 4. Summary, generalizations, and perspectives
As we have seen in the last two sections, suitably modifying Wasserstein
distances can be particularly useful in a kinetic setting to take advantage of
the asymmetry between $x$ and $v$. More precisely, let
$\mathcal{X}=\mathbb{T}^{d}$ or $\mathcal{X}=\mathbb{R}^{d}$, and let $\mu$
and $\nu$ be two probability measures on $\mathcal{X}\times\mathbb{R}^{d}$.
Also, let $\Pi(\mu,\nu)$ denote the collection of all measures on
$(\mathcal{X}\times\mathbb{R}^{d})^{2}$ with marginals $\mu$ and $\nu$ on the
first and second factors respectively.
The first natural generalization, given $p\geq 1$ and
$\lambda\in\mathbb{R}^{+}$, is to consider
$W_{\lambda,\,p}(\mu,\nu):=\left(\inf_{\pi\in\Pi(\mu,\nu)}\int_{(\mathcal{X}\times\mathbb{R}^{d})^{2}}\left(\lambda|x-y|^{p}+|v-w|^{p}\right)\mathrm{d}\pi(x,v,y,w)\right)^{1/p},$
as done in [28, 30, 43].
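Numerically, $W_{\lambda,\,p}$ requires no new machinery: only the cost entries of the discrete Kantorovich program change. A minimal sketch of ours, which can be fed to the linear-program solver sketched after Definition 1.1:

```python
import numpy as np

def anisotropic_cost(xv1, xv2, lam, p=2):
    # c_ij = lam * |x_i - y_j|^p + |v_i - w_j|^p, with (x, v) stacked per row.
    d = xv1.shape[1] // 2
    cx = np.linalg.norm(xv1[:, None, :d] - xv2[None, :, :d], axis=2) ** p
    cv = np.linalg.norm(xv1[:, None, d:] - xv2[None, :, d:], axis=2) ** p
    return lam * cx + cv
```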
An alternative way, introduced in [56] for $p=2$, would be to consider three
parameters $a,b,c>0$ such that $\sqrt{ac}>b$ and define
$W_{a,b,c,\,p}(\mu,\nu):=\left(\inf_{\pi\in\Pi(\mu,\nu)}\int_{(\mathcal{X}\times\mathbb{R}^{d})^{2}}\left(a|x-y|^{2}+2b(x-y)\cdot(v-w)+c|v-w|^{2}\right)^{p/2}\mathrm{d}\pi(x,v,y,w)\right)^{1/p}.$
In this paper, we have introduced two different generalizations.
* (i)
First, we considered the nonlinear version of the $W_{\lambda,\,p}(\mu,\nu)$
by choosing $\lambda$ depending on the distance itself. We defined this along
a flow, but that can be also be defined in a general setting as follows:
given $p\geq 1$ and a decreasing function
$\Phi:{\mathbb{R}}^{+}\to{\mathbb{R}}^{+}$, for every $\pi\in\Pi(\mu,\nu)$ and
$\lambda$ we define $D_{p}(\pi,\Phi)$ as the unique number $s$ such that
$s-\Phi(s)\int_{(\mathcal{X}\times\mathbb{R}^{d})^{2}}|x-y|^{p}\mathrm{d}\pi(x,v,y,w)=\int_{(\mathcal{X}\times\mathbb{R}^{d})^{2}}|v-w|^{p}\mathrm{d}\pi(x,v,y,w)$
(arguing as in the proof of Lemma 3.7, it is easy to check that $D_{p}(\pi,\Phi)$
is well defined, see also Remark 3.8). Then, we set
$W_{\Phi,p}(\mu,\nu):=\left(\inf_{\pi\in\Pi(\mu,\nu)}D_{p}(\pi,\Phi)\right)^{1/p}.$
This definition with $\Phi(s)=\varepsilon^{-2}|\log s|$ for $s\in(0,1/e)$ and
$p=2$ essentially corresponds to the quantity used in the proof of Theorem
3.1, although there we considered the quantity $Q(t)$, where we did not take
the infimum over couplings $\pi\in\Pi(\mu,\nu),$ since it was not needed for
our purpose.
* (ii)
In a different direction, we modified the $W_{1}$ distance by introducing a
shift in position. Note that this second quantity cannot be defined as a
“static” distance since the shift $x-tv$ depends on the time $t$. Hence, one
can generalize it only as a time-dependent quantity as follows:
$\widetilde{W}_{t,\,p}(\mu,\nu):=\left(\inf_{\pi\in\Pi(\mu,\nu)}\int_{(\mathcal{X}\times\mathbb{R}^{d})^{2}}\big{(}|(x-tv)-(y-tw)|^{p}+|v-w|^{p}\big{)}\mathrm{d}\pi(x,v,y,w)\right)^{1/p}.$
Of course, these approaches can be further combined by mixing the different
quantities defined above. Note that there is no universal “best” choice, and
each problem requires its own adaptation. Still, we believe, as this paper shows,
that this approach can lead to improvements of several existing results, as
well as to new estimates. In addition, the approach is very general and
can be useful in any situation where there is an asymmetry between the
variables involved.
To mention some concrete applications, our ideas could also be applied in the
setting of quantum systems by suitably modifying the quantum Wasserstein
distances introduced in [21, 24]. Also, our new Loeper-type estimate may be
helpful to obtain stability estimates in $W_{2}$ when the density belongs to
some suitable Orlicz spaces, in analogy to [38] where stability estimates have
been proved for $W_{1}$.
## Acknowledgments
We are grateful to Megan Griffin-Pickering and Evelyne Miot for their valuable
comments on a preliminary version of this paper. We also thank the anonymous
referees for their useful comments and observations.
## References
* [1] L. Ambrosio, N. Gigli, and G. Savaré. Gradient flows in metric spaces and in the space of probability measures. Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel, 2005.
* [2] A. Arsenev. Existence in the large of a weak solution to the Vlasov system of equations. Zh. Vychisl. Mat. i Mat. Fiz., 15:136–147, 1975.
* [3] C. Bardos and P. Degond. Existence globale des solutions des équations de Vlasov-Poisson. In Nonlinear partial differential equations and their applications. Collège de France seminar, Vol. VII (Paris, 1983–1984), volume 122 of Res. Notes in Math., pages 1–3, 35–58. Pitman, Boston, MA, 1985.
* [4] C. Bardos and P. Degond. Global existence for the Vlasov–Poisson equation in 3 space variables with small initial data. Ann. Inst. H. Poincaré Anal. Non Linéaire, 2(2):101–118, 1985.
* [5] C. Bardos, P. Degond, and F. Golse. A priori estimates and existence results for the Vlasov and Boltzmann equations. In Nonlinear systems of partial differential equations in applied mathematics, Part 2 (Santa Fe, N.M., 1984), volume 23 of Lectures in Appl. Math., pages 189–207. Amer. Math. Soc., Providence, RI, 1986.
* [6] J. Batt and G. Rein. Global classical solutions of the periodic Vlasov-Poisson system in three dimensions. C. R. Acad. Sci. Paris Sér. I Math., 313(6):411–416, 1991.
* [7] A. L. Bertozzi, T. Laurent, and J. Rosado. $L^{p}$ theory for the multidimensional aggregation equation. Comm. Pure Appl. Math., 64(1):45–83, 2011.
* [8] Y. Brenier. Une formulation de type Vlasov-Poisson pour les équations d’Euler des fluides parfaits incompressibles. Rapport de recherche, RR-1070, INRIA, 1989.
* [9] Y. Brenier. Convergence of the Vlasov-Poisson system to the incompressible Euler equations. Comm. Partial Differential Equations, 25(3-4):737–754, 2000.
* [10] J. A. Cañizo, J. A. Carrillo, and J. Rosado. A well-posedness theory in measures for some kinetic models of collective motion. Math. Models Methods Appl. Sci., 21(3):515–539, 2011.
* [11] E. Caglioti and F. Rousset. Long time estimates in the mean field limit. Arch. Ration. Mech. Anal., 190(3):517–547, 2008.
* [12] J. A. Carrillo, Y.-P. Choi, and M. Hauray. The derivation of swarming models: mean-field limit and Wasserstein distances. In Collective dynamics from bacteria to crowds, volume 553 of CISM Courses and Lect., pages 1–46. Springer, Vienna, 2014.
* [13] J. A. Carrillo, Y.-P. Choi, and S. Salem. Propagation of chaos for the Vlasov-Poisson-Fokker-Planck equation with a polynomial cut-off. Commun. Contemp. Math., 21(4):1850039, 28, 2019.
* [14] J. A. Carrillo and J. Rosado. Uniqueness of bounded solutions to aggregation equations by optimal transport methods. In European Congress of Mathematics, pages 3–16. Eur. Math. Soc., Zürich, 2010.
* [15] S. De Bièvre, T. Goudon, and A. Vavasseur. Particles interacting with a vibrating medium: existence of solutions and convergence to the Vlasov-Poisson system. SIAM J. Math. Anal., 48(6):3984–4020, 2016.
* [16] R. L. Dobrushin. Vlasov equations. Funktsional. Anal. i Prilozhen., 13:48–58, 1979.
* [17] J. Dolbeault, B. Nazaret, and G. Savaré. A new class of transport distances between measures. Calc. Var. Partial Differential Equations, 34(2):193–231, 2009.
* [18] X. Fernández-Real. The Lagrangian structure of the Vlasov-Poisson system in domains with specular reflection. Comm. Math. Phys., 364(3):1327–1406, 2018.
* [19] A. Figalli and N. Gigli. A new transportation distance between non-negative measures, with applications to gradients flows with Dirichlet boundary conditions. J. Math. Pures Appl. (9), 94(2):107–130, 2010.
* [20] F. Golse. On the dynamics of large particle systems in the mean field limit. In Macroscopic and large scale phenomena: coarse graining, mean field limits and ergodicity, volume 3 of Lect. Notes Appl. Math. Mech., pages 1–144. Springer, [Cham], 2016.
* [21] F. Golse, C. Mouhot, and T. Paul. On the mean field and classical limits of quantum mechanics. Comm. Math. Phys., 343(1):165–205, 2016.
* [22] F. Golse, C. Mouhot, and V. Ricci. Empirical measures and Vlasov hierarchies. Kinet. Relat. Models, 6(4):919–943, 2013.
* [23] F. Golse and T. Paul. The Schrödinger equation in the mean-field and semiclassical regime. Arch. Ration. Mech. Anal., 223(1):57–94, 2017.
* [24] F. Golse and T. Paul. Empirical measures and quantum mechanics: applications to the mean-field limit. Comm. Math. Phys., 369(3):1021–1053, 2019.
* [25] F. Golse and L. Saint-Raymond. The Vlasov-Poisson system with strong magnetic field in quasi-neutral regime. Math. Models Methods Appl. Sci., 13(5):661–714, 2003.
* [26] E. Grenier. Oscillations in quasi-neutral plasmas. Comm. Partial Differential Equations, 21(3-4):363–394, 1996.
* [27] E. Grenier. Limite quasineutre en dimension 1. In Journées “Équations aux Dérivées Partielles” (Saint-Jean-de-Monts, 1999), pages Exp. No. II, 8. Univ. Nantes, Nantes, 1999.
* [28] M. Griffin-Pickering and M. Iacobelli. A mean field approach to the quasi-neutral limit for the Vlasov-Poisson equation. SIAM J. Math. Anal., 50(5):5502–5536, 2018.
* [29] M. Griffin-Pickering and M. Iacobelli. Global well-posedness for the Vlasov-Poisson system with massless electrons in the 3-dimensional torus. Preprint, 2020.
* [30] M. Griffin-Pickering and M. Iacobelli. Singular limits for plasmas with thermalised electrons. Journal de Mathématiques Pures et Appliquées, 135:199 – 255, 2020.
* [31] M. Griffin-Pickering and M. Iacobelli. Recent developments on quasi-neutral limits for Vlasov-type equations. Recent advances in kinetic equations and applications, Springer INdAM Series., 2021. Preprint.
* [32] M. Griffin-Pickering and M. Iacobelli. Recent developments on the well-posedness theory for Vlasov-type equations. Proceedings of the conference Particle Systems and Partial Differential Equations editions VI, VII and VIII., 2021. Preprint.
* [33] D. Han-Kwan and M. Hauray. Stability issues in the quasi-neutral limit of the one-dimensional Vlasov-Poisson equation. Comm. Math. Phys., 334(2):1101–1152, 2015.
* [34] D. Han-Kwan and M. Iacobelli. Quasi-neutral limit for Vlasov-Poisson via Wasserstein stability estimates in higher dimension. J. Differential Equations, 263(1):1–25, 2017.
* [35] D. Han-Kwan and M. Iacobelli. The quasi-neutral limit of the Vlasov-Poisson equation in Wasserstein metric. Commun. Math. Sci., 15(2):481–509, 2017.
* [36] D. Han-Kwan, E. Miot, A. Moussa, and I. Moyano. Uniqueness of the solution to the 2D Vlasov-Navier-Stokes system. Rev. Mat. Iberoam., 36(1):37–60, 2020.
* [37] M. Hauray. Wasserstein distances for vortices approximation of Euler-type equations. Math. Models Methods Appl. Sci., 19(8):1357–1384, 2009.
* [38] T. Holding and E. Miot. Uniqueness and stability for the Vlasov-Poisson system with spatial density in Orlicz spaces. In Mathematical analysis in fluid mechanics—selected recent results, volume 710 of Contemp. Math., pages 145–162. Amer. Math. Soc., Providence, RI, 2018.
* [39] E. Horst and R. Hunze. Weak solutions of the initial value problem for the unmodified non-linear Vlasov equation. Math. Methods Appl. Sci., 6(2):262–279, 1984.
* [40] P.-E. Jabin. A review of the mean field limits for Vlasov equations. Kinet. Relat. Models, 7(4):661–711, 2014.
* [41] J. H. Jeans. On the theory of star-streaming and the structure of the universe. Monthly Notices of the Royal Astronomical Society, 76:70–84, 1915.
* [42] L. Lafleche. Propagation of moments and semiclassical limit from Hartree to Vlasov equation. J. Stat. Phys., 177(1):20–60, 2019.
* [43] D. Lazarovici. The Vlasov-Poisson dynamics as the mean field limit of extended charges. Comm. Math. Phys., 347(1):271–289, 2016.
* [44] M. Lewin and J. Sabin. The Hartree and Vlasov equations at positive density. Comm. Partial Differential Equations, 45(12):1702–1754, 2020.
* [45] M. Liero, A. Mielke, and G. Savaré. Optimal transport in competition with reaction: the Hellinger-Kantorovich distance and geodesic curves. SIAM J. Math. Anal., 48(4):2869–2911, 2016.
* [46] M. Liero, A. Mielke, and G. Savaré. Optimal entropy-transport problems and a new Hellinger-Kantorovich distance between positive measures. Invent. Math., 211(3):969–1117, 2018.
* [47] P. L. Lions and B. Perthame. Propagation of moments and regularity for the 3-dimensional Vlasov-Poisson system. Invent. Math., 105(2):415–430, 1991.
* [48] G. Loeper. A fully nonlinear version of the incompressible Euler equations: the semigeostrophic system. SIAM J. Math. Anal., 38(3):795–823, 2006.
* [49] G. Loeper. Uniqueness of the solution to the Vlasov-Poisson system with bounded density. J. Math. Pures Appl. (9), 86(1):68–79, 2006.
* [50] N. Masmoudi. From Vlasov-Poisson system to the incompressible Euler system. Comm. Partial Differential Equations, 26(9-10):1913–1928, 2001.
* [51] E. Miot. A uniqueness criterion for unbounded solutions to the Vlasov-Poisson system. Comm. Math. Phys., 346(2):469–482, 2016.
* [52] C. Pallard. Moment propagation for weak solutions to the Vlasov-Poisson system. Comm. Partial Differential Equations, 37(7):1273–1285, 2012.
* [53] K. Pfaffelmoser. Global classical solutions of the Vlasov-Poisson system in three dimensions for general initial data. J. Differential Equations, 95(2):281–303, 1992.
* [54] B. Piccoli and F. Rossi. Generalized Wasserstein distance and its application to transport equations with source. Arch. Ration. Mech. Anal., 211(1):335–358, 2014.
* [55] B. Piccoli and F. Rossi. On properties of the generalized Wasserstein distance. Arch. Ration. Mech. Anal., 222(3):1339–1365, 2016.
* [56] S. Salem. An optimal transport approach of hypocoercivity for the 1d kinetic Fokker-Planck equation. 2021. Preprint.
* [57] J. Schaeffer. Global existence of smooth solutions to the Vlasov-Poisson system in three dimensions. Comm. Partial Differential Equations, 16(8-9):1313–1335, 1991.
* [58] S. Serfaty and J. L. Vázquez. A mean field equation as limit of nonlinear diffusions with fractional Laplacian operators. Calc. Var. Partial Differential Equations, 49(3-4):1091–1120, 2014.
* [59] S. Ukai and T. Okabe. On classical solutions in the large in time of two-dimensional Vlasov’s equation. Osaka J. Math., 15(2):245–261, 1978.
* [60] C. Villani. Optimal transport, volume 338 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 2009. Old and new.
* [61] A. Vlasov. Zh. Eksper. Teor. Fiz., 8:291, 1938.
* [62] A. Vlasov. Vlasov equation and plasma dispersion relation. J. Phys.(USSR), 9:25, 1945.
We develop further the theory of monoidal bicategories by introducing and studying bicategorical counterparts of the notions of a linear exponential comonad, as considered in the study of linear logic, and of a codereliction transformation, introduced to study differential linear logic via differential categories. As an application, we extend the differential calculus of
Joyal's analytic functors to analytic functors between presheaf categories, just as ordinary calculus extends from a
single variable to many variables.
Marcelo Fiore
Department of Computer Science and Technology, University of Cambridge
Nicola Gambino
Department of Mathematics, University of Manchester
Martin Hyland
Department of Pure Mathematics and Mathematical Statistics, University of Cambridge
18N10, 18M45, 18F40, 18D60, 18M80.
§ INTRODUCTION
§.§ Context and motivation
The aim of this paper is to develop and connect two apparently distant strands of research: low-dimensional category theory and differential linear logic. Let us begin by providing some context and motivation for our work.
By low-dimensional category theory we mean here the study of two-dimensional and three-dimensional categories [6, 57] and [42, 45], respectively. The subject has grown enormously in the last decades, with motivation coming both from within category theory itself and from other parts of mathematics. Indeed, just as it is useful to study standard set-based mathematical structures (such as groups and vector spaces) by assembling them into categories, it is natural to investigate category-based mathematical structures (such as monoidal categories and Grothendieck toposes) by forming appropriate two-dimensional categories (see [8] for example), and so on. Furthermore, low-dimensional categorical structures find applications in algebra, algebraic topology, and topological quantum field theory, where they are used to obtain more informative invariants for mathematical objects (such as knots), via the research programme known as categorification [5]. One of the key advances
in this area has been the development of the theory of monoidal bicategories, initiated by Kapranov and Voevodsky in [59, 60] and continued by many others, in [10, 12, 18, 23, 28, 44, 41]. This theory promises to play a role as important as that of monoidal categories, notably in the study of topological quantum field theories [58, 73].
The other area involved in this paper is differential linear logic, introduced by Ehrhard and Regnier in [27, 29] as an extension of linear logic, which was introduced by Girard in [39].
The subject arose from the observation that many models of linear logic, such as those based on categories of topological vector spaces [26], possess a well-behaved notion of differentiation. Remarkably, not only is it possible to introduce a syntactic counterpart of this operation, but the idea leads to interesting applications to the $\lambda$-calculus, obtained by introducing a Taylor series expansion for $\lambda$-terms [30].
The study of differential linear logic led also to the introduction of differential categories [3, 4], which provide categorical models of differential linear logic.
Since differential categories axiomatise the essential categorical structure required to have a well-behaved operation of differentiation on the maps of a category, they have a wide range of examples across various areas of mathematics.
Their axioms are formulated on top of those for a categorical model of linear logic, which is given by a symmetric monoidal category $(\catE, \otimes, \unit)$ equipped with a linear exponential comonad $\bang(-) \co \catE \to \catE$, that is, a symmetric monoidal comonad satisfying some additional axioms [64]. In this setting, we think of maps $f \co A \to B$ as being `linear' and of Kleisli maps $f \co \bang A \to B$ as being `non-linear'. Following the idea that the derivative of a function is a function in two arguments, linear in one and non-linear in the other, the derivative of a map $f \co \bang A \to B$ in a differential category has the form $\mathrm{d} f \co A \otimes \bang A \to B$. In the presence of sufficient structure on the ambient category, this operation is equivalent to having a natural transformation, referred to either as a codereliction transformation [3, 27]
or a creation map [34] in the literature, with components of the form $\coder_A \co A \to \bang A$, and subject to a few axioms. Remarkably, these axioms allow us to derive counterparts of all the
basic results on differentiation [3].
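To record the relationship concretely (a standard construction in the differential categories literature, stated in the notation of the table of structural maps recalled below): the derivative of $f \co \bang A \to B$ is recovered from the codereliction by
\[
\mathrm{d}f := f \circ \cocon_A \circ (\coder_A \otimes \id_{\bang A}) \co A \otimes \bang A \to B \mathrlap{,}
\]
so that the linear argument is first converted into a resource by $\coder_A$ and then merged with the non-linear argument by cocontraction.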
Here, we apply the idea of categorification to the study of models of linear logic and differential linear logic and contribute to developing a new theory, unavoidably more subtle and complex than the existing one, based on symmetric monoidal bicategories. This is of interest for logic and theoretical computer science since the additional layer of structure present in bicategories, namely 2-cells (morphisms between morphisms), can be used to model computational rewriting steps between terms [69, 71].
The origin of this line of investigation can be traced back to categorifications of the relational model of linear logic [19]. Recall that the relational model arises by considering the monad on the category of sets and functions whose algebras are commutative monoids and extending it to a monad on the category of sets and relations, written $\Rel$ here. The duality available on $\Rel$ allows us to turn this monad into a comonad $\bang(-) \co \Rel \to \Rel$, which can then be shown to satisfy all the axioms for a linear exponential comonad, so that its Kleisli category is cartesian closed.
\begin{tabular}{ll}
$\Set$ & $\Cat$ \\
$\Rel$ & $\Prof$ \\
Free commutative monoid & Free symmetric strict monoidal category \\
$\wn(-)\co \Set \to \Set$ & $\wn(-) \co \Cat \to \Cat$ \\
Linear exponential comonad & Linear exponential pseudocomonad \\
$\bang(-) \co \Rel \to \Rel$ & $\bang(-) \co \Prof \to \Prof$ \\
Kleisli category $\Kl{\Rel}$ & Kleisli bicategory $\Kl{\Prof}$
\end{tabular}
The relational model and its categorification.
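To make the one-dimensional column of this comparison concrete, here is a minimal executable sketch (our own illustration, not taken from [19]): we represent $\bang A$ by finite multisets, encoded as sorted lists, and relations by Boolean-valued predicates; the names below are ours.
\begin{verbatim}
import Data.List (sort)

-- A relation from a to b, as a Boolean-valued predicate.
type Rel a b = a -> b -> Bool

-- A finite multiset over a, encoded as a sorted list.
type Bang a = [a]

-- Dereliction !A -> A: relates a singleton multiset to its element.
der :: Eq a => Rel (Bang a) a
der alpha a = alpha == [a]

-- Contraction !A -> !A x !A: relates a multiset to its two-block splittings.
con :: Ord a => Rel (Bang a) (Bang a, Bang a)
con alpha (beta, gamma) = sort (beta ++ gamma) == alpha

-- Cocontraction !A x !A -> !A: multiset union, the converse of contraction.
cocon :: Ord a => Rel (Bang a, Bang a) (Bang a)
cocon pair alpha = con alpha pair

-- Codereliction A -> !A: the differential structure on Rel, again singletons.
coder :: Eq a => Rel a (Bang a)
coder a alpha = alpha == [a]
\end{verbatim}
For instance, \texttt{con [1,1,2] ([1],[1,2])} evaluates to \texttt{True}, exhibiting one of the splittings of the multiset $[1,1,2]$.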
These ideas were categorified in [19] by replacing sets with small categories and relations with profunctors, also known as distributors or bimodules [6, 7]. As is typical in the process of categorification, there is now additional freedom, since there are several 2-monads on the 2-category of small categories and functors, written $\Cat$ here, that may be considered to take the place of the monad for commutative monoids.
An example of particular interest, illustrated in <ref>, arises by considering the 2-monad on $\Cat$ whose strict algebras are symmetric strict monoidal categories. This 2-monad extends to a pseudomonad
on $\Prof$ by a form of pseudodistributivity [32],
which can then be turned into a pseudocomonad by duality, as before.
The associated Kleisli bicategory, called the bicategory of categorical symmetric sequences here,
and written $\Sym$, was originally introduced in [31], where it was shown to be cartesian closed. This bicategory was investigated further in connection with the theory of operads in [40], and shown to admit rich additional structure in [36, 38].
The distinction between linear and non-linear maps acquires particular significance in this example: a linear map here is a map $F \co A \to B$ in $\Prof$, that is, a functor
$F \co B^\op \times A \to \Set$. Such a functor $F$ canonically determines a functor $F^\dag\co \pshA \to \pshB$ between presheaf categories, defined by a coend formula which is reminiscent of the expression for the linear map associated to a matrix:
\[
F^\dag(X,b) = \int^{a \in A} F[b, a] \times X(a) \mathrlap{.}
\]
By contrast, a non-linear map is a map $F \co A \to B$ in $\Sym$, that is, a profunctor $F \co \bang A \to B$.
Such a profunctor $F$ induces a functor $F^\ddag \co \pshA \to \pshB$ between presheaf categories,
defined by a formula similar to the one for the
Taylor series expansion of an analytic function:
\[
F^\ddag(X,b) = \int^{\alpha \in \bang A} F[b, \alpha] \times X^{\alpha} \mathrlap{.}
\]
(Here, $X^{\alpha} = X(a_1) \times \ldots \times X(a_n)$, for $X \in \pshA$ and $\alpha = \langle a_1, \ldots, a_n \rangle \in \bang A$.)
We call such functors analytic since they generalise the analytic functors on $\Set$ introduced by Joyal in [52] as part of his approach to enumerative combinatorics [9, 51].
(Joyal's analytic functors arise when $A = B = \mathsf{1}$.)
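To make the Taylor-series analogy tangible at the decategorified level, here is a small executable sketch (our own illustration, ignoring the action of the symmetric groups): we record only the coefficient sequence of a one-variable analytic functor, i.e. its exponential generating function, on which differentiation is the shift of coefficients and the species product is the Cauchy product. All names are ours.
\begin{verbatim}
-- Coefficients c_0, c_1, c_2, ... of an EGF  f(x) = sum_n c_n x^n / n!
type EGF = [Rational]

-- The derivative of a species shifts coefficients: (dF)[n] = F[n+1],
-- an F-structure on n points together with one extra "hole".
deriv :: EGF -> EGF
deriv = drop 1

binom :: Int -> Int -> Rational
binom n k = fromInteger (fact n) / fromInteger (fact k * fact (n - k))
  where fact m = product [1 .. toInteger m]

-- Product of species = Cauchy product of EGFs: an (F.G)-structure
-- splits the carrier into two blocks.
mult :: EGF -> EGF -> EGF
mult f g =
  [ sum [ binom n k * (f !! k) * (g !! (n - k)) | k <- [0 .. n] ]
  | n <- [0 ..] ]

-- The species of sets (EGF e^x) and of linear orders (EGF 1/(1-x)).
sets, orders :: EGF
sets   = repeat 1
orders = [ fromInteger (product [1 .. n]) | n <- [0 ..] ]

-- The Leibniz rule d(F.G) = dF.G + F.dG, checked on a finite prefix.
leibnizOK :: Bool
leibnizOK =
  take 6 (deriv (mult sets orders))
    == take 6 (zipWith (+) (mult (deriv sets) orders)
                           (mult sets (deriv orders)))
\end{verbatim}
Here \texttt{leibnizOK} evaluates to \texttt{True}, a finite-prefix check of the basic identity that the differential calculus of analytic functors categorifies.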
These ideas led to a new line of research, outlined in [48], aimed at extending the theory and examples of categorical models of linear logic to the two-dimensional setting. This provides a so-called `quantitative semantics' for a variety of logical systems which are of interest also in theoretical computer science [35, 33, 50, 67, 68, 76, 77].
The first goal of this paper is to establish the analogy between the relational and profunctor models on a more precise basis, by developing a bicategorical counterpart of the standard theory of models of linear logic, which recovers the results on profunctors as a special case. The motivation for creating such a theory can readily be seen by observing that, while many facts about the relational model follow from the theory of linear exponential comonads, analogous results in the two-dimensional context have been proved on a case-by-case basis. While some first steps towards a bicategorical theory have been taken recently [35, 50, 66, 67], much foundational work remains to be done. Here, we address this issue by considering counterparts of some key notions
and results on models of linear logic, in particular those related to the notion of a linear exponential comonad considered in [47].
The second goal of this work is to provide a bicategorical counterpart of a special class of differential categories [3, 4] and show that the bicategory of profunctors is an example of this new notion.
The motivation for this is twofold. On the one hand, we wish to make precise the analogy with the relational model of differential linear logic; on the other hand, we wish to provide a clean approach to extending the differential calculus for analytic functors on $\Set$ developed in [52] to analytic functors between presheaf categories, analogously to how calculus in a single variable extends to many variables,
or to how one differentiates in the different context of polynomial functors [1].
To the best of our knowledge, this is the first example of a genuinely two-dimensional model of differential linear logic.
§.§ Main contributions
Our first main contribution is the definition and study of the notion of a linear exponential pseudocomonad, which we introduce in <ref> as a bicategorical counterpart of the notion of a linear exponential comonad in [47]. We support this definition by generalising several facts about one-dimensional models of linear logic to the two-dimensional setting. We then prove results (<ref> and <ref>) that help us to construct linear exponential pseudocomonads in many cases of interest, including in our application to the bicategory of profunctors.
Our second main contribution is the exploration of consequences of the assumption of additional structure and properties on the ambient bicategory, offering a modular development of the theory. In particular, under the additional assumption of existence of finite products, we construct the so-called Seely equivalences and show that they provide the structure of a sylleptic strong monoidal 2-functor (<ref>). Because of coherence issues, a direct approach seems daunting. We therefore offer a more conceptual treatment, inspired by a passing remark in [37], which allows us to handle all coherence in an efficient way. Overall, we prove bicategorical counterparts of all the diagrams considered in [27] and [34], starting from our notion of linear exponential pseudocomonad.
The third main contribution of this paper is the definition of a bicategorical counterpart of the codereliction transformation, given in <ref>, which provides one possible way of defining a differentiation operation. The definition of a codereliction transformation considered here is a two-dimensional version of the one considered in [34]. While the notion considered therein (under the name of creation map) is equivalent to the one considered in the context of differential categories in the one-dimensional setting, as shown in [3], it offers some advantages when transported to the two-dimensional setting since, as discussed below, it allows us to deal effectively with coherence issues. In order to do this, we work under additional assumptions, including that the ambient bicategory $\catK$ has biproducts and that the induced convolution structure is a coproduct. These hold in our application.
Finally, we show that the bicategory of categorical symmetric sequences admits a codereliction (<ref>), which allows us to extend the derivative operation to analytic functors between presheaf categories. To achieve this result, we show that the pseudomonad obtained by extending to $\Prof$ the free symmetric strict monoidal category 2-monad on $\Cat$ is a linear exponential pseudocomonad in our sense (<ref>) and that the additional hypotheses underpinning our definition of a codereliction transformation are satisfied (<ref>).
This then determines the desired operation of differentiation for analytic functors between presheaf categories.
§.§ Technical aspects
The development of the paper involved overcoming a number of conceptual and technical challenges. First of all, the bicategorical definitions underpinning the subject are rather complex, as they involve a significant amount of data and a large number of coherence conditions. Because of this, extending results from the one-dimensional to the two-dimensional setting involves a combination of trivial and non-trivial aspects. While it is generally possible to guess what the desired statements are, their proofs usually require long calculations. A good illustration of this point that can be found in the existing literature is the statement that a bicategory with finite products admits a canonical symmetric monoidal structure, with tensor product given by binary products <cit.>. Thankfully, known strictification theorems help us to reduce the complexity of the notions that we need to use, while maintaining sufficient generality to cover the intended examples.
Issues of coherence had to be faced also when introducing our key notions, that of a linear exponential pseudocomonad and that of a codereliction transformation. For the former, we are able to arrive at a definition for which coherence axioms are completely determined by the fundamental notions in the theory of monoidal bicategories by taking the definition in [47] as our starting point.
It has been pleasing to observe how the coherence conditions for these notions are exactly what is required to prove the desired facts, which we hope provides corroboration for the robustness of the coherence conditions in [23]. Given the troubled evolution of coherence in monoidal bicategories (see <cit.> and <cit.> for some details), we believe this experience is helpful for the future development of the theory.
The definition of a codereliction presented additional problems, in that it is not known whether its axioms can be expressed purely in terms of the basic concepts of the theory of monoidal categories. For this reason, if we were to require the presence of invertible 2-cells in the diagrams that are part of the definition of a codereliction transformation in [3], we would then be facing the question of what coherence axioms to impose on them, which appears to be a difficult question. To resolve this problem, we develop a careful analysis by first showing that the diagrams
expressing the axioms for a codereliction transformation in [34] (under the name of a creation map) are filled by canonical 2-cells. The definition of a codereliction transformation can then be simply stated as requiring these 2-cells to be invertible, thus sidestepping any issue of coherence. We hope that our results provide guidance for the formulation of coherence conditions in the future. It should also be pointed out that, since our approach uses non-invertible 2-cells in a crucial way, it is not immediately subsumed by the theory of $(\infty, 1)$-categories, although it may be possible to develop an $(\infty, 2)$-categorical counterpart of it.
§.§.§ Outline of the paper
<ref> reviews the basic definitions and theorems of the theory of monoidal bicategories that will be needed in the paper. In <ref> we study symmetric pseudocomonoids in
a symmetric monoidal bicategory. We introduce and study linear exponential pseudocomonads in <ref> and compare our notion with the one introduced in [50].
We explore our definitions further under the assumption in <ref> that the ambient bicategory has finite products and in <ref> that it has finite biproducts. We introduce our bicategorical notion of codereliction in <ref>. We conclude the paper in <ref> by showing how the bicategory of analytic functors can be equipped with a codereliction operator, thereby modelling differential linear logic.
§.§.§ References and conventions.
In order to keep the paper at a reasonable length and avoid duplication of material that is now standard, we refer to [10, 23, 44] for the coherence conditions of the notions that we use. Many of our proofs will construct the relevant 2-cells and indicate how to prove the required coherence conditions in text. This is similar to how standard diagram-chasing arguments are outlined in one-dimensional category theory. Readers who wish to fill in the details of the proofs are advised to keep the references above at hand. For differential linear logic and differential categories, our main references are [3, 27, 34]. The notation for differential linear logic and differential categories used here follows closely that in [27], as summarised in <ref>.
\begin{align*}
\text{Weakening} & \quad \weak_A \co \bang A \to \unit &
\text{Coweakening} & \quad \coweak_A \co \unit \to \bang A \\
\text{Contraction} & \quad \con_A \co \bang A \to \bang A \otimes \bang A &
\text{Cocontraction} & \quad \cocon_A \co \bang A \otimes \bang A \to \bang A \\
\text{Dereliction} & \quad \der_A \co \bang A \to A &
\text{Codereliction} & \quad \coder_A \co A \to \bang A \\
\text{Promotion} & \quad \dig_A \co \bang A \to \bbang A &
\end{align*}
The structural maps.
§.§.§ Acknowledgements
Nicola Gambino acknowledges that this material is based upon work supported by the US Air Force Office for Scientific Research under award number FA9550-21-1-0007 and by EPSRC via grant EP/V002325/2. Marcelo Fiore acknowledges that this material is based upon work supported by EPSRC via grant EP/V002309/1. Nicola Gambino wishes to thank Zeinab Galal, Adrian Miranda and Federico Olimpieri for
helpful discussions.
§ PRELIMINARIES
We assume that readers are familiar with the key notions of two-dimensional category theory [54, 62]. Bicategories will be denoted with letters $\catK, \catL, \ldots$. When working with a bicategory $\catK$, we use upper-case letters $A, B, C, \ldots$ for objects, lower-case letters $f \co A \to B$, $g \co B \to C, \ldots$ for maps, and lower-case Greek letters $\alpha \co f \Rightarrow f'$ for 2-cells. Composition is written simply by juxtaposition and the identity map on $A \in \catK$ is written $\id_A \co A \to A$. If $f \co A \to B$ is an adjoint equivalence in $\catK$, we write
$f^\bullet \co B \to A$ for its adjoint.
When we say that a bicategory $\catK$ has finite products (or finite coproducts), this is intended in the bicategorical sense [75], even when $\catK$ is a 2-category. For binary products, this means that for every $A, B \in \catK$, we have an object $A \with B \in \catK$ and projections $\pi_1 \co A \with B \to A$ and $\pi_2 \co A \with B \to B$ which are universal, in the sense that for every $X \in \catK$, composition with $\pi_1$ and $\pi_2$ induces an adjoint equivalence
\[
\begin{tikzcd}
\catK[X, A \with B] \ar[r, "( \pi_1(-) {,} \pi_2 (-))"] &[8ex] \catK[X,A] \times \catK[X, B] \mathrlap{.}
\end{tikzcd}
\]
The terminal object is denoted $\top$. The diagonal map of an object $A \in \catK$ is written $\Delta_A \co A \to A \with A$. If $\catK$ has finite coproducts, we write $A + B$ for the coproduct of $A, B \in \catK$, and $\iota_1 \co A \to A + B $ and $\iota_2 \co B \to A + B$ for the coprojections. The initial object is denoted $0$. The codiagonal map of an object $A \in \catK$ is written $\nabla_A \co A + A \to A$.
If $\catK$ and $\catL$ are bicategories with finite products and $F \co \catK \to \catL$ is a pseudofunctor,
we say that $F$ preserves finite products if the canonical maps
$F(A \with B) \to FA \with FB$, for $A, B \in \catK$,
and $F(\term) \to \term$ are equivalences. We adopt this definition also when $\catK$ and $\catL$
are 2-categories and $F$ is a 2-functor. We do not spell out the corresponding definition for coproducts.
§.§ Monoidal bicategories and Gray monoids
Recall from <cit.> the strictification theorem for bicategories, asserting that every bicategory is biequivalent to a 2-category.
This result gives as a scholium the strictification theorem for monoidal categories, asserting that every monoidal category is equivalent to a strict one,
since a monoidal category is a bicategory with a single object and a strict monoidal category is a 2-category with a single object.
A similar pattern arises one dimension up, with tricategories and Gray-categories, defined as in [42, 45], replacing bicategories and 2-categories, respectively.
The strictification theorem for tricategories asserts that every tricategory is triequivalent to a Gray-category, <cit.> and <cit.>.
This gives a strictification result for monoidal bicategories since a monoidal bicategory is a tricategory with a single object and a Gray monoid is a Gray-category with a single
object <cit.>. Below, we recall some of this material.
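For orientation, we recall the delooping correspondence underlying both statements (a standard construction): a monoidal category $(M, \otimes, \unit)$ determines a one-object bicategory $\Sigma M$ with
\[
\mathrm{ob}(\Sigma M) = \{\ast\} \mathrlap{,} \qquad \Sigma M(\ast, \ast) = M \mathrlap{,} \qquad g \circ f = g \otimes f \mathrlap{,} \qquad \id_\ast = \unit \mathrlap{,}
\]
with the associativity and unit constraints of $\Sigma M$ given by those of $M$. Under this correspondence, strictness of the monoidal structure amounts precisely to $\Sigma M$ being a 2-category; one dimension up, a monoidal bicategory deloops to a one-object tricategory in the same way.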
The notion of a Gray monoid is recalled in <ref> using the notion of a cubical pseudofunctor in <cit.> or <cit.>, and then partially unfolded.
A Gray monoid is a 2-category $\catK$ equipped with
* a cubical pseudofunctor $(\arghole) \otimes (=) \co \catK \times \catK \to \catK$, called the tensor product,
* an object $\unit \in \catK$, called the unit,
which satisfy the associativity and unit conditions strictly.
Let $\catK$ be a Gray monoid.[We refer to a Gray monoid by its underlying 2-category if this does not cause confusion.
An analogous convention is adopted for other kinds of structures throughout the paper.]
Recall that part of the pseudofunctoriality for the tensor product
involves a natural isomorphism whose components are invertible 2-cells
\[
\begin{tikzcd}[column sep = large]
A \otimes B \ar[d,"\id_A \otimes g"']\ar[dr,phantom,"\Two\phi_{f,g}"]\ar[r,"f \otimes \id_B"] & A' \otimes B \ar[d,"\id_{A'} \otimes g"] \\
A \otimes B' \ar[r,"f \otimes \id_{B'}"'] & A' \otimes B' \mathrlap{,}
\end{tikzcd}
\]
where $f \co A \to A'$ and $g \co B \to B'$, subject to appropriate compatibility conditions <cit.>. As in [23], we take $f \otimes g \co A \otimes B \to A' \otimes B'$ to be the composite
\[
\begin{tikzcd}
A \otimes B \ar[r, "\id_{A} \otimes g" ] &
A' \otimes B \ar[r, "f \otimes \id_{B'}"] &
A' \otimes B' \mathrlap{.}
\end{tikzcd}
\]
We adopt an analogous convention to define $\alpha \otimes \beta \co f \otimes g \Rightarrow f' \otimes g'$ for $\alpha \co f \Rightarrow f'$ and $\beta \co g \Rightarrow g'$. The other choice is canonically isomorphic and we shall often suppress mention of these isomorphisms for brevity, since they are essentially unique [24, 25].
Note that the associativity and unit axioms hold strictly and so in particular $A \otimes (B \otimes C) = (A \otimes B) \otimes C$ for every $A, B, C \in \catK$, and $A \otimes \unit = A = \unit \otimes A$ for all $A \in \catK$.
In preparation for the material on linear exponential pseudocomonads in <ref>, we recall the notions of a lax monoidal pseudofunctor, monoidal pseudonatural transformation, and monoidal modification. Here, a lax monoidal pseudofunctor is what is called a weak monoidal homomorphism in <cit.>.
Let $\catK$ and $\catL$ be Gray monoids. A lax monoidal pseudofunctor from $\catK$ to $\catL$ is a pseudofunctor $F \co \catK \to \catL$ equipped with
* a pseudonatural transformation with components on objects $\monn_{A, B} \co FA \otimes FB \to F(A \otimes B)$, for $A, B \in \catK$,
* a map $\moni \co \unit \to F( \unit)$,
* an invertible modification with components
\[
\begin{tikzcd}[column sep=huge]
FA \otimes FB \otimes FC \ar[d,"\monn_{A,B} \otimes \id_{FC}"']\ar[dr,phantom,"\Two\omega_{A,B,C}"]\ar[r,"\id_{FA} \otimes \monn_{B,C}"] & FA \otimes F(B \otimes C) \ar[d,"\monn_{A, B \otimes C}"] \\
F(A \otimes B) \otimes FC \ar[r,"\monn_{A \otimes B, C}"'] & F(A \otimes B \otimes C)
\mathrlap{,}
\end{tikzcd}
\]
for $A, B, C \in \catK$, which we call the associativity constraint of $F$,
* two invertible modifications with components
\[
\begin{tikzcd}[column sep = tiny]
F \unit \otimes FA
\ar[d, phantom, description, "\Two \zeta_A"]
\ar[dr, "\monn_{\unit, A}"] &
\\
\unit \otimes FA
\ar[ur, "\moni \otimes \id_{FA}"]
\ar[rr, "\id_{FA}"'] &
\phantom{} &
FA \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}[column sep = tiny]
FA \otimes F \unit
\ar[d, phantom, description, "\Two \kappa_A"]
\ar[dr, "\monn_{A, \unit}"] &
\\
FA \otimes \unit
\ar[ur, "\id_{FA} \otimes \moni"]
\ar[rr, "\id_{FA}"'] &
\phantom{} &
FA \mathrlap{,}
\end{tikzcd}
\]
for $A \in \catK$, which we call the left and right unitality constraints of $F$,
which satisfy the two coherence conditions in <cit.>.
We say that $F$ is strong monoidal if the components of $\monn$ and $\moni$ are equivalences in $\catL$.
Let $F, G \co \catK \to \catL$ be lax monoidal pseudofunctors. A monoidal pseudonatural transformation from $F$ to $G$ is a
pseudonatural transformation $p \co F \Rightarrow G$ equipped with
* an invertible modification with components
\[
\begin{tikzcd}[column sep = large]
FA \otimes FB \ar[d, "p_A \otimes p_B"'] \ar[r, "\monn_{A,B}"] \ar[dr,phantom,"\Two p^2_{A,B}"] & F(A \otimes B) \ar[d, "p_{A\otimes B}"] \\
GA \otimes GB \ar[r, "\monn_{A, B}"'] & G(A \otimes B) \mathrlap{,}
\end{tikzcd}
\]
for $A, B \in \catK$,
* an invertible 2-cell
\[
\begin{tikzcd}
F \unit
\ar[d, phantom, description, "\Two p^0"]
\ar[dr, "p_\unit"] &
\\
\unit
\ar[ur, "\moni"]
\ar[rr, "\moni"'] &
\phantom{} &
G \unit \mathrlap{,}
\end{tikzcd}
\]
which satisfy the three coherence conditions in <cit.>.
Let $p, q \co F \Rightarrow F'$ be monoidal pseudonatural transformations.
A monoidal modification from $p$ to $q$ is a modification $\phi \co p \Rrightarrow q$ which satisfies the two coherence conditions in <cit.>.
Note that the definition of a monoidal modification does not require additional structure on top of that of a modification.
The strictification theorem for monoidal bicategories, stated below for emphasis, follows as a scholium of the strictification theorem for tricategories [42, 45].
Every monoidal bicategory is biequivalent, as a monoidal bicategory, to a Gray monoid.
§.§ Symmetric monoidal bicategories and symmetric Gray monoids
We shall now recall the definition of a symmetric Gray monoid and of the relevant counterparts of lax monoidal pseudofunctors, monoidal transformations and monoidal modifications. For this, it is convenient to
arrive at symmetric Gray monoids by progressively introducing layers of structure and properties,
via the notions of a braided and a sylleptic Gray monoid, as illustrated in <ref>.
\begin{tabular}{llll}
0-cells & 1-cells & 2-cells & 3-cells \\
\hline
Gray monoids & Lax monoidal pseudofunctors & Monoidal transformations & Monoidal modifications \\
Braided Gray monoids & Braided lax monoidal pseudofunctors & Braided monoidal transformations & ” \\
Sylleptic Gray monoids & Sylleptic lax monoidal pseudofunctors & ” & ” \\
Symmetric Gray monoids & ” & ” & ”
\end{tabular}
Overview of Gray monoids.
The definition of a braided Gray monoid in <ref> is equivalent to the one in <cit.>, but we state it in terms of the data considered in [10, 44, 41], <cit.>. This is a direct categorification of the notion of a braided monoidal category <cit.>, in that the two axioms for a braiding in the one-dimensional setting are replaced by two invertible modifications, written $\beta^1$ and $\beta^2$ below.
* A braided Gray monoid is a Gray monoid $\catK$ equipped with a pseudonatural equivalence with components on objects
\[
\bra_{A, B} \co A \otimes B \to B \otimes A \mathrlap{,}
\]
for $A, B \in \catK$, called the braiding, and invertible modifications with components
\begin{gather*}
\begin{tikzcd}[ampersand replacement=\&, column sep = small]
\&
B \otimes A \otimes C
\ar[d, phantom, description, "\Two \beta^1_{A,B,C}"]
\ar[dr, "\id_B \otimes \bra_{A,C}"] \&
\\
A \otimes B \otimes C
\ar[ur, "\bra_{A, B} \otimes \id_C"]
\ar[rr, "\bra_{A, B \otimes C}"'] \&
\phantom{} \&
B \otimes C \otimes A \mathrlap{,}
\end{tikzcd} \\
\begin{tikzcd}[ampersand replacement=\&, column sep = small]
\&
A \otimes C \otimes B
\ar[d, phantom, description, "\Two \beta^2_{A, B, C}"]
\ar[dr, "\bra_{A, C} \otimes \id_B"] \&
\\
A \otimes B \otimes C
\ar[ur, "\id_A \otimes \bra_{B, C}"]
\ar[rr, "\bra_{A \otimes B, C}"'] \&
\phantom{} \&
C \otimes A \otimes B \mathrlap{,}
\end{tikzcd}
\end{gather*}
for $A, B, C \in \catK$, called the braiding constraints, such that
\[
\bra_{A, \unit} = \bra_{\unit, A} = \id_A
\]
for all $A \in \catK$, and which satisfy the coherence conditions in <cit.>, which include the four coherence
conditions in <cit.>.
* A sylleptic Gray monoid is a braided Gray monoid $\catK$ equipped with an invertible modification with components
\[
\begin{tikzcd}
B \otimes A
\ar[d, phantom, description, "\Two \sigma_{A, B}"]
\ar[dr, "\bra_{B, A}"] &
\\
A \otimes B
\ar[rr, "\id_{A \otimes B}"']
\ar[ur, "\bra_{A, B}"] &
\phantom{} &
A \otimes B \mathrlap{,}
\end{tikzcd}
\]
for $A, B \in \catK$, called the syllepsis, which satisfies the two coherence conditions in <cit.>.
* A symmetric Gray monoid is a sylleptic Gray monoid such that the syllepsis $\sigma$ satisfies the additional condition for it to be a symmetry <cit.>.
Next, we consider the counterparts of lax monoidal pseudofunctors for braided and sylleptic Gray monoids.
* Let $\catK$ and $\catL$ be braided Gray monoids. A braided lax monoidal pseudofunctor from $\catK$ to $\catL$ is a lax monoidal pseudofunctor
$F \co \catK \to \catL$ equipped with
an invertible modification with components on objects
\[
\begin{tikzcd}[column sep = large]
FA \otimes FB \ar[r, "\bra_{A,B}"] \ar[d, "\monn_{A, B}"'] \ar[dr,phantom,"\Two"] & FB \otimes FA \ar[d, "\monn_{B,A}"] \\
F(A \otimes B) \ar[r, "F(\bra_{A, B})"'] & F(B \otimes A) \mathrlap{,}
\end{tikzcd}
\]
for $A, B \in \catK$, which satisfies the two axioms in <cit.>.
* Let $\catK$ and $\catL$ be sylleptic Gray monoids. A sylleptic lax monoidal pseudofunctor from $\catK$ to $\catL$ is a braided lax monoidal pseudofunctor
$F \co \catK \to \catL$ which satisfies the additional axiom in <cit.> or <cit.>.
Note that the appropriate morphisms between symmetric Gray monoids are sylleptic lax monoidal pseudofunctors, since a symmetric Gray monoid is merely a sylleptic Gray monoid which satisfies an additional property, <cit.>.
Let $F, G \co \catK \to \catL$ be braided lax monoidal pseudofunctors. A braided monoidal pseudonatural transformation from $F$ to $G$
is a monoidal pseudonatural transformation $p \co F \Rightarrow G$ which satisfies the additional coherence condition in <cit.>.
Analogously to what we said above for pseudofunctors, the appropriate morphisms between
sylleptic lax monoidal pseudofunctors are braided monoidal pseudonatural transformations
since a sylleptic lax monoidal pseudofunctor is merely a braided lax monoidal pseudofunctor satisfying an additional property. And since these are just monoidal transformations which satisfy an additional
property, the appropriate morphisms between them are the monoidal modifications of <ref>.
The next strictification result is recalled from <cit.>.
Every symmetric monoidal bicategory is equivalent, as a symmetric monoidal bicategory, to a symmetric Gray monoid.
Without loss of generality it is possible to assume that the additional equations for a strict symmetric monoidal 2-category, in the sense of <cit.>, hold as well.
These include those for a braided monoidal 2-category in the sense of <cit.>. In the following, we will use <ref> and <ref> and the associated principles of transport of structure to limit ourselves to considering (symmetric) Gray monoids. Provided
that the structure and properties under consideration are invariant under (symmetric) monoidal biequivalence, one obtains results on (symmetric) monoidal bicategories (see the remarks at the
start of <ref>).
In the following, we shall frequently refer to coherence conditions. We will do so using
the order in which they appear in the reference provided in the definition given here. For example,
the first coherence axiom for a lax monoidal pseudofunctor (<ref>) means the first
coherence axiom in <cit.>, which involves an
equality between a pasting diagram
involving $\omega$ and $\zeta$ and a pasting diagram involving $\kappa$.
§ PSEUDOCOMONOIDS AND PSEUDOBIALGEBRAS
§.§ Pseudocomonoids
In preparation for the material on linear exponential pseudocomonads in <ref>, we recall some basic material on pseudocomonoids from [23] and prove some auxiliary results about them, which we could not find in the existing literature and which may be of independent interest.
The definitions of a pseudocomonoid, braided pseudocomonoid and symmetric pseudocomonoid are analogous to those of monoidal category, braided monoidal category and symmetric monoidal category, respectively.
Accordingly, the definitions of a pseudocomonoid morphism and of a braided pseudocomonoid morphism are analogous to those of strong monoidal functor and braided monoidal functor. Finally, pseudocomonoid 2-cells are
counterparts of monoidal transformations.
In particular, the same pattern of distinction between structure and property arises; see <ref>.
Let $\catK$ be a Gray monoid.
A pseudocomonoid in $\catK$ is an object $A \in \catK$ equipped with
* a map $n \co A \to A \otimes A$, called the comultiplication,
* a map $e \co A \to \unit$, called the counit,
* invertible 2-cells
\begin{equation*}
\begin{tikzcd}[column sep = large]
A \ar[r, "n"] \ar[d, "n"'] \ar[dr,phantom,"\Two \alpha"] & A \otimes A \ar[d, "n \otimes \id_A "] \\
A \otimes A \ar[r, "\id_A \otimes n"'] & A \otimes A \otimes A \mathrlap{,}
\end{tikzcd}
\end{equation*}
called the associativity constraint, and
\begin{equation*}
\begin{tikzcd}[column sep = {1.5cm,between origins}]
A \otimes A
\ar[d, phantom, description, "\Two \lambda"]
\ar[dr, "e \otimes \id_A"] &
\\
\ar[ur, "n"]
\ar[rr, "\id_A"'] &
\phantom{} &
A \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}[column sep = {1.5cm,between origins}]
A \otimes A
\ar[d, phantom, description, "\Two \rho"]
\ar[dr, "\id_A \otimes e"] &
\\
\ar[ur, "n"]
\ar[rr, "\id_A"'] &
\phantom{} &
A \mathrlap{,}
\end{tikzcd}
\end{equation*}
called the left and right unitality constraints, respectively,
which satisfy the duals of the two coherence conditions in <cit.>. (These are analogous to those for a monoidal category.)
In the following, we use the letters $n$ and $e$ to denote the comultiplication and counit of different pseudocomonoids when this does not cause confusion.
Let $A$ and $B$ be pseudocomonoids in $\catK$. A pseudocomonoid morphism from $A$ to $B$ is a
map $f \co A \to B$ equipped with
* an invertible 2-cell
\[
\begin{tikzcd}
A \ar[r, "f"] \ar[d, "\com"'] \ar[dr,phantom,"\Two \bar{f}"] & B \ar[d, "\com"] \\
A \otimes A \ar[r, "f \otimes f"'] & B \otimes B \mathrlap{,}
\end{tikzcd}
\]
* an invertible 2-cell
\[
\begin{tikzcd}
A \ar[d, "\cou"'] \ar[r, "f"] \ar[dr,phantom,"\Two \tilde{f}"] & B \ar[d, "\cou"] \\
\unit \ar[r, "\id_\unit"'] & \unit \mathrlap{,}
\end{tikzcd}
\]
which satisfy three coherence conditions. (These are analogous to those for a lax monoidal functor <cit.>.)
Let $f, g \co A \to B$ be pseudocomonoid morphisms in $\catK$. A pseudocomonoid $2$-cell from $f$ to $g$
is a 2-cell $\alpha \co f \Rightarrow g$ which satisfies two coherence conditions. (These are analogous to those for a monoidal natural
transformation <cit.>.)
We write $\CoMon{\catK}$ for the 2-category of pseudocomonoids, pseudocomonoid morphisms, and pseudocomonoid 2-cells in $\catK$.
Let us now assume that $\catK$ is braided.
A braided pseudocomonoid is a pseudocomonoid $A$ equipped with an invertible 2-cell
\begin{equation}
\label{equ:braiding-comonoid}
\begin{tikzcd}[column sep = {1.5cm,between origins}]
A \otimes A
\ar[d, phantom, description, "\Two \gamma"]
\ar[dr, "\bra_{A,A}"] &
\\
\ar[ur, "\com"]
\ar[rr, "\com"'] &
\phantom{} &
A \otimes A
\end{tikzcd}
\end{equation}
which satisfies the duals of the two coherence conditions in <cit.>. (These are analogous to those for a braided monoidal category <cit.>.)
Let $A$ and $B$ be braided pseudocomonoids in $\catK$. A braided pseudocomonoid morphism from $A$ to $B$ is a
pseudocomonoid morphism $f \co A \to B$ which satisfies one additional coherence condition. (This is analogous to that for a lax monoidal functor to be braided <cit.>.)
We write $\BraCoMon{\catK}$ for the 2-category of braided pseudocomonoids, braided pseudocomonoid morphisms, and pseudocomonoid 2-cells in $\catK$.
Let us now assume that $\catK$ is sylleptic.
A symmetric pseudocomonoid in $\catK$ is a braided pseudocomonoid in $\catK$ whose braiding satisfies
the additional coherence condition for a symmetry <cit.>. (This is analogous to the axiom for a braided monoidal category to be symmetric.)
We write $\SymCoMon{\catK}$ for the 2-category of symmetric pseudocomonoids, braided pseudocomonoid morphisms, and pseudocomonoid 2-cells in $\catK$.
The definitions of the 2-categories $\CoMon{\catK}$, $\BraCoMon{\catK}$, and $\SymCoMon{\catK}$ are summarised in <ref>.
There are evident duals of these notions, giving rise to the 2-categories $\Mon{\catK}$, $\BraMon{\catK}$, and $\SymMon{\catK}$ of
pseudomonoids, braided pseudomonoids and symmetric pseudomonoids, respectively, in $\catK$.
\begin{tabular}{llll}
 & 0-cells & 1-cells & 2-cells \\
\hline
$\CoMon{\catK}$ & Pseudocomonoids & Pseudocomonoid morphisms & Pseudocomonoid 2-cells \\
$\BraCoMon{\catK}$ & Braided pseudocomonoids & Braided pseudocomonoid morphisms & ” \\
$\SymCoMon{\catK}$ & Symmetric pseudocomonoids & ” & ”
\end{tabular}
Overview of pseudocomonoids.
<ref> below recalls from <cit.> the two-dimensional counterpart of the well-known one-dimensional result asserting that the category of commutative comonoids in a symmetric monoidal category has finite products. Recall from <ref> that we mean finite products in a bicategorical sense.
Let $\catK$ be a symmetric Gray monoid. The 2-category $\SymCoMon{\catK}$ of symmetric pseudocomonoids in $\catK$ has finite products.
We outline the definition of the binary products and of the terminal object of $\SymCoMon{\catK}$. We begin by recalling that if $A$ and $B$ are symmetric pseudocomonoids in $\catK$, the tensor product of their
underlying objects $A \otimes B$ in $\catK$ admits the structure of a symmetric pseudocomonoid. Indeed, the tensor product in a symmetric Gray monoid $\catK$ is a sylleptic strong monoidal pseudofunctor <cit.>,
and therefore it preserves symmetric pseudocomonoids <cit.>. Thus, it lifts as follows:
\begin{equation*}
\begin{tikzcd}[column sep = large]
\SymCoMon{\catK} \times \SymCoMon{\catK} \ar[r, "(-) \otimes (=)"] \ar[d, "U \times U"'] &
\SymCoMon{\catK} \ar[d, "U"] \\
\catK \times \catK \ar[r, "(-) \otimes (=)"'] &
\catK \mathrlap{.}
\end{tikzcd}
\end{equation*}
Explicitly, the comultiplication and counit are the following composites:
\begin{gather}
\label{equ:comultiplication-for-tensor}
\begin{tikzcd}[ampersand replacement=\&]
A \otimes B
\ar[r, "\com \otimes \com"] \&
A \otimes A \otimes B \otimes B
\ar[r, "\id_A \otimes \bra_{A,B} \otimes \id_A"] \&[4em]
A \otimes B \otimes A \otimes B \mathrlap{,}
\end{tikzcd} \\
\label{equ:counit-for-tensor}
\begin{tikzcd}[ampersand replacement=\&]
A \otimes B
\ar[r, "e \otimes e"] \&
\unit \otimes \unit
\ar[r, "\id"] \&
\unit \mathrlap{.}
\end{tikzcd}
\end{gather}
The associativity constraint is
\[
\begin{tikzcd}[scale=0.75]
A \otimes B \ar[r, "\com \otimes \com"] \ar[d, "\com \otimes \com"'] \ar[dr,phantom,"\Two \alpha \otimes \alpha"] &[3em]
A \otimes A \otimes B \otimes B \ar[r, "\id_A \otimes \bra_{A,B} \otimes \id_B"] \ar[d, "\com \otimes \id_A \otimes \com \otimes \id_B"] \ar[dr,phantom,"\cong"] &[6em]
A \otimes B \otimes A \otimes B \ar[d, "\com \otimes \com \otimes \id_A \otimes \id_B"] \\
A \otimes A \otimes B \otimes B \ar[r, "\id_A \otimes \com \otimes \id_B \otimes \com"'] \ar[d, "\id_A \otimes \bra_{A,B} \otimes \id_B"'] \ar[dr,phantom,"\cong"] &
A \otimes A \otimes A \otimes B \otimes B \otimes B \ar[r, "\id_A \otimes \id_A \otimes \bra_{A,B \otimes B} \otimes \id_B"] \ar[d, "\id \otimes \bra_{A \otimes A,B} \otimes \id_B \otimes \id_B"] \ar[dr,phantom,"\Two \id_A \otimes \xi \otimes \id_B"] &
A \otimes A \otimes B \otimes B \otimes A \otimes B \ar[d, "\id_A \otimes \bra_{A,B} \otimes \id_B \otimes \id_A \otimes \id_B"] \\
A \otimes B \otimes A \otimes B \ar[r, "\id_A \otimes \id_B \otimes \com \otimes \com"'] &
A \otimes B \otimes A \otimes A \otimes B \otimes B \ar[r, "\id_A \otimes \id_B \otimes \id_A \otimes \bra_{A,B} \otimes \id_B"'] &
A \otimes B \otimes A \otimes B \otimes A \otimes B \mathrlap{,}
\end{tikzcd}
\]
where $\xi$ is an invertible 2-cell which can easily be constructed using the two braiding constraints $\beta^1$ and $\beta^2$ of <ref>.
The left unitality constraint is
\[
\begin{tikzcd}
A \otimes B
\ar[r, "\com \otimes \com"]
\ar[dr, bend right = 20, "\id"'] &
A \otimes A \otimes B \otimes B
\ar[r, "\id \otimes \bra_{A, B} \otimes \id"]
\ar[d, "e \otimes \id \otimes e \otimes \id"] \ar[dr,phantom,"\cong"] &[5em]
A \otimes B \otimes A \otimes B
\ar[d, "e \otimes e \otimes \id \otimes \id"] \\
\phantom{} \ar[ur, phantom, description, pos=(.6), "\Two \lambda \otimes \lambda"]
& \unit \otimes A \otimes \unit \otimes B \ar[r, "\id \otimes \bra_{A, \unit} \otimes \id"'] & \unit \otimes \unit \otimes A \otimes B \mathrlap{.}
\end{tikzcd}
\]
The right unitality constraint is
\[
\begin{tikzcd}
A \otimes B \ar[r, "\com \otimes \com"] \ar[dr, bend right = 20, "\id"'] & A \otimes A \otimes B \otimes B \ar[r, "\id \otimes \bra_{A,B} \otimes \id"] \ar[d, "\id \otimes e \otimes \id \otimes e"] \ar[dr,phantom,"\cong"] &[5em] A \otimes B \otimes A \otimes B \ar[d, "\id \otimes \id \otimes e \otimes e"] \\
\phantom{} \ar[ur, phantom, description, pos=(.6), "\Two \rho \otimes \rho"]
& A \otimes \unit \otimes B \otimes \unit \ar[r, "\id \otimes \bra_{\unit, B} \otimes \id"'] & A \otimes B \otimes \unit \otimes \unit \mathrlap{.}
\end{tikzcd}
\]
The symmetry is a 2-cell of the form
\[
\begin{tikzcd}
A \otimes B
\ar[r, "\com \otimes \com"]
\ar[dr, bend right = 20, "\com \otimes \com"'] &[3em]
A \otimes A \otimes B \otimes B
\ar[r, "\id_A \otimes \bra_{A,B} \otimes \id_B"]
\ar[d, "\bra_{A,A} \otimes \bra_{B,B}"] \ar[dr,phantom,"\Two \sigma'"] &[7em]
A \otimes B \otimes A \otimes B
\ar[d, "\bra_{A \otimes B, A\otimes B}"] \\
\phantom{} \ar[ur, phantom, description, pos=(.6), "\Two \gamma \otimes \gamma"] &
A \otimes A \otimes B \otimes B
\ar[r, "\id_A \otimes \bra_{A,B} \otimes \id_B"'] &
A \otimes B \otimes A \otimes B
\end{tikzcd}
\]
where $\sigma'$ is an invertible 2-cell constructed using the braiding constraints $\beta^1$, $\beta^2$ and the symmetry $\sigma$ of $\catK$.
It is then possible to show that $A \otimes B$ is the product of $A$ and $B$ in $\SymCoMon{\catK}$, when considered equipped with the projections $\pi_1 \co A \otimes B \to A$ and $\pi_2 \co A \otimes B \to B$ given by
the following composites:
\[
\begin{tikzcd}
A \otimes B \ar[r, "\id_A \otimes e"] &
A \otimes \unit \ar[r, "\id"] &
A \mathrlap{,}
\end{tikzcd} \quad
\begin{tikzcd}
A \otimes B \ar[r, "e \otimes \id_B"] &
\unit \otimes B \ar[r, "\id"] &
B \mathrlap{,}
\end{tikzcd}
\]
respectively, both of which can be shown to be braided pseudocomonoid morphisms.
The terminal object of $\SymCoMon{\catK}$ is the unit $\unit$ of $\catK$, viewed as a symmetric pseudocomonoid in the evident way. For a symmetric pseudocomonoid $A$, the required essentially unique braided pseudocomonoid morphism to $\unit$ is the counit of $A$.
Let $\catK$ be a symmetric Gray monoid. The 2-category $\SymCoMon{\catK}$ of symmetric pseudocomonoids in $\catK$ admits a symmetric monoidal structure such that the forgetful 2-functor
\[
U \co \SymCoMon{\catK} \to \catK
\]
is strict symmetric monoidal.
Combine <ref> with <cit.>.
Of course, <ref> could also be established directly (without appealing to <ref>), but we prefer not to do so for brevity. One of the reasons for our interest in <ref> is that it
allows us to introduce the notion of a symmetric pseudobialgebra in a concise way,
as done in <ref>, just as it can be done for commutative bialgebras in the one-dimensional setting [2].
The definition is restated in an equivalent way and unfolded explicitly in <ref>.
A symmetric pseudobialgebra in $\catK$ is a symmetric pseudomonoid in $\SymCoMon{\catK}$.
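A guiding decategorified example (standard, recorded here for orientation): in $\Rel$, the exponential $\bang A$ of finite multisets is a commutative bialgebra, with comonoid structure given by the contraction
\[
\con_A = \big\{ \big(\alpha, (\beta, \gamma)\big) \;\big|\; \alpha = \beta + \gamma \big\} \co \bang A \to \bang A \otimes \bang A \mathrlap{,}
\]
where $+$ denotes multiset sum, and with monoid structure given by its converse, the cocontraction (multiset union).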
In analogy with the one-dimensional situation, a symmetric pseudobialgebra can be defined equivalently as a symmetric pseudocomonoid in the 2-category $\SymMon{\catK}$ of symmetric pseudomonoids, braided pseudomonoid morphisms and pseudomonoid 2-cells. Indeed, assume that we have
* an object $A \in \catK$,
* maps $\com \co A \to A \otimes A$, $e \co A \to I$ in $\catK$, and invertible 2-cells $\alpha$, $\lambda$, $\rho$ and $\gamma$ equipping $A$ with the structure of a braided pseudocomonoid in $\catK$,
as in <ref> and <ref>,
* maps $m \co A \otimes A \to A$ and $u \co I \to A$ in $\catK$ and invertible 2-cells
\[
\begin{tikzcd}[column sep = large]
A \otimes A \otimes A \ar[r, "\id_A \otimes m "] \ar[d, " m \otimes \id_A"'] \ar[dr,phantom,"\Two \beta"] & A \otimes A \ar[d, "m "] \\
A \otimes A \ar[r, " m"'] & A \mathrlap{,}
\end{tikzcd}
\]
\[
\begin{tikzcd}[column sep = {1.5cm,between origins}]
A \otimes A
\ar[d, phantom, description, "\Two \sigma"]
\ar[dr, "m"] &
\\
A
\ar[ur, "u \otimes \id_A"]
\ar[rr, "\id_A"'] &
\phantom{} &
A \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}[column sep = {1.5cm,between origins}]
A \otimes A
\ar[d, phantom, description, "\Two \tau"]
\ar[dr, "m"] &
\\
A
\ar[ur, "\id_A \otimes u"]
\ar[rr, "\id_A"'] &
\phantom{} &
A \mathrlap{,}
\end{tikzcd}
\]
\[
\begin{tikzcd}[column sep = {1.5cm,between origins}]
A \otimes A
\ar[d, phantom, description, "\Two \delta"]
\ar[dr, "m"] &
\\
\ar[ur, "\bra_{A,A}"]
\ar[rr, "m"'] &
\phantom{} &
A
\end{tikzcd}
\]
equipping $A$ with the structure of a braided pseudomonoid in $\catK$.
Then, there is a bijection between:
* the set of 4-tuples of invertible 2-cells making $m \co A \otimes A \to A$ and $u \co I \to A$ into braided pseudocomonoid morphisms such that $\beta$, $\sigma$, $\tau$ and $\delta$ are pseudocomonoid 2-cells,
which therefore determine a symmetric pseudomonoid in $\SymCoMon{\catK}$;
* the set of 4-tuples of invertible 2-cells making $\com \co A \to A \otimes A$ and $e \co A \to I$ into braided pseudomonoid morphisms such that $\alpha$, $\lambda$, $\rho$ and $\gamma$ are pseudomonoid 2-cells,
which therefore determine a symmetric pseudocomonoid in $\SymMon{\catK}$.
In this correspondence, the coherence conditions for the braided pseudomonoid morphisms and for pseudocomonoid 2-cells in (i) imply the coherence conditions for the pseudocomonoid 2-cells and
for the braided pseudomonoid morphisms in (ii), respectively.
Since <ref> is stated in terms of the data in (i), let us record for reference that the 2-cells therein have the form
\[
\begin{tikzcd}[column sep = large]
A \otimes A
\ar[r, "m"]
\ar[d, "\com \otimes \com"']
\ar[ddr, phantom, description, "\Two \bar{m}"] &
A \ar[dd, "\com"] \\
A \otimes A \otimes A \otimes A
\ar[d, "\id_A \otimes \bra_{A,A} \otimes \id_A"']
& \\
A \otimes A \otimes A \otimes A \ar[r, "m \otimes m"'] &
A \otimes A \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}
A \otimes A
\ar[r, "m"]
\ar[dd, "e \otimes e"']
\ar[ddr, phantom, description, "\Two \tilde{m}"] &
A \ar[dd, "\cou"] \\
& \\
\unit \otimes \unit \ar[r, "\id"'] &
\unit \mathrlap{,}
\end{tikzcd}
\]
\[
\qquad
\begin{tikzcd}
\unit
\ar[r, "u"]
\ar[d, "\id"']
\ar[dr, phantom, description, "\Two \tilde{u}"] &[4em]
\ar[d, "\com"] \\
\unit
\ar[r, "u \otimes u"'] &
A \otimes A \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}
\unit \otimes \unit
\ar[d, "\id"']
\ar[r, "u"]
\ar[dr, phantom, description, "\Two \bar{u}"] &
A \ar[d, "\cou"] \\
\unit
\ar[r, "\id"'] &
\unit
\mathrlap{.}
\end{tikzcd} \qquad
\]
\subsection*{Products, coproducts and biproducts}
We recall and establish some facts regarding cartesian and cocartesian monoidal structures. Recall from <ref> our conventions and notation regarding products and coproducts in a 2-category.
Let $\catK$ be a symmetric Gray monoid.
The monoidal structure of $\catK$ is cartesian if and only if every $A \in \catK$ admits a symmetric pseudocomonoid structure in $\catK$, pseudonaturally and monoidally in $A$.
For one implication, assume that $\catK$ is cartesian. It is immediate to check that $A \in \catK$ admits a symmetric pseudocomonoid structure with comultiplication the
diagonal $\Delta_A \co A \to A \with A$ and counit the essentially unique map $A \to \term$. For the converse implication, the hypotheses amount to saying that we have not only a biequivalence
\[
\begin{tikzcd}[column sep = large]
\catK
\ar[r, shift left =2, "F"]
\ar[r, description, phantom, "\scriptstyle \simeq"] &
\SymCoMon{\catK} \mathrlap{,}
\ar[l, shift left =2, "U"]
\end{tikzcd}
\]
where $F$ assigns to $A$ the canonical symmetric pseudocomonoid on it and $U$ is the forgetful functor, but that this is a symmetric monoidal biequivalence, where $\catK$ is considered
with its symmetric Gray monoid structure and $\SymCoMon{\catK}$ is considered with the symmetric monoidal structure of <ref>.
The claim now follows from <ref>.
For the next lemma, recall what we mean by preservation of finite products from the start of <ref>.
Let $\catK$ and $\catL$ be 2-categories with finite products and $F \co \catK \to \catL$ be a 2-functor that preserves finite products.
Then for every $A \in \catK$, the following
symmetric pseudocomonoids in $\catL$ are equivalent as symmetric pseudocomonoids:
\begin{enumerate}[(i)]
\item the symmetric pseudocomonoid on $FA$ obtained by applying $F$ to the canonical pseudocomonoid on $A$ determined by finite products in $\catK$, with comultiplication and counit
\[
\begin{tikzcd}
FA \ar[r, "F(\Delta_A)"] &[2em] F(A \with A) \ar[r, "\simeq"] & FA \with FA \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}
FA \ar[r, "F(\textup{can})"] &[2em] F(\term) \ar[r, "\sim"] & \term \mathrlap{,}
\end{tikzcd}
\]
\item the symmetric pseudocomonoid on $FA$ determined by finite products in $\catL$, with comultiplication and counit
\[
\begin{tikzcd}
FA \ar[r, "\Delta_{FA}"] &[1em] FA \with FA \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}
FA \ar[r, "\textup{can}"] &[1em] \term \mathrlap{.}
\end{tikzcd}
\]
\end{enumerate}
Direct calculation.
We conclude this section with our definition of a biproduct and a basic fact about it, which will be useful for <ref>.
Let $\catK$ be a 2-category with finite products and finite coproducts. We say that $\catK$ has biproducts if there are equivalences
$A + B \to A \with B$, for all $A, B \in \catK$, and $0 \to \term$.
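For example, the bicategory of profunctors has biproducts in this sense: the disjoint union of two categories is both their product and their coproduct, and the empty category is both initial and terminal.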
If a 2-category $\catK$ has biproducts, we identify products and coproducts, call them biproducts, and write $A \oplus B$ for the biproduct of $A$ and $B$. We also identify the initial and terminal object, write $0$ for it and call it the zero object of $\catK$. The next result builds on <ref>.
Let $\catK$ be a 2-category with biproducts. For every object $A \in \catK$, the symmetric pseudocomonoid structure on $A$ determined by products,
\[
\Delta_A \co A \to A \oplus A \mathrlap{,} \quad A \to 0 \mathrlap{,}
\]
and the symmetric pseudomonoid structure on $A$, determined by coproducts,
\[
\nabla_A \co A \oplus A \to A \mathrlap{,} \quad 0 \to A \mathrlap{,}
\]
determine a symmetric pseudobialgebra structure on $A$.
The symmetric monoidal structure determined by biproducts is both cartesian and cocartesian and therefore every object $A \in \catK$ has both the structure of a symmetric pseudocomonoid
and of a symmetric pseudomonoid by <ref> and its dual. It remains to check that the multiplication (the codiagonal $\nabla_A \co A \oplus A \to A$) and the unit (the essentially unique map $0 \to A$) are braided pseudocomonoid morphisms and that the associativity and unitality constraints are pseudocomonoid 2-cells, but this is true for any map and any 2-cell in a cartesian monoidal structure.
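To illustrate, instantiating the 2-cell $\bar{m}$ recorded above at this symmetric pseudobialgebra yields an invertible 2-cell
\[
\Delta_A \circ \nabla_A \cong (\nabla_A \oplus \nabla_A) \circ (\id_A \oplus \bra_{A,A} \oplus \id_A) \circ (\Delta_A \oplus \Delta_A) \mathrlap{,}
\]
the familiar one-dimensional bialgebra law, which now holds up to coherent isomorphism rather than on the nose.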
\section{Linear exponential pseudocomonads}
The goal of this section is to introduce our bicategorical counterpart of the notion of a linear exponential comonad and prove some basic facts about it. Our formulation is based on the
definition of linear exponential comonad in [47]. We compare our notion with the one in [50] in <ref> below.
\subsection*{Pseudocomonads}
As a first step, we recall the definition of a pseudocomonad, which is dual to that of a pseudomonad [61].
We consider pseudocomonads whose underlying pseudofunctor is a 2-functor, as justified by the
strictification theorems in [13, 61], <cit.>.
\begin{definition} \label{def:psd-comonad}
Let $\catK$ be a 2-category. A \myemph{pseudocomonad} on $\catK$ is a 2-functor $\bang(-) \co \catK \to \catK$ equipped with:
\begin{itemize}
\item a pseudonatural transformation with components on objects $\dig_A \co \bang A \to \bbang A$, for $A \in \catK$, called the \emph{comultiplication} of the pseudocomonad,
\item a pseudonatural transformation with components on objects $\der_A \co \bang A \to A$, for $A \in \catK$, called the \emph{counit} of the pseudocomonad,
\item an invertible modification with components
\[
\begin{tikzcd}[column sep = large]
\bang A \ar[r, "\dig_A"] \ar[d, "\dig_A"'] \ar[dr,phantom,"\Two \pi_A"] & \bbang A \ar[d, "\bang \dig_A"] \\
\bbang A \ar[r, "\dig_{\bang A} "'] & \bbbang A \mathrlap{,}
\end{tikzcd}
\]
for $A \in \catK$, called the \emph{associativity constraint} of the pseudocomonad,
\item two invertible modifications
\[
\begin{tikzcd}[column sep = {1.5cm,between origins}]
&
\bbang A
\ar[d, phantom, description, "\Two \mu_A"]
\ar[dr, "\der_{\bang A}"] &
\\
\bang A
\ar[ur, "\dig_A"]
\ar[rr, "\id_{\bang A}"'] &
\phantom{} &
\bang A \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}[column sep = {1.5cm,between origins}]
&
\bbang A
\ar[d, phantom, description, "\Two \nu_A"]
\ar[dr, "\bang \der_A"] &
\\
\bang A
\ar[ur, "\dig_A"]
\ar[rr, "\id_{\bang A}"'] &
\phantom{} &
\bang A \mathrlap{,}
\end{tikzcd}
\]
for $A \in \catK$, called the (left and right) \emph{unitality constraints} of the pseudocomonad,
\end{itemize}
which satisfy two coherence conditions. (These are dual to those for a pseudomonad~\cite[Section~1]{LackS:cohap}.)
\end{definition}
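For example, every strict 2-comonad, and in particular every comonad on an ordinary category regarded as a locally discrete 2-category, is a pseudocomonad in which the constraints $\pi$, $\mu$ and $\nu$ are identity modifications.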
The notation for a pseudocomonad adopted here\footnote{The slightly unorthodox choice of the Greek letters $\pi$, $\mu$, $\nu$ for the pseudocomonad
constraints is intended to avoid a clash with the notation for the constraints of a pseudomonoid, \cf \cref{thm:comonoid}. This is especially useful in view of the formulation of a linear exponential pseudocomonad
in \cref{thm:linear-exponential-comonad}, which will involve both notions.} is inspired by the literature on
categorical models of Linear Logic, where the underlying 2-functor of the pseudocomonad corresponds to the exponential modality,
the comultiplication to the promotion rule, and the counit to the dereliction rule~[64].
Let us now fix a 2-category $\catK$ and a pseudocomonad $(\bang, \dig, \der)$ on it as in \cref{def:psd-comonad}.
\begin{definition} A \myemph{pseudocoalgebra} for the pseudocomonad is an object $A \in \catK$ equipped with:
\begin{itemize}
\item a map $a \co A \to \bang A$, called the \emph{structure map} of the pseudocoalgebra,
\item two invertible 2-cells
\[
\begin{tikzcd}[column sep = large]
A \ar[r, "a"] \ar[d, "a"'] \ar[dr,phantom,"\Two \bar{a}"] & \bang A \ar[d, "\dig_A"] \\
\bang A \ar[r, "\bang a"'] & \bbang A \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}[column sep = large]
A \ar[r, "a"] \ar[dr, bend right = 20, "\id_A"'] & \bang A \ar[d, "\der_A"] \\
\phantom{} \ar[ur, phantom, {pos=.7}, "\Two \tilde{a}"] & A \mathrlap{,}
\end{tikzcd}
\]
called the \emph{associativity} and \emph{unitality} constraints of the pseudocoalgebra,
\end{itemize}
which satisfy two coherence conditions. (These are dual to those for a pseudoalgebra~\cite[Section~4.2]{LackS:cohap}.)
\end{definition}
\begin{definition} Let $A$ and $B$ be pseudocoalgebras in $\catK$. A \myemph{pseudocoalgebra morphism} from $A$ to $B$ is
a map $f \co A \to B$ equipped with an invertible 2-cell
\[
\begin{tikzcd}
A \ar[r, "f"] \ar[d, "a"'] \ar[dr,phantom,"\Two \bar{f}"] & B \ar[d, "b"] \\
\bang A \ar[r, "\bang f"'] & \bang B
\end{tikzcd}
\]
which satisfies two coherence conditions. (These are dual to those for a pseudoalgebra morphism~\cite[Section~1.2]{BlackwellR:twodmt}.)
\end{definition}
\begin{definition} Let $f, g \co A \to B$ be pseudocoalgebra morphisms in $\catK$. A \myemph{pseudocoalgebra $2$-cell} from $f$ to $g$ is a
2-cell $\phi \co f \Rightarrow g$ which satisfies one coherence condition. (This is dual to that for a pseudoalgebra 2-cell~\cite[Section~1.2]{BlackwellR:twodmt}.)
\end{definition}
We write $\Em{\catK}$ for the 2-category of pseudocoalgebras, pseudocoalgebra morphisms and pseudocoalgebra 2-cells. This is connected to $\catK$ by
a biadjunction
\begin{equation}
\label{equ:em-adjunction}
\begin{tikzcd}[column sep = large]
\catK
\ar[r, shift right =1, bend right = 10, "F"']
\ar[r, description, phantom, "\scriptstyle \bot"] &
\Em{\catK} \mathrlap{,}
\ar[l, shift right = 1, bend right = 10, "U"']
\end{tikzcd}
\end{equation}
where the left biadjoint $U$ is the evident forgetful 2-functor, sending a pseudocoalgebra to its underlying object, and the right biadjoint $F$ sends
an object $B \in \catK$ to the cofree pseudocoalgebra on $B$, given by $\bang B$, viewed as a pseudocoalgebra with structure map
$\dig_B \co \bang B \to \bbang B$. The associativity and unit constraints of the pseudocoalgebra are obtained from those of the pseudocomonad.
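Explicitly, up to a choice of orientation, the associativity constraint of the cofree pseudocoalgebra $\bang B$ may be taken to be the component $\pi_B$ of the associativity constraint of the pseudocomonad, and its unitality constraint the component $\mu_B$ of the left unitality constraint.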
In view of its application in \cref{sec:products}, let us unfold more explicitly the biadjunction in~\eqref{equ:em-adjunction} in terms of universal properties.
For $B \in \catK$, the universal property of the pseudocoalgebra $\bang B$ means that for every pseudocoalgebra~$A$, with structure map $a \co A \to \bang A$,
composition with $\der_B \co \bang B \to B$ gives an adjoint equivalence of categories
\begin{equation}
\label{equ:aux-equiv}
\begin{tikzcd}[column sep = large]
\Em{\catK}[ A, \bang B] \ar[r, "\der_B \circ (-)"] &
\catK[A, B] \mathrlap{.}
\end{tikzcd}
\end{equation}
In particular, for every $f \co A \to B$ in $\catK$, there is an essentially unique pseudocoalgebra morphism $f^\sharp \co A \to \bang B$ such that $\der_B \circ f^\sharp \cong f$ in $\catK$. Here, essential
uniqueness refers to the fact that \eqref{equ:aux-equiv} is an equivalence, rather than an isomorphism.
Explicitly, $f^\sharp \co A \to \bang B$ is obtained as the composite
\begin{equation}
\label{equ:unfold-f-sharp}
\begin{tikzcd}
A \ar[r, "a"] &
\bang A \ar[r, "\bang f"] &
\bang B \mathrlap{.}
\end{tikzcd}
\end{equation}
We shall also make use of the Kleisli bicategory of the pseudocomonad, written $\Kl{\catK}$, whose objects
are the objects of~$\catK$ and whose hom-categories are defined by letting $\Kl{\catK}[A, B] \defeq \catK[\bang A, B]$,
for $A, B \in \catK$. As observed in~[15], in general $\Kl{\catK}$ is a bicategory, not a 2-category. There is a biadjunction
\begin{equation}
\label{equ:cokleisliadjunction}
\begin{tikzcd}[column sep = large]
\Kl{\catK}
\ar[r, shift left =1, bend left =10, "K"]
\ar[r, description, phantom, "\scriptstyle \bot"] &
\catK \mathrlap{.}
\ar[l, shift left = 1, bend left =10, "J"]
\end{tikzcd}
\end{equation}
The left biadjoint $K$ sends $A \in \Kl{\catK} $ to $\bang A \in \catK$ and $f \co \bang A \to B$ to the composite $\bang f \circ \dig_A \co \bang A \to \bang B$, while the right biadjoint $J$ is the identity on objects and sends a map $f \co A \to B$ to the composite $f \circ \der_A \co \bang A \to B$.
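To illustrate, unfolding these definitions, the composite in $\Kl{\catK}$ of $f \co \bang A \to B$ and $g \co \bang B \to C$ is given, up to the evident coherence, by
\[
\begin{tikzcd}
\bang A \ar[r, "\dig_A"] &
\bbang A \ar[r, "\bang f"] &
\bang B \ar[r, "g"] &
C \mathrlap{.}
\end{tikzcd}
\]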
It is immediate to check that if $\catK$ has finite products, so does $\Kl \catK$. We record this as a proposition as we will use it in the proof of \cref{thm:coKleisli-cartesian-closed}.
\begin{proposition} \label{thm:coKleisli-cartesian}
Let $\catK$ be a 2-category and $(\bang, \dig, \der)$ a pseudocomonad on it. Assume $\catK$ has finite products.
Then the Kleisli bicategory $\Kl \catK$ has finite products.
\end{proposition}
\begin{proof} We define the product of $A$ and $B$ in $\Kl \catK$ to be their product $A \with B$ in $\catK$.
The required universal property follows from the equivalence
\begin{align*}
\Kl \catK [ X, A] \times \Kl \catK [X, B] & = \catK[ \bang X, A] \times \catK[\bang X, B] \\
& \simeq \catK[ \bang X, A \with B] \\
& = \Kl \catK [ X, A \with B] \mathrlap{,}
\end{align*}
for every $X, A, B \in \catK$.
The terminal object of $\Kl \catK$ is the terminal object of $\catK$. Indeed, we have the equivalence
\[
\Kl \catK [ A, \term] =
\catK[\bang A, \term] \simeq
\mathsf{1} \mathrlap{,}
\]
for every $A \in \catK$.
\end{proof}
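For instance, under the defining equivalence, the pairing in $\Kl \catK$ of $f \co \bang X \to A$ and $g \co \bang X \to B$ is simply their pairing $\langle f, g \rangle \co \bang X \to A \with B$ in $\catK$, while the projections of $\Kl \catK$ are obtained from those of $\catK$ by precomposition with the counit, as in $\pi_1 \circ \der_{A \with B} \co \bang (A \with B) \to A$.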
\subsection*{Symmetric lax monoidal pseudocomonads} As a second step towards our notion of a linear exponential pseudocomonad, we give the definition of a
symmetric lax monoidal pseudocomonad, assuming that $\catK$ has the structure of a symmetric Gray monoid. The definition involves not only
the underlying 2-functor, comultiplication and counit interacting suitably with the symmetric monoidal structure, but also the additional modifications
for associativity and unit laws.
\begin{definition} \label{thm:sym-lax-monoidal-pseudocomonad}
Let $\catK$ be a symmetric Gray monoid. A \myemph{symmetric lax monoidal pseudocomonad} on $\catK$ is a pseudocomonad
$(\bang, \dig, \der)$ on $\catK$ as in \cref{def:psd-comonad} equipped with additional structure and satisfying extra properties as follows:
\begin{itemize}
\item the underlying 2-functor of the pseudocomonad is sylleptic lax monoidal, \ie we have a pseudonatural transformation with object components
\begin{equation}
\label{equ:monn}
\monn_{A, B} \co \bang A \otimes \bang B \to \bang (A \otimes B) \mathrlap{,}
\end{equation}
for $A, B \in \catK$, a map
\begin{equation}
\label{equ:moni}
\moni \co \unit \to \bang \unit \mathrlap{,}
\end{equation}
and invertible modifications with components
\begin{equation}
\label{equ:lax-monoidal-associativity}
\begin{tikzcd}[column sep = huge, ampersand replacement=\&]
\bang A \otimes \bang B \otimes \bang C \ar[r, "\id_{\bang A} \otimes \monn_{B,C}"] \ar[d, "\monn_{A,B} \otimes \id_{\bang C}"'] \ar[dr,phantom,"\Two\omega_{A,B,C}"] \&
\bang A \otimes \bang (B \otimes C) \ar[d, "\monn_{A, B \otimes C}"] \\
\bang (A \otimes B) \otimes \bang C \ar[r, "\monn_{A \otimes B, C}"'] \& \bang (A \otimes B \otimes C) \mathrlap{,}
\end{tikzcd}
\end{equation}
for $A, B, C \in \catK$,
\begin{equation}
\label{equ:lax-monoidal-unitality}
\begin{tikzcd}[column sep = large, ampersand replacement=\&]
\&
\bang \unit \otimes \bang A
\ar[d, phantom, description, "\Two \kappa_A"]
\ar[dr, "\monn_{ \unit, A}"] \&
\\
\unit \otimes \bang A
\ar[rr, "\id_{\bang A}"']
\ar[ur, "\mon_ \unit \otimes\id_{\bang A}"] \&
\phantom{} \&
\bang A \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}[column sep = large, ampersand replacement=\&]
\&
\bang A \otimes \bang \unit
\ar[d, phantom, description, "\Two \zeta_A"]
\ar[dr, "\monn_{A, \unit}"] \&
\\
\bang A \otimes \unit
\ar[rr, "\id_{\bang A}"']
\ar[ur, "\id_{\bang A} \otimes \moni"] \&
\phantom{} \&
\bang A \mathrlap{,}
\end{tikzcd}
\end{equation}
for $A \in \catK$, and
\begin{equation}
\label{equ:lax-monoidal-braiding}
\begin{tikzcd}[ampersand replacement=\&]
\bang A \otimes \bang B \ar[r, "\mathsf{r}_{\bang A, \bang B}"] \ar[d, "\monn_{A,B}"'] \ar[dr,phantom,"\Two \theta_{A,B}"] \& \bang B \otimes \bang A \ar[d, "\monn_{B,A}"] \\
\bang (A \otimes B) \ar[r, "\bang \mathsf{r}_{A,B}"'] \& \bang (B \otimes A) \mathrlap{,}
\end{tikzcd}
\end{equation}
for $A, B \in \catK$, which satisfy the coherence conditions for a sylleptic lax monoidal pseudofunctor;
\item the comultiplication $\dig$ is a braided monoidal pseudonatural transformation, \ie we have an invertible modification with components
\begin{equation}
\label{equ:comultiplication-monoidal-2}
\begin{tikzcd}[column sep = large]
\bang A \otimes \bang B \ar[rr, "\monn_{A,B}"] \ar[d, "\dig_A \otimes \dig_B"']
\ar[drr,phantom,"\Two \dig^2_{A,B}"]
& & \bang (A \otimes B) \ar[d, "\dig_{A \otimes B}"] \\
\bbang A \otimes \bbang B \ar[r, "\monn_{\bang A, \bang B}"'] & \bang ( \bang A \otimes \bang B) \ar[r, "\bang \monn_{A, B}"'] & \bbang (A \otimes B) \mathrlap{,}
\end{tikzcd}
\end{equation}
and an invertible 2-cell
\begin{equation}
\label{equ:comultiplication-monoidal-0}
\begin{tikzcd}[column sep = large]
\unit \ar[r, "\moni"] \ar[d, "\moni"'] \ar[dr,phantom,"\Two \dig^0"] & \bang \unit \ar[d, "\dig_ \unit"] \\
\bang \unit \ar[r, "\bang \moni"'] & \bbang \unit \mathrlap{,}
\end{tikzcd}
\end{equation}
which satisfy the coherence conditions for a braided monoidal transformation;
\item the counit $\der$ is a braided monoidal pseudonatural transformation, \ie we have an invertible modification with components
\begin{equation}
\label{equ:counit-monoidal-2}
\begin{tikzcd}[column sep = large]
\bang A \otimes \bang B \ar[r, "\monn_{A,B}"] \ar[d, "\der_A \otimes \der_B"'] \ar[dr,phantom,"\Two \der^2_{A,B}"]
& \bang (A \otimes B) \ar[d, "\der_{A \otimes B}"] \\
A \otimes B \ar[r, "\id_{A \otimes B}"'] & A \otimes B \mathrlap{,}
\end{tikzcd}
\end{equation}
for $A, B \in \catK$, and an invertible 2-cell
\begin{equation}
\label{equ:counit-monoidal-0}
\begin{tikzcd}
\unit \ar[r, "\moni"] \ar[d, "\id_\unit"'] \ar[dr,phantom,"\Two \der^0"] & \bang \unit \ar[d, "\der_\unit"] \\
\unit \ar[r, "\id_\unit"'] & \unit \mathrlap{,}
\end{tikzcd}
\end{equation}
which satisfy the coherence conditions for a braided monoidal transformation;
\item the associativity and unitality constraints $\pi$, $\mu$, $\nu$ of the pseudocomonad are monoidal modifications.
\end{itemize}
\end{definition}
In the one-dimensional setting, it is well-known that the category of coalgebras for a symmetric monoidal comonad admits a symmetric
monoidal structure. The 2-dimensional counterpart of this
result requires some care, in that one does not get back a structure that is as strict as the one on $\catK$ from which one starts. Let us recall the precise statement from~[66].
\begin{theorem}[Miranda] \label{thm:coalg-is-monoidal} Let $\catK$ be a symmetric Gray monoid and
let $(\bang, \dig, \der)$ be a symmetric lax monoidal pseudocomonad on $\catK$. Then
the 2-category of pseudocoalgebras $\Em{\catK}$ admits the structure of a symmetric monoidal bicategory so that
the forgetful 2-functor $U \co \Em{\catK} \to \catK$ is a strict symmetric monoidal 2-functor.
Furthermore, the associativity and unit constraints for this monoidal bicategory are 2-natural and they satisfy the pentagon and triangle axioms strictly.
\end{theorem}
We outline some of the structure implicit in the proof of \cref{thm:coalg-is-monoidal} for later use. Let $\catK$ be a symmetric Gray monoid and
let $(\bang, \dig, \der)$ be a symmetric lax monoidal pseudocomonad on $\catK$, with data as in \cref{thm:sym-lax-monoidal-pseudocomonad}.
Let $A$ and $B$ be coalgebras with structure maps $a \co A \to \bang A$
and $b \co B \to \bang B$, respectively. Their tensor product in $\Em{\catK}$ is the pseudocoalgebra with underlying object
$A \otimes B$ and structure map the following composite:
\[
\begin{tikzcd}
A \otimes B \ar[r, "a \otimes b"] & \bang A \otimes \bang B \ar[r, "\monn_{A,B}"] & \bang (A\otimes B) \mathrlap{.}
\end{tikzcd}
\]
The associativity constraint is the invertible 2-cell obtained by the following pasting diagram:
\[
\begin{tikzcd}[column sep = large]
A \otimes B \ar[r, "a \otimes b"] \ar[d, "a \otimes b"'] \ar[dr,phantom,"\Two \bar{a} \otimes \bar{b}"] &[3em]
\bang A \otimes \bang B \ar[r, "\monn_{A,B}"] \ar[d, "\dig_A \otimes \dig_B"] \ar[ddr,phantom,"\Two \dig^2_{A,B}"] &[3em]
\bang (A \otimes B) \ar[dd, "\dig_{A \otimes B}"] \\
\bang A \otimes \bang B \ar[r, "\bang a \otimes \bang b"] \ar[d, "\mon_{A, B}"'] \ar[dr,phantom,"\cong"] &
\bbang A \otimes \bbang B \ar[d, "\monn_{A, B}"] & \\
\bang A \otimes \bang B \ar[r, "\bang (a \otimes b)"'] &
\bang (\bang A \otimes \bang B) \ar[r, "\bang \monn_{A, B}"'] &
\bbang (A \otimes B) \mathrlap{.}
\end{tikzcd}
\]
The unitality constraint is the invertible 2-cell below:
\[
\begin{tikzcd}[column sep = huge]
A \otimes B \ar[r, "a \otimes b"] \ar[dr, bend right = 20, "\id_{A \otimes B}"'] &
\bang A \otimes \bang B \ar[d, "\der_A \otimes \der_B"] \ar[r, "\monn_{A,B}"] \ar[dr,phantom,"\Two \der^2_{A,B}"] &
\bang (A \otimes B) \ar[d, "\der_{A \otimes B}"] \\
\phantom{}
\ar[ur, phantom, pos = (0.7), "\Two \tilde{a} \otimes \tilde{b}"]
& A \otimes B \ar[r, "\id_{A \otimes B}"'] & A \otimes B \mathrlap{.}
\end{tikzcd}
\]
The unit of the monoidal structure is the pseudocoalgebra with underlying object $\unit$ and coalgebra structure map $\moni \co \unit \to \bang \unit$. Its associativity
and unitality constraints are given by the invertible 2-cells $\dig^0$ and $\der^0$ in~\eqref{equ:comultiplication-monoidal-0} and~\eqref{equ:counit-monoidal-0}, respectively.
A subtle point is that, even if the tensor product of $\catK$ is strictly unital and associative, the tensor product in $\Em{\catK}$ is not, even when the underlying objects of the
relevant pseudocoalgebras are equal. The key point here is that, for pseudocoalgebras $A, B, C$, the pseudocoalgebra structures on $(A \otimes B) \otimes C$ and $A \otimes (B \otimes C)$ are
merely isomorphic, rather than equal, in $\Em{\catK}$. This involves making the identity map $\id_{A \otimes B \otimes C}$ into a pseudocoalgebra morphism.
\begin{remark} \label{equ:lift-braiding-to-em}
The content of \cref{thm:coalg-is-monoidal} goes much further than merely lifting tensor product and unit from $\catK$ to $\Em{\catK}$, in that it asserts that all the data for the symmetric Gray monoid $\catK$,
as in \cref{def:symmetric-gray-monoid}, lifts to $\Em{\catK}$. This means in particular that:
\begin{enumerate}
\item the braiding $\bra_{A, B} \co A \otimes B \to B \otimes A$ is a pseudocoalgebra morphism for all pseudocoalgebras $A, B$;
\item the components $\beta^1_{A,B,C}$ and $\beta^2_{A,B,C}$ of the braiding constraints are pseudocoalgebra 2-cells for all pseudocoalgebras $A, B, C$;
\item the component $\sigma_{A,B}$ of the syllepsis is a pseudocoalgebra 2-cell for all pseudocoalgebras $A$ and $B$.
\end{enumerate}
\end{remark}
\begin{corollary} Let $\catK$ be a symmetric Gray monoid and
$(\bang, \dig, \der)$ a symmetric lax monoidal pseudocomonad on $\catK$.
The forgetful-cofree biadjunction between $\catK$ and $\Em{\catK}$ in~\eqref{equ:em-adjunction} lifts to a symmetric lax monoidal biadjunction, as in
\[
\begin{tikzcd}[column sep = large]
(\catK, \otimes, \unit)
\ar[r, shift right =1, bend right = 10, "F"']
\ar[r, description, phantom, "\scriptstyle \bot"] &
(\Em{\catK}, \otimes, \unit) \mathrlap{.}
\ar[l, shift right = 1, bend right = 10, "U"']
\end{tikzcd}
\]
\end{corollary}
\begin{proof} Since the left biadjoint~$U$ is a sylleptic strict monoidal pseudofunctor, the claim
follows by a two-dimensional version of Kelly's doctrinal adjunction, \cf~\cite[pages~62-63]{GarnerR:enrcfc}. In particular, the right biadjoint~$F$ becomes a sylleptic lax monoidal
pseudofunctor by~\cite[Proposition~15]{DayStreet}.
\end{proof}
For a 2-category $\catK$ and pseudocomonad $(\bang, \dig, \der)$ on it, it is immediate to observe that the components of the comultiplication $\dig_A \co \bang A \to \bbang A$ are
pseudocoalgebra morphisms, the pseudonaturality 2-cells
\[
\begin{tikzcd}
\bang A \ar[r, "\dig_A"] \ar[d, "\bang f"'] \ar[dr,phantom,"\Two \bar{\dig}_f"] & \bbang A \ar[d, "\bbang f"] \\
\bang B \ar[r, "\dig_B"'] & \bbang B \mathrlap{,}
\end{tikzcd}
\]
for $f \co A \to B$ in $\catK$, are pseudocoalgebra 2-cells (since $\pi$ is a modification), and the components $\pi_A$ of its associativity constraint, for $A \in \catK$, are pseudocoalgebra 2-cells (by the
coherence axioms for a pseudocomonad). When $\catK$ is a Gray
monoid and the pseudocomonad is sylleptic lax monoidal, as in \cref{thm:sym-lax-monoidal-pseudocomonad}, other parts of the structure can also be lifted to
the 2-category of pseudocoalgebras. This is described in the next two lemmas. The second one concerns pseudocoalgebra 2-cells and therefore does not have a one-dimensional counterpart.
Let us fix a symmetric Gray monoid $\catK$ and a symmetric lax monoidal pseudocomonad $(\bang, \dig, \der)$ on it as in \cref{thm:sym-lax-monoidal-pseudocomonad}.
\begin{lemma} \label{thm:mu-are-coalgebra-morphisms} \leavevmode
\begin{enumerate}[(i)]
\item For every $A, B \in \catK$, the map $\monn_{A, B} \co \bang A \otimes \bang B \to \bang (A \otimes B)$ is a pseudocoalgebra morphism, \ie we have an invertible 2-cell
\[
\begin{tikzcd}[column sep = large]
\bang A \otimes \bang B \ar[d, "\dig_A \otimes \dig_B"'] \ar[r, "\monn_{A,B}"]
\ar[ddr, phantom, description, "\Two"]
& \bang (A \otimes B) \ar[dd, "\dig_{A \otimes B}"] \\
\bbang A \otimes \bbang B \ar[d, "\monn_{\bang A, \bang B}"'] & \\
\bang (\bang A \otimes \bang B) \ar[r, "\bang \monn_{A,B}"'] & \bbang (A \otimes B)
\end{tikzcd}
\]
such that the appropriate coherence conditions hold.
\item The map $\moni \co \unit \to \bang \unit$ is a pseudocoalgebra morphism, \ie we have an invertible 2-cell
\[
\begin{tikzcd}[column sep = large]
\unit \ar[r, "\moni"] \ar[dd, "\moni"'] \ar[ddr, phantom, description, "\Two"] & \bang \unit \ar[dd, "\dig_\unit"] \\
& \\
\bang \unit \ar[r, "\bang \moni"'] & \bbang \unit
\end{tikzcd}
\]
such that the appropriate coherence conditions hold.
\end{enumerate}
\end{lemma}
\begin{proof} For part~(i), the required invertible 2-cell is given by the 2-cell $\dig^2_{A,B}$ in~\eqref{equ:comultiplication-monoidal-2}. The two coherence conditions for a pseudocoalgebra morphism follow from the first\footnote{{Cf.}~\cref{thm:coh-numb-convention}.}
coherence axiom of a monoidal modification for $\pi$ (the associativity constraint of the pseudocomonad) and for $\nu$ (the right unitality constraint of the pseudocomonad), respectively.
For part (ii), it suffices to recall that $\moni \co \unit \to \bang \unit$ is the structure map of a pseudocoalgebra.
\end{proof}
Next, we show that the 2-cells that are part of the structure of a symmetric lax monoidal pseudocomonad can also be lifted. For this statement to make sense, recall that the components of the
comultiplication $\dig_A \co \bang A \to \bbang A$, for $A \in \catK$, are pseudocoalgebra morphisms, that $\monn_{A,B}$ and $\moni$ are pseudocoalgebra morphisms by \cref{thm:mu-are-coalgebra-morphisms}, and that
the braiding $\bra_{A, B} \co A \otimes B \to B \otimes A$ is a pseudocoalgebra morphism for all pseudocoalgebras $A, B$ by \cref{equ:lift-braiding-to-em}.
\begin{lemma} \label{thm:to-be-added-1} \leavevmode
\begin{enumerate}
\item The 2-cell $\omega_{A, B, C}$ in \eqref{equ:lax-monoidal-associativity} is a pseudocoalgebra 2-cell for all pseudocoalgebras $A, B, C$.
\item The 2-cells $\kappa_A$ and $\zeta_A$ in \eqref{equ:lax-monoidal-unitality} are pseudocoalgebra 2-cells for every pseudocoalgebra $A$.
\item The 2-cell $\theta_{A,B}$ in \eqref{equ:lax-monoidal-braiding} is a pseudocoalgebra 2-cell for all pseudocoalgebras $A, B$.
\item The 2-cell $\dig^2_{A,B}$ in \eqref{equ:comultiplication-monoidal-2} is a pseudocoalgebra 2-cell for all pseudocoalgebras $A, B$.
\item The 2-cell $\dig^0$ in \eqref{equ:comultiplication-monoidal-0} is a pseudocoalgebra 2-cell.
\end{enumerate}
\end{lemma}
\begin{proof} For parts (i) and (ii), use the first, second and third coherence conditions for the comultiplication~$\dig$ to be a monoidal transformation to prove the claim for~$\omega_{A, B, C}$, $\kappa_A$ and $\zeta_A$,
respectively. For part~(iii), use the condition expressing that the monoidal transformation $\dig$ is braided. For parts~(iv) and (v), use the first and second coherence conditions for the modification $\pi$ (the associativity constraint of the pseudocomonad) to be monoidal, respectively.
\end{proof}
\subsection*{Linear exponential pseudocomonads}
We are now ready to introduce our bicategorical counterpart of linear exponential comonads, which is based on the one-dimensional axiomatisation of~[47].
Note that, since the 2-category $\Em \catK$ of pseudocoalgebras for a symmetric lax monoidal pseudocomonad is a symmetric monoidal bicategory by~\cref{thm:coalg-is-monoidal},
we can consider braided and symmetric pseudocomonoids therein, in the sense of \cref{thm:comonoid} and \cref{thm:braided-comonoid}.
\begin{definition} \label{thm:linear-exponential-comonad}
Let $\catK$ be a symmetric Gray monoid. A \myemph{linear exponential pseudocomonad} on $\catK$ is a symmetric lax monoidal pseudocomonad $(\bang, \dig, \der)$ on $\catK$
as in \cref{thm:sym-lax-monoidal-pseudocomonad} equipped with additional data as follows.
\begin{itemize}
\item For every pseudocoalgebra $A$, we have pseudocoalgebra morphisms
\begin{equation}
\label{equ:lin-exp-com-and-cou}
\com_A \co A \to A \otimes A \mathrlap{,} \qquad
e_A \co A \to \unit \mathrlap{,}
\end{equation}
and invertible pseudocoalgebra 2-cells
\begin{gather}
\label{equ:lin-exp-associativity}
\begin{tikzcd}[column sep = large, ampersand replacement=\&]
A
\ar[r, "\com_A"] \ar[d, "\com_A"']
\ar[dr,phantom,"\Two \alpha_A"] \&
A \otimes A
\ar[d, "\id_A \otimes \com_A"] \\
A \otimes A
\ar[r, "\com_A \otimes \id_A"'] \&
A \otimes A \otimes A
\mathrlap{,}
\end{tikzcd} \\
\label{equ:lin-exp-unitality}
\begin{tikzcd}[column sep = {1.5cm,between origins}, ampersand replacement=\&]
\&
A \otimes A
\ar[d, phantom, description, "\Two \lambda_A"]
\ar[dr, "e_A \otimes \id_A"] \&
\\
A
\ar[ur, "\com_A"]
\ar[rr, "\id_A"'] \&
\phantom{} \&
A \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}[column sep = {1.5cm,between origins}, ampersand replacement=\&]
\&
A \otimes A
\ar[d, phantom, description, "\Two \rho_A"]
\ar[dr, "\id_A \otimes e_A"] \&
\\
A
\ar[ur, "\com_A"]
\ar[rr, "\id_A"'] \&
\phantom{} \&
A \mathrlap{,}
\end{tikzcd} \\
\label{equ:lin-exp-braiding}
\begin{tikzcd}[column sep = {1.5cm,between origins}, ampersand replacement=\&]
\&
A \otimes A
\ar[d, phantom, description, "\Two \gamma_A"]
\ar[dr, "\bra_{A,A}"] \&
\\
A
\ar[ur, "\com_A"]
\ar[rr, "\com_A"'] \&
\phantom{} \&
A \otimes A
\end{tikzcd}
\end{gather}
that equip $A$ with the structure of a symmetric pseudocomonoid in $\Em{\catK}$.
\item The families of maps $(\com_A)_{A \in \catK}$ and $(e_A)_{A \in \catK}$ are pseudonatural with respect to pseudocoalgebra morphisms and pseudocoalgebra 2-cells.
\item The families of 2-cells $(\alpha_A)_{A \in \catK}$, $(\lambda_A)_{A \in \catK}$, $(\rho_A)_{A \in \catK}$ and $(\gamma_A)_{A \in \catK}$ are modifications.
\item The pseudonatural transformations $\com$ and $e$ are braided monoidal.
\item The modifications $\alpha$, $\lambda, \rho$ and $\gamma$ are monoidal.
\end{itemize}
\end{definition}
\begin{remark} \label{thm:unfold-linear-exponential}
We unfold the definition of linear exponential pseudocomonad for later use.
\begin{itemize}
\item For every pseudocoalgebra~$A$, the map $\com_A \co A \to A \otimes A$ is a pseudocoalgebra morphism, \ie we have an invertible 2-cell
\[
\begin{tikzcd}
A \ar[r, "\com_A"] \ar[dd, "a"'] \ar[ddr, phantom, description, "\Two \bar{n}_A"] & A \otimes A \ar[d, "a \otimes a"] \\
& \bang A \otimes \bang A \ar[d, "\monn_{A, A}"] \\
\bang A \ar[r, "\bang \com_A"'] & \bang (A \otimes A) \mathrlap{.}
\end{tikzcd}
\]
Similarly, for each pseudocoalgebra~$A$, the map $e_A \co A \to \unit$ is a pseudocoalgebra morphism, \ie we have an invertible 2-cell
\[
\begin{tikzcd}
A \ar[r, "e_A"] \ar[d, "a"'] \ar[dr, phantom, description, "\Two \bar{e}_A"] & \unit \ar[d, "\moni"] \\
\bang A \ar[r, "\bang e_\unit"'] & \bang \unit \mathrlap{.}
\end{tikzcd}
\]
Furthermore, the 2-cells $\alpha_A$, $\rho_A$, $\lambda_A$ and $\gamma_A$ in \eqref{equ:lin-exp-associativity}, \eqref{equ:lin-exp-unitality}, and \eqref{equ:lin-exp-braiding} are pseudocoalgebra 2-cells with respect to these pseudocoalgebra morphisms.
\item We have invertible 2-cells making $\com$ into a pseudonatural transformation with respect to pseudocoalgebra morphisms, \ie for every pseudocoalgebra morphism $f \co A \to B$, there is an invertible 2-cell
\begin{equation}
\label{equ:d-psdnat}
\begin{tikzcd}
A \ar[r, "\com_A"] \ar[d, "f"'] \ar[dr, phantom, description, "\Two n_f"] & A \otimes A \ar[d, "f \otimes f"] \\
B \ar[r, "\com_B"'] & B \otimes B \mathrlap{,}
\end{tikzcd}
\end{equation}
satisfying the appropriate coherence axioms.
Similarly, we have invertible 2-cells making $e$ into a pseudonatural transformation with respect to pseudocoalgebra
morphisms, \ie for every pseudocoalgebra morphism $f \co A \to B$,
we have an invertible 2-cell
\begin{equation}
\label{equ:e-psdnat}
\begin{tikzcd}
A \ar[r, "e_A"] \ar[d, "f"'] \ar[dr, phantom, description, "\Two e_f"] & \unit \ar[d, "\id_\unit"] \\
B \ar[r, "e_B"'] & \unit \mathrlap{,}
\end{tikzcd}
\end{equation}
satisfying the appropriate coherence axioms.
\item We have an invertible modification $\com^2$ and an invertible 2-cell $\com^0$ making the pseudonatural transformation $\com$ into
a braided monoidal pseudonatural transformation, \ie
\begin{equation}
\label{equ:d-monoidal}
\begin{tikzcd}[column sep = large]
A \otimes B \ar[r, "\com_{A \otimes B}"] \ar[d, "\id_{A \otimes B}"'] \ar[dr, phantom, description, "\Two \com^2_{A,B}"] & A \otimes B \otimes A \otimes B \ar[d, "\id_A \otimes \bra_{A,B} \otimes \id_B"] \\
A \otimes B \ar[r, "\com_A \otimes \com_B"'] & A \otimes A \otimes B \otimes B \mathrlap{,}
\end{tikzcd}
\qquad
\begin{tikzcd}
\unit \ar[d, "\id_\unit"'] \ar[r, "d_\unit"] \ar[dr, phantom, description, "\Two \com^0"] & \unit \otimes \unit \ar[d, "\id_\unit"] \\
\unit \ar[r, "\id_\unit"'] & \unit \mathrlap{.}
\end{tikzcd}
\end{equation}
Similarly, we have an invertible modification $e^2$ and an invertible 2-cell $e^0$ making the pseudonatural transformation $e$ into
a braided monoidal pseudonatural transformation, \ie
\begin{equation}
\label{equ:e-monoidal}
\begin{tikzcd}[column sep = large]
A \otimes B \ar[r, "e_A \otimes e_B"] \ar[d, "\id_{A \otimes B}"'] \ar[dr, phantom, description, "\Two e^2_{A,B}"] & \unit \ar[d, "\id_{A \otimes B}"] \\
A \otimes B \ar[r, "e_{A \otimes B}"'] & \unit \otimes \unit \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}
\unit \ar[r, "\id_\unit"] \ar[d, "\id_\unit"'] \ar[dr, phantom, description, "\Two e^0"] & \unit \ar[d, "\id_\unit"] \\
\unit \ar[r, "e_\unit"'] & \unit \mathrlap{.}
\end{tikzcd}
\end{equation}
\end{itemize}
\end{remark}
\begin{remark} \label{thm:retract-cofree}
In the one-dimensional setting, a possible axiomatisation of the notion of a linear
exponential comonad involves requiring the existence of a commutative comonoid structure only
on cofree coalgebras. From this one can derive the presence of
a commutative comonoid structure on every coalgebra, using that every coalgebra is a retract of a cofree one. We expect that a similar argument applies in the two-dimensional
setting, \cf also \cref{thm:compare-with-jacq} below, but do not pursue the details here.
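Note, for instance, that the unitality constraint $\tilde{a}$ of a pseudocoalgebra $(A, a)$, relating $\der_A \circ a$ and $\id_A$, already exhibits $A$ as a pseudo-retract of the cofree pseudocoalgebra $\bang A$.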
\end{remark}
Let us now fix a symmetric Gray monoid $\catK$ and a linear exponential pseudocomonad $(\bang, \dig, \der)$ on it, with data as in \cref{def:psd-comonad,thm:sym-lax-monoidal-pseudocomonad,thm:linear-exponential-comonad}.
\begin{proposition} \label{thm:coalgebras-is-cartesian} Let $\catK$ be a symmetric Gray monoid and $(\bang, \dig, \der)$ be a linear exponential pseudocomonad.
The symmetric monoidal structure on the 2-category of pseudocoalgebras $\Em{\catK}$ of \cref{thm:coalg-is-monoidal} is a cartesian monoidal structure.
In particular, $\Em{\catK}$ has finite products.
\end{proposition}
\begin{proof} The axioms for a linear exponential pseudocomonad require every pseudocoalgebra to be a symmetric pseudocomonoid in the 2-category of pseudocoalgebras, in a pseudonatural and monoidal way, so \cref{thm:prod-iff-all-ccmon} implies the claim.
\end{proof}
\begin{remark} \label{thm:canonical-is-linear-exponential}
By the essential uniqueness of finite products, the canonical symmetric pseudocomonoid structure on a pseudocoalgebra $A$ given by products in $\Em{\catK}$ is equivalent to the one given by the linear exponential pseudocomonad.
\end{remark}
We derive some more consequences of the axioms for a linear exponential pseudocomonad. For the next lemma, recall that if $A$ and $B$ are pseudocoalgebras, then $A \otimes B$ admits the structure of a pseudocoalgebra by \cref{thm:coalg-is-monoidal} and thus acquires the structure of a symmetric pseudocomonoid by the axioms for a linear exponential pseudocomonad.
Recall also that the 2-category $\SymCoMon{\catK}$ has a symmetric monoidal structure by \cref{thm:sym-comon-is-monoidal}.
\begin{lemma}
\label{thm:id-is-comonoid-morphism}
\leavevmode
\begin{enumerate}[(i)]
\item Let $A$ and $B$ be pseudocoalgebras. Then the symmetric pseudocomonoid $(A \otimes B, \com_{A \otimes B}, e_{A \otimes B})$, determined by the linear
exponential pseudocomonad, is equivalent to the tensor product of the symmetric pseudocomonoids
$(A, \com_A, e_A)$ and $(B, \com_B, e_B)$ in $\SymCoMon{\catK}$.
\item The symmetric pseudocomonoid $(\unit, \com_\unit, e_\unit)$ determined by the linear exponential pseudocomonad
is equivalent to the unit of $\SymCoMon \catK$.
\end{enumerate}
\end{lemma}
\begin{proof} For~(i), recall the definition of the tensor product $(A, \com_A, e_A) \otimes (B, \com_B, e_B)$ from the comments below~\cref{thm:coalg-is-monoidal}. In particular,
its underlying object is $A \otimes B$, its comultiplication is as in~\eqref{equ:comultiplication-for-tensor}, and its counit as in~\eqref{equ:counit-for-tensor}. We claim that the identity map $\id_{A \otimes B} \co A \otimes B \to A \otimes B$ is a braided pseudocomonoid morphism
between $(A \otimes B, \com_{A \otimes B}, e_{A \otimes B})$ and $(A, \com_A, e_A) \otimes (B, \com_B, e_B)$. For this, we need 2-cells fitting in
the diagrams
\[
\begin{tikzcd}[column sep = huge]
A \otimes B \ar[r, "\id_{A \otimes B}"] \ar[dd, "\com_{A \otimes B}"']
\ar[ddr, phantom, description, "\Two"]
& A \otimes B \ar[d, "\com_A \otimes \com_B"] \\
& A \otimes A \otimes B \otimes B \ar[d, "\id_A \otimes \mathsf{r}_{A,B} \otimes \id_B"] \\
A \otimes B \otimes A \otimes B \ar[r, "\id_{A \otimes B \otimes A \otimes B}"'] & A \otimes B \otimes A \otimes B \mathrlap{,}
\end{tikzcd}
\qquad
\begin{tikzcd}
A \otimes B \ar[r, "\id"] \ar[dd, "e_{A \otimes B}"']
\ar[ddr, phantom, description, "\Two"]
& A \otimes B \ar[d, "e_A \otimes e_B"] \\
& \unit \otimes \unit \ar[d, "\id"] \\
\unit \ar[r, "\id"'] & \unit \mathrlap{.}
\end{tikzcd}
\]
These are given by the 2-cells $\com^2_{A,B}$ and $e^2_{A,B}$ in~\eqref{equ:d-monoidal} and~\eqref{equ:e-monoidal}, which are part of the data making~$\com$ and~$e$ into braided monoidal
pseudonatural transformations, respectively. We need to check the four coherence conditions for a braided pseudocomonoid morphism. The four proofs use
the first\footnote{{Cf.} \cref{thm:coh-numb-convention}.} coherence condition for the four modifications $\alpha$, $\lambda, \rho$ and $\gamma$ in \eqref{equ:lin-exp-associativity}, \eqref{equ:lin-exp-unitality}, and \eqref{equ:lin-exp-braiding} to be monoidal, respectively.
For part~(ii), recall that the unit $\unit$ is a braided pseudocomonoid with identity maps for both comultiplication and unit, and identity 2-cells for all the constraints. We claim that the identity
map $\id_\unit \co \unit \to \unit$ is a braided pseudocomonoid morphism from $(\unit, \com_\unit, e_\unit)$ to $(\unit, \id_\unit, \id_\unit)$. Thus, we need invertible 2-cells:
\[
\begin{tikzcd}[column sep = large]
\unit \ar[r, "\id_\unit"] \ar[d, "\com_\unit"'] \ar[dr,phantom,"\Two"] & \unit \ar[d, "\id_\unit"] \\
\unit \otimes \unit \ar[r, "\id_\unit \otimes \id_\unit"'] & \unit \otimes \unit \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}[column sep = large]
\unit \ar[r, "\id_\unit"] \ar[d, "e_\unit"'] \ar[dr,phantom,"\Two"] & \unit \ar[d, "\id_\unit"] \\
\unit \ar[r, "\id_\unit"'] & \unit \mathrlap{.}
\end{tikzcd}
\]
These are given by the 2-cells $\com^0$ and $e^0$ in ~\eqref{equ:d-monoidal} and~\eqref{equ:e-monoidal}, respectively. To verify the four coherence conditions for a braided pseudocomonoid morphism, use
the second coherence condition for the modifications $\alpha$, $\lambda, \rho$ and $\gamma$ to be monoidal, respectively.
\end{proof}
\begin{corollary} Let $A$ and $B$ be pseudocoalgebras. The symmetric pseudocomonoid $(A \otimes B, \com_{A \otimes B}, e_{A \otimes B})$ is the product of $A$ and $B$ in $\SymCoMon{\catK}$.
\end{corollary}
\begin{proof} Immediate from~\cref{thm:id-is-comonoid-morphism} and \cref{thm:ccmon-finprod}.
\end{proof}
We now define explicitly two pseudonatural transformations, called \emph{contraction} and \emph{weakening} following the terminology used in Linear Logic, which will be important
for our development.
For $A \in \catK$, the cofree pseudocoalgebra $\bang A$ admits the structure of symmetric pseudocomonoid
by the axioms for a linear exponential pseudocomonad, like any pseudocoalgebra.
We define the map~$\con_A$, called \emph{contraction}, to be the comultiplication of the symmetric pseudocomonoid $\bang A$, \ie we let
\begin{equation}
\label{equ:con}
\begin{tikzcd}
\bang A \ar[r, "\con_A"] & \bang A \otimes \bang A
\end{tikzcd} \quad \defeq \quad
\begin{tikzcd}
\bang A \ar[r, "\com_{\bang A}"] & \bang A \otimes \bang A \mathrlap{.}
\end{tikzcd}
\end{equation}
Similarly, we define the map $\weak_A$, called \emph{weakening}, to be the counit of the symmetric pseudocomonoid $\bang A$, \ie we let
\begin{equation}
\label{equ:weak}
\begin{tikzcd}
\bang A \ar[r, "\weak_A"] & \unit
\end{tikzcd}
\quad \defeq \quad
\begin{tikzcd}
\bang A \ar[r, "e_{\bang A}"] & \unit \mathrlap{,}
\end{tikzcd}
\end{equation}
By definition, $\con_A$ and $\weak_A$ are the components on objects of pseudonatural transformations.
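To illustrate the logical reading, given maps $f \co \bang A \otimes \bang A \to B$ and $g \co \unit \to B$ in $\catK$, contraction and weakening yield the composites
\[
f \circ \con_A \co \bang A \to B \mathrlap{,} \qquad
g \circ \weak_A \co \bang A \to B \mathrlap{,}
\]
which interpret the structural rules of contraction and weakening on a hypothesis of the form $\bang A$.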
% The next lemma establishes the two-dimensional counterparts of the diagrams in \cite[Definition 3.1]{FioreM:difsmi}.
\begin{samepage}
\begin{lemma} \label{thm:con-and-weak-coalgebra-maps}
\leavevmode
\begin{enumerate}
\item For every $A \in \catK$,
% \noten{This is part of Definition 3.1 of Fiore, it is the outer diagram in (5)}
the map $\con_A \co \bang A \to \bang A \otimes \bang A$ is a pseudocoalgebra morphism, \ie we have an invertible 2-cell
\[
\begin{tikzcd}[column sep = large]
\bang A \ar[r, "\con_A"] \ar[dd, "\dig_A"']
\ar[ddr, phantom, description, "\Two \bar{\con}_A"]
& \bang A \otimes \bang A \ar[d, "\dig_A \otimes \dig_A"] \\
& \bbang A \otimes \bbang A \ar[d, "\monn_{\bang A, \bang A}"] \\
\bbang A \ar[r, "\bang \con_A"'] & \bang (\bang A \otimes \bang A) \mathrlap{,}
\end{tikzcd}
\]
such that the appropriate coherence conditions hold.
\item For every $A \in \catK$,
%\noten{This is part of Definition 3.1 of Fiore, just above (5)}
the map $\weak_A \co \bang A \to \unit$ is pseudocoalgebra morphism, \ie we have an invertible 2-cell
\begin{equation}
\label{thm:dig-and-weak}
\begin{tikzcd}[column sep = large]
\bang A \ar[r, "\weak_A"] \ar[d, "\dig_A"'] \ar[dr, phantom, description, "\Two \bar{\weak}_A"] & \unit \ar[d, "\moni"] \\
\bbang A \ar[r, "\bang{\weak}_A"'] & \bang \unit \mathrlap{,}
\end{tikzcd}
\end{equation}
such that the appropriate coherence conditions hold.
\end{enumerate}
\end{lemma}
\end{samepage}
\begin{proof} By the definitions of $\con_A$ and $\weak_A$ in \eqref{equ:con} and \eqref{equ:weak}, the claims are instances of the assertions that the components of the pseudonatural transformations~$\com$ and $e$
in~\eqref{equ:lin-exp-com-and-cou} are pseudocoalgebra morphisms, which are part of the axioms for a linear exponential pseudocomonad, \cf part~(i) of \cref{thm:unfold-linear-exponential}.
\end{proof}
The next proposition is a useful result since it allows us to leverage our earlier results on symmetric lax monoidal pseudocomonads to
exhibit a number of braided pseudocomonoid morphisms and pseudocomonoid 2-cells.
\begin{proposition} \leavevmode
\label{thm:coalg-mor-comon-mor}
\begin{enumerate}
\item Let $(A,a), (B, b)$ be pseudocoalgebras. Every pseudocoalgebra morphism $f \co (A,a) \to (B, b)$ is a braided pseudocomonoid morphism $f \co (A, \com_A, e_A) \to (B, \com_B, e_B)$.
\item Let $f \co A \to B$, $g \co A \to B$ be pseudocoalgebra morphisms. Every pseudocoalgebra 2-cell $\phi \co f \Rightarrow g$ is a pseudocomonoid 2-cell.
\end{enumerate}
\end{proposition}
\begin{proof} For (i), let $f \co (A,a) \to (B, b)$ be a pseudocoalgebra morphism. The 2-cells making $f$ into a braided pseudocomonoid morphism
are exactly the 2-cells in~\eqref{equ:d-psdnat} and \eqref{equ:e-psdnat} above. The four coherence conditions for a braided pseudocomonoid morphism
follow from the assumption that $\alpha$, $\lambda$, $\rho$ and $\gamma$ in~\eqref{equ:lin-exp-associativity}, \eqref{equ:lin-exp-unitality}, and \eqref{equ:lin-exp-braiding} are modifications, respectively.
For part~(ii), the two coherence conditions follow from the axiom describing the interaction between pseudonatural transformations and 2-cells for $\com$ and $e$, respectively.
Note that one has to use that $\phi$ is a pseudocoalgebra 2-cell to apply this axiom.
\end{proof}
\begin{samepage}
\begin{corollary} \label{thm:dig-comonoid-morphism}
\leavevmode
% \noten{Only in part in Fiore. This is (3.6.3) of Ehrhard.}
\begin{enumerate}
\item For every $A \in \catK$, the map $\dig_A \co \bang A \to \bbang A$ is
a braided pseudocomonoid morphism, \ie we have invertible 2-cells
% \noten{Inner square in equation (5) in Fiore.}
\[
\begin{tikzcd}[column sep = large]
\bang A
\ar[d, "\con_A"']
\ar[r, "\dig_A"]
\ar[dr, phantom, description, "\Two \bar{\dig}_A"] &
\bbang A
\ar[d, "\con_{\bang A}"] \\
\bang A \otimes \bang A
\ar[r, "\dig_A \otimes \dig_A"'] &
\bbang A \otimes \bbang A \mathrlap{,}
\end{tikzcd} \qquad
% \noten{This does not seem to be in Fiore. It is the first diagram of (3.6.3) in Ehrhard.}
\begin{tikzcd}
\bang A
\ar[d, "\weak_A"']
\ar[r, "\dig_A"]
\ar[dr, phantom, description, "\Two \tilde{\dig}_A"] &
\bbang A
\ar[d, "\weak_{\bang A}"] \\
\unit
\ar[r, "\id_\unit"'] &
\unit \mathrlap{,}
\end{tikzcd}
\]
such that the appropriate coherence conditions hold.
\item For every $A \in \catK$, the associativity constraint 2-cell $\pi_A$ of the pseudocomonad is a pseudocomonoid 2-cell.
\end{enumerate}
\end{corollary}
\end{samepage}
\begin{proof} For part~(i), $\dig_A \co \bang A \to \bbang A $ is the structure map of a pseudocoalgebra and so it is a pseudocoalgebra morphism.
The claim then follows by part~(i) of \cref{thm:coalg-mor-comon-mor}.
For part~(ii), $\pi_A$ is a pseudocoalgebra 2-cell and so the claim then follows by part~(ii) of \cref{thm:coalg-mor-comon-mor}.
% The second diagram follows also from \eqref{thm:dig-and-weak} and monoidality of weakening.
\end{proof}
\begin{samepage}
\begin{corollary} \label{thm:mu-comonoid-morphism} \leavevmode
\begin{enumerate}[(i)]
\item For every $A, B \in \catK$, the map $\monn_{A, B} \co \bang A \otimes \bang B \to \bang (A \otimes B)$ is a braided pseudocomonoid morphism, \ie we have invertible 2-cells
\[
\begin{gathered}
\begin{tikzcd}[column sep =huge]
\bang A \otimes \bang B
\ar[r, "\monn_{A,B}"]
\ar[d, "\con_A \otimes \con_B"']
\ar[ddr, phantom, description, "\Two \bar{\mon}_{A,B}"] &
\bang (A \otimes B)
\ar[dd, "\con_{A \otimes B}"] \\
\bang A \otimes \bang A \otimes \bang B \otimes \bang B
\ar[d, "\id_A \otimes \bra_{A,B} \otimes \id_B"'] & \\
\bang A \otimes \bang B \otimes \bang A \otimes \bang B
\ar[r, "\monn_{A,B} \otimes \monn_{A, B}"'] &
\bang (A \otimes B) \otimes \bang ( A \otimes B) \mathrlap{,}
\end{tikzcd} \\[2ex]
\begin{tikzcd}[column sep = huge]
\bang A \otimes \bang B
\ar[r, "\monn_{A,B}"]
\ar[d, "\weak_A \otimes \weak_B"']
\ar[dr, phantom, description, "\Two \tilde{\mon}_{A,B}"] &
\bang (A \otimes B)
\ar[d, "\weak_{A \otimes B}"] \\
\unit \ar[r, "\id"'] & \unit \mathrlap{,}
\end{tikzcd}
\end{gathered}
\]
such that the appropriate coherence conditions hold.
\item The map $\moni \co \unit \to \bang \unit$ is a braided pseudocomonoid morphism, \ie we have invertible 2-cells
\[
\begin{tikzcd}[column sep = large]
\unit
\ar[r, "\moni"]
\ar[d, "\id"']
\ar[dr, phantom, description, "\Two \bar{\mon}_\unit"] &
\bang \unit
\ar[d, "\con_\unit"] \\
\unit \otimes \unit
\ar[r, "\moni \otimes \moni"'] &
\bang \unit \otimes \bang \unit \mathrlap{,}
\end{tikzcd}
\qquad
\begin{tikzcd}
\unit
\ar[r, "\moni"]
\ar[d, "\id"']
\ar[dr, phantom, description, "\Two \tilde{\mon}_\unit"] &
\bang \unit
\ar[d, "\weak_\unit"] \\
\unit
\ar[r, "\id"'] &
\unit \mathrlap{,}
\end{tikzcd}
\]
such that the appropriate coherence conditions hold.
\end{enumerate}
\end{corollary}
\end{samepage}
\begin{proof} By \cref{thm:mu-are-coalgebra-morphisms} and \cref{thm:coalg-mor-comon-mor}.
\end{proof}
%The diagrams in the next proposition are the 2-dimensional counterparts of~\cite[Equation~(3.6.3)]{EhrhardT:intdll}. The first one corresponds to
% the inner/outer diagram in \cite[Equation~(5)]{FioreM:difsmi}.
\begin{corollary} \label{thm:to-be-added-2} \leavevmode
\begin{enumerate}
\item The 2-cell $\omega_{A, B, C}$ in \eqref{equ:lax-monoidal-associativity} is a pseudocomonoid 2-cell for all pseudocoalgebras $A, B, C$.
\item The 2-cells $\kappa_A$ and $\zeta_A$ in \eqref{equ:lax-monoidal-unitality} are pseudocomonoid 2-cells for every pseudocoalgebra $A$.
\item The 2-cell $\theta_{A,B}$ in \eqref{equ:lax-monoidal-braiding} is a pseudocomonoid 2-cell for all pseudocoalgebras $A, B$.
\item The 2-cell $\dig^2_{A,B}$ in \eqref{equ:comultiplication-monoidal-2} is a pseudocomonoid 2-cell for all pseudocoalgebras $A, B$.
\item The 2-cell $\dig^0$ in \eqref{equ:comultiplication-monoidal-0} is a pseudocomonoid 2-cell.
\end{enumerate}
\end{corollary}
\begin{proof} By \cref{thm:to-be-added-1} and part~(ii) of \cref{thm:coalg-mor-comon-mor}.
\end{proof}
%The diagrams in the next proposition are the counterpart of one of the diagrams
%that are part of~\cite[Definition~3.1, part~4]{FioreM:difsmi}, as spelled out in part in~\cite[Equation~(7)]{FioreM:difsmi}.
%They are also in~\cite[Equation~(3.6.3)]{EhrhardT:intdll}.
Next, we establish that contraction and weakening are braided monoidal pseudonatural transformations.
\begin{proposition}
\label{thm:contraction-is-monoidal}
The contraction pseudonatural transformation, with components on objects $\con_A \co \bang A \to \bang A \otimes \bang A$, is braided monoidal, \ie we have invertible modifications with components
\[
\begin{gathered}
\begin{tikzcd}[column sep = 2.5cm]
\bang A \otimes \bang B
\ar[rr, "\monn_{A, B}"]
\ar[d, "\con_A \otimes \con_B"']
\ar[drr, phantom, description, "\Two \con^2_{A,B}"] &
\bang (A \otimes B)
\ar[d, "\con_{A \otimes B}"] \\
\bang A \otimes \bang A \otimes \bang B \otimes \bang B
\ar[r, "\id_{\bang A} \otimes \bra_{\bang A, \bang B} \otimes \id_{\bang B}"'] &
\bang A \otimes \bang B \otimes \bang A \otimes \bang B
\ar[r, "\monn_{A,B} \otimes \monn_{A,B}"'] &
\bang (A \otimes B) \otimes \bang (A \otimes B) \mathrlap{,}
\end{tikzcd} \\[1ex]
\begin{tikzcd}[column sep = large]
\unit
\ar[r, "\moni"]
\ar[d, "\id"]
\ar[dr, phantom, description, "\Two \con^0"] &
\bang \unit\ar[d, "\con_{\unit}"] \\
\unit \otimes \unit
\ar[r, "\moni \otimes \moni"'] &
\bang \unit \otimes \bang \unit \mathrlap{,}
\end{tikzcd}
\end{gathered}
\]
such that the appropriate coherence conditions hold.
\end{proposition}
\begin{proof} By definition, $\con$ is obtained by whiskering the sylleptic lax monoidal pseudofunctor $\bang$ and the braided monoidal pseudonatural transformation $\com$, and therefore it is a braided monoidal pseudonatural transformation.
Explicitly, the 2-cells $\con^2_{A,B}$ are part of the data making $\monn_{A,B}$ into a comonoid morphism in part~(i) of \cref{thm:mu-comonoid-morphism} and the 2-cell $\con^0$ is part
of the data making $\moni$ into a comonoid morphism in part~(ii) of \cref{thm:mu-comonoid-morphism}.
\end{proof}
%The next proposition is the counterpart of part of \cite[Equation~(4)]{FioreM:difsmi} and
% in \cite[Equation~(3.6.3)]{EhrhardT:intdll}.
%Part~(i) is spelled out as \cite[Equation~(6)]{FioreM:difsmi}.
\begin{proposition}
\label{thm:weakening-is-monoidal}
% \noten{This is part of (4) in Definition 3.1 of Fiore. It is (3.6.1) in Ehrhard.}
The weakening pseudonatural transformation, with components on objects $\weak_A \co \bang A \to \unit$ is
braided monoidal, \ie we have invertible modifications with components
% \noten{This is spelled out in part as equation (6) in Fiore.}
\[
\begin{tikzcd}
\bang A \otimes \bang B
\ar[r, "\monn_{A,B}"]
\ar[d, "\weak_A \otimes \weak_B"']
\ar[dr, phantom, description, "\Two \weak^2_{A,B}"] &
\bang (A \otimes B)
\ar[d, "\weak_{A \otimes B}"] \\
\unit \otimes \unit
\ar[r, "\id_\unit"'] &
\unit \mathrlap{,}
\end{tikzcd} \qquad
\begin{tikzcd}
\unit
\ar[d, "\id_\unit"']
\ar[r, "\moni"]
\ar[dr, phantom, description, "\Two \weak^0"]&
\bang \unit
\ar[d, "\moni"] \\
\unit
\ar[r, "\id_\unit"'] &
\unit \mathrlap{,}
\end{tikzcd}
\]
such that the appropriate coherence conditions hold.
\end{proposition}
\begin{proof} By definition, the pseudonatural transformation $\weak$ is obtained by whiskering the sylleptic lax monoidal pseudofunctor $\bang$ and the braided monoidal pseudonatural transformation $e$. Therefore, it is
braided monoidal. The 2-cells can also be given explicitly using \cref{thm:mu-comonoid-morphism}.
\end{proof}
\begin{remark} \label{thm:compare-with-jacq} Our axioms for a linear exponential pseudocomonad imply those in \cite[Definition~90]{JacqC:catcnis}.
Indeed, for each object $A \in \catK$, the object $\bang A$ is a symmetric pseudocomonoid, the maps
$\con_A \co \bang A \to \bang A \otimes \bang A$ and $\weak_A \co \bang A \to \unit$ are pseudocoalgebra morphisms by \cref{thm:con-and-weak-coalgebra-maps},
and $\dig_A \co \bang A \to \bbang A$ is a morphism of braided pseudocomonoids by \cref{thm:dig-comonoid-morphism}. We have not checked yet whether
the converse implication holds.
\end{remark}
\subsection*{Cofree symmetric pseudocomonoids and linear exponential pseudocomonads} We now provide a method to construct linear exponential pseudocomonads
in the sense of \cref{thm:linear-exponential-comonad}, which we will use in our application in \cref{sec:prof}. We begin by considering the setting of a symmetric Gray monoid~$\catK$ and assume that it
admits the construction of cofree symmetric pseudocomonoids, in the sense that the forgetful 2-functor $U \co \SymCoMon{\catK} \to \catK$ has a right biadjoint~$F$, mapping
an object~$A \in \catK$ to the cofree symmetric pseudocomonoid on it, as in
\begin{equation}
\label{equ:cofree-ccmon}
\begin{tikzcd}[column sep = large]
\catK
\ar[r, shift left =2, "F"]
\ar[r, description, phantom, "\scriptstyle \top"] &
\SymCoMon{\catK} \mathrlap{.}
\ar[l, shift left = 2, "U"]
\end{tikzcd}
\end{equation}
Assuming further that the biadjunction in \eqref{equ:cofree-ccmon} is comonadic, in the sense that the canonical comparison 2-functor $K$ in
\[
\begin{tikzcd}[column sep = {2cm,between origins}]
\SymCoMon{\catK} \ar[rr, pos = (.4), "K"] \ar[dr, "U"'] & & \Em{\catK} \ar[dl, "U"] \\
& \catK &
\end{tikzcd}
\]
is a biequivalence, we obtain a linear exponential pseudocomonad, as stated in the next result.
\begin{theorem}
\label{thm:case-1}
Let $\catK$ be a symmetric Gray monoid. Assume that~$\catK$ admits the construction of cofree symmetric pseudocomonoids and that $\SymCoMon{\catK}$
is comonadic over $\catK$. Then the pseudocomonad on~$\catK$ determined by the forgetful-cofree biadjunction in \eqref{equ:cofree-ccmon} is a linear exponential pseudocomonad.
\end{theorem}
\begin{proof} By \cref{thm:ccmon-finprod}, the symmetric
monoidal structure of $\catK$ lifts to $\SymCoMon{\catK}$ in such a way that the forgetful 2-functor $U \co \SymCoMon{\catK} \to \catK$ is a strict sylleptic monoidal 2-functor.
Therefore its right biadjoint is automatically sylleptic lax monoidal by~\cite[Proposition~15]{DayStreet} and we obtain a symmetric lax monoidal pseudocomonad $\bang (-) \co \catK \to \catK$ by
a 2-categorical version of doctrinal adjunction~\cite[pages~62-63]{GarnerR:enrcfc}.
By comonadicity, $\Em{\catK}$ inherits finite products from $\SymCoMon{\catK}$. Thus, every pseudocoalgebra admits a canonical symmetric pseudocomonoid structure and the required
pseudonaturality axioms hold, because they hold trivially in $\SymCoMon{\catK}$.
\end{proof}
Our applications in \cref{sec:prof} involve a special case of \cref{thm:case-1} which we isolate for later reference. Let us now consider a symmetric Gray monoid $\catK$ that is compact closed, in the sense of~[74], and assume that it admits the construction of free symmetric pseudomonoids, in the sense that the forgetful 2-functor $U \co \SymMon{\catK} \to \catK$ has a left biadjoint $F$, as in
\begin{equation}
\label{equ:free-mon}
\begin{tikzcd}[column sep = large]
\catK
\ar[r, shift left =2, "F"]
\ar[r, description, phantom, "\scriptstyle \bot"] &
\CoMon{\catK} \mathrlap{.}
\ar[l,shift left = 2, "U"]
\end{tikzcd}
\end{equation}
Assuming further that the biadjunction in~\eqref{equ:free-mon} is monadic, we obtain again a linear exponential pseudocomonad, as stated in the final result of this section.
\begin{theorem}
\label{thm:case-2}
Let $\catK$ be a compact closed symmetric Gray monoid. Assume that $\catK$ admits the construction of free symmetric pseudomonoids and that $\CoMon{\catK}$
is monadic over $\catK$. Then the pseudocomonad $\bang(-) \co \catK \to \catK$ determined by duality from the pseudomonad $\wn(-) \co \catK \to \catK$ determined by the
forgetful-free biadjunction in~\eqref{equ:free-mon} is a linear exponential pseudocomonad.
\end{theorem}
\begin{proof} By the dual of \cref{thm:case-1}, the pseudomonad $\wn(-) \co \catK \to \catK$ is what we may call a linear exponential pseudomonad, \ie the dual notion of that of
a linear exponential pseudocomonad.
We can then exploit the duality that is available in compact closed bicategories to turn this linear exponential pseudomonad $\wn(-) \co \catK \to \catK$ into a linear exponential
pseudocomonad $\oc (-) \co \catK \to \catK$, as desired. For example, we define
\begin{equation}
\label{equ:wn-to-bang}
\bang A \defeq \big( \wn (A^\bot) \big)^\bot \mathrlap{,}
\end{equation}
for $A \in \catK$.
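More generally, the remaining pseudocomonad data may be obtained by dualising the monad data of $\wn(-)$: writing $\eta$ and $\mu$ for its unit and multiplication, one may set
\[
\der_A \defeq \big( \eta_{A^\bot} \big)^\bot \co \bang A \to A \mathrlap{,} \qquad
\dig_A \defeq \big( \mu_{A^\bot} \big)^\bot \co \bang A \to \bbang A \mathrlap{,}
\]
using the equivalences $(A^\bot)^\bot \simeq A$ and $\bbang A \simeq \big( \wn \wn (A^\bot) \big)^\bot$ available in a compact closed bicategory; we record these formulas only to indicate the pattern.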
\end{proof}
The linear exponential pseudocomonad constructed in \cref{thm:case-2} may be seen as a counterpart of the notion of a bi-exponential in \cite[Section~2.2]{HylandM:gluoml}.
Note that the application of \cref{thm:case-1} and \cref{thm:case-2} requires checking the appropriate comonadicity or monadicity assumptions. For this, one can appeal
to the existing monadicity results for pseudomonads~[17, 46] or, as we shall do in \cref{sec:prof}, proceed by direct inspection.
\section{Products and the Seely equivalences}
\label{sec:products}
We continue to work with our fixed symmetric Gray monoid $\catK$, but now we assume also that $\catK$ has finite products (in a bicategorical sense, following the convention set in \cref{sec:prelim}).
Recall that we write $A \with B$ for the binary product of $A, B \in \catK$, with projections $\pi_1 \co A \with B \to A$ and $\pi_2 \co A \with B \to B$,
and $\term$ for the terminal object. Let us also fix a linear exponential pseudocomonad $(\bang, \dig, \der)$ on $\catK$ as in~\cref{thm:linear-exponential-comonad}.
The next theorem relates the cartesian and monoidal structures on~$\catK$. Its proof was suggested by the observation in~[37]
that the Seely equivalences arise in a canonical way.
\begin{theorem} \label{thm:seely-equivalences-monoidal}
The 2-functor $\bang(\arghole) \co \catK \to \catK$ admits the structure of a sylleptic strong monoidal 2-functor
\[
(\bang, \seell, \seeli) \co (\catK, \with, \term) \to (\catK, \otimes, \unit) \mathrlap{.}
\]
\end{theorem}
\begin{proof} Since the pseudocomonad is lax monoidal, the 2-category of pseudocoalgebras $\Em{\catK}$ admits a symmetric monoidal structure by~\cref{thm:coalg-is-monoidal}. Since the pseudocomonad is a linear exponential pseudocomonad, the monoidal structure on $\Em{\catK}$ is cartesian by~\cref{thm:coalgebras-is-cartesian}. Therefore the product of two cofree pseudocoalgebras is their tensor
product, with its evident induced pseudocoalgebra structure.
Let us now consider the biadjunction~in~\eqref{equ:em-adjunction} between $\catK$ and $\Em{\catK}$, where the left biadjoint is the forgetful 2-functor and the right biadjoint $F$ sends an object of $\catK$ to the cofree pseudocoalgebra on it. Since right biadjoints preserve products, the evident maps
\begin{equation}
\label{equ:seely-key}
f_{A,B} \co F(A \with B) \to FA \otimes FB \mathrlap{,} \qquad f \co F(\term) \to \unit \mathrlap{,}
\end{equation}
are equivalences. Since $F \co (\catK, \with, \term) \to (\Em{\catK}, \otimes, \unit)$ preserves finite products, it is a sylleptic strong monoidal pseudofunctor
between the cartesian monoidal structures, \cf [16]. The forgetful pseudofunctor $U$ is a sylleptic strict monoidal pseudofunctor $U \co (\Em{\catK}, \otimes, \unit) \to (\catK, \otimes, \unit)$
by~\cref{thm:coalg-is-monoidal} and therefore the composite,
\[
\begin{tikzcd}
(\catK, \with, \term) \ar[r, "F"] &
(\Em{\catK}, \otimes, \unit) \ar[r, "U"] &
(\catK, \otimes, \unit) \mathrlap{,}
\end{tikzcd}
\]
is a sylleptic strong monoidal pseudofunctor. But this is exactly the underlying pseudofunctor of the pseudocomonad, as desired.
\end{proof}
We can unfold explicitly the definition of the constraints of the sylleptic strong monoidal 2-functor of \cref{thm:seely-equivalences-monoidal}, which we call
\emph{Seely equivalences}, since they are the 2-categorical counterparts of the Seely isomorphisms.
These have the form
\begin{equation}
\label{equ:seely-maps}
\seell_{A, B} \co \bang A \otimes \bang B \to \bang (A \with B) \mathrlap{,} \qquad \seeli \co \unit \to \bang \term \mathrlap{.}
\end{equation}
The map $\seell_{A,B}$ is the transpose of
\[
\begin{tikzcd}
\bang A \otimes \bang B \ar[r, "(\der_A \otimes \weak_B {,} \weak_A \otimes \der_B)"] &[5em]
(A \otimes \unit) \with (\unit \otimes B) \ar[r, "\id"] &
A \with B \mathrlap{,}
\end{tikzcd}
\]
across the biadjunction in~\eqref{equ:em-adjunction}, \ie the image of this map under the adjoint equivalence
\[
\catK[\bang A \otimes \bang B, A \with B] \simeq \Em{\catK}[ \bang A \otimes \bang B, \bang (A \with B)] \mathrlap{.}
\]
Thus, it is the essentially unique\footnote{{Cf.}~comments below~\eqref{equ:aux-equiv}.} pseudocoalgebra morphism from $\bang A \otimes \bang B$ to $\bang (A \with B)$ with an invertible 2-cell in
\[
\begin{tikzcd}[column sep = large]
\bang A \otimes \bang B \ar[r, "\seell_{A,B}"] \ar[dr, bend right = 20, {pos=.4}, "(\der_A \otimes \weak_B {,} \weak_A \otimes \der_B)"'] & \bang (A \with B)
\ar[d, "\der_{A \with B}"] \\
\phantom{} \ar[ur, phantom, {pos=.7}, "\Two"] & A \with B \mathrlap{.}
\end{tikzcd}
\]
Using the definition in~\eqref{equ:unfold-f-sharp}, we obtain that $\seell_{A,B}$ is the composite
\[
\begin{tikzcd}
\bang A \otimes \bang B
\ar[r, "\dig_A \otimes \dig_B"] &[2em]
\bbang A \otimes \bbang B
\ar[r, "\monn_{\bang A, \bang B}"] &
\bang (\bang A \otimes \bang B)
\ar[r, "\bang (\der_A \otimes \weak_B {,} \weak_A \otimes \der_B)"] &[5em]
\bang ( (A \otimes \unit) \with (\unit \otimes B) )
\ar[r, "\id"] &
\bang (A \with B) \mathrlap{.}
\end{tikzcd}
\]
Furthermore, the adjoint quasi-inverse ${\seell}^\bullet_{A,B}$ of $\seell_{A,B}$ is the canonical map $f_{A,B}$ in~\eqref{equ:seely-key}, \ie
\[
\begin{tikzcd}[column sep = large]
\bang (A \with B) \ar[r, "\con_{A \with B}"] &
\bang (A \with B) \otimes \bang (A \with B) \ar[r, "\bang \pi_1 \otimes \bang \pi_2"] &
\bang A \otimes \bang B \mathrlap{.}
\end{tikzcd}
\]
Similarly, $\seeli \co \unit \to \bang \term$ is the essentially unique pseudocoalgebra morphism from $\unit$ to $\bang \term$ with an invertible 2-cell in
\[
\begin{tikzcd}[column sep = large]
\unit \ar[r, "\seeli"] \ar[dr, bend right = 20] & \bang \term
\ar[d, "\der_\term "] \\
\phantom{} \ar[ur, phantom, {pos=.7}, "\Two"] & \term \mathrlap{.}
\end{tikzcd}
\]
Thus, it is the composite
\[
\begin{tikzcd}
\unit
\ar[r, "\moni"] &
\bang \unit
\ar[r] &
\bang \term \mathrlap{,}
\end{tikzcd}
\]
and its adjoint quasi-inverse ${\seeli}^\bullet$ is $\weak_\term \co \bang \term \to \unit$.
%These are exactly the formulas in~\cite[p.~170]{FioreM:difsmi} and their strong monoidality
%is mentioned just below~\cite[Definition~3.2]{FioreM:difsmi} and in \cite[Equations~(73), (74) and (75)]{MelliesPA:catsll}.
% In the next proposition, part~(i) is the counterpart of \cite[Proposition~3.1]{FioreM:difsmi} and part~(ii) is to prove part~(ii) of \cref{thm:cocontraction-coweakeneing-with-digging}.
\begin{corollary} \label{thm:lift-bang-to-comonoids} \leavevmode
The 2-functor $\bang(-) \co \catK \to \catK$ lifts to 2-categories of symmetric pseudocomonoids, as in
\[
\begin{tikzcd}
\SymCoMon{\catK, \with, \term} \ar[r, "\bang"] \ar[d] & \SymCoMon{\catK, \otimes, \unit} \ar[d] \\
\catK \ar[r, "\bang"'] &
\catK \mathrlap{.}
\end{tikzcd}
\]
\end{corollary}
\begin{proof} The 2-functor is sylleptic strong monoidal by \cref{thm:seely-equivalences-monoidal}
and therefore it lifts to 2-categories of symmetric pseudocomonoids by \cite[Proposition~16]{DayStreet}.
\end{proof}
We turn to study the lifted 2-functor of \cref{thm:lift-bang-to-comonoids}. Our goal is to show that it preserves finite products, which will be achieved in \cref{thm:lift-preserves-products}.
Let us begin with a couple of lemmas.
\begin{lemma} \leavevmode \label{thm:coh-digging-with-m}
\begin{enumerate}[(i)]
\item For every $A, B \in \catK$, the map $\seell_{A,B} \co \bang A \otimes \bang B \to \bang(A \with B)$ is a pseudocoalgebra morphism, \ie we have an invertible 2-cell
\[
\begin{tikzcd}[column sep = large]
\bang A \otimes \bang B
\ar[r, "\seell_{A,B}"]
\ar[d, "\dig_A \otimes \dig_B"']
\ar[ddr, phantom, description, "\Two \bar{\seel}^2_{A,B}"] &
\bang (A \with B)
\ar[dd, "\dig_{A \with B}"] \\
\bbang A \otimes \bbang B
\ar[d, "\monn_{\bang A, \bang B}"'] &
\\
\bang (\bang A \otimes \bang B)
\ar[r, "\bang \seell_{A,B}"'] &
\bbang (A \with B)
\end{tikzcd}
\]
which satisfies the appropriate coherence conditions.
\item The map $\seeli \co \unit \to \bang \term$ is a pseudocoalgebra morphism, \ie we have an invertible 2-cell
\[
\begin{tikzcd}
\unit
\ar[r, "\seeli"]
\ar[d, "\moni"']
\ar[dr, phantom, description, "\Two \bar{\seel}^0"] &
\bang \term
\ar[d, "\dig_{\term}"] \\
\bang \unit
\ar[r, "\bang \seeli"'] &
\bbang \term
\end{tikzcd}
\]
which satisfies the appropriate coherence conditions.
\end{enumerate}
\end{lemma}
\begin{proof} The Seely equivalences are, by construction, pseudocoalgebra morphisms. However, we can also provide an explicit definition of the required invertible 2-cells.
For part~(i), this is constructed as follows:
\[
\begin{tikzcd}[column sep = large]
% Row 1
\bang A \otimes \bang B
\ar[r, "\dig_A \otimes \dig_B"]
\ar[d, "\dig_A \otimes \dig_B"']
\ar[dr, phantom, description, "\Two \pi_A \otimes \pi_B"] &
\bbang A \otimes \bbang B
\ar[r, "\monn_{\bang A, \bang B}"]
\ar[d, "\dig_{\bang A} \otimes \dig_{\bang B}"]
\ar[ddr, phantom, description, "\Two \dig^2_{\bang A, \bang B}"] &
\bang ( \bang A \otimes \bang B)
\ar[rr, " \bang (\der_A \otimes \weak_B {,} \weak_A \otimes \der_B) "]
\ar[dd, "\dig_{\bang A \otimes \bang B}"]
\ar[ddrr, phantom, description, "\cong"]&
\bang (A \with B)
\ar[dd, "\dig_{A \with B}"] \\
% Row 2
\bbang A \otimes \bbang B
\ar[r, "\bang \dig_A \otimes \bang \dig_B"]
\ar[d, "\monn_{\bang A, \bang B}"']
\ar[dr, phantom, description, "\cong"] &
\bbbang A \otimes \bbbang B
\ar[d, "\monn_{\bbang A, \bbang B}"] &
\\
% Row 3
\bang (\bang A \otimes \bang B)
\ar[r, "\bang (\bang \dig_A \otimes \bang \dig_B)"'] &
\bang (\bbang A \otimes \bbang B)
\ar[r, "\bang \monn_{\bang A, \bang B}"'] &
\bbang (\bang A \otimes \bang B)
\ar[rr, "\bbang (\der_A \otimes \weak_B {,} \weak_A \otimes \der_B)"'] &
\bbang (A \with B) \mathrlap{.}
\end{tikzcd}
\]
For part~(ii), the required invertible 2-cell is given by
\[
\begin{tikzcd}[column sep = large]
\unit \ar[r, "\moni"] \ar[d, "\moni"']
\ar[dr, phantom, description, "\Two \dig^0"]
& \bang \unit \ar[d, "\dig_\unit"] \ar[r] \ar[dr, phantom, description, "\cong"]& \bang \term \ar[d, "\dig_\term"] \\
\bang \unit \ar[r, "\bang \moni"'] & \bbang \unit \ar[r] & \bbang \term \mathrlap{.}
\end{tikzcd}
\]
Here, $\dig^0$ is the 2-cell that is part of the data making $\mathsf{p}$ into a symmetric monoidal pseudocomonad, as
in part~(ii) of \cref{thm:sym-lax-monoidal-pseudocomonad}, and the other 2-cell is part of the pseudonaturality of $\dig$.
\end{proof}
\begin{lemma} \label{thm:seely-equiv-comonoid} \leavevmode
\begin{enumerate}
\item For every $A, B \in \catK$, the Seely equivalence $\seell_{A,B} \co \bang A \otimes \bang B \to \bang(A \with B) $ is a symmetric pseudocomonoid morphism.
\item The Seely equivalence $\seeli \co \unit \to \bang \term$ is a symmetric pseudocomonoid morphism.
\end{enumerate}
\end{lemma}
\begin{proof} Both claims follow by combining \cref{thm:id-is-comonoid-morphism}, \cref{thm:coalg-mor-comon-mor} and \cref{thm:coh-digging-with-m}.
\end{proof}
\begin{proposition} \label{thm:lift-preserves-products}
The lifted 2-functor $\bang (-) \co \SymCoMon{\catK, \with, \term} \to \SymCoMon{\catK, \otimes, \unit}$ preserves
finite products.
\end{proposition}
\begin{proof}
Recall that finite products in 2-categories of symmetric pseudocomonoids are given by the tensor product of the underlying objects. Therefore, we need to show that if $A$ and $B$
are symmetric pseudocomonoids with respect to $\with$, the symmetric pseudocomonoid $\bang (A \with B)$ is equivalent to the symmetric pseudocomonoid $\bang A \otimes \bang B$
in~$\SymCoMon{\catK}$ and that the symmetric pseudocomonoid $\bang \term$ is equivalent to the symmetric pseudocomonoid $\unit$.
These claims follow by \cref{thm:seely-equiv-comonoid}.
\end{proof}
The next proposition, which will be needed for \cref{thm:bang-A-bialgebra}, describes the action of the lifted pseudofunctor of \cref{thm:lift-bang-to-comonoids} in terms of the linear exponential pseudocomonad.
For this, recall from \cref{thm:prod-iff-all-ccmon} that the finite products of $\catK$ determine a symmetric pseudocomonoid structure on every object of $\catK$.
% The 2-cells constructed are the counterparts of~\cite[Equations~(8) and (9)]{FioreM:difsmi}, stated in terms of $\seell$ and $\seeli$.
\begin{proposition} \label{thm:comm-comonoids-coincide}
Let $A \in \catK$. The following are equivalent as symmetric pseudocomonoids in $(\catK, \otimes, \unit)$:
\begin{enumerate}
\item the symmetric pseudocomonoid on $\bang A$ obtained by
applying the sylleptic strong monoidal 2-functor of \cref{thm:seely-equivalences-monoidal} to the symmetric pseudocomonoid on~$A$ determined by the finite products in~$\catK$,
\item the symmetric pseudocomonoid on $\bang A$ determined by the linear exponential pseudocomonad.
\end{enumerate}
More precisely, the identity 1-cell on $\bang A$ is a braided pseudocomonoid morphism between the two structures, \ie we have invertible 2-cells
\[
\begin{tikzcd}
\bang A
\ar[r, "\id"]
\ar[d, "\bang \Delta_A"']
\ar[ddr, phantom, description, "\Two"] &
\bang A
\ar[dd, "\con_A"]
\\
\bang (A \with A)
\ar[d, "(\seell_{A,A})^{\bullet}"']
\\
\bang A \otimes \bang A
\ar[r, "\id"'] &
\bang A \otimes \bang A
\end{tikzcd} \qquad
\begin{tikzcd}
\bang A
\ar[r, "\id"]
\ar[d]
\ar[ddr, phantom, description, "\Two"] &
\bang A
\ar[dd, "\weak_A"] \\
\bang \term
\ar[d, "(\seeli)^\bullet"'] &
\\
\unit
\ar[r, "\id"'] &
\unit \mathrlap{,}
\end{tikzcd}
\]
satisfying coherence conditions.
\end{proposition}
\begin{proof} Since the cofree pseudocoalgebra 2-functor $F \co \catK \to \Em{\catK}$ is a right biadjoint, it preserves finite products. Thus, the claim follows from~\cref{thm:canonical-preserved}
via \cref{thm:canonical-is-linear-exponential} and the definition of $\con_A$ and $\weak_A$. Explicitly, the required 2-cell in the left-hand diagram can be obtained as the pasting diagram
\[
\begin{tikzcd}[column sep = huge]
\bang A
\ar[r, "\bang \Delta_A"]
\ar[d, "\con_A"']
\ar[dr, phantom, description, "\Two \bar{\con}_{\Delta_A}"] &[3em]
\bang (A \with A)
\ar[d, "\con_{A \with A}"] \\
\bang A \otimes \bang A
\ar[r, "\bang \Delta_A \otimes \bang \Delta_A"]
\ar[dr, bend right = 10, "\id_{\bang A \otimes \bang A}"']&
\bang (A \with A) \otimes \bang (A \with A)
\ar[d, "\bang \pi_1 \otimes \bang \pi_2"] \\
\phantom{} \ar[ur, phantom, {pos=.7}, "\Two"] &
\bang A \otimes \bang A \mathrlap{,}
\end{tikzcd}
\]
where the invertible 2-cell in the rectangle is given by pseudonaturality of $\con$ and the
invertible 2-cell
in the triangle is obtained by the cartesian structure. For the right-hand diagram,
unfolding the definitions, we need an invertible 2-cell
\[
\begin{tikzcd}
\bang A
\ar[r]
\ar[d, "\weak_A"']
\ar[dr, phantom, description, "\Two"]&
\bang \term
\ar[d, "\weak_\term"] \\
\unit
\ar[r, "\id_\unit"'] &
\unit \mathrlap{,}
\end{tikzcd}
\]
which is simply given by the pseudonaturality of $\weak$.
\end{proof}
The final result of this section, \cref{thm:coKleisli-cartesian-closed} below, is the two-dimensional counterpart of a fundamental
theorem on linear exponential comonads on symmetric monoidal categories with finite
products~[64]. In its statement, by a closed symmetric Gray monoid $\catK$, we mean
that, for every $A \in \catK$, the 2-functor $(-) \otimes A \co \catK \to \catK$ has a right
biadjoint. The action of such a right biadjoint on $B \in \catK$ will be written $A \linhom B$,
using again notation inspired by Linear Logic.
\begin{theorem} \label{thm:coKleisli-cartesian-closed} Assume that $\catK$ is closed and has finite products.
Then the Kleisli bicategory $\Kl{\catK}$ is a cartesian closed bicategory.
\end{theorem}
\begin{proof} The existence of finite products in $\Kl{\catK}$ was recalled in \cref{thm:coKleisli-cartesian}. For the closed structure,
the claim follows by~\cite[Theorem~14]{JacqC:catcnis} since the axioms for a linear exponential pseudocomonad in \cite[Definition~90]{JacqC:catcnis} are consequences
of the axioms for a linear exponential pseudocomonad used here, as noted in \cref{thm:compare-with-jacq}. Explicitly,
the exponential of $A, B$ in $\Kl \catK$ is defined by $A \Rightarrow B = \bang A \multimap B$. Indeed, we have the following chain of equivalences:
\begin{align*}
\Kl \catK[ X \with A, B] & = \catK[ \bang ( X \with A), B ] \\
& \simeq \catK[ \bang X \otimes \bang A, B] \\
& \simeq \catK[ \bang X, \bang A \multimap B ] \\
& = \Kl \catK[ X, A \Rightarrow B] \mathrlap{,}
\end{align*}
which can be proved to be pseudonatural.
\end{proof}
\begin{remark} \label{thm:compact-closed-hom}
In preparation for our applications in \cref{sec:prof}, we consider the special case where $\catK$ is compact closed, in the sense of~[74]. We write $A^\perp$ for the dual of an object $A \in \catK$. The internal hom of the closed structure is defined by letting $A \multimap B \defeq A^\perp \otimes B$. Therefore, the exponential objects in $\Kl \catK$ are given
by $A \Rightarrow B \defeq \bang(A^\perp) \otimes B$.
\end{remark}
\begin{remark} \label{thm:mellies-2-cell} We conjecture that the Kleisli biadjunction in~\eqref{equ:cokleisliadjunction} becomes a symmetric lax monoidal adjunction
when $\catK$ is considered as a symmetric Gray monoid with respect to $\otimes$, and $\Kl{\catK}$ is considered as a symmetric monoidal bicategory with respect
to $\with$, as in
\begin{equation*}
\begin{tikzcd}[column sep = large]
(\Kl{\catK}, \with, \term)
\ar[r, shift left =1, bend left =10, "K"]
\ar[r, description, phantom, "\scriptstyle \bot"] &
(\catK, \otimes, \unit) \mathrlap{.}
\ar[l, shift left = 1, bend left =10, "J"]
\end{tikzcd}
\end{equation*}
By the 2-categorical counterpart of Kelly's doctrinal adjunction theorem (see ~\cite[pages~62-63]{GarnerR:enrcfc} and~\cite[Propositions~2, 12 and~15]{DayStreet}), it is sufficient to show that
the left biadjoint $K$ is symmetric strong monoidal. This involves showing that the Seely equivalences $\seell$ are pseudonatural with respect to Kleisli maps. The candidate pseudonaturality 2-cells can be constructed as in \cite[Section~7.3.0.6]{MelliesPA:catsll}, making
essential use of invertible 2-cells of the form
\begin{equation}
\label{equ:mellies-2-cell}
\begin{tikzcd}[column sep = large]
\bang A \otimes \bang B
\ar[r, "\seell_{A,B}"]
\ar[dd, "\dig_A \otimes \dig_B"']
\ar[ddr, phantom, description, "\Two \sigma_{A, B}"] &
\bang (A \with B)
\ar[d, "\dig_{A \with B}"]
\\
& \bbang (A \with B)
\ar[d, "\bang {(\bang \pi_1, \bang \pi_2)}"]
\\
\bbang A \otimes \bbang B
\ar[r, "\seell_{\bang A, \bang B}"'] &
\bang (\bang A \with \bang B) \mathrlap{.}
\end{tikzcd}
\end{equation}
These can in turn be constructed using part~(i) of \cref{thm:coh-digging-with-m}.
As yet we have not fully verified the pseudonaturality conditions.
\end{remark}
% Section 6
\section{Biproducts, cocontraction and coweakening}
\label{sec:biproducts}
Let $\catK$ be a symmetric Gray monoid. We now assume that $\catK$ has biproducts, in the sense of \cref{thm:biproducts}. Recall that we write $A \oplus B$ for the
biproduct of $A, B \in \catK$ and $0$ for the zero object. As a consequence of having biproducts, $\catK$ admits a form of enrichment
over the category of symmetric monoidal categories. In particular, for every $A, B \in \catK$,
the hom-category $\catK[A, B]$ admits a symmetric monoidal structure, which we call
\emph{convolution}, in analogy with the one-dimensional case.
For $f \co A \to B$ and $g \co A \to B$, we define their convolution $f + g \co A \to B$ as the composite
\begin{equation}
\label{equ:convolution}
\begin{tikzcd}
A
\ar[r, "\Delta_A"] &
A \oplus A
\ar[r, "f \oplus g"] &
B \oplus B
\ar[r, "\nabla_B"] &
B \mathrlap{.}
\end{tikzcd}
\end{equation}
The unit of the symmetric monoidal structure is the \emph{zero morphism} $0_{A,B} \co A \to B$
defined as the composite
\begin{equation}
\label{equ:zero-map}
\begin{tikzcd}
A
\ar[r] &
0
\ar[r] &
B \mathrlap{.}
\end{tikzcd}
\end{equation}
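For instance, unfolding these definitions, the unit laws of the convolution structure are witnessed by canonical invertible 2-cells
\[
f + 0_{A,B} \; = \; \nabla_B \circ (f \oplus 0_{A,B}) \circ \Delta_A \; \simeq \; f \mathrlap{,}
\]
and symmetrically for $0_{A,B} + f$, obtained from the universal property of the biproduct.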
Let us now assume again that we have a linear exponential pseudocomonad $(\bang, \dig, \der)$ on $\catK$. As an instance of \cref{thm:seely-equivalences-monoidal}, the underlying pseudofunctor of the linear exponential pseudocomonad
acquires the structure of a sylleptic strong monoidal 2-functor $\bang (-) \co (\catK, \oplus, 0) \to (\catK, \otimes, \unit)$ and the Seely equivalences have the form
\begin{equation}
\label{equ:seely-with-biproducts}
\seel^{2}_{A,B} \co \bang A \otimes \bang B \to \bang (A \oplus B) \mathrlap{,} \qquad
\seeli \co \unit \to \bang 0 \mathrlap{.}
\end{equation}
Dinh Phu, Dao Duy, Daeyoung
Korea Advanced Institute of Science and Technology, Daejeon, Korea
# Channel-Partitioned Windowed Attention And Frequency Learning for Single
Image Super-Resolution
###### Abstract
Recently, window-based attention methods have shown great potential for
computer vision tasks, particularly in Single Image Super-Resolution (SISR).
However, they may fall short in capturing long-range dependencies and
relationships between distant tokens. Additionally, we find that learning in
the spatial domain does not convey the frequency content of the image, which
is a crucial aspect in SISR. To tackle these issues, we propose a new
Channel-Partitioned Attention Transformer (CPAT) to better capture long-range
dependencies by sequentially expanding windows along the height and width of
feature maps. In addition, we propose a novel Spatial-Frequency Interaction
Module (SFIM), which incorporates information from the spatial and frequency
domains to provide more comprehensive information from feature maps. This
includes information about the frequency content and enhances the receptive
field across the entire image. Experimental findings demonstrate the
effectiveness of our proposed modules and architecture. In particular, CPAT
surpasses current state-of-the-art methods by up to 0.31dB.
## 1 Introduction
Single Image Super-Resolution (SISR) is a low-level vision task that aims to
enhance a low-resolution (LR) image into a high-resolution (HR) image.
Initially, convolutional neural networks [Dong et al.(2015)Dong, Loy, He, and
Tang, Kim et al.(2016a)Kim, Lee, and Lee, Kim et al.(2016b)Kim, Lee, and Lee,
Zhang et al.(2018a)Zhang, Li, Li, Wang, Zhong, and Fu, Li et al.(2018)Li,
Fang, Mei, and Zhang] achieved outstanding results in SISR a few years ago.
However, they lack the capability to gather global contextual information, as
they mainly focus on nearby areas and might miss important connections between
distant regions. Recently, Transformers, which utilize self-attention
mechanisms, have excelled at modeling long-range dependencies, not only in
high-level vision tasks such as Image Captioning [Wang et
al.(2022)Wang, Xu, and Sun, He et al.(2020)He, Liao, Tavakoli, Yang,
Rosenhahn, and Pugeault], 3D-aware Image Synthesis [Sargent et
al.(2023)Sargent, Koh, Zhang, Chang, Herrmann, Srinivasan, Wu, and Sun],
$etc.$, but also with low-level vision tasks such as localization [Chen et
al.(2022a)Chen, Du, Yang, Beyer, Zhai, Lin, Chen, Li, Song, Wang, and Zhou],
segmentation [Huang et al.(2023)Huang, Wang, Wei, Huang, Shi, Liu, and Huang,
Chen et al.(2021b)Chen, Lu, Yu, Luo, Adeli, Wang, Lu, Yuille, and Zhou, Tran
et al.(2022)Tran, Nguyen, Pham, and Tran], $etc.$, including SISR [Chen et
al.(2021a)Chen, Wang, Guo, Xu, Deng, Liu, Ma, Xu, Xu, and Gao, Liang et
al.(2021)Liang, Cao, Sun, Zhang, Van Gool, and Timofte, Chen et
al.(2022b)Chen, Zhang, Gu, Kong, Yuan, et al., Chen et al.(2023a)Chen, Wang,
Zhou, Qiao, and Dong].
Although Transformers have shown great performance in SISR compared to CNN-based
methods, they still have limitations that need to be addressed. The use of
dense attention in IPT [Chen et al.(2021a)Chen, Wang, Guo, Xu, Deng, Liu, Ma,
Xu, Xu, and Gao] focuses on short token sequences that come from a dense area
of an image. As a result, the receptive field is restricted due to this
approach. SwinIR [Liang et al.(2021)Liang, Cao, Sun, Zhang, Van Gool, and
Timofte] employs a Swin Transformer as the main backbone, whose main drawback
is that it restricts the receptive field and thus limits the extraction of
global information. Although HAT [Chen et al.(2023a)Chen, Wang, Zhou,
Qiao, and Dong] is state-of-the-art in SISR, it still uses Swin Transformer,
thus limiting the extraction of global information. Therefore, the current
methods still can not fully exploit the potential of Transformer for SISR. On
the other hand, current methods in SISR mainly extract features from the
spatial domain without leveraging features extracted from the frequency
domain, which include valuable information and are beneficial for HR image
reconstruction.
In order to tackle the mentioned drawbacks and unlock more potential of
Transformer for SISR, we propose a novel architecture named Channel-
Partitioned Attention Transformer (CPAT), depicted in Fig. 1. A key component
in our CPAT is the new self-attention mechanism called Channel-Partitioned
Windowed Self-Attention (CPWin-SA) to better capture long-range information
and relationships between distant tokens. In addition, we also design a
Spatial-Frequency Interaction Module (SFIM) to integrate the spatial and
frequency domains to fully exploit the information from feature maps, thereby
boosting the quality of output images. Based on these designs, our method can
extract robust features to aid in reconstruction and achieve significant
improvements compared to the current methods.
Contributions: 1) We propose a novel Channel-Partitioned Windowed
Self-Attention (CPWin-SA), a robust feature extractor for better image
reconstruction. 2) We design a new Spatial-Frequency Interaction Module (SFIM)
to leverage features from both the spatial and frequency domains, improving
the model’s performance. 3) Our network outperforms the current
state-of-the-art methods for SISR.
## 2 Related Work
Deep Neural Networks for SISR. Dong _et al_ [Dong et al.(2015)Dong, Loy, He,
and Tang] conducted the first study utilizing deep learning for SISR, called
SRCNN, a simple yet effective three-layer CNN. Following SRCNN, CNNs
have been employed in subsequent studies to enhance SISR performance [Lim et
al.(2017)Lim, Son, Kim, Nah, and Mu Lee, Ledig et al.(2017)Ledig, Theis,
Huszár, Caballero, Cunningham, Acosta, Aitken, Tejani, Totz, Wang, et al.,
Zhang et al.(2018b)Zhang, Tian, Kong, Zhong, and Fu, Kim et al.(2016b)Kim,
Lee, and Lee]. Recently, Transformers have been used in various computer vision
applications, from localization [Rambhatla et al.(2023)Rambhatla, Misra,
Chellappa, and Shrivastava, Zhao et al.(2018)Zhao, Li, Zhao, and Feng] to
deeply understanding images [Jaderberg et al.(2015)Jaderberg, Simonyan,
Zisserman, et al., Sun et al.(2022)Sun, Zhou, Black, and Chandrasekaran, Zhu
et al.(2022)Zhu, Shah, and Chen, Shi et al.(2022)Shi, Jiang, Dai, and Schiele]
to sequence-based networks [Bluche(2016), Miech et al.(2017)Miech, Laptev, and
Sivic, Grechishnikova(2019)]. ViT [Dosovitskiy et al.(2021)Dosovitskiy, Beyer,
Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold,
Gelly, Uszkoreit, and Houlsby], proposed by Dosovitskiy _et al_, processes
input images by segmenting them into patches and projecting these patches into
sequential tokens as input to the Transformer module, achieving remarkable
results on high-level vision tasks. To reduce the high
computational cost of ViT, Liu _et al_ [Liu et al.(2021)Liu, Lin, Cao, Hu,
Wei, Zhang, Lin, and Guo] proposed a hierarchical transformer called Swin
Transformer, using self-attention over local windows instead of the entire
image like ViT. In the field of low-level vision tasks, such as SISR,
Transformer can also be used as a powerful backbone. IPT [Chen et
al.(2021a)Chen, Wang, Guo, Xu, Deng, Liu, Ma, Xu, Xu, and Gao] leveraged the
pre-trained transformer to improve performance for the image super-resolution
task. SwinIR [Liang et al.(2021)Liang, Cao, Sun, Zhang, Van Gool, and Timofte]
utilized Swin Transformer as a deep feature extractor and achieved impressive
results for image restoration. HAT [Chen et al.(2023a)Chen, Wang, Zhou, Qiao,
and Dong] proposed Hybrid Attention Transformer and Overlapping-cross
Attention, achieving state-of-the-art SISR performance. However, a common
limitation of these works is that they are restricted in capturing long-range
dependencies and may miss connections with distant tokens. Our proposed
Transformer handles this problem by enlarging the window size in the
window-based attention mechanism while remaining efficient for high-resolution
images.
Frequency domain in Computer Vision. The frequency domain is widely used in
digital signal processing [Trider(1978), Saxena and Singh(2005), Kim et
al.(2001)Kim, Kim, Lee, and Lim] and benefits the computer vision domain [He
et al.(2016)He, Chen, and Liu, Gohshi(2015), Cai et al.(2021)Cai, Ding, and
Lu, Wang et al.(2023)Wang, Jiang, Zhong, and Liu], such as image super-
resolution. Cai _et al_ proposed FreqNet [Cai et al.(2021)Cai, Ding, and Lu]
consisting of two main branches: spatial and frequency branches. FreqNet
transforms LR and HR images to the frequency domain using discrete cosine
transform (DCT) [Ahmed et al.(1974)Ahmed, Natarajan, and Rao]. DCT features
combine with spatial features, and then, the inverse DCT (iDCT) [Ahmed et
al.(1974)Ahmed, Natarajan, and Rao] is used to convert the feature maps back
to the spatial domain. FreqNet uses a dual branch and DCT transform from the
beginning, which increases the computational complexity. Wang _et al_ proposed
SFMNet [Wang et al.(2023)Wang, Jiang, Zhong, and Liu] for face super-
resolution (FSR). SFMNet uses Fourier transform [Brigham(1988)] in the
frequency domain to convert spatial features to the frequency domain to
capture the global facial structure and inverse Fourier transform
[Brigham(1988)] to convert frequency features to the spatial domain. Similar
to FreqNet, the frequency branch in SFMNet is complex, and the computational
cost is high because it computes the spatial-frequency cross-attention module
multiple times to combine the spatial and frequency domains. Our
Spatial-Frequency Interaction Module (SFIM) is designed to deal with this
problem. It is simple yet effectively leverages features (textures, edges,
$etc.$) from the frequency domain that may be missed in the spatial domain,
while the computational complexity does not increase significantly.
## 3 Methodology
### 3.1 The Overall Architecture
Figure 1: Architecture details. (a) The overall architecture of CPAT. (b)
Structure of Channel-Partitioned Windowed Self-Attention. (c) Structure of
Overlapping Cross-Attention Module. (d) Spatial-Frequency Interaction Module.
The overall architecture of Channel-Partitioned Attention Transformer (CPAT)
is shown in Fig. 1(a), which consists of three parts: Dimensionality Expansion
(DE), Complex Feature Learning (CFL), and Image Reconstruction (IR). We set
the input LR and output SR images as $I_{LR}\in\mathbb{R}^{H\times W\times
C_{in}}$ and $O_{SR}$ $\in\mathbb{R}^{H\times W\times C_{out}}$, where
$C_{in}$ and $C_{out}$ are the channel numbers of the LR and SR images,
respectively. First, Dimensionality Expansion transforms $I_{LR}$ from a low-
dimensional space to a high-dimensional space $I_{DE1},\;I_{DE2}$ as follows:
$I_{DE1}=F_{c3}(I_{LR}),\;I_{DE2}=F_{c2}(I_{LR}),$ (1)
where $I_{DE1}$ and $I_{DE2}$ are the outputs of DE; $F_{c3}$ is a 3x3 conv,
and $F_{c2}$ is a convolution stage (Convolution - Activation - Convolution).
Next, $I_{DE1}$ goes through Complex Feature Learning, which involves a series
of RWAGs to learn the complex and deep features. Each RWAG consists of several
CPWin-SA modules, an overlapping cross-attention module (OCAM), and a SFIM
module. The output of CFL $O_{CFL}$ is
$O_{CFL}=F_{SFIM}(F_{RWAG}^{L_{1}}(F_{RWAG}^{L_{1}-1}(...(F_{RWAG}^{1}(I_{DE1})+I_{DE2}))+I_{DE2})+I_{DE2}),$
(2)
where $F_{RWAG}^{i}$ and $F_{SFIM}$ represent the functions of the $i$-th RWAG and the SFIM
module, $i=1,2,...,L_{1}$, and $L_{1}$ is the number of RWAGs in CFL. Finally,
$O_{CFL}$ is passed to the Image Reconstruction (IR) to obtain the output
$O_{SR}$. IR includes convolution layers and PixelShuffle [Shi et
al.(2016)Shi, Caballero, Huszar, Totz, Aitken, Bishop, Rueckert, and Wang]. We
set the function of IR as $F_{IR}$, and we then have
$O_{SR}=F_{IR}(O_{CFL}+I_{DE1}).$ (3)
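To make the dataflow of Eqs. (1)-(3) concrete, below is a minimal PyTorch-style sketch of the overall pipeline. It is illustrative only: the RWAG internals are stubbed with plain convolutions, and every layer choice beyond those stated above (the 3x3 conv, the Conv-Activation-Conv stage, and the PixelShuffle-based reconstruction) is an assumption, not the exact implementation.

```python
import torch
import torch.nn as nn

class ConvStage(nn.Module):
    """F_c2 in Eq. (1): Convolution - Activation - Convolution."""
    def __init__(self, c_in, c_feat):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_feat, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(c_feat, c_feat, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

class CPATSkeleton(nn.Module):
    def __init__(self, c_in=3, c_feat=180, n_rwag=6, scale=4):
        super().__init__()
        self.de1 = nn.Conv2d(c_in, c_feat, 3, padding=1)  # F_c3 in Eq. (1)
        self.de2 = ConvStage(c_in, c_feat)                # F_c2 in Eq. (1)
        # Stand-ins for RWAG (CPWin-SA blocks + OCAM + SFIM in the paper).
        self.rwags = nn.ModuleList(
            [nn.Conv2d(c_feat, c_feat, 3, padding=1) for _ in range(n_rwag)]
        )
        self.ir = nn.Sequential(                          # F_IR in Eq. (3)
            nn.Conv2d(c_feat, c_feat * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(c_feat, c_in, 3, padding=1),
        )

    def forward(self, lr):
        de1, de2 = self.de1(lr), self.de2(lr)
        x = de1
        for rwag in self.rwags:   # Eq. (2): each stage re-injects I_DE2
            x = rwag(x) + de2
        return self.ir(x + de1)   # Eq. (3)

# Example: x4 SR on a 64x64 LR patch.
sr = CPATSkeleton()(torch.randn(1, 3, 64, 64))
print(sr.shape)  # torch.Size([1, 3, 256, 256])
```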
### 3.2 Channel-Partitioned Windowed Self-Attention (CPWin-SA)
Figure 2: Enhanced window strategy and One-Direction Shift Operation in V-EWin
and H-EWin
Channel-Partitioned Windowed Self-Attention (CPWin-SA) is a key component of
our method, described in Fig. 1(b). It consists of three different attention
mechanism types: Vertically Enhanced Window Attention (V-EWin), Horizontally
Enhanced Window Attention (H-EWin), and standard Windowed Multi-head
Self-Attention (W-MSA). We split the input feature maps along the channel
dimension into three equal parts, corresponding to the three attention types
mentioned above.
Enhanced Window Self-Attention. This refers to V-EWin and H-EWin. We extend
the squared windows along the vertical and horizontal directions of the input
feature maps, shown in Fig. 2. Specifically, the window size is extended from
$ws\times ws$ (squared window) to $H\times ws$ (V-EWin) or $ws\times W$
(H-EWin), where $ws$ is the window size ($ws<H$, $ws<W$). V-EWin and H-EWin
enhance attention areas by extending the window size, thereby increasing the
ability to extract global contextual information and relationships between
distant tokens. For simplicity, the mathematical descriptions below are used
for V-EWin, and similar formulations can be applied to H-EWin. With the input
feature $X\in\mathbb{R}^{H\times W\times\frac{C}{3}}$ (after splitting), we
compute attention $N$ times in parallel, where $N$ is the head number. We
partition $X$ into non-overlapping windows of size $H\times ws$ for each
attention head, then calculate the self-attention of the $i$-th window feature
as $X_{i}\in\mathbb{R}^{H\times ws\times\frac{C}{3}}$, $i$=1,…,$\frac{H\times
W}{H\times ws}$ for the $n$-th head,
$\begin{array}[]{l}Y_{i}^{n}=Attention(Q_{i}^{n},\;K_{i}^{n},\;V_{i}^{n})=Attention(X_{i}W_{n}^{Q},\;X_{i}W_{n}^{K},\;X_{i}W_{n}^{V}),\end{array}$
(4)
where $Y_{i}^{n}\in\mathbb{R}^{H\times ws\times D}$ denotes the attention
output of $X_{i}$ in the $n$-th head, $D$=$\frac{C}{3N}$ represents the
channel dimension in each head.
$W_{n}^{Q},\;W_{n}^{K},\;W_{n}^{V}\in\mathbb{R}^{\frac{C}{3}\times D}$ are the
projection matrices of query, key, and value, respectively, for the $n$-th head. We
use conditional position embedding (CPE) [Chu et al.(2023)Chu, Tian, Zhang,
Wang, and Shen] to add the spatial relationships into the network. The
attention feature $Y^{n}\in\mathbb{R}^{H\times W\times D}$ of $X$ is obtained
by calculating the attention operation on all $X_{i}$, then performing
reshaping and merging them in the order of division. We then concatenate the
outputs of all heads and combine them with a final weight matrix to obtain the
attention output of V-EWin:
$V{\operatorname{-}}EWin(x)=Concat(Y^{1},...,Y^{N-1},Y^{N})W^{p},$ (5)
where $W^{p}\in\mathbb{R}^{\frac{C}{3}\times\frac{C}{3}}$ denotes the
projection matrix for feature fusion.
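For illustration, a minimal PyTorch-style sketch of V-EWin follows; the tensor layout, the use of `nn.MultiheadAttention`, and the omission of CPE and attention masks are simplifying assumptions, not the exact implementation. H-EWin is obtained symmetrically with $ws\times W$ windows.

```python
import torch
import torch.nn as nn

class VEWin(nn.Module):
    """Sketch of Vertically Enhanced Window Attention: each window spans
    the full height of the feature map (size H x ws, Eqs. (4)-(5))."""

    def __init__(self, dim, ws=16, heads=6):
        super().__init__()
        self.ws = ws
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)  # W^p in Eq. (5)

    def forward(self, x):  # x: (B, H, W, C/3), with W divisible by ws
        b, h, w, c = x.shape
        # Partition into non-overlapping H x ws windows (one token sequence each).
        xw = x.view(b, h, w // self.ws, self.ws, c)
        xw = xw.permute(0, 2, 1, 3, 4).reshape(-1, h * self.ws, c)
        y, _ = self.attn(xw, xw, xw)  # self-attention within each window
        # Merge the windows back in the order of division.
        y = y.view(b, w // self.ws, h, self.ws, c)
        y = y.permute(0, 2, 1, 3, 4).reshape(b, h, w, c)
        return self.proj(y)

# Example: one branch with C = 180, i.e. C/3 = 60 channels per branch.
out = VEWin(dim=60)(torch.randn(1, 64, 64, 60))  # -> (1, 64, 64, 60)
```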
One-Direction Shift Operation. The window-based self-attention module causes a
lack of information linkage between windows, thereby reducing the modeling
power with distant tokens. Since we enhance the windows along the height and
width of the feature map, we propose a One-Direction Shift Operation instead
of the two-direction shift used in the Swin Transformer, while still ensuring
the Transformer’s modeling power, as detailed in Fig. 2. For V-EWin, we move
the windows to the left by a distance of $\frac{ws}{2}$ pixels, while for
H-EWin we move them downward, also by $\frac{ws}{2}$ pixels. Then, we use a
cyclic shift to complete the shift operation. After computing attention on the
shifted feature map, we revert the shift to restore the feature map to its
original order.
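The shift itself reduces to a cyclic roll of the feature map; a sketch, assuming a (B, C, H, W) layout, where the shift direction signs are our reading of Fig. 2:

```python
import torch

def one_direction_shift(x, ws, vertical_windows=True):
    """Cyclically shift features by ws // 2 pixels in a single direction:
    along the width for V-EWin, along the height for H-EWin."""
    dim = 3 if vertical_windows else 2   # x: (B, C, H, W)
    return torch.roll(x, shifts=-(ws // 2), dims=dim)

def revert_shift(x, ws, vertical_windows=True):
    """Undo the cyclic shift after attention, restoring the original order."""
    dim = 3 if vertical_windows else 2
    return torch.roll(x, shifts=ws // 2, dims=dim)
```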
Squared Window Self-Attention. For Squared Window Self-Attention, we utilize
Swin Transformer (W-MSA) [Liu et al.(2021)Liu, Lin, Cao, Hu, Wei, Zhang, Lin,
and Guo]. W-MSA is computed in the same way as the Enhanced Window
Self-Attention presented above. The only difference lies in the window size
being $ws\times ws$ instead of $H\times ws$ or $ws\times W$. Using squared
windows helps to focus on local features. Combining standard (squared) and
enhanced windows and computing self-attention in various window shapes
benefits datasets that contain many texture features in various directions.
Channel-Partitioned Windowed Self-Attention. CPWin-SA consists of three
attention modules (V-EWin, H-EWin, and W-MSA) and an MLP, which includes a
GELU [Hendrycks and Gimpel(2016)] activation function between two linear
projection layers. Because the Transformer has no inductive bias, we simply
use a depthwise convolution to add an inductive bias, aiming to improve the
performance of the Transformer. A Layer Norm layer is used before the
attention modules and the MLP. The entire process of CPWin-SA is as follows:
$\displaystyle\begin{split}X=LayerNorm(X_{in}),\;c=C/3,\end{split}$ (6a)
$\displaystyle\begin{split}X_{1}=V{\operatorname{-}}EWin(X[:,:,:c]),\;X_{2}=H{\operatorname{-}}EWin(X[:,:,c:2c]),\;X_{3}=W{\operatorname{-}}MSA(X[:,:,2c:]),\end{split}$
(6b)
$\displaystyle\begin{split}\hat{X}=Concat(X_{1},\;X_{2},\;X_{3})+X_{in}+DWConv(V),\end{split}$
(6c)
$\displaystyle\begin{split}X_{out}=MLP(LayerNorm(\hat{X}))+\hat{X},\end{split}$
(6d)
where $X_{in}$, $X_{out}$, and $C$ are CPWin-SA’s input features, output
features, and number of channels; DWConv and $V$ are the depthwise convolution
and the $value$ matrix. The Shift Operation is applied across two consecutive
Transformer modules to increase the interaction among non-overlapping windows.
The computational complexities of global MSA (self-attention computed on the
full feature map) and V-EWin are
$\displaystyle\begin{split}\mathcal{O}(Global{\operatorname{-}}MSA)=4HW(C/3)^{2}+2(HW)^{2}(C/3),\end{split}$ (7a)
$\displaystyle\begin{split}\mathcal{O}(V{\operatorname{-}}EWin)=4HW(C/3)^{2}+2H^{2}Wws(C/3),\end{split}$
(7b)
Assuming H=W (squared image) and $ws\ll H,\;C\ll H$, the computational
complexity of $V{\operatorname{-}}EWin$ is $\mathcal{O}(H^{2}Wws(C/3))$ =
$\mathcal{O}(H^{3})$ whereas $Global-MSA$ is $\mathcal{O}((HW)^{2}(C/3))$ =
$\mathcal{O}(H^{4})$. Therefore, our proposed Transformer can be applied to
high-resolution input images.
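The channel-partitioned block of Eqs. (6a)-(6d) can then be assembled as below. This is a hedged sketch: the three branch modules are passed in as callables on (B, H, W, C/3) tensors, the MLP expansion ratio is an assumed value, and the DWConv term of Eq. (6c) is applied to the block input rather than to the $value$ matrix, a simplification.

```python
import torch
import torch.nn as nn

class CPWinSA(nn.Module):
    """Sketch of CPWin-SA: channel split into three attention branches,
    residual connections, and an MLP (Eqs. (6a)-(6d))."""

    def __init__(self, dim, v_ewin, h_ewin, w_msa, mlp_ratio=2):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.branches = nn.ModuleList([v_ewin, h_ewin, w_msa])
        self.dwconv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, x):  # x: (B, H, W, C)
        z = self.norm1(x)  # Eq. (6a)
        c = z.shape[-1] // 3
        parts = [z[..., :c], z[..., c:2 * c], z[..., 2 * c:]]
        y = torch.cat([f(p) for f, p in zip(self.branches, parts)], dim=-1)
        lepe = self.dwconv(x.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        x_hat = y + x + lepe                         # Eq. (6c), simplified
        return self.mlp(self.norm2(x_hat)) + x_hat   # Eq. (6d)

# Example with identity branches, just to check shapes.
blk = CPWinSA(180, nn.Identity(), nn.Identity(), nn.Identity())
print(blk(torch.randn(1, 64, 64, 180)).shape)  # torch.Size([1, 64, 64, 180])
```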
### 3.3 Overlapping Cross-Attention Module (OCAM)
OCAM [Chen et al.(2023a)Chen, Wang, Zhou, Qiao, and Dong] enhances the
connections between windows by partitioning feature maps into overlapping
windows and calculating self-attention on each window to improve the
performance of the network. The structure of OCAM is depicted in Fig. 1(c).
Specifically, for $X_{Q}$, $X_{K}$, $X_{V}$ from the input feature $X$,
$X_{Q}$ is divided into non-overlapping windows of size $M\times M$, with
$\frac{HW}{M^{2}}$ being the total number of windows. $X_{K}$ and $X_{V}$ are
unfolded to $\frac{HW}{M^{2}}$ overlapping windows of size $M_{o}\times M_{o}$
($M_{o}>M$),
$M_{o}=(1+\alpha)\times M,$ (8)
where $\alpha$ is the overlapping ratio, set to 0.5. Then, self-attention is
computed within each window as in Eq. 4.
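The overlapping key/value windows can be produced with `F.unfold`; a sketch, assuming $H$ and $W$ are divisible by $M$ and a padding choice (ours) that centres each $M_o\times M_o$ window on the corresponding $M\times M$ query window:

```python
import torch
import torch.nn.functional as F

def overlapping_windows(x, m=16, alpha=0.5):
    """Unfold (B, C, H, W) features into HW/M^2 overlapping windows of
    size M_o x M_o with M_o = (1 + alpha) * M (Eq. (8))."""
    mo = int((1 + alpha) * m)
    pad = (mo - m) // 2            # assumes (mo - m) is even, e.g. M=16, M_o=24
    cols = F.unfold(x, kernel_size=mo, stride=m, padding=pad)
    b, _, n = cols.shape           # n = (H / M) * (W / M) windows
    c = x.shape[1]
    # (B, n_windows, M_o * M_o, C): one token sequence per overlapping window.
    return cols.view(b, c, mo * mo, n).permute(0, 3, 2, 1)

kv = overlapping_windows(torch.randn(1, 180, 64, 64))
print(kv.shape)  # torch.Size([1, 16, 576, 180])
```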
### 3.4 Spatial-Frequency Interaction Module (SFIM)
Spatial features may lack frequency information and fine-grained details that
are important for HR image reconstruction. To address these issues, we
carefully design a Spatial-Frequency Interaction Module (SFIM) shown in Fig.
1(d) to leverage the spatial and frequency domain features. SFIM consists of
two branches: a spatial branch and a frequency branch. The spatial branch is
helpful for extracting local spatial features. We denote the input of SFIM,
the output of SFIM, and the output of the spatial branch as $I_{SFIM}$,
$O_{SFIM}$, and $O_{SB}$, respectively; the spatial branch is then represented
as follows:
$\displaystyle\begin{split}O_{SB1}=F_{A}(F_{c1}(I_{SFIM})),\end{split}$ (9a)
$\displaystyle\begin{split}O_{SB}=Concat(F_{c3}(O_{SB1}[:,:C/2,:,:]),\;F_{c3}(O_{SB1}[:,C/2:,:,:]))+O_{SB1},\end{split}$
(9b)
where $F_{c1}$, $F_{c3}$, $F_{A}$, and $C$ are a 1x1 conv, a 3x3 conv, LeakyReLU
[Maas et al.(2013)Maas, Hannun, Ng, et al.], and the number of channels, respectively.
The frequency branch is used for capturing global structure and frequency
information. To convert spatial features into the frequency domain, we utilize
the Fast Fourier Transform (FFT) [Cooley and Tukey(1965)], and use inverse FFT
(iFFT) [Cooley and Tukey(1965)] to convert frequency features back into the
spatial domain. The FFT can capture global patterns, structures, and frequency
information in an image that might be less apparent in the spatial domain and
are essential for SR reconstruction. We denote the output of the frequency
branch as $O_{FB}$; the frequency branch can then be represented as follows:
$\displaystyle\begin{split}O_{FB1}=F_{A}(F_{c3}(I_{SFIM})),\end{split}$ (10a)
$\displaystyle\begin{split}O_{FB}=F_{c1}(F_{FD}(F_{c3}(O_{FB1}))+O_{FB1}),\end{split}$
(10b)
where $F_{FD}$ denotes the Freq Domain module. At the end, we combine the
outputs of the spatial and frequency branches to obtain the output feature of
SFIM, $O_{SFIM}$:
$O_{SFIM}=F_{c1}(Concat([O_{SB},O_{FB}])).$ (11)
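A minimal PyTorch-style sketch of SFIM following Eqs. (9)-(11) is given below. The interior of the Freq Domain block is not fully specified beyond the FFT/iFFT boundary, so the 1x1 convolutions on the real and imaginary parts are an illustrative assumption.

```python
import torch
import torch.nn as nn

class FreqDomain(nn.Module):
    """Freq Domain sketch: FFT -> 1x1 convs on real/imag parts -> inverse FFT."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2 * c, 2 * c, 1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(2 * c, 2 * c, 1),
        )

    def forward(self, x):
        _, _, h, w = x.shape
        f = torch.fft.rfft2(x, norm="ortho")
        f = self.body(torch.cat([f.real, f.imag], dim=1))
        re, im = f.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(re, im), s=(h, w), norm="ortho")

class SFIM(nn.Module):
    """SFIM sketch: spatial branch (Eq. (9)) + frequency branch (Eq. (10)),
    fused by a 1x1 convolution (Eq. (11)). Assumes an even channel count."""
    def __init__(self, c):
        super().__init__()
        self.act = nn.LeakyReLU(0.1, inplace=True)
        self.sb_in = nn.Conv2d(c, c, 1)                       # Eq. (9a)
        self.sb_a = nn.Conv2d(c // 2, c // 2, 3, padding=1)   # Eq. (9b)
        self.sb_b = nn.Conv2d(c // 2, c // 2, 3, padding=1)
        self.fb_in = nn.Conv2d(c, c, 3, padding=1)            # Eq. (10a)
        self.fb_mid = nn.Conv2d(c, c, 3, padding=1)
        self.freq = FreqDomain(c)
        self.fb_out = nn.Conv2d(c, c, 1)                      # Eq. (10b)
        self.fuse = nn.Conv2d(2 * c, c, 1)                    # Eq. (11)

    def forward(self, x):
        s1 = self.act(self.sb_in(x))                          # O_SB1
        c = s1.shape[1] // 2
        sb = torch.cat([self.sb_a(s1[:, :c]), self.sb_b(s1[:, c:])], 1) + s1
        f1 = self.act(self.fb_in(x))                          # O_FB1
        fb = self.fb_out(self.freq(self.fb_mid(f1)) + f1)     # O_FB
        return self.fuse(torch.cat([sb, fb], dim=1))          # O_SFIM

print(SFIM(180)(torch.randn(1, 180, 64, 64)).shape)  # (1, 180, 64, 64)
```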
## 4 Experiments
### 4.1 Experimental Settings
We use the DF2K (DIV2K [Agustsson and Timofte(2017)]+Flickr2K [Lim et
al.(2017)Lim, Son, Kim, Nah, and Mu Lee]) dataset as the training set for a
fair comparison with other methods. For evaluating our model, we use five
benchmark datasets: Set5 [Bevilacqua et al.(2012)Bevilacqua, Roumy, Guillemot,
and Alberi-Morel], Set14 [Zeyde et al.(2010)Zeyde, Elad, and Protter], BSD100
[Martin et al.(2001)Martin, Fowlkes, Tal, and Malik], Urban100 [Huang et
al.(2015)Huang, Singh, and Ahuja], and Manga109 [Matsui et al.(2016)Matsui,
Ito, Aramaki, Fujimoto, Ogawa, Yamasaki, and Aizawa]. Furthermore, we use peak
signal-to-noise ratio (PSNR) [Wang et al.(2004)Wang, Bovik, Sheikh, and
Simoncelli] and the structural similarity index measure (SSIM) [Wang et
al.(2004)Wang, Bovik, Sheikh, and Simoncelli] as quantitative metrics, which
are calculated on the Y channel in YCbCr space. For the network structure, we
set the number of RWAGs and CPWin-SA modules to 6, the channel number to 180,
and the window size to 16. The overlapping ratio in OCAM remains at 0.5 as
in [Chen et al.(2023a)Chen, Wang, Zhou, Qiao, and Dong]. In testing, we apply
the self-ensemble strategy similar to [Timofte et al.(2016)Timofte, Rothe, and Van
Gool], calling the resulting model CPAT† as in [Zhang et al.(2018a)Zhang, Li, Li,
Wang, Zhong, and Fu, Liang et al.(2021)Liang, Cao, Sun, Zhang, Van Gool, and
Timofte, Chen et al.(2023b)Chen, Zhang, Gu, Kong, Yang, and Yu]. We use a
patch size of $64\times 64$ and a batch size of 32 during training. We simply
use the L1 loss and the Adam optimizer [Kingma and Ba(2014)] to optimize the
models for 500K iterations.
### 4.2 Ablation Study
Following [Chen et al.(2022b)Chen, Zhang, Gu, Kong, Yuan, et al., Chen et
al.(2023b)Chen, Zhang, Gu, Kong, Yang, and Yu], we train the x2 SR model on DF2K
(DIV2K [Agustsson and Timofte(2017)]+Flickr2K [Lim et al.(2017)Lim, Son, Kim,
Nah, and Mu Lee]), and test on Urban100 [Huang et al.(2015)Huang, Singh, and
Ahuja] for all experiments in this section. FLOPs are calculated on a 256x256
HR image. Results are reported in Tabs. 1, 2, 3, and 4, and the better results are
shown in bold.
Structure | PSNR | SSIM | FLOPs
---|---|---|---
Squared wins | 34.03 | 0.9438 | 324.8G
Enhanced wins | 34.26 | 0.9448 | 329.0G
Table 1: Effect of the enhanced windows.
Structure | PSNR | SSIM | FLOPs
---|---|---|---
w/o shift | 34.13 | 0.9440 | 329.0G
w/ shift | 34.26 | 0.9448 | 329.0G
Table 2: Effect of One-Direction Shift Operation.
Structure | PSNR | SSIM | FLOPs
---|---|---|---
w/o SFIM | 34.14 | 0.9442 | 256.2G
w/ SFIM | 34.26 | 0.9448 | 329.0G
Table 3: Effect of SFIM module.
Effect of the enhanced window. Tab. 1 shows the effectiveness of the enhanced
window in V-EWin and H-EWin compared to the squared window in the Swin
Transformer. PSNR when using the enhanced window is 34.26dB, compared to
34.03dB with the squared window. These results show that the standard
window-based transformer has yet to fully utilize its potential for SISR,
whereas our method shows significant improvement while the computational
complexity does not increase much (329.0G FLOPs compared to 324.8G). These
results also indicate that
transformer-based methods remain a promising research direction in SISR.
Effect of shift operation. Tab. 2 shows the effect of the One-Direction Shift
Operation. When it is applied, the PSNR is 34.26dB, higher than the 34.13dB
obtained without it. Additionally, SSIM also increases from 0.9440 to 0.9448.
Our shift operation helps capture attention correlations between different
window partitions, thereby enhancing the performance of CPWin-SA and our
network in general.
Structure | PSNR | SSIM | FLOPs
---|---|---|---
w/o Freq Domain | 34.03 | 0.9434 | 321.4G
w/ Freq Domain | 34.26 | 0.9448 | 329.0G
Table 4: Effect of Freq Domain module in SFIM
Effect of SFIM. Tab. 3 shows the effectiveness of SFIM. We replace the SFIM
module with a 3x3 conv (we call it "w/o SFIM") and observe how it affects
PSNR/SSIM. With SFIM, PSNR is 34.26dB, while without SFIM it is 34.14dB. By
leveraging information from the frequency domain, SFIM boosts the network’s
performance compared to using convolution only, which provides only spatial
features.
Effect of Freq Domain module on SFIM’s effectiveness. To show the
effectiveness of frequency features, we conduct an experiment by removing the
Freq Domain module from the SFIM module. This means that the SFIM module will
solely work with the spatial domain. The results are reported in Tab. 4, where
it is evident that SFIM operates significantly less effectively when the Freq
Domain module is removed. By combining both spatial and frequency domains in
the SFIM module, comprehensive features are extracted, thereby enhancing the
performance of image reconstruction. Specifically, SFIM with Freq Domain
achieves 34.26dB, whereas, without Freq Domain, it achieves 34.03dB. This also
demonstrates the potential of combining spatial and frequency domains for
other vision tasks.
Method | Scale | Set5 [Bevilacqua et al.(2012)Bevilacqua, Roumy, Guillemot, and Alberi-Morel] | Set14 [Zeyde et al.(2010)Zeyde, Elad, and Protter] | BSD100 [Martin et al.(2001)Martin, Fowlkes, Tal, and Malik] | Urban100 [Huang et al.(2015)Huang, Singh, and Ahuja] | Manga109 [Matsui et al.(2016)Matsui, Ito, Aramaki, Fujimoto, Ogawa, Yamasaki, and Aizawa]
---|---|---|---|---|---|---
PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM
EDSR [Lim et al.(2017)Lim, Son, Kim, Nah, and Mu Lee] | x2 | 38.11 | 0.9601 | 33.92 | 0.9195 | 32.32 | 0.9013 | 32.93 | 0.9351 | 39.10 | 0.9773
RCAN [Zhang et al.(2018a)Zhang, Li, Li, Wang, Zhong, and Fu] | 38.27 | 0.9614 | 34.12 | 0.9216 | 32.41 | 0.9027 | 33.34 | 0.9384 | 39.44 | 0.9786
NLSA [Mei et al.(2021)Mei, Fan, and Zhou] | 38.34 | 0.9618 | 34.08 | 0.9231 | 32.43 | 0.9027 | 33.42 | 0.9394 | 39.59 | 0.9789
ELAN [Zhang et al.(2022)Zhang, Zeng, Guo, and Zhang] | 38.36 | 0.9620 | 34.20 | 0.9228 | 32.45 | 0.9030 | 33.44 | 0.9391 | 39.62 | 0.9793
IPT∗ [Chen et al.(2021a)Chen, Wang, Guo, Xu, Deng, Liu, Ma, Xu, Xu, and Gao] | 38.37 | - | 34.43 | - | 32.48 | - | 33.76 | - | - | -
RCAN-it [Lin et al.(2022)Lin, Garg, Banerjee, Magid, Sun, Zhang, Van Gool, Wei, and Pfister] | 38.37 | 0.9620 | 34.49 | 0.9250 | 32.48 | 0.9034 | 33.62 | 0.9410 | 39.88 | 0.9799
SwinIR [Liang et al.(2021)Liang, Cao, Sun, Zhang, Van Gool, and Timofte] | 38.42 | 0.9623 | 34.46 | 0.9250 | 32.53 | 0.9041 | 33.81 | 0.9427 | 39.92 | 0.9797
CAT-A [Chen et al.(2022b)Chen, Zhang, Gu, Kong, Yuan, et al.] | 38.51 | 0.9626 | 34.78 | 0.9265 | 32.59 | 0.9047 | 34.26 | 0.9440 | 40.10 | 0.9805
DAT [Chen et al.(2023b)Chen, Zhang, Gu, Kong, Yang, and Yu] | 38.58 | 0.9629 | 34.81 | 0.9272 | 32.61 | 0.9051 | 34.37 | 0.9458 | 40.33 | 0.9807
HAT [Chen et al.(2023a)Chen, Wang, Zhou, Qiao, and Dong] | 38.63 | 0.9630 | 34.86 | 0.9274 | 32.62 | 0.9053 | 34.45 | 0.9466 | 40.26 | 0.9809
CPAT (Ours) | 38.68 | 0.9633 | 34.91 | 0.9277 | 32.64 | 0.9056 | 34.76 | 0.9481 | 40.48 | 0.9814
CPAT† (Ours) | 38.72 | 0.9635 | 34.97 | 0.9280 | 32.66 | 0.9058 | 34.89 | 0.9487 | 40.59 | 0.9816
EDSR [Lim et al.(2017)Lim, Son, Kim, Nah, and Mu Lee] | x3 | 34.65 | 0.9280 | 30.52 | 0.8462 | 29.25 | 0.8093 | 28.80 | 0.8653 | 34.17 | 0.9476
RCAN [Zhang et al.(2018a)Zhang, Li, Li, Wang, Zhong, and Fu] | 34.74 | 0.9299 | 30.65 | 0.8482 | 29.32 | 0.8111 | 29.09 | 0.8702 | 34.44 | 0.9499
NLSA [Mei et al.(2021)Mei, Fan, and Zhou] | 34.85 | 0.9306 | 30.70 | 0.8485 | 29.34 | 0.8117 | 29.25 | 0.8726 | 34.57 | 0.9508
ELAN [Zhang et al.(2022)Zhang, Zeng, Guo, and Zhang] | 34.90 | 0.9313 | 30.80 | 0.8504 | 29.38 | 0.8124 | 29.32 | 0.8745 | 34.73 | 0.9517
IPT∗ [Chen et al.(2021a)Chen, Wang, Guo, Xu, Deng, Liu, Ma, Xu, Xu, and Gao] | 34.81 | - | 30.85 | - | 29.38 | - | 29.49 | - | - | -
RCAN-it [Lin et al.(2022)Lin, Garg, Banerjee, Magid, Sun, Zhang, Van Gool, Wei, and Pfister] | 34.86 | 0.9308 | 30.76 | 0.8505 | 29.39 | 0.8125 | 29.38 | 0.8755 | 34.92 | 0.9520
SwinIR [Liang et al.(2021)Liang, Cao, Sun, Zhang, Van Gool, and Timofte] | 34.97 | 0.9318 | 30.93 | 0.8534 | 29.46 | 0.8145 | 29.75 | 0.8826 | 35.12 | 0.9537
CAT-A [Chen et al.(2022b)Chen, Zhang, Gu, Kong, Yuan, et al.] | 35.06 | 0.9326 | 31.04 | 0.8538 | 29.52 | 0.8160 | 30.12 | 0.8862 | 35.38 | 0.9546
DAT [Chen et al.(2023b)Chen, Zhang, Gu, Kong, Yang, and Yu] | 35.16 | 0.9331 | 31.11 | 0.8550 | 29.55 | 0.8169 | 30.18 | 0.8886 | 35.59 | 0.9554
HAT [Chen et al.(2023a)Chen, Wang, Zhou, Qiao, and Dong] | 35.07 | 0.9329 | 31.08 | 0.8555 | 29.54 | 0.8167 | 30.23 | 0.8896 | 35.53 | 0.9552
CPAT (Ours) | 35.16 | 0.9334 | 31.15 | 0.8557 | 29.56 | 0.8174 | 30.52 | 0.8923 | 35.66 | 0.9559
CPAT† (Ours) | 35.19 | 0.9335 | 31.19 | 0.8559 | 29.59 | 0.8177 | 30.63 | 0.8934 | 35.77 | 0.9563
EDSR [Lim et al.(2017)Lim, Son, Kim, Nah, and Mu Lee] | x4 | 32.46 | 0.8968 | 28.80 | 0.7876 | 27.71 | 0.7420 | 26.64 | 0.8033 | 31.02 | 0.9148
RCAN [Zhang et al.(2018a)Zhang, Li, Li, Wang, Zhong, and Fu] | 32.63 | 0.9002 | 28.87 | 0.7889 | 27.77 | 0.7436 | 26.82 | 0.8087 | 31.22 | 0.9173
NLSA [Mei et al.(2021)Mei, Fan, and Zhou] | 32.59 | 0.9000 | 28.87 | 0.7891 | 27.78 | 0.7444 | 26.96 | 0.8109 | 31.27 | 0.9184
ELAN [Zhang et al.(2022)Zhang, Zeng, Guo, and Zhang] | 32.75 | 0.9022 | 28.96 | 0.7914 | 27.83 | 0.7459 | 27.13 | 0.8167 | 31.68 | 0.9226
IPT∗ [Chen et al.(2021a)Chen, Wang, Guo, Xu, Deng, Liu, Ma, Xu, Xu, and Gao] | 32.64 | - | 29.01 | - | 27.82 | - | 27.26 | - | - | -
RCAN-it [Lin et al.(2022)Lin, Garg, Banerjee, Magid, Sun, Zhang, Van Gool, Wei, and Pfister] | 32.69 | 0.9007 | 28.99 | 0.7922 | 27.87 | 0.7459 | 27.16 | 0.8168 | 31.78 | 0.9217
SwinIR [Liang et al.(2021)Liang, Cao, Sun, Zhang, Van Gool, and Timofte] | 32.92 | 0.9044 | 29.09 | 0.7950 | 27.92 | 0.7489 | 27.45 | 0.8254 | 32.03 | 0.9260
CAT-A [Chen et al.(2022b)Chen, Zhang, Gu, Kong, Yuan, et al.] | 33.08 | 0.9052 | 29.18 | 0.7960 | 27.99 | 0.7510 | 27.89 | 0.8339 | 32.39 | 0.9285
DAT [Chen et al.(2023b)Chen, Zhang, Gu, Kong, Yang, and Yu] | 33.08 | 0.9055 | 29.23 | 0.7973 | 28.00 | 0.7515 | 27.87 | 0.8343 | 32.51 | 0.9291
HAT [Chen et al.(2023a)Chen, Wang, Zhou, Qiao, and Dong] | 33.04 | 0.9056 | 29.23 | 0.7973 | 28.00 | 0.7517 | 27.97 | 0.8368 | 32.48 | 0.9292
CPAT (Ours) | 33.19 | 0.9069 | 29.34 | 0.7991 | 28.04 | 0.7527 | 28.22 | 0.8408 | 32.69 | 0.9309
CPAT† (Ours) | 33.24 | 0.9071 | 29.36 | 0.7996 | 28.06 | 0.7532 | 28.33 | 0.8425 | 32.85 | 0.9318
Table 5: Quantitative comparison with state-of-the-art methods. The best,
second-best, and third-best results are marked in red, blue, and green colors,
respectively. “$\dagger$” indicates that self-ensemble is used. IPT∗ [Chen et
al.(2021a)Chen, Wang, Guo, Xu, Deng, Liu, Ma, Xu, Xu, and Gao] is trained on
ImageNet.
Figure 3: LAM [Gu and Dong(2021)] and DI [Gu and Dong(2021)] comparison
results.
### 4.3 Comparison with State-of-the-Art Methods
Quantitative results. Tab. 5 reports the quantitative comparison of CPAT with
different state-of-the-art methods including EDSR [Lim et al.(2017)Lim, Son,
Kim, Nah, and Mu Lee], RCAN [Zhang et al.(2018a)Zhang, Li, Li, Wang, Zhong,
and Fu], NLSA [Mei et al.(2021)Mei, Fan, and Zhou], ELAN [Zhang et
al.(2022)Zhang, Zeng, Guo, and Zhang], RCAN-it [Lin et al.(2022)Lin, Garg,
Banerjee, Magid, Sun, Zhang, Van Gool, Wei, and Pfister], SwinIR [Liang et
al.(2021)Liang, Cao, Sun, Zhang, Van Gool, and Timofte], CAT-A [Chen et
al.(2022b)Chen, Zhang, Gu, Kong, Yuan, et al.], DAT [Chen et al.(2023b)Chen,
Zhang, Gu, Kong, Yang, and Yu], and HAT [Chen et al.(2023a)Chen, Wang, Zhou,
Qiao, and Dong]. Our method surpasses the current methods on all benchmark
datasets and at all scales. The highest increase achieved is 0.31dB, on
Urban100 for x2 SR, when compared with HAT. CPAT improves by more than 0.7dB
over SwinIR, which uses the Swin Transformer as its central backbone, at all
scales. With the self-ensemble strategy in testing, CPAT† performs better than
CPAT, but the inference time is much longer, which is impractical for
high-resolution input images. All quantitative results demonstrate that
enhancing the windows
along the height and width of feature maps instead of using the squared
windows when computing attention in CPWin-SA and leveraging frequency features
in SFIM are very effective for improving the quality of the SR image.
Figure 4: Qualitative comparison (x4 SR). The patch images being compared are
the green boxes in the HR images. PSNR/SSIM is also computed correspondingly
on these patches to demonstrate the improvement of our method.
Qualitative results. Local Attribution Map (LAM) [Gu and Dong(2021)] and
Diffusion Index (DI) [Gu and Dong(2021)] comparisons are shown in Fig. 3. LAM
highlights which pixels in the LR image contribute when upscaling the patches
marked with green boxes. DI reflects the range of pixels utilized: a higher DI
indicates that a wider range of pixels is involved in upscaling.
LAM and DI results show the superiority of our method over other methods. The
visual comparison is shown in Fig. 4 with "img_16" and "img_49" from
Urban100, and "UchuKigekiM774" from Manga109. Our method can enhance the
details of the LR image more clearly, with less blur, and higher PSNR/SSIM
compared to other methods. All qualitative results show the effectiveness of
our method for SISR. More details on qualitative results, self-ensemble
strategy, and lightweight version of CPAT can be found in the $supp.$ file.
## 5 Conclusions
In this study, we propose a novel Channel-Partitioned Attention Transformer
(CPAT) for SISR. This Transformer consists of V-EWin, H-EWin, and W-MSA
attentions. V-EWin and H-EWin enhance windows along the height and width of
input features to better capture long-range dependencies and relationships
between distant tokens. We also use squared window-based attention in CPAT,
which focuses on local features. We calculate self-attention in various window
shapes to apply to datasets that contain texture features in various
directions (_e.g_., Urban100 [Huang et al.(2015)Huang, Singh, and Ahuja]).
Additionally, we propose Spatial-Frequency Interaction Module (SFIM), which is
simple yet effectively leverages features (patterns, textures, edges, $etc.$)
from the frequency domain that might be less apparent in the spatial domain.
Integrating frequency spatial features helps to achieve comprehensive
information from feature maps that is important for HR image reconstruction.
Based on the proposals above, our method outperforms the current methods in
both quantitative and qualitative results.
## References
* [Agustsson and Timofte(2017)] E. Agustsson and R. Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In _2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_ , pages 1122–1131, Los Alamitos, CA, USA, jul 2017. IEEE Computer Society. 10.1109/CVPRW.2017.150. URL https://doi.ieeecomputersociety.org/10.1109/CVPRW.2017.150.
* [Ahmed et al.(1974)Ahmed, Natarajan, and Rao] N. Ahmed, T. Natarajan, and K.R. Rao. Discrete cosine transform. _IEEE Transactions on Computers_ , C-23(1):90–93, 1974. 10.1109/T-C.1974.223784.
* [Bevilacqua et al.(2012)Bevilacqua, Roumy, Guillemot, and Alberi-Morel] Marco Bevilacqua, Aline Roumy, Christine M. Guillemot, and Marie-Line Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In _British Machine Vision Conference_ , 2012. URL https://api.semanticscholar.org/CorpusID:5250573.
* [Bluche(2016)] Théodore Bluche. Joint line segmentation and transcription for end-to-end handwritten paragraph recognition. _Advances in neural information processing systems_ , 29, 2016.
* [Brigham(1988)] E Oran Brigham. _The fast Fourier transform and its applications_. Prentice-Hall, Inc., 1988.
* [Cai et al.(2021)Cai, Ding, and Lu] Runyuan Cai, Yue Ding, and Hongtao Lu. Freqnet: A frequency-domain image super-resolution network with discrete cosine transform. _ArXiv_ , abs/2111.10800, 2021. URL https://api.semanticscholar.org/CorpusID:244478402.
* [Chen et al.(2021a)Chen, Wang, Guo, Xu, Deng, Liu, Ma, Xu, Xu, and Gao] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 12299–12310, 2021a.
* [Chen et al.(2021b)Chen, Lu, Yu, Luo, Adeli, Wang, Lu, Yuille, and Zhou] Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L Yuille, and Yuyin Zhou. Transunet: Transformers make strong encoders for medical image segmentation. _arXiv preprint arXiv:2102.04306_ , 2021b.
* [Chen et al.(2022a)Chen, Du, Yang, Beyer, Zhai, Lin, Chen, Li, Song, Wang, and Zhou] Wuyang Chen, Xianzhi Du, Fan Yang, Lucas Beyer, Xiaohua Zhai, Tsung-Yi Lin, Huizhong Chen, Jing Li, Xiaodan Song, Zhangyang Wang, and Denny Zhou. A simple single-scale vision transformer for object detection and instance segmentation. In _Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part X_ , page 711–727, Berlin, Heidelberg, 2022a. Springer-Verlag. ISBN 978-3-031-20079-3. 10.1007/978-3-031-20080-9_41. URL https://doi.org/10.1007/978-3-031-20080-9_41.
* [Chen et al.(2023a)Chen, Wang, Zhou, Qiao, and Dong] Xiangyu Chen, Xintao Wang, Jiantao Zhou, Yu Qiao, and Chao Dong. Activating more pixels in image super-resolution transformer. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 22367–22377, 2023a.
* [Chen et al.(2022b)Chen, Zhang, Gu, Kong, Yuan, et al.] Zheng Chen, Yulun Zhang, Jinjin Gu, Linghe Kong, Xin Yuan, et al. Cross aggregation transformer for image restoration. _Advances in Neural Information Processing Systems_ , 35:25478–25490, 2022b.
* [Chen et al.(2023b)Chen, Zhang, Gu, Kong, Yang, and Yu] Zheng Chen, Yulun Zhang, Jinjin Gu, Linghe Kong, Xiaokang Yang, and Fisher Yu. Dual aggregation transformer for image super-resolution. In _Proceedings of the IEEE/CVF international conference on computer vision_ , pages 12312–12321, 2023b.
* [Chu et al.(2023)Chu, Tian, Zhang, Wang, and Shen] Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, and Chunhua Shen. Conditional positional encodings for vision transformers. In _ICLR 2023_ , 2023. URL https://openreview.net/forum?id=3KWnuT-R1bh.
* [Cooley and Tukey(1965)] James W Cooley and John W Tukey. An algorithm for the machine calculation of complex fourier series. _Mathematics of computation_ , 19(90):297–301, 1965.
* [Dong et al.(2015)Dong, Loy, He, and Tang] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. _IEEE transactions on pattern analysis and machine intelligence_ , 38(2):295–307, 2015.
* [Dosovitskiy et al.(2021)Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, Uszkoreit, and Houlsby] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. _ICLR_ , 2021.
* [Gohshi(2015)] Seiichi Gohshi. Frequency domain analysis for super resolution image reconstruction and its limitations. In _2015 10th Asia-Pacific Symposium on Information and Telecommunication Technologies (APSITT)_ , pages 1–3, 2015. 10.1109/APSITT.2015.7217088.
* [Grechishnikova(2019)] Daria Grechishnikova. Transformer neural network for protein specific de novo drug generation as machine translation problem. _bioRxiv_ , 2019. 10.1101/863415. URL https://www.biorxiv.org/content/early/2019/12/03/863415.
* [Gu and Dong(2021)] Jinjin Gu and Chao Dong. Interpreting super-resolution networks with local attribution maps. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 9199–9208, 2021.
* [He et al.(2016)He, Chen, and Liu] Chao He, Zhenxue Chen, and Chengyun Liu. Salient object detection via images frequency domain analyzing. _Signal, Image and Video Processing_ , 10:1295–1302, 2016.
* [He et al.(2020)He, Liao, Tavakoli, Yang, Rosenhahn, and Pugeault] Sen He, Wentong Liao, Hamed R Tavakoli, Michael Yang, Bodo Rosenhahn, and Nicolas Pugeault. Image captioning through image transformer. In _Proceedings of the Asian conference on computer vision_ , 2020.
* [Hendrycks and Gimpel(2016)] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). _arXiv preprint arXiv:1606.08415_ , 2016.
* [Huang et al.(2015)Huang, Singh, and Ahuja] Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In _2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 5197–5206, 2015. 10.1109/CVPR.2015.7299156.
* [Huang et al.(2023)Huang, Wang, Wei, Huang, Shi, Liu, and Huang] Z. Huang, X. Wang, Y. Wei, L. Huang, H. Shi, W. Liu, and T. S. Huang. Ccnet: Criss-cross attention for semantic segmentation. _IEEE Transactions on Pattern Analysis & Machine Intelligence_, 45(06):6896–6908, jun 2023. ISSN 1939-3539. 10.1109/TPAMI.2020.3007032.
* [Jaderberg et al.(2015)Jaderberg, Simonyan, Zisserman, et al.] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. _Advances in neural information processing systems_ , 28, 2015.
* [Kim et al.(2016a)Kim, Lee, and Lee] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 1646–1654, 2016a.
* [Kim et al.(2016b)Kim, Lee, and Lee] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Deeply-recursive convolutional network for image super-resolution. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 1637–1645, 2016b.
* [Kim et al.(2001)Kim, Kim, Lee, and Lim] JM Kim, SH Kim, DJ Lee, and HS Lim. Signal processing using fourier & wavelet transform for pulse oximetry. In _Technical Digest. CLEO/Pacific Rim 2001. 4th Pacific Rim Conference on Lasers and Electro-Optics (Cat. No. 01TH8557)_ , volume 2, pages II–II. IEEE, 2001.
* [Kingma and Ba(2014)] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _CoRR_ , abs/1412.6980, 2014. URL https://api.semanticscholar.org/CorpusID:6628106.
* [Ledig et al.(2017)Ledig, Theis, Huszár, Caballero, Cunningham, Acosta, Aitken, Tejani, Totz, Wang, et al.] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 4681–4690, 2017.
* [Li et al.(2018)Li, Fang, Mei, and Zhang] Juncheng Li, Faming Fang, Kangfu Mei, and Guixu Zhang. Multi-scale residual network for image super-resolution. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , September 2018.
* [Liang et al.(2021)Liang, Cao, Sun, Zhang, Van Gool, and Timofte] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In _Proceedings of the IEEE/CVF international conference on computer vision_ , pages 1833–1844, 2021.
* [Lim et al.(2017)Lim, Son, Kim, Nah, and Mu Lee] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In _Proceedings of the IEEE conference on computer vision and pattern recognition workshops_ , pages 136–144, 2017.
* [Lin et al.(2022)Lin, Garg, Banerjee, Magid, Sun, Zhang, Van Gool, Wei, and Pfister] Zudi Lin, Prateek Garg, Atmadeep Banerjee, Salma Abdel Magid, Deqing Sun, Yulun Zhang, Luc Van Gool, Donglai Wei, and Hanspeter Pfister. Revisiting rcan: Improved training for image super-resolution. _arXiv preprint arXiv:2201.11279_ , 2022.
* [Liu et al.(2021)Liu, Lin, Cao, Hu, Wei, Zhang, Lin, and Guo] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF international conference on computer vision_ , pages 10012–10022, 2021.
* [Maas et al.(2013)Maas, Hannun, Ng, et al.] Andrew L Maas, Awni Y Hannun, Andrew Y Ng, et al. Rectifier nonlinearities improve neural network acoustic models. In _Proc. icml_ , volume 30, page 3. Atlanta, GA, 2013.
* [Martin et al.(2001)Martin, Fowlkes, Tal, and Malik] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In _Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001_ , volume 2, pages 416–423 vol.2, 2001. 10.1109/ICCV.2001.937655.
* [Matsui et al.(2016)Matsui, Ito, Aramaki, Fujimoto, Ogawa, Yamasaki, and Aizawa] Yusuke Matsui, Kota Ito, Yuji Aramaki, Azuma Fujimoto, Toru Ogawa, Toshihiko Yamasaki, and Kiyoharu Aizawa. Sketch-based manga retrieval using manga109 dataset. _Multimedia Tools and Applications_ , 76(20):21811–21838, November 2016. ISSN 1573-7721. 10.1007/s11042-016-4020-z. URL http://dx.doi.org/10.1007/s11042-016-4020-z.
* [Mei et al.(2021)Mei, Fan, and Zhou] Yiqun Mei, Yuchen Fan, and Yuqian Zhou. Image super-resolution with non-local sparse attention. In _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 3516–3525, 2021. 10.1109/CVPR46437.2021.00352.
* [Miech et al.(2017)Miech, Laptev, and Sivic] Antoine Miech, Ivan Laptev, and Josef Sivic. Learnable pooling with context gating for video classification. _arXiv:1706.06905_ , 2017.
* [Rambhatla et al.(2023)Rambhatla, Misra, Chellappa, and Shrivastava] Sai Saketh Rambhatla, Ishan Misra, Rama Chellappa, and Abhinav Shrivastava. Most: Multiple object localization with self-supervised transformers for object discovery. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 15823–15834, 2023.
* [Sargent et al.(2023)Sargent, Koh, Zhang, Chang, Herrmann, Srinivasan, Wu, and Sun] Kyle Sargent, Jing Yu Koh, Han Zhang, Huiwen Chang, Charles Herrmann, Pratul Srinivasan, Jiajun Wu, and Deqing Sun. Vq3d: Learning a 3d-aware generative model on imagenet. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 4240–4250, 2023.
* [Saxena and Singh(2005)] Rajiv Saxena and Kulbir Singh. Fractional fourier transform: A novel tool for signal processing. _Journal of the Indian Institute of Science_ , 85(1):11, 2005.
* [Shi et al.(2022)Shi, Jiang, Dai, and Schiele] Shaoshuai Shi, Li Jiang, Dengxin Dai, and Bernt Schiele. Motion transformer with global intention localization and local movement refinement. _Advances in Neural Information Processing Systems_ , 35:6531–6543, 2022.
* [Shi et al.(2016)Shi, Caballero, Huszar, Totz, Aitken, Bishop, Rueckert, and Wang] W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 1874–1883, Los Alamitos, CA, USA, jun 2016. IEEE Computer Society. 10.1109/CVPR.2016.207. URL https://doi.ieeecomputersociety.org/10.1109/CVPR.2016.207.
* [Sun et al.(2022)Sun, Zhou, Black, and Chandrasekaran] Jiankai Sun, Bolei Zhou, Michael J Black, and Arjun Chandrasekaran. Locate: End-to-end localization of actions in 3d with transformers. _arXiv preprint arXiv:2203.10719_ , 2022.
* [Timofte et al.(2016)Timofte, Rothe, and Van Gool] Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 1865–1873, 2016.
* [Tran et al.(2022)Tran, Nguyen, Pham, and Tran] Dinh-Phu Tran, Quoc-Anh Nguyen, Van-Truong Pham, and Thi-Thao Tran. Trans2unet: Neural fusion for nuclei semantic segmentation. In _11th International Conference on Control, Automation and Information Sciences, ICCAIS 2022, Hanoi, Vietnam, November 21-24, 2022_ , pages 583–588. IEEE, 2022. 10.1109/ICCAIS56082.2022.9990159. URL https://doi.org/10.1109/ICCAIS56082.2022.9990159.
* [Trider(1978)] R Trider. A fast fourier transform (fft) based sonar signal processor. _IEEE Transactions on Acoustics, Speech, and Signal Processing_ , 26(1):15–20, 1978.
* [Wang et al.(2023)Wang, Jiang, Zhong, and Liu] Chenyang Wang, Junjun Jiang, Zhiwei Zhong, and Xianming Liu. Spatial-frequency mutual learning for face super-resolution. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 22356–22366, June 2023.
* [Wang et al.(2022)Wang, Xu, and Sun] Yiyu Wang, Jungang Xu, and Yingfei Sun. End-to-end transformer based model for image captioning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 36, pages 2585–2594, 2022.
* [Wang et al.(2004)Wang, Bovik, Sheikh, and Simoncelli] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE transactions on image processing_ , 13(4):600–612, 2004.
* [Zeyde et al.(2010)Zeyde, Elad, and Protter] Roman Zeyde, Michael Elad, and Matan Protter. On single image scale-up using sparse-representations. In _Curves and Surfaces_ , volume 6920, pages 711–730, 06 2010. ISBN 978-3-642-27412-1. 10.1007/978-3-642-27413-8_47.
* [Zhang et al.(2022)Zhang, Zeng, Guo, and Zhang] Xindong Zhang, Hui Zeng, Shi Guo, and Lei Zhang. Efficient long-range attention network for image super-resolution. In _European conference on computer vision_ , pages 649–667. Springer, 2022.
* [Zhang et al.(2018a)Zhang, Li, Li, Wang, Zhong, and Fu] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In _Proceedings of the European conference on computer vision (ECCV)_ , pages 286–301, 2018a.
* [Zhang et al.(2018b)Zhang, Tian, Kong, Zhong, and Fu] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 2472–2481, 2018b.
* [Zhao et al.(2018)Zhao, Li, Zhao, and Feng] Fang Zhao, Jianshu Li, Jian Zhao, and Jiashi Feng. Weakly supervised phrase localization with multi-scale anchored transformer network. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 5696–5705, 2018.
* [Zhu et al.(2022)Zhu, Shah, and Chen] Sijie Zhu, Mubarak Shah, and Chen Chen. Transgeo: Transformer is all you need for cross-view image geo-localization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 1162–1171, 2022.
# A Human-Machine Joint Learning Framework to Boost Endogenous BCI Training
Hanwen Wang, Yu Qi*, Lin Yao, Yueming Wang, Dario Farina, Gang Pan
Hanwen Wang is with the College of Computer Science and Technology, Zhejiang
University, Hangzhou, China. Yu Qi is with the Affiliated Mental Health Center
& Hangzhou Seventh People’s Hospital, the MOE Frontier Science Center for
Brain Science and Brain-machine Integration, Zhejiang University School of
Medicine, and the State Key Lab of Brain-Machine Intelligence, Hangzhou,
China. Lin Yao is with the Department of Neurobiology, Affiliated Mental
Health Center & Hangzhou Seventh People’s Hospital, Zhejiang University School
of Medicine, the MOE Frontiers Science Center for Brain and Brain-Machine
Integration, the Department of Biomedical Engineering, and the College of
Computer Science and Technology, Zhejiang University, Hangzhou, China. Yueming
Wang is with the Qiushi Academy for Advanced Studies, Zhejiang University,
Hangzhou, China. Dario Farina is with the Department of Bioengineering,
Imperial College London, London, UK. Gang Pan is with the State Key Lab of
Brain-Machine Intelligence, the College of Computer Science and Technology,
Zhejiang University, Hangzhou, China, and the First Affiliated Hospital,
Zhejiang University, Hangzhou, China. * Corresponding authors: Yu Qi<EMAIL_ADDRESS>and Gang Pan (gpan@zju.edu.cn).
###### Abstract
Brain-computer interfaces (BCIs) provide a direct pathway from the brain to
external devices and have demonstrated great potential for assistive and
rehabilitation technologies. Endogenous BCIs based on electroencephalogram
(EEG) signals, such as motor imagery (MI) BCIs, can provide some level of
control. However, mastering spontaneous BCI control requires the users to
generate discriminative and stable brain signal patterns by imagery, which is
challenging and is usually achieved over a very long training time
(weeks/months). Here, we propose a human-machine joint learning framework to
boost the learning process in endogenous BCIs, by guiding the user to generate
brain signals towards an optimal distribution estimated by the decoder, given
the historical brain signals of the user. To this end, we first model the
human-machine joint learning process in a uniform formulation. Then a human-
machine joint learning framework is proposed: 1) for the human side, we model
the learning process in a sequential trial-and-error scenario and propose a
novel “copy/new” feedback paradigm to help shape the signal generation of the
subject toward the optimal distribution; 2) for the machine side, we propose a
novel adaptive learning algorithm to learn an optimal signal distribution
along with the subject’s learning process. Specifically, the decoder reweighs
the brain signals generated by the subject to focus more on “good” samples to
cope with the learning process of the subject. Online and pseudo-online BCI
experiments with 18 healthy subjects demonstrated the advantages of the
proposed joint learning process over co-adaptive approaches in both learning
efficiency and effectiveness.
###### Index Terms:
Electroencephalogram (EEG), Brain-Computer Interface (BCI), Motor Imagery
(MI), Neural Decoding.
## I Introduction
Brain-computer interfaces (BCIs) act as an intermediary between the brain and
external devices, translating brain signals to control signals [1, 2]. BCIs
employing electroencephalogram (EEG) signals have been developed successfully
for many applications, including neural rehabilitation, prosthesis control and
emotion recognition [3, 4, 5, 6, 7, 8, 9, 10]. Most existing BCI systems can
be divided into endogenous and exogenous BCIs [11, 12]. Exogenous BCIs, such
as P300 and SSVEP systems, rely on brain signals evoked by external stimuli
[13, 14], such as visual flashes or audio events. Conversely, endogenous BCIs,
such as those based on motor imagery (MI), rely on self-regulated brain
signals often induced by users’ covert attention or mental tasks [15, 16]. Thus,
endogenous BCIs can provide more flexible and natural brain control, resulting
in high potential for a wide range of applications [17, 18, 19, 20]. In the
remainder of this paper, we will only focus on endogenous BCIs.
Effective BCI control relies on the close collaboration of the brain (the
subject) and the machine (the decoder) [21]. The user should generate
discriminative and sufficient brain signals as different control commands,
while the decoder should identify different brain signal patterns and robustly
interpret/map them into control commands. For endogenous BCIs, the main
challenge lies in the difficulty for the user in generating effective and
stable brain signals. To achieve robust online control, the BCI user should be
able to accurately replicate at least two discriminative and stable patterns
for basic control, such as going left and right. A typical endogenous BCI is
based on MI, where the user is asked to imagine movements, for example of the
left/right hand. However, generating MI is challenging for most users, and a
typical user usually requires a long and difficult training process to learn
the MI control. Commonly, training for MI lasts several weeks or even months
[22, 23]. Furthermore, at least 15% to 30% of subjects across the population
cannot learn to control BCI systems, a phenomenon known as BCI illiteracy
[24]. For these reasons, facilitating the learning process in endogenous BCIs
is a highly relevant problem.
The learning process in BCIs includes two parts: the learning of the decoder
(machine learning), and the learning of the user (human learning). Here we
briefly review the existing approaches.
* •
Machine learning. Traditional BCI training processes start with a static
subject learning process where the subject tries to generate brain signals
without feedback. Then a decoder is trained with the initial data to learn
discriminative features from different brain signals (as shown in Fig. 1A).
The discrimination of different brain signal patterns can be regarded as a
typical classification problem. Thus, linear and nonlinear classifiers have
been proposed for the decoding problem. For linear methods, decoding in motor
imagery (MI) can be separated into feature extraction and classification.
Common Spatial Pattern (CSP) is a widely-used linear approach for extracting
discriminative features from different brain signal patterns. Filter Bank
Common Spatial Pattern (FBCSP) [25] extends CSP with a set of band-pass
filters to enhance features, and it can also be applied in neural networks
[26]. However, traditional CSP is not suitable for multiclass problems;
Joint Approximate Diagonalization (JAD)-based methods have been used to extend
it [27]. Moreover, modifications have been made to CSP to deal with the
nonstationarity of EEG signals: fuzzy covariance matrices are deployed in
DivCSP-WS to extract stable features [28], and channel selection methods can
improve stability across sessions and thus classification accuracy [29].
In terms of classification methods, linear discriminant analysis (LDA) is
commonly used for extracted features [30]. To enhance robustness on non-
stationary EEG signals, variations of LDA such as rLDA and BLDA have been
introduced [31, 32, 33]. Support Vector Machines (SVMs) with flexible kernel
design are employed with feature extraction methods in some studies [34, 35].
Recently, deep learning approaches including Long Short Term Memory (LSTM)
[36, 37, 38] and convolutional neural networks (CNNs) [39, 40, 41, 42, 43],
such as EEGnet [44], have demonstrated effectiveness in MI classification.
Combining the feature extraction methods and the deep learning approach,
neural structured learning (NSL) has been proposed to train deep neural
networks with feature inputs and the structured signals, which enhances the
robustness [45].
* •
Human learning. After the initial decoder is trained, the subject can learn to
control the BCI with the feedback of the decoder, where the design of the
feedback plays an important part (Fig. 1B). Traditional feedback usually
directly reflects the decoding result, e.g., via a moving cursor or a falling
ball [46, 47]. To improve the efficiency of subject learning, new paradigms
with different types of feedback have been introduced. For example, Hwang et
al. proposed an intuitive feedback that uses a surface topography of the EEG
signal [23], where a real-time spectrogram is presented to enhance MI [48].
Wang et al. proposed a neurofeedback training paradigm in which MI performance
was mapped to the distance between a needle and hands on the screen [49].
Alternatively, auditory feedback with different sounds has been presented to
subjects depending on their MI performance [50]. In addition, Yao et al. used
tactile stimulation to help subjects improve MI activation [51, 52], and Ono
et al. attached an exoskeleton to subjects, making the feedback more tangible
[53].
* •
Co-adaptive learning. The aforementioned approaches mostly focus on one side
of the learning, either the signal generation by the user or the algorithm
training by the machine. However, co-adaptive methods have also been
introduced [54, 55]. Instead of using a fixed decoder after the initial
training process, co-adaptive approaches retrain both the decoder and the
subject iteratively in sequential learning sessions to cope with each other
[56, 54] (Fig. 1C). To this end, different decoder recalibration algorithms
have been proposed using machine learning approaches such as LDA [57], SVM
[58], probabilistic graphical models [59] to update the decoder with newly
collected data to cope with changes in brain signals during the subject
learning process. By considering the interaction between the subject and the
decoder, co-adaptive approaches demonstrated superior performance in BCI
learning than separate learning approaches [60, 61]. Previous work has also
provided mathematical formulations of the co-adaptive learning process. For
example, in [62], an encoder-decoder model was proposed in which co-adaptation
was described by the optimization of the user’s control; assuming two linear
learning systems, a theoretical formulation was derived via stochastic
gradient descent in [63].
Figure 1: Illustration of BCI training with different strategies. (A) The
decoder learning process, given a set of subject-generated data (indicated
by the red circle). (B) The subject learning process with feedback from a
fixed decoder. (C) The co-adaptive learning process, where the subject and
the decoder learn alternately. (D) The proposed joint learning process, where
the subject and the decoder share the same loss function during learning.
In Fig. 1, we illustrate and compare the aforementioned learning process in an
intuitive way. Suppose there is a global optimal point for the human-machine
BCI learning process. With the machine learning model alone (Fig. 1A), where
the human model is static, the optimal performance is highly limited by the
boundary of brain signal patterns. With the human learning model alone (Fig.
1B), where the decoder is fixed, the subject can learn with the guidance of
the decoder’s feedback. The performance highly relies on the effectiveness of
the decoder. Since the decoder is usually trained with the initially imperfect
brain signals of the subject, it can be suboptimal and can degrade the human
learning process. The co-adaptation model (Fig. 1C) considers both machine
learning and human learning, where the decoder is iteratively updated along
with the human learning process. Specifically, it contains an iterative
process where the machine and the human learn alternately, which improves the
theoretical overall performance. However, it assumes that the subject can
learn effectively at every human learning stage, which can be difficult,
especially in the early rounds of the training process.
This study proposes a novel BCI learning scheme for efficient and effective
BCI training. As shown in Fig. 1D, we aim to construct a joint learning model
between humans and machines, where human and machine processes can be
optimized simultaneously. To this end, we propose a novel human-machine joint
learning framework for effective BCI training. Specifically, we assume the
human learning behavior in sequential training is a trial-and-error process,
and let the decoder determine whether a brain signal is “good” or “poor” by
its discrimination ability according to the distribution of the feature space.
The contributions of this study are threefold.
1. 1.
We formally formulate the human-machine joint learning model and propose a
human-machine joint loss function, where the subject is encouraged to generate
more discriminative brain signals, and the decoder optimizes the classifier to
separate the different brain signal modes.
2. 2.
From the human side, we formulate human behavior in a sequential trial-and-
error learning process. A novel paradigm is proposed to guide the subject to
optimize brain signals. Specifically, if a “good” signal is generated, the
system encourages the subject to “copy” the state; otherwise, if a “poor”
signal is generated, the subject is encouraged to change the brain signal.
3. 3.
From the machine side, we propose a novel adaptive learning algorithm to learn
an optimal signal distribution along with the subject’s learning process.
Specifically, the decoder reweighs the brain signals generated by the subject
to focus more on “good” samples to cope with the learning process of the
subject.
Online BCI experiments with 18 healthy subjects demonstrated that the joint
learning framework can efficiently guide the subject to learn discriminative
brain signals for effective BCI control. Compared with traditional co-adaptive
approaches, our method significantly improved the average control accuracy
from 69.1% to 74.5%.
## II The Human-machine joint learning framework
We first formulate the BCI learning process in a uniform model and propose a
human-machine joint loss function. We then present the human learning model
with a corresponding loss function, followed by the machine learning process,
in which the decoder learns to select the optimal brain signals generated by
the subject so as to guide and cope with the subject's learning. Finally, we
propose a novel BCI learning paradigm and framework for human-machine joint
learning.
### II-A Modeling of the human-machine joint learning
Considering BCI learning as a human-machine learning problem, the subject and
the decoder share the same goal of accurate intention decoding from brain
signals. To achieve this goal, the subject tries to generate more
discriminative brain signals, and the decoder optimizes the classifier to best
separate different brain signal patterns. To this end, the human learning
model can be described as:
$x=g(\theta_{H},y),$ (1)
where $x$ stands for the brain signal generated according to the given label
$y$, with the generation parameters and function denoted by $\theta_{H}$ and
$g(\cdot)$, respectively. For the machine part, the model is defined by:
$y^{\prime}=h(\theta_{M},x),$ (2)
in which the predicted label $y^{\prime}$ is decoded by the function
$h(\cdot)$ with parameters $\theta_{M}$.
Thus, the mutual human-machine goal of BCI learning is to minimize the
distance between the true label $y$ and the decoder’s estimate $y^{\prime}$
given the brain signals:
$\begin{split}L_{H-M}&=||y^{\prime}-y||.\end{split}$ (3)
Then the joint optimization of humans and machines can be achieved by
minimizing the joint loss function $L_{H-M}$:
$\begin{split}\mathop{\arg\min_{\theta_{H},\theta_{M}}}L_{H-M}&=\mathop{\arg\min_{\theta_{H},\theta_{M}}}||h(\theta_{M},g(\theta_{H},y))-y||.\end{split}$
(4)
Figure 2: The diagram of the human-machine joint learning process. From the
subject’s view, the subject tries to generate brain signals according to the
instructions (such as left or right). At each trial, the system gives feedback
(left or right) during the process, and evaluates the subject’s brain signals
according to the discriminative ability. For a trial with high signal quality,
a “copy” command is given such that the subject should maintain the brain
signal patterns; otherwise, a “new” command is given and the subject should
change the way of thinking to improve the signal quality.
The optimization of the joint loss function relies on both the subject and the
machine. On the one hand, the subject learns to maximize the discriminability
between the two brain signal modes under the decoder by optimizing
$\theta_{H}$; on the other hand, the decoder optimizes a classification plane
with $\theta_{M}$ such that brain signals from different modes are maximally
separated. To solve the joint learning problem, we must re-model the learning
for both humans and machines to cope with the joint learning process.
#### II-A1 Modeling of human learning
Here we model the learning process of the user. In a feedback learning
process, the user adjusts the brain signals according to the feedback. In the
human learning model, we simplify the problem by assuming that the subject
generates new signals depending only on the last feedback. The process can
then be regarded as a Markov process, which has been widely applied to
modeling human attention, decision-making, and control systems [64, 65, 66, 67].
Given the instruction $y$, the subject aims to generate brain signals in a
certain mode. We divide the brain signals into two groups of “good” and
“bad/poor” according to their discriminative ability with the decoder, as
shown in Fig. 2. We denote the “good” and “bad/poor” samples by $x_{G}$ and
$x_{B}$. Thus, the objective of the subject is to increase the probability of
generating $x_{G}$, and to decrease the probability of generating $x_{B}$. The
learning objective is therefore given by:
$\mathop{\arg\max_{\theta_{H}}}\frac{P(x_{G})}{P(x_{B})},$ (5)
where $P(x_{G})$ and $P(x_{B})$ represent the probability of good and bad
samples of the subject. In the Markov sequential learning process, we can
define a transition matrix of $x_{G}$ and $x_{B}$ by:
$\begin{bmatrix}P_{GG}&P_{GB}\\\
P_{BG}&P_{BB}\end{bmatrix}=\begin{bmatrix}P_{GG}&1-P_{GG}\\\
1-P_{BB}&P_{BB}\end{bmatrix},$ (6)
where $P_{ab}$ stands for the probability of a transition from state $a$ to
state $b$. The steady-state vector can be derived as:
$\begin{bmatrix}P(x_{G})&P(x_{B})\end{bmatrix}=\begin{bmatrix}\dfrac{1-P_{BB}}{2-P_{GG}-P_{BB}}&\dfrac{1-P_{GG}}{2-P_{GG}-P_{BB}}\end{bmatrix}.$
(7)
Substituting the variables into the loss function, we get
$\mathop{\arg\max_{\theta_{H}}}\frac{P(x_{G})}{P(x_{B})}=\mathop{\arg\max_{\theta_{H}}}\frac{1-P_{BB}}{1-P_{GG}}.$
(8)
According to the model, a high $P_{GG}$ and a low $P_{BB}$ are required for
effective BCI learning.
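As a quick numeric illustration (our own sketch, not part of the original
formulation), the steady state of Eq. 7 and the objective of Eq. 8 can be
computed as follows, with hypothetical transition probabilities:

```python
# Illustrative sketch of Eqs. 6-8: steady-state probabilities of "good"/"bad"
# signal generation in the two-state Markov model. The transition
# probabilities below are hypothetical example values.
def steady_state(p_gg: float, p_bb: float):
    """Return (P(x_G), P(x_B)) for the chain of Eq. 6 (assumes p_gg, p_bb < 1)."""
    denom = 2.0 - p_gg - p_bb
    return (1.0 - p_bb) / denom, (1.0 - p_gg) / denom

p_good, p_bad = steady_state(p_gg=0.8, p_bb=0.3)
print(p_good / p_bad)  # objective of Eq. 8: (1 - p_bb) / (1 - p_gg) = 3.5
```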
#### II-A2 Modeling of machine learning
Towards human-machine joint learning, we propose a novel classification
algorithm based on sample evaluation and adaptive learning of the decoder.
The decoder learning process has two parts. First, we evaluate the brain
signals generated by the subject and determine whether a sample is “good” or
“bad” for feedback in the sequential learning process. Second, the decoder is
dynamically tuned to cope with the new samples generated by the subject, for
which a novel training algorithm based on sample reweighting is proposed.
* •
Sample evaluation. The determination of “good” and “bad” brain signals plays a
key role in human learning, and the evaluation of brain signal quality is the
link between the human and machine learning processes. Intuitively, “good”
samples should be highly discriminative in the decoder’s feature space. Thus,
sample quality is determined by the decoder, and its computation can differ
across decoders. Here we take the SVM classifier as an example. With SVM,
classification is determined by a sample’s distance to the hyperplane
constituted by the support vectors: a sample far from the hyperplane indicates
low classification confusion and is considered a “good” sample, while samples
close to the hyperplane are considered of low quality. Generally, the quality
of a sample $i$ should be negatively correlated with its classification loss
$L_{i}$. Therefore, we evaluate the quality $q_{i}$ of sample $i$ as
${q_{i}}\propto\frac{1}{L_{i}}.$ (9)
* •
Adaptive decoder learning with sample reweighting.
Given the quality of brain signal samples, the decoder assigns weights to the
samples according to their quality to construct a more discriminative sample
distribution, which serves as guidance for the subject's learning.
In typical machine learning approaches such as SVM and AdaBoost, low-quality
(difficult) samples receive more focus to improve classification performance.
In the human-machine joint learning problem, we take the opposite strategy and
focus more on the “good” samples rather than the “bad” ones. This is because
the joint learning process forms a problem quite different from typical ones:
instead of the data being fixed, the sample generator (the human) is
intelligent and can adaptively cope with the decoder by learning to generate
more “good” samples and fewer “bad” ones. By focusing on high-quality samples,
the decoder's feedback leads the subject to generate “good” samples,
facilitating the learning process. Thus, the weight $v_{i}$ of sample $i$ is
assigned as
${v_{i}}={q_{i}}.$ (10)
Here we develop a novel adaptive learning approach that focuses on
high-quality samples via a sample reweighting algorithm satisfying Eqs. 9 and
10. Specifically, we design a novel loss function for the decoder that
connects the loss and the weight through the self-paced learning [68]
algorithm, where learning starts with easy samples and gradually proceeds to
difficult ones. The loss function of the decoder learning process can be
described as:
$\displaystyle\mathop{\min_{w,v}}$ $\displaystyle E(w,v;\lambda)=$ (11)
$\displaystyle\sum_{i=1}^{n}v_{i}L(x_{i},y_{i},w)+(1-\lambda)v_{i}-\frac{(1-\lambda)^{v_{i}}}{log(1-\lambda)},$
where $\sum_{i=1}^{n}v_{i}L(x_{i},y_{i},w)$ stands for the traditional loss
function with weights $v$ and
$\sum_{i=1}^{n}(1-\lambda)v_{i}-\frac{(1-\lambda)^{v_{i}}}{log(1-\lambda)}$
represents the penalty function $f(v;\lambda)$, which controls the pace of the
self-paced learning. In this way, the weights are included in the loss
function.
Since the loss is a biconvex function, Alternate Convex Search [69] is used to
optimize it, alternately finding the optimal solution for one group of
variables while fixing the other.
* –
Optimization of $v$: We first fix parameter $w$ and optimize $v$. The partial
gradient of Eq. 11 is:
$\frac{\partial E(w,v;\lambda)}{\partial
v_{i}}=L(y_{i},x_{i},w)+(1-\lambda)-(1-\lambda)^{v_{i}}.$ (12)
Setting Eq. 12 to zero, we deduce:
$log(L(y_{i},x_{i},w)+(1-\lambda))=v_{i}log(1-\lambda).$ (13)
So the solution for $E(w,v;\lambda)$ is given by:
$v^{\prime}_{i}=\left\\{\begin{aligned}
&\frac{1}{log(1-\lambda)}log(L_{i}+(1-\lambda)),\quad L_{i}<\lambda,\\\
&0,\quad{\rm otherwise},\end{aligned}\right.$ (14)
where $L_{i}$ stands for $L(y_{i},x_{i},w)$. This gives a closed-form solution
for $v$. In Eq. 14, $v_{i}$ decreases monotonically with $L_{i}$, so samples
with higher loss receive smaller weights, satisfying Eqs. 9 and 10.
* –
Optimization of $w$: Given $v$, we fix $v$ and optimize $w$ as follows:
$\frac{\partial E(w,v;\lambda)}{\partial
w}=\frac{\partial\sum_{i=1}^{n}v_{i}L(x_{i},y_{i},w)}{\partial w}.$ (15)
As shown in Eq. 15, the gradient reduces to the partial derivative of a
weighted loss problem. Closed-form solutions can be derived when the loss is
the hinge loss, as with the SVM; for deep learning models, the loss can also
be optimized through gradient descent. A code sketch of this alternating
update is given below.
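As a minimal illustration of the closed-form update of Eq. 14 inside the
alternating search (a sketch under our own naming, with `fit` and `loss_fn`
as hypothetical placeholders for the weighted learner and its per-sample
loss):

```python
# Sketch of the self-paced reweighting (Eq. 14) inside Alternate Convex
# Search. `fit` trains a weighted classifier; `loss_fn` returns per-sample
# losses L_i >= 0. Both are hypothetical placeholders.
import numpy as np

def self_paced_weights(losses: np.ndarray, lam: float) -> np.ndarray:
    """Closed-form solution of Eq. 14 (assumes 0 < lam < 1).

    v_i = log(L_i + (1 - lam)) / log(1 - lam) if L_i < lam, else 0.
    Weights decrease monotonically with loss; v_i = 1 when L_i = 0.
    """
    v = np.log(losses + (1.0 - lam)) / np.log(1.0 - lam)
    return np.where(losses < lam, v, 0.0)

def alternate_convex_search(X, y, lam, fit, loss_fn, n_iters=5):
    v = np.ones(len(y))                  # start from uniform weights
    model = None
    for _ in range(n_iters):
        model = fit(X, y, v)             # w-step: weighted fit (Eq. 15)
        v = self_paced_weights(loss_fn(model, X, y), lam)  # v-step (Eq. 14)
    return model, v
```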
#### II-A3 Experimental paradigm for human-machine joint learning
Given the model of human-machine joint learning, the problem is how to
effectively feed the sample quality back to the user. Here a “copy/new”
feedback strategy is proposed together with the paradigm of the joint learning
process.
* •
The “copy/new” feedback strategy. We propose a novel “copy/new” feedback
strategy to indicate whether the previous brain signal was “good” or “poor”.
In our paradigm, the trials are split into pairs and the instruction for the
next trial depends on the quality of the previous one. If the previous trial
is of good quality, a “copy” instruction is given for the next trial,
encouraging the subject to generate brain signals similar to the previous
ones, which increases $P_{GG}$. Otherwise, a “new” instruction is given,
asking the subject to change the way of thinking and explore better signals,
which helps decrease $P_{BB}$ (see Section II-A1). In other words, the “copy”
instruction helps subjects maintain the previous “good” signal, and the “new”
instruction guides subjects to try other possibilities. This feedback lets the
subject learn in a trial-and-error process, shaping the distribution of brain
signals toward a more discriminative condition under the guidance of the
decoder.
* •
The paradigm of the joint learning process. The diagram of the human-machine
joint learning process is illustrated in Fig. 2 and Algorithm 1. The subject
tries to generate brain signals according to the instructions (such as left or
right). Then the system gives the decoding results as feedback, meanwhile
evaluating the quality of brain signals according to the discriminative
ability. For a trial with high signal quality, a “copy” command is given such
that the subject should maintain the brain signal patterns; otherwise, a “new”
command is given and the subject should change the way of thinking to improve
the signal quality. The aforementioned process constitutes a training block,
and the process repeats several times in a training session. The decoder
updates after each training session.
Algorithm 1 Pseudo code of the human-machine joint learning process
Input: Number of training sessions $n$, proposed algorithm $A$ in Section
II-A2, and proposed paradigm $P$ in Section II-A3
Output: Fine-tuned classifier $C$ and trained subject $S$
1: Let $k=1$.
2: Calibration session of the paradigm without feedback.
3: Update the classifier $C$ by $A$.
4: $k=k+1$.
5: while $k\leq n$ do
6: Online training with $P$.
7: Update the classifier $C$ by $A$.
8: $k=k+1$.
9: end while
Figure 3: The joint learning experiment paradigm with the MI task. (A) Trials
are arranged sequentially as First Trials and Next Trials. Each trial begins
with a 2-second preparation during which a cross sign (+) is presented. Then
an instruction arrow (“left/right”) is presented for two seconds; in Next
Trials, a “copy/new” instruction accompanies the arrow at this stage. After
that, the subject performs MI as instructed for five seconds, during which
feedback is given (as shown in B). (B) The feedback during the MI process
consists of two squares whose colors indicate the decoding results for left
and right, respectively; a lighter color indicates more discriminative
classification performance. (C) The overall experimental scheme, including the
human side and the machine side, illustrating the joint learning process.
### II-B The joint learning paradigm for motor imagery training
Here we specify the process of the joint learning paradigm with the MI task.
#### II-B1 Experiment paradigm with the MI task
The paradigm is presented in Fig. 3. Subjects are asked to focus on the cross
on the screen and perform left or right MI tasks as indicated by instruction
arrows. The cross is shown on the screen for 2 seconds, turning from white to
gray. After that, there is an arrow that points left or right, indicating the
left or right MI task. The subject should perform MI for 5 seconds. In the
calibration session, MI is performed without feedback: subjects stay focused
on the cross and maintain their MI mental task. After the calibration session,
the parameters of the algorithm are updated and feedback with “copy/new”
instructions is provided. To reduce eye movement, the feedback is rendered as
the brightness of the target: two squares are placed horizontally in the
middle of the screen, on the left and right sides respectively, and the
subjects’ task is to keep the indicated square as bright as possible for 5
seconds through MI. The feedback values are denoted $C^{n}_{left}$ and
$C^{n}_{right}$, where $n$ is the number of updates. $C^{0}_{left}$ and
$C^{0}_{right}$ are initially set to 0.5 and
updated as follows:
$\begin{split}C^{n+1}_{left}&=C^{n}_{left}+\alpha((P_{left}-0.5)*2),\\\
C^{n+1}_{right}&=1-C^{n+1}_{left},\end{split}$ (16)
where $P_{left}\in[0,1]$ is the posterior probability of left calculated by
the online discriminator and $\alpha$ is a parameter to adjust the pace.
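A minimal sketch of this update rule follows (the clamping to a displayable
range is our assumption, not stated in Eq. 16):

```python
# Sketch of the brightness-feedback update of Eq. 16. `p_left` is the
# decoder's posterior probability of "left" for the current window; alpha
# adjusts the pace (0.2 in the experiments, Sec. III-A).
def update_feedback(c_left: float, p_left: float, alpha: float = 0.2):
    """Return updated (C_left, C_right); both are initialized to 0.5."""
    c_left = c_left + alpha * (p_left - 0.5) * 2.0
    c_left = min(max(c_left, 0.0), 1.0)  # clamp for display (our assumption)
    return c_left, 1.0 - c_left
```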
With the “copy/new” paradigm, the trials are split into pairs, namely the
“First trials” and the “Next trials” (Fig. 3). If the accuracy of the first
trial reaches a threshold $T$, a “copy” appears along with the instruction
arrow in the next trial, informing subjects to keep their way of performing
MI. Otherwise, “new” appears on the screen above the instruction arrow,
indicating that subjects should change their way of signal generation. Trial
accuracy is calculated as the average accuracy over the slices of the online
trial.
#### II-B2 MI decoding with sample reweighting
The decoding of MI brain signals consists of two stages: the first extracts
effective features, using methods such as common spatial patterns (CSP) and
filter bank common spatial patterns (FBCSP); the second trains a discriminator
to classify the features. To deploy the proposed self-paced sample reweighting
method, both stages must be taken into consideration.
For feature extraction, we use a weighted CSP obtained by adding weights to
the traditional CSP. Different from CSP, the normalized covariances
$\overline{R_{1}}$ and $\overline{R_{2}}$ are calculated as weighted averages
over the samples of each group:
$R=\overline{R_{1}}+\overline{R_{2}}=\sum_{i}v_{1}^{i}r_{1}^{i}+\sum_{j}v_{2}^{j}r_{2}^{j},$
(17)
where $v_{1}^{i},v_{2}^{j}$ are the corresponding sample weights in each
group and
$r_{1}^{i}=\frac{X_{1}^{i}X_{1}^{iT}}{trace(X_{1}^{i}X_{1}^{iT})},\quad
r_{2}^{j}=\frac{X_{2}^{j}X_{2}^{jT}}{trace(X_{2}^{j}X_{2}^{jT})},$ (18)
in which $X_{1}^{i},X_{2}^{j}\in R^{N\times T}$ are the signal matrices of the
two conditions (left or right), $N$ denotes the number of channels, and $T$
is the number of samples per channel.
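As a minimal sketch (our own, under the shapes stated above), the weighted CSP
of Eqs. 17-18 can be implemented via a generalized eigenproblem; the helper
names and the log-variance feature step are our assumptions:

```python
# Sketch of the weighted CSP of Eqs. 17-18. Trials are arrays of shape
# (N_channels, T_samples); helper names are our own.
import numpy as np
from scipy.linalg import eigh

def normalized_cov(X: np.ndarray) -> np.ndarray:
    """Eq. 18: trace-normalized covariance of one trial X (channels x time)."""
    C = X @ X.T
    return C / np.trace(C)

def weighted_csp(trials1, w1, trials2, w2, n_pairs: int = 3) -> np.ndarray:
    """Weighted CSP of Eq. 17; returns 2*n_pairs spatial filters (rows)."""
    R1 = sum(w * normalized_cov(X) for w, X in zip(w1, trials1))
    R2 = sum(w * normalized_cov(X) for w, X in zip(w2, trials2))
    # Generalized eigenproblem R1 u = lambda (R1 + R2) u; eigenvalues come
    # back in ascending order, so the extremes give the most discriminative
    # filters for the two classes.
    _, U = eigh(R1, R1 + R2)
    picks = list(range(n_pairs)) + list(range(-n_pairs, 0))
    return U[:, picks].T

def csp_features(W: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Log-variance features for one trial, as commonly used after CSP
    (an assumed post-processing step, not specified in the text)."""
    Z = W @ X
    var = Z.var(axis=1)
    return np.log(var / var.sum())
```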
For the discriminator, we chose the SVM with sample weights. Details of the
weighted SVM can be found in [70], and its loss function is as follows:
$L=\sum_{i=1}^{N}v_{i}(1-y_{i}f(x_{i})).$ (19)
Then the optimization problem becomes:
$\begin{split}&\min_{\mathbf{w},b,\xi}\quad\frac{1}{2}||\mathbf{w}||^{2}+\sum_{i=1}^{N}v_{i}\xi_{i}\\\
\mathrm{s.t.}\quad&y_{i}f(x_{i})\geq 1-\xi_{i},\quad\xi_{i}\geq 0.\end{split}$
(20)
This is the traditional weighted SVM problem, which can be solved with
standard tools; a sketch using per-sample weights is given below.
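For illustration, a weighted SVM of this form can be fit with scikit-learn's
per-sample weights, which rescale each sample's slack penalty; this is a
sketch, not the authors' implementation:

```python
# Sketch of the weighted SVM of Eqs. 19-20. Passing sample_weight rescales
# each trial's slack penalty, matching the v_i * xi_i term in Eq. 20. A
# linear kernel is used as in Sec. III-A; probability=True exposes the
# posterior P_left needed for the feedback of Eq. 16.
from sklearn.svm import SVC

def fit_weighted_svm(features, labels, v):
    clf = SVC(kernel="linear", probability=True)
    clf.fit(features, labels, sample_weight=v)
    return clf
```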
For the self-paced learning algorithm, we introduce new parameters $\Lambda$
and $\Delta\Lambda$ to control the learning pace more precisely, where
$\Lambda$ is the proportion of samples initially recruited for training and
$\Delta\Lambda$ is the increment of this proportion at every iteration. The
threshold $\lambda$ is then tuned by $\Lambda$ and $\Delta\Lambda$ in terms of
rank: the number of samples to include in each iteration is specified by
$\Lambda$ and $\Delta\Lambda$, and $\lambda$ is calculated accordingly as in
[71], namely as the $\Lambda$-quantile of the losses $L$.
We can then design the workflow of our self-paced sample reweighting
algorithm; details for MI decoding are given in Algorithm 2. Given a set of
labeled data, we first randomly split it into a training set
$\left\\{X,Y\right\\}$ of $N$ samples and a validation set
$\left\\{X^{\prime},Y^{\prime}\right\\}$. Initially, $\Lambda\times N$ samples
are randomly selected from the training set and used to train the initial
weighted CSP and weighted SVM. After that, every sample in the training set
receives a weight, and the accuracy on the validation set is computed. While
$\Lambda<1$, i.e., not all of the training set has been used, more data are
included in the update of the weighted CSP and weighted SVM according to their
weights. The iteration continues until the whole training set has been
included; the round with the highest validation accuracy is then selected, and
its weights and the corresponding weighted-CSP and weighted-SVM parameters are
the output.
The joint learning method can also be employed with other machine learning
methods, including deep learning. In that case, $\Lambda\times N$ randomly
selected samples are fed to the neural network for initialization; afterwards,
more and more samples are assigned weights according to their loss and used to
update the network iteratively. The best model is selected by accuracy on
$\left\\{X^{\prime},Y^{\prime}\right\\}$, similar to Algorithm 2.
Algorithm 2 The joint learning decoding algorithm for MI
Input: Training set $\left\\{X,Y\right\\}$ with $N$ samples, validation set
$\left\\{X^{\prime},Y^{\prime}\right\\}$.
Parameter: $\Lambda$ and $\Delta\Lambda$, controlling the training pace
Output: Weights: $v$, trained weighted-CSP and weighted-SVM
1: Initialize $k=0$.
2: Randomly select $\Lambda\times N$ samples $\left\\{X_{k},Y_{k}\right\\}$ from
$\left\\{X,Y\right\\}$.
3: Train weighted-CSP on $\left\\{X_{k},Y_{k}\right\\}$.
4: Train weighted-SVM on $\left\\{X_{k},Y_{k}\right\\}$.
5: Update $v_{k}$ by Eq. 14.
6: while $\Lambda<1$ do
7: $\Lambda=\Lambda+\Delta\Lambda$.
8: $k=k+1$.
9: Set $\lambda$ to the $\Lambda$-quantile of the losses $L$.
10: Select data $\left\\{X_{k},Y_{k}\right\\}$ by $L_{i}<\lambda$.
11: Update weighted-CSP on $\left\\{X_{k},Y_{k}\right\\}$.
12: Update weighted-SVM on $\left\\{X_{k},Y_{k}\right\\}$.
13: Test weighted-CSP and weighted-SVM on
$\left\\{X^{\prime},Y^{\prime}\right\\}$.
14: Update $v_{k}$ by Eq. 14.
15: end while
16: Select best iteration $K$ by accuracy on
$\left\\{X^{\prime},Y^{\prime}\right\\}$.
17: return Weights at $K$ iteration: $v_{K}$, weighted-CSP and weighted-SVM at
$K$ iteration
Figure 4: Comparison of BCI control accuracy with and without joint learning.
(A) The online accuracy of subjects with and without the joint learning
process. $\ast$ indicates the significance with $t$-test (one-sided,
$p\textless 0.005$). (B) The proportion of “good” trials across the sessions.
(C) The trial-wise BCI control accuracy across trials.
## III Experiments and results
### III-A Experimental settings
We evaluated the proposed human-machine joint learning model with online BCI
experiments in comparison with co-adaptive BCIs without joint learning. A
total of 18 subjects participated in the BCI experiments (eight females and
ten males, ages ranging from 23 to 27): 14 subjects took part in the
experiment with CSP and SVM, and 4 subjects participated in the EEGnet
experiment. Each participant performed two experiments: a joint learning
experiment and a co-adaptive learning experiment (baseline). To mitigate
session-order effects, the two experiments were arranged on two different
days, at least two days apart, and their order was randomized: 9 subjects
performed joint learning first, and the rest performed the co-adaptive
learning experiment first. After each experiment, a simple questionnaire was
filled out by the subject. The experimental settings were as follows:
* •
Joint learning experiment. There were a total of five sessions in the joint
learning experiment. The first one was an initial calibration session without
feedback, which had 60 trials including 30 left and 30 right trials in random
order. After the initial calibration session, there were four joint learning
sessions with feedback, consisting of 20 trials for each direction, with each
trial 9 seconds long. The trial structure is shown in Fig. 3 and Section
II-B1. For preprocessing, a fourth-order Butterworth filter was used to
band-pass the data from 8 Hz to 30 Hz (a filtering sketch is given after this
list). The classifier was updated after every session. The “copy/new”
threshold $T$ was set to 70%. For the SVM-based sample reweighting algorithm,
$\Lambda$ was set to 0.2 for initialization and $\Delta\Lambda$ to 0.05. For
EEGnet, more samples were needed to initialize training, so $\Lambda$ was set
to 0.5 and $\Delta\Lambda$ to 0.1 to reduce the number of training iterations
and save time.
* •
Co-adaptive learning experiment without joint learning (baseline). We used the
classical co-adaptive learning paradigm as the baseline for comparison.
Specifically, the co-adaptive learning method is adopted from [60] (named
RETRAIN), which achieved the best performance therein; the same method was
also used in [61]. The number of sessions in the co-adaptive learning
experiment (control group) was the same as in the joint learning experiment.
After the calibration session, the trials in the remaining training sessions
were provided with feedback from an SVM/EEGnet decoder, which was likewise
updated after every session.
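As referenced above, a minimal sketch of the band-pass preprocessing (our own,
assuming zero-phase filtering with SciPy) is:

```python
# Sketch of the preprocessing described above: a fourth-order Butterworth
# band-pass filter from 8 Hz to 30 Hz at the 1000 Hz sampling rate of the
# recordings (Sec. III-A). filtfilt gives zero-phase filtering, a common
# (assumed) choice for offline slices.
from scipy.signal import butter, filtfilt

def bandpass_8_30(eeg, fs: float = 1000.0, order: int = 4):
    """eeg: array of shape (n_channels, n_samples); returns a filtered copy."""
    b, a = butter(order, [8.0 / (fs / 2), 30.0 / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)
```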
For both the joint learning and the co-adaptive learning experiments, the
software system was set up in Matlab with Psychtoolbox [72], on a computer
equipped with an Intel i7-10700k CPU, 64 GB RAM, and an NVIDIA GeForce RTX
3080 GPU. For CSP feature computation, three pairs of features were selected
for classification. The SVM classifier used a linear kernel; inference took
approximately 0.003 seconds and training cost 5.55$\pm$2.87 seconds. For
EEGnet, we chose ADAM as the optimizer, the same as [44]; inference took
approximately 0.004 seconds, while training took 153$\pm$23 seconds. The
feedback was computed over a one-second window and updated every frame
(1/60 s), with $\alpha$ set to 0.2. For the offline analysis, we cut each
five-second trial into four one-second slices, starting 0.5 seconds and ending
4.5 seconds after the instruction.
The EEG signals were recorded using a wireless EEG system (NSW24, Neuracle)
with a sampling rate of 1000Hz. A total of 20 electrodes were placed on FC5,
FC3, FC1, FCz, FC2, FC4, FC6, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz,
CP2, CP4 and CP6, according to the standard international 10-20 system. The
ground and reference were FPz and CPz, respectively. The impedances of all
electrodes were below the recommended value (10k$\Omega$) in the experiments.
### III-B Comparison of BCI control performance
Here we compared the performance of the joint learning and co-adaptive
learning methods with both SVM and EEGnet. Performance was evaluated by
online BCI control accuracy: we took the higher accuracy of the last two
sessions as the online BCI control accuracy, where a session's accuracy is the
average over all trials in that session.
#### III-B1 Analysis on SVM with joint learning
Fig. 4A illustrated the online BCI control accuracy for both joint learning
and co-adaptive learning with the fourteen subjects. Overall, the BCI control
accuracy of joint learning sessions was significantly higher than that of
co-adaptive learning sessions without joint learning (paired $t$-test,
$p\textless 0.005$). The average accuracies with and without joint learning
were 74.75% and 69.06%, respectively. For 11 out of 14 subjects, joint
learning achieved a superior BCI control accuracy compared with the baseline
co-adaptive experiments. The performance increases were most obvious for
subjects with lower BCI control performance, indicating that the joint learning
process effectively facilitated the BCI training process.
In Fig. 4B, we examined the discrete performance by the proportion of
successful trials. Specifically, we defined a trial with accuracy above the
threshold $T$ (70%) as a successful trial, which can be used as the discrete
control signal. Overall, the joint learning sessions obtained a higher
proportion of successful trials compared with the baseline. With the joint
learning system, the proportion of successful trials rose steadily over the
four sessions, indicating the improvement of the subjects’ BCI control
ability. Specifically, the proportion of successful trials increased to 60.0%,
64.2%, and 65.5% at the 2nd, 3rd, and 4th sessions, respectively. Without joint
learning, the proportion rose over the first two sessions, and the increase
became trivial afterward. By session 4, the joint learning system
significantly outperformed the co-adaptive system, by 9.83% (paired $t$-test,
$p\textless 0.05$).
Figure 5: Comparison of BCI control accuracy with and without joint learning
on EEGnet. (A) The offline accuracy of subjects with and without the joint
learning process. $\ast$ indicates the significance with $t$-test (one-sided,
$p\textless 0.005$). (B) Comparison between CSP SVM method and EEGnet. $\ast$
indicates the significance with $t$-test (one-sided, $p\textless 0.005$). (C)
The online accuracy of EEGnet both with and without joint learning. Figure 6:
Analysis of the joint learning process. With the joint learning framework,
subjects rapidly learn to generate well-separable signals (solid lines) with
the guidance of the algorithm (dashed lines). Results on all subjects are in
the Supplementary Material Fig. 1.
We further compared the trial-wise accuracy of BCI control. The trial-wise
accuracy was computed as the mean online accuracy during each trial, smoothed
with a 10-point window. Fig. 4C illustrates the average accuracy of every trial
over all the subjects. For both joint and co-adaptive systems, the initial
accuracies were around 60%. With the learning process, the accuracy of joint
learning rose rapidly to 76.58%, while the accuracy of the baseline
co-adaptive learning increased more slowly, to 70.79%. The results indicated
that subjects learned more effectively with the joint learning system.
#### III-B2 Analysis on EEGnet with joint learning
We conducted both online and pseudo-online experiments on EEGnet with the
proposed joint learning framework. The results are presented in Fig. 5.
Specifically, we conducted pseudo-online experiments on the EEG data of 14
subjects, comparing the performance of the EEGnet algorithm with and without
joint learning. As depicted in Fig. 5A, 11 out of 14 subjects demonstrated a
significant increase in performance with the joint learning algorithm.
Moreover, the average classification accuracy improved from 62.59% to 69.03%
($p\textless 0.005$), indicating that the proposed framework is effective and
robust for both SVM and EEGnet algorithms. These findings suggested that the
joint learning framework helps subjects generate better signals and improves
performance in MI-based BCI systems.
Fig. 5B demonstrated the performance comparison between SVM and EEGnet
methods. The results indicate a significant advantage of the CSP SVM method
over the deep learning EEGnet method, with accuracies of 74.75% and 69.03%,
respectively. This performance difference may be attributed to the fact that
our framework starts the discriminator with a small number of samples, which
can lead to overfitting in the neural network.
We further conducted online experiments on 4 subjects with EEGnet, and the
results were shown in Fig. 5C. Both lines represent the online accuracy in the
feedback sessions. Without the joint learning framework, the average accuracy
increased slowly, from 63.24% to 65.62%. However, with the joint learning
paradigm, subjects demonstrated a faster learning process, starting at 61.40%,
rising to 64.88% and 67.85%, ending at 69.49%. These results provided evidence
of the effectiveness of our proposed joint learning method and its flexibility
in working with different algorithms.
Figure 7: Changes of subject’s brain signal distributions during the joint
learning process. Results on all subjects are in the Supplementary Material
Fig. 2.
### III-C Effectiveness of the joint learning method
#### III-C1 Analysis of the joint learning process
The major feature of human-machine joint learning is that the machine can
guide the learning process of the subject for more efficient BCI training. In
the MI task, the joint goal of the subject and the decoder is to maximize the
discriminative ability of the “left” and “right” brain signal patterns on the
decoder’s space.
Thus, we illustrated the discriminative ability of the “left” and “right”
signals during the training process in Fig. 6. The discriminative ability was
evaluated by the SVM score, indicating the distance to the classification
plane in the hyper-space. The blue and red lines stood for the scores of
“left” and “right”, respectively. With the joint learning process, the blue and red
lines became distant rapidly, indicating the improvement of discriminative
ability and the effectiveness of the joint learning process. Specifically, for
subject 1 in Fig. 6A, the initial SVM scores for both “left” and “right” were
around 0.5, which were difficult for the decoder to discriminate, indicating
poor signal quality. The “left” and “right” signals became more separated
during training. After about 160 training trials (at the second training
session), the SVM scores of brain signals for “left” and “right” were -1.59 and
0.42, respectively, which were well discriminated by the decoder.
At the end of the training process, the brain signals for “left” and “right”
were discriminated with SVM scores of -0.97 and 1.39, respectively. With the
traditional co-adaptive learning process in Fig. 6B, learning was inefficient
compared with the joint learning process. Specifically, the brain signals were
difficult to classify until the 250th trial, with SVM scores of -1.04 and 0.33,
and the SVM scores were -0.88 and 0.65 at the end of training. For subject 13,
similar results were observed in Fig. 6C and Fig. 6D. It is worth noting that,
for a fair comparison, the two chosen subjects performed the experiments in
different orders. Subject 1 did the baseline experiment first
and subject 13 did the joint learning first. The results strongly suggested
that the joint learning process improved the efficiency of BCI learning
effectively.
Figure 8: Signal changes of subjects. (A) Sample distance between the left
and right signals across sessions. (B) Analysis of CSP features before and
after joint learning. Results on all subjects are in the Supplementary
Material Fig. 3.
#### III-C2 Analysis of the decoder’s guidance to the subject
Here we further investigated the effectiveness of the joint learning process
by examining how the decoder guided the learning of the subject. In Fig. 6,
the dashed lines represented the mean of SVM scores accounting for the sample
weights, which indicated the expected scores of subjects’ signals by the
decoder. In Fig. 6A and Fig. 6C, it was observed that the decoder’s
expectations obtained by sample reweighting had higher absolute values for both
“left” and “right” brain signals, and the gap between the expected scores of
the two classes became larger during the training process. It
indicated that the brain signal quality improved with the training process,
and the decoder adaptively tuned itself to cope with the learning of the
subject. Also, the expectations from the decoder suggested the joint optimal
point for both humans and machines. With the training process, the SVM scores
of brain signals became closer to the optimal point, indicating the joint
learning process drove both the subject and the decoder to learn toward the
“efficient” direction effectively. With the traditional co-adaptive approach,
in contrast, the decoder’s expectations showed lower absolute values than the
sample scores; thus the learning was mostly driven by the subject’s own
learning process, which degraded the efficiency of learning.
In order to illustrate the change of scores in detail, distributions of scores
in different sessions across the experiments were shown in Fig. 7. The red and
blue solid curves stood for the distributions of left and right scores, and the
dashed curves were the weighted scores from feedback. All the distributions
were fitted by Gaussian distributions. Generally, the red and blue solid curves
followed the separation of the dashed curves: as the dashed curves moved
farther apart, the gap between the solid curves also became more significant.
This not only demonstrated the learning process of subjects but also revealed
the details of the guidance provided by the sample reweighting algorithm.
Quantitatively, the differences of $\mu$ (peaks of the fitted distributions)
between the left and right distributions were computed to show the separation.
For subject 1, the difference was 0.44 at the first session for the generated
signals, but the difference for feedback was around 3.2. Over the training
process, the difference for the generated signals gently rose to 2.1 in the
last session, and the differences for feedback were always larger than those
for the generated signals, leading the way. The change was almost the same for
subject 13: the difference for the generated signals grew from 0.1 to 4.7,
while the difference for feedback increased from 5.1 to 6.6. These changes
meant subjects learned to generate better signals with more separable
distributions under the guidance of feedback, explaining the process of joint
learning.
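Since a Gaussian fit places its peak $\mu$ at the (weighted) sample mean, the
$\mu$-differences quoted above reduce to differences of class means. A minimal
sketch (function name ours): passing the sample weights yields the dashed
feedback curves, omitting them yields the solid curves of the generated
signals.

```python
import numpy as np

def mu_gap(scores_left, scores_right, weights_left=None, weights_right=None):
    # The Gaussian-fit peak mu equals the (weighted) mean of the scores,
    # so the separation is the absolute difference of class means.
    mu_l = np.average(scores_left, weights=weights_left)
    mu_r = np.average(scores_right, weights=weights_right)
    return abs(mu_r - mu_l)
```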
#### III-C3 Analysis of brain signals with the learning process
Here we analyzed how the brain signals of the subjects changed during the
learning process. We first illustrated the distance between “left” and “right”
signals across sessions in Fig. 8A. In this experiment, we re-calculated the
CSP features offline with brain signals from the last two sessions, and the
filters were applied to all 5 sessions for CSP feature computation. The
distance was computed by:
$\mathrm{distance}=\dfrac{D_{inter}}{D_{intra}}$ (21)
where $D_{inter}$ stood for the inter-group distance, calculated as the
Euclidean distance between the mean points of the two groups of CSP features,
and $D_{intra}$ represented the intra-group distance, expressed as the
Euclidean norm of the standard deviations of the two groups. We illustrated the
average distance over 14 subjects in Fig. 8A.
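A sketch of Eq. (21); under one plausible reading of the text, the intra-group
term is the Euclidean norm of the concatenated per-feature standard deviations
of the two groups.

```python
import numpy as np

def class_distance(feat_left, feat_right):
    """Eq. (21): inter-group over intra-group distance of CSP features.

    feat_left, feat_right: arrays of shape (n_trials, n_features).
    """
    d_inter = np.linalg.norm(feat_left.mean(axis=0) - feat_right.mean(axis=0))
    d_intra = np.linalg.norm(np.concatenate([feat_left.std(axis=0),
                                             feat_right.std(axis=0)]))
    return d_inter / d_intra
```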
With the training process, the distance between “left” and “right” brain
features became more separable for both joint learning and the baseline
approach, indicating the learning process of the subject. With the joint
learning approach, the distance between “left” and “right” increased more
rapidly from 0.229 to 0.263. The increase in joint learning was 0.034, which
was 0.020 higher than the baseline approach without joint learning. In the
last two sessions, the average distance between “left” and “right” brain
features was 0.261 for the joint learning approach, which was 0.011 higher
than the baseline method. Further, we illustrated the CSP features before and
after the joint learning experiments of a subject in Fig. 8B. For both before
and after learning conditions, we selected the most discriminative CSP
features for comparison. After the joint learning process, the subject learned
the typical MI pattern with the CSP feature peaking around the C3 channel.
The results demonstrated that the joint learning approach could help the
subject learn brain signal patterns both effectively and rapidly.
Figure 9: Analysis of the “copy/new” paradigm. (A) The accuracy of different
trials. (B) Average sample weights of different trials.
### III-D Effectiveness of the “copy/new” paradigm
In this section, we evaluated the effectiveness of our “copy/new” paradigm
from the aspects of accuracy and weight. We compared the averaged accuracy
before and after a “copy/new” instruction. Specifically, we labeled the trials
with four groups: 1) the first trial in a “copy” pair (before a “copy”
instruction); 2) the next trial in a “copy” pair (after a “copy” instruction);
3) the first trial in a “new” pair (before a “new” instruction); 4) the next
trial in a “new” pair (after a “new” instruction).
Overall, the “copy” instruction guided the subjects to maintain a good brain
signal pattern effectively, while the next trials after a “new” instruction
demonstrated a significant performance increase. As shown in Fig. 9A, the
first trials in a “copy” pair demonstrated a high average accuracy of 88%;
after the “copy” instruction, the performance of the next trial slightly
decreased while still keeping a high average accuracy of 81%. The results
suggested that the “copy” instruction could guide the subject to repeat and
keep a good brain signal pattern. With the “new” instruction, the first trials
had low accuracies of about 43%. After the “new” instruction, the
average accuracy of the next trial rose to 60%. The results showed that the
subjects tried to generate a new brain signal pattern different from the
previous ones to seek a better performance, which was in line with
expectations.
Furthermore, we analyzed the weights in different types of trials to show the
cooperation between our paradigm and algorithm. Fig. 9B presented the weight
in different types of trials. The weights were calculated by averaging the
weights on trials of the selected type. Overall, although the weights varied
in a big range, the average weight on the next trials was slightly higher than
the weight on the first trials. Specifically, the mean weight on the first
trials was 0.58, and the mean weight on the next trials was 0.62 ($p\textless
0.05$ with one-sided t-test), which explained the better separability of the
next trials. This indicated that the proposed “copy/new” instructions worked
well with the sample reweighting algorithm, which also demonstrated the
validity of the proposed learning paradigm.
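The accuracy and weight statistics of Fig. 9 are conditional averages over the
four trial types defined above; a minimal sketch (type labels and function name
ours):

```python
import numpy as np

TYPES = ["copy_first", "copy_next", "new_first", "new_next"]

def group_stats(trial_type, accuracy, weight):
    """Mean accuracy and mean sample weight per trial type.

    trial_type: one label from TYPES per trial;
    accuracy, weight: per-trial values.
    """
    trial_type = np.asarray(trial_type)
    accuracy, weight = np.asarray(accuracy), np.asarray(weight)
    return {t: (accuracy[trial_type == t].mean(),
                weight[trial_type == t].mean())
            for t in TYPES}
```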
Figure 10: Analysis of the questionnaires. (A) Scores before and after the
experiments. (B) Scores of the interaction manner. (C) Scores of fatigue
level.
### III-E Analysis of the questionnaires
The results of the questionnaire were presented to analyze the experiments from
a subjective point of view. The questionnaire consisted of six simple
questions, measured by scores from one to seven, which are shown in detail in
the supplementary material. The first four questions were about the MI ability
of the left and right hands, respectively. One pair was answered before the
experiments and the other pair after the experiments (“How do you think about
your motor imagery ability of the left/right hand before/after the
experiment?”, where one stands for very poor and seven stands for very good).
The result was plotted
in Fig. 10A, where the score change was calculated by the sum of two questions
after the experiments minus the sum before the experiments. It could be seen
from the figure that red lines often had higher values than blue lines,
which meant subjects thought that they had better MI ability after our
experiment with joint learning. In detail, the mean value of score change for
the co-adaptive paradigm was -0.14, while for our joint learning paradigm, the
mean change was 1.64. This meant subjects thought the co-adaptive training did
not have significant effects, while our proposed joint learning method had a
better training effect. This result was self-reported by subjects, which might
be because the better control accuracy made subjects feel more confident.
Question 5 was about the degree of interaction and the level of understanding
of the feedback (“Do you think you can understand the feedback?”, where one
stands for no and seven stands for very good understanding). The results are
shown in Fig. 10B. Our
proposed method turned out to be slightly better than the control group,
achieving an average score of around 6.29 (5.5 in the control group). This
might be due to the “copy/new” instructions in our paradigm, giving subjects a
stronger sense of participation. Also, with the instructions, subjects could
judge the separability and quality of the generated signals more clearly, which
further helped the interaction during the experiments.
The last question was about fatigue (How much fatigue do you feel during the
experiment? one stands for none, and seven stands for too much and
intolerable). Fig. 10C showed the averaged score, in which scores of both
experiments were almost the same (3.29 in the control group and 3.35 in our
experiments), so there was no significant difference in the effort required by
the two experiments. Our paradigm simply helped subjects to change or hold
their thoughts, which did not increase their workload.
Figure 11: (A) Accuracy with different $\Lambda$ and $\Delta\Lambda$. (B)
Accuracy with different numbers of electrodes. (C) Layout of electrodes.
## IV Discussion
### IV-A Novelty and contributions
Spontaneous BCI training is often hindered by the training burden it poses.
Existing co-adaptive learning methods, which adopt an alternate learning
process, have shown suboptimal performance. In such methods, either the
decoder or the user learns at a time [60, 61]. However, human and computer
algorithms are heterogeneous in information representation and learning
mechanisms. Therefore, it is difficult to let humans and computers learn
together in a uniform framework. In this study, we use brain signal samples
generated by subjects as the intermediary to enable the joint learning process
between both sides. In the human learning process, the subject tries to
generate “better” brain samples with the guidance of the computer, which tells
the subject what are “good” samples. At the same time, the computer
continuously updates itself adaptively to the human learning process. Thus,
the learning process of both humans and computers can be optimized
simultaneously.
In this paper, we first modeled the aforementioned joint learning process in a
uniform model, with the Markov-based human learning process and the self-paced
learning-based computer learning process. We then implemented the framework by
proposing a novel “copy/new” paradigm to achieve the joint learning process.
Online experiments demonstrated the effectiveness of the joint learning
process.
### IV-B Sensitivity of parameters
In this section, we conducted experiments to investigate the sensitivity of
the algorithm’s parameters. Specifically, we focused on two parameters,
$\Lambda$ and $\Delta\Lambda$, which control the initialization and training
pace of the algorithm. To evaluate their influence, we performed a grid search
on the data from 14 subjects. We varied $\Lambda$ in the range of [0.2, 0.4,
0.6, 0.8] and $\Delta\Lambda$ in the range of [0.05, 0.10, 0.15, 0.20]. The
results are presented in Fig. 11A. We observed that the accuracy ranged from
75.81% to 78.11%, with no significant difference found among the different
parameter settings. The highest accuracy was achieved with $\Lambda=0.2$ and
$\Delta\Lambda=0.05$, which is consistent with the parameter settings used in
the online joint learning experiment. Conversely, the lowest accuracy was
obtained with $\Lambda=0.2$ and $\Delta\Lambda=0.20$. Moreover, we found that
the accuracy decreased as $\Delta\Lambda$ increased when
$\Lambda=0.2$. This suggests that when the number of initial samples is
relatively small, more rounds are needed to reduce the effect of random
initialization. In contrast, when more samples are included in the
initialization, the results tend to be more stable.
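The sensitivity study amounts to the grid search sketched below, where
`evaluate` is a hypothetical callback that re-runs the pseudo-online
reweighting pipeline and returns the mean accuracy over the 14 subjects.

```python
import numpy as np

LAMBDAS = [0.2, 0.4, 0.6, 0.8]
DELTAS = [0.05, 0.10, 0.15, 0.20]

def grid_search(evaluate):
    # acc[i, j] is the mean accuracy for (LAMBDAS[i], DELTAS[j]).
    acc = np.array([[evaluate(l, d) for d in DELTAS] for l in LAMBDAS])
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return acc, (LAMBDAS[i], DELTAS[j])
```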
### IV-C Electrode selection
Electrode selection is a critical issue for motor imagery (MI) performance, in
addition to parameters such as $\Lambda$ and $\Delta\Lambda$. The primary area
for MI in the brain is the primary motor cortex (M1), which is located in the
precentral gyrus of the frontal lobe [47]. Therefore, electrodes of interest
are typically C3 and C4, which are located over the left and right primary
motor cortex, respectively [73]. In the online experiments, we recorded data
from 20 electrodes, along with CPz as a reference. As shown in Fig. 11C, the
selected electrode set was centered at C3 and C4. To investigate the impact of
electrode selection further, we reduced the number of electrodes from the
outer side towards the center. We reduced the original 20 electrodes (FC5,
FC3, FC1, FCz, FC2, FC4, FC6, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz,
CP2, CP4, CP6) to 14 electrodes (FC3, FC1, FCz, FC2, FC4, C3, C1, Cz, C2, C4,
CP3, CP1, CP2, CP4), then 8 electrodes (FC3, FCz, FC4, C3, Cz, C4, CP3, CP4),
and finally to 3 electrodes (C3, Cz, C4). The accuracy change is plotted in
Fig. 11B. As expected, the accuracy decreased with the reduction of electrodes
from 74.75% to 63.97%. The average accuracy of our proposed method was always
higher than that of the method without joint learning. However, the advantage
diminished from 5.69% to 1.64%. This could be attributed to the decreasing
number of electrodes resulting in less useful information. This result also
helps to demonstrate the effectiveness of our proposed method.
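The nested electrode subsets of this analysis can be encoded directly; the
channel lists follow the text (CPz among the recorded sites served as the
reference), and the helper function is illustrative.

```python
ELECTRODE_SETS = {
    20: ["FC5", "FC3", "FC1", "FCz", "FC2", "FC4", "FC6",
         "C5", "C3", "C1", "Cz", "C2", "C4", "C6",
         "CP5", "CP3", "CP1", "CPz", "CP2", "CP4", "CP6"],  # CPz: reference
    14: ["FC3", "FC1", "FCz", "FC2", "FC4",
         "C3", "C1", "Cz", "C2", "C4", "CP3", "CP1", "CP2", "CP4"],
    8:  ["FC3", "FCz", "FC4", "C3", "Cz", "C4", "CP3", "CP4"],
    3:  ["C3", "Cz", "C4"],
}

def select_channels(eeg, channel_names, n_electrodes):
    """Keep only the rows of eeg (channels x samples) in the chosen subset."""
    keep = [i for i, ch in enumerate(channel_names)
            if ch in ELECTRODE_SETS[n_electrodes]]
    return eeg[keep, :]
```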
### IV-D Limitations and future works
#### IV-D1 Multi-classification problem
In this paper, we proposed a novel joint learning framework for effective BCI
training, taking a binary-classification MI system as an example. With the
framework, different base algorithms can be deployed to deal with the training
problem. Our framework could easily be adapted to the multi-classification
problem: from the algorithmic perspective, the key part is to evaluate samples
according to their loss so as to guide the human learning process, while from
the paradigm perspective, the “copy/new” instructions would remain the same.
As a result, the challenge lies in the base algorithm. In the experiments, we
adopted the CSP SVM and EEGnet as the base algorithms. EEGnet was introduced to
handle all kinds of EEG problems, so it can conveniently deal with
multi-classification. The CSP SVM, in contrast, was designed for the binary
classification problem [25]; however, there are improvements that enhance CSP
for multiclass MI [27]. With these new methods, our proposed framework promises
to achieve good results on multi-classification problems.
#### IV-D2 Markov process and endogenous attention
As discussed in Section II-A1, the Markov process is a widely used framework
for modeling human decision-making processes. Recently, it has been applied to
the study of endogenous attention, as it can accurately reproduce reaction
times in GO-GO experiments [64]. Moreover, Markov models have been employed in
the field of visual decision-making [65], and hidden Markov models have been
used to reveal endogenous neural activities that trigger perceptual changes by
analyzing dynamic neural patterns [74]. In this paper, we model the learning
process, including decision-making about MI strategies, using a Markov
process. In future work, we aim to expand the model to include EEG patterns
and features. This will provide further insights into human learning and
decision-making processes in MI training.
#### IV-D3 Applications on various EEG devices
The emergence of advanced sensor technology has facilitated the creation of
compact and intelligent wearable electroencephalography (EEG) devices tailored
for personal use [75]. Devices with novel dry EEG sensors, which is convenient
to wear and remove, have enhanced the usability of BCI systems [76]. Wireless
devices with dry and noncontact EEG electrode sensor have also been introduced
[77]. These devices are intended for real-life usage scenarios, which are
significantly more complex than controlled laboratory environments.
Consequently, ensuring the stability of BCI systems poses a considerable
challenge. Our proposed human-machine joint learning framework exhibits vast
prospects for application and further work in this field.
## V Conclusion
Towards efficient and effective BCI training, the proposed human-machine joint
learning framework enables the simultaneous learning of both the user and the
decoder. We model the human-machine joint learning process in a uniform
formulation and propose a novel joint learning framework for efficient BCI
training. Online and pseudo-online experiments demonstrate the effectiveness
of the proposed joint learning framework in rapid BCI learning. The proposed
framework can be extended to MI tasks with more control degrees and also has
the potential to guide the subject to generate new brain patterns out of the
MI manner, to broaden the usability of BCI systems.
## VI ACKNOWLEDGMENT
This work was partly supported by grants from the STI 2030 Major Projects
(2021ZD0200400), the Natural Science Foundation of China (U1909202 and
61925603), and the Key Research and Development Program of Zhejiang Province,
China (2020C03004).
## References
* [1] J. J. Shih, D. J. Krusienski, and J. R. Wolpaw, “Brain-computer interfaces in medicine,” in _Mayo Clinic Proceedings_ , vol. 87, no. 3. Elsevier, 2012, pp. 268–279.
* [2] G. Pan, J.-J. Li, Y. Qi, H. Yu, J.-M. Zhu, X.-X. Zheng, Y.-M. Wang, and S.-M. Zhang, “Rapid decoding of hand gestures in electrocorticography using recurrent neural networks,” _Frontiers in neuroscience_ , vol. 12, p. 555, 2018.
* [3] K. K. Ang and C. Guan, “Eeg-based strategies to detect motor imagery for control and rehabilitation,” _IEEE Transactions on Neural Systems and Rehabilitation Engineering_ , vol. 25, no. 4, pp. 392–401, 2016.
* [4] I. Lazarou, S. Nikolopoulos, P. C. Petrantonakis, I. Kompatsiaris, and M. Tsolaki, “Eeg-based brain–computer interfaces for communication and rehabilitation of people with motor impairment: a novel approach of the 21st century,” _Frontiers in human neuroscience_ , vol. 12, p. 14, 2018.
* [5] F. Cincotti, F. Pichiorri, P. Aricò, F. Aloise, F. Leotta, F. de Vico Fallani, J. d. R. Millán, M. Molinari, and D. Mattia, “Eeg-based brain-computer interface to support post-stroke motor rehabilitation of the upper limb,” in _2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society_. IEEE, 2012, pp. 4112–4115.
* [6] J. Li, S. Qiu, Y.-Y. Shen, C.-L. Liu, and H. He, “Multisource transfer learning for cross-subject eeg emotion recognition,” _IEEE transactions on cybernetics_ , vol. 50, no. 7, pp. 3281–3293, 2019.
* [7] S. Pancholi, A. Giri, A. Jain, L. Kumar, and S. Roy, “Source aware deep learning framework for hand kinematic reconstruction using eeg signal,” _IEEE Transactions on Cybernetics_ , 2022.
* [8] R. Xu, N. Jiang, C. Lin, N. Mrachacz-Kersting, K. Dremstrup, and D. Farina, “Enhanced low-latency detection of motor intention from eeg for closed-loop brain-computer interface applications,” _IEEE Transactions on Biomedical Engineering_ , vol. 61, no. 2, pp. 288–296, 2013.
* [9] Y. Qi, B. Liu, Y. Wang, and G. Pan, “Dynamic ensemble modeling approach to nonstationary neural decoding in brain-computer interfaces,” _Advances in Neural Information Processing Systems_ , vol. 32, 2019.
* [10] T. Gu, Z. Wang, X. Xu, D. Li, H. Yang, and W. Du, “Frame-level teacher-student learning with data privacy for eeg emotion recognition,” _IEEE Transactions on Neural Networks and Learning Systems_ , 2022.
* [11] J. Bhattacharya _et al._ , “Complexity analysis of spontaneous eeg,” _Acta neurobiologiae experimentalis_ , vol. 60, no. 4, pp. 495–502, 2000.
* [12] F. Di Russo, A. Martínez, M. I. Sereno, S. Pitzalis, and S. A. Hillyard, “Cortical sources of the early components of the visual evoked potential,” _Human brain mapping_ , vol. 15, no. 2, pp. 95–111, 2002.
* [13] W. Yijun, W. Ruiping, G. Xiaorong, and G. Shangkai, “Brain-computer interface based on the high-frequency steady-state visual evoked potential,” in _Proceedings. 2005 First International Conference on Neural Interface and Control, 2005._ IEEE, 2005, pp. 37–39.
* [14] H. Wang, Y. Qi, H. Yu, Y. Wang, C. Liu, G. Hu, and G. Pan, “Rcit: An rsvp-based concealed information test framework using eeg signals,” _IEEE Transactions on Cognitive and Developmental Systems_ , 2021.
* [15] S. Hu, H. Wang, J. Zhang, W. Kong, Y. Cao, and R. Kozma, “Comparison analysis: Granger causality and new causality and their applications to motor imagery,” _IEEE transactions on neural networks and learning systems_ , vol. 27, no. 7, pp. 1429–1444, 2015.
* [16] S. Gao, Y. Wang, X. Gao, and B. Hong, “Visual and auditory brain–computer interfaces,” _IEEE Transactions on Biomedical Engineering_ , vol. 61, no. 5, pp. 1436–1447, 2014.
* [17] K. Choi and A. Cichocki, “Control of a wheelchair by motor imagery in real time,” in _International conference on intelligent data engineering and automated learning_. Springer, 2008, pp. 330–337.
* [18] A. J. Doud, J. P. Lucas, M. T. Pisansky, and B. He, “Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface,” _PloS one_ , vol. 6, no. 10, p. e26322, 2011.
* [19] N. Jiang, L. Gizzi, N. Mrachacz-Kersting, K. Dremstrup, and D. Farina, “A brain–computer interface for single-trial detection of gait initiation from movement related cortical potentials,” _Clinical Neurophysiology_ , vol. 126, no. 1, pp. 154–159, 2015.
* [20] Y. Qi, L. Ding, Y. Wang, and G. Pan, “Learning robust features from nonstationary brain signals by multi-scale domain adaptation networks for seizure prediction,” _IEEE Transactions on Cognitive and Developmental Systems_ , 2021.
* [21] Z. Wu, R. Reddy, G. Pan, N. Zheng, P. F. Verschure, Q. Zhang, X. Zheng, J. C. Principe, A. Kreilinger, M. Rohm _et al._ , “The convergence of machine and biological intelligence,” _IEEE Intelligent Systems_ , vol. 28, no. 5, pp. 28–43, 2013.
* [22] B. J. Edelman, J. Meng, D. Suma, C. Zurn, E. Nagarajan, B. Baxter, C. C. Cline, and B. He, “Noninvasive neuroimaging enhances continuous neural tracking for robotic device control,” _Science robotics_ , vol. 4, no. 31, 2019.
* [23] H.-J. Hwang, K. Kwon, and C.-H. Im, “Neurofeedback-based motor imagery training for brain–computer interface (bci),” _Journal of neuroscience methods_ , vol. 179, no. 1, pp. 150–156, 2009.
* [24] C. Vidaurre and B. Blankertz, “Towards a cure for bci illiteracy,” _Brain topography_ , vol. 23, no. 2, pp. 194–198, 2010.
* [25] K. K. Ang, Z. Y. Chin, H. Zhang, and C. Guan, “Filter bank common spatial pattern (fbcsp) in brain-computer interface,” in _2008 IEEE international joint conference on neural networks (IEEE world congress on computational intelligence)_. IEEE, 2008, pp. 2390–2397.
* [26] C. Ju and C. Guan, “Tensor-cspnet: A novel geometric deep learning framework for motor imagery classification,” _IEEE Transactions on Neural Networks and Learning Systems_ , pp. 1–15, 2022.
* [27] S. Kumar, T. K. Reddy, V. Arora, and L. Behera, “Formulating divergence framework for multiclass motor imagery eeg brain computer interface,” in _ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2020, pp. 1344–1348.
* [28] T. K. Reddy and L. Behera, “Driver drowsiness detection: An approach based on intelligent brain–computer interfaces,” _IEEE Systems, Man, and Cybernetics Magazine_ , vol. 8, no. 1, pp. 16–28, 2022.
* [29] K. Sadatnejad and F. Lotte, “Riemannian channel selection for bci with between-session non-stationarity reduction capabilities,” _IEEE Transactions on Neural Systems and Rehabilitation Engineering_ , vol. 30, pp. 1158–1171, 2022.
* [30] R. Zhang, P. Xu, L. Guo, Y. Zhang, P. Li, and D. Yao, “Z-score linear discriminant analysis for eeg based brain-computer interfaces,” _PloS one_ , vol. 8, no. 9, p. e74433, 2013.
* [31] J. H. Friedman, “Regularized discriminant analysis,” _Journal of the American statistical association_ , vol. 84, no. 405, pp. 165–175, 1989.
* [32] X. Lei, P. Yang, and D. Yao, “An empirical bayesian framework for brain–computer interfaces,” _IEEE Transactions on Neural Systems and Rehabilitation Engineering_ , vol. 17, no. 6, pp. 521–529, 2009.
* [33] B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K.-R. Müller, “Single-trial analysis and classification of erp components—a tutorial,” _NeuroImage_ , vol. 56, no. 2, pp. 814–825, 2011.
* [34] A. Subasi and M. I. Gursoy, “Eeg signal classification using pca, ica, lda and support vector machines,” _Expert systems with applications_ , vol. 37, no. 12, pp. 8659–8666, 2010.
* [35] A. Bhardwaj, A. Gupta, P. Jain, A. Rani, and J. Yadav, “Classification of human emotions from eeg signals using svm and lda classifiers,” in _2015 2nd International Conference on Signal Processing and Integrated Networks (SPIN)_. IEEE, 2015, pp. 180–185.
* [36] P. Wang, A. Jiang, X. Liu, J. Shang, and L. Zhang, “Lstm-based eeg classification in motor imagery tasks,” _IEEE transactions on neural systems and rehabilitation engineering_ , vol. 26, no. 11, pp. 2086–2095, 2018.
* [37] J. Zhou, M. Meng, Y. Gao, Y. Ma, and Q. Zhang, “Classification of motor imagery eeg using wavelet envelope analysis and lstm networks,” in _2018 Chinese Control And Decision Conference (CCDC)_. IEEE, 2018, pp. 5600–5605.
* [38] S. Tortora, S. Ghidoni, C. Chisari, S. Micera, and F. Artoni, “Deep learning-based bci for gait decoding from eeg with lstm recurrent neural network,” _Journal of neural engineering_ , vol. 17, no. 4, p. 046011, 2020.
* [39] X. Du, C. Ma, G. Zhang, J. Li, Y.-K. Lai, G. Zhao, X. Deng, Y.-J. Liu, and H. Wang, “An efficient lstm network for emotion recognition from multichannel eeg signals,” _IEEE Transactions on Affective Computing_ , 2020.
* [40] G. Dai, J. Zhou, J. Huang, and N. Wang, “Hs-cnn: a cnn with hybrid convolution scale for eeg motor imagery classification,” _Journal of neural engineering_ , vol. 17, no. 1, p. 016025, 2020.
* [41] D. Zhang, L. Yao, K. Chen, S. Wang, X. Chang, and Y. Liu, “Making sense of spatio-temporal preserving representations for eeg-based human intention recognition,” _IEEE transactions on cybernetics_ , vol. 50, no. 7, pp. 3033–3044, 2019.
* [42] J.-S. Bang, M.-H. Lee, S. Fazli, C. Guan, and S.-W. Lee, “Spatio-spectral feature representation for motor imagery classification using convolutional neural networks,” _IEEE Transactions on Neural Networks and Learning Systems_ , 2021.
* [43] Y. Hou, S. Jia, X. Lun, Z. Hao, Y. Shi, Y. Li, R. Zeng, and J. Lv, “Gcns-net: a graph convolutional neural network approach for decoding time-resolved eeg motor imagery signals,” _IEEE Transactions on Neural Networks and Learning Systems_ , 2022.
* [44] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, “Eegnet: a compact convolutional neural network for eeg-based brain–computer interfaces,” _Journal of neural engineering_ , vol. 15, no. 5, p. 056013, 2018.
* [45] V. Gupta, J. Meenakshinathan, T. K. Reddy, and L. Behera, “Performance study of neural structured learning using riemannian features for bci classification,” in _2022 National Conference on Communications (NCC)_. IEEE, 2022, pp. 297–301.
* [46] B. Blankertz, G. Dornhege, M. Krauledat, K.-R. Müller, and G. Curio, “The non-invasive berlin brain–computer interface: fast acquisition of effective performance in untrained subjects,” _NeuroImage_ , vol. 37, no. 2, pp. 539–550, 2007.
* [47] G. Pfurtscheller and C. Neuper, “Motor imagery and direct brain-computer communication,” _Proceedings of the IEEE_ , vol. 89, no. 7, pp. 1123–1134, 2001.
* [48] M. Mihara, I. Miyai, N. Hattori, M. Hatakenaka, H. Yagura, T. Kawano, M. Okibayashi, N. Danjo, A. Ishikawa, Y. Inoue _et al._ , “Neurofeedback using real-time near-infrared spectroscopy enhances motor imagery related cortical activation,” _PloS one_ , vol. 7, no. 3, p. e32234, 2012.
* [49] Z. Wang, Y. Zhou, L. Chen, B. Gu, S. Liu, M. Xu, H. Qi, F. He, and D. Ming, “A bci based visual-haptic neurofeedback training improves cortical activations and classification performance during motor imagery,” _Journal of neural engineering_ , vol. 16, no. 6, p. 066012, 2019.
* [50] F. Nijboer, A. Furdea, I. Gunst, J. Mellinger, D. J. McFarland, N. Birbaumer, and A. Kübler, “An auditory brain–computer interface (bci),” _Journal of neuroscience methods_ , vol. 167, no. 1, pp. 43–50, 2008.
* [51] X. Shu, L. Yao, X. Sheng, D. Zhang, and X. Zhu, “Enhanced motor imagery-based bci performance via tactile stimulation on unilateral hand,” _Frontiers in human neuroscience_ , vol. 11, p. 585, 2017.
* [52] L. Yao, N. Jiang, N. Mrachacz-Kersting, X. Zhu, D. Farina, and Y. Wang, “Reducing the calibration time in somatosensory bci by using tactile erd,” _IEEE Transactions on Neural Systems and Rehabilitation Engineering_ , vol. 30, pp. 1870–1876, 2022.
* [53] Y. Ono, K. Wada, M. Kurata, and N. Seki, “Enhancement of motor-imagery ability via combined action observation and motor-imagery training with proprioceptive neurofeedback,” _Neuropsychologia_ , vol. 114, pp. 134–142, 2018.
* [54] S. Perdikis and J. d. R. Millan, “Brain-machine interfaces: a tale of two learners,” _IEEE Systems, Man, and Cybernetics Magazine_ , vol. 6, no. 3, pp. 12–19, 2020.
* [55] J. d. R. Millán, “Brain-machine interfaces: the perception-action closed loop: a two-learner system,” _IEEE Systems, Man, and Cybernetics Magazine_ , vol. 1, no. 1, pp. 6–8, 2015.
* [56] J. R. Millan, “On the need for on-line learning in brain-computer interfaces,” in _2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541)_ , vol. 4. IEEE, 2004, pp. 2877–2882.
* [57] C. Vidaurre, C. Sannelli, K.-R. Müller, and B. Blankertz, “Co-adaptive calibration to improve bci efficiency,” _Journal of neural engineering_ , vol. 8, no. 2, p. 025009, 2011.
* [58] Y. Li, C. Guan, H. Li, and Z. Chin, “A self-training semi-supervised svm algorithm and its application in an eeg-based brain computer interface speller system,” _Pattern Recognition Letters_ , vol. 29, no. 9, pp. 1285–1294, 2008.
* [59] A. Llera, V. Gómez, and H. J. Kappen, “Adaptive classification on brain-computer interfaces using reinforcement signals,” _Neural Computation_ , vol. 24, no. 11, pp. 2900–2923, 2012.
* [60] P. Shenoy, M. Krauledat, B. Blankertz, R. P. Rao, and K.-R. Müller, “Towards adaptive classification for bci,” _Journal of neural engineering_ , vol. 3, no. 1, p. R13, 2006.
* [61] A. Abu-Rmileh, E. Zakkay, L. Shmuelof, and O. Shriki, “Co-adaptive training improves efficacy of a multi-day eeg-based motor imagery bci training,” _Frontiers in Human Neuroscience_ , p. 362, 2019.
* [62] J. Merel, D. M. Pianto, J. P. Cunningham, and L. Paninski, “Encoder-decoder optimization for brain-computer interfaces,” _PLoS computational biology_ , vol. 11, no. 6, p. e1004288, 2015.
* [63] J. S. Müller, C. Vidaurre, M. Schreuder, F. C. Meinecke, P. Von Bünau, and K.-R. Müller, “A mathematical model for the two-learners problem,” _Journal of neural engineering_ , vol. 14, no. 3, p. 036005, 2017.
* [64] C. A. Mugruza-Vassallo, J. Granados, V. Flores-Benites, and L. Córdoba-Berríos, “Different markov chains modulate visual stimuli processing in a go-go experiment in 2d, 3d and augmented reality,” 2022.
* [65] A. Ghaderi-Kangavari, J. A. Rad, K. Parand, and M. D. Nunez, “Neuro-cognitive models of single-trial eeg measures describe latent effects of spatial attention during perceptual decision making,” _Journal of Mathematical Psychology_ , vol. 111, p. 102725, 2022.
* [66] J. R. Busemeyer, P. D. Kvam, and T. J. Pleskac, “Comparison of markov versus quantum dynamical models of human decision making,” _Wiley Interdisciplinary Reviews: Cognitive Science_ , vol. 11, no. 4, p. e1526, 2020.
* [67] E. A. Feinberg and A. Shwartz, _Handbook of Markov decision processes: methods and applications_. Springer Science & Business Media, 2012, vol. 40.
* [68] L. Jiang, D. Meng, Q. Zhao, S. Shan, and A. G. Hauptmann, “Self-paced curriculum learning,” in _Twenty-Ninth AAAI Conference on Artificial Intelligence_ , 2015.
* [69] M. Kumar, B. Packer, and D. Koller, “Self-paced learning for latent variable models,” _Advances in neural information processing systems_ , vol. 23, 2010.
* [70] M. Lapin, M. Hein, and B. Schiele, “Learning using privileged information: Svm+ and weighted svm,” _Neural Networks_ , vol. 53, pp. 95–108, 2014.
* [71] L. Jiang, D. Meng, S.-I. Yu, Z. Lan, S. Shan, and A. Hauptmann, “Self-paced learning with diversity,” _Advances in neural information processing systems_ , vol. 27, 2014.
* [72] M. Kleiner, D. Brainard, and D. Pelli, “What’s new in psychtoolbox-3?” 2007.
* [73] C. Neuper, R. Scherer, S. Wriessnegger, and G. Pfurtscheller, “Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain–computer interface,” _Clinical neurophysiology_ , vol. 120, no. 2, pp. 239–247, 2009.
* [74] D. Lyu, S. Naik, D. K. Menon, and E. A. Stamatakis, “Intrinsic brain dynamics in the default mode network predict involuntary fluctuations of visual awareness,” _Nature Communications_ , vol. 13, no. 1, p. 6923, 2022.
* [75] X. Gu, Z. Cao, A. Jolfaei, P. Xu, D. Wu, T.-P. Jung, and C.-T. Lin, “Eeg-based brain-computer interfaces (bcis): A survey of recent studies on signal sensing technologies and computational intelligence approaches and their applications,” _IEEE/ACM transactions on computational biology and bioinformatics_ , vol. 18, no. 5, pp. 1645–1666, 2021.
* [76] L.-D. Liao, C.-T. Lin, K. McDowell, A. E. Wickenden, K. Gramann, T.-P. Jung, L.-W. Ko, and J.-Y. Chang, “Biosensor technologies for augmented brain–computer interfaces in the next decades,” _Proceedings of the IEEE_ , vol. 100, no. Special Centennial Issue, pp. 1553–1566, 2012.
* [77] Y. M. Chi and G. Cauwenberghs, “Wireless non-contact eeg/ecg electrodes for body sensor networks,” in _2010 International Conference on Body Sensor Networks_. IEEE, 2010, pp. 297–301.
|
Stress correlations in near-crystalline packings
Roshan Maharana$\star$ and Kabir Ramola$\dagger$
Tata Institute of Fundamental Research, Hyderabad 500107, India
⋆<EMAIL_ADDRESS>†<EMAIL_ADDRESS>
## Abstract
We derive exact results for stress correlations in near-crystalline systems in
two and three dimensions. We study energy minimized configurations of
particles interacting through Harmonic as well as Lennard-Jones potentials,
for varying degrees of microscopic disorder and quenched forces on grains. Our
findings demonstrate that the macroscopic elastic properties of such
near-crystalline packings remain unchanged below a certain disorder threshold, yet
they can be influenced by various factors, including packing density,
pressure, and the strength of inter-particle interactions. We show that the
stress correlations in such systems display anisotropic behavior at large
lengthscales and are significantly influenced by the pre-stress of the system.
The anisotropic nature of these correlations remains unaffected as we increase
the strength of the disorder. Additionally, we derive the large lengthscale
behavior for the change in the local stress components that shows a $1/r^{d}$
radial decay for the case of particle size disorder and a $1/r^{d-1}$ behavior
for quenched forces introduced into a crystalline network. Finally, we verify
our theoretical results numerically using energy-minimized static particle
configurations.
###### Contents
1. 1 Introduction
2. 2 Models
1. 2.1 Short-ranged repulsive Harmonic interaction
2. 2.2 Attractive Lennard-Jones interaction with cut-off
3. 3 Numerical Simulations
1. 3.1 Harmonic Model
2. 3.2 LJ Model
4. 4 Elastic properties of near-crystalline packings
5. 5 Displacement fields induced by microscopic disorder
1. 5.1 Short-ranged repulsive harmonic interaction
2. 5.2 Attractive Lennard-Jones interaction with cut-off
6. 6 Stress correlations induced by microscopic disorder
1. 6.1 Local stress fluctuations
2. 6.2 Stress correlations in two dimensional systems
3. 6.3 Comparison with amorphous systems at large lengthscales
4. 6.4 Continuum limit
5. 6.5 Stress correlations in three dimensions
7. 7 Response to a point force
8. 8 Distribution of Stresses
9. 9 Discussion and Conclusion
10. A Green’s functions in the Harmonic model
1. A.1 Green’s functions for displacement fields
2. A.2 Green’s functions for change in local stresses in Fourier space
11. B Angular variation of stress correlations at large lengthscales
12. C Hammer projection
## 1 Introduction
Jammed athermal materials find relevance in various fields such as soft
condensed matter physics, material science, civil engineering and metallurgy
[1, 2]. Additionally, jammed packings also arise in fields such as biophysics,
where cellular tissues are well described by soft potential models [3, 4].
Such jammed states arise when it is not feasible to achieve true thermodynamic equilibrium.
The stability of athermal solids against mechanical disturbances can be
attributed to the macroscopic rigidity arising from the network of constituent
particles [5, 6, 7, 8, 9]. This collective elasticity arises in any system of
interacting particles at low temperatures and is observed universally in both
crystalline and amorphous structures of athermal solids [10, 11, 12, 13, 14,
15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. The reference states that make up the
collection of amorphous solids are highly dependent on the preparation method,
and each configuration satisfies the conditions of local equilibrium i.e., the
force and torque balance of each constituent. Amorphous structures, while
stable in a local sense, are typically not the lowest energy states of their
individual components [25, 26, 27]. Consequently, jammed packings of soft
particles can exhibit both amorphous as well as crystalline structures.
Near-crystalline materials demonstrate a range of unique properties and serve
as a bridge between the physics of crystals and amorphous materials, providing
valuable insights into the behavior of athermal ensembles [28, 29, 30, 31, 32,
33, 34]. The large-scale elasticity properties exhibited in both amorphous and
crystalline solids link these two typically distinct branches of condensed
matter physics [35, 30, 36]. Recent studies on near-crystalline materials have
revealed various characteristics similar to those found in fully amorphous
materials, including the presence of quasi-localized modes [37, 38, 39]. Such
near-crystalline materials therefore help in establishing a connection between
the well studied physics of crystals and that of amorphous solids by
introducing disorder gradually into athermal crystalline packings. While
crystals have ordered structures, amorphous solids are characterized by random
and inflexible structures that arise from the competing interactions between
constituent particles. Despite their distinct
local structures [40], crystalline and amorphous packings exhibit many similar
elastic properties [41, 18]. Given the long-range displacement correlations in
these systems [42, 43], one may reasonably question whether the microscopic
structure affects the large-scale elasticity properties [44]. Therefore a
crucial question to address is how global rigidity is manifested in distinct
networks and whether this can be detected in the local stress tensor
fluctuations [45, 46]. Although some properties of athermal ensembles can be
described by temperature-like variables [47, 48], and despite several attempts
at a unifying framework, there is still a lack of understanding of these
properties in non-isotropic materials and near-crystalline systems. It is
therefore of interest to generate ensembles that can be precisely
characterized theoretically. In this paper, we develop a microscopic theory
for stress correlations in near-crystalline systems arising from various types
of disorders such as particle size disorder or due to quenched forces.
Stress correlations are a key ingredient in understanding the physics of
disordered systems, and there have been several recent studies that establish
their importance in amorphous materials [49, 50]. Such correlations provide
valuable insights into the collective behavior of interacting particles and
are widely used in fields such as material science, fluid dynamics, and
geophysics [51, 19, 52, 53]. Understanding stress correlations can help us
predict the strength and stability of materials under various external
conditions [54, 55] as well as give insights into the rheology of particulate
packings, such as their ability to flow or resist deformation. Stress
correlations provide a deeper insight into the degree of rigidity or
floppiness within particle packings and how they react to external influences
like shear or compression [17, 41]. Recently, stress correlations have also
been studied in various types of systems such as glasses, granular packings,
and gels amongst others, and has become a question that has attracted
considerable interest [17, 18, 19, 20, 21, 22, 23]. There have been several
theoretical studies that use material isotropy and homogeneity in amorphous
materials to derive the large length scale anisotropic behavior of the stress
correlations [56, 57, 58]. Several numerical studies have also explored stress
correlations in computer-simulated disordered packings [22, 23].
The main results of this paper can be summarized as follows. We derive the
displacement fields due to the introduction of particle size disorder or due
to external quenched forces in a crystalline system through a microscopic
disorder perturbation expansion. Using the linear order displacement and force
fields, we derive the components of the change in the local stress tensor on
each grain. At large lengthscales, the local stresses show anisotropic
$1/r^{d}$ radial behavior for particle size disorder and $1/r^{d-1}$ behavior
for external force quenching. We analyze the local pressure fluctuation which
shows similar radial behavior yet isotropic at large lengthscales. We also
measure the global bulk and shear modulus for such a near-crystalline system
which show excellent match with simulations for finite small disorder. We then
derive the configurational averaged correlations of the local stress
fluctuations which are verified through numerical simulations in two different
models in both two and three dimensions. We show that the stress correlations
in disordered crystals show different behavior to that of an isotropic
amorphous material at a high packing fraction or high pressure limit.
The outline of the paper is as follows. In Section 2 we introduce the
microscopic models, while in Section 3, we present the corresponding
preparation protocols. In Section 4 we employ the microscopic approach to
derive macroscopic properties, specifically the bulk and shear moduli of near-
crystalline systems. In Section 5 we present a detailed derivation of the
displacement fields due to the introduction of microscopic disorder in the
crystalline packing. In Section 6 we derive the change in the local stress
tensor components and their correlations through this microscopic approach and
compare these predictions against direct numerical simulations. Additionally,
we draw parallels between these results and those obtained in the recently
developed VCTG framework for amorphous systems. In Section 6.5, we extend our
theory to the three-dimensional fcc arrangement of particles. In Section 7, we
derive the displacement and force fields due to point forces. In Section 8, we
derive the distributions of change in local stresses from the linear relations
between the stress and microscopic disorder in a near-crystalline system.
Finally, we conclude and provide directions for future investigations in
Section 9.
Figure 1: Schematic of particle arrangement in a hexagonal close packing for
two different models considered in this paper. (a) Soft repulsive interactions
with a Harmonic potential, where the neighboring particles overlap with the
central particle. (b) Lennard-Jones (LJ) interaction with a cutoff. Here, the
outermost circle corresponds to the interaction range with respect to the
central particle (red).
## 2 Models
We study two well-known canonical glass-forming model potentials that can be
used to create amorphous as well as near-crystalline structures: short-ranged
Harmonic interactions, and an attractive Lennard-Jones interaction with a
cutoff.
### 2.1 Short-ranged repulsive Harmonic interaction
For the case of the short-ranged Harmonic model, we examine systems consisting
of frictionless soft disks in two dimensions and spheres in three dimensions
with different levels of overcompression. These particles interact with one
another through a one-sided pairwise potential [59, 14, 31] which takes the
following form:
$\displaystyle V_{a_{ij}}\left(\vec{r}_{ij}\right)$
$\displaystyle=\frac{K}{\alpha}\left(1-\frac{\left|\vec{r}_{ij}\right|}{a_{ij}}\right)^{\alpha}\Theta\left(1-\frac{\left|\vec{r}_{ij}\right|}{a_{ij}}\right),$
(1)
where, $\vec{r}_{ij}$ represents the displacement vector between particles $i$
and $j$, situated at positions $\vec{r}_{i}$ and $\vec{r}_{j}$, respectively.
Here $a_{ij}$ are called the quenched interaction lengths that are defined as
the sum of individual radii, denoted as $a_{ij}=a_{i}+a_{j}$. In this study,
we select $\alpha=2$ to establish a harmonic pairwise potential between the
particles. The length parameters $a_{ij}$ are then set as follows [60, 42, 43,
32, 61, 62]
$a_{ij}=2a_{0}+\eta a_{0}(\zeta_{i}+\zeta_{j}),$ (2)
where $a_{0}$ is the radius of each particle in the crystalline state. The
variables $\zeta_{i}$ are independent and identically distributed random
numbers drawn from a uniform distribution ranging between $-1/2$ and $1/2$,
that are individually assigned to each particle within the system. The
parameter $\eta$ (polydispersity) controls the magnitude of the disorder.
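For illustration, a minimal sketch of Eqs. (1) and (2); the default values
$K=0.5$, $\alpha=2$, and $a_{0}=0.5$ anticipate the simulation parameters of
Section 3, and the function names are ours.

```python
import numpy as np

def harmonic_pair_energy(r, a_ij, k=0.5, alpha=2):
    # One-sided harmonic potential of Eq. (1): nonzero only for overlaps.
    x = np.abs(r) / a_ij
    return np.where(x < 1.0, (k / alpha) * (1.0 - x) ** alpha, 0.0)

def disordered_lengths(zeta_i, zeta_j, eta, a0=0.5):
    # Quenched interaction lengths of Eq. (2); zeta_i, zeta_j are i.i.d.
    # uniform on [-1/2, 1/2].
    return 2 * a0 + eta * a0 * (zeta_i + zeta_j)
```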
### 2.2 Attractive Lennard-Jones interaction with cut-off
We investigate particles interacting via long-ranged power-law potentials,
which are smoothened up to the second order at a specified cutoff interaction
length ($r_{ij}^{c}$), set at $2.5a_{ij}$ [29, 39]. This cut-off is set to
speed up the numerical simulations and for most purposes, a cut-off greater
than $1.5$ yields similar mechanical properties [63]. The smoothened LJ
potential for a cut-off interaction length $2.5a_{ij}$ can be represented as
$V_{a_{ij}}\left(\vec{r}_{ij}\right)=4K\left[\left(\frac{a_{ij}}{\left|\vec{r}_{ij}\right|}\right)^{12}-\left(\frac{a_{ij}}{\left|\vec{r}_{ij}\right|}\right)^{6}+\sum_{l=0}^{2}c_{2l}\left(\frac{\left|\vec{r}_{ij}\right|}{a_{ij}}\right)^{2l}\right]\Theta\left(2.5-\frac{\left|\vec{r}_{ij}\right|}{a_{ij}}\right).$
(3)
The disorder is introduced through the length parameters $a_{ij}$ and can be
represented as [29, 39],
$a_{ij}=\begin{cases}\lambda_{\mathrm{SS}}&\text{both }i\text{ and }j\text{ are unlabeled},\\ \eta\left(\lambda_{\mathrm{SL}}-\lambda_{\mathrm{SS}}\right)+\lambda_{\mathrm{SS}}&\text{either }i\text{ or }j\text{ is labeled},\\ \eta\left(\lambda_{\mathrm{LL}}-\lambda_{\mathrm{SS}}\right)+\lambda_{\mathrm{SS}}&\text{both }i\text{ and }j\text{ are labeled}.\end{cases}$ (4)
This model corresponds to a bidisperse system where $\eta$ controls the
strength of the disorder. For theoretical simplicity, instead of using length
parameter $a_{ij}$, we define an onsite parameter $t_{i}$. The variable
$t_{i}$ takes a value of either $1$ or $0$, depending on whether the particle
at position $\vec{r}_{i}$ is labeled or not. Using $t_{i}$ we can express
$a_{ij}$ as follows
$a_{ij}=\lambda_{SS}+\eta\left[(t_{i}+t_{j})\left(\lambda_{SL}-\lambda_{SS}\right)+t_{i}t_{j}\left(\lambda_{LL}+\lambda_{SS}-2\lambda_{SL}\right)\right].$ (5)
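The coefficients $c_{2l}$ in Eq. (3) are not listed explicitly; assuming, as is
standard for second-order smoothing, that they are fixed by requiring $V$,
$V^{\prime}$ and $V^{\prime\prime}$ to vanish at the cutoff
$x_{c}=\left|\vec{r}_{ij}\right|/a_{ij}=2.5$, they follow from a $3\times 3$
linear system, as in this sketch.

```python
import numpy as np

XC = 2.5  # cutoff in units of a_ij

def smoothing_coeffs(xc=XC):
    # Impose V = V' = V'' = 0 at x = xc for the reduced potential
    # f(x) + c0 + c2 x^2 + c4 x^4, with f(x) = x^-12 - x^-6.
    f = xc**-12 - xc**-6
    fp = -12 * xc**-13 + 6 * xc**-7
    fpp = 156 * xc**-14 - 42 * xc**-8
    A = np.array([[1.0, xc**2, xc**4],
                  [0.0, 2 * xc, 4 * xc**3],
                  [0.0, 2.0, 12 * xc**2]])
    return np.linalg.solve(A, -np.array([f, fp, fpp]))  # c0, c2, c4

def lj_pair_energy(r, a_ij, k=0.5, xc=XC):
    # Smoothed LJ potential of Eq. (3).
    c0, c2, c4 = smoothing_coeffs(xc)
    x = np.abs(r) / a_ij
    v = x**-12 - x**-6 + c0 + c2 * x**2 + c4 * x**4
    return np.where(x < xc, 4 * k * v, 0.0)
```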
## 3 Numerical Simulations
To test our theoretical predictions, we conduct simulations of an athermal,
over-compressed triangular lattice (hcp) in two dimensions (2D) and a face-
centered cubic (fcc) lattice in three dimensions (3D) with soft, frictionless
particles with varying levels of particle size disorder. We employ periodic
boundary conditions to account for boundary effects. Our focus is on states in
which every particle achieves force balance, i.e., configurations
corresponding to energy minima. To achieve this, we utilize the Fast Inertial
Relaxation Engine (FIRE) algorithm as described in Ref. [64] to minimize the
energy of the system.
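A minimal sketch of such a FIRE minimization loop is given below; it assumes a
user-supplied `forces(x)` routine, and the parameter values are commonly quoted
defaults rather than the exact settings of Ref. [64].

```python
import numpy as np

def fire_minimize(x, forces, dt=0.01, dt_max=0.1, n_min=5, f_inc=1.1,
                  f_dec=0.5, alpha0=0.1, f_alpha=0.99, f_tol=1e-10,
                  max_steps=1_000_000):
    """Relax positions x (N, d) until the maximum force component < f_tol."""
    v, alpha, n_pos = np.zeros_like(x), alpha0, 0
    for _ in range(max_steps):
        f = forces(x)
        if np.max(np.abs(f)) < f_tol:      # force balance reached (energy minimum)
            break
        if np.vdot(f, v) > 0:              # power P = F.v > 0: moving downhill
            n_pos += 1
            if n_pos > n_min:
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:                              # uphill: freeze and restart the dynamics
            n_pos, v, dt, alpha = 0, np.zeros_like(v), dt * f_dec, alpha0
        v += dt * f                        # semi-implicit Euler step
        f_norm, v_norm = np.linalg.norm(f), np.linalg.norm(v)
        if f_norm > 0:                     # FIRE velocity mixing toward the force
            v = (1 - alpha) * v + alpha * v_norm * f / f_norm
        x = x + dt * v
    return x
```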
Here the inter-particle separation is kept fixed at $R_{0}$ in the initial
crystalline state. We initially consider a rectangular (2D) grid with spacings
of $R_{0}/2$ and $\sqrt{3}R_{0}/2$ along the $x$ and $y$ directions,
respectively, and a cubic (3D) lattice with a grid spacing of $R_{0}$. To create a triangular/fcc
arrangement, particles are placed on alternate grid points that satisfy the
respective conditions, i.e., $n_{x}+n_{y}=2n$ for a triangular lattice and
$n_{x}+n_{y}+n_{z}=2n$ for an fcc lattice, where $n$ is an integer. This
technique is an extension of the one used in reference [65] for generating a
hexagonal close packing in two dimensions. A square/cubic lattice has
$4L^{2}/8L^{3}$ grid points in total (since there are $2L$ grid points along
each coordinate axis), of which only half are occupied by particles, yielding
the total number of particles, $N=2L^{2}$ in 2D and $4L^{3}$ in 3D.
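A sketch of this parity construction in two dimensions (the function name and
system size are our own choices) reads:

```python
import numpy as np

def triangular_lattice(L, R0=1.0):
    """Particles on alternate points (n_x + n_y even) of a 2L x 2L rectangular
    grid with spacings R0/2 and sqrt(3) R0/2, giving N = 2 L^2 sites."""
    nx, ny = np.meshgrid(np.arange(2 * L), np.arange(2 * L), indexing="ij")
    mask = (nx + ny) % 2 == 0
    return np.column_stack([nx[mask] * R0 / 2.0,
                            ny[mask] * np.sqrt(3) * R0 / 2.0])

pos = triangular_lattice(L=40)
assert pos.shape[0] == 2 * 40**2          # N = 2 L^2 particles
```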
### 3.1 Harmonic Model
We first consider the Harmonic model, where particles experience exclusively
repulsive interactions and each grain interacts solely with its closest
neighbors. In a two dimensional triangular lattice, this corresponds to six
neighboring grains, while in a three dimensional fcc lattice, it corresponds
to twelve neighboring grains. The degree of compression in the lattice is
indicated by the packing fraction, which is set to $\phi=0.92/0.96/0.98$ in 2D
and $\phi=0.80$ in 3D. This is in comparison to the marginally jammed
triangular/fcc lattice with a packing fraction of $\phi_{c}\approx 0.9069$ in
2D and $\phi_{c}\approx 0.74$ in 3D. The interparticle spacing ($R_{0}$) is
defined by the initial particle radius, which is set to $a_{0}=0.5$, and the
packing fraction, calculated using the formula
$R_{0}=2a_{0}(\phi_{c}/\phi)^{1/d}$, in the absence of any disorder. Here we
have chosen bond stiffness/interaction strength as $K=0.5$. The numerical
results presented in this study are averaged over $200$ different realizations
of disordered states. These simulations were performed for system sizes of
$N=6400$ and $10000$ particles in two dimensions, and $N=32000$ and $250000$
in three dimensions, with different strengths of particle size disorder
($\eta$).
### 3.2 LJ Model
For the case of Lennard-Jones (LJ) interactions, every particle interacts with
its neighboring particles located within a cutoff radius defined as
$\left|\vec{r}_{ij}\right|/a_{ij}\leq 2.5$. In the absence of perturbation
($\eta=0$), this condition implies that there are a total of 18 neighboring
particles within the interaction range corresponding to each grain in the
triangular lattice. Here we have chosen $K=0.5$, with
$\lambda_{SS}=1,\lambda_{LL}=1.4\lambda_{SS}$ and
$\lambda_{SL}=1.2\lambda_{SS}$ for the interaction potential. Similar to the
Harmonic model, we have performed simulations for systems of size $N=6400$ in
$2D$. Since the volume associated with each particle is not defined, we use a
number density ($\rho_{N}$) as our initial parameter instead of a packing
fraction. Each number density $\rho_{N}$ corresponds to a different value of
the initial pressure ($P$) in the system. Here the results are presented for
$P=0/0.27/4.24$. To achieve the initial pressure we use a Berendsen barostat
[66], implemented within the FIRE algorithm during the energy minimization.
## 4 Elastic properties of near-crystalline packings
In this section, we derive the results that can be used to compute the
relevant macroscopic elastic properties in a near-crystalline granular packing
composed of soft particles. The fundamental property we are interested in is
the global pressure, denoted by the symbol $P$ which can be written as,
$P=d^{-1}\sum_{\mu}\Sigma_{\mu\mu}=(dV)^{-1}\sum_{\mu,\langle
ij\rangle}r^{\mu}_{ij}f_{ij}^{\mu},$ (6)
where $\Sigma_{\mu\mu}$ are the diagonal components of the global stress
tensor, and $d$ and $V$ represent the spatial dimension and total volume of
the system, respectively. Here $r^{\mu}_{ij}$ and $f_{ij}^{\mu}$ are the
$\mu$-th components of the relative displacement and force between particles
$i$ and $j$. The configurationally averaged total pressure can be defined as
$\displaystyle\left\langle P\right\rangle=d^{-1}\sum_{\mu}\langle\Sigma_{\mu\mu}\rangle=$ $\displaystyle d^{-1}\sum_{\mu}\left(\langle\Sigma_{\mu\mu}^{(0)}\rangle+\langle\delta\Sigma_{\mu\mu}\rangle\right),$
(7)
where, $\Sigma_{\alpha\beta}^{(0)}=V^{-1}\sum_{\langle
ij\rangle}r^{\alpha(0)}_{ij}f_{ij}^{\beta(0)}$ are the components of the
global stress tensor of the crystalline system without the disorder. For a
small magnitude of the disorder strength $\eta$, the average change in the
global pressure is zero (i.e., $\langle\delta\Sigma_{\mu\mu}\rangle=0$),
which we will show in the following section. For any regular arrangement of
particles in $d$-dimension, we can write the relative distance between the
neighboring particles and their corresponding forces as
$\vec{r}_{ij}^{(0)}=R_{0}\hat{r}_{ij}^{0},\hskip
14.22636pt\vec{f}_{ij}^{(0)}=\frac{K}{a_{0}}\left(1-\frac{R_{0}}{2a_{0}}\right)\hat{r}_{ij}^{0},$
(8)
where $R_{0}$ is the relative distance between two neighboring particles in
the crystalline arrangement. Inserting these values, one can obtain the final
form of the averaged net pressure as,
$\displaystyle\left\langle P\right\rangle=$
$\displaystyle\frac{z_{0}NKR_{0}}{2dVa_{0}}\left(1-\frac{R_{0}}{2a_{0}}\right)=\frac{z_{0}K\rho_{N}^{0}}{d}\left(\frac{\phi_{0}}{\phi}\right)^{-1+1/d}\left(1-\left(\frac{\phi_{0}}{\phi}\right)^{1/d}\right).$
(9)
Here $z_{0}$ is the coordination number of each grain, $\rho_{N}^{0}$ is the
number density in the marginally jammed crystal, and $\phi_{0}$ ($=\phi_{c}$)
denotes the marginally jammed packing fraction. Since the particle radii are
fixed, the number density of an over-compressed crystal is
$\rho_{N}=N/V=\rho_{N}^{0}(\phi/\phi_{0})$. In the two dimensional
triangular lattice, $z_{0}=6$ and $\rho_{N}^{0}=1/2\sqrt{3}a_{0}^{2}$. The
system is subjected to an isotropic strain, where the box dimensions along all
the Cartesian directions increase by a factor of $(1+\epsilon)$. Consequently,
the packing fraction $\phi$ changes to $\phi/(1+\epsilon)^{d}$. The pressure
$P$ of this isotropically strained system can be expressed as
$\displaystyle\left\langle P^{\prime}\right\rangle=$
$\displaystyle\frac{z_{0}NKR_{0}}{2dVa_{0}}\left(1-\frac{R_{0}}{2a_{0}}\right)=\frac{z_{0}K\rho_{N}^{0}}{d(1+\epsilon)^{d-1}}\left(\frac{\phi_{0}}{\phi}\right)^{-1+1/d}\left(1-(1+\epsilon)\left(\frac{\phi_{0}}{\phi}\right)^{1/d}\right).$
(10)
Figure 2: $(a)$ Variation of configurational averaged bulk ($\langle
B\rangle$) and shear modulus ($\langle G\rangle$) with polydispersity for a
near-crystalline packing of soft particles in a system of size $N=256$ with
packing fraction, $\phi=\phi_{0}+0.02$ in two dimensions. From the above plot,
we can notice that the elastic properties of these disordered crystals are
similar to that of a perfect crystal for a disorder strength, $\eta\leq 0.03$
which also changes with the initial packing fraction. $(b)$ Variation of
$\langle B\rangle$ and $\langle G\rangle$ with initial packing fraction for a
fixed polydispersity, $\eta=0.005$.
Here, $\epsilon\sim\delta V/2V$ is proportional to the volumetric strain
applied to the system. The average bulk modulus ($\langle B\rangle$) for these
near-crystalline systems can be obtained by finding the ratio of change in
bulk pressure to the volumetric strain,
$\langle B\rangle=\left|\frac{\delta\left\langle P\right\rangle}{\delta V/V}\right|=\lim_{\epsilon\to 0}\left|\frac{\left\langle P^{\prime}\right\rangle-\left\langle P\right\rangle}{2\epsilon}\right|=\frac{z_{0}K\rho_{N}^{0}}{2d}\left(\frac{\phi_{0}}{\phi}\right)^{-1+1/d}\left((d-2)\left(\frac{\phi_{0}}{\phi}\right)^{1/d}-(d-1)\right).$
(11)
In two dimensions, the average bulk modulus for a small magnitude of disorder
in a crystalline packing can be written as $\langle
B\rangle=\frac{\sqrt{3}K}{4a_{0}^{2}}(\phi/\phi_{0})^{1/2}$, which we have
also verified numerically for $\phi=\phi_{0}+0.02$ as shown in Fig. 2. Next,
we consider a near-crystalline system that is sheared along
$\alpha\beta$-plane with a shear amplitude of $\gamma$. Only the affine part
of the displacements is considered for small shear amplitude, leading to the
following expressions for the relative displacement and force components
$\displaystyle r_{ij}^{\alpha}$
$\displaystyle=R_{0}(\cos{\theta_{ij}^{0}}+\gamma\sin{\theta_{ij}^{0}}),\hskip
14.22636ptr_{ij}^{\beta}=R_{0}\sin{\theta_{ij}^{0}},\hskip
14.22636ptR_{ij}=\left(\sum_{\alpha=1}^{d}(r_{ij}^{\alpha})^{2}\right)^{1/2}.$
(12)
where $\theta_{ij}$ is the angle between the projection of $\vec{r}_{ij}$ onto
the $\alpha\beta$-plane and the $\alpha$-axis. Using the above relations, we can
write the $\alpha\beta$-component of the stress tensor as
$\displaystyle\langle\Sigma_{\alpha\beta}\rangle$
$\displaystyle=\frac{1}{2V}\sum_{i}\left(\sum_{j}r^{\alpha}_{ij}f^{\beta}_{ij}\right)=K\rho_{N}^{0}\left(\frac{\phi_{0}}{\phi}\right)^{-1+1/d}\left(\sum_{j=1}^{z}\left(1-\frac{R_{ij}}{2a_{0}}\right)(\cos{\theta_{ij}^{0}}\sin{\theta_{ij}^{0}}+\gamma\sin^{2}{\theta_{ij}^{0}})\right).$
(13)
For a two dimensional triangular lattice, the above equation simplifies to
$\displaystyle\langle\Sigma_{xy}\rangle$
$\displaystyle=\frac{3NKR_{0}}{2Va_{0}}\left[-\frac{R_{0}}{2a_{0}}\gamma+\frac{1}{\sqrt{3}}\left(\frac{\gamma-1/\sqrt{3}}{\sqrt{1+(\gamma-1/\sqrt{3})^{2}}}+\frac{\gamma+1/\sqrt{3}}{\sqrt{1+(\gamma+1/\sqrt{3})^{2}}}\right)\right].$
(14)
Considering a small shear amplitude $\gamma$, we can linearize the shear
stress and take the ratio of the change in shear stress to the shear strain to
obtain the shear modulus
$G=\lim_{\gamma\to 0}\frac{\langle\Sigma_{xy}\rangle}{\gamma}\sim\frac{3NKR_{0}}{2Va_{0}}\left(\frac{R_{0}}{2a_{0}}-\frac{3}{4}\right)=\frac{\sqrt{3}K}{2a_{0}^{2}}\left(1-\frac{3}{4}\left(\frac{\phi}{\phi_{0}}\right)^{1/2}\right).$
(15)
Both the expressions for bulk and shear modulus for various packing fraction
and particle size disorder are validated through numerical simulations in
near-crystalline packings of soft particles as shown in Fig. 2. Any local
fluctuations of pressure and shear stresses giving rise to local fluctuations
in bulk and shear modulus are discussed in the later section. Given the planar
bulk and shear moduli, one can obtain the expressions for planar Young’s
modulus and Poisson’s ratio as
$\displaystyle E$
$\displaystyle=\frac{4BG}{B+G}=\frac{\sqrt{3}K}{a_{0}^{2}}\left(\frac{\phi}{\phi_{0}}\right)^{1/2}\left(\frac{4-3\left(\phi/\phi_{0}\right)^{1/2}}{4-\left(\phi/\phi_{0}\right)^{1/2}}\right),$
(16) $\displaystyle\nu$
$\displaystyle=\frac{B-G}{B+G}=\frac{5\left(\phi/\phi_{0}\right)^{1/2}-4}{4-\left(\phi/\phi_{0}\right)^{1/2}}.$
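As an illustration, the expressions above can be evaluated directly; the
snippet below (our own, with $K$ and $a_{0}$ as in Sec. 3.1) returns
$\langle B\rangle$, $G$, $E$, and $\nu$ for a given packing fraction.

```python
import numpy as np

K, a0, phi0 = 0.5, 0.5, 0.9069   # stiffness, radius, marginal packing fraction

def moduli_2d(phi):
    """Planar moduli of the 2D harmonic crystal from Eqs. (11), (15), (16)."""
    x = np.sqrt(phi / phi0)                             # (phi/phi0)^{1/2}
    B = np.sqrt(3) * K / (4 * a0**2) * x                # bulk modulus
    G = np.sqrt(3) * K / (2 * a0**2) * (1 - 0.75 * x)   # shear modulus
    E = 4 * B * G / (B + G)                             # planar Young's modulus
    nu = (B - G) / (B + G)                              # planar Poisson's ratio
    return B, G, E, nu

print(moduli_2d(phi0 + 0.02))
```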
Similar techniques can be used in a system of particles interacting via
long-ranged Lennard-Jones interactions in a near-crystalline packing, with the
average separation between nearest neighbors being $R_{0}$. The bulk and
shear modulus for such a system can be computed as
$\displaystyle B$
$\displaystyle=\frac{24\sqrt{3}K}{R_{0}^{8}}\left(a-\frac{4b}{R_{0}^{6}}\right),$
(17) $\displaystyle G$
$\displaystyle=\frac{4\sqrt{3}K}{R_{0}^{8}}\left(a-\frac{5b}{R_{0}^{6}}\right),$
where,
$a=\sum_{i=1}^{N}\frac{n_{i}}{m_{i}^{6}}\text{, and
}b=\sum_{i=1}^{N}\frac{n_{i}}{m_{i}^{12}}.$ (18)
Here $n_{i}$ is the number of particles on the $i^{\text{th}}$ spherical shell
and $m_{i}$ is the ratio of the distance of the $i^{\text{th}}$ shell from the
central particle to $R_{0}$, i.e., $m_{i}=R_{i}/R_{0}$. In the $N\to\infty$
limit, we arrive at $a=6.37588$ and $b=6.00981$.
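These lattice sums are straightforward to reproduce numerically; the sketch
below (our own) sums over a large patch of the triangular lattice with
nearest-neighbor distance $R_{0}=1$ and converges to the quoted values.

```python
import numpy as np

def lj_lattice_sums(half_width=200):
    """Triangular-lattice sums a = sum 1/m^6 and b = sum 1/m^12, m = R_i/R0."""
    n = np.arange(-half_width, half_width + 1)
    i, j = np.meshgrid(n, n, indexing="ij")
    x, y = i + 0.5 * j, (np.sqrt(3) / 2) * j    # lattice sites, R0 = 1
    m2 = x**2 + y**2
    m2 = m2[m2 > 0]                             # exclude the central particle
    return np.sum(m2**-3.0), np.sum(m2**-6.0)   # (a, b)

print(lj_lattice_sums())                        # ~ (6.37588, 6.00981)
```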
## 5 Displacement fields induced by microscopic disorder
In the previous section, we presented theoretical results related to the
average stress tensor components in a nearly crystalline arrangement of soft
particles, where we made the assumption that the average changes in local
stress are negligible. The local stress components are proportional to the
square of the interparticle distances of all neighboring particles within the
cut-off distance. Consequently, in order to formulate the expressions for
local stress, it is necessary to derive the displacements of individual
particles in a disordered configuration. Below, we derive the displacement and
force fields resulting from the introduction of disorder into a crystalline
network.
In both the Harmonic and LJ models, we begin with a crystalline packing of
monodisperse particles in a fixed volume. In the short-ranged repulsive model
we start with a finite overcompression whereas in the attractive LJ
interaction model, the initial volume is fixed such that the initial pressure
is set to $P=0$. We then introduce disorder in the effective particle sizes
i.e., $a_{ij}$ as given in Eqs. (2), (5). As a response to this disorder, the
particles are displaced from their crystalline positions to maintain force
balance, as
$r^{\mu}_{i}=r^{\mu(0)}_{i}+\delta r^{\mu}_{i}.$ (19)
Given that particle $j$ is one of the neighboring particles of particle $i$ in
the initial crystalline lattice, the relative displacement between their
positions can be expressed using the basis lattice vectors of the crystalline
lattice as
$\vec{r}_{ij}^{(0)}=\vec{r}_{j}^{(0)}-\vec{r}_{i}^{(0)}=\vec{\Delta}_{j}$. The
discrete Fourier transform of the change in the relative displacement $\delta
r_{ij}^{\mu}$ can be expressed as:
$\displaystyle\mathcal{F}\left[\delta
r^{\mu}_{ij}\right]=\sum_{i}e^{i\vec{r}_{i}^{(0)}.\vec{k}}\delta
r^{\mu}_{ij}=$
$\displaystyle\sum_{i}e^{i\vec{r}_{i}^{(0)}.\vec{k}}\left(\delta
r_{j}^{\mu}-\delta
r_{i}^{\mu}\right)=\left[e^{-i\vec{\Delta}_{j}.\vec{k}}-1\right]\delta\tilde{r}^{\mu}(\vec{k}),$
(20)
where, $\delta\tilde{r}^{\mu}(\vec{k})=\mathcal{F}\left[\delta
r^{\mu}(\vec{r})\right]$, corresponds to the discrete Fourier transform of
particle displacements from their crystalline positions. As a response to the
disorder as well as the displacements in the particle positions, the forces
$\vec{f}_{ij}$ acting between adjacent particles $i$ and $j$ also change. This
variation can be expressed as a perturbation relative to the forces between
these particles in the initial crystalline state, represented as
$f_{ij}^{\mu}=f_{ij}^{\mu(0)}+\delta f_{ij}^{\mu}.$ (21)
Every individual component of the excess force $\delta f_{ij}^{\mu}$ acting
between particles $i$ and $j$ can be Taylor expanded up to first order about
its value in the crystalline ground state, in terms of $\delta r^{\mu}_{ij}$
($=\delta r^{\mu}_{j}-\delta r^{\mu}_{i}$) and $\delta a_{ij}$ ($=\delta
a_{i}+\delta a_{j}$) as
$\delta f_{ij}^{\mu}=\sum_{\nu}C_{ij}^{\mu\nu}\delta r^{\nu}_{ij}+C_{ij}^{\mu
a}\delta a_{ij},$ (22)
where the first-order Taylor coefficients $C_{ij}^{\mu\nu},C_{ij}^{\mu a}$
depend only on the form of the potential between the interacting particles and
the initial crystalline arrangement. These coefficients can be represented as,
$\left.C_{ij}^{\mu\nu}=\left(\partial f_{ij}^{\mu}/\partial
r_{ij}^{\nu}\right)\right|_{\{\vec{r}_{ij}^{(0)},a_{ij}^{(0)}\}}$ and
$\left.C_{ij}^{\mu a}=\left(\partial f_{ij}^{\mu}/\partial
a_{ij}\right)\right|_{\{\vec{r}_{ij}^{(0)},a_{ij}^{(0)}\}}$. For energy-
minimized configurations, the force balance condition dictates that the net
force acting on each particle $i$ is zero. This means that for all interacting
neighbors $j$, the sum of the force deviations, denoted as
$\delta\vec{f}_{ij}$, is equal to zero, expressed as
$\sum_{j}\delta\vec{f}_{ij}=0$. By applying this condition in the linear order
expression for forces as given in Eq. (22), we obtain $d$-equations, where $d$
represents the dimension of the system, for each particle $i$. These equations
are given as follows:
$\displaystyle\sum_{j}\sum_{\nu}C_{ij}^{\mu\nu}\delta r^{\nu}_{ij}=$
$\displaystyle-\sum_{j}C_{ij}^{\mu a}\delta a_{ij},$ (23)
$\displaystyle\mathcal{P}_{1}\ket{\delta r}=$
$\displaystyle\mathcal{P}_{2}\ket{\delta a}.$
Here, we have $Nd$ such equations corresponding to $Nd$-variables
(displacement components). Since the system is translationally invariant,
performing a discrete Fourier transform on the equation can convert the
$Nd$-equations of $Nd$-variables into $d$-equations of $d$ variables. This
simplification reduces the complexity of the problem and diagonalizes the
large matrices $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$. So the Fourier
transform of the Eq. (23) leads to
$\displaystyle\sum_{\nu}A^{\mu\nu}(\vec{k})\delta\tilde{r}^{\nu}(\vec{k})=B^{\mu}(\vec{k}),$
(24)
where,
$\displaystyle A^{\mu\nu}(\vec{k})=$
$\displaystyle\sum_{j}\left(1-e^{-i\vec{k}.\vec{\Delta}_{j}}\right)C_{ij}^{\mu\nu},\hskip
14.22636ptB^{\mu}(\vec{k})=-\mathcal{F}\left[\sum_{j}C_{ij}^{\mu a}\delta
a_{ij}\right].$ (25)
In $d$-dimensions, $A^{\mu\nu}$ would be a $d\times d$ symmetric matrix. Here
the expression for $A^{\mu\nu}$ and $B^{\mu}$ have the same form for both
short and long-range models where the only difference lies in the number of
interacting neighbor particles. Now we can obtain the displacement fields in
Fourier space by inverting Eq. (24),
$\delta\tilde{r}^{\mu}(\vec{k})=\sum_{\nu}(A^{-1})^{\mu\nu}(\vec{k})B^{\nu}(\vec{k}).$
(26)
Since $\delta\tilde{r}$ is expressed as the product of $A^{-1}$ and $B$, its
inverse Fourier transform can be written as a convolution resulting in the
displacement fields in real space, as shown below:
$\delta
r^{\mu}(\vec{r})=\mathcal{F}^{-1}\left[\delta\tilde{r}^{\mu}(\vec{k})\right]=\frac{1}{N}\sum_{\vec{k}}\exp(-i\vec{k}.\vec{r})\delta\tilde{r}^{\mu}(\vec{k}).$
(27)
Here Eqs. (27) and (26) correspond to the displacement fields and their
Fourier transform in the presence of particle size disorder. The exact
expressions of these displacements are model dependent, and we discuss the two
different scenarios in detail below.
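Before specializing to the two models, we note that the construction above is
compact enough to implement directly. The sketch below (our own illustrative
code, not the authors' implementation) obtains the Taylor coefficients of Eq.
(22) by finite differences, assembles $A^{\mu\nu}(\vec{k})$ and
$B^{\mu}(\vec{k})$ from Eq. (25) for the 2D harmonic triangular lattice, and
solves Eqs. (24)–(27) on a periodic grid.

```python
import numpy as np

K, a0 = 0.5, 0.5
R0 = 2 * a0 * (0.9069 / 0.92) ** 0.5       # over-compressed spacing at phi = 0.92

def pair_force(rvec, aij):
    """Harmonic pair force f_ij = (K/aij)(1 - |r|/aij) r_hat for |r| < aij."""
    r = np.linalg.norm(rvec)
    return (K / aij) * (1 - r / aij) * rvec / r if r < aij else np.zeros(2)

# Six nearest neighbours in lattice coordinates (m1, m2) and Cartesian bonds.
e1, e2 = R0 * np.array([1.0, 0.0]), R0 * np.array([0.5, np.sqrt(3) / 2])
nbrs = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]
Delta = [m1 * e1 + m2 * e2 for m1, m2 in nbrs]

# Taylor coefficients of Eq. (22), C^{mu nu}_j and C^{mu a}_j, via central
# finite differences about the crystalline state (a_ij = 2 a0, r_ij = Delta_j).
eps, a_bond = 1e-7, 2 * a0
Cmunu, Cmua = [], []
for d in Delta:
    M = np.empty((2, 2))
    for nu in range(2):
        dr = np.eye(2)[nu] * eps
        M[:, nu] = (pair_force(d + dr, a_bond) - pair_force(d - dr, a_bond)) / (2 * eps)
    Cmunu.append(M)
    Cmua.append((pair_force(d, a_bond + eps) - pair_force(d, a_bond - eps)) / (2 * eps))

def displacement_field(delta_a):
    """Solve Eqs. (24)-(26) for delta_a[i] on an L x L periodic lattice and
    return the real-space displacements of Eq. (27), shape (L, L, 2)."""
    L = delta_a.shape[0]
    da_k = np.fft.fft2(delta_a)
    q = 2 * np.pi * np.fft.fftfreq(L)
    k1, k2 = np.meshgrid(q, q, indexing="ij")
    A = np.zeros((L, L, 2, 2), dtype=complex)
    B = np.zeros((L, L, 2), dtype=complex)
    for (m1, m2), Cm, Ca in zip(nbrs, Cmunu, Cmua):
        # numpy's FFT convention is e^{-ik.r}, opposite to Eq. (20), so the
        # bond phases carry the opposite sign here.
        phase = np.exp(1j * (m1 * k1 + m2 * k2))
        A += (1 - phase)[..., None, None] * Cm               # Eq. (25)
        B -= ((1 + phase) * da_k)[..., None] * Ca            # B = -F[sum_j C^{mu a} da_ij]
    A[0, 0], B[0, 0] = np.eye(2), 0.0                        # k = 0: uniform translation
    dr_k = np.linalg.solve(A, B[..., None])[..., 0]          # Eq. (26)
    return np.fft.ifft2(dr_k, axes=(0, 1)).real             # Eq. (27)

eta = 0.005
zeta = np.random.default_rng(1).uniform(-0.5, 0.5, (64, 64))
dr = displacement_field(eta * a0 * zeta)                     # delta a_i = eta a0 zeta_i
```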
### 5.1 Short-ranged repulsive harmonic interaction
The displacement fields for the Harmonic model have been studied extensively
[60, 32, 43, 42], where the disorder is introduced in the particle radius,
$\delta a_{ij}=\eta a_{0}(\zeta_{i}+\zeta_{j})$. Substituting this into Eq.
(25), we get the expression for $B^{\mu}$,
$\displaystyle B^{\mu}(\vec{k})=$
$\displaystyle-D^{\mu}(\vec{k})\delta\tilde{a}(\vec{k})=-\eta
a_{0}D^{\mu}(\vec{k})\tilde{\zeta}(\vec{k}),$ (28)
$\displaystyle\text{where,}\hskip 14.22636ptD^{\mu}(\vec{k})=$
$\displaystyle\sum_{j}\left(1+e^{-i\vec{k}.\vec{\Delta}_{j}}\right)C^{\mu
a}_{ij}.$
Here $\delta\tilde{a}(\vec{k})=\mathcal{F}\left[\delta a(\vec{r})\right]=\eta
a_{0}\tilde{\zeta}(\vec{k})$, is the Fourier transform of $\delta a_{i}$. We
have defined $|\vec{\Delta}_{j}|=R_{0}$ as the magnitude of the relative
distance between grains in an over-compressed crystalline system. Substituting
this expression for $B^{\nu}(\vec{k})$ into Eq. (26) for
$\delta\tilde{r}^{\mu}(\vec{k})$, we obtain
$\delta\tilde{r}^{\mu}(\vec{k})=\underbrace{\left[-\sum_{\nu}(A^{-1})^{\mu\nu}(\vec{k})D^{\nu}(\vec{k})\right]}_{\tilde{G}^{\mu}(\vec{k})}\delta\tilde{a}(\vec{k})=\tilde{G}^{\mu}(\vec{k})\delta\tilde{a}(\vec{k}).$
(29)
We can get the displacement fields by taking an inverse discrete Fourier
transform as given in Eq. (27) as,
$\displaystyle\delta r^{\mu}(\vec{r})=$
$\displaystyle\sum_{\vec{r}^{\prime}}G^{\mu}(\vec{r}-\vec{r}^{\prime})\delta
a(\vec{r}^{\prime}),$ (30) $\displaystyle\text{where,\hskip
14.22636pt}G^{\mu}(\vec{r})=$
$\displaystyle\mathcal{F}^{-1}\left[\tilde{G}^{\mu}(\vec{k})\right].$
### 5.2 Attractive Lennard-Jones interaction with cut-off
The above-mentioned formulation for displacement fields can be extended to any
interaction in which every grain interacts with six or more neighbors,
depending on the interaction cut-off. For example, this cut-off is
$|\vec{r}_{ij}|/a_{ij}\leq 2.5$ for the LJ model, where the microscopic
disorders are incorporated into the bond distances as
$\displaystyle\delta a_{ij}=$
$\displaystyle\eta\left[(t_{i}+t_{j})\left(\lambda_{SL}-\lambda_{SS}\right)+t_{i}t_{j}\left(\lambda_{LL}+\lambda_{SS}-2\lambda_{SL}\right)\right].$
(31)
Therefore, $B^{\mu}(\vec{k})$ has the form
$\displaystyle B^{\mu}(\vec{k})=$
$\displaystyle\eta\left[\left(\lambda_{SS}-\lambda_{SL}\right)\tilde{D}^{\mu}(\vec{k})\tilde{t}(\vec{k})-\frac{\left(\lambda_{LL}+\lambda_{SS}-2\lambda_{SL}\right)}{N}\sum_{\vec{k}^{\prime}}\left[\tilde{t}(\vec{k}^{\prime})\tilde{t}(\vec{k}-\vec{k}^{\prime})\tilde{D}^{\mu}(\vec{k}-\vec{k}^{\prime})\right]\right].$
(32)
In the above expression, $\tilde{D}^{\mu}(\vec{k})$ has the same form as in
Eq. (28), with $j$ running over all $18$ interacting neighbors within the
range $|\vec{r}_{ij}|/a_{ij}\leq 2.5$. The only difference between the
expression for $B^{\mu}$ in the LJ model and that in the Harmonic model is
that the magnitudes of the $\vec{\Delta}_{j}$ are not the same for all the
interacting neighbors. In this study, we have chosen,
$\lambda_{SL}=(\lambda_{SS}+\lambda_{LL})/2$ for numerical simulations, which
simplifies the problem by removing the nonlinear term in the above expression
for $B^{\mu}(\vec{k})$. In this approximation we can write, $\delta
a_{i}=\eta(\lambda_{SL}-\lambda_{SS})t_{i}$. Now the expressions for
$B^{\mu}(\vec{k})$ and $\delta\tilde{r}^{\mu}(\vec{k})$ in Eq. (26) can be
written as,
$\displaystyle B^{\mu}(\vec{k})=$
$\displaystyle-D^{\mu}(\vec{k})\delta\tilde{a}(\vec{k})=-\eta(\lambda_{SL}-\lambda_{SS})D^{\mu}(\vec{k})\tilde{t}(\vec{k}),$
(33) $\displaystyle\delta\tilde{r}^{\mu}(\vec{k})=$
$\displaystyle\underbrace{\left(-\sum_{\nu}(A^{-1})^{\mu\nu}(\vec{k})D^{\nu}(\vec{k})\right)}_{\tilde{G}^{\mu}(\vec{k})}\delta\tilde{a}(\vec{k}).$
where,
$\delta\tilde{a}(\vec{k})=\eta(\lambda_{SL}-\lambda_{SS})\tilde{t}(\vec{k})$.
## 6 Stress correlations induced by microscopic disorder
In this section, we focus on the fluctuations and correlations of the local
stress tensor components. Using the linear order displacement fields derived
in the earlier section, we can compute the stress correlations using a similar
perturbation expansion for the minimally polydisperse system. For any athermal
jammed system, the components of the global stress tensor can be represented
as
$\displaystyle\Sigma_{\alpha\beta}$ $\displaystyle=V^{-1}\sum_{\langle
ij\rangle}r_{ij}^{\alpha}f_{ij}^{\beta},$ (34)
where, $r_{ij}^{\alpha}=r_{ij}^{\alpha(0)}+\delta r_{ij}^{\alpha}$ and
$f_{ij}^{\beta}=f_{ij}^{\beta(0)}+\delta f_{ij}^{\beta}$. Here
$r_{ij}^{\mu(0)}$ refers to the $\mu$-th component of the interparticle
distance, whereas $f_{ij}^{\mu(0)}$ denotes the $\mu$-th component of the
force acting between particles $i$ and $j$ in the crystalline lattice without
any microscopic disorder. In a system with small particle size polydispersity
($\eta$), both the deviation of the particle positions ($|\delta\vec{r}|$)
from the crystalline positions and the change in inter-particle forces
($|\delta\vec{f}|$) are of order $\delta a\sim\eta$. For small values of
$\eta$, we can neglect higher-order terms in the expression for the global
stress, which leads to
$\displaystyle\Sigma_{\alpha\beta}\sim V^{-1}\sum_{\langle
ij\rangle}\left(r^{\alpha(0)}_{ij}f_{ij}^{\beta(0)}+r^{\alpha(0)}_{ij}\delta
f_{ij}^{\beta}+\delta r^{\alpha}_{ij}f_{ij}^{\beta(0)}\right).$ (35)
Therefore the incremental change in the global stress (i.e.,
$\delta\Sigma=\Sigma-\Sigma^{(0)}$) due to the introduction of disorder in the
crystalline system can be written as
$\displaystyle\delta\Sigma_{\alpha\beta}=V^{-1}\sum_{i}\underbrace{\left[\sum_{j}(r^{\alpha(0)}_{ij}\delta
f_{ij}^{\beta}+\delta
r^{\alpha}_{ij}f_{ij}^{\beta(0)})\right]}_{\delta\sigma_{\alpha\beta}(\vec{r}_{i})}.$
(36)
The above expression represents the net change in global stress as a linear
combination of $\delta\sigma(\vec{r}_{i}^{0})$, which we define as the change
in local stress at the lattice position $\vec{r}_{i}^{(0)}$, and can be
expressed as follows:
$\displaystyle\delta\sigma_{\alpha\beta}(\vec{r}_{i}^{0})=$
$\displaystyle\sum_{j}(r^{\alpha(0)}_{ij}\delta f_{ij}^{\beta}+\delta
r^{\alpha}_{ij}f_{ij}^{\beta(0)})=\sum_{j}\left[\Delta_{j}^{\alpha}\sum_{\nu}C_{ij}^{\beta\nu}\delta
r_{ij}^{\nu}+\Delta_{j}^{\alpha}C_{ij}^{\beta a}\delta a_{ij}+\delta
r^{\alpha}_{ij}f_{ij}^{\beta(0)}\right].$ (37)
In the above expression, $\delta\sigma_{\alpha\beta}(\vec{r}_{i})$ is a linear
combination of the particle displacements and radius changes, with constant coefficients. As
we have demonstrated earlier, the Fourier transform of $\delta
r^{\alpha}_{ij}$ and $\delta a_{ij}$ have simple relationships due to the
translational invariance of the system, as shown in Eqs. (29) and (33).
Therefore, we can simplify the problem by performing a discrete Fourier
transform of Eq. (37) which leads to
$\displaystyle\delta\tilde{\sigma}_{\alpha\beta}(\vec{k})=\mathcal{F}\left[\delta\sigma_{\alpha\beta}(\vec{r}_{i}^{0})\right]$
$\displaystyle=\sum_{i}e^{i\vec{r}_{i}^{0}.\vec{k}}\delta\sigma_{\alpha\beta}(\vec{r}_{i}^{0})=\sum_{j}\left[\Delta_{j}^{\alpha}\sum_{\nu}C_{j}^{\beta\nu}\left[-1+F_{j}(\vec{k})\right]\delta\tilde{r}^{\nu}(\vec{k})\right.$
(38) $\displaystyle+\left.\Delta_{j}^{\alpha}C_{j}^{\beta
a}\left[1+F_{j}(\vec{k})\right]\delta\tilde{a}(\vec{k})+\left[-1+F_{j}(\vec{k})\right]\delta\tilde{r}^{\alpha}(\vec{k})f_{j}^{\beta(0)}\right].$
where, $F_{j}(\vec{k})=\exp(-i\vec{\Delta}_{j}.\vec{k})$. We can further
simplify the above expression by replacing $\delta\tilde{r}^{\nu}(\vec{k})$ by
$\tilde{G}^{\nu}(\vec{k})\delta\tilde{a}(\vec{k})$ as given in Eqs. (29) and
(33), to arrive at
$\displaystyle\delta\tilde{\sigma}_{\alpha\beta}(\vec{k})=$ $\displaystyle
S_{\alpha\beta}(\vec{k})\delta\tilde{a}(\vec{k})\hskip
11.38092pt\text{where,}$ (39) $\displaystyle S_{\alpha\beta}(\vec{k})=$
$\displaystyle\sum_{j}\left[\left[1+F_{j}(\vec{k})\right]C_{j}^{\beta
a}\Delta^{\alpha(0)}_{j}+\left[-1+F_{j}(\vec{k})\right]\left(\sum_{\nu}\Delta^{\alpha(0)}_{j}C_{j}^{\beta\nu}\tilde{G}^{\nu}(\vec{k})+f_{j}^{\beta(0)}\tilde{G}^{\alpha}(\vec{k})\right)\right].$
The sum over $j$ pertains to all neighboring particles, i.e., all the
particles that interact with the central particle in the crystalline state
without the disorder. Here, $S_{\alpha\beta}$ represents the Fourier transform
of the Green’s function for the change in local stress components. Next, we
can obtain the change in local stresses in real space as a convolution by
performing an inverse Fourier transform of Eq. (39). This yields
$\delta\sigma_{\alpha\beta}(\vec{r})=\mathcal{F}^{-1}\left[\delta\tilde{\sigma}_{\alpha\beta}(\vec{k})\right]=\sum_{\vec{r}^{\prime}}S_{\alpha\beta}(\vec{r}-\vec{r}^{\prime})\delta
a(\vec{r}^{\prime}).$ (40)
where $S_{\alpha\beta}(\vec{r})$ is the Green’s function for the change in local stresses.
Next, we can write the form for the Fourier transform of the change in the
local pressure as
$\delta\tilde{P}(\vec{k})=d^{-1}\sum_{\alpha=1}^{d}\delta\tilde{\sigma}_{\alpha\alpha}(\vec{k})=d^{-1}\delta\tilde{a}(\vec{k})\sum_{\alpha}S_{\alpha\alpha}(\vec{k}).$
(41)
Since the Fourier transform of the change in the local stresses is linearly
proportional to $\delta\tilde{a}(\vec{k})$, we can also derive the
configurational average of the local stress and pressure correlations as
$\displaystyle\langle\delta\tilde{\sigma}_{\alpha\beta}(\vec{k})\delta\tilde{\sigma}_{\mu\nu}$
$\displaystyle(\vec{k}^{\prime})\rangle=\left<\delta\tilde{a}(\vec{k}).\delta\tilde{a}(\vec{k}^{\prime})\right>S_{\alpha\beta}(\vec{k})S_{\mu\nu}(\vec{k}^{\prime}),$
(42) $\displaystyle\langle\delta\tilde{P}(\vec{k})\delta\tilde{P}$
$\displaystyle(\vec{k}^{\prime})\rangle=\frac{\left<\delta\tilde{a}(\vec{k}).\delta\tilde{a}(\vec{k}^{\prime})\right>}{d^{2}}\sum_{\alpha,\beta}S_{\alpha\alpha}(\vec{k})S_{\beta\beta}(\vec{k}^{\prime}).$
Using the translational invariance of the system, the configurational average
of the microscopic correlations of $\delta\tilde{a}(\vec{k})$ between two
points $\vec{k}$ and $\vec{k}^{\prime}$ in Fourier space can be written as,
$\displaystyle\left<\delta\tilde{a}(\vec{k}).\delta\tilde{a}(\vec{k}^{\prime})\right>=$
$\displaystyle\frac{N\eta^{2}}{48}\delta_{\vec{k},-\vec{k}^{\prime}},\hskip
42.67912pt\text{(Harmonic)},$ (43) $\displaystyle=$
$\displaystyle\frac{N\eta^{2}(\lambda_{SL}-\lambda_{SS})^{2}}{4}\delta_{\vec{k},-\vec{k}^{\prime}}\hskip
14.22636pt\text{(LJ)}.$
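The harmonic result in Eq. (43) follows from $\delta a_{i}=\eta a_{0}\zeta_{i}$
with $a_{0}=1/2$ and $\mathrm{Var}(\zeta_{i})=1/12$, and is easy to verify
numerically; the snippet below (our own) checks
$\langle|\delta\tilde{a}(\vec{k})|^{2}\rangle=N\eta^{2}/48$ on a
one-dimensional stand-in for the site field.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta, a0, trials = 4096, 0.02, 0.5, 4000
acc = 0.0
for _ in range(trials):
    da = eta * a0 * rng.uniform(-0.5, 0.5, N)   # delta a_i = eta a0 zeta_i
    acc += np.abs(np.fft.fft(da)[7]) ** 2       # any fixed k != 0
print(acc / trials, N * eta**2 / 48)            # agree to within sampling error
```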
In the $|\vec{k}|\to 0$ limit,
$\langle\delta\tilde{\sigma}_{\alpha\beta}(\vec{k})\delta\tilde{\sigma}_{\mu\nu}(-\vec{k})\rangle$
becomes independent of the magnitude of $|\vec{k}|$ and only has an angular
dependence which we represent as
$C_{\alpha\beta\mu\nu}(\theta)=\lim_{|\vec{k}|\to
0}\left<\delta\tilde{\sigma}_{\alpha\beta}(\vec{k}).\delta\tilde{\sigma}_{\mu\nu}(-\vec{k})\right>.$
(44)
The observed stress correlations are therefore anisotropic in the $k\to 0$
limit, corresponding to a pinch-point singularity at $k=0$ [41]. Due to the
finite system size, and to avoid effects introduced by the periodic
boundaries, we have integrated the stress correlations in Fourier space in the
narrow window of $0.5\leq|\vec{k}|\leq 1.5$. The integrated stress
correlations in a small window of $k\in[k_{min},k_{max}]$ near $k\to 0$ can be
expressed as
$\bar{C}_{\alpha\beta\mu\nu}(\theta)=\int_{k_{min}}^{k_{max}}dk\langle\delta\tilde{\sigma}_{\alpha\beta}(k,\theta)\delta\tilde{\sigma}_{\mu\nu}(k,\pi+\theta)\rangle.$
(45)
In real space, this translates to integrating the stress correlations at
intermediate to large lengthscales. The angular dependence of the integrated
correlations is plotted in Fig. 5 for both the Harmonic and LJ models.
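A sketch of this angular binning (our own; `dsig_k` is assumed to be the 2D
FFT of one component of the local stress change, e.g. built from the
displacement solver sketched in Sec. 5 via Eq. (37)) is:

```python
import numpy as np

def integrated_correlation(dsig_k, k_min=0.5, k_max=1.5, n_bins=72):
    """Angle-binned estimate of Eq. (45) for a single stress component.
    For a real-valued field, ds(k) ds(-k) = |ds(k)|^2."""
    L = dsig_k.shape[0]
    q = 2 * np.pi * np.fft.fftfreq(L)
    kx, ky = np.meshgrid(q, q, indexing="ij")
    kmag, theta = np.hypot(kx, ky), np.arctan2(ky, kx)
    window = (kmag >= k_min) & (kmag <= k_max)
    corr = np.abs(dsig_k[window]) ** 2
    bins = ((theta[window] + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    out = np.zeros(n_bins)
    np.add.at(out, bins, corr)
    counts = np.maximum(np.bincount(bins, minlength=n_bins), 1)
    return out / counts                         # angular average in the window
```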
### 6.1 Local stress fluctuations
We can express the correlation of the excess local stress between two points
$\vec{r}$ and $\vec{r}^{\prime}$ in real space using the expression for stress
correlation in $k$-space given in Eq. (42) as,
$\displaystyle\langle\delta\sigma_{\mu\nu}(\vec{r})$
$\displaystyle\delta\sigma_{\mu\nu}(\vec{r}^{\prime})\rangle=\sum_{\vec{k},\vec{k}^{\prime}}\frac{\langle\delta\tilde{\sigma}_{\mu\nu}(\vec{k})\delta\tilde{\sigma}_{\mu\nu}(\vec{k}^{\prime})\rangle}{N^{2}}e^{-i(\vec{k}.\vec{r}+\vec{k}^{\prime}.\vec{r}^{\prime})}=\frac{\eta^{2}}{48N}\sum_{\vec{k}}S_{\mu\nu}(\vec{k})S_{\mu\nu}(-\vec{k})e^{-i(\vec{r}-\vec{r}^{\prime}).\vec{k}}.$
(46)
Figure 3: Local stress fluctuation with increasing polydispersity for $(a)$ a
disordered triangular lattice (2D) with repulsive harmonic particles of system
size $N=256$ and initial packing fraction $\phi=0.92$, and $(b)$ a disordered
fcc lattice (3D) with $N=4000$ and $\phi=0.80$. Here all the numerical stress
fluctuations vary as $c_{\alpha\beta}\eta^{2}$, where
$c_{\alpha\beta}=\frac{1}{48N}\sum_{\vec{k}}S_{\alpha\beta}(\vec{k})S_{\alpha\beta}(-\vec{k})$.
Therefore the local stress fluctuations at the same site can be written as
$\langle\delta\sigma^{2}_{\mu\nu}(\vec{r})\rangle=\frac{\eta^{2}}{48N}\sum_{\vec{k}}S_{\mu\nu}(\vec{k})S_{\mu\nu}(-\vec{k}).$
(47)
The theoretical prediction for local stress fluctuations as a function of
increasing polydispersity is presented in Figure 3$(a)$ and $(b)$ for a two
dimensional triangular lattice and a three dimensional fcc lattice,
respectively. These theoretical stress fluctuations align perfectly with the
numerical results.
### 6.2 Stress correlations in two dimensional systems
In the case of the two dimensional Harmonic model, every grain has six
neighbors in the near-crystalline system. The expressions for the Green’s
functions $S_{\alpha\beta}$, as defined in Eq. (39), depend solely on the
nearest neighbor arrangement. Our analytic results demonstrate that these
Green’s functions in Fourier space have no radial dependence at small values
of $|\vec{k}|$, corresponding to larger lengthscales in real space. Keeping
only the first term in the Taylor expansion we can write these Green’s
functions as
$\displaystyle S_{xx}=\frac{\mathcal{C}(R_{0},K)}{|\vec{k}|^{2}}$
$\displaystyle\left[\left(\frac{R_{0}}{a_{0}}-1\right)k_{y}^{2}+\left(2-\frac{R_{0}}{a_{0}}\right)k_{x}^{2}\right]+\mathcal{O}(k^{2}),$
(48) $\displaystyle S_{yy}=\frac{\mathcal{C}(R_{0},K)}{|\vec{k}|^{2}}$
$\displaystyle\left[\left(\frac{R_{0}}{a_{0}}-1\right)k_{x}^{2}+\left(2-\frac{R_{0}}{a_{0}}\right)k_{y}^{2}\right]+\mathcal{O}(k^{2}),$
$\displaystyle S_{xy}=\frac{\mathcal{C}(R_{0},K)}{|\vec{k}|^{2}}$
$\displaystyle\left[\left(3-\frac{2R_{0}}{a_{0}}\right)k_{x}k_{y}\right]+\mathcal{O}(k^{2}),$
$\displaystyle\delta\tilde{P}=\frac{\mathcal{C}(R_{0},K)}{2}$
$\displaystyle\delta\tilde{a}(\vec{k})\left[1+f_{1}|\vec{k}|^{2}+f_{2}|\vec{k}|^{4}+\ldots\right],$
where,
$\mathcal{C}(R_{0},K)=\frac{6KR_{0}(R_{0}-a_{0})}{(2R_{0}-a_{0})a_{0}^{2}},\hskip
8.5359ptf_{1}=-\left(a_{0}/3\right)^{2}\cos^{2}{3\theta}\text{ and
}f_{2}=f_{1}^{2}.$ (49)
Here the functions $f_{1}$ and $f_{2}$ in the expression for $\delta\tilde{P}$
are influenced by the initial packing fraction and the orientation of the
reciprocal lattice vector $\vec{k}$. The above expressions for Green’s
functions can be reformulated in a polar coordinate system to describe
behavior at large lengthscales as follows.
$\displaystyle\lim_{|\vec{k}|\to
0}S_{xx}(|\vec{k}|,\theta)\sim\frac{\mathcal{C}(R_{0},K)}{2}\left(1-\left(\frac{2R_{0}}{a_{0}}-3\right)\cos(2\theta)\right),$
(50) $\displaystyle\lim_{|\vec{k}|\to
0}S_{yy}(|\vec{k}|,\theta)\sim\frac{\mathcal{C}(R_{0},K)}{2}\left(1+\left(\frac{2R_{0}}{a_{0}}-3\right)\cos(2\theta)\right),$
$\displaystyle\lim_{|\vec{k}|\to
0}S_{xy}(|\vec{k}|,\theta)\sim-\frac{\mathcal{C}(R_{0},K)}{2}\left(\frac{2R_{0}}{a_{0}}-3\right)\sin(2\theta).$
Figure 4: Change in local stress due to the introduction of disorder in a
single particle (i.e., the Green’s function for the change in the local
stress) in a 2D near-crystalline packing (HCP) of soft particles. Here panels
(a), (b), and (c) correspond to $\delta\sigma_{xx}(\vec{r})$,
$\delta\sigma_{xy}(\vec{r})$ and $\delta\sigma_{yy}(\vec{r})$ for $\delta
a(\vec{r}=0)=1$ and $\delta a(\vec{r}\neq 0)=0$ for every other grain.
All the Green’s functions above and their correlations show anisotropic
behavior with an angular periodicity of $\pi$. Performing an inverse Fourier
transform on the above expression yields the behavior of these Green’s
functions at large lengthscales in real space, which is presented below as
$\displaystyle S_{xx}(\vec{r})=-S_{yy}(\vec{r})$
$\displaystyle\sim-\frac{\mathcal{C}(R_{0},K)}{2}\left(\frac{2R_{0}}{a_{0}}-3\right)\frac{\cos(2\theta)}{|\vec{r}|^{2}},$
(51) $\displaystyle S_{xy}(\vec{r})=S_{yx}(\vec{r})$
$\displaystyle\sim-\frac{\mathcal{C}(R_{0},K)}{2}\left(\frac{2R_{0}}{a_{0}}-3\right)\frac{\sin(2\theta)}{|\vec{r}|^{2}}.$
Fig. 4 displays all the components of the Green’s functions at large
lengthscales, whose functional forms are given in Eq. (51). The long-ranged
two dimensional LJ model also exhibits similar behavior at large lengthscales.
The Green’s functions as well as the stress correlations in the LJ model have
the same angular behavior as those of the Harmonic model, as plotted in Figs.
4 and 5; the only difference lies in the magnitude of the stress fluctuations,
which depends on the initial macroscopic properties of these systems, such as
the global pressure and box size. In the two dimensional Harmonic model, all six unique
stress correlations as well as the pressure correlation in Fourier space at
small magnitudes of $\vec{k}$ can be represented using the expressions given
in Eqs. (42) and (48). For example, the correlation of change in local
pressure at large lengthscales ($k\to 0$) can be expressed as,
Figure 5: Angular dependence of stress correlations in Fourier space
integrated in a small window of $|\vec{k}|$, i.e.,
$\bar{C}_{\alpha\beta\mu\nu}(\theta)=\int_{k_{min}}^{k_{max}}dk\langle\delta\tilde{\sigma}_{\alpha\beta}(k,\theta)\delta\tilde{\sigma}_{\mu\nu}(k,\pi+\theta)\rangle$
for different initial over-compressions (Harmonic)/pressures (LJ) with a
disorder strength of $\eta=0.005$. The first row represents all six distinct
correlations for the Harmonic model and the second row corresponds to the LJ
model. Here the solid and dashed lines correspond to the theoretical results,
while points correspond to the numerical data for the two different models.
Here we have chosen $k_{min}=0.1$ and $k_{max}=1.0$ and system size $N=6400$
for both models.
$\displaystyle\mathcal{P}(R_{0},\eta)$ $\displaystyle=\lim_{k\to
0}\left<\delta\tilde{P}(\vec{k}).\delta\tilde{P}(-\vec{k})\right>\sim
N\langle\delta a^{2}\rangle\frac{\mathcal{C}^{2}(R_{0},K)}{4},$ (52)
which is a constant for a given packing of particles. The higher-order terms
in the Taylor expansion of pressure correlation in Fourier space reveal the
anisotropic crystalline nature of the inherent lattice. However, the
significance of higher-order terms becomes apparent at smaller length scales,
where lattice symmetry becomes a dominant factor, as illustrated in Fig. 6. In
Fig. 5 we show an exact match between the stress correlations obtained from
numerical simulations and the predictions from the microscopic theory for both
short-ranged Harmonic and long-ranged LJ model in two dimensions. The
aforementioned correlations are applicable to systems that possess a finite
average pressure ($R_{0}<2a_{0}$). However, in the limit where the average
pressure is zero ($\langle P\rangle\to 0$), i.e., as $R_{0}$ approaches
$2a_{0}$, the results obtained from the VCTG framework [41, 18] for amorphous
materials lacking any crystalline symmetries are reproduced, i.e.,
$\displaystyle\lim_{R_{0}\to 2a_{0}}\left(\lim_{k\to 0}S_{\alpha\beta}\right)$
$\displaystyle=\frac{\mathcal{C}(2a_{0},K)}{(-1)^{1-\delta_{\alpha\beta}}k^{2}}\prod_{i=\alpha,\beta}\left(\sqrt{k^{2}-k^{2}_{i}}\right),$
(53)
which can be used to write the correlations between the different components
of the stresses as,
$\displaystyle C_{\alpha\beta\mu\nu}=$
$\displaystyle\frac{N\eta^{2}}{48}\left[\lim_{R_{0}\to 2a_{0}}\left(\lim_{k\to
0}S_{\alpha\beta}(\vec{k})S_{\mu\nu}(-\vec{k})\right)\right]=\frac{4\mathcal{P}(2a_{0},\eta)}{(-1)^{2-\delta_{\alpha\beta}-\delta_{\mu\nu}}k^{4}}\prod_{i=\alpha,\beta,\mu,\nu}\left(\sqrt{k^{2}-k^{2}_{i}}\right).$
(54)
where, $\mathcal{C}(2a_{0},K)=4K/a_{0}$ and
$\mathcal{P}(2a_{0},\eta)=\eta^{2}K^{2}/12a_{0}^{2}$. In the same $R_{0}\to
2a_{0}$ limit, the functions in Eq. (48) for the change in local pressure
retain the form given in Eq. (49), i.e.,
$f_{1}=-\left(a_{0}/3\right)^{2}\cos^{2}{3\theta}$, $f_{2}=f_{1}^{2}$, and so
on. For a finite system with particle radius $a_{0}$,
the maximum magnitude of $k$ is $\pi/a_{0}$. So $\left|f_{1}k^{2}\right|\leq
1$ for all values of $k$, except near the boundary of the $1^{st}$ Brillouin
zone. So the change in the local pressure due to particle size defect for
$k\ll\pi/a_{0}$ can be written as,
$\displaystyle\lim_{R_{0}\to 2a_{0}}$
$\displaystyle\delta\tilde{P}(\vec{k})=\frac{\mathcal{C}(2a_{0},K)\delta\tilde{a}(\vec{k})}{2}(1-f_{1}k^{2}+f_{1}^{2}k^{4}+...)$
(55)
$\displaystyle\sim\frac{\mathcal{C}(2a_{0},K)\delta\tilde{a}(\vec{k})}{2(1+f_{1}k^{2})}=\frac{\mathcal{C}(2a_{0},K)\delta\tilde{a}(\vec{k})}{2\left(1+\left(a_{0}/3\right)^{2}k^{2}\cos^{2}{3\theta}\right)}.$
Using the above approximation, we can calculate the correlation of the local
pressure, which has the sixfold symmetry (i.e., $\cos^{2}{3\theta}$) for
$|\vec{k}|>0$, as shown in Fig. 6 for a single particle disorder.
In the zero average pressure limit, the findings for the stress correlations
exhibit unexpected universal behavior. This universal behavior is
characterized by the observation of similar anisotropic stress correlations at
large length scales across various jammed athermal packings [41, 18, 17, 56,
57].
Figure 6: Correlation in the change in local pressure in Fourier space
($\langle\delta\tilde{P}(\vec{k})\delta\tilde{P}(-\vec{k})\rangle/\langle\delta
a^{2}\rangle$) due to introduction of a single particle disorder in a
crystalline packing of soft particles with $\phi=0.92$.
### 6.3 Comparison with amorphous systems at large lengthscales
Recently the stress correlations in fully amorphous packings have been
successfully predicted within a field-theoretic framework [41, 18]. This
Vector Charge Theory of “emergent” elasticity is defined by the following
equations:
$\displaystyle\partial_{i}\Sigma_{ij}$ $\displaystyle=$ $\displaystyle f_{j},$
(56) $\displaystyle E_{ij}$ $\displaystyle=$
$\displaystyle\frac{1}{2}(\partial_{i}\psi_{j}+\partial_{j}\psi_{i}),$ (57)
$\displaystyle\sigma_{ij}$ $\displaystyle=$
$\displaystyle(\delta_{ijkl}+\chi_{ijkl})E_{kl}=\Lambda^{-1}_{ijkl}E_{kl}.$
(58)
The stress tensor field is represented by $\sigma$, and in this context, $E$
serves a role that is similar to that of the strain field in canonical
elasticity theory. The emergent elasticity modulus tensor is defined as
$\Lambda$, and the equations bear a notable resemblance to those of canonical
linear elasticity. The components of the $\Lambda$ tensor can be interpreted
as "emergent" elastic moduli. By utilizing the tensor gauge theory for
polarizable isotropic media, all six distinct stress correlations at small
magnitudes of $\vec{k}$ can be obtained; these have the same form as those in
Eq. (54) for near-crystalline systems in the $P\to 0$ limit, with
$\mathcal{P}(2a_{0},\eta)$ replaced by a constant $K_{2D}$ that depends on the
elastic properties of the system [41, 18]. So at finite pressure, the stress
correlations for near-crystalline systems can be represented as the sum of the
stress correlation of an isotropic amorphous system (from the VCTG framework)
and a non-isotropic part which contains the information about the crystalline
symmetry, i.e.
$C_{\alpha\beta\mu\nu}=C_{\alpha\beta\mu\nu}^{t}+d_{\alpha\beta\mu\nu},$ (59)
where $C_{\alpha\beta\mu\nu}^{t}$ represents the stress correlations obtained
from the VCTG framework. This finite shift $d_{\alpha\beta\mu\nu}$ can also be
seen in the numerically obtained correlations plotted in Fig. 5, and it cannot
be explained within the VCTG framework. Therefore, for an over-compressed
disordered crystal with finite average pressure, the angular behavior of the
stress correlations exhibits an additional anisotropic term which depends on
the system pre-stress. For small pre-stress, i.e., $R_{0}\to 2a_{0}$, we have
$|d_{\alpha\beta\mu\nu}/C_{\alpha\beta\mu\nu}|\to 0$, and consequently the
numerically obtained stress correlations are indistinguishable from those of
amorphous systems. The exact expressions for the angular dependence of the
above correlation functions are detailed in Appendix B.
### 6.4 Continuum limit
In the small $|\vec{k}|$ limit, using the expression for
$S_{\alpha\beta}(\vec{k})$ as given in Eq. (48) we can rewrite the change in
the local stresses in the Fourier space as,
$\displaystyle\delta\tilde{\sigma}_{\alpha\beta}=\left[k_{y}^{2}\phi_{\alpha\beta
1}+k_{x}^{2}\phi_{\alpha\beta 2}-k_{x}k_{y}\phi_{\alpha\beta
3}\right]|\vec{k}|^{-2}\delta\tilde{a}(\vec{k}).$ (60)
Using Voigt notation [67], we replace $xx\to 1$, $yy\to 2$, $xy\to 3$. Since
the stress tensor has three independent components in two dimensions, we can
represent the Fourier transform of their change in the following matrix form,
$\displaystyle\lim_{k\to 0}\begin{bmatrix}\delta\tilde{\sigma}_{1}(\vec{k})\\ \delta\tilde{\sigma}_{2}(\vec{k})\\ \delta\tilde{\sigma}_{3}(\vec{k})\end{bmatrix}=$
$\displaystyle\underbrace{\left(\frac{\delta\tilde{a}(\vec{k})}{|\vec{k}|^{2}}\right)}_{\tilde{\Psi}(\vec{k})}\underbrace{\begin{bmatrix}\phi_{11}&\phi_{12}&\phi_{13}\\ \phi_{21}&\phi_{22}&\phi_{23}\\ \phi_{31}&\phi_{32}&\phi_{33}\end{bmatrix}}_{\hat{\Phi}}.\underbrace{\begin{bmatrix}k_{y}^{2}\\ k_{x}^{2}\\ -k_{x}k_{y}\end{bmatrix}}_{\ket{A(\vec{k})}},$ (61)
$\displaystyle\ket{\delta\tilde{\sigma}(\vec{k})}=$
$\displaystyle\hat{\Phi}\left(\tilde{\Psi}(\vec{k})\ket{A(\vec{k})}\right),$
where,
$\displaystyle\phi_{11}$
$\displaystyle=\phi_{22}=\mathcal{C}(R_{0},K)\left(R_{0}/a_{0}-1\right),$ (62)
$\displaystyle\phi_{33}$
$\displaystyle=\mathcal{C}(R_{0},K)\left(2R_{0}/a_{0}-3\right),$
$\displaystyle\phi_{12}$
$\displaystyle=\phi_{21}=\mathcal{C}(R_{0},K)\left(2-R_{0}/a_{0}\right),$
$\displaystyle\phi_{13}$ $\displaystyle=\phi_{23}=\phi_{31}=\phi_{32}=0.$
Given that $\delta\tilde{\sigma}_{i}(\vec{k})$ characterizes the Fourier
transform of the change in local stresses as $|\vec{k}|\to 0$, its inverse
Fourier transform reveals the changes in local stresses at large lengthscales
due to the defect at the origin. Specifically, this pertains to the
coarse-grained local stress fluctuations in real space, which can be expressed as:
$\delta\sigma_{i}(\vec{r})=\left[\phi_{i1}\partial_{y}^{2}+\phi_{i2}\partial_{x}^{2}-\phi_{i3}\partial_{x}\partial_{y}\right]\Psi(\vec{r}),$
(63)
where,
$\Psi(\vec{r})=\frac{1}{(2\pi)^{2}}\int
d^{2}ke^{i\vec{r}.\vec{k}}\left(\frac{\delta\tilde{a}(\vec{k})}{|\vec{k}|^{2}}\right).$
(64)
The integral in the above equation is performed over the reciprocal space of
the triangular lattice (a sum over reciprocal lattice points in the discrete
case). The function $\Psi(\vec{r})$ is a non-isotropic field contingent upon
the symmetry of the lattice. In an
isotropic system, the force balance criterion is:
$\partial_{i}\sigma_{ij}(\vec{r})=f^{j}(\vec{r})=0.$ (65)
In the case of an amorphous system, we can establish an isotropic field
$\Psi^{\prime}(\vec{r})$, which lacks any lattice symmetry, yet meets the
force balance condition as expressed in Eq. (65), in a manner such that
$\displaystyle\delta\sigma_{ij}^{\prime}(\vec{r})=$
$\displaystyle\epsilon_{ia}\epsilon_{jb}\partial_{a}\partial_{b}\Psi^{\prime}(\vec{r}).$
(66)
In the Voigt notation [67],
$\displaystyle\delta\sigma_{i}^{\prime}(\vec{r})=$
$\displaystyle\left[\phi_{i1}^{\prime}\partial_{y}^{2}+\phi_{i2}^{\prime}\partial_{x}^{2}-\phi_{i3}^{\prime}\partial_{x}\partial_{y}\right]\Psi^{\prime}(\vec{r}),$
(67) $\displaystyle\text{where, }\phi_{ij}^{\prime}=$
$\displaystyle\lim_{R_{0}\to 2a_{0}}\phi_{ij}=\mathcal{C}(2a_{0},K)\delta_{ij}.$
So at large length scales the $\hat{\Phi}$ tensor captures the difference
between the coarse-grained local stresses of an amorphous system and a
disordered crystal. For an amorphous system $\hat{\Phi}$ is proportional to
the identity matrix, whereas in near-crystalline systems $\hat{\Phi}$ also
contains off-diagonal elements. This difference vanishes for a marginally
jammed disordered crystal, i.e., $R_{0}\to 2a_{0}$.
Figure 7: Angular variation of the change in local stress correlation in
Fourier space due to particle size disorder in a 3D near-crystalline (fcc)
system as $|\vec{k}|\to 0$. Here
$C_{\alpha\beta\mu\nu}(\theta,\phi)=\lim_{|\vec{k}|\to
0}\left<\delta\tilde{\sigma}_{\alpha\beta}(\vec{k}).\delta\tilde{\sigma}_{\mu\nu}(-\vec{k})\right>=\frac{N\eta^{2}}{48}S_{\alpha\beta}S_{\mu\nu}$,
where exact expressions for $S_{\alpha\beta}$ and its angular variations are
given in Eqs. (86), (87) and (88). Here we have plotted the correlations in
the Hammer projection as given in Eq. (92) for
$(H_{x}/2\sqrt{2})^{2}+(H_{y}/\sqrt{2})^{2}\leq 1$.
### 6.5 Stress correlations in three dimensions
In this section, we derive the stress correlations in a three dimensional
system induced by microscopic disorder. The method developed earlier for the
displacement fields and the change in local stresses is still valid for the 3D
fcc lattice with the only difference arising from the number of nearest
neighbors and their arrangement in space. We can also write the displacement
fields of the particles as,
$\delta\tilde{r}^{\alpha}(\vec{k})=G^{\alpha}(\vec{k})\delta\tilde{a}(\vec{k})$
[61]. Using this we can write down the expression for change in the local
stress components in an fcc lattice in Fourier space as,
$\delta\tilde{\sigma}_{\alpha\beta}(\vec{k})=S_{\alpha\beta}(\vec{k})\delta\tilde{a}(\vec{k})$.
The exact expressions for Green’s functions for displacement fields and the
change in local stress components due to both particle size disorder and force
pinning in fcc lattice are detailed in the appendix A.1 and appendix A.2
respectively. At large lengthscales, the Green’s function in real space for
change in local stress has the following radial behavior,
$S_{\alpha\beta}(\vec{r})=\mathcal{F}^{-1}\left[S_{\alpha\beta}(\vec{k})\right]\sim\frac{B_{\alpha\beta}(\theta,\phi)}{r^{3}},\qquad\forall\,r\gg R_{0}.$ (68)
Similar to two dimensional systems, we can write the correlation between
different components of stress using the Green’s functions defined above in
$|\vec{k}|\to 0$ limit as
$\displaystyle C_{\alpha\beta\mu\nu}(\theta,\phi)=$
$\displaystyle\langle\delta a^{2}\rangle\lim_{|\vec{k}|\to
0}S_{\alpha\beta}(\vec{k})S_{\mu\nu}(-\vec{k}).$ (69)
Here the preliminary theoretical results for the Green’s functions for local
stress and the stress correlations in Fourier space are given in the Appendix
A.2 and all the distinct components of these correlations are plotted in
Figure. 7.
## 7 Response to a point force
Subsequently, we examine the change in the local stress components within a
crystalline system caused by finite quenched forces. We start by introducing
finite external quenched forces, represented as $\vec{f}_{a}(\vec{r}_{i})$, to
each grain $i$ in the crystalline system. The sum of these forces satisfies
the condition $\sum_{i}\vec{f}_{a}(\vec{r}_{i})=0$. To balance the forces on
each grain in the system, particles shift from their original lattice
positions. In the case of an ideal crystalline system with forces attached to
particles, the force balance requirement for each grain $i$ can be expressed
as follows:
$\displaystyle\sum_{j}\left(f^{(0)\mu}_{ij}+\delta f^{\mu}_{ij}\right)=$
$\displaystyle-(f_{a})^{\mu}_{i},$ (70)
where $f_{ij}^{(0)}$ are the forces along the bond between particle $i$ and
$j$ in the initial crystalline system whereas $\delta f_{ij}$ correspond to
the change in the bond forces due to the external pinning. Here $\delta
f_{ij}$ can be approximated using their first-order Taylor series expansion,
resulting in the following linear expression:
$\displaystyle\sum_{j}\sum_{\nu}C_{ij}^{\mu\nu}\delta r^{\nu}_{ij}=$
$\displaystyle-(f_{a})^{\mu}_{i}.$ (71)
The above expression is similar to the linear equation for the displacement
fields due to particle size disorder. By employing a similar approach as
explained earlier for particle size disorder, we can obtain the expression for
displacement fields resulting from pinned forces, which is presented as
follows:
$\delta\tilde{r}^{\mu}(\vec{k})=-\sum_{\nu}\underbrace{\left(A^{-1}\right)^{\mu\nu}(\vec{k})}_{\tilde{\mathcal{G}}^{\mu\nu}(\vec{k})}\tilde{f}^{\nu}_{a}(\vec{k}),$
(72)
whose inverse Fourier transform gives the displacement fields in real space
due to the force quench. The large lengthscale behavior of these displacements
has been studied thoroughly [32]. We have given a brief description of the
above expressions in Appendix A.1. Using these displacement fields and the
expression for the change in local stresses, one can write the components of
change in local stresses in Fourier space as,
$\delta\tilde{\sigma}_{\alpha\beta}(\vec{k})=\sum_{\mu}\mathcal{S}^{\mu}_{\alpha\beta}(\vec{k})\tilde{f}_{a}^{\mu}(\vec{k}),\text{
where},$ (73)
$\displaystyle\mathcal{S}_{\alpha\beta}^{\mu}(\vec{k})=$
$\displaystyle\sum_{j}\left[\left(-1+F_{j}(\vec{k})\right)\left(\sum_{\nu}\Delta^{\alpha(0)}_{j}C_{j}^{\beta\nu}\tilde{\mathcal{G}}^{\mu\nu}(\vec{k})+f_{j}^{\beta(0)}\tilde{\mathcal{G}}^{\mu\alpha}(\vec{k})\right)\right].$
(74)
Figure 8: Change in local stress components per unit force
($\delta\sigma_{\alpha\beta}(\vec{r})/f_{a}$) applied at the origin in 2D
near-crystalline systems of $N=10000$ particles with periodic boundary
conditions. Here the top panel $(a)$ corresponds to direct numerical
simulations and $(b)$ represents the theoretical results as given in Eq. (78).
We can connect the Green’s functions for the displacement and stress fields
produced due to force quench to that of the microscopic particle size disorder
which can be represented as follows,
$\displaystyle\tilde{G}^{\mu}(\vec{k})=$
$\displaystyle\sum_{\nu}\tilde{\mathcal{G}}^{\mu\nu}(\vec{k})\left(\sum_{j}C_{j}^{\nu
a}\left[1+F_{j}(\vec{k})\right]\right),$ (75) $\displaystyle
S_{\alpha\beta}(\vec{k})=$
$\displaystyle\sum_{\nu}\mathcal{S}^{\nu}_{\alpha\beta}(\vec{k})\left(\sum_{j}C_{j}^{\nu
a}\left[1+F_{j}(\vec{k})\right]\right).$
In real space, the change in the local stress components can be represented as
$\delta\sigma_{\alpha\beta}(\vec{r})=\sum_{\vec{r}^{\prime}}\sum_{\mu}\mathcal{S}^{\mu}_{\alpha\beta}(\vec{r}-\vec{r}^{\prime})f^{\mu}_{a}(\vec{r}^{\prime}),$
(76)
where $\mathcal{S}^{\mu}_{\alpha\beta}(\vec{r})$ corresponds to the Green’s
function for the change in local stress components due to pinned force in real
space. These Green’s functions in the above expression can be understood as
the change in local stresses due to a unit force applied to a single particle
at the origin. The exact expressions for these Green’s functions in Fourier
space (in the $k\to 0$ limit) are detailed in Appendix A.2. Performing an
inverse Fourier transform yields the large lengthscale behavior of these
Green’s functions, which show a $1/r$ radial decay in two dimensions, while in
three dimensions $\mathcal{S}_{\alpha\beta}^{\gamma}\sim 1/r^{2}$ and
$\mathcal{S}_{\alpha\alpha}^{\alpha}\sim 1/r$. In two dimensions we can write
these Green’s functions explicitly as
$\displaystyle\mathcal{S}^{x}_{xx}(\vec{r})=$
$\displaystyle\left(4\left(R_{0}-a_{0}\right)\cos{\theta}+a_{0}\cos{3\theta}\right)/(2R_{0}-a_{0})r,$
(77) $\displaystyle\mathcal{S}^{x}_{xy}(\vec{r})=$
$\displaystyle\left(2R_{0}-a_{0}+2a_{0}\cos{2\theta}\right)\sin{\theta}/(2R_{0}-a_{0})r,$
$\displaystyle\mathcal{S}^{x}_{yy}(\vec{r})=$ $\displaystyle
a_{0}\cos{3\theta}/(2R_{0}-a_{0})r,$
$\displaystyle\mathcal{S}^{y}_{xx}(\vec{r})=$ $\displaystyle
a_{0}\sin{3\theta}/(2R_{0}-a_{0})r,$
$\displaystyle\mathcal{S}^{y}_{xy}(\vec{r})=$
$\displaystyle\left(2R_{0}-a_{0}-2a_{0}\cos{2\theta}\right)\cos{\theta}/(2R_{0}-a_{0})r,$
$\displaystyle\mathcal{S}^{y}_{yy}(\vec{r})=$
$\displaystyle\left(4\left(R_{0}-a_{0}\right)\sin{\theta}-a_{0}\sin{3\theta}\right)/(2R_{0}-a_{0})r.$
To verify our results we have taken a system of $N=10000$ particles in a
triangular lattice arrangement and assigned a point force ($f_{a}\hat{y}$)
directed along the $y-$axis to a particle located at the origin ($0,0$). To
make the net force in the system zero we apply an additional $-f_{a}\hat{y}/N$
to all the particles in the system. All the particles will rearrange to
balance the point force, which results in a change in the local stress
profile. So the displacements and the change in local stresses can be written
as,
$\displaystyle\delta x(\vec{r})=-f_{a}\mathcal{G}^{xy}(\vec{r}),\hskip
14.22636pt\delta y(\vec{r})=-f_{a}\mathcal{G}^{yy}(\vec{r}),$ (78)
$\displaystyle\delta\sigma_{\alpha\beta}(\vec{r})=f_{a}\mathcal{S}^{y}_{\alpha\beta}(\vec{r}).$
The results above are verified in Fig. 8, where we show the match between the
analytically and numerically obtained change in the local stress profile due
to a quenched force along the $y$-axis. We have also performed a preliminary
study on the effect of pinning in 3D systems, which is detailed in Appendix A.2.
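For reference, the $y$-force Green’s functions of Eq. (77) are simple to
evaluate; the snippet below (our own, with $R_{0}$ and $a_{0}$ as in Sec. 3.1)
returns the predicted stress response of Eq. (78) for a point force along $y$.

```python
import numpy as np

a0 = 0.5
R0 = 2 * a0 * (0.9069 / 0.92) ** 0.5     # over-compressed spacing at phi = 0.92

def stress_response_y(x, y, f_a=1.0):
    """(delta sigma_xx, xy, yy) at (x, y) for a point force f_a along y, Eq. (78)."""
    r, th = np.hypot(x, y), np.arctan2(y, x)
    den = (2 * R0 - a0) * r
    s_xx = a0 * np.sin(3 * th) / den                                   # S^y_xx
    s_xy = (2 * R0 - a0 - 2 * a0 * np.cos(2 * th)) * np.cos(th) / den  # S^y_xy
    s_yy = (4 * (R0 - a0) * np.sin(th) - a0 * np.sin(3 * th)) / den    # S^y_yy
    return f_a * s_xx, f_a * s_xy, f_a * s_yy
```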
## 8 Distribution of Stresses
Computations of stress correlations in athermal amorphous materials implicitly
assume a partition function description [41, 18, 57, 58]. We show below that
such an ensemble indeed emerges from the fluctuations of the individual radii.
We obtain the joint probability distribution of changes in local stresses in
Fourier space by utilizing the linear relationships between the change in
local stress and the disorder, as described in Eqs. (40) and (76).
$\displaystyle P($
$\displaystyle\delta\tilde{\sigma}_{1}(\vec{k}),\delta\tilde{\sigma}_{2}(\vec{k}),\delta\tilde{\sigma}_{3}(\vec{k}))=\int_{-\infty}^{\infty}d\left(\delta\tilde{a}_{k}\right)p(\delta\tilde{a}_{k})\prod_{i=1}^{3}\delta\left(\delta\tilde{\sigma}_{i}^{R}-S_{i}\delta\tilde{a}(\vec{k})^{R}\right)\delta\left(\delta\tilde{\sigma}_{i}^{I}-S_{i}\delta\tilde{a}(\vec{k})^{I}\right)$
(79)
$\displaystyle=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\prod_{j=1}^{3}df_{j}dg_{j}e^{i\left(f_{j}\delta\tilde{\sigma}_{j}^{R}+g_{j}\delta\tilde{\sigma}_{j}^{I}\right)}\underbrace{\int_{-\infty}^{\infty}d\left(\delta\tilde{a}_{k}\right)p(\delta\tilde{a}_{k})e^{-i\left(\sum_{m=1}^{3}\left(\delta\tilde{a}(\vec{k})^{R}f_{m}+\delta\tilde{a}(\vec{k})^{I}g_{m}\right)S_{m}\right)}}_{h\left(\\{f,g,S\\}\right)}.$
In the above equation, the superscripts $R$ and $I$ correspond to the real and
imaginary parts respectively. Since the $\delta a_{i}$ are independent draws
from a uniform distribution, the central limit theorem implies that their
Fourier transform $\delta\tilde{a}(\vec{k})$ is Gaussian distributed, with
$p(\delta\tilde{a}_{k})=(48/\pi N\eta^{2})^{1/2}\exp\left(-48|\delta\tilde{a}(\vec{k})|^{2}/N\eta^{2}\right).$
Using this distribution, we can rewrite the above equation as,
$\displaystyle h$
$\displaystyle\left(\\{f,g,S\\}\right)=\exp{-\frac{N\eta^{2}}{192}\sum_{m,n=1}^{3}(f_{m}f_{n}+g_{m}g_{n})\left(S_{m}S_{n}\right)}=\exp{-\frac{N\eta^{2}}{192}(\bra{f}\hat{S}\ket{f}+\bra{g}\hat{S}\ket{g})},$
(80)
where $\bra{f}=(f_{1}\;f_{2}\;f_{3})$, $\bra{g}=(g_{1}\;g_{2}\;g_{3})$ and
$\hat{\mathcal{S}}_{mn}=S_{m}S_{n}$. Therefore, using the above expression we
can rewrite the joint probability distribution of the stress components in
Fourier space as
$\displaystyle P(\delta\tilde{\sigma}_{xx}(\vec{k}),\delta\tilde{\sigma}_{yy}(\vec{k}),\delta\tilde{\sigma}_{xy}(\vec{k}))=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\prod_{j=1}^{3}df_{j}dg_{j}e^{i\left(\bra{f}\ket{\delta\tilde{\sigma}^{R}}+\bra{g}\ket{\delta\tilde{\sigma}^{I}}\right)}e^{-\frac{N\eta^{2}}{192}\left(\bra{f}\hat{S}\ket{f}+\bra{g}\hat{S}\ket{g}\right)}$ (81)
$\displaystyle=\left(\frac{48}{\pi N\eta^{2}}\right)^{1/2}\exp{-\frac{48}{N\eta^{2}}\bra{\delta\tilde{\sigma}^{R}}\mathcal{\hat{S}}^{-1}\ket{\delta\tilde{\sigma}^{R}}},$
where
$\bra{\delta\tilde{\sigma}^{R}}=\begin{bmatrix}\delta\tilde{\sigma}_{xx}(\vec{k})&\delta\tilde{\sigma}_{yy}(\vec{k})&\delta\tilde{\sigma}_{xy}(\vec{k})\end{bmatrix},\text{ and }\mathcal{\hat{S}}=\begin{bmatrix}S_{xx}S_{xx}&S_{xx}S_{yy}&S_{xx}S_{xy}\\\ S_{yy}S_{xx}&S_{yy}S_{yy}&S_{yy}S_{xy}\\\ S_{xy}S_{xx}&S_{xy}S_{yy}&S_{xy}S_{xy}\end{bmatrix},$ (82)
where the forms of the $S_{\alpha\beta}$ are model dependent; their exact
expressions are provided in Appendix A.2.
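As a quick numerical illustration of the structure of Eq. (82), the sketch below (with arbitrary placeholder values for the $S_{i}$ and for the variance of $\delta\tilde{a}(\vec{k})$) samples a complex Gaussian disorder amplitude, forms $\delta\tilde{\sigma}_{i}=S_{i}\,\delta\tilde{a}(\vec{k})$ as in Eq. (79), and confirms that the empirical covariance of the stress components is the rank-one matrix proportional to $S_{m}S_{n}$.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.array([0.7, -1.2, 0.4])   # placeholder values of (S_xx, S_yy, S_xy) at some k
var = 2.0                        # placeholder variance N*eta^2/48 of delta a(k)

# Complex Gaussian disorder amplitude delta a(k); real and imaginary parts
# each carry half the variance.
da = (rng.normal(scale=np.sqrt(var / 2), size=100_000)
      + 1j * rng.normal(scale=np.sqrt(var / 2), size=100_000))

# Linear relation delta sigma_i = S_i * delta a(k), as in eq. (79).
dsigma = np.outer(S, da)

# Empirical covariance of the real parts vs. the rank-one prediction.
cov = np.cov(dsigma.real)
print(np.allclose(cov, 0.5 * var * np.outer(S, S), rtol=0.05))
```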
Earlier studies [41, 18] have shown that the generalized elastic constants in
amorphous packings are directly related to the correlations in the components
of the stress tensor, a relation that does not explicitly depend on the
strength of the disorder in the system. However, as we have shown, in
near-crystalline packings the distributions depend on the strength of the
disorder while the elastic constants are independent of it. It is therefore
not evident that the formulations developed earlier [41, 18] for generalized
elastic constants in amorphous systems can be directly implemented in the
context of near-crystalline packings.
## 9 Discussion and Conclusion
In this study, we have examined the elastic properties, stress fluctuations as
well as spatial stress correlations in near-crystalline athermal systems. We
have obtained exact theoretical results for the macroscopic elastic properties
by utilizing the fact that the average change in local stresses due to
microscopic disorder in crystalline athermal solids is negligible. Our
findings reveal that these elastic properties remain unaffected by the degree
of disorder within a crystalline packing but are influenced by various initial
conditions, such as packing fraction, pressure, and the strength of particle
interactions. Furthermore, we have presented both numerical and theoretical
results for local stress fluctuations and their spatial correlations within
energy-minimized configurations of soft particles in both two and three
dimensions. Notably, all these fluctuations and correlations exhibit a
quadratic variation with the strength of the disorder. We have shown that the
components of the stress tensor display anisotropic long-range decay in both
two- and three-dimensional near-crystalline packings, irrespective of the
interaction potential. For particle size disorder we observe a $1/r^{d}$
radial decay of the change in the stress-tensor components, whereas for
external quenched forces we have established a slower $1/r^{d-1}$ radial
decay. The stress correlations in disordered crystals differ significantly
from those observed in isotropic amorphous materials at high packing fractions
or under high-pressure conditions [57, 58]. In particular, we have observed
additional non-isotropic angular behavior in the stress correlations, which
becomes prominent at higher packing fractions.
Crucially, for the case of near-crystalline packings, we have found that the
correlations depend on the strength of the disorder (proportional to
$\eta^{2}$) introduced into the system, whereas the macroscopic elastic
coefficients are largely independent of disorder. This is in contrast to
stress correlations in amorphous materials, where the magnitude of the
correlations has been related to elastic constants that are independent of the
degree of disorder [41, 18]. It would therefore be very interesting to examine
the crossover between this near-crystalline and amorphous behaviour as the
degree of disorder is increased beyond a critical threshold.
Several interesting questions remain for further research. For example, the
behavior of these stress correlations as the disorder in the system is
increased may not vary quadratically with the strength of the disorder, since
the linear perturbation expansion is not valid at sufficiently high disorder.
It would therefore be interesting to study these properties across the
crystalline-to-amorphous transition. It would also be intriguing to extend our
analysis to dynamical systems where particles obey local force balance
constraints only on average.
Recent studies [68] have connected the fluctuating elastic constants to the
quasilocalised vibrational modes in amorphous materials. This hypothesis would
be interesting to examine microscopically within the context of near-
crystalline materials. Our study also highlights the importance of prestress
in the stress correlations of athermal materials. Increasingly, studies have
shown that accounting for the impact of stresses or prestress is crucial in
understanding mechanical properties of amorphous materials [69, 70]. Our
techniques could therefore be used to test the impact of frozen-in stresses on
the elasticity characteristics of both crystalline and amorphous materials.
## Acknowledgements
We thank Surajit Chakraborty, Pinaki Chaudhuri, Debankur Das, Bulbul
Chakraborty, Subhro Bhattacharjee, Jishnu Nampoothiri and Palash Bera for
useful discussions. The work of K. R. was partially supported by the SERB-
MATRICS grant MTR/2022/000966. This project was funded by intramural funds at
TIFR Hyderabad from the Department of Atomic Energy (DAE), Government of
India.
## Appendix A Green’s functions in the Harmonic model
### A.1 Green’s functions for displacement fields
The displacement of every grain (in Fourier space) due to pinned forces can be
written, using the translational invariance of the system, as detailed in the
recent publications [60, 32], in the form
$\delta\tilde{r}^{\mu}(\vec{k})=-\sum_{\nu}\underbrace{\left(A^{-1}\right)^{\mu\nu}(\vec{k})}_{\tilde{\mathcal{G}}^{\mu\nu}(\vec{k})}\tilde{f}^{\nu}_{a}(\vec{k}).$
(83)
Similarly, the linear order displacement fields due to disorder in the
particle sizes can be expressed as,
$\delta\tilde{r}^{\alpha}(\vec{k})=\tilde{G}^{\alpha}(\vec{k})\delta\tilde{a}(\vec{k}).$
(84)
Below we have given the general form of these Green’s functions with various
initial packing fractions, particle size disorder and random quenched forces
for both two and three dimensional harmonic soft particle systems.
Force pinning (2d)
---
$\tilde{\mathcal{G}}^{xx}(\vec{k})$ | $\frac{R_{0}a_{0}^{2}}{K}\frac{m_{1}}{\left(m_{1}m_{2}-m_{3}^{2}\right)}$ | where, $m_{1}=-3(R_{0}-a_{0})+(R_{0}-2a_{0})\cos{2k_{x}}+(2R_{0}-a_{0})\cos{k_{x}}\cos{k_{y}},$
$\tilde{\mathcal{G}}^{yy}(\vec{k})$ | $\frac{R_{0}a_{0}^{2}}{K}\frac{m_{2}}{\left(m_{1}m_{2}-m_{3}^{2}\right)}$ | $m_{2}=-3(R_{0}-a_{0})+R_{0}\cos{2k_{x}}+(2R_{0}-3a_{0})\cos{k_{x}}\cos{k_{y}},$
$\tilde{\mathcal{G}}^{xy}(\vec{k})$ | $\frac{R_{0}a_{0}^{2}}{K}\frac{m_{3}}{\left(m_{1}m_{2}-m_{3}^{2}\right)}$ | $m_{3}=\sqrt{3}a_{0}\sin{k_{x}}\sin{k_{y}}.$
Force pinning (3d)
$\tilde{\mathcal{G}}^{\alpha\alpha}(\vec{k})$ | $\frac{R_{0}a_{0}^{2}}{K}\frac{p_{\beta}p_{\gamma}-q_{\alpha}^{2}}{h}$ | where, $h=p_{x}p_{y}p_{z}+2q_{x}q_{y}q_{z}-p_{x}q_{x}^{2}-p_{y}q_{y}^{2}-p_{z}q_{z}^{2},$
$\tilde{\mathcal{G}}^{\alpha\beta}(\vec{k})$ | $\frac{R_{0}a_{0}^{2}}{K}\frac{p_{\gamma}q_{\gamma}-q_{\alpha}q_{\beta}}{h}$ | $p_{\alpha}=(-3R_{0}+4a_{0}+(R_{0}-2a_{0})\cos{k_{\beta}}\cos{k_{\gamma}}$ $+(R_{0}-a_{0})\cos{k_{\alpha}}(\cos{k_{\beta}}+\cos{k_{\gamma}})),$ $q_{\alpha}=a_{0}\sin{k_{\beta}}\sin{k_{\gamma}},\hskip 28.45274pt\forall\alpha\neq\beta\neq\gamma\in\\{x,y,z\\}.$
Particle size disorder (2d)
$\tilde{G}^{\alpha}(\vec{k})$ | $\sum_{\beta}\tilde{\mathcal{G}}^{\alpha\beta}(\vec{k})D^{\beta}(\vec{k})$ | where, $D^{x}(\vec{k})=i\frac{K(R_{0}-a_{0})}{a_{0}^{3}}\left(2\cos{k_{x}}+\cos{k_{y}}\right)\sin{k_{x}},$ $D^{y}(\vec{k})=i\frac{\sqrt{3}K(R_{0}-a_{0})}{a_{0}^{3}}\cos{k_{x}}\sin{k_{y}}.$
Particle size disorder (3d)
$\tilde{G}^{\alpha}(\vec{k})$ | $\sum_{\beta}\tilde{\mathcal{G}}^{\alpha\beta}(\vec{k})D^{\beta}(\vec{k})$ | where, $D^{\alpha}(\vec{k})=i\frac{K(R_{0}-a_{0})}{\sqrt{2}a_{0}^{3}}\left(\cos{k_{\beta}}+\cos{k_{\gamma}}\right)\sin{k_{\alpha}},\hskip 2.84544pt\forall\alpha\neq\beta\neq\gamma\in\\{x,y,z\\}.$
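As an illustration, the sketch below (placeholder values for $R_{0}$, $a_{0}$ and $K$; not part of the original analysis) evaluates the two-dimensional force-pinning Green's functions $\tilde{\mathcal{G}}^{\mu\nu}(\vec{k})$ directly from the first table above.

```python
import numpy as np

def G_force_pinning_2d(kx, ky, R0=1.0, a0=0.5, K=1.0):
    """Fourier-space displacement Green's functions for force pinning in 2d,
    transcribed from the table above. R0, a0, K are illustrative values."""
    m1 = (-3 * (R0 - a0) + (R0 - 2 * a0) * np.cos(2 * kx)
          + (2 * R0 - a0) * np.cos(kx) * np.cos(ky))
    m2 = (-3 * (R0 - a0) + R0 * np.cos(2 * kx)
          + (2 * R0 - 3 * a0) * np.cos(kx) * np.cos(ky))
    m3 = np.sqrt(3) * a0 * np.sin(kx) * np.sin(ky)
    pref = R0 * a0**2 / (K * (m1 * m2 - m3**2))
    return pref * m1, pref * m2, pref * m3  # G^xx, G^yy, G^xy

print(G_force_pinning_2d(0.3, 0.2))
```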
### A.2 Green’s functions for change in local stresses in Fourier space
In the linear approximation, the change in the local stress due to external
force pinning in both two and three dimensional systems can be expressed in
Fourier space as,
$\delta\tilde{\sigma}_{\mu\nu}(\vec{k})=\sum_{\alpha}\mathcal{S}^{\alpha}_{\mu\nu}(\vec{k})\tilde{f}^{\alpha}_{a}(\vec{k}).$
(85)
For particle size disorder, the Green's functions $S_{\alpha\beta}$ given in
Eq. (74), which depend only on the nearest-neighbor arrangement, take simple
forms at small magnitudes of $|\vec{k}|$ (corresponding to large lengthscales
in real space), as expressed below.
Force Pinning (2d)
---
$\mathcal{S}^{\alpha}_{\alpha\alpha}(\vec{k})$ | $-2ik_{\alpha}\left(k_{\beta}^{2}(4R_{0}-a_{0})+k_{\alpha}^{2}(4R_{0}-5a_{0})\right)/\left(k^{4}(2R_{0}-a_{0})\right),$
$\mathcal{S}^{\alpha}_{\beta\beta}(\vec{k})$ | $-2ik_{\alpha}(k_{\alpha}^{2}-3k_{\beta}^{2})a_{0}/\left(k^{4}(2R_{0}-a_{0})\right),$
$\mathcal{S}^{\alpha}_{\alpha\beta}(\vec{k})=\mathcal{S}^{\alpha}_{\beta\alpha}(\vec{k})$ | $-2ik_{\beta}\left(k_{\alpha}^{2}(2R_{0}-5a_{0})+k_{\beta}^{2}(2R_{0}-a_{0})\right)/\left(k^{4}(2R_{0}-a_{0})\right).$
Force Pinning (3d)
$\mathcal{S}^{\alpha}_{\mu\nu}(\vec{k})$ | $i\sqrt{2}R_{0}n^{\alpha}_{\mu\nu}(\vec{k})/d(\vec{k})$ where, $d(\vec{k})=c_{0}^{2}(c_{0}-a_{0})k^{6}-\frac{3a_{0}^{2}}{2}c_{0}k^{2}\sum_{\alpha\beta\in\\{x,y,z\\}}k_{\alpha}^{2}k_{\beta}^{2}+5a_{0}^{3}(k_{x}k_{y}k_{z})^{2},$ $n^{\alpha}_{\alpha\alpha}(\vec{k})=c_{0}k^{2}\left[c_{0}^{2}k^{2}+2a_{0}c_{1}(k^{2}-k_{\alpha}^{2})\right]-3a_{0}^{2}c_{2}k_{\beta}^{2}k_{\gamma}^{2},$ $n^{\alpha}_{\beta\beta}(\vec{k})=a_{0}k_{\alpha}\left[2c_{0}c_{1}k^{4}+3a_{0}(8c_{3}-a_{0})k_{\beta}^{2}(k_{\alpha}^{2}+k_{\beta}^{2})+a_{0}c_{0}k^{2}k_{\alpha}^{2}\right.$ $\left.\hskip 28.45274pt-(16c_{3}^{2}+a_{0}(2c_{2}+a_{0}))k^{2}k_{\beta}^{2}\right],$ $n^{\alpha}_{\alpha\beta}(\vec{k})=c_{0}k_{\beta}\left[2c_{0}c_{3}k^{4}-12R_{0}a_{0}k^{2}k_{\alpha}^{2}\right.$ $\hskip 28.45274pt\left.-3a_{0}^{2}\left((4k_{\alpha}^{2}-k_{\beta}^{2})(k_{\alpha}^{2}+k_{\beta}^{2})-k^{2}(k_{\alpha}^{2}-k_{\beta}^{2})\right)\right],$ $n^{\alpha}_{\beta\gamma}(\vec{k})=2a_{0}c_{0}k_{\alpha}k_{\beta}k_{\gamma}\left[2c_{0}k_{\alpha}^{2}+(2R_{0}+c_{0})(k_{\beta}^{2}+k_{\gamma}^{2})\right],\hskip 14.22636pt$ $\forall\alpha\neq\beta\neq\gamma\in\\{x,y,z\\}.$ with $c_{0}=(2R_{0}-3a_{0}),c_{1}=(R_{0}-2a_{0})$, $c_{2}=(2R_{0}-a_{0})$, $c_{3}=(R_{0}-a_{0}).$
Particle size disorder (2d)
$S_{\alpha\alpha}(\vec{k})$ | $\frac{\mathcal{C}(R_{0},K)}{|\vec{k}|^{2}}\left[\left(\frac{R_{0}}{a_{0}}-1\right)k_{\beta}^{2}+\left(2-\frac{R_{0}}{a_{0}}\right)k_{\alpha}^{2}\right],$
$S_{\alpha\beta}(\vec{k})$ | $\frac{\mathcal{C}(R_{0},K)}{|\vec{k}|^{2}}\left[\left(3-\frac{2R_{0}}{a_{0}}\right)k_{\alpha}k_{\beta}\right],\hskip 14.22636pt\forall\alpha\neq\beta\in\\{x,y\\},$ with $\mathcal{C}(R_{0},K)=\frac{6KR_{0}(R_{0}-a_{0})}{(2R_{0}-a_{0})a_{0}^{2}}.$
Particle size disorder (3d)
---
$S_{\mu\nu}(\vec{k})$ | $\left(2c_{3}R_{0}/a_{0}^{3}\right)\left(m_{\mu\nu}(\vec{k})/d(\vec{k})\right)$ where, $d(\vec{k})=c_{0}^{2}(c_{0}-a_{0})k^{6}-\frac{3a_{0}^{2}}{2}c_{0}k^{2}\sum_{\alpha\beta\in\\{x,y,z\\}}k_{\alpha}^{2}k_{\beta}^{2}+5a_{0}^{3}(k_{x}k_{y}k_{z})^{2},$ $m_{\alpha\alpha}(\vec{k})=c_{0}^{3}k^{6}-a_{0}^{2}(4c_{1}-a_{0})k_{\alpha}^{2}k_{\beta}^{2}k_{\gamma}^{2}-c_{0}a_{0}k^{2}\left(4c_{1}k_{\alpha}^{4}+a_{0}^{2}k_{\beta}^{2}k_{\gamma}^{2}\right)$ $\hskip 42.67912pt+c_{0}(8c_{1}^{2}+a_{0}c_{0})k^{4}k_{\alpha}^{2},$ $m_{\alpha\beta}(\vec{k})=c_{0}k_{\alpha}k_{\beta}\left(2c_{0}k^{2}-a_{0}(k_{\alpha}^{2}+k_{\beta}^{2})\right)\left(2c_{1}k^{2}+a_{0}(k_{\alpha}^{2}+k_{\beta}^{2})\right),$ $\hskip 42.67912pt\forall\alpha\neq\beta\neq\gamma\in\\{x,y,z\\}.$ with $c_{0}=(2R_{0}-3a_{0}),c_{1}=(R_{0}-2a_{0})$, $c_{2}=(2R_{0}-a_{0})$, $c_{3}=(R_{0}-a_{0}).$
Figure 9: Green's functions for the change in the local stress components in
the $k\to 0$ limit, which exhibit only anisotropic angular behavior. Here the
Green's functions are represented in the Hammer projection, i.e.,
$\\{\theta,\phi\to H_{x}(\theta,\phi),H_{y}(\theta,\phi)\\}$ as given in Eq.
(92), for $(H_{x}/2\sqrt{2})^{2}+(H_{y}/\sqrt{2})^{2}\leq 1$.
In the 3D Harmonic model, when we approach the limit of $|\vec{k}|\to 0$ and
$R_{0}\to 2a_{0}$ with $a_{0}=1/2$, the components of $S_{\alpha\beta}$ take
on specific forms:
$\displaystyle S_{\alpha\alpha}=$
$\displaystyle\frac{8(k_{\beta}^{2}+k_{\gamma}^{2})\left(|k|^{4}-k_{\beta}^{2}k_{\gamma}^{2}\right)}{|k|^{6}+(k_{x}^{6}+k_{y}^{6}+k_{z}^{6}+2k_{x}^{2}k_{y}^{2}k_{z}^{2})},\hskip
5.69046pt\text{and}\hskip
5.69046ptS_{\alpha\beta}=\frac{-8(k_{\alpha}k_{\beta})\left(|k|^{4}-k_{\gamma}^{4}\right)}{|k|^{6}+(k_{x}^{6}+k_{y}^{6}+k_{z}^{6}+2k_{x}^{2}k_{y}^{2}k_{z}^{2})},\hskip
14.22636pt$ (86)
$\displaystyle\forall\alpha\neq\beta\neq\gamma\in\\{x,y,z\\}.$
Now, in a spherical polar coordinate system, where
$k_{x}=k\sin{\theta}\cos{\phi}$, $k_{y}=k\sin{\theta}\sin{\phi}$, and
$k_{z}=k\cos{\theta}$, substituting these values into the equations above
yields the angular behavior of $S_{\alpha\beta}(\vec{k})$ as $|\vec{k}|\to 0$,
$\lim_{|\vec{k}|\to
0}S_{\alpha\beta}(|\vec{k}|,\theta,\phi)=\frac{g_{\alpha\beta}(\theta,\phi)}{h(\theta,\phi)}.$
(87)
In the specific case of $R_{0}\to 2a_{0}$ with $a_{0}=1/2$, the components of
$g_{\alpha\beta}(\theta,\phi)$ and $h(\theta,\phi)$ are given by,
$\displaystyle g_{xx}=$ $\displaystyle
8(\sin^{2}{\theta}\sin^{2}{\phi}+\cos^{2}{\theta})\left(1-\sin^{2}{\theta}\sin^{2}{\phi}\cos^{2}{\theta}\right),$
(88) $\displaystyle g_{yy}=$ $\displaystyle
8(\sin^{2}{\theta}\cos^{2}{\phi}+\cos^{2}{\theta})\left(1-\sin^{2}{\theta}\cos^{2}{\phi}\cos^{2}{\theta}\right),$
$\displaystyle g_{zz}=$ $\displaystyle
8\sin^{2}{\theta}\left(1-\sin^{4}{\theta}\sin^{2}{\phi}\cos^{2}{\phi}\right),$
$\displaystyle g_{xy}=$ $\displaystyle
2\sin^{4}{\theta}\sin^{2}{2\phi}\left(1-\cos^{2}{\theta}\right),$
$\displaystyle g_{yz}=$ $\displaystyle
2\sin^{2}{2\theta}\sin^{2}{\phi}\left(1-\sin^{2}{\theta}\cos^{2}{\phi}\right),$
$\displaystyle g_{zx}=$ $\displaystyle
2\sin^{2}{2\theta}\cos^{2}{\phi}\left(1-\sin^{2}{\theta}\sin^{2}{\phi}\right),$
$\displaystyle h(\theta,\phi)=1+\sin^{6}{\theta}(\sin^{6}{\phi}+\cos^{6}{\phi})+\cos^{6}{\theta}+2\sin^{4}{\theta}\sin^{2}{\phi}\cos^{2}{\phi}.$
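For reference, the following sketch (illustrative only) transcribes Eq. (88) into Python and returns the limiting angular forms $g_{\alpha\beta}(\theta,\phi)/h(\theta,\phi)$ of Eq. (87):

```python
import numpy as np

def S_limit_3d(theta, phi):
    """Angular limits of S_{ab}(k) as |k| -> 0 for R0 -> 2 a0, a0 = 1/2,
    transcribed from eq. (88); returns a dict of the six components."""
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    h = (1 + st**6 * (sp**6 + cp**6) + ct**6
         + 2 * st**4 * sp**2 * cp**2)
    g = {
        "xx": 8 * (st**2 * sp**2 + ct**2) * (1 - st**2 * sp**2 * ct**2),
        "yy": 8 * (st**2 * cp**2 + ct**2) * (1 - st**2 * cp**2 * ct**2),
        "zz": 8 * st**2 * (1 - st**4 * sp**2 * cp**2),
        "xy": 2 * st**4 * np.sin(2 * phi)**2 * (1 - ct**2),
        "yz": 2 * np.sin(2 * theta)**2 * sp**2 * (1 - st**2 * cp**2),
        "zx": 2 * np.sin(2 * theta)**2 * cp**2 * (1 - st**2 * sp**2),
    }
    return {key: val / h for key, val in g.items()}

print(S_limit_3d(np.pi / 3, np.pi / 5))
```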
## Appendix B Angular variation of stress correlations at large lengthscales
All six distinct stress correlations in a two-dimensional Harmonic
soft-particle system, where the underlying arrangement is a triangular
lattice, have the following forms:
$\displaystyle C_{xxxx}=4\mathcal{P}(R_{0},\eta)$
$\displaystyle\left[\left(\frac{R_{0}}{a_{0}}-1\right)\sin^{2}{\theta}+\left(2-\frac{R_{0}}{a_{0}}\right)\cos^{2}{\theta}\right]^{2}\xrightarrow{R_{0}\to
2a_{0}}4\mathcal{P}(2a_{0},\eta)\sin^{4}{\theta},$ (89) $\displaystyle
C_{yyyy}=4\mathcal{P}(R_{0},\eta)$
$\displaystyle\left[\left(\frac{R_{0}}{a_{0}}-1\right)\cos^{2}{\theta}+\left(2-\frac{R_{0}}{a_{0}}\right)\sin^{2}{\theta}\right]^{2}\xrightarrow{R_{0}\to
2a_{0}}4\mathcal{P}(2a_{0},\eta)\cos^{4}{\theta},$ $\displaystyle
C_{xyxy}=4\mathcal{P}(R_{0},\eta)$
$\displaystyle\left[\left(3-\frac{2R_{0}}{a_{0}}\right)\sin{\theta}\cos{\theta}\right]^{2}\xrightarrow{R_{0}\to
2a_{0}}4\mathcal{P}(2a_{0},\eta)\sin^{2}{\theta}\cos^{2}{\theta},$
$\displaystyle C_{yyxx}=4\mathcal{P}(R_{0},\eta)$
$\displaystyle\left[\sin^{2}{\theta}\cos^{2}{\theta}+\left(\frac{R_{0}}{a_{0}}-1\right)\left(2-\frac{R_{0}}{a_{0}}\right)(\sin^{4}{\theta}+\cos^{4}{\theta})\right]$
$\displaystyle\xrightarrow{R_{0}\to
2a_{0}}4\mathcal{P}(2a_{0},\eta)\sin^{2}{\theta}\cos^{2}{\theta},$
$\displaystyle C_{xxxy}=4\mathcal{P}(R_{0},\eta)$
$\displaystyle\left(3-\frac{2R_{0}}{a_{0}}\right)\left[\left(\frac{R_{0}}{a_{0}}-1\right)\sin^{3}{\theta}\cos{\theta}+\left(2-\frac{R_{0}}{a_{0}}\right)\sin{\theta}\cos^{3}{\theta}\right]$
$\displaystyle\xrightarrow{R_{0}\to
2a_{0}}-4\mathcal{P}(2a_{0},\eta)\sin^{3}{\theta}\cos{\theta},$ $\displaystyle
C_{yyyx}=4\mathcal{P}(R_{0},\eta)$
$\displaystyle\left(3-\frac{2R_{0}}{a_{0}}\right)\left[\left(\frac{R_{0}}{a_{0}}-1\right)\sin{\theta}\cos^{3}{\theta}+\left(2-\frac{R_{0}}{a_{0}}\right)\sin^{3}{\theta}\cos{\theta}\right]$
$\displaystyle\xrightarrow{R_{0}\to
2a_{0}}-4\mathcal{P}(2a_{0},\eta)\sin{\theta}\cos^{3}{\theta}.$
The stress correlations shown above for marginally jammed crystals, i.e., for
$R_{0}\to 2a_{0}$, have the same angular behavior as that of an isotropic
amorphous material, as shown earlier in several studies [56, 57, 58, 41, 18].
For an overcompressed disordered crystal with finite average pressure,
however, the angular behavior of the stress correlations shows an additional
anisotropic term that depends on the overcompression of the system. Therefore,
at finite pressure, we can rewrite the above stress correlations of a
disordered crystal as the stress correlations of an isotropic amorphous
material plus a finite shift, i.e.,
$C_{\alpha\beta\mu\nu}=C^{t}_{\alpha\beta\mu\nu}+d_{\alpha\beta\mu\nu}$, with
$\displaystyle C_{xxxx}^{t}=$ $\displaystyle 4K_{2D}\sin^{4}{\theta},\hskip
56.9055ptd_{xxxx}(\theta)=4K_{2D}t_{0}\left(t_{0}+2\sin^{2}{\theta}\right),$
(90) $\displaystyle C_{yyyy}^{t}=$ $\displaystyle
4K_{2D}\cos^{4}{\theta},\hskip
56.9055ptd_{yyyy}(\theta)=4K_{2D}t_{0}\left(t_{0}+2\cos^{2}{\theta}\right),$
$\displaystyle C_{xxxy}^{t}=$
$\displaystyle-4K_{2D}\sin^{3}{\theta}\cos{\theta},\hskip
22.76228ptd_{xxxy}(\theta)=-4K_{2D}t_{0}\sin(\theta)\cos(\theta),$
$\displaystyle C_{yyxy}^{t}=$
$\displaystyle-4K_{2D}\sin{\theta}\cos^{3}{\theta},\hskip
22.76228ptd_{yyxy}(\theta)=4K_{2D}t_{0}\sin(\theta)\cos(\theta),$
$\displaystyle C_{xyxy}^{t}=$ $\displaystyle
4K_{2D}\sin^{2}{\theta}\cos^{2}{\theta},\hskip 31.2982ptd_{xyxy}(\theta)=0,$
$\displaystyle C_{yyxx}^{t}=$ $\displaystyle
4K_{2D}\sin^{2}{\theta}\cos^{2}{\theta},\hskip
31.2982ptd_{yyxx}(\theta)=4K_{2D}t_{0}(1+t_{0}),$
where,
$K_{2D}=\mathcal{P}(R_{0},\eta)\left(\frac{2R_{0}}{a_{0}}-3\right)^{2},\hskip
5.69046pt\text{and}\hskip
5.69046ptt_{0}=\frac{2a_{0}-R_{0}}{2R_{0}-3a_{0}}\sim
2\left(1-\left(\frac{\phi_{0}}{\phi}\right)^{1/2}\right).$ (91)
For an isotropic medium with average pressure close to zero, i.e., $R_{0}\to
2a_{0}$, both theories for stress correlations, dealing with two completely
different scenarios, give the same results, as $d_{\alpha\beta\mu\nu}\to 0$.
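As a sanity check on this decomposition, the following sympy sketch (not part of the paper; the overall prefactor $\mathcal{P}$ is set to $1$ and $u=R_{0}/a_{0}$) verifies Eqs. (89)-(91) symbolically for the $C_{xxxx}$ and $C_{xxxy}$ components:

```python
import sympy as sp

theta, u = sp.symbols('theta u', positive=True)  # u = R0/a0, prefactor P = 1
s, c = sp.sin(theta), sp.cos(theta)

K2D = (2 * u - 3)**2                 # K_2D / P, eq. (91)
t0 = (2 - u) / (2 * u - 3)           # eq. (91)

# C_xxxx from eq. (89) vs. C^t_xxxx + d_xxxx from eq. (90).
C_xxxx = 4 * ((u - 1) * s**2 + (2 - u) * c**2)**2
rhs_xxxx = 4 * K2D * s**4 + 4 * K2D * t0 * (t0 + 2 * s**2)
print(sp.simplify(C_xxxx - rhs_xxxx))   # -> 0

# C_xxxy from eq. (89) vs. C^t_xxxy + d_xxxy from eq. (90).
C_xxxy = 4 * (3 - 2 * u) * ((u - 1) * s**3 * c + (2 - u) * s * c**3)
rhs_xxxy = -4 * K2D * s**3 * c - 4 * K2D * t0 * s * c
print(sp.simplify(C_xxxy - rhs_xxxy))   # -> 0
```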
## Appendix C Hammer projection
The Hammer projection is a useful technique for visualizing functions that
have a fixed radial coordinate, meaning they depend only on angular variables.
This projection is particularly valuable in two dimensional visualizations.
Any function with this characteristic can be effectively represented using
Hammer projection. In Hammer projection, we transform from spherical
coordinates $(\theta,\phi)$ to Hammer coordinates $(H_{x},H_{y})$ using the
following equations:
$\displaystyle H_{x}=$
$\displaystyle\frac{2\sqrt{2}\cos{\left(\theta-\pi/2\right)}\sin{\left(\phi/2\right)}}{\sqrt{1+\cos{\left(\theta-\pi/2\right)}\cos{\left(\phi/2\right)}}},\hskip
28.45274ptH_{y}=\frac{\sqrt{2}\sin{\left(\theta-\pi/2\right)}}{\sqrt{1+\cos{\left(\theta-\pi/2\right)}\cos{\left(\phi/2\right)}}}.$
(92)
These equations map spherical coordinates to Hammer coordinates, facilitating
the visualization of angle-dependent functions in a two-dimensional space.
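A direct implementation of Eq. (92), as a minimal sketch, is:

```python
import numpy as np

def hammer(theta, phi):
    """Map spherical coordinates (theta, phi) to Hammer coordinates
    (H_x, H_y), following eq. (92). Assumes theta in [0, pi] and
    phi in [-pi, pi]."""
    lat = theta - np.pi / 2            # latitude-like angle used in eq. (92)
    denom = np.sqrt(1 + np.cos(lat) * np.cos(phi / 2))
    Hx = 2 * np.sqrt(2) * np.cos(lat) * np.sin(phi / 2) / denom
    Hy = np.sqrt(2) * np.sin(lat) / denom
    return Hx, Hy

# All mapped points fall inside the ellipse (Hx/(2*sqrt(2)))**2 + (Hy/sqrt(2))**2 <= 1.
print(hammer(np.pi / 3, np.pi / 4))
```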
## References
* [1] A. Boromand, A. Signoriello, F. Ye, C. S. O’Hern and M. D. Shattuck, _Jamming of deformable polygons_ , Phys. Rev. Lett. 121, 248003 (2018), 10.1103/PhysRevLett.121.248003.
* [2] C. P. Broedersz, X. Mao, T. C. Lubensky and F. C. MacKintosh, _Criticality and isostaticity in fibre networks_ , Nature Physics 7(12), 983 (2011), https://doi.org/10.1038/nphys2127.
* [3] D. Bi, X. Yang, M. C. Marchetti and M. L. Manning, _Motility-driven glass and jamming transitions in biological tissues_ , Phys. Rev. X 6, 021011 (2016), 10.1103/PhysRevX.6.021011.
* [4] A. J. Licup, S. Münster, A. Sharma, M. Sheinman, L. M. Jawerth, B. Fabry, D. A. Weitz and F. C. MacKintosh, _Stress controls the mechanics of collagen networks_ , Proceedings of the National Academy of Sciences 112(31), 9573 (2015), 10.1073/pnas.1504258112.
* [5] M. Wyart, _On the rigidity of amorphous solids_ , In _Annales de Physique_ , vol. 30, pp. 1–96. EDP Sciences (2005).
* [6] P. R. Onck, T. Koeman, T. van Dillen and E. van der Giessen, _Alternative explanation of stiffening in cross-linked semiflexible networks_ , Phys. Rev. Lett. 95, 178102 (2005), 10.1103/PhysRevLett.95.178102.
* [7] M. Wyart, H. Liang, A. Kabla and L. Mahadevan, _Elasticity of floppy and stiff random networks_ , Phys. Rev. Lett. 101, 215501 (2008), 10.1103/PhysRevLett.101.215501.
* [8] A. Sharma, A. Licup, K. Jansen, R. Rens, M. Sheinman, G. Koenderink and F. MacKintosh, _Strain-controlled criticality governs the nonlinear mechanics of fibre networks_ , Nature Physics 12(6), 584 (2016), 10.1038/nphys3628.
* [9] M. F. J. Vermeulen, A. Bose, C. Storm and W. G. Ellenbroek, _Geometry and the onset of rigidity in a disordered network_ , Phys. Rev. E 96, 053003 (2017), 10.1103/PhysRevE.96.053003.
* [10] A. Baule, F. Morone, H. J. Herrmann and H. A. Makse, _Edwards statistical mechanics for jammed granular matter_ , Rev. Mod. Phys. 90, 015006 (2018), 10.1103/RevModPhys.90.015006.
* [11] S. Henkes, C. S. O’Hern and B. Chakraborty, _Entropy and temperature of a static granular assembly: An ab initio approach_ , Phys. Rev. Lett. 99, 038002 (2007), 10.1103/PhysRevLett.99.038002.
* [12] D. Bi, S. Henkes, K. E. Daniels and B. Chakraborty, _The statistical physics of athermal materials_ , Annu. Rev. Condens. Matter Phys. 6(1), 63 (2015), 10.1146/annurev-conmatphys-031214-014336.
* [13] M. E. Cates, J. P. Wittmer, J.-P. Bouchaud and P. Claudin, _Jamming, force chains, and fragile matter_ , Phys. Rev. Lett. 81, 1841 (1998), 10.1103/PhysRevLett.81.1841.
* [14] C. S. O’Hern, S. A. Langer, A. J. Liu and S. R. Nagel, _Random packings of frictionless particles_ , Phys. Rev. Lett. 88, 075507 (2002), 10.1103/PhysRevLett.88.075507.
* [15] H. Yoshino, _Replica theory of the rigidity of structural glasses_ , The Journal of Chemical Physics 136(21), 214108 (2012), https://doi.org/10.1063/1.4722343.
* [16] J. Geng, D. Howell, E. Longhi, R. P. Behringer, G. Reydellet, L. Vanel, E. Clément and S. Luding, _Footprints in sand: The response of a granular material to local perturbations_ , Phys. Rev. Lett. 87, 035506 (2001), 10.1103/PhysRevLett.87.035506.
* [17] H. Vinutha, F. D. Diaz Ruiz, X. Mao, B. Chakraborty and E. Del Gado, _Stress–stress correlations reveal force chains in gels_ , The Journal of chemical physics 158(11) (2023), https://doi.org/10.1063/5.0131473.
* [18] J. N. Nampoothiri, M. D’Eon, K. Ramola, B. Chakraborty and S. Bhattacharjee, _Tensor electromagnetism and emergent elasticity in jammed solids_ , Phys. Rev. E 106, 065004 (2022), 10.1103/PhysRevE.106.065004.
* [19] A. Rida, E. Martinez, D. Rodney and P.-A. Geslin, _Influence of stress correlations on dislocation glide in random alloys_ , Phys. Rev. Mater. 6, 033605 (2022), 10.1103/PhysRevMaterials.6.033605.
* [20] F. Vogel, A. Zippelius and M. Fuchs, _Emergence of goldstone excitations in stress correlations of glass-forming colloidal dispersions_ , Europhysics Letters 125(6), 68003 (2019), 10.1209/0295-5075/125/68003.
* [21] D. S. Dagur, C. Mondal and S. Roy, _Spatial stress correlations in strong colloidal gel systems_ , Phys. Rev. B 108, 024106 (2023), 10.1103/PhysRevB.108.024106.
* [22] J. P. Wittmer, A. N. Semenov and J. Baschnagel, _Correlations of tensor field components in isotropic systems with an application to stress correlations in elastic bodies_ , Phys. Rev. E 108, 015002 (2023), 10.1103/PhysRevE.108.015002.
* [23] S. Mahajan, J. Chattoraj and M. P. Ciamarra, _Emergence of linear isotropic elasticity in amorphous and polycrystalline materials_ , Phys. Rev. E 103, 052606 (2021), 10.1103/PhysRevE.103.052606.
* [24] T. Yanagishima, J. Russo and H. Tanaka, _Common mechanism of thermodynamic and mechanical origin for ageing and crystallization of glasses_ , Nature communications 8(1), 15954 (2017), https://doi.org/10.1038/ncomms15954.
* [25] L. D. Landau and E. Lifshitz, _Theoretical physics, vol. 7, theory of elasticity_ , Science, Moscow, Main editorial board for physical and mathematical literature (1987), https://doi.org/10.1063/1.3057037.
* [26] B. Cui, G. Ruocco and A. Zaccone, _Theory of elastic constants of athermal amorphous solids with internal stresses_ , Granular Matter 21(3), 1 (2019), https://doi.org/10.1007/s10035-019-0916-4.
* [27] G. Biroli and P. Urbani, _Breakdown of elasticity in amorphous solids_ , Nature physics 12(12), 1130 (2016), https://doi.org/10.1038/nphys3845.
* [28] P. Chaudhuri, S. Karmakar, C. Dasgupta, H. R. Krishnamurthy and A. K. Sood, _Equilibrium glassy phase in a polydisperse hard-sphere system_ , Phys. Rev. Lett. 95, 248301 (2005), 10.1103/PhysRevLett.95.248301.
* [29] H. Mizuno, S. Mossa and J.-L. Barrat, _Elastic heterogeneity, vibrational states, and thermal conductivity across an amorphisation transition_ , EPL (Europhysics Letters) 104(5), 56001 (2013), 10.1209/0295-5075/104/56001.
* [30] C. P. Goodrich, A. J. Liu and S. R. Nagel, _Solids between the mechanical extremes of order and disorder_ , Nature Physics 10(8), 578 (2014), https://doi.org/10.1038/nphys3006.
* [31] H. Tong, P. Tan and N. Xu, _From crystals to disordered crystals: A hidden order-disorder transition_ , Scientific reports 5, 15378 (2015), https://doi.org/10.1038/srep15378.
* [32] P. Acharya, D. Das and K. Ramola, _Disorder perturbation expansion for athermal crystals_ , Phys. Rev. E 104, 034608 (2021), 10.1103/PhysRevE.104.034608.
* [33] G. Tsekenis, _Jamming criticality of near-crystals_ , EPL (Europhysics Letters) 135(3), 36001 (2021), 10.1209/0295-5075/ac0ffc.
* [34] P. Charbonneau, E. I. Corwin, L. Fu, G. Tsekenis and M. van der Naald, _Glassy, gardner-like phenomenology in minimally polydisperse crystalline systems_ , Phys. Rev. E 99, 020901 (2019), 10.1103/PhysRevE.99.020901.
* [35] M. Otto, J.-P. Bouchaud, P. Claudin and J. E. S. Socolar, _Anisotropy in granular media: Classical elasticity and directed-force chain network_ , Phys. Rev. E 67, 031302 (2003), 10.1103/PhysRevE.67.031302.
* [36] S. Gelin, H. Tanaka and A. Lemaître, _Anomalous phonon scattering and elastic correlations in amorphous solids_ , Nature materials 15(11), 1177 (2016), https://doi.org/10.1038/nmat4736.
* [37] N. Xu, V. Vitelli, A. J. Liu and S. R. Nagel, _Anharmonic and quasi-localized vibrations in jammed solids—modes for mechanical failure_ , EPL (Europhysics Letters) 90(5), 56001 (2010), 10.1209/0295-5075/90/56001.
* [38] M. Shimada, H. Mizuno, M. Wyart and A. Ikeda, _Spatial structure of quasilocalized vibrations in nearly jammed amorphous solids_ , Phys. Rev. E 98, 060901 (2018), 10.1103/PhysRevE.98.060901.
* [39] E. Lerner and E. Bouchbinder, _Disordered crystals reveal soft quasilocalized glassy excitations_ , Phys. Rev. Lett. 129, 095501 (2022), 10.1103/PhysRevLett.129.095501.
* [40] W. A. Phillips and A. Anderson, _Amorphous solids: low-temperature properties_ , vol. 24, Springer, https://doi.org/10.1007/978-3-642-81534-8 (1981).
* [41] J. N. Nampoothiri, Y. Wang, K. Ramola, J. Zhang, S. Bhattacharjee and B. Chakraborty, _Emergent elasticity in amorphous solids_ , Physical review letters 125(11), 118002 (2020), 10.1103/PhysRevLett.125.118002.
* [42] D. Das, P. Acharya and K. Ramola, _Long-range correlations in pinned athermal networks_ , Phys. Rev. E 104, 014503 (2021), 10.1103/PhysRevE.104.014503.
* [43] D. Das, P. Acharya and K. Ramola, _Displacement correlations in disordered athermal networks_ , Journal of Statistical Physics 189(2), 1 (2022), https://doi.org/10.1007/s10955-022-02981-9.
* [44] I. Goldhirsch and C. Goldenberg, _On the microscopic foundations of elasticity_ , The European Physical Journal E 9(3), 245 (2002), https://doi.org/10.1140/epje/i2002-10073-5.
* [45] V. V. Vasisht, P. Chaudhuri and K. Martens, _Residual stress in athermal soft disordered solids: insights from microscopic and mesoscale models_ , Soft Matter 18(34), 6426 (2022), 10.1039/D2SM00615D.
* [46] A. Barbot, M. Lerbinger, A. Hernandez-Garcia, R. García-García, M. L. Falk, D. Vandembroucq and S. Patinet, _Local yield stress statistics in model amorphous solids_ , Phys. Rev. E 97, 033001 (2018), 10.1103/PhysRevE.97.033001.
* [47] S. F. Edwards and R. Oakeshott, _Theory of powders_ , Physica A: Statistical Mechanics and its Applications 157(3), 1080 (1989), https://doi.org/10.1016/0378-4371(89)90034-4.
* [48] R. Blumenfeld and S. F. Edwards, _On granular stress statistics: Compactivity, angoricity, and some open issues_ , The Journal of Physical Chemistry B 113(12), 3981 (2009), https://doi.org/10.1021/jp809768y.
* [49] M. Maier, A. Zippelius and M. Fuchs, _Emergence of long-ranged stress correlations at the liquid to glass transition_ , Phys. Rev. Lett. 119, 265701 (2017), 10.1103/PhysRevLett.119.265701.
* [50] L. Klochko, J. Baschnagel, J. Wittmer and A. Semenov, _Long-range stress correlations in viscoelastic and glass-forming fluids_ , Soft Matter 14(33), 6835 (2018), https://doi.org/10.1039/C8SM01055B.
* [51] H. Tong, S. Sengupta and H. Tanaka, _Emergent solidity of amorphous materials as a consequence of mechanical self-organisation_ , Nature communications 11(1), 1 (2020), https://doi.org/10.1038/s41467-020-18663-7.
* [52] M. Schindler, _A numerical test of stress correlations in fluctuating hydrodynamics_ , Chemical Physics 375(2-3), 327 (2010), https://doi.org/10.1016/j.chemphys.2010.05.008.
* [53] P. Mora and D. Place, _Stress correlation function evolution in lattice solid elasto-dynamic models of shear and fracture zones and earthquake prediction_ , Earthquake Processes: Physical Modelling, Numerical Simulation and Data Analysis Part II pp. 2413–2427 (2002), https://doi.org/10.1007/978-3-0348-8197-5_13.
* [54] T. S. Majmudar and R. P. Behringer, _Contact force measurements and stress-induced anisotropy in granular materials_ , Nature 435(7045), 1079 (2005), https://doi.org/10.1038/nature03805.
* [55] G. Lois, J. Zhang, T. S. Majmudar, S. Henkes, B. Chakraborty, C. S. O’Hern and R. P. Behringer, _Stress correlations in granular materials: An entropic formulation_ , Phys. Rev. E 80, 060303 (2009), 10.1103/PhysRevE.80.060303.
* [56] A. Lemaître, _Structural relaxation is a scale-free process_ , Physical review letters 113(24), 245702 (2014), 10.1103/PhysRevLett.113.245702.
* [57] A. Lemaître, _Stress correlations in glasses_ , The Journal of Chemical Physics 149(10), 104107 (2018), https://doi.org/10.1063/1.5041461.
* [58] A. Lemaître, C. Mondal, I. Procaccia and S. Roy, _Stress correlations in frictional granular media_ , Phys. Rev. B 103, 054110 (2021), 10.1103/PhysRevB.103.054110.
* [59] D. J. Durian, _Foam mechanics at the bubble scale_ , Phys. Rev. Lett. 75, 4780 (1995), 10.1103/PhysRevLett.75.4780.
* [60] P. Acharya, S. Sengupta, B. Chakraborty and K. Ramola, _Athermal fluctuations in disordered crystals_ , Phys. Rev. Lett. 124, 168004 (2020), 10.1103/PhysRevLett.124.168004.
* [61] R. Maharana, _Athermal fluctuations in three dimensional disordered crystals_ , Journal of Statistical Mechanics: Theory and Experiment 2022(10), 103201 (2022), 10.1088/1742-5468/ac9466.
* [62] R. Maharana, J. N. Nampoothiri and K. Ramola, _First-contact-breaking distributions in strained disordered crystals_ , Physical Review E 106(6), 064901 (2022), 10.1103/PhysRevE.106.064901.
* [63] S. Toxvaerd and J. C. Dyre, _Communication: Shifted forces in molecular dynamics_ , The Journal of Chemical Physics 134(8), 081102 (2011), 10.1063/1.3558787.
* [64] E. Bitzek, P. Koskinen, F. Gähler, M. Moseler and P. Gumbsch, _Structural relaxation made simple_ , Phys. Rev. Lett. 97, 170201 (2006), 10.1103/PhysRevLett.97.170201.
* [65] T. Horiguchi, _Lattice green’s functions for the triangular and honeycomb lattices_ , Journal of Mathematical Physics 13(9), 1411 (1972), https://doi.org/10.1063/1.1666155.
* [66] H. J. Berendsen, J. v. Postma, W. F. Van Gunsteren, A. DiNola and J. R. Haak, _Molecular dynamics with coupling to an external bath_ , The Journal of chemical physics 81(8), 3684 (1984), https://doi.org/10.1063/1.448118.
* [67] T. Belytschko, W. Liu and B. Moran, _Nonlinear Finite Elements for Continua and Structures_ , John Wiley and Sons, Ltd (2000).
* [68] W. Schirmacher, _Some comments on fluctuating-elasticity and local oscillator models for anomalous vibrational excitations in glasses_ , Journal of Non-Crystalline Solids 357(2), 518 (2011), https://doi.org/10.1016/j.jnoncrysol.2010.07.052, 6th International Discussion Meeting on Relaxation in Complex Systems.
* [69] S. Zhang, E. Stanifer, V. V. Vasisht, L. Zhang, E. Del Gado and X. Mao, _Prestressed elasticity of amorphous solids_ , Physical Review Research 4(4), 043181 (2022), 10.1103/PhysRevResearch.4.043181.
* [70] J. Liu, G. Nian, Q. Cao, S. Qu, X. Wang, D. Zhang and J. Jiang, _Effect of pre-stress on the onset of yielding in bulk metallic glasses_ , Journal of Non-Crystalline Solids 503, 44 (2019), https://doi.org/10.1016/j.jnoncrysol.2018.09.020.
# On Geometric Implications
Amirhossein Akbar Tabatabai
Institute of Mathematics, Czech Academy of Sciences
###### Abstract
It is a well-known fact that although the poset of open sets of a topological
space is a Heyting algebra, its Heyting implication is not necessarily stable
under the inverse image of continuous functions and hence is not a geometric
concept. This leaves us wondering if there is any stable family of
implications that can be safely called geometric. In this paper, we will first
recall the abstract notion of implication as a binary modality introduced in
[1]. Then, we will use a weaker version of categorical fibrations to define
the geometricity of a category of pairs of spaces and implications over a
given category of spaces. We will identify the greatest geometric category
over the subcategories of open-irreducible (closed-irreducible) maps as a
generalization of the usual injective open (closed) maps. Using this
identification, we will then characterize all geometric categories over a
given category $\mathcal{S}$, provided that $\mathcal{S}$ has some basic
closure properties. In particular, we will show that there is no non-trivial
geometric category over the full category of spaces. Finally, as the
implications we identified are also interesting in their own right, we will
spend some time to investigate their algebraic properties. We will first use a
Yoneda-type argument to provide a representation theorem, making the
implications a part of an adjunction-style pair. Then, we will use this result
to provide a Kripke-style representation for any arbitrary implication.
Keywords: modal algebras, implications, geometricity, frame representation.
## 1 Introduction
It is well-known that the poset of open sets of a topological space is a
(complete) Heyting algebra, an observation that provides the topological
interpretation of the intuitionistic propositional logic $\mathsf{IPC}$. As
the interpretation turns out to be complete, one may be tempted to consider
$\mathsf{IPC}$ as the logic of spaces in the same way that the classical
propositional logic is the logic of the discrete spaces or simply the
unstructured sets. Despite the beauty of such a philosophical temptation, the
Heyting implication, although present in any of these locales, is not
preserved under the inverse image of continuous functions and hence cannot be
considered a truly geometric concept (for more on locales, see [16]).
To solve the issue, one may eliminate the Heyting implication from the
language and restrict its expressive power to the so-called coherent fragment.
It is also possible to be more faithful to the nature of space (and hence less
to the elementary nature of the language) to allow the infinitary disjunctions
and achieve the so-called geometric logic [11, 18]. In many contexts [19],
these logics are the natural logical systems to consider, and although they
seem weak at first glance, they prove their sophistication through their
natural role in actual practice. For instance, geometric theories play a
crucial role in topos theory, where they characterize all Grothendieck topoi
as the mathematical universes freely constructed from the free model of
geometric theories [15, 12, 13]. Even the finitary coherent fragment is more
powerful than it appears as any classical theory has a coherent conservative
extension [5].
Having said that, the implication is not what one wants to ignore permanently.
Philosophically speaking, implication is the machinery to internalize the
meta-relation of the “entailment order between the propositions $A$ and $B$”
into a “proposition $A\to B$” to empower the language to talk about its own
entailment behavior. The logical realm is full of different instances of
implications from the more philosophically motivated conditionals [3] and the
weak implications [20, 21, 17] addressing the impredicativity problem of the
Heyting implication to the more mathematically motivated implications such as
the ones appearing in provability logic [21] and preservability logic [8, 9,
14].
Opening the horizon to alternative implications, one may wonder if there is
any sort of geometric implication, powerful enough to internalize some part of
the structure, on the one hand, and geometric, on the other. To
address such a problem formally, we must first be precise about what we mean
by an implication. Reading the internalization process algebraically,
implications are some binary operations over the posets where they internalize
the order of the poset, mapping the predicate $a\leq b$ into the element $a\to
b$. Naturally, there are many structures and properties to internalize. For
instance, the fact that the order is reflexive, i.e., $a\leq a$ internalizes
to $a\to a=1$, its transitivity internalizes to $(a\to b)\wedge(b\to
c)\leq(a\to c)$ and the existence of the binary meets to $a\to(b\wedge
c)=(a\to b)\wedge(a\to c)$. To provide a definition for a general notion of
implication, we must choose the minimum level of internalization to enforce,
and we think that the natural minimum property of an order to internalize is
simply the fact that it is an order, i.e., that it is reflexive and
transitive, see [1].
###### Definition 1.1.
Let $\mathcal{A}=(A,\leq,\wedge,\vee,1,0)$ be a bounded distributive lattice.
A binary operator $\to$ over $\mathcal{A}$, decreasing in its first argument
and increasing in its second is called an _implication_ over $\mathcal{A}$ if:
$(i)$
(_internal reflexivity_) $a\to a=1$, for any $a\in\mathcal{A}$,
$(ii)$
(_internal transitivity_) $(a\to b)\wedge(b\to c)\leq a\to c$, for any
$a,b,c\in\mathcal{A}$.
An implication is called _meet internalizing_ if $a\to(b\wedge c)=(a\to
b)\wedge(a\to c)$ and _join internalizing_ if $(a\vee b)\to c=(a\to
c)\wedge(b\to c)$, for any $a,b,c\in\mathcal{A}$. For any implication, $\neg
a$ is an abbreviation for $a\to 0$. If $\to$ is an implication over
$\mathcal{A}$, the pair $(\mathcal{A},\to)$ is called a _strong algebra_. If
$\mathcal{A}=\mathcal{O}(X)$, for some space $X$, then the pair $(X,\to)$ is
called a _strong space_. By a _strong algebra map_
$f:(\mathcal{A},\to_{\mathcal{A}})\to(\mathcal{B},\to_{\mathcal{B}})$, we mean
a bounded lattice map preserving the implication, i.e.,
$f(a\to_{\mathcal{A}}b)=f(a)\to_{\mathcal{B}}f(b)$, for any
$a,b\in\mathcal{A}$. A _strong space map_ is a continuous map between spaces
such that its inverse image preserves the implication.
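To make Definition 1.1 concrete, here is a brute-force checker for the implication axioms on a finite bounded distributive lattice (an illustrative sketch, not part of the paper), run on the four-element Boolean algebra of subsets of $\\{1,2\\}$ with the Boolean implication:

```python
from itertools import product

# The four-element Boolean algebra: subsets of {1, 2}, ordered by inclusion.
elems = [frozenset(s) for s in ([], [1], [2], [1, 2])]
top = frozenset([1, 2])

def leq(a, b):
    return a <= b

def meet(a, b):
    return a & b

def boolean_imp(a, b):
    return (top - a) | b  # a -> b := complement(a) join b

def is_implication(imp):
    """Check Definition 1.1: decreasing/increasing monotonicity, internal
    reflexivity and internal transitivity, by brute force."""
    for a, b, c in product(elems, repeat=3):
        if leq(a, b) and not leq(imp(b, c), imp(a, c)):     # decreasing in 1st arg
            return False
        if leq(b, c) and not leq(imp(a, b), imp(a, c)):     # increasing in 2nd arg
            return False
        if not leq(meet(imp(a, b), imp(b, c)), imp(a, c)):  # internal transitivity
            return False
    return all(imp(a, a) == top for a in elems)             # internal reflexivity

print(is_implication(boolean_imp))            # True
print(is_implication(lambda a, b: top))       # the trivial implication, True
```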
###### Remark 1.2.
In [1], it is shown that implications can be equivalently defined as the
binary operations over $\mathcal{A}$ satisfying the conditions:
$(i^{\prime})$
If $a\leq b$ then $a\to b=1$, for any $a,b\in\mathcal{A}$
$(ii)$
$(a\to b)\wedge(b\to c)\leq a\to c$, for any $a,b,c\in\mathcal{A}$.
For a more detailed discussion motivating the aforementioned definition, the
reader may consult [1]. However, it is illuminating to think of an
implication as a special case of a general setting in which a category
internalizes its hom structure, i.e., its identity and its composition. The
general formalization for such a generalized function space is introduced in
[7], where it is called an arrow. The categorical formalization of the arrows
that act as the generalized internal hom functors can be found in [10]. In
this broader story, our implications are nothing but arrows enriched over the
category $\\{0\leq 1\\}$ rather than $\mathbf{Set}$ and hence they are just
the propositional shadows of the more structured arrows.
###### Example 1.3.
Over any bounded distributive lattice $\mathcal{A}$, there is a _trivial
implication_ defined by $a\to_{t}b=1$, for any $a,b\in\mathcal{A}$. The
Boolean and the Heyting implications are also implications. To construct a new
implication from the old, assume that $(\mathcal{B},\to_{\mathcal{B}})$ is a
strong algebra, $f:\mathcal{A}\to\mathcal{B}$ is an order-preserving map and
$g:\mathcal{B}\to\mathcal{A}$ is a finite meet preserving map. Then, it is
easy to check that the operator
$a\to_{\mathcal{A}}b=g(f(a)\to_{\mathcal{B}}f(b))$ is an implication over
$\mathcal{A}$. It is also possible to show that any implication is
constructible from the Heyting implication in this way, expanding the base
lattice to a locale, see [1].
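As a toy instance of this construction (a sketch with made-up maps $f$ and $g$), let $\mathcal{B}$ be the four-element Boolean algebra of subsets of $\\{1,2\\}$ with its Boolean implication and $\mathcal{A}$ the two-element chain $\\{0\leq 1\\}$; the induced operation $a\to_{\mathcal{A}}b=g(f(a)\to_{\mathcal{B}}f(b))$ can be checked to be an implication:

```python
from itertools import product

elems_B = [frozenset(s) for s in ([], [1], [2], [1, 2])]
top_B = frozenset([1, 2])

def imp_B(a, b):                     # Boolean implication on B
    return (top_B - a) | b

def f(a):                            # an order-preserving map A = {0, 1} -> B
    return top_B if a == 1 else frozenset()

def g(b):                            # a finite-meet-preserving map B -> A
    return 1 if b == top_B else 0

def imp_A(a, b):                     # the induced implication on A, Example 1.3
    return g(imp_B(f(a), f(b)))

# Internal reflexivity and transitivity on the chain A = {0 <= 1}.
print(all(imp_A(a, a) == 1 for a in (0, 1)))
print(all(min(imp_A(a, b), imp_A(b, c)) <= imp_A(a, c)
          for a, b, c in product((0, 1), repeat=3)))
```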
Having a definition for implication, it is now reasonable to search for the
geometric implications, i.e., the family of implications over the locales of
opens of spaces stable under the inverse image of _all_ continuous functions.
We will show that there is only one such family, namely the one with the
trivial implications. To prove that surprising result, we employ a weaker
version of the categorical fibrations to develop the relative notion of
geometricity of a category $\mathcal{C}$ of strong spaces over a category
$\mathcal{S}$ of spaces. Here, geometricity simply means that the implications
of the strong spaces in $\mathcal{C}$ are stable under the inverse image of
the maps in $\mathcal{S}$. We will then continue by identifying the greatest
geometric categories over the subcategories of the open- and closed-
irreducible maps. These two families of maps can be considered as the
generalizations of the injective open (closed) maps. The implications stable
under the open-irreducible maps are the ones for which $c\wedge a\leq b$
implies $c\leq a\to b$. These implications behave similarly to a well-known
family of implications called the basic implication introduced in [21] in
provability logic and later in [17] for philosophical reasons. For the closed-
irreducible maps, the implications are the ones for which $a\leq b\vee c$
implies $(a\to b)\vee c=1$. We will show that having these two properties
forces the implications to behave similarly to the Boolean implications as
they satisfy the equation $a\to b=\neg a\vee b$. Using these implications and
their relationship with the geometricity for the open- and closed-irreducible
maps, we will then identify all the geometric categories over a given category
$\mathcal{S}$, provided that $\mathcal{S}$ has some basic closure properties.
Completing the characterization of the geometric categories, as the
implications we identified are also interesting in their own right, we will
spend the last section to provide a representation theorem for them. We will
first use a Yoneda-type argument, making the implications a part of an
adjunction-style pair. Then, we will use this result to represent an arbitrary
implication as the implication of a topological version of a combination of an
intuitionistic Kripke and a neighbourhood frame.
## 2 Preliminaries
In this section, we will recall some basic notions and their corresponding
theorems we need throughout the paper. Let $\mathcal{P}=(P,\leq)$ be a poset.
A subset $S\subseteq P$ is called an _upset_ if for any $x,y\in P$, if $x\in
S$ and $x\leq y$ then $y\in S$. The _downsets_ are defined dually. The set of
all upsets of $(P,\leq)$ is denoted by $U(P,\leq)$. For any $S\subseteq P$,
the greatest lower bound of $S$ (resp. the least upper bound of $S$), if it
exists, is called the _meet_ (resp. _join_) of the elements of $S$ and is
denoted by $\bigwedge S$ (resp. $\bigvee S$). If $S=\\{a,b\\}$, the meet
$\bigwedge S$ and the join $\bigvee S$ are denoted by $a\wedge b$ and $a\vee
b$, respectively. Moreover, $\bigwedge\varnothing$ and $\bigvee\varnothing$,
i.e., the greatest and the least elements of $P$, if exist, are denoted by $1$
and $0$, respectively. A poset is called a _bounded lattice_ , if for any
finite subset $S\subseteq P$, both $\bigwedge S$ and $\bigvee S$ exist and it
is called _complete_ if for any set $S\subseteq P$, the meet $\bigwedge S$
exists. A bounded lattice is called _distributive_ , if $a\wedge(b\vee
c)=(a\wedge b)\vee(a\wedge c)$, for any $a,b,c\in P$. It is called a _locale_
, if for any $S\subseteq P$, the join $\bigvee S$ exists and
$a\wedge\bigvee_{b\in S}b=\bigvee_{b\in S}(a\wedge b)$, for any $a\in P$ and
$S\subseteq P$. By the _Heyting implication_ over a bounded lattice
$\mathcal{A}$, we mean the binary operation $\Rightarrow$ over $\mathcal{A}$
such that $a\wedge b\leq c$ iff $a\leq b\Rightarrow c$, for any
$a,b,c\in\mathcal{A}$. A bounded lattice $\mathcal{H}$ is called a _Heyting
algebra_ if it has the Heyting implication. A bounded distributive lattice
$\mathcal{B}$ is called a _Boolean algebra_ if every element of $\mathcal{B}$
has a complement, i.e., for any $a\in\mathcal{B}$, there is $b\in\mathcal{B}$
such that $a\vee b=1$ and $a\wedge b=0$. A subset of a bounded lattice is called a
_filter_ , if it is an upset and closed under all finite meets. A filter $F$
is called _prime_ if $0\notin F$ and $a\vee b\in F$ implies either $a\in F$ or
$b\in F$. The set of all prime filters of a lattice $\mathcal{A}$ is denoted
by $\mathcal{F}_{p}(\mathcal{A})$. A subset of $\mathcal{A}$ is called an
_ideal_ , if it is a downset and closed under all finite joins. The following
theorem is a useful tool when working with bounded distributive lattices:
###### Theorem 2.1.
[4, 6](Prime filter theorem) Let $\mathcal{A}$ be a bounded distributive
lattice, $F$ be a filter and $I$ be an ideal such that $F\cap I=\varnothing$.
Then, there exists a prime filter $P$ such that $F\subseteq P$ and $P\cap
I=\varnothing$.
Let $(P,\leq_{P})$ and $(Q,\leq_{Q})$ be two posets and $f:P\to Q$ be a
function. It is called an _order-preserving map_ , if it preserves the order,
meaning $f(a)\leq_{Q}f(b)$, for any $a\leq_{P}b$. An order-preserving map is
called an _order embedding_ or simply an embedding, if for any $a,b\in P$, the
inequality $f(a)\leq_{Q}f(b)$ implies $a\leq_{P}b$. An order-preserving map
between two bounded lattices (locales) is called a _bounded lattice map_
(_locale map_), if it preserves all finite meets and finite joins (arbitrary
joins). For two order-preserving maps $f:P\to Q$ and $g:Q\to P$, the pair
$(f,g)$ is called an _adjunction_ , denoted by _$f\dashv g$_ , if
$f(a)\leq_{Q}b$ is equivalent to $a\leq_{P}g(b)$, for any $a\in P$ and $b\in
Q$. If $f\dashv g$, the map $f$ is called the _left adjoint_ of $g$ and $g$ is
called the _right adjoint_ of $f$.
###### Theorem 2.2.
[2] (Adjoint functor theorem for posets) Let $(P,\leq_{P})$ be a complete
poset and $(Q,\leq_{Q})$ be a poset. Then, an order-preserving map
$f:(P,\leq_{P})\to(Q,\leq_{Q})$ has a right (left) adjoint iff it preserves
all joins (meets).
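For finite posets, Theorem 2.2 is directly computable: when $f$ preserves all joins, its right adjoint is given by $g(b)=\bigvee\\{a\mid f(a)\leq b\\}$. A minimal sketch on the powerset lattice of $\\{1,2\\}$ (the map $f$ below is an arbitrary illustrative choice):

```python
from itertools import combinations, product

pts = [1, 2]
elems = [frozenset(c) for r in range(len(pts) + 1) for c in combinations(pts, r)]

def join(xs):                          # join in the powerset lattice is union
    out = frozenset()
    for x in xs:
        out |= x
    return out

def right_adjoint(f):
    """g(b) = join of {a : f(a) <= b}; by Theorem 2.2 this is the right
    adjoint of f whenever f preserves all joins."""
    return lambda b: join(a for a in elems if f(a) <= b)

f = lambda a: a & frozenset([1])       # an illustrative join-preserving map
g = right_adjoint(f)

# Verify the adjunction f(a) <= b  iff  a <= g(b) on the whole lattice.
print(all((f(a) <= b) == (a <= g(b)) for a, b in product(elems, repeat=2)))
```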
Let $X$ be a topological space. We denote the locale of its open subsets by
$\mathcal{O}(X)$. A topological space is called $T_{0}$, if for any two
different points $x,y\in X$, there is an open set which contains one of these
points and not the other. It is called _Hausdorff_ if for any two distinct
points $x,y\in X$, there are opens $U,V\in\mathcal{O}(X)$ such that $x\in U$,
$y\in V$ and $U\cap V=\varnothing$. A pair $(X,\leq)$ of a topological space
and a partial order is called a _Priestley space_ if $X$ is compact and for
any $x,y\in X$, if $x\nleq y$, there exists a clopen upset $U$ such that $x\in
U$ and $y\notin U$. For any bounded distributive lattice $\mathcal{A}$, the
pair $(\mathcal{F}_{p}(\mathcal{A}),\subseteq)$ is a Priestley space, where
$\mathcal{F}_{p}(\mathcal{A})$ is the set of all prime filters of
$\mathcal{A}$ and the topology on $\mathcal{F}_{p}(\mathcal{A})$ is defined by
the basis of the opens in the form $\\{P\in\mathcal{F}_{p}(\mathcal{A})\mid
a\in P\;\text{and}\;b\notin P\\}$, for any $a,b\in\mathcal{A}$. Denoting
$\\{P\in\mathcal{F}_{p}(\mathcal{A})\mid a\in P\\}$ by $i(a)$, it is known
that any clopen upset in this Priestley space equals $i(a)$, for some
$a\in\mathcal{A}$, and any clopen set is of the form
$\bigcup_{r=1}^{n}[i(a_{r})\cap i(b_{r})^{c}]$, for some finite sets
$\\{a_{1},\ldots,a_{n}\\},\\{b_{1},\ldots,b_{n}\\}\subseteq\mathcal{A}$. For a
comprehensive explanation, see [4].
## 3 Open, Closed and Weakly Boolean Implications
In this section, we first introduce the three families of open, closed, and
weakly Boolean implications, some of their natural examples and a method to
construct the new ones from the old. Then, in Subsection 3.1, we provide a
characterization for the weakly Boolean implications defined on the locale of
opens of a topological space needed in the next section. Finally, in
Subsection 3.2, we introduce two families of continuous maps as the
generalized versions of the injective open and closed maps. These families
provide the real motivation to consider the above-mentioned families of
implications as the classes of the open and closed implications are the
greatest classes of implications that are stable under the inverse image of
the open- and closed-irreducible maps, respectively. In other words, the
conditions we put on any of these two families of implications are necessary
if we want them to be stable under the corresponding classes of continuous
maps.
###### Definition 3.1.
An implication $\to$ over a bounded distributive lattice $\mathcal{A}$ is
called _open_ if $a\wedge b\leq c$ implies $a\leq b\to c$, for any
$a,b,c\in\mathcal{A}$. It is called _closed_ if $a\leq b\vee c$ implies $(a\to
b)\vee c=1$, for any $a,b,c\in\mathcal{A}$. An implication is called a _weakly
Boolean_ implication (WBI, for short), if it is both open and closed. A strong
algebra $(\mathcal{A},\to)$ is called open, closed, or weakly Boolean, if its
implication is.
###### Remark 3.2.
As mentioned before, the real motivation to investigate the three families of
implications introduced in Definition 3.1 is geometric and will be covered in
Subsection 3.2. However, it is also worth providing a logical motivation for
them in this remark. For that purpose, first, consider the following sequent-
style rule for the classical implication in the usual calculus $\mathbf{LK}$
for classical logic:
$\dfrac{\Gamma,A\Rightarrow B,\Delta}{\Gamma\Rightarrow A\to B,\Delta}$
The condition for the open implications is half of the adjunction property of
Heyting implications and is reminiscent of the above rule, except that in the
rule $\Delta$ is considered as empty. The condition for the closed
implications is also reminiscent of the above rule. However, this time the
restriction changes to the emptiness of $\Gamma$. It is easy to see that an
implication is weakly Boolean iff it admits the full rule, i.e., if $c\wedge
a\leq b\vee d$ implies $c\leq(a\to b)\vee d$, for any $a,b,c,d\in\mathcal{A}$.
It is practically helpful to simplify the definition of the open and closed
implications from an implication between two inequalities to a single
inequality. It is also theoretically important, as it shows that each of
these families forms a variety. Here is the simplification.
###### Lemma 3.3.
Let $(\mathcal{A},\to)$ be a strong algebra. Then:
$(i)$
$\to$ is open iff $a\leq b\to a\wedge b$, for any $a,b\in\mathcal{A}$.
In particular, $a\leq b\to a$ and $a\wedge\neg 1=a\wedge\neg a$, for any
$a,b\in\mathcal{A}$.
$(ii)$
$\to$ is closed iff $(a\vee b\to a)\vee b=1$, for any $a,b\in\mathcal{A}$.
In particular, $b\vee\neg b=1$, for any $b\in\mathcal{A}$.
$(iii)$
If $\to$ is closed, then $c\wedge a\leq b$ implies $c\leq\neg a\vee b$, for
any $a,b,c\in\mathcal{A}$.
###### Proof.
For $(i)$, if an implication is open, as $a\wedge b\leq a\wedge b$, we have
$a\leq b\to a\wedge b$. Conversely, if $a\wedge b\leq c$, we have $a\leq
b\to(a\wedge b)\leq b\to c$. For the special cases, notice that by $a\wedge
b\leq a$, we reach $a\leq b\to a$. To prove $a\wedge\neg a=a\wedge\neg 1$,
since $a\leq 1$, we reach $\neg 1\leq\neg a$ which itself implies $a\wedge\neg
1\leq a\wedge\neg a$. For the converse, note that as $a\leq 1\to a$, we have
$a\wedge\neg a\leq(1\to a)\wedge(a\to 0)\leq 1\to 0=\neg 1$. Hence,
$a\wedge\neg a\leq a\wedge\neg 1$. For $(ii)$, its first part is similar to
that of $(i)$. For the special case, by setting $a=0$, we have $b\vee\neg
b=1$. For $(iii)$, if $c\wedge a\leq b$, then $c\leq(c\vee\neg
a)\wedge(a\vee\neg a)=(c\wedge a)\vee\neg a\leq\neg a\vee b$. ∎
###### Example 3.4.
The trivial implication $\to_{t}$ defined as the constant function $1$ is both
open and closed. The Heyting implication is clearly open, while it is not
necessarily closed. In fact, it is closed iff it is Boolean. To have a closed
implication that is not open, we refer the reader to Example 3.8, where a
general machinery to construct open and closed implications is provided.
Here, however, we provide an example of an implication that is neither open
nor closed. Let $\mathcal{A}$ be $\mathcal{O}(\mathbb{R})$ and $\Rightarrow$
be its Heyting implication. Now, putting $f(U)=U+1=\\{x+1\mid x\in U\\}$ and
$g=id_{\mathcal{A}}$ in Example 1.3, the operation $U\to
V=[(U+1)\Rightarrow(V+1)]$ is an implication. However, it is not open as
$[\mathbb{R}\to(0,1)]=[\mathbb{R}\Rightarrow(1,2)]=(1,2)\nsupseteq(0,1)$ and
it is not closed as
$(-\infty,0)\cup[(-\infty,0)\to\varnothing]=(-\infty,0)\cup[(-\infty,1)\Rightarrow\varnothing]=(-\infty,0)\cup(1,\infty)\neq\mathbb{R}$.
Finally, to provide a family of implications that are both open and closed,
over any Boolean algebra $\mathcal{B}$ define $a\to b=\bar{a}\vee b\vee m$,
where $\bar{a}$ is the complement of $a$ and $m\in\mathcal{B}$ is a fixed
element. It is easy to see that $\to$ is a WBI.
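The single-inequality forms of Lemma 3.3 make these properties easy to test mechanically. The sketch below (illustrative only) checks that the implication $a\to b=\bar{a}\vee b\vee m$ above is indeed weakly Boolean on the Boolean algebra of subsets of $\\{1,2\\}$, with the fixed element $m=\\{1\\}$:

```python
from itertools import combinations, product

pts = [1, 2]
elems = [frozenset(c) for r in range(len(pts) + 1) for c in combinations(pts, r)]
top = frozenset(pts)
m = frozenset([1])                       # the fixed element of Example 3.4

def imp(a, b):                           # a -> b := complement(a) | b | m
    return (top - a) | b | m

def is_open(imp):
    # Lemma 3.3(i): open iff a <= (b -> a & b) for all a, b.
    return all(a <= imp(b, a & b) for a, b in product(elems, repeat=2))

def is_closed(imp):
    # Lemma 3.3(ii): closed iff ((a | b) -> a) | b = top for all a, b.
    return all(imp(a | b, a) | b == top for a, b in product(elems, repeat=2))

print(is_open(imp) and is_closed(imp))   # True: a weakly Boolean implication
```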
In the following definition, we provide a combination of an intuitionistic
Kripke frame and a neighbourhood frame, serving as an order-theoretic and
hence concrete machinery to construct different families of implications.
Later, in Section 5, we will see that these frames are powerful enough to
represent all possible implications.
###### Definition 3.5.
A _Kripke-Neighbourhood frame_ (KN-frame, for short) is a tuple
$\mathcal{K}=(K,\leq,R,B,N)$ of a poset $(K,\leq)$, a binary relation $R$ on
$K$, a set $B\subseteq P(K)$ and a map $N:K\to P(U(K,\leq,B))$, where
$U(K,\leq,B)$ is the set of all upsets in $B$, such that:
* $\bullet$
$R$ is compatible with the order, i.e., if $x\leq y$ and $(y,z)\in R$, then
$(x,z)\in R$,
* $\bullet$
for any $x\in K$ and any upsets $U,V\in B$, if $U\subseteq V$ and $U\in N(x)$
then $V\in N(x)$,
* $\bullet$
$B$ is closed under finite union (including the empty union, so that
$\varnothing\in B$ and hence, by closure under complement, $K\in B$),
complement and the operation $\lozenge_{R}$ defined by
$\lozenge_{R}(U)=\\{x\in K\mid\exists y\in U,(x,y)\in R\\}$,
* $\bullet$
$j(U)=\\{x\in K\mid U\in N(x)\\}$ is in $B$, for any upset $U\in B$.
A KN-frame is called _full_ if $B=P(K)$. It is called _standard_ if $B=P(K)$
and $N(k)=\\{U\in U(K,\leq)\mid k\in U\\}$. We denote a standard KN-frame by
$(K,\leq,R)$ as its only non-trivial ingredients. A KN-frame is called _open_
when for any $x,y\in K$ and any upsets $U,V\in B$, if $x\in U$, $(x,y)\in R$
and $V\in N(y)$, then $U\cap V\in N(y)$. It is called _closed_ when for any
$x,y\in K$ and any upsets $U,V\in B$, if $x\notin U$, $(x,y)\in R$ and $U\cup
V\in N(y)$, then $V\in N(y)$.
###### Example 3.6.
Let $\mathcal{K}=(K,\leq,R,B,N)$ be a KN-frame. Then, the bounded distributive
lattice $U(K,\leq,B)$ of the upsets in $B$ is closed under the operation
$U\to_{\mathcal{K}}V=\\{x\in K\mid\forall y\in K\,[\text{if}\;(x,y)\in R\;\text{and}\;U\in N(y)\text{, then}\;V\in N(y)]\\}$ and the pair
$\mathfrak{A}(\mathcal{K})=(U(K,\leq,B),\to_{\mathcal{K}})$ is a strong
algebra. Moreover, if $\mathcal{K}$ is open (closed), then so is
$\mathfrak{A}(\mathcal{K})$. To prove the closure of $U(K,\leq,B)$ under the
operation $\to_{\mathcal{K}}$, let $U$ and $V$ be two upsets in $B$. Then,
notice that $(U\to_{\mathcal{K}}V)^{c}=\Diamond_{R}(j(U)\cap j(V)^{c})$. As
$B$ is closed under complement, finite intersection and $\Diamond_{R}$ and $j$
maps the upsets in $B$ to the elements of $B$, we can conclude that
$(U\to_{\mathcal{K}}V)^{c}$ and hence $U\to_{\mathcal{K}}V$ is in $B$. Also,
using the compatibility of the order with $R$, it is easy to see that
$U\to_{\mathcal{K}}V$ is an upset. Hence, $U\to_{\mathcal{K}}V\in
U(K,\leq,B)$.
To prove that $\to_{\mathcal{K}}$ is an implication, the only non-trivial part
is to prove that if $U\subseteq V$, then $U\to_{\mathcal{K}}V=K$, for any
upsets $U,V\in B$. Let $x\in K$ be an arbitrary element and assume $(x,y)\in
R$ and $U\in N(y)$. As $U\subseteq V$ and $N(y)$ is upward closed for the
upsets in $B$, we reach $V\in N(y)$. Therefore, $x\in U\to_{\mathcal{K}}V$.
Finally, for the open and closed conditions, if $\mathcal{K}$ is open, using
Lemma 3.3, it is enough to prove that $U\subseteq V\to_{\mathcal{K}}U\cap V$,
for any upsets $U,V\in B$. Let $x\in U$, $(x,y)\in R$ and $V\in N(y)$. Then,
by the openness of the KN-frame, we know $U\cap V\in N(y)$, which completes
the proof. A similar argument works for the closed case.
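For a concrete feel of the construction, the following snippet (an ad hoc finite encoding of ours) computes $\to_{\mathcal{K}}$ on a standard KN-frame, where $U\in N(y)$ simply reads $y\in U$: on the chain $0\leq 1\leq 2$ with $R$ equal to the order, the operation lands in the upsets, satisfies the implication axioms and, since $R\subseteq\,\leq$, is open (cf. Example 3.8 below).

```python
from itertools import product

# Illustrative computation of U ->_K V on a standard KN-frame (our encoding).
K = {0, 1, 2}
le = {(x, y) for x, y in product(K, repeat=2) if x <= y}
R = set(le)                                    # R contained in <=, cf. Example 3.8

up = [frozenset(S) for S in [(), (2,), (1, 2), (0, 1, 2)]]   # upsets of the chain

def imp(U, V):                                 # U ->_K V for a standard frame
    return frozenset(x for x in K
                     if all(not ((x, y) in R and y in U) or y in V for y in K))

for U, V, W in product(up, repeat=3):
    assert imp(U, V) in up                          # the value is again an upset
    assert not U <= V or imp(U, V) == frozenset(K)  # U <= V implies U ->_K V = K
    assert imp(U, V) & imp(V, W) <= imp(U, W)       # transitivity axiom
    assert U <= imp(V, U & V)                       # open, since R is inside <=
print("->_K is an open implication on the 3-chain")
```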
###### Remark 3.7.
First, notice that a KN-frame is a combination of an intuitionistic Kripke
frame with an independent monotone neighbourhood function restricted to the
upsets of a given Boolean algebra of the subsets of $K$. The presence of the
neighbourhood function is crucial as in the standard KN-frames, the definition
of the implication simplifies to $U\to_{\mathcal{K}}V=\\{x\in K\mid\forall y\in K\,[\text{if}\;(x,y)\in R\;\text{and}\;y\in U\text{, then}\;y\in V]\\}$ which is
always meet- and join-internalizing. Therefore, without the neighbourhood
structure, the KN-frames are not capable of representing all implications.
Secondly, note that starting from a KN-frame, it is always possible to drop
the Boolean algebra $B$ to reach a full KN-frame and hence a greater strong
algebra. More precisely, let $\mathcal{K}=(K,\leq,R,B,N)$ be a KN-frame and
define $\mathcal{K}^{f}$ as $(K,\leq,R,P(K),N^{f})$, where $N^{f}(x)=\\{U\in
U(K,\leq)\mid\exists V\in N(x)\,V\subseteq U\\}$. It is easy to see that
$\mathcal{K}^{f}$ is a full KN-frame. Moreover, as $N(x)$ is upward closed for
the upsets in $B$, it is clear that $U\in N(x)$ iff $U\in N^{f}(x)$, for any
$U\in U(K,\leq,B)$. Therefore, the strong algebra $\mathfrak{A}(\mathcal{K})$
is a subalgebra of the strong algebra $\mathfrak{A}(\mathcal{K}^{f})$. Notice
that the passage from $\mathcal{K}$ to $\mathcal{K}^{f}$ does not necessarily
preserve the openness or the closedness of the original KN-frame.
###### Example 3.8.
For any standard KN-frame $\mathcal{K}=(K,\leq,R)$, if $R\subseteq\,\leq$, the
implication $\to_{\mathcal{K}}$ is open and if $R^{op}\subseteq\,\leq$, it is
closed, where by $R^{op}$, we mean $\\{(k,l)\in K^{2}\mid(l,k)\in R\\}$. For
the first claim, using Lemma 3.3, it is enough to show $U\subseteq
V\to_{\mathcal{K}}U\cap V$, for any $U,V\in U(K,\leq)$. For that purpose,
assume $k\in U$. Then, for any $l\in V$, if $(k,l)\in R$, as
$R\subseteq\,\leq$, we have $k\leq l$ and since $U$ is an upset, we have $l\in
U$. Hence, $l\in U\cap V$. Therefore, $k\in V\to_{\mathcal{K}}(U\cap V)$. For
the second claim, again using Lemma 3.3, it is enough to show $[(U\cup
V)\to_{\mathcal{K}}V]\cup U=K$, for any $U,V\in U(K,\leq)$. Suppose $k\notin
U$. Then, for any $l\in U\cup V$, if $(k,l)\in R$, as $R^{op}\subseteq\,\leq$,
we have $l\leq k$. Hence, as $U$ is an upset and $k\notin U$, we have $l\notin
U$. Hence, $l\in V$. Therefore, $k\in(U\cup V)\to_{\mathcal{K}}V$.
Employing these two families of implications, it is easy to provide a closed
implication that is not open. Set $K=\\{k,l\\}$, $k\leq l$ and
$R=\\{(l,k),(k,k)\\}$ and consider $\mathcal{K}=(K,\leq,R)$. It is clear that
$R$ is compatible with the order and $R^{op}\subseteq\,\leq$. Hence,
$\to_{\mathcal{K}}$ is closed. To show that it is not open, we show
$K\to_{\mathcal{K}}\\{l\\}=\varnothing$ and as $\\{l\\}\nsubseteq
K\to_{\mathcal{K}}\\{l\\}$, the implication cannot be open. For
$K\to_{\mathcal{K}}\\{l\\}=\varnothing$, if either $k$ or $l$ is in
$K\to_{\mathcal{K}}\\{l\\}$, as $(k,k),(l,k)\in R$, we must have $k\in\\{l\\}$
which is impossible.
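The following snippet (illustrative encoding) replays this computation mechanically: on the two-point frame it confirms that $K\to_{\mathcal{K}}\\{l\\}$ is empty, so the open criterion of Lemma 3.3 fails, while the closed criterion holds for every pair of upsets.

```python
# Direct check of the two-point frame above: K = {k, l} with k <= l and
# R = {(l, k), (k, k)}.
k, l = "k", "l"
K = frozenset({k, l})
R = {(l, k), (k, k)}
up = [frozenset(), frozenset({l}), K]          # upsets of k <= l

def imp(U, V):
    return frozenset(x for x in K
                     if all(not ((x, y) in R and y in U) or y in V for y in K))

assert imp(K, frozenset({l})) == frozenset()            # K ->_K {l} = empty
assert not frozenset({l}) <= imp(K, frozenset({l}))     # hence ->_K is not open
assert all(imp(U | V, V) | U == K for U in up for V in up)   # ->_K is closed
print("->_K on the two-point frame is closed but not open")
```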
###### Remark 3.9.
It is not hard to see that being closed (as opposed to being open) is a very
demanding condition restricting the form of the closed implications and hence
the WBI’s in a very serious way (see Theorem 3.12). For instance, it is easy
to see that the condition $R^{op}\subseteq\,\leq$ in Example 3.8 together with
the compatibility condition of $R$ with respect to $\leq$, restricts the
relation $R$ only to the ones in the form $\\{(f(k),k)\in K^{2}\mid k\in
L\\}\cup\\{(k,k)\in K^{2}\mid k\in L\\}$, where $f:L\to K$ is an injective
function, $L$ is a subset of the minimal elements of $K$ and $f(k)\geq k$, for
any $k\in L$:
(Diagram: the minimal elements $k_{0},k_{1},k_{2},\dots$ in $L$ sit below $f(k_{0}),f(k_{1}),f(k_{2}),\dots$, with $R$-arrows from each $f(k_{i})$ to $k_{i}$.)
For the WBI’s, even the function $f$ collapses to the identity and the only
remaining data will be the set $L$. We do not prove these claims as they will
not be used in the present paper. They are mentioned here only to convey the
feeling that the study of these two families is not as justified as one might
expect. However, we spend some time studying these two families as we need
their behavior to prove the rarity of geometric implications in the next
section.
So far, we have seen some concrete examples of the open and closed
implications. The following theorem modifies the method of Example 1.3 to construct new open and closed implications from old ones.
###### Theorem 3.10.
Let $(\mathcal{B},\to_{\mathcal{B}})$ be a strong algebra,
$f:\mathcal{A}\to\mathcal{B}$ be an order-preserving map and
$g:\mathcal{B}\to\mathcal{A}$ be a finite meet preserving map. Then,
$a\to_{\mathcal{A}}b=g(f(a)\to_{\mathcal{B}}f(b))$ is:
$(i)$
an open implication over $\mathcal{A}$, if $\to_{\mathcal{B}}$ is open, $f$
preserves all binary meets and $gf(a)\geq a$, for any $a\in\mathcal{A}$.
$(ii)$
a closed implication over $\mathcal{A}$, if $\to_{\mathcal{B}}$ is closed, $f$
preserves all binary joins and $c\vee f(a)=1$ implies $g(c)\vee a=1$, for any
$a\in\mathcal{A}$ and $c\in\mathcal{B}$.
###### Proof.
To check whether $\to_{\mathcal{A}}$ is open or closed, we use the criterion
of Lemma 3.3. For $(i)$, as $\to_{\mathcal{B}}$ is open, $f$ preserves the
binary meets and $gf(a)\geq a$, we have $b\to_{\mathcal{A}}a\wedge
b=g(f(b)\to_{\mathcal{B}}f(a\wedge b))=g(f(b)\to_{\mathcal{B}}f(a)\wedge
f(b))\geq g(f(a))\geq a$. For $(ii)$, as $\to_{\mathcal{B}}$ is closed and $f$
preserves the binary joins, we have $[f(a\vee b)\to_{\mathcal{B}}f(b)]\vee
f(a)=[f(a)\vee f(b)\to_{\mathcal{B}}f(b)]\vee f(a)=1$. Hence, by the property,
we have $g(f(a\vee b)\to_{\mathcal{B}}f(b))\vee a=1$ which implies $[(a\vee
b)\to_{\mathcal{A}}b]\vee a=1$. ∎
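As a toy instance of part $(i)$ (our own example, chosen for checkability), take $\mathcal{B}=P(\\{0,1\\})$ with its Boolean implication, $f$ the inclusion of the Sierpinski opens into $P(\\{0,1\\})$, and $g$ the interior operator; $f$ preserves binary meets, $g$ preserves finite meets, and $gf(a)=a$. The transferred implication $g(f(a)\to_{\mathcal{B}}f(b))$ is then the Heyting implication, which the snippet below confirms to be open but not closed, in line with Example 3.4.

```python
from itertools import product

# Illustrative instance of Theorem 3.10 (i) on the Sierpinski space.
X = frozenset({0, 1})
opens = [frozenset(), frozenset({1}), X]                  # Sierpinski opens

g = lambda W: max((U for U in opens if U <= W), key=len)  # interior of W
bimp = lambda a, b: (X - a) | b                           # Boolean implication on P(X)
imp = lambda a, b: g(bimp(a, b))                          # transferred implication

for a, b in product(opens, repeat=2):
    assert imp(a, b) in opens
    assert not a <= b or imp(a, b) == X        # a <= b implies a -> b = 1
    assert b <= imp(a, a & b)                  # open criterion of Lemma 3.3
S = frozenset({1})
assert imp(S, frozenset()) | S != X            # S v (not S) != X: not closed
print("g(f(a) -> f(b)) is open but not closed on the Sierpinski space")
```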
###### Remark 3.11.
In practice, part $(i)$ of Theorem 3.10 is useful in situations where $f\dashv
g$ and $f$ is binary meet preserving and part $(ii)$ is used when $g\dashv f$,
$g$ is finite meet preserving and $f$ is binary join preserving.
The following theorem provides a complete characterization of the WBI’s. We
will see that their general form is not far from the one provided in Example
3.4.
###### Theorem 3.12.
Let $\mathcal{A}$ be a bounded distributive lattice. Then, for any weakly
Boolean implication $\to$ over $\mathcal{A}$, the interval $[\neg
1,1]=\\{x\in\mathcal{A}\mid\neg 1\leq x\leq 1\\}$ with its induced order is a
Boolean algebra. Moreover, $a\to b=\neg a\vee b$ and $\neg a$ is the
complement of $a\vee\neg 1$ in $[\neg 1,1]$. Conversely, if for some
$m\in\mathcal{A}$, the interval $[m,1]$ with its induced order is a Boolean
algebra, and $n(a)$ is the complement of $a\vee m$ in $[m,1]$, then
$a\to_{m}b=n(a)\vee b$ is a WBI over $\mathcal{A}$. Note that $\neg_{m}a=n(a)$
and $\neg_{m}1=n(1)=m$.
###### Proof.
For the first part, by Lemma 3.3, we have $(a\vee\neg 1)\wedge\neg
a=(a\wedge\neg a)\vee(\neg 1\wedge\neg a)=(a\wedge\neg 1)\vee\neg 1=\neg 1$
and $(a\vee\neg 1)\vee\neg a=1$, for any $a\in\mathcal{A}$. Therefore, $\neg
a$ is the complement of $a\vee\neg 1$ over $[\neg 1,1]$. Moreover, for any
$a\geq\neg 1$, as $a\vee\neg 1=a$, the element $\neg a$ is the complement of
$a$ which implies that $[\neg 1,1]$ is a Boolean algebra. To show $a\to b=\neg
a\vee b$, note that $\neg a=(a\to 0)\leq(a\to b)$. As the implication is open,
by Lemma 3.3, we have $b\leq a\to b$. Hence, $\neg a\vee b\leq a\to b$. For
the converse, as the implication is also closed, by Lemma 3.3, $b\vee\neg
b=1$. Hence, $a\to b=(a\to b)\wedge(b\vee\neg b)=((a\to b)\wedge b)\vee((a\to
b)\wedge\neg b)\leq b\vee(a\to 0)=\neg a\vee b$.
Conversely, let $[m,1]$ be a Boolean algebra and $n(a)$ be the complement of
$a\vee m$ in $[m,1]$. First, we prove that $a\vee n(a)=1$ and $b\wedge
n(b)\leq n(a)$, for any $a,b\in\mathcal{A}$. For the former, as $n(a)$ is the
complement of $a\vee m$, we have $n(a)\vee(a\vee m)=1$ and as $n(a)\in[m,1]$,
we reach $m\leq n(a)$. Hence, $n(a)\vee a=1$. For the latter, as $n(b)$ is the
complement of $b\vee m$ over $[m,1]$, we have $(b\vee m)\wedge n(b)=m$ which
implies $b\wedge n(b)\leq m$. Again as $m\leq n(a)$, we reach $b\wedge
n(b)\leq n(a)$. Now, to show that $a\to_{m}b=n(a)\vee b$ is an implication, we
must check the properties in Remark 1.2. For $(i^{\prime})$, if $a\leq b$,
then $1=n(a)\vee a\leq n(a)\vee b=a\to_{m}b$. For $(ii)$, using the
distributivity, we have $(a\to_{m}b)\wedge(b\to_{m}c)=(n(a)\vee
b)\wedge(n(b)\vee c)\leq n(a)\vee(b\wedge n(b))\vee c\leq n(a)\vee n(a)\vee
c=a\to_{m}c$. Finally, to show that $\to_{m}$ is both open and closed, as
$a\to_{m}(a\wedge b)=n(a)\vee(a\wedge b)=(n(a)\vee a)\wedge(n(a)\vee
b)=(n(a)\vee b)\geq b$, the implication is clearly open. For closedness, note
that $((a\vee b)\to_{m}a)\vee b=n(a\vee b)\vee a\vee b=1$. ∎
###### Remark 3.13.
Note that in a Boolean algebra, the complement of $a\vee m$ in $[m,1]$ is
$\bar{a}\vee m$, where $\bar{a}$ is the complement of $a$. Therefore, by
Theorem 3.12, it is clear that the implications $a\to_{m}b=\bar{a}\vee b\vee
m$ are the only WBI’s over a Boolean algebra. Having made this observation,
one wonders whether the presence of a WBI forces the ground lattice to be a
Boolean algebra itself. However, as the trivial implication is weakly Boolean
and it is definable over any bounded distributive lattice, it is clear that
the ground lattice structure of a weakly Boolean strong algebra can be quite
general and not necessarily Boolean.
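To see the converse direction of Theorem 3.12 at work on a lattice that is not Boolean, consider the three-element chain $0<m<1$: the interval $[m,1]$ is a two-element Boolean algebra, $n(0)=n(m)=1$ and $n(1)=m$. The following snippet (our own integer encoding, with meet and join as min and max on the chain) verifies that $a\to_{m}b=n(a)\vee b$ is indeed a WBI.

```python
from itertools import product

# Illustrative check of the converse of Theorem 3.12 on the chain 0 < m < 1.
bot, mid, top = 0, 1, 2                 # the chain 0 < m < 1
n = {bot: top, mid: top, top: mid}      # n(a): complement of a v m in [m, 1]
imp = lambda a, b: max(n[a], b)         # a ->_m b = n(a) v b

chain = [bot, mid, top]
for a, b, c in product(chain, repeat=3):
    assert a > b or imp(a, b) == top                 # a <= b implies a ->_m b = 1
    assert min(imp(a, b), imp(b, c)) <= imp(a, c)    # transitivity axiom
for a, b in product(chain, repeat=2):
    assert b <= imp(a, min(a, b))                    # open criterion
    assert max(imp(max(a, b), b), a) == top          # closed criterion
print("a ->_m b = n(a) v b is a WBI on the non-Boolean 3-chain")
```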
### 3.1 Weakly Boolean Spaces
In this subsection, we will use Theorem 3.12 to provide a characterization for
the WBI’s over the locales of opens of the topological spaces. The
characterization will be useful in the next section. First, let us recall some
basic notions from topology that we need below. A topological space $X$ is
called _discrete_ if all of its subsets are open. It is called _indiscrete_ if
its only opens are $\varnothing$ and $X$. It is called _locally indiscrete_ if
for any $x\in X$, there exists an open $U\subseteq X$ such that $x\in U$ and
the induced topology on $U$ is indiscrete. A space is locally indiscrete iff
all of its closed subsets are open. More generally:
###### Lemma 3.14.
Let $X$ be a topological space and $M\subseteq X$ be an open subset. Then, the
following are equivalent:
$(i)$
For any closed $K\subseteq X$, the union $K\cup M$ is open.
$(ii)$
The subspace $X-M$ is locally indiscrete.
###### Proof.
To prove $(ii)$ from $(i)$, let $L$ be a closed subset of $X-M$. We have to
show that $L$ is also open. As $L$ is closed in $X-M$, there is a closed
subset $K$ of $X$ such that $L=K\cap(X-M)$. By $(i)$, the set $K\cup M$ is
open. As $L=(K\cup M)\cap(X-M)$, the set $L$ is open in $X-M$. Conversely, to
prove $(i)$ from $(ii)$, assume that $K$ is a closed subset in $X$. Hence,
$K\cap(X-M)$ is closed and hence open in $X-M$, by $(ii)$. This implies the
existence of an open $U$ in $X$ such that $K\cap(X-M)=U\cap(X-M)$. Hence,
$K\cup M=U\cup M$. Finally, as both $U$ and $M$ are open in $X$, the subset
$K\cup M$ is open in $X$. ∎
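For finite spaces, both conditions of Lemma 3.14 can be tested exhaustively. The snippet below (an illustrative encoding of ours) does so on the Alexandrov topology of the chain $0<1<2$, testing local indiscreteness of $X-M$ via the characterization that all of its closed subsets are open.

```python
from itertools import product

# Finite test of Lemma 3.14 on the Alexandrov topology of the chain 0 < 1 < 2,
# whose opens are the upsets.
X = frozenset({0, 1, 2})
opens = [frozenset(s) for s in [(), (2,), (1, 2), (0, 1, 2)]]
closeds = [X - U for U in opens]

def cond_i(M):                                   # K | M open for every closed K
    return all(K | M in opens for K in closeds)

def cond_ii(M):                                  # X - M locally indiscrete
    S = X - M
    sub_opens = {U & S for U in opens}
    return {S - U for U in sub_opens} <= sub_opens

assert all(cond_i(M) == cond_ii(M) for M in opens)
print("Lemma 3.14: conditions (i) and (ii) agree for every open M")
```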
By definition, it is clear that any discrete space is locally indiscrete. The
converse also holds for $T_{0}$ spaces.
###### Lemma 3.15.
Any locally indiscrete $T_{0}$ space is discrete.
###### Proof.
Let $x\in X$ be a point. As $X$ is $T_{0}$, for any $y\neq x$, there is an
open $U_{y}$ such that either $x\in U_{y}$ and $y\notin U_{y}$ or $x\notin
U_{y}$ and $y\in U_{y}$. As any open is also closed in the space, w.l.o.g., we can assume the first case. As the closed and the open subsets coincide, opens are
closed under arbitrary intersections. Hence, set $U=\bigcap_{y\neq x}U_{y}$.
It is easy to see that $U=\\{x\\}$. Hence, the singletons and consequently all
subsets are open and hence the space is discrete. ∎
We are now ready to provide a characterization for all the WBI’s over the
locales of topological spaces, as promised.
###### Corollary 3.16.
Let $X$ be a topological space and $M\subseteq X$ be an open subset such that
$X-M$ is locally indiscrete. Then, the binary map $U\to V=U^{c}\cup V\cup M$
is a WBI over $\mathcal{O}(X)$. Conversely, any WBI over $\mathcal{O}(X)$ is
in the form $U\to V=U^{c}\cup V\cup M$, where $M=\neg X$ is an open subset
such that $X-M$ is locally indiscrete.
###### Proof.
For the first part, by Lemma 3.14, $\mathcal{O}(X)$ is closed under the
operation $U\mapsto U^{c}\cup M$ as $X-M$ is locally indiscrete and hence the
subset $U^{c}\cup M$ is open, for any open $U$. This proves that $\to$ is
well-defined over $\mathcal{O}(X)$. Now, note that the interval $[M,X]$ in
$\mathcal{O}(X)$ is a Boolean algebra, simply because $U^{c}\cup M$ is the
complement of $U\supseteq M$. Therefore, by Theorem 3.12, the operation $U\to
V=n(U)\cup V$ is a WBI, where $n(U)$ is the complement of $U\cup M$ in
$[M,X]$. As $n(U)=U^{c}\cup M$, we know that $U\to V=U^{c}\cup V\cup M$ is a
WBI. Conversely, if $\to$ is a WBI over $\mathcal{O}(X)$, then by Theorem 3.12
again, the interval $[\neg X,X]$ is a Boolean algebra and $\neg U$ is the
complement of $U\cup\neg X$ in $[\neg X,X]$. Set $M=\neg X$ and note that $M$
is open, as $\neg X\in\mathcal{O}(X)$. As the complement of $U\cup M$ in
$[M,X]$ is $U^{c}\cup M$, we have $U\to V=U^{c}\cup V\cup M$. Moreover, as
$\neg U=U^{c}\cup M$ is open, for any open $U$, the space $X-M$ is locally
indiscrete, by Lemma 3.14. ∎
###### Remark 3.17.
For any WBI over $\mathcal{O}(X)$, its unique $M=\neg X$ is called its _core_.
Note that if $M=\varnothing$, then the space $X$ is locally indiscrete and the
trivial and the Boolean implications are the WBI’s with the cores $X$ and
$\varnothing$, respectively. The strong space $(X,\to)$ is called a _weakly Boolean space_ (WBS, for short), if $\to$ is a WBI over $\mathcal{O}(X)$.
Note that if a space is locally indiscrete, then by Corollary 3.16, for any
open $N$, there is a WBI with the core $N$, because $X-N$ is locally
indiscrete. Also notice that a continuous map $f:X\to Y$ induces a strong map
$f:(X,\to_{X})\to(Y,\to_{Y})$ between two WBS’s, iff $f^{-1}(M_{Y})=M_{X}$.
###### Example 3.18.
Let $X$ be a topological space and $N\subseteq X$ be a closed discrete
subspace. Then, by Corollary 3.16, $U\to V=U^{c}\cup V\cup(X-N)$ is a WBI, as
$X-N$ is open and $N$ is locally indiscrete. Corollary 3.16 also shows that if
$X$ is $T_{0}$, then these are the only WBI’s over $X$, as in a $T_{0}$ space,
the only locally indiscrete subspaces are the discrete ones. Moreover, for
later reference, note that if $X$ is a Hausdorff space, then $U\to V=U^{c}\cup
V\cup(X-N)$ is a WBI, for any finite $N\subseteq X$.
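A minimal worked instance (ours): on the Sierpinski space $X=\\{0,1\\}$ with opens $\varnothing,\\{1\\},X$, the subset $N=\\{0\\}$ is closed and discrete, so $U\to V=U^{c}\cup V\cup\\{1\\}$ should be a WBI with core $M=\\{1\\}$. The snippet checks all the required properties.

```python
from itertools import product

# Worked instance of Corollary 3.16 and Example 3.18 on the Sierpinski space.
X = frozenset({0, 1})
opens = [frozenset(), frozenset({1}), X]
M = frozenset({1})                             # core M = X - N for N = {0}
imp = lambda U, V: (X - U) | V | M

for U, V in product(opens, repeat=2):
    assert imp(U, V) in opens                  # the values are again open
    assert not U <= V or imp(U, V) == X        # U <= V implies U -> V = X
    assert V <= imp(U, U & V)                  # open criterion of Lemma 3.3
    assert imp(U | V, V) | U == X              # closed criterion of Lemma 3.3
for U, V, W in product(opens, repeat=3):
    assert imp(U, V) & imp(V, W) <= imp(U, W)  # transitivity axiom
print("U -> V = U^c | V | {1} is a WBI on the Sierpinski space")
```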
### 3.2 Open-irreducible and Closed-irreducible Maps
In this subsection, we will first introduce the two families of open- and
closed-irreducible maps as the generalizations of the injective open and
closed continuous functions, respectively. Then, we will prove that an
implication $\to$ over $\mathcal{O}(X)$ is open (closed) iff the inverse image
of any open-irreducible (closed-irreducible) map into $X$ transforms $\to$ to
another implication.
###### Definition 3.19.
A topological space $X$ is _open-irreducible (closed-irreducible)_ , if $A\cap
B=\varnothing$ implies $A=\varnothing$ or $B=\varnothing$, for any open
(closed) subsets $A,B\subseteq X$.
Intuitively, the open-irreducible (closed-irreducible) spaces are roughly the
spaces in which the open (closed) subsets are so big that any two non-empty
open (closed) subsets intersect.
###### Example 3.20.
Any singleton space is both open- and closed-irreducible. Any space that is
the closure of a single point is open-irreducible because if
$X=\overline{\\{x\\}}$ and $U$ and $V$ are two nonempty opens, then $x\in U$
as otherwise, $x\in U^{c}$ and since $U^{c}$ is closed, we have $U^{c}\supseteq\overline{\\{x\\}}=X$ which implies $U=\varnothing$. Similarly, $x\in V$.
Therefore, $U\cap V\neq\varnothing$. For closed-irreducible spaces, it is easy
to see that $X$ is closed-irreducible iff
$\overline{\\{x\\}}\cap\overline{\\{y\\}}$ is non-empty, for any $x,y\in X$.
For instance, if $(P,\leq)$ is a poset, where any two elements have an upper
bound, then $P$ with the topology of the upsets as the closed subsets is
closed-irreducible. The reason is that $\overline{\\{x\\}}$ is $\\{z\in P\mid
z\geq x\\}$. Hence, if $w$ is the upper bound of $x$ and $y$, then
$w\in\overline{\\{x\\}}\cap\overline{\\{y\\}}$.
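The last claim is easy to confirm mechanically on a small poset (our own encoding): take $P=\\{a,b,c\\}$ with $a\leq c$ and $b\leq c$, so that any two elements have the upper bound $c$, and let the closed sets be the upsets.

```python
from itertools import combinations, product

# Illustrative check of the last claim of Example 3.20: with the upsets as the
# closed sets, the poset below is closed-irreducible.
P = {"a", "b", "c"}
le = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "c"), ("b", "c")}
upsets = [frozenset(S) for r in range(4) for S in combinations(sorted(P), r)
          if all((x, y) not in le or y in S for x in S for y in P)]

# any two non-empty closed (= upset) subsets intersect
assert all(A & B for A, B in product(upsets, repeat=2) if A and B)
print("the poset with upper bounds is closed-irreducible")
```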
For our purpose, we must look into the relative version of the open-
irreducible (closed-irreducible) space, where a space is replaced by a
continuous map.
###### Definition 3.21.
A continuous map $f:X\to Y$ is called _open-irreducible (closed-irreducible)_
if it is open (closed) and $f[A]\cap f[B]=f[A\cap B]$, for any open (closed)
subsets $A,B\subseteq X$. As the identity function is open-irreducible
(closed-irreducible) and the composition of any two open-irreducible (closed-
irreducible) maps is also open-irreducible (closed-irreducible), considering
all topological spaces together with the open-irreducible (resp. closed-
irreducible) maps forms a category that we denote by $\mathbf{OI}$ (resp.
$\mathbf{CI}$).
###### Example 3.22.
The unique map $!:X\to\\{*\\}$ is open-irreducible (closed-irreducible) iff
$X$ is open-irreducible (closed-irreducible). It is easy to prove that an open
(a closed) map is open-irreducible (closed-irreducible) iff the fiber
$f^{-1}(y)$ is open-irreducible (closed-irreducible) as a subspace of $X$, for
any $y\in Y$. As a special case, all injective open (closed) maps are
trivially open-irreducible (closed-irreducible), as their fibers are
singletons or empty.
Note that if $f:X\to Y$ is an open map, then the map
$f_{!}:\mathcal{O}(X)\to\mathcal{O}(Y)$ defined by $f_{!}(U)=f[U]$ is the left
adjoint of $f^{-1}$, as $U\subseteq f^{-1}(V)$ iff $f[U]\subseteq V$. The map
$f_{!}$ is well-defined as $f$ is open. Having this left adjoint, we can say
that for any open map $f$, it is open-irreducible iff the left adjoint of
$f^{-1}$ preserves the binary meets. Similarly, if $f:X\to Y$ is a closed map,
then the map $f_{*}:\mathcal{O}(X)\to\mathcal{O}(Y)$ defined by
$f_{*}(U)=f[U^{c}]^{c}$ is the right adjoint of $f^{-1}$, as
$f^{-1}(V)\subseteq U$ iff $V\subseteq f[U^{c}]^{c}$. Note that $f_{*}$ is
well-defined as $f$ is closed. Again, using this right adjoint, we can say
that for any closed map $f$, it is closed-irreducible iff the right adjoint of
$f^{-1}$ preserves the binary joins.
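Both adjunctions can be checked exhaustively on a small example. The snippet below (illustrative) does so for the unique map collapsing the discrete two-point space onto a singleton, a map that is both open and closed.

```python
from itertools import product

# Finite check of f_! -| f^{-1} and f^{-1} -| f_* for the collapse map.
X, Y = frozenset({0, 1}), frozenset({"*"})
opens_X = [frozenset(c) for c in [(), (0,), (1,), (0, 1)]]   # X is discrete
opens_Y = [frozenset(), Y]
f = lambda x: "*"

inv  = lambda V: frozenset(x for x in X if f(x) in V)        # f^{-1}
push = lambda U: frozenset(f(x) for x in U)                  # f_!(U) = f[U]
star = lambda U: Y - frozenset(f(x) for x in X - U)          # f_*(U) = f[U^c]^c

for U, V in product(opens_X, opens_Y):
    assert (U <= inv(V)) == (push(U) <= V)     # f_! left adjoint to f^{-1}
    assert (inv(V) <= U) == (V <= star(U))     # f_* right adjoint to f^{-1}
print("f_! -| f^{-1} and f^{-1} -| f_* hold for the collapse map")
```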
In the following theorem, we will characterize the open (closed) implications
based on their preservation under the inverse image of the open-irreducible
(closed-irreducible) maps.
###### Theorem 3.23.
For a strong space $(Y,\to_{Y})$, the following are equivalent:
$(i)$
For any topological space $X$ and any open-irreducible map $f:X\to Y$, there
is an open implication $\to_{X}$ over $\mathcal{O}(X)$ such that
$f^{-1}(U\to_{Y}V)=f^{-1}(U)\to_{X}f^{-1}(V)$, for any $U,V\in\mathcal{O}(Y)$.
$(ii)$
For any topological space $X$ and any open-irreducible map $f:X\to Y$, there
is an implication $\to_{X}$ over $\mathcal{O}(X)$ such that we have
$f^{-1}(U\to_{Y}V)=f^{-1}(U)\to_{X}f^{-1}(V)$, for any $U,V\in\mathcal{O}(Y)$.
$(iii)$
For any topological space $X$ and any open embedding $f:X\to Y$, there is an
implication $\to_{X}$ over $\mathcal{O}(X)$ such that
$f^{-1}(U\to_{Y}V)=f^{-1}(U)\to_{X}f^{-1}(V)$, for any $U,V\in\mathcal{O}(Y)$.
$(iv)$
The implication $\to_{Y}$ is open localizable, i.e., for any open subset
$Z\subseteq Y$ and any $U_{1},V_{1},U_{2},V_{2}\in\mathcal{O}(Y)$, if
$U_{1}\cap Z=U_{2}\cap Z$ and $V_{1}\cap Z=V_{2}\cap Z$, then
$(U_{1}\to_{Y}V_{1})\cap Z=(U_{2}\to_{Y}V_{2})\cap Z$.
$(v)$
The implication $\to_{Y}$ is open.
The same also holds, replacing open with closed, everywhere in the theorem.
###### Proof.
The implications from $(i)$ to $(ii)$ and from $(ii)$ to $(iii)$ are trivial. For $(iii)$ to
$(iv)$, as $Z$ is open in $Y$, the inclusion map $j:Z\to Y$ is an open
embedding. Therefore, we can apply $(iii)$ to $j$. Then, as
$j^{-1}(U_{1})=U_{1}\cap Z=U_{2}\cap Z=j^{-1}(U_{2})$ and
$j^{-1}(V_{1})=V_{1}\cap Z=V_{2}\cap Z=j^{-1}(V_{2})$, we have
$j^{-1}(U_{1})\to_{Z}j^{-1}(V_{1})=j^{-1}(U_{2})\to_{Z}j^{-1}(V_{2})$.
Therefore, $j^{-1}(U_{1}\to_{Y}V_{1})=j^{-1}(U_{2}\to_{Y}V_{2})$ and hence,
$(U_{1}\to_{Y}V_{1})\cap Z=(U_{2}\to_{Y}V_{2})\cap Z$.
For $(iv)$ to $(v)$, using Lemma 3.3, it is enough to prove $V\subseteq
U\to_{Y}U\cap V$, for any $U,V\in\mathcal{O}(Y)$. Set $Z=V$ in $(iv)$. As
$Z\cap(U\cap V)=Z\cap U$, we have $(U\to_{Y}U\cap V)\cap Z=(U\to_{Y}U)\cap Z$,
by $(iv)$. As $\to_{Y}$ is an implication, we have $U\to_{Y}U=Y$ and hence
$V=Z\subseteq(U\to_{Y}U\cap V)$.
Finally, to prove $(i)$ from $(v)$, define
$U^{\prime}\to_{X}V^{\prime}=f^{-1}(f[U^{\prime}]\to_{Y}f[V^{\prime}])$, for
any $U^{\prime},V^{\prime}\in\mathcal{O}(X)$. First, note that as $f$ is open,
$\to_{X}$ is well-defined over $\mathcal{O}(X)$. Secondly, by Example 1.3,
$\to_{X}$ is clearly an implication. Thirdly, recall that $f_{!}(U)=f[U]$ is
the left adjoint of $f^{-1}$ and preserves all binary meets. Hence, as
$\to_{Y}$ is open, by Theorem 3.10, the implication $\to_{X}$ is open.
Finally, to prove $f^{-1}(U)\to_{X}f^{-1}(V)=f^{-1}(U\to_{Y}V)$, for any
$U,V\in\mathcal{O}(Y)$, we first prove that $f^{-1}(U_{1})=f^{-1}(U_{2})$ and
$f^{-1}(V_{1})=f^{-1}(V_{2})$ imply
$f^{-1}(U_{1}\to_{Y}V_{1})=f^{-1}(U_{2}\to_{Y}V_{2})$, for any opens
$U_{1},V_{1},U_{2},V_{2}\in\mathcal{O}(Y)$. First, notice that by the
assumptions $f^{-1}(U_{1})=f^{-1}(U_{2})$ and $f^{-1}(V_{1})=f^{-1}(V_{2})$,
we reach $U_{1}\cap f[X]=U_{2}\cap f[X]$ and $V_{1}\cap f[X]=V_{2}\cap f[X]$.
As $f[X]$ is open, $\to_{Y}$ is an open implication, and $U_{2}\cap
f[X]\subseteq U_{1}$ and $V_{1}\cap f[X]\subseteq V_{2}$, we have
$f[X]\subseteq(U_{2}\to_{Y}U_{1})\cap(V_{1}\to_{Y}V_{2})$. Therefore,
$f[X]\cap(U_{1}\to_{Y}V_{1})\subseteq(U_{2}\to_{Y}U_{1})\cap(U_{1}\to_{Y}V_{1})\cap(V_{1}\to_{Y}V_{2})\subseteq
U_{2}\to_{Y}V_{2}$. By symmetry, we also have
$f[X]\cap(U_{2}\to_{Y}V_{2})\subseteq U_{1}\to_{Y}V_{1}$. Therefore,
$f[X]\cap(U_{1}\to_{Y}V_{1})=f[X]\cap(U_{2}\to_{Y}V_{2})$ which implies
$f^{-1}(U_{1}\to_{Y}V_{1})=f^{-1}(U_{2}\to_{Y}V_{2})$.
Now, as $f_{!}\dashv f^{-1}$, we have $f^{-1}f_{!}f^{-1}(U)=f^{-1}(U)$ and
$f^{-1}f_{!}f^{-1}(V)=f^{-1}(V)$, for any $U,V\in\mathcal{O}(Y)$. Therefore,
using what we just proved, we have
$f^{-1}(U)\to_{X}f^{-1}(V)=f^{-1}(f_{!}f^{-1}(U)\to_{Y}f_{!}f^{-1}(V))=f^{-1}(U\to_{Y}V)$.
For the closed case, the only non-trivial cases are $(iv)$ to $(v)$ and $(v)$
to $(i)$. For the first case, using Lemma 3.3, it is enough to show $(U\cup
V\to_{Y}V)\cup U=Y$ or equivalently $U^{c}\subseteq U\cup V\to_{Y}V$, for any
$U,V\in\mathcal{O}(Y)$. Set $Z=U^{c}$ in $(iv)$ and note that $Z$ is closed in
$Y$. As $(U\cup V)\cap Z=V\cap Z$, we have $(U\cup V\to_{Y}V)\cap
Z=(V\to_{Y}V)\cap Z$, by $(iv)$. As $\to_{Y}$ is an implication, we have
$V\to_{Y}V=Y$ and hence $U^{c}=Z\subseteq(U\cup V\to_{Y}V)$. Therefore,
$U^{c}\subseteq U\cup V\to_{Y}V$.
To prove $(i)$ from $(v)$, define
$U^{\prime}\to_{X}V^{\prime}=f^{-1}(f_{*}(U^{\prime})\to_{Y}f_{*}(V^{\prime}))$,
for any $U^{\prime},V^{\prime}\in\mathcal{O}(X)$, where
$f_{*}(W)=(f[W^{c}])^{c}$, for any $W\in\mathcal{O}(X)$. By Example 1.3,
$\to_{X}$ is clearly an implication. As $f^{-1}\dashv f_{*}$, the implication
$\to_{Y}$ is closed and $f_{*}$ preserves all binary joins, by Theorem 3.10,
$\to_{X}$ is also closed. To prove
$f^{-1}(U\to_{Y}V)=f^{-1}(U)\to_{X}f^{-1}(V)$, we first prove that
$f^{-1}(U_{1})=f^{-1}(U_{2})$ and $f^{-1}(V_{1})=f^{-1}(V_{2})$ imply
$f^{-1}(U_{1}\to_{Y}V_{1})=f^{-1}(U_{2}\to_{Y}V_{2})$, for any
$U_{1},V_{1},U_{2},V_{2}\in\mathcal{O}(Y)$. As before, by
$f^{-1}(U_{1})=f^{-1}(U_{2})$ and $f^{-1}(V_{1})=f^{-1}(V_{2})$, we reach
$U_{1}\cap f[X]=U_{2}\cap f[X]$ and $V_{1}\cap f[X]=V_{2}\cap f[X]$.
Therefore, $U_{2}\subseteq f[X]^{c}\cup U_{1}$ and $V_{1}\subseteq
f[X]^{c}\cup V_{2}$. As $f[X]^{c}$ is open and the implication $\to_{Y}$ is
closed, we have
$f[X]^{c}\cup(U_{2}\to_{Y}U_{1})=f[X]^{c}\cup(V_{1}\to_{Y}V_{2})=Y$. Hence,
$U_{1}\to_{Y}V_{1}\subseteq f[X]^{c}\cup(U_{1}\to_{Y}V_{1})$ which is itself a
subset of
$(f[X]^{c}\cup(U_{1}\to_{Y}V_{1}))\cap(f[X]^{c}\cup(U_{2}\to_{Y}U_{1}))\cap(f[X]^{c}\cup(V_{1}\to_{Y}V_{2}))=$
$f[X]^{c}\cup[(U_{2}\to_{Y}U_{1})\cap(U_{1}\to_{Y}V_{1})\cap(V_{1}\to_{Y}V_{2})]\subseteq
f[X]^{c}\cup(U_{2}\to_{Y}V_{2}).$
Thus, $U_{1}\to_{Y}V_{1}\subseteq f[X]^{c}\cup(U_{2}\to_{Y}V_{2})$. By
symmetry, $U_{2}\to_{Y}V_{2}\subseteq f[X]^{c}\cup(U_{1}\to_{Y}V_{1})$.
Therefore, $f[X]\cap(U_{1}\to_{Y}V_{1})=f[X]\cap(U_{2}\to_{Y}V_{2})$ which
implies $f^{-1}(U_{1}\to_{Y}V_{1})=f^{-1}(U_{2}\to_{Y}V_{2})$.
Finally, as $f^{-1}\dashv f_{*}$, we have the equalities
$f^{-1}f_{*}f^{-1}(U)=f^{-1}(U)$ and $f^{-1}f_{*}f^{-1}(V)=f^{-1}(V)$, for any
$U,V\in\mathcal{O}(Y)$. Therefore,
$f^{-1}(U)\to_{X}f^{-1}(V)=f^{-1}(f_{*}f^{-1}(U)\to_{Y}f_{*}f^{-1}(V))$
which is equal to $f^{-1}(U\to_{Y}V)$, for any $U,V\in\mathcal{O}(Y)$. ∎
## 4 Geometric Implications
In this section, we will introduce a natural notion of geometricity for a
category of strong spaces over a category of spaces to formalize the informal
concept of geometricity of a family of implications over a family of
continuous maps. Then, we will provide a characterization theorem for the
geometric categories over the categories of spaces satisfying some basic
closure properties. Specifically, we will see that the only geometric category
over $\mathbf{Top}$ is the category of the strong spaces with the trivial
implication. We first need the following lemma.
###### Lemma 4.1.
$(i)$
Let $(\mathcal{A},\to_{\mathcal{A}})$ be a strong algebra, $\mathcal{B}$ be a
bounded distributive lattice and $f:\mathcal{A}\to\mathcal{B}$ be a bounded
lattice map with a left inverse, i.e., a monotone function
$g:\mathcal{B}\to\mathcal{A}$ such that $gf=id_{\mathcal{A}}$. Then, there is
an implication $\to_{\mathcal{B}}$ over $\mathcal{B}$ such that
$f:(\mathcal{A},\to_{\mathcal{A}})\to(\mathcal{B},\to_{\mathcal{B}})$ is a
strong algebra map.
$(ii)$
Let $(\mathcal{A},\to_{\mathcal{A}})$ be a weakly Boolean strong algebra,
$\mathcal{B}$ be a bounded distributive lattice and
$f:\mathcal{A}\to\mathcal{B}$ be a surjective bounded distributive lattice
morphism. Then, there is a unique weakly Boolean implication
$\to_{\mathcal{B}}$ over $\mathcal{B}$ such that
$f:(\mathcal{A},\to_{\mathcal{A}})\to(\mathcal{B},\to_{\mathcal{B}})$ is a
strong algebra map. If $\mathcal{B}$ is Boolean, it holds even without the
surjectivity condition.
###### Proof.
For $(i)$, define $c\to_{\mathcal{B}}d=f(g(c)\to_{\mathcal{A}}g(d))$, for any
$c,d\in\mathcal{B}$, where $g:\mathcal{B}\to\mathcal{A}$ is the left inverse
of $f$. By Example 1.3, the operator $\to_{\mathcal{B}}$ is an implication. To
prove that $f$ is a strong algebra map, note that
$f(a)\to_{\mathcal{B}}f(b)=f(gf(a)\to_{\mathcal{A}}gf(b))$. As
$gf=id_{\mathcal{A}}$, we reach the conclusion.
For $(ii)$, as $\to_{\mathcal{A}}$ is weakly Boolean, by Theorem 3.12, the
interval $[\neg_{\mathcal{A}}1,1]$ is a Boolean algebra,
$a\to_{\mathcal{A}}b=\neg_{\mathcal{A}}a\vee b$ and $\neg_{\mathcal{A}}a$ is
the complement of $a\vee\neg_{\mathcal{A}}1$ in $[\neg_{\mathcal{A}}1,1]$. Set
$n=f(\neg_{\mathcal{A}}1)$. Note that $f(\neg_{\mathcal{A}}a)$ is the
complement of $f(a)\vee n$ in $[n,1_{\mathcal{B}}]$, for any
$a\in\mathcal{A}$, because $f(a)\vee n\vee
f(\neg_{\mathcal{A}}a)=f(a\vee\neg_{\mathcal{A}}a\vee\neg_{\mathcal{A}}1)=f(1_{\mathcal{A}})=1_{\mathcal{B}}$
and $(f(a)\vee n)\wedge
f(\neg_{\mathcal{A}}a)=f(a\vee\neg_{\mathcal{A}}1)\wedge
f(\neg_{\mathcal{A}}a)=f((a\vee\neg_{\mathcal{A}}1)\wedge\neg_{\mathcal{A}}a)=f(\neg_{\mathcal{A}}1)=n$.
Now, to show that the interval $[n,1_{\mathcal{B}}]$ in $\mathcal{B}$ is a
Boolean algebra, we use either the surjectivity of $f$ or the Booleanness of
$\mathcal{B}$. If $f$ is surjective, then every $c\in\mathcal{B}$ is equal to
$f(a)$, for some $a\in\mathcal{A}$. Hence, if $c=f(a)\geq n$, then $f(\neg_{\mathcal{A}}a)$ is the complement of $f(a)\vee n=f(a)$ in $[n,1_{\mathcal{B}}]$, so any $c\geq n$ has a complement in $[n,1_{\mathcal{B}}]$. If $\mathcal{B}$ is Boolean, then the complement of
$c\geq n$ in $[n,1_{\mathcal{B}}]$ is $\bar{c}\vee n$, where $\bar{c}$ is the
complement of $c$ in $\mathcal{B}$. Now, by Theorem 3.12, the operation
$c\to_{\mathcal{B}}d=n(c)\vee d$ is a WBI over $\mathcal{B}$, where $n(c)$ is
the complement of $c\vee n$ in $[n,1_{\mathcal{B}}]$. Note that we proved
$n(f(a))=f(\neg_{\mathcal{A}}a)$. Hence, as $\neg_{\mathcal{B}}f(a)=n(f(a))$,
we reach $f(a\to_{\mathcal{A}}b)=f(\neg_{\mathcal{A}}a\vee b)=f(\neg_{\mathcal{A}}a)\vee f(b)=\neg_{\mathcal{B}}f(a)\vee f(b)=f(a)\to_{\mathcal{B}}f(b)$. For uniqueness, let $\to$ be another WBI over
$\mathcal{B}$ such that $f(a\to_{\mathcal{A}}b)=f(a)\to f(b)$, for any
$a,b\in\mathcal{A}$. Then, $\neg 1_{\mathcal{B}}=f(\neg_{\mathcal{A}}1)=n$. As
$\neg c$ and $\neg_{\mathcal{B}}c$ are both the complements of $c\vee n$ in
$[n,1_{\mathcal{B}}]$, they must be equal. As a WBI is determined uniquely by
its negation, we have $\to_{\mathcal{B}}=\,\to$. ∎
As geometricity is the stability of a family of implications under the inverse
image of a family of continuous functions, to formalize this notion, we need
to be precise about two ingredients: the continuous maps we use and the family
of implications we choose. For the former, it is reasonable to start with a
subcategory $\mathcal{S}$ of $\mathbf{Top}$ to have a relative version of
geometricity. For the latter, as any implication must be over a space in this
case, a natural formalization of a family of implications is some sort of
fibration that to each space $X$ in $\mathcal{S}$ assigns a fiber of strong
spaces over $X$. Having these two ingredients fixed, the geometricity simply
means the stability of the fibers under the inverse image of the maps in
$\mathcal{S}$. In other words, it states that for any map $f:X\to Y$ in
$\mathcal{S}$, the inverse image map $f^{-1}$ must map a fiber over $Y$ into
the fiber over $X$. The following is the formalization of this idea.
###### Definition 4.2.
Let $\mathcal{S}$ be a (not necessarily full) subcategory of $\mathbf{Top}$. A
category $\mathcal{C}$ of strong spaces is called _geometric over
$\mathcal{S}$_, if the forgetful functor $U:\mathcal{C}\to\mathbf{Top}$
mapping $(X,\to_{X})$ to $X$ satisfies the following conditions:
$(i)$
$U$ maps $\mathcal{C}$ to $\mathcal{S}$ and it is surjective on the objects of
$\mathcal{S}$, and
$(ii)$
for any object $(Y,\to_{Y})$ in $\mathcal{C}$, any object $X$ in $\mathcal{S}$
and any map $f:X\to Y=U(Y,\to_{Y})$ in $\mathcal{S}$, there exists an object
$(X,\to_{X})$ in $\mathcal{C}$ such that $f$ induces a strong space map
$f:(X,\to_{X})\to(Y,\to_{Y})$ in $\mathcal{C}$:
(Diagram: the strong space map $f:(X,\to_{X})\to(Y,\to_{Y})$ in $\mathcal{C}$ lies over the map $f:X\to Y=U(Y,\to_{Y})$ in $\mathcal{S}$ along the forgetful functor $U$.)
Let us explain the connection between the definition and the previous discussion. Using the functor $U$, the category $\mathcal{C}$ is
nothing but a way to provide a fiber of strong spaces or equivalently a fiber
of implications over any space in $\mathcal{S}$. The condition $(i)$, then,
demands that the fibers and the maps between them all lie over
$\mathcal{S}$ and none of the fibers are empty. The condition $(ii)$ is the
geometricity condition that states that for any map $f:X\to Y$, its inverse
image can pull back any implication over $Y$ to an implication over $X$. In
other words, the fibers are stable under the inverse image of the maps of
$\mathcal{S}$.
Note that if $\mathcal{C}$ is geometric over $\mathcal{S}$, then the maps of
$\mathcal{C}$ and $\mathcal{S}$ are the same. One direction is easy as $U$
maps $\mathcal{C}$ into $\mathcal{S}$. For the other direction, notice that
for any map $f:X\to Y$, as the fiber over $Y$ is non-empty, there exists an
object $(Y,\to_{Y})$ in $\mathcal{C}$. Now, use $(ii)$ to show that $f$
actually lives in $\mathcal{C}$. This implies that to show the equality of two
geometric categories over a fixed category, it is enough to show that their
objects are the same.
###### Example 4.3.
For any category $\mathcal{S}$ of spaces, let $\mathcal{S}_{t}$ be the
category of strong spaces $(X,\to_{t})$, where $X$ is in $\mathcal{S}$ and
$\to_{t}$ is the trivial implication together with all the maps of
$\mathcal{S}$ as the morphisms. It is well defined as the inverse image of any
map $f:X\to Y$ in $\mathcal{S}$ preserves the implication, since
$f^{-1}(Y)=X$. It is clear that $\mathcal{S}_{t}$ is a geometric category over
$\mathcal{S}$. This category is called the _trivial_ geometric category over
$\mathcal{S}$.
If $\mathcal{S}$ only consists of locally indiscrete spaces, then there are
three other degenerate geometric categories over $\mathcal{S}$. The first is
the category $\mathcal{S}_{b}$ of strong spaces $(X,\to_{b})$, where $X$ is in
$\mathcal{S}$ and $\to_{b}$ is the Boolean implication together with the maps
of $\mathcal{S}$ as the morphisms. This category is well defined, since the
local indiscreteness of $X$ implies the Booleanness of $\mathcal{O}(X)$ and
the inverse images always preserve all the Boolean operators. It is easy to
see that $\mathcal{S}_{b}$ is actually geometric over $\mathcal{S}$. The
second example is the union of $\mathcal{S}_{b}$ and $\mathcal{S}_{t}$ that we
denote by $\mathcal{S}_{bt}$. This category is also clearly geometric over
$\mathcal{S}$. The third example is $\mathcal{S}_{a}$, the subcategory of
strong spaces $(X,\to)$, where $X$ is in $\mathcal{S}$ and $\to$ is a WBI,
together with the strong space morphisms that $U$ maps into $\mathcal{S}$. The
fibers are clearly non-empty, because of the existence of the trivial
implication over any space. It is geometric by part $(ii)$ of Lemma 4.1.
To have a non-degenerate example, note that the category of all strong spaces
$(X,\to)$, where $\to$ is open (closed) together with the strong maps that are
open-irreducible (closed-irreducible) is geometric over the category
$\mathbf{OI}$ (resp. $\mathbf{CI}$), by Theorem 3.23. Note that Theorem 3.23
actually proves that these categories are the greatest geometric categories
over $\mathbf{OI}$ and $\mathbf{CI}$, respectively, because if the inverse
image of any open-irreducible (closed-irreducible) map into $X$ maps an
implication $\to$ over $\mathcal{O}(X)$ to another implication, then $\to$ is
open (resp. closed). Finally, it is worth mentioning that the category of all
strong spaces with the strong space morphisms that are also surjective is
geometric over the category of all spaces with all surjections,
$\mathbf{Srj}$. To prove this, note that if $f:X\to Y$ is a continuous surjection,
then $f^{-1}$ has a left inverse that is also monotone. Now, use part $(i)$ of
Lemma 4.1.
To provide a characterization for all geometric categories over a given
category $\mathcal{S}$, it is reasonable to assume some closure properties for
$\mathcal{S}$.
###### Definition 4.4.
A subcategory $\mathcal{S}$ of $\mathbf{Top}$ is called _local_ if it has at
least one non-empty object and it is closed under all embeddings, i.e., for
any space $X$ in $\mathcal{S}$ and any embedding $f:Y\to X$, both $Y$ and $f$
belong to $\mathcal{S}$. A space $X$ is called _full_ in $\mathcal{S}$ if
$\mathcal{S}$ has $X$ as an object and all maps $f:Y\to X$ as maps, for any
object $Y$ in $\mathcal{S}$.
First, note that a local subcategory $\mathcal{S}$ of $\mathbf{Top}$ has all
singleton spaces as its objects. Indeed, as $\mathcal{S}$ has a non-empty
space $X$, it is possible to pick $x\in X$ and set $f:\\{*\\}\to X$ as a
function mapping $*$ to $x$. As $f$ is an embedding and $\mathcal{S}$ is
local, the space $\\{*\\}$ is in $\mathcal{S}$. Secondly, notice that although
the singletons are all present in any local subcategory $\mathcal{S}$ of
$\mathbf{Top}$, it does not necessarily mean that $\mathcal{S}$ has a terminal
object. To have a counter-example, it is enough to set $\mathcal{S}$ as the
subcategory of all embeddings in $\mathbf{Top}$. $\mathcal{S}$ is trivially
local. However, if $T$ is its terminal object, then it must be a singleton, as
otherwise, there are either two embeddings or none from $\\{*\\}$ to $T$.
However, if $T$ is a singleton, then there is no embedding from the discrete
space $\\{*,\dagger\\}$ with two points into $T$ which is a contradiction.
Using an argument similar to what we had above, it is not hard to see that a
terminal object in a local subcategory of $\mathbf{Top}$, if it exists, must be a
singleton space. Moreover, a singleton space is a terminal object iff it is
full in the subcategory.
###### Theorem 4.5.
Let $\mathcal{S}$ be a local subcategory of $\mathbf{Top}$ with a terminal
object:
$(i)$
If $\mathcal{S}$ has at least one non-locally-indiscrete space, then the only
geometric category over $\mathcal{S}$ is $\mathcal{S}_{t}$.
$(ii)$
If $\mathcal{S}$ only consists of locally-indiscrete spaces, includes a non-
indiscrete space and a full discrete space with two points, then the only
geometric categories over $\mathcal{S}$ are the four distinct categories
$\mathcal{S}_{t}$, $\mathcal{S}_{b}$, $\mathcal{S}_{bt}$ and
$\mathcal{S}_{a}$.
$(iii)$
If $\mathcal{S}$ only consists of indiscrete spaces, then the only geometric
categories over $\mathcal{S}$ are the three distinct categories
$\mathcal{S}_{t}$, $\mathcal{S}_{b}$, and $\mathcal{S}_{bt}$.
###### Proof.
We first show that the locality of $\mathcal{S}$ implies that the strong
spaces in $\mathcal{C}$ are all WBS's. Let $(Y,\to_{Y})$ be an object in
$\mathcal{C}$. As $\mathcal{S}$ is closed under all embeddings, by
geometricity of $\mathcal{C}$ over $\mathcal{S}$, for any embedding $f:X\to
Y$, there is a strong space $(X,\to_{X})$ such that $f$ induces a strong space
map from $(X,\to_{X})$ to $(Y,\to_{Y})$. Therefore, by Theorem 3.23, $\to_{Y}$
is both open and closed and hence a WBI. Now, note that by Corollary 3.16,
$\to_{Y}$ has a core. Denote this core by $M_{Y}$. Then, the geometricity
implies that for any $f:X\to Y$ in $\mathcal{S}$, there exists a strong space
$(X,\to_{X})$ in $\mathcal{C}$ such that $f$ induces the strong space map
$f:(X,\to_{X})\to(Y,\to_{Y})$. Therefore, by Remark 3.17, we can conclude that
for any $f:X\to Y$ in $\mathcal{S}$, there is a WBI over $X$ in $\mathcal{C}$
with the core $M_{X}=f^{-1}(M_{Y})$. We will use this property frequently in
the proof.
To prove $(i)$, it is enough to show that $M_{Y}=Y$, for any $(Y,\to_{Y})$ in
$\mathcal{C}$. Indeed, by Remark 3.17, any strong space over $Y$ must then have the trivial implication and, as the fiber is non-empty, the fiber over $Y$ contains the trivial
strong space. Thus, the objects of $\mathcal{C}$ are the same as the objects
of $\mathcal{S}_{t}$ and hence $\mathcal{C}=\mathcal{S}_{t}$. Now, let $X$ be
a non-locally-indiscrete space in $\mathcal{S}$ and for the sake of
contradiction, assume $M_{Y}\neq Y$, for some $(Y,\to_{Y})$ in $\mathcal{C}$.
Set $y\in Y-M_{Y}$. As $\mathcal{S}$ is local and has a terminal object, the
singleton space $\\{*\\}$ is a full object in $\mathcal{S}$. As $\mathcal{S}$
is closed under the embeddings, $\mathcal{S}$ has $Y-M_{Y}$ as an object and
the embeddings $i:Y-M_{Y}\to Y$ and $k:\\{*\\}\to Y-M_{Y}$ as the morphisms,
where $i$ is the inclusion and $k$ maps $*$ to $y$. Consider the map
$f=ik!:X\to Y$, where $!:X\to\\{*\\}$ is the constant function. As $\\{*\\}$
is full, $!$ and hence $f$ lives in $\mathcal{S}$. By the geometricity of
$\mathcal{C}$ over $\mathcal{S}$, there is an implication $\to_{X}$ over $X$
such that $f^{-1}(M_{Y})=M_{X}$. As $f$ is a constant function mapping every
element of $X$ to $y\notin M_{Y}$, we have $f^{-1}(M_{Y})=\varnothing$. Hence,
$M_{X}=\varnothing$. Therefore, by Remark 3.17, $X$ is locally indiscrete
which is a contradiction. Therefore, $M_{Y}=Y$.
For $(ii)$, it is easy to see that the four geometric categories over
$\mathcal{S}$ are different. Let $X$ be a non-indiscrete space in
$\mathcal{S}$. As it is not indiscrete, there is a non-trivial open
$\varnothing\subset M\subset X$. As $X$ is locally indiscrete, by Remark 3.17,
there is a WBI over $X$ with the core $M$. This WBI is clearly different from
the Boolean and the trivial ones, as its core $M$ is different from the cores of the other two, namely $\varnothing$ and $X$. Hence, the fibers over
$X$ in the four categories $\mathcal{S}_{t}$, $\mathcal{S}_{b}$,
$\mathcal{S}_{bt}$ and $\mathcal{S}_{a}$ are different. To prove that these
four categories are the only geometric categories over $\mathcal{S}$, we will
consider the following two cases: either $M_{Y}=\varnothing$ or $M_{Y}=Y$, for any $(Y,\to_{Y})$ in $\mathcal{C}$, or there exists a strong space $(Y,\to_{Y})$ in $\mathcal{C}$ such that $\varnothing\subset M_{Y}\subset Y$.
In the first case, we show that $\mathcal{C}$ is uniquely determined by the
fiber over $\\{*\\}$ and it is one of $\mathcal{S}_{t}$, $\mathcal{S}_{b}$ or
$\mathcal{S}_{bt}$. In the second case, we show that
$\mathcal{C}=\mathcal{S}_{a}$. For the first case, as $\\{*\\}$ has only two
subsets, there are only two possible implications over $\\{*\\}$. If both are
in $\mathcal{C}$, then as the constant function $!:Y\to\\{*\\}$ lives in
$\mathcal{S}$, by the fullness of $\\{*\\}$, the geometricity dictates the
existence of two strong spaces $(Y,\to_{Y})$ and $(Y,\to^{\prime}_{Y})$ in
$\mathcal{C}$ such that $!^{-1}(\\{*\\})=M_{Y}$ and
$!^{-1}(\varnothing)=M^{\prime}_{Y}$. Therefore, $M_{Y}=Y$ and
$M^{\prime}_{Y}=\varnothing$. Hence, over any $Y$ the implications with both cores appear. Therefore, $\mathcal{C}$ and $\mathcal{S}_{bt}$ have the same objects, which implies $\mathcal{C}=\mathcal{S}_{bt}$. For the other case, if the fiber
over $\\{*\\}$ has just one implication, denote its core by $M_{*}$. Now, pick
an arbitrary non-empty strong space $(Y,\to_{Y})$ in $\mathcal{C}$ with the
core $M_{Y}$. Note that it is either $\varnothing$ or $Y$, by the assumption.
As $Y$ is non-empty, there is a map $i:\\{*\\}\to Y$. This is an embedding and
hence it is in $\mathcal{S}$. Therefore, by geometricity,
$i^{-1}(M_{Y})=M_{*}$. Hence, if $M_{*}=\varnothing$, we have
$M_{Y}=\varnothing$ and if $M_{*}=\\{*\\}$, we reach $M_{Y}=Y$. Therefore, by
the same reasoning as before, either $\mathcal{C}=\mathcal{S}_{b}$ or
$\mathcal{C}=\mathcal{S}_{t}$.
For the second case, assume the existence of a strong space $(Y,\to_{Y})$ in
$\mathcal{C}$ such that $\varnothing\subset M_{Y}\subset Y$. For any $Z$ in
$\mathcal{S}$ and any open $N\subseteq Z$, define $\to_{N}$ as the WBI with
the core $N$. As $Z$ is locally indiscrete, this is possible by Remark 3.17.
We show that $(Z,\to_{N})$ is in $\mathcal{C}$ and as $N$ is arbitrary, we
conclude $\mathcal{C}=\mathcal{S}_{a}$. Let $\\{*,\dagger\\}$ be the full
discrete space with two elements in $\mathcal{S}$. Set $y\in M_{Y}$ and $z\in
Y-M_{Y}$ and let $f:Z\to\\{*,\dagger\\}$ be the function mapping $N$ to $*$
and $Z-N$ to $\dagger$. As $N$ is open and $Z$ is locally indiscrete, $N$ is
also clopen. Hence, $f$ is continuous. As $\mathcal{S}$ has all maps into
$\\{*,\dagger\\}$, the map $f$ is in $\mathcal{S}$. Now, consider the map
$i:\\{*,\dagger\\}\to Y$ mapping $*$ to $y$ and $\dagger$ to $z$. As
$\\{*,\dagger\\}$ is discrete, the map $i$ is continuous. As $M_{Y}$ is open
and $Y$ is locally indiscrete, $M_{Y}$ is clopen. Hence, it is easy to see
that $i$ is an embedding. Therefore, $i$ is in $\mathcal{S}$. Consider the map
$g:Z\to Y$ defined by $g=if$ and notice that $g$ is also in $\mathcal{S}$. As
$\mathcal{C}$ is geometric over $\mathcal{S}$, there must be a strong space
$(Z,\to_{Z})$ with the core $M_{Z}$ such that $g^{-1}(M_{Y})=M_{Z}$. By
construction, we have $g^{-1}(M_{Y})=f^{-1}(i^{-1}(M_{Y}))=f^{-1}(\\{*\\})=N$.
Therefore, $M_{Z}=N$. As any WBI is uniquely determined by its core, we have
$\to_{Z}=\to_{N}$. Hence, $(Z,\to_{N})$ is in $\mathcal{C}$.
For $(iii)$, note that the trivial and the Boolean implications are different
over $\mathcal{O}(\\{*\\})$, as their cores, i.e., $\\{*\\}$ and
$\\{*\\}^{c}=\varnothing$ are different. Hence, the three categories
$\mathcal{S}_{t}$, $\mathcal{S}_{b}$ and $\mathcal{S}_{bt}$ are different, as
their fibers over $\\{*\\}$ are different. To prove that these are the only
geometric categories over $\mathcal{S}$, assume that $(X,\to_{X})$ is an
arbitrary object in $\mathcal{C}$. As $X$ is indiscrete and has only two
opens, the core is either $\varnothing$ or $X$. The rest is similar to the
first case in $(ii)$. ∎
###### Corollary 4.6.
Let $\mathcal{S}$ be a full subcategory of $\mathbf{Top}$ that is also local.
Then:
$(i)$
If $\mathcal{S}$ has at least one non-locally-indiscrete space, then the only
geometric category over $\mathcal{S}$ is $\mathcal{S}_{t}$.
$(ii)$
If $\mathcal{S}$ only consists of locally-indiscrete spaces and includes a
non-indiscrete space, then the only geometric categories over $\mathcal{S}$
are the four distinct categories $\mathcal{S}_{t}$, $\mathcal{S}_{b}$,
$\mathcal{S}_{bt}$ and $\mathcal{S}_{a}$.
$(iii)$
If $\mathcal{S}$ only consists of indiscrete spaces, then the only geometric
categories over $\mathcal{S}$ are the three distinct categories
$\mathcal{S}_{t}$, $\mathcal{S}_{b}$, and $\mathcal{S}_{bt}$.
###### Proof.
As $\mathcal{S}$ is local, it has the singleton object and as $\mathcal{S}$ is
a full subcategory, the singleton is a full object and hence it is the
terminal object of $\mathcal{S}$. Having made that observation, for $(i)$ and
$(iii)$, it is enough to use Theorem 4.5. For $(ii)$, let $Y$ be a non-
indiscrete yet locally indiscrete space in $\mathcal{S}$. Then, it has a non-
trivial open $\varnothing\subset U\subset Y$ and as $Y$ is locally indiscrete,
$U$ is clopen. Pick two elements $x\in U$ and $y\in Y-U$. The map
$g:\\{*,\dagger\\}\to Y$ mapping $*$ to $x$ and $\dagger$ to $y$ is continuous
as $\\{*,\dagger\\}$ is discrete. It is an embedding, as $U$ is clopen. Hence,
as $\mathcal{S}$ is local, it must have the discrete space $\\{*,\dagger\\}$.
Finally, as $\mathcal{S}$ is a full subcategory, the space $\\{*,\dagger\\}$
is a full object. Now, use Theorem 4.5 to complete the proof. ∎
###### Corollary 4.7.
$\mathbf{Top}_{t}$ is the only geometric category over $\mathbf{Top}$.
The following theorem shows that the existence of the terminal object in
Theorem 4.5 is crucial, as without the condition, it is possible to have
infinitely many geometric categories over some local subcategories of
$\mathbf{Top}$.
###### Theorem 4.8.
Let $\mathcal{S}$ be a category of Hausdorff spaces with injective continuous
maps such that the size of the objects is not bounded by any finite number.
Then, there are infinitely many geometric categories over $\mathcal{S}$.
###### Proof.
For any natural number $n\geq 0$, define the category $\mathcal{C}_{n}$ as
follows. For objects, consider all strong spaces $(X,\to_{M})$, where $X$ is
in $\mathcal{S}$, $M\subseteq X$ is a subset such that $X-M$ has at most $n$
elements and $\to_{M}$ is the WBI with the core $M$. Note that as $X-M$ is
finite and $X$ is Hausdorff, $X-M$ is closed and discrete. Therefore, the
implication is well-defined by Example 3.18. For the morphisms, consider all
maps $f:(X,\to_{M})\to(Y,\to_{N})$, where $f:X\to Y$ is in $\mathcal{S}$ and
$f^{-1}(N)=M$. By Remark 3.17, the maps are strong space maps and hence well
defined. We prove that $\mathcal{C}_{n}$ is geometric over $\mathcal{S}$. It
is clear that $U$ maps $\mathcal{C}_{n}$ into $\mathcal{S}$. The fibers are
non-empty as the trivial space $(X,\to_{t})$ is in $\mathcal{C}_{n}$. For
geometricity, for any $(Y,\to_{N})$ and any map $f:X\to Y$ in $\mathcal{S}$,
as $Y-N$ has at most $n$ elements and $f$ is injective, the set
$f^{-1}(Y-N)=X-f^{-1}(N)$ has at most $n$ elements, as well. Hence, $f$ lifts
to the strong space map $f:(X,\to_{f^{-1}(N)})\to(Y,\to_{N})$. Finally, it is
clear that $\mathcal{C}_{n}$ is a subcategory of $\mathcal{C}_{n+1}$. We show
that this inclusion is proper. To prove it, note that the size of the objects
in $\mathcal{S}$ is not bounded by a finite number. Hence, for any $n$, there
exists a space $X$ with at least $n+1$ elements. Choose $M$ such that $X-M$
has exactly $n+1$ elements. Then, $(X,\to_{M})$ appears in $\mathcal{C}_{n+1}$
but not in $\mathcal{C}_{n}$. ∎
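The heart of the proof is the counting step: injectivity of $f$ prevents the co-core from growing under pullback. A tiny illustrative check (with a made-up injective map between finite discrete, hence Hausdorff, spaces):

```python
# For an injective f and a core N on Y, the pulled-back core f^{-1}(N)
# satisfies |X - f^{-1}(N)| <= |Y - N|.
Y = frozenset(range(5))
X = frozenset(range(3))
f = {0: 1, 1: 3, 2: 4}                     # an injective map X -> Y
for N in [frozenset({0, 1, 2}), frozenset({0, 1, 2, 3})]:
    pre = frozenset(x for x in X if f[x] in N)
    assert len(X - pre) <= len(Y - N)      # injectivity bounds the new co-core
print("pulled-back cores stay small under injective maps")
```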
## 5 A Kripke-style Representation Theorem
In the previous sections, we have introduced three families of implications
and their corresponding geometric characterizations. These implications are
also interesting in their own right for their algebraic and logical aspects.
Algebraically, they are different internalizations of the order of the base
lattice, while logically, as observed in Remark 3.2, they are reflecting
different versions of the right implication rule. Among the three, the closed
and the weakly Boolean implications are rather too restricted in their form
and hence too close to the well-understood Boolean implication. However, the
open implications may come in many different forms, appearing in many
different logical disciplines. For instance, the open meet-internalizing and
join-internalizing implications are well-studied in provability logics [21]
and philosophical discussions around the impredicativity of the intuitionistic
implication [17]. Inspired by these aspects, in this section, we intend to
provide a representation theorem for all implications, in general, and the
open implications, in particular.
To provide this representation theorem, we first need to use a Yoneda-type
technique augmented by an ideal completion to embed any strong algebra in a
strong algebra over a locale, where the implication becomes a part of an
adjunction-like situation. The machinery is explained in detail in [1].
However, for the sake of completeness and to address the new case of the open
implications, we will repeat the main construction here and refer the reader
to [1], for a more detailed explanation.
###### Lemma 5.1.
For any strong algebra $(\mathcal{A},\to_{\mathcal{A}})$, there exist a locale
$\mathcal{H}$, an implication $\to_{\mathcal{H}}$ over $\mathcal{H}$, an
embedding
$i:(\mathcal{A},\to_{\mathcal{A}})\to(\mathcal{H},\to_{\mathcal{H}})$, a join
preserving operator $\nabla:\mathcal{H}\to\mathcal{H}$ and an order-preserving
map $F:\mathcal{H}\to\mathcal{H}$ such that $\nabla x\wedge F(y)\leq F(z)$ iff
$x\leq y\to_{\mathcal{H}}z$, for any $x,y,z\in\mathcal{H}$. If
$\to_{\mathcal{A}}$ is open, then $\to_{\mathcal{H}}$ can be chosen as open.
###### Proof.
Let $\mathcal{B}$ be the set of all functions
$f:\prod_{n=0}^{\infty}\mathcal{A}\to\mathcal{A}$ with the pointwise ordering
$\leq_{\mathcal{B}}$. It is clear that $(\mathcal{B},\leq_{\mathcal{B}})$ is a
bounded distributive lattice. It is also possible to define an implication
$\to_{\mathcal{B}}$ over $\mathcal{B}$ in a pointwise fashion. Define
$s:\mathcal{B}\to\mathcal{B}$ as a shift function on the inputs, i.e.,
$[s(f)](\\{x_{i}\\}_{i=0}^{\infty})=f(\\{x_{i}\\}_{i=1}^{\infty})$. Notice
that $s$ respects the order $\leq_{\mathcal{B}}$, the meets and the joins as
they are defined pointwise. Set $\mathcal{H}$ as the locale of the ideals of
$\mathcal{B}$ and define $i:\mathcal{A}\to\mathcal{H}$ as
$i(a)=\\{f\in\mathcal{B}\mid f\leq c_{a}\\}$, where $c_{a}$ is the constant
function mapping every input into $a$. It is clear that $i$ is an embedding
respecting all finite joins and meets. Define $\nabla
I=\\{f\in\mathcal{B}\mid\exists g\in I\;[f\leq s(g)]\\}$ and
$F(I)=\\{f\in\mathcal{B}\mid\exists g\in
I\,[f\leq(x_{0}\to_{\mathcal{B}}s(g))]\\}$. Note that both $\nabla$ and $F$
are order-preserving. It is easy to see that $\nabla$ is also join preserving.
As $\mathcal{H}$ is complete, $\nabla$ has a right adjoint $\Delta$. Define
$I\to_{\mathcal{H}}J=\Delta(F(I)\Rightarrow F(J))$, where $\Rightarrow$ is the
Heyting implication of $\mathcal{H}$. The operation $\to_{\mathcal{H}}$ is an
implication, by Example 1.3. By the adjunction $\nabla\dashv\Delta$, it is
clear that $\nabla K\cap F(I)\subseteq F(J)$ iff $K\subseteq
I\to_{\mathcal{H}}J$, for any ideals $I,J,K$ of $\mathcal{B}$. The only
remaining thing to prove is the identity
$i(a\to_{\mathcal{A}}b)=i(a)\to_{\mathcal{H}}i(b)$, for any
$a,b\in\mathcal{A}$. To prove this, it is enough to show that $\nabla I\cap
F(i(a))\subseteq F(i(b))$ iff $I\subseteq i(a\to_{\mathcal{A}}b)$, for any
ideal $I$ of $\mathcal{B}$. For the forward direction, if $f\in I$, then
$s(f)\in\nabla I$ and as $c_{a}\in i(a)$, we have
$s(f)\wedge(x_{0}\to_{\mathcal{B}}s(c_{a}))\in\nabla I\cap F(i(a))$.
Therefore, $s(f)\wedge(x_{0}\to_{\mathcal{B}}s(c_{a}))\in F(i(b))$ which
implies
$s(f)\wedge(x_{0}\to_{\mathcal{B}}s(c_{a}))\leq(x_{0}\to_{\mathcal{B}}s(c_{b}))$.
This means, $f(\\{x_{i}\\}_{i=1}^{\infty})\wedge(x_{0}\to_{\mathcal{A}}a)\leq
x_{0}\to_{\mathcal{A}}b$, for any $\\{x_{i}\\}_{i=0}^{\infty}$. Putting
$x_{0}=a$, we reach $f(\\{x_{i}\\}_{i=1}^{\infty})\leq a\to_{\mathcal{A}}b$,
for any $\\{x_{i}\\}_{i=0}^{\infty}$. Thus, as $\\{x_{i}\\}_{i=0}^{\infty}$ is
arbitrary, we have $f(\\{x_{i}\\}_{i=0}^{\infty})\leq a\to_{\mathcal{A}}b$,
for any $\\{x_{i}\\}_{i=0}^{\infty}$. Therefore, $f\leq
c_{a\to_{\mathcal{A}}b}$ which implies $f\in i(a\to_{\mathcal{A}}b)$. For the
backward direction, if $f\in\nabla I\cap F(i(a))$, then there exist $g\in I$
and $h\in i(a)$ such that $f\leq s(g)\wedge(x_{0}\to_{\mathcal{B}}s(h))$. As
$g\in I$, we have $g\leq c_{a\to_{\mathcal{A}}b}$. Hence, $f\leq
c_{a\to_{\mathcal{A}}b}\wedge(x_{0}\to_{\mathcal{B}}c_{a})$ which means
$f(\\{x_{i}\\}_{i=0}^{\infty})\leq(a\to_{\mathcal{A}}b)\wedge(x_{0}\to_{\mathcal{A}}a)$,
for any $\\{x_{i}\\}_{i=0}^{\infty}$. Therefore,
$f(\\{x_{i}\\}_{i=0}^{\infty})\leq x_{0}\to_{\mathcal{A}}b$ which implies
$f\leq x_{0}\to_{\mathcal{B}}s(c_{b})$. Hence, $f\in F(i(b))$.
Finally, for the preservation of the openness, if $\to_{\mathcal{A}}$ is open,
using Lemma 3.3, it is enough to prove $I\subseteq J\to_{\mathcal{H}}I\cap J$,
for any $I,J\in\mathcal{H}$. Using the adjunction-style condition for
$\to_{\mathcal{H}}$, we must show that $\nabla I\cap F(J)\subseteq F(I\cap
J)$, for any $I,J\in\mathcal{H}$. To prove it, let $f\in\nabla I\cap F(J)$.
Hence, there are $g\in I$ and $h\in J$ such that $f\leq s(g)$ and $f\leq
x_{0}\to_{\mathcal{B}}s(h)$, i.e., $f(\\{x_{i}\\}_{i=0}^{\infty})\leq
g(\\{x_{i}\\}_{i=1}^{\infty})$ and $f(\\{x_{i}\\}_{i=0}^{\infty})\leq
x_{0}\to_{\mathcal{A}}h(\\{x_{i}\\}_{i=1}^{\infty})$, for any
$\\{x_{i}\\}_{i=0}^{\infty}$. We want to show that
$f(\\{x_{i}\\}_{i=0}^{\infty})\leq
x_{0}\to_{\mathcal{A}}[g(\\{x_{i}\\}_{i=1}^{\infty})\wedge
h(\\{x_{i}\\}_{i=1}^{\infty})].$
As $\to_{\mathcal{A}}$ is open, $g(\\{x_{i}\\}_{i=1}^{\infty})\leq
h(\\{x_{i}\\}_{i=1}^{\infty})\to_{\mathcal{A}}g(\\{x_{i}\\}_{i=1}^{\infty})\wedge
h(\\{x_{i}\\}_{i=1}^{\infty})$. Therefore, $f(\\{x_{i}\\}_{i=0}^{\infty})$ is
less than or equal to
$(x_{0}\to_{\mathcal{A}}h(\\{x_{i}\\}_{i=1}^{\infty}))\wedge(h(\\{x_{i}\\}_{i=1}^{\infty})\to_{\mathcal{A}}[g(\\{x_{i}\\}_{i=1}^{\infty})\wedge
h(\\{x_{i}\\}_{i=1}^{\infty})]).$
Hence, $f(\\{x_{i}\\}_{i=0}^{\infty})\leq
x_{0}\to_{\mathcal{A}}[g(\\{x_{i}\\}_{i=1}^{\infty})\wedge
h(\\{x_{i}\\}_{i=1}^{\infty})]$. Now, set $k=g\wedge h$. Therefore, $k\in
I\cap J$ and $f\leq x_{0}\to_{\mathcal{B}}s(k)$. Hence, $f\in F(I\cap J)$. ∎
In the following, we define a topological version of the KN-frames introduced
before to represent all (open) strong algebras. Apart from providing a more
specific representation, the topology helps to represent the open strong
algebras by the open KN-frames. Later, we will see that to represent an
arbitrary implication (without requiring the openness condition), even the
full KN-frames (without using any topology) are sufficient.
###### Definition 5.2.
A _KN-space_ is a KN-frame $\mathcal{X}=(X,\leq,R,C(X),N)$, where $(X,\leq)$
is a Priestley space and $C(X)$ is the set of all clopens of $X$. Spelled out, this means that:
* $\bullet$
$R$ is compatible with the order, i.e., if $x\leq y$ and $(y,z)\in R$, then
$(x,z)\in R$,
* $\bullet$
for any $x\in X$ and any clopen upsets $U$ and $V$, if $U\subseteq V$ and
$U\in N(x)$ then $V\in N(x)$,
* $\bullet$
$\lozenge_{R}(U)=\\{x\in X\mid\exists y\in U,(x,y)\in R\\}$ is clopen, for any
clopen $U$,
* $\bullet$
$j(U)=\\{x\in X\mid U\in N(x)\\}$ is clopen, for any clopen upset $U$.
###### Theorem 5.3.
(KN-space Representation) For any strong algebra
$(\mathcal{A},\to_{\mathcal{A}})$, there exist a KN-space $\mathcal{X}$ and an
embedding $i:(\mathcal{A},\to_{\mathcal{A}})\to\mathfrak{A}(\mathcal{X})$.
Moreover, if $\mathcal{A}$ is open, then so is $\mathcal{X}$.
###### Proof.
By Lemma 5.1, w.l.o.g., we assume that $\mathcal{A}$ is a locale and that there are a join-preserving map $\nabla:\mathcal{A}\to\mathcal{A}$ and an order-preserving map $F:\mathcal{A}\to\mathcal{A}$ such that $\nabla c\wedge F(a)\leq F(b)$ iff
$c\leq a\to_{\mathcal{A}}b$, for any $a,b,c\in\mathcal{A}$. Set $(X,\leq)$ as
the Priestley space $(\mathcal{F}_{p}(\mathcal{A}),\subseteq)$ and define $R$
as $\\{(P,Q)\in X^{2}\mid\nabla[P]\subseteq Q\\}$. Set $i(a)=\\{P\in X\mid
a\in P\\}$ and define $N(P)=\\{U\in CU(X,\leq)\mid\exists
a\in\mathcal{A}\;[i(a)\subseteq U\;\text{and}\;F(a)\in P]\\}$. First, it is
clear that $R$ is compatible with the inclusion, $N(P)$ is upward closed on
the clopen upsets of $(X,\leq)$, for any $P\in X$ and $i$ is a bounded lattice
embedding. Secondly, observe that $i(a)\in N(P)$ iff $F(a)\in P$, for any
$a\in\mathcal{A}$ and $P\in\mathcal{F}_{p}(\mathcal{A})$. One direction is
obvious from the definition of $N$. For the other, if $i(a)\in N(P)$, then
there exists $b\in\mathcal{A}$ such that $F(b)\in P$ and $i(b)\subseteq i(a)$.
Hence, $b\leq a$ which implies $F(b)\leq F(a)$. As $P$ is a filter, $F(a)\in
P$. Now, to prove that $j$ maps the clopen upsets to the clopens, if $U$ is a
clopen upset, there is $a\in\mathcal{A}$ such that $U=i(a)$. Therefore,
$j(U)=j(i(a))=\\{P\in X\mid i(a)\in N(P)\\}=i(F(a))$ which is clopen.
For the closure of the clopens under $\Diamond_{R}$, we need an auxiliary
implication. First, notice that $\mathcal{A}$ is a locale and $\nabla$ is join
preserving. Hence, it has a right adjoint $\Delta$. Define
$a\to_{\nabla}b=\Delta(a\Rightarrow b)$, where $\Rightarrow$ is the Heyting
implication of $\mathcal{A}$. By Example 1.3, it is clear that $\to_{\nabla}$
is an implication. Also notice that $c\leq a\to_{\nabla}b$ iff $\nabla c\wedge
a\leq b$, for any $a,b,c\in\mathcal{A}$. The important property of
$\to_{\nabla}$ for us is that $\Diamond_{R}(i(a)\cap
i(b)^{c})=i(a\to_{\nabla}b)^{c}$, for any $a,b\in\mathcal{A}$. To prove it, we
have to show that $a\to_{\nabla}b\in P$ iff for any $Q\in X$ such that
$\nabla[P]\subseteq Q$, if $a\in Q$ then $b\in Q$. The forward direction is
easy, as $a\to_{\nabla}b\in P$ implies $\nabla(a\to_{\nabla}b)\in Q$. Hence,
if $a\in Q$, as $a\wedge\nabla(a\to_{\nabla}b)\leq b$, we reach $b\in Q$. For
the converse, if $a\to_{\nabla}b\notin P$, then define $G$ and $I$ as the
filter generated by $\nabla[P]\cup\\{a\\}$ and the ideal generated by $b$,
respectively. We claim that $I\cap G=\varnothing$. Otherwise, if $x\in
I\cap G$, there are $p_{1},\ldots,p_{n}\in P$ such that
$\bigwedge_{i=1}^{n}\nabla p_{i}\wedge a\leq x\leq b$. As $\nabla$ is order-
preserving, $\nabla(\bigwedge_{i=1}^{n}p_{i})\wedge a\leq b$ which implies
$\bigwedge_{i=1}^{n}p_{i}\leq a\to_{\nabla}b$. As $p_{i}$’s are in $P$ and $P$
is a filter, $a\to_{\nabla}b\in P$ which is a contradiction. Hence, $G\cap
I=\varnothing$. Now, by the prime filter theorem, there is a prime filter $Q$
such that $G\subseteq Q$ and $Q\cap I=\varnothing$. By the former, we see that
$\nabla[P]\subseteq Q$ and $a\in Q$. By the latter, we see $b\notin Q$.
Now, having the identity $\Diamond_{R}(i(a)\cap
i(b)^{c})=i(a\to_{\nabla}b)^{c}$, we prove the closure of the clopens of $X$
under $\Diamond_{R}$. Assume that $U$ is clopen. Then, there are finite sets
$\\{a_{1},\ldots,a_{n}\\},\\{b_{1},\ldots,b_{n}\\}\subseteq\mathcal{A}$ such
that $U=\bigcup_{r=1}^{n}i(a_{r})\cap i(b_{r})^{c}$. As $\Diamond_{R}$
preserves the unions, we have
$\Diamond_{R}(U)=\bigcup_{r=1}^{n}\Diamond_{R}(i(a_{r})\cap i(b_{r})^{c})$. As
$\Diamond_{R}(i(a_{r})\cap i(b_{r})^{c})=i(a_{r}\to_{\nabla}b_{r})^{c}$, we
conclude that each $\Diamond_{R}(i(a_{r})\cap i(b_{r})^{c})$, and hence $\Diamond_{R}(U)$, is clopen.
The only remaining part is showing
$i(a\to_{\mathcal{A}}b)=i(a)\to_{\mathcal{X}}i(b)$, for any
$a,b\in\mathcal{A}$. To prove $i(a\to_{\mathcal{A}}b)\subseteq
i(a)\to_{\mathcal{X}}i(b)$, assume $P\in i(a\to_{\mathcal{A}}b)$ which implies
$a\to_{\mathcal{A}}b\in P$. Assume $(P,Q)\in R$ and $i(a)\in N(Q)$, for some
$Q\in X$. By definition of $R$, we have $\nabla[P]\subseteq Q$. As we observed above, $i(a)\in N(Q)$ implies $F(a)\in Q$. To
prove $F(b)\in Q$, as $a\to_{\mathcal{A}}b\in P$, we have
$\nabla(a\to_{\mathcal{A}}b)\in\nabla[P]\subseteq Q$. As $Q$ is a filter and
$\nabla(a\to_{\mathcal{A}}b)\wedge F(a)\leq F(b)$, we reach $F(b)\in Q$. For
the converse, assume $a\to_{\mathcal{A}}b\notin P$. We intend to provide a
prime filter $Q\in X$ such that $\nabla[P]\subseteq Q$ and $F(a)\in Q$ but
$F(b)\notin Q$. Set $I$ and $G$ as the ideal generated by $F(b)$ and the
filter generated by $\nabla[P]\cup\\{F(a)\\}$, respectively. We claim $I\cap
G=\varnothing$. Otherwise, if $x\in I\cap G$, there are $p_{1},\ldots,p_{n}\in
P$ such that $\bigwedge_{i=1}^{n}\nabla p_{i}\wedge F(a)\leq x\leq F(b)$. As,
$\nabla$ is order-preserving, $\nabla(\bigwedge_{i=1}^{n}p_{i})\wedge F(a)\leq
F(b)$ which implies $\bigwedge_{i=1}^{n}p_{i}\leq a\to_{\mathcal{A}}b$. As
$p_{i}$’s are in $P$ and it is a filter, $a\to_{\mathcal{A}}b\in P$ which is a
contradiction. Hence, $G\cap I=\varnothing$. Now, by the prime filter theorem,
there is a prime filter $Q\in X$ such that $G\subseteq Q$ and $Q\cap
I=\varnothing$. By the former, we reach $\nabla[P]\subseteq Q$ and $F(a)\in
Q$. By the latter, we prove $F(b)\notin Q$.
Finally, note that as all clopen upsets are in the image of $i$, if
$\to_{\mathcal{A}}$ is open, for any clopen upsets $U$ and $V$, there are
$a,b\in\mathcal{A}$ such that $U=i(a)$ and $V=i(b)$. Therefore, as $a\leq
b\to_{\mathcal{A}}a\wedge b$ and $i$ preserves the meet and the implication,
we reach $U\subseteq V\to_{\mathcal{X}}U\cap V$. Hence, for any $P,Q\in X$ and
any clopen upsets $U$ and $V$, if $P\in U$, $(P,Q)\in R$ and $V\in N(Q)$, then
$U\cap V\in N(Q)$. ∎
###### Corollary 5.4.
For any strong algebra $(\mathcal{A},\to_{\mathcal{A}})$, there exist a full
KN-frame $\mathcal{K}$ and an embedding
$i:(\mathcal{A},\to_{\mathcal{A}})\to\mathfrak{A}(\mathcal{K})$.
###### Proof.
It is an easy consequence of Theorem 5.3 and Remark 3.7. ∎
Acknowledgments: The support of the FWF project P 33548 and of the Czech Academy of Sciences (RVO 67985840) is gratefully acknowledged.
|
# Balanced Measures on Compact Median Algebras
Uri Bader and Aviv Taller
###### Abstract.
We initiate a systematic investigation of group actions on compact median
algebras via the corresponding dynamics on their spaces of measures. We show
that a probability measure which is invariant under a natural push forward
operation must be a uniform measure on a cube and use this to show that every
amenable group action on a locally convex compact median algebra fixes a sub-cube.
## 1\. Introduction
Median algebras form a common generalization of dendrites and distributive
lattices. While early investigations of these objects mainly dealt with
combinatorial aspects as part of order theory, recently they obtained much
attention due to the observation that CAT(0) cube complexes carry a natural
median space structure, that is, compatible metric and median algebra
structures. A powerful tool in the investigation of a CAT(0) cube complex is
provided by embedding it in a compact median algebra, namely its Roller
compactification, see [Rol98] and also [Fio20] for a further discussion. This
brings to the fore the class of compact median algebras.
As common for dynamical systems, we study compact median algebras by
investigating their invariant measures. Let $M$ be a second countable compact
median algebra, endowed with a continuous median operator $m:M^{3}\to M$. We
study the operator $\Phi$, called self-median operator, defined on the space
of Borel probability measures of $M$ by the formula
$\Phi:\operatorname{Prob}(M)\to\operatorname{Prob}(M),\quad\Phi(\mu)=m_{*}(\mu^{3}),$
along with its space of invariant measures, $\operatorname{Prob}(M)^{\Phi}$,
whose elements we call balanced measures.
A basic example of a second countable compact median algebra is the cube
$\\{0,1\\}^{I}$, endowed with the product topology and median structure, where
$I$ is a countable index set (in this paper, a countable set may be finite, in particular empty). One verifies easily that the uniform measure
$\lambda\in\operatorname{Prob}(\\{0,1\\}^{I})$ is balanced (see the beginning
of §5 for details). Clearly, if $f:N\to M$ is a continuous morphism of second
countable compact median algebras and $\mu$ is a balanced measure on $N$ then
$f_{*}(\mu)$ is a balanced measure on $M$. In particular, for every continuous
median algebra morphism $f:\\{0,1\\}^{I}\to M$, the push forward of the
uniform measure, $f_{*}(\lambda)$, is a balanced measure on $M$. In case $f$
is injective, we say that $f(\\{0,1\\}^{I})$ is a cube in $M$ and
$f_{*}(\lambda)$ is a cubical measure on $M$.
Our main theorem is the following classification result which applies to the
class of second countable, locally open-convex, compact (for short, sclocc) median algebras (see Definitions 3.1 and 4.1).
###### Theorem A.
Every balanced measure on a sclocc median algebra is cubical.
Theorem A is a direct corollary of the following two propositions.
###### Proposition 1.1.
The support of every balanced measure on a sclocc median algebra is a cube.
###### Proposition 1.2.
The uniform measure is the unique fully supported balanced measure on a cube.
Proposition 1.1 has the following interesting corollary.
###### Corollary B.
Assume that $G$ is a locally compact amenable group which acts on a sclocc
median algebra $M$ by continuous automorphisms. Then $M$ contains a
$G$-invariant cube.
Indeed, by the amenability of $G$, $\operatorname{Prob}(M)^{G}$ is non-empty
and clearly $\Phi$-invariant, thus it contains a balanced measure $\mu$ by the
Tychonoff Fixed Point Theorem and the support of $\mu$ is a $G$-invariant cube
by Proposition 1.1.
Note that one cannot expect a $G$-invariant cube to be pointwise fixed.
Indeed, the group $(\mathbb{Z}/2\mathbb{Z})^{I}$ acts transitively on the cube
$\\{0,1\\}^{I}$ by median automorphisms (see the discussion in the beginning of
§5).
The requirement that a topological median algebra be sclocc is satisfied by
the Roller compactification of a (possibly infinite dimensional) CAT(0) cube
complex with countably many vertices, as well as a second countable median
space of finite rank, see Example 4.2. For further discussions on CAT(0) cube
complexes and median spaces, see [Sag95, CFI16, Bow13, Bow20, CDH10, Fio20]
and the references therein.
We note that if $M$ is not assumed second countable then the median operator
$m$ might not be Borel (with respect to the product of the Borel
$\sigma$-algebras on $M^{3}$) even though it is continuous, thus we cannot
define the self-median operator $\Phi$ in this generality. Consider the
interval $[0,\omega_{1}]$, where $\omega_{1}$ is the first uncountable
ordinal, and endow it with the order topology and median structure. Even
though this space is not second countable, one verifies easily that $m$ is
Borel, thus $\Phi$ is defined in this case. Then the Dieudonné measure, which
assigns 1 to Borel sets containing an unbounded closed subset and 0 to their
complements, is a balanced measure which has an empty support. Hereafter, when
discussing measures on compact spaces we will assume that the underlying space
is second countable, thus every measure has a non-empty support.
The structure of this note is as follows. After reviewing median algebras in
the next section, we will focus our attention on the theory of compact median
algebras in §3. We will then prove Proposition 1.1 in §4 and Proposition 1.2
in §5.
Acknowledgments. The authors thank Elia Fioravanti for sharing with them the
observation that a finite rank median space is locally open-convex and
referring them to [Bow20]. The authors wish to thank the anonymous referee for
many valuable comments. This research is supported by ISF Moked 713510 grant
number 2919/19.
## 2\. A review of Median Algebras
A median algebra is a set $M$, equipped with a ternary operation
$m:M\times M\times M\rightarrow M$, called the median operator, that has the
following properties:
$\forall x,y,z,u,v\in M$
1. (Med 1)
$m(x,y,z)=m(x,z,y)=m(y,x,z)$
2. (Med 2)
$m(x,x,y)=x$
3. (Med 3)
$m(m(x,y,z),u,v)=m(x,m(y,u,v),m(z,u,v))$
A median morphism $\varphi:M\rightarrow N$ between two median algebras is a map such that $\varphi\circ m=m\circ(\varphi\times\varphi\times\varphi)$ (notice that we often abuse notation, writing $m$ for the median operators of different objects, as we did here).
A basic example of a median algebra is $\\{0,1\\}$, endowed with the standard
median operation. The category of median algebra has arbitrary products, given
by the Cartesian products of the underlying sets and the coordinate-wise
operations. In particular, for any index set $I$, $\\{0,1\\}^{I}$ is a median
algebra. A median algebra that is isomorphic to $\\{0,1\\}^{I}$ for some $I$ is
called a cube. A cube in a median algebra $M$ means a median subalgebra of $M$
that is a cube.
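On $\\{0,1\\}$ the standard median operation is the majority function, $m(x,y,z)=(x\wedge y)\vee(x\wedge z)\vee(y\wedge z)$, and on a cube it acts coordinatewise. The following minimal Python sketch (an illustration only; the function names are ours) verifies the axioms (Med 1)–(Med 3) exhaustively on the finite cube $\\{0,1\\}^{3}$:

```python
from itertools import product

def m(x, y, z):
    # coordinatewise majority: the median operator on a cube {0,1}^I
    return tuple((a & b) | (a & c) | (b & c) for a, b, c in zip(x, y, z))

cube = list(product((0, 1), repeat=3))  # the cube {0,1}^3

for x, y, z in product(cube, repeat=3):
    assert m(x, y, z) == m(x, z, y) == m(y, x, z)  # (Med 1)
    assert m(x, x, y) == x                         # (Med 2)

for x, y, z, u, v in product(cube, repeat=5):
    assert m(m(x, y, z), u, v) == m(x, m(y, u, v), m(z, u, v))  # (Med 3)

print("(Med 1)-(Med 3) verified on {0,1}^3")
```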
For the rest of this section we fix a median algebra $M$. The interval of two
elements $x,y\in M$ is the subset $[x,y]=\\{u\in M\mid m(x,y,u)=u\\}$.
###### Lemma 2.1 (statements (Int 3) and (Int 6) at the beginning of section
2 in [Rol98]).
For every $x,y,z\in M$,
1. (1)
$y\in[x,z]\quad\implies\quad[x,y]\subset[x,z].$
2. (2)
$y\in[x,z]\quad\iff\quad[x,y]\cap[y,z]=\\{y\\}.$
A subset $C\subset M$ is called convex if for every $x,y\in C$, $[x,y]\subset C$. An important property of convex sets is what is usually known as Helly’s Theorem.
###### Lemma 2.2 ([Rol98, Theorem 2.2]).
If $C_{1},\ldots,C_{n}$ are convex and $C_{i}\cap C_{j}\neq\varnothing$ for every $i\neq j$, then also $\bigcap_{i=1}^{n}C_{i}\neq\varnothing$.
Let $C\subset M$ be a convex set and $x\in M$. We say that $y\in C$ is the
gate for $x$ in $C$, if $y\in[x,z]$, for every $z\in C$. It follows from Lemma 2.1(2) that the gate, if it exists, is unique. We say that $C$ is gate-
convex if for every $x\in M$, there exists a gate in $C$. In this case, we
define the gate-projection to $C$, $\pi_{C}:M\rightarrow C$, where
$\pi_{C}(x)$ is the gate of $x$ in $C$. An example of a gate-convex set is the
interval $[x,y]$ with gate-projection $\pi_{[x,y]}(z)=m(x,y,z)$, for every
$x,y\in M$. Notice that as a result of Lemma 2.1(2), gate-convex sets are
always convex.
By [Fio20, Proposition 2.1], if $\phi:M\rightarrow M$ is a gate-projection,
then for every $x,y,z\in M$ we have
$\phi(m(x,y,z))=m(\phi(x),\phi(y),\phi(z))$. In particular, gate-projections
map intervals to intervals. In addition, the following is true.
###### Lemma 2.3 ([Fio20, Lemma 2.2]).
1. (1)
If $C_{1}\subset M$ is convex and $C_{2}\subset M$ is gate-convex, the
projection $\pi_{C_{2}}(C_{1})$ is convex. If moreover, $C_{1}\cap
C_{2}\neq\varnothing$, we have $\pi_{C_{2}}(C_{1})=C_{1}\cap C_{2}$.
2. (2)
If $C_{1},C_{2}\subset M$ are gate-convex and $C_{1}\cap
C_{2}\neq\varnothing$, then $C_{1}\cap C_{2}$ is gate-convex with gate-
projection $\pi_{C_{2}}\circ\pi_{C_{1}}=\pi_{C_{1}}\circ\pi_{C_{2}}$. In
particular, if $C_{2}\subset C_{1}$, then
$\pi_{C_{2}}=\pi_{C_{2}}\circ\pi_{C_{1}}$.
Given a disjoint partition of a median algebra $M$ into non-empty convex
subsets, $M=\mathfrak{h}\sqcup\mathfrak{h}^{*}$, we say that $\mathfrak{h}$
and $\mathfrak{h}^{*}$ are complementary half-spaces in $M$ and we regard the
unordered pair $\mathfrak{w}=\\{\mathfrak{h},\mathfrak{h}^{*}\\}$ as a wall in
$M$. For disjoint subsets $A,B\subset M$, we say that $\mathfrak{h}$ separates
$A$ from $B$ if $A\subset\mathfrak{h}$ and $B\subset\mathfrak{h}^{*}$. We
denote by $\Delta(A,B)$ the collection of half-spaces that separate $A$ from
$B$. By [Rol98, Theorem 2.8], if $A$ and $B$ are convex then $\Delta(A,B)$ is
not empty. Given a wall $\mathfrak{w}$, we say that $\mathfrak{w}$ separates
$A$ and $B$ if $\mathfrak{w}\cap\Delta(A,B)$ is not empty. For $a\notin B$ we
denote $\Delta(a,B):=\Delta(\\{a\\},B)$.
###### Lemma 2.4.
For disjoint non-empty subsets $A,B\subset M$, if $A$ is gate-convex then
there exists $a\in A$ such that $\Delta(A,B)=\Delta(a,B)$.
###### Proof.
Fix $b\in B$ and let $a=\pi_{A}(b)$. Clearly, $\Delta(A,B)\subset\Delta(a,B)$.
We will show the other inclusion. Fix $\mathfrak{h}\in\Delta(a,B)$, and let
$a^{\prime}\in A\cap\mathfrak{h}^{*}$. By the definition of the gate-
projection and the fact that $\mathfrak{h}^{*}$ is convex, we have
$a\in[a^{\prime},b]\subset\mathfrak{h}^{*}$, contradicting $a\in\mathfrak{h}$.
Thus, indeed, $A\subset\mathfrak{h}$ and we conclude that
$\mathfrak{h}\in\Delta(A,B)$. ∎
We denote by $\mathscr{H}$ and by $\mathscr{W}$ the collections of all half-
spaces and all walls in $M$, respectively. There is a natural map
$\mathscr{H}\to\mathscr{W}$, given by
$\mathfrak{h}\mapsto\\{\mathfrak{h},\mathfrak{h}^{*}\\}$. Fix a section
$\sigma:\mathscr{W}\to\mathscr{H}$, that is, a half-space representation for
every wall in $M$. We denote by $\chi_{A}$ the characteristic function of a
set A. For each subset $W\subset\mathscr{W}$ we get a median morphism
$\iota_{W}:M\to\\{0,1\\}^{W},\quad
x\mapsto(\chi_{\sigma(\mathfrak{w})}(x))_{\mathfrak{w}}.$
We say that a set of walls $W$ is separating if $\iota_{W}$ is injective, that
is, the walls in $W$ separate the points of $M$. We say that $W$ is transverse
if $\iota_{W}$ is surjective. Note that these properties are independent of
the choice of the section $\sigma$. Two distinct walls
$\mathfrak{w}_{1},\mathfrak{w}_{2}\in\mathscr{W}$ are said to be transverse if
$\\{\mathfrak{w}_{1},\mathfrak{w}_{2}\\}$ is transverse. We record the
following lemma, whose proof is an immediate application of Helly’s Theorem,
Lemma 2.2.
###### Lemma 2.5.
If $W\subset\mathscr{W}$ is finite then it is transverse if and only if every
pair of distinct walls in $W$ is transverse.
## 3\. Compact Median Algebras
A topological median algebra is a median algebra $M$ endowed with a topology
for which the median operator is continuous. In this note we will always
assume that topologies are Hausdorff.
###### Definition 3.1.
A topological median algebra $M$ is said to be locally convex if each of its
points has a basis of convex neighborhoods and it is said to be locally open-
convex if each of its points has a basis of open and convex neighborhoods.
###### Example 3.2.
1. (1)
The class of locally open-convex median algebras is closed under taking
arbitrary products, hence for every index set I, the cube $\\{0,1\\}^{I}$ is a
locally open-convex compact median algebra, with respect to the product
topology and product median structure.
2. (2)
The class of locally open-convex median algebras is closed under taking median
subalgebras. Therefore, any closed median subalgebra of a cube, such as the
Roller compactification of a (possibly infinite dimensional) CAT(0) cube
complex, is locally open-convex and compact.
3. (3)
By [Bow20, Lemma 3.1 and Lemma 3.2], every median space $(X,\rho)$ admits a
pseudo-metric $\sigma$ which open balls are convex. Assume $X$ has a finite
rank. Then by [Bow20, Lemma 6.2], $\sigma$ is bilipschitz equivalent to
$\rho$, thus it separates the points of $X$. It follows that $X$, as well as
all its intervals, are locally open-convex. By [Fio20, Corollary 2.20], the
completion of $X$ has compact intervals and its Roller compactification
$\bar{X}$ is defined in [Fio20, Definition 4.13]. By [Fio20, Definition 4.1],
$\bar{X}$ is a median subalgebra of the product of all intervals in $X$. As in
the previous example, it follows that $\bar{X}$ is locally open-convex and
compact.
For the rest of the section we let $M$ be a locally open-convex and compact
topological median algebra.
We note that the closure of a convex set in $M$ is convex. Indeed, if
$C\subset M$ is convex, we have $C\times C\times M\subset m^{-1}(\bar{C})$
and, as $m^{-1}(\bar{C})$ is closed, we have $\bar{C}\times\bar{C}\times
M\subset m^{-1}(\bar{C})$, thus also $\bar{C}$ is convex. Since $M$ is a
normal topological space, it follows that every point in $M$ also has a basis
of closed convex neighborhoods. We also note that, by [Fio20, Lemma 2.6 and
Lemma 2.7], a convex set in $M$ is gate-convex if and only if it is closed and
the gate-projections to the gate-convex sets are continuous. In particular,
every interval is closed (this is true in fact in every Hausdorff topological
median algebra).
A half-space $\mathfrak{h}$ in $M$ is said to be admissible if it is open and
$\mathfrak{h}^{*}$ has a non-empty interior. If further $\mathfrak{h}^{*}$ is
open, we say that $\mathfrak{h}$ is clopen. We denote by $\mathscr{H}^{\circ}$
the collection of all clopen half-spaces and we denote by
$\mathscr{W}^{\circ}$ the collection of all corresponding walls. For a subset
$W\subset\mathscr{W}^{\circ}$, the median morphism
$\iota_{W}:M\to\\{0,1\\}^{W}$ is clearly continuous. By compactness, we get
the following upgrade of Lemma 2.5.
###### Lemma 3.3.
For every $W\subset\mathscr{W}^{\circ}$, $W$ is transverse if and only if
every pair of distinct walls in $W$ is transverse. If moreover $W$ is
separating then $\iota_{W}$ is an isomorphism of topological median algebras,
thus $M$ is a cube.
###### Proof.
The first part follows immediately from Lemma 2.5 by the compactness of $M$.
If further $W$ is separating then $\iota_{W}$ is a continuous isomorphism of
median algebras and it is closed, as $M$ is compact, thus a homeomorphism. ∎
Since in a cube $\mathscr{W}^{\circ}$ is clearly transverse and separating, we
immediately get the following corollary of Lemma 3.3.
###### Corollary 3.4.
$M$ is a cube if and only if $\mathscr{W}^{\circ}$ is separating and every
pair of distinct walls in $\mathscr{W}^{\circ}$ is transverse.
We will use Corollary 3.4 in the next section. In order to apply it we will
use the existence of enough open and admissible half-spaces. These are
provided by the next two results.
###### Proposition 3.5.
Let $U,C\subset M$ be two disjoint non-empty convex sets such that $U$ is open
and $C$ is closed. Then there exists an open half-space
$\mathfrak{h}\in\Delta(U,C)$.
###### Proof.
By Lemma 2.4, there exists a point $c\in C$ such that
$\Delta(U,c)=\Delta(U,C)$. We fix such $c$ and argue to show that there exists
an open half-space $\mathfrak{h}\in\Delta(U,c)$.
We order the collection
$P=\\{V\subset M\mid V\mbox{ is an open-convex set, }U\subset V\mbox{ and
}c\notin V\\}$
by inclusion and note that it is non-empty, as $U\in P$. By Zorn’s Lemma, $P$
has a maximal element, as the union of any chain in $P$ forms an upper bound.
We fix such a maximal element $\mathfrak{h}$ and denote
$\mathfrak{h}^{*}=M\setminus\mathfrak{h}$. We argue to show that
$\mathfrak{h}^{*}$ is convex, thus $\mathfrak{h}$ is a half-space.
Assume for the sake of contradiction that $\mathfrak{h}^{*}$ is not convex,
that is, there exist $x,y\in\mathfrak{h}^{*}$ such that
$[x,y]\cap\mathfrak{h}\neq\varnothing$. Fix $z\in[x,y]\cap\mathfrak{h}$ and
set $\omega=\pi_{[x,z]}(c)=m(x,z,c)$. In each of the two cases, $\omega\in\mathfrak{h}^{*}$ and $\omega\in\mathfrak{h}$, we will derive a contradiction to the maximality of $\mathfrak{h}$ by exhibiting some $V\in P$ with $\mathfrak{h}\subsetneq V$.
We assume first that $\omega\in\mathfrak{h}^{*}$ and denote
$V\coloneqq\pi_{[\omega,z]}^{-1}(\mathfrak{h}\cap[\omega,z])$. Recalling that
$\pi_{[\omega,z]}$ is a continuous median morphism, $V$ is open and convex.
Applying Lemma 2.3(1) for $C_{1}=\mathfrak{h}$ and $C_{2}=[\omega,z]$ we
conclude that $\mathfrak{h}\subset V$. In particular, $U\subset V$. By Lemma
2.1(1), since $\omega\in[z,x]$, we have $[z,\omega]\subset[z,x]$ and we get by
Lemma 2.3(2),
$\pi_{[z,\omega]}(c)=\pi_{[z,\omega]}\circ\pi_{[z,x]}(c)=\pi_{[z,\omega]}(\omega)=\omega$.
Using our assumption that $\omega\in\mathfrak{h}^{*}$, it follows that
$c\notin V$, thus $V\in P$. Similarly, we use $z\in[x,y]$ to get
$\pi_{[z,\omega]}(y)=\pi_{[z,\omega]}\circ\pi_{[z,x]}(y)=\pi_{[z,\omega]}(z)=z\in\mathfrak{h}\cap[\omega,z]$.
We conclude that $y\in V$, therefore $\mathfrak{h}\subsetneq V$, contradicting
the maximality of $\mathfrak{h}$.
We assume now that $\omega\in\mathfrak{h}$ and denote
$V\coloneqq\pi_{[\omega,c]}^{-1}(\mathfrak{h}\cap[\omega,c])$. Again, $V$ is
clearly open and convex. Applying now Lemma 2.3(1) for $C_{1}=\mathfrak{h}$
and $C_{2}=[\omega,c]$ we conclude that $\mathfrak{h}\subset V$. Since
$\pi_{[\omega,c]}(c)=c\notin\mathfrak{h}$ we have that $c\notin V$, thus $V\in
P$. Again, by Lemma 2.1(1), as $\omega\in[z,c]$, we have
$[\omega,c]\subset[z,c]$ and using Lemma 2.3(2) we get this time
$\pi_{[\omega,c]}(x)=\pi_{[\omega,c]}\circ\pi_{[z,c]}(x)=\pi_{[\omega,c]}(\omega)=\omega$,
which is in $\mathfrak{h}\cap[\omega,c]$ by our assumption that
$\omega\in\mathfrak{h}$. We conclude that $x\in V$, therefore
$\mathfrak{h}\subsetneq V$, contradicting again the maximality of
$\mathfrak{h}$. ∎
###### Proposition 3.6.
Let $C,C^{\prime}\subset M$ be two non-empty disjoint closed convex sets. Then
there exists an admissible half-space $\mathfrak{h}\in\Delta(C,C^{\prime})$.
###### Proof.
Using Lemma 2.4 twice, we find $c\in C$ and $c^{\prime}\in C^{\prime}$ such that
$\Delta(c,c^{\prime})=\Delta(C,C^{\prime})$. Using the fact that $M$ is
normal, we find open-convex neighborhoods $c\in U$ and $c^{\prime}\in V$
having disjoint closures. By Proposition 3.5, there exists an open half-space
$\mathfrak{h}\in\Delta(U,\bar{V})$. As $V\subset\mathfrak{h}^{*}$,
$\mathfrak{h}$ is admissible. We have
$\mathfrak{h}\in\Delta(U,\bar{V})\subset\Delta(c,c^{\prime})=\Delta(C,C^{\prime})$,
thus indeed $\mathfrak{h}\in\Delta(C,C^{\prime})$. ∎
## 4\. The support of a balanced measure
In this section we study the support of balanced measures. Recall that the
support of a measure is the minimal closed set having a null complement and
that every Borel measure on a compact second countable topological space has a
non-empty support.
###### Definition 4.1.
A topological median algebra $M$ is said to be sclocc if it is second
countable, locally open-convex and compact.
In view of Example 3.2, we get the following.
###### Example 4.2.
1. (1)
For a countable index set I, the cube $\\{0,1\\}^{I}$ is sclocc.
2. (2)
The Roller compactification of a CAT(0) cube complex with countably many
vertices is sclocc.
3. (3)
By [Fio20, Theorem 4.14(4)], the Roller compactification of a second
countable, finite rank median space is sclocc.
For the rest of the section we let $M$ be a sclocc median algebra.
We let $\operatorname{Prob}(M)$ be the space of probability Borel measures on
$M$ and recall the definition of the self-median operator
$\Phi:\operatorname{Prob}(M)\to\operatorname{Prob}(M),\quad\Phi(\mu)=m_{*}(\mu^{3}),$
whose fixed points are the balanced measures on $M$. The following observation
is trivial, but useful.
###### Lemma 4.3.
For any Borel median algebra morphism $M\to N$, the push forward map
$\operatorname{Prob}(M)\to\operatorname{Prob}(N)$ commutes with the
corresponding self-median operators. In particular, the image of a balanced
measure on $M$ is a balanced measure on $N$.
Another basic lemma is the following.
###### Lemma 4.4.
The balanced measures on $\\{0,1\\}$ are exactly $\delta_{0}$, $\delta_{1}$
and $\frac{1}{2}\delta_{0}+\frac{1}{2}\delta_{1}$.
###### Proof.
Let $\mu$ be a balanced measure on $\\{0,1\\}$ and denote $x=\mu(\\{0\\})$. An
easy calculation gives the equation
$x=\mu(\\{0\\})=m_{*}(\mu^{3})(\\{0\\})=x^{3}+3x^{2}(1-x)$ whose solutions are
exactly $0,1$ and $\frac{1}{2}$. ∎
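The solutions can also be read off symbolically; a one-line sympy sketch (assuming sympy is available):

```python
from sympy import symbols, solve, Eq

x = symbols('x')
# fixed-point equation for the mass at 0 under the self-median operator
print(solve(Eq(x, x**3 + 3*x**2*(1 - x)), x))  # -> [0, 1/2, 1]
```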
Since every Borel half-space gives a Borel median algebra morphism to
$\\{0,1\\}$ (namely, the characteristic map of the half-space), we get the
following.
###### Corollary 4.5.
For every balanced measure on a median algebra, the measure of any Borel half-space is either $0$, $1$ or $\frac{1}{2}$.
For fully supported balanced measures and admissible half-spaces, much more
can be said.
###### Lemma 4.6.
Let $\mu$ be a fully supported balanced measure on the sclocc median algebra
$M$ and let $\mathfrak{h}$ be an admissible half-space. Then $\mathfrak{h}$ is
clopen and $\mu(\mathfrak{h})=\frac{1}{2}$. If $\mathfrak{f}$ is a clopen
half-space corresponding to another wall in $M$ then $\mathfrak{f}$ and
$\mathfrak{h}$ are transverse.
###### Proof.
That $\mu(\mathfrak{h})=\frac{1}{2}$ follows immediately from Corollary 4.5,
by the assumptions that $\mu$ is fully supported and balanced, as by the
admissibility of $\mathfrak{h}$, both $\mathfrak{h}$ and $\mathfrak{h}^{*}$
have positive measures. Fix $x\in\mathfrak{h}$ and use Proposition 3.6 to find
an admissible half-space $\mathfrak{h}^{\prime}\in\Delta(\mathfrak{h}^{*},x)$.
Then also $\mu(\mathfrak{h}^{\prime})=\frac{1}{2}$ and we have
$M=\mathfrak{h}\cup\mathfrak{h}^{\prime}$. It follows that
$\mu(\mathfrak{h}\cap\mathfrak{h}^{\prime})=0$. Since
$\mathfrak{h}\cap\mathfrak{h}^{\prime}$ is open and $\mu$ is fully supported,
we conclude that $\mathfrak{h}\cap\mathfrak{h}^{\prime}=\varnothing$, thus
$\mathfrak{h}^{\prime}=\mathfrak{h}^{*}$ and indeed, $\mathfrak{h}$ is clopen.
We now let $\mathfrak{f}$ be a clopen half-space which is not transverse to
$\mathfrak{h}$ and show that $\mathfrak{f}$ and $\mathfrak{h}$ determine the
same wall in $M$. Without loss of generality we assume
$\mathfrak{f}\cap\mathfrak{h}^{*}=\varnothing$. We have
$\mu(\mathfrak{f}\cap\mathfrak{h})=\mu(\mathfrak{f})=\frac{1}{2}$ and as
$\mu(\mathfrak{h})=\frac{1}{2}$ we get
$\mu(\mathfrak{f}^{*}\cap\mathfrak{h})=0$. Since
$\mathfrak{f}^{*}\cap\mathfrak{h}$ is open and $\mu$ is fully supported, we
conclude that $\mathfrak{f}^{*}\cap\mathfrak{h}=\varnothing$, thus
$\mathfrak{f}=\mathfrak{h}$. This finishes the proof. ∎
We are now ready to prove that the support of a balanced measure is a cube.
###### Proof of Proposition 1.1.
We let $\mu$ be a balanced measure on $M$. We note that the support of $\mu$,
$\operatorname{supp}(\mu)$, is a closed subalgebra of $M$, and thus, a sclocc
median algebra. Indeed, fixing $x,y,z\in\operatorname{supp}(\mu)$, for every
open neighborhood $O$ of $m(x,y,z)$, there are open neighborhoods $x\in U$,
$y\in V$ and $z\in W$ such that $(x,y,z)\in U\times V\times W\subset
m^{-1}(O)$, and we get $\mu(O)\geq\mu(U)\mu(V)\mu(W)>0$, therefore
$m(x,y,z)\in\operatorname{supp}(\mu)$. By restricting to its support, we
assume as we may that $\mu$ is a fully supported balanced measure on $M$ and
argue to show that $M$ is a cube.
By Corollary 3.4, we need to show that $\mathscr{W}^{\circ}$ is separating and
every pair of distinct walls in $\mathscr{W}^{\circ}$ is transverse.
Proposition 3.6 guarantees that the collection of admissible half-spaces is
separating and, by the first part of Lemma 4.6, this collection coincides with
$\mathscr{W}^{\circ}$. Thus, we get that $\mathscr{W}^{\circ}$ is separating.
By the last part of Lemma 4.6 we also get that every pair of distinct walls in
$\mathscr{W}^{\circ}$ is transverse. Thus, indeed, $M$ is a cube. ∎
## 5\. Balanced measures on cubes
Fix a countable set $I$ and consider the cube $M=\\{0,1\\}^{I}$. It is
convenient to identify $\\{0,1\\}$ with the group $\mathbb{Z}/2\mathbb{Z}$ and
$M$ with the compact group $(\mathbb{Z}/2\mathbb{Z})^{I}$. We denote by
$\lambda$ the Haar measure on $M$,
$\lambda=(\frac{1}{2}\delta_{0}+\frac{1}{2}\delta_{1})^{I}$. It is fully
supported and balanced. To see that it is indeed balanced, recall that
$\lambda$ is the unique probability Borel measure on $M$ that is invariant
under translations, but by Lemma 4.3, also $\Phi(\lambda)$ is invariant under
translations, as translations form median algebra automorphisms.
This section is devoted to the proof of Proposition 1.2, which claims that
$\lambda$ is the unique fully supported balanced measure on $M$. We observe
that it is enough to prove this for finite cubes, that is, in case
$|I|<\infty$, as every cube is the inverse limit of its finite coordinate
projections, which are surjective median algebra morphisms and thus take fully supported balanced measures to fully supported balanced measures by Lemma 4.3.
Dealing with measures on finite sets, we will identify a measure $\mu$ with
the function $x\mapsto\mu(\\{x\\})$, writing $\mu(x):=\mu(\\{x\\})$. For our
proof, it is beneficial to study a one-parameter class of measures on $M$,
namely, the measures that are invariant under translations by a certain index
two subgroup $K_{0}<M$.
###### Lemma 5.1.
Fix a natural number $n$ and let $M=(\mathbb{Z}/2\mathbb{Z})^{n}$. Consider
the group homomorphism
$\rho:M\to\mathbb{Z}/2\mathbb{Z},\quad(x_{1},\dots,x_{n})\mapsto\sum x_{i},$
denote its kernel by $K_{0}$ and denote the non-trivial coset of $K_{0}$ by
$K_{1}$. Then the map
$[0,1]\to\operatorname{Prob}(M)^{K_{0}},\quad t\mapsto\mu_{t}=\frac{t}{2^{n-1}}\cdot\chi_{K_{0}}+\frac{1-t}{2^{n-1}}\cdot\chi_{K_{1}}$
is bijective and the cubic polynomial
$\phi(t)=t+(-1)^{n}2^{2-n}(t-\frac{1}{2})(t^{2}-t+\frac{1}{4}+(-1)^{n}\frac{3}{4}-(-1)^{n}2^{n-2})$
satisfies the relation, $\Phi(\mu_{t})=\mu_{\phi(t)}$ for every $t\in[0,1]$.
We now provide the proof of Proposition 1.2, based on Lemma 5.1, whose proof we postpone until later.
###### Proof of Proposition 1.2.
We argue to show that $\lambda$ is the unique fully supported balanced measure
on $M=\\{0,1\\}^{I}$. As mentioned above, we may assume that $I$ is finite. We
do so and prove the claim by an induction on $|I|$. The base case $|I|=0$ is
trivial. We note also that the case $|I|=1$ is proven in Lemma 4.4. We now fix
a natural $n>1$, assume $|I|=n$ and that the proposition is known for every
cube $\\{0,1\\}^{J}$ with $|J|<n$. We let $\mu$ be a balanced measure on
$\\{0,1\\}^{I}$ and we argue to prove that for every $x\in\\{0,1\\}^{I}$,
$\mu(x)=\lambda(x)=1/2^{n}$.
We denote by $\\{e_{i}\mid i\in I\\}$ the standard generating set of $M$ and
for every $i\in I$ we consider the obvious projection
$\pi_{i}:M\to\\{0,1\\}^{I\setminus\\{i\\}}$. Fixing $i\in I$ and noticing that
$\pi_{i}$ is a homomorphism of groups, we get by Lemma 4.3 that the push
forward of $\mu$ by $\pi_{i}$ is balanced, thus we conclude by our induction
hypothesis that for every $x\in M$, $\mu(x)+\mu(x+e_{i})=1/2^{n-1}$. It
follows that for every $i,j\in I$, $\mu(x)=\mu(x+e_{i}+e_{j})$. We denote by
$K_{0}$ the group generated by the set $\\{e_{i}+e_{j}\mid i,j\in I\\}$ and
conclude that $\mu$ is $K_{0}$ invariant.
Noticing that $K_{0}<M$ coincides with the subgroup considered in Lemma 5.1,
it follows that $\mu=\mu_{t}$ for some $t\in[0,1]$. Since $\mu$ is fully
supported, we in fact get that $t\in(0,1)$. Since $\mu$ is balanced, we have
by Lemma 5.1, $\mu_{t}=\Phi(\mu_{t})=\mu_{\phi(t)}$, thus $\phi(t)=t$. We
conclude that $t$ is a root of the polynomial
$(-1)^{n}2^{n-2}(\phi(t)-t)=(t-\frac{1}{2})(t^{2}-t+\frac{1}{4}+(-1)^{n}\frac{3}{4}-(-1)^{n}2^{n-2}).$
We observe that $t=1/2$ is the only root of this polynomial in the region
$(0,1)$. Indeed, for $c\in\mathbb{R}$, the polynomial $t^{2}-t+c$ has a root
in $(0,1)$ iff $0<c\leq 1/4$, which is not satisfied for
$c=\frac{1}{4}+(-1)^{n}\frac{3}{4}-(-1)^{n}2^{n-2}$, since for even $n$ we
have $c\leq 0$ and for odd $n>1$ we have $c\geq 3/2$. We conclude that
$\mu=\mu_{\frac{1}{2}}$, thus indeed, for every $x\in\\{0,1\\}^{I}$,
$\mu(x)=1/2^{n}$. ∎
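The root analysis in the last step can be confirmed independently; a small sympy sketch (an illustration only) checking that $t=1/2$ is the only root of $(t-\frac{1}{2})(t^{2}-t+c)$ in $(0,1)$ for several values of $n$:

```python
from sympy import symbols, Rational, solveset, Interval, FiniteSet

t = symbols('t', real=True)
for n in range(2, 9):
    s = (-1)**n
    c = Rational(1, 4) + s * Rational(3, 4) - s * Rational(2)**(n - 2)
    p = (t - Rational(1, 2)) * (t**2 - t + c)
    roots = solveset(p, t, domain=Interval.open(0, 1))
    assert roots == FiniteSet(Rational(1, 2)), (n, roots)

print("t = 1/2 is the only root in (0,1) for n = 2,...,8")
```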
###### Proof of Lemma 5.1.
The map $t\mapsto\mu_{t}$ is clearly injective and it is onto
$\operatorname{Prob}(M)^{K_{0}}$ as every measure in
$\operatorname{Prob}(M)^{K_{0}}$ (considered as a function on $M$) is constant
on the fibers of $\rho$. Since translations on $M$ are median algebra
automorphisms, we get by Lemma 4.3 that $\Phi$ preserves
$\operatorname{Prob}(M)^{K_{0}}$, thus for every $t\in[0,1]$,
$\Phi(\mu_{t})=\mu_{\psi(t)}$ for some function $\psi:[0,1]\to[0,1]$. We are
left to prove that $\psi=\phi$.
We denote by $0\in M$ the group identity and set $X=m^{-1}(\\{0\\})\subset
M^{3}$. Clearly, $0\in K_{0}$, thus for every $t\in[0,1]$,
$\mu_{t}(0)=t/2^{n-1}$. Applying this to $\psi(t)$, we get that for every
$t\in[0,1]$,
$\mu_{t}^{3}(X)=\mu_{t}^{3}(m^{-1}(\\{0\\}))=m_{*}(\mu_{t}^{3})(\\{0\\})=\Phi(\mu_{t})(\\{0\\})=\mu_{\psi(t)}(0)=\psi(t)/2^{n-1}.$
It is then enough to show that for every $t\in[0,1]$ we have
$\mu_{t}^{3}(X)=\phi(t)/2^{n-1}$, which is what we now proceed to show.
We need to understand the subset $X\subset M^{3}$ and its measure under
$\mu_{t}^{3}$. We use the decomposition $M^{3}=A_{0}\sqcup A_{1}\sqcup
A_{2}\sqcup A_{3}$, where
$A_{i}=\cup\\{K_{\epsilon_{1}}\times K_{\epsilon_{2}}\times K_{\epsilon_{3}}\mid\epsilon_{1},\epsilon_{2},\epsilon_{3}\in\\{0,1\\},\ \epsilon_{1}+\epsilon_{2}+\epsilon_{3}=i\\},$
that is, $A_{i}$ is the subset of $M^{3}$ consisting of triples of elements out
of which exactly $i$ are in $K_{1}$. We observe that $\mu_{t}^{3}$, as a
function on $M^{3}$, attains the constant value $t^{3-i}(1-t)^{i}/8^{n-1}$ on
$A_{i}$. Denoting $X_{i}=X\cap A_{i}$ and $a_{i}=|X_{i}|$ we get the formula
(1) $\mu_{t}^{3}(X)=\sum_{i=0}^{3}a_{i}\cdot\frac{t^{3-i}(1-t)^{i}}{8^{n-1}}.$
Our next goal will be to compute the coefficients $a_{i}$. For this we now
emphasize their dependence on $n$, denoting them $a_{i}(n)$. Similarly, we
write $X(n)$ and $X_{i}(n)$ for $X$ and $X_{i}$ correspondingly. Writing
further, $M(n)=\\{0,1\\}^{n}$, we make the identification $M(n)\simeq
M(n-1)\times\\{0,1\\}$ and we identify accordingly also $M(n)^{3}\simeq
M(n-1)^{3}\times\\{0,1\\}^{3}$. As $0\mapsto 0$ under the projection map
$M(n)\to M(n-1)$, we clearly have, using Lemma 4.3, that the image of $X(n)$
under the corresponding map $M(n)^{3}\to M(n-1)^{3}$ is in $X(n-1)$. We denote
by $\pi:X(n)\to X(n-1)$ the corresponding restriction map and consider the
partition $X(n-1)=\sqcup_{j=0}^{3}X_{j}(n-1)$. We fix $i,j\in\\{0,1,2,3\\}$
and for each $x\in X_{j}(n-1)$ count the intersection size of the fiber
$\pi^{-1}(\\{x\\})$ with the sets $X_{i}(n)$, that is, $|\pi^{-1}(\\{x\\})\cap
$X_{i}(n)|$. One verifies easily that this size does not depend on $x$, only
on $i,j\in\\{0,1,2,3\\}$ and, denoting it by $s_{i,j}$, we have
$(s_{i,j})=\begin{pmatrix}1&1&0&0\\\ 3&1&2&0\\\ 0&2&1&3\\\
0&0&1&1\end{pmatrix}.$
We therefore obtain the linear recurrence relation
$\begin{pmatrix}a_{0}(n)\\\ a_{1}(n)\\\ a_{2}(n)\\\
a_{3}(n)\end{pmatrix}=\begin{pmatrix}1&1&0&0\\\ 3&1&2&0\\\ 0&2&1&3\\\
0&0&1&1\end{pmatrix}\begin{pmatrix}a_{0}(n-1)\\\ a_{1}(n-1)\\\ a_{2}(n-1)\\\
a_{3}(n-1)\end{pmatrix},\quad\quad\begin{pmatrix}a_{0}(1)\\\ a_{1}(1)\\\
a_{2}(1)\\\ a_{3}(1)\end{pmatrix}=\begin{pmatrix}1\\\ 3\\\ 0\\\
0\end{pmatrix},$
that leads to the explicit formulas
$a_{0}(n)=2^{n}\left((\frac{3}{8}+(-1)^{n}\frac{1}{8})+2^{n}\frac{1}{8}\right),\quad a_{1}(n)=2^{n}\left((\frac{3}{8}-(-1)^{n}\frac{3}{8})+2^{n}\frac{3}{8}\right),$
$a_{2}(n)=2^{n}\left((-\frac{3}{8}+(-1)^{n}\frac{3}{8})+2^{n}\frac{3}{8}\right),\quad a_{3}(n)=2^{n}\left((-\frac{3}{8}-(-1)^{n}\frac{1}{8})+2^{n}\frac{1}{8}\right).$
By substituting these values in equation (1), we get
$\mu_{t}^{3}(X)=\sum_{i=0}^{3}a_{i}\cdot\frac{t^{3-i}(1-t)^{i}}{8^{n-1}}=2^{n}\left((\frac{3}{8}+(-1)^{n}\frac{1}{8})+2^{n}\frac{1}{8}\right)\cdot\frac{t^{3}}{8^{n-1}}+2^{n}\left((\frac{3}{8}-(-1)^{n}\frac{3}{8})+2^{n}\frac{3}{8}\right)\cdot\frac{t^{2}(1-t)}{8^{n-1}}+2^{n}\left((-\frac{3}{8}+(-1)^{n}\frac{3}{8})+2^{n}\frac{3}{8}\right)\cdot\frac{t(1-t)^{2}}{8^{n-1}}+2^{n}\left((-\frac{3}{8}-(-1)^{n}\frac{1}{8})+2^{n}\frac{1}{8}\right)\cdot\frac{(1-t)^{3}}{8^{n-1}}=\frac{t+(-1)^{n}2^{2-n}(t-\frac{1}{2})(t^{2}-t+\frac{1}{4}+(-1)^{n}\frac{3}{4}-(-1)^{n}2^{n-2})}{2^{n-1}}=\frac{\phi(t)}{2^{n-1}},$
thus indeed, $\mu_{t}^{3}(X)=\phi(t)/2^{n-1}$, and this finishes the proof. ∎
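Both the closed formulas for $a_{i}(n)$ and the identity $\mu_{t}^{3}(X)=\phi(t)/2^{n-1}$ lend themselves to a brute-force check for small $n$; a self-contained Python/sympy sketch (elements of $M$ are encoded as bitmasks; the helper names are ours):

```python
from sympy import symbols, Rational, expand

t = symbols('t')

def majority(x, y, z):
    # the median operator on {0,1}^n, coordinatewise on bitmasks
    return (x & y) | (x & z) | (y & z)

def parity(x):
    return bin(x).count("1") % 2

def phi(n):
    # the cubic polynomial of Lemma 5.1
    s = (-1)**n
    return t + s * Rational(2)**(2 - n) * (t - Rational(1, 2)) * (
        t**2 - t + Rational(1, 4) + s * Rational(3, 4) - s * Rational(2)**(n - 2))

def a_closed(n):
    # the closed formulas for a_0(n), ..., a_3(n)
    s = (-1)**n
    return [2**n * (3 + s + 2**n) // 8,
            2**n * (3 - 3 * s + 3 * 2**n) // 8,
            2**n * (-3 + 3 * s + 3 * 2**n) // 8,
            2**n * (-3 - s + 2**n) // 8]

for n in range(1, 4):
    M = range(2**n)
    # count |X_i(n)| directly and compare with the closed formulas
    a = [0, 0, 0, 0]
    for x in M:
        for y in M:
            for z in M:
                if majority(x, y, z) == 0:
                    a[parity(x) + parity(y) + parity(z)] += 1
    assert a == a_closed(n), (n, a)
    # equation (1): mu_t^3(X) = sum_i a_i t^(3-i) (1-t)^i / 8^(n-1)
    lhs = sum(ai * t**(3 - i) * (1 - t)**i for i, ai in enumerate(a)) / 8**(n - 1)
    assert expand(lhs - phi(n) / 2**(n - 1)) == 0
    print(f"n = {n}: |X_i(n)| and mu_t^3(X) = phi(t)/2^(n-1) verified")
```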
## References
* [Bow13] B. H. Bowditch. Coarse median spaces and groups. Pacific Journal of Mathematics, 261(1):53–93, 2013.
* [Bow20] B. H. Bowditch. Median and injective metric spaces. Mathematical Proceedings of the Cambridge Philosophical Society, 168, 2020.
* [CDH10] I. Chatterji, C. Drutu, and F. Haglund. Kazhdan and Haagerup properties from the median viewpoint. Advances in Mathematics, 225(2):882–921, 2010.
* [CFI16] I. Chatterji, T. Fernós, and A. Iozzi. The median class and superrigidity of actions on CAT(0) cube complexes. Journal of Topology, 9(2):349–400, June 2016.
* [Fio20] E. Fioravanti. Roller boundaries for median spaces and algebras. Algebr. Geom. Topol., 20(3):1325–1370, 2020.
* [Rol98] M. A. Roller. Poc sets, median algebras and group actions: an extended study of Dunwoody’s construction and Sageev’s theorem. 1998.
* [Sag95] M. Sageev. Ends of group pairs and non-positively curved cube complexes. Proceedings of the London Mathematical Society, s3-71(3):585–617, 1995.
|
# SLOCC classification of n qubits invoking the proportional relationships
for spectrums and for standard Jordan normal forms
Dafa Li1,2 1Department of Mathematical Sciences, Tsinghua University, Beijing,
100084, China
2Center for Quantum Information Science and Technology, Tsinghua National
Laboratory for Information Science and Technology (TNList), Beijing, 100084,
China
###### Abstract
We investigate the proportional relationships for spectrums and for SJNFs
(Standard Jordan Normal Forms) of the matrices constructed from coefficient
matrices of two SLOCC (stochastic local operations and classical
communication) equivalent states of $n$ qubits. Invoking the proportional
relationships for spectrums and for SJNFs, pure states of $n$ ($\geq 4$)
qubits are partitioned into 12 groups and 34 families under SLOCC, respectively. In particular, this holds for four qubits.
## I Introduction
Quantum entanglement is an essential resource in quantum teleportation,
quantum cryptography, and quantum information and computation Nielsen . A key
task of the entanglement theory is to classify different types of
entanglement. SLOCC classification is very significant because the states in
the same SLOCC class are able to perform the same QIT-tasks Dur Verstraete .
It is well known that two-qubit states were partitioned into two SLOCC
classes, three-qubit states were partitioned into six SLOCC classes, and there
are infinitely many SLOCC classes for $n$ ($\geq 4$) qubits Dur . It is highly
desirable to partition these infinite classes into a finite number of families
according to a SLOCC invariant criterion. In the pioneering work of Verstraete
_et al._ Verstraete , by using their general singular value decomposition, Verstraete _et al._ partitioned pure four-qubit states into nine SLOCC inequivalent families: $G_{abcd}$, $L_{abc_{2}}$, $L_{a_{2}b_{2}}$, $L_{ab_{3}}$, $L_{a_{4}}$, $L_{a_{2}0_{3\oplus 1}}$, $L_{0_{5\oplus 3}}$,
$L_{0_{7\oplus 1}}$, and $L_{0_{3\oplus 1}0_{3\oplus 1}}$ Verstraete . Since
then, extensive efforts have contributed to studying the entanglement
classification of four qubits Verstraete ; Miyake ; Cao ; LDF07b ; Chterental
; Lamata ; LDFQIC09 ; Borsten ; Viehmann ; Buniy ; Sharma12 .
Recently, considerable efforts have been devoted to find SLOCC invariant
polynomials in the coefficients of states for classifications and measures of
entanglement of$\ n$ qubits Wong ; Luque ; Leifer ; Levay ; LDF07a ;
Osterloh09 ; Viehmann ; Eltschka ; Gour ; LDFPRA13 . It is well known that the
concurrence and the 3-tangle are invariant polynomials of degrees 2 and 4 for
two and three qubits, respectively Coffman . Explicit and simple expresses of
invariant polynomials of degrees 2 for even $n$ qubits LDF07a , 4 for odd $n$
($\geq 4$) qubits LDF07a , 4 for even $n$ ($\geq 4$) qubits LDFPRA13 , were
presented.
Very recently, SLOCC invariant ranks of the coefficient matrices were proposed
for SLOCC classification LDFPRL12 ; LDFPRA12 ; Wang ; Fan ; LDFPRA15 .
In this paper, for two SLOCC equivalent states of $n$ qubits, we show that the
matrices constructed from coefficient matrices of the two states have
proportional spectrums and proportional SJNFs. Invoking the proportional relationships for spectrums, pure states of $n$ ($\geq 4$) qubits are partitioned into 12 groups under SLOCC, and invoking the proportional relationships for SJNFs, pure states of $n$ ($\geq 4$) qubits are partitioned into 34 families under SLOCC. In particular, for four qubits, we obtain new SLOCC
classifications.
## II SLOCC classification of $n$ qubits
### II.1 The proportional relationships for spectrums and for SJNFs
Let $|\psi\rangle=\sum_{i=0}^{2^{n}-1}a_{i}|i\rangle$ be an $n$-qubit pure
state. It is well known that two $n$-qubit pure states $|\psi\rangle$ and
$|\psi^{\prime}\rangle$ are SLOCC equivalent if and only if there are
invertible local operators $\mathcal{A}_{i}\in GL(2,C)$, $i=1,\cdots,n$, such
that Dur
$|\psi^{\prime}\rangle=\mathcal{A}_{1}\otimes\mathcal{A}_{2}\otimes\cdots\otimes\mathcal{A}_{n}|\psi\rangle.$
(1)
To any state $|\psi\rangle$ of $n$ qubits, we associate a $2^{\ell}$ by
$2^{n-\ell}$ matrix $C_{q_{1}\cdots q_{\ell}}^{(n)}(|\psi\rangle)$ whose
entries are the coefficients $a_{0},a_{1},\cdots,a_{2^{n}-1}$ of the state
$|\psi\rangle$, where $q_{1},\cdots,q_{\ell}$ are chosen as the row bits
LDFPRL12 ; LDFPRA12 . In LDFPRA15 , in terms of the coefficient matrix
$C_{q_{1},...,q_{i}}^{(n)}$ we constructed a $2^{i}$ by $2^{i}$ matrix
$\Omega_{q_{1},...,q_{i}}^{(n)}$
$\Omega_{q_{1},...,q_{i}}^{(n)}(|\psi\rangle)=C_{q_{1},...,q_{i}}^{(n)}(|\psi\rangle)\,\upsilon^{\otimes(n-i)}\,(C_{q_{1},...,q_{i}}^{(n)}(|\psi\rangle))^{t},$ (2)
where $\upsilon=\sqrt{-1}\sigma_{y}$ and $\sigma_{y}\ $is the Pauli operator,
and $C^{t}$ is the transpose of $C$.
From LDFPRA15 , when $q_{1}$ and $q_{2}$ are chosen as the row bits, we can
show that if $n$-qubit states $|\psi^{\prime}\rangle$ and $|\psi\rangle$ are
SLOCC equivalent, then
$\Omega_{q_{1}q_{2}}^{(n)}(|\psi^{\prime}\rangle)=(\Pi_{\ell=3}^{n}\det\mathcal{A}_{q_{\ell}})\,(\mathcal{A}_{q_{1}}\otimes\mathcal{A}_{q_{2}})\,\Omega_{q_{1}q_{2}}^{(n)}(|\psi\rangle)\,(\mathcal{A}_{q_{1}}\otimes\mathcal{A}_{q_{2}})^{t}.$ (3)
Let $T$ be the unitary matrix
$T=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}1&0&0&1\\\ 0&i&i&0\\\ 0&-1&1&0\\\ i&0&0&-i\end{array}\right).$ (4)
Let $G_{1}=T(\mathcal{A}_{q_{1}}\otimes\mathcal{A}_{q_{2}})T^{+}$, where
$T^{+}$ is the Hermitian transpose of $T$. It is easy to check that
$G_{1}G_{1}^{t}=(\det\mathcal{A}_{q_{1}}\ast\det\mathcal{A}_{q_{2}})I$. Let
$S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})=T\Omega_{q_{1}q_{2}}^{(n)}(|\psi^{\prime}\rangle)T^{t}$.
Then, from Eq. (3) we obtain
$S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})=(\Pi_{\ell=3}^{n}\det\mathcal{A}_{q_{\ell}})\,T(\mathcal{A}_{q_{1}}\otimes\mathcal{A}_{q_{2}})T^{+}T\Omega_{q_{1}q_{2}}^{(n)}(|\psi\rangle)T^{t}T^{\ast}(\mathcal{A}_{q_{1}}\otimes\mathcal{A}_{q_{2}})^{t}T^{t}=kG_{1}S_{q_{1}q_{2}}^{(n)}(\psi)G_{1}^{-1},$ (5)
where $T^{\ast}$ is the complex conjugate of $T$, $T^{t}T^{\ast}=I$, and
$k=\Pi_{\ell=1}^{n}\det\mathcal{A}_{\ell}$. Note that
$S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})$ and $S_{q_{1}q_{2}}^{(n)}(\psi)$ are $4$
by $4$ matrices.
In this paper, we write the direct sum of standard Jordan blocks
$J_{n_{1}}(\lambda_{1})$,$\cdots$, and $J_{n_{j}}(\lambda_{j})$ as
$J_{n_{1}}(\lambda_{1})\cdots J_{n_{j}}(\lambda_{j})$. The Jordan block
$J_{1}(a)$ is simply written as $a$. We define that the two SJNFs
$J_{n_{1}}(\lambda_{1})\cdots J_{n_{j}}(\lambda_{j})$ and
$J_{n_{1}}(k\lambda_{1})\cdots J_{n_{j}}(k\lambda_{j})$, where $k\neq 0$, are
proportional.
Eq. (5) leads to the following Theorem 1.
Theorem 1. If the states $|\psi^{\prime}\rangle$ and $|\psi\rangle$ of $n$
qubits satisfy Eq. (1), i.e. the state $|\psi^{\prime}\rangle$ is SLOCC
equivalent to $|\psi\rangle$, then
(1) if $S_{q_{1}q_{2}}^{(n)}(\psi)$ has the spectrum $\lambda_{1}$, $\cdots$,
$\lambda_{4}$, then $S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})$ has the spectrum
$k\lambda_{1}$, $\cdots$, $k\lambda_{4}$, where
$k=\Pi_{\ell=1}^{n}\det\mathcal{A}_{\ell}$.
(2) if $S_{q_{1}q_{2}}^{(n)}(\psi)$ has the SJNF $J_{n_{1}}(\lambda_{1})\cdots
J_{n_{j}}(\lambda_{j})$, then $S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})$ has the
SJNF $J_{n_{1}}(k\lambda_{1})\cdots J_{n_{j}}(k\lambda_{j})$, where
$k=\Pi_{\ell=1}^{n}\det\mathcal{A}_{\ell}$.
We give our argument as follows. Let
$\Gamma=G_{1}S_{q_{1}q_{2}}^{(n)}(\psi)G_{1}^{-1}$. Then,
$S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})=k\Gamma$. Clearly, $\Gamma$ is similar to
$S_{q_{1}q_{2}}^{(n)}(\psi)$. Therefore, $\Gamma$ and
$S_{q_{1}q_{2}}^{(n)}(\psi)$ have the same spectrum and SJNF.
(1). It is clear that if $\Gamma$ has the spectrum $\lambda_{1}$,$\cdots$,
$\lambda_{4}$, then $S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})$ has the spectrum
$k\lambda_{1}$,$\cdots$, $k\lambda_{4}$.
(2). There is an invertible matrix $H$ such that $\Gamma=HJH^{-1}$, where the
SJNF $J=J_{n_{1}}(\lambda_{1})\cdots J_{n_{j}}(\lambda_{j})$. Then,
$k\Gamma=HkJH^{-1}$. It is not hard to see that the SJNF of $kJ$ is
$J_{n_{1}}(k\lambda_{1})\cdots J_{n_{j}}(k\lambda_{j})$.
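For a single block the last step can be made explicit: conjugating by the invertible diagonal matrix $D=\mathrm{diag}(1,k^{-1},\ldots,k^{1-m})$ rescales the superdiagonal of $kJ_{m}(\lambda)$ from $k$ back to $1$ while leaving the diagonal $k\lambda$ untouched, so that $D^{-1}(kJ_{m}(\lambda))D=J_{m}(k\lambda)$; applying this blockwise to $kJ$ yields the claimed SJNF.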
Example 1. We have the following SLOCC equivalent states of four qubits:
$L_{a_{4}}(a\neq 0)$ and $L_{a_{4}}(a=1)$ LDFQIC09 ; $L_{a_{2}0_{3\oplus
1}}(a\neq 0)$ and $L_{a_{2}0_{3\oplus 1}}(a=1)$ LDFQIC09 ; and
$L_{ab_{3}}^{\ast}(a=0)$ and $L_{ab_{3}}(a=0)$ LDFPRA12 . We list the SJNFs of
$S_{1,2}^{(4)}$ of the states in Table 1.
Table 1: SJNFs of SLOCC equivalent states.
State | $L_{a_{4}}(a\neq 0)$ | $L_{a_{4}}(a=1)$ | $k=a^{2}$
---|---|---|---
SJNF | $J_{4}(a^{2})$ | $J_{4}(1)$ |
State | $L_{a_{2}0_{3\oplus 1}}(a\neq 0)$ | $L_{a_{2}0_{3\oplus 1}}(a=1)$ | $k=a^{2}$
SJNF | $J_{2}(a^{2})J_{2}(0)$ | $J_{2}(1)J_{2}(0)$ |
State | $L_{ab_{3}}^{\ast}(a=0)$ | $L_{ab_{3}}(a=0)$ | $k=1$
SJNF | $0b^{2}J_{2}(0)$ | $0b^{2}J_{2}(0)$ |
Example 2. For four qubits, let
$\zeta_{4}=a(|0\rangle+|15\rangle)+b(|5\rangle+|10\rangle)+|6\rangle$, and
$\zeta_{5}=b(|0\rangle+|15\rangle)+a(|5\rangle+|10\rangle)+|6\rangle$, where
$a\neq b$. The SJNF of $S_{1,2}^{(4)}(\zeta_{4})$ is $J_{2}(b)aa$ while the
SJNF of $S_{1,2}^{(4)}(\zeta_{5})$ is $J_{2}(a)bb$. So, by (2) of Theorem 1
the two states $\zeta_{4}$ and $\zeta_{5}$ are SLOCC inequivalent.
We can rewrite $S_{q_{1}q_{2}}^{(n)}(\psi)$ as
$S_{q_{1}q_{2}}^{(n)}(\psi)=[TC_{q_{1}q_{2}}^{(n)}(|\psi\rangle)]\upsilon^{\otimes(n-2)}[TC_{q_{1}q_{2}}^{(n)}(|\psi\rangle)]^{t}.$
(6)
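As an illustration of Eq. (6), the following sympy sketch (the concrete values $a=2$, $b=3$ and the helper kron are ours, chosen only for the computation) builds the coefficient matrix $C_{1,2}^{(4)}$ of the state $\zeta_{4}$ from Example 2, forms $S_{1,2}^{(4)}$, and prints its spectrum and Jordan form; the resulting block pattern (one $2\times 2$ Jordan block together with two equal $1\times 1$ blocks) can be compared with the SJNF reported in Example 2:

```python
from sympy import Matrix, I, sqrt, zeros

def kron(A, B):
    # Kronecker product of explicit sympy matrices
    m, n = A.shape
    p, q = B.shape
    K = zeros(m * p, n * q)
    for i in range(m):
        for j in range(n):
            K[i * p:(i + 1) * p, j * q:(j + 1) * q] = A[i, j] * B
    return K

a, b = 2, 3  # concrete parameter values for zeta_4 (illustration only)
# zeta_4 = a(|0> + |15>) + b(|5> + |10>) + |6>; coefficient matrix with
# qubits 1,2 as row bits and qubits 3,4 as column bits
coeff = {0: a, 5: b, 6: 1, 10: b, 15: a}
C = Matrix(4, 4, lambda r, c: coeff.get(4 * r + c, 0))

v = Matrix([[0, 1], [-1, 0]])  # upsilon = i * sigma_y
T = Matrix([[1, 0, 0, 1], [0, I, I, 0],
            [0, -1, 1, 0], [I, 0, 0, -I]]) / sqrt(2)

S = (T * C) * kron(v, v) * (T * C).T  # Eq. (6) with n = 4
P, J = S.jordan_form()
print(S.eigenvals())  # spectrum of S_{1,2}^{(4)}(zeta_4)
print(J)              # its standard Jordan normal form
```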
### II.2 Partition pure states of $n$ ($\geq 4$) qubits into 12 groups and 34
families
Theorem 1 permits a reduction of SLOCC classification of $n$ ($\geq 4$) qubits
to a classification of $4$ by $4$ complex matrices. For $4$ by $4$ matrices, a
calculation yields 12 types of CPs (characteristic polynomials), 12 types of
spectrums, and 34 types of SJNFs in Table 2. It is easy to see that CPs and
spectrums have the same effect for SLOCC classification.
Note that in Table 2, $\sigma_{i}\neq 0$ and $\sigma_{i}\neq\sigma_{j}$ when
$i\neq j$. Next, we give 12 types of CPs of $4$ by $4$ matrices as follows.
CP${}_{1}:(\sigma-\sigma_{1})^{4}$;
CP${}_{2}:(\sigma-\sigma_{1})(\sigma-\sigma_{2})^{3}$;
CP${}_{3}:(\sigma-\sigma_{1})(\sigma-\sigma_{2})(\sigma-\sigma_{3})^{2}$;
CP${}_{4}:(\sigma-\sigma_{1})^{2}(\sigma-\sigma_{2})^{2}$;
CP${}_{5}:\Pi_{i=1}^{4}(\sigma-\sigma_{i})$;
CP${}_{6}:\sigma(\sigma-\sigma_{1})^{3}$;
CP${}_{7}:\sigma(\sigma-\sigma_{1})(\sigma-\sigma_{2})^{2}$;
CP${}_{8}:\sigma\Pi_{i=1}^{3}(\sigma-\sigma_{i})$;
CP${}_{9}:\sigma^{2}(\sigma-\sigma_{1})^{2}$;
CP${}_{10}:\sigma^{2}(\sigma-\sigma_{1})(\sigma-\sigma_{2})$;
CP${}_{11}:\sigma^{3}(\sigma-\sigma_{1})$; CP${}_{12}:\sigma^{4}$.
For each state of $n$ ($\geq 4$) qubits, the spectrum of
$S_{q_{1}q_{2}}^{(n)}$ must belong to one of the 12 types of the spectrums in
Table 2. Let the states of $n$ ($\geq 4$) qubits, for which spectrums of
$S_{q_{1}q_{2}}^{(n)}$ possess the same type in Table 2, belong to the same
group. Thus, the states of $n$ ($\geq 4$) qubits are partitioned into 12
groups. In light of Theorem 1, the states belonging to different groups must
be SLOCC inequivalent.
For each state of $n$ ($\geq 4$) qubits, the SJNF of $S_{q_{1}q_{2}}^{(n)}$ up
to the order of the standard Jordan blocks must belong to one of the 34 types
of the SJNFs in Table 2. Let the states of $n$ ($\geq 4$) qubits with the same
type of SJNFs of $S_{q_{1}q_{2}}^{(n)}$ in Table 2 up to the order of the
Jordan blocks belong to the same family. Thus, we partition the states of $n$
($\geq 4$) qubits into 34 families. In light of Theorem 1, the states
belonging to different families must be SLOCC inequivalent.
Table 2: 12 types of CPs, 12 types of spectrums, 34 types of SJNFs for 4 by 4 matrices, and the corresponding states for four qubits.

CP${}_{i}$; spectrum | SJNF | state | SJNF | state
---|---|---|---|---
1;$\sigma_{1}\sigma_{1}\sigma_{1}\sigma_{1}$ | $J_{4}(\sigma_{1})\text{ }$ | $\tau_{1}$ | $J_{2}(\sigma_{1})J_{2}(\sigma_{1})$ | $\eta_{1}$
| $J_{3}(\sigma_{1})\sigma_{1}$ | $\theta_{1}$ | $\sigma_{1}\sigma_{1}{}J_{2}(\sigma_{1})$ | $\zeta_{1}$
| $\sigma_{1}\sigma_{1}\sigma_{1}\sigma_{1}{}$ | $G_{1}$ | |
2;$\sigma_{1}\sigma_{2}\sigma_{2}\sigma_{2}$ | $\sigma_{1}J_{3}(\sigma_{2})$ | $\theta_{2}$ | $\sigma_{1}\sigma_{2}J_{2}(\sigma_{2})$ | $\zeta_{2}$
| $\sigma_{1}\sigma_{2}\sigma_{2}\sigma_{2}{}$ | $G_{2}$ | |
3;$\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{3}$ | $\sigma_{1}\sigma_{2}J_{2}(\sigma_{3})$ | $\zeta_{3}$ | $\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{3}{}$ | $G_{3}$
4;$\sigma_{1}\sigma_{1}{}\sigma_{2}\sigma_{2}$ | $\sigma_{1}\sigma_{1}{}\sigma_{2}\sigma_{2}{}$ | $G_{4}$ | $\sigma_{1}\sigma_{1}{}J_{2}(\sigma_{2})$ | $\zeta_{4}$
| $J_{2}(\sigma_{1})J_{2}(\sigma_{2})$ | $\eta_{2}$ | |
5;$\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{4}$ | $\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{4}$ | $G_{5}$ | |
6;0$\sigma_{1}\sigma_{1}\sigma_{1}$ | $0J_{3}(\sigma_{1})$ | $\theta_{3}$ | $0J_{2}(\sigma_{1})\sigma_{1}$ | $\zeta_{6}$
| $0\sigma_{1}\sigma_{1}\sigma_{1}{}$ | $G_{6}$ | |
7;0$\sigma_{1}\sigma_{2}\sigma_{2}$ | $0\sigma_{1}J_{2}(\sigma_{2})$ | $\zeta_{7}$ | $0\sigma_{1}\sigma_{2}\sigma_{2}{}$ | $G_{7}$
8;$0\sigma_{1}\sigma_{2}\sigma_{3}$ | $0\sigma_{1}\sigma_{2}\sigma_{3}$ | $G_{8}$ | |
9;$00\sigma_{1}\sigma_{1}$ | $J_{2}(0)J_{2}(\sigma_{1})$ | $\kappa_{1}$ | $J_{2}(0)\sigma_{1}\sigma_{1}{}$ | $\mu_{2}$
| $00J_{2}(\sigma_{1})$ | $\zeta_{9}$ | $00\sigma_{1}\sigma_{1}{}$ | $\zeta_{8}$
10;$00\sigma_{1}\sigma_{2}$ | $J_{2}(0)\sigma_{1}\sigma_{2}$ | $\mu_{1}$ | $00\sigma_{1}\sigma_{2}$ | $\zeta_{10}$
11;$000\sigma_{1}$ | $J_{3}(0)\sigma_{1}$ | $\xi_{1}$ | $J_{2}(0)0\sigma_{1}$ | $\theta_{4}$
| $000\sigma_{1}$ | $\zeta_{11}$ | |
12;0000 | $J_{4}(0)$ | $L_{0_{7\oplus 1}}$ | $J_{3}(0)0$ | $\xi_{2}$
| $J_{2}(0)J_{2}(0)$ | $\tau_{2}$ | $J_{2}(0)00$ | $\theta_{5}$
| 0000 | $\zeta_{12}$ | |
Example 3. For the maximally entangled states $|\Psi_{2}\rangle$,
$|\Psi_{4}\rangle-|\Psi_{6}\rangle$ of five qubits and $|\Xi_{2}\rangle$,
$|\Xi_{4}\rangle-|\Xi_{7}\rangle$ of six qubits Osterloh06 , SJNFs of
$S_{1,2}^{(5)}$ partition $|\Psi_{2}\rangle$,
$|\Psi_{4}\rangle-|\Psi_{6}\rangle$ into three families, and SJNFs of
$S_{1,2}^{(6)}$ partition $|\Xi_{2}\rangle$, $|\Xi_{4}\rangle-|\Xi_{7}\rangle$
into four families. See Table 3.
Table 3: SJNFs of $S_{1,2}^{(5)}$ and $S_{1,2}^{(6)}$.

states | $|\Psi_{2}\rangle$ | $|\Psi_{4}\rangle$ | $|\Psi_{5}\rangle$ | $|\Psi_{6}\rangle$ |
---|---|---|---|---|---
SJNFs | $\pm\frac{1}{2}00$ | $0000$ | $0000$ | $0J_{3}(0)$ |
states | $|\Xi_{2}\rangle$ | $|\Xi_{4}\rangle$ | $|\Xi_{5}\rangle$ | $|\Xi_{6}\rangle$ | $|\Xi_{7}\rangle$
SJNFs | $(\frac{1}{2})(\frac{1}{2})00$ | $0000$ | $0000$ | $00J_{2}(0)$ | $0J_{3}(0)$
## III SLOCC classification of two, three, and four qubits
### III.1 SLOCC classification of four qubits
For four qubits, invoking the fact that $T^{+}T^{\ast}=\upsilon^{\otimes 2}$,
Eq. (6) reduces to
$S_{q_{1}q_{2}}^{(4)}(\psi)=(TC_{q_{1}q_{2}}^{(4)}T^{+})(TC_{q_{1}q_{2}}^{(4)}T^{+})^{t}.$
(7)
From the above discussion, in light of Theorem 1, pure states of four qubits
are partitioned into 12 groups and 34 families in Table 2. Furthermore, for
each type of spectrum, CP, and SJNF in Table 2, we give a state in Table 2
and in the Appendix for which $S_{1,2}^{(4)}$ has the corresponding type. For
example, $S_{1,2}^{(4)}$ of the state $\theta_{1}$ has the spectrum
$\sigma_{1}$, $\sigma_{1}$, $\sigma_{1}$, $\sigma_{1}$, the CP
$(\sigma-\sigma_{1})^{4}$, and the SJNF $J_{3}(\sigma_{1})\sigma_{1}$. It is
plain to see that the 12 groups and the 34 families in Table 2 are both
complete for four qubits.
Here, we make a comparison to Verstraete et al.'s nine families. They showed
that for a complex $n$ by $n$ matrix $R$, there are complex orthogonal matrices
$O_{1}$ and $O_{2}$ such that $R=O_{1}R^{\prime}O_{2}$, where $R^{\prime}$ is
a direct sum of blocks defined in Verstraete . Note that the blocks are not
standard Jordan blocks. The decomposition was called a generalization of the
singular value decomposition and was used to partition pure states of four
qubits into nine families Verstraete .
Recently, Chterental and Djoković pointed out an error in Verstraete et al.'s
nine families by indicating that the family $L_{ab_{3}}$ is SLOCC equivalent
to the subfamily $L_{abc_{2}}(a=c)$ of the family $L_{abc_{2}}$ Chterental ;
LDFPRA15 . Thus, the classification into the nine families is incomplete, and
the need to redo the classification of four qubits was raised Chterental .
### III.2 SLOCC classification of three qubits
For three qubits, Eq. (6) reduces to
$S_{q_{1}q_{2}}^{(3)}(\psi)=[TC_{q_{1}q_{2}}^{(3)}(|\psi\rangle)]\upsilon[TC_{q_{1}q_{2}}^{(3)}(|\psi\rangle)]^{t}.$
(8)
Let
$\lambda^{2}=[(c_{0}c_{7}-c_{1}c_{6})-(c_{2}c_{5}-c_{3}c_{4})]^{2}-4(c_{0}c_{3}-c_{1}c_{2})(c_{4}c_{7}-c_{5}c_{6})$.
Note that $\lambda^{2}$ is just the $3$-tangle. The spectrum of
$S_{1,2}^{(3)}(\psi)$ is $\pm\lambda,0,0$. We list the SJNFs of
$S_{1,2}^{(3)}(\psi)$ and $S_{1,3}^{(3)}(\psi)$ in Table 4. In light of
Theorem 1, we can distinguish the six SLOCC classes of three qubits.
Table 4: SLOCC classification of three qubits

states | SJNF of $S_{1,2}^{(3)}(\psi)$ | SJNF of $S_{1,3}^{(3)}(\psi)$
---|---|---
GHZ | $J_{1}(\pm\frac{1}{2})00$ | $J_{1}(\pm\frac{1}{2})00$
W | $J_{3}(0)0$ | $J_{3}(0)0$
A-BC | $J_{2}(0)J_{2}(0)$ | $J_{2}(0)J_{2}(0)$
B-AC | $J_{2}(0)J_{2}(0)$ | $0000$
C-AB | $0000$ | $J_{2}(0)J_{2}(0)$
$|000\rangle$ | $0000$ | $0000$
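The entries of Table 4 are easy to check numerically from the formula for $\lambda^{2}$ above. A minimal Python sketch for the GHZ and W states (with the usual normalization, chosen here only for illustration):
```python
import numpy as np

def three_tangle(c):
    """Lambda^2 computed from the coefficients c_0,...,c_7 of a three-qubit state."""
    return ((c[0]*c[7] - c[1]*c[6]) - (c[2]*c[5] - c[3]*c[4]))**2 \
        - 4*(c[0]*c[3] - c[1]*c[2])*(c[4]*c[7] - c[5]*c[6])

ghz = np.zeros(8); ghz[0] = ghz[7] = 1/np.sqrt(2)   # (|000> + |111>)/sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1/np.sqrt(3)  # (|001> + |010> + |100>)/sqrt(3)

print(three_tangle(ghz))  # 0.25, i.e. lambda = +/-1/2: spectrum +/-1/2, 0, 0
print(three_tangle(w))    # 0.0: the nilpotent case J_3(0)0 of Table 4
```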
### III.3 SLOCC classification of two qubits
For two qubits, Eq. (6) reduces to
$S_{1,2}^{(2)}(\psi)=[TC_{1,2}^{(2)}(|\psi\rangle)][TC_{1,2}^{(2)}(|\psi\rangle)]^{t}.$
(9)
The spectrum of $S_{1,2}^{(2)}(\psi)$ is 0, 0, 0, $\lambda^{\prime}$, where
$\lambda^{\prime}=2(a_{0}a_{3}-a_{1}a_{2})$. There are two cases for SJNFs.
Case 1: if $a_{0}a_{3}=a_{1}a_{2}$ (a separable state), the SJNF
of $S_{1,2}^{(2)}(\psi)$ is $J_{2}(0)00$. Case 2: if $a_{0}a_{3}\neq
a_{1}a_{2}$ (an entangled state), the SJNF of $S_{1,2}^{(2)}(\psi)$ is
$\lambda^{\prime}000$. Thus, in light of Theorem 1, we can partition two-
qubit states into two SLOCC classes.
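By Eq. (9), $S_{1,2}^{(2)}(\psi)=vv^{t}$ with $v=TC_{1,2}^{(2)}(|\psi\rangle)$, and since $S^{2}=(v\cdot v)S$, the two cases above are exactly the two possible SJNFs of a nonzero complex symmetric rank-one matrix. A minimal sympy sketch with generic stand-in vectors $v$ (not computed from any particular $T$):
```python
import sympy as sp

def sjnf_of_rank_one(v):
    """Jordan form of the complex symmetric rank-one matrix S = v v^t."""
    S = v * v.T
    return S.jordan_form()[1]

v1 = sp.Matrix([1, 0, 0, 1])       # v.v = 2 != 0: SJNF diag(2, 0, 0, 0)
v2 = sp.Matrix([1, sp.I, 0, 0])    # v != 0 but v.v = 0: SJNF J_2(0) + 0 + 0

print(sjnf_of_rank_one(v1))
print(sjnf_of_rank_one(v2))
```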
## IV SLOCC classification of $n$ qubits under $\mathcal{A}_{i}\in SL(2,C)$
SLOCC classification under $\mathcal{A}_{i}\in SL(2,C)$, i.e. the classification
under determinant one SLOCC operations, was discussed in previous articles
Verstraete ; Luque . Note that under $\mathcal{A}_{i}\in SL(2,C)$, $G_{1}\in SO(4,C)$ and
Eq. (5) reduces to
$S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})=G_{1}S_{q_{1}q_{2}}^{(n)}(\psi)G_{1}^{-1}.$
(10)
Thus, Eq. (10) leads to the following theorem.
Theorem 2. If the states $|\psi^{\prime}\rangle$ and $|\psi\rangle$ of $n$
qubits are SLOCC equivalent under $\mathcal{A}_{i}\in SL(2,C)$, then
$S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})$ is orthogonally similar to
$S_{q_{1}q_{2}}^{(n)}(\psi)$. The similarity implies that
$S_{q_{1}q_{2}}^{(n)}(\psi^{\prime})$ and $S_{q_{1}q_{2}}^{(n)}(\psi)$ have
the same CP, spectrum, and SJNF up to the order of the standard Jordan blocks.
Example 4. $L_{ab_{3}}^{\ast}(a=0)$ is SLOCC equivalent to $L_{ab_{3}}(a=0)$
under $\mathcal{A}_{i}\in SL(2,C)$ LDFPRA12 . The SJNFs of $S_{1,2}^{(4)}$ are both
$0b^{2}J_{2}(0)$.
Restated in the contrapositive, Theorem 2 reads: if two matrices
$S_{q_{1}q_{2}}^{(n)}$ associated with two $n$-qubit pure states differ in their
CPs, spectrums, or SJNFs, then the two states are SLOCC inequivalent under
$\mathcal{A}_{i}\in SL(2,C)$. From Example 2, by Theorem 2 the two states
$\zeta_{4}$ and $\zeta_{5}$ are SLOCC inequivalent under $\mathcal{A}_{i}\in SL(2,C)$
because the SJNFs of $S_{1,2}^{(4)}(\zeta_{4})$ and $S_{1,2}^{(4)}(\zeta_{5})$ are
different.
Note that a SLOCC equivalence class may include infinitely many equivalence
classes under $\mathcal{A}_{i}\in SL(2,C)$.
## V Conclusion
In Theorem 1, we demonstrate that for two SLOCC equivalent states, the
spectrums and SJNFs of the matrices $S_{q_{1}q_{2}}^{(n)}$ have proportional
relationships. Invoking these proportional relationships, we partition the pure
states of $n$ ($\geq 4$) qubits into 12 groups and 34 families under SLOCC.
In Theorem 2, we deduce that for two states which are equivalent under
determinant one SLOCC operations, the spectrums, CPs, and SJNFs of
$S_{q_{1}q_{2}}^{(n)}$ are invariant. The invariance can be used for the SLOCC
classification of $n$ qubits under determinant one SLOCC operations.
To make a comparison, we list the differences between Theorems 1 and 2 in
Table 5.
It is known that the SJNF is used to solve systems of linear differential
equations. The classification of SJNFs under SLOCC given in this paper may
therefore also be useful for classifying linear differential systems.
Table 5: Comparison between Theorems 1 and 2

| Theorem 1 | Theorem 2
---|---|---
spect. $\psi$ | $\lambda_{1}$, $\cdots$, $\lambda_{4}$ | $\lambda_{1}$, $\cdots$, $\lambda_{4}$
spect. $\psi^{\prime}$ | $k\lambda_{1}$, $\cdots$, $k\lambda_{4}$ | $\lambda_{1}$, $\cdots$, $\lambda_{4}$
SJNF $\psi$ | $J_{\ell_{1}}(\lambda_{1})\cdots J_{\ell_{j}}(\lambda_{j})$ | $J_{\ell_{1}}(\lambda_{1})\cdots J_{\ell_{j}}(\lambda_{j})$
SJNF $\psi^{\prime}$ | $J_{\ell_{1}}(k\lambda_{1})\cdots J_{\ell_{j}}(k\lambda_{j})$ | $J_{\ell_{1}}(\lambda_{1})\cdots J_{\ell_{j}}(\lambda_{j})$
Acknowledgement—This work was supported by NSFC (Grant No. 10875061) and
Tsinghua National Laboratory for Information Science and Technology.
## VI Appendix: Corresponding states of four qubits
Using $G_{abcd}$ we obtain the following 8 states.
$G_{1}=G_{abcd}(a=b=c=d\neq 0)$ (we omit $G_{abcd}$ in what follows);
$G_{2}$: $abcd\neq 0$; $b=c=d$ but $a\neq b$;
$G_{3}$: $abcd\neq 0$; two of $a$, $b$, $c$, and $d$ are equal while the other
two are not;
$G_{4}$: $abcd\neq 0$; $a$, $b$, $c$, and $d$ consist of two pairs of equal
numbers;
$G_{5}$: $abcd\neq 0$; $a$, $b$, $c$, and $d$ are distinct;
$G_{6}$: only one of $a$, $b$, $c$, and $d$ is zero and the other three are equal;
$G_{7}$: only one of $a$, $b$, $c$, and $d$ is zero and only two of them are
equal;
$G_{8}$: only one of $a$, $b$, $c$, and $d$ is zero and the other three are
distinct.
Using $L_{abc_{2}}$ we obtain the following 11 states.
$\zeta_{1}=L_{abc_{2}}(a=b=c\neq 0)$;
$\zeta_{2}=L_{abc_{2}}(abc\neq 0$ and one of $a$ and $b$ equals $c)$;
$\zeta_{3}=L_{abc_{2}}(abc\neq 0$ and $a$, $b$, $c$ are distinct$)$;
$\zeta_{4}=a(|0\rangle+|15\rangle)+b(|5\rangle+|10\rangle)+|6\rangle$, where
$a\neq b$;
$\zeta_{6}=L_{abc_{2}}($only one of $a$ and $b$ is zero while the other equals
$c)$;
$\zeta_{7}=L_{abc_{2}}(c\neq 0$ and only one of $a$ and $b$ is zero while the
other does not equal $c)$;
$\zeta_{8}=L_{abc_{2}}(c=0$ and $a=b\neq 0)$; $\zeta_{9}=L_{abc_{2}}(c\neq 0$
and $a=b=0)$; $\zeta_{10}=L_{abc_{2}}(c=0$ while $ab\neq 0$ and $a\neq b)$;
$\zeta_{11}=L_{abc_{2}}(c=0$ while only one of $a$ and $b$ is zero$)$;
$\zeta_{12}=L_{abc_{2}}(a=b=c=0)$.
Using $L_{a_{2}b_{2}}$ we obtain the following two states.
$\eta_{1}=L_{a_{2}b_{2}}(a=b\neq 0)$; $\eta_{2}=L_{a_{2}b_{2}}(ab\neq 0$ and
$a\neq b)$.
Let
$L_{ab_{3}}^{\prime}=b(|0\rangle+|15\rangle)+\frac{b+a}{2}(|5\rangle+|10\rangle)+\frac{b-a}{2}(|6\rangle+|9\rangle)+\frac{i}{\sqrt{2}}(|1\rangle+|2\rangle-|7\rangle-|11\rangle)$.
Using $L_{ab_{3}}^{\prime}$ we obtain the following five states.
$\theta_{1}=L_{ab_{3}}^{\prime}(a=b\neq 0)$;
$\theta_{2}=L_{ab_{3}}^{\prime}(ab\neq 0$ and $a\neq b)$;
$\theta_{3}=a(|0\rangle+|15\rangle)+\frac{a}{2}(|5\rangle+|10\rangle+|6\rangle+|9\rangle)+\frac{i}{\sqrt{2}}(|1\rangle+|2\rangle-|7\rangle-|11\rangle)$
(obtained from $L_{ab_{3}}^{\prime}(a=0$ but $b\neq 0)$);
$\theta_{4}=L_{ab_{3}}^{\prime}(b=0$ but $a\neq 0)$;
$\theta_{5}=L_{ab_{3}}^{\prime}(a=b=0)$.
Using $L_{a_{4}}$ we obtain the following two states.
$\tau_{1}=L_{a_{4}}(a\neq 0)$; $\tau_{2}=L_{a_{4}}(a=0)$.
Using $L_{a_{2}0_{3\oplus 1}}$ we obtain the following one state.
$\kappa_{1}=L_{a_{2}0_{3\oplus 1}}(a\neq 0)$.
Let $L_{ab0_{3\oplus
1}}=\frac{a+b}{2}(|0\rangle+|15\rangle)+\frac{a-b}{2}(|3\rangle+|12\rangle)+|5\rangle+|6\rangle$.
Using $L_{ab0_{3\oplus 1}}$ we obtain the following two states.
$\mu_{1}=L_{ab0_{3\oplus 1}}(ab\neq 0$ and $a\neq b)$;
$\mu_{2}=L_{ab0_{3\oplus 1}}(a=b\neq 0)$.
Let
$\xi=\frac{a}{2}(|0\rangle+|3\rangle+|12\rangle+|15\rangle)+i|1\rangle-i|13\rangle+|10\rangle$.
Using $\xi$ we obtain the following two states.
$\xi_{1}=\xi(a\neq 0)$; $\xi_{2}=\xi(a=0)$.
## References
* (1) M.A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information (Cambridge Univ. Press, Cambridge, 2000).
* (2) W. Dür, G. Vidal, and J.I. Cirac, Phys. Rev. A 62, 062314 (2000).
* (3) F. Verstraete, J. Dehaene, B. De Moor, and H. Verschelde, Phys. Rev. A 65, 052112 (2002).
* (4) A. Miyake, Phys. Rev. A 67, 012108 (2003).
* (5) Y. Cao and A. M. Wang, Eur. Phys. J. D 44, 159 (2007).
* (6) D. Li, X. Li, H. Huang, and X. Li, Phys. Rev. A 76, 052311 (2007).
* (7) O. Chterental and D.Z. Djoković, in Linear Algebra Research Advances, edited by G.D. Ling (Nova Science Publishers, Inc., Hauppauge, NY, 2007), Chap. 4, 133.
* (8) L. Lamata, J. León, D. Salgado, and E. Solano, Phys. Rev. A 75, 022318 (2007).
* (9) D. Li, X. Li, H. Huang, and X. Li, Quantum Inf. Comput. 9, 0778 (2009).
* (10) L. Borsten, D. Dahanayake, M. J. Duff, A. Marrani, and W. Rubens, Phys. Rev. Lett. 105, 100507 (2010).
* (11) O. Viehmann, C. Eltschka, and J. Siewert, Phys. Rev. A 83, 052330 (2011).
* (12) R.V. Buniy and T.W. Kephart, J. Phys. A: Math. Theor. 45, 185304 (2012).
* (13) S.S. Sharma and N.K. Sharma, Phys. Rev. A 85, 042315 (2012).
* (14) G. Gour and N.R. Wallach, Phys. Rev. Lett. 111, 060502 (2013).
* (15) A. Wong and N. Christensen, Phys. Rev. A 63, 044301 (2001).
* (16) J.-G. Luque and J.-Y. Thibon, Phys. Rev. A 67, 042303 (2003).
* (17) M.S. Leifer, N. Linden and A. Winter, Phys. Rev. A 69, 052304 (2004).
* (18) P. Levay, J. Phys. A: Math. Gen. 39, 9533 (2006).
* (19) D. Z. Djoković and A. Osterloh, J. Math. Phys. 50, 033509 (2009).
* (20) D. Li, X. Li, H. Huang, and X. Li, Phys. Rev. A 76, 032304 (2007).
* (21) X. Li and D. Li, Phys. Rev. A 88, 022306 (2013).
* (22) C. Eltschka, T. Bastin, A. Osterloh, and J. Siewert, Phys. Rev. A 85, 022301 (2012).
* (23) V. Coffman, J. Kundu, and W.K. Wootters, Phys. Rev. A 61, 052306 (2000).
* (24) X. Li and D. Li, Phys. Rev. Lett. 108, 180502 (2012).
* (25) X. Li and D. Li, Phys. Rev. A 86, 042332 (2012).
* (26) H. Li, S. Wang, J. Cui, and G.-L. Long, Phys. Rev. A 87, 042335 (2013).
* (27) B. Li, L.C. Kwek, and H. Fan, J. Phys. A: Math. Theor. 45, 505301 (2012).
* (28) X. Li and D. Li, Phys. Rev. A 91, 012302 (2015).
* (29) A. Osterloh and J. Siewert, Int. J. Quantum. Inform. 4, 531 (2006).
A Common Approach to Singular Perturbation and Homogenization I: Quasilinear
ODE Systems
Nikolay N. Nefedov (Moscow) and Lutz Recke (Berlin)
###### Abstract
We consider periodic homogenization of boundary value problems for quasilinear
second-order ODE systems in divergence form of the type
$a(x,x/\varepsilon,u(x),u^{\prime}(x))^{\prime}=f(x,x/\varepsilon,u(x),u^{\prime}(x))\mbox{
for }x\in[0,1].$
For small $\varepsilon>0$ we show existence of weak solutions
$u=u_{\varepsilon}$ as well as their local uniqueness for
$\|u-u_{0}\|_{\infty}\approx 0$, where $u_{0}$ is a given non-degenerate
solution to the homogenized boundary value problem, and we describe the rate
of convergence to zero for $\varepsilon\to 0$ of the homogenization error
$\|u_{\varepsilon}-u_{0}\|_{\infty}$. In particular, we show that this rate
depends on the smoothness of the maps $a(\cdot,y,u,u^{\prime})$ and
$f(\cdot,y,u,u^{\prime})$.
Our assumptions are, roughly speaking, as follows: The maps
$a,f:[0,1]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}^{n}$
are continuous, the maps $a(x,y,\cdot,\cdot)$ and $f(x,y,\cdot,\cdot)$ are
$C^{1}$-smooth, the maps $a(x,\cdot,u,u^{\prime})$ and
$f(x,\cdot,u,u^{\prime})$ are 1-periodic, and the maps $a(x,y,u,\cdot)$ are
strongly monotone and Lipschitz continuous uniformly with respect to $x$, $y$
and bounded $u$. Neither global solution uniqueness is supposed nor
$W^{2,2}$-regularity of $u_{0}$.
The main tool of the proofs is an abstract result of implicit function theorem
type which in the past has been applied to singularly perturbed nonlinear ODEs
and elliptic and parabolic PDEs and, hence, which permits a common approach to
existence and local uniqueness results for singularly perturbed problems and
for homogenization problems.
## 1 Introduction
In this paper we present an abstract result of implicit function theorem type
(see Section 2), which in the past has been applied in [5, 6, 7, 16, 20, 22,
23] to singularly perturbed nonlinear ODEs and PDEs and, in Part II [17], to
periodic homogenization of semilinear elliptic PDE systems. In the present
paper we apply it to describe periodic homogenization for systems of
quasilinear second-order ODEs in divergence form of the type
$a(x,x/\varepsilon,u(x),u^{\prime}(x))^{\prime}=f(x,x/\varepsilon,u(x),u^{\prime}(x))\mbox{
for }x\in[0,1]$ (1.1)
with one Dirichlet and one natural boundary condition
$u(0)=a(1,1/\varepsilon,u(1),u^{\prime}(1))=0$ (1.2)
as well as with other boundary conditions (see Section 4). Here
$\varepsilon>0$ is the small homogenization parameter, and we look for vector
valued solutions $u:[0,1]\to\mathbb{R}^{n}$. The coefficient functions
$a,f:[0,1]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}^{n}$
are supposed to be 1-periodic with respect to the second argument, i.e.
$a(x,y+1,u,u^{\prime})=a(x,y,u,u^{\prime})\mbox{ and
}f(x,y+1,u,u^{\prime})=f(x,y,u,u^{\prime})$ (1.3)
for all $x\in[0,1]$, $y\in\mathbb{R}$ and $u,u^{\prime}\in\mathbb{R}^{n}$.
Further, we suppose that the maps $a$ and $f$ are continuous and that their
first partial derivatives with respect to the third and fourth arguments exist
and are continuous, i.e.
$a,\partial_{u}a,\partial_{u^{\prime}}a,f,\partial_{u}f,\partial_{u^{\prime}}f\mbox{
are continuous on
}[0,1]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}.$ (1.4)
Finally, we suppose that the maps $a(x,y,u,\cdot)$ are strongly monotone
and Lipschitz continuous uniformly with respect to $x$, $y$ and bounded $u$,
i.e. that there exist constants $M\geq m>0$ such that for all $x\in[0,1]$,
$y\in\mathbb{R}$ and $u,u_{1}^{\prime},u_{2}^{\prime}\in\mathbb{R}^{n}$ with
$\|u\|\leq 1$ we have
$\left.\begin{array}[]{l}\big{(}a(x,y,u,u_{1}^{\prime})-a(x,y,u,u_{2}^{\prime})\big{)}\cdot(u_{1}^{\prime}-u_{2}^{\prime})\geq
m\|u_{1}^{\prime}-u_{2}^{\prime}\|^{2},\\\
\|a(x,y,u,u_{1}^{\prime})-a(x,y,u,u_{2}^{\prime})\|\leq
M\|u^{\prime}_{1}-u^{\prime}_{2}\|.\end{array}\right\\}$ (1.5)
Here and in what follows we denote by $v\cdot w$ the Euclidean scalar product
of vectors $v,w\in\mathbb{R}^{n}$, and $\|v\|:=\sqrt{v\cdot v}$ is the
Euclidean norm of the vector $v\in\mathbb{R}^{n}$.
From assumption (1.5) it follows that for all $x\in[0,1]$, $y\in\mathbb{R}$
and $u\in\mathbb{R}^{n}$ with $\|u\|\leq 1$ the maps $a(x,y,u,\cdot)$ are
bijective from $\mathbb{R}^{n}$ onto $\mathbb{R}^{n}$. We denote
$b(x,y,u,\cdot):=a(x,y,u,\cdot)^{-1},\mbox{ i.e.
}a(x,y,u,b(x,y,u,u^{\prime}))=b(x,y,u,a(x,y,u,u^{\prime}))=u^{\prime}$ (1.6)
and
$b_{0}(x,u,u^{\prime}):=\int_{0}^{1}b(x,y,u,u^{\prime})dy$ (1.7)
for all $x\in[0,1]$, $y\in\mathbb{R}$ and $u,u^{\prime}\in\mathbb{R}^{n}$ with
$\|u\|\leq 1$. Also the maps $b_{0}(x,u,\cdot)$ are strongly monotone and
Lipschitz continuous and, hence, bijective from $\mathbb{R}^{n}$ onto
$\mathbb{R}^{n}$, and we denote
$a_{0}(x,u,\cdot):=b_{0}(x,u,\cdot)^{-1},\mbox{ i.e.
}a_{0}(x,u,b_{0}(x,u,u^{\prime}))=b_{0}(x,u,a_{0}(x,u,u^{\prime}))=u^{\prime}$
(1.8)
and
$f_{0}(x,u,u^{\prime}):=\int_{0}^{1}f(x,y,u,b(x,y,u,u^{\prime}))dy$ (1.9)
for $x\in[0,1]$ and $u,u^{\prime}\in\mathbb{R}^{n}$ with $\|u\|\leq 1$, and
the boundary value problem
$a_{0}(x,u(x),u^{\prime}(x))^{\prime}=f_{0}(x,u(x),u^{\prime}(x))\mbox{ for
}x\in[0,1],\;u(0)=a_{0}(1,u(1),u^{\prime}(1))=0$ (1.10)
is the homogenized version of the boundary value problem (1.1)-(1.2).
A vector function $u\in C^{1}([0,1];\mathbb{R}^{n})$ is called weak solution
to (1.1)-(1.2) if it satisfies the Dirichlet boundary condition $u(0)=0$ and
the variational equation
$\left.\begin{array}[]{r}\displaystyle\int_{0}^{1}\Big{(}a(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\varphi^{\prime}(x)+f(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\varphi(x)\Big{)}dx=0\\\
\mbox{ for all }\varphi\in C^{1}([0,1];\mathbb{R}^{n})\mbox{ with
}\varphi(0)=0,\end{array}\right\\}$ (1.11)
and similarly for the homogenized boundary value problem (1.10) and its
linearization (1.12). In general, weak solutions to (1.1)-(1.2) are not
classical solutions, i.e. they are not $C^{2}$-smooth, because the maps
$a(\cdot,\cdot,u,u^{\prime})$ are, in general, not $C^{1}$-smooth.
Now we formulate our result about existence and local uniqueness of weak
solutions $u=u_{\varepsilon}$ to (1.1)-(1.2) with $\varepsilon\approx 0$,
which are close to a given non-degenerate solution $u=u_{0}$ to (1.10), and
about the rate of convergence to zero for $\varepsilon\to 0$ of the
homogenization error $\|u_{\varepsilon}-u_{0}\|_{\infty}$. Here and in what
follows we denote by
$\|u\|_{\infty}:=\max_{x\in[0,1]}\|u(x)\|$
the maximum norm in the function space $C([0,1];\mathbb{R}^{n})$.
###### Theorem 1.1
Suppose (1.3)-(1.5), and let $u=u_{0}$ be a weak solution to (1.10) such that
$\|u_{0}\|_{\infty}<1$ and that the linearized boundary value problem
$\left.\begin{array}[]{l}\Big{(}\partial_{u}a_{0}(x,u_{0}(x),u_{0}^{\prime}(x))u(x)+\partial_{u^{\prime}}a_{0}(x,u_{0}(x),u_{0}^{\prime}(x))u^{\prime}(x)\Big{)}^{\prime}\\\
=\partial_{u}f_{0}(x,u_{0}(x),u_{0}^{\prime}(x))u(x)+\partial_{u^{\prime}}f_{0}(x,u_{0}(x),u_{0}^{\prime}(x))u^{\prime}(x)\mbox{
for }x\in[0,1],\\\
u(0)=\partial_{u}a_{0}(1,u_{0}(1),u_{0}^{\prime}(1))u(1)+\partial_{u^{\prime}}a_{0}(1,u_{0}(1),u_{0}^{\prime}(1))u^{\prime}(1)=0\end{array}\right\\}$
(1.12)
does not have weak solutions $u\not=0$. Then the following is true:
(i) There exist $\varepsilon_{0}>0$ and $\delta>0$ such that for all
$\varepsilon\in(0,\varepsilon_{0}]$ there exists exactly one weak solution
$u=u_{\varepsilon}$ to (1.1)-(1.2) with $\|u-u_{0}\|_{\infty}\leq\delta$.
Moreover,
$\|u_{\varepsilon}-u_{0}\|_{\infty}\to 0\mbox{ for }\varepsilon\to 0.$ (1.13)
(ii) If also
$\partial_{x}a,\partial_{x}f\mbox{ exist and are continuous on
}[0,1]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n},$ (1.14)
then
$\|u_{\varepsilon}-u_{0}\|_{\infty}=O(\varepsilon)\mbox{ for }\varepsilon\to
0.$ (1.15)
###### Remark 1.2
Our notion of weak solutions $u$ to (1.1)-(1.2) does not include the usual
requirement $u\in W^{1,2}((0,1);\mathbb{R}^{n})$, but the stronger
requirement $u\in C^{1}([0,1];\mathbb{R}^{n})$. We do this in order to avoid
imposing growth restrictions on the reaction functions $f(x,y,u,\cdot)$. If
we supposed that the functions $f(x,y,u,\cdot)$ have linear growth, then
any solution $u\in W^{1,2}((0,1);\mathbb{R}^{n})$ to (1.11) with $u(0)=0$
would be $C^{1}$-smooth and, hence, a weak solution to (1.1)-(1.2) in the
sense introduced above.
###### Remark 1.3
The homogenized version $a_{0}$ of the map $a$ depends on $a$ only (cf. (1.6)
and (1.8)), but the homogenized version $f_{0}$ of the map $f$ depends not
only on $f$, but also on $a$ (cf. (1.9)), i.e. the homogenization of $f$ is
”relative to $a$”. For linear problems this effect is well-known, cf. [2,
Remark 1.13.1], [28] and [29, formula (3.9)].
###### Remark 1.4
It is easy to verify that the map $b$, which is defined in (1.6), is
continuous and its first partial derivatives with respect to the third and
fourth arguments exist and are continuous, that the maps
$b(x,\cdot,u,u^{\prime})$ are 1-periodic, and that
$\displaystyle\big{(}b(x,y,u,u_{1}^{\prime})-b(x,y,u,u_{2}^{\prime})\big{)}\cdot(u_{1}^{\prime}-u_{2}^{\prime})\geq\frac{m}{M^{2}}\|u_{1}^{\prime}-u_{2}^{\prime}\|^{2},$
(1.16)
$\displaystyle\|b(x,y,u,u_{1}^{\prime})-b(x,y,u,u_{2}^{\prime})\|\leq\frac{1}{m}\|u^{\prime}_{1}-u^{\prime}_{2}\|$
(1.17)
for all $x\in[0,1]$, $y\in\mathbb{R}$ and
$u,u_{1}^{\prime},u_{2}^{\prime}\in\mathbb{R}^{n}$ with $\|u\|\leq 1$.
Similarly, also the map $a_{0}$, which is defined in (1.8), is continuous and
its first partial derivatives with respect to the third and fourth arguments
exist and are continuous, and
$\displaystyle(a_{0}(x,u,u^{\prime}_{1})-a_{0}(x,u,u^{\prime}_{2}))\cdot(u^{\prime}_{1}-u^{\prime}_{2})\geq\frac{m^{3}}{M^{2}}\|u^{\prime}_{1}-u^{\prime}_{2}\|^{2},$
$\displaystyle\|a_{0}(x,u,u^{\prime}_{1})-a_{0}(x,u,u^{\prime}_{2})\|\leq\frac{M^{2}}{m}\|u^{\prime}_{1}-u^{\prime}_{2}\|$
for all $x\in[0,1]$, $u,u^{\prime}_{1},u^{\prime}_{2}\in\mathbb{R}^{n}$ with
$\|u\|\leq 1$.
###### Remark 1.5
In many applications the maps $a(x,y,u,\cdot)$ and $f(x,y,u,\cdot)$ are
affine, i.e.
$a(x,y,u,u^{\prime})=A(x,y,u)u^{\prime}+\bar{a}(x,y,u)\mbox{ and
}f(x,y,u,u^{\prime})=F(x,y,u)u^{\prime}+\bar{f}(x,y,u)$
with $n\times n$-matrices $A(x,y,u)$ and $F(x,y,u)$ and vectors
$\bar{a}(x,y,u)$ and $\bar{f}(x,y,u)$. Then also the maps $a_{0}(x,u,\cdot)$
and $f_{0}(x,u,\cdot)$ are affine, i.e.
$a_{0}(x,u,u^{\prime})=A_{0}(x,u)u^{\prime}+\bar{a}_{0}(x,u)\mbox{ and
}f_{0}(x,u,u^{\prime})=F_{0}(x,u)u^{\prime}+\bar{f}_{0}(x,u)$
with
$\displaystyle A_{0}(x,u):=\left(\int_{0}^{1}A(x,y,u)^{-1}dy\right)^{-1},$
$\displaystyle\bar{a}_{0}(x,u):=\left(\int_{0}^{1}A(x,y,u)^{-1}dy\right)^{-1}\int_{0}^{1}A(x,y,u)^{-1}\bar{a}(x,y,u)dy,$
$\displaystyle F_{0}(x,u):=\int_{0}^{1}F(x,y,u)A(x,y,u)^{-1}dy,$
$\displaystyle\bar{f}_{0}(x,u):=\int_{0}^{1}\left(\bar{f}(x,y,u)-F(x,y,u)A(x,y,u)^{-1}\bar{a}(x,y,u)\right)dy.$
Hence, $A_{0}$ depends on $A$ only, but $\bar{a}_{0}$ depends on $\bar{a}$ and
$A$, $F_{0}$ depends on $F$ and $A$, and $\bar{f}_{0}$ depends on $\bar{f}$,
$A$, $\bar{a}$ and $F$.
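For concreteness, a minimal numpy sketch of these formulas, with hypothetical coefficient choices (any strongly monotone data, 1-periodic in $y$, would do) and cell averages approximated by trapezoidal quadrature:
```python
import numpy as np

# Hypothetical 2x2 periodic coefficients (n = 2), chosen for illustration only.
def A(x, y, u):
    c = 2.0 + np.sin(2 * np.pi * y)
    return (1.0 + 0.1 * x) * np.array([[c, 0.3], [0.3, c]])

def a_bar(x, y, u):
    return np.array([np.sin(2 * np.pi * y), 0.1 * x * u[0]])

def F(x, y, u):
    return np.array([[1.0 + 0.5 * np.cos(2 * np.pi * y), 0.0],
                     [0.2, 1.0]])

def f_bar(x, y, u):
    return np.array([u[1], np.cos(2 * np.pi * y)])

ys = np.linspace(0.0, 1.0, 4001)
h = ys[1] - ys[0]
w = np.full(ys.size, h); w[0] = w[-1] = h / 2   # trapezoidal weights on [0, 1]

def cell_avg(f):
    """Trapezoidal approximation of the cell average of f over [0, 1]."""
    return np.tensordot(w, np.array([f(y) for y in ys]), axes=(0, 0))

def homogenized(x, u):
    Ainv = lambda y: np.linalg.inv(A(x, y, u))
    A0 = np.linalg.inv(cell_avg(Ainv))
    a0 = A0 @ cell_avg(lambda y: Ainv(y) @ a_bar(x, y, u))
    F0 = cell_avg(lambda y: F(x, y, u) @ Ainv(y))
    f0 = cell_avg(lambda y: f_bar(x, y, u) - F(x, y, u) @ Ainv(y) @ a_bar(x, y, u))
    return A0, a0, F0, f0

A0, a0, F0, f0 = homogenized(0.5, np.array([0.2, -0.1]))
print(A0)   # a harmonic-type mean of A(x, ., u), not the arithmetic cell average
```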
###### Remark 1.6
The assumption of Theorem 1.1, that there do not exist nontrivial weak
solutions to (1.12), is rather implicit. But there exist simple explicit
sufficient conditions for it. For example, if not only the matrices
$\partial_{u^{\prime}}a_{0}(x,u_{0}(x),u^{\prime}_{0}(x))$ are positive
definite (this follows from assumption (1.5)), but also the matrices
$\partial_{u^{\prime}}f_{0}(x,u_{0}(x),u^{\prime}_{0}(x))$, and if the
corresponding definiteness coefficients are sufficiently large in comparison
with the matrix norms of $\partial_{u}a_{0}(x,u_{0}(x),u^{\prime}_{0}(x))$ and
$\partial_{u}f_{0}(x,u_{0}(x),u^{\prime}_{0}(x))$, then there do not exist
nontrivial weak solutions to (1.12). In order to verify this one can use the
formulas
$\displaystyle\partial_{u^{\prime}}a_{0}(x,u,u^{\prime})=\left(\int_{0}^{1}\partial_{u^{\prime}}a(x,y,u,u^{\prime})^{-1}dy\right)^{-1},$
$\displaystyle\partial_{u}a_{0}(x,u,u^{\prime})=\left(\int_{0}^{1}\partial_{u^{\prime}}a(x,y,u,u^{\prime})^{-1}dy\right)^{-1}\int_{0}^{1}\partial_{u^{\prime}}a(x,y,u,u^{\prime})^{-1}\partial_{u}a(x,y,u,u^{\prime})dy,$
$\displaystyle\partial_{u^{\prime}}f_{0}(x,u,u^{\prime})=\int_{0}^{1}\partial_{u^{\prime}}f(x,y,u,u^{\prime})\partial_{u^{\prime}}a(x,y,u,u^{\prime})^{-1}dy,$
$\displaystyle\partial_{u}f_{0}(x,u,u^{\prime})=\int_{0}^{1}\left(\partial_{u}f(x,y,u,u^{\prime})-\partial_{u^{\prime}}f(x,y,u,u^{\prime})\partial_{u^{\prime}}a(x,y,u,u^{\prime})^{-1}\partial_{u}a(x,y,u,u^{\prime})\right)dy.$
###### Remark 1.7
The assertions of Theorem 1.1 remain true also in cases where the maps
$a(x,\cdot,u,u^{\prime})$ or $f(x,\cdot,u,u^{\prime})$ are allowed to be
discontinuous, for example, if
$a(x,y,u,u^{\prime})=a_{1}(x,u,u^{\prime})a_{2}(y)\mbox{ and
}f(x,y,u,u^{\prime})=f_{1}(x,u,u^{\prime})f_{2}(y)$
with vector functions $a_{1},f_{1}\in
C^{1}([0,1]\times\mathbb{R}^{n}\times\mathbb{R}^{n};\mathbb{R}^{n})$ and
1-periodic functions $a_{2},f_{2}\in L^{\infty}(\mathbb{R})$, or, more
generally, if the maps $(x,u,u^{\prime})\mapsto a(x,\cdot,u,u^{\prime})$ and
$(x,u,u^{\prime})\mapsto f(x,\cdot,u,u^{\prime})$ are continuous from
$[0,1]\times\mathbb{R}^{n}\times\mathbb{R}^{n}$ into $L^{\infty}(\mathbb{R})$
(cf. also Remark 3.3). For the case of linear scalar equations see [2,
Theorems 6.1 and 6.3] and [30, Theorem 1.2].
###### Remark 1.8
$L^{\infty}$-estimates of the homogenization error $u_{\varepsilon}-u_{0}$
exist, to the best of our knowledge, for linear homogenization problems only:
For scalar ODEs of the type
$\big{(}a(x/\varepsilon)u^{\prime}(x)\big{)}^{\prime}=f(x)$ (with a smooth
1-periodic function $a:\mathbb{R}\to\mathbb{R}$ and a smooth function
$f:[0,1]\to\mathbb{R}$) in [18, Section 1], for scalar ODEs with stratified
structure of the type
$\big{(}a(x,\rho(x)/\varepsilon)u^{\prime}(x)\big{)}^{\prime}=f(x)$ in [30,
Theorem 1.2]. For $L^{\infty}$ homogenization error estimates for scalar
linear elliptic PDEs of the type $\mbox{\rm div}\,a(x/\varepsilon)\nabla
u(x)=f(x)$ see, e.g. [2, Chapter 2.4] and [15] and for linear elliptic systems
[24, Theorem 7.5.1].
As concerns existence and local uniqueness for nonlinear homogenization
problems (without any assumption of global uniqueness), we know only the result [4]
for scalar semilinear elliptic PDEs of the type $\mbox{\rm
div}\,a(x/\varepsilon)\nabla u(x)=f(x)g(u(x)),$ where the nonlinearity $g$ is
supposed to have a sufficiently small local Lipschitz constant (on an
appropriate bounded interval). Let us mention also [13, 14], where existence
and local uniqueness for a homogenization problem for the linear Poisson
equation with periodic nonlinear Robin boundary conditions is shown. There the
specific structure of the problem (no highly oscillating diffusion
coefficients) allows one to apply the classical implicit function theorem.
###### Remark 1.9
Consider the quasilinear elliptic PDE $\mbox{\rm
div}\,a(x,x/\varepsilon,u(x),\nabla u(x))=f(x)$ with flux function
$a:\Omega\times\mathbb{R}^{d}\times\mathbb{R}\times\mathbb{R}^{d}\to\mathbb{R}^{d}$
(with $\Omega\subseteq\mathbb{R}^{d}$) such that $a(x,y+e_{j},u,v)=a(x,y,u,v)$
for all $j=1,\ldots,d$
($e_{1}:=(1,0,\ldots,0,0),\ldots,e_{d}:=(0,0,\ldots,0,1)$ is the standard
basis in $\mathbb{R}^{d}$) and that $a(x,y,u,\cdot)$ is strongly monotone and
Lipschitz continuous uniformly with respect to $x$, $y$ and bounded $u$. The
usual formula for the homogenized flux function is (cf., e.g. [8, 11, 21, 27])
$a_{0}(x,u,v):=\int_{[0,1]^{d}}a(x,y,u,v+\nabla_{y}w(x,y,u,v))dy,$ (1.18)
where $w(x,\cdot,u,v)$ is the solution (which depends parametrically on $x$,
$u$ and $v$) of the cell problem
$\mbox{\rm div}_{y}\,a(x,y,u,v+\nabla_{y}w(x,y,u,v))=0$, $w(x,y+e_{j},u,v)=w(x,y,u,v)$
and $\int_{[0,1]^{d}}w(x,y,u,v)dy=0$. In space dimension one, i.e. $d=1$, this
looks as follows:
$a_{0}(x,u,v):=\int_{0}^{1}a(x,y,u,v+\partial_{y}w(x,y,u,v))dy$ (1.19)
and
$\left.\begin{array}[]{l}\displaystyle\frac{d}{dy}a(x,y,u,v+\partial_{y}w(x,y,u,v))=0,\\\
w(x,y+1,u,v)=w(x,y,u,v),\;\displaystyle\int_{0}^{1}w(x,y,u,v)dy=0.\end{array}\right\\}$
(1.20)
From (1.20) it follows that $a(x,y,u,v+\partial_{y}w(x,y,u,v))$ is constant
with respect to $y$. Therefore (1.19) yields that
$a(x,y,u,v+\partial_{y}w(x,y,u,v))=a_{0}(x,u,v)$ and, hence,
$b(x,y,u,a_{0}(x,u,v))=v+\partial_{y}w(x,y,u,v),$ i.e.
$v=\int_{0}^{1}(v+\partial_{y}w(x,y,u,v))dy=\int_{0}^{1}b(x,y,u,a_{0}(x,u,v))dy=b_{0}(x,u,a_{0}(x,u,v)).$
On the other hand, the solution to (1.20) with $v=b_{0}(x,u,\bar{v})$ (with
arbitrary $\bar{v}\in\mathbb{R}$) is
$w(x,y,u,b_{0}(x,u,\bar{v}))=\int_{0}^{y}(b(x,z,u,\bar{v})-b_{0}(x,u,\bar{v}))dz-\int_{0}^{1}\int_{0}^{z_{1}}(b(x,z_{2},u,\bar{v})-b_{0}(x,u,\bar{v}))dz_{2}dz_{1}.$
Therefore
$\displaystyle a_{0}(x,u,b_{0}(x,u,\bar{v}))$ $\displaystyle=$
$\displaystyle\int_{0}^{1}a(x,y,u,b_{0}(x,u,\bar{v})+\partial_{y}w(x,y,u,b_{0}(x,u,\bar{v})))dy$
$\displaystyle=$
$\displaystyle\int_{0}^{1}a(x,y,u,b(x,y,u,\bar{v}))dy=\bar{v}.$
It follows that $a_{0}(x,u,\cdot)=b_{0}(x,u,\cdot)^{-1}$. In other words: Our
definition (1.8) of the homogenized flux function $a_{0}$ is the same as the
usual one for PDEs, i.e. for (1.18), considered in the case $d=1$. In the
linear case this has been shown in [2, Remark 2.3] and [10, Proposition 6.16].
In [2, Remark 5.9] the formulas (1.8) and (1.19) are called dual formulas
(there the linear case is considered, but with multidimensional space variable
$x$).
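The identity $a_{0}(x,u,\cdot)=b_{0}(x,u,\cdot)^{-1}$ can also be tested numerically. A minimal sketch for the scalar linear flux $a(y,v)=c(y)v$ (so $d=n=1$ and the exact homogenized coefficient is the harmonic mean of $c$); the bisection-based inversion works unchanged for any strongly monotone $a(y,\cdot)$:
```python
import numpy as np

c = lambda y: 2.0 + np.sin(2 * np.pi * y)     # 1-periodic coefficient
a = lambda y, v: c(y) * v                     # scalar flux; any monotone map works

def invert(f, target, lo=-1e3, hi=1e3, tol=1e-10):
    """Bisection inverse of a strictly increasing scalar map f."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

ys = np.linspace(0.0, 1.0, 2001)
h = ys[1] - ys[0]
trapz = lambda v: (v[1:] + v[:-1]).sum() * h / 2   # uniform trapezoidal rule

b = lambda y, w: invert(lambda v: a(y, v), w)            # (1.6): b(y,.) = a(y,.)^{-1}
b0 = lambda w: trapz(np.array([b(y, w) for y in ys]))    # (1.7): cell average of b
a0 = lambda v: invert(b0, v)                             # (1.8): a0 = b0^{-1}

harmonic_mean = 1.0 / trapz(1.0 / c(ys))
print(a0(1.0), harmonic_mean)   # agree up to quadrature and bisection error
```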
Our paper is organized as follows: In Section 2 we consider abstract nonlinear
parameter depending equations of the type
${\cal F}_{\varepsilon}(w)=0.$ (1.21)
Here $\varepsilon>0$ is the parameter. We prove a result on existence and
local uniqueness of a family of solutions $w=w_{\varepsilon}\approx w_{0}$ to
(1.21) with $\varepsilon\approx 0$, where $w_{0}$ is an approximate solution
to (1.21), i.e. an element with ${\cal F}_{\varepsilon}(w_{0})\to 0$ for
$\varepsilon\to 0$, and we estimate the norm of the error
$w_{\varepsilon}-w_{0}$ by the norm of the discrepancy ${\cal
F}_{\varepsilon}(w_{0})$. This type of generalized implicit function theorems
has been successfully applied to singularly perturbed ODEs and PDEs in [5, 6,
7, 16, 20, 22, 23]. Contrary to the classical implicit function theorem it is
not supposed that the linearized operators ${\cal
F}^{\prime}_{\varepsilon}(u)$ converge for $\varepsilon\to 0$ in the uniform
operator norm. And, indeed, in the applications to singularly perturbed
problems as well as to periodic homogenization problems they do not converge
for $\varepsilon\to 0$ in the uniform operator norm (cf. Remark 3.6 below).
Hence, the present paper is a first step (on the ODE level) to create a common
approach to existence, local uniqueness and error estimates for singularly
perturbed problems and for homogenization problems. In Part II [17] we apply
this approach to periodic homogenization for semilinear elliptic PDE systems.
In Section 3 we prove Theorem 1.1 by means of the results of Section 2. For
that reason we transform the boundary value problem (1.1)-(1.2) into the
system (3.3)-(3.4) of integral equations, and for that system of integral
equations we introduce an abstract setting of the type (1.21). For that
abstract setting we have to verify the key assumptions (2.1) and (2.2)-(2.4) of
Theorem 2.1, and we do this in Subsections 3.1 and 3.2, respectively.
Finally, in Section 4 we show that Theorem 1.1 remains true also for
inhomogeneous natural boundary conditions, but not, in general, for
inhomogeneous Neumann boundary conditions. Remark that the difficulties with
inhomogeneous Neumann boundary conditions are well-known already for scalar
linear problems (see, e.g. [2, Remark 1.2.10 and Section 1.7.1] and [26]).
Further, we show how to prove that the assertions of Theorem 1.1 are true also
for two Dirichlet boundary conditions.
## 2 An abstract result of implicit function theorem type
In this section we formulate and prove Theorem 2.1 below.
###### Theorem 2.1
Let be given a Banach space $W$ with norm $\|\cdot\|_{W}$, an open set
$W_{0}\subseteq W$, an element $w_{0}\in W_{0}$ and a family of $C^{1}$-maps
${\cal F}_{\varepsilon}:W_{0}\to W$ with $\varepsilon>0$ as family parameter.
Suppose that
$\|{\cal F}_{\varepsilon}(w_{0})\|_{W}\to 0\mbox{ for }\varepsilon\to 0.$
(2.1)
Further, suppose that there exists $\varepsilon_{0}>0$ such that
$\displaystyle{\cal F}^{\prime}_{\varepsilon}(w_{0})\mbox{ is Fredholm of index zero from $W$ into $W$ for all $\varepsilon\in(0,\varepsilon_{0}]$},$ (2.2)
$\displaystyle\inf\\{\|{\cal F}^{\prime}_{\varepsilon}(w_{0})w\|_{W}:\;\varepsilon\in(0,\varepsilon_{0}],\;\|w\|_{W}=1\\}=:\alpha>0,$ (2.3)
$\displaystyle\sup_{\|w\|_{W}\leq 1}\|({\cal F}^{\prime}_{\varepsilon}(w_{0}+w_{1})-{\cal F}_{\varepsilon}^{\prime}(w_{0}))w\|_{W}\to 0\mbox{ for }\varepsilon+\|w_{1}\|_{W}\to 0.$ (2.4)
Then there exist $\varepsilon_{1}\in(0,\varepsilon_{0}]$ and $\delta>0$ such
that for all $\varepsilon\in(0,\varepsilon_{1}]$ there exists exactly one
$w=w_{\varepsilon}\in W_{0}$ with ${\cal F}_{\varepsilon}(w)=0$ and
$\|w-w_{0}\|_{W}\leq\delta$. Moreover,
$\|w_{\varepsilon}-w_{0}\|_{W}\leq\frac{2}{\alpha}\|{\cal F}_{\varepsilon}(w_{0})\|_{W}.$ (2.5)
Proof Assumptions (2.2) and (2.3) imply that for all
$\varepsilon\in(0,\varepsilon_{0}]$ the operator ${\cal
F}_{\varepsilon}^{\prime}(w_{0})$ is an isomorphism from $W$ onto $W$ and
$\left\|{\cal
F}_{\varepsilon}^{\prime}(w_{0})^{-1}w\right\|_{W}\leq\frac{1}{\alpha}\|w\|_{W}\mbox{
for all }w\in W.$ (2.6)
Hence, the map ${\cal G}_{\varepsilon}:W_{0}\to W$,
${\cal G}_{\varepsilon}(w):=w-{\cal F}_{\varepsilon}^{\prime}(w_{0})^{-1}{\cal
F}_{\varepsilon}(w)$
is well-defined. Obviously, $w$ is a fixed point of ${\cal G}_{\varepsilon}$
if and only if ${\cal F}_{\varepsilon}(w)=0$.
For $r>0$ denote $\mathbb{B}_{r}:=\\{w\in W:\;\|w-w_{0}\|_{W}\leq r\\}.$ We
are going to show that for sufficiently small $\varepsilon>0$ and $r>0$ the
map ${\cal G}_{\varepsilon}$ is strictly contractive from the closed ball
$\mathbb{B}_{r}$ into itself.
In order to verify the strict contractivity of ${\cal G}_{\varepsilon}$ we
take $\varepsilon\in(0,\varepsilon_{0}]$ and $v,w\in W_{0}$ and estimate as
follows:
$\displaystyle\|{\cal G}_{\varepsilon}(v)-{\cal G}_{\varepsilon}(w)\|_{W}=\left\|v-w-{\cal F}^{\prime}_{\varepsilon}(w_{0})^{-1}({\cal F}_{\varepsilon}(v)-{\cal F}_{\varepsilon}(w))\right\|_{W}$
$\displaystyle=\left\|{\cal
F}_{\varepsilon}^{\prime}(w_{0})^{-1}\int_{0}^{1}\left({\cal
F}_{\varepsilon}^{\prime}(w_{0})-{\cal
F}^{\prime}_{\varepsilon}(sv+(1-s)w)\right)ds(v-w)\right\|_{W}$
$\displaystyle\leq\frac{1}{\alpha}\int_{0}^{1}\|\left({\cal
F}_{\varepsilon}^{\prime}(w_{0})-{\cal
F}^{\prime}_{\varepsilon}(sv+(1-s)w)\right)(v-w)\|_{W}ds.$
Here we used (2.6). Because of assumption (2.4) there exist
$\varepsilon_{1}\in(0,\varepsilon_{0}]$ and $r_{0}>0$ such that
$\mathbb{B}_{r_{0}}\subset W_{0}$ and $\|\left({\cal
F}_{\varepsilon}^{\prime}(w_{0})-{\cal
F}^{\prime}_{\varepsilon}(sv+(1-s)w)\right)(v-w)\|_{W}\leq\frac{\alpha}{2}\|v-w\|_{W}$
for all $\varepsilon\in(0,\varepsilon_{1}]$, $s\in[0,1]$ and
$v,w\in\mathbb{B}_{r_{0}}$. Hence,
$\|{\cal G}_{\varepsilon}(v)-{\cal
G}_{\varepsilon}(w)\|_{W}\leq\frac{1}{2}\|v-w\|_{W}\mbox{ for all
}\varepsilon\in(0,\varepsilon_{1}]\mbox{ and }v,w\in\mathbb{B}_{r_{0}}.$ (2.7)
Now, let us show that ${\cal G}_{\varepsilon}$ maps $\mathbb{B}_{r_{0}}$ into
$\mathbb{B}_{r_{0}}$ for all sufficiently small $\varepsilon>0$. Take
$\varepsilon\in(0,\varepsilon_{1}]$ and $w\in\mathbb{B}_{r_{0}}$. Then (2.6)
and (2.7) imply
$\displaystyle\left\|{\cal
G}_{\varepsilon}(w)-w_{0}\right\|_{W}\leq\left\|{\cal
G}_{\varepsilon}(w)-{\cal G}_{\varepsilon}(w_{0})\right\|_{W}+\left\|{\cal
G}_{\varepsilon}(w_{0})-w_{0}\right\|_{W}$
$\displaystyle\leq\frac{1}{2}\left\|w-w_{0}\right\|_{W}+\left\|{\cal
F}_{\varepsilon}^{\prime}(w_{0})^{-1}{\cal
F}_{\varepsilon}(w_{0})\right\|_{W}\leq\frac{r_{0}}{2}+\frac{1}{\alpha}\left\|{\cal
F}_{\varepsilon}(w_{0})\right\|_{W}.$
But assumption (2.1) yields that, if $\varepsilon_{1}$ is taken sufficiently
small, for all $\varepsilon\in(0,\varepsilon_{1}]$ we have $\|{\cal
F}_{\varepsilon}(w_{0})\|_{W}\leq\alpha r_{0}/2$. Hence, for those
$\varepsilon$ we get $\left\|{\cal G}_{\varepsilon}(w)-w_{0}\right\|_{W}\leq
r_{0}$.
Therefore, Banach’s fixed point principle yields the following: For all
$\varepsilon\in(0,\varepsilon_{1}]$ there exists exactly one
$w=w_{\varepsilon}\in\mathbb{B}_{r_{0}}$ with ${\cal F}_{\varepsilon}(w)=0$.
Finally, let us prove (2.5). We take $\varepsilon\in(0,\varepsilon_{1}]$ and
estimate as above:
$\|w_{\varepsilon}-w_{0}\|_{W}\leq\|{\cal G}_{\varepsilon}(w_{\varepsilon})-{\cal G}_{\varepsilon}(w_{0})\|_{W}+\|{\cal G}_{\varepsilon}(w_{0})-w_{0}\|_{W}\leq\frac{1}{2}\|w_{\varepsilon}-w_{0}\|_{W}+\frac{1}{\alpha}\|{\cal F}_{\varepsilon}(w_{0})\|_{W}.$
Hence, (2.5) is true.
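The proof is constructive: it is a simplified Newton iteration whose derivative is frozen at the approximate solution $w_{0}$. A minimal finite-dimensional sketch (with a toy stand-in ${\cal F}_{\varepsilon}(w)=Aw+\varepsilon g(w)-r$, chosen so that $w_{0}$ solves the $\varepsilon=0$ problem) illustrating the error bound (2.5):
```python
import numpy as np

def solve_near(F, J0, w0, tol=1e-12, max_iter=100):
    """Fixed-point iteration G(w) = w - J0^{-1} F(w) of the proof,
    with the derivative frozen at the approximate solution w0."""
    w = w0.copy()
    for _ in range(max_iter):
        step = np.linalg.solve(J0, F(w))
        w = w - step
        if np.linalg.norm(step) < tol:
            break
    return w

A = np.array([[2.0, 1.0], [0.0, 3.0]])
r = np.array([1.0, 1.0])
g = np.sin                                   # any smooth nonlinearity
w0 = np.linalg.solve(A, r)                   # exact solution for eps = 0

for eps in (0.1, 0.01, 0.001):
    F = lambda w, e=eps: A @ w + e * g(w) - r
    J0 = A + eps * np.diag(np.cos(w0))       # F'(w0), frozen during the iteration
    w_eps = solve_near(F, J0, w0)
    print(eps, np.linalg.norm(w_eps - w0))   # decays like O(eps), cf. (2.5)
```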
###### Remark 2.2
In [5, 6, 7, 16, 20, 22, 23] slightly more general versions of Theorem 2.1
are used, i.e. those with ${\cal F}_{\varepsilon}$ mapping one Banach space
into another one, both with $\varepsilon$-depending norms. Moreover, there the
approximate solutions are allowed to be $\varepsilon$-depending, i.e. to be a
family of approximate solutions. Hence, these versions of Theorem 2.1 seem to
be appropriate for applications to homogenization problems with approximate
solutions defined by using correctors of first or higher-order (see, e.g. [1,
9, 12]).
For another result of the type of Theorem 2.1 and its applications to
semilinear elliptic PDE systems with numerically determined approximate
solutions see [3, Theorem 2.1].
## 3 Proof of Theorem 1.1
In this section we will prove Theorem 1.1 by means of Theorem 2.1. Hence, all
assumptions of Theorem 1.1 (i.e. (1.3)-(1.5), existence of the weak solution
$u=u_{0}$ to (1.10), non-existence of weak solutions $u\not=0$ to (1.12)) will
be supposed to be satisfied (without mentioning their use). At the places where
we use the additional assumption (1.14) of Theorem 1.1(ii), we will mention
this.
In order to transform the problem of weak solutions $u\approx u_{0}$ to the
boundary value problem (1.1)-(1.2) into the problem of solutions $w\approx
w_{0}$ to an appropriate operator equation ${\cal F}_{\varepsilon}(w)=0$, we
use the notation
$\mathbb{B}:=\\{u\in C([0,1];\mathbb{R}^{n}):\;\|u\|_{\infty}<1\\}$ (3.1)
and Lemmas 3.1, 3.2 and 3.4 below.
###### Lemma 3.1
For all $\varepsilon>0$ the following is true:
(i) If $u\in\mathbb{B}$ is a weak solution to (1.1)-(1.2) and if $v\in
C([0,1];\mathbb{R}^{n})$ is defined by
$v(x):=a(x,x/\varepsilon,u(x),u^{\prime}(x))\mbox{ for }x\in[0,1],$ (3.2)
then
$\displaystyle u(x)=\int_{0}^{x}b(y,y/\varepsilon,u(y),v(y))dy\mbox{ for
}x\in[0,1],$ (3.3) $\displaystyle
v(x)=-\int_{x}^{1}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))dy\mbox{
for }x\in[0,1].$ (3.4)
(ii) If $(u,v)\in\mathbb{B}\times C([0,1];\mathbb{R}^{n})$ is a solution to
(3.3)-(3.4), then $u$ is a weak solution to (1.1)-(1.2).
Proof (i) Let $u\in\mathbb{B}$ be a weak solution to (1.1)-(1.2). Take an
arbitrary test function $\varphi\in C^{1}([0,1];\mathbb{R}^{n})$ with
$\varphi(0)=0$. Then (1.11) yields that
$\displaystyle 0=\int_{0}^{1}\left(a(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\varphi^{\prime}(x)+f(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\int_{0}^{x}\varphi^{\prime}(y)dy\right)dx$
$\displaystyle=\int_{0}^{1}\left(a(x,x/\varepsilon,u(x),u^{\prime}(x))+\int_{x}^{1}f(y,y/\varepsilon,u(y),u^{\prime}(y))dy\right)\cdot\varphi^{\prime}(x)dx.$
Therefore
$a(x,x/\varepsilon,u(x),u^{\prime}(x))+\int_{x}^{1}f(y,y/\varepsilon,u(y),u^{\prime}(y))dy$
is constant with respect to $x$. In particular, the function $x\mapsto
a(x,x/\varepsilon,u(x),u^{\prime}(x))$ is $C^{1}$-smooth, and
$a(x,x/\varepsilon,u(x),u^{\prime}(x))^{\prime}=f(x,x/\varepsilon,u(x),u^{\prime}(x)).$
(3.5)
If $v\in C([0,1];\mathbb{R}^{n})$ is defined by (3.2), then
$u^{\prime}(x)=b(x,x/\varepsilon,u(x),v(x)).$ (3.6)
Inserting (3.2) and (3.6) into (3.5) we get (3.4). Moreover, the boundary
condition $u(0)=0$ and (3.6) yield (3.3).
(ii) Let $(u,v)\in\mathbb{B}\times C([0,1];\mathbb{R}^{n})$ be a solution to
(3.3)-(3.4). From (3.3) follows $u(0)=0$ and (3.6), and from (3.6) follows
$v(x)=a(x,x/\varepsilon,u(x),u^{\prime}(x))$. Therefore (3.4) implies
$v(1)=a(1,1/\varepsilon,u(1),u^{\prime}(1))=0$ and
$v^{\prime}(x)=a(x,x/\varepsilon,u(x),u^{\prime}(x))^{\prime}=f(x,x/\varepsilon,u(x),b(x,x/\varepsilon,u(x),v(x))).$
If we multiply this scalarly by an arbitrary test function $\varphi\in
C^{1}([0,1];\mathbb{R}^{n})$ with $\varphi(0)=0$, integrate with respect to
$x$ and use the boundary condition $v(1)=0$, then we get (1.11).
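The integral formulation (3.3)-(3.4) also yields a simple numerical illustration of Theorem 1.1. The sketch below solves, by Picard iteration on a grid, the scalar model problem $(c(x/\varepsilon)u^{\prime})^{\prime}=u-1$ with $u(0)=(cu^{\prime})(1)=0$ (hypothetical data, chosen so that the iteration is a contraction) and compares $u_{\varepsilon}$ with the homogenized solution, whose coefficient is the harmonic mean of $c$; the observed error is $O(\varepsilon)$, in accordance with (1.15):
```python
import numpy as np

c = lambda y: 3.0 + np.sin(2 * np.pi * y)   # a(x,y,u,u') = c(y) u', f = u - 1

N = 200001
xs = np.linspace(0.0, 1.0, N)
h = xs[1] - xs[0]

def cum(vals):
    """Trapezoidal cumulative integral x -> integral of vals over [0, x]."""
    return np.concatenate(([0.0], np.cumsum((vals[1:] + vals[:-1]) / 2) * h))

ch = 1.0 / cum(1.0 / c(xs))[-1]             # harmonic mean of c = homogenized coefficient

def solve(inv_c):
    """Picard iteration for u(x) = int_0^x v/c dy, v(x) = -int_x^1 (u-1) dy,
    i.e. the system (3.3)-(3.4) for this model problem."""
    u = np.zeros(N)
    for _ in range(60):                     # contraction factor <= 1/2 here
        w = cum(u - 1.0)
        v = w - w[-1]                       # equals -int_x^1 (u - 1) dy
        u = cum(v * inv_c)
    return u

u0 = solve(np.full(N, 1.0 / ch))            # homogenized solution
for eps in (0.1, 0.01):
    u_eps = solve(1.0 / c(xs / eps))
    print(eps, np.abs(u_eps - u0).max())    # homogenization error, roughly O(eps)
```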
The following lemma is the only tool from classical homogenization theory
which we are going to use. For related results see, e.g. [18, Lemma 1.1], [24,
Proposition 2.2.2], [29, Lemma 3.1]. In Lemma 3.2 below we use the following
notation for maps $g\in C([0,1]\times\mathbb{R};\mathbb{R}^{n})$:
$\displaystyle\omega_{g}(\varepsilon):=\sup\\{\|g(x_{1},y)-g(x_{2},y)\|:\;x_{1},x_{2}\in[0,1],\;y\in\mathbb{R},\;|x_{1}-x_{2}|\leq\varepsilon\\}\mbox{ for }\varepsilon>0,$
$\displaystyle\|g\|_{*}:=\sup\\{\|g(x,y)\|:\;(x,y)\in[0,1]\times\mathbb{R}\\}.$
###### Lemma 3.2
Let be given $g\in C([0,1]\times\mathbb{R};\mathbb{R}^{n})$ such that
$g(x,y+1)=g(x,y)$ for all $x\in[0,1]$ and $y\in\mathbb{R}$. Then for all
$x\in[0,1]$ and $\varepsilon>0$ we have
$\left\|\int_{0}^{x}\left(g(y,y/\varepsilon)-\int_{0}^{1}g(y,z)dz\right)dy\right\|\leq
2\left(\omega_{g}(\varepsilon)+\varepsilon\|g\|_{*}\right)$ (3.7)
and, if the partial derivative $\partial_{x}g$ exists and is continuous,
$\left\|\int_{0}^{x}\left(g(y,y/\varepsilon)-\int_{0}^{1}g(y,z)dz\right)dy\right\|\leq
2\varepsilon\left(\|g\|_{*}+\|\partial_{x}g\|_{*}\right).$ (3.8)
Proof Define
$h(x,y):=g(x,y)-\int_{0}^{1}g(x,z)dz.$
Then $h(x,y+1)=h(x,y)$ and $\int_{y}^{y+1}h(x,z)dz=0$ and
$\omega_{h}(\varepsilon)\leq 2\omega_{g}(\varepsilon)$. Therefore for
$x\in[0,1]$ and $\varepsilon>0$ we have
$\displaystyle\int_{0}^{x}\left(g(y,y/\varepsilon)-\int_{0}^{1}g(y,z)dz\right)dy=\int_{0}^{x}h(y,y/\varepsilon)dy=\varepsilon\int_{0}^{x/\varepsilon}h(\varepsilon y,y)dy$
$\displaystyle=\varepsilon\left(\sum_{j=1}^{[x/\varepsilon]}\int_{j-1}^{j}h(\varepsilon y,y)dy+\int_{[x/\varepsilon]}^{x/\varepsilon}h(\varepsilon y,y)dy\right)$
$\displaystyle=\varepsilon\left(\sum_{j=1}^{[x/\varepsilon]}\int_{j-1}^{j}\left(h(\varepsilon y,y)-h(\varepsilon j,y)\right)dy+\int_{[x/\varepsilon]}^{x/\varepsilon}h(\varepsilon y,y)dy\right),$
where $[x/\varepsilon]$ is the integer part of $x/\varepsilon$, i.e. the
largest integer which is not larger than $x/\varepsilon$. In particular,
$\varepsilon[x/\varepsilon]\leq x\leq 1$.
For $y\in[j-1,j]$ we have that $0\leq\varepsilon(j-y)\leq\varepsilon$ and,
hence, that $\|h(\varepsilon y,y)-h(\varepsilon j,y)\|\leq\omega_{h}(\varepsilon)$.
Therefore
$\left\|\int_{0}^{x}\left(g(y,y/\varepsilon)-\int_{0}^{1}g(y,z)dz\right)dy\right\|\leq\varepsilon\left([x/\varepsilon]\omega_{h}(\varepsilon)+\|h\|_{*}\right)\leq
2\left(\omega_{g}(\varepsilon)+\varepsilon\|g\|_{*}\right),$
i.e. (3.7) is proved.
If $\partial_{x}g$ exists and is continuous, then
$g(x_{1},y)-g(x_{2},y)=(x_{1}-x_{2})\int_{0}^{1}\partial_{x}g(sx_{1}+(1-s)x_{2},y)ds$,
i.e. $\omega_{g}(\varepsilon)\leq\varepsilon\|\partial_{x}g\|_{*}$. Hence, in
that case (3.7) implies (3.8).
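A quick numerical check of the rate in (3.8), with an illustrative smooth $g$ whose cell average vanishes identically:
```python
import numpy as np

g = lambda x, y: (1.0 + x**2) * np.cos(2 * np.pi * y)  # 1-periodic in y, cell mean 0
xs = np.linspace(0.0, 1.0, 400001)
h = xs[1] - xs[0]

def sup_error(eps):
    """max over x of the absolute value of int_0^x g(y, y/eps) dy;
    here the cell average of g vanishes identically."""
    vals = g(xs, xs / eps)
    cum = np.concatenate(([0.0], np.cumsum((vals[1:] + vals[:-1]) / 2) * h))
    return np.abs(cum).max()

for eps in (0.1, 0.01, 0.001):
    print(eps, sup_error(eps) / eps)   # roughly constant ratio, i.e. O(eps), cf. (3.8)
```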
###### Remark 3.3
If $g(x,y)=g_{1}(x)g_{2}(y)$ with $g_{1}\in C([0,1];\mathbb{R}^{n})$ and
$g_{2}\in L^{\infty}(\mathbb{R})$ or, more general, if the map $x\mapsto
g(x,\cdot)$ is continuous from $[0,1]$ into $L^{\infty}(\mathbb{R})$, then the
assertions of Lemma 3.2 remain true.
Similarly to Lemma 3.1 we get that the function $u_{0}$, which by
assumption of Theorem 1.1 is a weak solution to (1.10), and the function $v_{0}\in
C([0,1];\mathbb{R}^{n})$, which is defined by
satisfy
$\left.\begin{array}[]{l}\displaystyle
u_{0}(x)=\int_{0}^{x}b_{0}(y,u_{0}(y),v_{0}(y))dy,\\\ \displaystyle
v_{0}(x)=-\int_{x}^{1}f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))dy\end{array}\right\\}\mbox{
for }x\in[0,1].$ (3.10)
###### Lemma 3.4
For all $\gamma>0$ there exists $\delta>0$ such that for all $\varepsilon>0$,
$u\in\mathbb{B}$ and $v\in C([0,1];\mathbb{R}^{n})$ with (3.4) and
$\varepsilon+\|u-u_{0}\|_{\infty}\leq\delta$ we have
$\|v-v_{0}\|_{\infty}\leq\gamma$.
Proof Take arbitrary $\varepsilon>0$, $u\in\mathbb{B}$ and $v\in
C([0,1];\mathbb{R}^{n})$. Because of (3.4) and (3.10) we have
$\displaystyle v(x)-v_{0}(x)$ $\displaystyle=$
$\displaystyle\int_{x}^{1}\left(f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))-f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))\right)dy$
$\displaystyle=$
$\displaystyle\alpha_{\varepsilon}(x)+\alpha_{\varepsilon,u}(x)+\alpha_{\varepsilon,u,v}(x)$
with
$\displaystyle\alpha_{\varepsilon}(x)$
$\displaystyle:=\int_{x}^{1}\left(f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))-f(y,y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y)))\right)dy,$
$\displaystyle\alpha_{\varepsilon,u}(x)$
$\displaystyle:=\int_{x}^{1}\left(f(y,y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y)))-f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v_{0}(y)))\right)dy,$
$\displaystyle\alpha_{\varepsilon,u,v}(x)$
$\displaystyle:=\int_{x}^{1}\left(f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v_{0}(y)))-f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))\right)dy.$
From (1.9) it follows that $\alpha_{\varepsilon}(x)$ equals
$\int_{x}^{1}\left(\int_{0}^{1}f(y,z,u_{0}(y),b(y,z,u_{0}(y),v_{0}(y)))dz-f(y,y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y))\right)dy,$
and Lemma 3.2 with $g(x,y)=f(x,y,u_{0}(x),b(x,y,u_{0}(x),v_{0}(x)))$ yields
that $\|\alpha_{\varepsilon}\|_{\infty}\to 0$ for $\varepsilon\to 0$. Further,
we have
$\displaystyle\alpha_{\varepsilon,u}(x)=\int_{x}^{1}\int_{0}^{1}\Big{(}\partial_{u}f(y,y/\varepsilon,u_{s}(y),b(y,y/\varepsilon,u_{s}(y),v_{0}(y)))$
$\displaystyle\;\;\;\;\;+\partial_{u^{\prime}}f(y,y/\varepsilon,u_{s}(y),b(y,y/\varepsilon,u_{s}(y),v_{0}(y)))\partial_{u}b(y,y/\varepsilon,u_{s}(y),v_{0}(y))\Big{)}ds\,(u_{0}(y)-u(y))dy$
with $u_{s}(y):=su_{0}(y)+(1-s)u(y)$. Hence,
$\|\alpha_{\varepsilon,u}\|_{\infty}\leq\mbox{const}\|u-u_{0}\|_{\infty},$
where the constant does not depend on $\varepsilon$ and $u$. Finally, we have
$\alpha_{\varepsilon,u,v}(x)=\int_{x}^{1}\int_{0}^{1}\partial_{u^{\prime}}f(y,y/\varepsilon,u(y),b_{s}(y))ds\,(b(y,y/\varepsilon,u(y),v_{0}(y))-b(y,y/\varepsilon,u(y),v(y)))dy$
with
$b_{s}(y):=sb(y,y/\varepsilon,u(y),v_{0}(y))+(1-s)b(y,y/\varepsilon,u(y),v(y))$.
Hence, because of (1.17) there exists a constant $c>0$, which does not depend
on $\varepsilon$, $x$, $u$ and $v$, such that
$\|\alpha_{\varepsilon,u,v}(x)\|\leq c\int_{x}^{1}\|v(y)-v_{0}(y)\|dy.$
It follows that
$\|v(x)-v_{0}(x)\|\leq\|\alpha_{\varepsilon}\|_{\infty}+\|\alpha_{\varepsilon,u}\|_{\infty}+c\int_{x}^{1}\|v(y)-v_{0}(y)\|dy,$
and Gronwall’s inequality yields that
$\|v(x)-v_{0}(x)\|\leq\left(\|\alpha_{\varepsilon}\|_{\infty}+\|\alpha_{\varepsilon,u}\|_{\infty}\right)\exp\left(c(1-x)\right),$
i.e.
$\|v-v_{0}\|_{\infty}\to 0\mbox{ for }\varepsilon+\|u-u_{0}\|_{\infty}\to 0.$
Now we are going to apply Theorem 2.1 in order to solve the boundary value
problem (1.1)-(1.2) with $\varepsilon\approx 0$ and
$\|u-u_{0}\|_{\infty}\approx 0$. We introduce the setting of Theorem 2.1 as
follows:
$\displaystyle
W:=C([0,1];\mathbb{R}^{n})^{2},\;\|(u,v)\|_{W}:=\|u\|_{\infty}+\|v\|_{\infty},\;W_{0}:=\mathbb{B}\times
C([0,1];\mathbb{R}^{n}),\;w_{0}:=(u_{0},v_{0}),$ $\displaystyle{\cal
F}_{\varepsilon}(u,v)=({\cal U}_{\varepsilon}(u,v),{\cal
V}_{\varepsilon}(u,v))$
with
$\displaystyle[{\cal
U}_{\varepsilon}(u,v)](x):=u(x)-\int_{0}^{x}b(y,y/\varepsilon,u(y),v(y))dy$
$\displaystyle[{\cal
V}_{\varepsilon}(u,v)](x):=v(x)+\int_{x}^{1}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))dy.$
Here $\mathbb{B}$ is the open ball in $C([0,1];\mathbb{R}^{n})$, defined in
(3.1), $u_{0}$ is the weak solution to the homogenized boundary value problem
(1.10), which is given by assumption of Theorem 1.1, and $v_{0}$ is defined in
(3.9).
Because of Lemmas 3.1, 3.2 and 3.4 we have the following: If $u\in\mathbb{B}$
is a weak solution to (1.1)-(1.2) with $\varepsilon\approx 0$ and
$\|u-u_{0}\|_{\infty}\approx 0$, then there exists $v\in
C([0,1];\mathbb{R}^{n})$ with $\|v-v_{0}\|_{\infty}\approx 0$ such that ${\cal
F}_{\varepsilon}(u,v)=0$. And if $(u,v)\in W_{0}$ satisfies ${\cal
F}_{\varepsilon}(u,v)=0$ with $\varepsilon\approx 0$ and
$\|u-u_{0}\|_{\infty}+\|v-v_{0}\|_{\infty}\approx 0$, then $u$ is a weak
solution to (1.1)-(1.2). Moreover, if all the assumptions of Theorem 2.1, i.e.
(2.1)-(2.4), are satisfied in the setting introduced above, then Theorem 2.1
yields the assertions of Theorem 1.1(i), in particular (2.5) yields (1.13).
If, moreover,
$\mbox{assumption (1.14) implies that }\|{\cal F}_{\varepsilon}(u_{0},v_{0})\|_{W}=O(\varepsilon)\mbox{ for }\varepsilon\to 0$ (3.11)
in the setting introduced above, then the assertion (2.5) of Theorem 2.1
yields also assertion (1.15) of Theorem 1.1.
Hence, it remains to verify the assumptions (2.1)-(2.4) of Theorem 2.1 and the
assertion (3.11) in the setting introduced above.
### 3.1 Verification of (2.1) and (3.11)
Because of (1.7), (1.9) and (3.10) we have
$[{\cal
U}_{\varepsilon}(u_{0},v_{0})](x)=\int_{0}^{x}\left(\int_{0}^{1}b(y,z,u_{0}(y),v_{0}(y))dz-b(y,y/\varepsilon,u_{0}(y),v_{0}(y))\right)dy$
and
$\displaystyle[{\cal V}_{\varepsilon}(u_{0},v_{0})](x)$
$\displaystyle=\int_{x}^{1}\Big{(}f(y,y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y)))-\int_{0}^{1}f(y,z,u_{0}(y),b(y,z,u_{0}(y),v_{0}(y)))dz\Big{)}dy.$
Hence, Lemma 3.2 with
$g(x,y)=b(x,y,u_{0}(x),v_{0}(x))\mbox{ and
}g(x,y)=f(x,y,u_{0}(x),b(x,y,u_{0}(x),v_{0}(x))),$
respectively, yields for $\varepsilon\to 0$ that
$\|{\cal U}_{\varepsilon}(u_{0},v_{0})\|_{\infty}+\|{\cal
V}_{\varepsilon}(u_{0},v_{0})\|_{\infty}=\left\\{\begin{array}[]{l}o(1),\\\
O(\varepsilon),\mbox{ if (1.14) is satisfied.}\end{array}\right.$
### 3.2 Verification of (2.2)-(2.4)
We have
$[{\cal F}^{\prime}_{\varepsilon}(u,v)](\bar{u},\bar{v})=(\partial_{u}{\cal
U}_{\varepsilon}(u,v)\bar{u}+\partial_{v}{\cal
U}_{\varepsilon}(u,v)\bar{v},\partial_{u}{\cal
V}_{\varepsilon}(u,v)\bar{u}+\partial_{v}{\cal V}_{\varepsilon}(u,v)\bar{v})$
with
$\displaystyle[\partial_{u}{\cal
U}_{\varepsilon}(u,v)\bar{u}](x)=\bar{u}(x)-\int_{0}^{x}\partial_{u}b(y,y/\varepsilon,u(y),v(y))\bar{u}(y)dy,$
$\displaystyle[\partial_{v}{\cal
U}_{\varepsilon}(u,v)\bar{v}](x)=-\int_{0}^{x}\partial_{u^{\prime}}b(y,y/\varepsilon,u(y),v(y))\bar{v}(y)dy,$
$\displaystyle[\partial_{u}{\cal V}_{\varepsilon}(u,v)\bar{u}](x)=\int_{x}^{1}\Big{(}\partial_{u}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))$
$\displaystyle\;\;\;\;+\partial_{u^{\prime}}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))\partial_{u}b(y,y/\varepsilon,u(y),v(y))\Big{)}\bar{u}(y)dy,$
$\displaystyle[\partial_{v}{\cal V}_{\varepsilon}(u,v)\bar{v}](x)=\bar{v}(x)$
$\displaystyle\;\;\;\;+\int_{x}^{1}\partial_{u^{\prime}}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))\partial_{u^{\prime}}b(y,y/\varepsilon,u(y),v(y))\bar{v}(y)dy.$
In order to verify assumption (2.4) of Theorem 2.1 we calculate as follows:
$\displaystyle[(\partial_{u}{\cal U}_{\varepsilon}(u_{0},v_{0})-\partial_{u}{\cal U}_{\varepsilon}(u_{1},v_{1}))\bar{u}](x)$
$\displaystyle=\int_{0}^{x}\big{(}\partial_{u}b(y,y/\varepsilon,u_{1}(y),v_{1}(y))-\partial_{u}b(y,y/\varepsilon,u_{0}(y),v_{0}(y))\big{)}\bar{u}(y)dy.$
But $\partial_{u}b$ is uniformly continuous on
$\\{(x,y,u,v)\in[0,1]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}:\;\|u\|\leq
1,\;\|v\|\leq R\\}$ for all $R>0$. Hence,
$\|\big{(}\partial_{u}b(y,y/\varepsilon,u_{0}(y),v_{0}(y))-\partial_{u}b(y,y/\varepsilon,u_{1}(y),v_{1}(y))\big{)}\bar{u}(y)\|\to
0\mbox{ for }\|u_{0}-u_{1}\|_{\infty}+\|v_{0}-v_{1}\|_{\infty}\to 0$
uniformly with respect to $\varepsilon>0$, $y\in[0,1]$ and
$\|\bar{u}\|_{\infty}\leq 1$. Similarly one can estimate the terms
$(\partial_{v}{\cal U}_{\varepsilon}(u_{0},v_{0})-\partial_{u}{\cal
U}_{\varepsilon}(u_{1},v_{1}))\bar{v}$, $(\partial_{u}{\cal
V}_{\varepsilon}(u_{0},v_{0})-\partial_{u}{\cal
V}_{\varepsilon}(u_{1},v_{1}))\bar{u}$ and $(\partial_{v}{\cal
V}_{\varepsilon}(u_{0},v_{0})-\partial_{u}{\cal
V}_{\varepsilon}(u_{1},v_{1}))\bar{v}$.
Further, for any $(u,v)\in W_{0}$ we have that ${\cal
F}^{\prime}_{\varepsilon}(u,v)-I$ ($I$ is the identity in $W$) is a linear
bounded operator from $W$ into $C^{1}([0,1];\mathbb{R}^{n})^{2}$, where
$C^{1}([0,1];\mathbb{R}^{n})$ is equipped with its usual norm
$\|u\|_{\infty}+\|u^{\prime}\|_{\infty}$. Hence, the Arzelà-Ascoli Theorem
yields that for any $(u,v)\in W_{0}$ the operator ${\cal
F}^{\prime}_{\varepsilon}(u,v)-I$ is compact from $W$ into $W$, and,
therefore, the operator ${\cal F}^{\prime}_{\varepsilon}(u,v)$ is Fredholm of
index zero from $W$ into $W$, i.e. (2.2) is satisfied.
Now, let us verify (2.3).
Suppose that (2.3) is not true, i.e. that it is not true that there exists
$\varepsilon_{0}>0$ such that $\inf\\{\|{\cal
F}_{\varepsilon}^{\prime}(w_{0})w\|_{W}:\;\varepsilon\in(0,\varepsilon_{0}],w\in
W,\|w\|_{W}=1\\}>0$. Then there exist sequences
$\varepsilon_{1},\varepsilon_{2},\ldots>0$ and $u_{1},u_{2},\ldots\in
C([0,1];\mathbb{R}^{n})$ and $v_{1},v_{2},\ldots\in C([0,1];\mathbb{R}^{n})$
such that
$\left.\begin{array}{r}\displaystyle\lim_{k\to\infty}\varepsilon_{k}=0,\\ \displaystyle\lim_{k\to\infty}\|\partial_{u}{\cal U}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}{\cal U}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}\|_{\infty}=0,\\ \displaystyle\lim_{k\to\infty}\|\partial_{u}{\cal V}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}{\cal V}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}\|_{\infty}=0,\end{array}\right\}$
(3.12)
but
$\|u_{k}\|_{\infty}+\|v_{k}\|_{\infty}=1\mbox{ for all }k\in\mathbb{N}.$
(3.13)
Denote
$\displaystyle\bar{u}_{k}(x):=\int_{0}^{x}\Big{(}\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))u_{k}(y)+\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))v_{k}(y)\Big{)}dy,$
(3.14)
$\displaystyle\bar{v}_{k}(x):=\int_{x}^{1}\Big{(}\big{(}\partial_{u}f(y,y/\varepsilon_{k},u_{0}(y),b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y)))$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;+\partial_{u^{\prime}}f(y,y/\varepsilon_{k},u_{0}(y),b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y)))\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\big{)}u_{k}(y)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;+\partial_{u^{\prime}}f(y,y/\varepsilon_{k},u_{0}(y),b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y)))\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))v_{k}(y)\Big{)}dy.$
Then
$\sup_{k\in\mathbb{N}}\big{(}\|\bar{u}^{\prime}_{k}\|_{\infty}+\|\bar{v}^{\prime}_{k}\|_{\infty}\big{)}<\infty.$
(3.15)
Hence, because of the Arzelà-Ascoli Theorem, without loss of generality we may
assume that there exist $\bar{u}_{0},\bar{v}_{0}\in C([0,1];\mathbb{R}^{n})$
such that
$\lim_{k\to\infty}\big{(}\|\bar{u}_{k}-\bar{u}_{0}\|_{\infty}+\|\bar{v}_{k}-\bar{v}_{0}\|_{\infty}\big{)}=0.$
(3.16)
But we have that
$\displaystyle u_{k}=\bar{u}_{k}+\partial_{u}{\cal U}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}{\cal U}_{\varepsilon_{k}}(u_{0},v_{0})v_{k},$ $\displaystyle v_{k}=\bar{v}_{k}-\partial_{u}{\cal V}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}{\cal V}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}.$
Hence, (3.12) yields that
$\lim_{k\to\infty}\big{(}\|u_{k}-\bar{u}_{0}\|_{\infty}+\|v_{k}-\bar{v}_{0}\|_{\infty}\big{)}=0,$
and (3.13) implies that
$\|\bar{u}_{0}\|_{\infty}+\|\bar{v}_{0}\|_{\infty}=1.$ (3.17)
We are going to show that (3.16) and (3.17) lead to a contradiction. Because
of (1.7) we have for all $x\in[0,1]$ that
$\displaystyle\int_{0}^{x}\left(\partial_{u}b_{0}(y,u_{0}(y),v_{0}(y))\bar{u}_{0}(y)+\partial_{u^{\prime}}b_{0}(y,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\right)dy$
$\displaystyle=\int_{0}^{x}\left(\partial_{u}\int_{0}^{1}b(y,z,u_{0}(y),v_{0}(y))dz\;\bar{u}_{0}(y)+\partial_{u^{\prime}}\int_{0}^{1}b(y,z,u_{0}(y),v_{0}(y))dz\;\bar{v}_{0}(y)\right)dy$
$\displaystyle=\int_{0}^{x}\int_{0}^{1}\left(\partial_{u}b(y,z,u_{0}(y),v_{0}(y))\bar{u}_{0}(y)+\partial_{u^{\prime}}b(y,z,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\right)dz\,dy,$
and because of Lemma 3.2 with
$g(x,y)=\partial_{u}b(x,y,u_{0}(x),v_{0}(x))\bar{u}_{0}(x)+\partial_{u^{\prime}}b(x,y,u_{0}(x),v_{0}(x))\bar{v}_{0}(x)$
this equals
$\lim_{k\to\infty}\int_{0}^{x}\left(\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\bar{u}_{0}(y)+\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\right)dy,$
and because of (3.16) this is
$\lim_{k\to\infty}\int_{0}^{x}\left(\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\bar{u}_{k}(y)+\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\bar{v}_{k}(y)\right)dy,$
and, finally, because of (3.14) and (3.16) this equals $\lim_{k\to\infty}\bar{u}_{k}=\bar{u}_{0}$. We end up with
$\bar{u}_{0}(x)=\int_{0}^{x}\Big{(}\partial_{u}b_{0}(y,u_{0}(y),v_{0}(y))\bar{u}_{0}(y)+\partial_{u^{\prime}}b_{0}(y,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\Big{)}dy.$
(3.18)
Similarly one shows that
$\displaystyle\bar{v}_{0}(x)=-\int_{0}^{x}\Big{(}\big{(}\partial_{u}f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;+\partial_{u^{\prime}}f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))\partial_{u}b_{0}(y,u_{0}(y),v_{0}(y))\big{)}\bar{u}_{0}(y)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;+\partial_{u^{\prime}}f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))\partial_{u^{\prime}}b_{0}(y,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\Big{)}dy$
and similarly to Lemma 3.1 it follows that $\bar{u}_{0}$ is a weak solution to the linearized boundary value problem (1.12). Therefore, the assumption of Theorem 1.1 that (1.12) does not have nontrivial solutions implies that $\bar{u}_{0}=0$. Then (3.18) implies that $\partial_{u^{\prime}}b_{0}(x,u_{0}(x),v_{0}(x))\bar{v}_{0}(x)=0$ for all $x\in[0,1]$. But (1.16) yields that
$\partial_{u^{\prime}}b_{0}(x,u_{0}(x),v_{0}(x))v\cdot v\geq\frac{m}{M^{2}}\|v\|^{2}$ (3.19)
for all $x\in[0,1]$ and $v\in\mathbb{R}^{n}$. We get that $\bar{v}_{0}=0$, which contradicts (3.17).
###### Remark 3.5
In the proof above we used the well-known fact (cf., e.g., [19]) that the operations of linearization and homogenization commute.
###### Remark 3.6
It is easy to verify that the linear operators ${\cal
F}^{\prime}_{\varepsilon}(u,v)$ do not converge for $\varepsilon\to 0$ in the
uniform operator norm in ${\cal L}(W)$, in general. For example,
$\int_{0}^{x}\partial_{u}b(y,y/\varepsilon,u(y),v(y))\bar{u}(y)dy$ (with fixed
$u,v\in C([0,1];\mathbb{R}^{n})$) does not converge for $\varepsilon\to 0$
uniformly with respect to $x\in[0,1]$ and $\bar{u}\in C([0,1];\mathbb{R}^{n})$
with $\|\bar{u}\|_{\infty}\leq 1$, in general. But in the proof above a
subsequence of the sequence
$\int_{0}^{x}\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\bar{u}_{k}(y)dy$
converges for $k\to\infty$ uniformly with respect to $x\in[0,1]$, and this is
because of (3.15).
## 4 Other boundary conditions
### 4.1 Inhomogeneous boundary conditions
If the homogeneous natural boundary condition in (1.2) is replaced by a corresponding inhomogeneous one, i.e.
$u(0)=0,\;a(1,1/\varepsilon,u(1),u^{\prime}(1))=u^{1},$ (4.1)
then the weak formulation of the boundary value problem (1.1),(4.1) is as
follows: Find $u\in W^{1,2}((0,1);\mathbb{R}^{n})$ such that $u(0)=0$ and
$\begin{array}{r}\displaystyle\int_{0}^{1}\Big{(}a(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\varphi^{\prime}(x)+f(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\varphi(x)\Big{)}dx=u^{1}\cdot\varphi(1)\\ \mbox{ for all }\varphi\in C^{1}([0,1];\mathbb{R}^{n})\mbox{ with }\varphi(0)=0.\end{array}$
The system (3.3)-(3.4) of integral equations has to be replaced by
$\displaystyle u(x)=\int_{0}^{x}b(y,y/\varepsilon,u(y),v(y))dy,$
$\displaystyle
v(x)=u^{1}-\int_{x}^{1}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))dy,$
and the results of Theorem 1.1 remain unchanged.
But if the inhomogeneous natural boundary condition at $x=1$ in (4.1) is replaced by an inhomogeneous Neumann condition, i.e.
$u(0)=u^{0},\;u^{\prime}(1)=u^{1}\not=0,$ then, in general, there do not exist
solution families $u=u_{\varepsilon}$ which converge pointwise for
$\varepsilon\to 0$. For example, the solution to the scalar linear boundary
value problem
$\left(\frac{u^{\prime}(x)}{2+\sin(2\pi
x/\varepsilon)}\right)^{\prime}=1\mbox{ for
}x\in[0,1],\;u(0)=0,u^{\prime}(1)=u^{1}$ (4.2)
is
$u_{\varepsilon}(x)=\int_{0}^{x}(2+\sin(2\pi
y/\varepsilon))\left(\frac{u^{1}}{2+\sin(2\pi/\varepsilon)}+y-1\right)dy.$
Hence,
$u_{\varepsilon}(1)=-1+\frac{2u^{1}}{2+\sin(2\pi/\varepsilon)}+O(\varepsilon)\mbox{
for }\varepsilon\to 0,$
i.e. $u_{\varepsilon}(1)$ does not converge for $\varepsilon\to 0$ if
$u^{1}\not=0$.
###### Remark 4.1
Consider the scalar linear boundary value problem (4.2) with $u^{1}=0$. Then
$u_{\varepsilon}(1)=\int_{0}^{1}(2+\sin(2\pi
y/\varepsilon))(y-1)dy=-1-\frac{\varepsilon}{2\pi}+O(\varepsilon^{2})\mbox{
for }\varepsilon\to 0,$
and the solution to the corresponding homogenized problem
$\frac{1}{2}u^{\prime\prime}(x)=1$ for $x\in[0,1]$, $u(0)=u^{\prime}(1)=0$ is
$u_{0}(x)=x(x-2)$. Hence,
$u_{\varepsilon}(1)-u_{0}(1)=-\frac{\varepsilon}{2\pi}+O(\varepsilon^{2})\mbox{
for }\varepsilon\to 0,$
i.e. the asymptotic error estimate (1.15) of Theorem 1.1 is sharp.
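The sharpness of this estimate is easy to check numerically. The following minimal sketch (Python with NumPy; the grid size and the sample values of $\varepsilon$ are ad hoc choices, not taken from the text) evaluates the closed-form integral for $u_{\varepsilon}(1)$ with $u^{1}=0$ and compares the error with $-\varepsilon/2\pi$:

```python
import numpy as np

def u_eps_at_1(eps, u1=0.0, n=400001):
    # u_eps(1) = int_0^1 (2 + sin(2*pi*y/eps)) * (u1/(2 + sin(2*pi/eps)) + y - 1) dy
    y = np.linspace(0.0, 1.0, n)
    g = (2.0 + np.sin(2.0 * np.pi * y / eps)) \
        * (u1 / (2.0 + np.sin(2.0 * np.pi / eps)) + y - 1.0)
    h = y[1] - y[0]
    return h * (g.sum() - 0.5 * (g[0] + g[-1]))  # trapezoidal rule

for eps in [0.1, 0.05, 0.025, 0.0125]:
    err = u_eps_at_1(eps) - (-1.0)  # u_0(1) = -1 for the homogenized problem
    print(f"eps={eps:<7} error={err:+.3e}  -eps/(2*pi)={-eps / (2.0 * np.pi):+.3e}")
```

Calling the same routine with $u^{1}\neq 0$ exhibits the non-convergence of $u_{\varepsilon}(1)$ discussed before this remark, since the factor $2+\sin(2\pi/\varepsilon)$ oscillates as $\varepsilon\to 0$.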
### 4.2 Two Dirichlet boundary conditions
If we consider (1.1) with two homogeneous Dirichlet boundary conditions, i.e.
$u(0)=u(1)=0,$ then the weak formulation is as follows: Find $u\in
W^{1,2}((0,1);\mathbb{R}^{n})$ such that $u(0)=u(1)=0$ and
$\left.\begin{array}{r}\displaystyle\int_{0}^{1}\Big{(}a(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\varphi^{\prime}(x)+f(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\varphi(x)\Big{)}dx=0\\ \mbox{ for all }\varphi\in C^{1}([0,1];\mathbb{R}^{n})\mbox{ with }\varphi(0)=\varphi(1)=0,\end{array}\right\}$ (4.3)
and the system (3.3)-(3.4) has to be changed to the system
$\displaystyle u(x)$ $\displaystyle=$
$\displaystyle\int_{0}^{x}b(y,y/\varepsilon,u(y),v(y))dy,$ $\displaystyle
v(x)$ $\displaystyle=$ $\displaystyle
w-\int_{x}^{1}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))dy,$
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\int_{0}^{1}b(x,x/\varepsilon,u(x),v(x))dx$
with unknowns $(u,v,w)\in C([0,1];\mathbb{R}^{n})\times
C([0,1];\mathbb{R}^{n})\times\mathbb{R}^{n}$. This system of integral
equations can be treated by Theorem 2.1 in the following setting:
$W:=C([0,1];\mathbb{R}^{n})^{2}\times\mathbb{R}^{n},\;\|(u,v,w)\|_{W}:=\|u\|_{\infty}+\|v\|_{\infty}+\|w\|,\;W_{0}:=\mathbb{B}\times
C([0,1];\mathbb{R}^{n})\times\mathbb{R}^{n}.$
The approximate solution $w_{0}$ of Theorem 2.1 is the triple
$(u_{0},v_{0},w_{0})\in C([0,1];\mathbb{R}^{n})^{2}\times\mathbb{R}^{n},$
where $u_{0}$ is the given weak solution of the homogenized problem, i.e.
$u_{0}\in W^{1,2}((0,1);\mathbb{R}^{n})$ with $\|u_{0}\|_{\infty}<1$ and
$u_{0}(0)=u_{0}(1)=0$ and
$\begin{array}{r}\displaystyle\int_{0}^{1}\Big{(}a_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\cdot\varphi^{\prime}(x)+f_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\cdot\varphi(x)\Big{)}dx=0\\ \mbox{ for all }\varphi\in C^{1}([0,1];\mathbb{R}^{n})\mbox{ with }\varphi(0)=\varphi(1)=0,\end{array}$
and $v_{0}(x):=a_{0}(x,u_{0}(x),u_{0}^{\prime}(x))$, $w_{0}:=v_{0}(1)$. The
homogenized vector functions $a_{0}$ and $f_{0}$ are defined as in (1.8) and
(1.9). Finally, the maps ${\cal F}_{\varepsilon}:W\to W$ are defined by
${\cal F}_{\varepsilon}(u,v,w):=({\cal U}_{\varepsilon}(u,v),{\cal
V}_{\varepsilon}(u,v,w),{\cal W}_{\varepsilon}(u,v))$
with
$\displaystyle[{\cal
U}_{\varepsilon}(u,v)](x):=u(x)-\int_{0}^{x}b(y,y/\varepsilon,u(y),v(y))dy,$
$\displaystyle[{\cal
V}_{\varepsilon}(u,v,w)](x):=v(x)-w+\int_{x}^{1}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))dy,$
$\displaystyle{\cal
W}_{\varepsilon}(u,v):=\int_{0}^{1}b(x,x/\varepsilon,u(x),v(x))dx.$
As in Subsection 3.1 it follows that
$\displaystyle[{\cal
U}_{\varepsilon}(u_{0},v_{0})](x)=\int_{0}^{x}\Big{(}b_{0}(y,u_{0}(y),v_{0}(y))-b(y,y/\varepsilon,u_{0}(y),v_{0}(y))\Big{)}dy,$
$\displaystyle[{\cal V}_{\varepsilon}(u_{0},v_{0},w_{0})](x)$
$\displaystyle=\int_{x}^{1}\Big{(}f(y,y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y)))-f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))\Big{)}dy,$
and, hence, for $\varepsilon\to 0$,
$\|{\cal U}_{\varepsilon}(u_{0},v_{0})\|_{\infty}+\|{\cal V}_{\varepsilon}(u_{0},v_{0},w_{0})\|_{\infty}=\left\{\begin{array}{l}o(1),\\ O(\varepsilon),\mbox{ if (\ref{diffass1}) is satisfied.}\end{array}\right.$
Further, we have
$\displaystyle{\cal W}_{\varepsilon}(u_{0},v_{0})$ $\displaystyle=$ $\displaystyle\int_{0}^{1}b(x,x/\varepsilon,u_{0}(x),v_{0}(x))dx$
$\displaystyle=$ $\displaystyle\int_{0}^{1}\big{(}b(x,x/\varepsilon,u_{0}(x),v_{0}(x))-b_{0}(x,u_{0}(x),v_{0}(x))\big{)}dx$
and, hence, for $\varepsilon\to 0$,
$\|{\cal W}_{\varepsilon}(u_{0},v_{0})\|=\left\{\begin{array}{l}o(1),\\ O(\varepsilon),\mbox{ if (\ref{diffass1}) is satisfied.}\end{array}\right.$
Therefore, the assumptions (2.1) and (3.11) are satisfied in the setting
introduced above.
Finally, let us verify the assumption (2.4) of Theorem 2.1 in the setting
introduced above. Suppose that (2.4) is not true. Then there exist sequences
$\varepsilon_{1},\varepsilon_{2},\ldots>0$ and $u_{1},u_{2},\ldots\in
C([0,1];\mathbb{R}^{n})$ and $v_{1},v_{2},\ldots\in C([0,1];\mathbb{R}^{n})$
and $w_{1},w_{2},\ldots\in\mathbb{R}^{n}$ such that $\varepsilon_{k}\to 0$ for $k\to\infty$ and
$\displaystyle\lim_{k\to\infty}\|\partial_{u}{\cal U}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}{\cal U}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}\|_{\infty}=0,$ (4.4)
$\displaystyle\lim_{k\to\infty}\|\partial_{u}{\cal V}_{\varepsilon_{k}}(u_{0},v_{0},w_{0})u_{k}+\partial_{v}{\cal V}_{\varepsilon_{k}}(u_{0},v_{0},w_{0})v_{k}+\partial_{w}{\cal V}_{\varepsilon_{k}}(u_{0},v_{0},w_{0})w_{k}\|_{\infty}=0,$ (4.5)
$\displaystyle\lim_{k\to\infty}\|\partial_{u}{\cal W}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}{\cal W}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}\|=0,$
but
$\|u_{k}\|_{\infty}+\|v_{k}\|_{\infty}+\|w_{k}\|=1\mbox{ for all
}k\in\mathbb{N}.$ (4.6)
As in Subsection 3.2 one can show that without loss of generality we can
assume that there exist $\bar{u}_{0},\bar{v}_{0}\in C([0,1];\mathbb{R}^{n})$
and $\bar{w}_{0}\in\mathbb{R}^{n}$ such that
$\lim_{k\to\infty}\left(\|u_{k}-\bar{u}_{0}\|_{\infty}+\|v_{k}-\bar{v}_{0}\|_{\infty}+\|w_{k}-\bar{w}_{0}\|\right)=0$
and that $\bar{u}_{0}$ is a solution to the linearization (in $u_{0}$) of (4.3), so that, by assumption, $\bar{u}_{0}=0$. Hence, from (4.4) it follows that for all
$x\in[0,1]$ we have
$\displaystyle 0$ $\displaystyle=$ $\displaystyle\lim_{k\to\infty}[\partial_{v}{\cal U}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}](x)=\lim_{k\to\infty}\int_{0}^{x}\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))v_{k}(y)dy$
$\displaystyle=$ $\displaystyle\int_{0}^{x}\partial_{u^{\prime}}b_{0}(y,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)dy.$
It follows that $\partial_{u^{\prime}}b_{0}(x,u_{0}(x),v_{0}(x))\bar{v}_{0}(x)=0$ for all $x\in[0,1]$ and, hence, (3.19) yields that $\bar{v}_{0}=0$. Therefore, since $[\partial_{w}{\cal V}_{\varepsilon_{k}}(u_{0},v_{0},w_{0})w](x)=-w$, (4.5) implies that
$0=\lim_{k\to\infty}\|\partial_{w}{\cal V}_{\varepsilon_{k}}(u_{0},v_{0},w_{0})w_{k}\|_{\infty}=\lim_{k\to\infty}\|w_{k}\|=\|\bar{w}_{0}\|,$
which contradicts (4.6).
## References
* [1] G. Allaire, M. Briane, M. Vanninathan, A comparison between two-scale asymptotic expansions and Bloch wave expansions for the homogenization of periodic structures. SeMA 73 (2016), 237-259.
* [2] A. Bensoussan, J.L. Lions, G. Papanicolaou, Asymptotic Analysis for Periodic Structures. Studies in Mathematics and its Applications vol. 3, North-Holland, 1978.
* [3] M. Breden, R. Castelli, Existence and instability of steady states for a triangular cross-diffusion system: a computer-assisted proof. J. Differ. Equations 264 (2018), 6418–6458.
* [4] R. Bunoiu, R. Precup, Localization and multiplicity in the homogenization of nonlinear problems. Adv. Nonlinear Anal. 9 (2020), 292–304.
* [5] V.F. Butuzov, N.N. Nefedov, O.E. Omel’chenko, L. Recke, Time-periodic boundary layer solutions to singularly perturbed parabolic problems. J. Differ. Equations 262 (2017), 4823–4862.
* [6] V.F. Butuzov, N.N. Nefedov, O.E. Omel’chenko, L. Recke, Boundary layer solutions to singularly perturbed quasilinear systems. Discrete Cont. Dyn. Syst., Series B 27 (2022), 4255–4283.
* [7] V.F. Butuzov, N.N. Nefedov, O.E. Omel’chenko, L. Recke, K.R. Schneider, An implicit function theorem and applications to nonsmooth boundary layers. In: Patterns of Dynamics, ed. by P. Gurevich, J. Hell, B. Sandstede, A. Scheel, Springer Proc. in Mathematics & Statistics vol. 205, Springer, 2017, 111–127.
* [8] G. Cardone, S. E. Pastukhova, V. V. Zhikov, Some estimates for non-linear homogenization. Rend. Accad. Naz. Sci. XL, Mem. Mat. Appl. 24 (2005), 101-110.
* [9] K. D. Cherednichenko, V. P. Smyshlyaev, On full two-scale expansion of the solutions of nonlinear periodic rapidly oscillating problems and higher-order homogenised variational problems. Arch. Rational Mech. Anal.174 (2004), 385–442.
* [10] D. Cioranescu, P. Donato, An Introduction to Homogenization. Oxford Lecture Series in Mathematics and its Applications vol. 17, Oxford University Press, 1999.
* [11] N. Fusco, G. Moscariello, On the homogenization of quasilinear divergence structure operators. Ann. Mat. Pura Appl., IV. Ser. 146 (1987), 1-13.
* [12] Tebib Hawa, Chacha Djamal Ahmed, Third-order corrections in periodic homogenization for elliptic problem. Mediterr. J. Math. 18 (2021), Paper No. 135.
* [13] M. Lanza de Cristoforis, P. Musolino, Two-parameter homogenization for a nonlinear periodic Robin problem for a Poisson equation: a functional analytic approach. Rev. Mat. Complut. 31 (2018), 63–110.
* [14] M. Lanza de Cristoforis, P. Musolino, Asymptotic behaviour of the energy integral of a two-parameter homogenization problem with nonlinear periodic Robin boundary conditions. Proc. Edinb. Math. Soc. II. Ser. 62 (2019), 985–1016.
* [15] Wen Ming He, Jun Zhi Cui, Error estimate of the homogenization solution for elliptic problems with small periodic coefficients on $L^{\infty}(\Omega)$. Science China Mathematics 53 (2010), 1231–1252.
* [16] N.N. Nefedov, A.O. Orlov, L. Recke, K.R. Schneider, Nonsmooth regular perturbations of singularly perturbed problems. J. Differ. Equations 375 (2023), 206–236.
* [17] N.N. Nefedov, L. Recke, A common approach to singular perturbation and homogenization II: Semilinear elliptic systems. arXiv:2309.14108.
* [18] S. Neukamm, An introduction to the qualitative and quantitative theory of homogenization. Interdisciplinary Information Sciences 22 (2016),147–186.
* [19] S. Neukamm, M. Schäffner, Quantitative homogenization in nonlinear elasticity for small loads. Arch. Ration. Mech. Anal. 230 (2018), 343–396.
* [20] O.E. Omel’chenko, L. Recke, Existence, local uniqueness and asymptotic approximation of spike solutions to singularly perturbed elliptic problems. Hiroshima Math. J. 45 (2015), 35–89.
* [21] S. E. Pastukhova, Operator estimates in nonlinear problems of reiterated homogenization. Proc. Steklov Inst. Math. 261 (2008), 214-228.
* [22] L. Recke, Use of very weak approximate boundary layer solutions to spatially nonsmooth singularly perturbed problems. J. Math. Anal. Appl. 506 (2022), Article ID 125552.
* [23] L. Recke, O.E. Omel’chenko, Boundary layer solutions to problems with infinite dimensional singular and regular perturbations. J. Differ. Equations 245 (2008), 3806–3822.
* [24] Zongwei Shen, Periodic Homogenization of Elliptic Systems. Operator Theory: Advances and Applications vol. 269, Birkhäuser, 2018.
* [25] Zongwei Shen, Jinping Zhuge, Convergence rates in periodic homogenization of systems of elasticity. Proc. Am. Math. Soc. 145 (2017), 1187–1202.
* [26] T. Suslina, Homogenization of the Neumann problem for elliptic systems with periodic coefficients. SIAM J. Math. Anal. 45 (2013), 3453–3498.
* [27] Li Wang, Qiang Xu, Peihao Zhao, Quantitative estimates on periodic homogenization of nonlinear elliptic operators. arXiv:1807.10865v1 (2018).
* [28] Qiang Xu, Convergence rates for general elliptic homogenization problems in Lipschitz domains. SIAM J. Math. Anal. 48 (2016), 3742–3788.
* [29] Shixin Xu, Xingye Yue, Changrong Zhang, Homogenization: in mathematics or physics? Discrete Contin. Dyn. Syst. Series S 9 (2016), 1575–1590.
* [30] Yao Xu, Weisheng Niu, Homogenization of elliptic systems with stratified structure revisited. Commun. Partial Differ. Equations 45 (2020), 655–689.
# Crystalline phases at finite winding densities in a quantum link ladder
Paolo Stornati <EMAIL_ADDRESS> (ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, Av. Carl Friedrich Gauss 3, 08860 Castelldefels (Barcelona), Spain)
Philipp Krah (TU Berlin, Institute of Mathematics, Straße des 17. Juni 136, 10623 Berlin, Germany)
Karl Jansen (Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany)
Debasish Banerjee (Theory Division, Saha Institute of Nuclear Physics, 1/AF Bidhan Nagar, Kolkata 700064, India; Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India)
###### Abstract
Condensed matter physics of gauge theories coupled to fermions can exhibit a rich phase structure, but is nevertheless very difficult to study in Monte Carlo simulations when it is afflicted by a sign problem. As an alternate approach, we use tensor network methods to explore the finite density physics of Abelian gauge theories without dynamical matter. As a concrete example, we consider the $U(1)$ gauge invariant quantum link ladder with spin-$\frac{1}{2}$ gauge fields in an external electric field, which causes the winding electric fluxes to condense in the ground state. We demonstrate how
the electric flux tubes arrange themselves in the bulk giving rise to
crystalline patterns, whose period can be controlled by tuning the external
field. We propose observables to detect the transitions in ground state
properties not only in numerical experiments, but also in future cold-atom
realizations. A systematic procedure for reaching the thermodynamic limit, as
well as extending the studies from ladders to extended geometries is outlined.
## Introduction.–
Finite chemical potentials are expected to give rise to novel phases and
correlations otherwise absent in the ground state of quantum field theories or
quantum many-body systems. Two physically relevant examples are Quantum
Chromodynamics (QCD) and the Hubbard model. Markov Chain Monte Carlo (MCMC)
methods to solve QCD regulated on the lattice can explain properties of
hadrons, such as their masses, binding energies, and scattering cross-
sections. At finite baryon densities, $\mu_{B}$, relevant, e.g., for the description of the interior of neutron stars or the very early universe, the MCMC methods suffer from the infamous sign problem.
other hand, is a pedagogical system to describe a variety of phases of
strongly correlated electrons. At finite doping, it is expected to host high-
temperature superconducting phases and provide a model for many physically
interesting materials. Once again, the regime of non-zero doping is difficult
to investigate numerically using Monte Carlo methods due to the sign problem.
Finite density physics of scalar and fermionic theories in various space-time dimensions has been extensively investigated [1, 2, 3, 4, 5, 6, 7]. We extend such studies, which dealt with point particles, to pure gauge theories without dynamical matter fields, which contain loop operators. The simplest scenario is a $U(1)$ Abelian lattice gauge theory in a finite volume and in 2+1 dimensions,
where gauge-invariant winding electric flux strings can be excited by coupling
a chemical potential to each of the global $U(1)$ centre-symmetry generators.
Each sector is labelled by a set of integers
$(\mathbb{Z}_{1},\mathbb{Z}_{2})$, indicating the number of windings in a
specified spatial direction. Moreover, these sectors are topological in
nature, and states in a given winding number sector cannot be smoothly
deformed to another sector. Further, the electric flux tubes are non-local
extended excitations, unlike the point-like bosonic or fermionic particles,
and their properties at finite densities could in principle be considerably
different.
Flux tubes have played a prominent role in the description of various
physical phenomena. Nielsen and Olesen [8] introduced the field theory of a
vortex-line model, also identified with dual strings. These are flux tubes,
similar to the ones that occur in the theory of type-II superconductors, and
are responsible for most of the low-energy physics in the strong coupling
limit. Classical and semi-classical analysis involving electric fluxes
interacting with a gas of monopoles, giving rise to confinement have been
discussed in [9, 10]. Non-abelian generalizations of such operators, called
disorder operators, were introduced by ’t Hooft to analyse the phases of non-
Abelian gauge theories [11].
We consider the condensed matter physics of these flux tubes in
2+1-dimensional U(1) gauge theory. Previous studies have used the path
integral formulation by either exploiting the dual representation of Abelian
lattice gauge theories [12, 13], or by using the multilevel algorithm [14] and
explored properties such as the profile of the electric flux lines connecting
static charges, or the variation of the potential between two charges with
increasing the representation of the charges. Among other things, this
provides valuable insights about the attractive or repulsive nature of the
flux tubes.
In this article, we use the Hamiltonian formulation of a $U(1)$ quantum link
ladder (QLL) [15]. This theory is known to have novel crystalline confined
phases which carry fractional electric flux excitations [16], possess
anomalously localized excited states [17], and are the building blocks of
spin-ice compounds [18, 19]. While it is known how to simulate the theory with
an improved cluster algorithm at zero and finite temperature [20], this method
has not been extended to deal with the scenario at finite winding chemical
potential. Instead, we use tensor network methods (see for review [21]) to
perform an ab-initio study of the system at finite winding density. Thanks to
the rapid development of quantum simulators, the key elements for realizing
this microscopic model on digital and analogue quantum computers are already
available [22, 23, 24]. The finite density physics investigated in this article is ideally suited to be observed in a quantum computing setup. The open boundaries and
gauge invariance realized with quantum spin operators are very natural for
quantum simulators.
Figure 1: Ladder geometry of the lattice. The periodicity in $\hat{y}$ is
indicated by the dashed lines. Two flippable plaquettes ($\circlearrowleft$,
$\circlearrowright$) and a non flippable plaquette ($\ncirclearrowright$) are
also shown. The dotted lines indicate the links which need to be summed to
obtain the $x$\- and $y$-windings.
## The $U(1)$ quantum link ladder.–
To illustrate our ideas, we consider the setup of the $U(1)$ QLL with the
gauge fields represented by quantum spins in the spin-$\frac{1}{2}$
representation on a rectangular lattice $L_{x}\times L_{y}$, with $L_{y}=2$
and $L_{x}=6,\dots,64$, illustrated in Figure 1. Each link degree of freedom
has a two-dimensional Hilbert space, and the gauge field operator raises (or
lowers) the electric flux basis state:
$U_{r,\hat{i}}=S_{r,\hat{i}}^{+},\leavevmode\nobreak\
U^{\dagger}_{r,\hat{i}}=S_{r,\hat{i}}^{-},\leavevmode\nobreak\
E_{r,\hat{i}}=S_{r,\hat{i}}^{z}$. The Hamiltonian consists of two types of
plaquette operators:
$\mathcal{H}_{\square}=-J\sum_{\square}(U_{\square}+U^{\dagger}_{\square})+\lambda\sum_{\square}(U_{\square}+U^{\dagger}_{\square})^{2}\,,$
(1)
where $U_{\square}=U_{r,\hat{i}}U_{r+\hat{i},\hat{j}}U^{\dagger}_{r+\hat{j},\hat{i}}U^{\dagger}_{r,\hat{j}}$.
One could have added the square of the electric field energy
$\sum_{r,\hat{i}}E^{2}_{r,\hat{i}}$, but for the spin-$\frac{1}{2}$
representation, this is a trivial constant and can be neglected. As shown in
Figure 1, the first operator flips any flippable plaquette, while the second
operator counts the total number of flippable plaquettes. Only two of the 16
states on a plaquette are non-trivially acted upon by the plaquette operators.
The reduction in the number of physical states is due to a local $U(1)$
symmetry, generated by the Gauss law
$G_{r}=\sum_{\hat{i}=\hat{x},\hat{y}}\left(E_{r-\hat{i},\hat{i}}-E_{r,\hat{i}}\right)=\sum_{\hat{i}=\hat{x},\hat{y}}\left(S^{z}_{r-\hat{i},\hat{i}}-S^{z}_{r,\hat{i}}\right).$
(2)
Physical states satisfy $G_{r}\ket{\psi}=0$, which implies the absence of any
charge on the lattice. In addition, the model has several global symmetries: the lattice translation symmetry (by one lattice spacing), the reflection symmetry and the rotation symmetry. There is also the $Z_{2}$ charge conjugation symmetry: $U\rightarrow U^{\dagger},E\rightarrow-E$. However, the main objects
of our interest are the $U(1)^{2}$ global winding number symmetries, generated
by the operators:
$W_{x}=\frac{1}{2L_{y}}\sum_{r}S^{z}_{r,\hat{y}}\quad\text{and}\quad
W_{y}=\frac{1}{2L_{x}}\sum_{r}S^{z}_{r,\hat{x}}.$ (3)
where the sum over $r$ runs over all lattice sites. These operators commute
with the Hamiltonian and thus classify the eigenstates in terms of the number
of times the flux loops wind the system either along the $x$\- or the
$y$-direction. Therefore, it is natural to couple chemical potentials with
strengths $\mu_{x},\mu_{y}$ to the Hamiltonian and extend the full Hamiltonian
as: $\mathcal{H}={\mathcal{H}}_{\square}-\mu_{x}W_{x}-\mu_{y}W_{y}$.
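To make the plaquette terms concrete, the following minimal sketch (a NumPy illustration written for this text, with the four links of a single plaquette ordered bottom, right, top, left) verifies the statement above that only two of the sixteen single-plaquette states are acted upon non-trivially:

```python
import numpy as np

# Spin-1/2 link operators in the electric-flux basis {E=+1/2, E=-1/2}.
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # S^+ = U
sm = sp.T                                 # S^- = U^dagger

def kron4(a, b, c, d):
    return np.kron(np.kron(a, b), np.kron(c, d))

# U_box = S+_bottom S+_right S-_top S-_left on the 16-dimensional plaquette space.
U_box = kron4(sp, sp, sm, sm)
flip = U_box + U_box.T        # structure of the -J (U + U^dagger) term
count = flip @ flip           # structure of the lambda (U + U^dagger)^2 term

# (U + U^dagger)^2 projects onto the flippable states, so its trace counts them.
print(int(round(np.trace(count))))  # -> 2
```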
The windings $W_{x,y}$ are good quantum numbers for periodic boundary conditions. However, when using open boundary conditions (as we impose in the longer direction, since we use matrix product states (MPS) in our calculations), one can show that an external field $(h_{x},h_{y})$ that couples to the $x$-links and $y$-links, respectively, serves the same purpose, keeping $W_{x,y}$ good quantum numbers. With the external field, there is a non-trivial contribution from the kinetic energy term: $\sum_{r,\hat{x}}(E_{r,\hat{x}}-h_{x})^{2}+\sum_{r,\hat{y}}(E_{r,\hat{y}}-h_{y})^{2}=-2h_{x}\sum_{r,\hat{x}}E_{r,\hat{x}}-2h_{y}\sum_{r,\hat{y}}E_{r,\hat{y}}+{\rm const}\propto-h_{x}W_{y}-h_{y}W_{x}+{\rm const}$, which is equivalent to coupling the system with $\mu_{x,y}$. We will use the latter notation for the rest of the article.
(Plot: $\langle W_{x}\rangle/L_{x}$ versus $\mu_{x}$ for $L_{x}=4,8,16,32,64$.)
Figure 2: Staircase structure of the winding numbers $\langle W_{x}\rangle$
with increasing $\mu$. The plateaux correspond to ground states where the
winding flux remains fixed as $\mu$ is varied. In the thermodynamic limit, the
curve becomes continuous. For large $\mu$, the curve saturates just as for fermions (or hard-core bosons).
## Numerical methods.–
We begin by noting that the model considered here has a rich ground state phase diagram [16] and realizes novel crystalline confined phases. The physics of excited states has revealed the existence of quantum scar states and atypical real-time dynamics [17]. While the former study used an efficient cluster Monte-Carlo algorithm, the latter used large scale exact diagonalization (ED). In this work, we aim to go for system sizes beyond the reach of ED, but efficient algorithms at finite $\mu$ are non-trivial to construct. While the existing cluster algorithms can update all sectors at finite temperatures, it is unclear how to extend this algorithm to finite $\mu$. Therefore, we use the density matrix renormalization group (DMRG) on MPS states to simulate the ground state phases with increasing values of $\mu=\sqrt{\mu_{x}^{2}+\mu_{y}^{2}}$. We keep $\mu_{y}=0$ throughout the calculations to ensure that there is no condensation of strings in the $y$-direction.
## Condensation of strings.–
The effect of increasing $\mu_{x}$ on various system sizes is shown in Figure 2. The more familiar examples of condensation phenomena are known from bosons and fermions, which are point particles. We notice that with flux strings, too, one has the "silver blaze" problem, in which the ground state is unaffected by the chemical potential until a threshold value $\mu_{\mathrm{c}}(L_{x})$ is reached, after which the vacuum becomes unstable to the creation of net flux strings periodically winding around $L_{y}$. On the smaller lattices, one can clearly observe the resulting step-like structure, with each step indicating the number of winding strings that have condensed in the vacuum. Plotted in terms of the winding density, we notice the smooth approach to the thermodynamic limit in the data for lattices up to $L_{x}=64$ (see Figure 2). Note in particular that both the threshold chemical potential, $\mu_{\mathrm{c}}(L_{x})$, at which the condensation phenomenon starts and the saturation chemical potential $\mu_{s}(L_{x})$ have well-defined thermodynamic limits. In Figure 3 we show the behaviour of $\mu_{\mathrm{c}}(L_{x})$ with increasing volume. It is interesting to note that the finite volume dependence is very well described by the same formula that governs the finite-volume energy of a massive particle [25]. We note that the step behaviour of the magnetization with an external magnetic field at zero temperature is well known for frustrated spin systems [26]. Recently, a similar behaviour has been reported for ladder Rydberg systems [27].
Figure 3: Finite size dependence of
$\mu_{\mathrm{c}}(L_{x})=a\,\mathrm{exp}(-bL_{x})+\mu_{\mathrm{c}}^{\infty}$
on $L_{x}$. From the fit we determine: $\mu_{\mathrm{c}}^{\infty}=0.269$. The
error bars are the magnitude of the finite step $\Delta_{\mu_{x}}=0.0025$
taken to identify the phase transition point.
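The fit in Figure 3 can be reproduced in a few lines; the numbers below are synthetic stand-ins with roughly the right magnitudes, not our DMRG data:

```python
import numpy as np
from scipy.optimize import curve_fit

def mu_c(L, a, b, mu_inf):
    return a * np.exp(-b * L) + mu_inf

L = np.array([4.0, 8.0, 16.0, 32.0, 64.0])
mu = mu_c(L, 0.12, 0.15, 0.269)  # toy "data" generated from the model itself

popt, _ = curve_fit(mu_c, L, mu, p0=(0.1, 0.1, mu[-1]))
print(popt)  # recovers (a, b, mu_c^infinity), with mu_c^infinity ~ 0.269
```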
While we have demonstrated the thermodynamic limit for $L_{y}=2$ ladders, more
work is essential to extend the results to other geometries. In particular,
the 2-d system can be thought of as a sequence of ladders with increasing
$L_{y}$ at each step. At each fixed $L_{y}$, we can first take the
$L_{x}\to\infty$ limit. Thus, strings that are in general non-local in $L_{y}$
can condense in an infinitely wide ladder. For a confining theory, increasing
$L_{y}\to\infty$ is expected to yield $\mu_{c}(L_{x}\rightarrow\infty,L_{y})$
that increases linearly with $L_{y}$. We postpone the demonstration of the thermodynamic limit of larger ladders to a future study, and turn to understanding the nature of the phases that are realized in the ground states
at finite density.
## Crystalline structures.–
As we demonstrate now, once the winding strings start condensing in the ground
state, they modulate existing crystalline properties. At $\mu_{x}=0$ and
$\lambda=-1$, the ground state breaks both translation invariance and charge
conjugation spontaneously [16]. The novel feature at finite $\mu_{x}$ is the
repulsion of condensed strings in the $x$-direction and their subsequent
arrangement in periodic intervals. This necessarily modulates the pattern of
electric fields, $E_{r,\hat{i}}$, from the zero density case.
(Panels: $\langle E_{x,\hat{i}}\rangle$ versus $x$ at $\mu_{x}=0.313$, $\mu_{x}=1.3$, and $\mu_{x}=1.91$.)
Figure 4: Vertical electric field, $\braket{E_{x,\hat{i}}}$, for the
$L_{x}=64$ lattice for three different regimes of winding density. The dashed
lines correspond to the upper rung and solid lines to the electric field on
the lower rung of the ladder.
In Figure 4, we show the spatial distribution of the fluxes in the $y$-direction, $E_{x,\hat{i}}$, at three different $\mu_{x}$ values for the largest lattice $L_{x}=64$, representative of three distinct regimes. We call these the dilute gas regime, the half-filled regime, and the close-to-saturation regime.
The first regime occurs when the system has just started to condense isolated strings and can be treated as a dilute gas of strings. The top
panel of Figure 4 at $\mu_{x}=0.313$ illustrates this case. The three regions where $\braket{E_{x,\hat{i}}}\approx 0$ mark the locations of the winding strings wrapping along the $y$-direction. We infer that the preference of the
strings to stay as far away from each other as possible is indicative of their
repulsive interaction. Moreover, in between the location of the fluxes, the
$\braket{E_{x,\hat{i}}}$ displays a regular oscillatory pattern, as also
expected for $\mu_{x}=0$. This arrangement of the fluxes maximizes the total
number of flippable plaquettes, as preferred by the $\lambda=-1$ term in the
Hamiltonian.
On increasing the filling fraction of the winding density, we notice that the long-wavelength modulations of the electric flux disappear. As shown in the representative middle panel, for $\mu_{x}=1.3$, the long-range modulations of $\braket{E_{x,\hat{i}}}$ disappear. The short range oscillations of the horizontal fluxes are still present, with twice the period of the previous case: the dashed and the solid lines take their maximum positive and negative values ($\approx\pm 0.25$) 16 times. This regime corresponds to the half-filling of winding strings, now distributed evenly through the system, removing traces of previous spatial modulations. Making the system denser causes one to approach the saturation regime, where the electric fields further rearrange to produce a smooth coherent oscillation. The bottom panel in Figure 4, for $\mu_{x}=1.91$, shows the coherent oscillations for $L_{x}=64$ in this regime, spread over 12-15 lattice spacings.
We can also understand the physical properties from the sum of the electric fields on the vertical links, $E_{y,\hat{j}}$, which provides the analog of the "particle number density". Following our previous discussions, we also expect these profiles to show modulations, which are plotted in Figure 5(a) for our biggest lattice $L_{x}=64$. Three distinct regimes are also visible in this plot. The set of blue curves represents the regime where the system has just started to condense isolated strings. It is clear that as $\mu_{x}$ is slowly increased, the winding strings condense in such a way as to maintain maximal separation from each other and from the highly polarized boundaries. The first such string excitation sits in the middle of the lattice, as shown by the maximum in the density profile. The case with three peaks in $E_{y,\hat{j}}$ (at $x=10a,30a,50a$) corresponds to the profile of $E_{x,\hat{i}}$ at $\mu_{x}=0.313$ shown in Figure 4. The presence of the fluxes (wrapping vertically) makes the plaquettes non-flippable, which is exactly where the horizontal fields are minimal and the vertical fields maximal, demonstrating that the strings affect all the local properties.
On increasing $\mu_{x}$, the $E_{y,\hat{j}}$ profile loses the modulations that identify individual fluxes, and a smooth distribution, modulated more at the boundaries than in the bulk, becomes visible. Closer to the saturation region, upon further increasing $\mu_{x}$, longer ranged smooth modulations of the "particle density" appear again, which now stretch over several lattice spacings. Interestingly, this length scale seems to be dynamically generated in this regime and is rather sensitive to the external $\mu_{x}$. The wave number of the oscillations can thus be controlled by tuning $\mu_{x}$.
(Panels: (a) particle number versus $x$ for the three regimes $\mu_{x}<0.35$, $1.2<\mu_{x}<1.3$, and $1.8<\mu_{x}$; (b) wave number versus $\mu_{x}$.)
Figure 5: Winding number regimes of the quantum link ladder. (a) The winding
number distribution as a function of the distance to one boundary for the
three different regimes. (b) The wave number as a function of the chemical
potential for the states where the particle number is non-zero or non-
saturated. The three different winding regimes are highlighted with colored
markers/lines: dilute gas regime ($\bullet$), half filled ($\bullet$) and
close to saturation regime ($\bullet$).
Figure 5(b) shows the wave number of the oscillations as a function of $\mu_{x}$, obtained by identifying the dominant wave number that contributes to the Fourier transform of the vertical electric flux profiles, $E_{y,\hat{j}}$. The information in this observable is thus the same as in the structure factor up to a global factor, which is given as a Fourier transform of the electric flux correlation function at a particular momentum $k$ (the wave number is $k/2\pi$ in our context). Even in this plot, the aforementioned three regimes in $\mu_{x}$ are clearly visible. The first non-trivial excitations for small $\mu_{x}$ present long range oscillations whose wave numbers keep decreasing until they saturate to a small value. This is the regime where the system is approximately half-filled, and for $L_{x}=64$ it spans $\mu_{x}=1.2,\dots,1.3$. In this region, the translational invariance is approximately recovered. When the chemical potential is increased further, the oscillations rise again at a much faster rate, as already apparent from the earlier observables.
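The extraction of the dominant wave number can be sketched as follows (a hypothetical illustration; `profile` stands in for a measured flux profile such as those in Figure 4):

```python
import numpy as np

def dominant_wave_number(profile):
    """Dominant wave number of a real 1-d flux profile, in units of 1/a."""
    f = np.fft.rfft(profile - np.mean(profile))
    k_index = np.argmax(np.abs(f[1:])) + 1  # skip the k = 0 mode
    return k_index / len(profile)

# Toy profile: 4 full oscillation periods across L_x = 64 rungs.
x = np.arange(64)
profile = 0.25 * np.cos(2.0 * np.pi * 4.0 * x / 64.0)
print(dominant_wave_number(profile))  # -> 0.0625 = 4/64
```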
## Conclusions and Outlook.–
In this Letter, we have explored the phenomenon of string condensation in a $U(1)$ Abelian lattice gauge theory realized as a spin-$1/2$ quantum link model on a ladder. We have demonstrated that our ladder system possesses a smooth thermodynamic limit for a fixed $L_{y}$. The system starts to condense strings with the increase in $\mu_{x}$, and exhibits at least three different regimes before saturation is reached. Through the profiles of the horizontal and vertical electric fluxes, we have shown that the winding strings arrange themselves in patterns which behave distinctly in each of the three regimes. In the dilute regime, isolated string excitations can be identified, while the half-filled regime is marked by an approximate restoration of translation invariance. In the dense region, there is a dynamically generated length scale which changes rapidly with $\mu_{x}$ before the system saturates. Our observables are perfectly suited to be measured in cold atom experiments of lattice gauge theory models [28, 29, 30, 31, 32, 33].
There are several directions in which the analysis can be extended. The most
obvious is to repeat the calculation for larger ladders and study the
different regimes that manifest themselves. Other observables, such as the
central charge, and finite size scaling of correlation functions could be
useful in attempting to understand if there is a phase transition between the
different regimes. The nature of the origin of the length scale in the dense
region is also an open question, which might be understood better from an
effective field theory approach. Another obvious question is if similar
phenomena can also be observed in QLMs in the spin-$1$ representation, which
are very similar to the lattice gauge theory formulation by Wilson.
## Acknowledgements.–
We would like to thank Luca Barbiero, Marcello Dalmonte, Adam Nahum, Arnab
Sen, and Uwe-Jens Wiese for helpful discussions.
PS acknowledges support from: ERC AdG NOQIA; Ministerio de Ciencia y
Innovation Agencia Estatal de Investigaciones
(PGC2018-097027-B-I00/10.13039/501100011033,
CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA
PID2019-106901GB-I00, FPI, QUANTERA MAQS PCI2019-111828-2, QUANTERA DYNAMITE
PCI2022-132919, Proyectos de I+D+I “Retos Colaboración” QUSPIN
RTC2019-007196-7); European Union NextGenerationEU (PRTR); Fundació Cellex;
Fundació Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and
CERCA program (AGAUR Grant No. 2017 SGR 134, QuantumCAT U16-011424, co-funded
by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing
Center MareNostrum (FI-2022-1-0042); EU Horizon 2020 FET-OPEN OPTOlogic (Grant
No 899794); National Science Centre, Poland (Symfonia Grant
No.2016/20/W/ST4/00314); European Union’s Horizon 2020 research and innovation
programme under the Marie-Skłodowska-Curie grant agreement No 101029393
(STREDCH) and No 847648 (“La Caixa” Junior Leaders fellowships ID100010434:
LCF/BQ/PI19/11690013, LCF/BQ/PI20/11760031, LCF/BQ/PR20/11770012,
LCF/BQ/PR21/11840013). PK acknowledges support from the Research Training
Group ”Differential Equation- and Data-driven Models in Life Sciences and
Fluid Dynamics: An Interdisciplinary Research Training Group (DAEDALUS)” (GRK
2433) funded by the German Research Foundation (DFG).
## References
* Banerjee and Chandrasekharan [2010] D. Banerjee and S. Chandrasekharan, Finite size effects in the presence of a chemical potential: A study in the classical nonlinear o(2) sigma model, Physical Review D 81, 10.1103/physrevd.81.125007 (2010).
* Aarts and James [2010] G. Aarts and F. A. James, On the convergence of complex langevin dynamics: the three-dimensional XY model at finite chemical potential, Journal of High Energy Physics 2010, 10.1007/jhep08(2010)020 (2010).
* Katz _et al._ [2017] S. Katz, F. Niedermayer, D. Nogradi, and C. Torok, Comparison of algorithms for solving the sign problem in the O(3) model in 1+1 dimensions at finite chemical potential, Phys. Rev. D 95, 054506 (2017), arXiv:1611.03987 [hep-lat] .
* Bloch _et al._ [2021] J. Bloch, R. G. Jha, R. Lohmayer, and M. Meister, Tensor renormalization group study of the three-dimensional o(2) model, Physical Review D 104, 10.1103/physrevd.104.094517 (2021).
* Gupta [2010] S. Gupta, QCD at finite density, PoS LATTICE2010, 007 (2010), arXiv:1101.0109 [hep-lat] .
* Ayyar _et al._ [2018] V. Ayyar, S. Chandrasekharan, and J. Rantaharju, Benchmark results in the 2D lattice Thirring model with a chemical potential, Phys. Rev. D 97, 054501 (2018), arXiv:1711.07898 [hep-lat] .
* Bañuls _et al._ [2017] M. C. Bañuls, K. Cichy, J. I. Cirac, K. Jansen, and S. Kühn, Density Induced Phase Transitions in the Schwinger Model: A Study with Matrix Product States, Phys. Rev. Lett. 118, 071601 (2017), arXiv:1611.00705 [hep-lat] .
* Nielsen and Olesen [1973] H. Nielsen and P. Olesen, Vortex-line models for dual strings, Nuclear Physics B 61, 45 (1973).
* Banks _et al._ [1977] T. Banks, R. Myerson, and J. Kogut, Phase transitions in abelian lattice gauge theories, Nuclear Physics B 129, 493 (1977).
* Polyakov [1977] A. M. Polyakov, Quark Confinement and Topology of Gauge Groups, Nucl. Phys. B 120, 429 (1977).
* ’t Hooft [1978] G. ’t Hooft, On the Phase Transition Towards Permanent Quark Confinement, Nucl. Phys. B 138, 1 (1978).
* Trottier and Woloshyn [1993] H. D. Trottier and R. M. Woloshyn, Flux tubes in three-dimensional lattice gauge theories, Physical Review D 48, 2290 (1993).
* Zach _et al._ [1998] M. Zach, M. Faber, and P. Skala, Flux tubes and their interaction in $U(1)$ lattice gauge theory, Nucl. Phys. B 529, 505 (1998), arXiv:hep-lat/9709017 .
* Koma _et al._ [2004] Y. Koma, M. Koma, and P. Majumdar, Static potential, force, and flux-tube profile in 4D compact $U(1)$ lattice gauge theory with the multi-level algorithm, Nuclear Physics B 692, 209 (2004).
* Chandrasekharan and Wiese [1997] S. Chandrasekharan and U. J. Wiese, Quantum link models: A Discrete approach to gauge theories, Nucl. Phys. B 492, 455 (1997), arXiv:hep-lat/9609042 .
* Banerjee _et al._ [2013] D. Banerjee, F.-J. Jiang, P. Widmer, and U.-J. Wiese, The (2 + 1)-d $u(1)$ quantum link model masquerading as deconfined criticality, J. Stat. Mech. Theory Exp. 2013, P12010 (2013).
* Banerjee and Sen [2021] D. Banerjee and A. Sen, Quantum scars from zero modes in an abelian lattice gauge theory on ladders, Phys. Rev. Lett. 126, 220601 (2021).
* Shannon _et al._ [2004] N. Shannon, G. Misguich, and K. Penc, Cyclic exchange, isolated states, and spinon deconfinement in an $xxz$ heisenberg model on the checkerboard lattice, Phys. Rev. B 69, 220403 (2004).
* Benton _et al._ [2012] O. Benton, O. Sikora, and N. Shannon, Seeing the light: Experimental signatures of emergent electromagnetism in a quantum spin ice, Physical Review B 86, 10.1103/physrevb.86.075154 (2012).
* Banerjee [2021] D. Banerjee, Recent progress on cluster and meron algorithms for strongly correlated systems, Indian Journal of Physics 95, 1669 (2021).
* Schollwöck [2011] U. Schollwöck, The density-matrix renormalization group in the age of matrix product states, Annals of physics 326, 96 (2011).
* Celi _et al._ [2020] A. Celi, B. Vermersch, O. Viyuela, H. Pichler, M. D. Lukin, and P. Zoller, Emerging two-dimensional gauge theories in rydberg configurable arrays, Phys. Rev. X 10, 021057 (2020).
* Paulson _et al._ [2021] D. Paulson, L. Dellantonio, J. F. Haase, A. Celi, A. Kan, A. Jena, C. Kokail, R. van Bijnen, K. Jansen, P. Zoller, and C. A. Muschik, Simulating 2d effects in lattice gauge theories on a quantum computer, PRX Quantum 2, 030334 (2021).
* Huffman _et al._ [2021] E. Huffman, M. G. Vera, and D. Banerjee, Real-time dynamics of plaquette models using nisq hardware (2021).
* Luscher [1986] M. Luscher, Volume Dependence of the Energy Spectrum in Massive Quantum Field Theories. 1. Stable Particle States, Commun. Math. Phys. 104, 177 (1986).
* Honecker _et al._ [2004] A. Honecker, J. Schulenburg, and J. Richter, Magnetization plateaus in frustrated antiferromagnetic quantum spin models, Journal of Physics: Condensed Matter 16, S749 (2004).
* Sarkar _et al._ [2022] M. Sarkar, M. Pal, A. Sen, and K. Sengupta, Quantum order-by-disorder induced phase transition in rydberg ladders with staggered detuning, arXiv preprint arXiv:2204.12515 (2022).
* Aidelsburger _et al._ [2022] M. Aidelsburger, L. Barbiero, A. Bermudez, T. Chanda, A. Dauphin, D. González-Cuadra, P. R. Grzybowski, S. Hands, F. Jendrzejewski, J. Jünemann, G. Juzeliūnas, V. Kasper, A. Piga, S.-J. Ran, M. Rizzi, G. Sierra, L. Tagliacozzo, E. Tirrito, T. V. Zache, J. Zakrzewski, E. Zohar, and M. Lewenstein, Cold atoms meet lattice gauge theory, Philos. Trans. R. Soc. A 380, 20210064 (2022).
* Martinez _et al._ [2016] E. A. Martinez _et al._ , Real-time dynamics of lattice gauge theories with a few-qubit quantum computer, Nature 534, 516 (2016), arXiv:1605.04570 [quant-ph] .
* Bernien _et al._ [2017] H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, and et al., Probing many-body dynamics on a 51-atom quantum simulator, Nature 551, 579–584 (2017).
* Schweizer _et al._ [2019] C. Schweizer, F. Grusdt, M. Berngruber, L. Barbiero, E. Demler, N. Goldman, I. Bloch, and M. Aidelsburger, Floquet approach to $\mathbb{Z}_{2}$ lattice gauge theories with ultracold atoms in optical lattices, Nature Physics 15, 1168–1173 (2019).
* Mil _et al._ [2020] A. Mil, T. V. Zache, A. Hegde, A. Xia, R. P. Bhatt, M. K. Oberthaler, P. Hauke, J. Berges, and F. Jendrzejewski, A scalable realization of local $U(1)$ gauge invariance in cold atomic mixtures, Science 367, 1128 (2020), arXiv:1909.07641 [cond-mat.quant-gas] .
* Yang _et al._ [2020] B. Yang, H. Sun, R. Ott, H.-Y. Wang, T. V. Zache, J. C. Halimeh, Z.-S. Yuan, P. Hauke, and J.-W. Pan, Observation of gauge invariance in a 71-site Bose–Hubbard quantum simulator, Nature 587, 392 (2020), arXiv:2003.08945 [cond-mat.quant-gas] .
* Fishman _et al._ [2020] M. Fishman, S. R. White, and E. M. Stoudenmire, The ITensor software library for tensor network calculations (2020), arXiv:2007.14822 .
## Appendix A Supplementary Material
## Numerical Implementation.–
For an efficient representation of the ground states in the quantum link ladder, we use matrix product states implemented in the ITensor software package [34]. All the results presented in this work have been extrapolated to infinite bond dimension. During the numerical simulations, the bond dimension $D$ of the matrix product state has been increased up to 1000 ($D_{\mathrm{max}}$). We have also checked that the final state can be compressed with $D<D_{\mathrm{max}}$. The stopping criterion used in the optimization procedure is that the difference of the energy after 5 sweeps should be smaller than $10^{-10}$. For the larger volumes and close to saturation, in the regime of $\mu_{x}\sim 1.8$, the numerical simulation becomes unstable. If we run the optimization procedure several times, we might find the ground state in different topological sectors. This happens because the energy gap between the different topological sectors approaches zero in the thermodynamic limit and close to the saturation regime. Since the gap in this region is smaller than the numerical precision, we could have found inconsistent results for certain parameter values. Nevertheless, we are still able to capture the important properties and extract physical information.
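The extrapolation in the bond dimension can be illustrated schematically as follows (toy numbers only, and an assumed algebraic convergence law; `E` stands in for the ground-state energies returned by the DMRG sweeps):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(D, E_inf, c, p):
    return E_inf + c / D**p  # assumed form of the convergence in D

D = np.array([100.0, 200.0, 400.0, 800.0, 1000.0])
E = model(D, -42.0, 3.0, 1.5)  # synthetic energies in place of DMRG output

popt, _ = curve_fit(model, D, E, p0=(E[-1], 1.0, 1.0))
print(popt[0])  # extrapolated E(D -> infinity), here ~ -42.0
```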
## Observables at $\mu_{x}=\mu_{y}=0$.–
In this section, we briefly sketch the physics of the model at zero chemical
potential, but for varying $\lambda$ for completeness. This has already been
discussed at length in [16] for periodic boundary conditions and using an
efficient quantum Monte Carlo algorithm. It was demonstrated in that study
that for large negative values of $\lambda$, where the $J$ term is insignificant, the ground state physics is dominated by the states with the largest number of flippable plaquettes. This state spontaneously breaks both the charge conjugation and the lattice translation symmetry. With increasing $\lambda$, however, the symmetry breaking pattern changes as the $J$ term increases in relative strength. We encounter a phase where the charge conjugation symmetry is restored while the lattice translation symmetry remains broken. The two phases are connected by a weak first order phase transition.
For the quantum link ladder with $L_{y}=2$ and open boundary conditions in the
$x$-direction, the symmetries are different. In particular, the lattice
translation symmetry is not exact any more, while the charge conjugation
symmetry is still exact. Thus, we expect a phase transition from a phase which
breaks charge conjugation symmetry into a phase where the symmetry is
restored. The susceptibility is expected to be large in the symmetry broken
phase and vanish in the phase where the symmetry is restored. Figure 6 shows
the expected behaviour of the susceptibility as a function of $\lambda$ for
three different lattice sizes.
Figure 6: Susceptibility as a function of $\lambda$ at $\mu_{x}=0$. The phase for large negative $\lambda$ has a broken symmetry, indicated by the increase in $\chi$ with $L_{x}$. In the symmetry restored phase, $\chi$ does not increase with the system size, as expected.
(Plot: $W_{x}$, $\partial E_{0}/\partial\mu_{x}$, and $M$ versus $\mu_{x}$ for $L_{x}=16$.)
Figure 7: (Top) Winding number sectors for $L_{x}=16$. (Middle) The
derivative of the ground state energy with respect to the chemical potential.
(Bottom) The magnetization, $M$, defined as the difference of the plaquettes
flippable in the clockwise and anti-clockwise fashion respectively.
## Observables at $\mu_{x}>0$.–
As $\mu_{x}$ is increased (keeping $\mu_{y}=0$), we study the behaviour of the expectation value of the winding number $\braket{W_{x}}$, and the derivative of the ground state energy $E_{0}$ with respect to $\mu_{x}$, $\frac{\partial E_{0}}{\partial\mu_{x}}$. In the top and the middle panel of Figure 7, we plot the two quantities. As expected from the Feynman-Hellmann theorem, these two quantities show the same qualitative behaviour up to a constant factor (with an overall negative sign). Finally, we also plot the flippability of the plaquettes, $M$. The flippability is defined as the difference between the total number of plaquettes which are flippable in the clockwise fashion and the ones which are flippable in the anti-clockwise fashion. Interestingly, this observable shows a staggered behaviour between zero and non-zero values for odd and even winding numbers of electric fluxes, respectively. This is a clear indication of a co-operative behaviour of the plaquettes across the lattice.
# General one-loop formulas for $H\rightarrow f\bar{f}\gamma$ and its
applications
Vo Van On <EMAIL_ADDRESS>, Dzung Tri Tran, Chi Linh Nguyen, Khiem Hong Phan <EMAIL_ADDRESS>
Institute of Applied Technology, Thu Dau Mot University, Thu Dau Mot City, Binh Duong Province 75000, Vietnam
University of Science Ho Chi Minh City, 227 Nguyen Van Cu, District 5, Ho Chi Minh City, Vietnam
Institute of Fundamental and Applied Sciences, Duy Tan University, Ho Chi Minh City 700000, Vietnam
Faculty of Natural Sciences, Duy Tan University, Da Nang City 550000, Vietnam
###### Abstract
We present general one-loop contributions to the decay processes $H\rightarrow f\bar{f}\gamma$, including all possible exchanges of additional heavy vector gauge bosons, heavy fermions, and charged (as well as neutral) scalar particles in the loop diagrams. As a result, the analytic results are valid in a wide class of models beyond the standard model. Analytic formulas for the form factors are expressed in terms of Passarino-Veltman functions in the standard notations of LoopTools; hence, the decay rates can be computed numerically by using this package. The computations are then applied to the standard model, the $U(1)_{B-L}$ extension of the standard model, as well as the two Higgs doublet model. Phenomenological results of the decay processes for all the above models are studied. We observe that the effects of new physics give sizable contributions, which can be probed at future colliders.
###### keywords:
Higgs phenomenology, One-loop Feynman integrals, Analytic methods for Quantum
Field Theory, Dimensional regularization.
## 1 Introduction
After the discovery of the standard model-like (SM-like) Higgs boson [1, 2],
one of the main goals at future colliders like the high luminosity large
hadron collider (HL-LHC) [3, 4] as well as future lepton colliders [5] is to
probe the properties of this boson (mass, couplings, spin and parity, etc.).
In the experimental programs, the Higgs production rates and decay rates
should be measured as precisely as possible. Based on these measurements, we
can verify the nature of the Higgs sector; in other words, we can gain a
deeper understanding of the dynamics of electroweak symmetry breaking. It is
well-known that the Higgs sector of the standard model (SM) takes the simplest
possible form, containing only a single scalar doublet field. From a
theoretical viewpoint, there is no reason for this simplest choice. Many
models beyond the standard model (BSMs) extend the Higgs sector (some of them
also expand the gauge sector, introduce neutrino mass terms, etc.). These
models propose many new particles, for example new heavy gauge bosons, charged
and neutral scalar Higgs bosons, as well as new heavy fermions, which may also
contribute to the production and decay of the Higgs boson. More precise data
and new theoretical approaches to Higgs production and decay rates could
therefore provide a crucial tool to determine the nature of the Higgs sector
and, more importantly, to extract the new-physics contributions.
Among all the Higgs decay channels, the processes $H\rightarrow
f\bar{f}\gamma$ are of great interest at colliders for the following reasons.
Firstly, the decay channels can be measured at the large hadron collider [6,
7, 8, 9]; the processes can therefore be used to test the SM in the
high-energy region. Secondly, many of the new particles mentioned at the
beginning of this section may propagate in the loop diagrams of the decay
processes, so the decay rates could provide a useful tool for constraining
new-physics parameters. Last but not least, apart from the SM-like Higgs
boson, new neutral Higgs bosons in BSMs may mix with the SM-like one; these
effects can also be observed directly by measuring the decay rates of
$H\rightarrow f\bar{f}\gamma$. For these reasons, detailed theoretical
evaluations of one-loop contributions to the decay of the Higgs to fermion
pairs and a photon within the SM and its extensions are necessary.
Theoretical implications of the decay $H\rightarrow f\bar{f}\gamma$ in the SM
at the LHC have been studied in Refs. [10, 11, 12]. Moreover, there have been
many computations of one-loop contributions to the decay processes
$H\rightarrow f\bar{f}\gamma$ within the SM framework [13, 14, 15, 16, 17, 18,
19, 20]. Similar evaluations for Higgs production at $e\gamma$ colliders have
been proposed in [21, 22], while one-loop corrections to $H\rightarrow
f\bar{f}\gamma$ in the context of the minimal supersymmetric standard model
Higgs sector have been computed in [23]. Furthermore, one-loop contributions
to CP-odd Higgs boson production in $e\gamma$ collisions have been carried out
in [24]. In this article, we present general formulas for one-loop
contributions to the decay processes $H\rightarrow f\bar{f}\gamma$. The
analytic results presented here are valid not only in the SM but also in many
BSMs in which new particles are proposed, such as heavy vector bosons, heavy
fermions, and charged (neutral) scalar particles that may propagate in the
loop diagrams of the decay processes. The analytic formulas for the form
factors are expressed in terms of Passarino-Veltman (PV) functions in the
standard notation of LoopTools [46]. As a result, they can be evaluated
numerically using this package. The calculations are then applied to the SM
and several BSMs, namely the $U(1)_{B-L}$ extension of the SM [25] and the two
Higgs doublet model (THDM) [27]. Phenomenological results of the decay
processes for these models are also studied.
We also stress that our analytical results can be applied to many other BSM
frameworks. In particular, in supersymmetric models, many superpartners of
fermions and gauge bosons are introduced, and with an extended Higgs sector we
encounter charged and neutral Higgs bosons. Extra charged gauge bosons exist
in many electroweak gauge extensions, for example the left-right models (LR)
constructed from $SU(2)_{L}\times SU(2)_{R}\times U(1)_{Y}$ [28, 29, 30], the
3-3-1 models ($SU(3)_{L}\times U(1)_{X}$) [31, 32, 33, 34, 35, 36, 37], the
$3$-$4$-$1$ models ($SU(4)_{L}\times U(1)_{X}$) [37, 38, 39, 40, 41, 42], etc.
The analytic results in this paper already include the contributions of these
particles, which may also be exchanged in the loop diagrams of the
aforementioned decay processes. Phenomenological results for the decay
processes in the above models are of great interest; these topics will be
addressed in our future publications.
The layout of the paper is as follows: We first write down the general
Lagrangian and introduce the notation for the calculations in section 2. We
then present the detailed calculations of one-loop contributions to
$H\rightarrow f\bar{f}\gamma$ in section 3. The applications of this work to
the SM, the $U(1)_{B-L}$ extension of the SM, and the THDM, together with the
corresponding phenomenological results, are studied in section 4. Conclusions
and outlook are presented in section 5. In the appendices, the checks of the
computations are shown and the $U(1)_{B-L}$ extension of the SM and the THDM
are briefly reviewed. Finally, the Feynman rules and all couplings involved in
the decay processes are given.
## 2 Lagrangian and notations
In order to write down a general Lagrangian for a wide class of BSMs, we start
from the well-known terms that appear in the SM and then consider the
additional terms extending it. For example, the two Higgs doublet model [27]
adds a second Higgs doublet, predicting new charged and neutral scalar Higgs
bosons; a model with gauge symmetry $U(1)_{B-L}$ proposes a neutral gauge
boson $Z^{\prime}$ [25, 26] and a neutral Higgs boson; minimal left-right
models with a new non-Abelian gauge symmetry for electroweak interactions,
$SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}$ [28, 29, 30], introduce many new
particles, including charged gauge bosons, neutral gauge bosons and charged
Higgs bosons. All of these particles give one-loop contributions to the decays
under consideration.
In this section, Feynman rules for the decay channels $H\rightarrow
f\bar{f}\gamma$ are derived for the most general extension of the SM,
considering all possible contributions from the above particles. In this
computation, we denote by $V_{i},V_{j}$ the extra charged gauge bosons and by
$V_{k}^{0}$ the neutral gauge bosons. Moreover, $S_{i},S_{j}\;(S_{k}^{0})$ are
charged (neutral) Higgs bosons respectively, and $f_{i},f_{j}$ denote
fermions. In general, the classical Lagrangian contains the following parts:
$\displaystyle\mathcal{L}=\mathcal{L}_{f}+\mathcal{L}_{G}+\mathcal{L}_{\Phi}+\mathcal{L}_{Y}.$
(1)
where the fermion sector is given by
$\displaystyle\mathcal{L}_{f}=\bar{\psi}_{f}i\not{D}\psi_{f}$ (2)
with $D_{\mu}=\partial_{\mu}-igT^{a}V_{\mu}^{a}+\cdots$, where $T^{a}$ is a
generator of the corresponding gauge symmetry. The gauge sector reads
$\displaystyle\mathcal{L}_{G}=-\frac{1}{4}\sum\limits_{a}V^{a}_{\mu\nu}V^{a,\mu\nu}+\cdots$
(3)
where
$V^{a}_{\mu\nu}=\partial_{\mu}V^{a}_{\nu}-\partial_{\nu}V^{a}_{\mu}+gf^{abc}V^{b}_{\mu}V^{c}_{\nu}$,
with $f^{abc}$ the structure constants of the corresponding gauge group. The
scalar sector is expressed as follows:
$\displaystyle\mathcal{L}_{\Phi}=\sum\limits_{\Phi}Tr[\left(D_{\mu}\Phi\right)^{{\dagger}}\left(D^{\mu}\Phi\right)]-V(\Phi).$
(4)
We then derive all the couplings from the full Lagrangian. The couplings are
parameterized in general form and presented as follows:
* 1.
By expanding the fermion sector, we can derive the vertices of the vector
boson $V$ with fermions. In detail, the interaction terms are parameterized as
$\displaystyle\mathcal{L}_{Vff}=\sum\limits_{f_{i},f_{j},V}\bar{f}_{i}\gamma^{\mu}(g_{Vff}^{L}P_{L}+g_{Vff}^{R}P_{R})f_{j}V_{\mu}+\cdots$
(5)
with $P_{L,R}=(1\mp\gamma_{5})/2$.
* 2.
Trilinear and quartic gauge couplings are obtained by expanding the gauge
sector:
$\displaystyle\mathcal{L}_{VVV,VVVV}$ $\displaystyle=$
$\displaystyle\sum\limits_{V_{k}^{0},V_{i},V_{j}}g_{V_{k}^{0}V_{i}V_{j}}\Big{[}\partial_{\mu}V^{0}_{k,\nu}V_{i}^{\mu}V_{j}^{\nu}+V_{k,\nu}^{0}V_{i}^{\mu}\partial^{\nu}V_{j,\mu}+\cdots\Big{]}$
(6)
$\displaystyle+\sum\limits_{V_{k}^{0},V_{l}^{0},V_{i},V_{j}}g_{V_{k}^{0}V_{l}^{0}V_{i}V_{j}}\Big{[}V^{0}_{k,\mu}V_{l,\nu}^{0}V_{i}^{\mu}V_{j}^{\nu}+\cdots\Big{]}+\cdots$
* 3.
Next, the couplings of the scalar $S$ to fermions are taken from the Yukawa
part $\mathcal{L}_{Y}$. The interaction terms are expressed as follows:
$\displaystyle\mathcal{L}_{Sf_{i}f_{j}}=\sum\limits_{f_{i},f_{j},S}\bar{f}_{i}(g_{Sff}^{L}P_{L}+g_{Sff}^{R}P_{R})f_{j}S+\cdots$
(7)
* 4.
The couplings of the scalar $S$ to the vector boson $V$ can be derived from
the kinetic term of the Higgs sector. In detail, we have the interaction terms
$\displaystyle\mathcal{L}_{SVV,SSV,SSVV}$ $\displaystyle=$
$\displaystyle\sum\limits_{S,V_{i},V_{j}}g_{SV_{i}V_{j}}SV_{i}^{\mu}V_{j,\mu}+\sum\limits_{S_{i},S_{j},V}g_{S_{i}S_{j}V}[(\partial_{\mu}S_{i})S_{j}-(\partial_{\mu}S_{j})S_{i}]V^{\mu}$
(8)
$\displaystyle+\sum\limits_{S_{i},S_{j},V_{k},V_{l}}g_{S_{i}S_{j}V_{k}V_{l}}S_{i}S_{j}V_{k}^{\mu}V_{l,\mu}+\cdots$
* 5.
Finally, the trilinear and quartic scalar interactions come from the Higgs
potential $V(\Phi)$. The interaction terms are written as
$\displaystyle\mathcal{L}_{SSS,SSSS}$ $\displaystyle=$
$\displaystyle\sum\limits_{S_{i},S_{j},S_{k}}g_{S_{i}S_{j}S_{k}}S_{i}S_{j}S_{k}+\sum\limits_{S_{i},S_{j},S_{k},S_{l}}g_{S_{i}S_{j}S_{k}S_{l}}S_{i}S_{j}S_{k}S_{l}+\cdots$
(9)
All the Feynman rules corresponding to the above couplings that give one-loop
contributions to the SM-like Higgs decays $H\to f\bar{f}\gamma$ are collected
in appendix D. In detail, the propagators involved in the decay processes in
the unitary gauge are shown in Table 6. All the related couplings are
parameterized in general form and presented in Table 7 (we refer to appendices
B and C for two typical models).
## 3 Calculations
In this section, one-loop contributions to the decay processes $H\rightarrow
f(q_{1})\bar{f}(q_{2})\gamma(q_{3})$ are calculated in detail. In the present
paper, we consider the computations in the limit $m_{f}\rightarrow 0$. All
Feynman diagrams involved in these processes can be grouped into the following
classes (see Fig. 1).
Figure 1: Types of Feynman diagrams contributing to the SM-like Higgs decays
$H\to f\bar{f}\gamma$.
For an on-shell photon, the contributions of diagrams $(e+h)$ vanish. One can
neglect the Yukawa coupling $y_{f}$ (since $m_{f}\rightarrow 0$) in this
computation; as a result, the contributions of diagrams $(g+k+m)$ can be
omitted. Furthermore, the diagrams $(a+f)$ do not contribute to the amplitude.
Hence, we only have the contributions of $(b+c+d)$, which are separated into
two kinds. The first, from topology $b$, is called the $V_{k}^{0*}$-pole
contribution. The second type (diagrams $c$ and $d$) belongs to the
non-$V_{k}^{0*}$-pole contributions. We note that $V_{k}^{0*}$ can be both $Z$
and $\gamma$ in the SM, as well as an arbitrary neutral vector boson
$Z^{\prime}$ in many of the BSMs.
The general one-loop amplitude, which obeys the invariant Lorentz structure,
can be decomposed as follows [20]:
$\displaystyle\mathcal{A}_{\text{loop}}$ $\displaystyle=$
$\displaystyle\sum\limits_{k=1}^{2}\Big{\\{}[q_{3}^{\mu}q_{k}^{\nu}-g^{\mu\nu}q_{3}\cdot
q_{k}]\bar{u}(q_{1})(F_{k,R}\gamma_{\mu}P_{R}+F_{k,L}\gamma_{\mu}P_{L})v(q_{2})\Big{\\}}\varepsilon^{*}_{\nu}(q_{3}).$
(10)
In this equation, all form factors are computed as follows:
$\displaystyle F_{k,L/R}$ $\displaystyle=$ $\displaystyle
F_{k,L/R}^{\text{$V_{k}^{0*}$-poles}}+F_{k,L/R}^{\text{Non-$V_{k}^{0*}$}}$
(11)
for $k=1,2$. The kinematic invariants of the decay are
$q^{2}\equiv q_{12}=(q_{1}+q_{2})^{2}$, $q_{13}=(q_{1}+q_{3})^{2}$ and
$q_{23}=(q_{2}+q_{3})^{2}$.
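Since the fermions are treated as massless and the photon is on-shell, these
three invariants are not independent; momentum conservation gives the relation
(spelled out here because it fixes the phase-space limits used for Eq. (41)
below):
$\displaystyle q_{12}+q_{13}+q_{23}=2\,(q_{1}\cdot q_{2}+q_{1}\cdot q_{3}+q_{2}\cdot q_{3})=(q_{1}+q_{2}+q_{3})^{2}=M_{H}^{2}.$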
We first write down all Feynman amplitudes for the above diagrams. With the
help of Package-X [43], all Dirac traces and Lorentz contractions are handled
in $d$ dimensions. The amplitudes are then written in terms of tensor one-loop
integrals. Following the tensor reduction for one-loop integrals in [44] (the
relevant reduction formulas are given in appendix A), all tensor one-loop
integrals are expressed in terms of PV functions.
### 3.1 $V_{k}^{0*}$ pole contributions
In this subsection, we first consider the $V_{k}^{0*}$-pole contributions,
which correspond to diagram $b$. In this group of Feynman diagrams, it is easy
to confirm that the form factors obey the relation:
$\displaystyle
F_{k,L/R}^{\text{$V_{k}^{0*}$-poles}}=F_{1,L/R}^{\text{$V_{k}^{0*}$-poles}}=F_{2,L/R}^{\text{$V_{k}^{0*}$-poles}}$
(12)
Their analytic results will be shown in the following subsections, with all
possible particles exchanged in the loop diagrams included. We emphasize that
the analytic expressions for the form factors presented in this subsection
cover the results of Ref. [45]: we recover the results for
$H\to\nu_{l}\bar{\nu}_{l}\gamma$ of Ref. [45] by setting $f$ to $\nu_{l}$ and
replacing the corresponding couplings. Furthermore, the analytic formulas
below cover all cases of the $V_{k}^{0*}$ pole. For instance, when
$V_{k}^{0*}\rightarrow\gamma^{*}$, we set $M_{V^{0}_{k}}=0$ and
$\Gamma_{V^{0}_{k}}=0$; when $V_{k}^{0*}$ is the $Z$ (or $Z^{\prime}$) boson,
we set $M_{V^{0}_{k}}=M_{Z}$ and $\Gamma_{V^{0}_{k}}=\Gamma_{Z}$ (or
$M_{Z^{\prime}}$ and $\Gamma_{Z^{\prime}}$) respectively.
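To make this pole prescription explicit, here is a one-line Python sketch (our
own illustration, not code from the paper) of the propagator factor appearing
in all the form factors below:

```python
def pole_factor(q2, M=0.0, Gamma=0.0):
    """Factor 1/(q^2 - M^2 + i*Gamma*M); M = Gamma = 0 gives the
    photon pole 1/q^2, while (M_Z, Gamma_Z) or (M_Z', Gamma_Z')
    give the Z or Z' pole."""
    return 1.0 / (q2 - M**2 + 1j * Gamma * M)
```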
Figure 2: One-loop triangle diagrams with vector bosons $V_{i,j}$ exchanged in
the loop. Figure 3: One-loop triangle diagrams with scalar bosons $S_{i,j}$
exchanged in the loop. Figure 4: One-loop triangle diagrams with two scalar
bosons $S_{i}$ and a vector boson $V_{j}$ exchanged in the loop. Figure 5:
One-loop triangle diagrams with a scalar boson $S_{j}$ and two vector bosons
$V_{i}$ exchanged in the loop. Figure 6: One-loop triangle diagrams with
charged fermions $f_{i,j}$ exchanged in the loop.
We begin with the one-loop triangle Feynman diagrams in which all internal
lines are vector bosons $V_{i,j}$ (see Fig. 2). The one-loop form factors of
this group of diagrams are expressed in terms of the PV functions as follows:
$\displaystyle F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{V_{i},V_{j}}$
$\displaystyle=$
$\displaystyle\dfrac{eQ_{V}}{16\pi^{2}}\sum\limits_{V_{i},V_{j},V^{0}_{k}}\dfrac{g_{HViVj}\;g_{V^{0}_{k}ff}^{L}}{M_{V_{i}}^{2}M_{V_{j}}^{2}(q^{2}-M_{V^{0}_{k}}^{2}+i\Gamma_{V^{0}_{k}}M_{V^{0}_{k}})}\times$
$\displaystyle\times\Bigg{\\{}\Big{[}2g_{V^{0}_{k}AV_{i}V_{j}}(M_{H}^{2}-M_{V_{j}}^{2})+g_{V^{0}_{k}V_{i}V_{j}}(M_{H}^{2}+M_{V_{i}}^{2}+M_{V_{j}}^{2})\Big{]}B_{11}(M_{H}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle+\Big{[}g_{V^{0}_{k}V_{i}V_{j}}(M_{H}^{2}+3M_{V_{i}}^{2}-M_{V_{j}}^{2})-2g_{V^{0}_{k}AV_{i}V_{j}}M_{V_{i}}^{2}\Big{]}B_{1}(M_{H}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle+2M_{V_{i}}^{2}(g_{V^{0}_{k}V_{i}V_{j}}-g_{V^{0}_{k}AV_{i}V_{j}})B_{0}(M_{H}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle+2g_{V^{0}_{k}AV_{i}V_{j}}\Big{[}M_{H}^{2}B_{111}+B_{00}+2B_{001}\Big{]}(M_{H}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle+4g_{V^{0}_{k}V_{i}V_{j}}M_{V_{i}}^{2}\Big{(}M_{V_{i}}^{2}+3M_{V_{j}}^{2}-q^{2}\Big{)}C_{0}(0,q^{2},M_{H}^{2},M_{V_{i}}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle+2g_{V^{0}_{k}V_{i}V_{j}}\Big{[}M_{H}^{2}(M_{V_{i}}^{2}+M_{V_{j}}^{2}-q^{2})+M_{V_{i}}^{4}+M_{V_{j}}^{4}+(4d-6)M_{V_{i}}^{2}M_{V_{j}}^{2}$
$\displaystyle\hskip
56.9055pt-q^{2}(M_{V_{i}}^{2}+M_{V_{j}}^{2})\Big{]}(C_{22}+C_{12})(0,q^{2},M_{H}^{2},M_{V_{i}}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle+2g_{V^{0}_{k}V_{i}V_{j}}\Big{[}M_{H}^{2}(M_{V_{i}}^{2}+M_{V_{j}}^{2}-q^{2})+3M_{V_{i}}^{4}-M_{V_{j}}^{4}+(4d-6)M_{V_{i}}^{2}M_{V_{j}}^{2}$
$\displaystyle\hskip
85.35826pt-q^{2}(3M_{V_{i}}^{2}-M_{V_{j}}^{2})\Big{]}C_{2}(0,q^{2},M_{H}^{2},M_{V_{i}}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})\Bigg{\\}},$
$\displaystyle F_{k,R}^{\text{$V_{k}^{0*}$-poles}}|_{V_{i},V_{j}}$
$\displaystyle=$ $\displaystyle
F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{V_{i},V_{j}}\Big{(}g_{V^{0}_{k}ff}^{L}\rightarrow
g_{V^{0}_{k}ff}^{R}\Big{)}.$ (14)
The results are written in terms of $B$- and $C$-functions. We note that the
one-loop amplitude of each diagram in Fig. 2 may decompose into tensor
one-loop integrals of rank up to $R=6$. However, after summing all diagrams,
the amplitude for this subset of Feynman diagrams is expressed only in terms
of tensor integrals of rank $R\leq 2$. As a result, at most
$C_{22}$-functions contribute to the form factors. Furthermore, some of these
functions contain UV divergences, but after summing them all the final results
are finite. This point will be discussed at the end of this section.
We next consider the one-loop triangle Feynman diagrams with $S_{i},\;S_{j}$
in the loop (as described in Fig. 3). The corresponding one-loop form factors
are given by:
$\displaystyle F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{S_{i},S_{j}}$
$\displaystyle=$
$\displaystyle\dfrac{eQ_{S}}{4\pi^{2}}\sum\limits_{S_{i},S_{j},V^{0}_{k}}\dfrac{g_{HS_{i}S_{j}}\,g_{V^{0}_{k}S_{i}S_{j}}\,g_{V^{0}_{k}ff}^{L}}{M_{H}^{2}q^{2}(M_{H}^{2}-q^{2})(q^{2}-M_{V^{0}_{k}}^{2}+i\Gamma_{V^{0}_{k}}M_{V^{0}_{k}})}\times$
$\displaystyle\times\Bigg{\\{}(q^{2}-M_{H}^{2})\Big{[}A_{0}(M_{S_{i}}^{2})-A_{0}(M_{S_{j}}^{2})\Big{]}$
$\displaystyle\hskip
14.22636pt+M_{H}^{2}(M_{S_{i}}^{2}-M_{S_{j}}^{2}-q^{2})B_{0}(q^{2},M_{S_{i}}^{2},M_{S_{j}}^{2})$
$\displaystyle\hskip
14.22636pt+q^{2}(M_{H}^{2}-M_{S_{i}}^{2}+M_{S_{j}}^{2})B_{0}(M_{H}^{2},M_{S_{i}}^{2},M_{S_{j}}^{2})$
$\displaystyle\hskip
14.22636pt+2M_{H}^{2}q^{2}(M_{H}^{2}-q^{2})C_{12}(0,q^{2},M_{H}^{2},M_{S_{i}}^{2},M_{S_{i}}^{2},M_{S_{j}}^{2})\Bigg{\\}},$
$\displaystyle F_{k,R}^{\text{$V_{k}^{0*}$-poles}}|_{S_{i},S_{j}}$
$\displaystyle=$ $\displaystyle
F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{S_{i},S_{j}}\Big{(}g_{V^{0}_{k}ff}^{L}\rightarrow
g_{V^{0}_{k}ff}^{R}\Big{)}.$ (16)
Similarly, we have the contributions of the one-loop triangle Feynman diagrams
with a scalar boson $S_{i}$ and a vector boson $V_{j}$ exchanged in the loop,
depicted in Fig. 4. Applying the same procedure, one obtains the form factors
$\displaystyle F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{S_{i},V_{j}}$
$\displaystyle=$
$\displaystyle\dfrac{eQ_{V}}{8\pi^{2}}\sum\limits_{S_{i},V_{j},V^{0}_{k}}\dfrac{g_{HS_{i}V_{j}}\,g_{V^{0}_{k}S_{i}V_{j}}\,g_{V^{0}_{k}ff}^{L}}{M_{H}^{2}M_{V_{j}}^{2}q^{2}(q^{2}-M_{H}^{2})(q^{2}-M_{V^{0}_{k}}^{2}+i\Gamma_{V^{0}_{k}}M_{V^{0}_{k}})}\times$
$\displaystyle\times\Bigg{\\{}(q^{2}-M_{H}^{2})(M_{H}^{2}-M_{S_{i}}^{2}+M_{V_{j}}^{2})\Big{[}A_{0}(M_{S_{i}}^{2})-A_{0}(M_{V_{j}}^{2})\Big{]}$
$\displaystyle+M_{H}^{2}\Big{[}q^{2}(M_{S_{i}}^{2}+3M_{V_{j}}^{2}-M_{H}^{2})$
$\displaystyle\hskip
36.98866pt+(M_{S_{i}}^{2}-M_{V_{j}}^{2})(M_{H}^{2}-M_{S_{i}}^{2}+M_{V_{j}}^{2})\Big{]}B_{0}(q^{2},M_{S_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle+q^{2}\Big{[}M_{H}^{2}-(M_{S_{i}}+M_{V_{j}})^{2}\Big{]}$
$\displaystyle\hskip
56.9055pt\times\Big{[}M_{H}^{2}-(M_{S_{i}}-M_{V_{j}})^{2}\Big{]}B_{0}(M_{H}^{2},M_{S_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle+2M_{H}^{2}q^{2}(M_{H}^{2}-q^{2})(M_{H}^{2}-M_{S_{i}}^{2}+M_{V_{j}}^{2})$
$\displaystyle\hskip 142.26378pt\times
C_{12}(0,q^{2},M_{H}^{2},M_{S_{i}}^{2},M_{S_{i}}^{2},M_{V_{j}}^{2})\Bigg{\\}},$
$\displaystyle F_{k,R}^{\text{$V_{k}^{0*}$-poles}}|_{S_{i},V_{j}}$
$\displaystyle=$ $\displaystyle
F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{S_{i},V_{j}}\Big{(}g_{V^{0}_{k}ff}^{L}\rightarrow
g_{V^{0}_{k}ff}^{R}\Big{)}.$ (18)
We also consider the contributions of the one-loop triangle diagrams with a
scalar boson $S_{j}$ and two vector bosons $V_{i}$ exchanged in the loop,
presented in Fig. 5. The corresponding form factors are:
$\displaystyle F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{V_{i},S_{j}}$
$\displaystyle=$
$\displaystyle\dfrac{eQ_{V}}{8\pi^{2}}\sum\limits_{V_{i},S_{j},V^{0}_{k}}\dfrac{g_{HV_{i}S_{j}}\;g_{V^{0}_{k}V_{i}S_{j}}\,g_{V^{0}_{k}ff}^{L}}{M_{H}^{2}M_{V_{i}}^{2}q^{2}(M_{H}^{2}-q^{2})(q^{2}-M_{V^{0}_{k}}^{2}+i\Gamma_{V^{0}_{k}}M_{V^{0}_{k}})}\times$
$\displaystyle\times\Bigg{\\{}(M_{H}^{2}-q^{2})(M_{H}^{2}-M_{S_{j}}^{2}+M_{V_{i}}^{2})\Big{[}A_{0}(M_{S_{j}}^{2})-A_{0}(M_{V_{i}}^{2})\Big{]}$
$\displaystyle+q^{2}\Big{[}M_{H}^{2}(M_{H}^{2}+4M_{V_{i}}^{2})-(M_{S_{j}}^{2}-M_{V_{i}}^{2})^{2}\Big{]}B_{0}(M_{H}^{2},M_{V_{i}}^{2},M_{S_{j}}^{2})$
$\displaystyle-
M_{H}^{2}\Big{[}M_{H}^{2}(M_{S_{j}}^{2}-M_{V_{i}}^{2}+q^{2})+M_{S_{j}}^{2}(2M_{V_{i}}^{2}-M_{S_{j}}^{2}-q^{2})$
$\displaystyle\hskip
113.81102pt+M_{V_{i}}^{2}(5q^{2}-M_{V_{i}}^{2})\Big{]}B_{0}(q^{2},M_{V_{i}}^{2},M_{S_{j}}^{2})$
$\displaystyle+2M_{H}^{2}q^{2}(M_{H}^{2}-q^{2})(M_{H}^{2}-M_{S_{j}}^{2}+M_{V_{i}}^{2})C_{12}(0,q^{2},M_{H}^{2},M_{V_{i}}^{2},M_{V_{i}}^{2},M_{S_{j}}^{2})$
$\displaystyle+4M_{H}^{2}M_{V_{i}}^{2}q^{2}(M_{H}^{2}-q^{2})C_{0}(0,q^{2},M_{H}^{2},M_{V_{i}}^{2},M_{V_{i}}^{2},M_{S_{j}}^{2})\Bigg{\\}},$
$\displaystyle F_{k,R}^{\text{$V_{k}^{0*}$-poles}}|_{V_{i},S_{j}}$
$\displaystyle=$ $\displaystyle
F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{V_{i},S_{j}}\Big{(}g_{V^{0}_{k}ff}^{L}\rightarrow
g_{V^{0}_{k}ff}^{R}\Big{)}.$ (20)
Finally, we consider fermion exchange in the one-loop triangle diagrams (shown
in Fig. 6). The form factors then read
$\displaystyle F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{f_{i},f_{j}}$
$\displaystyle=$
$\displaystyle\dfrac{eQ_{f}}{4\pi^{2}}\sum\limits_{f_{i},f_{j},V^{0}_{k}}\dfrac{N_{C}^{f}\;g_{V^{0}_{k}ff}^{L}}{(q^{2}-M_{V^{0}_{k}}^{2}+i\Gamma_{V^{0}_{k}}M_{V^{0}_{k}})}\times$
$\displaystyle\times\Bigg{\\{}\Big{[}2m_{f_{i}}(g_{Hf_{i}f_{j}}^{L}g_{V^{0}_{k}f_{i}f_{j}}^{L}+g_{Hf_{i}f_{j}}^{R}g_{V^{0}_{k}f_{i}f_{j}}^{R})+2m_{f_{j}}(g_{Hf_{i}f_{j}}^{L}g_{V^{0}_{k}f_{i}f_{j}}^{R}+g_{Hf_{i}f_{j}}^{R}g_{V^{0}_{k}f_{i}f_{j}}^{L})\Big{]}\times$
$\displaystyle\times\Big{[}C_{22}+C_{12}\Big{]}(0,q^{2},M_{H}^{2},m_{f_{i}}^{2},m_{f_{i}}^{2},m_{f_{j}}^{2})$
$\displaystyle+\Big{[}3m_{f_{i}}(g_{Hf_{i}f_{j}}^{L}g_{V^{0}_{k}f_{i}f_{j}}^{L}+g_{Hf_{i}f_{j}}^{R}g_{V^{0}_{k}f_{i}f_{j}}^{R})+m_{f_{j}}(g_{Hf_{i}f_{j}}^{L}g_{V^{0}_{k}f_{i}f_{j}}^{R}+g_{Hf_{i}f_{j}}^{R}g_{V^{0}_{k}f_{i}f_{j}}^{L})\Big{]}\times$
$\displaystyle\times
C_{2}(0,q^{2},M_{H}^{2},m_{f_{i}}^{2},m_{f_{i}}^{2},m_{f_{j}}^{2})$
$\displaystyle+m_{f_{i}}(g_{Hf_{i}f_{j}}^{L}g_{V^{0}_{k}f_{i}f_{j}}^{R}+g_{Hf_{i}f_{j}}^{R}g_{V^{0}_{k}f_{i}f_{j}}^{R})C_{0}(0,q^{2},M_{H}^{2},m_{f_{i}}^{2},m_{f_{i}}^{2},m_{f_{j}}^{2})\Bigg{\\}},$
$\displaystyle F_{k,R}^{\text{$V_{k}^{0*}$-poles}}|_{f_{i},f_{j}}$
$\displaystyle=$ $\displaystyle
F_{k,L}^{\text{$V_{k}^{0*}$-poles}}|_{f_{i},f_{j}}\Big{(}g_{V^{0}_{k}ff}^{L}\rightarrow
g_{V^{0}_{k}ff}^{R}\Big{)}.$ (22)
### 3.2 Non-$V_{k}^{0*}$ pole contributions
We turn our attention to the non-$V_{k}^{0*}$-pole contributions, considering
all possible particles exchanged in the loop diagrams $(c+d)$. One first
arrives at the group of Feynman diagrams with vector bosons $V_{i,j}$ as
internal lines (depicted in Fig. 7). The analytic formulas for the form
factors are:
Figure 7: One-loop triangle and box diagrams with vector bosons $V_{i,j}$
exchanged in the loop. Figure 8: One-loop triangle and box diagrams with
neutral vector bosons $V^{0}_{i},V^{0}_{j}$ exchanged in the loop. Figure 9:
One-loop diagrams with charged scalar bosons $S_{i,j}$ exchanged in the loop.
Figure 10: One-loop box diagrams with neutral scalar bosons $S^{0}_{i,j}$
exchanged in the loop.
Figure 11: One-loop diagrams with a vector boson $V_{i}$ and a scalar boson
$S_{j}$ exchanged in the loop.
Figure 12: One-loop diagrams with a neutral vector boson $V^{0}_{i}$ and a
neutral scalar boson $S_{j}^{0}$ exchanged in the loop.
$\displaystyle F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{V_{i},V_{j}}$
$\displaystyle=$
$\displaystyle\dfrac{eQ_{V}}{16\pi^{2}}\sum\limits_{V_{i},V_{j}}\dfrac{g_{HV_{i}V_{j}}\,g_{V_{i}f\nu_{f}}^{L}\,g_{V_{j}f\nu_{f}}^{L}}{M_{V_{i}}^{2}M_{V_{j}}^{2}}\times$
$\displaystyle\times\Bigg{\\{}-2M_{V_{i}}^{2}\Big{[}C_{0}(0,q_{12},M_{H}^{2},M_{V_{i}}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})+C_{0}(q_{12},0,M_{H}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2},M_{V_{j}}^{2})\Big{]}$
$\displaystyle-(M_{H}^{2}+M_{V_{i}}^{2}+M_{V_{j}}^{2})\Big{\\{}\Big{[}C_{22}+C_{12}\Big{]}(0,q_{12},M_{H}^{2},M_{V_{i}}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle\hskip
142.26378pt+\Big{[}C_{22}+C_{12}\Big{]}(q_{12},0,M_{H}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2},M_{V_{j}}^{2})\Big{\\}}$
$\displaystyle-(M_{H}^{2}+3M_{V_{i}}^{2}-M_{V_{j}}^{2})\Big{[}C_{2}(0,q_{12},M_{H}^{2},M_{V_{i}}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2})$
$\displaystyle\hskip
156.49014pt+C_{2}(q_{12},0,M_{H}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2},M_{V_{j}}^{2})\Big{]}$
$\displaystyle+2(M_{V_{j}}^{2}-M_{V_{i}}^{2})C_{1}(q_{12},0,M_{H}^{2},M_{V_{i}}^{2},M_{V_{j}}^{2},M_{V_{j}}^{2})$
$\displaystyle+(8-2d)M_{V_{i}}^{2}M_{V_{j}}^{2}\Big{[}D_{3}(0,0,0,M_{H}^{2};q_{12},q_{13},M_{V_{i}}^{2},0,M_{V_{j}}^{2},M_{V_{j}}^{2})$
$\displaystyle\hskip
99.58464pt+D_{3}(0,0,0,M_{H}^{2};q_{23},q_{12},M_{V_{i}}^{2},M_{V_{i}}^{2},0,M_{V_{j}}^{2}))\Big{]}$
$\displaystyle+(4-2d)M_{V_{i}}^{2}M_{V_{j}}^{2}\Big{\\{}\Big{[}D_{33}+D_{23}+D_{13}\Big{]}(0,0,0,M_{H}^{2};q_{23},q_{12},M_{V_{i}}^{2},M_{V_{i}}^{2},0,M_{V_{j}}^{2})$
$\displaystyle\hskip
99.58464pt+\Big{[}D_{33}+D_{23}\Big{]}(0,0,0,M_{H}^{2};q_{12},q_{13},M_{V_{i}}^{2},0,M_{V_{j}}^{2},M_{V_{j}}^{2})\Big{\\}}$
$\displaystyle+4M_{V_{i}}^{2}M_{V_{j}}^{2}D_{2}(0,0,0,M_{H}^{2};q_{12},q_{13},M_{V_{i}}^{2},0,M_{V_{j}}^{2},M_{V_{j}}^{2})\Bigg{\\}},$
$\displaystyle F_{1,R}^{\text{Non-$V_{k}^{0*}$}}|_{V_{i},V_{j}}$
$\displaystyle=$ $\displaystyle
F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{V_{i},V_{j}}\Big{(}g_{V_{i}f\nu_{f}}^{L}\rightarrow
g_{V_{i}f\nu_{f}}^{R};\;g_{V_{j}f\nu_{f}}^{L}\rightarrow
g_{V_{j}f\nu_{f}}^{R}\Big{)},$ (24) $\displaystyle
F_{2,L}^{\text{Non-$V_{k}^{0*}$}}|_{V_{i},V_{j}}$ $\displaystyle=$
$\displaystyle
F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{V_{i},V_{j}}\Big{(}\\{q_{13},q_{23}\\}\rightarrow\\{q_{23},q_{13}\\}\Big{)},$
(25) $\displaystyle F_{2,R}^{\text{Non-$V_{k}^{0*}$}}|_{V_{i},V_{j}}$
$\displaystyle=$ $\displaystyle
F_{2,L}^{\text{Non-$V_{k}^{0*}$}}|_{V_{i},V_{j}}\Big{(}g_{V_{i}f\nu_{f}}^{L}\rightarrow
g_{V_{i}f\nu_{f}}^{R};\;g_{V_{j}f\nu_{f}}^{L}\rightarrow
g_{V_{j}f\nu_{f}}^{R}\Big{)}.$ (26)
We find that the result is expressed in terms of $C$- and $D$-functions up to
$D_{33}$-coefficients. The reason is as follows. Due to the exchange of vector
bosons in the loop, the amplitude of each diagram contains tensor one-loop
integrals of rank $R\geq 3$. However, these cancel after summing all diagrams,
so the total amplitude is expressed only in terms of tensor integrals with
$R\leq 2$, which leads to the above results.
For neutral vector bosons $V^{0}_{i},V^{0}_{j}$ as internal lines in the loop
diagrams (see Fig. 8), the corresponding form factors are obtained as
$\displaystyle F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{V^{0}_{i},V^{0}_{j}}$
$\displaystyle=$
$\displaystyle\dfrac{eQ_{f}}{16\pi^{2}}\sum\limits_{V^{0}_{i},V^{0}_{j}}g_{HV^{0}_{i}V^{0}_{j}}\;g_{V^{0}_{i}ff}^{L}\;g_{V^{0}_{j}ff}^{L}\times$
$\displaystyle\times\Big{\\{}(4-2d)\big{[}D_{33}+D_{23}\big{]}+(8-2d)D_{3}\Big{\\}}(0,0,0,M_{H}^{2},q_{23},q_{13},M_{V^{0}_{i}}^{2},0,0,M_{V^{0}_{j}}^{2}),$
$\displaystyle F_{1,R}^{\text{Non-$V_{k}^{0*}$}}|_{V^{0}_{i},V^{0}_{j}}$
$\displaystyle=$ $\displaystyle
F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{V^{0}_{i},V^{0}_{j}}\Big{(}g_{V^{0}_{i}ff}^{L}\rightarrow
g_{V^{0}_{i}ff}^{R};\;g_{V^{0}_{j}ff}^{L}\rightarrow
g_{V^{0}_{j}ff}^{R}\Big{)},$ (28) $\displaystyle
F_{2,L}^{\text{Non-$V_{k}^{0*}$}}|_{V^{0}_{i},V^{0}_{j}}$ $\displaystyle=$
$\displaystyle
F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{V^{0}_{i},V^{0}_{j}}\Big{(}\\{q_{13},q_{23}\\}\rightarrow\\{q_{23},q_{13}\\}\Big{)},$
(29) $\displaystyle F_{2,R}^{\text{Non-$V_{k}^{0*}$}}|_{V^{0}_{i},V^{0}_{j}}$
$\displaystyle=$ $\displaystyle
F_{2,L}^{\text{Non-$V_{k}^{0*}$}}|_{V^{0}_{i},V^{0}_{j}}\Big{(}g_{V^{0}_{i}ff}^{L}\rightarrow
g_{V^{0}_{i}ff}^{R};\;g_{V^{0}_{j}ff}^{L}\rightarrow
g_{V^{0}_{j}ff}^{R}\Big{)}.$ (30)
Next, we consider the one-loop diagrams with charged scalar bosons $S_{i,j}$
as internal lines (shown in Fig. 9). The results read
$\displaystyle F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{S_{i},S_{j}}$
$\displaystyle=$
$\displaystyle\dfrac{eQ_{S}}{8\pi^{2}}\sum\limits_{S_{i},S_{j}}g_{HS_{i}S_{j}}\,g_{S_{i}f\nu_{f}}^{R}\,g_{S_{j}f\nu_{f}}^{R}\times$
$\displaystyle\times\Bigg{\\{}\Big{[}D_{33}+D_{23}+D_{3}\Big{]}(0,0,0,M_{H}^{2};q_{12},q_{13},M_{S_{i}}^{2},0,M_{S_{j}}^{2},M_{S_{j}}^{2})$
$\displaystyle\hskip
8.5359pt+\Big{[}D_{33}+D_{23}+D_{13}+D_{3}\Big{]}(0,0,0,M_{H}^{2};q_{23},q_{12},M_{S_{i}}^{2},M_{S_{i}}^{2},0,M_{S_{j}}^{2})\Bigg{\\}},$
$\displaystyle F_{1,R}^{\text{Non-$V_{k}^{0*}$}}|_{S_{i},S_{j}}$
$\displaystyle=$ $\displaystyle
F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{S_{i},S_{j}}\Big{(}g_{S_{i}f\nu_{f}}^{R}\rightarrow
g_{S_{i}f\nu_{f}}^{L};\;g_{S_{j}f\nu_{f}}^{R}\rightarrow
g_{S_{j}f\nu_{f}}^{L}\Big{)},$ (32) $\displaystyle
F_{2,L}^{\text{Non-$V_{k}^{0*}$}}|_{S_{i},S_{j}}$ $\displaystyle=$
$\displaystyle
F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{S_{i},S_{j}}\Big{(}\\{q_{13},q_{23}\\}\rightarrow\\{q_{23},q_{13}\\}\Big{)},$
(33) $\displaystyle F_{2,R}^{\text{Non-$V_{k}^{0*}$}}|_{S_{i},S_{j}}$
$\displaystyle=$ $\displaystyle
F_{2,L}^{\text{Non-$V_{k}^{0*}$}}|_{S_{i},S_{j}}\Big{(}g_{S_{i}f\nu_{f}}^{R}\rightarrow
g_{S_{i}f\nu_{f}}^{L};\;g_{S_{j}f\nu_{f}}^{R}\rightarrow
g_{S_{j}f\nu_{f}}^{L}\Big{)}.$ (34)
Furthermore, one also has the contributions of neutral scalar bosons
$S^{0}_{i,j}$ exchanged in the loop diagrams (as described in Fig. 10). The
analytic results for the form factors read
$\displaystyle F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{S^{0}_{i},S^{0}_{j}}$
$\displaystyle=$
$\displaystyle-\dfrac{eQ_{f}}{8\pi^{2}}\sum\limits_{S^{0}_{i},S^{0}_{j}}\,g_{HS^{0}_{i}S^{0}_{j}}\,g_{S^{0}_{i}ff}^{L}\,g_{S^{0}_{j}ff}^{R}\times$
$\displaystyle\times\Big{[}D_{33}+D_{23}+D_{3}\Big{]}(0,0,0,M_{H}^{2};q_{23},q_{13};M_{S^{0}_{i}}^{2},0,0,M_{S^{0}_{j}}^{2}),$
$\displaystyle F_{1,R}^{\text{Non-$V_{k}^{0*}$}}|_{S^{0}_{i},S^{0}_{j}}$
$\displaystyle=$ $\displaystyle
F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{S^{0}_{i},S^{0}_{j}}\Big{(}g_{S^{0}_{i}ff}^{L}\rightarrow
g_{S^{0}_{i}ff}^{R};\;g_{S^{0}_{j}ff}^{R}\rightarrow
g_{S^{0}_{j}ff}^{L}\Big{)},$ (36) $\displaystyle
F_{2,L}^{\text{Non-$V_{k}^{0*}$}}|_{S^{0}_{i},S^{0}_{j}}$ $\displaystyle=$
$\displaystyle
F_{1,L}^{\text{Non-$V_{k}^{0*}$}}|_{S^{0}_{i},S^{0}_{j}}\Big{(}\\{q_{13},q_{23}\\}\rightarrow\\{q_{23},q_{13}\\}\Big{)},$
(37) $\displaystyle F_{2,R}^{\text{Non-$V_{k}^{0*}$}}|_{S^{0}_{i},S^{0}_{j}}$
$\displaystyle=$ $\displaystyle
F_{2,L}^{\text{Non-$V_{k}^{0*}$}}|_{S^{0}_{i},S^{0}_{j}}\Big{(}g_{S^{0}_{i}ff}^{L}\rightarrow
g_{S^{0}_{i}ff}^{R};\;g_{S^{0}_{j}ff}^{R}\rightarrow
g_{S^{0}_{j}ff}^{L}\Big{)}.$ (38)
We now consider the non-$V_{k}^{0*}$-pole one-loop diagrams with a mixed
exchange of a scalar $S_{j}$ (or $S_{j}^{0}$) and a vector $V_{i}$
($V^{0}_{i}$) in the loop. The diagrams are depicted in Figs. 11 and 12, and
the calculations follow the same procedure. We find that these contributions
are proportional to $m_{f}$; as a result, in the limit $m_{f}\rightarrow 0$,
one confirms that
$\displaystyle
F_{k,L}^{\text{Non-$V_{k}^{0*}$}}|_{V_{i},S_{j}}=F_{k,R}^{\text{Non-$V_{k}^{0*}$}}|_{V_{i},S_{j}}$
$\displaystyle=$ $\displaystyle 0,$ (39) $\displaystyle
F_{k,L}^{\text{Non-$V_{k}^{0*}$}}|_{V^{0}_{i},S^{0}_{j}}=F_{k,R}^{\text{Non-$V_{k}^{0*}$}}|_{V^{0}_{i},S^{0}_{j}}$
$\displaystyle=$ $\displaystyle 0.$ (40)
for $k=1,2$.
We have verified the ultraviolet finiteness of the results. The UV-divergent
parts of all the above form factors come from the $B$-functions, while the
$C$- and $D$-functions in this paper are UV-finite. Higher-rank tensor
$B$-functions can be reduced to $B_{0}$ and $A_{0}$, and we verify that the
sum of all $B$-functions gives a UV-finite result. Consequently, all the form
factors are UV-finite (see our previous paper [45] for an example).
Having the correct form factors for the decay processes, the decay rate is
given by [20]:
$\displaystyle\dfrac{d\Gamma}{dq_{12}\,dq_{13}}=\dfrac{q_{12}}{512\pi^{3}M_{H}^{3}}\Big{[}q_{13}^{2}(|F_{1,R}|^{2}+|F_{2,R}|^{2})+q_{23}^{2}(|F_{1,L}|^{2}+|F_{2,L}|^{2})\Big{]}.$
(41)
Integrating over $0\leq q_{12}\leq M_{H}^{2}$ and $0\leq q_{13}\leq
M_{H}^{2}-q_{12}$, one obtains the total decay rate. In the next section, we
show typical examples in which we apply our analytical results for
$H\rightarrow f\bar{f}\gamma$ to the standard model, the $U(1)_{B-L}$
extension of the SM, and the THDM. Phenomenological results for the decay
channels of these models are also studied using the present parameters at the
LHC.
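To make the phase-space integration concrete, the following minimal Python
sketch (our own illustration, not the code used for the paper's numerics)
integrates Eq. (41) over the Dalitz region; `form_factors(q12, q13)` is an
assumed user-supplied function returning $(F_{1,L},F_{1,R},F_{2,L},F_{2,R})$,
standing in for the PV-function expressions of section 3:

```python
import numpy as np
from scipy import integrate

MH = 125.1  # Higgs mass in GeV (input value used in this paper)

def d_gamma(q12, q13, form_factors):
    """Double differential rate dGamma/(dq12 dq13) of Eq. (41)."""
    q23 = MH**2 - q12 - q13  # massless fermions, on-shell photon
    F1L, F1R, F2L, F2R = form_factors(q12, q13)
    return q12 / (512 * np.pi**3 * MH**3) * (
        q13**2 * (abs(F1R)**2 + abs(F2R)**2)
        + q23**2 * (abs(F1L)**2 + abs(F2L)**2)
    )

def total_width(form_factors):
    """Integrate over 0 <= q12 <= MH^2, 0 <= q13 <= MH^2 - q12."""
    val, _ = integrate.dblquad(
        lambda q13, q12: d_gamma(q12, q13, form_factors),
        0.0, MH**2,               # outer variable q12
        lambda q12: 0.0,          # inner variable q13, lower limit
        lambda q12: MH**2 - q12,  # inner variable q13, upper limit
    )
    return val
```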
## 4 Applications
We now apply the above results to the standard model and several BSMs, namely
the $U(1)_{B-L}$ extension of the SM and the THDM. For the phenomenological
analyses, we use the following input parameters: $\alpha=1/137.035999084$,
$M_{Z}=91.1876$ GeV, $\Gamma_{Z}=2.4952$ GeV, $M_{W}=80.379$ GeV,
$M_{H}=125.1$ GeV, $m_{\tau}=1.77686$ GeV, $m_{t}=172.76$ GeV, $m_{b}=4.18$
GeV, $m_{s}=0.93$ GeV and $m_{c}=1.27$ GeV. The input values for the new
parameters of each model under consideration are given below.
### 4.1 Standard model
We first reduce our results to the standard model. In this case, we have
$V_{i},V_{j}\rightarrow W^{+},W^{-}$ and $V_{k}^{0}\rightarrow Z,\gamma$. All
couplings related to the decay channels $H\rightarrow f\bar{f}\gamma$ in the
SM are replaced as in Table 1.
Vertices | Couplings
---|---
$g_{HV_{i}V_{j}}$ | $eM_{W}/s_{W}$
$g_{V^{0}_{k}V_{i}V_{j}}$ | $e\;c_{W}/s_{W}$
$g_{V^{0}_{k}AV_{i}V_{j}}$ | $e^{2}\;c_{W}/s_{W}$
$g_{V^{0}_{k}\nu_{f}\nu_{f}}^{L}$ | $e/(2s_{W}c_{W})$
$g_{V^{0}_{k}\nu_{f}\nu_{f}}^{R}$ | $0$
$g_{Hf_{i}f_{j}}^{L,R}$ | $e\;m_{f}/(2s_{W}\;M_{W})$
$g_{V^{0}_{k}f_{i}f_{j}}^{L}$ | $e(T_{3}^{f}-Q_{f}\;s^{2}_{W})/(s_{W}\;c_{W})$
$g_{V^{0}_{k}f_{i}f_{j}}^{R}$ | $-eQ_{f}s_{W}/c_{W}$
$g_{V_{i}f\nu_{f}}^{L}$ | $e/(\sqrt{2}\;s_{W})$
$g_{V_{i}f\nu_{f}}^{R}$ | $0$
Table 1: All the couplings involving the decay processes $H\rightarrow
f\bar{f}\gamma$ in the SM.
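For concreteness, here is a short Python sketch collecting the Table 1
couplings for a fermion of charge $Q_{f}$, isospin $T_{3}^{f}$ and mass
$m_{f}$; the on-shell relation $c_{W}=M_{W}/M_{Z}$ and the function name are
our own illustrative choices:

```python
import numpy as np

alpha = 1 / 137.035999084       # fine-structure constant (input)
MW, MZ = 80.379, 91.1876        # gauge boson masses in GeV
e = np.sqrt(4 * np.pi * alpha)  # electric charge
cW = MW / MZ                    # on-shell weak mixing angle
sW = np.sqrt(1 - cW**2)

def sm_couplings(Qf, T3f, mf):
    """SM couplings of Table 1 for one fermion species."""
    return {
        "g_HWW":    e * MW / sW,
        "g_ZWW":    e * cW / sW,
        "g_ZAWW":   e**2 * cW / sW,
        "g_Hff":    e * mf / (2 * sW * MW),
        "g_Zff_L":  e * (T3f - Qf * sW**2) / (sW * cW),
        "g_Zff_R": -e * Qf * sW / cW,
        "g_Zvv_L":  e / (2 * sW * cW),
        "g_Zvv_R":  0.0,
        "g_Wfv_L":  e / (np.sqrt(2) * sW),
        "g_Wfv_R":  0.0,
    }
```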
In the SM, the form factors are obtained by taking the contributions of Eqs.
(3.1, 3.1, 3.2, 3.2). Using the above couplings, we then obtain a compact
expression for the form factors as follows:
$\displaystyle F_{2L}$ $\displaystyle=$
$\displaystyle\dfrac{\alpha^{2}m_{t}^{2}}{3s_{W}M_{W}}\Bigg{[}\dfrac{16}{q_{12}}+\dfrac{2(8s_{W}^{2}-3)}{es_{W}c_{W}}\;\dfrac{g_{Zff}^{L}}{q_{12}-M_{Z}^{2}+i\Gamma_{Z}M_{Z}}\Bigg{]}\times$
$\displaystyle\hskip
19.91684pt\times\Bigg{\\{}4\Big{(}C_{22}+C_{12}+C_{2}\Big{)}+C_{0}\Bigg{\\}}(0,q_{12},M_{H}^{2},m_{t}^{2},m_{t}^{2},m_{t}^{2})$
$\displaystyle+\dfrac{\alpha^{2}}{s_{W}M_{W}^{3}}\Bigg{[}\dfrac{1}{q_{12}}-\dfrac{c_{W}}{es_{W}}\dfrac{g_{Zff}^{L}}{q_{12}-M_{Z}^{2}+i\Gamma_{Z}M_{Z}}\Bigg{]}\times$
$\displaystyle\hskip
19.91684pt\times\Bigg{\\{}\Big{[}-M_{H}^{2}\Big{(}B_{1}+3B_{11}+2B_{111}\Big{)}-2\Big{(}B_{00}+2B_{001}\Big{)}\Big{]}(M_{H}^{2},M_{W}^{2},M_{W}^{2})$
$\displaystyle+\Big{[}4M_{W}^{2}\Big{(}q_{12}-M_{H}^{2}\Big{)}+2M_{H}^{2}q_{12}+8(1-d)M_{W}^{4}\Big{]}\times$
$\displaystyle\hskip
85.35826pt\times\Big{(}C_{22}+C_{12}+C_{2}\Big{)}(0,q_{12},M_{H}^{2},M_{W}^{2},M_{W}^{2},M_{W}^{2})$
$\displaystyle+4M_{W}^{2}(q_{12}-4M_{W}^{2})C_{0}(0,q_{12},M_{H}^{2},M_{W}^{2},M_{W}^{2},M_{W}^{2})\Bigg{\\}}$
$\displaystyle+\dfrac{\alpha}{4\pi
s_{W}M_{W}^{3}}\Big{(}g_{Wf\nu_{f}}^{L}\Big{)}^{2}\times$ $\displaystyle\hskip
19.91684pt\times\Bigg{\\{}(-M_{H}^{2}-2M_{W}^{2})\Big{[}\Big{(}C_{12}+C_{11}+C_{1}\Big{)}(M_{H}^{2},0,q_{12},M_{W}^{2},M_{W}^{2},M_{W}^{2})$
$\displaystyle\hskip
122.34692pt+\Big{(}C_{12}+C_{11}+C_{1}\Big{)}(M_{H}^{2},q_{12},0,M_{W}^{2},M_{W}^{2},M_{W}^{2})\Big{]}$
$\displaystyle-2M_{W}^{2}\Big{[}C_{0}(M_{H}^{2},0,q_{12},M_{W}^{2},M_{W}^{2},M_{W}^{2})+C_{0}(M_{H}^{2},q_{12},0,M_{W}^{2},M_{W}^{2},M_{W}^{2})\Big{]}$
$\displaystyle+2M_{W}^{4}(2-d)\Big{[}\Big{(}D_{13}+D_{12}+D_{11}\Big{)}(M_{H}^{2},0,0,0;q_{12},q_{13},M_{W}^{2},M_{W}^{2},M_{W}^{2},0)$
$\displaystyle+\Big{(}D_{23}+D_{22}+D_{13}+2D_{12}+D_{11}\Big{)}(M_{H}^{2},0,0,0;q_{23},q_{12},M_{W}^{2},M_{W}^{2},0,M_{W}^{2})\Big{]}$
$\displaystyle-4M_{W}^{4}\Big{[}\Big{(}D_{3}+D_{2}+D_{0}\Big{)}(M_{H}^{2},0,0,0;q_{12},q_{13},M_{W}^{2},M_{W}^{2},M_{W}^{2},0)$
$\displaystyle\hskip
113.81102pt+D_{0}(M_{H}^{2},0,0,0;q_{23},q_{12},M_{W}^{2},M_{W}^{2},0,M_{W}^{2})\Big{]}$
$\displaystyle-2dM_{W}^{4}\Big{[}\Big{(}D_{2}+D_{1}\Big{)}(M_{H}^{2},0,0,0;q_{23},q_{12},M_{W}^{2},M_{W}^{2},0,M_{W}^{2})$
$\displaystyle\hskip
113.81102pt+D_{1}(M_{H}^{2},0,0,0;q_{12},q_{13},M_{W}^{2},M_{W}^{2},M_{W}^{2},0)\Big{]}\Bigg{\\}}$
$\displaystyle+\dfrac{\alpha M_{W}}{2\pi
c_{W}^{2}s_{W}}\Big{(}g_{Zff}^{L}\Big{)}^{2}\Bigg{\\{}(d-2)\times$
$\displaystyle\hskip
28.45274pt\times\Big{[}D_{23}+D_{22}+D_{13}+2D_{12}+D_{11}\Big{]}(M_{H}^{2},0,0,0;q_{23},q_{13},M_{Z}^{2},M_{Z}^{2},0,0)$
$\displaystyle\hskip
28.45274pt+\Big{[}d\Big{(}D_{2}+D_{1}\Big{)}+2\Big{(}D_{3}+D_{0}\Big{)}\Big{]}(M_{H}^{2},0,0,0;q_{23},q_{13},M_{Z}^{2},M_{Z}^{2},0,0)\Bigg{\\}}.$
Here the coupling constants entering this representation are
$g_{Wf\nu_{f}}^{L}=e/(\sqrt{2}s_{W})$,
$g_{Zff}^{L}=e(2s_{W}^{2}-1)/(2c_{W}s_{W})$, $g_{Wf\nu_{f}}^{R}=0$ and
$g_{Zff}^{R}=es_{W}/c_{W}$.
The other form factors can be obtained as follows:
$\displaystyle F_{1L}$ $\displaystyle=$ $\displaystyle
F_{2L}\Big{(}\\{q_{13},q_{23}\\}\rightarrow\\{q_{23},q_{13}\\}\Big{)},$ (43)
$\displaystyle F_{kR}$ $\displaystyle=$ $\displaystyle
F_{kL}\Big{(}g_{Wf\nu_{f}}^{L}\rightarrow
g_{Wf\nu_{f}}^{R};g_{Zff}^{L}\rightarrow g_{Zff}^{R}\Big{)}$ (44)
for $k=1,2$.
We stress that we derive alternative results for the form factors of
$H\rightarrow f\bar{f}\gamma$ in the SM compared with previous works, because
the analytic results in this paper are computed in the unitary gauge. Our
formulas may therefore take different forms from the results in [20], which
were calculated in the $R_{\xi}$-gauge. We cross-check our results against
[20] by numerical tests; the numerical results for this check are shown in
Table 2. We find that our results agree with those of [20] to more than $10$
digits.
$(q_{12},q_{13})$ | This work | Ref. [20]
---|---|---
$(100,200)$ | $9.62231539663501\cdot 10^{-8}$ | $9.62231539662956\cdot 10^{-8}$
| $-3.501515874673991\cdot 10^{-10}\,i$ | $-3.501515874674078\cdot 10^{-10}\,i$
$(-100,200)$ | $-9.95227151085161\cdot 10^{-8}$ | $-9.95227151084899\cdot 10^{-8}$
| $-3.531494528007124\cdot 10^{-10}\,i$ | $-3.531494528006995\cdot 10^{-10}\,i$
$(100,-200)$ | $9.62360254230002\cdot 10^{-8}$ | $9.62360254229717\cdot 10^{-8}$
| $-3.597907189717628\cdot 10^{-10}\,i$ | $-3.597907189717582\cdot 10^{-10}\,i$
$(-100,-200)$ | $-9.95098785515085\cdot 10^{-8}$ | $-9.95098785514946\cdot 10^{-8}$
| $-3.622848558573573\cdot 10^{-10}\,i$ | $-3.622848558573332\cdot 10^{-10}\,i$
Table 2: Numerical check of the form factor $F_{2L}$ in this work against
$b_{1}$ in Eq. ($A2$) of [20].
We also generate the decay widths for $H\rightarrow e\bar{e}\gamma$ and
cross-check our results with [13]; the results are presented in Table 3. For
this test, we adjust the input parameters and apply the same cuts as in [13].
In Table 3, the parameter $k$ sets the kinematical cuts on the invariant
masses $m_{ff},\;m_{f\gamma}$ as follows:
$\displaystyle m_{ff}^{2},m_{f\gamma}^{2}\geq(kM_{H})^{2}.$ (45)
We find that our results are in good agreement with those of [13].
$k$ | This work | Ref. [13]
---|---|---
$0$ | $0.576865$ | $0.5782$
$0.1$ | $0.242514$ | $0.245$
$0.2$ | $0.184121$ | $0.1897$
$0.3$ | $0.121368$ | $0.1242$
$0.4$ | $0.0572478$ | $0.05844$
Table 3: Cross-check of the decay widths in this work against [13].
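In terms of the invariants, the cut of Eq. (45) reads
$q_{12},q_{13},q_{23}\geq(kM_{H})^{2}$ (with $q_{12}=m_{ff}^{2}$ and
$q_{13},q_{23}$ the fermion-photon invariant masses squared). A hedged sketch,
reusing `MH`, `d_gamma` and `integrate` from the earlier illustration:

```python
def total_width_with_cuts(form_factors, k):
    """Total width with the kinematical cuts of Eq. (45)."""
    qmin = (k * MH)**2

    def integrand(q13, q12):
        q23 = MH**2 - q12 - q13
        if q12 < qmin or q13 < qmin or q23 < qmin:
            return 0.0  # outside the cut region
        return d_gamma(q12, q13, form_factors)

    val, _ = integrate.dblquad(integrand, 0.0, MH**2,
                               lambda q12: 0.0,
                               lambda q12: MH**2 - q12)
    return val
```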
### 4.2 $U(1)_{B-L}$ extension of the SM
We refer to appendix B for a brief review of this model; there, all couplings
relevant to the decay processes $H\rightarrow f\bar{f}\gamma$ in the
$U(1)_{B-L}$ extension of the SM are derived (see Table 4). Apart from the SM
particles, two additional neutral Higgs bosons and a neutral gauge boson
$Z^{\prime}$ associated with the $U(1)_{B-L}$ gauge symmetry are taken into
account in this model.
For the phenomenological study, we have to include three new parameters: the
mixing angle $c_{\alpha}$, the $U(1)_{B-L}$ coupling $g^{\prime}_{1}$, and the
mass of the new gauge boson $M_{Z^{\prime}}$. In the results below, we set
$c_{\alpha}=0.3,0.7$ and $c_{\alpha}=1$ (the last case corresponding to the
standard model). The mass of the $Z^{\prime}$ is taken in the range $600$ GeV
$\leq M_{Z^{\prime}}\leq 1000$ GeV, and the coupling is varied in $0.05\leq
g_{1}^{\prime}\leq 0.5$.
We study the impact of the $U(1)_{B-L}$ extension of the SM on the
differential decay widths as functions of $m_{ff}$ and $m_{f\gamma}$. The
results are shown in Fig. 13 with $M_{Z^{\prime}}=1000$ GeV fixed. In these
figures, the solid line shows the SM case ($c_{\alpha}=1$), the dashed line
corresponds to $c_{\alpha}=0.7$, and the dash-dotted line to $c_{\alpha}=0.3$.
In the left figure, we observe the photon pole in the lowest region of
$m_{ff}$. The decay rates decrease up to $m_{ff}\sim 60$ GeV and then grow
towards the $Z$-peak (the peak of $Z\rightarrow f\bar{f}$), located around
$m_{ff}\sim 90$ GeV; beyond the peak, the decay rates decrease rapidly. In the
right figure, the decay widths increase up to a peak at $m_{f\gamma}\sim 81$
GeV, corresponding to the photon recoil mass at the $Z$-pole, and also
decrease rapidly beyond it. It is interesting that the contributions of the
$U(1)_{B-L}$ extension are sizable for both $c_{\alpha}=0.7$ and $0.3$; these
effects can be probed clearly at future colliders.
[Plots: $d\Gamma/dm_{f\bar{f}}$ (left) and $d\Gamma/dm_{f\gamma}$ (right)]
Figure 13: Differential decay widths as functions of $m_{ff}$ and
$m_{f\gamma}$.
We next examine the decay widths as functions of $M_{Z^{\prime}}$. In this
study, we vary the $Z^{\prime}$ mass in the range $800$ GeV $\leq
M_{Z^{\prime}}\leq 1000$ GeV and set $c_{\alpha}=0.7$ (see Fig. 14). The
dashed line shows the case $g^{\prime}_{1}=0.05$, the dotted line
$g^{\prime}_{1}=0.2$, and the dash-dotted line $g^{\prime}_{1}=0.5$. Over the
whole range of $M_{Z^{\prime}}$, we observe that the decay widths are
proportional to $g_{1}^{\prime}$ (this is confirmed by the later analyses) and
decrease with increasing $M_{Z^{\prime}}$. We conclude that the contributions
from the $U(1)_{B-L}$ extension are sizable and can be probed clearly at
future colliders.
[Plot: $\Gamma$ [keV] versus $M_{Z^{\prime}}$]
Figure 14: The decay widths as functions of $M_{Z^{\prime}}$.
We finally discuss the effect of $g^{\prime}_{1}$ on the decay widths (see
Fig. 15). In these figures, we set $M_{Z^{\prime}}=800$ GeV, with
$c_{\alpha}=0.3$ in the left figure and $c_{\alpha}=0.7$ in the right figure.
We find that the decay widths are proportional to $g_{1}^{\prime}$.
[Plots: $\Gamma$ [keV] versus $g_{1}^{\prime}$ for $c_{\alpha}=0.3$ (left) and $c_{\alpha}=0.7$ (right)]
Figure 15: The decay widths as functions of $g_{1}^{\prime}$.
### 4.3 Two Higgs Doublet Model
For a review of the THDM, we refer to appendix C. In this framework, the gauge
sector is the same as in the SM, i.e. $V_{i}=V_{j}=W$ and
$V_{k}^{0}=Z,\gamma$. In the Higgs sector, one has an additional charged Higgs
boson $H^{\pm}$, two neutral CP-even Higgs bosons $H_{1}^{0},H_{2}^{0}$, and a
CP-odd Higgs boson $A$. All couplings relevant to the decay processes
$H\rightarrow f\bar{f}\gamma$ are shown in Table 5.
For the phenomenological results, we take $1\leq\tan\beta\leq 30$,
$M_{H_{1}^{0}}=125.1$ GeV, $-900^{2}$ GeV$^{2}$ $\leq m_{12}^{2}\leq 900^{2}$
GeV$^{2}$, $340$ GeV $\leq M_{H^{\pm}}\leq 900$ GeV, and
$-\pi/2\leq\alpha\leq\pi/2$. We study the differential decay rates as
functions of the invariant mass of the leptons, $m_{ff}$, selecting
$M_{H^{\pm}}=700$ GeV and $m_{12}^{2}=800^{2}$ GeV$^{2}$. The results are
shown in Fig. 16. In this figure, the solid line shows the SM case, the dashed
line the case $\alpha=-0.4\pi$, the dash-dotted line $\alpha=0$, and the
dotted line $\alpha=0.4\pi$. From top to bottom, the panels correspond to
$\tan\beta=5,10,15,25$. The decay rates show the same behavior as in the
previous cases: we observe the photon pole in the lowest region of $m_{ff}$;
the rates decrease from the photon pole down to $m_{ff}\sim 60$ GeV and then
grow up to the $Z$-peak, beyond which they decrease rapidly. The effects of
the THDM are visible in all cases.
[Plots: four panels of $d\Gamma/dm_{ff}$ for $\tan\beta=5,10,15,25$]
Figure 16: Differential decay rates as functions of the invariant mass of the
leptons $m_{ff}$ for $\alpha=-0.4\pi,0,0.4\pi$.
We next vary the charged Higgs mass, $M_{H^{\pm}}=400,600,800$ GeV, and take
$\alpha=0.4\pi$. From top to bottom, the panels correspond to
$\tan\beta=5,10,15,25$ (see Fig. 17). The dependence of the decay widths on
$m_{ff}$ follows the previous explanation, and the widths decrease with
increasing charged Higgs mass. We find that the THDM gives sizeable
contributions in all the above cases; these effects can be discriminated
clearly at future colliders.
[Plots: four panels of $d\Gamma/dm_{ff}$ for $\tan\beta=5,10,15,25$]
Figure 17: Differential decay rates as functions of the invariant mass of the
leptons $m_{ff}$ for charged Higgs masses $M_{H^{\pm}}=400,600,800$ GeV.
## 5 Conclusions
We have performed the calculation of one-loop contributions to the decay
processes $H\rightarrow f\bar{f}\gamma$ in the limit $m_{f}\rightarrow 0$. In
this computation, we have considered all possible contributions of additional
heavy vector gauge bosons, heavy fermions, and charged (as well as neutral)
scalar particles that may be exchanged in the loop diagrams. The analytic
formulas are written in terms of Passarino-Veltman functions, which can be
evaluated numerically using the package LoopTools. The evaluations have then
been applied to the standard model, the $U(1)_{B-L}$ extension of the SM, and
the two Higgs doublet model, and the phenomenological results of the decay
processes for these models have been studied in detail. We find that the
new-physics effects give sizable contributions that can be probed at future
colliders.
Acknowledgment: This research is funded by Vietnam National Foundation for
Science and Technology Development (NAFOSTED) under the grant number
$103.01$-$2019.346$. K. H. Phan would like to thank Dr. L. T. Hue for helpful
discussions.
## Appendix A Tensor one-loop reductions
In this appendix, the tensor one-loop reduction method of [44] is briefly
discussed. First, the one-loop one-, two-, three- and four-point tensor
integrals of rank $R$ are defined as follows:
$\displaystyle\\{A;B;C;D\\}^{\mu_{1}\mu_{2}\cdots\mu_{R}}=(\mu^{2})^{2-d/2}\int\frac{d^{d}k}{(2\pi)^{d}}\dfrac{k^{\mu_{1}}k^{\mu_{2}}\cdots
k^{\mu_{R}}}{\\{P_{1};P_{1}P_{2};P_{1}P_{2}P_{3};P_{1}P_{2}P_{3}P_{4}\\}}.$
(46)
In this formula, $P_{j}$ ($j=1,\cdots,4$) are the inverse Feynman propagators
$\displaystyle P_{j}=(k+q_{j})^{2}-m_{j}^{2}+i\rho.$ (47)
In this equation, we use $q_{j}=\sum\limits_{i=1}^{j}p_{i}$, with $p_{i}$ the
external momenta and $m_{j}$ the internal masses in the loop. Dimensional
regularization is performed in space-time dimension $d=4-2\varepsilon$, and
the parameter $\mu^{2}$ plays the role of a renormalization scale. When the
numerator of the integrand in Eq. (46) becomes $1$, one has the corresponding
scalar one-loop functions (denoted $A_{0}$, $B_{0}$, $C_{0}$ and $D_{0}$).
Explicit reduction formulas for the one-loop one-, two-, three- and four-point
tensor integrals up to rank $R=3$ are written as follows [44]. In particular,
for the two-point tensor integrals, the reduction formulas are:
$\displaystyle A^{\mu}$ $\displaystyle=$ $\displaystyle 0,$ (48)
$\displaystyle A^{\mu\nu}$ $\displaystyle=$ $\displaystyle g^{\mu\nu}A_{00},$
(49) $\displaystyle A^{\mu\nu\rho}$ $\displaystyle=$ $\displaystyle 0,$ (50)
$\displaystyle B^{\mu}$ $\displaystyle=$ $\displaystyle q^{\mu}B_{1},$ (51)
$\displaystyle B^{\mu\nu}$ $\displaystyle=$ $\displaystyle
g^{\mu\nu}B_{00}+q^{\mu}q^{\nu}B_{11},$ (52) $\displaystyle B^{\mu\nu\rho}$
$\displaystyle=$
$\displaystyle\\{g,q\\}^{\mu\nu\rho}B_{001}+q^{\mu}q^{\nu}q^{\rho}B_{111},$
(53)
For three-point functions, one has
$\displaystyle C^{\mu}$ $\displaystyle=$ $\displaystyle
q_{1}^{\mu}C_{1}+q_{2}^{\mu}C_{2}=\sum\limits_{i=1}^{2}q_{i}^{\mu}C_{i},$ (54)
$\displaystyle C^{\mu\nu}$ $\displaystyle=$ $\displaystyle
g^{\mu\nu}C_{00}+\sum\limits_{i,j=1}^{2}q_{i}^{\mu}q_{j}^{\nu}C_{ij},$ (55)
$\displaystyle C^{\mu\nu\rho}$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{2}\\{g,q_{i}\\}^{\mu\nu\rho}C_{00i}+\sum_{i,j,k=1}^{2}q^{\mu}_{i}q^{\nu}_{j}q^{\rho}_{k}C_{ijk},$
(56)
Similarly, tensor reduction formulas for four-point functions are given by
$\displaystyle D^{\mu}$ $\displaystyle=$ $\displaystyle
q_{1}^{\mu}D_{1}+q_{2}^{\mu}D_{2}+q_{3}^{\mu}D_{3}=\sum\limits_{i=1}^{3}q_{i}^{\mu}D_{i},$
(57) $\displaystyle D^{\mu\nu}$ $\displaystyle=$ $\displaystyle
g^{\mu\nu}D_{00}+\sum\limits_{i,j=1}^{3}q_{i}^{\mu}q_{j}^{\nu}D_{ij},$ (58)
$\displaystyle D^{\mu\nu\rho}$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{3}\\{g,q_{i}\\}^{\mu\nu\rho}D_{00i}+\sum_{i,j,k=1}^{3}q^{\mu}_{i}q^{\nu}_{j}q^{\rho}_{k}D_{ijk}.$
(59)
The short notation [44]
$\\{g,q_{i}\\}^{\mu\nu\rho}=g^{\mu\nu}q^{\rho}_{i}+g^{\nu\rho}q^{\mu}_{i}+g^{\mu\rho}q^{\nu}_{i}$
is used in the above relations. The scalar coefficients
$A_{00},B_{1},\cdots,D_{333}$ on the right-hand sides of the above equations
are the so-called Passarino-Veltman functions [44]. They have been implemented
in LoopTools [46] for numerical computations.
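As a simple worked example of how these coefficients arise (our own
illustration, taking $q_{1}=0$, $q_{2}=q$ in Eq. (47)), contracting Eq. (51)
with $q_{\mu}$ and using $2\,q\cdot k=P_{2}-P_{1}-(q^{2}+m_{1}^{2}-m_{2}^{2})$
yields
$\displaystyle q^{2}B_{1}=\frac{1}{2}\Big[A_{0}(m_{1}^{2})-A_{0}(m_{2}^{2})-(q^{2}+m_{1}^{2}-m_{2}^{2})B_{0}(q^{2},m_{1}^{2},m_{2}^{2})\Big],$
the standard rank-one reduction; higher-rank coefficients follow from
analogous contractions [44].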
## Appendix B Review of $U(1)_{B-L}$ extension
In this appendix, we briefly review the $U(1)_{B-L}$ extension [25]. This
model is based on the gauge symmetry $SU(3)_{C}\otimes SU(2)_{L}\otimes
U(1)_{Y}\otimes U(1)_{B-L}$. Including a complex scalar $S$, the general
scalar potential is given by
$\displaystyle V(\Phi,S)$ $\displaystyle=$ $\displaystyle
m^{2}\Phi^{{\dagger}}\Phi+\lambda_{1}(\Phi^{{\dagger}}\Phi)^{2}+\mu^{2}|S|^{2}+\lambda_{2}|S|^{4}+\lambda_{3}\Phi^{{\dagger}}\Phi|S|^{2}.$
(60)
In order to find the mass spectrum of the scalar sector, we expand the scalar
fields around their vacuum as follows:
$\displaystyle\Phi=\begin{pmatrix}\phi^{+}\\\
\dfrac{v_{\phi}+h+i\xi}{\sqrt{2}}\end{pmatrix},\quad
S=\dfrac{v_{s}+h^{\prime}+i\xi^{\prime}}{\sqrt{2}}.$ (61)
The Goldstone bosons $\phi^{\pm},\xi$ give masses to the $W^{\pm}$ and $Z$
bosons. In the unitary gauge, the neutral Higgs mass eigenstates are given by:
$\displaystyle\begin{pmatrix}h_{1}\\\
h_{2}\end{pmatrix}=\begin{pmatrix}c_{\alpha}&-s_{\alpha}\\\
s_{\alpha}&c_{\alpha}\\\ \end{pmatrix}\begin{pmatrix}h\\\
h^{\prime}\end{pmatrix}$ (62)
with the mixing angle
$\displaystyle s_{2\alpha}=\sin
2\alpha=\dfrac{\lambda_{3}v_{\phi}v_{s}}{\sqrt{(\lambda_{1}v_{\phi}^{2}-\lambda_{2}v_{s}^{2})^{2}+(\lambda_{3}v_{\phi}v_{s})^{2}}}$
(63)
After this transformation, the scalar Higgs masses are given by
$\displaystyle M^{2}_{h_{1}}$ $\displaystyle=$
$\displaystyle\lambda_{1}v_{\phi}^{2}+\lambda_{2}v_{s}^{2}-\sqrt{(\lambda_{1}v_{\phi}^{2}-\lambda_{2}v_{s}^{2})^{2}+(\lambda_{3}v_{\phi}v_{s})^{2}},$
(64) $\displaystyle M^{2}_{h_{2}}$ $\displaystyle=$
$\displaystyle\lambda_{1}v_{\phi}^{2}+\lambda_{2}v_{s}^{2}+\sqrt{(\lambda_{1}v_{\phi}^{2}-\lambda_{2}v_{s}^{2})^{2}+(\lambda_{3}v_{\phi}v_{s})^{2}}.$
(65)
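A small numerical sketch of Eqs. (63)-(65) (our own illustration; the inputs
are the potential couplings and the two vevs):

```python
import numpy as np

def scalar_spectrum(lam1, lam2, lam3, v_phi, v_s):
    """Neutral scalar masses and sin(2*alpha), Eqs. (63)-(65)."""
    root = np.sqrt((lam1 * v_phi**2 - lam2 * v_s**2)**2
                   + (lam3 * v_phi * v_s)**2)
    M2_h1 = lam1 * v_phi**2 + lam2 * v_s**2 - root  # Eq. (64)
    M2_h2 = lam1 * v_phi**2 + lam2 * v_s**2 + root  # Eq. (65)
    s2a = lam3 * v_phi * v_s / root                 # Eq. (63)
    return np.sqrt(M2_h1), np.sqrt(M2_h2), s2a
```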
The $W$, $Z$ and $Z^{\prime}$ bosons obtain their masses by expanding the
kinetic terms
$\displaystyle\mathcal{L}_{\text{gauge}}\rightarrow(D^{\mu}\Phi)^{{\dagger}}D_{\mu}\Phi,\quad(D^{\mu}S)^{{\dagger}}D_{\mu}S.$
(66)
As a result, the mass of the $Z^{\prime}$ is
$M_{Z^{\prime}}=2v_{s}g_{1}^{\prime}$.
The Yukawa interactions involving right-handed neutrinos are given by:
$\displaystyle\mathcal{L}_{Y}$ $\displaystyle=$
$\displaystyle-y^{d}_{jk}\bar{q}_{Lj}d_{Rk}\Phi-y^{u}_{jk}\bar{q}_{Lj}u_{Rk}i\sigma_{2}\Phi^{*}-y^{e}_{jk}\bar{l}_{Lj}e_{Rk}\Phi$
(67)
$\displaystyle-y^{\nu}_{jk}\bar{l}_{Lj}\nu_{Rk}i\sigma_{2}\Phi^{*}-y^{M}_{jk}\overline{(\nu_{R})}_{j}^{c}\nu_{Rk}\;S+\text{h.c}.$
for $j,k=1,2,3$. The last term in this equation is the Majorana mass term for
the right-handed neutrinos. After spontaneous symmetry breaking, the neutrino
mass matrix is
$\displaystyle
\mathcal{M}_{\nu}=\begin{pmatrix}0&m_{D}\\\
m_{D}^{T}&M\\\ \end{pmatrix},\quad
m_{D}=\dfrac{(y^{\nu})^{*}}{\sqrt{2}}v_{\phi},\quad M=\sqrt{2}\,y^{M}v_{s}.$ (68)
The diagonalization is obtained by the transformation
$\displaystyle\text{diag}\Big{(}-\dfrac{(m^{i}_{D})^{2}}{M^{i}},M^{i}\Big{)}=\begin{pmatrix}\cos\alpha_{i}&-\sin\alpha_{i}\\\
\sin\alpha_{i}&\cos\alpha_{i}\\\ \end{pmatrix}\begin{pmatrix}0&m_{D}^{i}\\\
m_{D}^{i}&M^{i}\\\
\end{pmatrix}\begin{pmatrix}\cos\alpha_{i}&\sin\alpha_{i}\\\
-\sin\alpha_{i}&\cos\alpha_{i}\\\ \end{pmatrix}$ (69)
for $i=1,2,3$ and $\alpha_{i}=\text{arcsin}(m_{D}^{i}/M^{i})$.
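A minimal numerical cross-check of the per-generation diagonalization in Eq.
(69) (our own sketch, assuming real entries with $m_{D}^{i}\ll M^{i}$):

```python
import numpy as np

def seesaw_2x2(mD, M):
    """Eigen-decomposition of the matrix [[0, mD], [mD, M]] of
    Eq. (69); the light eigenvalue approaches -mD**2/M and the
    heavy one approaches M for mD << M."""
    eigvals, eigvecs = np.linalg.eigh(np.array([[0.0, mD], [mD, M]]))
    return eigvals, eigvecs
```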
We show all relevant couplings for the decays under consideration in the case
$\alpha_{i}\rightarrow 0$; they are collected in Table 4.
Vertices | Couplings
---|---
$h_{1}(h_{2})f\bar{f}$ | $-i\dfrac{c_{\alpha}(s_{\alpha})e\;m_{f}}{2M_{W}s_{W}}$
$Z^{\prime}_{\mu}f\bar{f}$ | $iQ_{f}\;g^{\prime}_{1}\gamma_{\mu}$
$h_{1}(h_{2})W^{+}_{\mu}W^{-}_{\nu}$ | $i\frac{eM_{W}}{s_{W}}\;c_{\alpha}(s_{\alpha})g_{\mu\nu}$
$h_{1}(h_{2})Z_{\mu}Z_{\nu}$ | $i\frac{eM_{W}}{c_{W}^{2}\;s_{W}}\;c_{\alpha}(s_{\alpha})g_{\mu\nu}$
$h_{1}(h_{2})Z^{\prime}_{\mu}Z^{\prime}_{\nu}$ | $-i4g^{\prime}_{1}M_{Z^{\prime}}\;s_{\alpha}(-c_{\alpha})g_{\mu\nu}$
Table 4: All the couplings involving the decay processes $H\rightarrow
f\bar{f}\gamma$ in the $U(1)_{B-L}$ extension of the SM.
## Appendix C Review of Two Higgs Doublet Model
Based on Ref. [27], we briefly review the two Higgs doublet model with a
softly broken $Z_{2}$-symmetry. In this model, there are two scalar doublets
$\Phi_{1},\Phi_{2}$ with hypercharge $Y=1/2$. The parts of the Lagrangian
extending the SM are:
$\displaystyle\mathcal{L}=\mathcal{L}_{K}+\mathcal{L}_{Y}-V(\Phi_{1},\Phi_{2}).$
(70)
Here $\mathcal{L}_{K}$ is the kinetic term, $\mathcal{L}_{Y}$ the Yukawa part,
and $V(\Phi_{1},\Phi_{2})$ the Higgs potential. The kinetic term takes the
form
$\displaystyle\mathcal{L}_{K}=\sum\limits_{k=1}^{2}\left(D_{\mu}\Phi_{k}\right)^{{\dagger}}\left(D^{\mu}\Phi_{k}\right)$
(71)
with $D_{\mu}=\partial_{\mu}-igT^{a}W^{a}_{\mu}-i\frac{g^{\prime}}{2}B_{\mu}$.
The Higgs potential with softly broken $Z_{2}$ symmetry is expressed as:
$\displaystyle V(\Phi_{1},\Phi_{2})$ $\displaystyle=$
$\displaystyle\frac{1}{2}m_{11}^{2}\Phi_{1}^{{\dagger}}\Phi_{1}-m_{12}^{2}(\Phi_{1}^{{\dagger}}\Phi_{2}+\Phi_{2}^{{\dagger}}\Phi_{1})+\frac{1}{2}m_{22}^{2}\Phi_{2}^{{\dagger}}\Phi_{2}+\dfrac{\lambda_{1}}{2}\left(\Phi_{1}^{{\dagger}}\Phi_{1}\right)^{2}+\dfrac{\lambda_{2}}{2}\left(\Phi_{2}^{{\dagger}}\Phi_{2}\right)^{2}$
(72)
$\displaystyle+\lambda_{3}\left(\Phi_{1}^{{\dagger}}\Phi_{1}\right)\left(\Phi_{2}^{{\dagger}}\Phi_{2}\right)+\lambda_{4}\left(\Phi_{1}^{{\dagger}}\Phi_{2}\right)\left(\Phi_{2}^{{\dagger}}\Phi_{1}\right)+\frac{\lambda_{5}}{2}\left[\left(\Phi_{1}^{{\dagger}}\Phi_{2}\right)^{2}+\left(\Phi_{2}^{{\dagger}}\Phi_{1}\right)^{2}\right].$
In this potential, $m_{12}^{2}$ plays the role of the soft-breaking scale of
the $Z_{2}$ symmetry. The two scalar doublet fields can be parameterized as
follows:
$\displaystyle\Phi_{1}=\begin{pmatrix}\phi_{1}^{+}\\\
\dfrac{v_{1}+\eta_{1}+i\xi_{1}}{\sqrt{2}}\end{pmatrix},\quad\Phi_{2}=\begin{pmatrix}\phi_{2}^{+}\\\
\dfrac{v_{2}+\eta_{2}+i\xi_{2}}{\sqrt{2}}\end{pmatrix}.$ (73)
The stationary conditions of the Higgs potential give a system of equations
for these parameters, shown as follows:
$\displaystyle
m_{11}^{2}-\mu^{2}\frac{v_{2}^{2}}{v^{2}}+\frac{\lambda_{1}}{2}v_{1}^{2}+\frac{\lambda_{345}}{2}v_{2}^{2}$
$\displaystyle=$ $\displaystyle 0,$ (74) $\displaystyle
m_{22}^{2}-\mu^{2}\frac{v_{1}^{2}}{v^{2}}+\frac{\lambda_{2}}{2}v_{2}^{2}+\frac{\lambda_{345}}{2}v_{1}^{2}$
$\displaystyle=$ $\displaystyle 0.$ (75)
Here $v^{2}=v_{1}^{2}+v_{2}^{2}$ is fixed at the electroweak scale,
$v=(\sqrt{2}G_{F})^{-1/2}=246$ GeV, and the new parameter $\mu^{2}$ is defined
as $\mu^{2}=\frac{v^{2}}{v_{1}v_{2}}m_{12}^{2}$. The shorthand notation is
$\lambda_{345}=\lambda_{3}+\lambda_{4}+\lambda_{5}$. The mixing angle is given
by $t_{\beta}=\tan\beta=v_{2}/v_{1}$. The mass terms of the Higgs potential
$V_{mass}$ can be expressed as:
$\displaystyle V_{mass}$ $\displaystyle=$
$\displaystyle(\phi^{+}_{1},\phi^{+}_{2})R_{\beta}\begin{pmatrix}0&0\\\
0&M^{2}_{H^{+}}\\\ \end{pmatrix}R_{\beta}^{-1}\begin{pmatrix}\phi_{1}^{+}\\\
\phi_{2}^{+}\end{pmatrix}+\frac{1}{2}(\xi_{1},\xi_{2})R_{\beta}\begin{pmatrix}0&0\\\
0&M^{2}_{A}\\\ \end{pmatrix}R_{\beta}^{-1}\begin{pmatrix}\xi_{1}\\\ \xi_{2}\\\
\end{pmatrix}$ (76)
$\displaystyle+\frac{1}{2}(\eta_{1},\eta_{2})R_{\beta}\begin{pmatrix}M^{2}_{H_{2}^{0}}&0\\\
0&M^{2}_{H_{1}^{0}}\\\ \end{pmatrix}R_{\beta}^{-1}\begin{pmatrix}\eta_{1}\\\
\eta_{2}\end{pmatrix}.$
Here the diagonalized neutral mass matrix is defined as
$\displaystyle\text{diag}(M^{2}_{H_{2}^{0}},M^{2}_{H_{1}^{0}})=R_{\alpha}\mathcal{M}^{2}R_{\alpha}^{T},\quad\text{with}\quad(\mathcal{M}^{2})_{ij}=\dfrac{\partial^{2}V}{\partial{\eta_{i}}\partial{\eta_{j}}}.$
(77)
The mass eigenstates can then be expressed as follows:
$\displaystyle\begin{pmatrix}G^{+}\\\
H^{+}\end{pmatrix}=R_{\beta}^{-1}\begin{pmatrix}\phi_{1}^{+}\\\
\phi_{2}^{+}\end{pmatrix},\quad\begin{pmatrix}G^{0}\\\
A\end{pmatrix}=R_{\beta}^{-1}\begin{pmatrix}\xi_{1}\\\
\xi_{2}\end{pmatrix},\quad\begin{pmatrix}H_{2}^{0}\\\
H_{1}^{0}\end{pmatrix}=R_{\alpha}^{-1}R_{\beta}^{-1}\begin{pmatrix}\eta_{1}\\\
\eta_{2}\end{pmatrix}$ (78)
where
$\displaystyle R_{\beta}=\begin{pmatrix}c_{\beta}&s_{\beta}\\\
-s_{\beta}&c_{\beta}\\\ \end{pmatrix},\quad
R_{\alpha}=\begin{pmatrix}c_{\alpha}&s_{\alpha}\\\ -s_{\alpha}&c_{\alpha}\\\
\end{pmatrix}$ (79)
with $-\pi/2\leq\alpha\leq\pi/2$. In the unitary gauge, it is well known that
$G^{+}$ and $G^{0}$ are massless Goldstone bosons that become the longitudinal
polarizations of $W^{+}$ and $Z^{0}$. The remaining states $H^{\pm}$, $A$, and
$H^{0}_{1,2}$ become the charged Higgs bosons, a CP-odd Higgs boson, and CP-
even Higgs bosons, respectively. The masses of these scalar bosons are given by
$\displaystyle M^{2}_{H^{\pm}}$ $\displaystyle=$
$\displaystyle\mu^{2}-\frac{v^{2}}{2}(\lambda_{4}+\lambda_{5}),$ (80)
$\displaystyle M^{2}_{A}$ $\displaystyle=$
$\displaystyle\mu^{2}-v^{2}\lambda_{5},$ (81) $\displaystyle
M^{2}_{H^{0}_{1}}$ $\displaystyle=$ $\displaystyle
s_{\alpha}^{2}\mathcal{M}^{2}_{11}-2s_{\alpha}c_{\alpha}\mathcal{M}^{2}_{12}+c_{\alpha}^{2}\mathcal{M}^{2}_{22},$
(82) $\displaystyle M^{2}_{H^{0}_{2}}$ $\displaystyle=$ $\displaystyle
c_{\alpha}^{2}\mathcal{M}^{2}_{11}+2s_{\alpha}c_{\alpha}\mathcal{M}^{2}_{12}+s_{\alpha}^{2}\mathcal{M}^{2}_{22}.$
(83)
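As a cross-check of the conventions in Eqs. (77)-(79) and (82)-(83), the following sketch (our own; the entries of $\mathcal{M}^{2}$ below are placeholders) verifies that $R_{\alpha}$ with the standard diagonalization condition $\tan 2\alpha=2\mathcal{M}^{2}_{12}/(\mathcal{M}^{2}_{11}-\mathcal{M}^{2}_{22})$ diagonalizes a symmetric mass matrix, with Eqs. (82)-(83) reproducing the diagonal entries:

```python
# Sketch (our own illustration): the rotation R_alpha of Eq. (79) with
# tan(2*alpha) = 2*M12/(M11 - M22) diagonalizes the symmetric neutral mass
# matrix of Eq. (77), and Eqs. (82)-(83) give the diagonal entries.
import numpy as np

M2 = np.array([[4.0, 1.5],
               [1.5, 9.0]])   # hypothetical (mass)^2 matrix, arbitrary units

alpha = 0.5 * np.arctan2(2 * M2[0, 1], M2[0, 0] - M2[1, 1])
c, s = np.cos(alpha), np.sin(alpha)
R = np.array([[c, s], [-s, c]])          # R_alpha of Eq. (79)

D = R @ M2 @ R.T
print(np.round(D, 10))                   # off-diagonal entries vanish

MH1 = s**2 * M2[0, 0] - 2*s*c*M2[0, 1] + c**2 * M2[1, 1]   # Eq. (82)
MH2 = c**2 * M2[0, 0] + 2*s*c*M2[0, 1] + s**2 * M2[1, 1]   # Eq. (83)
print(MH2, D[0, 0])   # diag(M^2_{H_2^0}, M^2_{H_1^0}), cf. Eq. (77)
print(MH1, D[1, 1])
```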
From the Higgs potential in Eq. (72) with the stationary conditions in
Eqs. (74)-(75), we have $7$ free parameters. They are
$\displaystyle\Big{\\{}\lambda_{1,2,3,4,5},t_{\beta},m_{12}^{2}\Big{\\}}.$
(84)
For phenomenological analyses, the above parameters are traded for the
following set:
$\displaystyle\Big{\\{}M^{2}_{H^{+}},M^{2}_{A},M^{2}_{H^{0}_{1}},M^{2}_{H^{0}_{2}},\alpha,t_{\beta},m_{12}^{2}\Big{\\}}.$
(85)
All the couplings involving the decay processes $H\rightarrow f\bar{f}\gamma$
are derived in this appendix. In general, we take the lightest Higgs boson
$H_{1}^{0}$ to be the SM-like Higgs boson. In Table 5, all the couplings
are shown in detail.
Vertices | Couplings
---|---
$H_{1}^{0}W^{+}_{\mu}W^{-}_{\nu}$ | $i\frac{2M_{W}^{2}}{v}s_{(\beta-\alpha)}g_{\mu\nu}$
$H_{1}^{0}Z_{\mu}Z_{\nu}$ | $i\frac{2M_{Z}^{2}}{v}s_{(\beta-\alpha)}g_{\mu\nu}$
$H_{1}^{0}(p)H^{\pm}(q)W^{\mp}_{\mu}$ | $\mp i\frac{M_{W}}{v}c_{(\beta-\alpha)}\;(p-q)_{\mu}$
$H^{+}(p)H^{-}(q)A_{\mu}$ | $i\frac{2M_{W}}{v}s_{W}\;(p-q)_{\mu}$
$H^{+}(p)H^{-}(q)Z_{\mu}$ | $i\frac{M_{Z}}{v}c_{2W}\;(p-q)_{\mu}$
$H_{1}^{0}H^{+}H^{-}$ | $\frac{i}{v}\Big{[}(2\mu^{2}-2M_{H^{\pm}}^{2}-M^{2}_{H^{0}_{1}})s_{(\beta-\alpha)}+2(\mu^{2}-M^{2}_{H^{0}_{1}})\text{cot}2\beta\;c_{(\beta-\alpha)}\Big{]}$
$H^{+}H^{-}A_{\mu}A_{\nu}$ | $2ie^{2}\;g_{\mu\nu}$
$H^{+}H^{-}Z_{\mu}A_{\nu}$ | $ieg\frac{c_{2W}}{c_{W}}\;g_{\mu\nu}$
Table 5: All the couplings involving the decay processes $H\rightarrow
f\bar{f}\gamma$ in THDM.
For the Yukawa part, we refer to Ref. [27] for more detail. Depending on the
type of THDM, we then obtain the couplings of the scalar fields to fermions,
e.g.,
$\displaystyle\mathcal{L}_{H_{1}^{0}t\bar{t}}$ $\displaystyle=$
$\displaystyle-\frac{m_{t}}{v}\frac{c_{\alpha}}{s_{\beta}}\bar{t}tH_{1}^{0}.$
(86)
## Appendix D Feynman rules and couplings
In the tables below, we use $P_{L/R}=(1\mp\gamma_{5})/2$,
$\Gamma^{\mu\nu\lambda}(p_{1},p_{2},p_{3})=g^{\mu\nu}(p_{1}-p_{2})^{\lambda}+g^{\lambda\nu}(p_{2}-p_{3})^{\mu}+g^{\mu\lambda}(p_{3}-p_{1})^{\nu}$,
and
$S^{\mu\nu,\alpha\beta}=2g^{\mu\nu}g^{\alpha\beta}-g^{\mu\alpha}g^{\nu\beta}-g^{\mu\beta}g^{\nu\alpha}$.
Here $Q_{V}$ denotes the electric charge of the gauge bosons $V_{i},V_{j}$,
and $Q_{S}$ is the charge of the charged Higgs bosons $S_{i},S_{j}$. Moreover,
the factor $\xi_{V_{k}^{0}}$ is included to cover all possible neutral gauge
bosons: it is $0$ for the photon pole and $1$ for the $Z$-boson
($Z^{\prime}$-boson) pole.
Particle types | Propagators
---|---
Fermions $f$ | $i\dfrac{\not{k}+m_{f}}{k^{2}-m_{f}^{2}}$
Charged (neutral) gauge bosons $V_{i}(V^{0}_{i})$ | $\dfrac{-i}{p^{2}-M_{V_{i}}^{2}(M_{V^{0}_{i}}^{2})}\Bigg{[}g^{\mu\nu}-\dfrac{p^{\mu}p^{\nu}}{M_{V_{i}}^{2}(M_{V^{0}_{i}}^{2})}\Bigg{]}$
Gauge boson $V^{0*}_{k}$ poles | $\dfrac{-i}{p^{2}-M_{V_{k}^{0}}^{2}+i\Gamma_{V^{0}_{k}}M_{V_{k}^{0}}}\Bigg{[}g^{\mu\nu}-\xi_{V_{k}^{0}}\dfrac{p^{\mu}p^{\nu}}{M_{V^{0}_{k}}^{2}}\Bigg{]}$
Charged (neutral) scalar bosons $S_{i}(S_{i}^{0})$ | $\dfrac{i}{p^{2}-M_{S_{i}}^{2}(M_{S^{0}_{i}}^{2})}$
Table 6: Feynman rules involving the decay channels in the unitary gauge.
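To illustrate how the unitary-gauge propagators of Table 6 translate into practice, here is a small numerical sketch (our own, not part of the paper's calculation); the function names and the metric convention $g=\text{diag}(+,-,-,-)$ are our choices.

```python
# Sketch (our illustration): the unitary-gauge propagators of Table 6 as
# numerical 4x4 tensors, with metric g = diag(+1, -1, -1, -1). `xi` plays the
# role of xi_{V_k^0}: 0 at the photon pole, 1 at the Z (Z') pole.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])

def lorentz_sq(p):
    """p^2 = p_mu p^mu."""
    p = np.asarray(p, dtype=complex)
    return p @ g @ p

def gauge_propagator(p, M, Gamma=0.0, xi=1.0):
    """-i/(p^2 - M^2 + i*Gamma*M) * [g^{mu nu} - xi * p^mu p^nu / M^2]."""
    p = np.asarray(p, dtype=complex)
    denom = lorentz_sq(p) - M**2 + 1j * Gamma * M
    return (-1j / denom) * (g - xi * np.outer(p, p) / M**2)

def scalar_propagator(p, M):
    """i/(p^2 - M^2) for charged or neutral scalar bosons."""
    return 1j / (lorentz_sq(p) - M**2)

# Example: an off-shell W with hypothetical momentum p = (100, 0, 0, 60) GeV
print(gauge_propagator([100.0, 0.0, 0.0, 60.0], M=80.4)[0, 0])
print(scalar_propagator([100.0, 0.0, 0.0, 60.0], M=125.0))
```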
Vertices | Couplings
---|---
$H\cdot S_{i}\cdot S_{j}$ | $-i\,g_{HS_{i}S_{j}}$
$H\cdot S^{0}_{i}\cdot S^{0}_{j}$ | $-i\,g_{HS^{0}_{i}S^{0}_{j}}$
$H\cdot V_{i}^{\mu}\cdot V_{j}^{\nu}$ | $i\,g_{HV_{i}V_{j}}\,g^{\mu\nu}$
$H\cdot V_{i}^{0\,\mu}\cdot V_{j}^{0\,\nu}$ | $i\,g_{HV^{0}_{i}V^{0}_{j}}\,g^{\mu\nu}$
$H\cdot\bar{f_{i}}\cdot f_{j}$ | $-i\,\Big{(}g_{Hf_{i}f_{j}}^{L}P_{L}+g_{Hf_{i}f_{j}}^{R}P_{R}\Big{)}$
$S^{0}_{k}\cdot\bar{f}\cdot f$ | $-i\,\Big{(}g_{S^{0}_{k}ff}^{L}P_{L}+g_{S^{0}_{k}ff}^{R}P_{R}\Big{)}$
$A^{\mu}\cdot f_{i}\cdot\bar{f_{i}}$ | $ieQ_{f}\gamma^{\mu}$
$H(p)\cdot V_{i}^{\mu}\cdot S_{j}(q)$ | $i\,g_{HV_{i}S_{j}}\,(p-q)^{\mu}$
$H(p)\cdot V_{i}^{0\,\mu}\cdot S^{0}_{j}(q)$ | $i\,g_{HV^{0}_{i}S^{0}_{j}}\,(p-q)^{\mu}$
$V^{0\;\mu}_{k}\cdot V_{i}^{\nu}\cdot S^{\pm}_{j}$ | $\pm i\,g_{V^{0}_{k}V_{i}S_{j}}\,g^{\mu\nu}$
$V^{0\;\mu}_{k}(p_{1})\cdot V_{i}^{\nu}(p_{2})\cdot V_{j}^{\lambda}(p_{3})$ | $-i\,g_{V^{0}_{k}V_{i}V_{j}}\,\Gamma_{\mu\nu\lambda}(p_{1},p_{2},p_{3})$
$A^{\mu}(p_{1})\cdot V_{i}^{\nu}(p_{2})\cdot V_{i}^{\lambda}(p_{3})$ | $-ieQ_{V}\;\Gamma^{\mu\nu\lambda}(p_{1},p_{2},p_{3})$
$V^{0\;\mu}_{k}\cdot f_{i}\cdot\bar{f_{j}}$ | $i\gamma^{\mu}\Big{(}g_{V^{0}_{k}f_{i}f_{j}}^{L}P_{L}+g_{V^{0}_{k}f_{i}f_{j}}^{R}P_{R}\Big{)}$
$V^{0\;\mu}_{k}\cdot A^{\nu}\cdot V_{i}^{\alpha}\cdot V_{j}^{\beta}$ | $-i\,eQ_{V}\;g_{V^{0}_{k}AV_{i}V_{j}}\,S_{\mu\nu,\alpha\beta}$
$V^{0\;\mu}_{k}\cdot A^{\nu}\cdot S_{i}\cdot S_{j}$ | $-i\,eQ_{S}\;g_{V^{0}_{k}AS_{i}S_{j}}\,g_{\mu\nu}$
$V^{0\;\mu}_{k}\cdot S_{i}(p)\cdot S_{j}(q)$ | $i\,g_{V^{0}_{k}S_{i}S_{j}}\,(p-q)^{\mu}$
$A^{\mu}\cdot S_{i}(p)\cdot S_{i}(q)$ | $ieQ_{S}\,(p-q)^{\mu}$
$V^{\mu}_{i}\cdot f\cdot\nu_{f}$ | $i\gamma^{\mu}\Big{(}g_{V_{i}f\nu_{f}}^{L}P_{L}+g_{V_{i}f\nu_{f}}^{R}P_{R}\Big{)}$
$V^{0\,\mu}_{i}\cdot f\cdot f$ | $i\gamma^{\mu}\Big{(}g_{V^{0}_{i}ff}^{L}P_{L}+g_{V^{0}_{i}ff}^{R}P_{R}\Big{)}$
$S_{i}\cdot\bar{f}\cdot\nu_{f}$ | $ig_{S_{i}f\nu_{f}}^{L}P_{L}+ig_{S_{i}f\nu_{f}}^{R}P_{R}$
$S^{*}_{i}\cdot f\cdot\bar{\nu}_{f}$ | $ig_{S_{i}f\nu_{f}}^{R}P_{L}+ig_{S_{i}f\nu_{f}}^{L}P_{R}$
Table 7: All couplings involving the decay processes in unitary gauge.
## References
* [1] G. Aad et al. [ATLAS], Phys. Lett. B 716 (2012), 1-29 doi:10.1016/j.physletb.2012.08.020 [arXiv:1207.7214 [hep-ex]].
* [2] S. Chatrchyan et al. [CMS], Phys. Lett. B 716 (2012), 30-61 doi:10.1016/j.physletb.2012.08.021 [arXiv:1207.7235 [hep-ex]].
* [3] A. Liss et al. [ATLAS], [arXiv:1307.7292 [hep-ex]].
* [4] [CMS], [arXiv:1307.7135 [hep-ex]].
* [5] H. Baer, T. Barklow, K. Fujii, Y. Gao, A. Hoang, S. Kanemura, J. List, H. E. Logan, A. Nomerotski and M. Perelstein, et al. [arXiv:1306.6352 [hep-ph]].
* [6] V. Khachatryan et al. [CMS], Phys. Lett. B 753 (2016), 341-362 doi:10.1016/j.physletb.2015.12.039 [arXiv:1507.03031 [hep-ex]].
* [7] A. M. Sirunyan et al. [CMS], JHEP 09 (2018), 148 doi:10.1007/JHEP09(2018)148 [arXiv:1712.03143 [hep-ex]].
* [8] A. M. Sirunyan et al. [CMS], JHEP 11 (2018), 152 doi:10.1007/JHEP11(2018)152 [arXiv:1806.05996 [hep-ex]].
* [9] G. Aad et al. [ATLAS], Phys. Lett. B 819 (2021), 136412 doi:10.1016/j.physletb.2021.136412 [arXiv:2103.10322 [hep-ex]].
* [10] L. B. Chen, C. F. Qiao and R. L. Zhu, Phys. Lett. B 726 (2013), 306-311 [erratum: Phys. Lett. B 808 (2020), 135629] doi:10.1016/j.physletb.2013.08.050 [arXiv:1211.6058 [hep-ph]].
* [11] J. S. Gainer, W. Y. Keung, I. Low and P. Schwaller, Phys. Rev. D 86 (2012), 033010 doi:10.1103/PhysRevD.86.033010 [arXiv:1112.1405 [hep-ph]].
* [12] A. Y. Korchin and V. A. Kovalchuk, Eur. Phys. J. C 74 (2014) no.11, 3141 doi:10.1140/epjc/s10052-014-3141-7 [arXiv:1408.0342 [hep-ph]].
* [13] A. Abbasabadi, D. Bowser-Chao, D. A. Dicus and W. W. Repko, Phys. Rev. D 52 (1995), 3919-3928.
* [14] A. Djouadi, V. Driesen, W. Hollik and J. Rosiek, Nucl. Phys. B 491 (1997), 68-102 doi:10.1016/S0550-3213(96)00711-0 [arXiv:hep-ph/9609420 [hep-ph]].
* [15] A. Abbasabadi and W. W. Repko, Phys. Rev. D 62 (2000), 054025 doi:10.1103/PhysRevD.62.054025 [arXiv:hep-ph/0004167 [hep-ph]].
* [16] D. A. Dicus and W. W. Repko, Phys. Rev. D 87 (2013) no.7, 077301 doi:10.1103/PhysRevD.87.077301 [arXiv:1302.2159 [hep-ph]].
* [17] Y. Sun, H. R. Chang and D. N. Gao, JHEP 05 (2013), 061 doi:10.1007/JHEP05(2013)061 [arXiv:1303.2230 [hep-ph]].
* [18] G. Passarino, Phys. Lett. B 727 (2013), 424-431 doi:10.1016/j.physletb.2013.10.052 [arXiv:1308.0422 [hep-ph]].
* [19] D. A. Dicus, C. Kao and W. W. Repko, Phys. Rev. D 89 (2014) no.3, 033013 doi:10.1103/PhysRevD.89.033013 [arXiv:1310.4380 [hep-ph]].
* [20] A. Kachanovich, U. Nierste and I. Nišandžić, Phys. Rev. D 101 (2020) no.7, 073003 doi:10.1103/PhysRevD.101.073003 [arXiv:2001.06516 [hep-ph]].
* [21] N. Watanabe, Y. Kurihara, K. Sasaki and T. Uematsu, Phys. Lett. B 728 (2014), 202-205 doi:10.1016/j.physletb.2013.11.051 [arXiv:1311.1601 [hep-ph]].
* [22] N. Watanabe, Y. Kurihara, T. Uematsu and K. Sasaki, Phys. Rev. D 90 (2014) no.3, 033015 doi:10.1103/PhysRevD.90.033015 [arXiv:1403.4703 [hep-ph]].
* [23] C. S. Li, C. F. Qiao and S. H. Zhu, Phys. Rev. D 57 (1998), 6928-6933 doi:10.1103/PhysRevD.57.6928 [arXiv:hep-ph/9801334 [hep-ph]].
* [24] K. Sasaki and T. Uematsu, Phys. Lett. B 781 (2018), 290-294 doi:10.1016/j.physletb.2018.04.005 [arXiv:1712.00197 [hep-ph]].
* [25] L. Basso, A. Belyaev, S. Moretti and C. H. Shepherd-Themistocleous, Phys. Rev. D 80 (2009), 055030 doi:10.1103/PhysRevD.80.055030 [arXiv:0812.4313 [hep-ph]].
* [26] L. Michaels and F. Yu, JHEP 03 (2021), 120 doi:10.1007/JHEP03(2021)120 [arXiv:2010.00021 [hep-ph]].
* [27] G. C. Branco, P. M. Ferreira, L. Lavoura, M. N. Rebelo, M. Sher and J. P. Silva, Phys. Rept. 516 (2012), 1-102 doi:10.1016/j.physrep.2012.02.002 [arXiv:1106.0034 [hep-ph]].
* [28] J. C. Pati and A. Salam, Phys. Rev. D 10, 275-289 (1974) [erratum: Phys. Rev. D 11, 703-703 (1975)].
* [29] R. N. Mohapatra and J. C. Pati, Phys. Rev. D 11, 2558 (1975).
* [30] G. Senjanovic and R. N. Mohapatra, Phys. Rev. D 12, 1502 (1975).
* [31] M. Singer, J. W. F. Valle and J. Schechter, Phys. Rev. D 22, 738 (1980).
* [32] J. W. F. Valle and M. Singer, Phys. Rev. D 28, 540 (1983).
* [33] F. Pisano and V. Pleitez, Phys. Rev. D 46, 410-417 (1992).
* [34] P. H. Frampton, Phys. Rev. Lett. 69, 2889-2891 (1992).
* [35] R. A. Diaz, R. Martinez and F. Ochoa, Phys. Rev. D 72, 035018 (2005).
* [36] R. M. Fonseca and M. Hirsch, JHEP 08 (2016), 003
* [37] R. Foot, H. N. Long and T. A. Tran, Phys. Rev. D 50, no.1, R34-R38 (1994).
* [38] L. A. Sanchez, F. A. Perez and W. A. Ponce, Eur. Phys. J. C 35 (2004), 259-265 [arXiv:hep-ph/0404005 [hep-ph]].
* [39] W. A. Ponce and L. A. Sanchez, Mod. Phys. Lett. A 22 (2007), 435-448 [arXiv:hep-ph/0607175 [hep-ph]].
* [40] Riazuddin and Fayyazuddin, Eur. Phys. J. C 56 (2008), 389-394 [arXiv:0803.4267 [hep-ph]].
* [41] A. Jaramillo and L. A. Sanchez, Phys. Rev. D 84 (2011), 115001 [arXiv:1110.3363 [hep-ph]].
* [42] H. N. Long, L. T. Hue and D. V. Loi, Phys. Rev. D 94 (2016) no.1, 015007 [arXiv:1605.07835 [hep-ph]].
* [43] H. H. Patel, Comput. Phys. Commun. 197 (2015), 276-290
* [44] A. Denner and S. Dittmaier, Nucl. Phys. B 734 (2006), 62-115
* [45] K. H. Phan, D. T. Tran and L. Hue, [arXiv:2106.14466 [hep-ph]].
* [46] T. Hahn and M. Perez-Victoria, Comput. Phys. Commun. 118 (1999), 153-165.
# LNMap: Departures from Isomorphic Assumption in Bilingual Lexicon Induction
Through Non-Linear Mapping in Latent Space
Tasnim Mohiuddin¶, M Saiful Bari¶, Shafiq Joty¶†
¶Nanyang Technological University, Singapore
†Salesforce Research
{mohi0004, bari0001<EMAIL_ADDRESS>
###### Abstract
Most of the successful and predominant methods for Bilingual Lexicon Induction
(BLI) are mapping-based, where a linear mapping function is learned with the
assumption that the word embedding spaces of different languages exhibit
similar geometric structures (i.e., approximately _isomorphic_). However,
several recent studies have criticized this simplified assumption showing that
it does not hold in general even for closely related languages. In this work,
we propose a novel semi-supervised method to learn cross-lingual word
embeddings for BLI. Our model is independent of the isomorphic assumption and
uses non-linear mapping in the latent space of two independently pre-trained
autoencoders. Through extensive experiments on fifteen (15) different language
pairs (in both directions) comprising resource-rich and low-resource languages
from two different datasets, we demonstrate that our method outperforms
existing models by a good margin. Ablation studies show the importance of
different model components and the necessity of non-linear mapping.
## 1 Introduction
In recent years, a plethora of methods have been proposed to learn cross-
lingual word embeddings (or CLWE for short) from monolingual word embeddings.
Here words with similar meanings in different languages are represented by
similar vectors, regardless of their actual language. CLWE enable us to
compare the meaning of words across languages, which is key to most multi-
lingual applications such as bilingual lexicon induction Heyman et al. (2017),
machine translation Lample et al. (2018); Artetxe et al. (2018c), or multi-
lingual information retrieval Vulić and Moens (2015). They also play a crucial
role in cross-lingual knowledge transfer between languages (e.g., from
resource-rich to low-resource languages) by providing a common representation
space Ruder et al. (2019).
Mikolov et al. (2013a), in their pioneering work, learn a linear mapping
function to transform the source embedding space to the target language by
minimizing the squared Euclidean distance between the translation pairs of a
seed dictionary. They assume that the similarity of geometric arrangements in
the embedding spaces is the key reason for their method to succeed as they
found linear mapping superior to non-linear mappings with multi-layer neural
networks. Subsequent studies propose to improve the model by normalizing the
embeddings, imposing an orthogonality constraint on the linear mapper,
modifying the objective function, and reducing the seed dictionary size
Artetxe et al. (2016, 2017, 2018a); Smith et al. (2017).
A more recent line of research attempts to eliminate the seed dictionary
totally and learn the mapping in a purely unsupervised way Barone (2016);
Zhang et al. (2017); Conneau et al. (2018); Artetxe et al. (2018b); Xu et al.
(2018); Hoshen and Wolf (2018); Alvarez-Melis and Jaakkola (2018); Mohiuddin
and Joty (2019, 2020). While not requiring any cross-lingual supervision makes
these methods attractive, Vulić et al. (2019) recently show that even the most
robust unsupervised method Artetxe et al. (2018b) fails for a large number of
language pairs. They suggest to rethink the main motivations behind fully
unsupervised methods showing that with a small seed dictionary (500-1K pairs)
their semi-supervised method always outperforms the unsupervised method and
does not fail for any language pair. Other concurrent work Ormazabal et al.
(2019); Doval et al. (2019) also advocates for weak supervision in CLWE
methods.
Almost all mapping-based CLWE methods, supervised and unsupervised alike,
solve the _Procrustes_ problem in the final step or during self-learning Ruder
et al. (2019). This restricts the transformation to be orthogonal linear
mappings. However, learning an orthogonal linear mapping inherently assumes
that the embedding spaces of different languages exhibit similar geometric
structures (i.e., approximately _isomorphic_). Several recent studies have
questioned this strong assumption and empirically showed that the isomorphic
assumption does not hold in general even for two closely related languages
like English and German Søgaard et al. (2018); Patra et al. (2019).
In this work, we propose LNMap (Latent space Non-linear Mapping), a novel
semi-supervised approach that uses _non-linear_ mapping in the latent space to
learn CLWE. It uses minimal supervision from a seed dictionary, while
leveraging semantic information from the monolingual word embeddings. As shown
in Figure 1, LNMap comprises two _autoencoders_ , one for each language. The
autoencoders are first trained independently in a self-supervised way to
induce the latent code space of the respective languages. Then, we use a small
seed dictionary to learn the non-linear mappings between the two code spaces.
To guide our mapping in the latent space, we include two additional
constraints: back-translation and original embedding reconstruction.
Crucially, our method does not enforce any strong prior constraints like the
orthogonality (or isomorphic), rather it gives the model the flexibility to
induce the required latent structures such that it is easier for the non-
linear mappers to align them in the code space.
In order to demonstrate the effectiveness and robustness of LNMap, we conduct
extensive experiments on bilingual lexicon induction (BLI) with fifteen (15)
different language pairs (in both directions) comprising high- and low-
resource languages from two different datasets for different sizes of the seed
dictionary. Our results show significant improvements for LNMap over the
state-of-the-art in most of the tested scenarios. It is particularly very
effective for low-resource languages; for example, using a 1K seed dictionary,
LNMap yields about 18% absolute improvement on average over a state-of-the-
art supervised method Joulin et al. (2018). It also outperforms the most
robust unsupervised system of Artetxe et al. (2018b) in most of the
translation tasks. Interestingly, for resource-rich language pairs, linear
autoencoder performs better than non-linear ones. Our ablation study reveals
the collaborative nature of LNMap’s different components and efficacy of its
non-linear mappings in the code space. We open-source our framework at
https://ntunlpsg.github.io/project/lnmap/.
## 2 Background
#### Limitations of Isomorphic Assumption.
Almost all CLWE methods inherently assume that embedding spaces of different
languages are approximately isomorphic (i.e., similar in geometric structure).
However, recently researchers have questioned this simplified assumption and
attributed the performance degradation of existing CLWE methods to the strong
mismatches in embedding spaces caused by the linguistic and domain divergences
Søgaard et al. (2019); Ormazabal et al. (2019). Søgaard et al. (2018)
empirically show that even closely related languages are far from being
isomorphic. Nakashole and Flauger (2018) argue that mapping between embedding
spaces of different languages can be approximately linear only at small local
regions, but must be non-linear globally. Patra et al. (2019) also recently
show that etymologically distant language pairs cannot be aligned properly
using orthogonal transformations.
#### Towards Semi-supervised Methods.
A number of recent studies have questioned the robustness of existing
unsupervised CLWE methods Ruder et al. (2019). Vulić et al. (2019) show that
even the most robust unsupervised method Artetxe et al. (2018b) fails for a
large number of language pairs; it gives zero (or near zero) BLI performance
for 87 out of 210 language pairs. With a seed dictionary of only 500 - 1000
word pairs, their supervised method outperforms unsupervised methods by a wide
margin in most language pairs. Other recent work has also suggested using
semi-supervised methods Patra et al. (2019); Ormazabal et al. (2019).
#### Mapping in Latent Space.
Mohiuddin and Joty (2019) propose an adversarial autoencoder for _unsupervised_
word translation. They use _linear_ autoencoders in their model, and the
mappers are also linear. They emphasize the benefit of using latent space over
the original embedding space. Although their method is more robust than other
existing adversarial models, it still suffers from training instability for
distant language pairs.
#### Our Contributions.
Our proposed LNMap is independent of the isomorphic assumption. It uses weak
supervision from a small seed dictionary, while leveraging rich structural
information from monolingual embeddings. Unlike Mohiuddin and Joty (2019), the
autoencoders in LNMap are not limited to only linearity. More importantly, it
uses _non-linear_ mappers. These two factors contribute to its robust
performance even for very low-resource languages (section 5). To the best of
our knowledge, we are the first to showcase such robust and improved
performance with non-linear methods. (Our experiments with unsupervised
adversarial training showed very unstable results with the non-linear
mappers.)
## 3 LNMap Semi-supervised Framework
Let $\pazocal{V}_{\ell_{x}}$$=$$\\{v_{x_{1}},...,v_{x_{n_{x}}}\\}$ and
$\pazocal{V}_{\ell_{y}}$$=$$\\{v_{y_{1}},...,v_{y_{n_{y}}}\\}$ be two sets of
vocabulary consisting of $n_{x}$ and $n_{y}$ words for a source ($\ell_{x}$)
and a target ($\ell_{y}$) language, respectively. Each word $v_{x_{i}}$ (resp.
$v_{y_{j}}$) has an embedding $x_{i}\in\mathbb{R}^{d}$ (resp.
$y_{j}\in\mathbb{R}^{d}$), trained with any word embedding models, e.g.,
FastText Bojanowski et al. (2017). Let
$\pazocal{E}_{\ell_{x}}\in\mathbb{R}^{n_{x}\times d}$ and
$\pazocal{E}_{\ell_{y}}\in\mathbb{R}^{n_{y}\times d}$ be the word embedding
matrices for the source and target languages, respectively. We are also given
a seed dictionary $\pazocal{D}$
$=$$\\{(x_{1},y_{1}),...,(x_{k},y_{k})\\}$ with $k$ word pairs. Our objective
is to learn a transformation function $\pazocal{M}$ such that for any
$v_{x_{i}}\in\pazocal{V}_{\ell_{x}}$, $\pazocal{M}(x_{i})$ corresponds to its
translation $y_{j}$, where $v_{y_{j}}\in\pazocal{V}_{\ell_{y}}$. Our approach
LNMap (Figure 1) follows two sequential steps:
1. (i)
Unsupervised latent space induction using monolingual autoencoders (section
3.1), and
2. (ii)
Supervised non-linear transformation learning with back-translation and source
embedding reconstruction constraints (section 3.2).
Figure 1: LNMap: Our proposed semi-supervised framework. Identical shapes with
different colors denote similar-meaning words in different spaces (e.g.,
source/target embedding space or latent space).
### 3.1 Unsupervised Latent Space Induction
We use two autoencoders, one for each language. Each autoencoder comprises an
encoder $E_{\ell_{x}}$ (resp. $E_{\ell_{y}}$) and a decoder $D_{\ell_{x}}$
(resp. $D_{\ell_{y}}$). Unless otherwise stated, the autoencoders are _non-
linear_ , where each of the encoder and decoder is a three-layer feed-forward
neural network with two non-linear hidden layers. More formally, the encoding-
decoding operations of the source autoencoder ($\texttt{autoenc}_{\ell_{x}}$)
are defined as:
$\displaystyle h_{1}^{E_{\ell_{x}}}=$
$\displaystyle\phi(\theta_{1}^{E_{\ell_{x}}}x_{i})$ (1) $\displaystyle
h_{2}^{E_{\ell_{x}}}=$
$\displaystyle\phi(\theta_{2}^{E_{\ell_{x}}}h_{1}^{E_{\ell_{x}}})$ (2)
$\displaystyle z_{x_{i}}=$
$\displaystyle\theta_{3}^{E_{\ell_{x}}}h_{2}^{E_{\ell_{x}}}$ (3) $\displaystyle
h_{1}^{D_{\ell_{x}}}=$ $\displaystyle\phi(\theta_{3}^{D_{\ell_{x}}}z_{x_{i}})$
(4) $\displaystyle h_{2}^{D_{\ell_{x}}}=$
$\displaystyle\phi(\theta_{2}^{D_{\ell_{x}}}h_{1}^{D_{\ell_{x}}})$ (5)
$\displaystyle\hat{x}_{i}=$
$\displaystyle\phi(\theta_{1}^{D_{\ell_{x}}}h_{2}^{D_{\ell_{x}}})$ (6)
where $\theta_{i}^{E_{\ell_{x}}}$$\in$ $\mathbb{R}^{c_{i}\times d_{i}}$ and
$\theta_{i}^{D_{\ell_{x}}}$$\in$ $\mathbb{R}^{d_{i}\times c_{i}}$ are the
parameters of the layers in the encoder and decoder respectively, and $\phi$
is a non-linear activation function; we use Parametric Rectified Linear Unit
(PReLU) in all the hidden layers and tanh in the final layer of the decoder
(Eq. 6). We use linear activations in the output layer of the encoder (Eq. 3).
We train $\texttt{autoenc}_{\ell_{x}}$ with $l_{2}$ reconstruction loss as:
$\displaystyle\mathcal{L}_{\text{autoenc}_{\ell_{x}}}(\Theta_{E_{\ell_{x}}},\Theta_{D_{\ell_{x}}})=\frac{1}{n_{x}}\sum_{i=1}^{n_{x}}\|{x_{i}}-\hat{x}_{i}\|^{2}$
(7)
where
$\Theta_{E_{\ell_{x}}}=\\{\theta_{1}^{E_{\ell_{x}}},\theta_{2}^{E_{\ell_{x}}},\theta_{3}^{E_{\ell_{x}}}\\}$
and
$\Theta_{D_{\ell_{x}}}=\\{\theta_{1}^{D_{\ell_{x}}},\theta_{2}^{D_{\ell_{x}}},\theta_{3}^{D_{\ell_{x}}}\\}$
are the parameters of the encoder and the decoder of
$\texttt{autoenc}_{\ell_{x}}$.
The encoder, decoder and the reconstruction loss for the target autoencoder
($\texttt{autoenc}_{\ell_{y}}$) are similarly defined.
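For concreteness, here is a minimal sketch of $\texttt{autoenc}_{\ell_{x}}$ as described by Eqs. (1)-(7), assuming PyTorch; the layer widths, the code dimension, and all variable names are our own placeholders, since the paper does not specify them in this section.

```python
# Minimal sketch of the non-linear autoencoder of Eqs. (1)-(7), assuming
# PyTorch. Hidden layers use PReLU, the code layer (Eq. 3) is linear, and
# the decoder output (Eq. 6) uses tanh. Widths are hypothetical.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, d=300, h=400, code=350):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d, h), nn.PReLU(),     # Eq. (1)
            nn.Linear(h, h), nn.PReLU(),     # Eq. (2)
            nn.Linear(h, code),              # Eq. (3): linear code layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(code, h), nn.PReLU(),  # Eq. (4)
            nn.Linear(h, h), nn.PReLU(),     # Eq. (5)
            nn.Linear(h, d), nn.Tanh(),      # Eq. (6)
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# l2 reconstruction loss of Eq. (7):
autoenc = Autoencoder()
x = torch.randn(128, 300)                    # a batch of word embeddings
x_hat, z = autoenc(x)
loss = ((x - x_hat) ** 2).sum(dim=1).mean()  # mean of ||x - x_hat||^2 over batch
```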
### 3.2 Supervised Non-linear Transformation
Let $q(z_{x}|x)$ and $q(z_{y}|y)$ be the distributions of latent codes in
$\texttt{autoenc}_{\ell_{x}}$ and $\texttt{autoenc}_{\ell_{y}}$, respectively.
We have two non-linear mappers: $\pazocal{M}$ that translates a source code
into a target code, and $\pazocal{N}$ that translates a target code into a
source code (Figure 1). Both mappers are implemented as a feed-forward neural
network with a single hidden layer and tanh activations, and they are trained
using the provided seed dictionary $\pazocal{D}$.
#### Non-linear Mapping Loss.
Let $\Theta_{\pazocal{M}}$ and $\Theta_{\pazocal{N}}$ denote the parameters of
the two mappers $\pazocal{M}$ and $\pazocal{N}$, respectively. While mapping
from $q(z_{x}|x)$ to $q(z_{y}|y)$, we jointly train the mapper $\pazocal{M}$
and the source encoder $E_{\ell_{x}}$ with the following $l_{2}$ loss.
$\displaystyle\mathcal{L}_{\text{MAP}}(\Theta_{\pazocal{M}},\Theta_{E_{\ell_{x}}})=\frac{1}{k}\sum_{i=1}^{k}\|{z_{y_{i}}}-\pazocal{M}(z_{x_{i}})\|^{2}$
(8)
The mapping loss for $\pazocal{N}$ and $E_{\ell_{y}}$ is similarly defined.
To learn a better transformation function, we impose two additional
constraints on our objective: back-translation and reconstruction.
#### Back-Translation Loss.
To ensure that a source code $z_{x_{i}}\in q(z_{x}|x)$, translated to the
target language latent space $q(z_{y}|y)$ and then translated back to the
original latent space, remains unchanged, we enforce the back-translation
constraint, that is,
$z_{x_{i}}\rightarrow\pazocal{M}(z_{x_{i}})\rightarrow\pazocal{N}(\pazocal{M}(z_{x_{i}}))\approx
z_{x_{i}}$. The back-translation (BT) loss from $q(z_{y}|y)$ to $q(z_{x}|x)$
is
$\displaystyle\mathcal{L}_{\text{BT}}(\Theta_{\pazocal{M}},\Theta_{\pazocal{N}})=$
(9)
$\displaystyle\frac{1}{k}\sum_{i=1}^{k}\|{z_{x_{i}}}-\pazocal{N}({\pazocal{M}({z_{x_{i}}}}))\|^{2}$
The BT loss in the other direction
$(z_{y_{j}}$$\rightarrow$$\pazocal{N}(z_{y_{j}})$$\rightarrow$
$\pazocal{M}(\pazocal{N}(z_{y_{j}}))\approx z_{y_{j}})$ is similarly defined.
#### Reconstruction Loss.
In addition to back-translation, we include another constraint to guide the
mapping further. In particular, we ask the decoder $D_{\ell_{x}}$ of
$\texttt{autoenc}_{\ell_{x}}$ to reconstruct the original embedding $x_{i}$
from the back-translated code $\pazocal{N}(\pazocal{M}({z_{x_{i}}}))$. We
compute this original embedding reconstruction loss for
$\texttt{autoenc}_{\ell_{x}}$ as:
$\displaystyle\mathcal{L}_{\text{REC}}(\theta_{E_{\ell_{x}}},\theta_{D_{\ell_{x}}},\Theta_{\pazocal{M}},\Theta_{\pazocal{N}})=$
(10)
$\displaystyle\frac{1}{k}\sum_{i=1}^{k}\|{x_{i}}-D_{\ell_{x}}(\pazocal{N}(\pazocal{M}(z_{x_{i}})))\|^{2}$
The reconstruction loss for $\texttt{autoenc}_{\ell_{y}}$ is defined
similarly. Both back-translation and reconstruction lead to more stable
training in our experiments. In our ablation study (section 5.4), we
empirically show the efficacy of the addition of these two constraints.
#### Total Loss.
The total loss for mapping a batch of word embeddings from source to target
is:
$\mathcal{L}_{{\ell_{x}}\rightarrow{\ell_{y}}}=\mathcal{L}_{\text{MAP}}+\lambda_{1}\mathcal{L}_{\text{BT}}+\lambda_{2}\mathcal{L}_{\text{REC}}$
(11)
where $\lambda_{1}$ and $\lambda_{2}$ control the relative importance of the
loss components. Similarly we define the total loss for mapping in the
opposite direction $\mathcal{L}_{{\ell_{y}}\rightarrow{\ell_{x}}}$.
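Putting Eqs. (8)-(11) together, a minimal sketch of the supervised losses for the $\ell_{x}\rightarrow\ell_{y}$ direction might look as follows (assuming PyTorch and continuing the `Autoencoder` sketch above; the mappers follow the single-hidden-layer tanh description, and the $\lambda$ values and names are placeholders). Note that the training procedure in Section 3.3 applies the three terms as separate updates rather than as one summed objective.

```python
# Sketch of the supervised losses of Eqs. (8)-(11), source-to-target side.
import torch.nn as nn

code = 350   # must match the Autoencoder sketch above

# single-hidden-layer mappers with tanh activations (Section 3.2)
mapper_M = nn.Sequential(nn.Linear(code, code), nn.Tanh(), nn.Linear(code, code))
mapper_N = nn.Sequential(nn.Linear(code, code), nn.Tanh(), nn.Linear(code, code))

def l2(a, b):
    # mean over the batch of ||a - b||^2
    return ((a - b) ** 2).sum(dim=1).mean()

def src_to_tgt_loss(x, y, ae_x, ae_y, lam1=1.0, lam2=1.0):
    """Combined loss of Eq. (11) for a batch (x, y) of seed-dictionary pairs.

    ae_x / ae_y are the source / target autoencoders (see the sketch above).
    The paper applies the three terms as separate updates (Section 3.3);
    here they are summed for brevity. lam1/lam2 are hypothetical weights.
    """
    z_x = ae_x.encoder(x)
    z_y = ae_y.encoder(y)
    map_loss = l2(z_y, mapper_M(z_x))                         # Eq. (8)
    bt_loss  = l2(z_x, mapper_N(mapper_M(z_x)))               # Eq. (9)
    rec_loss = l2(x, ae_x.decoder(mapper_N(mapper_M(z_x))))   # Eq. (10)
    return map_loss + lam1 * bt_loss + lam2 * rec_loss        # Eq. (11)

# usage with the Autoencoder sketch above:
#   ae_x, ae_y = Autoencoder(), Autoencoder()
#   loss = src_to_tgt_loss(x, y, ae_x, ae_y); loss.backward()
```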
#### Remark.
Note that our approach is fundamentally different from existing methods in two
ways. First, most of the existing methods directly map the distribution of the
source embeddings $p(x)$ to the distribution of the target embeddings $p(y)$.
Second, they learn a linear mapping function assuming that the two languages’
embedding spaces are nearly isomorphic, which does not hold in general Søgaard
et al. (2018); Patra et al. (2019).
Mapping the representations in the code space using non-linear transformations
gives our model the flexibility to induce the required semantic structures in
its latent space that could potentially yield more accurate cross-lingual
mappings (section 5).
### 3.3 Training Procedure
We present the training method of LNMap in Algorithm 3.3. In the first step,
we pre-train $\texttt{autoenc}_{\ell_{x}}$ and $\texttt{autoenc}_{\ell_{y}}$
separately on the respective monolingual word embeddings. In this unsupervised
step, we use the first 200K embeddings. This pre-training induces word
semantics (and relations) in the code space Mohiuddin and Joty (2019).
Input: Word embedding matrices $\pazocal{E}_{\ell_{x}}$, $\pazocal{E}_{\ell_{y}}$; seed dictionary $\pazocal{D}$; increment count $C$
// Unsupervised latent space induction
1\. Train $\texttt{autoenc}_{\ell_{x}}$ and $\texttt{autoenc}_{\ell_{y}}$ separately for some epochs on monolingual word embeddings
// Supervised non-linear transformation
2\. $iter=0$; $\pazocal{D}_{\text{orig}}=\pazocal{D}$
3\. do
  $iter=iter+1$
  i. for _n_epochs_ do
    (a) Sample a mini-batch from $\pazocal{D}$
    (b) Update mapper $\pazocal{M}$ and $E_{\ell_{x}}$ on the non-linear mapping loss
    (c) Update mappers $\pazocal{M}$ and $\pazocal{N}$ on the back-translation loss
    (d) Update mappers ($\pazocal{M}$, $\pazocal{N}$) and $\texttt{autoenc}_{\ell_{x}}$ on the reconstruction loss
  end for
  ii. Induce a new dictionary $\pazocal{D}_{\text{new}}$ of size $iter\times C$
  iii. Create a new dictionary $\pazocal{D}=\pazocal{D}_{\text{orig}}\bigcup\pazocal{D}_{\text{new}}$
while _not converged_
Algorithm: Training LNMap
The next step is the self-training process, where we train the mappers along
with the autoencoders using the seed dictionary in an iterative manner. We
keep a copy of the original dictionary $\pazocal{D}$; let us call it
$\pazocal{D}_{\text{orig}}$. We first update the mapper $\pazocal{M}$ and the
source encoder $E_{\ell_{x}}$ on the mapping loss (Eq. 8). The mappers (both
$\pazocal{M}$ and $\pazocal{N}$) then go through two more updates, one for
back-translation (Eq. 9) and the other for reconstruction of the source
embedding (Eq. 10). The entire source autoencoder
$\texttt{autoenc}_{\ell_{x}}$ (both $E_{\ell_{x}}$ and $D_{\ell_{x}}$) in this
stage gets updated only on the reconstruction loss.
After each iteration of training (step _i._ in Alg. 3.3), we induce a new
dictionary $\pazocal{D}_{\text{new}}$ using the learned encoders and mappers.
To find the nearest target word ($y_{j}$) of a source word ($x_{i}$) in the
target latent space, we use the Cross-domain Similarity Local Scaling (CSLS)
measure which works better than simple cosine similarity in mitigating the
hubness problem Conneau et al. (2018). It penalizes the words that are close
to many other words in the target latent space. To induce the dictionary, we
compute CSLS for $K$ most frequent source and target words and select the
translation pairs that are nearest neighbors of each other according to CSLS.
For the next iteration of training, we construct the dictionary $\pazocal{D}$
by merging $\pazocal{D}_{\text{orig}}$ with the $l$ most similar (based on
CSLS) word pairs from $\pazocal{D}_{\text{new}}$. We set $l$ as
$l={iter}\times C$, where $iter$ is the current iteration number and $C$ is a
hyperparameter. This means we incrementally update the dictionary size. This
is because the induced dictionary at the initial iterations is likely to be
noisy. As the training progresses, the model becomes more mature, and the
induced dictionary pairs become better. For convergence, we use the criterion:
if the difference between the average similarity scores of two successive
iteration steps is less than a threshold (we use $1e^{-6}$), then stop the
training process.
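The dictionary-induction step can be sketched as follows (our own NumPy illustration; the neighborhood size $k=10$ follows Conneau et al. (2018) and is an assumption here, as are all function names):

```python
# Sketch of CSLS-based mutual-nearest-neighbor dictionary induction.
# k = 10 follows Conneau et al. (2018); the paper does not state k here.
import numpy as np

def csls_matrix(zx, zy, k=10):
    """zx: (n, d) mapped source codes; zy: (m, d) target codes."""
    zx = zx / np.linalg.norm(zx, axis=1, keepdims=True)
    zy = zy / np.linalg.norm(zy, axis=1, keepdims=True)
    cos = zx @ zy.T                                   # (n, m) cosine sims
    # average similarity to the k nearest neighbors in the other space;
    # CSLS penalizes hubs that are close to many words
    r_x = np.sort(cos, axis=1)[:, -k:].mean(axis=1)   # (n,)
    r_y = np.sort(cos, axis=0)[-k:, :].mean(axis=0)   # (m,)
    return 2 * cos - r_x[:, None] - r_y[None, :]

def induce_dictionary(zx, zy, k=10):
    """Keep pairs that are nearest neighbors of each other under CSLS,
    ranked by CSLS score (most similar first, for the top-l merge)."""
    csls = csls_matrix(zx, zy, k)
    fwd = csls.argmax(axis=1)                 # best target for each source
    bwd = csls.argmax(axis=0)                 # best source for each target
    pairs = [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
    scores = [csls[i, j] for i, j in pairs]
    order = np.argsort(scores)[::-1]
    return [pairs[t] for t in order]
```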
## 4 Experimental Settings
We evaluate our approach on bilingual lexicon induction, also known as _word
translation_.
### 4.1 Datasets
To demonstrate the effectiveness of our method, we evaluate our models against
baselines on two widely used datasets: MUSE Conneau et al. (2018) and
VecMap Dinu et al. (2015).
The MUSE dataset consists of FastText monolingual embeddings of 300 dimensions
Bojanowski et al. (2017) trained on Wikipedia monolingual corpus and gold
dictionaries for 110 language pairs
(https://github.com/facebookresearch/MUSE). To show the generality of
different methods, we consider $15$ different language pairs with $15\times
2=30$ different translation tasks encompassing resource-rich and low-resource
languages from different language families. In particular, we evaluate on
English (En) from/to Spanish (Es), German (De), Italian (It), Russian (Ru),
Arabic (Ar), Malay (Ms), Finnish (Fi), Estonian (Et), Turkish (Tr), Greek
(El), Persian (Fa), Hebrew (He), Tamil (Ta), Bengali (Bn), and Hindi (Hi). We
differentiate between high- and low-resource languages by the availability of
NLP-resources in general.
The VecMap dataset Dinu et al. (2015); Artetxe et al. (2018a) is a more
challenging dataset and contains monolingual embeddings for English, Spanish,
German, Italian, and Finnish (https://github.com/artetxem/vecmap/). According
to Artetxe et al. (2018b), existing unsupervised methods often fail to produce
meaningful results on this dataset. English, Italian, and German embeddings
were trained on WacKy crawling corpora using CBOW Mikolov et al. (2013b),
while Spanish and Finnish embeddings were trained on WMT News Crawl and Common
Crawl, respectively.
### 4.2 Baseline Methods
We compare our proposed LNMap with several existing methods comprising
supervised, semi-supervised, and unsupervised models. For each baseline model,
we conduct experiments with the publicly available code. In the following, we
give a brief description of the baseline models.
#### Supervised & Semi-supervised Methods.
(a) Artetxe et al. (2017)
propose a self-learning framework that performs two steps iteratively until
convergence. In the first step, they use the dictionary (starting with the
seed dictionary) to learn a linear mapping, which is then used in the second
step to induce a new dictionary.
(b) Artetxe et al. (2018a)
propose a multi-step framework that generalizes previous studies. Their
framework consists of several steps: whitening, orthogonal mapping, re-
weighting, de-whitening, and dimensionality reduction.
(c) Conneau et al. (2018)
compare their unsupervised model with a supervised baseline that learns an
orthogonal mapping between the embedding spaces by iterative Procrustes
refinement. They also propose CSLS for nearest neighbour search.
(d) Joulin et al. (2018)
show that minimizing a convex relaxation of the CSLS loss significantly
improves the quality of bilingual word vector alignment. Their method achieves
state-of-the-art results for many languages Patra et al. (2019).
(e) Jawanpuria et al. (2019)
propose a geometric approach where they decouple CLWE learning into two steps:
(i) learning rotations for language-specific embeddings to align them to a
common space, and (ii) learning a similarity metric in the common space to
model similarities between the embeddings of the two languages.
(f) Patra et al. (2019)
propose a semi-supervised technique that relaxes the isomorphic assumption
while leveraging both seed dictionary pairs and a larger set of unaligned word
embeddings.
#### Unsupervised Methods.
(a) Conneau et al. (2018)
are the first to show impressive results for unsupervised word translation by
pairing adversarial training with effective refinement methods. Given two
monolingual word embeddings, their adversarial training plays a two-player
game, where a linear mapper (generator) plays against a discriminator. They
also impose the orthogonality constraint on the mapper. After adversarial
training, they use the iterative Procrustes solution similar to their
supervised approach.
(b) Artetxe et al. (2018b)
learn an initial dictionary by exploiting the structural similarity of the
embeddings in an unsupervised way. They propose a robust self-learning to
improve it iteratively. This model is by far the most robust and best
performing unsupervised model Vulić et al. (2019).
(c) Mohiuddin and Joty (2019)
use adversarial autoencoder for unsupervised word translation. They use linear
autoencoders in their model, and the mappers are also linear.
### 4.3 Model Variants and Settings
We experiment with two variants of our model: the default LNMap that uses non-
linear autoencoders and LNMap (Lin. AE) that uses linear autoencoders. In both
the variants, the mappers are non-linear. We train our models using stochastic
gradient descent (SGD) with a batch size of 128, a learning rate of $1e^{-4}$,
and a step learning rate decay schedule. During the dictionary induction
process in each iteration, we consider $K=15000$ most frequent words from the
source and target languages. For dictionary update, we set $C=2000$.
## 5 Results and Analysis
We present our results on low-resource and resource-rich languages from MUSE
dataset in Tables 1 and 2, respectively, and the results on VecMap dataset in
Table 3. We present the results in precision@1, which measures how often one
of the correct translations of a source word is predicted as the top choice.
For each of the cases, we show results on seed dictionary of three different
sizes including 1-to-1 and 1-to-many mappings; “1K Unique” and “5K Unique”
contain 1-to-1 mappings of 1000 and 5000 source-target pairs respectively,
while “5K All” contains 1-to-many mappings of all 5000 source and target
words, that is, for each source word there can be multiple target words.
Through experiments and analysis, our goal is to assess the following
questions.
1. (i)
Does LNMap improve over the best existing methods in terms of mapping accuracy
on low-resource languages (section 5.1)?
2. (ii)
How well does LNMap perform on resource-rich languages (section 5.2)?
3. (iii)
What is the effect of non-linearity in the autoencoders (section 5.3)?
4. (iv)
Which components of LNMap attribute to improvements (section 5.4)?
### 5.1 Performance on Low-resource Languages
| En-Ms | En-Fi | En-Et | En-Tr | En-El | En-Fa | En-He | En-Ta | En-Bn | En-Hi | Avg.
---|---|---|---|---|---|---|---|---|---|---|---
| $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ |
GH Distance | 0.49 | 0.54 | 0.68 | 0.41 | 0.46 | 0.39 | 0.45 | 0.47 | 0.49 | 0.56 |
Unsupervised Baselines | | | | | | | | | | |
Artetxe et al. (2018b) | 49.0 | 49.7 | 49.8 | 63.5 | 33.7 | 51.2 | 52.7 | 63.5 | 47.6 | 63.4 | 33.4 | 40.7 | 43.8 | 57.5 | 0.0 | 0.0 | 18.4 | 23.9 | 39.7 | 48.0 | 41.5
Conneau et al. (2018) | 46.2 | 0.0 | 38.4 | 0.0 | 19.4 | 0.0 | 46.4 | 0.0 | 39.5 | 0.0 | 30.5 | 0.0 | 36.8 | 53.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 15.5
Mohiuddin and Joty (2019) | 54.1 | 51.7 | 44.8 | 62.5 | 31.8 | 48.8 | 51.3 | 61.7 | 47.9 | 63.5 | 36.7 | 44.5 | 44.0 | 57.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 35.0
Supervision With “1K Unique” Seed Dictionary | |
Sup./Semi-sup. Baselines | | | | | | | | | | |
Artetxe et al. (2017) | 36.5 | 41.0 | 40.8 | 56.0 | 21.3 | 39.0 | 39.5 | 56.5 | 34.5 | 56.2 | 24.1 | 35.7 | 30.2 | 51.7 | 5.4 | 12.7 | 6.2 | 19.9 | 22.6 | 38.8 | 33.5
Artetxe et al. (2018a) | 35.3 | 34.0 | 30.8 | 40.8 | 21.6 | 32.6 | 33.7 | 43.3 | 32.0 | 46.4 | 22.8 | 27.6 | 32.27 | 39.1 | 7.3 | 11.9 | 11.3 | 15.7 | 26.2 | 30.7 | 28.8
Conneau et al. (2018) | 46.2 | 44.7 | 46.0 | 58.4 | 29.3 | 40.0 | 44.8 | 58.5 | 42.1 | 56.5 | 31.6 | 38.4 | 38.3 | 52.4 | 11.7 | 16.0 | 14.3 | 19.7 | 32.5 | 42.3 | 38.2
Joulin et al. (2018) | 31.4 | 30.7 | 30.4 | 41.4 | 20.1 | 26.0 | 30.7 | 36.5 | 28.8 | 43.6 | 18.7 | 23.1 | 33.5 | 34.3 | 6.0 | 10.1 | 7.6 | 11.3 | 20.7 | 25.7 | 25.6
Jawanpuria et al. (2019) | 40.0 | 39.6 | 37.5 | 50.7 | 24.9 | 38.4 | 39.7 | 49.7 | 36.6 | 52.9 | 26.1 | 33.0 | 35.1 | 44.5 | 10.0 | 15.9 | 12.0 | 19.7 | 30.5 | 37.1 | 33.7
Patra et al. (2019) | 40.4 | 41.4 | 44.3 | 59.8 | 21.0 | 40.4 | 41.4 | 58.8 | 37.1 | 58.9 | 26.5 | 39.6 | 38.4 | 54.1 | 6.4 | 15.1 | 6.1 | 18.1 | 24.9 | 35.4 | 35.4
LNMap | 50.6 | 49.5 | 52.5 | 62.1 | 38.2 | 49.4 | 52.6 | 62.1 | 48.2 | 58.9 | 35.5 | 40.9 | 46.6 | 52.8 | 17.6 | 21.2 | 18.4 | 27.2 | 37.1 | 47.4 | 43.4
LNMap (Lin. AE) | 49.8 | 48.7 | 48.5 | 61.2 | 36.5 | 49.1 | 49.3 | 61.9 | 47.2 | 58.3 | 34.7 | 40.1 | 43.0 | 52.3 | 14.5 | 20.3 | 16.5 | 26.1 | 35.6 | 46.6 | 42.1
Supervision With “5K Unique” Seed Dictionary | |
Sup./Semi-sup. Baselines | | | | | | | | | | |
Artetxe et al. (2017) | 36.5 | 42.0 | 40.8 | 57.0 | 22.4 | 39.6 | 39.6 | 56.7 | 37.2 | 56.4 | 26.0 | 35.3 | 31.6 | 51.9 | 6.2 | 13.4 | 8.2 | 21.3 | 23.2 | 38.3 | 34.2
Artetxe et al. (2018a) | 54.6 | 52.5 | 48.8 | 65.2 | 38.2 | 54.8 | 52.0 | 65.1 | 47.5 | 64.6 | 38.4 | 42.4 | 47.4 | 57.4 | 18.4 | 25.8 | 21.9 | 31.8 | 40.3 | 49.5 | 45.8
Conneau et al. (2018) | 46.4 | 45.7 | 46.0 | 59.2 | 31.0 | 41.7 | 45.9 | 60.1 | 43.1 | 56.8 | 31.6 | 37.7 | 38.4 | 53.4 | 14.3 | 19.1 | 15.0 | 22.6 | 32.9 | 42.8 | 39.2
Joulin et al. (2018) | 50.0 | 49.3 | 53.0 | 66.1 | 39.8 | 52.0 | 54.0 | 61.7 | 47.6 | 63.4 | 39.6 | 42.2 | 53.0 | 56.3 | 16.0 | 24.2 | 21.3 | 27.0 | 38.3 | 47.5 | 45.2
Jawanpuria et al. (2019) | 51.0 | 49.8 | 47.4 | 65.1 | 36.0 | 49.8 | 49.3 | 63.9 | 46.6 | 62.3 | 36.6 | 40.8 | 44.1 | 56.1 | 16.1 | 23.2 | 18.6 | 25.9 | 37.5 | 45.9 | 43.3
Patra et al. (2019) | 46.0 | 46.7 | 48.6 | 60.9 | 33.1 | 47.2 | 48.3 | 61.0 | 44.2 | 60.9 | 34.4 | 40.7 | 43.5 | 56.5 | 15.3 | 22.0 | 15.2 | 25.0 | 34.7 | 43.5 | 41.4
LNMap | 51.3 | 54.2 | 52.7 | 67.9 | 40.2 | 56.4 | 53.1 | 65.5 | 48.2 | 64.8 | 36.2 | 44.4 | 47.5 | 56.6 | 19.7 | 31.5 | 22.0 | 36.2 | 38.5 | 52.2 | 46.9
LNMap (Lin. AE) | 50.1 | 53.9 | 51.3 | 67.0 | 38.6 | 55.6 | 51.1 | 64.9 | 47.7 | 63.6 | 35.6 | 44.0 | 44.2 | 55.9 | 18.6 | 27.3 | 19.6 | 31.6 | 36.5 | 51.3 | 45.4
Supervision With “5K All” (“5K Unique” Source Words) Seed Dictionary | |
Sup./Semi-sup. Baselines | | | | | | | | | | |
Artetxe et al. (2017) | 37.0 | 41.6 | 40.8 | 57.0 | 22.7 | 39.5 | 38.8 | 56.9 | 37.5 | 57.2 | 25.4 | 36.3 | 32.2 | 52.1 | 5.9 | 14.1 | 7.7 | 21.7 | 22.4 | 38.3 | 34.3
Artetxe et al. (2018a) | 55.2 | 51.7 | 48.9 | 64.6 | 37.4 | 54.0 | 52.2 | 63.7 | 48.2 | 65.0 | 39.0 | 42.6 | 47.6 | 58.0 | 19.6 | 25.2 | 21.1 | 30.6 | 40.4 | 50.0 | 45.8
Conneau et al. (2018) | 46.3 | 44.8 | 46.4 | 59.0 | 30.9 | 42.0 | 45.8 | 59.0 | 44.4 | 57.4 | 31.8 | 38.8 | 39.0 | 53.4 | 15.1 | 18.4 | 15.5 | 22.4 | 32.9 | 44.4 | 39.4
Joulin et al. (2018) | 51.4 | 49.1 | 55.6 | 65.8 | 40.0 | 50.2 | 53.8 | 61.7 | 49.1 | 62.8 | 40.5 | 42.4 | 52.2 | 57.9 | 17.7 | 24.0 | 20.2 | 26.9 | 38.2 | 47.1 | 45.3
Jawanpuria et al. (2019) | 51.4 | 47.7 | 46.7 | 63.4 | 33.7 | 48.7 | 48.6 | 61.9 | 46.3 | 61.8 | 38.0 | 40.9 | 43.1 | 56.7 | 16.5 | 23.1 | 19.3 | 25.6 | 37.7 | 44.1 | 42.8
Patra et al. (2019) | 48.4 | 43.8 | 53.2 | 63.8 | 36.3 | 48.3 | 51.8 | 59.6 | 48.2 | 61.8 | 38.4 | 39.3 | 51.6 | 55.2 | 16.5 | 22.7 | 17.5 | 26.7 | 36.2 | 45.4 | 43.3
LNMap | 50.3 | 54.1 | 53.1 | 70.5 | 41.2 | 57.5 | 52.5 | 65.3 | 49.1 | 66.6 | 36.8 | 43.7 | 47.6 | 59.2 | 18.9 | 32.1 | 21.4 | 35.2 | 37.6 | 51.6 | 47.2
LNMap (Lin. AE) | 50.0 | 53.2 | 51.2 | 67.5 | 39.9 | 54.5 | 50.9 | 64.2 | 48.6 | 66.1 | 36.4 | 42.9 | 44.6 | 59.0 | 18.0 | 28.7 | 20.1 | 30.8 | 37.1 | 50.5 | 46.7
Table 1: Word translation accuracy (P@1) on low-resource languages on MUSE
dataset using fastText.
Most of the unsupervised models fail on the majority of the low-resource
languages Vulić et al. (2019). On the other hand, the performance of
supervised models on low-resource languages has not been satisfactory,
especially with a small seed dictionary. Hence, we first compare LNMap's
performance on these languages. From Table 1, we see that on average LNMap
outperforms every baseline by a good margin (1.1%-5.2% over the best
baselines).
For “1K Unique” dictionary, LNMap exhibits impressive performance. In all the
20 translation tasks, it outperforms all the (semi-)supervised baselines by a
wide margin. If we compare with Joulin et al. (2018), a state-of-the-art
supervised model, LNMap’s average improvement is $\sim$$18\%$, which is
remarkable. Compared to other baselines, the average margin of improvement is
also quite high – $9.9\%,14.6\%,5.2\%,9.7\%,$ and $8.0\%$ gains over Artetxe
et al. (2017), Artetxe et al. (2018a), Conneau et al. (2018), Jawanpuria et
al. (2019), and Patra et al. (2019), respectively. We see that among the
supervised baselines, Conneau et al. (2018)’s model performs better than
others.
If we increase the dictionary size, we can still see the dominance of LNMap
over the baselines. For the "5K Unique" seed dictionary, it performs better
than the baselines on 14/20 translation tasks, while for the "5K All" seed
dictionary, LNMap performs best on 13/20 translation tasks.
Interestingly, LNMap's performance under a resource-constrained setup is
impressive, making it suitable for very low-resource language pairs like
En-Ta, En-Bn, and En-Hi.
Now if we look at the performance of unsupervised baselines on low-resource
languages, we see that Conneau et al. (2018)’s model fails to converge on the
majority of the translation tasks (12/20), while the model of Mohiuddin and
Joty (2019) fails to converge on En$\leftrightarrow$Ta, En$\leftrightarrow$Bn,
and En$\leftrightarrow$Hi. Although the most robust unsupervised method of
Artetxe et al. (2018b) performs better than the other unsupervised approaches,
it still fails to converge on En$\leftrightarrow$Ta tasks. If we compare its
performance with LNMap, we see that our model outperforms the best
unsupervised model of Artetxe et al. (2018b) on 18/20 low-resource translation
tasks.
| En-Es | En-De | En-It | En-Ar | En-Ru | Avg.
---|---|---|---|---|---|---
| $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ |
GH Distance | 0.21 | 0.31 | 0.19 | 0.46 | 0.46 |
Unsupervised Baselines | | | | | |
Artetxe et al. (2018b) | 82.2 | 84.4 | 74.9 | 74.1 | 78.9 | 79.5 | 33.2 | 52.8 | 48.93 | 65.0 | 67.4
Conneau et al. (2018) | 81.8 | 83.7 | 74.2 | 72.6 | 78.3 | 78.1 | 29.3 | 47.6 | 41.9 | 59.0 | 64.7
Mohiuddin and Joty (2019) | 82.7 | 84.7 | 75.4 | 74.3 | 79.0 | 79.6 | 36.3 | 52.6 | 46.9 | 64.7 | 67.6
Supervision With “1K Unique” Seed Dictionary | |
Sup./Semi-sup. Baselines | | | | | |
Artetxe et al. (2017) | 81.0 | 83.6 | 73.8 | 72.4 | 76.6 | 77.8 | 24.9 | 44.9 | 46.3 | 61.7 | 64.3
Artetxe et al. (2018a) | 73.8 | 76.6 | 62.5 | 57.6 | 67.9 | 70.0 | 25.8 | 37.3 | 40.2 | 49.5 | 56.2
Conneau et al. (2018) | 81.2 | 82.8 | 73.6 | 73.0 | 77.6 | 76.6 | 34.7 | 46.4 | 48.5 | 60.6 | 65.5
Joulin et al. (2018) | 70.8 | 74.1 | 59.0 | 54.0 | 62.7 | 67.2 | 22.4 | 32.2 | 39.6 | 45.4 | 52.8
Jawanpuria et al. (2019) | 75.1 | 77.3 | 66.0 | 62.6 | 69.3 | 71.6 | 28.4 | 40.6 | 41.7 | 53.9 | 58.6
Patra et al. (2019) | 81.9 | 83.8 | 74.6 | 73.1 | 78.0 | 78.1 | 29.8 | 50.9 | 46.3 | 63.6 | 66.0
LNMap | 80.1 | 80.2 | 73.3 | 71.8 | 77.1 | 75.2 | 40.5 | 52.2 | 49.9 | 62.1 | 66.2
LNMap (Lin. AE) | 83.2 | 85.5 | 76.2 | 74.9 | 79.2 | 79.6 | 37.7 | 54.0 | 52.6 | 66.2 | 68.8
Supervision With “5K Unique” Seed Dictionary | |
Sup./Semi-sup. Baselines | | | | | |
Artetxe et al. (2017) | 81.3 | 83.3 | 72.8 | 72.6 | 76.3 | 77.6 | 24.1 | 45.3 | 47.5 | 60.3 | 64.1
Artetxe et al. (2018a) | 80.8 | 84.5 | 73.3 | 74.3 | 77.4 | 79.7 | 42.0 | 54.7 | 51.5 | 68.2 | 68.7
Conneau et al. (2018) | 81.6 | 83.5 | 74.1 | 72.7 | 77.8 | 77.2 | 34.3 | 48.5 | 49.0 | 60.7 | 66.0
Joulin et al. (2018) | 83.4 | 85.4 | 77.0 | 76.4 | 78.7 | 81.6 | 41.3 | 54.0 | 58.1 | 67.4 | 70.4
Jawanpuria et al. (2019) | 81.3 | 86.3 | 74.5 | 75.9 | 78.6 | 81.3 | 38.7 | 53.4 | 52.3 | 67.6 | 68.9
Patra et al. (2019) | 82.2 | 84.6 | 75.6 | 73.7 | 77.8 | 78.6 | 35.0 | 51.9 | 52.2 | 65.2 | 69.5
LNMap | 80.9 | 80.8 | 74.9 | 72.3 | 77.1 | 76.5 | 40.7 | 56.6 | 52.2 | 64.8 | 67.7
LNMap (Lin. AE) | 83.4 | 85.7 | 75.5 | 75.4 | 79.0 | 81.1 | 39.5 | 56.8 | 53.8 | 68.4 | 69.9
Supervision With “5K All”(5K Unique Source Words) Seed Dictionary | |
Sup./Semi-sup. Baselines | | | | | |
Artetxe et al. (2017) | 81.2 | 83.5 | 72.8 | 72.5 | 76.0 | 77.5 | 24.4 | 45.3 | 47.3 | 61.2 | 64.2
Artetxe et al. (2018a) | 80.5 | 83.8 | 73.5 | 73.5 | 77.1 | 79.2 | 41.2 | 55.5 | 50.5 | 67.3 | 68.2
Conneau et al. (2018) | 81.6 | 83.2 | 73.7 | 72.6 | 77.3 | 77.0 | 34.1 | 49.4 | 49.8 | 60.7 | 66.0
Joulin et al. (2018) | 84.4 | 86.4 | 79.0 | 76.0 | 79.0 | 81.4 | 42.2 | 55.5 | 57.4 | 67.0 | 70.9
Jawanpuria et al. (2019) | 81.4 | 85.5 | 74.7 | 76.7 | 77.8 | 80.9 | 38.1 | 53.3 | 51.1 | 67.6 | 68.7
Patra et al. (2019) | 84.0 | 86.4 | 78.7 | 76.4 | 79.3 | 82.4 | 41.1 | 53.9 | 57.2 | 64.8 | 70.4
LNMap | 80.5 | 82.2 | 73.9 | 72.7 | 76.7 | 78.3 | 41.5 | 57.1 | 53.5 | 67.1 | 68.4
LNMap (Lin. AE) | 82.9 | 86.4 | 75.5 | 75.9 | 78.1 | 81.4 | 39.3 | 57.3 | 52.3 | 67.8 | 69.6
Table 2: Word translation accuracy (P@1) on resource-rich languages on MUSE
dataset using fastText.
### 5.2 Results on Resource-rich Languages
Table 2 shows the results for 5 resource-rich language pairs (10 translation
tasks) from the MUSE dataset. We notice that our model achieves the highest
accuracy in all the tasks for "1K Unique", in 4 tasks for "5K Unique", and in
3 tasks for "5K All".
We show the results on the VecMap dataset in Table 3, where there are 3
resource-rich language pairs, and one low-resource pair (En-Fi) with a total
of 8 translation tasks. Overall, we have similar observations as in MUSE – our
model outperforms other models on 7 tasks for “1K Unique”, 4 tasks for “5K
Unique”, and 4 for “5K All”.
| En-Es | En-It | En-De | En-Fi | Avg.
---|---|---|---|---|---
| $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ | $\rightarrow$ | $\leftarrow$ |
Unsupervised Baselines |
Artetxe et al. (2018b) | 36.9 | 31.6 | 47.9 | 42.3 | 48.3 | 44.1 | 32.9 | 33.5 | 39.7
Conneau et al. (2018) | 34.7 | 0.0 | 44.9 | 38.7 | 0.0 | 0.0 | 0.0 | 0.0 | 14.8
Mohiuddin and Joty (2019) | 37.4 | 31.9 | 47.6 | 42.5 | 0.0 | 0.0 | 0.0 | 0.0 | 19.9
Supervision With “1K Unique” Seed Dictionary
Sup./Semi-sup. Baselines |
Artetxe et al. (2017) | 33.3 | 27.7 | 43.9 | 38.1 | 46.8 | 40.8 | 30.4 | 26.0 | 35.9
Artetxe et al. (2018a) | 29.0 | 20.0 | 38.6 | 29.2 | 36.3 | 26.0 | 25.8 | 15.0 | 27.5
Conneau et al. (2018) | 35.7 | 30.8 | 45.4 | 38.3 | 46.9 | 42.3 | 29.1 | 27.2 | 37.0
Joulin et al. (2018) | 24.2 | 17.9 | 33.9 | 25.1 | 31.6 | 25.5 | 21.9 | 14.5 | 24.4
Jawanpuria et al. (2019) | 31.5 | 23.2 | 39.2 | 32.4 | 39.1 | 30.9 | 26.8 | 21.4 | 30.6
Patra et al. (2019) | 31.4 | 30.5 | 30.9 | 38.8 | 47.9 | 43.7 | 30.5 | 31.6 | 35.7
LNMap | 32.9 | 28.6 | 44.2 | 39.1 | 43.0 | 39.2 | 26.6 | 25.4 | 34.9
LNMap (Lin. AE) | 36.5 | 33.6 | 46.0 | 40.1 | 46.4 | 44.8 | 31.7 | 37.1 | 39.5
Supervision With “5K Unique” Seed Dictionary
Sup./Semi-sup. Baselines |
Artetxe et al. (2017) | 33.3 | 27.6 | 43.9 | 38.4 | 46.0 | 41.1 | 30.9 | 25.7 | 35.9
Artetxe et al. (2018a) | 37.6 | 34.0 | 45.7 | 41.6 | 47.2 | 45.0 | 34.0 | 38.8 | 40.2
Conneau et al. (2018) | 36.0 | 31.1 | 46.0 | 38.8 | 47.6 | 43.2 | 31.1 | 28.2 | 37.8
Joulin et al. (2018) | 34.2 | 31.1 | 43.1 | 37.2 | 44.5 | 41.9 | 30.9 | 34.7 | 37.2
Jawanpuria et al. (2019) | 36.9 | 33.3 | 47.1 | 39.9 | 47.7 | 44.6 | 35.1 | 38.0 | 40.2
Patra et al. (2019) | 34.3 | 31.6 | 41.1 | 39.3 | 47.5 | 43.6 | 30.7 | 33.4 | 37.7
LNMap | 33.4 | 27.3 | 44.1 | 38.9 | 42.5 | 39.4 | 29.7 | 28.6 | 35.5
LNMap (Lin. AE) | 37.1 | 34.1 | 46.2 | 40.3 | 47.7 | 45.6 | 33.3 | 38.8 | 40.3
Supervision With “5K All” (5K Unique Source Words) Seed Dictionary
Sup./Semi-sup. Baselines |
Artetxe et al. (2017) | 32.7 | 28.1 | 43.8 | 38.0 | 47.4 | 40.8 | 30.8 | 26.2 | 36.0
Artetxe et al. (2018a) | 38.2 | 33.4 | 47.3 | 41.6 | 47.2 | 44.8 | 34.9 | 38.6 | 40.8
Conneau et al. (2018) | 36.1 | 31.2 | 45.7 | 38.5 | 47.2 | 42.8 | 31.2 | 28.3 | 37.7
Joulin et al. (2018) | 35.5 | 31.2 | 44.6 | 37.6 | 46.6 | 41.7 | 32.1 | 34.4 | 38.0
Jawanpuria et al. (2019) | 37.5 | 33.1 | 47.6 | 40.1 | 48.8 | 45.1 | 34.6 | 37.7 | 40.6
Patra et al. (2019) | 34.5 | 32.1 | 46.2 | 39.5 | 48.1 | 44.1 | 31.0 | 33.6 | 39.4
LNMap | 33.7 | 27.9 | 43.7 | 38.9 | 43.6 | 39.2 | 29.9 | 31.5 | 36.1
LNMap (Lin. AE) | 37.8 | 34.6 | 46.7 | 40.2 | 47.7 | 45.2 | 34.1 | 38.9 | 40.6
Table 3: Word translation accuracy (P@1) on VecMap dataset using CBOW
embeddings.
### 5.3 Effect of Non-linearity in Autoencoders
The comparative results between our model variants in Tables 1-3 reveal
that LNMap (with non-linear autoencoders) works better for low-resource
languages, whereas LNMap (Lin. AE) works better for resource-rich languages.
This can be explained by the geometric similarity between the embedding spaces
of the two languages.
In particular, we measure the geometric similarity of the language pairs using
the Gromov-Hausdorff (GH) distance Patra et al. (2019), which was recently
proposed to quantitatively estimate the isometry between two embedding spaces
(https://github.com/joelmoniz/BLISS). From the measurements (Tables 1-2), we
see that etymologically close language pairs have lower GH distance than
etymologically distant and low-resource language pairs. (We could not compute
GH distances for the VecMap dataset; the metric gives 'inf' in the BLISS
framework.) The high GH distance of the low-resource language pairs implies
that the embedding spaces of English and those languages are far from
isomorphic. Hence, we need strong non-linearity for those distant languages.
### 5.4 Dissecting LNMap
We further analyze our model by dissecting it and measuring the contributions
of its different components. Specifically, our goal is to assess the
contributions of back-translation, reconstruction, non-linearity in the
mapper, and non-linearity in the autoencoder. We present the ablation results
in Table 4 on 8 translation tasks from 4 language pairs, consisting of 2
resource-rich and 2 low-resource languages. We use the MUSE dataset for this
purpose. All experiments in the ablation study use the “1K Unique” seed
dictionary.
#### $\ominus$ Reconstruction loss:
Removing the reconstruction loss from the full model costs high-resource
language pairs 0.9% and 5.3% accuracy on average for translation from and to
English, respectively. The losses are even larger for low-resource language
pairs: 2.5% and 6.4% on average.
Method | En-Es $\rightarrow$ | En-Es $\leftarrow$ | En-It $\rightarrow$ | En-It $\leftarrow$ | En-Ta $\rightarrow$ | En-Ta $\leftarrow$ | En-Bn $\rightarrow$ | En-Bn $\leftarrow$
---|---|---|---|---|---|---|---|---
LNMap | 80.1 | 80.2 | 77.1 | 75.3 | 17.6 | 21.2 | 18.4 | 27.2
$\ominus$ Recon. loss | 79.6 | 75.4 | 75.7 | 69.4 | 14.8 | 14.9 | 16.2 | 20.7
$\ominus$ Back-tran. loss | 79.8 | 79.1 | 76.6 | 74.4 | 16.7 | 20.3 | 16.5 | 26.7
$\oplus$ Linear mapper | 78.8 | 78.9 | 76.3 | 74.7 | 16.6 | 20.2 | 18.0 | 26.3
$\oplus$ Procrustes sol. | 75.9 | 73.9 | 72.0 | 72.2 | 11.1 | 12.1 | 12.2 | 14.8
$\oplus$ Linear autoenc. | 83.2 | 85.5 | 79.2 | 79.6 | 14.5 | 20.3 | 16.5 | 26.1
Table 4: Ablation study of LNMap with the “1K Unique” dictionary (En-Es and
En-It are resource-rich; En-Ta and En-Bn are low-resource). ‘$\ominus$’
indicates that the component is removed from the full model, and ‘$\oplus$’
indicates that the component is added by replacing the corresponding
component.
#### $\ominus$ Back-translation (BT) loss:
Removing the BT loss also has a negative impact, though a smaller one than
removing the reconstruction loss. This is because the reconstruction loss
(Eq. 10) also carries the BT signal.
#### $\oplus$ Linear mapper:
Replacing the non-linear mapper with a linear one in the full model has a
comparatively mild effect. This can be explained by the fact that the
autoencoders are still non-linear, so a non-linear signal still passes through
back-translation and reconstruction.
#### $\oplus$ Procrustes solution:
To isolate the effect of the non-linear mapper, we need to replace it with a
linear mapper through which no non-linear signal passes during training. This
can be achieved by replacing the non-linear mapper with the Procrustes
solution. The results show an adverse effect of removing non-linearity from
the mapper for all language pairs; for the low-resource pairs, performance
drops quite significantly.
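For reference, the Procrustes solution used in this ablation has a simple closed form: given embedding matrices $X$ and $Y$ whose rows are paired by the seed dictionary, the orthogonal map minimising $\|XW-Y\|_{F}$ is $W=UV^{\top}$, where $U\Sigma V^{\top}$ is the SVD of $X^{\top}Y$. The following minimal NumPy sketch is for illustration only (the function and variable names are ours, not from any released LNMap code):

```python
import numpy as np

def procrustes_mapping(X, Y):
    """Orthogonal W minimizing ||X @ W - Y||_F, in closed form via SVD."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Sanity check: if Y is an orthogonally transformed copy of X,
# the transformation is recovered exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))              # "source" embeddings
R, _ = np.linalg.qr(rng.normal(size=(300, 300)))  # a random orthogonal map
W = procrustes_mapping(X, X @ R)
print(np.allclose(W, R))                      # True
```

Because $W$ is constrained to be orthogonal, no non-linear signal can leak through it during training, which is exactly why it serves as the fully linear baseline in this ablation.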
#### $\oplus$ Linear autoencoder:
For high-resource language pairs, the linear autoencoder works better than
the non-linear one. The opposite holds for the low-resource pairs, whose
performance drops significantly with the linear autoencoder.
## 6 Conclusions
We have presented a novel semi-supervised framework LNMap to learn the cross-
lingual mapping between two monolingual word embeddings. Apart from exploiting
weak supervision from a small (1K) seed dictionary, our LNMap leverages the
information from monolingual word embeddings. In contrast to the existing
methods that directly map word embeddings using the isomorphic assumption, our
framework is independent of any such strong prior assumptions. LNMap first
learns to transform the embeddings into a latent space and then uses a non-
linear transformation to learn the mapping. To guide the non-linear mapping
further, we include constraints for back-translation and original embedding
reconstruction.
Extensive experiments with fifteen language pairs comprising high- and
low-resource languages show the efficacy of non-linear transformations,
especially for low-resource and distant languages. Comparisons with existing
supervised, semi-supervised, and unsupervised baselines show that LNMap learns
a better mapping. An in-depth ablation study shows that the different
components of LNMap work in a collaborative manner.
## Acknowledgments
We would like to thank the anonymous reviewers for their helpful comments.
Shafiq Joty would like to thank the funding support from NRF (NRF2016IDM-
TRANS001-062), Singapore.
## References
* Alvarez-Melis and Jaakkola (2018) David Alvarez-Melis and Tommi Jaakkola. 2018. Gromov-wasserstein alignment of word embedding spaces. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 1881–1890. Association for Computational Linguistics.
* Artetxe et al. (2016) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 2289–2294, Austin, Texas. Association for Computational Linguistics.
* Artetxe et al. (2017) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 451–462, Vancouver, Canada. Association for Computational Linguistics.
* Artetxe et al. (2018a) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence_ , pages 5012–5019.
* Artetxe et al. (2018b) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In _ACL_.
* Artetxe et al. (2018c) Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018c. Unsupervised neural machine translation. In _Proceedings of the Sixth International Conference on Learning Representations_.
* Barone (2016) Antonio Valerio Miceli Barone. 2016. Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders. In _Proceedings of the 1st Workshop on Representation Learning for NLP_ , pages 121–126. Association for Computational Linguistics.
* Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. _Transactions of the Association for Computational Linguistics_ , 5:135–146.
* Conneau et al. (2018) Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In _International Conference on Learning Representations (ICLR)_.
* Dinu et al. (2015) Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In _ICLR, Workshop track_.
* Doval et al. (2019) Yerai Doval, José Camacho-Collados, Luis Espinosa Anke, and Steven Schockaert. 2019. On the robustness of unsupervised and semi-supervised cross-lingual word embedding learning. _ArXiv_ , abs/1908.07742.
* Heyman et al. (2017) Geert Heyman, Ivan Vulić, and Marie-Francine Moens. 2017. Bilingual lexicon induction by learning to combine word-level and character-level representations. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 1085–1095, Valencia, Spain. Association for Computational Linguistics.
* Hoshen and Wolf (2018) Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 469–478. Association for Computational Linguistics.
* Jawanpuria et al. (2019) Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2019. Learning multilingual word embeddings in latent metric space: a geometric approach. _Transaction of the Association for Computational Linguistics (TACL)_ , 7:107–120.
* Joulin et al. (2018) Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2979–2984, Brussels, Belgium. Association for Computational Linguistics.
* Lample et al. (2018) Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics.
* Mikolov et al. (2013a) Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. _CoRR_ , abs/1309.4168.
* Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In _Advances in Neural Information Processing Systems 26_ , pages 3111–3119. Curran Associates, Inc.
* Mohiuddin and Joty (2019) Tasnim Mohiuddin and Shafiq Joty. 2019. Revisiting adversarial autoencoder for unsupervised word translation with cycle consistency and improved training. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3857–3867, Minneapolis, Minnesota. Association for Computational Linguistics.
* Mohiuddin and Joty (2020) Tasnim Mohiuddin and Shafiq Joty. 2020. Unsupervised word translation with adversarial autoencoder. _Computational Linguistics_ , 46(2):257–288.
* Nakashole and Flauger (2018) Ndapa Nakashole and Raphael Flauger. 2018. Characterizing departures from linearity in word translation. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 221–227, Melbourne, Australia. Association for Computational Linguistics.
* Ormazabal et al. (2019) Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the limitations of cross-lingual word embedding mappings. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4990–4995, Florence, Italy. Association for Computational Linguistics.
* Patra et al. (2019) Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, and Graham Neubig. 2019. Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 184–193, Florence, Italy. Association for Computational Linguistics.
* Ruder et al. (2019) Sebastian Ruder, Anders Søgaard, and Ivan Vulić. 2019. Unsupervised cross-lingual representation learning. In _Proceedings of ACL 2019, Tutorial Abstracts_ , pages 31–38.
* Smith et al. (2017) Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In _International Conference on Learning Representations (ICLR)_.
* Søgaard et al. (2018) Anders Søgaard, Sebastian Ruder, and Ivan Vulić. 2018. On the limitations of unsupervised bilingual dictionary induction. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 778–788. Association for Computational Linguistics.
* Søgaard et al. (2019) Anders Søgaard, Ivan Vulić, Sebastian Ruder, and Manaal Faruqui. 2019. _Cross-Lingual Word Embeddings_. Synthesis Lectures on Human Language Technologies. Morgan & Claypool.
* Vulić et al. (2019) Ivan Vulić, Goran Glavaš, Roi Reichart, and Anna Korhonen. 2019. Do we really need fully unsupervised cross-lingual embeddings? In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 4406–4417, Hong Kong, China. Association for Computational Linguistics.
* Vulić and Moens (2015) Ivan Vulić and Marie-Francine Moens. 2015. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In _Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval_ , SIGIR ’15, pages 363–372, New York, NY, USA. ACM.
* Xu et al. (2018) Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2465–2474. Association for Computational Linguistics.
* Zhang et al. (2017) Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1959–1970. Association for Computational Linguistics.
## Appendix A Appendix
### A.1 Reproducibility Settings
* Computing infrastructure - Linux machine with a single GTX 1080 Ti GPU
* PyTorch version 1.2.0
* CUDA version 10.0
* cuDNN version 7.6.0
* Average runtime - 15-20 minutes
### A.2 Optimal Hyperparameters
Hyperparameter | Value
---|---
_Encoder_ |
#layers | 3
input dim | 300
hidden dim | 350-400
output dim | 350-400
hidden non-linearity | PReLU
output non-linearity | linear
_Decoder_ |
#layers | 3
input dim | 350-400
hidden dim | 350-400
output dim | 300
hidden non-linearity | PReLU
output non-linearity | tanh

Table 5: Optimal hyper-parameter settings for autoencoder.

Hyperparameter | Value
---|---
type | linear/non-linear
#layers | 2
input dim | 350-400
hidden dim | 400
output dim | 350-400
hidden non-linearity | tanh
output non-linearity | linear

Table 6: Optimal hyper-parameter settings for mapper.

Hyperparameter | Value
---|---
normalization | renorm,center,renorm
#iterations | dynamic
sup. dict size | 1K-5K
batch size | 128
autoenc. epochs | 25
mapper epochs | 100
nearest-neighbor | CSLS
autoenc. optimizer | SGD
autoenc. learning-rate | 0.0001
mapper optimizer | SGD
mapper learning-rate | 0.0001
mapping-loss weight | 1.0
cycle-loss weight | 1.0
recons.-loss weight | 1.0

Table 7: Optimal hyper-parameter settings for LNMap training.
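The architectural settings in Tables 5 and 6 map directly onto network definitions. The following PyTorch sketch is for illustration only, not a released implementation; in particular, fixing the latent dimension to 350 is our choice within the 350-400 range the tables allow:

```python
import torch.nn as nn

LATENT = 350  # Tables 5-6 allow 350-400; fixed here for illustration

# Encoder (Table 5): 3 linear layers, 300 -> LATENT -> LATENT -> LATENT,
# PReLU on the hidden layers, linear output.
encoder = nn.Sequential(
    nn.Linear(300, LATENT), nn.PReLU(),
    nn.Linear(LATENT, LATENT), nn.PReLU(),
    nn.Linear(LATENT, LATENT),
)

# Decoder (Table 5): 3 linear layers back to the 300-d embedding space,
# PReLU on the hidden layers, tanh output.
decoder = nn.Sequential(
    nn.Linear(LATENT, LATENT), nn.PReLU(),
    nn.Linear(LATENT, LATENT), nn.PReLU(),
    nn.Linear(LATENT, 300), nn.Tanh(),
)

# Non-linear mapper (Table 6): 2 linear layers between the latent spaces,
# tanh hidden non-linearity, linear output.
mapper = nn.Sequential(
    nn.Linear(LATENT, 400), nn.Tanh(),
    nn.Linear(400, LATENT),
)
```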
$|\nabla\mathbf{M}_{\varepsilon}|^{2}\lesssim|\nabla\tilde{\mathbf{M}}_{\varepsilon}|^{2}$.
The latter can be estimated by differentiating both sides of (4.91), by the
BV-chain rule; this gives
$\|\nabla\tilde{\mathbf{M}}_{\varepsilon}\|^{2}_{L^{2}(\Omega_{\sigma_{\varepsilon}})}\lesssim\|\nabla\tilde{\mathbf{Q}}_{\varepsilon}\|^{2}_{L^{2}(\Omega_{\sigma_{\varepsilon}})}\lesssim\left|\log\varepsilon\right|$.
Let
$\eta_{\varepsilon}:=\frac{\sqrt{2}\beta+1}{u^{2}\left(\frac{\sigma_{\varepsilon}}{\varepsilon}\right)}.$
We observe that $\eta_{\varepsilon}\to 1$ as $\varepsilon\to 0$, due to the
condition at infinity in (4.92). We have in $A_{\varepsilon}^{1}$:
$\frac{\varepsilon}{2}|\nabla\mathbf{M}_{\varepsilon}|^{2}=\frac{\eta_{\varepsilon}}{2\varepsilon}\left|u^{\prime}\left(\frac{|x_{2}|}{\varepsilon}\right)\right|^{2}+\mathrm{O}(\varepsilon|\nabla\tilde{\mathbf{M}}_{\varepsilon}|^{2})$
and therefore:
$\displaystyle\frac{\varepsilon}{2}\int_{A^{1}_{\varepsilon}}|\nabla\mathbf{M}_{\varepsilon}|^{2}\mathrm{d}x$
$\displaystyle\leq\mathrm{O}(\varepsilon|\log\varepsilon|)+\frac{\eta_{\varepsilon}}{2\varepsilon}\int_{A_{\varepsilon}^{1}}\left|u^{\prime}\left(\frac{|x_{2}|}{\varepsilon}\right)\right|^{2}\mathrm{d}x+\mathrm{O}(\varepsilon\|\nabla\tilde{\mathbf{M}}_{\varepsilon}\|^{2}_{L^{2}(\Omega_{\sigma_{\varepsilon}})})$
$\displaystyle=\mathrm{O}(\varepsilon|\log\varepsilon|)+\frac{\eta_{\varepsilon}}{2\varepsilon}\,\mathscr{H}^{1}(L_{1})\cdot\int_{-\sigma_{\varepsilon}}^{\sigma_{\varepsilon}}\left|u^{\prime}\left(\frac{|x_{2}|}{\varepsilon}\right)\right|^{2}\mathrm{d}x_{2}$
$\displaystyle=\mathrm{O}(\varepsilon|\log\varepsilon|)+\eta_{\varepsilon}\,\mathscr{H}^{1}(L_{1})\cdot\int_{0}^{\frac{\sigma_{\varepsilon}}{\varepsilon}}\left|u^{\prime}(t)\right|^{2}\,\mathrm{d}t.$
By repeating this argument on each $A^{j}_{\varepsilon}$, we deduce
(4.96)
$\frac{\varepsilon}{2}\int_{\Omega_{\sigma_{\varepsilon}}}|\nabla\mathbf{M}_{\varepsilon}|^{2}\mathrm{d}x\leq\mathrm{O}(\varepsilon|\log\varepsilon|)+\eta_{\varepsilon}\,\mathbb{L}(a_{1},\,\ldots,\,a_{2\left|d\right|})\cdot\int_{0}^{\frac{\sigma_{\varepsilon}}{\varepsilon}}\left|u^{\prime}(t)\right|^{2}\,\mathrm{d}t.$
Next, we estimate the potential term. On
$\Omega_{\sigma_{\varepsilon}}\setminus\bigcup_{j=1}^{\left|d\right|}A_{\varepsilon}^{j}$,
we have $|\mathbf{Q}_{\varepsilon}|=1+\kappa_{*}\varepsilon$ and
$|\mathbf{M}_{\varepsilon}|=(\sqrt{2}\beta+1)^{\frac{1}{2}}$. The identity
(4.91) can be written as
$\frac{\mathbf{Q}_{\varepsilon}}{1+\kappa_{*}\varepsilon}=\sqrt{2}\left(\frac{\mathbf{M}_{\varepsilon}\otimes\mathbf{M}_{\varepsilon}}{\sqrt{2}\beta+1}-\frac{\mathbf{I}}{2}\right)$
which implies
$\displaystyle\mathbf{Q}_{\varepsilon}\mathbf{M}_{\varepsilon}\cdot\mathbf{M}_{\varepsilon}$
$\displaystyle=\sqrt{2}(1+\kappa_{*}\varepsilon)\left(\frac{|\mathbf{M}_{\varepsilon}|^{4}}{\sqrt{2}\beta+1}-\frac{1}{2}|\mathbf{M}_{\varepsilon}|^{2}\right)=\frac{\sqrt{2}}{2}(1+\kappa_{*}\varepsilon)(\sqrt{2}\beta+1).$
In conclusion, at each point of
$\Omega_{\sigma_{\varepsilon}}\setminus\bigcup_{j=1}^{\left|d\right|}A_{\varepsilon}^{j}$
we have
(4.97)
$\begin{split}f_{\varepsilon}(\mathbf{Q}_{\varepsilon},\mathbf{M}_{\varepsilon})&=\frac{1}{4}(2\kappa_{*}\varepsilon+\kappa_{*}^{2}\varepsilon^{2})^{2}+\frac{\varepsilon\beta^{2}}{2}-\frac{\beta\varepsilon}{\sqrt{2}}(1+\kappa_{*}\varepsilon)(\sqrt{2}\beta+1)+\kappa_{\varepsilon}\\\
&=\mathrm{o}_{\varepsilon\to 0}(\varepsilon^{2})\end{split}$
by taking Lemma 3.1 into account. Therefore, the total contribution from the
potential on
$\Omega_{\sigma_{\varepsilon}}\setminus\bigcup_{j=1}^{\left|d\right|}A_{\varepsilon}^{j}$
is negligible. Let us compute the potential on $A_{\varepsilon}^{j}$.
Considering for simplicity the case $j=1$, again we have
$|\mathbf{Q}_{\varepsilon}|=1+\kappa_{*}\varepsilon$, but
$|\mathbf{M}_{\varepsilon}(x)|=\eta_{\varepsilon}^{1/2}\,u\left(\frac{|x_{2}|}{\varepsilon}\right).$
Then, (4.91) can be written as
$\mathbf{Q}_{\varepsilon}=\sqrt{2}(1+\kappa_{*}\varepsilon)\left(\frac{\mathbf{M}_{\varepsilon}\otimes\mathbf{M}_{\varepsilon}}{|\mathbf{M}_{\varepsilon}|^{2}}-\frac{\mathbf{I}}{2}\right)$
which implies
$\mathbf{Q}_{\varepsilon}\mathbf{M}_{\varepsilon}\cdot\mathbf{M}_{\varepsilon}=\frac{\sqrt{2}}{2}(1+\kappa_{*}\varepsilon)\,\eta_{\varepsilon}\,u^{2}\left(\frac{|x_{2}|}{\varepsilon}\right).$
At a generic point $x\in A_{\varepsilon}^{1}$, we have (writing
$v_{\varepsilon}:=\eta_{\varepsilon}^{1/2}u(|x_{2}|/\varepsilon)$ for
simplicity)
$\displaystyle
f_{\varepsilon}(\mathbf{Q}_{\varepsilon},\mathbf{M}_{\varepsilon})$
$\displaystyle=\frac{\kappa_{*}^{2}\varepsilon^{2}}{4}(2+\kappa_{*}\varepsilon)^{2}+\frac{\varepsilon}{4}(1-v_{\varepsilon}^{2})^{2}-\beta\varepsilon\frac{\sqrt{2}}{2}(1+\kappa_{*}\varepsilon)v_{\varepsilon}^{2}+\frac{1}{2}(\beta^{2}+\sqrt{2}\beta)\varepsilon+\kappa_{*}^{2}\varepsilon^{2}+\mathrm{o}(\varepsilon^{2})$
$\displaystyle=2\kappa_{*}^{2}\varepsilon^{2}+\frac{\varepsilon}{4}(1-v_{\varepsilon}^{2})^{2}-\frac{\beta\varepsilon}{\sqrt{2}}v_{\varepsilon}^{2}-\frac{\kappa_{*}\beta\varepsilon^{2}}{\sqrt{2}}v_{\varepsilon}^{2}+\frac{1}{2}(\beta^{2}+\sqrt{2}\beta)\varepsilon+\mathrm{o}(\varepsilon^{2})$
$\displaystyle=\mathrm{O}(\varepsilon^{2})+\varepsilon\left(h(v_{\varepsilon},0)-\frac{\beta^{2}+\sqrt{2}\beta}{2}\right)+\frac{1}{2}(\beta^{2}+\sqrt{2}\beta)\varepsilon$
$\displaystyle=\mathrm{O}(\varepsilon^{2})+\varepsilon h(v_{\varepsilon},0)$
$\displaystyle=\mathrm{O}(\varepsilon^{2})+\frac{\varepsilon}{2}H^{2}(v_{\varepsilon}).$
By repeating this argument on each $A^{j}_{\varepsilon}$, and taking the
integral over $A^{j}_{\varepsilon}$, we obtain
(4.98)
$\begin{split}\frac{1}{\varepsilon^{2}}\int_{\bigcup_{j=1}^{\left|d\right|}A_{\varepsilon}^{j}}f_{\varepsilon}(\mathbf{Q}_{\varepsilon},\mathbf{M}_{\varepsilon})\,\mathrm{d}x&=\int_{\bigcup_{j=1}^{\left|d\right|}A_{\varepsilon}^{j}}\mathrm{O}(1)+\frac{1}{2\varepsilon}H^{2}\left(\eta_{\varepsilon}^{1/2}\,u\left(\frac{|x_{2}|}{\varepsilon}\right)\right)\mathrm{d}x\\\
&=\mathrm{O}(\sigma_{\varepsilon})+\frac{1}{2\varepsilon}\sum_{j=1}^{\left|d\right|}\int_{A_{\varepsilon}^{j}}H^{2}\left(\eta_{\varepsilon}^{1/2}\,u\left(\frac{|x_{2}|}{\varepsilon}\right)\right)\mathrm{d}x\\\
&=\mathrm{o}_{\varepsilon\to
0}(1)+\mathbb{L}(a_{1},\cdots,a_{2{\left|d\right|}})\cdot\int_{0}^{\frac{\sigma_{\varepsilon}}{\varepsilon}}H^{2}(\eta_{\varepsilon}^{1/2}\,u(t))\,\mathrm{d}t.\end{split}$
By combining (4.90), (4.96), (4.97) and (4.98), keeping in mind that
$\eta_{\varepsilon}\to 1$, $\sigma_{\varepsilon}/\varepsilon\to+\infty$ as
$\varepsilon\to 0$, and applying Lebesgue’s dominated convergence theorem, we
obtain
(4.99)
$\begin{split}\mathscr{F}_{\varepsilon}(\mathbf{Q}_{\varepsilon},\mathbf{M}_{\varepsilon};\Omega\setminus\bigcup_{j=1}^{2\left|d\right|}B_{\sigma_{\varepsilon}}(a_{j}))&\leq\mathrm{o}_{\varepsilon\to
0}(1)+\mathbb{L}(a_{1},\,\ldots,\,a_{2\left|d\right|})\int_{0}^{+\infty}\left(u^{\prime 2}(t)+H^{2}(u(t))\right)\mathrm{d}t\\\
&\qquad+2\pi\left|d\right|\left|\log\varepsilon\right|+\mathbb{W}(a_{1},\cdots,a_{2\left|d\right|})+2\left|d\right|\gamma_{*}\\\
&\stackrel{{\scriptstyle\eqref{inte3}}}{{=}}2\pi\left|d\right|\left|\log\varepsilon\right|+\mathbb{W}_{\beta}(a_{1},\cdots,a_{2\left|d\right|})+2\left|d\right|\gamma_{*}+\mathrm{o}_{\varepsilon\to
0}(1).\end{split}$
It only remains to define $\mathbf{M}_{\varepsilon}$ in each ball
$B_{\sigma_{\varepsilon}}(a_{j})$. For each $j$, there exists
$\rho=\rho(j)\in(\sigma_{\varepsilon},2\sigma_{\varepsilon})$ such that
$\int_{\partial
B_{\rho}(a_{j})}|\nabla\mathbf{M}_{\varepsilon}|^{2}\,\mathrm{d}\mathscr{H}^{1}\leq\frac{1}{\sigma_{\varepsilon}}\int_{B_{2\sigma_{\varepsilon}}(a_{j})\setminus
B_{\sigma_{\varepsilon}}(a_{j})}|\nabla\mathbf{M}_{\varepsilon}|^{2}\,\mathrm{d}x=\mathrm{O}\left(\frac{\left|\log\varepsilon\right|}{\sigma_{\varepsilon}}\right)+\mathrm{O}\left(\frac{1}{\varepsilon}\right).
Define $\mathbf{M}_{\varepsilon}$ on $B_{\rho}(a_{j})$ as
(4.100)
$\mathbf{M}_{\varepsilon}(x):=\frac{|x-a_{j}|}{\rho}\mathbf{M}_{\varepsilon}\left(\frac{\rho(x-a_{j})}{|x-a_{j}|}\right).$
The vector field $\mathbf{M}_{\varepsilon}$ was already defined in
$B_{\rho}(a_{j})\setminus B_{\sigma_{\varepsilon}}(a_{j})$, but we disregard
its previous values and re-define it according to (4.100). We have
(4.101)
$\begin{split}\varepsilon\int_{B_{\rho}(a_{j})}|\nabla\mathbf{M}_{\varepsilon}|^{2}\,\mathrm{d}x&\leq\sigma_{\varepsilon}\int_{\partial
B_{\rho}(a_{j})}\varepsilon|\nabla\mathbf{M}_{\varepsilon}|^{2}\,\mathrm{d}\mathscr{H}^{1}+\varepsilon\int_{B_{\rho}(a_{j})}\mathrm{O}\left(\frac{1}{\rho^{2}}\right)\,\mathrm{d}x\\\
&\leq\mathrm{O}(\varepsilon|\log\varepsilon|)+\mathrm{O}(\sigma_{\varepsilon})+\mathrm{O}(\varepsilon)\to
0\end{split}$
and
(4.102)
$\frac{1}{\varepsilon^{2}}\int_{B_{\rho}(a_{j})}\left(f_{\varepsilon}(\mathbf{Q}_{\varepsilon},\mathbf{M}_{\varepsilon})-\frac{1}{4}(1-|\mathbf{Q}_{\varepsilon}|^{2})^{2}\right)\mathrm{d}x=\mathrm{O}(\frac{\sigma_{\varepsilon}^{2}}{\varepsilon}).$
If we choose $\varepsilon\ll\sigma_{\varepsilon}\ll\varepsilon^{\frac{1}{2}}$,
then the total contribution of $\mathbf{M}_{\varepsilon}$ to the energy on
each ball $B_{\rho}(a_{j})$ tends to zero as $\varepsilon\to 0$. ∎
###### Remark 4.2.
The proof of Proposition 4.19 carries over, with no essential modifications,
to the case where we impose Dirichlet boundary conditions for the
$\mathbf{Q}$-component and Neumann boundary conditions for the
$\mathbf{M}$-component, as described in Remark 2.1. Indeed, while the
structure of the (orientable) boundary datum for $\mathbf{Q}$ is important to
the analysis, the boundary condition for $\mathbf{M}$ does not play a crucial
role; the coupling between $\mathbf{Q}$ and $\mathbf{M}$ is determined by the
potential $f_{\varepsilon}$ and not the boundary conditions.
We can now complete the proof of our main result, Theorem 2.1.
###### Conclusion of the proof of Theorem 2.1, proof of Proposition 4.13.
From Proposition 4.14 and Proposition 4.19, we deduce
(4.103)
$\begin{split}&\mathbb{W}(a^{*}_{1},\,\ldots,\,a^{*}_{2\left|d\right|})+c_{\beta}\,\mathscr{H}^{1}(\mathrm{S}_{\mathbf{M}^{*}})+\int_{\Omega}(\xi_{*}-\kappa_{*})^{2}\,\mathrm{d}x+2\left|d\right|\gamma_{*}\\\
&\hskip 56.9055pt\leq\liminf_{\varepsilon\to
0}\big{(}\mathscr{F}_{\varepsilon}(\mathbf{Q}^{*}_{\varepsilon},\,\mathbf{M}^{*}_{\varepsilon})-2\pi\left|d\right|\left|\log\varepsilon\right|\big{)}\\\
&\hskip 56.9055pt\leq\limsup_{\varepsilon\to
0}\big{(}\mathscr{F}_{\varepsilon}(\mathbf{Q}^{*}_{\varepsilon},\,\mathbf{M}^{*}_{\varepsilon})-2\pi\left|d\right|\left|\log\varepsilon\right|\big{)}\\\
&\hskip
56.9055pt\leq\mathbb{W}(a_{1},\,\ldots,\,a_{2\left|d\right|})+c_{\beta}\,\mathbb{L}(a_{1},\,\ldots,\,a_{2\left|d\right|})+2\left|d\right|\gamma_{*}\end{split}$
for any $(2\left|d\right|)$-uple of distinct points $a_{1}$, …,
$a_{2\left|d\right|}$ in $\Omega$. In particular, choosing $a_{j}=a_{j}^{*}$,
we obtain
(4.104)
$\mathscr{H}^{1}(\mathrm{S}_{\mathbf{M}^{*}})=\mathbb{L}(a^{*}_{1},\,\ldots,\,a^{*}_{2\left|d\right|}),\qquad\xi_{*}=\kappa_{*}$
and Proposition 4.13 follows. Moreover, Proposition 4.15 and (4.104) imply
that the jump set $\mathrm{S}_{\mathbf{M}^{*}}$ coincides (up to negligible
sets) with $\cup_{j=1}^{\left|d\right|}L_{j}$, where
$(L_{1},\,\ldots,\,L_{\left|d\right|})$ is a minimal connection for
$(a^{*}_{1},\,\ldots,\,a^{*}_{2\left|d\right|})$. Finally, from (4.103) and (4.104) we
deduce
(4.105)
$\mathbb{W}_{\beta}(a^{*}_{1},\,\ldots,\,a^{*}_{2\left|d\right|})\leq\mathbb{W}_{\beta}(a_{1},\,\ldots,\,a_{2\left|d\right|})$
for any $(2\left|d\right|)$-uple of distinct points $a_{1}$, …,
$a_{2\left|d\right|}$ in $\Omega$ — that is,
$(a^{*}_{1},\,\ldots,\,a^{*}_{2\left|d\right|})$ minimises
$\mathbb{W}_{\beta}$. ∎
## 5 Numerics
In this section, we numerically compute some stable critical points of the
ferronematic free energy, on square domains with topologically non-trivial
Dirichlet boundary conditions for $\mathbf{Q}$ and $\mathbf{M}$. These
numerical results do not directly support our main results on global energy
minimizers of (2.1) in the $\varepsilon\to 0$ limit, since the numerically
computed critical points need not be global energy minimizers, and we expect
multiple local and global energy minimizers of (2.1) for $\varepsilon>0$.
Instead of solving the Euler-Lagrange equations directly, we solve a
$L^{2}$-gradient flow associated with the effective re-scaled free energy for
ferronematics (2.1), given by
(5.1)
$\frac{\mathrm{d}}{\mathrm{d}t}\mathscr{F}_{\varepsilon}(\mathbf{Q},\mathbf{M})=-\int_{\Omega}(\eta_{1}|\partial_{t}\mathbf{Q}|^{2}+\eta_{2}|\partial_{t}\mathbf{M}|^{2})\mathrm{d}{\bf
x}.$
Here $\eta_{1}>0$ and $\eta_{2}>0$ are arbitrary friction coefficients. Due to
limited physical data, we do not comment on physically relevant values of
$\varepsilon$, $\beta$ and the friction coefficients. The system of
$L^{2}$-gradient flow equations for $Q_{11}$, $Q_{12}$ and for the components
$M_{1}$, $M_{2}$ of the magnetisation vector can be written as
(5.2) $\begin{cases}&2\eta_{1}\,\partial_{t}Q_{11}=2\Delta Q_{11}-\frac{1}{\varepsilon^{2}}\left(4Q_{11}(Q_{11}^{2}+Q_{12}^{2}-1/2)-\beta\varepsilon(M_{1}^{2}-M_{2}^{2})\right)\\\
&2\eta_{1}\,\partial_{t}Q_{12}=2\Delta Q_{12}-\frac{1}{\varepsilon^{2}}\left(4Q_{12}(Q_{11}^{2}+Q_{12}^{2}-1/2)-2\beta\varepsilon M_{1}M_{2}\right)\\\
&\eta_{2}\,\partial_{t}M_{1}=\varepsilon\Delta M_{1}-\frac{1}{\varepsilon^{2}}\left(\varepsilon(M_{1}^{2}+M_{2}^{2}-1)M_{1}-\beta\varepsilon(2Q_{11}M_{1}+2Q_{12}M_{2})\right)\\\
&\eta_{2}\,\partial_{t}M_{2}=\varepsilon\Delta M_{2}-\frac{1}{\varepsilon^{2}}\left(\varepsilon(M_{1}^{2}+M_{2}^{2}-1)M_{2}-\beta\varepsilon(-2Q_{11}M_{2}+2Q_{12}M_{1})\right).\end{cases}$
The stationary (time-independent) solutions of the $L^{2}$-gradient flow
satisfy the Euler-Lagrange equations of (2.1). Non-convex free energies such
as (2.1) admit multiple critical points, many of which are unstable saddle
points [47]. One can efficiently compute stable critical points of such free
energies by following the associated $L^{2}$-gradient flow: for a given
initial condition, the flow converges to a stable critical point, thus
avoiding the unstable saddle points. From a numerical standpoint, the
$L^{2}$-gradient flow can also be more straightforward to solve than the
nonlinear coupled Euler-Lagrange equations, primarily because of the time
relaxation it introduces.
In the following simulations, we take $\eta_{1}=1$ and $\eta_{2}=\varepsilon$;
we do not offer a rigorous justification for these choices, which serve as
numerical experiments to qualitatively support our theoretical results. We
impose the continuous degree $+k$ boundary condition
(5.3) $\mathbf{M}_{b}=(\sqrt{2}\beta+1)^{1/2}(\cos k\theta,\sin
k\theta),\quad{\bf Q}_{b}=\sqrt{2}\begin{pmatrix}\frac{1}{2}\cos
2k\theta&\frac{1}{2}\sin 2k\theta\\\\[5.0pt] \frac{1}{2}\sin
2k\theta&-\frac{1}{2}\cos 2k\theta\\\ \end{pmatrix},$
where
(5.4) $\theta(x,y)={\rm
atan2}\left({y-0.5},\,{x-0.5}\right)-\pi/2,\quad(x,y)\in\partial\Omega.$
and ${\rm atan2}(y,x)$ is the two-argument arctangent, which computes the
principal value of the argument of the complex number $x+iy$; in particular,
$-\pi\leq{\rm atan2}(y,x)\leq\pi$, and if $x>0$, then ${\rm
atan2}(y,x)=\arctan\left(\frac{y}{x}\right)$. The initial condition is
prescribed to be
(5.5) $\mathbf{M}_{0}=(\sqrt{2}\beta+1)^{1/2}(\cos k\theta,\sin
k\theta),\quad{\bf Q}_{0}=\sqrt{2}\begin{pmatrix}\frac{1}{2}\cos
2k\theta&\frac{1}{2}\sin 2k\theta\\\\[5.0pt] \frac{1}{2}\sin
2k\theta&-\frac{1}{2}\cos 2k\theta\\\ \end{pmatrix},$
where
(5.6) $\theta(x,y)={\rm
atan2}\left({y-0.5},\,{x-0.5}\right)-\pi/2,\quad(x,y)\in(0,1)^{2}.$
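For concreteness, the degree-$+k$ initial data (5.5)-(5.6) can be assembled on a uniform grid in a few lines. The NumPy sketch below is a minimal illustration (array names are ours), with the grid spacing $h=1/50$ used in the computations that follow:

```python
import numpy as np

h, k, beta = 1 / 50, 1, 1.0          # grid spacing, boundary degree, coupling
n = int(round(1 / h)) + 1
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Angle field (5.4)/(5.6); np.arctan2 is exactly atan2 as described above.
theta = np.arctan2(Y - 0.5, X - 0.5) - np.pi / 2

# Initial magnetisation (5.5), with amplitude (sqrt(2)*beta + 1)^(1/2).
amp = np.sqrt(np.sqrt(2) * beta + 1)
M1 = amp * np.cos(k * theta)
M2 = amp * np.sin(k * theta)

# Initial Q-tensor (5.5): only Q11 and Q12 are independent, since Q is
# symmetric and trace-free (Q22 = -Q11, Q21 = Q12).
Q11 = (np.sqrt(2) / 2) * np.cos(2 * k * theta)
Q12 = (np.sqrt(2) / 2) * np.sin(2 * k * theta)
```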
We solve the $L^{2}$-gradient flow equation using standard central finite
difference methods [33]. For the temporal discretization, we employ a second-
order Crank-Nicolson method [33]. The grid size and temporal step size are
denoted by $h$ and $\tau$, respectively. In all our computations, we set
$h=1/50$ and $\tau=1/1000$.
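Continuing the NumPy sketch above, the structure of system (5.2) can be prototyped with a standard 5-point Laplacian. For simplicity, the sketch below uses an explicit Euler update rather than the Crank-Nicolson scheme we actually employ, so it needs a much smaller time step than $\tau=1/1000$ for stability; it illustrates the right-hand side of (5.2), not our solver:

```python
eps = 0.05                 # interaction parameter epsilon
eta1, eta2 = 1.0, eps      # friction coefficients used in this section
tau = 1e-5                 # explicit Euler needs tau <~ h**2/4; the paper's
                           # tau = 1/1000 is only viable with Crank-Nicolson

def lap(U):
    """Standard 5-point Laplacian on the interior grid nodes."""
    return (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2]
            - 4.0 * U[1:-1, 1:-1]) / h**2

def euler_step(Q11, Q12, M1, M2):
    """One time step of (5.2); boundary nodes keep their Dirichlet data."""
    q11, q12, m1, m2 = (U[1:-1, 1:-1] for U in (Q11, Q12, M1, M2))
    s = q11**2 + q12**2 - 0.5
    r = m1**2 + m2**2 - 1.0
    dQ11 = (2 * lap(Q11) - (4 * q11 * s - beta * eps * (m1**2 - m2**2)) / eps**2) / (2 * eta1)
    dQ12 = (2 * lap(Q12) - (4 * q12 * s - 2 * beta * eps * m1 * m2) / eps**2) / (2 * eta1)
    dM1 = (eps * lap(M1) - (eps * r * m1 - beta * eps * (2 * q11 * m1 + 2 * q12 * m2)) / eps**2) / eta2
    dM2 = (eps * lap(M2) - (eps * r * m2 - beta * eps * (-2 * q11 * m2 + 2 * q12 * m1)) / eps**2) / eta2
    Q11[1:-1, 1:-1] += tau * dQ11
    Q12[1:-1, 1:-1] += tau * dQ12
    M1[1:-1, 1:-1] += tau * dM1
    M2[1:-1, 1:-1] += tau * dM2

for _ in range(1000):      # relax towards a locally stable critical point
    euler_step(Q11, Q12, M1, M2)
```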
Figure 1: Numerical results for the gradient flows (5.2) with (a)
$\beta=1,\varepsilon=0.05$ at $t=0.02$, $0.05$ and $1$ and (b)
$\beta=1,\varepsilon=0.02$ at $t=0.02$, $0.05$ and $1$ (continuous degree $+1$
boundary condition, $h=1/50$, $\tau=1/1000$). In each sub-figure, the nematic
configuration is shown in the left panel, where the white bars represent the
nematic field ${\bf n}$ (the eigenvector of $\mathbf{Q}$ associated with the
largest eigenvalue) and the color represents
$\operatorname{tr}\mathbf{Q}^{2}=2(Q_{11}^{2}+Q_{12}^{2})$; the
$\mathbf{M}$-profile is shown in the right panel, where the white bars
represent the magnetic field ${\bf M}$ and the color represents $|{\bf
M}|^{2}=M_{1}^{2}+M_{2}^{2}$.

Figure 2: Numerical results for the gradient flows (5.2) with (a)
$\beta=1,\varepsilon=0.05$ at $t=0.02$, $0.05$ and $1$ and (b)
$\beta=1,\varepsilon=0.02$ at $t=0.02$, $0.05$ and $1$ (continuous degree $+2$
boundary condition, $h=1/50$, $\tau=1/1000$). The panels are organised as in
Figure 1.
In Figure 1, we plot the dynamical evolution of the solutions of the gradient
flow equations, for $k=1$ boundary conditions, with the initial condition
(5.5). The time-dependent solutions converge for $t\geq 1$, and we treat the
numerical solution at $t=1$ as the converged equilibrium state. We cannot
conclusively argue that the converged solution is an energy minimizer, but it
is locally stable; the converged $\mathbf{Q}$-profile has two non-orientable
defects, and the corresponding $\mathbf{M}$-profile has a jump set consisting
of a straight line connecting the nematic defect pair, consistent with our
theoretical results on global energy minimizers. We consider two different
values of $\varepsilon$, and it is clear that the $\mathbf{Q}$-defects and the
jump set in $\mathbf{M}$ become more localised as $\varepsilon$ becomes
smaller, as expected from the theoretical results. We have also investigated
the effect of $\beta$ on the converged solutions: the defects move closer
together as $\beta$ increases. This is expected, since the cost of the minimal
connection between the nematic defects increases with $\beta$, and shorter
connections require the defects to be closer to each other (at least in a
pairwise sense).
In Figure 2, we plot the dynamical evolution of the solutions of the gradient
flow equations, for $k=2$ boundary conditions, with the initial condition
(5.5), and we treat the numerical solution at $t=1$ to be the converged
equilibrium state. Again, the converged solution is locally stable, the
$\mathbf{Q}$-profile has four non-orientable defects,the $\mathbf{M}$-profile
has two distinct jump sets connecting two pairs of non-orientable nematic
defects, and the jump sets are indeed approximately straight lines. Smaller
values of $\epsilon$ correspond to the sharp interface limit which induces
more localised defects for $\mathbf{Q}$, straighter line defects for
$\mathbf{M}$ and larger values of $\beta$ push the defects closer together,
all in qualitative agreement with our theoretical results.
Theorem 2.1 is restricted to global minimizers of (2.1) in the $\varepsilon\to 0$
limit, but the numerical illustrations in Figures 1 and 2 suggest that Theorem
2.1 may also partially apply to local energy minimizers of (2.1). In other
words, locally energy minimizing pairs,
$(\mathbf{Q}_{\varepsilon},\mathbf{M}_{\varepsilon})$, may also converge to a
pair $(\mathbf{Q}^{*},\mathbf{M}^{*})$, for which $\mathbf{Q}^{*}$ is a
canonical harmonic map with non-orientable point defects and $\mathbf{M}^{*}$
has a jump set connecting the non-orientable point defects of
$\mathbf{Q}^{*}$, with the location of the defects being prescribed by the
critical point(s) of the normalization energy in Theorem 2.1. The numerical
illustrations in Figures 1 and 2 cannot be directly related to Theorem 2.1,
since we have only considered two small, non-zero values of $\varepsilon$;
moreover, for a fixed $\beta>0$, there may be multiple local and global energy
minimizers with different jump sets in $\mathbf{M}$, i.e. different choices of
the minimal connection of equal length, or different connections of different lengths
between the nematic defect pairs. For example, it is conceivable that a
locally stable $\mathbf{M}$-profile also connects the nematic defects by means
of straight lines, but this connection is not minimal. There may also be non
energy-minimising critical points with orientable point defects in
$\mathbf{M}$ tailored by the non-orientable nematic defects. Similarly, there
may be non energy-minimising critical points with non-orientable and
orientable nematic defects, whose locations are not minimisers but critical
points of the modified renormalised energy in Theorem 2.1. We defer these
interesting questions to future work.
## 6 Conclusions
We study a simplified model for ferronematics in two-dimensional domains, with
Dirichlet boundary conditions, building on previous work in [14]. The model is
only valid for dilute ferronematic suspensions and we do not expect
quantitative agreement with experiments. Further, the experimentally relevant
choices for the boundary conditions for $\mathbf{M}$ are not well established
and our methods can be adapted to other choices of boundary conditions e.g.
Neumann conditions for the magnetisation vector. Similarly, it is not clear if
topologically non-trivial Dirichlet conditions can be imposed on the nematic
directors, for physically relevant experimental scenarios. Having said that,
our model problem is a fascinating mathematical problem because of the
tremendous complexity of ferronematic solution landscapes, the multiplicity of
the energy minimizers and non energy-minimizing critical points, and the
multitude of admissible coupled defect profiles for the nematic and magnetic
profiles. There are several promising future research directions, some of
which could facilitate experimental observations of the morphologies
theoretically predicted in this manuscript. For example, one could study the
experimentally relevant generalisation of our model problem with Dirichlet
conditions for $\mathbf{Q}$ and Neumann conditions for $\mathbf{M}$, or study
different asymptotic limits of the ferronematic free energy in (1.1), a prime
candidate being the $\varepsilon\to 0$ limit for fixed $\xi$ and $c_{0}$
(independent of $\varepsilon$). This limit, although relevant for dilute
suspensions, would significantly change the vacuum manifold $\mathscr{N}$ in
the $\varepsilon\to 0$ limit. In fact, we expect to observe stable point
defects in the energy-minimizing $\mathbf{M}$-profiles for this limit, where
$\xi$ and $c_{0}$ are independent of $\varepsilon$, as $\varepsilon\to 0$.
Further, there is the interesting question of how this ferronematic model can
be generalised to non-dilute suspensions, and of how to build a catalogue of
magneto-nematic coupling energies for different kinds of MNP-MNP and MNP-NLC
interactions. The physics of ferronematics is complex, and it is challenging
to translate it into tractable mathematical problems with multiple order
parameters; we hope that our work is solid progress in this direction, with
bright interdisciplinary prospects.
Taxonomy: GC, BS and AM conceived the project based on a model developed by AM
and her ex-collaborators. GC and BS led the analysis, followed by AM. YW
performed the numerical simulations, as advised by AM and GC. All authors
contributed to the scientific writing.
Acknowledgements: GC, BS and AM gratefully acknowledge support from the CIRM-
FBK (Trento) Research in Pairs grant awarded in 2019, when this collaboration
was initiated. GC, BS and AM gratefully acknowledge support from an ICMS
Research in Groups grant awarded in 2020, which supported the completion of
this project and submission of this manuscript. AM gratefully acknowledges the
hospitality provided by the University of Verona in December 2019, GC
gratefully acknowledges the hospitality provided by the University Federico II
(Naples) under the PRIN project 2017TEXA3H, and AM, BS and GC gratefully
acknowledge support from the Erwin Schrödinger Institute in Vienna in December
2019, all of which facilitated this collaboration. AM acknowledges support
from the Leverhulme Trust and the University of Strathclyde New Professor’s
Fund. We thank the referee for their careful reading of the manuscript and
comments.
Data Availability Statement: Data sharing not applicable to this article as no
datasets were generated or analysed during the current study.
Conflict of interest statement. The authors have no competing interests to
declare that are relevant to the content of this article.
## Appendix A Lifting of a map with non-orientable singularities
The aim of this section is to prove Proposition 4.15. We reformulate the
problem in a slightly more general setting.
Let $a\in\mathbb{R}^{2}$, and let $\mathbf{Q}\in
W^{1,2}_{\operatorname{loc}}(\mathbb{R}^{2}\setminus\\{a\\},\,\mathscr{N})$.
By Fubini's theorem and the Sobolev embedding, the restriction of $\mathbf{Q}$
to the circle $\partial B_{\rho}(a)$ is well-defined and continuous for a.e.
$\rho>0$. Therefore, it makes sense to define the topological degree of
$\mathbf{Q}$ on $\partial B_{\rho}(a)$ as a half-integer,
$\deg(\mathbf{Q},\,a)\in\frac{1}{2}\mathbb{Z}$. As the notation suggests, the
degree is independent of the choice of $\rho$: for a.e. $0<\rho_{1}<\rho_{2}$,
the degrees of $\mathbf{Q}$ on $\partial B_{\rho_{1}}(a)$ and $\partial
B_{\rho_{2}}(a)$ are the same. If $\mathbf{Q}$ is smooth, this is a
consequence of the homotopy lifting property; for more general $\mathbf{Q}\in
W^{1,2}_{\operatorname{loc}}(\mathbb{R}^{2}\setminus\\{a\\},\,\mathscr{N})$,
this follows from an approximation argument (based on [44, Proposition p.
267]). We will say that $a$ is a non-orientable singularity of $\mathbf{Q}$ if
$\deg(\mathbf{Q},\,a)\in\frac{1}{2}\mathbb{Z}\setminus\mathbb{Z}$.
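To make the half-integer degree concrete: writing $\mathbf{Q}=\sqrt{2}\,(\mathbf{m}\otimes\mathbf{m}-\mathbf{I}/2)$ with $\mathbf{m}=(\cos\phi,\,\sin\phi)$ gives $(Q_{11},Q_{12})=\frac{\sqrt{2}}{2}(\cos 2\phi,\,\sin 2\phi)$, so the winding number of $(Q_{11},Q_{12})$ along $\partial B_{\rho}(a)$ equals $2\deg(\mathbf{Q},\,a)$. The following Python sketch (an illustration of ours, not part of the analysis) reads the degree off numerically by accumulating this angle and dividing by $4\pi$:

```python
import numpy as np

def q_degree(a, rho, q11, q12, n=720):
    """Degree of Q at a: winding of (Q11, Q12) on the circle of radius rho
    around a, i.e. the swept angle divided by 4*pi."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    x, y = a[0] + rho * np.cos(t), a[1] + rho * np.sin(t)
    ang = np.arctan2(q12(x, y), q11(x, y))
    # wrap consecutive angle increments into (-pi, pi] before summing
    swept = np.sum(np.mod(np.diff(ang) + np.pi, 2.0 * np.pi) - np.pi)
    return swept / (4.0 * np.pi)

# A degree +1/2 field: the line-field angle is theta/2, so
# (Q11, Q12) = (cos(theta), sin(theta)) / sqrt(2) winds exactly once.
theta = lambda x, y: np.arctan2(y, x)
print(q_degree((0.0, 0.0), 1.0,
               lambda x, y: np.cos(theta(x, y)) / np.sqrt(2),
               lambda x, y: np.sin(theta(x, y)) / np.sqrt(2)))  # ~0.5
```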
Given an open set $\Omega\subseteq\mathbb{R}^{2}$, a map
$\mathbf{Q}\colon\Omega\to\mathscr{N}$ and a unit vector field
$\mathbf{M}\colon\Omega\to\SS^{1}$, we say that $\mathbf{M}$ is a lifting for
$\mathbf{Q}$ if
(A.1)
$\mathbf{Q}(x)=\sqrt{2}\left(\mathbf{M}(x)\otimes\mathbf{M}(x)-\frac{\mathbf{I}}{2}\right)\qquad\textrm{for
a.e. }x\in\Omega.$
Any map $\mathbf{Q}\in\operatorname{BV}(\Omega,\,\mathscr{N})$ admits a
lifting $\mathbf{M}\in\operatorname{BV}(\Omega,\,\SS^{1})$ (see e.g. [32]).
The vector field $\mathbf{M}^{*}$ given by Theorem 2.1 is not a lifting of
$\mathbf{Q}^{*}$, according to the definition above, because
$\left|\mathbf{M}^{*}\right|\neq 1$. However, $\left|\mathbf{M}^{*}\right|$ is
still a positive constant (see Proposition 4.11), so we can construct a
lifting of unit-norm simply by rescaling.
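As a quick sanity check of (A.1), offered purely as an illustration: for any unit vector $\mathbf{M}$, the matrix $\sqrt{2}\left(\mathbf{M}\otimes\mathbf{M}-\frac{\mathbf{I}}{2}\right)$ is symmetric, trace-free, of unit Frobenius norm, and unchanged when $\mathbf{M}$ is replaced by $-\mathbf{M}$:

```python
import numpy as np

phi = 0.7
M = np.array([np.cos(phi), np.sin(phi)])
Q = np.sqrt(2) * (np.outer(M, M) - np.eye(2) / 2)

print(np.trace(Q))               # 0: trace-free
print(np.linalg.norm(Q))         # 1: unit Frobenius norm
Q_minus = np.sqrt(2) * (np.outer(-M, -M) - np.eye(2) / 2)
print(np.allclose(Q, Q_minus))   # True: Q does not see the lifting sign
```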
We focus on properties of the lifting for $\mathbf{Q}$-tensors of a particular
form; namely, we assume that $\mathbf{Q}$ has an even number of non-orientable
singularities, at distinct points $a_{1}$, …, $a_{2d}$. We recall that a
_connection_ for $\\{a_{1},\,\ldots,\,a_{2d}\\}$ is a finite collection of
straight line segments $\\{L_{1},\,\ldots,\,L_{d}\\}$, with endpoints in
$\\{a_{1},\,\ldots,\,a_{2d}\\}$, such that each $a_{i}$ is an endpoint of
exactly one of the segments $L_{j}$. We recall that
(A.2)
$\mathbb{L}(a_{1},\,\ldots,\,a_{2d}):=\min\left\\{\sum_{i=1}^{d}\mathscr{H}^{1}(L_{i})\colon\\{L_{1},\,\ldots,\,L_{d}\\}\textrm{
is a connection for }\\{a_{1},\,\ldots,\,a_{2d}\\}\right\\}\\!.$
A minimal connection for $\\{a_{1},\,\ldots,\,a_{2d}\\}$ is a connection that
attains the minimum in the right-hand side of (A.2). Given two sets $A$, $B$,
we denote their symmetric difference as $A\Delta B:=(A\setminus
B)\cup(B\setminus A)$.
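Computationally, evaluating $\mathbb{L}(a_{1},\,\ldots,\,a_{2d})$ is a minimum-weight perfect matching problem with Euclidean weights; for the small values of $d$ arising here, brute-force recursion over the $(2d-1)!!$ pairings suffices. A small Python sketch, for illustration only (the function name is ours):

```python
import math

def min_connection(points):
    """Minimal total length of a pairing (connection) of an even point set."""
    if not points:
        return 0.0, []
    a, rest = points[0], points[1:]
    best_len, best_pairs = math.inf, None
    for i, b in enumerate(rest):
        sub_len, sub_pairs = min_connection(rest[:i] + rest[i + 1:])
        total = math.dist(a, b) + sub_len
        if total < best_len:
            best_len, best_pairs = total, [(a, b)] + sub_pairs
    return best_len, best_pairs

pts = [(0, 0), (1, 0), (0, 3), (1, 3)]
length, segments = min_connection(pts)
print(length)   # 2.0 -- pairs (0,0)-(1,0) and (0,3)-(1,3)
```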
###### Proposition A.1.
Let $\Omega\subseteq\mathbb{R}^{2}$ be a bounded, convex domain, let $d\geq 1$
be an integer, and let $a_{1}$, …, $a_{2d}$ be distinct points in $\Omega$.
Let $\mathbf{Q}\in W^{1,1}(\Omega,\,\mathscr{N})\cap
W^{1,2}_{\operatorname{loc}}(\Omega\setminus\\{a_{1},\,\ldots,a_{2d}\\},\,\mathscr{N})$
be a map with a non-orientable singularity at each $a_{j}$. If
$\mathbf{M}\in\operatorname{SBV}(\Omega,\,\SS^{1})$ is a lifting for
$\mathbf{Q}$ such that $\mathrm{S}_{\mathbf{M}}\subset\\!\subset\Omega$, then
$\mathscr{H}^{1}(\mathrm{S}_{\mathbf{M}})\geq\mathbb{L}(a_{1},\,\ldots,\,a_{2d}).$
Equality holds if and only if there exists a minimal connection
$\\{L_{1},\,\ldots,\,L_{d}\\}$ for $\\{a_{1},\,\ldots,\,a_{2d}\\}$ such that
$\mathscr{H}^{1}(\mathrm{S}_{\mathbf{M}}\Delta\cup_{j=1}^{d}L_{j})=0$.
Proposition 4.15 is an immediate consequence of Proposition A.1. The proof of
Proposition A.1 is based on classical results in Geometric Measure Theory, but
we provide it in full detail for the reader’s convenience. Before we prove
Proposition A.1, we state a few preliminary results.
###### Lemma A.2.
If $\\{L_{1},\,\ldots,\,L_{d}\\}$ is a minimal connection for
$\\{a_{1},\,\ldots,\,a_{2d}\\}$, then the $L_{j}$’s are pairwise disjoint.
###### Proof.
Suppose, towards a contradiction, that $\\{L_{1},\,\ldots,\,L_{d}\\}$ is a
minimal connection with $L_{1}\cap L_{2}\neq\emptyset$. The intersection
$L_{1}\cap L_{2}$ must be either a non-degenerate sub-segment of both $L_{1}$
and $L_{2}$ or a point. If $L_{1}\cap L_{2}$ is non-degenerate, then
$(L_{1}\cup L_{2})\setminus(L_{1}\cap L_{2})$ can be written as the disjoint
union of two straight line segments, $K_{1}$ and $K_{2}$, and
$\mathscr{H}^{1}(K_{1})+\mathscr{H}^{1}(K_{2})=\mathscr{H}^{1}((L_{1}\cup L_{2})\setminus(L_{1}\cap L_{2}))<\mathscr{H}^{1}(L_{1})+\mathscr{H}^{1}(L_{2}).$
This contradicts the minimality of $\\{L_{1},\,\ldots,\,L_{d}\\}$. Now,
suppose that $L_{1}\cap L_{2}$ is a point. By the pigeon-hole principle,
$L_{1}\cap L_{2}$ cannot be an endpoint for either $L_{1}$ or $L_{2}$. Say,
for instance, that $L_{1}$ is the segment of endpoints $a_{1}$, $a_{2}$, while
$L_{2}$ is the segment of endpoints $a_{3}$, $a_{4}$. Let $H_{1}$, $H_{2}$ be
the segments of endpoints $(a_{1},\,a_{3})$, $(a_{2},\,a_{4})$ respectively.
Then, by the triangular inequality,
$\mathscr{H}^{1}(H_{1})+\mathscr{H}^{1}(H_{2})<\mathscr{H}^{1}(L_{1})+\mathscr{H}^{1}(L_{2}),$
which contradicts again the minimality of $\\{L_{1},\,\ldots,L_{d}\\}$. ∎
###### Lemma A.3.
Let $\Omega\subseteq\mathbb{R}^{2}$ be a bounded, convex domain and let
$a_{1}$, …, $a_{2d}$ be distinct points in $\Omega$. Let $\mathbf{Q}\in
W^{1,1}(\Omega,\,\mathscr{N})\cap
W^{1,2}_{\operatorname{loc}}(\Omega\setminus\\{a_{1},\,\ldots,a_{2d}\\},\,\mathscr{N})$
be a map with a non-orientable singularity at each $a_{j}$. If
$\\{L_{1},\,\ldots,\,L_{d}\\}$ is a minimal connection for
$\\{a_{1},\,\ldots,\,a_{2d}\\}$, then there exists a lifting
$\mathbf{M}^{*}\in\operatorname{SBV}(\Omega,\,\SS^{1})$ such that
$\mathscr{H}^{1}(\mathrm{S}_{\mathbf{M}^{*}}\Delta\cup_{j=1}^{d}L_{j})=0$.
###### Proof.
For any $\rho>0$ and $j\in\\{1,\,\ldots,\,d\\}$, we define
$U_{j,\rho}:=\left\\{x\in\mathbb{R}^{2}\colon\operatorname{dist}(x,\,L_{j})<\rho\right\\}\\!.$
and
$\Omega_{\rho}:=\Omega\setminus\bigcup_{j=1}^{d}U_{j,\rho}.$
Since $\Omega$ is convex, $L_{j}\subseteq\Omega$ for any $j$ and hence,
$U_{j,\rho}\subseteq\Omega$ for any $j$ and $\rho$ small enough. Each
$U_{j,\rho}$ is a simply connected domain with piecewise smooth boundary.
Moreover, for $\rho$ fixed and small, the sets $U_{j,\rho}$ are pairwise
disjoint, because the $L_{j}$’s are pairwise disjoint (Lemma A.2). The trace
of $\mathbf{Q}$ on $\partial U_{j,\rho}$ is orientable, because $U_{j,\rho}$
encloses exactly two non-orientable singularities of $\mathbf{Q}$, whose
half-integer degrees sum to an integer.
Then, for any $\rho>0$ small enough, $\mathbf{Q}_{|\Omega_{\rho}}$ has a
lifting $\mathbf{M}^{*}_{\rho}\in W^{1,2}(\Omega_{\rho},\,\SS^{1})$ [8,
Proposition 7]. In fact, the lifting is unique up to the choice of the sign
[8, Proposition 2]; in particular, if $0<\rho_{1}<\rho_{2}$ then we have
either $\mathbf{M}^{*}_{\rho_{2}}=\mathbf{M}^{*}_{\rho_{1}}$ a.e. in
$\Omega_{\rho_{2}}$ or $\mathbf{M}^{*}_{\rho_{2}}=-\mathbf{M}^{*}_{\rho_{1}}$
a.e. in $\Omega_{\rho_{2}}$. As a consequence, for any sequence
$\rho_{k}\searrow 0$, we can choose liftings $\mathbf{M}^{*}_{\rho_{k}}\in
W^{1,2}(\Omega_{\rho_{k}},\,\SS^{1})$ of $\mathbf{Q}^{*}_{|\Omega_{\rho_{k}}}$
in such a way that $\mathbf{M}^{*}_{\rho_{k+1}}=\mathbf{M}^{*}_{\rho_{k}}$
a.e. in $\Omega_{\rho_{k}}$. By glueing the $\mathbf{M}^{*}_{\rho_{k}}$’s, we
obtain a lifting
$\mathbf{M}^{*}\in
W^{1,2}_{\operatorname{loc}}(\Omega\setminus\cup_{j}L_{j},\,\SS^{1})$
of $\mathbf{Q}$. By differentiating the identity (A.1), we obtain
$\sqrt{2}\left|\nabla\mathbf{M}^{*}\right|=\left|\nabla\mathbf{Q}\right|$ a.e.
and, since $\nabla\mathbf{Q}\in
L^{1}(\Omega,\,\mathbb{R}^{2}\otimes\mathbb{R}^{2\times 2})$ by assumption, we
deduce that $\mathbf{M}^{*}\in
W^{1,1}(\Omega\setminus\cup_{j}L_{j},\,\SS^{1})$. The set $\cup_{j}L_{j}$ has
finite length and $\mathbf{M}^{*}$ is bounded, so we also have
$\mathbf{M}^{*}\in\operatorname{SBV}(\Omega,\,\SS^{1})$ (see [3, Proposition
4.4]).
By construction, we have $\mathrm{S}_{\mathbf{M}^{*}}\subseteq\cup_{j}L_{j}$.
Therefore, it only remains to prove that $\mathrm{S}_{\mathbf{M}^{*}}$
contains $\mathscr{H}^{1}$-almost all of $\cup_{j}L_{j}$. Consider, for
instance, the segment $L_{1}$; up to a rotation and a translation, we can assume
that $L_{1}=[0,\,b]\times\\{0\\}$ for some $b>0$. Given a small parameter
$\rho>0$ and $t\in(0,\,b)$, we define
$K_{\rho,t}:=(-\rho,\,t)\times(-\rho,\,\rho)$. Fubini's theorem implies that,
for a.e. $\rho$ and $t$, $\mathbf{Q}$ restricted to $\partial K_{\rho,t}$
belongs to $W^{1,2}(\partial K_{\rho,t},\,\mathscr{N})$ and hence, by Sobolev
embedding, is continuous. Since the segments $L_{j}$ are pairwise disjoint by
Lemma A.2, for $\rho$ small enough there is exactly one non-orientable
singularity of $\mathbf{Q}$ inside $K_{\rho,t}$. Therefore, $\mathbf{Q}$ is
non-orientable on $\partial K_{\rho,t}$ for a.e. $t\in(0,\,b)$ and a.e.
$\rho>0$ small enough; in particular, there is no continuous lifting of
$\mathbf{Q}$ on $\partial K_{\rho,t}$. Since $\mathbf{M}^{*}$ is continuous on
$\partial K_{\rho,t}\setminus L_{1}$ for a.e. $\rho$ and $t$, we conclude that
$\mathrm{S}_{\mathbf{M}^{*}}$ contains $\mathscr{H}^{1}$-almost all of
$L_{1}$. ∎
Given a countably $1$-rectifiable set $\Sigma\subseteq\mathbb{R}^{2}$ and a
$\mathscr{H}^{1}$-measurable unit vector field
${\boldsymbol{\tau}}\colon\Sigma\to\SS^{1}$, we say that ${\boldsymbol{\tau}}$
is an orientation for $\Sigma$ if ${\boldsymbol{\tau}}(x)$ spans the
(approximate) tangent line of $\Sigma$ at $x$, for $\mathscr{H}^{1}$-a.e.
$x\in\Sigma$. In case $\Sigma$ is the jump set of an $\operatorname{SBV}$-map
$\mathbf{M}$, ${\boldsymbol{\tau}}\colon\mathrm{S}_{\mathbf{M}}\to\SS^{1}$ is
an orientation for $\mathrm{S}_{\mathbf{M}}$ if and only if
${\boldsymbol{\tau}}(x)\cdot{\boldsymbol{\nu}}_{\mathbf{M}}(x)=0$ for
$\mathscr{H}^{1}$-a.e. $x\in\mathrm{S}_{\mathbf{M}}$.
###### Lemma A.4.
Let $\Omega\subseteq\mathbb{R}^{2}$ be a bounded, convex domain and let
$a_{1}$, …, $a_{2d}$ be distinct points in $\Omega$. Let $\mathbf{Q}\in
W^{1,1}(\Omega,\,\mathscr{N})\cap
W^{1,2}_{\operatorname{loc}}(\Omega\setminus\\{a_{1},\,\ldots,a_{2d}\\},\,\mathscr{N})$
be a map with a non-orientable singularity at each $a_{j}$. Let
$\\{L_{1},\,\ldots,\,L_{d}\\}$ be a minimal connection for
$\\{a_{1},\,\ldots,\,a_{2d}\\}$. Up to relabelling, we assume that $L_{j}$ is
the segment of endpoints $a_{2j-1}$, $a_{2j}$, for any
$j\in\\{1,\,\ldots,\,d\\}$. Let
$\mathbf{M}\in\operatorname{SBV}(\Omega,\,\SS^{1})$ be a lifting for
$\mathbf{Q}$ such that $\mathrm{S}_{\mathbf{M}}\subset\\!\subset\Omega$. Then,
there exist $\mathscr{H}^{1}$-measurable sets $T_{j}\subseteq L_{j}$ and an
orientation ${\boldsymbol{\tau}}_{\mathbf{M}}$ for $\mathrm{S}_{\mathbf{M}}$
such that, for any $\varphi\in C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2})$, there
holds
$\int_{\mathrm{S}_{\mathbf{M}}}\nabla\varphi\cdot{\boldsymbol{\tau}}_{\mathbf{M}}\,\mathrm{d}\mathscr{H}^{1}=\sum_{j=1}^{d}\left(\varphi(a_{2j-1})-\varphi(a_{2j})\right)-2\sum_{j=1}^{d}\int_{T_{j}}\nabla\varphi\cdot\frac{a_{2j-1}-a_{2j}}{\left|a_{2j-1}-a_{2j}\right|}\,\mathrm{d}\mathscr{H}^{1}.$
###### Proof.
Let $\mathbf{M}^{*}\in\operatorname{SBV}(\Omega,\,\SS^{1})$ be the lifting of
$\mathbf{Q}$ given by Lemma A.3. By construction,
$\mathrm{S}_{\mathbf{M}^{*}}$ coincides with
$\cup_{j}L_{j}\subset\\!\subset\Omega$ up to $\mathscr{H}^{1}$-negligible
sets. Since we have assumed that
$\mathrm{S}_{\mathbf{M}}\subset\\!\subset\Omega$, there exists a neighbourhood
$U\subseteq\overline{\Omega}$ of $\partial\Omega$ in $\overline{\Omega}$ such
that $\mathbf{M}\in W^{1,1}(U,\,\SS^{1})$, $\mathbf{M}^{*}\in
W^{1,1}(U,\,\SS^{1})$. A map that belongs to $W^{1,1}(U,\,\mathscr{N})$ has at
most two different liftings in $W^{1,1}(U,\,\SS^{1})$, which differ only by a
sign [8, Proposition 2]. Therefore, since both $\mathbf{M}$ and
$\mathbf{M}^{*}$ are liftings of $\mathbf{Q}$ in $U$, we have that either
$\mathbf{M}=\mathbf{M}^{*}$ a.e. in $U$ or $\mathbf{M}=-\mathbf{M}^{*}$ a.e.
in $U$. Changing the sign of $\mathbf{M}^{*}$ if necessary, we can assume that
$\mathbf{M}=-\mathbf{M}^{*}$ a.e. in $U$. Then, the set
$A:=\\{x\in\Omega\colon\mathbf{M}(x)\cdot\mathbf{M}^{*}(x)=1\\}$
is compactly contained in $\Omega$.
The Leibniz rule for BV-functions (see e.g. [3, Example 3.97]) implies that
$\mathbf{M}\cdot\mathbf{M}^{*}\in\operatorname{SBV}(\Omega;\,\\{-1,\,1\\})$. As
a consequence, $A$ has finite perimeter in $\Omega$ (see e.g. [3, Theorem
3.40]); since $A\subset\\!\subset\Omega$, $A$ has also finite perimeter in
$\mathbb{R}^{2}$. By the Gauss-Green formula (see e.g. [3, Theorem 3.36, Eq.
(3.47)]), for any $\varphi\in C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2})$ we have
(A.3)
$0=\int_{A}\operatorname{curl}\nabla\varphi=\int_{\partial^{*}A}\nabla\varphi\cdot{\boldsymbol{\tau}}_{A}\,\mathrm{d}\mathscr{H}^{1},$
where $\partial^{*}A$ is the reduced boundary of $A$ and
${\boldsymbol{\tau}}_{A}$ is an orientation for $\partial^{*}A$. Up to
$\mathscr{H}^{1}$-negligible sets, $\partial^{*}A$ coincides with
$\mathrm{S}_{\mathbf{M}\cdot\mathbf{M}^{*}}$ (see e.g. [3, Example 3.68 and
Theorem 3.61]). By the Leibniz rule for BV-functions,
$\mathrm{S}_{\mathbf{M}\cdot\mathbf{M}^{*}}$ coincides with
$\mathrm{S}_{\mathbf{M}}\Delta\mathrm{S}_{\mathbf{M}^{*}}$ up to
$\mathscr{H}^{1}$-negligible sets, so
(A.4)
$\mathscr{H}^{1}\left(\partial^{*}A\,\Delta\,(\mathrm{S}_{\mathbf{M}}\Delta\cup_{j=1}^{d}L_{j})\right)=0.$
For any $j\in\\{1,\,\ldots,\,d\\}$, let
${\boldsymbol{\tau}}_{j}:=(a_{2j-1}-a_{2j})/|a_{2j-1}-a_{2j}|$. We define an
orientation ${\boldsymbol{\tau}}_{\mathbf{M}}$ for $\mathrm{S}_{\mathbf{M}}$
as ${\boldsymbol{\tau}}_{\mathbf{M}}:={\boldsymbol{\tau}}_{A}$ on
$\mathrm{S}_{\mathbf{M}}\setminus(\cup_{j}L_{j})$ (observing that, by (A.4),
$\mathscr{H}^{1}$-almost all of
$\mathrm{S}_{\mathbf{M}}\setminus(\cup_{j}L_{j})$ is contained in
$\partial^{*}A$) and
${\boldsymbol{\tau}}_{\mathbf{M}}:={\boldsymbol{\tau}}_{j}$ on
$\mathrm{S}_{\mathbf{M}}\cap L_{j}$, for any $j$. Then, (A.3) and (A.4) imply
(A.5)
$\begin{split}\int_{\mathrm{S}_{\mathbf{M}}}\nabla\varphi\cdot{\boldsymbol{\tau}}_{\mathbf{M}}\,\mathrm{d}\mathscr{H}^{1}-\sum_{j=1}^{d}\int_{L_{j}}\nabla\varphi\cdot{\boldsymbol{\tau}}_{j}\,\mathrm{d}\mathscr{H}^{1}+\sum_{j=1}^{d}\int_{L_{j}\setminus\mathrm{S}_{\mathbf{M}}}(1+{\boldsymbol{\tau}}_{A}\cdot{\boldsymbol{\tau}}_{j})\nabla\varphi\cdot{\boldsymbol{\tau}}_{j}\,\mathrm{d}\mathscr{H}^{1}=0.\end{split}$
On $\mathscr{H}^{1}$-almost all of $L_{j}\setminus\mathrm{S}_{\mathbf{M}}$,
both ${\boldsymbol{\tau}}_{j}$ and ${\boldsymbol{\tau}}_{A}$ are tangent to
$L_{j}$. Therefore, for $\mathscr{H}^{1}$-a.e. $x\in
L_{j}\setminus\mathrm{S}_{\mathbf{M}}$ we have
${\boldsymbol{\tau}}_{A}(x)\cdot{\boldsymbol{\tau}}_{j}(x)\in\\{-1,\,1\\}$. If
we define $T_{j}:=\\{x\in
L_{j}\setminus\mathrm{S}_{\mathbf{M}}\colon{\boldsymbol{\tau}}_{A}(x)\cdot{\boldsymbol{\tau}}_{j}(x)=1\\}$,
then the lemma follows from (A.5). ∎
Lemma A.4 can be reformulated in terms of currents. We recall a few basic
definitions in the theory of currents, because they will be useful to complete
the proof of Proposition A.1. Actually, we will only work with currents of
dimension $0$ or $1$. We refer to, e.g., [27, 45] for more details.
A $0$-dimensional current, or $0$-current, in $\mathbb{R}^{2}$ is just a
distribution on $\mathbb{R}^{2}$, i.e. an element of the topological dual of
$C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2})$ (where
$C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2})$ is given a suitable topology). A
$1$-dimensional current, or $1$-current, in $\mathbb{R}^{2}$ is an element of
the topological dual of
$C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2};\,(\mathbb{R}^{2})^{\prime})$, where
$(\mathbb{R}^{2})^{\prime}$ denotes the dual of $\mathbb{R}^{2}$ and
$C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2};\,(\mathbb{R}^{2})^{\prime})$ is given
a suitable topology, in much the same way as
$C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2})$. In other words, a $1$-dimensional
current is an $\mathbb{R}^{2}$-valued distribution. The boundary of a
$1$-current $T$ is the $0$-current $\partial T$ defined by
$\langle\partial T,\,\varphi\rangle:=\langle
T,\,\mathrm{d}\varphi\rangle\qquad\textrm{for any }\varphi\in
C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2}).$
The mass of a $1$-current $T$ is defined as
$\mathbb{M}(T):=\sup\left\\{\langle T,\,\omega\rangle\colon\omega\in
C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2};\,(\mathbb{R}^{2})^{\prime}),\
\left|\omega(x)\right|\leq 1\quad\textrm{for any
}x\in\mathbb{R}^{2}\right\\}\\!;$
the mass of a $0$-current is defined analogously.
We single out a particular subset of currents, called integer-multiplicity
rectifiable currents or rectifiable currents for short. A rectifiable
$0$-current is a current of the form
(A.6) $T=\sum_{k=1}^{p}n_{k}\,\delta_{b_{k}},$
where $p\in\mathbb{N}$, $n_{k}\in\mathbb{Z}$ and $b_{k}\in\mathbb{R}^{2}$. A
rectifiable $0$-current has finite mass: for the current $T$ given by (A.6),
we have $\mathbb{M}(T)=\sum_{k=1}^{p}\left|n_{k}\right|$. A $1$-current is
called rectifiable if there exist a countably $1$-rectifiable set
$\Sigma\subseteq\mathbb{R}^{2}$ with $\mathscr{H}^{1}(\Sigma)<+\infty$, an
orientation ${\boldsymbol{\tau}}\colon\Sigma\to\SS^{1}$ for $\Sigma$ and an
integer-valued, $\mathscr{H}^{1}$-integrable function
$\theta\colon\Sigma\to\mathbb{Z}$ such that
(A.7) $\langle
T,\,\omega\rangle=\int_{\Sigma}\theta(x)\langle{\boldsymbol{\tau}}(x),\,\omega(x)\rangle\,\mathrm{d}\mathscr{H}^{1}(x)\qquad\textrm{for
any }\omega\in
C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2};\,(\mathbb{R}^{2})^{\prime}).$
The current $T$ defined by (A.7) is called the rectifiable $1$-current carried
by $\Sigma$, with multiplicity $\theta$ and orientation ${\boldsymbol{\tau}}$;
it satisfies
$\mathbb{M}(T)=\int_{\Sigma}\left|\theta(x)\right|\,\mathrm{d}\mathscr{H}^{1}(x)<+\infty.$
The set of rectifiable $0$-currents, respectively rectifiable $1$-currents, is
denoted by $\mathscr{R}_{0}(\mathbb{R}^{2})$, respectively
$\mathscr{R}_{1}(\mathbb{R}^{2})$.
Given a Lipschitz, injective map $\mathbf{f}\colon[0,\,1]\to\mathbb{R}^{2}$,
we denote by $\mathbf{f}_{\\#}I$ the rectifiable $1$-current carried by
$\mathbf{f}([0,\,1])$, with unit multiplicity and orientation given by
$\mathbf{f}^{\prime}$. The mass of $\mathbf{f}_{\\#}I$ is the length of the
curve parametrised by $\mathbf{f}$ and
$\partial(\mathbf{f}_{\\#}I)=\delta_{\mathbf{f}(1)}-\delta_{\mathbf{f}(0)}$;
in particular, $\partial(\mathbf{f}_{\\#}I)=0$ if
$\mathbf{f}(1)=\mathbf{f}(0)$. The assumption that $\mathbf{f}$ is injective
can be relaxed; for instance, if the curve parametrised by $\mathbf{f}$ has
only a finite number of self-intersections, then $\mathbf{f}_{\\#}I$ is still
well-defined and the properties above remain valid.
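As a direct check of the boundary identity, for any $\varphi\in C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2})$ the definition (A.7) and the area formula give
$\langle\partial(\mathbf{f}_{\\#}I),\,\varphi\rangle=\langle\mathbf{f}_{\\#}I,\,\mathrm{d}\varphi\rangle=\int_{0}^{1}\nabla\varphi(\mathbf{f}(t))\cdot\mathbf{f}^{\prime}(t)\,\mathrm{d}t=\varphi(\mathbf{f}(1))-\varphi(\mathbf{f}(0)),$
the last equality being the fundamental theorem of calculus applied to $t\mapsto\varphi(\mathbf{f}(t))$; the right-hand side is exactly $\langle\delta_{\mathbf{f}(1)}-\delta_{\mathbf{f}(0)},\,\varphi\rangle$.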
We take a bounded, convex domain $\Omega\subseteq\mathbb{R}^{2}$, distinct
points $a_{1}$, …$a_{2d}$ and a map $\mathbf{Q}\in
W^{1,1}(\Omega,\,\mathscr{N})\cap
W^{1,2}_{\operatorname{loc}}(\Omega\setminus\\{a_{1},\,\ldots,\,a_{2d}\\},\,\mathscr{N})$
with a non-orientable singularity at each $a_{i}$. Let
$\mathbf{M}\in\operatorname{SBV}(\Omega,\,\SS^{1})$ be a lifting of
$\mathbf{Q}$ such that $\mathrm{S}_{\mathbf{M}}\subset\\!\subset\Omega$. By
the Federer–Vol’pert theorem (see e.g. [3, Theorem 3.78]), the set
$\mathrm{S}_{\mathbf{M}}$ is countably $1$-rectifiable. We claim that
$\mathscr{H}^{1}(\mathrm{S}_{\mathbf{M}})<+\infty$. Indeed, since $\mathbf{Q}$
has no jump set, by the BV-chain rule (see e.g. [3, Theorem 3.96]) we deduce
that $\mathbf{M}^{+}(x)=-\mathbf{M}^{-}(x)$ at $\mathscr{H}^{1}$-a.e. point
$x\in\mathrm{S}_{\mathbf{M}}$. This implies
$\mathscr{H}^{1}(\mathrm{S}_{\mathbf{M}})\leq\frac{1}{2}\int_{\mathrm{S}_{\mathbf{M}}}|\mathbf{M}^{+}-\mathbf{M}^{-}|\,\mathrm{d}\mathscr{H}^{1}\lesssim\left|\mathrm{D}\mathbf{M}\right|(\Omega)<+\infty,$
as claimed. In particular, there is a well-defined, rectifiable $1$-current
carried by $\mathrm{S}_{\mathbf{M}}$, with unit multiplicity and orientation
${\boldsymbol{\tau}}_{\mathbf{M}}$ given by Lemma A.4; we denote it by
$\llbracket\mathrm{S}_{\mathbf{M}}\rrbracket$. Lemma A.4 provides information
on the boundary of $\llbracket\mathrm{S}_{\mathbf{M}}\rrbracket$. More
precisely, Lemma A.4 implies
(A.8)
$\partial\llbracket\mathrm{S}_{\mathbf{M}}\rrbracket=\sum_{i=1}^{2d}\delta_{a_{i}}+2\,\partial
Q$
where $Q$ is a rectifiable $1$-chain, defined as
(A.9) $\langle
Q,\,\psi\rangle:=\sum_{j=1}^{d}\int_{T_{j}}\left\langle\psi(x),\frac{a_{2j-1}-a_{2j}}{\left|a_{2j-1}-a_{2j}\right|}\right\rangle\,\mathrm{d}\mathscr{H}^{1}(x)$
for any $\psi\in
C^{\infty}_{\mathrm{c}}(\mathbb{R}^{2},\,(\mathbb{R}^{2})^{\prime})$. The
$T_{j}$’s are $1$-rectifiable sets that depend only on $\mathbf{M}$, not on
$\psi$, as given by Lemma A.4.
###### Lemma A.5.
Let $\Omega$, $\mathbf{Q}$ be as above. Let
$\mathbf{M}\in\operatorname{SBV}(\Omega,\,\SS^{1})$ be a lifting of
$\mathbf{Q}$ with $\mathrm{S}_{\mathbf{M}}\subset\\!\subset\Omega$. Then,
there exist countably many Lipschitz functions
$\mathbf{f}_{j}\colon[0,\,1]\to\mathbb{R}^{2}$, with finitely many self-
intersections, a rectifiable $1$-current $R\in\mathscr{R}_{1}(\mathbb{R}^{2})$
and a permutation $\sigma$ of the indices $\\{1,\,\ldots,\,2d\\}$ such that
the following properties hold:
(A.10) $\displaystyle\llbracket\mathrm{S}_{\mathbf{M}}\rrbracket=\sum_{j\geq
1}\mathbf{f}_{j,\\#}I+2R$ (A.11)
$\displaystyle\mathbb{M}(\llbracket\mathrm{S}_{\mathbf{M}}\rrbracket)=\sum_{j\geq
1}\mathbb{M}\left(\mathbf{f}_{j,\\#}I\right)$ (A.12)
$\displaystyle\partial(\mathbf{f}_{j,\\#}I)=\delta_{\sigma(2j)}-\delta_{\sigma(2j-1)}\quad\textrm{if
}j\in\\{1,\,\ldots,\,d\\},\qquad\partial(\mathbf{f}_{j,\\#}I)=0\quad\textrm{otherwise.}$
Figure 3: A decomposition of the graph $\mathscr{G}$, as defined in the proof
of Lemma A.5, into edge-disjoint trails $\mathscr{E}_{1}$ (in red) and
$\mathscr{E}_{2}$ (in blue). In addition to the edges of $\mathscr{G}$, there
may be other cycles, carried by the curves $\mathbf{g}_{j}([0,\,1])$ with
$j\geq q+1$; they are shown in black.
###### Proof.
By applying, e.g., [48, Theorem 6.3] or [4, Corollary 4.2], we find
rectifiable $1$-currents $T$, $R\in\mathscr{R}_{1}(\mathbb{R}^{2})$ such that
$\mathbb{M}(T)=\mathbb{M}(\llbracket\mathrm{S}_{\mathbf{M}}\rrbracket)=\mathscr{H}^{1}(\mathrm{S}_{\mathbf{M}})$,
$\partial T\in\mathscr{R}_{0}(\mathbb{R}^{2})$ and
(A.13) $T=\llbracket\mathrm{S}_{\mathbf{M}}\rrbracket+2R.$
By taking the boundary of both sides of (A.13), and applying (A.8), we obtain
(A.14) $\partial T=\sum_{i=1}^{2d}\delta_{a_{i}}+2P$
with $P:=\partial(R+Q)$ (and $Q$ as in (A.9)). The current $2P=\partial
T-\sum_{i=1}^{2d}\delta_{a_{i}}$ is rectifiable, so $\mathbb{M}(P)<+\infty$.
Moreover, $P$ is the boundary of a rectifiable $1$-current. Then, Federer’s
closure theorem [27, 4.2.16] implies that $P$ itself is rectifiable. As a
consequence, we can re-write (A.14) as
(A.15) $\partial
T=\sum_{i=1}^{2d}\delta_{a_{i}}+2\sum_{k=1}^{p}n_{k}\,\delta_{b_{k}},$
for some integers $n_{k}$ and some distinct points $b_{k}\in\mathbb{R}^{2}$.
By applying [27, 4.2.25], we find countably many Lipschitz, injective maps
$\mathbf{g}_{j}\colon[0,\,1]\to\mathbb{R}^{2}$ such that
(A.16) $T=\sum_{j\geq 1}\mathbf{g}_{j,\\#}I,\qquad\sum_{j\geq
1}\left(\mathbb{M}(\mathbf{g}_{j,\\#}I)+\mathbb{M}(\partial(\mathbf{g}_{j,\\#}I))\right)=\mathbb{M}(T)+\mathbb{M}(\partial
T)<+\infty.$
For any $j$, we have either $\partial(\mathbf{g}_{j,\\#}I)=0$ (if
$\mathbf{g}_{j}(1)=\mathbf{g}_{j}(0)$) or
$\mathbb{M}(\partial(\mathbf{g}_{j,\\#}I))=2$ (otherwise). Therefore, by
(A.16), there are only finitely many indices $j$ such that
$\mathbf{g}_{j}(1)\neq\mathbf{g}_{j}(0)$. Up to a relabelling of the
$\mathbf{g}_{j}$’s, we assume that there is an integer $q$ such that
$\mathbf{g}_{j}(1)\neq\mathbf{g}_{j}(0)$ if and only if $j\leq q$.
Now, the problem reduces to a combinatorial, or graph-theoretical, one. We
consider the finite (multi-)graph $\mathscr{G}$ whose edges are the curves
parametrised by $\mathbf{g}_{1}$, …, $\mathbf{g}_{q}$, and whose vertices are
the endpoints of such curves. There can be two or more edges that join the
same pair of vertices. However, we can disregard the orientation of the edges:
changing the orientation of the curve parametrised by $\mathbf{g}_{j}$
corresponds to passing from the current $\mathbf{g}_{j,\\#}I$ to the current
$-\mathbf{g}_{j,\\#}I$; the difference
$\mathbf{g}_{j,\\#}I-(-\mathbf{g}_{j,\\#}I)=2\mathbf{g}_{j,\\#}I$ can be
absorbed into the term $2R$ that appears in (A.10).
We would like to partition the set of edges of $\mathscr{G}$ into $d$ disjoint
subsets $\mathscr{E}_{1}$, …$\mathscr{E}_{d}$, where each $\mathscr{E}_{j}$ is
a trail (i.e., a sequence of distinct edges such that each edge is adjacent to
the next one) and, for a suitable permutation $\sigma$ of
$\\{1,\,\ldots,\,2d\\}$, the trail $\mathscr{E}_{j}$ connects
$a_{\sigma(2j-1)}$ with $a_{\sigma(2j)}$. If we do so, then we can define
$\mathbf{f}_{j}\colon[0,\,1]\to\mathbb{R}^{2}$ for $j\in\\{1,\,\ldots,d\\}$ as
a Lipschitz map that parameterises the trail $\mathscr{E}_{j}$, with suitable
orientations of each edge; for $j\geq d+1$, we define
$\mathbf{f}_{j}:=\mathbf{g}_{q+j-d}$. With this choice of $\mathbf{f}_{j}$,
the lemma follows. It is possible to find $\mathscr{E}_{1}$,
…$\mathscr{E}_{d}$ as required because the graph $\mathscr{G}$ has the
following property: any $a_{i}$ is an endpoint of an _odd_ number of edges of
$\mathscr{G}$; conversely, any vertex of $\mathscr{G}$ other than the
$a_{i}$’s is an endpoint of an _even_ number of edges of $\mathscr{G}$. This
property follows from (A.15) and (A.16): by (A.16), the coefficient of
$\delta_{v}$ in $\partial T$ is the number of curves $\mathbf{g}_{j}$ (with
$j\leq q$) ending at $v$ minus the number starting at $v$, which has the same
parity as the degree of $v$ in $\mathscr{G}$; by (A.15), this coefficient is
odd precisely when $v$ is one of the $a_{i}$’s. Then, we can construct
$\mathscr{E}_{1}$, …$\mathscr{E}_{d}$ by reasoning along the lines of, e.g.,
[16, Theorem 12]. ∎
We can now conclude the proof of Proposition A.1.
###### Proof of Proposition A.1.
We consider the decomposition of $\llbracket\mathrm{S}_{\mathbf{M}}\rrbracket$
given by Lemma A.5. Thanks to (A.12), for any $j\in\\{1,\,\ldots,\,d\\}$ the
curve parametrised by $\mathbf{f}_{j}$ joins $a_{\sigma(2j-1)}$ with
$a_{\sigma(2j)}$. Then,
$\mathscr{H}^{1}(\mathrm{S}_{\mathbf{M}})=\mathbb{M}(\llbracket\mathrm{S}_{\mathbf{M}}\rrbracket)\geq\sum_{j=1}^{d}\mathbb{M}(\mathbf{f}_{j,\\#}I)\geq\sum_{j=1}^{d}\left|a_{\sigma(2j)}-a_{\sigma(2j-1)}\right|\geq\mathbb{L}(a_{1},\,\ldots,\,a_{2d}).$
The equality can only be attained if there are exactly $d$ maps
$\mathbf{f}_{j}$ and each of them parametrises a straight line segment. ∎
## Appendix B Properties of $f_{\varepsilon}$
The aim of this section is to prove Lemma 3.1. First of all, we
characterise the zero-set of the potential $f_{\varepsilon}$, in terms of the
(unique) solution to an algebraic system depending on $\varepsilon$ and
$\beta$.
###### Lemma B.1.
For any $\varepsilon>0$, the algebraic system
(B.1)
$\begin{cases}X(X-1-\beta^{2}\varepsilon)^{2}=\dfrac{\beta^{2}\varepsilon^{2}}{2}\\\
X>1+\beta^{2}\varepsilon\end{cases}$
admits a unique solution $X_{\varepsilon}$, which satisfies
$X_{\varepsilon}=1+\frac{1}{\sqrt{2}}\left(\sqrt{2}\beta+1\right)\beta\varepsilon-\frac{1}{4}\left(\sqrt{2}\beta+1\right)\beta^{2}\varepsilon^{2}+\mathrm{o}(\varepsilon^{2})\qquad\textrm{as
}\varepsilon\to 0.$
###### Proof.
The function $P(X):=X(X-1-\beta^{2}\varepsilon)^{2}$ is continuous and
strictly increasing in the interval $[1+\beta^{2}\varepsilon,\,+\infty)$,
because
$P^{\prime}(X)=(X-1-\beta^{2}\varepsilon)(3X-1-\beta^{2}\varepsilon)>0$ for
$X>1+\beta^{2}\varepsilon$. Moreover, $P(1+\beta^{2}\varepsilon)=0$ and
$P(X)\to+\infty$ as $X\to+\infty$. Therefore, the system (B.1) admits a unique
solution. Let $Y_{\varepsilon}>0$ be such that
$X_{\varepsilon}=1+\beta^{2}\varepsilon+\beta\varepsilon\,Y_{\varepsilon}.$
Then, (B.1) can be rewritten as
(B.2)
$Y_{\varepsilon}^{2}=\frac{1}{2+2\beta^{2}\varepsilon+2\beta\varepsilon\,Y_{\varepsilon}},$
which implies $Y_{\varepsilon}\to 1/\sqrt{2}$ as $\varepsilon\to 0$. Using
(B.2) again, we obtain
$Y_{\varepsilon}=\frac{1}{\left(2+2\beta^{2}\varepsilon+\sqrt{2}\beta\varepsilon+\mathrm{o}(\varepsilon)\right)^{1/2}}=\frac{1}{\sqrt{2}}-\frac{1}{4}\left(\sqrt{2}\beta+1\right)\beta\varepsilon+\mathrm{o}(\varepsilon)$
as $\varepsilon\to 0$. Substituting this into
$X_{\varepsilon}=1+\beta^{2}\varepsilon+\beta\varepsilon\,Y_{\varepsilon}$ and
collecting terms gives the stated expansion, and the lemma follows. ∎
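As a sanity check outside the proof, the expansion can be verified numerically. The following is a minimal Python sketch (illustrative parameter values; the function name `X_eps` is ours): it solves (B.1) by bisection, using that $P(X)=X(X-1-\beta^{2}\varepsilon)^{2}$ is strictly increasing on $[1+\beta^{2}\varepsilon,\,+\infty)$, and compares the root with the two-term expansion of Lemma B.1.

```python
# Illustrative numerical check of Lemma B.1; X_eps and the parameter
# values are our own choices, not part of the paper.
import math

def X_eps(beta, eps):
    """Unique root of X*(X - 1 - beta^2*eps)^2 = beta^2*eps^2/2 with X > 1 + beta^2*eps."""
    rhs = beta**2 * eps**2 / 2
    lo = 1 + beta**2 * eps            # P(lo) = 0 < rhs
    hi = lo + 1.0
    while hi * (hi - 1 - beta**2 * eps)**2 < rhs:
        hi *= 2                       # enlarge the bracket until P(hi) >= rhs
    for _ in range(200):              # bisection: P is strictly increasing here
        mid = 0.5 * (lo + hi)
        if mid * (mid - 1 - beta**2 * eps)**2 < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta = 0.7
for eps in (1e-1, 1e-2, 1e-3):
    root = X_eps(beta, eps)
    two_term = (1 + (math.sqrt(2) * beta + 1) * beta * eps / math.sqrt(2)
                  - (math.sqrt(2) * beta + 1) * beta**2 * eps**2 / 4)
    print(eps, root, (root - two_term) / eps**2)  # last column tends to 0
```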
For any $\varepsilon>0$, we define
(B.3)
$s_{\varepsilon}:=X_{\varepsilon}^{1/2},\qquad\lambda_{\varepsilon}:=\left(\frac{X_{\varepsilon}-1}{X_{\varepsilon}-1-\beta^{2}\varepsilon}\right)^{1/2}.$
Lemma B.1 implies, via routine algebraic manipulations, that
(B.4) $\displaystyle
s_{\varepsilon}=1+\frac{1}{2\sqrt{2}}\left(\sqrt{2}\beta+1\right)\beta\varepsilon+\mathrm{o}(\varepsilon),\qquad\lambda_{\varepsilon}^{2}=\sqrt{2}\beta+1+\frac{1}{2}\left(\sqrt{2}\beta+1\right)\beta^{2}\varepsilon+\mathrm{o}(\varepsilon)$
as $\varepsilon\to 0$.
###### Lemma B.2.
A pair $(\mathbf{Q},\,\mathbf{M})\in\mathscr{S}_{0}^{2\times
2}\times\mathbb{R}^{2}$ satisfies $f_{\varepsilon}(\mathbf{Q},\,\mathbf{M})=0$
if and only if
$\begin{split}\left|\mathbf{M}\right|=\lambda_{\varepsilon},\qquad\mathbf{Q}=\sqrt{2}\,s_{\varepsilon}\left(\frac{\mathbf{M}\otimes\mathbf{M}}{\lambda_{\varepsilon}^{2}}-\frac{\mathbf{I}}{2}\right)\\!.\end{split}$
###### Proof.
By imposing that the gradient of $f_{\varepsilon}$ is equal to zero, we obtain
the system
(B.5)
$\displaystyle(\left|\mathbf{Q}\right|^{2}-1)\mathbf{Q}=\beta\varepsilon\left(\mathbf{M}\otimes\mathbf{M}-\dfrac{\left|\mathbf{M}\right|^{2}}{2}\mathbf{I}\right)$
(B.6)
$\displaystyle(\left|\mathbf{M}\right|^{2}-1)\mathbf{M}=2\beta\,\mathbf{Q}\mathbf{M}.$
Suppose first that $\mathbf{M}=0$. Then, Equation (B.5) implies that either
$\mathbf{Q}=0$ or $\left|\mathbf{Q}\right|=1$. The pair $\mathbf{Q}=0$,
$\mathbf{M}=0$ is not a minimiser for $f_{\varepsilon}$, because
$\nabla_{\mathbf{Q}}^{2}f_{\varepsilon}(0,\,0)=-\mathbf{I}<0$. If
$\left|\mathbf{Q}\right|=1$, $\mathbf{M}=0$, then
$\nabla_{\mathbf{M}}^{2}f_{\varepsilon}(\mathbf{Q},\,0)=-\varepsilon(\mathbf{I}+2\beta\mathbf{Q})$.
Since $\mathbf{Q}$ is non-zero, symmetric and trace-free, there exists
$\mathbf{n}\in\SS^{1}$ such that $\mathbf{Q}\mathbf{n}\cdot\mathbf{n}>0$.
Then,
$\nabla_{\mathbf{M}}^{2}f_{\varepsilon}(\mathbf{Q},\,0)\mathbf{n}\cdot\mathbf{n}<0$,
so the pair $(\mathbf{Q},\,\mathbf{M}=0)$ is not a minimiser of
$f_{\varepsilon}$. It remains to consider the case $\mathbf{M}\neq 0$. In this
case, we have $\mathbf{Q}\neq 0$ and $\left|\mathbf{Q}\right|\neq 1$, due to
(B.5). Solving (B.5) for $\mathbf{Q}$, and then substituting in (B.6), we
obtain
$\displaystyle\left|\mathbf{M}\right|^{2}-1=\frac{\beta^{2}\varepsilon\left|\mathbf{M}\right|^{2}}{\left|\mathbf{Q}\right|^{2}-1}$
and hence, solving for $\left|\mathbf{M}\right|^{2}$,
(B.7)
$\displaystyle\left|\mathbf{M}\right|^{2}=\frac{\left|\mathbf{Q}\right|^{2}-1}{\left|\mathbf{Q}\right|^{2}-1-\beta^{2}\varepsilon}.$
By taking the squared norm of both sides of (B.5), we obtain
$(\left|\mathbf{Q}\right|^{2}-1)^{2}\left|\mathbf{Q}\right|^{2}=\frac{\beta^{2}\varepsilon^{2}}{2}\left|\mathbf{M}\right|^{4}$
and hence, using (B.7),
(B.8)
$\left|\mathbf{Q}\right|^{2}=\frac{\beta^{2}\varepsilon^{2}}{2(\left|\mathbf{Q}\right|^{2}-1-\beta^{2}\varepsilon)^{2}}$
We either have $\left|\mathbf{Q}\right|^{2}<1$ or
$\left|\mathbf{Q}\right|^{2}>1+\beta^{2}\varepsilon$, because of (B.7). On the
other hand, by imposing that the second derivative of $f_{\varepsilon}$ with
respect to $\mathbf{Q}$ is non-negative, we obtain
$\left|\mathbf{Q}\right|^{2}\geq 1$. Therefore, we conclude that
$\left|\mathbf{Q}\right|^{2}=X_{\varepsilon}$ is the unique solution to the
system (B.1) and, taking (B.7) into account, the lemma follows. ∎
We can now prove Lemma 3.1. For convenience, we recall the statement here.
###### Lemma B.3.
The potential $f_{\varepsilon}$ satisfies the following properties.
1. (i)
The constant $\kappa_{\varepsilon}$ in (2.2), uniquely defined by imposing the
condition $\inf f_{\varepsilon}=0$, satisfies
(B.9)
$\kappa_{\varepsilon}=\frac{1}{2}\left(\beta^{2}+\sqrt{2}\beta\right)\varepsilon+\kappa_{*}^{2}\,\varepsilon^{2}+\mathrm{o}(\varepsilon^{2})$
In particular, $\kappa_{\varepsilon}\geq 0$ for $\varepsilon$ small enough.
2. (ii)
If $(\mathbf{Q},\,\mathbf{M})\in\mathscr{S}_{0}^{2\times
2}\times\mathbb{R}^{2}$ is such that
(B.10)
$\left|\mathbf{M}\right|=(\sqrt{2}\beta+1)^{1/2},\qquad\mathbf{Q}=\sqrt{2}\left(\frac{\mathbf{M}\otimes\mathbf{M}}{\sqrt{2}\beta+1}-\frac{\mathbf{I}}{2}\right)$
then
$f_{\varepsilon}(\mathbf{Q},\,\mathbf{M})=\kappa_{*}^{2}\,\varepsilon^{2}+\mathrm{o}(\varepsilon^{2})$.
3. (iii)
If $\varepsilon$ is sufficiently small, then
(B.11)
$\displaystyle\frac{1}{\varepsilon^{2}}f_{\varepsilon}(\mathbf{Q},\,\mathbf{M})$
$\displaystyle\geq\frac{1}{4\varepsilon^{2}}(\left|\mathbf{Q}\right|^{2}-1)^{2}-\frac{\beta}{\sqrt{2}\varepsilon}\left|\mathbf{M}\right|^{2}\,\left|\left|\mathbf{Q}\right|-1\right|$
(B.12)
$\displaystyle\frac{1}{\varepsilon^{2}}f_{\varepsilon}(\mathbf{Q},\,\mathbf{M})$
$\displaystyle\geq\frac{1}{8\varepsilon^{2}}(\left|\mathbf{Q}\right|^{2}-1)^{2}-\beta^{2}\left|\mathbf{M}\right|^{4}$
for any $(\mathbf{Q},\,\mathbf{M})\in\mathscr{S}_{0}^{2\times
2}\times\mathbb{R}^{2}$.
_Proof of Statement (i)_. Let
$(\mathbf{Q}^{*},\,\mathbf{M}^{*})\in\mathscr{S}_{0}^{2\times
2}\times\mathbb{R}^{2}$ be a minimiser for $f_{\varepsilon}$, i.e.
$f_{\varepsilon}(\mathbf{Q}^{*},\,\mathbf{M}^{*})=0$. By Lemma B.2, we
have
$\begin{split}\kappa_{\varepsilon}&=-\frac{1}{4}(|\mathbf{Q}^{*}|^{2}-1)^{2}-\frac{\varepsilon}{4}(|\mathbf{M}^{*}|^{2}-1)^{2}+\beta\varepsilon\,\mathbf{Q}^{*}\mathbf{M}^{*}\cdot\mathbf{M}^{*}\\\ &=-\frac{1}{4}(s_{\varepsilon}^{2}-1)^{2}-\frac{\varepsilon}{4}(\lambda_{\varepsilon}^{2}-1)^{2}+\frac{\beta\varepsilon}{\sqrt{2}}\,s_{\varepsilon}\lambda_{\varepsilon}^{2}.\end{split}$
We expand $s_{\varepsilon}$, $\lambda_{\varepsilon}$ in terms of
$\varepsilon$, as given by (B.4). Equation (B.9) then follows by standard
algebraic manipulations.
_Proof of Statement (ii)_. The assumption (B.10) implies
$\left|\mathbf{Q}\right|=1,\qquad\mathbf{Q}\mathbf{M}\cdot\mathbf{M}=\sqrt{2}\left(\frac{\left|\mathbf{M}\right|^{4}}{\sqrt{2}\beta+1}-\frac{1}{2}\left|\mathbf{M}\right|^{2}\right)=\frac{\sqrt{2}}{2}\left(\sqrt{2}\beta+1\right)=\beta+\frac{\sqrt{2}}{2}$
Therefore,
$\begin{split}f_{\varepsilon}(\mathbf{Q},\,\mathbf{M})=\frac{\varepsilon\,\beta^{2}}{2}-\beta\varepsilon\left(\beta+\frac{\sqrt{2}}{2}\right)+\kappa_{\varepsilon}=-\frac{\varepsilon}{2}\left(\beta^{2}+\sqrt{2}\beta\right)+\kappa_{\varepsilon}\stackrel{{\scriptstyle\text{(B.9)}}}{{=}}\kappa_{*}^{2}\,\varepsilon^{2}+\mathrm{o}(\varepsilon^{2}).\end{split}$
_Proof of Statement (iii)_. When $\mathbf{Q}=0$, we have
$f_{\varepsilon}(0,\,\mathbf{M})\geq 1/4+\kappa_{\varepsilon}$ and
$\kappa_{\varepsilon}$ is positive for $\varepsilon$ small enough, due to
(B.9). Then, (B.11) follows. When $\mathbf{Q}\neq 0$, it is convenient to make
the change of variables we have introduced in Section 3. We write
$\mathbf{Q}=\frac{\left|\mathbf{Q}\right|}{\sqrt{2}}\left(\mathbf{n}\otimes\mathbf{n}-\mathbf{m}\otimes\mathbf{m}\right)$
where $(\mathbf{n},\,\mathbf{m})$ is an orthonormal basis of eigenvectors of
$\mathbf{Q}$. We define $\mathbf{u}=(u_{1},\,u_{2})\in\mathbb{R}^{2}$ as
$u_{1}:=\mathbf{M}\cdot\mathbf{n}$, $u_{2}:=\mathbf{M}\cdot\mathbf{m}$. The
potential $f_{\varepsilon}$ can be expressed in terms of $\mathbf{Q}$,
$\mathbf{u}$ as (see Equation (3.14)),
$\begin{split}\frac{1}{\varepsilon^{2}}f_{\varepsilon}(\mathbf{Q},\,\mathbf{M})&=\frac{1}{4\varepsilon^{2}}(\left|\mathbf{Q}\right|^{2}-1)^{2}+\frac{1}{\varepsilon}h(\mathbf{u})+\frac{\beta}{\sqrt{2}\,\varepsilon}(1-\left|\mathbf{Q}\right|)\,(u_{1}^{2}-u_{2}^{2})\\\
&\qquad\qquad+\frac{\kappa_{\varepsilon}}{\varepsilon^{2}}-\frac{1}{2\varepsilon}(\beta^{2}+\sqrt{2}\beta)\end{split}$
where $h$ is defined in (3.8). By Lemma 3.4, we know that $h\geq 0$.
Moreover, Equation (B.9) implies
$\frac{\kappa_{\varepsilon}}{\varepsilon^{2}}-\frac{1}{2\varepsilon}(\beta^{2}+\sqrt{2}\beta)=\kappa_{*}^{2}+\mathrm{o}(1)\geq
0$
for $\varepsilon$ small enough. Then, (B.11) follows. Equation (B.12) follows
from (B.11), as
$\begin{split}\frac{\beta}{\sqrt{2}\varepsilon}\left|\mathbf{M}\right|^{2}\left|\left|\mathbf{Q}\right|-1\right|\leq\beta^{2}\left|\mathbf{M}\right|^{4}+\frac{1}{8\varepsilon^{2}}(\left|\mathbf{Q}\right|-1)^{2}\leq\beta^{2}\left|\mathbf{M}\right|^{4}+\frac{1}{8\varepsilon^{2}}(\left|\mathbf{Q}\right|^{2}-1)^{2}\end{split}$
∎
## Appendix C Proof of Lemma 4.5
The aim of this section is to prove Lemma 4.5, which we recall here as Lemma
C.1 for the convenience of the reader. We recall that
$g_{\varepsilon}\colon\mathscr{S}_{0}^{2\times 2}\to\mathbb{R}$ is the
function defined in (3.7).
###### Lemma C.1.
Let $B=B_{r}(x_{0})\subseteq\Omega$ be an open ball. Suppose that
$\mathbf{Q}^{*}_{\varepsilon}\rightharpoonup\mathbf{Q}^{*}$ weakly in
$W^{1,2}(\partial B)$ and that
(C.1) $\begin{split}\int_{\partial
B}\left(\frac{1}{2}\left|\nabla\mathbf{Q}^{*}_{\varepsilon}\right|^{2}+g_{\varepsilon}(\mathbf{Q}^{*}_{\varepsilon})\right)\mathrm{d}\mathscr{H}^{1}\leq
C\end{split}$
for some constant $C$ that may depend on the radius $r$, but not on
$\varepsilon$. Then, there exists a map $\mathbf{Q}_{\varepsilon}\in
W^{1,2}(B,\,\mathscr{S}_{0}^{2\times 2})$ such that
(C.2)
$\displaystyle\mathbf{Q}_{\varepsilon}=\mathbf{Q}_{\varepsilon}^{*}\quad\textrm{on
}\partial
B,\qquad\left|\mathbf{Q}_{\varepsilon}\right|\geq\frac{1}{2}\quad\textrm{in
}B$ (C.3)
$\displaystyle\int_{B}\left(\frac{1}{2}\left|\nabla\mathbf{Q}_{\varepsilon}\right|^{2}+g_{\varepsilon}(\mathbf{Q}_{\varepsilon})\right)\mathrm{d}x\to\frac{1}{2}\int_{B}\left|\nabla\mathbf{Q}^{*}\right|^{2}\,\mathrm{d}x$
Lemma C.1 is inspired by interpolation results in the literature on harmonic
maps (see e.g. [39, Lemma 1]). As we work in a two-dimensional domain, we can
simplify some points of the proof in [39]. On the other hand, we need to
estimate the contributions from the term
$g_{\varepsilon}(\mathbf{Q}_{\varepsilon})$, which is not present in [39].
###### Proof of Lemma C.1.
Without loss of generality, we can assume that $x_{0}=0$. By assumption, we
have $\mathbf{Q}_{\varepsilon}^{*}\rightharpoonup\mathbf{Q}^{*}$ weakly in
$W^{1,2}(\partial B)$ and hence, by Sobolev embedding, uniformly on $\partial
B$. In particular, $\left|\mathbf{Q}^{*}_{\varepsilon}\right|\to 1$ uniformly
on $\partial B$. Let $\lambda_{\varepsilon}>0$ be a small number, to be chosen
later on. We consider the decomposition $B=A_{\varepsilon}^{1}\cup
A^{2}_{\varepsilon}\cup A^{3}_{\varepsilon}$, where
$A^{1}_{\varepsilon}:=B_{r}\setminus\bar{B}_{r-\lambda_{\varepsilon}r},\qquad
A^{2}_{\varepsilon}:=\bar{B}_{r-\lambda_{\varepsilon}r}\setminus\bar{B}_{r-2\lambda_{\varepsilon}r},\qquad
A^{3}_{\varepsilon}:=\bar{B}_{r-2\lambda_{\varepsilon}r}$
We define the map $\mathbf{Q}_{\varepsilon}$ using polar coordinates
$(\rho,\,\theta)$, as follows. If $x=\rho e^{i\theta}\in A^{1}_{\varepsilon}$,
we define
$\mathbf{Q}_{\varepsilon}(x):=t_{\varepsilon}(\rho)\,\mathbf{Q}_{\varepsilon}^{*}(re^{i\theta})+(1+\kappa_{*}\varepsilon)(1-t_{\varepsilon}(\rho))\,\dfrac{\mathbf{Q}_{\varepsilon}^{*}(re^{i\theta})}{\left|\mathbf{Q}^{*}_{\varepsilon}(re^{i\theta})\right|}$
where $t_{\varepsilon}\colon\mathbb{R}\to\mathbb{R}$ is an affine function
such that $t_{\varepsilon}(r)=1$,
$t_{\varepsilon}(r-\lambda_{\varepsilon}r)=0$. If $x=\rho e^{i\theta}\in
A^{2}_{\varepsilon}$, we define
$\mathbf{Q}_{\varepsilon}(x):=(1+\kappa_{*}\varepsilon)\,\frac{s_{\varepsilon}(\rho)\,\mathbf{Q}^{*}_{\varepsilon}(re^{i\theta})+(1-s_{\varepsilon}(\rho))\,\mathbf{Q}^{*}(re^{i\theta})}{\left|s_{\varepsilon}(\rho)\,\mathbf{Q}^{*}_{\varepsilon}(re^{i\theta})+(1-s_{\varepsilon}(\rho))\,\mathbf{Q}^{*}(re^{i\theta})\right|}$
where $s_{\varepsilon}\colon\mathbb{R}\to\mathbb{R}$ is an affine function
such that $s_{\varepsilon}(r-\lambda_{\varepsilon}r)=1$,
$s_{\varepsilon}(r-2\lambda_{\varepsilon}r)=0$. Finally, if $x\in
A_{\varepsilon}^{3}$, we define
$\mathbf{Q}_{\varepsilon}(x):=(1+\kappa_{*}\varepsilon)\,\mathbf{Q}^{*}\left(\frac{x}{1-2\lambda_{\varepsilon}}\right)$
The map $\mathbf{Q}_{\varepsilon}$ is well-defined in $B$, because
$\left|\mathbf{Q}^{*}_{\varepsilon}\right|\to 1$ uniformly on $\partial B$.
Moreover, we have $\left|\mathbf{Q}_{\varepsilon}\right|\geq 1/2$ for
$\varepsilon$ small enough, $\mathbf{Q}_{\varepsilon}\in
W^{1,2}(B,\,\mathscr{S}_{0}^{2\times 2})$ (at the interfaces between
$A_{\varepsilon}^{1}$, $A_{\varepsilon}^{2}$, $A^{3}_{\varepsilon}$, the
traces of $\mathbf{Q}_{\varepsilon}$ on either side of the interface match),
and $\mathbf{Q}_{\varepsilon}=\mathbf{Q}^{*}_{\varepsilon}$ on $\partial B$.
It only remains to prove (C.3). First, we estimate the integral of
$g_{\varepsilon}(\mathbf{Q}_{\varepsilon})$. On $A^{2}_{\varepsilon}\cup
A^{3}_{\varepsilon}$, we have
$|\mathbf{Q}_{\varepsilon}|=1+\kappa_{*}\varepsilon$ and hence, substituting
in (3.7),
(C.4)
$g_{\varepsilon}(\mathbf{Q}_{\varepsilon})=\kappa_{*}^{2}\left(\frac{1}{4}(2+\kappa_{*}\varepsilon)^{2}-1\right)=\kappa_{*}^{2}\left(\kappa_{*}\varepsilon+\frac{\kappa_{*}^{2}\varepsilon^{2}}{4}\right)=\mathrm{O}(\varepsilon).$
We consider the annulus $A_{\varepsilon}^{1}$. By Lemma 3.3, we have
$\begin{split}g_{\varepsilon}(\mathbf{Q}_{\varepsilon})\leq\left(\frac{1}{\varepsilon}(|\mathbf{Q}_{\varepsilon}|-1)-\kappa_{*}\right)^{2}+\frac{C}{\varepsilon^{2}}(|\mathbf{Q}_{\varepsilon}|-1)^{2}\end{split}$
For $x\in A^{1}_{\varepsilon}$, we have
$|\mathbf{Q}_{\varepsilon}(x)|=t_{\varepsilon}\left|\mathbf{Q}^{*}_{\varepsilon}(rx/\left|x\right|)\right|+(1-t_{\varepsilon})(1+\kappa_{*}\varepsilon)$,
with $t_{\varepsilon}=t_{\varepsilon}(\rho)\in[0,\,1]$. As a consequence,
(C.5)
$\begin{split}\int_{A^{1}_{\varepsilon}}g_{\varepsilon}(\mathbf{Q}_{\varepsilon})\,\mathrm{d}x&\lesssim\lambda_{\varepsilon}\int_{\partial
B}\left(\frac{1}{\varepsilon}(\left|\mathbf{Q}^{*}_{\varepsilon}\right|-1)-\kappa_{*}\right)^{2}\mathrm{d}\mathscr{H}^{1}+\frac{\lambda_{\varepsilon}}{\varepsilon^{2}}\int_{\partial
B}(\left|\mathbf{Q}^{*}_{\varepsilon}\right|-1)^{2}\,\mathrm{d}\mathscr{H}^{1}+\lambda_{\varepsilon}\kappa_{*}^{2}\end{split}$
On the other hand, as $\left|\mathbf{Q}^{*}_{\varepsilon}\right|\to 1$
uniformly on $\partial B$, from Lemma 3.3 we deduce that
(C.6)
$\begin{split}g_{\varepsilon}(\mathbf{Q}^{*}_{\varepsilon})\geq\left(\frac{1}{\varepsilon}(|\mathbf{Q}^{*}_{\varepsilon}|-1)-\kappa_{*}\right)^{2}-\frac{3}{4\varepsilon^{2}}(|\mathbf{Q}^{*}_{\varepsilon}|-1)^{2}\geq\frac{1}{8\varepsilon^{2}}(|\mathbf{Q}^{*}_{\varepsilon}|-1)^{2}-7\kappa_{*}^{2}\end{split}$
at any point of $\partial B$, for $\varepsilon$ small enough. Combining (C.5)
and (C.6), we obtain
(C.7)
$\int_{A^{1}_{\varepsilon}}g_{\varepsilon}(\mathbf{Q}_{\varepsilon})\,\mathrm{d}x\lesssim\lambda_{\varepsilon}\int_{\partial B}g_{\varepsilon}(\mathbf{Q}^{*}_{\varepsilon})\,\mathrm{d}\mathscr{H}^{1}+\lambda_{\varepsilon}\kappa_{*}^{2}\stackrel{{\scriptstyle\text{(C.1)}}}{{\lesssim}}\lambda_{\varepsilon}.$
If we choose $\lambda_{\varepsilon}$ in such a way that
$\lambda_{\varepsilon}\to 0$ as $\varepsilon\to 0$, then (C.4) and (C.7) imply
(C.8) $\int_{B}g_{\varepsilon}(\mathbf{Q}_{\varepsilon})\,\mathrm{d}x\to
0\qquad\textrm{as }\varepsilon\to 0.$
Finally, we estimate the gradient term. An explicit computation shows that
$\begin{split}\int_{A^{1}_{\varepsilon}\cup
A^{2}_{\varepsilon}}\left|\nabla\mathbf{Q}_{\varepsilon}\right|^{2}\,\mathrm{d}x&\lesssim\lambda_{\varepsilon}\int_{\partial
B}\left(\left|\nabla\mathbf{Q}^{*}_{\varepsilon}\right|^{2}+\left|\nabla\mathbf{Q}^{*}\right|^{2}+\frac{1}{\lambda_{\varepsilon}^{2}}\left|\mathbf{Q}^{*}_{\varepsilon}-\mathbf{Q}^{*}\right|^{2}+\frac{1}{\lambda_{\varepsilon}^{2}}\left(\left|\mathbf{Q}^{*}_{\varepsilon}\right|-1-\kappa_{*}\varepsilon\right)^{2}\right)\mathrm{d}\mathscr{H}^{1}\\\
&\stackrel{{\scriptstyle\text{(C.6)}}}{{\lesssim}}\lambda_{\varepsilon}\int_{\partial
B}\left(\left|\nabla\mathbf{Q}^{*}_{\varepsilon}\right|^{2}+\left|\nabla\mathbf{Q}^{*}\right|^{2}+\frac{1}{\lambda_{\varepsilon}^{2}}\left|\mathbf{Q}^{*}_{\varepsilon}-\mathbf{Q}^{*}\right|^{2}+\frac{\varepsilon^{2}}{\lambda_{\varepsilon}^{2}}g_{\varepsilon}(\mathbf{Q}^{*}_{\varepsilon})+\frac{\varepsilon^{2}\kappa_{*}}{\lambda_{\varepsilon}^{2}}\right)\mathrm{d}\mathscr{H}^{1}\\\
\end{split}$
By the assumption (C.1), we deduce
(C.9) $\begin{split}\int_{A^{1}_{\varepsilon}\cup
A^{2}_{\varepsilon}}\left|\nabla\mathbf{Q}_{\varepsilon}\right|^{2}\,\mathrm{d}x&\stackrel{{\scriptstyle\text{(C.1)}}}{{\lesssim}}\lambda_{\varepsilon}+\frac{\varepsilon^{2}}{\lambda_{\varepsilon}}+\frac{1}{\lambda_{\varepsilon}}\int_{\partial
B}\left|\mathbf{Q}^{*}_{\varepsilon}-\mathbf{Q}^{*}\right|^{2}\,\mathrm{d}\mathscr{H}^{1}\\\
\end{split}$
We take
(C.10) $\lambda_{\varepsilon}:=\varepsilon+\left(\int_{\partial
B}\left|\mathbf{Q}^{*}_{\varepsilon}-\mathbf{Q}^{*}\right|^{2}\,\mathrm{d}\mathscr{H}^{1}\right)^{1/2}$
By assumption, we have
$\mathbf{Q}^{*}_{\varepsilon}\rightharpoonup\mathbf{Q}^{*}$ weakly in
$W^{1,2}(\partial B)$, hence strongly in $L^{2}(\partial B)$. Therefore,
$\lambda_{\varepsilon}\to 0$ as $\varepsilon\to 0$. Moreover, (C.9) and (C.10)
imply
(C.11) $\begin{split}\int_{A^{1}_{\varepsilon}\cup
A^{2}_{\varepsilon}}\left|\nabla\mathbf{Q}_{\varepsilon}\right|^{2}\,\mathrm{d}x\to
0\qquad\textrm{as }\varepsilon\to 0.\end{split}$
On the other hand, we have
(C.12)
$\begin{split}\int_{A^{3}_{\varepsilon}}\left|\nabla\mathbf{Q}_{\varepsilon}\right|^{2}\,\mathrm{d}x=(1+\kappa_{*}\varepsilon)^{2}\int_{B}\left|\nabla\mathbf{Q}^{*}\right|^{2}\,\mathrm{d}x\end{split}$
for any $\varepsilon$, since the Dirichlet energy is invariant under rescalings of a two-dimensional domain. Therefore, (C.3) follows from (C.8), (C.11) and (C.12).
∎
## References
* [1] R. Alicandro and M. Ponsiglione. Ginzburg-Landau functionals and renormalized energy: a revised $\Gamma$-convergence approach. J. Funct. Anal., 266(8):4890–4907, 2014.
* [2] F. Almgren, W. Browder, and E. H. Lieb. Co-area, liquid crystals, and minimal surfaces. In Partial differential equations (Tianjin, 1986), volume 1306 of Lecture Notes in Math., pages 1–22. Springer, Berlin, 1988.
* [3] L. Ambrosio, N. Fusco, and D. Pallara. Functions of bounded variation and free discontinuity problems. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 2000.
* [4] L. Ambrosio and S. Wenger. Rectifiability of flat chains in Banach spaces with coefficients in $\mathbb{Z}_{p}$. Mathematische Zeitschrift, 268:477–506, 2009.
* [5] R. Badal and M. Cicalese. Renormalized energy between fractional vortices with topologically induced free discontinuities on $2$-dimensional Riemannian manifolds. Preprint arXiv 2204.01840, 2022.
* [6] R. Badal, M. Cicalese, L. De Luca, and M. Ponsiglione. $\Gamma$-convergence analysis of a generalized $XY$ model: fractional vortices and string defects. Comm. Math. Phys., 358(2):705–739, 2018.
* [7] S. Baldo. Minimal interface criterion for phase transitions in mixtures of Cahn-Hilliard fluids. Annales de l’Institut Henri Poincare (C) Non Linear Analysis, 7(2):67–90, 1990.
* [8] J. M. Ball and A. Zarnescu. Orientability and energy minimization in liquid crystal models. Arch. Rational Mech. Anal., 202(2):493–535, 2011.
* [9] P. Bauman, J. Park, and D. Phillips. Analysis of nematic liquid crystals with disclination lines. Archive for Rational Mechanics and Analysis, 205(3):795–826, Sep 2012.
* [10] F. Bethuel, H. Brezis, and F. Hélein. Asymptotics for the minimization of a Ginzburg-Landau functional. Cal. Var. Partial Differential Equations, 1(2):123–148, 1993.
* [11] F. Bethuel, H. Brezis, and F. Hélein. Ginzburg-Landau Vortices. Progress in Nonlinear Differential Equations and their Applications, 13. Birkhäuser Boston Inc., Boston, MA, 1994.
* [12] F. Bethuel and D. Chiron. Some questions related to the lifting problem in Sobolev spaces. Contemporary Mathematics, 446:125–152, 2007.
* [13] F. Bethuel and X. Zheng. Density of smooth functions between two manifolds in Sobolev spaces. J. Funct. Anal., 80(1):60 – 75, 1988.
* [14] K. Bisht, Y. Wang, V. Banerjee, and A. Majumdar. Tailored morphologies in two-dimensional ferronematic wells. Physical Review E, 101(2):022706, 2020.
* [15] K. Bisht, V. Banerjee, P. Milewski, and A. Majumdar. Magnetic nanoparticles in a nematic channel: A one-dimensional study. Physical Review E, 100(1):012703, 2019.
* [16] B. Bollobás. Modern Graph Theory. Springer-Verlag, New York, 1998.
* [17] H. Brezis, J.-M. Coron, and E. H. Lieb. Harmonic maps with defects. Comm. Math. Phys., 107(4):649–705, 1986.
* [18] H. Brezis and H.-M. Nguyen. The Jacobian determinant revisited. Invent. Math., 185(1):17–54, 2011.
* [19] F. Brochard and P. G. De Gennes. Theory of magnetic suspensions in liquid crystals. Journal de Physique, 31(7):691–708, 1970.
* [20] S. V. Burylov and Y. L. Raikher. Orientation of a solid particle embedded in a monodomain nematic liquid crystal. Physical review A, Atomic, molecular, and optical physics, 50(1):358–367, 1994.
* [21] S. V. Burylov and Y. L. Raikher. Macroscopic properties of ferronematics caused by orientational interactions on the particle surfaces. I. extended continuum model. Molecular Crystals and Liquid Crystals Science and Technology. Section A., 258(1):107–122, 1995.
* [22] G. Canevari and A. Zarnescu. Design of effective bulk potentials for nematic liquid crystals via colloidal homogenisation. Math. Models Methods Appl. Sci., 30(2):309–342, 2020.
* [23] J. Dalby, P. E. Farrell, A. Majumdar, and J. Xia. One-dimensional ferronematics in a channel: Order reconstruction, bifurcations, and multistability. SIAM Journal on Applied Mathematics, 82(2):694–719, 2022.
* [24] P. G. De Gennes and J. Prost. The Physics of Liquid Crystals. International series of monographs on physics. Clarendon Press, 1993.
* [25] E. De Giorgi and L. Ambrosio. Un nuovo funzionale nel calcolo delle variazioni. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Nat. (8), 82(2):199–210 (1989), 1988.
* [26] M. del Pino and P. L. Felmer. Local minimizers for the Ginzburg-Landau energy. Math. Z., 225(4):671–684, 1997.
* [27] H. Federer. Geometric measure theory. Die Grundlehren der mathematischen Wissenschaften, Band 153. Springer-Verlag New York Inc., New York, 1969.
* [28] I. Fonseca and L. Tartar. The gradient theory of phase transitions for systems with two potential wells. Proceedings of the Royal Society of Edinburgh: Section A Mathematics, 111(1-2):89–102, 1989.
* [29] M. Giaquinta, G. Modica, and J. Souček. Cartesian currents in the calculus of variations, volume 37–38 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer-Verlag, Berlin, 1998.
* [30] M. Goldman, B. Merlet, and V. Millot. A Ginzburg-Landau model with topologically induced free discontinuities. Ann. Inst. Fourier (Grenoble), 70(6):2583–2675, 2020.
* [31] D. Golovaty and J. A. Montero. On minimizers of a Landau-de Gennes energy functional on planar domains. Arch. Rational Mech. Anal., 213(2):447–490, 2014.
* [32] R. Ignat and X. Lamy. Lifting of $\mathbb{{RP}}^{d-1}$-valued maps in BV and applications to uniaxial Q-tensors. with an appendix on an intrinsic BV-energy for manifold-valued maps. Calculus of Variations and Partial Differential Equations, 58(2):68, Mar 2019.
* [33] A. Iserles. A first course in the numerical analysis of differential equations. Number 44. Cambridge University Press, 2009.
* [34] R. L. Jerrard. Lower bounds for generalized Ginzburg-Landau functionals. SIAM J. Math. Anal., 30(4):721–746, 1999.
* [35] R. L. Jerrard and H. M. Soner. Functions of bounded higher variation. Indiana Univ. Math. J., 51(3):645–677, 2003.
* [36] J.P.F. Lagerwall and G. Scalia. A new era for liquid crystal research: Applications of liquid crystals in soft matter nano-, bio- and microtechnology. Current Applied Physics, 12(6):1387–1412, 2012.
* [37] F. H. Lin. Some dynamical properties of Ginzburg-Landau vortices. Comm. Pure Appl. Math., 49(4):323–359, 1996.
* [38] F. H. Lin. Vortex dynamics for the nonlinear wave equation. Comm. Pure Appl. Math., 52(6):737–761, 1999.
* [39] S. Luckhaus. Partial Hölder continuity for minima of certain energies among maps into a Riemannian manifold. Indiana Univ. Math. J., 37(2):349–367, 1988.
* [40] R. R. Maity, A. Majumdar, and N. Nataraj. Parameter dependent finite element analysis for ferronematics solutions. Comput. Math. Appl., 103:127–155, 2021.
* [41] A. Mertelj, D. Lisjak, M. Drofenik, and M. Copic. Ferromagnetism in suspensions of magnetic platelets in liquid crystal. Nature, 504(7479):237–241, 2013.
* [42] L. Modica and S. Mortola. Un esempio di $\Gamma^{-}$-convergenza. Boll. Un. Mat. Ital. B (5), 14(1):285–299, 1977.
* [43] É. Sandier. Lower bounds for the energy of unit vector fields and applications. J. Funct. Anal., 152(2):379–403, 1998. see Erratum, ibidem 171, 1 (2000), 233.
* [44] R. Schoen and K. Uhlenbeck. Boundary regularity and the Dirichlet problem for harmonic maps. J. Differential Geom., 18(2):253–268, 1983.
* [45] L. Simon. Lectures in Geometric Measure Theory. Centre for Mathematical Analysis, Australian National University, Canberra, 1984.
* [46] M. Struwe. On the asymptotic behavior of minimizers of the Ginzburg-Landau model in $2$ dimensions. Differential Integral Equations, 7(5-6):1613–1624, 1994.
* [47] J. Yin, Y. Wang, J. Z. Y. Chen, P. Zhang, and L. Zhang. Construction of a pathway map on a complicated energy landscape. Physical Review Letters, 124(9):090601, 2020.
* [48] W. P. Ziemer. Integral currents mod 2. Transactions of the American Mathematical Society, 105(3):496–524, 1962.
# Convergence of reputations under indirect reciprocity
Bryce Morsky<EMAIL_ADDRESS>Department of Mathematics, Florida State
University, Tallahassee, FL, USA Department of Biology, University of
Pennsylvania, Philadelphia, PA, USA Joshua B. Plotkin<EMAIL_ADDRESS>Department of Biology, University of Pennsylvania, Philadelphia, PA, USA Erol
Akçay<EMAIL_ADDRESS>Department of Biology, University of Pennsylvania,
Philadelphia, PA, USA
###### Abstract
Previous research has shown how indirect reciprocity can promote cooperation
through evolutionary game theoretic models. Most work in this field assumes a
separation of time-scales: individuals’ reputations equilibrate at a fast time
scale for given frequencies of strategies while the strategies change slowly
according to the replicator dynamics. Much of the previous research has
focused on the behaviour and stability of equilibria for the replicator
dynamics. Here we focus on the underlying reputational dynamics that occur on
a fast time scale. We describe reputational dynamics as systems of
differential equations and conduct stability analyses on their equilibria. We
prove that reputations converge to a unique equilibrium for each of the five
standard norms whether assessments are public or private. These results
confirm a crucial but previously unconfirmed assumption underlying the theory
of indirect reciprocity for the most studied set of norms.
Keywords: cooperation, evolutionary game theory, indirect reciprocity,
reputations, social norms
## 1 Introduction
Indirect reciprocity is an important mechanism to foster cooperation.
Theoretical studies have extended an evolutionary game theoretic model of the
Prisoner’s Dilemma game (also termed the Donation game) by adding a system of
reputations and a population of Discriminators who defect against “bad”
individuals and cooperate with “good” ones [4, 6, 11, 13, 14, 15, 19]. To
determine who is good and who is bad, interactions between pairs of
individuals are observed. To assess a donor’s reputation (as good or bad), an
observer may consider the action the donor took (to cooperate or defect), the
reputation of the recipient with the observer (either good or bad), and a
social norm. The social norms provide the rules of how to assess what the
observer observed. For all norms, a donor is assessed as good if they
cooperate with a good recipient. This is a minimum requirement to promote
cooperation. However, the norms will differ on their recommendations for other
scenarios. There are five social norms that are frequently studied: Scoring
[23], Shunning [22], Staying [9], Simple Standing [7], and Stern Judging [18].
Scoring was the first norm studied, and under it a donor is considered good if
they cooperate and bad if they defect. Assessments under Scoring do not depend
on the reputation of the recipient, but this norm leads to a population that
only ever defects. Thus, attention shifted to higher order norms that factor
in the reputation of the recipient (Shunning, Staying, Simple Standing, and
Stern Judging). Under the Stern Judging norm, donors are assessed as good if
they cooperate with good recipients and defect with bad ones. Conversely, they
are assessed as bad if they defect with good recipients and cooperate with bad
ones. In the initial work studying these norms, reputations were generally
assumed to be assessed publicly wherein there is a shared reputational system
[2, 3, 12] and all individuals agree on each other’s reputation. Private
reputations were later explored [15], which allow for individuals to hold
private information and disagree about the reputations of others. Conflict
between different opinions of individuals can undermine the reputational
system and thus cooperation. These five norms and two methods of assessment
(public and private) form the core set of theoretical models of indirect
reciprocity. Additionally, there are a great many models extending this
framework including noisy and incomplete information [5], individuals’
emotions [17] and individuals’ reasoning to account for errors and
discrepancies in reputations [8, 16] to name a few.
Despite a wide range of assumptions about assessment and game play, the above
models generally share the assumption that the dynamics of reputations and
strategies operate at two different time scales. Reputations are assumed to
equilibrate rapidly while strategies change much more slowly. This assumption
can be justified if each individual undergoes many interactions during its
lifetime (if the replicator dynamics model a birth-death process, or if
strategies simply update infrequently). A further justification is that
individuals cannot fully assess the payoffs of their strategies (and thus
cannot compare and imitate them) until reputations have reached an
equilibrium. As a theory-building strategy, this assumption allows the models
to account for the full incentive effects that reputations generate for a
given strategy composition of the population. It also makes the theory
analytically tractable.
Under this separation of time-scale assumption, the strategies of individuals
change in response to payoffs that are computed from the frequencies of the
types of individuals, assuming that reputations are at the equilibrium levels
given the strategy frequencies. Though equilibria of the
reputational system were found previously and used to analyze the replicator
dynamics that govern the change in strategies, whether or not reputations
converge to these equilibria has been understudied. Convergence of reputations to
unique equilibria has only been proved in a few cases such as for models that
incorporate reasoning [8, 16]. However, convergence in some situations is
conditional on the parameter values chosen [16], and thus need not hold
generally. Thus, it is an open and key question as to whether or not
reputations converge. Here, we prove that reputations in the standard
indirect reciprocity model do indeed converge to unique equilibria for the
five common norms and both public and private assessments of reputations. We
do this by representing the reputational dynamics as a system of ordinary
differential equations and analyse the stability of the equilibria of these
systems. This setup places some assumptions on the reputation dynamics, such
as the population being infinite and time being continuous. However, these
assumptions are common in mathematical models of indirect reciprocity, and
they are reasonable so long as the population is large and reputational
updates occur in short intervals of time. In the
methods section, we define the system of ordinary differential equations that
model the reputational dynamics, and in the results section analyze the
stability of their equilibria.
## 2 Methods
Consider three types of players each playing a specific strategy in the
donation game. AllC (always cooperate) players are those who always _intend_
to cooperate regardless of the reputation of the recipient, and $x$ is their
frequency in the population. Note that AllC players only intend to cooperate:
they don’t always successfully do so. As discussed below, we assume — as is
standard in the literature — that errors may occur when players attempt to
cooperate by which they unintentionally defect. AllD players always defect,
and their frequency in the population is $y$. Note that there is no
possibility for an AllD player to accidentally cooperate. The third and final
type of player are Discriminators, who intend to cooperate with good
recipients and defect against bad ones. They thus act as punishers of “bad
behaviour” (as determined by the social norm). Their frequency in the
population is $z$, and we have $x+y+z=1$. Since reputations converge rapidly,
i.e. before strategies can change, $x$, $y$, and $z$ will be constants in our
analyses.
Errors in action and assessment are assumed in many models of indirect
reciprocity. Let $\tfrac{1}{2}>e_{1}>0$ be the probability that a donor who
intends to cooperate defects by mistake. Further, let $\tfrac{1}{2}>e_{2}>0$
be the probability that there is an error in the assessment of the reputation
of the donor. That is to say, with probability $e_{2}$, the observer assigns
the _opposite_ reputation to the donor than the one they intended to assign. Define
$\epsilon=(1-e_{1})(1-e_{2})+e_{1}e_{2}$ as the probability that an individual
who intends to cooperate is observed doing so. We will use $\epsilon$
throughout our analysis rather than $e_{1}$. Also, we write $e=e_{2}$ to
further simplify our notation. The parameters $\epsilon$ and $e$ along with
the social norm and whether or not assessment are public will thus determine
reputations.
| Social norm | $C/G$ | $D/G$ | $C/B$ | $D/B$ |
|---|---|---|---|---|
| Scoring | $G$ | $B$ | $G$ | $B$ |
| Shunning | $G$ | $B$ | $B$ | $B$ |
| Simple Standing | $G$ | $B$ | $G$ | $G$ |
| Staying | $G$ | $B$ | — | — |
| Stern Judging | $G$ | $B$ | $B$ | $G$ |
Table 1: Assessments of the donor (either $G$ or $B$ for good or bad) given
the donor’s action (either $C$ or $D$ for cooperate or defect), recipient’s
reputation ($G$ or $B$), and the social norm. The dash under Staying implies
that the reputation of the donor is not updated when they interact with a
recipient with a bad reputation.
Social norms determine what actions are good and what bad given the reputation
of the donor. Five important norms frequently studied in the literature are:
Scoring, Shunning, Staying, Simple Standing, and Stern Judging. The rules for
these norms are represented in Table 1. For example, under Simple Standing, a
donor is assessed as good if they cooperate with a good recipient, and bad if
they do not. And, they’re assessed as good when they interact with a bad
recipient, regardless of whether or not they cooperate. Note that due to the
error in assessment, it is possible for an observer to assess a donor who
interacts with a bad recipient as bad. However, in models where players can
factor in error rates and thereby reason about the intention of the donor,
this cannot happen [8, 16].
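For concreteness, Table 1 can be encoded as a simple lookup table. This is an illustrative sketch (the names `NORMS` and `assess` are ours, not from any reference implementation), with `None` marking the "no update" entries under Staying.

```python
# Illustrative encoding of Table 1; NORMS/assess are our own names.
NORMS = {
    "Scoring":         {("C", "G"): "G", ("D", "G"): "B", ("C", "B"): "G",  ("D", "B"): "B"},
    "Shunning":        {("C", "G"): "G", ("D", "G"): "B", ("C", "B"): "B",  ("D", "B"): "B"},
    "Simple Standing": {("C", "G"): "G", ("D", "G"): "B", ("C", "B"): "G",  ("D", "B"): "G"},
    "Staying":         {("C", "G"): "G", ("D", "G"): "B", ("C", "B"): None, ("D", "B"): None},
    "Stern Judging":   {("C", "G"): "G", ("D", "G"): "B", ("C", "B"): "B",  ("D", "B"): "G"},
}

def assess(norm, action, recipient_reputation):
    """Donor's new reputation ('G' or 'B'), or None when the norm makes no update."""
    return NORMS[norm][(action, recipient_reputation)]

assert assess("Stern Judging", "D", "B") == "G"  # defecting against a bad recipient is good
```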
The state variables in our analyses of the reputational dynamics are the
fraction of players of each type that are good. Thus, $g_{x}$ and $1-g_{x}$
are the frequencies of AllC players with good and bad reputations,
respectively. Likewise $g_{y}$ and $1-g_{y}$ are the frequencies of AllD
players with good and bad reputations, respectively. Finally, $g_{z}$ and
$1-g_{z}$ are the frequencies of Discriminators with good and bad reputations,
respectively. The total frequency of good players in the population is
$g=xg_{x}+yg_{y}+zg_{z}$.
Reputations can be assessed publicly or privately, a distinction that impacts the assessment
of the reputations of Discriminators because Discriminators’ behaviours are
determined by the reputation the recipient has with them. Thus, if this
differs between the observer and a donor Discriminator, we need to know the
frequency with which two players agree that a recipient is good, denoted
$g_{2}$. The probability that two players both regard an AllC player as good
is denoted $g_{x2}$; $g_{y2}$ and $g_{z2}$ are defined similarly for AllD
players and Discriminators. Thus, $g_{2}=xg_{x2}+yg_{y2}+zg_{z2}$. Under private
assessment of reputations, $g_{x2}$, $g_{y2}$, and $g_{z2}$ will be state
variables in addition to $g_{x}$, $g_{y}$, and $g_{z}$.
As in previous models, we assume that reputations change by a good individual
being reassessed as bad or a bad individual being reassessed as good [15, 19].
Let $g_{i}^{+}$ be the probability that a bad individual of type $i$ is
reassessed as good, and $g_{i}^{-}$ be the probability that a good individual
of type $i$ is reassessed as bad. To represent this process as a system of
differential equations, we define $\dot{g}_{i}=g_{i}^{+}-g_{i}^{-}$ by
assuming a limiting process. We are thus able to convert a discrete process
into a continuous one. We use the same values of $g_{i}^{+}$ and $g_{i}^{-}$
from [15, 19]. The reputational systems under public assessment for the five
norms are thus:
(1a)
$\begin{rcases}\dot{g}_{x}=\epsilon-g_{x}\\\ \dot{g}_{y}=e-g_{y}\\\ \dot{g}_{z}=\epsilon g+e(1-g)-g_{z}\end{rcases}\text{Scoring},$
(1b)
$\begin{rcases}\dot{g}_{x}=\epsilon g+e(1-g)-g_{x}\\\ \dot{g}_{y}=e-g_{y}\\\ \dot{g}_{z}=\epsilon g+e(1-g)-g_{z}\end{rcases}\text{Shunning},$
(1c)
$\begin{rcases}\dot{g}_{x}=(\epsilon-g_{x})g\\\ \dot{g}_{y}=(e-g_{y})g\\\ \dot{g}_{z}=(\epsilon-g_{z})g\end{rcases}\text{Staying},$
(1d)
$\begin{rcases}\dot{g}_{x}=\epsilon g+(1-e)(1-g)-g_{x}\\\ \dot{g}_{y}=eg+(1-e)(1-g)-g_{y}\\\ \dot{g}_{z}=\epsilon g+(1-e)(1-g)-g_{z}\end{rcases}\text{Simple Standing},$
(1e)
$\begin{rcases}\dot{g}_{x}=\epsilon g+(1-\epsilon)(1-g)-g_{x}\\\ \dot{g}_{y}=eg+(1-e)(1-g)-g_{y}\\\ \dot{g}_{z}=\epsilon g+(1-e)(1-g)-g_{z}\end{rcases}\text{Stern Judging}.$
And for private assessment of reputations, the systems are as follows:
(2a)
$\begin{rcases}\dot{g}_{x}=\epsilon g+e(1-g)-g_{x}\\\ \dot{g}_{y}=e-g_{y}\\\ \dot{g}_{z}=\epsilon g_{2}+e(1-g_{2})-g_{z}\\\ \dot{g}_{x2}=(\epsilon g+e(1-g))g_{x}-g_{x2}\\\ \dot{g}_{y2}=eg_{y}-g_{y2}\\\ \dot{g}_{z2}=(\epsilon g_{2}+e(1-g_{2}))g_{z}-g_{z2}\end{rcases}\text{Shunning},$
(2b)
$\begin{rcases}\dot{g}_{x}=(\epsilon-g_{x})g\\\ \dot{g}_{y}=(e-g_{y})g\\\ \dot{g}_{z}=\epsilon g_{2}+e(g-g_{2})-g_{z}g\\\ \dot{g}_{x2}=(\epsilon g_{x}-g_{x2})g\\\ \dot{g}_{y2}=(eg_{y}-g_{y2})g\\\ \dot{g}_{z2}=(\epsilon g_{2}+e(g-g_{2}))g_{z}-g_{z2}g\end{rcases}\text{Staying},$
(2c)
$\begin{rcases}\dot{g}_{x}=\epsilon g+(1-e)(1-g)-g_{x}\\\ \dot{g}_{y}=eg+(1-e)(1-g)-g_{y}\\\ \dot{g}_{z}=\epsilon g_{2}+e(g-g_{2})+(1-e)(1-g)-g_{z}\\\ \dot{g}_{x2}=(\epsilon g+(1-e)(1-g))g_{x}-g_{x2}\\\ \dot{g}_{y2}=(eg+(1-e)(1-g))g_{y}-g_{y2}\\\ \dot{g}_{z2}=(\epsilon g_{2}+e(g-g_{2})+(1-e)(1-g))g_{z}-g_{z2}\end{rcases}\text{Simple Standing},$
(2d)
$\begin{rcases}\dot{g}_{x}=\epsilon g+(1-\epsilon)(1-g)-g_{x}\\\ \dot{g}_{y}=eg+(1-e)(1-g)-g_{y}\\\ \dot{g}_{z}=(1-2g)(1-e)+g+(\epsilon-e)(2g_{2}-g)-g_{z}\\\ \dot{g}_{x2}=(\epsilon g+(1-\epsilon)(1-g))g_{x}-g_{x2}\\\ \dot{g}_{y2}=(eg+(1-e)(1-g))g_{y}-g_{y2}\\\ \dot{g}_{z2}=((1-2g)(1-e)+g+(\epsilon-e)(2g_{2}-g))g_{z}-g_{z2}\end{rcases}\text{Stern Judging}.$
Since the reputation of the recipient is irrelevant for Scoring, the
reputation dynamics are the same whether assessments are public or private.
To be more explicit about how these equations are derived, consider the dynamics
for AllC under Simple Standing. $g_{x}$ increases when a bad AllC player is
reassessed as good, which occurs with probability
$g_{x}^{+}=(1-g_{x})(\epsilon g+(1-e)(1-g))$. A bad AllC player is selected
with probability $1-g_{x}$. With probabilities $g$ and $1-g$ they pair with a
good and bad recipient, respectively. When paired with a good recipient, the
bad AllC player is reassessed as good with probability $\epsilon$. When paired
with a bad recipient, they are reassessed as good with probability $1-e$.
In a similar fashion, a good AllC player is reassessed as bad with
probability $g_{x}^{-}=g_{x}((1-\epsilon)g+e(1-g))$. Simplifying (and using
that the two conditional reassessment probabilities sum to one), we have
$g_{x}^{+}-g_{x}^{-}=\epsilon g+(1-e)(1-g)-g_{x}$, which we define to be
$\dot{g}_{x}$. The reputational dynamics for $g_{y}$ can be found similarly.
Under public assessment of reputations, $g_{z}$ is also found similarly.
However, under private assessment, one has to take care of how the
Discriminator donor and the observer view the reputation of the recipient. If
both agree that the recipient is good, then the donor will intend to cooperate and
the observer will evaluate them as if they’re interacting with a good
recipient. This occurs with probability $g_{2}$ and the donor will then be
assessed as good with probability $\epsilon$. With probability $g-g_{2}$ the
donor believes that the recipient is bad and so defects, but the observer
believes that they’re good. A donor who intends to defect against a good
recipient will be assessed as good only if an error in assessment occurs, i.e.
with probability $e$. Finally, with probability $1-g$, the observer believes
that the recipient is bad, and thus will assess the donor as good so long as
there is no error in assessment, i.e. with probability $1-e$. Thus, a bad
Discriminator will be reassessed as good with probability
$g_{z}^{+}=(1-g_{z})(\epsilon g_{2}+e(g-g_{2})+(1-e)(1-g))$. $g_{z}^{-}$ is
found in a similar way. However, we consider the probabilities that the donor
is bad. For example, if both donor and observer believe that the recipient is
good, which happens with probability $g_{2}$, then the donor is assessed as
bad with probability $1-\epsilon$. The other terms are found in a similar way
giving us $g_{z}^{-}=g_{z}((1-\epsilon)g_{2}+(1-e)(g-g_{2})+e(1-g))$ and
$\dot{g}_{z}=g_{z}^{+}-g_{z}^{-}=\epsilon g_{2}+e(g-g_{2})+(1-e)(1-g)-g_{z}$.
Continuing with the example of Simple Standing under private assessment of
reputations, $g_{x}-g_{x2}$ is the probability that one player believes that
an AllC player is good and another believes that they are bad. Thus, $g_{x2}$
increases when such an AllC player is reassessed as good, which happens with
probability $g_{x2}^{+}=(g_{x}-g_{x2})(\epsilon g+(1-e)(1-g))$. In a similar
way, we can calculate the probability that $g_{x2}$ decreases as
$g_{x2}^{-}=g_{x2}((1-\epsilon)g+e(1-g))$, which gives us
$\dot{g}_{x2}=g_{x2}^{+}-g_{x2}^{-}=(\epsilon g+(1-e)(1-g))g_{x}-g_{x2}$.
$g_{y2}^{+}$, $g_{y2}^{-}$, $\dot{g}_{y2}$, $g_{z2}^{+}$, $g_{z2}^{-}$, and
$\dot{g}_{z2}$ are all computed similarly.
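To illustrate the systems above, the following minimal sketch (assuming NumPy and SciPy are available; the parameter values and variable names are our own, illustrative choices) integrates the public Stern Judging system (1e) and compares the long-run frequency of good players with the equilibrium $g^{*}$ derived in the results section (Equation (10)).

```python
# Illustrative integration of the public Stern Judging system (1e).
import numpy as np
from scipy.integrate import solve_ivp

x, y, z = 0.2, 0.3, 0.5           # frequencies of AllC, AllD, Discriminators
e1, e = 0.05, 0.05                # action and assessment error rates
eps = (1 - e1) * (1 - e) + e1 * e

def rhs(t, s):
    gx, gy, gz = s
    g = x * gx + y * gy + z * gz
    return [eps * g + (1 - eps) * (1 - g) - gx,
            e * g + (1 - e) * (1 - g) - gy,
            eps * g + (1 - e) * (1 - g) - gz]

sol = solve_ivp(rhs, (0.0, 100.0), [0.9, 0.1, 0.5], rtol=1e-10, atol=1e-12)
gx, gy, gz = sol.y[:, -1]
g_num = x * gx + y * gy + z * gz

B = 1 - eps * x - e * (1 - x)     # numerator of Equation (10)
A = eps * (1 - y) + e * y
print(g_num, B / (B + 1 - A))     # the two values agree to integrator tolerance
```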
## 3 Results
### 3.1 Public Assessment
We first analyse the case of public assessment of reputations.
Equilibria have previously been found in the literature [19], but here we show
that reputations converge to a unique equilibrium for each norm. Beginning
with Scoring, there is only one equilibrium frequency of good individuals,
namely
$g^{*}=\frac{\epsilon x+e(1-x)}{1-(\epsilon-e)z},$ (3)
which is defined across the simplex and for all error rates (as will all
equilibria here). The Jacobian of the system of ODEs is
$\mathbf{J}=\begin{pmatrix}-1&0&0\\\ 0&-1&0\\\
(\epsilon-e)x&(\epsilon-e)y&(\epsilon-e)z-1\\\ \end{pmatrix},$ (4)
which has eigenvalues $\lambda_{1}=\lambda_{2}=-1$ and
$\lambda_{3}=(\epsilon-e)z-1<0$. Note that $\mathbf{J}$ is not a function of
$g$, which will be the case under public assessment for all norms but Staying.
Since the eigenvalues are negative and $g^{*}$ is the sole equilibrium,
$g^{*}$ is stable. Additionally, since assessments under Scoring do not depend
upon the reputation of the recipient, there is no difference in Scoring
between public and private assessments and thus these results hold for both.
For Shunning, the sole equilibrium frequency of good individuals is
$g^{*}=\frac{e}{1-(\epsilon-e)(1-y)},$ (5)
and the Jacobian of the system is
$\mathbf{J}=\begin{pmatrix}(\epsilon-e)x-1&(\epsilon-e)y&(\epsilon-e)z\\\
0&-1&0\\\ (\epsilon-e)x&(\epsilon-e)y&(\epsilon-e)z-1\\\ \end{pmatrix}.$ (6)
It has eigenvalues $\lambda_{1}=\lambda_{2}=-1$ and
$\lambda_{3}=(\epsilon-e)(1-y)-1<0$, and thus $g^{*}$ is stable.
For Staying, there are two equilibria: $g^{*}=0$ and $g^{*}=\epsilon(1-y)+ey$,
and in the latter case $g_{x}^{*}=g_{z}^{*}=\epsilon$ and $g_{y}^{*}=e$.
Substituting these solutions into the Jacobian gives us the following matrices:
$\mathbf{J}(0)=\begin{pmatrix}\epsilon x&\epsilon y&\epsilon z\\\ ex&ey&ez\\\
\epsilon x&\epsilon y&\epsilon z\\\ \end{pmatrix},$ (7a)
$\mathbf{J}(\epsilon(1-y)+ey)=\begin{pmatrix}-\epsilon(1-y)-ey&0&0\\\
0&-\epsilon(1-y)-ey&0\\\ 0&0&-\epsilon(1-y)-ey\\\ \end{pmatrix}.$ (7b)
The eigenvalues for Equation 7a are $\lambda_{1}=\lambda_{2}=0$ and
$\lambda_{3}=\epsilon(1-y)+ey>0$, and thus $g^{*}=0$ is unstable. The
eigenvalues for Equation 7b are
$\lambda_{1}=\lambda_{2}=\lambda_{3}=-(1-y)\epsilon-ey<0$, and thus
$g^{*}=\epsilon(1-y)+ey$ is stable.
For Simple Standing, we have the sole equilibrium
$g^{*}=\frac{1-e}{1-e+1-\epsilon(1-y)-ey}.$ (8)
The Jacobian of the system is
$\mathbf{J}=\begin{pmatrix}(\epsilon+e-1)x-1&(\epsilon+e-1)y&(\epsilon+e-1)z\\\
(2e-1)x&(2e-1)y-1&(2e-1)z\\\
(\epsilon+e-1)x&(\epsilon+e-1)y&(\epsilon+e-1)z-1\\\ \end{pmatrix},$ (9)
which has eigenvalues $\lambda_{1}=\lambda_{2}=-1$ and
$\lambda_{3}=e-1+\epsilon(1-y)+ey-1<0$. Therefore, $g^{*}$ is stable.
And, finally, for Stern Judging, we have the sole equilibrium
$g^{*}=\frac{1-\epsilon x-e(1-x)}{1-\epsilon x-e(1-x)+1-\epsilon(1-y)-ey}.$
(10)
The Jacobian of the system is
$\mathbf{J}=\begin{pmatrix}(2\epsilon-1)x-1&(2\epsilon-1)y&(2\epsilon-1)z\\\
(2e-1)x&(2e-1)y-1&(2e-1)z\\\
(\epsilon+e-1)x&(\epsilon+e-1)y&(\epsilon+e-1)z-1\\\ \end{pmatrix},$ (11)
which has eigenvalues $\lambda_{1}=\lambda_{2}=-1$ and $\lambda_{3}=\epsilon
x+e(1-x)-1+\epsilon(1-y)+ey-1<0$. Therefore, $g^{*}$ is stable.
### 3.2 Private Assessment
For Shunning, the sole equilibrium frequency of good individuals is
$g^{*}=\frac{\epsilon g_{2}^{*}z+e(1-g_{2}^{*}z)}{1-(\epsilon-e)x}.$ (12)
The Jacobian of the system evaluated at this equilibrium is
$\mathbf{J}(g^{*})=\begin{pmatrix}(\epsilon-e)x-1&(\epsilon-e)y&(\epsilon-e)z&0&0&0\\\
0&-1&0&0&0&0\\\ 0&0&-1&(\epsilon-e)x&(\epsilon-e)y&(\epsilon-e)z\\\
((\epsilon-e)x+1)g_{x}^{*}&(\epsilon-e)g_{x}^{*}y&(\epsilon-e)g_{x}^{*}z&-1&0&0\\\
0&e&0&0&-1&0\\\
0&0&g_{z}^{*}&(\epsilon-e)g_{z}^{*}x&(\epsilon-e)g_{z}^{*}y&(\epsilon-e)g_{z}^{*}z-1\\\
\end{pmatrix}.$ (13)
The characteristic equation is
$(1+\lambda)^{3}(\lambda^{3}+c_{2}\lambda^{2}+c_{1}\lambda+c_{0})=0$ with the
following coefficients:
$\displaystyle c_{2}$ $\displaystyle=3-(\epsilon-e)(x+g_{z}^{*}z)>2,$ (14a)
$\displaystyle c_{1}$
$\displaystyle=3-(\epsilon-e)(3g_{z}^{*}z+(2+(\epsilon-e)(g_{x}^{*}-g_{z}^{*})z)x)>0,$
(14b) $\displaystyle c_{0}$
$\displaystyle=1-(\epsilon-e)(x+2g_{z}^{*}z+2(\epsilon-e)(g_{x}^{*}-g_{z}^{*})xz)$
$\displaystyle=1-(\epsilon-e)(2ez+x(1+2g^{*}z(\epsilon-e)^{2}))-2z(\epsilon-e)^{2}(1-(\epsilon-e)x)g_{2}^{*}$
$\displaystyle\geq
1-(\epsilon-e)(2ez+x(1+2g^{*}z(\epsilon-e)^{2}))-2z(\epsilon-e)^{2}(1-(\epsilon-e)x)g^{*}$
$\displaystyle=1-(\epsilon-e)x-2(\epsilon-e)((\epsilon-e)g^{*}+e)z$
$\displaystyle\geq 1-(\epsilon-e)x-\frac{2e(\epsilon-e)}{1-\epsilon+e}z$
$\displaystyle>1-(\epsilon-e)x-z\geq 0.$ (14c)
The inequalities for $c_{0}$ hold for the following reasons:
$g_{x}^{*}=(\epsilon-e)g^{*}+e$ and $g_{z}^{*}=(\epsilon-e)g_{2}^{*}+e$;
$g^{*}\geq g_{2}^{*}$;
$g_{x}^{*}=(\epsilon-e)g^{*}+e\leq(\epsilon-e)g_{x}^{*}+e\implies
g_{x}^{*}\leq e/(1-\epsilon+e)$; and
$1-\epsilon+e-2e(\epsilon-e)=e_{1}+4(1-e_{1})e^{2}>0$. The first three
eigenvalues are negative as can be seen from the first factor of the
characteristic equation. The second factor is a cubic equation of $\lambda$.
Note that all of the coefficients of this cubic are positive. The Routh-
Hurwitz criterion for stability requires that all coefficients of this cubic
be positive and that $c_{2}c_{1}-c_{0}>0$. Checking this last condition gives us
$c_{2}c_{1}-c_{0}>2c_{1}-c_{0}=5-(\epsilon-e)(3x+4g_{z}^{*}z)>0.$ (15)
Therefore, $g^{*}$ is stable.
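The inequality chain above can be probed numerically. In the sketch below we do not solve for the equilibrium exactly; instead we sample values satisfying the relations invoked in the proof ($g_{x}^{*}=(\epsilon-e)g^{*}+e$, $g_{z}^{*}=(\epsilon-e)g_{2}^{*}+e$, $g_{2}^{*}\leq g^{*}\leq e/(1-\epsilon+e)$), so the sampling constraints are an assumption of the sketch rather than a statement about the dynamics.

```python
# Routh-Hurwitz check for the cubic factor in the Shunning (private
# assessment) characteristic equation, at randomly sampled parameters.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10000):
    x, y, z = rng.dirichlet(np.ones(3))
    e = rng.uniform(0.0, 0.5)
    eps = rng.uniform(e, 1.0 - e)             # eps > e and eps + e < 1
    g = rng.uniform(0.0, e / (1.0 - eps + e)) # g* <= g_x* <= e/(1-eps+e)
    g2 = rng.uniform(0.0, g)                  # g2* <= g*
    gx = (eps - e)*g + e                      # g_x* = (eps - e) g* + e
    gz = (eps - e)*g2 + e                     # g_z* = (eps - e) g2* + e
    c2 = 3 - (eps - e)*(x + gz*z)                                     # (14a)
    c1 = 3 - (eps - e)*(3*gz*z + (2 + (eps - e)*(gx - gz)*z)*x)       # (14b)
    c0 = 1 - (eps - e)*(x + 2*gz*z + 2*(eps - e)*(gx - gz)*x*z)       # (14c)
    assert c2 > 2 and c1 > 0 and c0 > 0 and c2*c1 - c0 > 0  # Routh-Hurwitz
print("Routh-Hurwitz conditions hold for all sampled parameters")
```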
There are two equilibria for Staying under private assessment. The first of
which is $g^{*}=0$, and evaluating the Jacobian at this equilibrium gives us
$\mathbf{J}(0)=\begin{pmatrix}x\epsilon&y\epsilon&z\epsilon&0&0&0\\\
ex&ey&ez&0&0&0\\\ ex&ey&ez&(\epsilon-e)x&(\epsilon-e)y&(\epsilon-e)z\\\
0&0&0&0&0&0\\\ 0&0&0&0&0&0\\\ 0&0&0&0&0&0\\\ \end{pmatrix}.$ (16)
The eigenvalues are $\lambda_{i}=0$ for $i=1,\ldots,5$ and
$\lambda_{6}=\epsilon x+e(1-x)>0$. Thus, $g^{*}=0$ is unstable. At the other
equilibrium, $g_{x}^{*}=\epsilon$, $g_{y}^{*}=e$,
$g_{z}^{*}=(\epsilon-e)g_{2}^{*}/g^{*}+e$, and the Jacobian evaluated at it is
$\displaystyle\mathbf{J}(g^{*})$
$\displaystyle=\begin{pmatrix}\mathbf{J}_{1}&\mathbf{J}_{2}\\\
\mathbf{J}_{3}&\mathbf{J}_{4}\end{pmatrix},$ (17a)
$\displaystyle\mathbf{J}_{1}$ $\displaystyle=\begin{pmatrix}-g^{*}&0&0\\\
0&-g^{*}&0\\\ (e-g_{z}^{*})x&(e-g_{z}^{*})y&(e-g_{z}^{*})z-g^{*}\\\
\end{pmatrix},$ (17b) $\displaystyle\mathbf{J}_{2}$
$\displaystyle=\begin{pmatrix}0&0&0\\\ 0&0&0\\\
(\epsilon-e)x&(\epsilon-e)y&(\epsilon-e)z\\\ \end{pmatrix},$ (17c)
$\displaystyle\mathbf{J}_{3}$ $\displaystyle=\begin{pmatrix}\epsilon
g^{*}&0&0\\\ 0&eg^{*}&0\\\
(e-g_{z}^{*})g_{z}^{*}x&(e-g_{z}^{*})g_{z}^{*}y&(e-g_{z}^{*})g_{z}^{*}z+g_{z}^{*}g^{*}\\\
\end{pmatrix},$ (17d) $\displaystyle\mathbf{J}_{4}$
$\displaystyle=\begin{pmatrix}-g^{*}&0&0\\\ 0&-g^{*}&0\\\
(\epsilon-e)g_{z}^{*}x&(\epsilon-e)g_{z}^{*}y&(\epsilon-e)g_{z}^{*}z-g^{*}\\\
\end{pmatrix}.$ (17e)
The characteristic equation is
$(\lambda+g^{*})^{4}(\lambda^{2}+c_{1}\lambda+c_{0})=0$. Since $g^{*}\geq
g_{y}^{*}=e$,
$\displaystyle c_{1}$ $\displaystyle=2g^{*}-ez+(1-\epsilon+e)g_{z}^{*}z\geq
e(2-z)+(1-\epsilon+e)g_{z}^{*}z>0,$ (18a) $\displaystyle c_{0}$
$\displaystyle=g^{*}(g^{*}+((1-2(\epsilon-e))g_{z}^{*}-e)z)\geq
g^{*}(g_{z}^{*}z+((1-2(\epsilon-e))g_{z}^{*}-e)z)=g^{*}\sqrt{k_{1}+k_{2}+k_{3}}>0,$
(18b) $\displaystyle k_{1}$ $\displaystyle=e^{2}(y-z)^{2}+2\epsilon
exy+\epsilon^{2}x^{2}>0,$ (18c) $\displaystyle k_{2}$
$\displaystyle=2e^{2}yz(2(1-\epsilon^{2})+2e(2\epsilon-e))>0,$ (18d)
$\displaystyle k_{3}$ $\displaystyle=2\epsilon
exz+4\epsilon(1-\epsilon)(\epsilon-e)^{2}xz>0.$ (18e)
We obtain the inequality for Equation 18b by solving $g^{*}=\epsilon
x+ey+g_{z}^{*}z$ and
$g_{2}^{*}=\epsilon^{2}x+e^{2}y+(g_{z}^{*})^{2}z=\epsilon^{2}x+e^{2}y+((\epsilon-e)g_{2}^{*}/g^{*}+e)^{2}z$
for $g^{*}$ and $g_{2}^{*}$ and then plugging these solutions into $c_{0}$.
Note also that the radicand is positive. Thus, the last two eigenvalues must
be negative and so $g^{*}$ is stable.
Next consider Simple Standing. The equilibrium frequency of good individuals
is
$g^{*}=\frac{1-e+(\epsilon-e)zg_{2}^{*}}{1-2e+1-(\epsilon-e)},$ (19)
and the Jacobian evaluated at it is
$\displaystyle\mathbf{J}(g^{*})$
$\displaystyle=\begin{pmatrix}\mathbf{J}_{1}&\mathbf{J}_{2}\\\
\mathbf{J}_{3}&\mathbf{J}_{4}\end{pmatrix},$ (20a)
$\displaystyle\mathbf{J}_{1}$
$\displaystyle=\begin{pmatrix}(\epsilon+e-1)x-1&(\epsilon+e-1)y&(\epsilon+e-1)z\\\
(2e-1)x&(2e-1)y-1&(2e-1)z\\\ (2e-1)x&(2e-1)y&(2e-1)z-1\\\ \end{pmatrix},$
(20b) $\displaystyle\mathbf{J}_{2}$ $\displaystyle=\begin{pmatrix}0&0&0\\\
0&0&0\\\ (\epsilon-e)x&(\epsilon-e)y&(\epsilon-e)z\\\ \end{pmatrix},$ (20c)
$\displaystyle\mathbf{J}_{3}$
$\displaystyle=\begin{pmatrix}(\epsilon+e-1)g_{x}^{*}x+g_{x}^{*}&(\epsilon+e-1)g_{x}^{*}y&(\epsilon+e-1)g_{x}^{*}z\\\
(2e-1)g_{y}^{*}x&(2e-1)g_{y}^{*}y+g_{y}^{*}&(2e-1)g_{y}^{*}z\\\
(2e-1)g_{z}^{*}x&(2e-1)g_{z}^{*}y&(2e-1)g_{z}^{*}z+g_{z}^{*}\\\
\end{pmatrix},$ (20d) $\displaystyle\mathbf{J}_{4}$
$\displaystyle=\begin{pmatrix}-1&0&0\\\ 0&-1&0\\\
(\epsilon-e)g_{z}^{*}x&(\epsilon-e)g_{z}^{*}y&(\epsilon-e)g_{z}^{*}z-1\\\
\end{pmatrix}.$ (20e)
The characteristic equation is
$(\lambda+1)^{3}(\lambda^{3}+c_{2}\lambda^{2}+c_{1}\lambda+c_{0})=0$ with
positive coefficients:
$\displaystyle c_{2}$
$\displaystyle=2+1-(\epsilon-e)g_{z}^{*}z+(1-\epsilon-e)x+(1-2e)y+(1-2e)z>2,$
(21a) $\displaystyle c_{1}$
$\displaystyle=(1+(1-\epsilon-e)(2+(\epsilon-e)(g_{x}^{*}-g_{z}^{*})z))x+2(1-2e)(1-(\epsilon-e)(g_{z}^{*}-g_{y}^{*})z)y$
$\displaystyle+3(1-(\epsilon-e)g_{z}^{*}z)+2(1-2e)z>0,$ (21b) $\displaystyle
c_{0}$
$\displaystyle=(1+(1-\epsilon-e)(1+2(\epsilon-e)(g_{x}^{*}-g_{z}^{*})z))x+2(e+(1-2e)(1-(\epsilon-e)(g_{z}^{*}-g_{y}^{*})z))y$
$\displaystyle+2(1-\epsilon g_{z}^{*}-e(1-g_{z}^{*}))z>0,$ (21c)
since $1-\epsilon-e=e_{1}(1-2e)>0$. The first three eigenvalues are negative.
Since all of the coefficients of the cubic are positive, we need only
confirm that $c_{2}c_{1}-c_{0}>2c_{1}-c_{0}>0$ to prove stability. Checking
this last condition gives us
$\displaystyle 2c_{1}-c_{0}$
$\displaystyle=8-6e-3x(\epsilon-e)-4(\epsilon-e)(1-x-y)g_{z}^{*}$
$\displaystyle\geq 8-6e-3x(\epsilon-e)-4(\epsilon-e)(1-x-y)$
$\displaystyle=4-2e+4(1-\epsilon)+(\epsilon-e)(x+4y)>0.$ (22)
Therefore, it is stable.
Finally, consider Stern Judging, which has the equilibrium
$g^{*}=g_{x}^{*}=g_{y}^{*}=g_{z}^{*}=\tfrac{1}{2}$ [15]. Evaluating the
Jacobian at this equilibrium gives us
$\displaystyle\mathbf{J}(g^{*})$
$\displaystyle=\begin{pmatrix}\mathbf{J}_{1}&\mathbf{J}_{2}\\\
\mathbf{J}_{3}&\mathbf{J}_{4}\end{pmatrix},$ (23a)
$\displaystyle\mathbf{J}_{1}$
$\displaystyle=\begin{pmatrix}(2\epsilon-1)x-1&(2\epsilon-1)y&(2\epsilon-1)z\\\
(2e-1)x&(2e-1)y-1&(2e-1)z\\\
(2e-1+e-\epsilon)x&(2e-1+e-\epsilon)y&(2e-1+e-\epsilon)z-1\\\ \end{pmatrix},$
(23b) $\displaystyle\mathbf{J}_{2}$ $\displaystyle=\begin{pmatrix}0&0&0\\\
0&0&0\\\ 2(\epsilon-e)x&2(\epsilon-e)y&2(\epsilon-e)z\\\ \end{pmatrix},$ (23c)
$\displaystyle\mathbf{J}_{3}$
$\displaystyle=\begin{pmatrix}(2\epsilon-1)g_{x}^{*}x+g_{x}^{*}&(2\epsilon-1)g_{x}^{*}y&(2\epsilon-1)g_{x}^{*}z\\\
(2e-1)g_{y}^{*}x&(2e-1)g_{y}^{*}y+g_{y}^{*}&(2e-1)g_{y}^{*}z\\\
(2e-1+e-\epsilon)g_{z}^{*}x&(2e-1+e-\epsilon)g_{z}^{*}y&(2e-1+e-\epsilon)g_{z}^{*}z+g_{z}^{*}\\\
\end{pmatrix},$ (23d) $\displaystyle\mathbf{J}_{4}$
$\displaystyle=\begin{pmatrix}-1&0&0\\\ 0&-1&0\\\
2(\epsilon-e)g_{z}^{*}x&2(\epsilon-e)g_{z}^{*}y&2(\epsilon-e)g_{z}^{*}z-1\\\
\end{pmatrix}.$ (23e)
The characteristic equation is
$(\lambda+1)^{2}(\lambda^{4}+c_{3}\lambda^{3}+c_{2}\lambda^{2}+c_{1}\lambda+c_{0})=0$
with coefficients:
$\displaystyle c_{3}$ $\displaystyle=4-\epsilon
x+(1-\epsilon)x+(1-2e)(1-x)>0,$ (24a) $\displaystyle c_{2}$
$\displaystyle=3+3(1-\epsilon
x)+3(1-\epsilon)x+3(1-2e)y+(2(1-2e)+1-\epsilon-e)z>3,$ (24b) $\displaystyle
c_{1}$ $\displaystyle=3(1-\epsilon x)+3(1-\epsilon)x+3(1-2e)y+1-\epsilon
z+(2(1-2e)+1-\epsilon)z>0$ (24c) $\displaystyle c_{0}$
$\displaystyle=1-\epsilon x+(1-\epsilon)x+(1-2e)y+(1-\epsilon-e)z>0.$ (24d)
Further, we have the following inequalities:
$\displaystyle c_{3}c_{2}-c_{1}\geq 3c_{3}-c_{1}=8-2(\epsilon-e)z>0,$ (25a)
$\displaystyle c_{3}c_{2}c_{1}-c_{3}^{2}c_{0}-c_{1}^{2}=k_{1}k_{2}>0,$ (25b)
$\displaystyle k_{1}=2(2-(2\epsilon-1)x+(1-2e)(y+z))>0,$ (25c) $\displaystyle
k_{2}=(4+2(1-2\epsilon)x+2(1-2e)y+(1-2e+1-\epsilon-e)z)^{2}>0.$ (25d)
Therefore, $g^{*}$ is stable by the Routh-Hurwitz criteria.
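Alternatively, one can bypass the characteristic polynomial entirely: the sketch below assembles the $6\times 6$ Jacobian of Equations 23a-23e at $g^{*}=g_{x}^{*}=g_{y}^{*}=g_{z}^{*}=\tfrac{1}{2}$ and checks its spectrum directly (the parameter ranges $e<\tfrac{1}{2}$ and $e<\epsilon<1-e$ are assumptions consistent with the positivity conditions above).

```python
# Direct stability check for Stern Judging under private assessment.
import numpy as np

rng = np.random.default_rng(2)
for _ in range(2000):
    x, y, z = rng.dirichlet(np.ones(3))
    e = rng.uniform(0.0, 0.5)
    eps = rng.uniform(e, 1.0 - e)            # eps > e and eps + e < 1
    v = np.array([x, y, z])
    rows = np.array([2*eps - 1, 2*e - 1, 2*e - 1 + e - eps])  # as in (23b)
    J1 = np.outer(rows, v) - np.eye(3)
    J2 = np.zeros((3, 3)); J2[2] = 2*(eps - e)*v              # (23c)
    J3 = 0.5*np.outer(rows, v) + 0.5*np.eye(3)  # (23d) with g_i* = 1/2
    J4 = -np.eye(3); J4[2] += (eps - e)*v       # (23e): 2(eps-e) g_z* = eps-e
    J = np.block([[J1, J2], [J3, J4]])
    assert np.linalg.eigvals(J).real.max() < 0
print("Stern Judging (private): Jacobian stable for all samples")
```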
## 4 Discussion
Indirect reciprocity is a key mechanism to promote cooperation and has been
well studied in the literature with both theoretical models and experiments.
Models have shown how indirect reciprocity can evolve and how it can promote
cooperation. Additionally, experimental evidence of indirect reciprocity has
been found in both humans and other animals [1, 10, 20, 21, 24]. Many of the
mathematical models of indirect reciprocity assume a fast dynamic for
reputations and a slow dynamic for strategies. That is to say, reputations of
individuals are assessed and reach an equilibrium relatively quickly. Expected
payoffs are calculated given these reputations. Then, individuals can change
their strategies by imitating those who have greater payoffs. Reputations
converge to an equilibrium quickly again, and so on. These models assume that
reputations converge to a unique equilibrium, but this was not proven for
either public or private assessment of reputations. Here, we closed this gap,
and have shown that the reputational dynamics that occur rapidly do converge
to unique equilibria for each of the five standard norms and two assessment
rules, which provides a basis for the previous analyses of the strategic
dynamics in the literature.
Extensions of the models may exhibit reputational dynamics that are
qualitatively different from those explored here. For example, the norms we considered are zeroth order
(Scoring) and first order (Shunning, Staying, Simple Standing, and Stern
Judging). Assignment of reputations under Scoring only depends upon the action
of the donor, while assignments of the other norms also depend on the
reputation of the recipient. Higher order norms — those that use other
information such as the previous reputation of the donor or multiple
observations of the donor — may not lead to convergence to a unique set of
reputations. For third order norms, the reputational system can be bistable,
and when observers make multiple observations, reputations may not converge
(unpublished research). We note that our systems of ODEs contain at most
cubic polynomials with respect to the variables. The case of multiple
observations is quartic, while the abductive reasoning model — which has
conditional convergence — involves rational functions [16]. Finding higher
order norms that are relevant to behaviour and that do not converge to
reputational equilibria is an area for future research. Another research area
that may lead to non-convergence is finite populations, whether well-mixed or
on a network. It is possible that small populations do not converge to unique
equilibria or that there are network configurations that also impede
convergence.
### Code and data availability
Code to verify analytical results is available at
github.com/bmorsky/indirectReciprocity-convergence.
## References
* [1] Çağlar Akçay, Veronica A Reed, S Elizabeth Campbell, Christopher N Templeton, and Michael D Beecher. Indirect reciprocity: song sparrows distrust aggressive neighbours based on eavesdropping. Animal Behaviour, 80(6):1041–1047, 2010.
* [2] Hannelore Brandt and Karl Sigmund. The logic of reprobation: assessment and action rules for indirect reciprocation. Journal of theoretical biology, 231(4):475–486, 2004.
* [3] Fabio ACC Chalub, Francisco C Santos, and Jorge M Pacheco. The evolution of norms. Journal of theoretical biology, 241(2):233–240, 2006.
* [4] Michael A Fishman. Indirect reciprocity among imperfect individuals. Journal of Theoretical Biology, 225(3):285–292, 2003.
* [5] Christian Hilbe, Laura Schmid, Josef Tkadlec, Krishnendu Chatterjee, and Martin A Nowak. Indirect reciprocity with private, noisy, and incomplete information. Proceedings of the national academy of sciences, 115(48):12241–12246, 2018.
* [6] Olof Leimar and Peter Hammerstein. Evolution of cooperation through indirect reciprocity. Proceedings of the Royal Society of London. Series B: Biological Sciences, 268(1468):745–753, 2001.
* [7] Manfred Milinski, Dirk Semmann, Theo CM Bakker, and Hans-Jürgen Krambeck. Cooperation through indirect reciprocity: image scoring or standing strategy? Proceedings of the Royal Society of London. Series B: Biological Sciences, 268(1484):2495–2501, 2001.
* [8] Bryce Morsky, Joshua B Plotkin, and Erol Akcay. Indirect reciprocity with Bayesian reasoning and biases. SocArXiv, October 16, 2023.
* [9] Yutaka Nakai and Masayoshi Muto. Emergence and collapse of peace with friend selection strategies. Journal of Artificial Societies and Social Simulation, 11(3):6, 2008.
* [10] Elena Nava, Emanuela Croci, and Chiara Turati. ‘I see you sharing, thus I share with you’: indirect reciprocity in toddlers but not infants. Palgrave Communications, 5(1):1–9, 2019.
* [11] Martin A Nowak and Karl Sigmund. Evolution of indirect reciprocity. Nature, 437(7063):1291–1298, 2005.
* [12] Hisashi Ohtsuki and Yoh Iwasa. How should we define goodness?—reputation dynamics in indirect reciprocity. Journal of theoretical biology, 231(1):107–120, 2004.
* [13] Hisashi Ohtsuki and Yoh Iwasa. The leading eight: social norms that can maintain cooperation by indirect reciprocity. Journal of theoretical biology, 239(4):435–444, 2006.
* [14] Isamu Okada. A review of theoretical studies on indirect reciprocity. Games, 11(3):27, 2020.
* [15] Isamu Okada, Tatsuya Sasaki, and Yutaka Nakai. A solution for private assessment in indirect reciprocity using solitary observation. Journal of theoretical biology, 455:7–15, 2018.
* [16] Neel Pandula, Erol Akçay, and Bryce Morsky. Indirect reciprocity with abductive reasoning. Journal of Theoretical Biology, 580:111715, 2024.
* [17] Arunas L Radzvilavicius, Alexander J Stewart, and Joshua B Plotkin. Evolution of empathetic moral evaluation. eLife, 8:e44269, 2019.
* [18] Fernando P Santos, Jorge M Pacheco, and Francisco C Santos. Evolution of cooperation under indirect reciprocity and arbitrary exploration rates. Scientific reports, 6(1):37517, 2016.
* [19] Tatsuya Sasaki, Isamu Okada, and Yutaka Nakai. The evolution of conditional moral assessment in indirect reciprocity. Scientific reports, 7(1):1–8, 2017.
* [20] Ingrid Seinen and Arthur Schram. Social status and group norms: Indirect reciprocity in a repeated helping experiment. European economic review, 50(3):581–602, 2006.
* [21] Ralf D Sommerfeld, Hans-Jürgen Krambeck, Dirk Semmann, and Manfred Milinski. Gossip as an alternative for direct observation in games of indirect reciprocity. Proceedings of the national academy of sciences, 104(44):17435–17440, 2007.
* [22] Nobuyuki Takahashi and Rie Mashima. The importance of subjectivity in perceptual errors on the emergence of indirect reciprocity. Journal of Theoretical Biology, 243(3):418–436, 2006.
* [23] Claus Wedekind and Manfred Milinski. Cooperation through image scoring in humans. Science, 288(5467):850–852, 2000.
* [24] Erez Yoeli, Moshe Hoffman, David G Rand, and Martin A Nowak. Powering up with indirect reciprocity in a large-scale field experiment. Proceedings of the National Academy of Sciences, 110(Supplement 2):10424–10429, 2013.
# New results on sparse representations in unions of orthonormal bases
Tao Zhang and Gennian Ge The research of G. Ge was supported by the National
Key Research and Development Program of China under Grant 2020YFA0712100, the
National Natural Science Foundation of China under Grant 12231014, and Beijing
Scholars Program.T. Zhang is with the Institute of Mathematics and
Interdisciplinary Sciences, Xidian University, Xi’an 710071, China (e-mail:
zhant220@163.com). G. Ge is with the School of Mathematical Sciences, Capital
Normal University, Beijing 100048, China (e-mail: gnge@zju.edu.cn).
###### Abstract
The problem of sparse representation has significant applications in signal
processing. The spark of a dictionary plays a crucial role in the study of
sparse representation. Donoho and Elad initially explored the spark, and they
provided a general lower bound. When the dictionary is a union of several
orthonormal bases, Gribonval and Nielsen presented an improved lower bound for
spark. In this paper, we introduce a new construction of dictionary, achieving
the spark bound given by Gribonval and Nielsen. Our result extends Shen et
al.’ s findings [IEEE Trans. Inform. Theory, vol. 68, pp. 4230–4243, 2022].
###### Index Terms:
Sparse representation, spark, mutual coherence, orthonormal bases.
## I Introduction
Due to the applications in compressed sensing, researchers are interested in
sparse representation problems [1, 5, 6]. Given a real column vector
$v\in\mathbb{R}^{n}$, the goal is to find an efficient representation of the
signal $v$. Let $B=\\{b_{1},b_{2},\dots,b_{n}\\}$ be an orthonormal basis of
$\mathbb{R}^{n}$, then
$\displaystyle
v=\begin{pmatrix}b_{1}&b_{2}&\cdots&b_{n}\end{pmatrix}\begin{pmatrix}v_{1}\\\
v_{2}\\\ \vdots\\\ v_{n}\end{pmatrix}=\sum_{i=1}^{n}v_{i}b_{i},$
where $v_{i}=\langle v,b_{i}\rangle$. Now we consider a more general problem
where the orthonormal basis is replaced by a dictionary of $\mathbb{R}^{n}$.
###### Definition 1.1.
A dictionary of $\mathbb{R}^{n}$ is a family of $N\geq n$ unit column vectors
$\\{d_{i}:i=1,2,\dots,N\\}$ that spans $\mathbb{R}^{n}$. We will use the
matrix notation $D=[d_{1},d_{2},\dots,d_{N}]$ for a dictionary.
If $N>n$, then the representation of $v$ is not unique, i.e., there are many
solutions for the equation $Dx=v$. The goal of compressed sensing is to find
the sparest representation among them. Then it is natural to consider the
following optimization problem:
$\displaystyle\min_{x}\|x\|_{0}\text{ subject to }v=Dx.$ (1)
In general, problem (1) is NP-hard [14]. Following [1, 2], two concepts are
important in the study of sparse representation.
###### Definition 1.2.
The mutual coherence $\mu(D)$ of a given matrix $D$ is the largest magnitude
of the inner product between two columns of $D$, i.e.,
$\mu(D)=\max_{i\neq j}|\langle d_{i},d_{j}\rangle|.$
###### Definition 1.3.
The spark $\eta(D)$ of a given matrix $D$ is the smallest number of columns
from $D$ that are linearly dependent, i.e.,
$\eta(D)=\min_{x\in\ker(D),x\neq 0}\|x\|_{0},$
where $\ker(D)$ is the null space of $D$.
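Both quantities are straightforward to compute by brute force for small examples, which makes the definitions concrete. The sketch below (Python; the toy dictionary is our own illustration, not one from this paper) reads the mutual coherence off the Gram matrix and finds the spark by exhaustive search, which is feasible only for tiny dictionaries.

```python
import itertools
import numpy as np

def mutual_coherence(D):
    """Largest |<d_i, d_j>| over distinct columns (Definition 1.2)."""
    G = np.abs(D.T @ D)           # columns assumed to have unit norm
    np.fill_diagonal(G, 0.0)
    return G.max()

def spark(D, tol=1e-10):
    """Smallest number of linearly dependent columns (Definition 1.3);
    exhaustive search over column subsets, feasible only for tiny D."""
    n, N = D.shape
    for k in range(2, N + 1):
        for cols in itertools.combinations(range(N), k):
            if np.linalg.matrix_rank(D[:, cols], tol=tol) < k:
                return k
    return np.inf                 # unreachable when N > n

# toy dictionary: the identity basis of R^3 plus one normalized extra column
D = np.hstack([np.eye(3), np.ones((3, 1)) / np.sqrt(3)])
print(mutual_coherence(D))        # 1/sqrt(3) ~ 0.577
print(spark(D))                   # 4: only the full set of columns is dependent
```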
In [2, 7], the authors proved that if the equation $v=Dx$ has a solution
$x_{0}$ satisfying $\|x_{0}\|_{0}<\frac{\eta(D)}{2}$, then $x_{0}$ is the
unique solution for problem (1). Hence it is interesting to consider
dictionaries whose spark is as large as possible. The problem of
computing the spark of a matrix is NP-hard [2, 18]. In [17], the author gave a
specialized algorithm to compute the spark. Dictionaries with large spark also
have applications in linear sensor arrays [12] and tensor decompositions [10].
For any dictionary $D$, Donoho and Elad [2] proved that
$\eta(D)\geq 1+\frac{1}{\mu(D)}.$
If $D$ is a union of two orthonormal bases, then a better bound can be found
in [4]
$\displaystyle\eta(D)\geq\frac{2}{\mu(D)}.$
The above bound was generalized by Gribonval and Nielsen [9], who proved that
if $D$ is a union of $q+1$ orthonormal bases, then
$\displaystyle\eta(D)\geq(1+\frac{1}{q})\frac{1}{\mu(D)}.$ (2)
In the same paper, the authors asked whether there exist examples for which
the equality holds in (2) when $q>1$. In [15], the authors gave a positive
answer to the above question. Moreover, they proved that for $q=2^{r}$, where
$r$ is a positive integer, and $t=1$ or 2, there exists a dictionary $D$ in
$\mathbb{R}^{q^{2t}}$, which is a union of $q+1$ orthonormal bases, such that
the spark of $D$ attains bound (2). It is then natural to ask the following
problem.
###### Problem 1.
For which $n,d$ does there exist a dictionary $D$ in $\mathbb{R}^{n}$, which is a
union of $d$ orthonormal bases, such that the spark of $D$ attains bound (2)?
For the above problem, Shen et al. [15] settled the case $(n,d)=(q^{2t},q+1)$,
where $q=2^{r}$, $r$ is a positive integer and $t=1$ or 2. We also refer the
readers to [2, 3, 8, 9, 16] for more dictionaries that are unions of several
orthonormal bases.
In this paper, we solve Problem 1 for $(n,d)=(q^{2t},q+1)$, where $q=2^{r}$
and $t,r$ are any positive integers. More precisely, we prove the following.
###### Theorem 1.4.
Let $t,r$ be positive integers, $q=2^{r}$, then there exists a dictionary $D$
in $\mathbb{R}^{q^{2t}}$ which is a union of $q+1$ orthonormal bases
satisfying $\eta(D)=q^{t}+q^{t-1}$ and $\mu(D)=\frac{1}{q^{t}}$.
It is easy to see that the spark of the dictionary in Theorem 1.4 attains bound
(2). This paper is organized as follows. In Section II, we recall some basics
of finite fields. In Section III, we prove the main theorem.
## II Preliminaries
Let $m,n$ be integers with $m\mid n$. For any $a\in\mathbb{F}_{2^{n}}$, the
trace of $a$ from $\mathbb{F}_{2^{n}}$ to $\mathbb{F}_{2^{m}}$ is defined by
$\text{Tr}_{m}^{n}(a)=a+a^{2^{m}}+a^{2^{2m}}+\cdots+a^{2^{n-m}}.$
It is easy to see that $\text{Tr}_{m}^{n}(a)\in\mathbb{F}_{2^{m}}$. We recall
some properties of the trace function.
###### Lemma 2.1.
[13, Theorem 2.1.83] Let $m,n$ be integers with $m\mid n$. Then the trace
function satisfies the following properties:
1. 1.
$\text{Tr}_{m}^{n}(a+b)=\text{Tr}_{m}^{n}(a)+\text{Tr}_{m}^{n}(b)$ for all
$a,b\in\mathbb{F}_{2^{n}}$;
2. 2.
$\text{Tr}_{m}^{n}(ca)=c\text{Tr}_{m}^{n}(a)$ for all $a\in\mathbb{F}_{2^{n}}$
and $c\in\mathbb{F}_{2^{m}}$;
3. 3.
$\text{Tr}_{m}^{n}(a^{2^{m}})=\text{Tr}_{m}^{n}(a)$ for all
$a\in\mathbb{F}_{2^{n}}$;
4. 4.
$|\\{x\in\mathbb{F}_{2^{n}}:\ \text{Tr}_{m}^{n}(x)=c\\}|=2^{n-m}$ for any
$c\in\mathbb{F}_{2^{m}}$.
###### Lemma 2.2.
[11, Theorem 2.26] Let $l,m,n$ be integers with $l\mid m\mid n$. Then
$\text{Tr}_{l}^{n}(a)=\text{Tr}_{l}^{m}(\text{Tr}_{m}^{n}(a))$ for all
$a\in\mathbb{F}_{2^{n}}$.
From Lemma 2.1, we have the following corollary.
###### Corollary 2.3.
$\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}_{1}^{n}(x)}=0$.
###### Proof.
Since $|\\{x\in\mathbb{F}_{2^{n}}:\
\text{Tr}_{1}^{n}(x)=0\\}|=|\\{x\in\mathbb{F}_{2^{n}}:\
\text{Tr}_{1}^{n}(x)=1\\}|=2^{n-1}$, then
$\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}_{1}^{n}(x)}=0$. ∎
We also need the following lemma.
###### Lemma 2.4.
[11, Corollary 3.79] For a positive integer $n$, the equation $x^{2}+ax+b=0$
with $a,b\in\mathbb{F}_{2^{n}}$, $a\neq 0$, has solutions in
$\mathbb{F}_{2^{n}}$ if and only if $\text{Tr}_{1}^{n}(\frac{b}{a^{2}})=0$.
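These facts can be verified computationally in a small field. The sketch below (Python; the bitmask representation and the choice of the modulus $x^{4}+x+1$ for $\mathbb{F}_{2^{4}}$ are ours) checks property 4 of Lemma 2.1 and the solvability criterion of Lemma 2.4 exhaustively.

```python
from collections import Counter

N = 4
IRR = 0b10011          # x^4 + x + 1, irreducible over GF(2)

def mul(a, b):
    """Carry-less multiplication in GF(2^N) modulo IRR."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << N):
            a ^= IRR
        b >>= 1
    return r

def power(a, k):
    r = 1
    while k:
        if k & 1:
            r = mul(r, a)
        a = mul(a, a)
        k >>= 1
    return r

def tr(a, m, n):
    """Tr_m^n(a) = a + a^{2^m} + a^{2^{2m}} + ... + a^{2^{n-m}}."""
    s, t = 0, a
    for _ in range(n // m):
        s ^= t
        t = power(t, 2 ** m)
    return s

# Lemma 2.1(4): every fibre of Tr_m^n has size 2^{n-m}.
for m in (1, 2):
    fibres = Counter(tr(a, m, N) for a in range(2 ** N))
    assert len(fibres) == 2 ** m and set(fibres.values()) == {2 ** (N - m)}

# Lemma 2.4: x^2 + a x + b = 0 is solvable iff Tr_1^n(b / a^2) = 0.
for a in range(1, 2 ** N):
    inv_a2 = power(mul(a, a), 2 ** N - 2)     # inverse via a^(2^N - 2)
    for b in range(2 ** N):
        solvable = any(mul(x, x) ^ mul(a, x) ^ b == 0 for x in range(2 ** N))
        assert solvable == (tr(mul(b, inv_a2), 1, N) == 0)
print("Lemmas 2.1(4) and 2.4 verified in GF(2^4)")
```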
## III Proof of Theorem 1.4
Let $t,r$ be positive integers, $m=tr$ and $n=2m=2tr$. Let $q=2^{r}$. Define
$B_{q+1}=\\{e_{x}:x\in\mathbb{F}_{2^{n}}\\}$, and
$B_{a}=\\{B_{a,b}=\frac{1}{2^{m}}((-1)^{\text{Tr}_{1}^{m}(ax^{2^{m}+1})+\text{Tr}_{1}^{n}(bx)})_{x\in\mathbb{F}_{2^{n}}}:b\in\mathbb{F}_{2^{n}}\\},$
for $a\in\mathbb{F}_{2^{r}}$. We first prove that the vectors in $B_{a}$ form
an orthonormal basis of $\mathbb{R}^{2^{n}}$.
###### Lemma 3.1.
For each $a\in\mathbb{F}_{2^{r}}$, $B_{a}$ is an orthonormal basis of
$\mathbb{R}^{2^{n}}$.
###### Proof.
For any $b_{1},b_{2}\in\mathbb{F}_{2^{n}}$ with $b_{1}\neq b_{2}$, we have
$\displaystyle\langle B_{a,b_{1}},B_{a,b_{2}}\rangle=$
$\displaystyle\frac{1}{2^{n}}\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}_{1}^{m}(ax^{2^{m}+1})+\text{Tr}_{1}^{n}(b_{1}x)}(-1)^{\text{Tr}_{1}^{m}(ax^{2^{m}+1})+\text{Tr}_{1}^{n}(b_{2}x)}$
$\displaystyle=$
$\displaystyle\frac{1}{2^{n}}\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}_{1}^{n}((b_{1}+b_{2})x)}$
$\displaystyle=$ $\displaystyle 0,$
where the last equality follows from $b_{1}+b_{2}\neq 0$,
$\\{(b_{1}+b_{2})x:x\in\mathbb{F}_{2^{n}}\\}=\\{x:x\in\mathbb{F}_{2^{n}}\\}$
and Corollary 2.3. Moreover, since $n=2m$, each $B_{a,b}$ has $2^{n}$ entries equal to $\pm\frac{1}{2^{m}}$ and hence unit norm. Noting that $|B_{a}|=2^{n}$, we conclude that $B_{a}$ is an orthonormal basis of $\mathbb{R}^{2^{n}}$. ∎
For any $a\in\mathbb{F}_{2^{n}}$, define $\overline{a}:=a^{2^{m}}$. The unit
circle of $\mathbb{F}_{2^{n}}$ is the set
$U=\\{u\in\mathbb{F}_{2^{n}}:u^{2^{m}+1}=u\overline{u}=1\\}.$
Note that $\gcd(2^{m}+1,2^{m}-1)=1$, so for any
$a\in\mathbb{F}_{2^{n}}^{*}$, there exists a unique representation $a=uv$,
where $u\in U$ and $v\in\mathbb{F}_{2^{m}}^{*}$. Now we consider the inner
product between vectors from $B_{a_{1}}$ and $B_{a_{2}}$, where $a_{1}\neq
a_{2}$.
###### Lemma 3.2.
For any $a_{1},a_{2}\in\mathbb{F}_{2^{r}}$ with $a_{1}\neq a_{2}$, and
$b_{1},b_{2}\in\mathbb{F}_{2^{n}}$, we have $|\langle
B_{a_{1},b_{1}},B_{a_{2},b_{2}}\rangle|=\frac{1}{2^{m}}$.
###### Proof.
$\displaystyle\langle B_{a_{1},b_{1}},B_{a_{2},b_{2}}\rangle=$
$\displaystyle\frac{1}{2^{n}}\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}_{1}^{m}(a_{1}x^{2^{m}+1})+\text{Tr}_{1}^{n}(b_{1}x)}(-1)^{\text{Tr}_{1}^{m}(a_{2}x^{2^{m}+1})+\text{Tr}_{1}^{n}(b_{2}x)}$
$\displaystyle=$
$\displaystyle\frac{1}{2^{n}}\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}_{1}^{m}((a_{1}+a_{2})x^{2^{m}+1})+\text{Tr}_{1}^{n}((b_{1}+b_{2})x)}.$
Set $a=a_{1}+a_{2}$, $b=b_{1}+b_{2}$, then $a\neq 0$ and
$\displaystyle\langle B_{a_{1},b_{1}},B_{a_{2},b_{2}}\rangle=$
$\displaystyle\frac{1}{2^{n}}\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}_{1}^{m}(ax^{2^{m}+1})+\text{Tr}_{1}^{n}(bx)}$
$\displaystyle=$
$\displaystyle\frac{1}{2^{n}}\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}_{1}^{m}(ax^{2^{m}+1}+bx+(bx)^{2^{m}})}$
$\displaystyle=$
$\displaystyle\frac{1}{2^{n}}\sum_{x\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}_{1}^{m}(ax\overline{x}+bx+\overline{bx})}$
$\displaystyle=$ $\displaystyle\frac{1}{2^{n}}(1+\sum_{u\in
U}\sum_{v\in\mathbb{F}_{2^{m}}^{*}}(-1)^{\text{Tr}_{1}^{m}(auv\overline{uv}+buv+\overline{buv})})$
$\displaystyle=$ $\displaystyle\frac{1}{2^{n}}(1+\sum_{u\in
U}\sum_{v\in\mathbb{F}_{2^{m}}^{*}}(-1)^{\text{Tr}_{1}^{m}(av^{2}+buv+\overline{bu}v)})$
$\displaystyle=$ $\displaystyle\frac{1}{2^{n}}(1+\sum_{u\in
U}\sum_{v\in\mathbb{F}_{2^{m}}^{*}}(-1)^{\text{Tr}_{1}^{m}((a^{\prime}+bu+\overline{bu})v)}),$
where $a^{\prime}=a^{2^{m-1}}$. If $b=0$, then since $a^{\prime}\neq 0$ the inner sum over $v$ equals $-1$ for every $u\in U$, and so $\langle B_{a_{1},b_{1}},B_{a_{2},b_{2}}\rangle=\frac{1}{2^{n}}(1-(2^{m}+1))=-\frac{1}{2^{m}}$. Now assume $b\neq 0$ and write $b=\beta_{1}\beta_{2}$, where $\beta_{1}\in U$ and $\beta_{2}\in\mathbb{F}_{2^{m}}^{*}$ (we avoid the symbols $b_{1},b_{2}$, which already denote the basis indices); then
$a^{\prime}+bu+\overline{bu}=a^{\prime}+\beta_{2}(\beta_{1}u+\overline{\beta_{1}u})$. Let
$N=|\\{x\in U:a^{\prime}+\beta_{2}(x+\overline{x})=0\\}|.$
If $u\in U$ is a solution of the equation $a^{\prime}+\beta_{2}(x+\overline{x})=0$,
then $a^{\prime}+\beta_{2}(u+\overline{u})=0$. Multiplying by $u$ and using
$u\overline{u}=1$, we have
$a^{\prime}u+\beta_{2}(u^{2}+1)=0$, i.e., $\beta_{2}u^{2}+a^{\prime}u+\beta_{2}=0$. Hence
$N\leq 2$. On the other hand, if $u\in U$ is a solution of this equation, then
$\overline{u}\in U$ is also a solution. If
$u=\overline{u}$, then $u^{2}=u\overline{u}=1$. Thus $u=1$, which leads to
$a^{\prime}=0$, a contradiction. Therefore $N=0$ or 2. Then we have
$\langle
B_{a_{1},b_{1}},B_{a_{2},b_{2}}\rangle=\begin{cases}-\frac{1}{2^{m}},&\textup{
if }N=0;\\\ \frac{1}{2^{m}},&\textup{ if }N=2.\end{cases}$
This finishes the proof. ∎
The inner product between vectors from $B_{a}$ and $B_{q+1}$ is straightforward to compute.
###### Lemma 3.3.
For any $a\in\mathbb{F}_{2^{r}}$, $b,x\in\mathbb{F}_{2^{n}}$, we have
$|\langle B_{a,b},e_{x}\rangle|=\frac{1}{2^{m}}$.
###### Proof.
Since $\langle
B_{a,b},e_{x}\rangle=\frac{1}{2^{m}}(-1)^{\text{Tr}_{1}^{m}(ax^{2^{m}+1})+\text{Tr}_{1}^{n}(bx)}$,
then $|\langle B_{a,b},e_{x}\rangle|=\frac{1}{2^{m}}$. ∎
Now we consider the linear dependency of vectors in $B_{a}$ and $B_{q+1}$. We
first prove a lemma.
###### Lemma 3.4.
For any $y\in\mathbb{F}_{2^{m}}$, we have
$|\\{b\in\mathbb{F}_{2^{m}}:\text{Tr}_{1}^{m}(by)=0,\text{Tr}_{r}^{m}(b)=0\\}|=\begin{cases}2^{m-r-1},&\textup{
if }y\notin\mathbb{F}_{2^{r}};\\\ 2^{m-r},&\textup{ if
}y\in\mathbb{F}_{2^{r}}.\end{cases}$
###### Proof.
A direct computation gives
$\displaystyle|\\{b\in\mathbb{F}_{2^{m}}:\text{Tr}_{1}^{m}(by)=0,\text{Tr}_{r}^{m}(b)=0\\}|$
$\displaystyle=$
$\displaystyle\frac{1}{2^{r+1}}\sum_{b\in\mathbb{F}_{2^{m}}}(1+(-1)^{\text{Tr}_{1}^{m}(by)})\sum_{\alpha\in\mathbb{F}_{2^{r}}}(-1)^{\text{Tr}_{1}^{r}(\alpha\text{Tr}_{r}^{m}(b))}$
$\displaystyle=$
$\displaystyle\frac{1}{2^{r+1}}\sum_{\alpha\in\mathbb{F}_{2^{r}}}\sum_{b\in\mathbb{F}_{2^{m}}}(1+(-1)^{\text{Tr}_{1}^{m}(by)})(-1)^{\text{Tr}_{1}^{m}(\alpha
b)}$ $\displaystyle=$
$\displaystyle\frac{1}{2^{r+1}}\sum_{\alpha\in\mathbb{F}_{2^{r}}}\sum_{b\in\mathbb{F}_{2^{m}}}((-1)^{\text{Tr}_{1}^{m}(\alpha
b)}+(-1)^{\text{Tr}_{1}^{m}(\alpha b+by)})$ $\displaystyle=$ $\displaystyle
2^{m-r-1}+\frac{1}{2^{r+1}}\sum_{\alpha\in\mathbb{F}_{2^{r}}}\sum_{b\in\mathbb{F}_{2^{m}}}((-1)^{\text{Tr}_{1}^{m}((\alpha+y)b)})$
$\displaystyle=$ $\displaystyle\begin{cases}2^{m-r-1},&\textup{ if
}y\notin\mathbb{F}_{2^{r}};\\\ 2^{m-r},&\textup{ if
}y\in\mathbb{F}_{2^{r}}.\end{cases}$
∎
Define
$S=\\{x\in\mathbb{F}_{2^{n}}:\
\text{Tr}_{r}^{m}(x^{2^{m}+1})=0,x+x^{2^{m}}\in\mathbb{F}_{2^{r}}\\}.$
Then we find that the following vectors are linearly dependent.
###### Lemma 3.5.
$\sum_{a\in\mathbb{F}_{2^{r}}}\sum_{b\in\mathbb{F}_{2^{m}},\text{Tr}_{r}^{m}(b)=0}B_{a,b}-\sum_{x\in
S}e_{x}=0$.
###### Proof.
A direct computation gives
$\displaystyle\sum_{a\in\mathbb{F}_{2^{r}}}(-1)^{\text{Tr}_{1}^{m}(ax^{2^{m}+1})}$
$\displaystyle=$
$\displaystyle\sum_{a\in\mathbb{F}_{2^{r}}}(-1)^{\text{Tr}_{1}^{r}(a\text{Tr}_{r}^{m}(x^{2^{m}+1}))}$
$\displaystyle=$ $\displaystyle\begin{cases}0,&\textup{ if
}\text{Tr}_{r}^{m}(x^{2^{m}+1})\neq 0;\\\ 2^{r},&\textup{ if
}\text{Tr}_{r}^{m}(x^{2^{m}+1})=0,\end{cases}$
and
$\displaystyle\sum_{b\in\mathbb{F}_{2^{m}},\text{Tr}_{r}^{m}(b)=0}(-1)^{\text{Tr}_{1}^{n}(bx)}$
$\displaystyle=$
$\displaystyle\sum_{b\in\mathbb{F}_{2^{m}},\text{Tr}_{r}^{m}(b)=0}(-1)^{\text{Tr}_{1}^{m}(b(x+x^{2^{m}}))}$
$\displaystyle=$ $\displaystyle\begin{cases}0,&\textup{ if
}x+x^{2^{m}}\notin\mathbb{F}_{2^{r}};\\\ 2^{m-r},&\textup{ if
}x+x^{2^{m}}\in\mathbb{F}_{2^{r}},\end{cases}$
where the last equation follows from Lemma 3.4. For a vector
$A\in\mathbb{R}^{2^{n}}$ with coordinates indexed by $\mathbb{F}_{2^{n}}$, we use $[A]_{x}$ to denote its $x$-th
coordinate. From the above two equations, we have
$\displaystyle[\sum_{a\in\mathbb{F}_{2^{r}}}\sum_{b\in\mathbb{F}_{2^{m}},\text{Tr}_{r}^{m}(b)=0}B_{a,b}]_{x}$
$\displaystyle=$
$\displaystyle\sum_{a\in\mathbb{F}_{2^{r}}}\sum_{b\in\mathbb{F}_{2^{m}},\text{Tr}_{r}^{m}(b)=0}\frac{1}{2^{m}}(-1)^{\text{Tr}_{1}^{m}(ax^{2^{m}+1})+\text{Tr}_{1}^{n}(bx)}$
$\displaystyle=$
$\displaystyle\frac{1}{2^{m}}\sum_{a\in\mathbb{F}_{2^{r}}}(-1)^{\text{Tr}_{1}^{m}(ax^{2^{m}+1})}\sum_{b\in\mathbb{F}_{2^{m}},\text{Tr}_{r}^{m}(b)=0}(-1)^{\text{Tr}_{1}^{n}(bx)}$
$\displaystyle=$ $\displaystyle\begin{cases}0,&\textup{ if
}\text{Tr}_{r}^{m}(x^{2^{m}+1})\neq 0\text{ or
}x+x^{2^{m}}\notin\mathbb{F}_{2^{r}};\\\ 1,&\textup{ if
}\text{Tr}_{r}^{m}(x^{2^{m}+1})=0,x+x^{2^{m}}\in\mathbb{F}_{2^{r}},\end{cases}$
$\displaystyle=$ $\displaystyle\begin{cases}0,&\textup{ if }x\notin S;\\\
1,&\textup{ if }x\in S.\end{cases}$
Hence
$\sum_{a\in\mathbb{F}_{2^{r}}}\sum_{b\in\mathbb{F}_{2^{m}},\text{Tr}_{r}^{m}(b)=0}B_{a,b}-\sum_{x\in
S}e_{x}=0$. ∎
Now we compute the size of $S$.
###### Lemma 3.6.
$|S|=2^{m-r}.$
###### Proof.
For any $x\in S$, let $A=x\overline{x}$ and $B=x+\overline{x}$. Then
$\text{Tr}_{r}^{m}(A)=0$ and $B\in\mathbb{F}_{2^{r}}$.
If $x\in\mathbb{F}_{2^{m}}$, then $B=x+\overline{x}=0\in\mathbb{F}_{2^{r}}$
and $0=\text{Tr}_{r}^{m}(A)=\text{Tr}_{r}^{m}(x^{2})=(\text{Tr}_{r}^{m}(x))^{2}$, so $\text{Tr}_{r}^{m}(x)=0$. By
Lemma 2.1, there are $2^{m-r}$ such $x$.
If $x\in\mathbb{F}_{2^{n}}\backslash\mathbb{F}_{2^{m}}$, then $x,\overline{x}$
are solutions of equation $X^{2}+BX+A=0$. Note that
$B,A\in\mathbb{F}_{2^{m}}$, by Lemma 2.4,
$x,\overline{x}\in\mathbb{F}_{2^{n}}\backslash\mathbb{F}_{2^{m}}$ if and only
if $\text{Tr}_{1}^{m}(\frac{A}{B^{2}})=1$. On the other hand,
$\text{Tr}_{1}^{m}(\frac{A}{B^{2}})=\text{Tr}_{1}^{r}(\text{Tr}_{r}^{m}(\frac{A}{B^{2}}))=\text{Tr}_{1}^{r}(\frac{1}{B^{2}}\text{Tr}_{r}^{m}(A))=0$,
which is a contradiction. Therefore, $|S|=2^{m-r}.$ ∎
Proof of Theorem 1.4: Let
$D=B_{q+1}\cup(\cup_{a\in\mathbb{F}_{2^{r}}}B_{a})$, then $D$ is a dictionary
in $\mathbb{R}^{q^{2t}}$. By Lemma 3.1, $D$ is a union of $2^{r}+1=q+1$
orthonormal bases. By Lemmas 3.2 and 3.3, we have
$\mu(D)=\frac{1}{2^{m}}=\frac{1}{q^{t}}$. From Lemma 3.5, the vectors in the
set
$(\cup_{a\in\mathbb{F}_{2^{r}}}\cup_{b\in\mathbb{F}_{2^{m}},\text{Tr}_{r}^{m}(b)=0}B_{a,b})\cup(\cup_{x\in
S}e_{x})$
are linearly dependent. By Lemma 3.6,
$\displaystyle|(\cup_{a\in\mathbb{F}_{2^{r}}}\cup_{b\in\mathbb{F}_{2^{m}},\text{Tr}_{r}^{m}(b)=0}B_{a,b})\cup(\cup_{x\in
S}e_{x})|=$ $\displaystyle 2^{r}\cdot 2^{m-r}+2^{m-r}$ $\displaystyle=$
$\displaystyle 2^{m}+2^{m-r}$ $\displaystyle=$ $\displaystyle q^{t}+q^{t-1}.$
Hence $\eta(D)\leq q^{t}+q^{t-1}$. On the other hand, bound (2) gives $\eta(D)\geq(1+\frac{1}{q})\frac{1}{\mu(D)}=(1+\frac{1}{q})q^{t}=q^{t}+q^{t-1}$. Therefore $\eta(D)=q^{t}+q^{t-1}$. This finishes the proof.
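The construction can be checked end-to-end in the smallest case. The sketch below (Python; an illustrative verification, not part of the proof) builds $D$ for $r=t=1$, that is $q=2$, $m=1$, $n=2$, and confirms that $D$ is a union of $q+1=3$ orthonormal bases of $\mathbb{R}^{4}$ with $\mu(D)=\frac{1}{2}$ and $\eta(D)=3=q^{t}+q^{t-1}$; here $S=\\{0\\}$ and the dependency of Lemma 3.5 reduces to $B_{0,0}+B_{1,0}=e_{0}$.

```python
import itertools
import numpy as np

N2, IRR4 = 2, 0b111              # GF(4) = GF(2)[x]/(x^2 + x + 1)

def mul(a, b):                   # carry-less multiplication modulo IRR4
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << N2):
            a ^= IRR4
        b >>= 1
    return r

def tr12(a):                     # Tr_1^2(a) = a + a^2, an element of GF(2)
    return (a ^ mul(a, a)) & 1

# m = 1, so Tr_1^m is the identity and x^{2^m+1} = x^3, which equals 1 for
# x != 0 and 0 for x = 0.
cols = [np.eye(4)[:, x] for x in range(4)]                 # B_{q+1}
for a in (0, 1):
    for b in range(4):
        sign = [(-1) ** ((a * (1 if x else 0)) ^ tr12(mul(b, x)))
                for x in range(4)]
        cols.append(np.array(sign) / 2.0)                  # B_{a,b}
D = np.column_stack(cols)

for blk in (D[:, 0:4], D[:, 4:8], D[:, 8:12]):   # three orthonormal bases
    assert np.allclose(blk.T @ blk, np.eye(4))
G = np.abs(D.T @ D); np.fill_diagonal(G, 0.0)
assert np.isclose(G.max(), 0.5)                  # mu(D) = 1/q^t = 1/2

def spark(D, tol=1e-10):
    for k in range(2, D.shape[1] + 1):
        for c in itertools.combinations(range(D.shape[1]), k):
            if np.linalg.matrix_rank(D[:, c], tol=tol) < k:
                return k

assert spark(D) == 3             # q^t + q^{t-1} = 3, attaining bound (2)
assert np.allclose(cols[4] + cols[8], np.eye(4)[:, 0])   # Lemma 3.5, S = {0}
print("Theorem 1.4 verified for r = t = 1 (q = 2)")
```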
## References
* [1] A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” _SIAM Rev._ , vol. 51, no. 1, pp. 34–81, 2009. [Online]. Available: https://doi.org/10.1137/060657704
* [2] D. L. Donoho and M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via $l^{1}$ minimization,” _Proc. Natl. Acad. Sci. USA_ , vol. 100, no. 5, pp. 2197–2202, 2003. [Online]. Available: https://doi.org/10.1073/pnas.0437847100
* [3] D. L. Donoho and X. Huo, “Uncertainty principles and ideal atomic decomposition,” _IEEE Trans. Inform. Theory_ , vol. 47, no. 7, pp. 2845–2862, 2001. [Online]. Available: https://doi.org/10.1109/18.959265
* [4] M. Elad and A. M. Bruckstein, “A generalized uncertainty principle and sparse representation in pairs of bases,” _IEEE Trans. Inform. Theory_ , vol. 48, no. 9, pp. 2558–2567, 2002. [Online]. Available: https://doi.org/10.1109/TIT.2002.801410
* [5] Y. Eldar and G. Kutyniok, _Compressed Sensing: Theory and Applications_. Cambridge, U.K.: Cambridge Univ. Press, 2012.
* [6] S. Foucart and H. Rauhut, _A mathematical introduction to compressive sensing_ , ser. Applied and Numerical Harmonic Analysis. Birkhäuser/Springer, New York, 2013. [Online]. Available: https://doi.org/10.1007/978-0-8176-4948-7
* [7] I. Gorodnitsky and B. Rao, “Sparse signal reconstruction from limited data using focuss: a re-weighted minimum norm algorithm,” _IEEE Transactions on Signal Processing_ , vol. 45, no. 3, pp. 600–616, 1997.
* [8] R. Gribonval and M. Nielsen, “Highly sparse representations from dictionaries are unique and independent of the sparseness measure,” _Appl. Comput. Harmon. Anal._ , vol. 22, no. 3, pp. 335–355, 2007. [Online]. Available: https://doi.org/10.1016/j.acha.2006.09.003
* [9] ——, “Sparse representations in unions of bases,” _IEEE Trans. Inform. Theory_ , vol. 49, no. 12, pp. 3320–3325, 2003. [Online]. Available: https://doi.org/10.1109/TIT.2003.820031
* [10] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” _SIAM Rev._ , vol. 51, no. 3, pp. 455–500, 2009. [Online]. Available: https://doi.org/10.1137/07070111X
* [11] R. Lidl and H. Niederreiter, _Finite fields_ , 2nd ed., ser. Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1997, vol. 20, with a foreword by P. M. Cohn.
* [12] A. Manikas and C. Proukakis, “Modeling and estimation of ambiguities in linear arrays,” _IEEE Transactions on Signal Processing_ , vol. 46, no. 8, pp. 2166–2179, 1998.
* [13] G. L. Mullen, Ed., _Handbook of finite fields_ , ser. Discrete Mathematics and its Applications (Boca Raton). CRC Press, Boca Raton, FL, 2013. [Online]. Available: https://doi.org/10.1201/b15006
* [14] B. K. Natarajan, “Sparse approximate solutions to linear systems,” _SIAM J. Comput._ , vol. 24, no. 2, pp. 227–234, 1995. [Online]. Available: https://doi.org/10.1137/S0097539792240406
* [15] Y. Shen, C. Yu, Y. Shen, and S. Li, “An open problem on sparse representations in unions of bases,” _IEEE Trans. Inform. Theory_ , vol. 68, no. 7, pp. 4230–4243, 2022.
* [16] ——, “On sparse recovery algorithms in unions of orthonormal bases,” _J. Approx. Theory_ , vol. 289, pp. Paper No. 105 886, 15, 2023. [Online]. Available: https://doi.org/10.1016/j.jat.2023.105886
* [17] A. M. Tillmann, “Computing the spark: mixed-integer programming for the (vector) matroid girth problem,” _Comput. Optim. Appl._ , vol. 74, no. 2, pp. 387–441, 2019. [Online]. Available: https://doi.org/10.1007/s10589-019-00114-9
* [18] A. M. Tillmann and M. E. Pfetsch, “The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing,” _IEEE Trans. Inform. Theory_ , vol. 60, no. 2, pp. 1248–1259, 2014. [Online]. Available: https://doi.org/10.1109/TIT.2013.2290112
# Hindered settling of log-normally distributed particulate suspensions:
theoretical models vs. Stokesian simulations
Heng Li1 Lorenzo Botto1<EMAIL_ADDRESS>1Process and Energy Department,
ME Faculty of Mechanical Engineering, TU Delft, 2628 CD Delft, The Netherlands
###### Abstract
Settling velocity statistics for dilute, non-Brownian suspensions of
polydisperse spheres having a log-normal size distribution are analysed by
Stokesian Dynamics, as a function of the total volume fraction and width of
the size distribution. Several hundred instantaneous configurations are
averaged to obtain reliable statistics. Average velocities for each particle
class are compared to the models proposed by Batchelor, Richardson $\&$ Zaki,
Davis $\&$ Gecol, and Masliyah-Lockett-Bassoon (MLB). Batchelor’s model is
shown to give reasonably accurate predictions when the volume fraction is
within 5%. Because of its complexity, this model is however hardly used in
practice, so lower-order models are needed. We found that while the other
hindered settling models can give reasonably accurate predictions of the
velocity of the largest particles, all of them overestimate - in certain cases
by a large margin - the velocity of the smaller particles. By computing the
fluid-particle velocity slip for each particle class and using Batchelor’s
model, we explain why predicting the lower tail of the particle size
distribution is challenging, and propose possible avenues for model
improvement. The analysis of velocity fluctuations suggests quantitative
similarities between velocity fluctuations in monodisperse and polydisperse
suspensions.
## 1 Introduction
The prediction of the settling velocity of polydisperse suspensions is
crucially important in applications, such as wastewater treatment (He et al.,
2021), food processing (Wang et al., 2020), nanoparticle sorting (Bonaccorso
et al., 2013), particle size characterisation (Papuga et al., 2021), materials
recycling (Wolf, 2021), and sediment transport modelling (Dorrell & Hogg,
2010). Despite decades of research on polydisperse suspensions, this field
still offers interesting scientific problems. A central challenge is the
prediction of the settling velocity of each particle class in a polydisperse
system. This information is essential, not only because the velocity
of each class dictates the particle concentration profile, but also because
only by knowing the velocity of each class is it possible to separate
particles by size. The accurate prediction of the class-averaged particle
velocity has recently become important because of the need for accurate size
fractionation of micro and nanoparticles (Bonaccorso et al., 2013; Backes,
2016). Furthermore, from the knowledge of the class-averaged particle velocity
one can measure the particle size distribution from the time evolution of the
concentration profile (Papuga et al., 2021), which is the principle underlying
the functioning of an analytical centrifuge (Chaturvedi et al., 2018). The
current work aims to analyse the validity of several models used for the
prediction of the class-averaged velocity, comparing against simulation
results.
For a Stokesian suspension of polydisperse spheres grouped into $m$ distinct
particle classes, the average settling velocity of the $i$-th class can be
written as $\langle u_{i}\rangle=u_{St,i}h_{i}(\boldsymbol{\phi})$, where
$u_{St,i}=\frac{2}{9\mu}a_{i}^{2}(\rho_{p}-\rho_{f})g$ is the single-particle
Stokes velocity of the $i$-th class, $h_{i}(\boldsymbol{\phi})$ is the
hindered settling function of that class, and
$\boldsymbol{\phi}=(\phi_{1},\phi_{2},...,\phi_{m})$ is the vector of volume
fractions (Davis & Acrivos, 1985); $a_{i}$ is the particle radius, $\mu$ is
the fluid viscosity, and $\rho_{p}-\rho_{f}$ is the density difference between
the particles and the fluid. The literature reports several models for
$h_{i}(\boldsymbol{\phi})$, as reviewed by Berres et al. (2005).
Batchelor (1982) showed that in the dilute limit the hindered settling
function can be written as
$h_{i}(\boldsymbol{\phi})=1+\sum_{j=1}^{m}S_{ij}\phi_{j},$ (1)
and, using the pair-wise interaction approximation, developed analytical
solutions for the sedimentation coefficients $S_{ij}$ for different size and
density ratios (Batchelor & Wen, 1982). In equation (1) $S_{ii}=-6.55$, so the
hindered settling function for monodispersed suspensions is recovered for
$m=1$ (Batchelor, 1972).
Davis & Gecol (1994) proposed the following semi-empirical extension of
Batchelor’s formula:
$h_{i}(\boldsymbol{\phi})=(1-\phi)^{-S_{ii}}\left(1+\sum_{j=1}^{m}(S_{ij}-S_{ii})\phi_{j}\right),$
(2)
where $\phi=\sum\phi_{j}$ is the total volume fraction, and the coefficients
$S_{ij}$ are defined as in equation (1). For $\phi\rightarrow 0$, equation
(2) recovers (1). For $m=1$, equation (2) reduces to a power-law form, with
exponent $-S_{ii}$, similar to the expression of Richardson & Zaki (1954).
The models of Batchelor and Davis $\&$ Gecol have been tested in experiments
and simulations of bidisperse suspensions (Davis & Birdsell, 1988; Al-Naafa &
Selim, 1992; Abbas et al., 2006; Wang & Brady, 2015; Chen et al., 2023).
However, these models are rarely used in practice because they contain a large
number of coefficients. Batchelor’s model furthermore does not smoothly
converge to a form that can handle large solid concentrations, and therefore
cannot be used in one-dimensional simulation where the concentration increases
from dilute in the supernatant to concentrated in the region immediately above
the sediment-supernatant interface. Simpler expressions have therefore been
developed for practical predictions of the hindered settling of polydisperse
suspensions.
Some authors (Davis & Hassen, 1988; Abeynaike et al., 2012; Vowinckel et al.,
2019; Chen et al., 2023) have adapted the model of Richardson & Zaki (1954) to
the velocity of each class by writing
$h_{i}(\boldsymbol{\phi})=(1-\phi)^{n}.$ (3)
This model predicts different velocities for different particle radii $a_{i}$,
because $h_{i}$ is defined with the single-particle Stokes velocity of each class in the denominator. In
the current paper, this hindered settling formula will be referred to as
Richardson-Zaki’s model for polydispersed suspensions. The exponent $n$ is
obtained from an empirical fit. For monodispersed suspensions, Richardson &
Zaki (1954) originally proposed $n\approx 5$, but the exact value is still a
subject of debate. For example, Brzinski III & Durian (2018) demonstrated that
the value of $n$ depends on the particle Peclet number, and there are two
branches in the sedimentation curve which are best fitted by $n\approx 5.6$
for monodisperse Brownian spheres, and $n\approx 4.48$ for monodisperse non-
Brownian spheres.
Masliyah (1979) and Lockett & Bassoon (1979) proposed the following hindered
settling formula:
$h_{i}(\boldsymbol{\phi})=(1-\phi)^{n-1}\left(1-\sum_{j=1}^{m}\left(\frac{a_{j}}{a_{i}}\right)^{2}\phi_{j}\right),$
(4)
where $a_{j}/a_{i}$ is the ratio between the radii of the $j$-th and the
$i$-th species. The function
$\left(1-\sum_{j=1}^{m}\left(\frac{a_{j}}{a_{i}}\right)^{2}\phi_{j}\right)$ is
the hindered settling function obtained by including the effect of volume
fraction on the fluid-solid slip velocity, and neglecting the effect of
hydrodynamic interactions on the drag force experienced by each particle. The
prefactor $(1-\phi)^{n-1}$ estimates the effect of hydrodynamic interactions
on the drag force (see Appendix A, where equation (4) is re-derived).
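To make the structural differences between these models concrete, the sketch below evaluates equations (1)-(4) for a bidisperse example. The off-diagonal coefficients $S_{12}$ and $S_{21}$ are placeholder values chosen only for illustration; the actual sedimentation coefficients are tabulated by Batchelor & Wen (1982) as functions of the size and density ratios.

```python
# Hindered settling functions (1)-(4) for a bidisperse example (a sketch).
import numpy as np

a = np.array([1.0, 2.0])          # class radii
phi = np.array([0.02, 0.03])      # class volume fractions
S = np.array([[-6.55, -9.0],      # S[i][j]; off-diagonals are placeholders,
              [-3.0, -6.55]])     # not values from Batchelor & Wen (1982)
n = 4.48                          # RZ exponent, non-Brownian fit
                                  # (Brzinski III & Durian 2018)

def h_batchelor(i):
    return 1.0 + S[i] @ phi                                  # equation (1)

def h_davis_gecol(i):
    return (1 - phi.sum())**(-S[i, i]) * (1 + (S[i] - S[i, i]) @ phi)  # (2)

def h_richardson_zaki(i):
    return (1 - phi.sum())**n                                # equation (3)

def h_mlb(i):
    slip = 1 - ((a / a[i])**2 * phi).sum()    # volume-fraction slip term
    return (1 - phi.sum())**(n - 1) * slip    # equation (4)

for i in (0, 1):
    print(i, h_batchelor(i), h_davis_gecol(i),
          h_richardson_zaki(i), h_mlb(i))
```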
The Masliyah-Lockett-Bassoon (MLB) model is favoured in engineering
applications (Xue & Sun, 2003; Berres et al., 2005; Dorrell & Hogg, 2010). It
is easy to tune, since it has only one fitting parameter. Using the MLB model
for the stability analysis of settling size-polydisperse, equal-density
spheres predicts no unphysical lateral segregation or streamers, which are
instead obtained using Davis & Gecol’s model (Bürger et al., 2002; Tory et
al., 2003; Berres et al., 2005). The MLB model has been validated by
comparison of predicted and measured concentration profiles (Xue & Sun, 2003;
Berres et al., 2005). However, this validation is not complete, because the
particle concentration is a convolution of the velocities of the different
size classes. Therefore a reasonably accurate prediction of concentration does
not necessarily imply that the velocity of each class has been predicted
correctly. Direct validation of the settling rates predicted by the MLB model
for all size classes has not been published.
Despite large recent interest in the modelling of suspensions of wide and
continuous size distributions (Pednekar et al., 2018; Gonzalez et al., 2021;
Howard et al., 2022; Malbranche et al., 2023; Lavrenteva et al., 2024), there
is very limited data on the settling of polydisperse suspensions with many
size classes. Most physical experiments have been carried out for bidisperse
or tridisperse suspensions (Lockett & Bassoon, 1979; Davis & Birdsell, 1988;
Al-Naafa & Selim, 1992; Davis & Gecol, 1994; Chen et al., 2023). In these
experiments, the largest size ratio between the two species was around 4, and
the velocity of the largest size class was only measured in the homogeneous
region where a mixture of all size classes was present. Numerical simulations
have been carried out for bidisperse suspensions using Stokesian dynamics
(Revay & Higdon, 1992; Cunha et al., 2002; Wang & Brady, 2015) and the force
coupling method (Abbas et al., 2006), with size ratio up to 4. Simulations of
sedimentation of suspensions with a log-normal distribution have been carried
out by Vowinckel et al. (2019) in a domain bounded by top and bottom walls.
Their objective was to study the effect of cohesive force on the transient
settling process. The velocity of each size class was not quantified.
In this paper, we analyse settling velocity statistics for polydisperse
suspensions of non-Brownian spheres using Stokesian dynamics simulations. The
size distribution is log-normal. Of all the particle size distributions, log-
normal distributions are the most interesting because of their ubiquitous
presence in applications (Vowinckel et al., 2019; Di Vaira et al., 2022;
Rettinger et al., 2022). We vary the volume fraction and ratio between the
standard deviation and mean value of the particle size distribution. All the
particles have the same mass density. The volume fraction ranges from 0.01 to
0.1. The largest size ratio between two classes is 5 and up to 9 classes are
considered. To reduce the large statistical error, following other authors
(Revay & Higdon, 1992; Cunha et al., 2002; Abbas et al., 2006; Wang & Brady,
2015), we produce converged particle velocity statistics by generating a large
number of random fixed particle configurations inside a triply periodic box
and ensemble-averaging over all the configurations.
## 2 Numerical approach and validation
Figure 1: A configuration for volume fraction $\phi=0.05$ and polydispersity
parameter $\alpha=0.4$. The spheres are coloured according to their radii.
Consider a polydisperse suspension of $N$ spheres having the same density but
different radii. The $N$ spheres are divided into $m$ size classes. The radius
of size class $i$ is $a_{i}$. Each sphere in class $i$ is subjected to a force
$\boldsymbol{F}_{i}=\frac{4}{3}\pi
a_{i}^{3}(\rho_{p}-\rho_{f})\boldsymbol{g}$, which includes the particle
weight and buoyancy; $\rho_{p}$ and $\rho_{f}$ are the densities of the
spheres and the fluid, respectively, and $\boldsymbol{g}$ is the gravitational
acceleration. The single-particle Stokes velocity corresponding to each class
is
$\boldsymbol{u}_{St,i}=\frac{2}{9\mu}a_{i}^{2}(\rho_{p}-\rho_{f})\boldsymbol{g}$,
where $\mu$ is the dynamic viscosity of the fluid. In the current work, the
particle velocity statistics are calculated from each configuration of the
spheres by first averaging over the particles in the computational domain and
then ensemble-averaging over statistically identical configurations. Each
configuration is generated by randomly placing the spheres one by one inside
the computational domain, ensuring that each placement gives no overlap
between the spheres (Revay & Higdon, 1992; Wang & Brady, 2015; Cheng & Wachs,
2023). One such configuration is shown in figure 1. In our coordinate system,
gravity is aligned with the $z$ direction, also referred to as the vertical
direction in the following. The horizontal direction corresponds to the $x$
and $y$ coordinates.
To calculate the velocities of individual particles, a basic version of the
Stokesian Dynamics method is adopted (Brady et al., 1988; Brady & Bossis,
1988). The velocities of the spheres are calculated by solving the mobility
problem
$\boldsymbol{U}-\langle\boldsymbol{u}\rangle=\mathsfbi{M}\boldsymbol{F},$ (5)
where $\boldsymbol{U}$ is the $3N$ vector containing the velocities of the
spheres, $\boldsymbol{F}$ is the $3N$ vector containing the gravitational
forces acting on the spheres (these forces include the particle weight and the
buoyancy force), and $\mathsfbi{M}$ is the $3N\times 3N$ mobility matrix (Kim
& Karrila, 2013). In equation (5), $\langle\boldsymbol{u}\rangle$ is the
average translational velocity of the suspension. In our simulations
$\langle\boldsymbol{u}\rangle=\boldsymbol{0}$ because of the zero volume flux
condition of batch sedimentation (Berres et al., 2005). Note that in the
current work only velocity-force coupling is considered, i.e. the stresslet
and other force moments are not considered. Brady & Durlofsky (1988) showed
that in a sedimenting suspension the inclusion of the stresslet changes the
settling rate negligibly. Because we work in the relatively dilute limit,
short-range lubrication interactions are also neglected.
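The structure of the method can be illustrated with a minimal, free-space (non-periodic) implementation of the Rotne-Prager tensor for non-overlapping polydisperse spheres, solving the mobility problem (5) for a pair of unequal spheres. The Ewald-summed, triply periodic tensor used in our simulations (described below) is not implemented in this sketch.

```python
# Free-space Rotne-Prager mobility and the mobility problem (5) -- a
# minimal sketch for non-overlapping polydisperse spheres.
import numpy as np

def rpy_mobility(pos, rad, mu=1.0):
    """Assemble the 3N x 3N Rotne-Prager mobility matrix (open boundaries)."""
    N = len(rad)
    M = np.zeros((3 * N, 3 * N))
    for i in range(N):
        M[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / (6 * np.pi * mu * rad[i])
        for j in range(i + 1, N):
            rvec = pos[i] - pos[j]
            r = np.linalg.norm(rvec)
            rr = np.outer(rvec, rvec) / r**2
            s = (rad[i]**2 + rad[j]**2) / r**2   # valid for r > a_i + a_j
            blk = ((1 + s/3)*np.eye(3) + (1 - s)*rr) / (8 * np.pi * mu * r)
            M[3*i:3*i+3, 3*j:3*j+3] = blk
            M[3*j:3*j+3, 3*i:3*i+3] = blk
    return M

# two spheres settling along -z: solve U = M F (equation (5) with <u> = 0)
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 6.0]])
rad = np.array([1.0, 2.0])
g = np.array([0.0, 0.0, -1.0])
F = np.concatenate([(4/3) * np.pi * ai**3 * g for ai in rad])  # rho_p-rho_f = 1
U = (rpy_mobility(pos, rad) @ F).reshape(2, 3)
print(U)   # each sphere settles faster than its isolated Stokes velocity
```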
The mobility matrix $\mathsfbi{M}$ depends on the positions and radii of the
spheres. We used the Rotne-Prager approximation for this term (Rotne & Prager,
1969; Zuk et al., 2014). This approximation has been shown to give accurate
predictions of the sedimentation velocities of suspensions from dilute to
relatively dense (Brady & Durlofsky, 1988). Triply periodic boundary
conditions are applied to the simulation box. The mobility matrix is
constructed using the Ewald summation technique by splitting the mobility
matrix into a real-space part and a wave-space part (Beenakker, 1986).
Explicit formulae for the mobility matrix for a polydisperse suspension can be
found in Beenakker (1986) and Hase & Powell (2001). As characteristic length
and velocity scales, we choose the mean particle radius $\langle a\rangle$ and
the single particle Stokes velocity corresponding to $\langle a\rangle$. To
make forces non-dimensional we use the effective weight of the mean particle,
$\frac{4}{3}\pi\langle a\rangle^{3}(\rho_{p}-\rho_{f})g$.
Figure 2: Normalized settling velocity vs. volume fraction for a simple cubic
array of monodisperse spheres. The line is the point-force solution of
Hasimoto (1959). Upward triangles are the numerical results of Brady et al.
(1988).
In figure 2 numerical predictions for a single sphere in a triply periodic
cubic box are plotted against Hasimoto’s analytical solution (Hasimoto, 1959)
and the simulation results of Brady et al. (1988). The volume fraction of the
simple cubic array is varied by varying the size of the box. Based on the
point-force assumption, Hasimoto (1959) derived $u/u_{St}=1-1.7601\phi^{1/3}$
for $\phi\ll 1$, where $u_{St}$ is the Stokes velocity of the sphere. Brady et
al. (1988) used Stokesian Dynamics with different approximations for the
mobility matrix. The results of Brady et al. (1988) shown in figure 2
correspond to simulations in the Rotne-Prager approximation. As seen from
figure 2, our results match exactly those of Brady et al. (1988) and converge
to Hasimoto’s solution for $\phi\rightarrow 0$. This test validates our
implementation of the Ewald summation for the periodic boundary conditions.
Figure 3: Normalized relative settling velocity for a pair of spheres as a
function of the centre-to-centre distance for (a) size ratio 2
and (b) size ratio 5. Results of current simulations are shown as symbols, and
analytical results of Wacholder & Sather (1974) are shown as lines.
In figure 3, the normalized relative settling velocity is shown as a function
of the normalized centre-to-centre distance between two unequal spheres with
size ratio 2 and 5, respectively. In our simulations, the radius of the large
sphere is fixed to $a_{l}=2$. The radius of the small sphere is $a_{s}=1$ and
0.4 for size ratio 2 and 5, respectively (these values are chosen because the
largest radius is 2 and the largest size ratio is 5 in the polydisperse
simulations analysed in this paper). The relative settling velocity between
the two spheres is normalized by the Stokes velocity of the large sphere. The
centre-to-centre distance is normalized by the radius of the large sphere. In
figure 3, symbols are results from our simulations, and lines correspond to the
asymptotic solution of Wacholder & Sather (1974), in which only far-field
hydrodynamic interactions were considered. It can be seen that our results
match the analytical solution for both vertically and horizontally aligned
pairs.
The current paper discusses results for bidisperse suspensions and
polydisperse suspensions with more than two classes, also comparing with the
monodisperse case. For the monodisperse case, the radius of the spheres is
$a=1$. For the bidisperse case, two size ratios are considered:
$a_{2}/a_{1}=2$ and 5. The radii of the small size classes are $a_{1}=0.8$ and
0.4 for these two size ratios, respectively. The volume fraction of the small
size class is $\phi_{1}=\frac{3}{11}\phi$ for size ratio 2, and
$\phi_{1}=\frac{1}{76}\phi$ for size ratio 5. These volume fraction ratios are
chosen so that the average radius of the spheres is equal to 1.0 for each
system.
Figure 4: Discrete frequency distributions of particle size for different
values of the polydispersity parameter (symbols). Lines indicate the
continuous log-normal distributions that fit the discrete frequency
histograms.
For the simulations with several size classes, the particle size distribution
follows $p(a)=\frac{1}{a\sigma\sqrt{2\pi}}e^{-(\ln{a}-\mu)^{2}/2\sigma^{2}}$,
where the mean value of the size distribution is $\langle
a\rangle=e^{\mu+\sigma^{2}/2}$ and the standard deviation is $\Delta
a=\sqrt{\left(e^{\sigma^{2}}-1\right)e^{2\mu+\sigma^{2}}}$. We define the
polydispersity parameter as $\alpha=\Delta a/\langle a\rangle$. Four size
distributions are considered, with $\langle a\rangle=1$ and
$\alpha=0.1$, 0.2, 0.3 or 0.4. Each distribution is cut at the two ends,
resulting in a range $a_{min}\leq a\leq a_{max}$, where $a_{min}$ and
$a_{max}$ are chosen so that at least 95% of the original distribution falls
within this range. The largest size ratio between two spheres is 5. Each
radius range is discretized into between 4 and 9 size classes, with a
difference of 0.2 between the radii of two adjacent size classes.
The discrete number frequency distributions are overlaid on the corresponding
continuous distributions in figure 4. The frequency of size class $i$ is
calculated as $\frac{p(a_{i})}{\sum_{j=1}^{m}p(a_{j})}$. For each value of
$\alpha$, the volume fraction $\phi$ ranges from 0.01 to 0.10. For each
simulated case, corresponding to a combination of $\alpha$ and $\phi$, a fixed
box size $L=80$ is used and 500 random particle configurations are generated.
The total number of spheres in each case varies from 925 to 12223.
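The construction of the truncated, discretized log-normal distribution can be sketched in Python as follows. The mapping from $(\langle a\rangle,\alpha)$ to $(\mu,\sigma)$ follows from the moment relations given above, while the exact placement of the class radii on the grid of spacing 0.2 is our assumption:

```python
import numpy as np
from scipy import stats

def lognormal_classes(alpha, mean_a=1.0, da=0.2, coverage=0.95):
    """Truncated, discretized log-normal size distribution. The mapping
    (mean_a, alpha) -> (mu, sigma) follows from the moment relations in the
    text; the placement of class radii on the grid is our assumption."""
    sigma2 = np.log(1.0 + alpha**2)           # from Delta a / <a> = alpha
    mu = np.log(mean_a) - 0.5 * sigma2        # so that <a> = exp(mu + sigma2 / 2)
    dist = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))
    q = 0.5 * (1.0 - coverage)                # symmetric truncation keeping >= 95%
    a_min, a_max = dist.ppf(q), dist.ppf(1.0 - q)
    a_i = np.arange(round(a_min / da), round(a_max / da) + 1) * da
    freq = dist.pdf(a_i) / dist.pdf(a_i).sum()    # number frequency of class i
    return a_i, freq

radii, freq = lognormal_classes(alpha=0.4)
print(radii, freq.round(3))
```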
The average velocity of class $i$ is calculated by ensemble-averaging over $M$
configurations as
$\langle\overline{{u}}_{\xi,i}\rangle=\frac{\sum_{k=1}^{M}\overline{u}_{\xi,i}^{k}}{M},$
(6)
where $\xi=1,2,3$ correspond to the three Cartesian coordinates,
$\overline{u}_{\xi,i}$ is the intrinsic volume average of the velocity
component ${u}_{\xi,i}$ within one configuration, and $\left<\cdot\right>$ is
the ensemble-averaging operator. The intrinsic volume average within class $i$
over configuration $k$ is
$\overline{u}_{\xi,i}^{k}=\frac{\sum_{l=1}^{N_{i}}u_{\xi,i,l}^{k}}{N_{i}}$,
where $N_{i}$ is the number of particles in class $i$. The standard deviation
of a certain velocity component within one realisation is calculated as
$\left(u_{\xi,i}^{\prime}\right)^{k}=\sqrt{\frac{\sum_{l=1}^{N_{i}}\left(u_{\xi,i,l}^{k}-\overline{u}_{\xi,i}^{k}\right)^{2}}{N_{i}-1}}$.
Averaging over many realisations gives an improved estimate of the class-
averaged standard deviation. In the bulk of the paper we indicate averages by
the bracket symbol, distinguishing between volume and ensemble average when
necessary.
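As an illustration of equation (6) and the fluctuation estimate, the class statistics can be assembled from raw particle velocities as in the following sketch; the array layout is our assumption:

```python
import numpy as np

def class_statistics(u, labels, i):
    """Ensemble statistics of one velocity component for size class i,
    following equation (6). `u` has shape (M, N): M random configurations
    of N particles; `labels[n]` is the size class of particle n."""
    u_class = u[:, labels == i]
    u_bar_k = u_class.mean(axis=1)        # intrinsic volume average per configuration
    s_k = u_class.std(axis=1, ddof=1)     # per-configuration standard deviation (N_i - 1)
    return u_bar_k.mean(), s_k.mean()     # ensemble averages over the M configurations
```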
### 2.1 Relation between the mobility formulation and Batchelor’s formula
In this section we show the connection between the mobility formulation,
equation (5), and Batchelor’s formula, equation (1). For simplicity of
notation, let us consider a specific size class. Without loss of generality we
consider class 1. According to (5) the velocity of the $\alpha$-th sphere in
the 1-st size class is
$\boldsymbol{u}_{\alpha,1}=\sum_{i=1}^{m}\sum_{\beta=1}^{N_{i}}\mathsfbi{M}_{\alpha
1,\beta i}\boldsymbol{F}_{i},$ (7)
where $N_{i}$ is the number of spheres in the $i$-th class, and
$\mathsfbi{M}_{\alpha 1,\beta i}$ is the $3\times 3$ mobility matrix
representing the hydrodynamic interaction between the $\alpha$-th sphere in
the 1-st class and the $\beta$-th sphere in the $i$-th class (Kim & Karrila,
2013). Because $\mathsfbi{M}_{\alpha 1,\alpha 1}=(6\pi\mu a_{1})^{-1}$, (7)
can be written as
$\boldsymbol{u}_{\alpha,1}=\boldsymbol{u}_{St,1}+\sum_{\beta\neq\alpha}\mathsfbi{M}_{\alpha
1,\beta 1}\boldsymbol{F}_{1}+\sum_{i\neq
1}\sum_{\beta=1}^{N_{i}}\mathsfbi{M}_{\alpha 1,\beta i}\boldsymbol{F}_{i}.$
(8)
The average velocity of the 1-st class in this configuration is
$\overline{\boldsymbol{u}}_{1}=\boldsymbol{u}_{St,1}+\frac{1}{N_{1}}\left(\sum_{\alpha}\sum_{\beta\neq\alpha}\mathsfbi{M}_{\alpha
1,\beta 1}\boldsymbol{F}_{1}+\sum_{\alpha}\sum_{i\neq
1}\sum_{\beta=1}^{N_{i}}\mathsfbi{M}_{\alpha 1,\beta
i}\boldsymbol{F}_{i}\right),$ (9)
but because $\boldsymbol{F}_{i}$ is constant within the same size class we can
also write
$\overline{\boldsymbol{u}}_{1}=\boldsymbol{u}_{St,1}+\mathsfbi{s}_{11}\boldsymbol{F}_{1}+\sum_{i\neq
1}\mathsfbi{s}_{1i}\boldsymbol{F}_{i},$ (10)
where $\mathsfbi{s}_{11}$ and $\mathsfbi{s}_{1i}$ describe the intra-class
hydrodynamic interactions (within the 1-st class) and the inter-class
hydrodynamic interactions (between the 1-st and the $i$-th classes),
respectively. These two matrices can be written as
$\mathsfbi{s}_{11}=(N_{1}-1)\overline{\mathsfbi{M}}_{11}$ and
$\mathsfbi{s}_{1i}=N_{i}\overline{\mathsfbi{M}}_{1i}$, where
$\overline{\mathsfbi{M}}_{11}$ and $\overline{\mathsfbi{M}}_{1i}$ are the
average two-sphere mobility matrices. Upon ensemble-averaging, the average
velocity of the 1-st size class can be written as
$\langle\boldsymbol{\overline{u}}_{1}\rangle=\boldsymbol{u}_{St,1}+\langle\mathsfbi{s}_{11}\rangle\boldsymbol{F}_{1}+\sum_{i\neq
1}\langle\mathsfbi{s}_{1i}\rangle\boldsymbol{F}_{i}.$ (11)
The average velocity component in the gravity direction can be written as
$\frac{\langle u_{1}\rangle}{u_{St,1}}=1+\frac{9\mu\langle
s_{11}\rangle}{2a_{1}^{2}n_{1}}\phi_{1}+\sum_{i\neq 1}\frac{9\mu\langle
s_{1i}\rangle}{2a_{1}^{2}n_{i}}\phi_{i},$ (12)
where the formula for the single-particle Stokes velocity is used and $n_{i}$
is the number density of class $i$. The scalar $\langle s_{1i}\rangle$ is the
component of $\langle\mathsfbi{s}_{1i}\rangle$ for the velocity-force coupling
in the gravity direction.
Extending equation (12) to a generic class $i$ yields
$\frac{\langle
u_{i}\rangle}{u_{St,i}}=1+B_{ii}(\boldsymbol{\phi})\phi_{i}+\sum_{j\neq
i}B_{ij}\left(\boldsymbol{\phi},\frac{a_{j}}{a_{i}}\right)\phi_{j}.$ (13)
The dependence of $B_{ii}$ and $B_{ij}$ on the volume fraction vector
$\boldsymbol{\phi}$ comes from the fact that $\langle s_{ij}\rangle$ depends
on the pair distribution functions, and the pair distribution functions in
turn depend on the volume fraction of each class. The dependence of $B_{ij}$
on $a_{j}/a_{i}$ comes from the dependence of the two-sphere mobility matrix
on the size ratio. For $\phi\rightarrow 0$, $B_{ii}$ is a constant and
$B_{ij}=S_{ij}$ is only a function of $a_{j}/a_{i}$. In this limit, equation
(13) recovers Batchelor’s expression (1).
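To make the connection with equation (7) concrete, the far-field pair mobility used here can be sketched in a few lines of Python. The expression below is the free-space Rotne-Prager(-Yamakawa) tensor for unequal spheres (Zuk et al., 2014); the simulations use the Ewald-summed periodic version of this tensor (Beenakker, 1986), which this sketch omits:

```python
import numpy as np

def rpy_pair_mobility(r_vec, a_alpha, a_beta, mu=1.0):
    """Free-space Rotne-Prager(-Yamakawa) translational mobility block for
    two non-overlapping spheres of radii a_alpha and a_beta."""
    r = np.linalg.norm(r_vec)
    assert r >= a_alpha + a_beta, "expression valid for non-overlapping spheres"
    rhat_rhat = np.outer(r_vec, r_vec) / r**2
    c = a_alpha**2 + a_beta**2
    return (np.eye(3) * (1.0 + c / (3.0 * r**2))
            + rhat_rhat * (1.0 - c / r**2)) / (8.0 * np.pi * mu * r)

def self_mobility(a, mu=1.0):
    """Diagonal (self) block, M_{alpha,alpha} = I / (6 pi mu a)."""
    return np.eye(3) / (6.0 * np.pi * mu * a)
```

Assembling these $3\times 3$ blocks for all particle pairs into $\mathsfbi{M}$ and multiplying by the force vector reproduces equation (7).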
## 3 Hindered settling of monodisperse and bidisperse suspensions
Figure 5: Monodisperse case: average settling velocity, normalized by the
single-particle Stokes velocity, versus volume fraction.
The normalised average settling velocity $\langle u_{z}\rangle/u_{St}$ for the
monodisperse suspension is plotted in figure 5 as a function of $\phi$. We
include in the plot the Richardson-Zaki correlation $(1-\phi)^{n}$ (Richardson
& Zaki, 1954) for $n=5$, the Batchelor model $1+S\phi$ (Batchelor, 1972)
assuming $S=-6.55$ and the Hayakawa-Ichiki model
$\frac{(1-\phi)^{3}}{1+2\phi+1.429\phi(1-\phi)^{3}}$ (Hayakawa & Ichiki,
1995). The values chosen for the exponent $n$ and the coefficient $S$ here are
typical for non-Brownian particles interacting only hydrodynamically
(Padding & Louis, 2004; Moncho-Jordá et al., 2010).
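The three monodisperse models plotted in figure 5 are straightforward to evaluate; a short sketch is:

```python
import numpy as np

phi = np.linspace(0.0, 0.10, 101)
richardson_zaki = (1.0 - phi)**5                  # Richardson & Zaki (1954), n = 5
batchelor = 1.0 - 6.55 * phi                      # Batchelor (1972), S = -6.55
hayakawa_ichiki = (1.0 - phi)**3 / (1.0 + 2.0 * phi
                                    + 1.429 * phi * (1.0 - phi)**3)
```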
Our simulation results agree with Batchelor’s model for $\phi$ approximately
smaller than 0.03. For larger volume fractions, the simulation gives larger
values than Batchelor’s model. A similar range of validity for Batchelor’s
model was also found by Abbas et al. (2006) using a force-coupling method. Our
results also agree well with the Hayakawa-Ichiki model for $\phi\leq 0.05$ and
they lie between the predictions of Richardson-Zaki’s correlation and
Hayakawa-Ichiki’s model for $\phi\geq 0.06$. The simulation data for
$\phi=0.01$ are smaller than the values predicted by the three models. This is
expected because of the use of triply-periodic boundary conditions in a domain
of finite size (Phillips et al., 1988).
Figure 6: Bidisperse case: average settling velocity normalized by the single-
particle Stokes velocities for the small (left panels) and large (right
panels) particles. Panels (a) and (b) are for size ratio 2. Panels (c) and (d)
are for size ratio 5.
Normalised average settling velocities for the small and the large particles
in the bidisperse case are plotted as symbols in figure 6 for two size ratios.
The predictions of Batchelor’s model (equation (1)), Davis & Gecol’s model
(equation (2)) and the MLB model (equation (4)) are indicated by lines. It is
seen from figure 6 (a) and (c) that our results for the small particles agree
with the predictions of Batchelor’s model for $\phi\leq 0.05$, and lie between
the predictions from Batchelor’s model and Davis & Gecol’s models for
$\phi\geq 0.06$. For the large particles, our results agree with predictions
from Batchelor’s model for $\phi\leq 0.03$ and lie between the predictions
from the Davis & Gecol and MLB models for $\phi\geq 0.04$. Stokesian dynamics
calculations by Wang & Brady (2015), which include stresslet and lubrication
contributions, also predicted, for $\phi$ larger than around 0.05, hindered
settling velocities that are smaller than those of the Davis & Gecol model for
the small particles and larger for the large particles.
In summary, our simulation results for the mono- and bi-disperse cases are
comparable to those in the literature.
## 4 Polydisperse suspensions
Figure 7: Configuration for different polydispersity parameters and
$\phi$=0.05, with the spheres coloured according to their settling velocity;
(a) $\alpha$=0, (b) $\alpha$=0.2, and (c) $\alpha$=0.4.
To illustrate the spatial distribution of particle velocities in the
polydisperse particle simulations, in figure 7 we show snapshots of the
simulations with each sphere coloured according to its settling velocity.
Spheres coloured in red have settling velocities in the direction of gravity
whereas spheres coloured in blue have settling velocities opposite to gravity.
Figure 7 (c) shows that the smaller particles can move against gravity, and
have negative velocities comparable in magnitude to the positive settling
velocity of the largest particles.
Figure 8: Probability distribution functions of (a) horizontal
and (b) vertical velocities for different polydispersity parameters and
$\phi$=0.05.
Figure 9: Probability distribution functions of (a) horizontal
and (b) vertical velocities for $\alpha$=0.4 at $\phi$=0.05. In contrast to
Fig. 8, here the PDFs are calculated based on the distribution of velocities
within each size class.
Probability distribution functions (PDFs) of horizontal and vertical
velocities, shown in figure 8 for different values of $\alpha$, are
approximately Gaussian, with a variance that increases as $\alpha$ increases.
These PDFs are constructed by considering all the particles in the simulation
domain. However, spheres belonging to the same size class also have a
distribution of settling velocities. Therefore, in figure 9, we show the PDFs
of the horizontal and the vertical velocities of spheres in each size class
for $\alpha$=0.4. For comparison, the PDFs of the velocities of all the
spheres are included in this plot as grey lines. Again, the PDFs are
approximately Gaussian (simulations by Cheng & Wachs (2023) of uniform flow
past fixed polydisperse random arrays indicate also a Gaussian distribution
for the hydrodynamic forces of a given size class). Surprisingly, the PDFs of
the horizontal velocities for different size classes collapse onto a single
curve (figure 9 (a)). From the PDFs of the vertical velocities in figure 9
(b), it is seen that the mean velocity increases as the particle size
increases, while different size classes have comparable variances.
Figure 10: Normalized average settling velocity of each size class for
different polydispersity parameters and $\phi$=0.05. The inset is a zoom in
the range $0.8\leq a_{i}\leq 2$. The lines are guides for the eyes.
The average vertical settling velocity of each size class normalized by the
corresponding single-particle Stokes velocity is shown in figure 10 for
different values of $\alpha$. The inset shows a zoom in the range $0.8\leq
a_{i}\leq 2$. Because now the settling velocity is normalised by the single-
particle settling velocity, the information in this plot complements the data
of figure 9 (b). We see that for fixed $\alpha$ the normalized average
settling velocity increases as the particle size increases. This means that
the velocities of small particles are more hindered than the velocities of
large particles. For a given size class, the normalized average settling
velocity decreases as $\alpha$ increases, and decreases faster for small
particles than for large particles.
Figure 11: Normalized average settling velocities of each size class, for
$\alpha$=0.4 and different volume fractions. The inset shows a zoom in the
range $1\leq a_{i}\leq 2$.
In the previous figures, the total volume fraction was fixed, and $\alpha$ was
changed. In figure 11, we instead change $\phi$ for fixed $\alpha$=0.4. This
plot confirms the trend seen in figure 10: for a given volume fraction, the
normalized average settling velocity decreases as the particle size decreases.
The normalized average settling velocity decreases faster with increasing
$\phi$ for small size particles.
To summarise, the smaller particles are more hindered and more affected by
polydispersity than the large ones.
### 4.1 Comparison with hindered settling models
In this subsection, the current simulations are compared with predictions
from Batchelor's model (see equation (1)), the Davis & Gecol model (see
equation (2)) and the MLB model (see equation (4)). The accuracy of the
Richardson-Zaki correlation (see equation (3)) for polydisperse suspensions is
also checked. The values of the
coefficients $S_{ij}$ in Batchelor’s and Davis & Gecol’s models are calculated
from $S_{ij}=-3.50-1.10\lambda-1.02\lambda^{2}-0.002\lambda^{3}$ where
$\lambda=a_{j}/a_{i}$ (Davis & Gecol, 1994). The value of the exponent $n$ in
the MLB model and the Richardson-Zaki correlation is 5.
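For reference, the Davis & Gecol fit quoted above is a one-line function; a sketch evaluating it for a few size ratios is:

```python
def davis_gecol_Sij(lam):
    """Sedimentation coefficient S_ij as a function of the size ratio
    lambda = a_j / a_i (Davis & Gecol, 1994), as quoted above."""
    return -3.50 - 1.10 * lam - 1.02 * lam**2 - 0.002 * lam**3

for lam in (0.5, 1.0, 2.0, 5.0):
    print(f"lambda = {lam:3.1f}  ->  S_ij = {davis_gecol_Sij(lam):7.2f}")
```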
Figure 12: Comparison between the current simulation results and the
predictions of hindered settling function models for the average velocity of
different size classes, for $\phi$=0.05 and different polydispersity
parameters: (a) $\alpha$=0.1, (b) $\alpha$=0.2, (c) $\alpha$=0.3 and (d)
$\alpha$=0.4.
Hindered settling functions corresponding to different size classes for fixed
$\phi$=0.05 and different $\alpha$ are compared against different theoretical
models in figure 12. The Richardson-Zaki correlation largely overestimates the
hindered settling functions of smaller particles, whereas it gives reasonable
values for larger particles. The discrepancy between the Richardson-Zaki
correlation and the computed hindered settling functions of smaller particles
increases as $\alpha$ increases. For each $\alpha$, the predictions from the
other three models show a consistent trend for each size class. The MLB model
gives the largest settling velocities, Batchelor’s model gives the smallest
settling velocities, and Davis & Gecol’s model gives intermediate values. The
differences between the predictions from these three models get smaller as the
particle size increases.
Figure 13: Relative differences between the average settling velocities from
different models and from the current simulations for fixed $\phi=0.05$: (a)
Batchelor’s model; (b) Davis & Gecol model; (c) MLB model.
Figure 13 shows the normalized relative differences between the computed and
predicted average settling velocities. The Batchelor model and the Davis &
Gecol model predict the average settling velocity of each size class quite
well for all $\alpha$ considered here, with relative errors smaller than 10%,
except for the smallest size class $a_{i}$=0.4 for $\alpha$=0.4 for which the
simulation gives a very small settling velocity. From figure 13 (c), it is
seen that the relative difference between the predictions from the MLB model
and the current simulations decreases as $a_{i}$ increases, or $\alpha$
decreases. For $a_{i}\geq 1$, the MLB model predicts the average settling
velocities quite well, with the relative difference within 10%. For $a_{i}\leq
0.8$, the MLB model starts failing.
Figure 14: Comparison between the current numerical results and the
predictions of different models. The comparison is here evaluated as a
function of $\phi$ for fixed $\alpha$=0.4 . (a) to (f) correspond to size
classes $a_{1}$,$a_{2}$,$a_{4}$,$a_{6}$,$a_{7}$ and $a_{9}$, respectively.
Hindered settling function data for fixed $\alpha$=0.4 and different $\phi$
are compared with the predictions from different models in figure 14. As
before, the Richardson-Zaki correlation predicts the hindered settling
functions of the smaller particles poorly. For the other three models, the
trend in the predicted values is similar to that observed when varying
$\alpha$. The
predictions from the MLB model and the Davis & Gecol model are quite close to
each other for all size classes at each volume fraction, and they are also
quite close to the values from current simulations for larger particles.
However, the Davis & Gecol model slightly underestimates the hindered settling
functions of the larger particles when $\phi>0.06$. For smaller particles, the
predictions from the MLB model and the Davis & Gecol model are larger than the
values from the simulations, and the discrepancies between the predictions
from these two models and the values from current simulations get larger as
$\phi$ increases. The predictions of Batchelor’s model are close to the
simulated values for $\phi$ approximately less than 0.05. As $\phi$ increases,
Batchelor’s model underestimates the hindered settling functions of all size
classes systematically compared to the results of current simulations.
Figure 15: Same as in Fig. 13, but for different volume fractions and fixed
$\alpha=0.4.$
For fixed $\alpha$=0.4, the relative differences between the average settling
velocities predicted by different models and calculated by current simulations
of each size class for different volume fractions are shown in figure 15. For
each size class, the relative difference between the prediction from the
Batchelor model and the current simulations increases as the volume fraction
increases, and it is within 10% when $\phi\leq 0.05$, except for the smallest
size class $a_{i}=0.4$. From figure 15 (b) and (c), it is seen that the
relative differences are quite close for the Davis & Gecol and the MLB models,
with those of the MLB model slightly larger. For larger size classes
($a_{i}\geq 1$), both these two models give quite accurate predictions for all
volume fractions considered, with the relative differences within 10% compared
to the results of current simulations. For smaller size classes ($a_{i}\leq
0.8$), both these two models give predictions with large relative differences
compared to the results of current simulations; in general, the relative
difference becomes larger as the volume fraction increases or as the size of
the class decreases.
We saw that the MLB model, despite its simplicity, gives relatively good
agreement for the large particles. However, it fails for the small particles.
The MLB model is based on a closure relation for the particle-fluid velocity
difference (see Appendix A). Therefore, to understand the limits of validity
of the model, we compute the slip velocity from the simulation data and
compare against the MLB model prediction.
Figure 16: Normalized slip velocity for each size class for $\phi$=0.05 and
different $\alpha$. The line shows the prediction of the MLB model.
Slip velocities for each size class normalized by the corresponding Stokes
velocities are plotted in figure 16 for $\phi$=0.05 and different $\alpha$.
The slip velocity for size class $i$ is defined as the difference between the
average settling velocity of class $i$ and the average velocity of the fluid
phase,
$u_{slip,i}=\langle u_{z,i}\rangle-\langle u_{f}\rangle.$ (14)
The value of $\langle u_{f}\rangle$ is obtained from the zero volume-flux
condition $\sum\phi_{j}\langle u_{z,j}\rangle+(1-\phi)\langle u_{f}\rangle=0$.
The slip velocity predicted by the MLB model is calculated from (see Appendix
A)
$u_{slip,i}=u_{St,i}(1-\phi)^{n-1}.$ (15)
It is seen that the MLB model does not predict accurately the slip velocities
of the smaller particles. The discrepancy between the MLB model prediction and
the simulation data increases as $\alpha$ increases. The slip velocities of
relatively large particles are reasonably well captured. As the particle size
increases, the simulation data tends to converge to the MLB model prediction.
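A sketch of the slip-velocity diagnostic of equations (14) and (15), with the mean fluid velocity eliminated through the zero volume-flux condition, is given below; the array inputs are our assumption:

```python
import numpy as np

def slip_velocities(u_z, phi_j):
    """Slip velocity of each size class, equation (14), with the mean fluid
    velocity from sum_j phi_j <u_z,j> + (1 - phi) <u_f> = 0."""
    u_z, phi_j = np.asarray(u_z), np.asarray(phi_j)
    u_f = -np.dot(phi_j, u_z) / (1.0 - phi_j.sum())
    return u_z - u_f

def mlb_slip(u_st, phi, n=5):
    """MLB prediction for the slip velocity, equation (15)."""
    return u_st * (1.0 - phi)**(n - 1)
```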
Figure 17: Normalized slip velocity for each size class for $\alpha$=0.4 and
different values of $\phi$. The dashed line is the prediction of the MLB
model.
The normalized slip velocities for each size class for fixed $\alpha$=0.4 and
varying $\phi$ are plotted in figure 17. It is seen that the prediction of the
MLB model gets increasingly worse as the volume fraction increases for the
small size classes. Predictions for the largest particles are instead
acceptable regardless of the volume fraction.
## 5 Particle velocity fluctuations
Figure 18: Normalized (a) horizontal
and (b) vertical velocity fluctuations for different $\alpha$ and $\phi$=0.05.
Figure 19: Normalized (a) horizontal
and (b) vertical velocity fluctuations for different $\phi$ and $\alpha$=0.4.
Figure 20: Normalized velocity fluctuations of each size class at $\phi=0.05$
for (a) $\alpha=0.2$
and (b) $\alpha=0.4$.
Statistical deviations with respect to the mean particle velocity, as measured
by the root-mean-square of the velocity fluctuation, increase as $\alpha$ or
$\phi$ increase (see figures 18 and 19). The normalized horizontal and
vertical velocity fluctuations of each size class are shown for $\alpha=0.2$
and 0.4 with fixed $\phi=0.05$ in figure 20. For a fixed $\alpha$, the
normalized velocity fluctuations decrease as the particle size increases.
Figure 9 seems to suggest that the velocity fluctuations are approximately
independent of the particle radius $a_{i}$. Because the Stokes velocity scales
as $a_{i}^{2}$, it is expected that the velocity fluctuations normalized by
the Stokes velocity scale as $\sim a_{i}^{-2}$. Our data confirm this scaling
(see lines in figure 20): $u_{i}^{\prime}/u_{St,i}=ca_{i}^{-2}$ fits the data
for all the values of $\alpha$ and $\phi$ simulated, as shown in tables 1 and
2 (this scaling is also observed in our simulations of bidisperse suspensions,
as velocity fluctuations for the two classes are similar in magnitude).
$\alpha$ | horizontal direction | vertical direction
---|---|---
0.1 | 0.32 | 1.13
0.2 | 0.38 | 1.33
0.3 | 0.46 | 1.56
0.4 | 0.53 | 1.85
Table 1: Approximate values of the prefactor $c$ in the scalings of the horizontal and the vertical normalized velocity fluctuations for each $\alpha$ at $\phi=0.05$.
$\phi$ | horizontal direction | vertical direction
---|---|---
0.01 | 0.28 | 0.95
0.03 | 0.45 | 1.54
0.05 | 0.53 | 1.85
0.08 | 0.61 | 2.08
0.10 | 0.62 | 2.15
Table 2: Same as table 1 but for different $\phi$ at $\alpha=0.4$.
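The prefactors in tables 1 and 2 can be obtained from the class data with a one-parameter least-squares fit; the specific fitting procedure below is our assumption of how such values might be extracted:

```python
import numpy as np

def fit_prefactor(a_i, u_prime, u_st):
    """Least-squares prefactor c in u'_i / u_St,i = c * a_i**(-2),
    as used for the lines in figure 20."""
    y = np.asarray(u_prime) / np.asarray(u_st)
    x = np.asarray(a_i, dtype=float)**(-2.0)
    return np.dot(x, y) / np.dot(x, x)
```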
Figure 21: Ratio between vertical and horizontal velocity fluctuation
magnitudes for $a_{i}=1$ and (a) different $\alpha$ at $\phi=0.05$,
or (b) different $\phi$ at $\alpha=0.4$.
The anisotropy ratio between the vertical and the horizontal velocity
fluctuations, plotted in figure 21, is around 3.5 regardless of the values of
$\alpha$ or $\phi$. This value was also observed in the monodisperse and
bidisperse simulations.
Peysson & Guazzelli (1999) measured experimentally the velocity fluctuations
of small and large particles in a dilute bidisperse suspension with size
ratio 2. They found that the ratio of velocity fluctuations between the small
and the large size classes were around 0.85 and 0.75 in the vertical and
horizontal directions, respectively, close to $(a_{1}/a_{2})^{1/2}\approx
0.71$. To explain their findings, they extended Hinch’s scaling for the
velocity fluctuations in monodisperse suspensions (Hinch, 1988), using two
different correlation lengths for the two size classes. In our simulations,
the correlation length should be the size of the computational box, as found
in other numerical simulations using periodic boxes (Ladd, 1996; Nguyen &
Ladd, 2005) and in other experiments during the initial settling stage of
well-mixed suspensions (Segre et al., 1997; Tee et al., 2002). Using the same
correlation length for the two size classes would predict comparable velocity
fluctuation magnitudes.
## 6 Discussion
Figure 22: Inter-class interaction term appearing on the right hand side of
equation (16) for $a_{i}=0.4$ (“small” particles) and $a_{i}=2$ (“large”
particles). The particle size distribution corresponds to $\alpha=0.4$ and
$\phi=0.05$.
Figure 23: (a) Interaction coefficients $S_{ij}-S_{ii}$
and (b) volume fraction distribution corresponding to the interaction term of
figure 22.
The good comparison between Batchelor’s model and the simulation data for
sufficiently small $\phi$ enables us to use this analytical model to
illustrate why the prediction of the velocity of the small particles is highly
dependent on the full particle size distribution, while that of the large
particles is not. Equation (1) can be rewritten as
$h_{i}=1+S_{ii}\phi+\sum_{j=1}^{m}(S_{ij}-S_{ii})\phi_{j},$ (16)
where $S_{ii}=-6.55$ and $\phi$ is the total volume fraction. The influence of
particle class $j$ on particle class $i$ is negligible if
$|(S_{ij}-S_{ii})\phi_{j}|\ll|S_{ii}\phi|$. For $\phi=0.05$, the magnitude of
the intra-class interaction term is $|S_{ii}\phi|=0.33$. Let us compare this
value to the inter-class interaction term for $\phi=0.05$. In figure 22
$(S_{ij}-S_{ii})\phi_{j}$ is shown for $a_{i}=0.4$ (small particles) and
$a_{i}=2$ (large particles), for $\alpha=0.4$. The maximum absolute value of
the inter-class interaction term for the small particle is $0.13$, not
negligible in comparison to $0.33$. The maximum value of the inter-class
interaction term for the largest particles is instead 18 times smaller than
the intra-class interaction term. The question is: why is the inter-class
interaction term small for the large particles? Is this because the
interaction coefficients are small in magnitude? Or because of the
distribution of volume fractions?
To answer these questions, in figures 23 (a) and (b) we show
$(S_{ij}-S_{ii})$ and $\phi_{j}$ separately. It is seen that in our log-normal
distribution the volume fraction corresponding to the small particles is small
in comparison to that of the large particles, and tends to zero as the lower
tail of the particle size distribution is approached. The quantity
$(S_{ij}-S_{ii})$ is on the other hand not diverging for $a_{j}/a_{i}\ll 1$,
and is $O(1)$ in this limit. Therefore, the reason why the lower tail of the
distribution has a small influence on the upper tail is that the volume
fraction corresponding to the lower tail is comparatively small and is
weighted by an interaction term that is not large. This is an insight that
could be applied to other particle size distributions. For example, if the
particle size distribution was such that $\phi$ was comparatively large in the
small particle range, we would expect the settling velocity of the largest
particles to be more affected by the smallest particles than seen in our
simulations.
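This decomposition of equation (16) into intra- and inter-class contributions is easy to script; a self-contained sketch, reusing the Davis & Gecol fit from section 4.1, is:

```python
import numpy as np

def S_ij(lam):
    # Davis & Gecol (1994) fit, lambda = a_j / a_i, as quoted in section 4.1
    return -3.50 - 1.10 * lam - 1.02 * lam**2 - 0.002 * lam**3

def hindered_settling_eq16(a_i, a_classes, phi_classes, S_ii=-6.55):
    """Hindered settling function of class i from equation (16), split into
    the intra-class term S_ii * phi and the inter-class interaction terms."""
    a_classes = np.asarray(a_classes)
    phi_classes = np.asarray(phi_classes)
    inter = (S_ij(a_classes / a_i) - S_ii) * phi_classes   # terms plotted in figure 22
    return 1.0 + S_ii * phi_classes.sum() + inter.sum()
```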
The analysis above also gives insight into the conditions under which models
parameterised on the total volume fraction can be used as a first, practical
approximation for the prediction of the settling of a dilute polydisperse
suspension. This approximation is reasonable when the inter-class interaction
term is comparatively small. This term is small either when $\phi_{j}\ll 1$
for finite $S_{ij}-S_{ii}$, the case discussed above, or when the particle
size distribution is narrow so that $|S_{ij}-S_{ii}|\rightarrow 0$, the case
discussed by Davis & Hassen (1988) (see the value of $S_{ij}-S_{ii}$ for
$a_{j}/a_{i}$ approaching 1 in figure 23 (a)). If deviations of $S_{ij}$ from
$S_{ii}$ are small, then it can be seen from Eq. (13) that the hindered
settling function for $\phi\ll 1$ depends only on the total volume fraction.
For a dilute suspension with a narrow size distribution, the use of
the Richardson-Zaki correlation is, for example, justified, and it is not by
chance that the exponent of the Richardson-Zaki correlation ($n\simeq 5$) is
numerically close to $|S_{ii}|=6.55$. This “lucky coincidence” was also noted
by Davis & Hassen (1988).
The comparison with the simulation data shows that the MLB model tends to
overestimate the hindered settling functions of particles with sizes smaller
than the mean. The MLB model gives more accurate predictions than Richardson-
Zaki’s correlation for comparatively large particles. MLB is also based on
Richardson-Zaki’s formula, but in the MLB model the formula is used to
estimate the particle-fluid slip velocity, not the absolute settling velocity.
For applications where the focus is predicting the sedimentation of the larger
particles (e.g., separation of large particles from a polydisperse mixture),
using the MLB model could be sufficient. For applications where the
stratification in different layers needs to be predicted (e.g. in
sedimentology (Dorrell & Hogg, 2010)), using the MLB model will overestimate
the fraction of the smaller particles in the sediment region. In particle size
fractionation by centrifugation or sedimentation (Bonaccorso et al., 2013;
Backes, 2016), using the MLB model could give a wrong prediction of the
region where most of the small particles are located, jeopardising the size
fractionation protocol.
Looking at the main assumptions of the MLB model, rederived in the Appendix A,
we can see that the model rests on two key assumptions. The first one is that
the buoyancy force on each sphere in a polydisperse suspension depends on the
suspension density. The conceptual difficulty of modelling the buoyancy force
on a small sphere in a suspension of spheres of a different size has been
addressed in several papers (Gibb, 1991; Di Felice, 1995; Ruzicka, 2006;
Piazza et al., 2013). Experiments show that using the density of the
suspension to evaluate the buoyancy force gives accurate predictions of the
settling velocity of the test sphere when the test sphere is comparable in
size or larger than the other particles, but can lead to inaccuracies when the
test particle is smaller than the other particles (Poletto & Joseph, 1995;
Rotondi et al., 2015). A second assumption in the MLB model is that the Stokes
drag correction for each size class only depends on the total volume fraction,
and we have seen that this cannot in general be a good approximation.
We have not tried to improve the MLB model. However, we can give indications
on possible approaches for improvement. When the MLB model was derived, no
accurate drag correlations for polydisperse sphere arrays were available.
Recently, numerical simulations of flow past fixed sphere arrays (van der Hoef
et al., 2005; Yin & Sundaresan, 2009; Sarkar et al., 2009; Cheng & Wachs,
2023) have shown that data for the drag on polydisperse sphere arrays can be
fitted with low-order polynomials of the first two moments of the size
distribution. Improved models for polydisperse suspensions could therefore be
built starting from the MLB model but using the drag force correlation for
polydispersed fixed arrays to model the slip velocity. Such models could also
account for recent studies on the shear viscosity of polydisperse suspensions
(Wagner & Woutersen, 1994; Dörr et al., 2013; Mwasame et al., 2016; Pednekar
et al., 2018), because in certain limits there is a relation between the drag
on a sphere translating through a suspension and the shear viscosity of the
suspension (Squires & Mason, 2010).
For a continuous size distribution, equation (13), which is also valid at
non-negligible volume fractions, reads
$h(a)=1+B_{self}\phi+\phi\int(B_{inter}-B_{self})v\left(a^{\prime}\right)da^{\prime},$
(17)
where $v(a)=\frac{a^{3}p(a)}{\int a^{3}p(a)da}$ is the normalised volume
fraction distribution corresponding to $p(a)$, and the coefficients $B_{self}$
and $B_{inter}$ are averages of the mobility matrices. For a two-parameter
distribution such as the log-normal, $v(a^{\prime})$ is a function of only two
parameters, for example the mean $\langle a\rangle$ and the standard deviation
$\Delta a$. One could therefore postulate a hindered settling function where
the inter-class interaction term and the intra-class interaction terms are
functions of $\phi,\langle a\rangle,\Delta a/\langle a\rangle$. The issue is
that the same functional form should “best fit” a wide range of particle
sizes. This is a search that could benefit from artificial intelligence
methods (Zhang & Ma, 2020; El Hasadi & Padding, 2023; Wu & Zhang, 2023;
Siddani et al., 2024). The analysis we have provided indicates some constraints
on this search, for example the functional forms expected for $\phi\rightarrow
0$ or $\alpha\rightarrow 0$.
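A quadrature sketch of equation (17) is given below; the coefficient $B_{self}$ and the function $B_{inter}$ are placeholders for the unknown, volume-fraction-dependent parameterizations discussed above, not fitted values:

```python
import numpy as np

def hindered_settling_continuum(a_grid, p, B_self, B_inter, phi):
    """Quadrature sketch of equation (17) on a uniform radius grid. `p` is
    the size distribution p(a); `B_self` (a number) and `B_inter` (a
    function of the size ratio a'/a) are placeholder parameterizations."""
    da = a_grid[1] - a_grid[0]
    v = a_grid**3 * p
    v /= (v * da).sum()                  # normalized volume fraction distribution v(a)
    def h(a):
        return 1.0 + B_self * phi + phi * ((B_inter(a_grid / a) - B_self) * v * da).sum()
    return h
```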
## 7 Conclusions
We quantified numerically the hindered settling function of non-Brownian,
dilute suspensions of polydisperse spheres with a log-normal size
distribution, considering the effects of the polydispersity parameter $\alpha$
and the volume fraction $\phi$.
The average settling velocity $\langle u_{z,i}\rangle$ of each particle size
class is found to decrease as either $\phi$ or $\alpha$ increases. The class-
averaged velocity $\langle u_{z,i}\rangle$ decays with $\phi$ or $\alpha$
faster for the smaller particles than for the largest particles. For given
$\alpha$ and $\phi$, the hindered settling function $\langle
u_{z,i}\rangle/u_{St,i}$ decreases as the particle size decreases, indicating
that the settling of the smaller size classes is more hindered compared to
that of the larger particles.
The probability distribution functions of the horizontal and vertical
velocities of each size class tend to follow a Gaussian distribution, with the
probability distributions of horizontal velocities of different size classes
collapsing onto each other for a given $\alpha$ and $\phi$. The magnitude of
the horizontal and vertical velocity fluctuations for each size class
increases as $\phi$ or $\alpha$ increases, and appear to follow the
approximate scaling $u_{i}^{\prime}\sim u_{St,i}(a_{i}/\langle
a\rangle)^{-2}$. Our simulations for the log-normally dispersed system suggest
a value of about 3.5 for the anisotropy ratio between the vertical and the
horizontal velocity fluctuations. This value is comparable to the one observed
in our simulations for monodisperse or bidisperse suspensions.
To verify the accuracy of available polydisperse hindered settling function
models, we compared the predictions of these models with the simulation data.
We found that the Richardson-Zaki correlation, which is often used to model
polydisperse suspensions, largely overestimates the hindered settling
functions of the smaller particles. For $\alpha$=0.4 and $\phi$=0.05, the
value predicted by Richardson-Zaki’s formula for polydisperse suspensions can
be up to seven times larger than the simulated value. Batchelor’s model
(equation (1)) gives quite accurate predictions for all size classes when
$\phi\leq 0.05$, yielding discrepancies of the settling velocities that are
within 10% of the numerical results. The Davis & Gecol model (equation 2) and
the MLB model (equation (4)) give comparable predictions. Both these models
tend to overestimate the hindered settling function of the smaller particles.
The discrepancy between the models and the simulation data increases as
$\alpha$ or $\phi$ increases. In Sec. 6, we use Batchelor’s model to analyse
the conditions under which models parameterised on the total volume fraction
can produce predictions of acceptable accuracy.
Our simulations demonstrate that the modelling of polydisperse suspensions is
still challenging even in the dilute limit. Practically usable models such as
Richardson-Zaki or MLB work reasonably well for large particles, but give
significant errors for small particles. This is a major obstacle in, for
example, size fractionation by centrifugation, where one is interested in the
precise estimation of the velocity of each particle class. In these
applications, the small particle fraction is often the most valuable (Backes,
2016).
Our results hold in the Stokes regime. For future work, particle-resolved
simulations based on the solution of the Navier-Stokes equation around each
particle (Uhlmann & Doychev, 2014; Fornari et al., 2016; Willen & Prosperetti,
2019; Yao et al., 2021; Shajahan & Breugem, 2023) could be used to evaluate
the first effect of fluid inertia on our low-Reynolds number observations. In
the presence of fluid or particle inertia, averaging over instantaneous random
configurations cannot be applied, because in the inertial case the particle
velocities depend on the history of the hydrodynamic forces. Thanks to
advances in computational power, it is however now possible to simulate tens
of thousands of particles at finite Reynolds numbers (Breugem, 2012; Schwarzmeier
et al., 2023), and with appropriate time averaging it should therefore be
possible to obtain smooth statistics for at least some of the parameter
combinations we explore. The most interesting seem to be the extreme cases,
namely small deviations from uniformity and log-normals with a large variance
(such as our $\alpha=0.4$ case). In the presence of fluid inertia, strong
trapping of small particles in the wake of large particles is expected.
Experimental techniques such as X-ray radiography (Dulanjalee et al., 2020),
magnetic resonance imaging (Boyce et al., 2016), or optical experiments with
fluorescent particles (Snabre et al., 2009) could be used to measure the
velocity of the small particle fraction in polydisperse suspensions. Machine
learning techniques such as symbolic regression (Zhang & Ma, 2020; El Hasadi &
Padding, 2023; Wu & Zhang, 2023) could be used in combination with particle-
resolved simulations (Yao et al., 2022) to extend Batchelor’s model to higher
volume fractions, or to incorporate into MLB’s model information about the
moments of the particle size distribution.
[Acknowledgements]We thank Wim-Paul Breugem and Johan Padding for valuable
discussions. The computations are carried out on the Reynolds cluster in the
Process & Energy department in TU Delft.
[Funding]This project has received funding from the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation program
(Grant Agreement No. 715475, project FLEXNANOFLOW).
[Declaration of interests]The authors report no conflict of interest.
## Appendix A Derivation of the slip velocity closure in the MLB model
A derivation of the MLB model is provided here to highlight the key
assumptions of the model, which was too concisely described in the original
papers (Masliyah, 1979; Lockett & Bassoon, 1979). Consider a homogeneous
polydisperse suspension with $m$ particulate classes. The radius and density
of the $j$-th class are $a_{j}$ and $\rho_{j}$, respectively, with
$j=1,2,\ldots,m$. The density and dynamic viscosity of the fluid are $\rho_{f}$ and
$\mu$, respectively. Gravity is in the negative $z$ direction. Due to the
differences between the particle and the fluid densities, a macroscopic
pressure gradient $dp/dz$ along the height of the mixture is needed to balance
the excess weight of the particles. This pressure gradient drives the back
flow of the fluid during settling of the particles. Corresponding to this
pressure gradient, each particle experiences a buoyancy force $F_{\nabla
p}=(-dp/dz)V_{p}$, where $V_{p}$ is the volume of that particle. The total
force exerted on each particle by the fluid is given by $F_{\nabla p}$, by the
buoyancy force due to the undisturbed hydrostatic pressure gradient and by the
drag force due to the relative fluid-particle velocity difference.
The steady-state momentum equation for the fluid phase is
$\left(-\frac{dp}{dz}\right)(1-\phi)-\sum_{j=1}^{m}f_{d,j}=0,$ (18)
where $\phi$ is the total volume fraction, and $f_{d,j}$ is the volumetric
drag force density (drag per unit volume) exerted by the $j$-th particle
class. The steady-state particle momentum equation for the $j$-th particle
class is
$\left(-\frac{dp}{dz}\right)\phi_{j}+f_{d,j}-(\rho_{j}-\rho_{f})\phi_{j}g=0,$
(19)
where $\phi_{j}$ is the volume fraction of the $j$-th class. Using equations
(18) and (19) gives
$\frac{dp}{dz}=-\sum_{j=1}^{m}(\rho_{j}-\rho_{f})\phi_{j}g,$ (20)
and
$f_{d,j}=(\rho_{j}-\rho_{susp})\phi_{j}g,$ (21)
where $\rho_{susp}=(1-\phi)\rho_{f}+\sum_{i=1}^{m}\rho_{i}\phi_{i}$ is the
density of the suspension (see e.g. Xia et al. (2022) for the case $m=1$). The
predictive accuracy of equation (21) for small particles immersed in a
suspension of larger particles has been put into question (Poletto & Joseph,
1995; Rotondi et al., 2015).
To calculate the particle velocity, a constitutive equation relating relative
velocity to force must be postulated. The MLB model uses a linear law between
the drag force and the slip velocity $u_{slip,j}$ between the $j$-th particle
class and the average fluid velocity:
$f_{d,j}=-\beta_{j}u_{slip,j},$ (22)
where $u_{slip,j}$ is defined as in equation (14). The friction coefficient
was calculated as $\beta_{j}=\frac{9\mu\phi_{j}C(\phi)}{2a_{j}^{2}}$. The case
$C=1$ corresponds to no influence of neighbouring particles on the drag force
exerted on a test particle (the factor $\phi_{j}$ is due to the fact that
$f_{d,j}$ is a force per unit volume). To model hydrodynamic interactions on
the drag force, the MLB model assumes $C(\phi)=(1-\phi)^{2-n}$, as for a
monodisperse case at the same total volume fraction (from Richardson-Zaki’s
correlation, the slip velocity in the monodisperse case is $u_{slip}=\langle
u_{p}\rangle-\langle u_{f}\rangle=\frac{\langle
u_{p}\rangle}{1-\phi}=u_{St}(1-\phi)^{n-1}$; equating (21) and (22) using this
slip velocity gives $C(\phi)=(1-\phi)^{2-n}$).
From (21) and (22), the slip velocity for the polydispersed case is
$u_{slip,j}=\frac{2a_{j}^{2}}{9\mu}(1-\phi)^{n-2}(\rho_{j}-\rho_{susp}).$ (23)
If all the particles have the same density,
$\rho_{susp}=(1-\phi)\rho_{f}+\phi\rho_{p}$ and
$\rho_{j}-\rho_{susp}=(1-\phi)(\rho_{p}-\rho_{f})$. In this case the slip
velocity simplifies to
$u_{slip,j}=u_{St,j}(1-\phi)^{n-1},$ (24)
where $u_{St,j}$ is the Stokes velocity of the $j$-th species. Using the
definition of the slip velocity and using mass continuity
$\sum_{j}\phi_{j}\langle u_{j}\rangle+(1-\phi)\langle u_{f}\rangle=0$ yields
equation (4).
It can be seen from the derivation that the main assumptions in MLB’s model
are embedded in equations (21) and (22).
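Combining equation (24) with the zero volume-flux condition gives the MLB class velocities in closed form; a sketch (equal particle densities assumed, as in equation (24)) is:

```python
import numpy as np

def mlb_class_velocities(u_st, phi_j, n=5):
    """Average settling velocity of each size class from the MLB closure:
    the slip velocity of equation (24), combined with the zero volume-flux
    condition sum_j phi_j <u_j> + (1 - phi) <u_f> = 0."""
    u_st, phi_j = np.asarray(u_st), np.asarray(phi_j)
    phi = phi_j.sum()
    u_slip = u_st * (1.0 - phi)**(n - 1)   # equation (24)
    u_f = -np.dot(phi_j, u_slip)           # mean fluid velocity from continuity
    return u_slip + u_f                    # <u_j> = u_slip,j + <u_f>
```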
## References
* Abbas et al. (2006) Abbas, M., Climent, E., Simonin, O. & Maxey, M. R. 2006 Dynamics of bidisperse suspensions under Stokes flows: Linear shear flow and sedimentation. Phys. Fluids 18 (12).
* Abeynaike et al. (2012) Abeynaike, A., Sederman, A.J., Khan, Y., Johns, M.L., Davidson, J.F. & Mackley, M.R. 2012 The experimental measurement and modelling of sedimentation and creaming for glycerol/biodiesel droplet dispersions. Chem. Eng. Sci. 79, 125–137.
* Al-Naafa & Selim (1992) Al-Naafa, M.A. & Selim, M. S. 1992 Sedimentation of monodisperse and bidisperse hard-sphere colloidal suspensions. AIChE J. 38 (10), 1618–1630.
* Backes (2016) Backes, C. et al. 2016 Production of highly monolayer enriched dispersions of liquid-exfoliated nanosheets by liquid cascade centrifugation. ACS Nano 10 (1), 1589–1601.
* Batchelor (1972) Batchelor, G.K. 1972 Sedimentation in a dilute dispersion of spheres. J. Fluid Mech. 52 (2), 245–268.
* Batchelor (1982) Batchelor, G.K. 1982 Sedimentation in a dilute polydisperse system of interacting spheres. Part 1. General theory. J. Fluid Mech. 119, 379–408.
* Batchelor & Wen (1982) Batchelor, G.K. & Wen, C.S. 1982 Sedimentation in a dilute polydisperse system of interacting spheres. Part 2. Numerical results. J. Fluid Mech. 124, 495–528.
* Beenakker (1986) Beenakker, C.W.J. 1986 Ewald sum of the Rotne–Prager tensor. J. Chem. Phys. 85 (3), 1581–1582.
* Berres et al. (2005) Berres, S., Bürger, R. & Tory, E. M. 2005 Applications of polydisperse sedimentation models. Chem. Eng. J. 111 (2-3), 105–117.
* Bonaccorso et al. (2013) Bonaccorso, F., Zerbetto, M., Ferrari, A. C. & Amendola, V. 2013 Sorting nanoparticles by centrifugal fields in clean media. J. Phys. Chem. C 117 (25), 13217–13229.
* Boyce et al. (2016) Boyce, C. M., Rice, N.P., Ozel, A., Davidson, J.F., Sederman, A.J., Gladden, L.F., Sundaresan, S., Dennis, J.S. & Holland, D.J. 2016 Magnetic resonance characterization of coupled gas and particle dynamics in a bubbling fluidized bed. Phys. Rev. Fluids 1 (7), 074201.
* Brady & Bossis (1988) Brady, J. F. & Bossis, G. 1988 Stokesian dynamics. Annu. Rev. Fluid Mech. 20 (1), 111–157.
* Brady & Durlofsky (1988) Brady, J. F. & Durlofsky, L. J. 1988 The sedimentation rate of disordered suspensions. Phys. Fluids 31 (4), 717–727.
* Brady et al. (1988) Brady, J. F., Phillips, R. J., Lester, J. C. & Bossis, G. 1988 Dynamic simulation of hydrodynamically interacting suspensions. J. Fluid Mech. 195, 257–280.
* Breugem (2012) Breugem, W. 2012 A second-order accurate immersed boundary method for fully resolved simulations of particle-laden flows. J. Comput. Phys. 231 (13), 4469–4498.
* Brzinski III & Durian (2018) Brzinski III, T.A. & Durian, D.J. 2018 Observation of two branches in the hindered settling function at low Reynolds number. Phys. Rev. Fluids 3 (12), 124303.
* Bürger et al. (2002) Bürger, R., Karlsen, K. H., Tory, E.M. & Wendland, W.L. 2002 Model equations and instability regions for the sedimentation of polydisperse suspensions of spheres. Z Angew Math Mech. 82 (10), 699–722.
* Chaturvedi et al. (2018) Chaturvedi, S. K., Ma, J., Brown, P. H., Zhao, H. & Schuck, P. 2018 Measuring macromolecular size distributions and interactions at high concentrations by sedimentation velocity. Nat. Commun. 9 (1), 4415.
* Chen et al. (2023) Chen, H., Jia, X., Fairweather, M. & Hunter, T. N. 2023 Characterising the sedimentation of bidisperse colloidal silica using analytical centrifugation. Adv. Powder Technol. 34 (2), 103950.
* Cheng & Wachs (2023) Cheng, Z. & Wachs, A. 2023 Hydrodynamic force and torque fluctuations in a random array of polydisperse stationary spheres. Int. J. Multiph. Flow 167, 104524.
* Cunha et al. (2002) Cunha, F.R., Abade, G.C., Sousa, A.J. & Hinch, E.J. 2002 Modeling and direct simulation of velocity fluctuations and particle-velocity correlations in sedimentation. J. Fluids Eng. 124 (4), 957–968.
* Davis & Birdsell (1988) Davis, R.H. & Birdsell, K.H. 1988 Hindered settling of semidilute monodisperse and polydisperse suspensions. AIChE J. 34 (1), 123–129.
* Davis & Acrivos (1985) Davis, R. H. & Acrivos, A. 1985 Sedimentation of noncolloidal particles at low Reynolds numbers. Annu. Rev. Fluid Mech. 17 (1), 91–118.
* Davis & Gecol (1994) Davis, R. H. & Gecol, H. 1994 Hindered settling function with no empirical parameters for polydisperse suspensions. AIChE J. 40 (3), 570–575.
* Davis & Hassen (1988) Davis, R. H. & Hassen, M. A. 1988 Spreading of the interface at the top of a slightly polydisperse sedimenting suspension. J. Fluid Mech. 196, 107–134.
* Di Felice (1995) Di Felice, R. 1995 Hydrodynamics of liquid fluidisation. Chem. Eng. Sci. 50 (8), 1213–1245.
* Di Vaira et al. (2022) Di Vaira, N. J., Łaniewski-Wołłk, Ł., Johnson, R. L., Aminossadati, S. M. & Leonardi, C. R. 2022 Influence of particle polydispersity on bulk migration and size segregation in channel flows. J. Fluid Mech. 939, A30.
* Dörr et al. (2013) Dörr, A., Sadiki, A. & Mehdizadeh, A. 2013 A discrete model for the apparent viscosity of polydisperse suspensions including maximum packing fraction. J. Rheol. 57 (3), 743–765.
* Dorrell & Hogg (2010) Dorrell, R. & Hogg, A. J. 2010 Sedimentation of bidisperse suspensions. Int. J. Multiph. Flow 36 (6), 481–490.
* Dulanjalee et al. (2020) Dulanjalee, E., Guillard, F., Baker, J., Einav, I. & Marks, B. 2020 Measuring grain size fractions of bidisperse granular materials using x-ray radiography. Opt. Express 28 (20), 29202–29211.
* El Hasadi & Padding (2023) El Hasadi, Y. M. F. & Padding, J. T. 2023 Do logarithmic terms exist in the drag coefficient of a single sphere at high Reynolds numbers? Chem. Eng. Sci. 265, 118195.
* Fornari et al. (2016) Fornari, W., Picano, F. & Brandt, L. 2016 Sedimentation of finite-size spheres in quiescent and turbulent environments. J. Fluid Mech. 788, 640–669.
* Gibb (1991) Gibb, J. 1991 Pressure and viscous forces in an equilibrium fluidized suspension. Chem. Eng. Sci. 46 (1), 377–379.
* Gonzalez et al. (2021) Gonzalez, E., Aponte-Rivera, C. & Zia, R. N. 2021 Impact of polydispersity and confinement on diffusion in hydrodynamically interacting colloidal suspensions. J. Fluid Mech. 925, A35.
* Hase & Powell (2001) Hase, K. R. & Powell, R. L. 2001 Calculation of the Ewald summed far-field mobility functions for arbitrarily sized spherical particles in stokes flow. Phys. Fluids 13 (1), 32–44.
* Hasimoto (1959) Hasimoto, H. 1959 On the periodic fundamental solutions of the Stokes equations and their application to viscous flow past a cubic array of spheres. J. Fluid Mech. 5 (2), 317–328.
* Hayakawa & Ichiki (1995) Hayakawa, H. & Ichiki, K. 1995 Statistical theory of sedimentation of disordered suspensions. Phys. Rev. E 51 (5), R3815.
* He et al. (2021) He, W., Wang, Q., Zhu, Y., Wang, K., Mao, J., Xue, X. & Shi, Y. 2021 Innovative technology of municipal wastewater treatment for rapid sludge sedimentation and enhancing pollutants removal with nano-material. Bioresour. Technol. 324, 124675.
* Hinch (1988) Hinch, E.J. 1988 Sedimentation of small particles. In Disorder and Mixing: Convection, Diffusion and Reaction in Random Materials and Processes, pp. 153–162. Springer.
* van der Hoef et al. (2005) van der Hoef, M. A., Beetstra, R. & Kuipers, J.A.M. 2005 Lattice-Boltzmann simulations of low-Reynolds-number flow past mono-and bidisperse arrays of spheres: results for the permeability and drag force. J. Fluid Mech. 528, 233–254.
* Howard et al. (2022) Howard, A. A., Maxey, M. R. & Gallier, S. 2022 Bidisperse suspension balance model. Phys. Rev. Fluids 7 (12), 124301.
* Kim & Karrila (2013) Kim, S. & Karrila, S. J. 2013 Microhydrodynamics: principles and selected applications. Butterworth-Heinemann.
* Ladd (1996) Ladd, A. J. C. 1996 Hydrodynamic screening in sedimenting suspensions of non-Brownian spheres. Phys. Rev. Lett. 76 (8), 1392.
* Lavrenteva et al. (2024) Lavrenteva, O.M., Smagin, I. & Nir, A. 2024 Shear-induced particle migration in viscous suspensions with continuous size distribution. Phys. Rev. Fluids 9 (2), 024305.
* Lockett & Bassoon (1979) Lockett, M. J. & Bassoon, K.S. 1979 Sedimentation of binary particle mixtures. Powder Technol. 24 (1), 1–7.
* Malbranche et al. (2023) Malbranche, N., Chakraborty, B. & Morris, J. F. 2023 Shear thickening in dense bidisperse suspensions. J. Rheol. 67 (1), 91–104.
* Masliyah (1979) Masliyah, J. H. 1979 Hindered settling in a multi-species particle system. Chem. Eng. Sci. 34 (9), 1166–1168.
* Moncho-Jordá et al. (2010) Moncho-Jordá, A., Louis, A.A. & Padding, J.T. 2010 Effects of interparticle attractions on colloidal sedimentation. Phys. Rev. Lett. 104 (6), 068301.
* Mwasame et al. (2016) Mwasame, P. M., Wagner, N. J. & Beris, A. N. 2016 Modeling the effects of polydispersity on the viscosity of noncolloidal hard sphere suspensions. J. Rheol. 60 (2), 225–240.
* Nguyen & Ladd (2005) Nguyen, N. & Ladd, A. J. C. 2005 Sedimentation of hard-sphere suspensions at low Reynolds number. J. Fluid Mech. 525, 73–104.
* Padding & Louis (2004) Padding, J. T. & Louis, A. A. 2004 Hydrodynamic and Brownian fluctuations in sedimenting suspensions. Phys. Rev. Lett. 93 (22), 220601.
* Papuga et al. (2021) Papuga, K., Kaszubkiewicz, J. & Kawałko, D. 2021 Do we have to use suspensions with low concentrations in determination of particle size distribution by sedimentation methods? Powder Technol. 389, 507–521.
* Pednekar et al. (2018) Pednekar, S., Chun, J. & Morris, J. F. 2018 Bidisperse and polydisperse suspension rheology at large solid fraction. J. Rheol. 62 (2), 513–526.
* Peysson & Guazzelli (1999) Peysson, Y. & Guazzelli, E. 1999 Velocity fluctuations in a bidisperse sedimenting suspension. Phys. Fluids 11 (7), 1953–1955.
* Phillips et al. (1988) Phillips, R. J., Brady, J. F. & Bossis, G. 1988 Hydrodynamic transport properties of hard-sphere dispersions. i. suspensions of freely mobile particles. Phys. Fluids 31 (12), 3462–3472.
* Piazza et al. (2013) Piazza, R., Buzzaccaro, S., Secchi, E. & Parola, A. 2013 On the general concept of buoyancy in sedimentation and ultracentrifugation. Phys. Biol. 10 (4), 045005.
* Poletto & Joseph (1995) Poletto, M. & Joseph, D. D. 1995 Effective density and viscosity of a suspension. J. Rheol. 39 (2), 323–343.
* Rettinger et al. (2022) Rettinger, C., Eibl, S., Rüde, U. & Vowinckel, B. 2022 Rheology of mobile sediment beds in laminar shear flow: effects of creep and polydispersity. J. Fluid Mech. 932, A1.
* Revay & Higdon (1992) Revay, J. M. & Higdon, J. J. L. 1992 Numerical simulation of polydisperse sedimentation: equal-sized spheres. J. Fluid Mech. 243, 15–32.
* Richardson & Zaki (1954) Richardson, J.F. & Zaki, W.N. 1954 Sedimentation and fluidization: Part 1. Trans. Inst. Chem. Eng. 32, 35–53.
* Rotne & Prager (1969) Rotne, J. & Prager, S. 1969 Variational treatment of hydrodynamic interaction in polymers. J. Chem. Phys. 50 (11), 4831–4837.
* Rotondi et al. (2015) Rotondi, M., Di Felice, R. & Pagliai, P. 2015 Validation of fluid–particle interaction force relationships in binary-solid suspensions. Particuology 23, 40–48.
* Ruzicka (2006) Ruzicka, M. C. 2006 On buoyancy in dispersion. Chem. Eng. Sci. 61 (8), 2437–2446.
* Sarkar et al. (2009) Sarkar, S., van der Hoef, M. A. & Kuipers, J. A. M. 2009 Fluid–particle interaction from lattice Boltzmann simulations for flow through polydisperse random arrays of spheres. Chem. Eng. Sci. 64 (11), 2683–2691.
* Schwarzmeier et al. (2023) Schwarzmeier, C., Rettinger, C., Kemmler, S., Plewinski, J., Núñez-González, F., Köstler, H., Rüde, U. & Vowinckel, B. 2023 Particle-resolved simulation of antidunes in free-surface flows. J. Fluid Mech. 961, R1.
* Segre et al. (1997) Segre, P. N., Herbolzheimer, E. & Chaikin, P. M. 1997 Long-range correlations in sedimentation. Phys. Rev. Lett. 79 (13), 2574.
* Shajahan & Breugem (2023) Shajahan, T. & Breugem, W. 2023 Inertial effects in sedimenting suspensions of solid spheres in a liquid. Int. J. Multiph. Flow 166, 104498.
* Siddani et al. (2024) Siddani, B., Balachandar, S., Zhou, J. & Subramaniam, S. 2024 Investigating the influence of particle distribution on force and torque statistics using hierarchical machine learning. AIChE J. p. e18339.
* Snabre et al. (2009) Snabre, P., Pouligny, B., Metayer, C. & Nadal, F. 2009 Size segregation and particle velocity fluctuations in settling concentrated suspensions. Rheol. Acta 48, 855–870.
* Squires & Mason (2010) Squires, T. M. & Mason, T. G. 2010 Fluid mechanics of microrheology. Annu. Rev. Fluid Mech. 42, 413–438.
* Tee et al. (2002) Tee, S., Mucha, P. J., Cipelletti, L., Manley, S., Brenner, M. P., Segre, P. N. & Weitz, D. A. 2002 Nonuniversal velocity fluctuations of sedimenting particles. Phys. Rev. Lett. 89 (5), 054501.
* Tory et al. (2003) Tory, E. M., Karlsen, K. H., Bürger, R. & Berres, S. 2003 Strongly degenerate parabolic-hyperbolic systems modeling polydisperse sedimentation with compression. SIAM J. Appl. Math. 64 (1), 41–80.
* Uhlmann & Doychev (2014) Uhlmann, M. & Doychev, T. 2014 Sedimentation of a dilute suspension of rigid spheres at intermediate Galileo numbers: the effect of clustering upon the particle motion. J. Fluid Mech. 752, 310–348.
* Vowinckel et al. (2019) Vowinckel, B., Withers, J., Luzzatto-Fegiz, P. & Meiburg, E. 2019 Settling of cohesive sediment: particle-resolved simulations. J. Fluid Mech. 858, 5–44.
* Wacholder & Sather (1974) Wacholder, E. & Sather, N. F. 1974 The hydrodynamic interaction of two unequal spheres moving under gravity through quiescent viscous fluid. J. Fluid Mech. 65 (3), 417–437.
* Wagner & Woutersen (1994) Wagner, N. J. & Woutersen, A. T. J. M. 1994 The viscosity of bimodal and polydisperse suspensions of hard spheres in the dilute limit. J. Fluid Mech. 278, 267–287.
* Wang & Brady (2015) Wang, M. & Brady, J. F. 2015 Short-time transport properties of bidisperse suspensions and porous media: A Stokesian dynamics study. J. Chem. Phys. 142 (9).
* Wang et al. (2020) Wang, X., Wang, J., Liu, H., Zhao, L., Wang, Y., Wu, X. & Liao, X. 2020 Improving the production efficiency of sweet potato starch using a newly designed sedimentation tank during starch sedimentation process. J. Food Process. Preserv. 44 (10), e14811.
* Willen & Prosperetti (2019) Willen, D. P. & Prosperetti, A. 2019 Resolved simulations of sedimenting suspensions of spheres. Phys. Rev. Fluids 4 (1), 014304.
* Wolf (2021) Wolf, A. et al. 2021 Centrifugation based separation of lithium iron phosphate (LFP) and carbon black for lithium-ion battery recycling. Chem. Eng. Process. 160, 108310.
* Wu & Zhang (2023) Wu, C. & Zhang, Y. 2023 Enhancing the shear-stress-transport turbulence model with symbolic regression: A generalizable and interpretable data-driven approach. Phys. Rev. Fluids 8 (8), 084604.
* Xia et al. (2022) Xia, Y., Yu, Z., Pan, D., Lin, Z. & Guo, Y. 2022 Drag model from interface-resolved simulations of particle sedimentation in a periodic domain and vertical turbulent channel flows. J. Fluid Mech. 944, A25.
* Xue & Sun (2003) Xue, B. & Sun, Y. 2003 Modeling of sedimentation of polydisperse spherical beads with a broad size distribution. Chem. Eng. Sci. 58 (8), 1531–1543.
* Yao et al. (2022) Yao, Y., Biegert, E., Vowinckel, B., Köllner, T., Meiburg, E., Balachandar, S., Criddle, C. S. & Fringer, O. B 2022 Particle-resolved simulations of four-way coupled, polydispersed, particle-laden flows. Int. J. Numer. Methods Fluids 94 (11), 1810–1840.
* Yao et al. (2021) Yao, Y., Criddle, C. S. & Fringer, O. B. 2021 The effects of particle clustering on hindered settling in high-concentration particle suspensions. J. Fluid Mech. 920, A40.
* Yin & Sundaresan (2009) Yin, X. & Sundaresan, S. 2009 Fluid-particle drag in low-Reynolds-number polydisperse gas–solid suspensions. AIChE J. 55 (6), 1352–1368.
* Zhang & Ma (2020) Zhang, J. & Ma, W. 2020 Data-driven discovery of governing equations for fluid dynamics based on molecular simulation. J. Fluid Mech. 892, A5.
* Zuk et al. (2014) Zuk, P. J., Wajnryb, E., Mizerski, K. A. & Szymczak, P. 2014 Rotne–Prager–Yamakawa approximation for different-sized particles in application to macromolecular bead models. J. Fluid Mech. 741, R5.
|
# Secondary frequency control stabilizing voltage dynamics
Eder Batista Tchawou Tchuisseu Institute of Thermomechanics, Academy of
Science of the Czech Republic, 18200 Prague 8, Czech Republic
<EMAIL_ADDRESS>Eric-Donald Dongmo Department of Mechanical Engineering,
College of Technology, University of Buea, Po. Box 63, Buea, Cameroon
Laboratory of Modeling and Simulation in Bio-Engineering and Prototypes,
University of Yaounde 1 Pavel Procházka Institute of Thermomechanics,
Academy of Science of the Czech Republic, 18200 Prague 8, Czech Republic
<EMAIL_ADDRESS>Paul Woafo Laboratory of Modeling and Simulation in Bio-
Engineering and Prototypes, University of Yaounde 1 Pere Colet Instituto de
Física Interdisciplinar y Sistemas Complejos, IFISC (CSIC-UIB), Campus
Universitat Illes Balears, E-07122 Palma de Mallorca, Spain Benjamin Schäfer
<EMAIL_ADDRESS>School of Mathematical Sciences, Queen Mary
University of London, London E1 4NS, United Kingdom Faculty of Science and
Technology, Norwegian University of Life Sciences, 1432 Ås, Norway
###### Abstract
The ongoing energy transition challenges the stability of the electrical power
system. Stable operation of the electrical power grid requires both the
voltage (amplitude) and the frequency to stay within operational bounds. While
much research has focused on frequency dynamics and stability, voltage
dynamics has often been neglected. Here, we study frequency and voltage
stability in simple networks via linear stability and bulk analysis. In
particular, our linear stability analysis shows that secondary frequency
control guarantees the stability of a particular electric network. More
interestingly, although we only apply secondary frequency control, we observe
a stabilizing effect on the voltage dynamics, especially in our numerical bulk
analysis.
## I Introduction
A reliable and stable electricity supply is an urgent need in our society
[1, 2]. Electricity is generated by converting a primary source of energy,
such as mechanical, chemical, nuclear or thermal energy, into electrical
energy. To power small devices, energy harvesting systems are commonly used
[3, 4], while powering cities or countries requires synchronous generators and
renewable energy sources [5, 6, 7]. The infrastructure connecting such
generators and consumers of electricity is called the electric power grid.
Maintaining power system stability remains an important task for electric
utilities in order to ensure good electricity quality and supply security for
the consumers [5, 6, 8, 9, 10, 11, 12]. Power imbalances are one of the
principal causes of grid instabilities, which can lead the network to
blackouts. In fact, any power imbalance induces a variation of the frequency
and the voltage (amplitude) of the electric grid.
In the literature, much work has been devoted to proposing controllers that
stabilize the grid when facing a power imbalance. Most such controllers focus
on controlling either the frequency [5, 11, 7, 13] or the voltage [14] of the
power grid. For frequency control, the governor of the power plant is often
used through load frequency control and automatic generation control, commonly
known as primary, secondary and tertiary frequency control [7, 15, 16, 17].
The voltage, on the other hand, is controlled through the automatic voltage
regulator, which ensures that the voltage is kept within an admissible range.
Hence, many studies focus on either frequency control or voltage control, and
rarely on both.
There are three leading models [18] to mathematically describe the power grid:
the effective network, the structure-preserving model, and the synchronous
motor model. In this paper, we use the synchronous motor model, where each
node in the network ("the motor") can be considered as an aggregate of
generators or consumers (e.g. a small region or large city). The voltage
sources of these machines are usually considered constant, such that the
dynamics of the power system reduces to its frequency, hence phase, dynamics
[18, 5, 19, 20].
Within this article, we investigate the stability of the high-voltage
transmission system while including the voltage dynamics. This model has been
studied in the electric network research community, e.g. by Katrin
Schmietendorf et al. [21], Sabine Auer et al. [22] and Florian Dörfler et al.
[23], all stressing the need to include voltage dynamics even on the
transmission level. However, these works do not consider any type of control
applied to the variables of the network (phases, frequencies and voltages),
although such control is essential for a global understanding of power system
dynamics and stability. For example, it has been shown that applying secondary
frequency control to the well-known second-order electric network model can
deeply modify the dynamics of a network [1, 15, 5]. Thus, we aim to couple
secondary frequency control to the electric network while considering the
voltage dynamics. The results obtained including the voltage dynamics will be
compared with those obtained with the classical uncontrolled model.
The rest of this paper is organized as follows: Section 2 provides a
mathematical model of the power grid considered in this work as a network of
synchronous machines, each equipped with a secondary frequency controller.
Based on linear stability analysis and bulk dynamics, we also quantify the
stability of the power grid. Numerical analyses are presented in Section 3 for
small (N=2) and larger (N>2) networks. Finally, the paper is concluded in
Section 4.
## II Mathematical model and stability analysis
### II.1 Mathematical model
The electric network is modeled as coupled synchronous machines, each combined
with its (secondary) frequency controller and including the voltage dynamics,
as follows:
$\left\{\begin{array}{ll}\dot{\theta}_{i}=\omega_{i}\\ \dot{\omega}_{i}=-\alpha_{i}\dot{\theta}_{i}+P_{i}^{*}-\sum\limits_{j=1}^{N}E_{q,i}^{\prime}B_{ij}E_{q,j}^{\prime}\sin(\theta_{i}-\theta_{j})+u_{i}\\ T_{d,i_{0}}^{\prime}\dot{E}_{q,i}^{\prime}=E_{f,i}-E_{q,i}^{\prime}+(X_{d,i}-X_{d,i}^{\prime})\sum\limits_{j=1}^{N}B_{ij}E_{q,j}^{\prime}\cos(\theta_{i}-\theta_{j})\\ \tau_{g_{i}}\dot{u}_{i}=-u_{i}-\gamma_{i}\theta_{i}-\beta_{i}\omega_{i},\end{array}\right.$
(1)
where $i\in\{1,...,N\}$ denotes the node index in the network, $\theta$ the
voltage phase angle, $\omega$ the frequency, $E$ the voltage amplitude and $u$
the control. $\alpha$ is a damping constant, $P^{*}$ the power
consumed/generated at a node, $B$ the susceptance matrix, while $T$ and
$\tau_{g}$ are time constants, $E_{f}$ is the rotor’s field voltage, and
$X_{d}$ and $X_{d}^{\prime}$ are reactance parameters of the voltage dynamics
[21]. The nodes (islanded power grids or synchronous machines) composing the
network are assumed to be all-to-all connected. For simplicity we assume that
the frequency controller acts instantaneously ($\tau_{g}=0$), such that it
reduces to the proportional-derivative control
$u_{i}=-\gamma_{i}\theta_{i}-\beta_{i}\omega_{i}$. The derivative term
$\beta_{i}\dot{\theta}_{i}$ can be absorbed into the damping term
$\alpha_{i}\dot{\theta}_{i}$ of the swing equation. One can then rewrite the
set of equations (1) as Eq. (2), substituting $X_{i}$ for
$(X_{d,i}-X_{d,i}^{\prime})$, and $T_{d,i}$ and $E_{i}$ for
$T_{d,i_{0}}^{\prime}$ and $E_{q,i}^{\prime}$, respectively, without loss of
generality:
$\left\{\begin{array}{ll}\dot{\theta}_{i}=\omega_{i}\\ \dot{\omega}_{i}=-\alpha_{i}\dot{\theta}_{i}-\gamma_{i}\theta_{i}+P_{i}^{*}-\sum\limits_{j=1}^{N}E_{i}B_{ij}E_{j}\sin(\theta_{i}-\theta_{j})\\ T_{d,i}\dot{E}_{i}=E_{f,i}-E_{i}+X_{i}\sum\limits_{j=1}^{N}B_{ij}E_{j}\cos(\theta_{i}-\theta_{j}).\end{array}\right.$ (2)
### II.2 Linear stability analysis
Stable operation of the electric network requires that the frequency, the
voltage and the phase differences between connected nodes are constant,
meaning that the network operates in a synchronous regime. Such a system is
said to be linearly stable if, when subjected to a small perturbation, it
regains its stable operation, which for a power grid is its synchronous state.
Konstantin Sharafutdinov et al. [24] have extensively studied the stability of
this electric network model with voltage dynamics, as described in Eq. (2),
but without secondary control. Building on their work, and mainly on the
necessary and sufficient conditions for the uncontrolled system to be linearly
stable, we investigate the effects of the secondary control on the stability
of the network.
To analyze the stability of the system with respect to small perturbations, we
linearize Eq. (2) around a steady state
$(\theta_{i}^{*},\omega_{i}^{*},E_{i}^{*})$. We denote small perturbations
around the steady state as $\theta_{i}=\theta_{i}^{*}+\delta\theta_{i}$,
$\omega_{i}=\omega_{i}^{*}+\delta\omega_{i}$, $E_{i}=E_{i}^{*}+\delta E_{i}$.
Linearizing Eq. (2) leads to Eq. (3), where $\mathbf{X}_{1}$, $\mathbf{X}_{2}$
and $\mathbf{X}_{3}$ are the N-dimensional vectors of $\delta\theta_{i}$,
$\delta\omega_{i}$ and $\delta E_{i}$.
$\left\{\begin{array}{ll}\dot{\mathbf{X}}_{1}=\mathbf{X}_{2},\\ \dot{\mathbf{X}}_{2}=-(\mathbf{P}+\mathbf{\Gamma})\mathbf{X}_{1}-\mathbf{A}\mathbf{X}_{2}-\mathbf{\Lambda}\mathbf{X}_{3},\\ \dot{\mathbf{X}}_{3}=\mathbf{T}^{-1}\mathbf{\chi}\mathbf{\Lambda}\mathbf{X}_{1}+\mathbf{T}^{-1}(\mathbf{\chi}\mathbf{C}-\mathds{1})\mathbf{X}_{3},\end{array}\right.$
(3)
where $\mathbf{T}^{-1}$, $\mathbf{\Gamma}$, $\mathbf{A}$ and $\mathbf{\chi}$
are diagonal matrices with elements $(\mathbf{T}^{-1})_{ii}=\frac{1}{T_{i}}$,
$\Gamma_{ii}=\gamma_{i}$, $A_{ii}=\alpha_{i}$ and $\chi_{ii}=X_{i}$,
respectively, representing the relaxation times of the transient voltage
dynamics, the control, the damping and the transient reactances of the
synchronous machines. The matrices $\mathbf{P}$, $\mathbf{\Lambda}$,
$\mathbf{C}\in\mathds{R}^{N\times N}$ have elements defined as follows:
$P_{ij}=\left\{\begin{array}{ll}-E_{i}^{*}B_{ij}E_{j}^{*}\cos(\theta_{i}^{*}-\theta_{j}^{*}),&i\neq j,\\ \sum\limits_{l\neq i}^{N}E_{i}^{*}B_{il}E_{l}^{*}\cos(\theta_{l}^{*}-\theta_{i}^{*}),&i=j,\end{array}\right.$ (4)
$\Lambda_{ij}=\left\{\begin{array}{ll}E_{j}^{*}B_{ij}\sin(\theta_{i}^{*}-\theta_{j}^{*}),&i\neq j,\\ -\sum\limits_{l\neq i}^{N}E_{l}^{*}B_{il}\sin(\theta_{l}^{*}-\theta_{i}^{*}),&i=j,\end{array}\right.$ (5)
$C_{ij}=B_{ij}\cos(\theta_{i}^{*}-\theta_{j}^{*}).$ (6)
This latter set of equations can be rewritten in the following compact form:
$\frac{\text{d}}{\text{d}t}\left(\begin{array}{c}\mathbf{X}_{1}\\ \mathbf{X}_{2}\\ \mathbf{X}_{3}\end{array}\right)=\underbrace{\left[\begin{array}{ccc}\mathbf{0}&\mathds{1}&\mathbf{0}\\ -(\mathbf{P}+\mathbf{\Gamma})&-\mathbf{A}&-\mathbf{\Lambda}\\ \mathbf{T}^{-1}\mathbf{\chi}\mathbf{\Lambda}&\mathbf{0}&\mathbf{T}^{-1}(\mathbf{\chi}\mathbf{C}-\mathds{1})\end{array}\right]}_{:=\mathbf{J}}\left(\begin{array}{c}\mathbf{X}_{1}\\ \mathbf{X}_{2}\\ \mathbf{X}_{3}\end{array}\right).$ (7)
Evaluating the Jacobian matrix $\mathbf{J}$ at an existing fixed point
$(\theta_{i}^{*},\omega_{i}^{*},E_{i}^{*})$ determines the stability of the
system around this fixed point. This is done by computing the eigenvalues
$\mu_{i}$ associated with the fixed point. The system is linearly stable if
the maximal real part of the eigenvalues is negative; in this case, when
perturbed, the system regains its fixed point. If at least one eigenvalue has
a positive real part, the system is linearly unstable. This analysis is
possible once we can determine the steady state of the system and therefore
compute the eigenvalues.
In many cases, however, the fixed points are difficult to determine, which
complicates the computation of the eigenvalues. In such cases, one needs
conditions for the existence of the fixed point, and the stability conditions
of the system can then be derived through suitable mathematical formulations.
Sharafutdinov et al. [24] provided, for the uncontrolled system, two
sufficient and necessary stability conditions grouped into two propositions
(Propositions I and II in [24]), which both have to hold for a fixed point to
be linearly stable. It is worth noting that the Jacobian matrix $\mathbf{J}$
has the eigenvector $(1,0,0)$ with eigenvalue $\mu_{1}=0$, which corresponds
to a global phase shift of the synchronous machines [25, 24]. This particular
mode, which has no physical implications for the stability, is excluded from
the stability analysis. Thereby we reduce the space of possible states to
$S_{\bot}$, the space perpendicular to the family of solutions
$\theta_{i}^{*}+c\,(1,0,0)$, $c\in\mathds{R}$, defined by
$S_{\bot}=\left\{(\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{X}_{3})\in\mathds{R}^{3N}\,/\,\mathbf{1}\cdot\mathbf{X}_{1}=0\right\}.$ (8)
Proposition I in [24] states that a given steady state
$(\theta_{i}^{*},\omega_{i}^{*},E_{i}^{*})$ is linearly stable if and only if
* given the space $S_{\bot}^{1}=\left\{(\mathbf{X}_{1},\mathbf{X}_{2})\in\mathds{R}^{2N}\,/\,\mathbf{1}\cdot\mathbf{X}_{1}=0\right\}$, the matrix $\mathbf{P}+\mathbf{\Gamma}$ is positive definite on $S_{\bot}^{1}$, and furthermore,
* the matrix $\mathbf{C}-\mathbf{\chi}^{-1}+\mathbf{\Lambda}\mathbf{P}^{+}\mathbf{\Lambda}^{T}$ is negative definite.
Here $T$ denotes the transpose of a matrix (which coincides with its inverse
for an orthogonal matrix), and $\mathbf{P}^{+}$ denotes the Moore-Penrose
pseudo-inverse of $\mathbf{P}$. The only difference in Proposition I compared
to the study of Sharafutdinov et al. [24] is the matrix $\mathbf{\Gamma}$ in
the sum $\mathbf{P}+\mathbf{\Gamma}$. Thus, it is clear from Proposition I
that the secondary control, represented by the matrix $\mathbf{\Gamma}$, can
improve the stability of the network, the necessary condition being that it is
positive definite. In our case, the nodes are assumed to have identical system
parameters, i.e. $\alpha_{i}=\alpha$, $\gamma_{i}=\gamma$, $T_{d,i}=T_{d}$ and
$E_{f,i}=E_{f}$. The matrix $\mathbf{\Gamma}$ is then a diagonal matrix whose
elements $\gamma$ are always positive. Thus, all the eigenvalues of the
symmetric matrix $\mathbf{\Gamma}$ are positive, and hence $\mathbf{\Gamma}$
is positive definite (see Appendix A).
### II.3 Bulk dynamics
To analyze the stability of the network, we simplify the analysis by focusing
on the ensemble dynamics, i.e. we consider the _bulk_ or average dynamics of
the network. As in the previous stability analysis, we wish to evaluate the
impact of the secondary control, through its parameter $\gamma$, on the
dynamics of the system. We first consider a system with constant voltage,
which is a well-known and well-studied case. Next, we include the voltage
dynamics and analyze the effects of the secondary control on the voltage.
#### II.3.1 Constant voltage
Let us consider Eq. (2) and assume the voltage to be constant. The resulting
equation is the simple Kuramoto model with secondary control, discussed for
example in [5]:
$\left\{\begin{array}{ll}\dot{\theta}_{i}=\omega_{i},\\ \dot{\omega}_{i}=-\alpha_{i}\dot{\theta}_{i}-\gamma_{i}\theta_{i}+P_{i}^{*}-\sum\limits_{j=1}^{N}E_{i}B_{ij}E_{j}\sin(\theta_{i}-\theta_{j}).\end{array}\right.$ (9)
Now, we average the variables of Eq. (9) over the $N$ nodes that form the
network; the coupling term cancels upon summation, due to the symmetry of
$B_{ij}$ and the antisymmetry of the sine. We further assume that the node
parameters $\alpha_{i}=\alpha$, $\gamma_{i}=\gamma$ are identical for all
nodes. Thus, we obtain the following differential equations:
$\left\{\begin{array}{ll}\dot{\overline{\theta}}=\overline{\omega},\\ \dot{\overline{\omega}}=-\alpha\dot{\overline{\theta}}-\gamma\overline{\theta}+\frac{1}{N}\sum\limits_{i=1}^{N}P_{i}^{*}.\end{array}\right.$ (10)
Eq. (10) is easily solved, and the average values of the principal variables
are obtained as follows:
$\overline{\theta}(t)=C_{1}e^{r_{1}t}+C_{2}e^{r_{2}t}+\frac{\sum_{i=1}^{N}P_{i}^{*}}{N\gamma},$
where $C_{1}$ and $C_{2}$ are constants determined by the initial conditions,
and $r_{1}$ and $r_{2}$ are given by:
$\left\{\begin{array}{ll}r_{1}=-\frac{\alpha}{2}+\frac{\sqrt{\alpha^{2}-4\gamma}}{2},\\ r_{2}=-\frac{\alpha}{2}-\frac{\sqrt{\alpha^{2}-4\gamma}}{2}.\end{array}\right.$ (11)
Thus, for time tending to infinity and any positive value of the secondary
control parameter $\gamma$, the average angle of the nodes in the network is
constant and the average frequency is zero:
$\overline{\theta}(t\to\infty)=\frac{\sum_{i=1}^{N}P_{i}^{*}}{N\gamma},\quad\overline{\omega}=0.$
As we can see, the average angle $\overline{\theta}$ of the controlled system
depends on the control parameter $\gamma$, the power balance of the network,
and the size of the network. For a balanced network, i.e. with
$\sum_{i=1}^{N}P_{i}^{*}=0$, the mean value of the angles is zero. For an
imbalanced network of fixed size, the mean angle is inversely proportional to
the secondary control parameter, so for a large control the average of the
angles tends to zero.
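As a quick consistency check, the bulk equations (10) can be integrated
numerically and compared with the predicted steady state; the following sketch
(our illustration, with arbitrary parameter values and an assumed random power
assignment) does exactly this.

```python
# Verify that the bulk dynamics of Eq. (10) relaxes to the predicted mean
# angle sum(P*)/(N*gamma) with zero mean frequency.
import numpy as np
from scipy.integrate import solve_ivp

alpha, gamma, N = 0.2, 1.0, 10
P = np.random.default_rng(0).uniform(-0.5, 0.5, N)
P[0] += 1.0                                     # introduce a power imbalance

def bulk(t, y):
    theta_bar, omega_bar = y
    return [omega_bar, -alpha * omega_bar - gamma * theta_bar + P.sum() / N]

sol = solve_ivp(bulk, (0.0, 200.0), [0.0, 0.0], rtol=1e-9, atol=1e-12)
print("numeric  theta_bar(t->inf):", sol.y[0, -1])
print("analytic theta_bar(t->inf):", P.sum() / (N * gamma))
print("numeric  omega_bar(t->inf):", sol.y[1, -1])   # approximately zero
```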
For an uncontrolled electric network ($\gamma=0$), the average of the angles
is time-varying and given by the following expression:
$\overline{\theta}(t)=D_{1}+D_{2}e^{-\alpha t}+\frac{\sum_{i=1}^{N}P_{i}^{*}}{N\alpha}t,$
where $D_{1}=-D_{2}=-\sum_{i=1}^{N}P_{i}^{*}/(N\alpha^{2})$ are constants
determined by the initial conditions. The mean frequency of the network as a
function of time $t$ is then given by Eq. (12):
$\overline{\omega}(t)=-\alpha D_{2}e^{-\alpha t}+\frac{\sum_{i=1}^{N}P_{i}^{*}}{N\alpha},$ (12)
which basically corresponds to the frequency deviation of an uncontrolled,
imbalanced network. This imbalance is more easily absorbed in large networks
(large $N$) and in networks with strong primary control (given by the $\alpha$
parameter).
We have so far shown that, when the voltage is constant, any imbalance in the
network induces a deviation of the mean frequency, which is constant and
non-zero for a network without control; this implies that the average angle
increases linearly with time. For the same network with secondary control at
all nodes, the average frequency always returns to zero after a long time, and
the average angle settles to a constant value proportional to the disturbance
and inversely proportional to the size of the network and the secondary
control parameter. We have thus seen the effects of the secondary control in
an electric network in which the voltage dynamics is not considered. But how
does the voltage dynamics change these results?
#### II.3.2 Dynamical voltage
In this part, we investigate the effects of the secondary control on the
dynamics of the voltage in an electric network. Thus, we consider the full
Eq. (2), summing all three equations over the nodes and dividing by the total
number of nodes. Again, we assume that the parameters are identical for all
nodes, obtaining the following equations:
$\left\{\begin{array}{lll}\dot{\overline{\theta}}=\overline{\omega},\\ \dot{\overline{\omega}}=-\alpha\dot{\overline{\theta}}-\gamma\overline{\theta}+\frac{1}{N}\sum\limits_{i=1}^{N}P_{i}^{*},\\ T_{d}\dot{\overline{E}}=E_{f}-\overline{E}+\frac{X}{N}\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{N}B_{ij}E_{j}\cos(\theta_{i}-\theta_{j}).\end{array}\right.$ (13)
As Eq. (13) shows, the equation describing the dynamics of the mean voltage
(the third line) contains the phase differences of connected nodes as
arguments of a cosine function. This term makes determining the mean-voltage
fixed point very difficult. Nevertheless, one can bound the range of variation
of $\overline{E}$ using the following relations:
$B_{ij}=B_{ji}=\left\{\begin{array}{ll}B_{0}&\mbox{if }i=j,\\ B_{1}&\mbox{otherwise},\end{array}\right.\qquad-1\leq\cos(\theta_{i}-\theta_{j})\leq 1.$
Thus, the cosine term in Eq. (13) is bounded as follows:
$-X(B_{0}+(N-1)B_{1})\overline{E}\leq\frac{X}{N}\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{N}B_{ij}E_{j}\cos(\theta_{i}-\theta_{j})\leq X(B_{0}+(N-1)B_{1})\overline{E}.$ (14)
The range of variation of the mean voltage can then be determined by solving
the following inequalities:
$\begin{cases}T_{d}\dot{\overline{E}}+\overline{E}-E_{f}\leq X(B_{0}+(N-1)B_{1})\overline{E},\\ T_{d}\dot{\overline{E}}+\overline{E}-E_{f}\geq X(B_{0}-(N-1)B_{1})\overline{E}.\end{cases}$ (15)
Thus, the average voltage is bounded as follows:
$\begin{cases}\overline{E}(t)\leq E_{0}\exp\left(\frac{-(1-\sigma_{1})t}{T_{d}}\right)+\frac{E_{f}}{1-\sigma_{1}}\left(1-\exp\left(\frac{-(1-\sigma_{1})t}{T_{d}}\right)\right),\\ \overline{E}(t)\geq E_{0}\exp\left(\frac{-(1+\sigma_{2})t}{T_{d}}\right)+\frac{E_{f}}{1+\sigma_{2}}\left(1-\exp\left(\frac{-(1+\sigma_{2})t}{T_{d}}\right)\right),\end{cases}$ (16)
where $E_{0}$ is the initial value of the mean voltage,
$\overline{E}(t=0)=E_{0}$, and $\sigma_{1}$ and $\sigma_{2}$ are given by:
$\begin{cases}\sigma_{1}=X(B_{0}+(N-1)B_{1}),\\ \sigma_{2}=-X(B_{0}-(N-1)B_{1}).\end{cases}$
Evaluating the previous inequalities, we observe that the dynamics of the mean
voltage is strictly related to the size of the network. According to Eq. (16),
the mean voltage remains bounded if and only if:
$\begin{cases}1-\sigma_{1}=1-X(B_{0}+(N-1)B_{1})\geq 0,\\ 1+\sigma_{2}=1-X(B_{0}-(N-1)B_{1})\geq 0.\end{cases}$ (17)
Thus, the stability of the voltage is achieved if the network size and the
network coupling are chosen in the range given by the following inequality:
$1-X\frac{1-B_{0}}{B_{1}}\leq N\leq 1+X\frac{1-B_{0}}{B_{1}}$ (18)
In the present study, the values of the system’s parameters are set as $X=1$,
$B_{0}=-0.8$ and $B_{1}=1$, following literature values [21, 22]. Eq. (18)
then gives $-0.8\leq N\leq 2.8$, so the mean voltage of a network of size $N$
is bounded if and only if
$N=2.$ (19)
Indeed, consider an electric network with the same parameters as the studied
case and composed of $N$ nodes, where each node represents a small region or a
city. The result above states that extending the current $N=2$ network by
connecting additional nodes renders the extended network unstable in terms of
voltage. Thus, the bulk analysis indicates that the secondary frequency
control has no explicit effect on the stabilization of the mean voltage,
although it does stabilize the mean frequency of the network. This implies the
necessity of another form of control for the mean voltage in larger electric
networks.
## III Numerical analysis
In the previous section, we analysed the linear stability as well as the bulk
dynamics analytically. In this part, we present a numerical analysis for $N=2$
and $N>2$ nodes with the objective of evaluating the effects of the secondary
frequency control ($\gamma$) on the network, thereby complementing the
analytical results.
### III.1 2-node system
The two-node system considered here consists of a generator node with power
$P_{1}>0$ and a consumer node with power $P_{2}<0$. The two nodes simply
represent two interconnected power grids (or synchronous machines) with equal
control parameters $\gamma_{1}=\gamma_{2}=\gamma$; they are described by
Eq. (2), with parameters, such as $T_{d}$, inspired by literature values
[21, 22]. Initially, without disturbance, the two systems evolve until
reaching a stable final state, in which the variables of each node tend to a
steady state, as shown in Fig. 1 without control (first column, black) and
with control (second column, red).
Figure 1: An unperturbed system converges to a steady state with and without
control. The simulation parameters are: $\alpha_{1}=\alpha_{2}=0.2$,
$T_{d,1}=T_{d,2}=2$, $E_{f,1}=E_{f,2}=1$, $E_{1}(0)=E_{2}(0)=1.14$,
$\omega_{1}(0)=\omega_{2}(0)=50$, $\theta_{1}(0)=\theta_{2}(0)=0$,
$P_{dist}=0$, $\gamma_{1}=\gamma_{2}=0$ (first column),
$\gamma_{1}=\gamma_{2}=1$ (second column).
One observes that in both cases the network is synchronized, meaning that the
frequency at each node equals $\omega_{syn}=50$ Hz and the phase difference
between the connected nodes tends to a constant value. The final states
reached by the voltages in the controlled case are slightly greater than the
ones of the uncontrolled case. This slight increase results from the secondary
control at each node, which affects the dynamics of the voltage through the
angles. In fact, at constant voltage, the secondary control essentially plays
the role of the governor, which brings substations online or offline, thereby
increasing or reducing the generated power in order to balance the power in
the system ($\omega=0$) [7, 5, 26]. Thus, once the frequency is brought back
to its nominal value, the phases at each node tend to constant values, which
implies a constant phase difference between connected nodes and hence a
constant voltage. Without control, the two nodes of the system evolve until
they share a constant amount of power and reach a power-balanced state.
Figure 2: Secondary control stabilizes a two-node system. The simulation
parameters are: $\alpha_{1}=\alpha_{2}=0.2$, $T_{d,1}=T_{d,2}=2$,
$E_{f,1}=E_{f,2}=1$, $E_{1}(0)=E_{2}(0)=1.14$,
$\omega_{1}(0)=\omega_{2}(0)=50$, $\theta_{1}(0)=\theta_{2}(0)=0$,
$P_{dist}=1$, $\gamma_{1}=\gamma_{2}=0$ (first column),
$\gamma_{1}=\gamma_{2}=1$ (second column).
Next, we consider a perturbation to the power system: we assume that from time
$t=40$ s to $t=42$ s, the power $P_{1}$ at node 1 experiences a gradual
increase from 0 to $P_{dist}=1$, resulting in a sudden increase of the
corresponding frequency, as shown in Fig. 2 (dashed lines) for the controlled
(red) and uncontrolled (black) cases. This perturbation destabilizes the
uncontrolled system, visible in the emergence of periodic oscillations of the
frequency and the voltages around states far from their original ones, while
the corresponding phases grow in opposite directions. In contrast, the
variables of the controlled system (red curves in Fig. 2) converge to new
steady states for the voltages and phases, while the corresponding frequencies
return to their synchronous pre-perturbation states. Thus, the secondary
control not only stabilized the frequencies but also the voltages, preventing
the system from dropping into an unstable regime. The secondary frequency
control therefore acts as a damping for the voltages, very similar to the
effect the primary frequency control (the damping coefficient $\alpha$) has on
the frequency of the uncontrolled ($\gamma=0$) network. Hence, secondary
frequency control effectively acts as a kind of primary control for the
voltage.
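The two-node scenario of Fig. 2 can be reproduced with a few lines of code.
The sketch below (our reconstruction from Eq. (2) and the figure captions, not
the authors' code) integrates the system with the ramped disturbance at
node 1; the initial frequencies are set to zero, i.e. we work in the frame
co-rotating at the 50 Hz reference.

```python
# Integrate Eq. (2) for N = 2 with a gradual power ramp at node 1 between
# t = 40 s and t = 42 s; frequencies are relative to the synchronous frame.
import numpy as np
from scipy.integrate import solve_ivp

N, alpha, gamma, T_d, E_f, X = 2, 0.2, 1.0, 2.0, 1.0, 1.0
B = np.array([[-0.8, 1.0], [1.0, -0.8]])
P0, P_dist = np.array([1.0, -1.0]), 1.0

def power(t):                              # ramped disturbance at node 1
    ramp = np.clip((t - 40.0) / 2.0, 0.0, 1.0)
    return P0 + np.array([P_dist * ramp, 0.0])

def rhs(t, y):
    th, om, E = y[:N], y[N:2*N], y[2*N:]
    dth = th[:, None] - th[None, :]
    sin_cpl = (E[:, None] * B * E[None, :] * np.sin(dth)).sum(axis=1)
    cos_cpl = (B * E[None, :] * np.cos(dth)).sum(axis=1)
    d_om = -alpha * om - gamma * th + power(t) - sin_cpl
    d_E = (E_f - E + X * cos_cpl) / T_d
    return np.concatenate([om, d_om, d_E])

y0 = np.concatenate([np.zeros(N), np.zeros(N), 1.14 * np.ones(N)])
sol = solve_ivp(rhs, (0.0, 120.0), y0, max_step=0.05)
print("final voltages:", sol.y[2*N:, -1])  # settle to new steady values
```

Setting gamma to zero in this sketch reproduces the unstable, oscillatory
behavior of the uncontrolled case.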
Figure 3: Uncontrolled networks display frequency instability and large
voltage oscillations. Average phase (a), frequency deviation (b) and voltage
(c) as functions of time for different network sizes, with the corresponding
zoom of the voltage dynamics given at the bottom of the figure. Parameter
values: $\gamma=0$, $\alpha=0.2$, $P_{dist}=1$, $X=1$, $E_{f}=1$,
$E_{i}(t=0)=1.14$, $T_{d}=1$, $B_{0}=-0.8$ and $B_{1}=1$.
Figure 4: Secondary control stabilizes the frequency and voltage following
initial oscillations. Average phase (a), frequency deviation (b) and voltage
(c) as functions of time for different network sizes, with the corresponding
zoom of the voltage dynamics given at the bottom of the figure. Parameter
values: $\gamma=1$, $\alpha=0.2$, $P_{dist}=1$.
### III.2 Bulk dynamics
Moving towards the $N>2$ case, we now present the numerical results
complementing the analytical bulk analysis. The question to be answered is:
how does the secondary control affect the mean voltage dynamics in networks?
We consider the electric network whose dynamics is governed by Eq. (2) and
analyse the dynamics of the mean voltage and frequency for different network
sizes. For this purpose, we consider an interconnected network of $N$ nodes,
which can represent $N$ interconnected isolated power grids or synchronous
machines. As before, we assume that the nodes are all-to-all coupled, each
node having equal power $P_{i}$ in absolute value, and that the unperturbed
network is power-balanced ($\sum_{i}P_{i}=0$). Such all-to-all-coupled
networks naturally emerge after Kron reduction of any network topology [27].
In all the studied scenarios, the system is perturbed from time $t=40$ s to
$t=42$ s by gradually increasing the power at a single node from $0$ to
$P_{dist}$.
First, we consider the uncontrolled ($\gamma=0$) case and plot the deviations
of the mean phase, frequency and voltage as functions of time for different
network sizes in Fig. 3. We observe clear agreement with the analytical
prediction for the frequency deviation in Eq. (12): the mean frequency tends
to a constant for long times. Since we assumed that the primary control
parameter/inertia of each power plant is constant, the mean frequency
deviation decreases as the network size increases. The corresponding curve of
the average voltage is plotted in the third column of the first row.
Complementing the analytical results, we observe large transient voltage
dynamics, whose amplitude increases with network size. To better observe the
dynamics of the mean voltage towards the end of our simulation window, we
provide zooms in the lower row of Fig. 3 for each network size $N$. For the
two-node system, the average frequency tends to a constant value, while the
mean voltage oscillates around a stable value $\overline{E}\approx 0.55$ V.
For larger values of $N$, on the other hand, the mean voltage fluctuates and
displays large periodic peaks. These regular peaks persist throughout the
simulation and are observed for $N=10$ and $N=20$. For $N=50$, after the
transient, the mean voltage mostly fluctuates around a constant value and
displays only one peak, likely pointing to a longer periodicity of these peaks
in the larger network. The small fluctuations observed are probably due to the
phase differences between the connected nodes. In this uncontrolled case, the
system is clearly not a synchronous, stable electric network.
Now, we include secondary control ($\gamma>0$) and repeat the same simulations
as before. Fig. 4 (top) shows the time evolution of the mean frequency and
mean voltage of the considered network in the presence of secondary control.
First, we notice that the mean frequency deviation tends to zero for every
network size, as predicted by our linear stability analysis. Hence, we focus
on the mean voltage dynamics, for which we could not derive an equality from
Eq. (13) but only constrain its range of fluctuations; only numerical
simulation can tell us how the voltage evolves with time. Similar to the
frequency, after an initial transient phase, the mean voltage tends to a
constant value for all the considered network sizes.
Again, to highlight the voltage dynamics towards the end of our simulation
window, we provide a zoom of the voltages in the second row of Fig. 4. For the
two-node system, we observe that the mean voltage tends to a steady state and
does not oscillate as in the corresponding uncontrolled case. Thus, the
secondary control clearly stabilizes the voltage. In fact, without secondary
control, the phase of each node increases continuously with time (see Fig. 2),
leading to the oscillation of the voltage. The secondary frequency control, on
the other hand, stabilizes the phases; hence the cosine function in the
expression of each voltage becomes a constant, leading to a constant voltage
at each node. For larger networks, the power flow between connected nodes may
cause the phases to vary slightly, leading at times to fluctuations of the
voltage; this explains the rare peaks observed in the mean voltage.
Nevertheless, the magnitude of the voltage and its peaks in the controlled
network mostly remain lower than in the uncontrolled network. In addition, we
observe that the final steady-state value decreases as the network size
increases.
### III.3 Relaxation time
We have shown in the previous analysis how the dynamics of the phase,
frequency and voltage, as well as their mean values, evolve with the secondary
control parameter $\gamma$ once the network is perturbed. When perturbed from
its stable state, the variables of the system vary until they reach new steady
states, often different from their original unperturbed ones. Now, we
investigate how long the system takes to relax to its (new) steady state by
computing the return time, or relaxation time. According to the bulk dynamics
of the network, the return time of the mean frequency and phase depends
strongly on the damping of the system and only slightly on the secondary
control parameter, which mostly impacts the oscillatory regime of these
variables. While the return time of the mean frequency and phase can be
obtained explicitly, this is not easily achieved for the mean voltage. Hence,
we again use numerical computations to obtain the return time for different
control parameters.
In particular, we consider the previously described two-node system, including
voltage dynamics, and determine the evolution of the return time as a function
of the secondary control parameter $\gamma$. The network is perturbed as
previously, by gradually decreasing the power at node 1 from $P_{1}$ to
$P_{1}+P_{dist}$ ($P_{dist}=-1$) from time $t=40$ s to $t=42$ s; all other
parameters remain unchanged. The relaxation or return time is computed
numerically by recording the time taken by the voltage at a node to regain a
steady state after the perturbation. We denote by $\overline{E}(t)$ the mean
voltage at time $t$ and by $\overline{E}(t-T)$ the mean voltage at time $t-T$,
where $T$ is a characteristic time, and by $\xi$ the numerical tolerance. The
mean voltage is considered stable after the perturbation if and only if
$|\overline{E}(t)-\overline{E}(t-T)|\leq\xi$. Note that we focus here on the
relaxation of the voltage, not the frequency, as we are interested in
quantifying the impact of frequency control on voltage stability. The return
times computed this way decrease with increasing secondary control parameter
$\gamma$, see Fig. 5. First, we note that for any non-zero value of $\gamma$
the system always regains a steady state after the perturbation, i.e. the
secondary frequency control guarantees stability and return. Secondly, the
return time is reduced by increasing the control amplitude, approaching zero
for sufficiently large control $\gamma$. This means that the secondary
frequency control is then strong enough to immediately compensate the
introduced disturbance, so the system never leaves its original fixed point.
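The return-time criterion can be applied directly to a sampled mean-voltage
trace. The following is a minimal sketch of the stated criterion as we
understand it (our assumed implementation, with illustrative values of $T$ and
$\xi$), not the authors' code.

```python
# Return time of the mean voltage after a perturbation at t_pert, using the
# criterion |E_bar(t) - E_bar(t - T)| <= xi; t and E_bar are sampled arrays.
import numpy as np

def return_time(t, E_bar, t_pert, T=5.0, xi=1e-4):
    dt = t[1] - t[0]                 # uniform sampling assumed
    lag = max(1, int(round(T / dt)))
    for k in range(lag, len(t)):
        if t[k] > t_pert and abs(E_bar[k] - E_bar[k - lag]) <= xi:
            return t[k] - t_pert     # time needed to regain a steady state
    return np.inf                    # never settled within the simulated run
```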
Figure 5: Secondary control stabilizes the system by reducing the return time.
We plot the return time of the mean voltage as the control parameter increases
in the two-node system. The simulation parameters are:
$\alpha_{1}=\alpha_{2}=0.2$, $T_{d,1}=T_{d,2}=2$, $E_{f,1}=E_{f,2}=1$,
$P_{dist}=1$, $N=2$ nodes. Figure 6: Exploring heterogeneous parameters and
non-all-to-all coupling. Illustration of the topology (a), the average
frequency deviation (b) and the voltage (c) as functions of time for the
uncontrolled (black) and controlled (red) network. Parameter values:
$P=[-0.4,0.53,-0.51,0.56,0.52,0.48,-0.55,-0.45,0.491,0.509,-0.482,-0.518,-0.46,-0.64,0.42,0.58,-0.5,0.5,0.35,-0.45]$,
primary control $\alpha_{i}=0.2|P_{i}|$, secondary control
$\gamma_{i}=|P_{i}|$, while $P_{dist}$, $X$ and $B$ are chosen as in the other
simulations.
### III.4 Case study: Heterogeneous parameters and network influence
So far we have assumed networks that are all-to-all coupled and have
homogeneous parameters, i.e. identical loads, generation and control values.
We used this simplification to derive analytical results. To demonstrate that
our results are in principle also applicable to more general systems, we
consider a non-all-to-all-coupled system with heterogeneous parameters, see
Fig. 6a. In particular, we simulate the dynamical behavior of 20 nodes
connected to a common bus with power values $P$ randomly drawn between $-0.7$
and $0.7$. We assume that nodes with a large absolute value of $P$ provide
more control power and hence set the control values proportional to the
absolute power: $\alpha_{i}=0.2|P_{i}|$, secondary control
$\gamma_{i}=|P_{i}|$, where $|...|$ denotes the absolute value. Analogously to
the earlier analysis, we observe that the average frequency can only be
restored once secondary control is used, see Fig. 6b. Meanwhile, the voltage
dynamics is not controlled and oscillates (Fig. 6c), since we have more than
two nodes, consistent with our analytical results. How exactly non-homogeneous
parameters and network topology affect the dynamics and controllability of
both frequency and voltage is beyond the scope of this study.
## IV Conclusion
This article has investigated the effects of secondary frequency control on
the voltage dynamics of electric networks of different sizes. We considered
simple networks, both all-to-all coupled and with a bus topology. Our
analytical linear stability analysis has shown that the secondary control can
guarantee the stability of the network. In addition, treating the network as a
simplified bulk, we have demonstrated that the stability of the mean phase and
frequency is independent of the mean voltage of the network, whereas the mean
voltage does depend on the nodes’ phases. The numerical simulations of the
perturbed network in the presence of secondary control have shown that the
secondary control actually plays the role of a primary control for the
voltage: after a perturbation, the secondary frequency control stops the
variation of the voltage and stabilizes it at a new steady value different
from the original one.
Our results showcase how voltage stability and secondary frequency control
should be considered in other power system stability analyses: including only
primary frequency control might stabilize the frequency but does not guarantee
any voltage stability. Hence, the voltage should then explicitly be considered
when assessing stability in power systems. Meanwhile, if secondary frequency
control is applied, voltage stability is no longer an immediate concern and
might be neglected, especially if only short-term stability is of interest.
In the future, it would be interesting to complement the secondary frequency
control, which acts as an effective "primary voltage control", by a "secondary
voltage control" bringing the voltage back within its operational boundaries.
Furthermore, how precisely network topology and heterogeneous parameters
affect voltage and frequency stability and return times remains a mostly open
question. Similarly, the influence of specific parameters, such as $T_{d}$ and
$X_{i}$, should be investigated further. Finally, our linear stability and
return time analysis could be supplemented by a detailed analysis of the basin
of attraction [28].
### Acknowledgments
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under the Marie Sklodowska-Curie grant
agreement No 840825.
E.B.T.T. and P.P. acknowledge funding from the European Regional Development
Fund under Grant No. CZ.02.1.02/0.0/0.0/15003/0000493, "CENDYNMAT - Centre of
Excellence for Nonlinear Dynamics Behaviour of Advanced Materials in
Engineering", and the Academy of Sciences CR under Grant Strategy AV21, VP03:
"Efficient energy conversion and storage, Vibrodiagnostics of rotating blades
of rotary machines in power engineering".
E.B.T.T., D.G., and P.C. acknowledge funding from the Ministerio de Ciencia e
Innovación (Spain), the Agencia Estatal de Investigación (AEI, Spain), and the
Fondo Europeo de Desarrollo Regional (FEDER, EU) under Grant No. PACSS
(RTI2018-093732-B-C22) and the Maria de Maeztu program for Units of Excellence
in R&D (No. MDM-2017-0711). E.B.T.T. also acknowledges the fellowship from the
AEI and MINEICO, Spain, under the FPI program (No. FIS2015-63628-CZ-Z-R).
### Competing interests
The authors declare no competing interests.
## References
* Machowski et al. [2011] J. Machowski, J. Bialek, and J. Bumby, _Power System Dynamics: Stability and Control_ (John Wiley & Sons, 2011).
* Mancarella et al. [2021] P. Mancarella, J. Moriarty, A. Philpott, A. Veraart, S. Zachary, and B. Zwart, _Introduction: the mathematics of energy systems_ (2021).
* Tchawou and Woafo [2014] E. Tchawou and P. Woafo, Nonlinear Engineering 3, 89 (2014).
* Tékam et al. [2014] G. O. Tékam, E. T. Tchuisseu, C. K. Kwuimy, and P. Woafo, Nonlinear Dynamics 76, 1561 (2014).
* Tchuisseu et al. [2018] E. B. T. Tchuisseu, D. Gomila, P. Colet, D. Witthaut, M. Timme, and B. Schäfer, New Journal of Physics 20, 083005 (2018).
* Witthaut and Timme [2012] D. Witthaut and M. Timme, New Journal of Physics 14, 083036 (2012).
* Tchuisseu et al. [2017] E. T. Tchuisseu, D. Gomila, D. Brunner, and P. Colet, Physical Review E 96, 022302 (2017).
* Witthaut and Timme [2013] D. Witthaut and M. Timme, The European Physical Journal B 86, 1 (2013).
* Schäfer et al. [2015] B. Schäfer, M. Matthiae, M. Timme, and D. Witthaut, New Journal of Physics 17, 015002 (2015).
* Dongmo and Woafo [2015] E. D. Dongmo and P. Woafo, The European Physical Journal B 88, 1 (2015).
* Dongmo et al. [2017] E. D. Dongmo, P. Colet, and P. Woafo, The European Physical Journal B 90, 6 (2017).
* Filatrella et al. [2008] G. Filatrella, A. H. Nielsen, and N. F. Pedersen, The European Physical Journal B 61, 485 (2008).
* Gorjão et al. [2020] L. R. Gorjão, M. Anvari, H. Kantz, C. Beck, D. Witthaut, M. Timme, and B. Schäfer, IEEE Access 8, 43082 (2020).
* Sun et al. [2019] H. Sun, Q. Guo, J. Qi, V. Ajjarapu, R. Bravo, J. Chow, Z. Li, R. Moghe, E. Nasr-Azadani, U. Tamrakar, et al., IEEE Transactions on Power Systems 34, 2790 (2019).
* Weitenberg et al. [2017] E. Weitenberg, Y. Jiang, C. Zhao, E. Mallada, C. De Persis, and F. Dörfler, arXiv preprint arXiv:1711.07332 (2017).
* Tyloo and Jacquod [2020] M. Tyloo and P. Jacquod, IEEE Control Systems Letters 5, 929 (2020).
* Böttcher et al. [2020] P. C. Böttcher, A. Otto, S. Kettemann, and C. Agert, Chaos: An Interdisciplinary Journal of Nonlinear Science 30, 013122 (2020).
* Nishikawa and Motter [2015] T. Nishikawa and A. E. Motter, New Journal of Physics 17, 015012 (2015).
* Schäfer et al. [2016] B. Schäfer, C. Grabow, S. Auer, J. Kurths, D. Witthaut, and M. Timme, The European Physical Journal Special Topics 225, 569 (2016).
* Schäfer et al. [2017] B. Schäfer, M. Matthiae, X. Zhang, M. Rohden, M. Timme, and D. Witthaut, Physical Review E 95, 060203 (2017).
* Schmietendorf et al. [2014] K. Schmietendorf, J. Peinke, R. Friedrich, and O. Kamps, European Physical Journal Special Topics 223, 2577 (2014).
* Auer et al. [2016] S. Auer, K. Kleis, P. Schultz, J. Kurths, and F. Hellmann, European Physical Journal Special Topics 225, 609 (2016).
* Dörfler and Bullo [2014] F. Dörfler and F. Bullo, Automatica 50, 1539 (2014).
* Sharafutdinov et al. [2018] K. Sharafutdinov, L. Rydin Gorjão, M. Matthiae, T. Faulwasser, and D. Witthaut, Chaos: An Interdisciplinary Journal of Nonlinear Science 28, 033117 (2018).
* Manik et al. [2014] D. Manik, D. Witthaut, B. Schäfer, M. Matthiae, A. Sorge, M. Rohden, E. Katifori, and M. Timme, The European Physical Journal Special Topics 223, 2527 (2014).
* Tchuisseu et al. [2019] E. T. Tchuisseu, D. Gomila, and P. Colet, International Journal of Electrical Power & Energy Systems 108, 145 (2019).
* Dorfler and Bullo [2013] F. Dorfler and F. Bullo, IEEE Transactions on Circuits and Systems I: Regular Papers 60, 150 (2013).
* Hellmann et al. [2016] F. Hellmann, P. Schultz, C. Grabow, J. Heitzig, and J. Kurths, Scientific Reports 6 (2016).
* Horn [1962] A. Horn, Pacific Journal of Mathematics 12, 225 (1962).
## Appendix A Eigenvalues of sum of matrices
To compute the eigenvalues of the sum of two matrices, as used in this paper,
we follow this simple line of reasoning: the eigenvalues of a matrix $A$ are
obtained from
$\det{\left(A-\lambda I\right)}=0,$ (20)
with identity matrix $I$, where $\lambda_{1},\lambda_{2},...,\lambda_{N}$
denote the eigenvalues of $A$. Now suppose we add a multiple of the identity
to $A$, i.e. $B=A+\gamma I$; then the eigenvalue equation reads
$\det\left(B-\tilde{\lambda}I\right)=0\;\Leftrightarrow\;\det\left(A+\gamma I-\tilde{\lambda}I\right)=0\;\Leftrightarrow\;\det\left(A-(\tilde{\lambda}-\gamma)I\right)=0\;\Leftrightarrow\;\det\left(A-\lambda I\right)=0,$ (21)
i.e. the eigenvalues $\tilde{\lambda}$ of $B=A+\gamma I$ are given by
$\tilde{\lambda}_{k}=\lambda_{k}+\gamma$. So if the eigenvalues are ordered as
$\lambda_{1}\leq\lambda_{2}\leq...$ and $\lambda_{1}<0$ is the smallest
eigenvalue, we can choose any $\gamma>-\lambda_{1}$ so that $B=A+\gamma I$ has
only positive eigenvalues, i.e. the matrix $B$ is positive definite.
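The spectral shift can be verified numerically in a few lines (our
illustration, with an arbitrary symmetric test matrix):

```python
# Check that the eigenvalues of A + gamma*I are those of A shifted by gamma.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                                    # symmetric test matrix
gamma = 3.0
lam = np.linalg.eigvalsh(A)                          # sorted ascending
lam_shifted = np.linalg.eigvalsh(A + gamma * np.eye(4))
print(np.allclose(lam_shifted, lam + gamma))         # True
```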
The interested reader might consult more general results on the eigenvalues of
two Hermitian matrices [29].
# Stable patterns in the Lugiato-Lefever equation with a confined vortex pump
Shatrughna Kumar1 Wesley B. Cardoso2 Boris A. Malomed1,3 1Department of
Physical Electronics, School of Electrical Engineering, Faculty of
Engineering, and Center for Light-Matter Interaction, Tel Aviv University, Tel
Aviv 69978, Israel 2Instituto de Física, Universidade Federal de Goiás,
74.690-900, Goiânia, Goiás, Brazil 3Instituto de Alta Investigación,
Universidad de Tarapacá, Casilla 7D, Arica, Chile
###### Abstract
We introduce a model of a passive optical cavity based on a novel variety of
the two-dimensional Lugiato-Lefever equation, with a localized pump carrying
intrinsic vorticity $S$, and the cubic or cubic-quintic nonlinearity. Up to
$S=5$, stable confined vortex-ring states (vortex pixels) are produced by
means of a variational approximation and in a numerical form. Surprisingly,
vast stability areas of the vortex states are found, for both the self-
focusing and defocusing signs of the nonlinearity, in the plane of the pump
and loss parameters. When the vortex-rings are unstable, they are destroyed by
azimuthal perturbations which break the axial symmetry. The results suggest
new possibilities for mode manipulations in passive nonlinear photonic media
by means of appropriately designed pump beams.
## I Introduction and the model
Optical solitons are a broad class of self-trapped states maintained by the
interplay of nonlinearity and dispersion or diffraction in diverse photonic
media [1, 2]. In addition to that, dissipative optical solitons are supported
by the equilibrium of loss and gain or pump, concomitant to the nonlinearity-
dispersion/diffraction balance [3, 4]. Dissipative solitons have been studied
in detail, theoretically and experimentally, in active setups, with the loss
compensated by local gain (essentially, provided by lasing), being modelled by
one- and two-dimensional (1D and 2D) equations of the complex Ginzburg-Landau
(CGL) type [5, 6].
In passive nonlinear optical cavities, the losses are balanced by the pump
field supplied by external laser beams, with the appropriate models provided
by the Lugiato-Lefever (LL) equations [7]. This setting was also studied in
the 1D and 2D forms [8, 9, 10, 11]. Widely applied in nonlinear optics,
equations of the LL type play a crucial role in understanding fundamental
phenomena such as the modulation instability (MI) and pattern formation in
dissipative environments [8]-[24]. The relevance of these models extends to
the exploration of complex dynamics of various nonlinear photonic modes,
tremendously important applications being the generation of Kerr solitons and
frequency combs in passive cavities [12]-[24], as well as the generation of
terahertz radiation [25]. In addition to rectilinear cavity resonators,
circular ones can be used too [26]. In many cases, they operate in the
whispering-gallery regime [27]-[32].
In most cases, solutions of the one- and two-dimensional LL equations are
looked for under the action of the spatially uniform pump, which approximately
corresponds to the usual experimental setup. However, the use of localized
(focused) pump beams is possible too, which makes it relevant to consider LL
equations with the respective shape of the pump terms. In fact, truly
localized optical modes in the cavities can be created only in this case,
otherwise the uniform pump supports nonzero background of the optical field.
In particular, exact analytical solutions of the LL equations with the 1D pump
represented by the delta-function, and approximate solutions maintained by the
2D pump in the form of a Gaussian were reported in Ref. [33]. In Ref. [26],
the LL equation for the ring resonator with localized pump and loss terms
produced nonlinear resonances leading to multistability of nonlinear modes and
coexisting solitons, that are associated with spectrally distinct frequency
combs.
Further, solutions for fully localized robust pixels with zero background were
produced by the 2D LL equation, incorporating the spatially uniform pump,
self-focusing or defocusing cubic nonlinearity, and a tight confining
harmonic-oscillator potential [34]. Additionally, this model with a vorticity-
carrying pump gives rise to stable vortex pixels. In particular, in the case
of the self-defocusing sign of the nonlinear term, the pixels with zero
vorticity and ones with vorticity $S=1$ were predicted analytically by means
of the Thomas-Fermi (TF) approximation.
In this work, we introduce the 2D LL equation for the complex amplitude
$u\left(x,y,t\right)$ of the light field, with the cubic or cubic-quintic
nonlinearity:
$\frac{\partial u}{\partial t}=-\alpha u+\frac{i}{2}\nabla^{2}u+i\sigma\left(\left|u\right|^{2}-\eta^{2}\right)u-ig\left(\left|u\right|^{4}-\eta^{4}\right)u+f(r)e^{iS\theta},$
(1)
and a confined pump beam represented by the factor $f(r)$. Here $i$ is the
imaginary unit, $\alpha>0$ is the loss parameter, $\sigma=+1$ and $-1$
correspond, respectively, to the self-focusing and defocusing Kerr (cubic)
nonlinearity, $g>0$ or $g<0$ represents the self-defocusing or focusing
quintic nonlinearity (which often occurs in optical media [35, 36, 37], in
addition to the cubic term), and the parameter $\eta$ defines the cavity
mismatch, which, in terms of the linearized LL equation, is
$\sigma\eta^{2}-g\eta^{4}$ (the coefficient multiplying the linear term
$\sim iu$). The pump profile
$f(r)=f_{0}r^{S}\exp\left(-r^{2}/W^{2}\right),$ (2)
written in terms of polar coordinates $\left(r,\theta\right)$, corresponds to
a confined pump beam with real amplitude $f_{0}$, radial width $W$, and
integer vorticity $S\geq 1$. Vortex beams, shaped by passing a usual laser
beam through an appropriate phase mask, are available in experiments [37].
Equation (1) is written in scaled form, and all figures below are plotted in
the same notation. In physical units, $r=1$ and $t=1$ normally correspond to
$\sim 50$ $\mathrm{\mu}$m and $\sim 50$ ps, respectively. The typical width
$W=2$, considered below, then corresponds to a pump beam with diameter
$\sim 100$ $\mathrm{\mu}$m, which is an experimentally relevant value.
Accordingly, the characteristic evolution time in the simulations presented
below, $t\sim 100$, corresponds to times $\sim 5$ ns.
Stationary solutions of Eq. (1) are characterized by values of the total power
(alias norm),
$P=\int_{-\infty}^{+\infty}dx\int_{-\infty}^{+\infty}dy~{}\left|u\left(x,y\right)\right|^{2}\equiv
2\pi\int_{0}^{\infty}\left|u(r,t)\right|^{2}rdr,$ (3)
and angular momentum,
$M=i\int_{-\infty}^{+\infty}dx\int_{-\infty}^{+\infty}dyu^{\ast}\left(y\partial_{x}u-x\partial_{y}u\right)dxdy$
(4)
(with $\ast$ standing for the complex conjugate), even if the power and
angular momentum are not dynamical invariants of the dissipative equation (1).
In the case of the axisymmetric solutions with vorticity $S$, i.e.,
$u\left(x,y\right)=u(r)e^{iS\theta}$ [38, 39, 40], the expressions for the
power and angular momentum are simplified:
$P=2\pi\int_{0}^{\infty}\left|u(r)\right|^{2}rdr,\qquad M=SP.$ (5)
Our objective is to construct _stable_ ring-shaped vortex solitons
(representing vortex pixels, in terms of plausible applications), as localized
solutions of Eq. (1) with the same $S$ as in the pump term (2). The stability
is a challenging problem, as it is well known from the work with models based
on the nonlinear Schrödinger and CGL equations that (in the absence of a tight
confining potential) vortex-ring solitons are normally vulnerable to the
splitting instability. In the case of a narrow ring shape, the splitting
instability may be considered as a quasi-one-dimensional modulational instability (MI) of the ring against
azimuthal perturbations which break its axial symmetry [40, 2]. The azimuthal
MI is driven by the self-focusing nonlinearity, and inhibited by the self-
defocusing.
To produce stationary solutions for the vortex solitons in an approximate
analytical form (parallel to the numerical solution), we employ a variational
approximation (VA). Our results identify regions of the existence and
stability of the vortex solitons with $1\leq S\leq 4$ in the space of
parameters of Eqs. (1) and (2) (in particular, in the plane of
$\left(f_{0},\alpha\right)$) for both signs of the cubic nonlinearity,
$\sigma=\pm 1$, while the mismatch parameter is fixed to be $\eta=1$ by dint
of scaling. The stability areas are vast, provided that the loss coefficient
$\alpha$ is, roughly speaking, not too small. A majority of the results are
produced for the pure cubic model, with $g=0$, but the effect of the quintic
term, with $g\neq 0$, is considered too. Quite surprisingly, a stability area
for the vortices with $S\leq 3$ is found even in the case of $\sigma=+1,g<0$,
when both the cubic and quintic terms are self-focusing, which usually implies
strong propensity to the azimuthal instability of the vortex rings [40].
The rest of the paper is structured as follows. The analytical approach, based
on the appropriate VA, is presented in Sec. II. An asymptotic expression for
the tail of the vortex solitons, decaying at $r\rightarrow\infty$, is found
too in that section. Systematically produced numerical results for the shape
and stability of the vortex solitons are collected (and compared to the VA
predictions) in Sec. III. The paper is concluded by Sec. IV.
## II Analytical considerations
This section summarizes the analytical part of the work and results produced
by this part. Two directions of the analytical considerations for the present
model are possible: the investigation of the decaying “tails” of the localized
stationary states, in the framework of the linearized model, and detailed
development of VA for the full model, including the nonlinear terms.
### II.1 Asymptotic forms of the vortex solitons
Direct consideration of the linearized version of Eqs. (1) and (2) readily
produces an explicit result for the soliton’s tail decaying at
$r\rightarrow\infty$:
$u(r,\theta)\approx(i/2)f_{0}W^{4}r^{S-2}\exp\left(-r^{2}/W^{2}+iS\theta\right),$ (6)
with the power of the pre-Gaussian factor, $r^{S-2}$, which is lower than that
in the pump term, $r^{S}$. Due to this feature, the asymptotic expression (6)
formally predicts a maximum of local power $\left|u(r)\right|^{2}$ at $S>2$,
at
$r^{2}=r_{\max}^{2}\equiv\left(S/2-1\right)W^{2}.$ (7)
A local maximum is indeed observed in numerically found radial profiles of all
vortex solitons (see Figs. 2, 4, 7(b), 9(b), 10, and 11 below, for $S=1$, $4$,
$2$, $2$, $1$, and $3$, respectively). In fact, for these cases Eq. (7)
predicts values of $r_{\max}$ which are smaller by a factor $\simeq 0.6$ than
the actually observed positions of the maxima. The discrepancy is explained by
the fact that the asymptotic expression (6) is valid at values of $r$ which
are essentially larger than $r_{\max}$.
In a looser form, one can try to construct an asymptotic approximation for the
solution at moderately large $r$, by adopting the ansatz which follows the
functional form of the pump term (2), viz.,
$u(r,\theta)\approx\mathrm{u}(r)r^{S}\exp\left(-r^{2}/W^{2}+iS\theta\right),$
(8)
where $\mathrm{u}(r)$ is a complex slowly varying function, in comparison with
those which are explicitly present in ansatz (8). Substituting the ansatz in
Eq. (1) and omitting derivatives of the slowly varying function, one can
develop an approach which is akin to the TF approximation applied to the model
with the tight trapping potential in Ref. [33]. The result for the linearized
version of Eq. (1), which implies a small amplitude of the mode pinned to the
pump beam, is
$\mathrm{u}(r)=f_{0}\left[\alpha-i\left(\frac{2r^{2}}{W^{4}}-\frac{2(S+1)}{W^{2}}-\sigma+g\right)\right]^{-1},$
(9)
where, as said above, $\eta=1$ is substituted. In the limit of
$r\rightarrow\infty$, Eqs. (9) and (8) carry over into the asymptotically
rigorous expression (6). On the other hand, Eq. (9) predicts a maximum of the
local power at
$r^{2}=\left(r_{\max}^{2}\right)_{\mathrm{TF}}=\left(S+1\right)W^{2}+\frac{\sigma-g}{2}W^{4},$
(10)
cf. Eq. (7). One may expect that the prediction of the local maximum of the
vortex soliton at point (10) is valid when it yields values of
$\left(r_{\max}^{2}\right)_{\mathrm{TF}}$ which are large enough, i.e., if $S$
and $W$ are relatively large. Indeed, the comparison with the numerically
produced profiles of the vortex solitons, displayed below in Figs. 6 and 10,
demonstrates that Eq. (10) predicts, relatively accurately,
$\left(r_{\max}\right)_{\mathrm{TF}}=4$ for $S=5$, $W=2$, $\sigma=-1$, and
$g=0$. However, the prediction given by Eq. (10) is not accurate for $S=1$ and
$2$.
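As a quick arithmetic check of Eq. (10), the quoted value is reproduced by a few lines of Python (an illustrative snippet, not part of the original computations):

```python
# Eq. (10): (r_max^2)_TF = (S + 1)*W^2 + (sigma - g)/2 * W^4
S, W, sigma, g = 5, 2.0, -1, 0.0
r2_TF = (S + 1) * W**2 + 0.5 * (sigma - g) * W**4
print(r2_TF**0.5)  # 4.0, matching the value of (r_max)_TF quoted above
```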
Finally, in the limit of $r\rightarrow 0$, the asymptotic form of the solution
is simple, $u(r,\theta)\approx u_{0}r^{S}e^{iS\theta}$, but constant $u_{0}$ cannot be
found explicitly, as it depends on the global structure of the vortex-soliton
solution. In particular, the crude TF approximation given by Eq. (9) yields
$u_{0}=f_{0}\left[\alpha+i\left(\frac{2(S+1)}{W^{2}}+\sigma-g\right)\right]^{-1}$.
### II.2 The variational approximation (VA)
A consistent global analytical fit for the vortex solitons may be provided by
VA, based on the Lagrangian of the underlying equation [2]. While Eq. (1),
which includes the linear dissipative term, does not have a Lagrangian
structure, it can be converted into an appropriate form by the substitution
suggested by Ref. [41], which absorbs the dissipative term:
$u\left(r,\theta,t\right)=U(r,t)e^{iS\theta-\alpha t},$ (11)
producing the following time-dependent equation for complex function $U(r,t)$,
where, as said above, we set $\eta=1$ by means of scaling:
$\frac{\partial U}{\partial t}=\frac{i}{2}\left(\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}-\frac{S^{2}}{r^{2}}\right)U+i\sigma\left(\left|U\right|^{2}e^{-2\alpha t}-1\right)U-ig\left(\left|U\right|^{4}e^{-4\alpha t}-1\right)U+f(r)e^{\alpha t}.$ (12)
The real Lagrangian which precisely produces the time-dependent equation (12)
is
$L=\int_{0}^{\infty}\left[\frac{i}{2}\left(\frac{\partial U^{\ast}}{\partial t}U-U^{\ast}\frac{\partial U}{\partial t}\right)+\frac{1}{2}\left|\frac{\partial U}{\partial r}\right|^{2}+\left(\frac{S^{2}}{2r^{2}}+\sigma-g\right)|U|^{2}-\frac{\sigma}{2}e^{-2\alpha t}|U|^{4}+\frac{g}{3}e^{-4\alpha t}|U|^{6}+if(r)e^{\alpha t}\left(U^{\ast}-U\right)\right]rdr.$ (13)
The simplest ansatz which may be used as the basis for VA follows the form of
the pump term (2):
$U(r,t)=U_{0}e^{i\phi}r^{S}\exp\left(-\frac{r^{2}}{W^{2}}+\alpha t\right),$
(14)
where variational parameters $U_{0}$ and $\phi$ are the real amplitude and
phase shift of the solution with respect to the pump. Power (3) for this
ansatz is
$P_{S}=\pi\Gamma(S+1)\left(\frac{W^{2}}{2}\right)^{S+1}U_{0}^{2},$ (15)
where $\Gamma(S+1)\equiv S!$ is the Gamma-function, and the time-dependent
factors, $\exp\left(\pm\alpha t\right)$, mutually cancel when relations (11)
and (14) are substituted in expression (3). Note that the local power
$|U(r)|^{2}$, corresponding to ansatz (14), attains its maximum at
$r^{2}=SW^{2}/2$.
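Expression (15) follows from elementary Gaussian integrals; as an illustration, it can be checked symbolically (a minimal sketch, assuming sympy is available):

```python
import sympy as sp

r, W, U0 = sp.symbols('r W U_0', positive=True)
for S in (1, 2, 3):
    # |U|^2 of ansatz (14); the exp(alpha*t) factor cancels against the
    # exp(-alpha*t) factor of substitution (11) and is omitted here
    absU2 = U0**2 * r**(2*S) * sp.exp(-2*r**2 / W**2)
    P = 2*sp.pi*sp.integrate(absU2*r, (r, 0, sp.oo))
    formula = sp.pi*sp.factorial(S)*(W**2/2)**(S+1)*U0**2   # Eq. (15)
    print(S, sp.simplify(P - formula))                      # 0 for each S
```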
The substitution of ansatz (14) in Lagrangian (13) and straightforward
integration yields the respective VA Lagrangian,
$L_{\mathrm{VA}}=U_{0}e^{2\alpha t}\left\{\left[6^{-(3S+2)}\Gamma\left(3S+1\right)gW^{2(3S+1)}\right]U_{0}^{5}-\left[2^{-4(S+1)}\Gamma\left(2S+1\right)\sigma W^{2(2S+1)}\right]U_{0}^{3}+2^{-(S+2)}\Gamma\left(S+1\right)W^{2S}\left[\left(\frac{d\phi}{dt}+(\sigma-g)\right)W^{2}+\left(S+1\right)\right]U_{0}+2^{-(S+1)}\Gamma\left(S+1\right)W^{2(S+1)}f_{0}\sin\phi\right\}.$ (16)
Then, the Euler-Lagrange equations for $U_{0}$ and $\phi$ are obtained as
$\partial L_{\mathrm{VA}}/\partial U_{0}=\delta L_{\mathrm{VA}}/\delta\phi=0,$
(17)
where $\delta/\delta\phi$ stands for the variational derivative. Taking into
account that Lagrangian (16) must be substituted in the respective action,
$\int L_{\mathrm{VA}}dt$, and then the action must be actually subjected to
the variation, one should apply the time differentiation to factor $e^{2\alpha
t}$ in Lagrangian (16), while deducing the appropriate form of $\delta
L_{\mathrm{VA}}/\delta\phi$ in Eq. (17). Once the Euler-Lagrange equations
(17) have been derived, we consider their stationary (fixed-point) solutions
by setting $dU_{0}/dt=d\phi/dt=0$, which yields
$U_{0}\alpha-f_{0}\cos\phi=0,$ (18)
$6^{-(3S+1)}\Gamma\left(3S+1\right)gW^{2(2S+1)}U_{0}^{5}-2^{-2(2S+1)}\Gamma\left(2S+1\right)\sigma W^{2S+2}U_{0}^{3}+2^{-(S+1)}\Gamma\left(S+1\right)\left\{f_{0}W^{2}\sin\phi+\left[(\sigma-g)W^{2}+\left(S+1\right)\right]U_{0}\right\}=0.$ (19)
It is relevant to mention that the evolution equation for the power (3), which
follows from Eq. (12), is
$\frac{dP}{dt}=-2\alpha P+4\pi\int_{0}^{\infty}f(r)\,\text{Re}\left\{U(r,t)\right\}e^{-\alpha t}rdr.$ (20)
The stationary states must satisfy the balance condition, $dP/dt=0$. Then, the
substitution of ansatz (14) and expression (2) for the pump in this
condition yields a simple relation,
$\cos\phi=\frac{\alpha U_{0}}{f_{0}},$ (21)
which is identical to Eq. (18). In particular, Eq. (21) implies that, for the
fixed pump’s amplitude $f_{0}$, the amplitude of the established localized
pattern cannot exceed the maximum value, which corresponds to $\phi=0$ in Eq.
(21):
$U_{0}\leq\left(U_{0}\right)_{\max}=f_{0}/\alpha.$ (22)
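In general, the fixed-point equations (18) and (19) must be solved numerically, as is done for the VA curves compared with simulations in Sec. III below. The following is a minimal root-finding sketch; the equations are transcribed directly from Eqs. (18) and (19), scipy is assumed to be available, and the parameter values and initial guess are illustrative:

```python
import numpy as np
from math import gamma
from scipy.optimize import fsolve

def va_fixed_point(vars_, S, W, alpha, sigma, g, f0):
    """Stationary VA equations (18) and (19) for amplitude U0 and phase phi."""
    U0, phi = vars_
    eq18 = U0 * alpha - f0 * np.cos(phi)
    eq19 = (6**-(3*S + 1) * gamma(3*S + 1) * g * W**(2*(2*S + 1)) * U0**5
            - 2**(-2*(2*S + 1)) * gamma(2*S + 1) * sigma * W**(2*S + 2) * U0**3
            + 2**-(S + 1) * gamma(S + 1)
              * (f0 * W**2 * np.sin(phi) + ((sigma - g) * W**2 + (S + 1)) * U0))
    return [eq18, eq19]

# illustrative parameters: cubic self-focusing, no quintic term
S, W, alpha, sigma, g, f0 = 1, 2.0, 1.0, 1.0, 0.0, 1.0
U0, phi = fsolve(va_fixed_point, x0=[0.5, 0.5], args=(S, W, alpha, sigma, g, f0))
P = np.pi * gamma(S + 1) * (W**2 / 2)**(S + 1) * U0**2  # power, Eq. (15)
print(U0, phi, P)  # note the bound U0 <= f0/alpha of Eq. (22)
```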
### II.3 VA for the cubic ($g=0$) and quintic ($g\rightarrow\infty$) models
First, we aim to predict stationary states, as solutions of Eqs. (18) and
(19), for the model with cubic-only nonlinearity, i.e., $g=0$, under the
assumption that the loss and pump terms in Eq. (1) may be considered as small
perturbations. In the lowest approximation, i.e., dropping the small term
$\sim f_{0}$ in Eq. (19), one obtains a relatively simple expression which
predicts the squared amplitude of the vortex soliton,
$\left(U_{0}^{2}\right)_{\mathrm{VA}}^{(g=0)}=\frac{2^{2S+1}}{(2S-1)!!}W^{-2S}\left[1+\sigma\left(S+1\right)W^{-2}\right],$
(23)
the respective power (15) of the underlying ansatz (14) being
$P_{\mathrm{VA}}^{(g=0)}=\frac{2^{S}S!\pi}{(2S-1)!!}\left[W^{2}+\sigma\left(S+1\right)\right].$
(24)
Note that expressions (23) and (24) are always meaningful for the self-
focusing sign of the cubic nonlinearity, $\sigma=+1$, while in the case of
defocusing, $\sigma=-1$, the expressions are meaningful if they are positive,
which imposes a restriction on the width of the Gaussian pump: it must be
broad enough, viz.,
$W^{2}>S+1.$ (25)
The dependence of the power given by Eq. (24) on the pump’s squared width
$W^{2}$ for three different values of the vorticity, $S=1,2,3$, is plotted in
Figs. 1(a) and (b) for the self-focusing and defocusing signs of the cubic
term, i.e., $\sigma=+1$ and $-1$, respectively. Note that in Fig. 1(b) for
$\sigma=-1$, the VA solution does not exist in the region where condition (25)
does not hold.
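Condition (25) and the power formula (24) are easy to tabulate numerically; a short illustrative script (plain Python, with assumed parameter values) is:

```python
import math

def va_power_g0(S, W, sigma):
    """VA power for g = 0, Eq. (24); returns None where condition (25) fails."""
    dfac = math.prod(range(1, 2*S, 2))  # (2S - 1)!!
    P = 2**S * math.factorial(S) * math.pi / dfac * (W**2 + sigma * (S + 1))
    return P if P > 0 else None

# self-defocusing (sigma = -1): no VA solution unless W**2 > S + 1, Eq. (25)
for S in (1, 2, 3):
    print(S, [va_power_g0(S, W, sigma=-1) for W in (1.0, 1.5, 2.0, 3.0)])
```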
Figure 1: The power of the VA solution, in the case of $g=0$ (no quintic
nonlinearity), vs. the pump’s squared width, as given by Eq. (24), which
neglects weak effects of the pump and loss (small $f_{0}$ and $\alpha$), for
the self-focusing ($\sigma=+1$) in (a) and defocusing ($\sigma=-1$) in (b)
signs of the cubic term. The black curve with circles, the red one with
squares, and the blue one with triangles pertain to vorticities $S=1$, $2$,
and $3$, respectively.
In the limit of the dominant quintic nonlinearity, i.e.,
$g\rightarrow\pm\infty$, opposite to the pure cubic model considered above,
the asymptotic solution of Eq. (19) is
$\left(U_{0}^{2}\right)_{\mathrm{VA}}^{(g\rightarrow\pm\infty)}\approx
2^{S}\sqrt{3^{3S+1}\frac{S!}{(3S)!}}W^{-2S}$ (26)
(in which the large coefficient $g$ cancels out), the respective expression
for power (15) being
$P_{\mathrm{VA}}^{(g\rightarrow\pm\infty)}=\frac{3^{(3S+1)/2}\pi\left(S!\right)^{3/2}}{2\sqrt{(3S)!}}W^{2},$
(27)
cf. Eqs. (23) and (24).
In the following section, we report results of the numerical solution of Eq. (1),
comparing them to solutions of the full system of the VA equations (18) and
(19), which include effects of the pump and loss terms.
## III Numerical results
Simulations of Eq. (1) were conducted by means of the split-step pseudo-
spectral algorithm. The solution procedure started from the zero input and ran
until convergence to an apparently stable stationary profile (if
this outcome of the evolution was possible). This profile was then compared to
its VA counterpart, produced by a numerical solution of Eqs. (18) and (19)
with the same values of parameters $\alpha$, $\sigma=\pm 1$, $g$, and $f_{0}$,
$W$, $S$ (see Eq. (2)). The results are presented below by varying, severally,
loss $\alpha$, vorticity $S$, the pump’s width $W$ and strength $f_{0}$, and,
eventually, the quintic coefficient $g$. The findings are then
summarized in the form of stability charts plotted in Fig. 12.
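For concreteness, the following is a minimal first-order split-step sketch of such a simulation (assuming numpy; the grid size, box length, and time step are illustrative choices, not the values used to produce the figures):

```python
import numpy as np

# grid and spectral operators (illustrative sizes)
N, L = 128, 16.0
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x, indexing='ij')
r2, theta = X**2 + Y**2, np.arctan2(Y, X)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing='ij')
k2 = KX**2 + KY**2

# parameters of Eqs. (1) and (2); the values of Sec. III.1 are used here
alpha, sigma, g, eta, f0, W, S = 1.0, 1.0, 0.0, 1.0, 1.0, 2.0, 1
pump = f0 * r2**(S / 2) * np.exp(-r2 / W**2) * np.exp(1j * S * theta)

dt, T = 1e-3, 100.0
u = np.zeros((N, N), dtype=complex)   # zero input, as in the text
lin = np.exp(-0.5j * k2 * dt)         # exact step for the diffraction term
for _ in range(int(T / dt)):
    # local step (explicit Euler): loss, cubic/quintic terms, and pump
    u = u + dt * (-alpha * u
                  + 1j * sigma * (np.abs(u)**2 - eta**2) * u
                  - 1j * g * (np.abs(u)**4 - eta**4) * u
                  + pump)
    # linear (diffraction) step in Fourier space
    u = np.fft.ifft2(lin * np.fft.fft2(u))

P = np.sum(np.abs(u)**2) * (L / N)**2  # total power, Eq. (3)
print(P)
```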
### III.1 Variation of the loss parameter $\alpha$
In Fig. 2(a) we display the cross-section (drawn through $y=0$) of the
variational and numerical solutions for the stable vortex solitons obtained
with $\alpha=0.5$, $1.0$, and $2.0$, while the other parameters are fixed as
$\sigma=+1$ (the self-focusing cubic nonlinearity), $g=0$, $f_{0}=1$, $W=2$,
and $S=1$. The accuracy of the VA-predicted solutions presented in Fig. 2(a)
is characterized by the relative power difference from their numerically found
counterparts, which is $5.1\%$, $3.1\%$, and $0.5\%$ for
$\alpha=0.5,\;1.0,\;2.0,$ (28)
respectively. Thus, the VA accuracy improves with the increase of $\alpha$.
Similar results for the self-defocusing nonlinearity, $\sigma=-1$, are
presented in Fig. 2(b), which shows an essentially larger discrepancy between
the VA and numerical solutions, viz., $18.9\%$, $17.4\%$, and $3.4\%$ for the
same set (28) of values of the loss parameter, other coefficients being the
same as in Fig. 2(a). The larger discrepancy is explained by the fact that
localized (bright-soliton) modes are not naturally maintained by the self-
defocusing, hence the ansatz (14), which is natural for the self-trapped
solitons in the case of the self-focusing, is not accurate enough for
$\sigma=-1$. In the same vein, it is natural that, in the latter case, the
discrepancy is more salient for stronger nonlinearity, i.e., smaller $\alpha$,
which makes the respective amplitude higher.
Figure 2: The comparison between cross sections (drawn through $y=0$) of the
VA solutions and their numerically found counterparts (dashed black and solid
blue lines, respectively) for different values of the loss parameter $\alpha$
in Eq. (1), taken from set (28). Panels (a) and (b) pertain to the self-
focusing ($\sigma=+1$) and defocusing ($\sigma=-1$) signs of the cubic
nonlinearity, respectively. The other parameters in Eqs. (1) and (2) are
fixed as $g=0$, $\eta=1$, $f_{0}=1$, $W=2$, and $S=1$.
There is a critical value of $\alpha$ below which the vortex solitons are
unstable. As an example, Fig. 3 shows the VA-predicted and numerically
produced solutions for $\alpha=0.2$ and $\sigma=+1$ (the self-focusing
nonlinearity). The observed picture may be understood as a result of the
above-mentioned azimuthal MI which breaks the axial symmetry of the vortex
soliton. More examples of the instability of this type are displayed below.
For the values of other parameters fixed as in Fig. 3, the instability
boundary is $\alpha_{\mathrm{crit}}\approx 0.35$. The stabilizing effect of
the loss at $\alpha>\alpha_{\mathrm{crit}}$ is a natural feature. On the other
hand, the increase of $\alpha$ leads to decrease of the soliton’s amplitude,
as seen in Fig. 2.
In the case of the self-defocusing ($\sigma=-1$), all the numerically found
vortex modes are stable, at least, at $\alpha\geq 0.1$, although the
discrepancy in the values of the power between these solutions and their VA
counterparts is very large at small $\alpha$, exceeding $75\%$ at
$\alpha=0.1$. As mentioned above, the growing discrepancy is explained by the
increase of the soliton’s amplitude with the decrease of $\alpha$. At still
smaller values of $\alpha$, the relaxation of the evolving numerical solution
toward the stationary state is very slow, which makes it difficult to identify
the stability.
Figure 3: Profiles of $|u|^{2}$ produced by VA (a) and numerical solution at
$t=100$ (b) in the case of the self-focusing ($\sigma=+1$) for $\alpha=0.2$.
Other parameters are the same as in Fig. 2(a).
### III.2 Variation of the pump’s vorticity $S$
To analyze effects of the winding number (vorticity) $S$, we here fix $g=0$
(the pure cubic nonlinearity) and set $\alpha=1$, $f_{0}=1$, $W=2$ in Eqs. (1)
and (2). In the self-focusing case ($\sigma=+1$), the numerically produced
solutions are stable for $S=1$ and $2$, and unstable for $S\geq 3$. In the
former case, the power difference between the VA and numerical solutions is
$3.1\%$ and $4.9\%$ for $S=1$ and $2$, respectively, i.e., the VA remains a
relatively accurate approximation in this case.
In the self-defocusing case ($\sigma=-1$), considering the same values of the
other parameters as used above, the numerical solution produces stable vortex
solitons at least until $S=5$. For the same reason as mentioned above, the
accuracy of VA is much lower for $\sigma=-1$ than for the self-focusing case
($\sigma=+1$), with the respective discrepancies in the power values being
$17.4\%$, $6.7\%$, $24.1\%$, $29.0\%$, and $29.6\%$ for $S=1$, $2$, $3$, $4$,
and $5$, respectively.
For the cogent verification of the stability of the localized vortices in the
case of the self-defocusing, we have also checked it for the smallest value of the
loss parameter considered in this work, viz., $\alpha=0.1$, again for $S=1$,
$2$, $3$, $4$, and $5$, and the above-mentioned values of the other
coefficients, i.e., $f_{0}=1$ and $W=2$. Naturally, the discrepancy between
the VA and numerical findings is still higher in this case, being $75\%$,
$72.5\%$, $65.7\%$, $55.2\%$, and $41.1\%$ for $S=1$, $2$, $3$, $4$, and $5$,
respectively. The result is illustrated by Fig. 4 for a relatively large
vorticity, $S=4$. In particular, the pattern of $|u(x,y)|^{2}$ and the
corresponding cross section, displayed in Figs. 4(b) and (c), respectively,
exhibit the established vortex structure and background “garbage” produced
the evolution.
Figure 4: (a) The VA-predicted pattern and (b) the corresponding result of the
direct simulation of Eq. (1) with $\sigma=-1$ and $g=0$ (cubic self-
defocusing) at $t=100$, initiated by the zero input at $t=0$, for
$\alpha=0.1$, $f_{0}=1$, $W=2$, and vorticity $S=4$ in the pump term (2). (c)
The respective cross sections drawn through $y=0$. (d) The evolution of the
total power $P$ (see Eq. (3)) of the numerical solution in the course of the
simulation.
### III.3 Variation of the pump’s width $W$
To address the effects of the variation of parameter $W$ in Eq. (2), we here
fix $g=0$, $\alpha=1$, $f_{0}=1$, and $S=1$. In Fig. 5(a), the power of the
VA-predicted and numerically found stable vortex-soliton solutions is plotted
as a function of $W$ for both the self-focusing and defocusing cases, i.e.,
$\sigma=+1$ and $\sigma=-1$, respectively. In the former case, the azimuthal
MI sets in at $W\geq 2.7$, see an example in Fig. 5(b) for $W=2.75$.
Figure 5: (a) The power of the VA-predicted vortex-soliton solutions (solid
blue and dashed black lines, pertaining to the self-focusing, $\sigma=+1$, and
self-defocusing, $\sigma=-1$, cubic nonlinearity, respectively) and their
numerically found counterparts (red circles and green squares pertaining to
the self-focusing and self-defocusing nonlinearity, respectively) vs. the
pump’s width $W$. Other parameters are $g=0$, $\alpha=1$, $f_{0}=1$, and
$S=1$. (b) The profile produced, at $t=100$, by the numerically generated
unstable solution in the case of the self-focusing, $\sigma=+1$, with
$W=2.75$.
In the self-focusing case, $\sigma=+1$, the azimuthal MI for the solitons with
higher vorticities, $S=2$, $3$, $4$, or $5$, sets in at $W\geq 2.1$, $1.7$,
$1.6$, and $1.4$, respectively. In the self-defocusing case, no
existence/stability boundary was found for the vortex modes with $S=1$, $S=2$,
and $S=3$ (at least, up to $W=5$). At higher values of the vorticity, the
localized vortices do not exist, in the defocusing case, at $W>3.5$ and
$W>2.5$, for $S=4$, and $S=5$, respectively.
### III.4 Variation of the pump’s strength $f_{0}$
Effects of the variation of $f_{0}$ are reported here, fixing other parameters
as $g=0$, $\alpha=1$, and $W=2$. In the case of the cubic self-focusing,
$\sigma=+1$, the vortex soliton with $S=1$ is subject to MI at $f_{0}\geq
1.6$. As a typical example, in Fig. 6 we display the VA-predicted solution
alongside the result of the numerical simulations for $f_{0}=1.7$. For higher
vorticities, $S=2$, $3$, $4$, and $5$, the azimuthal instability sets in at
$f_{0}\geq 1.1$, $0.6$, $0.3$, and $0.08$, respectively. Naturally, the narrow
vortex rings with large values of $S$ are much more vulnerable to the quasi-
one-dimensional azimuthal MI.
The power of the vortex solitons with $S=1$ and $2$, as produced by the VA and
numerical solution, is plotted vs. the pump amplitude $f_{0}$ in Fig. 7(a). As
an example, Fig. 7(b) showcases the cross-section profile of the
vortex soliton with $S=2$, demonstrating the reliability of the VA prediction.
In the range of $f_{0}\leq 1$, the highest relative difference in the power
between the numerical and variational solutions is $5.5\%$ and $7.5\%$
for $S=1$ and $S=2$, respectively.
Figure 6: (a) The VA-predicted profile of $|u|^{2}$ in the self-focusing case,
with parameters $g=0$, $\sigma=+1$, $\alpha=1$, $f_{0}=1.7$, $W=2$, and $S=1$.
(b) The unstable solution, produced, at $t=100$, by the simulations of Eq. (1)
for the same parameters.
Figure 7: (a) The power versus $f_{0}$ for the confined vortex modes in the
self-focusing case ($\sigma=+1$). The VA solutions for $S=1$ and $2$ are shown
by solid blue and dashed black lines, respectively. The corresponding
numerical solutions are represented by red circles and green squares,
respectively. Recall that the numerical solutions are stable, in this case, at
$f_{0}<1.6$ and $f_{0}<1.1$, for $S=1$ and $2$, respectively. (b) The VA-
predicted and numerically obtained (the dashed red and solid blue lines,
respectively) profiles of the stable solution with $S=2$ and $f_{0}=0.4$,
drawn as cross sections through $y=0$. The other parameters are $g=0$,
$\sigma=+1$, $\alpha=1$, $W=2$.
It is relevant to mention that the “traditional” azimuthal instability of
vortex-ring solitons with winding number $S$ demonstrates fission of the
original axially symmetric shape into a set consisting of a large number
$N\geq S$ of symmetrically placed localized fragments [40, 2], while the above
examples, displayed in Figs. 3(b), 5(b), and 6(b), demonstrate the appearance
of a single bright fragment and a “garbage cloud” distributed along the
original ring. At larger values of $f_{0}$, our simulations produce examples
of the “clean” fragmentation, viz., with $N=4$ produced by the unstable vortex
rings with $S=1$ in Fig. 8(a), $N=5$ produced by $S=2$ and $3$ in (b) and (d),
$N=7$ by $S=2$ and $3$ in (c) and (e), and $N=8$ by $S=3$ in (f). These
outcomes of the instability development are observed at the same evolution
time $t=100$ as in Figs. 3(b), 5(b), and 6(b). The gradual increase of the
number of fragments with $S$ is explained by the dependence of the azimuthal
index of the fastest growing eigenmode of the breaking instability on the
underlying winding number $S$, which is a generic property of vortex solitons
[40, 2].
Figure 8: Examples of the fission of unstable vortex-ring solitons produced by
simulations of the LL equation (1) with $\sigma=+1$ (cubic self-focusing),
$g=0$ (no quintic nonlinearity), $\alpha=2$, $W=2$, and $\eta=1$. Each plot
displays the result of the numerical simulations at time $t=100$. Values of
the initial vorticity and pump’s strength are indicated in panels.
In the self-defocusing case, $\sigma=-1$, a summary of the results produced by the
VA and numerical solution for the stable vortex solitons with $S=1$ and $2$,
in the form of the dependence of their power on $f_{0}$, is presented in Fig.
9(a) (cf. Fig. 7(a) for $\sigma=+1$). Naturally, the VA-numerical discrepancy
increases with the growth of the pump’s strength, $f_{0}$, see an example in
Fig. 9(b). Unlike the case of $\sigma=+1$, in the case of the self-defocusing
the vortex modes with $S\leq 5$ remain stable, at least, up to $f_{0}=5$
(here, we do not consider the case of $S>5$).
Figure 9: (a) The power versus $f_{0}$ for the vortex modes in the self-
defocusing case ($\sigma=-1$). The VA solutions for $S=1$ and $2$ are shown by
solid blue and dashed black lines, respectively. The corresponding numerical
solutions are presented by red circles and green squares, respectively. (b)
The VA-predicted and numerically obtained (the dashed red and solid blue
lines, respectively) profiles of the solution with $S=2$ and $f_{0}=2$, drawn
as the cross sections through $y=0$. The other parameters are $g=0$,
$\sigma=-1$, $\alpha=1$, $W=2$.
### III.5 Influence of the quintic coefficient $g$
In the above analysis, the quintic term was dropped in the LL equation (1),
setting $g=0$. To examine the impact of this term, we first address the case
shown above in Fig. 3, which demonstrated that the vortex soliton with $S=1$,
as a solution to Eqs. (1) and (2) with $g=0$, $\sigma=+1$ and $f_{0}=1$,
$W=2$, was unstable if the loss parameter fell below the critical value,
$\alpha=0.35$. We have found that adding to Eq. (1) the quintic term with
either $g=-1$ or $g=+1$ (the self-focusing or defocusing quintic nonlinearity,
respectively) leads to the _stabilization_ of the vortex mode displayed in
Fig. 3, which was unstable in the absence of the quintic term. The
stabilization of the soliton by the quintic self-defocusing is a natural fact.
More surprising is the possibility to provide the stabilization by the self-
focusing quintic nonlinearity because, in most cases, the inclusion of such a
term gives rise to the supercritical collapse in 2D, making all solitons
strongly unstable [40, 2]. However, it is concluded from the stability charts
displayed below in Fig. 12 that the stabilizing effect of the quintic self-
focusing occurs only at moderately small powers, for which the quintic term is
not a clearly dominant one. In the general case, the solitons' stability
regions in Fig. 12 naturally shrink under the action of the quintic
self-focusing term, with $g<0$.
In the presence of the quintic term, the comparison of the numerically found
stabilized vortex-soliton profiles with their VA counterparts, whose
parameters are produced by a numerical solution of Eqs. (18) and (19), is
presented in Fig. 10. Similar to the results for the LL equation with the
cubic-only nonlinearity, the VA is essentially more accurate in the case of
the self-focusing sign of the quintic term ($g<0$) than in the opposite case,
$g>0$. In particular, in the case shown in Fig. 10, the power-measured
discrepancy for $g=-1$ and $+1$ is, respectively, $4\%$ and $64\%$. An
explanation for this observation is provided by the fact that the soliton’s
amplitude is much higher in the latter case.
Figure 10: Cross-section profiles (drawn through $y=0$) for stable vortex
solitons with $S=1$, produced by Eq. (1) with the quintic self-focusing, $g=-1$
(a) or defocusing, $g=+1$ (b) term. In both cases, the self-focusing cubic
term, with $\sigma=+1$, is present. The numerically found profiles and
their VA-produced counterparts are displayed, respectively, by the solid blue
and dashed lines. Other parameters are $\alpha=0.2$, $\eta=1$, and $f_{0}=1$,
$W=2$.
Another noteworthy finding is the stabilization of higher-vorticity solitons
by the quintic term. For instance, it was shown above that, for parameters
$\alpha=1$, $\eta=1$, $f_{0}=1$, $W=2$, and $\sigma=+1$ (the cubic self-
focusing) all vortex solitons with $S\geq 3$, produced by Eq. (1), were
unstable. Now, we demonstrate that the soliton with $S=3$ is stabilized by
adding the self-defocusing quintic term with a small coefficient, just
$g=0.1$, see Fig. 11(b). As a counter-intuitive effect, the stabilization of
the same soliton by the self-focusing quintic term is possible too, but the
necessary coefficient is large, $g=-6$, see Fig. 11(a) (recall that $g=-1$ is
sufficient for the stabilization of the vortex soliton with $S=1$ and
$\alpha=0.2$ in Fig. 10(a)). Nevertheless, similar to what is said above, the
set of Figs. 12(c,f,i) demonstrates the natural shrinkage of the stability
area under the action of the quintic self-focusing. For the stabilized vortex
modes shown in Fig. 11(a), in the case when both the cubic and quintic terms
are self-focusing, the relative power-measured discrepancy between the
numerical and VA-predicted solutions is very small, $\approx 0.7\%$, while in
the presence of the weak quintic self-defocusing in Fig. 11(b) the discrepancy
is $6.6\%$.
Figure 11: Cross-section profiles (drawn through $y=0$) of vortex solitons
with $S=3$, stabilized by the self-focusing (a) or defocusing (b) quintic
term in Eq. (1), with the respective coefficient $g=-6$ or $g=0.1$, other
parameters being $\sigma=+1$ (cubic self-focusing), $\alpha=1$, $\eta=1$, and
$f_{0}=1$, $W=2$ in Eq. (2). The numerically found solutions and their VA-
predicted counterparts are plotted by the solid blue and dashed red lines,
respectively.
### III.6 Stability charts in the parameter space
The numerical results produced in this work are summarized in the form of
stability areas plotted in Fig. 12 in the parameter plane
$\left(f_{0},\alpha\right)$, for the vortex-soliton families with winding
numbers $S=1$, $2$, $3$, and three values of the quintic coefficient, $g=-1$,
$0$, $+1$, while the cubic term is self-focusing, $\sigma=+1$, and the width
of the pump beam is fixed, $W=2$. In addition to that, stability charts
corresponding to the combination of the cubic self-defocusing ($\sigma=-1$)
and quintic focusing ($g=-1$), also for $S=1,2,3$, are plotted in Fig. 13.
The choice of the parameter plane $\left(f_{0},\alpha\right)$ in the stability
diagrams displayed in Fig. 12 is relevant, as the strength of the pump beam,
$f_{0}$, and the loss coefficient, $\alpha$, are amenable to accurate adjustment in
the experiment (in particular, $\alpha$ may be tuned by partially compensating
the background loss of the optical cavity by a spatially uniform pump, taken
separately from the confined pump beam). As seen in all panels of Figs. 12 and
13, the increase of $\alpha$ naturally provides effective stabilization of the
vortex modes, while none of them may be stable at $\alpha=0$, in agreement
with the known properties of vortex-soliton solutions of the 2D nonlinear
Schrödinger equation with the cubic and/or cubic-quintic nonlinearity [40, 2].
The apparent destabilization of the vortices with the increase of the pump’s
amplitude $f_{0}$ is explained by the ensuing enhancement of the destabilizing
nonlinearity. Other natural features exhibited by Fig. 12 are the general
stabilizing/destabilizing effect of the quintic self-defocusing/focusing (as
discussed above), and expansion of the splitting-instability area with the
increase of $S$ (the latter feature is also exhibited by Fig. 13). The latter
finding is natural too, as larger $S$ makes the ring-shaped mode closer to the
quasi-1D shape (see, in particular, Fig. 8), which facilitates the onset of
the above-mentioned azimuthal MI.
Figure 12: Stability areas for families of the vortex solitons with winding
numbers $S=1$, $2$, and $3$, in the plane of the loss coefficient ($\alpha$)
and amplitude of the pump beam ($f_{0}$), for three different values of the
quintic coefficient, $g=-1,0,+1$ (recall that $g<0$ and $g>0$ correspond,
respectively, to the self-focusing and defocusing). Other parameters of Eqs.
(1) and (2) are $\sigma=+1$ (the self-focusing cubic term), $\eta=1$, and
$W=2$. Figure 13: The same as in Figs. 12(a-c), but for $\sigma=-1$ and $g=-1$
(the cubic self-defocusing and quintic focusing terms).
Thus, the inference is that the instability mode which determines the boundary
of the stability areas in Figs. 12 and 13 is the breaking of the axial
symmetry of the vortex rings by azimuthal perturbations, as shown above, in
particular, in Figs. 3(b), 5(b), and 6(b). The destabilization through the
spontaneous splitting of the rings into symmetric sets of fragments (see Fig.
8) occurs deeply inside the instability area, i.e., at larger values of
$f_{0}$.
## IV Conclusion
We have introduced the two-dimensional LL (Lugiato-Lefever) equation including
the self-focusing or defocusing cubic or cubic-quintic nonlinearity and the
confined pump with embedded vorticity (winding number), $S\leq 5$. Stable
states in the form of vortex solitons (rings) for these values of $S$ are
obtained, in parallel, in the semi-analytical form by means of the VA
(variational approximation) and numerically, by means of systematic
simulations of the LL equation starting from the zero input. The VA provides
much more accurate results in the case of the self-focusing nonlinearity than
for the defocusing system. Stability areas of the vortex solitons with
$S=1,2,3$ are identified in the plane of experimentally relevant parameters,
viz., the pump amplitude and loss coefficient, for the self-focusing and
defocusing signs of the cubic and quintic terms. Stability boundaries for the
vortex rings are determined by the onset of the azimuthal instability which
breaks their axial symmetry. These findings suggest new possibilities for the
design of tightly confined robust optical modes, such as vortex pixels.
As an extension of this work, it may be interesting to construct solutions
pinned to a symmetric pair of pump beams with or without intrinsic vorticity.
In this context, it is possible to consider the beam pair with identical or
opposite vorticities. In the case of the self-focusing sign of the
nonlinearity, one may expect onset of spontaneous breaking of the symmetry in
the dual-pump configuration. Results for this setup will be reported
elsewhere.
## Acknowledgments
We thank Prof. Branko Dragovich for the invitation to submit the paper to the
Special Issue of Symmetry on the topic of “Selected Papers on Nonlinear
Dynamics”.
The work of S.K. and B.A.M. is supported, in part, by the Israel Science
Foundation through grant No. 1695/22. W.B.C. acknowledges the financial
support of the Brazilian agency CNPq (grant #306105/2022-5). This work was
also performed as a part of program #465469/2014-0 of the Brazilian National
Institute of Science and Technology (INCT) for Quantum Information.
## References
* [1] Y. S. Kivshar and G. Agrawal, Optical Solitons: From Fibers to Photonic Crystals (Elsevier Science, 2003).
* [2] B. A. Malomed, Multidimensional Solitons (American Institute of Physics Publishing, Melville, NY, 2022).
* [3] N. N. Rosanov, Spatial Hysteresis and Optical Patterns (Springer: Berlin, 2002).
* [4] Dissipative Optical Solitons, ed. by M. F. S. Ferreira (Springer Nature Switzerland AG, Cham, Switzerland, 2022).
* [5] P. Grelu and N. Akhmediev, Dissipative solitons for mode-locked lasers, Nature Phot. 6, 84-92 (2012).
* [6] M. X. Jiang, X. N. Wang, Q. Ouyang, and H. Zhang, Spatiotemporal chaos control with a target wave in the complex Ginzburg-Landau equation system, Phys. Rev. E 69, 56202 (2004).
* [7] L. A. Lugiato and R. Lefever, Spatial Dissipative Structures in Passive Optical Systems, Phys. Rev. Lett. 58, 2209 (1987).
* [8] M. Tlidi and K. Panajotov, Two-dimensional dissipative rogue waves due to time-delayed feedback in cavity nonlinear optics, Chaos 27, 013119 (2017).
* [9] K. Panajotov, M. G. Clerc, and M. Tlidi, Spatiotemporal chaos and two-dimensional dissipative rogue waves in Lugiato-Lefever model, Eur. Phys. J. B 71, 176 (2017).
* [10] M. Tlidi and M. Taki, Rogue waves in nonlinear optics, Adv. Opt. Photon. 14, 87-147 (2022).
* [11] Y. Sun, P. Parra-Rivas, F. Mangini, and S. Wabnitz, Multidimensional localized states in externally driven Kerr cavities with a parabolic spatiotemporal potential: a dimensional connection, arXiv:2401.15689.
* [12] G. J. de Valcárcel and K. Staliunas, Phase-bistable Kerr cavity solitons and patterns, Phys. Rev. A 87, 043802 (2013).
* [13] S. Coen, H. G. Randle, T. Sylvestre, and M. Erkintalo, Modeling of octave-spanning Kerr frequency combs using a generalized mean-field Lugiato-Lefever model, Opt. Lett. 38, 37-39 (2013).
* [14] M. R. E. Lamont, Y. Okawachi, and A. L. Gaeta, Route to stabilized ultrabroadband microresonator-based frequency combs, Opt. Lett. 38, 3478-3481 (2013).
* [15] C. Godey, I. V. Balakireva, A. Coillet, and Y. K. Chembo, Stability analysis of the spatiotemporal Lugiato-Lefever model for Kerr optical frequency combs in the anomalous and normal dispersion regimes, Phys. Rev. A 89, 063814 (2014).
* [16] V. E. Lobanov, G. Lihachev, T. J. Kippenberg, and M. L. Gorodetsky, Frequency combs and platicons in optical microresonators with normal GVD, Opt. Exp. 23, 7713-7721 (2015).
* [17] M. Karpov, H. Guo, A. Kordts, V. Brasch, M. H. P. Pfeiffer, M. Zervas, M. Geiselmann, and T. J. Kippenberg, Raman Self-Frequency Shift of Dissipative Kerr Solitons in an Optical Microresonator, Phys. Rev. Lett. 116, 103902 (2016).
* [18] F. Copie, M. Conforti, A. Kudlinski, A. Mussot, and S. Trillo, Competing Turing and Faraday Instabilities in longitudinally modulated passive resonators, Phys. Rev. Lett. 116, 143901 (2016).
* [19] P. Parra-Rivas, D. Gomila, P. Colet, and L. Gelens, Interaction of solitons and the formation of bound states in the generalized Lugiato-Lefever equation, Eur. Phys. J. D 71, 198 (2017).
* [20] M. G. Clerc, M. A. Ferré, S. Coulibaly, R. G. Rojas, and M. Tlidi, Chimera-like states in an array of coupled-waveguide resonators, Opt. Lett. 42, 2906-2909 (2017).
* [21] B. Garbin, Y. Wang, S. G. Murdoch, G.-L. Oppo, S. Coen, and M. Erkintalo, Experimental and numerical investigations of switching wave dynamics in a normally dispersive fibre ring resonator, Eur. Phys. J. D 71, 240 (2017).
* [22] Q. Li, T. C. Briles, D. A. Westly, T. E. Drake, J. R. Stone, B. R. Ilic, S. A. Diddams, S. B. Papp, and K. Srinivasan, Stably accessing octave-spanning microresonator frequency combs in the soliton regime, Optica 4, 193-203 (2017).
* [23] L. A. Lugiato, F. Prati, M. L. Gorodetsky, and T. J. Kippenberg, From the Lugiato–Lefever equation to microresonator-based soliton Kerr frequency combs, Phil. Trans. R. Soc. A. 376, 20180113 (2018).
* [24] X. Dong, C. Spiess, V. G. Bucklew, and W. H. Renninger, Chirped-pulsed Kerr solitons in the Lugiato-Lefever equation with spectral filtering, Phys. Rev. Research 3, 033252 (2021).
* [25] S.-W. Huang, J. Yang, S.-H. Yang, M. Yu, D.-L. Kwong, T. Zelevinsky, M. Jarrahi, and C. W. Wong, Globally stable microresonator Turing pattern formation for coherent high-power THz radiation on-chip, Phys. Rev. X 7, 041002 (2017).
* [26] Y. V. Kartashov, O. Alexander, and D. V. Skryabin, Multistability and coexisting soliton combs in ring resonators: the Lugiato-Lefever approach, Opt. Express 25, 11550-11555 (2017).
* [27] A. Coillet, I. Balakireva, R. Henriet, K. Saleh, L. Larger, J. M. Dudley, C. R. Menyuk, Y. K. Chembo, Azimuthal Turing patterns, bright and dark cavity solitons in Kerr combs generated with whispering-gallery-mode resonators, IEEE Photon. J. 5, 6100409-6100409 (2013).
* [28] Y. K. Chembo and C. R. Menyuk, Spatiotemporal Lugiato-Lefever formalism for Kerr-comb generation in whispering-gallery-mode resonators, Phys. Rev. A 87, 053852 (2013).
* [29] H. Taheri, A. A. Eftekhar, K. Wiesenfeld, and A. Adibi, Soliton Formation in Whispering-Gallery-Mode Resonators via Input Phase Modulation, IEEE Photon. J. 7, 2200309 (2015).
* [30] Y.-Y. Wang, M.-M. Li, G.-Q. Zhou, Y. Fan, and X.-J. Lai, Rotating vortex-like soliton in a whispering gallery mode microresonator, Eur. Phys. J. Plus 134, 161 (2019).
* [31] T. Daugey, C. Billet, J. Dudley, J.-M. Merolla, and Y. K. Chembo, Kerr optical frequency comb generation using whispering-gallery-mode resonators in the pulsed-pump regime, Phys. Rev. A 103, 023521 (2021).
* [32] Q.-H. Cao, K.-L. Geng, B-W. Zhu, Y.-Y. Wang, and C.-Q. Dai, Scalar vortex solitons and vector dipole solitons in whispering gallery mode optical microresonators, Chaos, Sol. & Fract. 166, 112895 (2023).
* [33] W. B. Cardoso, L. Salasnich, and B. A. Malomed, Localized solutions of Lugiato-Lefever equations with focused pump, Sci. Rep. 7, 16876 (2017).
* [34] W. B. Cardoso, L. Salasnich, and B. A. Malomed, Zero-dimensional limit of the two-dimensional Lugiato-Lefever equation. Eur. Phys. J. D 71, 112 (2017).
* [35] M. Quiroga-Teixeiro and H. Michinel, Stable azimuthal stationary state in quintic nonlinear optical media, J. Opt. Soc. Am. B 14, 2004-2009 (1997).
* [36] G. Boudebs, S. Cherukulappurath, H. Leblond, J. Troles, F. Smektala, and F. Sanchez, Experimental and theoretical study of higher-order nonlinearities in chalcogenide glasses, Opt. Commun. 219, 427-432 (2003).
* [37] A. S. Reyna and C. B. de Araújo, High-order optical nonlinearities in plasmonic nanocomposites – a review, Adv. Opt. Phot. 9, 720-774 (2017).
* [38] D. L. Andrews, Symmetry and quantum features in optical vortices, Symmetry 13, 1368 (2021).
* [39] A. Ramaniuk, N. V. Hung, M. Giersig, K. Kempa, V. V. Konotop, and M. Trippenbach, Vortex creation without stirring in coupled ring resonators with gain and loss, Symmetry 10, 195 (2018).
* [40] B. A. Malomed, (INVITED) Vortex solitons: Old results and new perspectives, Physica D 399, 108-137 (2019).
* [41] R. K. Bullough, A. P. Fordy, and S. V. Manakov, Adiabatic invariants theory of near-integrable systems with damping, Phys. Lett. A 91, 98-100 (1982).
# A Universal Attractor Decomposition Algorithm
for Parity Games
Marcin Jurdziński (Department of Computer Science, University of Warwick) and
Rémi Morvan (ENS Paris-Saclay)
###### Abstract.
An attractor decomposition meta-algorithm for solving parity games is given
that generalizes the classic McNaughton-Zielonka algorithm and its recent
quasi-polynomial variants due to Parys (2019), and to Lehtinen, Schewe, and
Wojtczak (2019). The central concepts studied and exploited are attractor
decompositions of dominia in parity games and the ordered trees that describe
the inductive structure of attractor decompositions.
The main technical results include the embeddable decomposition theorem and
the dominion separation theorem that together help establish a precise
structural condition for the correctness of the universal algorithm: it
suffices that the two ordered trees given to the algorithm as inputs embed the
trees of some attractor decompositions of the largest dominia for each of the
two players, respectively.
The universal algorithm yields McNaughton-Zielonka, Parys’s, and Lehtinen-
Schewe-Wojtczak algorithms as special cases when suitable universal trees are
given to it as inputs. The main technical results provide a unified proof of
correctness and deep structural insights into those algorithms.
This paper motivates a research program of developing new efficient algorithms
for solving parity games by designing new classes of small trees that embed
the largest dominia in relevant classes of parity games. An early success
story in this research program is the recent development, by Daviaud,
Jurdziński, and Thejaswini (2019), of Strahler-universal trees, which embed
dominia in games of bounded register number, introduced by Lehtinen (2018).
When run on these trees, the universal algorithm can solve games with bounded
register number in polynomial time and in quasi-linear space.
A symbolic implementation of the universal algorithm is also given that
improves the symbolic space complexity of solving parity games in quasi-
polynomial time from $O(d\lg n)$—achieved by Chatterjee, Dvořák, Henzinger,
and Svozil (2018)—down to $O(\lg d)$, where $n$ is the number of vertices and
$d$ is the number of distinct priorities in a parity game. This not only
exponentially improves the dependence on $d$, but it also entirely removes the
dependence on $n$.
parity games, universal trees, attractor decompositions, separation,
embedding, quasi-polynomial, symbolic algorithms
## 1\. Context
### 1.1. Parity games and their significance
Parity games play a fundamental role in automata theory, logic, and their
applications to verification (Emerson and Jutla, 1991), program analysis
(Baldan et al., 2019; Hausmann and Schröder, 2019), and synthesis (Grädel et
al., 2002; Luttenberger et al., 2019). In particular, parity games are very
intimately linked to the problems of emptiness and complementation of non-
deterministic automata on trees (Emerson and Jutla, 1991; Zielonka, 1998),
model checking and satisfiability checking of fixpoint logics (Emerson and
Jutla, 1991; Emerson et al., 1993; Bradfield and Walukiewicz, 2018),
evaluation of nested fixpoint expressions (Hasuo et al., 2016; Baldan et al.,
2019; Hausmann and Schröder, 2019), or fair simulation relations (Etessami et
al., 2005). It is a long-standing open problem whether parity games can be
solved in polynomial time (Emerson et al., 1993).
The impact of parity games goes well beyond their home turf of automata
theory, logic, and formal methods. For example, an answer (Friedmann, 2009) of
a question posed originally for parity games (Vöge and Jurdziński, 2000) has
strongly inspired major breakthroughs on the computational complexity of
fundamental algorithms in stochastic planning (Fearnley, 2010) and linear
optimization (Friedmann, 2011b; Friedmann et al., 2011), and parity games
provide the foundation for the theory of nested fixpoint expressions used in
program analysis (Baldan et al., 2019; Hausmann and Schröder, 2019) and
coalgebraic model checking (Hasuo et al., 2016).
### 1.2. Related work
The major breakthrough in the study of algorithms for solving parity games
occurred in 2017 when Calude, Jain, Khoussainov, Li, and Stephan (Calude et
al., 2017) have discovered the first quasi-polynomial algorithm. Three
other—and seemingly distinctly different—techniques for solving parity games
in quasi-polynomial time have been proposed in quick succession soon after: by
Jurdziński and Lazić (Jurdziński and Lazić, 2017), by Lehtinen (Lehtinen,
2018), and by Parys (Parys, 2019).
Czerwiński, Daviaud, Fijalkow, Jurdziński, Lazić, and Parys (Czerwiński et
al., 2019) have also identified the combinatorial structure of
universal trees as provably underlying the techniques of Calude et al., of
Jurdziński and Lazić, and of Lehtinen. Czerwiński et al. have also established
a quasi-polynomial lower bound for the size of smallest universal trees,
providing evidence that the techniques developed in those three papers may be
insufficient for leading to further improvements in the complexity of solving
parity games. The work of Parys (Parys, 2019) has not been obviously subject
to the quasi-polynomial barrier of Czerwiński et al., making it a focus of
current activity. It is worth noting, though, that Lehtinen, Schewe, and
Wojtczak (Lehtinen et al., 2019), who have improved the complexity of Parys’s
algorithm somewhat, have made an informal observation that the tree of
recursive calls of their algorithm is also universal. The algorithms of Parys
and of Lehtinen et al. are modifications of the classic McNaughton-Zielonka
algorithm (McNaughton, 1993; Zielonka, 1998), which has exponential running
time in the worst case (Friedmann, 2011a), but consistently outperforms most
other algorithms in practice (van Dijk, 2018).
### 1.3. Our contributions
In this work we provide a meta-algorithm—the _universal attractor
decomposition algorithm_ —that generalizes McNaughton-Zielonka, Parys’s, and
Lehtinen-Schewe-Wojtczak algorithms. There are multiple benefits of
considering the universal algorithm.
Firstly, in contrast to Parys’s and Lehtinen-Schewe-Wojtczak algorithms, the
universal algorithm has a very simple and transparent structure that minimally
departs from the classic McNaughton-Zielonka algorithm. Secondly, we observe
that Lehtinen-Schewe-Wojtczak algorithm, as well as non-adaptive versions (see
Sections 3.2 and 4.4) of McNaughton-Zielonka and Parys’s algorithms, all arise
from the universal algorithm by using specific classes of universal trees,
strongly linking the theory of universal trees to the only class of quasi-
polynomial algorithms that had no established formal relationship to universal
trees so far.
Thirdly, we further develop the theory of dominia and their attractor
decompositions in parity games, initiated by Daviaud, Jurdziński, and Lazić
(Daviaud et al., 2018) and by Daviaud, Jurdziński, and Lehtinen (Daviaud et
al., 2019), and we prove two new structural theorems (the embeddable
decomposition theorem and the dominion separation theorem) about ordered trees
of attractor decompositions. Fourthly, we use the structural theorems to
provide a unified proof of correctness of various McNaughton-Zielonka-style
algorithms, identifying very precise structural conditions on the trees of
recusive calls of the universal algorithm that result in it correctly
identifying the largest dominia.
Finally, we observe that thanks to its simplicity, the universal algorithm is
particularly well-suited for solving parity games efficiently in a symbolic
model of computation, when large sizes of input graphs prevent storing them
explicitly in memory. Indeed, we argue that already a routine implementation
of the universal algorithm improves the state-of-the-art symbolic space
complexity of solving parity games in quasi-polynomial time from $O(d\lg n)$
to $O(d)$, but we also show that a more sophisticated symbolic data structure
allows the symbolic space of the universal algorithm to be further reduced to
$O(\lg d)$.
## 2\. Dominia and decompositions
### 2.1. Strategies, traps, and dominia
A _parity game_ $\mathcal{G}$ consists of a finite directed graph $(V,E)$, a
partition $(V_{\mathrm{Even}},V_{\mathrm{Odd}})$ of the set of vertices $V$,
and a function $\pi:V\to\left\{0,1,\dots,d\right\}$ that labels every
vertex $v\in V$ with a non-negative integer $\pi(v)$ called its _priority_. We
say that a cycle is _even_ if the highest vertex priority on the cycle is
even; otherwise the cycle is _odd_. We say that a parity game is _$(n,d)$
-small_ if it has at most $n$ vertices and all vertex priorities are at most
$d$.
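For concreteness, a parity game can be encoded with elementary data structures; the toy encoding below (our own illustrative convention, not the paper's notation) is reused in the attractor sketch in Section 2.2:

```python
# a toy parity game: vertex owners, vertex priorities, and directed edges
V = [0, 1, 2, 3]
owner = {0: 'Even', 1: 'Odd', 2: 'Even', 3: 'Odd'}
priority = {0: 2, 1: 1, 2: 0, 3: 3}
E = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 2), (3, 0)]  # every vertex has an out-edge
```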
For a set $S$ of vertices, we write $\mathcal{G}\cap S$ for the substructure
of $\mathcal{G}$ whose graph is the subgraph of $(V,E)$ induced by the sets of
vertices $S$. Sometimes, we also write $\mathcal{G}\setminus S$ to denote
$\mathcal{G}\cap(V\setminus S)$. We assume throughout that every vertex has at
least one outgoing edge, and we reserve the term _subgame_ for substructures
$\mathcal{G}\cap S$, such that every vertex in the subgraph of $(V,E)$ induced
by $S$ has at least one outgoing edge. For a subgame
$\mathcal{G}^{\prime}=\mathcal{G}\cap S$, we sometimes write
$V^{\mathcal{G}^{\prime}}$ for the set of vertices $S$ that the subgame
$\mathcal{G}^{\prime}$ is induced by. When convenient and if the risk of
confusion is contained, we may simply write $\mathcal{G}^{\prime}$ instead of
$V^{\mathcal{G}^{\prime}}$.
A (positional) _Even strategy_ is a set $\sigma\subseteq E$ of edges such
that:
* •
for every $v\in V_{\mathrm{Even}}$, there is an edge $(v,u)\in\sigma$,
* •
for every $v\in V_{\mathrm{Odd}}$, if $(v,u)\in E$ then $(v,u)\in\sigma$.
We sometimes call all the edges in such an Even strategy $\sigma$ the
_strategy edges_ , and the definition of an Even strategy requires that every
vertex in $V_{\mathrm{Even}}$ has an outgoing strategy edge, and every
outgoing edge of a vertex in $V_{\mathrm{Odd}}$ is a strategy edge.
For a non-empty set of vertices $T$, we say that an Even strategy $\sigma$
_traps Odd in $T$_ if no strategy edge leaves $T$, that is, $w\in T$ and
$(w,u)\in\sigma$ imply $u\in T$. We say that a set of vertices $T$ is a _trap
for Odd_ if there is an Even strategy that traps Odd in $T$.
Observe that if $T$ is a trap in a game $\mathcal{G}$ then $\mathcal{G}\cap T$
is a subgame of $\mathcal{G}$. For brevity, we sometimes say that a subgame
$\mathcal{G}^{\prime}$ is a trap if $\mathcal{G}^{\prime}=\mathcal{G}\cap T$
and the set $T$ is a trap in $\mathcal{G}$. Moreover, the following simple
_“trap transitivity”_ property holds: if $T$ is a trap for Even in game
$\mathcal{G}$ and $T^{\prime}$ is a trap for Even in subgame $\mathcal{G}\cap
T$ then $T^{\prime}$ is a trap for Even in $\mathcal{G}$.
For a set of vertices $D\subseteq V$, we say that an Even strategy $\sigma$ is
an _Even dominion strategy on $D$_ if:
* •
$\sigma$ traps Odd in $D$,
* •
every cycle in the subgraph $(D,\sigma)$ is even.
Finally, we say that a set $D$ of vertices is an _Even dominion_ if there is
an Even dominion strategy on it.
Odd strategies, trapping Even, and Odd dominia are defined in an analogous way
by swapping the roles of the two players. It is an instructive exercise to
prove the following two facts about Even and Odd dominia.
###### Proposition 2.1 (Closure under union).
If $D$ and $D^{\prime}$ are Even (resp. Odd) dominia then $D\cup D^{\prime}$
is also an Even (resp. Odd) dominion.
###### Proposition 2.2 (Dominion disjointness).
If $D$ is an Even dominion and $D^{\prime}$ is an Odd dominion then $D\cap
D^{\prime}=\emptyset$.
From closure under union it follows that in every parity game, there is the
largest Even dominion $W_{\mathrm{Even}}$ (which is the union of all Even
dominia) and the largest Odd dominion $W_{\mathrm{Odd}}$ (which is the union
of all Odd dominia), and from dominion disjointness it follows that the two
sets are disjoint. The positional determinacy theorem states that, remarkably,
the largest Even dominion and the largest Odd dominion form a partition of the
set of vertices.
###### Theorem 2.3 (Positional determinacy (Emerson and Jutla, 1991)).
Every vertex in a parity game is either in the largest Even dominion or in the
largest Odd dominion.
### 2.2. Reachability strategies and attractors
In a parity game $\mathcal{G}$, for a target set of vertices $B$ (“bullseye”)
and a set of vertices $A$ such that $B\subseteq A$, we say that an Even
strategy $\sigma$ is an _Even reachability strategy to $B$ from $A$_ if every
infinite path in the subgraph $(V,\sigma)$ that starts from a vertex in $A$
contains at least one vertex in $B$.
For every target set $B$, there is the largest (with respect to set inclusion)
set from which there is an Even reachability strategy to $B$ in $\mathcal{G}$;
we call this set the _Even attractor to $B$ in $\mathcal{G}$_ and denote it by
$\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}}(B)$. _Odd reachability
strategies_ and _Odd attractors_ are defined analogously.
We highlight the simple facts that if $A$ is an attractor for a player in
$\mathcal{G}$ then its complement $V\setminus A$ is a trap for her; and that
attractors are monotone operators: if $B^{\prime}\subseteq B$ then the
attractor to $B^{\prime}$ is included in the attractor to $B$.
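Attractors can be computed in time linear in the number of edges of the game graph by a standard backward fixpoint computation. The following is a minimal sketch in Python, under the assumption that a game is given by dictionaries `owner` (mapping each vertex to 'Even' or 'Odd') and `succ` (mapping each vertex to a list of its successors); the parameters `sub` (the vertex set inducing a subgame) and `side` (the attracting player) are our conventions, chosen so that the helper can be reused in later sketches.

```python
from collections import defaultdict

def attractor(sub, owner, succ, target, side):
    """Attractor of player `side` to `target` within the subgame induced
    by the vertex set `sub`; assumes every vertex in `sub` keeps at least
    one successor in `sub` (as is the case for subgames induced by traps)."""
    pred = defaultdict(set)
    out = {}                                   # out-degrees inside the subgame
    for v in sub:
        nbrs = [u for u in succ[v] if u in sub]
        out[v] = len(nbrs)
        for u in nbrs:
            pred[u].add(v)
    attr = set(target) & set(sub)
    queue = list(attr)
    while queue:
        u = queue.pop()
        for v in pred[u]:
            if v in attr:
                continue
            if owner[v] == side:               # `side` picks an edge into attr
                attr.add(v)
                queue.append(v)
            else:
                out[v] -= 1                    # the opponent loses one escape
                if out[v] == 0:                # all edges now lead into attr
                    attr.add(v)
                    queue.append(v)
    return attr
```

In particular, one can check on examples that the complement `sub - attractor(...)` is a trap for `side`, as highlighted above.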
### 2.3. Attractor decompositions
If $\mathcal{G}$ is a parity game in which all priorities do not exceed a non-
negative even number $d$ then we say that
$\mathcal{H}\>=\>\left\langle
A,(S_{1},\mathcal{H}_{1},A_{1}),\dots,(S_{k},\mathcal{H}_{k},A_{k})\right\rangle$
is an _Even $d$-attractor decomposition_ of $\mathcal{G}$ if:
* •
$A$ is the Even attractor to the (possibly empty) set of vertices of priority
$d$ in $\mathcal{G}$;
and setting $\mathcal{G}_{1}=\mathcal{G}$, for all $i=1,2,\dots,k$, we have:
* •
$S_{i}$ is a non-empty trap for Odd in $\mathcal{G}_{i}$ in which every vertex
priority is at most $d-2$;
* •
$\mathcal{H}_{i}$ is a $(d-2)$-attractor decomposition of subgame
$\mathcal{G}\cap S_{i}$;
* •
$A_{i}$ is the Even attractor to $S_{i}$ in $\mathcal{G}_{i}$;
* •
$\mathcal{G}_{i+1}=\mathcal{G}_{i}\setminus A_{i}$;
and the game $\mathcal{G}_{k+1}$ is empty. If $d=0$ then we require that
$k=0$.
The following proposition states that if a subgame induced by a trap for Odd
has an Even attractor decomposition then the trap is an Even dominion. Indeed,
a routine proof argues that the union of all the reachability strategies,
implicit in the attractors listed in the decomposition, is an Even dominion
strategy.
###### Proposition 2.4.
If $d$ is even, $T$ is a trap for Odd in $\mathcal{G}$, and there is an Even
$d$-attractor decomposition of $\mathcal{G}\cap T$, then $T$ is an Even
dominion in $\mathcal{G}$.
If $\mathcal{G}$ is a game in which all priorities do not exceed an odd number
$d$, then an _Odd $d$-attractor decomposition_ of $\mathcal{G}$ is defined
analogously, with the roles of the two players swapped throughout the
definition. As expected and by symmetry, if a trap for Even has an Odd
attractor decomposition then it is an Odd dominion.
###### Proposition 2.5.
If $d$ is odd, $T$ is a trap for Even in $\mathcal{G}$, and there is an Odd
$d$-attractor decomposition of $\mathcal{G}\cap T$, then $T$ is an Odd
dominion in $\mathcal{G}$.
In the next subsection we argue that attractor decompositions are witnesses
for the largest dominia and that the classic recursive McNaughton-Zielonka
algorithm can be amended to produce such witnesses. Since McNaughton-Zielonka
algorithm produces Even and Odd attractor decompositions, respectively, of
subgames that are induced by sets of vertices that are complements of each
other, a by-product of its analysis is a constructive proof of the positional
determinacy theorem (Theorem 2.3).
### 2.4. McNaughton-Zielonka algorithm
procedure $\text{McN-Z}_{\mathrm{Even}}(\mathcal{G},d)$:
  if $d=0$ then
    return $V^{\mathcal{G}}$
  $i\leftarrow 0$; $\mathcal{G}_{1}\leftarrow\mathcal{G}$
  repeat
    $i\leftarrow i+1$
    $D_{i}\leftarrow\pi^{-1}(d)\cap V^{\mathcal{G}_{i}}$
    $\mathcal{G}_{i}^{\prime}\leftarrow\mathcal{G}_{i}\setminus\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}_{i}}(D_{i})$
    $U_{i}\leftarrow\text{McN-Z}_{\mathrm{Odd}}\left(\mathcal{G}_{i}^{\prime},d-1\right)$
    $\mathcal{G}_{i+1}\leftarrow\mathcal{G}_{i}\setminus\mathrm{Attr}^{\mathcal{G}_{i}}_{\mathrm{Odd}}(U_{i})$
  until $U_{i}=\emptyset$
  return $V^{\mathcal{G}_{i}}$

procedure $\text{McN-Z}_{\mathrm{Odd}}(\mathcal{G},d)$:
  $i\leftarrow 0$; $\mathcal{G}_{1}\leftarrow\mathcal{G}$
  repeat
    $i\leftarrow i+1$
    $D_{i}\leftarrow\pi^{-1}(d)\cap V^{\mathcal{G}_{i}}$
    $\mathcal{G}_{i}^{\prime}\leftarrow\mathcal{G}_{i}\setminus\mathrm{Attr}_{\mathrm{Odd}}^{\mathcal{G}_{i}}(D_{i})$
    $U_{i}\leftarrow\text{McN-Z}_{\mathrm{Even}}\left(\mathcal{G}_{i}^{\prime},d-1\right)$
    $\mathcal{G}_{i+1}\leftarrow\mathcal{G}_{i}\setminus\mathrm{Attr}^{\mathcal{G}_{i}}_{\mathrm{Even}}(U_{i})$
  until $U_{i}=\emptyset$
  return $V^{\mathcal{G}_{i}}$

Algorithm 1. McNaughton-Zielonka algorithm.
The classic recursive McNaughton-Zielonka algorithm (Algorithm 1) computes the
largest dominia in a parity game. In order to obtain the largest Even dominion
in a parity game $\mathcal{G}$, it suffices to call
$\textnormal{{$\text{McN-Z}_{\mathrm{Even}}$}}(\mathcal{G},d)$, where $d$ is
even and all vertex priorities in $\mathcal{G}$ are at most $d$. In order to
obtain the largest Odd dominion in a parity game $\mathcal{G}$, it suffices to
call $\textnormal{{$\text{McN-Z}_{\mathrm{Odd}}$}}(\mathcal{G},d)$, where $d$
is odd and all vertex priorities in $\mathcal{G}$ are at most $d$.
The procedures $\text{McN-Z}_{\mathrm{Even}}$ and
$\text{McN-Z}_{\mathrm{Odd}}$ are mutually recursive and whenever a recursive
call is made, the second argument $d$ decreases by $1$. Figure 1 illustrates
one iteration of the main loop in a call of procedure
$\text{McN-Z}_{\mathrm{Even}}$. The outer rectangle denotes subgame
$\mathcal{G}_{i}$, the thin horizontal rectangle at the top denotes the set
$D_{i}$ of the vertices in $\mathcal{G}_{i}$ whose priority is $d$, and the
set below the horizontal wavy line is subgame $\mathcal{G}_{i}^{\prime}$,
which is the set of vertices in $\mathcal{G}_{i}$ that are not in the
attractor $\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}_{i}}(D_{i})$. The
recursive call of $\text{McN-Z}_{\mathrm{Odd}}$ returns the set $U_{i}$, and
$\mathcal{G}_{i+1}$ is the subgame to the left of the vertical zig-zag line,
and it is induced by the set of vertices in $\mathcal{G}_{i}$ that are not in
the attractor $\mathrm{Attr}_{\mathrm{Odd}}^{\mathcal{G}_{i}}(U_{i})$.
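For concreteness, here is a direct transcription of Algorithm 1 into Python, reusing the `attractor` helper sketched in Section 2.2; the representation of a game by `owner`, `succ`, and a `priority` dictionary is an assumption of the sketch rather than part of the pseudocode.

```python
def mcnz(sub, owner, succ, priority, d, side='Even'):
    """McNaughton-Zielonka on the subgame induced by `sub`: returns the
    largest `side` dominion, assuming every priority in `sub` is at most d
    and d has the parity of `side` (even for Even, odd for Odd)."""
    other = 'Odd' if side == 'Even' else 'Even'
    if d == 0:
        return set(sub)                        # base case of McN-Z_Even
    G = set(sub)
    while True:                                # the repeat-until loop
        D = {v for v in G if priority[v] == d}
        Gp = G - attractor(G, owner, succ, D, side)
        U = mcnz(Gp, owner, succ, priority, d - 1, other)
        if not U:                              # until U_i = emptyset
            return G
        G = G - attractor(G, owner, succ, U, other)
```

Calling `mcnz(V, owner, succ, priority, d)` with an even $d$ bounding all priorities returns the largest Even dominion, matching the calling convention described above.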
A way to prove the correctness of McNaughton-Zielonka algorithm that we wish to highlight here is to enhance the algorithm slightly so that it produces not just a set of vertices but also an Even attractor decomposition of that set and an Odd attractor decomposition of its complement. We explain how to modify procedure
$\text{McN-Z}_{\mathrm{Even}}$ and leave it as an exercise for the reader to
analogously modify procedure $\text{McN-Z}_{\mathrm{Odd}}$. In procedure
$\textnormal{{$\text{McN-Z}_{\mathrm{Even}}$}}(\mathcal{G},d)$, replace the
line
$U_{i}\leftarrow\textnormal{{$\text{McN-Z}_{\mathrm{Odd}}$}}(\mathcal{G}_{i}^{\prime},d-1)$
by the line
$U_{i},\mathcal{H}_{i},\mathcal{H}_{i}^{\prime}\leftarrow\textnormal{{$\text{McN-Z}_{\mathrm{Odd}}$}}(\mathcal{G}_{i}^{\prime},d-1)\,.$
Moreover, if upon termination of the repeat-until loop we have
$\mathcal{H}_{i}\>=\>\left\langle\emptyset,(S_{1},\mathcal{I}_{1},A_{1}),\dots,(S_{k},\mathcal{I}_{k},A_{k})\right\rangle$
then instead of returning just the set $V^{\mathcal{G}_{i}}$, let the
procedure return both $V^{\mathcal{G}_{i}}$ and the following two objects:
(1)
$\left\langle\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}_{i}}(D_{i}),(S_{1},\mathcal{I}_{1},A_{1}),\dots,(S_{k},\mathcal{I}_{k},A_{k})\right\rangle$
and
(2)
$\left\langle\emptyset,\left(U_{1},\mathcal{H}_{1}^{\prime},\mathrm{Attr}_{\mathrm{Odd}}^{\mathcal{G}_{1}}(U_{1})\right),\dots,\left(U_{i},\mathcal{H}_{i}^{\prime},\mathrm{Attr}_{\mathrm{Odd}}^{\mathcal{G}_{i}}(U_{i})\right)\right\rangle$
Figure 1. Attractors and subgames in one iteration of the loop in attractor
decomposition algorithms.
In an argument by induction on $d$ and $i$, the inductive hypothesis is that:
* •
$\mathcal{H}_{i}^{\prime}$ is an Odd $(d-1)$-attractor decomposition of the
subgame $\mathcal{G}^{\prime}_{i}\cap U_{i}$;
* •
$\mathcal{H}_{i}$ is an Even $d$-attractor decomposition of the subgame
$\mathcal{G}^{\prime}_{i}\setminus U_{i}$;
and the inductive step is then to show that:
* •
for every $i$, (2) is an Odd $(d+1)$-attractor decomposition of subgame
$\mathcal{G}\setminus\mathcal{G}_{i+1}$;
* •
upon termination of the repeat-until loop, (1) is an Even $d$-attractor
decomposition of subgame $\mathcal{G}_{i+1}$.
The general arguments in such a proof are well known (McNaughton, 1993;
Zielonka, 1998; Jurdziński et al., 2008; Daviaud et al., 2018) and hence we
omit the details here.
###### Theorem 2.6.
McNaughton-Zielonka algorithm can be enhanced to produce both the largest Even
and Odd dominia, and an attractor decomposition of each. Every vertex is in
one of the two dominia.
## 3. Universal trees and algorithms
Every call of McNaughton-Zielonka algorithm takes small polynomial time if the
recursive calls it makes are excluded. This is because in every execution of
the main repeat-until loop, the two attractor computations can be performed in
time linear in the size of the game graph, and the loop can only be performed
at most linearly many times since the sets $U_{1},U_{2},\dots,U_{i}$ are
mutually disjoint. Therefore, the running time of McNaughton-Zielonka
algorithm is mostly determined by the number of recursive calls it makes
overall. While numerous experiments indicate that the algorithm performs very
well on some classes of random games and on games arising from applications in
model checking, temporal logic synthesis, and equivalence checking (van Dijk,
2018), it is also well known that there are families of parity games on which
McNaughton-Zielonka algorithm performs exponentially many recursive calls
(Friedmann, 2011a).
Parys (Parys, 2019) has devised an ingenious modification of McNaughton-Zielonka algorithm that reduced the number of recursive calls of the algorithm to a quasi-polynomial number $n^{O(\lg n)}$ in the worst case. Lehtinen, Schewe, and Wojtczak (Lehtinen et al., 2019) have slightly modified Parys’s algorithm in order to improve the running time from $n^{O(\lg n)}$ down to $n^{O(\lg d)}$ for $(n,d)$-small parity games. They have also made an informal
observation that the tree of recursive calls of their recursive procedure is
universal.
In this paper, we argue that McNaughton-Zielonka algorithm, Parys’s algorithm,
and Lehtinen-Schewe-Wojtczak algorithm are special cases of what we call a
_universal attractor decomposition algorithm_. The universal algorithm is
parameterized by two ordered trees and we prove a striking structural result
that if those trees are capacious enough to embed (in a formal sense explained
later) ordered trees that describe the “shape” of some attractor
decompositions of the largest Even and Odd dominia in a parity game, then the
universal algorithm correctly computes the two dominia. It follows that if the
algorithm is run on two universal trees then it is correct, and indeed we
reproduce McNaughton-Zielonka, Parys’s, and Lehtinen-Schewe-Wojtczak
algorithms by running the universal algorithm on specific classes of universal
trees. In particular, Lehtinen-Schewe-Wojtczak algorithm is obtained by using
the succinct universal trees of Jurdziński and Lazić (Jurdziński and Lazić,
2017), whose size nearly matches the quasi-polynomial lower bound on the size
of universal trees (Czerwiński et al., 2019).
### 3.1. Universal ordered trees
#### Ordered trees.
Ordered trees are defined inductively; an ordered tree is the trivial tree
$\left\langle\right\rangle$ or a sequence
$\left\langle\mathcal{T}_{1},\mathcal{T}_{2},\dots,\mathcal{T}_{k}\right\rangle$,
where $\mathcal{T}_{i}$ is an ordered tree for every $i=1,2,\dots,k$. The
trivial tree has only one node called the root, which is a leaf; and a tree of
the form
$\left\langle\mathcal{T}_{1},\mathcal{T}_{2},\dots,\mathcal{T}_{k}\right\rangle$
has the root with $k$ children, the root is not a leaf, and the $i$-th child
of the root is the root of ordered tree $\mathcal{T}_{i}$.
For an ordered tree $\mathcal{T}$, we write $\mathrm{height}(\mathcal{T})$ for
its _height_ and $\mathrm{leaves}(\mathcal{T})$ for its _number of leaves_.
Both are defined by routine induction: the height of the trivial tree is $0$
and it has $1$ leaf; the height of tree
$\left\langle\mathcal{T}_{1},\mathcal{T}_{2},\dots,\mathcal{T}_{k}\right\rangle$
is $1$ plus the maximum height of trees $\mathcal{T}_{1}$, $\mathcal{T}_{2}$,
…, $\mathcal{T}_{k}$; and the number of leaves of tree
$\left\langle\mathcal{T}_{1},\mathcal{T}_{2},\dots,\mathcal{T}_{k}\right\rangle$
is the sum of the numbers of leaves of trees $\mathcal{T}_{1}$,
$\mathcal{T}_{2}$, …, $\mathcal{T}_{k}$.
#### Trees of attractor decompositions.
The definition of an attractor decomposition is inductive and we define an
ordered tree that reflects the hierarchical structure of an attractor
decomposition. If $d$ is even and
$\mathcal{H}=\left\langle
A,(S_{1},\mathcal{H}_{1},A_{1}),\dots,(S_{k},\mathcal{H}_{k},A_{k})\right\rangle$
is an Even $d$-attractor decomposition then we define the _tree of attractor
decomposition $\mathcal{H}$_, denoted by $\mathcal{T}_{\mathcal{H}}$, to be
the trivial ordered tree $\left\langle\right\rangle$ if $k=0$, and otherwise,
to be the ordered tree
$\left\langle\mathcal{T}_{\mathcal{H}_{1}},\mathcal{T}_{\mathcal{H}_{2}},\dots,\mathcal{T}_{\mathcal{H}_{k}}\right\rangle$,
where for every $i=1,2,\dots,k$, tree $\mathcal{T}_{\mathcal{H}_{i}}$ is the
tree of attractor decomposition $\mathcal{H}_{i}$. Trees of Odd attractor
decompositions are defined analogously.
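In code, with a decomposition represented as a pair of the attractor $A$ and a list of triples $(S_{i},\mathcal{H}_{i},A_{i})$ (our representation, matching the notation above), the tree of a decomposition is a one-line recursion:

```python
def decomposition_tree(H):
    """Tree of an attractor decomposition H = (A, [(S1, H1, A1), ...]),
    in the nested-tuple representation of ordered trees used above."""
    A, triples = H
    return tuple(decomposition_tree(Hi) for (Si, Hi, Ai) in triples)
```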
Observe that the sets $S_{1},S_{2},\dots,S_{k}$ in an attractor decomposition
as above are non-empty and pairwise disjoint, which implies that trees of
attractor decompositions are small relative to the number of vertices and the
number of distinct priorities in a parity game. More precisely, we say that an
ordered tree is _$(n,h)$ -small_ if its height is at most $h$ and it has at
most $n$ leaves. The following proposition can be proved by routine structural
induction.
###### Proposition 3.1.
If $\mathcal{H}$ is an attractor decomposition of an $(n,d)$-small parity game
then its tree $\mathcal{T}_{\mathcal{H}}$ is $(n,\lceil d/2\rceil)$-small.
#### Embedding ordered trees.
Intuitively, an ordered tree _embeds_ another if the latter can be obtained
from the former by pruning some subtrees. More formally, every ordered tree
embeds the trivial tree $\left\langle\right\rangle$, and
$\left\langle\mathcal{T}_{1},\mathcal{T}_{2},\dots,\mathcal{T}_{k}\right\rangle$
embeds
$\left\langle\mathcal{T}_{1}^{\prime},\mathcal{T}_{2}^{\prime},\dots,\mathcal{T}_{\ell}^{\prime}\right\rangle$
if there are indices $i_{1},i_{2},\dots,i_{\ell}$, such that $1\leq
i_{1}<i_{2}<\cdots<i_{\ell}\leq k$ and for every $j=1,2,\dots,\ell$, we have
that $\mathcal{T}_{i_{j}}$ embeds $\mathcal{T}_{j}^{\prime}$.
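Embedding can be decided mechanically on the tuple representation; the sketch below greedily matches the children of the embedded tree against an increasing subsequence of the children of the embedding tree, which is sound because matching a child as early as possible never rules out later matches.

```python
def embeds(t, s):
    """Does ordered tree t embed ordered tree s (both nested tuples)?"""
    if not s:
        return True                 # every tree embeds the trivial tree
    if not t:
        return False
    i = 0
    for sj in s:                    # children of s, left to right
        while i < len(t) and not embeds(t[i], sj):
            i += 1                  # skip children of t that cannot host sj
        if i == len(t):
            return False
        i += 1                      # t[i] hosts sj; move strictly right
    return True
```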
#### Universal ordered trees.
We say that an ordered tree is $(n,h)$-universal (Czerwiński et al., 2019) if
it embeds every $(n,h)$-small ordered tree. The complete $n$-ary tree $C_{n,h}$ of height $h$ can be defined by induction on $h$: if $h=0$ then $C_{n,0}$ is the
trivial tree $\left\langle\right\rangle$, and if $h>0$ then $C_{n,h}$ is the
ordered tree $\left\langle C_{n,h-1}\right\rangle^{n}$. The tree $C_{n,h}$ is
obviously $(n,h)$-universal but its size is exponential in $h$.
We define two further classes $P_{n,h}$ and $S_{n,h}$ of $(n,h)$-universal
trees whose size is only quasi-polynomial, and hence they are significantly
smaller than the complete $n$-ary trees of height $h$. Both classes are
defined by induction on $n+h$.
If $h=0$ then both $P_{n,h}$ and $S_{n,h}$ are defined to be the trivial tree
$\left\langle\right\rangle$. If $h>0$ then $P_{n,h}$ is defined to be the
ordered tree
$\left\langle P_{\lfloor n/2\rfloor,h-1}\right\rangle^{\lfloor
n/2\rfloor}\cdot\left\langle P_{n,h-1}\right\rangle\cdot\left\langle
P_{\lfloor n/2\rfloor,h-1}\right\rangle^{\lfloor n/2\rfloor}\,,$
and $S_{n,h}$ is defined to be the ordered tree
$S_{\lfloor n/2\rfloor,h}\cdot\left\langle S_{n,h-1}\right\rangle\cdot
S_{\lfloor n/2\rfloor,h}\,.$
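In the tuple representation of ordered trees used earlier, the three constructions read as follows; treating the $n=0$ case as the empty sequence of children (the identity of concatenation) is our convention for grounding the recursion.

```python
def C(n, h):
    """Complete n-ary tree of height h."""
    return () if h == 0 else (C(n, h - 1),) * n

def P(n, h):
    """Parys-style (n,h)-universal tree."""
    if h == 0 or n == 0:
        return ()
    half = (P(n // 2, h - 1),) * (n // 2)
    return half + (P(n, h - 1),) + half

def S(n, h):
    """Succinct (n,h)-universal tree; concatenation of trees is tuple
    concatenation of their sequences of children."""
    if h == 0 or n == 0:
        return ()
    half = S(n // 2, h)
    return half + (S(n, h - 1),) + half
```

For example, `leaves(C(3, 2))` is 9 while `leaves(S(3, 2))` is only 5, and universality can be spot-checked on small instances by testing `embeds(S(n, h), t)` against all $(n,h)$-small tuples $t$.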
We leave it as an instructive exercise to the reader to prove the following
proposition.
###### Proposition 3.2.
Ordered trees $C_{n,h}$, $P_{n,h}$ and $S_{n,h}$ are $(n,h)$-universal.
A proof of universality of $S_{n,h}$ is implicit in the work of Jurdziński and
Lazić (Jurdziński and Lazić, 2017), whose _succinct multi-counters_ are merely
an alternative presentation of trees $S_{n,h}$. Parys (Parys, 2019) has shown
that the number of leaves in trees $P_{n,h}$ is $n^{\lg n+O(1)}$ and
Jurdziński and Lazić (Jurdziński and Lazić, 2017) have proved that the number
of leaves in trees $S_{n,h}$ is $n^{\lg h+O(1)}$. Czerwiński et al.
(Czerwiński et al., 2019) have established a quasi-polynomial lower bound on
the number of leaves in $(n,h)$-universal trees, which the size of $S_{n,h}$
exceeds only by a small polynomial factor.
### 3.2. Universal algorithm
Every call of McNaughton-Zielonka algorithm (Algorithm 1) repeats the main
loop until the set $U_{i}$ (returned by a recursive call) is empty. If the
number of iterations for each value of $d$ is large then the overall number of
recursive calls may be exponential in $d$ in the worst case, and that is
indeed what happens for some families of hard parity games (Friedmann, 2011a).
procedure $\text{Univ}_{\mathrm{Even}}(\mathcal{G},d,\mathcal{T}^{\mathrm{Even}},\mathcal{T}^{\mathrm{Odd}})$:
  let $\mathcal{T}^{\mathrm{Odd}}=\left\langle\mathcal{T}^{\mathrm{Odd}}_{1},\mathcal{T}^{\mathrm{Odd}}_{2},\dots,\mathcal{T}^{\mathrm{Odd}}_{k}\right\rangle$
  $\mathcal{G}_{1}\leftarrow\mathcal{G}$
  for $i\leftarrow 1$ to $k$ do
    $D_{i}\leftarrow\pi^{-1}(d)\cap V^{\mathcal{G}_{i}}$
    $\mathcal{G}_{i}^{\prime}\leftarrow\mathcal{G}_{i}\setminus\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}_{i}}(D_{i})$
    $U_{i}\leftarrow\text{Univ}_{\mathrm{Odd}}\left(\mathcal{G}_{i}^{\prime},d-1,\mathcal{T}^{\mathrm{Even}},\mathcal{T}^{\mathrm{Odd}}_{i}\right)$
    $\mathcal{G}_{i+1}\leftarrow\mathcal{G}_{i}\setminus\mathrm{Attr}^{\mathcal{G}_{i}}_{\mathrm{Odd}}(U_{i})$
  return $V^{\mathcal{G}_{k+1}}$

procedure $\text{Univ}_{\mathrm{Odd}}(\mathcal{G},d,\mathcal{T}^{\mathrm{Even}},\mathcal{T}^{\mathrm{Odd}})$:
  let $\mathcal{T}^{\mathrm{Even}}=\left\langle\mathcal{T}^{\mathrm{Even}}_{1},\mathcal{T}^{\mathrm{Even}}_{2},\dots,\mathcal{T}^{\mathrm{Even}}_{\ell}\right\rangle$
  $\mathcal{G}_{1}\leftarrow\mathcal{G}$
  for $i\leftarrow 1$ to $\ell$ do
    $D_{i}\leftarrow\pi^{-1}(d)\cap V^{\mathcal{G}_{i}}$
    $\mathcal{G}_{i}^{\prime}\leftarrow\mathcal{G}_{i}\setminus\mathrm{Attr}_{\mathrm{Odd}}^{\mathcal{G}_{i}}(D_{i})$
    $U_{i}\leftarrow\text{Univ}_{\mathrm{Even}}\left(\mathcal{G}_{i}^{\prime},d-1,\mathcal{T}^{\mathrm{Even}}_{i},\mathcal{T}^{\mathrm{Odd}}\right)$
    $\mathcal{G}_{i+1}\leftarrow\mathcal{G}_{i}\setminus\mathrm{Attr}^{\mathcal{G}_{i}}_{\mathrm{Even}}(U_{i})$
  return $V^{\mathcal{G}_{\ell+1}}$

Algorithm 2. The universal algorithm.
In our _universal attractor decomposition algorithm_ (Algorithm 2), every
iteration of the main loop performs exactly the same actions as in McNaughton-
Zielonka algorithm (see Algorithm 1 and Figure 1), but the algorithm uses a
different mechanism to determine how many iterations of the main loop are
performed in each recursive call. In the mutually recursive procedures
$\text{Univ}_{\mathrm{Odd}}$ and $\text{Univ}_{\mathrm{Even}}$, this is
determined by the numbers of children of the root in the input trees
$\mathcal{T}^{\mathrm{Even}}$ (the third argument) and
$\mathcal{T}^{\mathrm{Odd}}$ (the fourth argument), respectively. Note that
the sole recursive call of $\text{Univ}_{\mathrm{Odd}}$ in the $i$-th
iteration of the main loop in a call of $\text{Univ}_{\mathrm{Even}}$ is given
subtree $\mathcal{T}_{i}^{\mathrm{Odd}}$ as its fourth argument and,
analogously, the sole recursive call of $\text{Univ}_{\mathrm{Even}}$ in the
$j$-th iteration of the main loop in a call of $\text{Univ}_{\mathrm{Odd}}$ is
given subtree $\mathcal{T}_{j}^{\mathrm{Even}}$ as its third argument.
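Concretely, the following sketch transcribes Algorithm 2 in the style of the earlier McNaughton-Zielonka sketch, with the two input trees in the nested-tuple representation of Section 3.1; the game representation is the same assumed convention as before.

```python
def univ(sub, owner, succ, priority, d, t_even, t_odd, side='Even'):
    """Universal attractor decomposition algorithm on the subgame induced
    by `sub`; the number of loop iterations is dictated by the children of
    t_odd (in Even calls) and of t_even (in Odd calls)."""
    other = 'Odd' if side == 'Even' else 'Even'
    opp_tree = t_odd if side == 'Even' else t_even
    G = set(sub)
    for child in opp_tree:                 # one iteration per child of the root
        D = {v for v in G if priority[v] == d}
        Gp = G - attractor(G, owner, succ, D, side)
        if side == 'Even':                 # pass the child subtree down
            U = univ(Gp, owner, succ, priority, d - 1, t_even, child, other)
        else:
            U = univ(Gp, owner, succ, priority, d - 1, child, t_odd, other)
        G = G - attractor(G, owner, succ, U, other)
    return G
```

By Proposition 3.7 below, calling `univ` with both trees equal to `S(n, d // 2)` reproduces the behaviour of Lehtinen-Schewe-Wojtczak algorithm.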
Define the _interleaving_ operation on two ordered trees inductively as
follows:
$\left\langle\right\rangle\bowtie\mathcal{T}=\left\langle\right\rangle$ and
$\left\langle\mathcal{T}_{1},\mathcal{T}_{2},\dots,\mathcal{T}_{k}\right\rangle\bowtie\mathcal{T}=\left\langle\mathcal{T}\bowtie\mathcal{T}_{1},\mathcal{T}\bowtie\mathcal{T}_{2},\dots,\mathcal{T}\bowtie\mathcal{T}_{k}\right\rangle$.
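In the tuple representation, the interleaving operation is a one-line recursion:

```python
def interleave(t, s):
    """The interleaving t ⋈ s of ordered trees as nested tuples:
    () ⋈ s = (), and (t1, ..., tk) ⋈ s = (s ⋈ t1, ..., s ⋈ tk)."""
    return tuple(interleave(s, c) for c in t)
```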
Then the following simple proposition provides an explicit description of the
tree of recursive calls of our universal algorithm.
###### Proposition 3.3.
If $d$ is even then the tree of recursive calls of a call
$\textnormal{{$\text{Univ}_{\mathrm{Even}}$}}\left(\mathcal{G},d,\mathcal{T}^{\mathrm{Even}},\mathcal{T}^{\mathrm{Odd}}\right)$
is the interleaving
$\mathcal{T}^{\mathrm{Odd}}\bowtie\mathcal{T}^{\mathrm{Even}}$ of trees
$\mathcal{T}^{\mathrm{Odd}}$ and $\mathcal{T}^{\mathrm{Even}}$.
###### Proposition 3.4.
If $d$ is odd then the tree of recursive calls of a call
$\textnormal{{$\text{Univ}_{\mathrm{Odd}}$}}\left(\mathcal{G},d,\mathcal{T}^{\mathrm{Even}},\mathcal{T}^{\mathrm{Odd}}\right)$
is the interleaving
$\mathcal{T}^{\mathrm{Even}}\bowtie\mathcal{T}^{\mathrm{Odd}}$ of trees
$\mathcal{T}^{\mathrm{Even}}$ and $\mathcal{T}^{\mathrm{Odd}}$.
The following elementary proposition helps estimate the size of an
interleaving of two ordered trees and hence the running time of a call of the
universal algorithm that is given two ordered trees as inputs.
###### Proposition 3.5.
If $\mathcal{T}$ and $\mathcal{T}^{\prime}$ are ordered trees then:
* •
$\mathrm{height}(\mathcal{T}\bowtie\mathcal{T}^{\prime})\leq\mathrm{height}(\mathcal{T})+\mathrm{height}(\mathcal{T}^{\prime})$;
* •
$\mathrm{leaves}(\mathcal{T}\bowtie\mathcal{T}^{\prime})\leq\mathrm{leaves}(\mathcal{T})\cdot\mathrm{leaves}(\mathcal{T}^{\prime})$.
In contrast to the universal algorithm, the tree of recursive calls of
McNaughton-Zielonka algorithm is not pre-determined by a structure separate
from the game graph, such as the pair of trees $\mathcal{T}^{\mathrm{Even}}$
and $\mathcal{T}^{\mathrm{Odd}}$. Instead, McNaughton-Zielonka algorithm
determines the number of iterations of its main loop adaptively, using the _empty-set early termination rule_ : terminate the main loop as soon
as $U_{i}=\emptyset$. We argue that if we add the empty-set early termination
rule to the universal algorithm in which both trees
$\mathcal{T}^{\mathrm{Even}}$ and $\mathcal{T}^{\mathrm{Odd}}$ are the tree
$C_{n,d/2}$ then its behaviour coincides with McNaughton-Zielonka algorithm.
###### Proposition 3.6.
The universal algorithm performs the same actions and produces the same output
as McNaughton-Zielonka algorithm if it is run on an $(n,d)$-small parity game
and with both trees $\mathcal{T}^{\mathrm{Even}}$ and
$\mathcal{T}^{\mathrm{Odd}}$ equal to $C_{n,d/2}$, and if it uses the adaptive
empty-set early termination rule.
The idea of using rules for implicitly pruning the tree of recursive calls of
a McNaughton-Zielonka-style algorithm that are significantly different from
the adaptive empty-set early termination rule is due to Parys (Parys, 2019).
In this way, he has designed the first McNaughton-Zielonka-style algorithm
that works in quasi-polynomial time $n^{O(\lg n)}$ in the worst case, and
Lehtinen, Schewe, and Wojtczak (Lehtinen et al., 2019) have refined Parys’s
algorithm, improving the worst-case running time down to $n^{O(\lg d)}$. Both
algorithms use two numerical arguments (one for $\mathrm{Even}$ and one for
$\mathrm{Odd}$) and “halving tricks” on those parameters, which results in
pruning the tree of recursive calls down to quasi-polynomial size in the worst
case. We note that our universal algorithm yields the algorithms of Parys and of Lehtinen et al., respectively, when it is run on an $(n,d)$-small parity game with both trees $\mathcal{T}^{\mathrm{Even}}$ and $\mathcal{T}^{\mathrm{Odd}}$ set to the $(n,d/2)$-universal trees $P_{n,d/2}$ and $S_{n,d/2}$, respectively.
###### Proposition 3.7.
The universal algorithm performs the same actions and produces the same output
as Lehtinen-Schewe-Wojtczak algorithm if it is run on an $(n,d)$-small parity
game with both trees $\mathcal{T}^{\mathrm{Even}}$ and
$\mathcal{T}^{\mathrm{Odd}}$ equal to $S_{n,d/2}$.
The correspondence between the universal algorithm run on $(n,d/2)$-universal
trees $P_{n,d/2}$ and Parys’s algorithm is a bit more subtle. While both run
in quasi-polynomial time in the worst case, the former may perform more
recursive calls than the latter. The two coincide, however, if the former
is enhanced with a simple adaptive tree-pruning rule similar to the empty-set
early termination rule. The discussion of this and other adaptive tree-pruning
rules will be better informed once we have discussed sufficient conditions for
the correctness of our universal algorithm. Therefore, we will return to
elaborating the full meaning of the following proposition in Section 4.4.
###### Proposition 3.8.
The universal algorithm performs the same actions and produces the same output
as a non-adaptive version of Parys’s algorithm if it is run on an
$(n,d)$-small parity game with both trees $\mathcal{T}^{\mathrm{Even}}$ and
$\mathcal{T}^{\mathrm{Odd}}$ equal to $P_{n,d/2}$.
## 4. Correctness via structural theorems
The proof of correctness of McNaughton-Zielonka algorithm that is based on the
algorithm recursively producing attractor decompositions of largest Even and
Odd dominia (as discussed in Section 2.4) critically relies on the
$U_{i}=\emptyset$ termination condition of the main loop in McNaughton-
Zielonka algorithm. The argument breaks down if the loop terminates before
that empty-set condition obtains. Instead, Parys (Parys, 2019) has developed a
novel _dominion separation technique_ to prove correctness of his algorithm
and Lehtinen et al. (Lehtinen et al., 2019) use the same technique to justify
theirs.
In this paper, we significantly generalize the dominion separation technique
of Parys, which allows us to intimately link the correctness of our meta-
algorithm to shapes (modelled as ordered trees) of attractor decompositions of
largest Even and Odd dominia. We say that the universal algorithm is correct
on a parity game if $\text{Univ}_{\mathrm{Even}}$ returns the largest Even
dominion and $\text{Univ}_{\mathrm{Odd}}$ returns the largest Odd dominion. We
also say that an ordered tree $\mathcal{T}$ _embeds a dominion_ $D$ in a
parity game $\mathcal{G}$ if it embeds the tree of some attractor
decomposition of $\mathcal{G}\cap D$. The main technical result we aim to
prove in this section is the sufficiency of the following condition for the
universal algorithm to be correct.
###### Theorem 4.1 (Correctness of universal algorithm).
The universal algorithm is correct on a parity game $\mathcal{G}$ if it is run
on ordered trees $\mathcal{T}^{\mathrm{Even}}$ and
$\mathcal{T}^{\mathrm{Odd}}$, such that $\mathcal{T}^{\mathrm{Even}}$ embeds
the largest Even dominion in $\mathcal{G}$ and $\mathcal{T}^{\mathrm{Odd}}$
embeds the largest Odd dominion in $\mathcal{G}$.
### 4.1. Embeddable decomposition theorem
Before we prove Theorem 4.1 in Section 4.2, in this section we establish
another technical result—the embeddable decomposition theorem—that enables our
generalization of Parys’s dominion separation technique. Its statement is
intuitive: a subgame induced by a trap has a simpler attractor decomposition
structure than the whole game itself; its proof, however, seems to require
some careful surgery.
###### Theorem 4.2 (Embeddable decomposition).
If $T$ is a trap for Even in a parity game $\mathcal{G}$ and
$\mathcal{G}^{\prime}=\mathcal{G}\cap T$ is the subgame induced by $T$, then
for every Even attractor decomposition $\mathcal{H}$ of $\mathcal{G}$, there
is an Even attractor decomposition $\mathcal{H}^{\prime}$ of
$\mathcal{G}^{\prime}$, such that $\mathcal{T}_{\mathcal{H}}$ embeds
$\mathcal{T}_{\mathcal{H}^{\prime}}$.
In order to streamline the proof of the embeddable decomposition theorem, we
state the following two propositions, which synthesize or generalize some of
the arguments that were also used by Parys (Parys, 2019) and Lehtinen et al.
(Lehtinen et al., 2019). Proofs are included in the Appendix.
###### Proposition 4.3.
Suppose that $R$ is a trap for Even in game $\mathcal{G}$. Then if $T$ is a
trap for Odd in $\mathcal{G}$ then $T\cap R$ is a trap for Odd in subgame
$\mathcal{G}\cap R$, and if $T$ is an Even dominion in $\mathcal{G}$ then
$T\cap R$ is an Even dominion in $\mathcal{G}\cap R$.
The other proposition is illustrated in Figure 2. Its statement is more
complex than that of the first proposition. The statement and the proof
describe the relationship between the Even attractor of a set $B$ of vertices
in a game $\mathcal{G}$ and the Even attractor of the set $B\cap T$ in subgame
$\mathcal{G}\cap T$, where $T$ is a trap for Even in $\mathcal{G}$.
Figure 2. Traps and attractors in Proposition 4.4.
###### Proposition 4.4.
Let $B\subseteq V^{\mathcal{G}}$ and let $T$ be a trap for Even in game
$\mathcal{G}$. Define $A=\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}}(B)$ and
$A^{\prime}=\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}\cap T}(B\cap T)$. Then
$T\setminus A^{\prime}$ is a trap for Even in subgame $\mathcal{G}\setminus
A$.
Finally, we are ready to prove the embeddable decomposition theorem by
induction on the number of leaves of the tree of attractor decomposition
$\mathcal{H}$.
###### Proof of Theorem 4.2.
Without loss of generality, assume that $d$ is even and
$\mathcal{H}=\left\langle
A,(S_{1},\mathcal{H}_{1},A_{1}),\dots,(S_{k},\mathcal{H}_{k},A_{k})\right\rangle$
is an Even $d$-attractor decomposition of $\mathcal{G}$, where $A$ is the Even
attractor to the set $D$ of vertices of priority $d$ in $\mathcal{G}$. In
Figure 3, set $T$ and the subgame $\mathcal{G}^{\prime}$ it induces form the
pentagon obtained from the largest rectangle by removing the triangle above
the diagonal line in the top-left corner. Sets $A$, $S_{1}$, and $A_{1}$ are
also illustrated, together with sets $A^{\prime}$, $S_{1}^{\prime}$,
$A_{1}^{\prime}$ and subgames $\mathcal{G}_{1}$, $\mathcal{G}_{2}$,
$\mathcal{G}_{1}^{\prime}$, and $\mathcal{G}_{2}^{\prime}$, which are defined
as follows.
Let $\mathcal{G}_{1}=\mathcal{G}\setminus A$, and
$\mathcal{G}_{2}=\mathcal{G}_{1}\setminus A_{1}$. We will define sets
$A^{\prime}$, $S_{1}^{\prime}$, $A_{1}^{\prime}$, …, $S_{\ell}^{\prime}$,
$A_{\ell}^{\prime}$, and Even $(d-2)$-attractor decompositions
$\mathcal{H}_{1}^{\prime},\dots,\mathcal{H}_{\ell}^{\prime}$ of subgames
$\mathcal{G}\cap S_{1}^{\prime}$, …, $\mathcal{G}\cap S_{\ell}^{\prime}$,
respectively, such that
$\mathcal{H}^{\prime}=\left\langle
A^{\prime},(S_{1}^{\prime},\mathcal{H}_{1}^{\prime},A_{1}^{\prime}),\dots,(S_{\ell}^{\prime},\mathcal{H}_{\ell}^{\prime},A_{\ell}^{\prime})\right\rangle$
is an Even $d$-attractor decomposition of subgame $\mathcal{G}^{\prime}$ and
$\mathcal{T}_{\mathcal{H}}$ embeds $\mathcal{T}_{\mathcal{H}^{\prime}}$.
Let $A^{\prime}$ be the Even attractor to $D\cap T$ in $\mathcal{G}^{\prime}$
and let $\mathcal{G}_{1}^{\prime}=\mathcal{G}^{\prime}\setminus A^{\prime}$.
Set $S_{1}^{\prime}=S_{1}\cap\mathcal{G}_{1}^{\prime}$, let $A_{1}^{\prime}$
be the Even attractor to $S_{1}^{\prime}$ in $\mathcal{G}_{1}^{\prime}$, and
let $\mathcal{G}_{2}^{\prime}=\mathcal{G}_{1}^{\prime}\setminus
A_{1}^{\prime}$.
Firstly, since $D\subseteq V^{\mathcal{G}}$ and $T$ is a trap for Even in
$\mathcal{G}$, by Proposition 4.4, we have that $\mathcal{G}_{1}^{\prime}$ is
a trap for Even in subgame $\mathcal{G}_{1}$. Since $S_{1}\subseteq
V^{\mathcal{G}_{1}}$ and subgame $\mathcal{G}_{1}^{\prime}$ is a trap for Even
in subgame $\mathcal{G}_{1}$, again by Proposition 4.4, we conclude that
$\mathcal{G}_{2}^{\prime}$ is a trap for Even in subgame $\mathcal{G}_{2}$.
Secondly, we argue that $S_{1}^{\prime}$ is an Even dominion in subgame
$\mathcal{G}_{1}^{\prime}$. This follows by recalling that $S_{1}$ is a
dominion for Even in $\mathcal{G}_{1}$ and $\mathcal{G}_{1}^{\prime}$ is a
trap for Even in $\mathcal{G}_{1}$, and then applying Proposition 4.3.
Thirdly, we argue that $S_{1}^{\prime}$ is a trap for Even in subgame
$\mathcal{G}\cap S_{1}$. This follows by recalling that $S_{1}$ is a trap for
Odd in $\mathcal{G}_{1}$ and that $\mathcal{G}_{1}^{\prime}$ is a trap for
Even in $\mathcal{G}_{1}$, and then applying Proposition 4.3.
Figure 3. Attractors, subgames, and dominia in the proof of the embeddable
decomposition theorem.
We are now in a position to apply the inductive hypothesis twice in order to
complete the definition of the attractor decomposition $\mathcal{H}^{\prime}$.
Firstly, recall that $S_{1}^{\prime}$ is a trap for Even in subgame
$\mathcal{G}\cap S_{1}$ and that $\mathcal{H}_{1}$ is a $(d-2)$-attractor
decomposition of $\mathcal{G}\cap S_{1}$, so we can apply the inductive
hypothesis to obtain a $(d-2)$-attractor decomposition
$\mathcal{H}_{1}^{\prime}$ of subgame $\mathcal{G}\cap S_{1}^{\prime}$, such
that $\mathcal{T}_{\mathcal{H}_{1}}$ embeds
$\mathcal{T}_{\mathcal{H}_{1}^{\prime}}$. Secondly, note that
$\mathcal{I}\>=\>\left\langle\emptyset,(S_{2},\mathcal{H}_{2},A_{2}),\dots,(S_{k},\mathcal{H}_{k},A_{k})\right\rangle$
is a $d$-attractor decomposition of $\mathcal{G}_{2}$. We find a $d$-attractor
decomposition $\mathcal{I}^{\prime}$ of subgame $\mathcal{G}_{2}^{\prime}$,
such that $\mathcal{T}_{\mathcal{I}}$ embeds
$\mathcal{T}_{\mathcal{I}^{\prime}}$. Recalling that
$\mathcal{G}_{2}^{\prime}$ is a trap for Even in subgame $\mathcal{G}_{2}$, it
suffices to use the inductive hypothesis for subgame
$\mathcal{G}_{2}^{\prime}$ of game $\mathcal{G}_{2}$ and the $d$-attractor
decomposition $\mathcal{I}$ of $\mathcal{G}_{2}$.
Verifying that $\mathcal{H}^{\prime}$ is a $d$-attractor decomposition of
$\mathcal{G}^{\prime}$ is routine. That $\mathcal{T}_{\mathcal{H}}$ embeds
$\mathcal{T}_{\mathcal{H}^{\prime}}$ also follows routinely from
$\mathcal{T}_{\mathcal{H}_{1}}$ embedding
$\mathcal{T}_{\mathcal{H}_{1}^{\prime}}$ and $\mathcal{T}_{\mathcal{I}}$
embedding $\mathcal{T}_{\mathcal{I}^{\prime}}$. ∎
### 4.2. Dominion separation theorem
The simple dominion disjointness property (Proposition 2.2) states that every
Even dominion is disjoint from every Odd dominion. For two sets $A$ and $B$,
we say that another set $X$ _separates $A$ from $B$_ if $A\subseteq X$ and
$X\cap B=\emptyset$. In this section we establish a very general dominion
separation property for subgames that occur in iterations of the universal
algorithm. This allows us to prove one of the main technical results of this
paper (Theorem 4.1) that describes a detailed structural sufficient condition
for the correctness of the universal algorithm.
###### Theorem 4.5 (Dominion separation).
Let $\mathcal{G}$ be an $(n,d)$-small parity game and let
$\mathcal{T}^{\mathrm{Even}}=\left\langle\mathcal{T}^{\mathrm{Even}}_{1},\dots,\mathcal{T}^{\mathrm{Even}}_{\ell}\right\rangle$
and
$\mathcal{T}^{\mathrm{Odd}}=\left\langle\mathcal{T}^{\mathrm{Odd}}_{1},\dots,\mathcal{T}^{\mathrm{Odd}}_{k}\right\rangle$
be trees of height at most $\lceil d/2\rceil$ and $\lfloor d/2\rfloor$,
respectively.
(a)
If $d$ is even and $\mathcal{G}_{1},\dots,\mathcal{G}_{k+1}$ are the games
that are computed in the successive iterations of the loop in the call
$\textnormal{{$\text{Univ}_{\mathrm{Even}}$}}\left(\mathcal{G},d,\mathcal{T}^{\mathrm{Even}},\mathcal{T}^{\mathrm{Odd}}\right)$,
then for every $i=0,1,\dots,k$, we have that $\mathcal{G}_{i+1}$ separates
every Even dominion in $\mathcal{G}$ that tree $\mathcal{T}^{\mathrm{Even}}$
embeds from every Odd dominion in $\mathcal{G}$ that tree
$\left\langle\mathcal{T}_{1}^{\mathrm{Odd}},\ldots,\mathcal{T}_{i}^{\mathrm{Odd}}\right\rangle$
embeds.
(b)
If $d$ is odd and $\mathcal{G}_{1},\dots,\mathcal{G}_{\ell+1}$ are the games
that are computed in the successive iterations of the loop in the call
$\textnormal{{$\text{Univ}_{\mathrm{Odd}}$}}\left(\mathcal{G},d,\mathcal{T}^{\mathrm{Even}},\mathcal{T}^{\mathrm{Odd}}\right)$,
then for every $i=0,1,\dots,\ell$, we have that $\mathcal{G}_{i+1}$ separates
every Odd dominion in $\mathcal{G}$ that tree $\mathcal{T}^{\mathrm{Odd}}$
embeds from every Even dominion in $\mathcal{G}$ that tree
$\left\langle\mathcal{T}_{1}^{\mathrm{Even}},\ldots,\mathcal{T}_{i}^{\mathrm{Even}}\right\rangle$
embeds.
Before we prove the dominion separation theorem, we recall a simple
proposition from Parys (Parys, 2019), also stated explicitly by Lehtinen et
al. (Lehtinen et al., 2019). Note that it is a straightforward corollary of the
dual of Proposition 4.4 (in case $B\cap T=\emptyset$).
###### Proposition 4.6.
If $T$ is a trap for Odd in $\mathcal{G}$ and $T\cap B=\emptyset$ then we also
have that $T\cap\mathrm{Attr}_{\mathrm{Odd}}^{\mathcal{G}}(B)=\emptyset$.
###### Proof of Theorem 4.5.
We prove the statement of part (a); the proof of part (b) is analogous.
The proof is by induction on the height of tree
$\mathcal{T}^{\mathrm{Odd}}\bowtie\mathcal{T}^{\mathrm{Even}}$ (the “outer”
induction). If the height is $0$ then tree $\mathcal{T}^{\mathrm{Odd}}$ is the
trivial tree $\left\langle\right\rangle$; hence $k=0$, the algorithm returns
the set $V^{\mathcal{G}_{1}}=V^{\mathcal{G}}$, which contains the largest Even
dominion, and which is trivially disjoint from the largest Odd dominion
(because the latter is empty).
If the height of
$\mathcal{T}^{\mathrm{Odd}}\bowtie\mathcal{T}^{\mathrm{Even}}$ is positive,
then we split the proof of the separation property into two parts.
Figure 4. Attractors, subgames, and dominia in the first part of the proof of
the dominion separation theorem.
#### Even dominia embedded by $\mathcal{T}^{\mathrm{Even}}$ are included in
$\mathcal{G}_{i+1}$.
We prove by induction on $i$ (the “inner” induction) that for
$i=0,1,2,\dots,k$, if $M$ is an Even dominion in $\mathcal{G}$ that
$\mathcal{T}^{\mathrm{Even}}$ embeds, then $M\subseteq\mathcal{G}_{i+1}$.
For $i=0$, the claim holds trivially because $\mathcal{G}_{1}=\mathcal{G}$.
For $i>0$, let $M$ be an Even dominion that has an Even $d$-attractor
decomposition $\mathcal{H}$ such that $\mathcal{T}^{\mathrm{Even}}$ embeds
$\mathcal{T}_{\mathcal{H}}$. The inner inductive hypothesis (for $i-1$)
implies that $M\subseteq\mathcal{G}_{i}$.
The reader is encouraged to systematically refer to Figure 4 to better follow
the rest of this part of the proof.
Let
$M^{\prime}=M\setminus\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}_{i}}(D_{i})$.
Because
$\mathcal{G}_{i}\setminus\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}_{i}}(D_{i})$
is a trap for Even in $\mathcal{G}_{i}$ and $M$ is a trap for Odd in
$\mathcal{G}_{i}$, the dual of Proposition 4.3 yields that $M^{\prime}$ is a
trap for Even in $\mathcal{G}_{i}\cap M$.
Then, because $\mathcal{H}$ is an Even $d$-attractor decomposition of
$\mathcal{G}\cap M$, it follows by Theorem 4.2 that there is an Even
$d$-attractor decomposition $\mathcal{H}^{\prime}$ of $\mathcal{G}_{i}\cap
M^{\prime}$ such that $\mathcal{T}_{\mathcal{H}}$ embeds
$\mathcal{T}_{\mathcal{H}^{\prime}}$, and hence also
$\mathcal{T}^{\mathrm{Even}}$ embeds $\mathcal{T}_{\mathcal{H}^{\prime}}$.
Therefore, because $M^{\prime}$ is an Even dominion in the game
$\mathcal{G}_{i}\setminus\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}_{i}}(D_{i})$,
part (b) of the outer inductive hypothesis yields $M^{\prime}\cap
U_{i}=\emptyset$.
Finally, because $M\setminus M^{\prime}\subseteq\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}_{i}}(D_{i})$ is disjoint from $U_{i}\subseteq V^{\mathcal{G}_{i}^{\prime}}$ and $M^{\prime}\cap U_{i}=\emptyset$, it follows that $M\cap U_{i}=\emptyset$. By Proposition 4.6, we obtain
$M\cap\mathrm{Attr}_{\mathrm{Odd}}^{\mathcal{G}_{i}}(U_{i})=\emptyset$ and
hence $M\subseteq\mathcal{G}_{i+1}$.
Figure 5. Attractors, subgames, and dominia in the second part of the proof of
the dominion separation theorem.
#### Odd dominia embedded by
$\left\langle\mathcal{T}_{1}^{\mathrm{Odd}},\ldots,\mathcal{T}_{i}^{\mathrm{Odd}}\right\rangle$
are disjoint from $\mathcal{G}_{i+1}$.
We prove by induction on $i$ (another “inner” induction) that for
$i=0,1,\dots,k$, if $M$ is an Odd dominion in $\mathcal{G}$ that
$\left\langle\mathcal{T}^{\mathrm{Odd}}_{1},\ldots,\mathcal{T}^{\mathrm{Odd}}_{i}\right\rangle$
embeds, then $\mathcal{G}_{i+1}\cap M=\emptyset$.
For $i=0$, note that
$\left\langle\mathcal{T}_{1}^{\mathrm{Odd}},\dots,\mathcal{T}_{i}^{\mathrm{Odd}}\right\rangle=\left\langle\right\rangle$
and the only Odd dominion $M$ in $\mathcal{G}$ that has an Odd
$(d+1)$-attractor decomposition whose tree is the trivial tree
$\left\langle\right\rangle$ is the empty set, and hence $\mathcal{G}_{1}\cap
M=\emptyset$, because $\mathcal{G}_{1}=\mathcal{G}$.
The reader is encouraged to systematically refer to Figure 5 to better follow
the rest of this part of the proof.
For $i>0$, let
$\mathcal{H}=\left\langle\emptyset,(S_{1},\mathcal{H}_{1},A_{1}),\dots,(S_{\bar{\imath}},\mathcal{H}_{\bar{\imath}},A_{\bar{\imath}})\right\rangle$
be an Odd $(d+1)$-attractor decomposition of $\mathcal{G}\cap M$ such that
$\left\langle\mathcal{T}^{\mathrm{Odd}}_{1},\dots,\mathcal{T}^{\mathrm{Odd}}_{i}\right\rangle$
embeds $\mathcal{T}_{\mathcal{H}}$. Note that the embedding implies that
$\bar{\imath}\leq i$.
If
$\left\langle\mathcal{T}_{1}^{\mathrm{Odd}},\dots,\mathcal{T}_{i-1}^{\mathrm{Odd}}\right\rangle$
embeds $\mathcal{T}_{\mathcal{H}}$ then the inner inductive hypothesis (for
$i-1$) implies that $\mathcal{G}_{i}\cap M=\emptyset$ and thus
$\mathcal{G}_{i+1}\cap M=\emptyset$ since
$\mathcal{G}_{i+1}\subseteq\mathcal{G}_{i}$.
Otherwise, it must be the case that
(3) $\mathcal{T}^{\mathrm{Odd}}_{i}$ embeds $\mathcal{T}_{\mathcal{H}_{\bar{\imath}}}$.
Observe that the set $A_{\leq{\bar{\imath}}-1}=A_{1}\cup A_{2}\cup\cdots\cup
A_{{\bar{\imath}}-1}$ is a trap for Even in $\mathcal{G}\cap M$, and hence by
trap transitivity it is a trap for Even in $\mathcal{G}$ because $M$ is a trap
for Even in $\mathcal{G}$. Moreover, subgame $\mathcal{G}\cap
A_{\leq{\bar{\imath}}-1}$ has an Odd $(d+1)$-attractor decomposition
$\mathcal{I}\>=\>\left\langle\emptyset,(S_{1},\mathcal{H}_{1},A_{1}),\dots,(S_{\bar{\imath}-1},\mathcal{H}_{\bar{\imath}-1},A_{\bar{\imath}-1})\right\rangle$
in $\mathcal{G}$ and hence—by Proposition 2.5—it is an Odd dominion in
$\mathcal{G}$, and ordered tree
$\left\langle\mathcal{T}_{1}^{\mathrm{Odd}},\dots,\mathcal{T}_{i-1}^{\mathrm{Odd}}\right\rangle$
embeds $\mathcal{T}_{\mathcal{I}}$. Hence, the inner inductive hypothesis (for
$i-1$) yields
(4) $\mathcal{G}_{i}\cap A_{\leq\bar{\imath}-1}\>=\>\emptyset\,.$
Set $M^{\prime}=\mathcal{G}_{i}\cap M$ and note that not only
$M^{\prime}\subseteq A_{\bar{\imath}}$, but also $M^{\prime}$ is a trap for
Odd in $A_{\bar{\imath}}$, because $\mathcal{G}_{i}$ is a trap for Odd in
$\mathcal{G}$. Moreover—by Proposition 4.3—$M^{\prime}$ is an Odd dominion in
$\mathcal{G}_{i}$ because $\mathcal{G}_{i}$ is a trap for Odd in $\mathcal{G}$
and $M$ is a dominion for Odd in $\mathcal{G}$.
Observe that
$\mathcal{J}=\left\langle\emptyset,(S_{\bar{\imath}},\mathcal{H}_{\bar{\imath}},A_{\bar{\imath}})\right\rangle$
is an Odd $(d+1)$-attractor decomposition of $\mathcal{G}\cap
A_{\bar{\imath}}$. By the embeddable decomposition theorem (Theorem 4.2), it
follows that there is an Odd $(d+1)$-attractor decomposition $\mathcal{K}$ of
$\mathcal{G}\cap M^{\prime}$ such that $\mathcal{T}_{\mathcal{J}}$ embeds
$\mathcal{T}_{\mathcal{K}}$. Because of this embedding, $\mathcal{K}$ must
have the form
$\mathcal{K}=\left\langle\emptyset,(S^{\prime},\mathcal{K}^{\prime},M^{\prime})\right\rangle$.
Since $\mathcal{T}_{\mathcal{J}}$ embeds $\mathcal{T}_{\mathcal{K}}$, we also
have that $\mathcal{T}_{\mathcal{H}_{\bar{\imath}}}$ embeds
$\mathcal{T}_{\mathcal{K}^{\prime}}$, and hence—by
(3)—$\mathcal{T}^{\mathrm{Odd}}_{i}$ embeds
$\mathcal{T}_{\mathcal{K}^{\prime}}$.
Note that $S^{\prime}$ is a trap for Even in $\mathcal{G}\cap M^{\prime}$ in
which every vertex priority is at most $d-1$, because $\mathcal{K}$ is an Odd
$(d+1)$-attractor decomposition of $\mathcal{G}\cap M^{\prime}$. It follows
that $S^{\prime}$ is also an Odd dominion in
$\mathcal{G}_{i}\setminus\mathrm{Attr}_{\mathrm{Even}}^{\mathcal{G}_{i}}(D_{i})$.
The outer inductive hypothesis then yields $S^{\prime}\subseteq U_{i}$. It
follows that
$M^{\prime}=\mathrm{Attr}_{\mathrm{Odd}}^{\mathcal{G}_{i}\cap
M^{\prime}}(S^{\prime})\subseteq\mathrm{Attr}^{\mathcal{G}_{i}}_{\mathrm{Odd}}(S^{\prime})\subseteq\mathrm{Attr}^{\mathcal{G}_{i}}_{\mathrm{Odd}}(U_{i})\,,$
where the first inclusion holds because $M^{\prime}$ is a trap for Even in
$\mathcal{G}_{i}$, and the second follows from monotonicity of the attractor
operator. When combined with (4), this implies $\mathcal{G}_{i+1}\cap
M=\emptyset$. ∎
### 4.3. Correctness and complexity
The dominion separation theorem (Theorem 4.5) allows us to conclude the proof
of the main universal algorithm correctness theorem (Theorem 4.1). Indeed, if
trees $\mathcal{T}^{\mathrm{Even}}$ and $\mathcal{T}^{\mathrm{Odd}}$ satisfy
the conditions of Theorem 4.1 then, by the dominion separation theorem, the
set returned by the call
$\textnormal{{$\text{Univ}_{\mathrm{Even}}$}}\left(\mathcal{G},d,\mathcal{T}^{\mathrm{Even}},\mathcal{T}^{\mathrm{Odd}}\right)$
separates the largest Even dominion from the largest Odd dominion, and
hence—by the positional determinacy theorem (Theorem 2.3)—it is the largest
Even dominion. The argument for procedure $\text{Univ}_{\mathrm{Odd}}$ is
analogous.
We note that the universal algorithm correctness theorem, together with
Propositions 3.8 and 3.7, implies correctness of the non-adaptive version of
Parys’s algorithm (Parys, 2019) and of Lehtinen-Schewe-Wojtczak algorithm
(Lehtinen et al., 2019), because trees of attractor decompositions are
$(n,d/2)$-small (Proposition 3.1) and trees $P_{n,d/2}$ and $S_{n,d/2}$ are
$(n,d/2)$-universal.
The following fact, an alternative restatement of the conclusion of Lehtinen
et al. (Lehtinen et al., 2019), is a simple corollary of the precise
asymptotic upper bounds on the size of the universal trees $S_{n,d/2}$
established by Jurdziński and Lazić (Jurdziński and Lazić, 2017), and of
Propositions 3.7, 3.3, and 3.5.
###### Proposition 4.7 (Complexity).
The universal algorithm that uses universal trees $S_{n,d/2}$ (aka. Lehtinen-
Schewe-Wojtczak algorithm) solves $(n,d)$-small parity games in polynomial
time if $d=O(\log n)$, and in time $n^{2\lg(d/{\lg n})+O(1)}$ if
$d=\omega(\log n)$.
### 4.4. Acceleration by tree pruning
As we have discussed in Section 3.2, Parys (Parys, 2019) has achieved a
breakthrough of developing the first quasi-polynomial McNaughton-Zielonka-
style algorithm for parity games by pruning the tree of recursive calls down
to quasi-polynomial size. Proposition 3.8 clarifies that Parys’s scheme can be
reproduced by letting the universal algorithm run on universal trees
$P_{n,d/2}$, but as it also mentions, just doing so results in a “non-
adaptive” version of Parys’s algorithm. What is the “adaptive” version
actually proposed by Parys?
Recall that the root of tree $P_{n,h}$ has $2\lfloor n/2\rfloor+1$ children: the first $\lfloor n/2\rfloor$ and the last $\lfloor n/2\rfloor$ children are the roots of copies of tree $P_{\lfloor n/2\rfloor,h-1}$, and the middle child is the root of a copy of tree $P_{n,h-1}$. The adaptive version
of Parys’s algorithm also uses another tree-pruning rule, which is adaptive
and a slight generalization of the empty-set rule: whenever the algorithm is
processing the block of the first $n/2$ children of the root or the last $n/2$
children of the root, if one of the recursive calls in this block returns an
empty set then the rest of the block is omitted.
We expect that our structural results (such as Theorems 4.1 and 4.5) will
provide insights that inspire the development, and proofs of correctness, of further and more sophisticated adaptive tree-pruning rules, but we leave this to future work. This may be critical for making quasi-polynomial versions of McNaughton-Zielonka algorithm competitive with its basic version, which is exponential in the worst case but remains very hard to beat in practice (van Dijk, 2018;
Parys, 2019).
## 5. Symbolic algorithms
Parity games that arise in applications, for example from the automata-
theoretic model checking approaches to verification and automated synthesis,
often suffer from the _state-space explosion problem_ : the sizes of models
are exponential (or worse) in the sizes of natural descriptions of the
modelled objects, and hence the models obtained may be too large to store them
explicitly in memory. One method of overcoming this problem that has been
successful in the practice of algorithmic formal methods is to represent the
models symbolically rather than explicitly, and to develop algorithms for
solving the models that work directly on such succinct symbolic
representations (Burch et al., 1992).
We adopt the _set-based symbolic model of computation_ that was already
considered for parity games by Chatterjee, Dvořák, Henzinger, and Svozil
(Chatterjee et al., 2018). In this model, any standard computational
operations on any standard data structures are allowed, but there are also the
following symbolic resources available: _symbolic set variables_ can be used
to store sets of vertices in the graph of a parity game; basic set-theoretic
operations on symbolic set variables are available as _primitive symbolic
operations_ ; the _controllable predecessor_ operations are available as
primitive symbolic operations: the Even (resp. Odd) controllable predecessor,
when applied to a symbolic set variable $X$, returns the set of vertices from
which Even (resp. Odd) can force to move into the set $X$, by taking just one
outgoing edge. Since symbolic set variables can represent possibly very large
and complex objects, they should be treated as a costly resource.
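To fix intuitions, here is a minimal sketch that simulates this model with explicit Python sets: `cpre` stands in for the primitive Even/Odd controllable predecessor operation, and iterating it to a fixpoint computes an attractor, which is how the algorithm of Section 5.1 below uses it. The explicit-set simulation is ours; in a genuine symbolic implementation the sets would live in symbolic set variables.

```python
def cpre(side, X, sub, owner, succ):
    """Controllable predecessor of `side`: vertices of `sub` from which
    `side` can force the next move into X.  Simulated with explicit sets;
    symbolically this would be a single primitive operation."""
    return {v for v in sub
            if (owner[v] == side and any(u in X for u in succ[v] if u in sub))
            or (owner[v] != side and all(u in X for u in succ[v] if u in sub))}

def symbolic_attractor(side, B, sub, owner, succ):
    """Attractor computed by iterating cpre to a fixpoint."""
    A = set(B) & set(sub)
    while True:
        A_next = A | cpre(side, A, sub, owner, succ)
        if A_next == A:
            return A
        A = A_next
```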
Chatterjee et al. (Chatterjee et al., 2018) have given a symbolic set-based
algorithm that on $(n,d)$-small parity games uses $O(d\log n)$ symbolic set
variables and runs in quasi-polynomial time. While the dependence on $n$ is
only logarithmic, a natural question is whether this dependence is inherent.
Given that $n$ can be prohibitively large in applications, reducing dependence
on $n$ is desirable. In this section we argue that it is not only possible to
eliminate the dependence on $n$ entirely, but it is also possible to
exponentially improve the dependence on $d$, resulting in a quasi-polynomial
symbolic algorithm for solving parity games that uses only $O(\lg d)$ symbolic
set variables.
### 5.1. Universal algorithm symbolically
In the set-based symbolic model of computation, it is routine to compute the
attractors efficiently: it is sufficient to iterate the controllable
predecessor operations. Using the results of Jurdziński and Lazić (Jurdziński
and Lazić, 2017), on can also represent a path of nodes from the root to a
leaf in the tree $S_{n,d/2}$ in $O(\lg n\cdot\lg d)$ bits, and for every node
on such a path, to compute its number of children in $O(\lg n\cdot\lg d)$
standard primitive operations. This allows to run the whole universal
algorithm (Algorithm 2) on an $(n,d)$-small parity game and two copies of
trees $S_{n,d/2}$, using only $O(\lg n\cdot\lg d)$ bits to represent the
relevant nodes in the trees $\mathcal{T}^{\mathrm{Even}}$ and
$\mathcal{T}^{\mathrm{Odd}}$ throughout the execution.
The depth of the tree of recursive calls of the universal algorithm on an
$(n,d)$-small parity game is at most $d$. Moreover, in every recursive call,
only a small constant number of set variables is needed because only the
latest sets $V^{\mathcal{G}_{i}}$, $D_{i}$, $V^{\mathcal{G}^{\prime}_{i}}$,
and $U_{i}$ are needed at any time. It follows that the overall number of
symbolic set variables needed to run the universal algorithm is $O(d)$. Also
note that every recursive call can be implemented symbolically using a
constant number of primitive symbolic operations and two symbolic attractor
computations.
This improves the symbolic space from Chatterjee et al.’s $O(d\lg n)$ to
$O(d)$, while keeping the running time quasi-polynomial. Moreover, this
symbolic algorithm is very simple and straightforward to implement, which
makes it particularly promising and attractive for empirical evaluation and
deployment in applications.
### 5.2. Succinct symbolic partitions
In this section we describe how the number of symbolic set variables in the
symbolic implementation of the universal algorithm can be further reduced from
$O(d)$ to $O(\log d)$. We use letters $G$, $D$, $G^{\prime}$, and $U$ to
denote the sets $V^{\mathcal{G}_{i}}$, $D_{i}$,
$V^{\mathcal{G}^{\prime}_{i}}$, and $U_{i}$ for some $i$-th iteration of any
of the recursive calls of the universal algorithm. Observe that we do not need
to keep the symbolic variables that store the sets $D$, $G^{\prime}$, and $U$
on the stack of recursive calls because on any return from a recursive call,
their values are not needed to proceed. How can we store the sets denoted by
all the symbolic set variables $G$ on the stack using only $O(\log d)$
symbolic set variables, while the height of the stack may be as large as $d$?
Firstly, we argue that we can symbolically represent any sequence
$\left\langle G_{d-1},\dots,G_{i}\right\rangle$ of set variables that would
normally occur on the stack of recursive calls of the universal algorithm, by
another sequence $\left\langle H_{d-1},\dots,H_{0}\right\rangle$, in which the
sets form a partition of the set of vertices in the parity game. Indeed, a
sequence $\left\langle G_{d-1},\dots,G_{i}\right\rangle$ on the stack of recursive calls at any time forms a descending chain w.r.t. inclusion, and setting $G_{d}$ to be the set of all vertices, it suffices to consider the sequence
$\left\langle G_{d}\setminus G_{d-1},\dots,G_{i+1}\setminus
G_{i},G_{i},\emptyset,\dots,\emptyset\right\rangle$.
Secondly, we argue that the above family of $d$ mutually disjoint sets can be
succinctly represented and maintained using $O(\log d)$ set variables.
W.l.o.g., assume that $d$ is a power of $2$. For every $k=1,2,\dots,\lg d$,
and for every $i=0,1,\dots,d-1$, let $\mathrm{bit}_{k}(i)$ be the $k$-th digit in the binary representation of $i$ (and zero if there are fewer than $k$
digits). We now define the following sequence of sets $\left\langle
S_{1},S_{2},\dots,S_{\lg d}\right\rangle$ that provides a succinct
representation of the sequence $\left\langle
H_{d-1},\dots,H_{0}\right\rangle$. For every $k=1,2,\dots,\lg d$, we set:
$S_{k}\>=\>\bigcup\left\{H_{i}\>:\>0\leq i\leq d-1\text{ and }\mathrm{bit}_{k}(i)=1\right\}\,.$
Because the sets $H_{d-1},\dots,H_{0}$ form a partition of the set of all vertices, it follows that for every $i=0,1,\dots,d-1$, we have:
$H_{i}\>=\>\bigcap\left\{S_{k}\>:\>1\leq k\leq\lg d\text{ and }\mathrm{bit}_{k}(i)=1\right\}\;\cap\;\bigcap\left\{\overline{S_{k}}\>:\>1\leq k\leq\lg d\text{ and }\mathrm{bit}_{k}(i)=0\right\}\,,$
where $\overline{X}$ is the complement of set $X$.
What remains to be shown is that the operations on the sequence of sets
$\left\langle G_{d-1},\dots,G_{i}\right\rangle$ that reflect changes on the
stack of recursive calls of the universal algorithm can indeed be implemented
using small numbers of symbolic set operations on the succinct representation
$\left\langle S_{1},\dots,S_{\lg d}\right\rangle$ of the sequence
$\left\langle H_{d-1},\dots,H_{0}\right\rangle$. We note that there are two
types of changes to the sequence $\left\langle
G_{d-1},\dots,G_{i}\right\rangle$ that the universal algorithm makes:
(a)
all components are as before, except for $G_{i}$ that is replaced by
$G_{i}\setminus B$, for some set $B\subseteq G_{i}$;
(b)
all components are as before, except that a new entry $G_{i-1}$ is added equal
to $G_{i}\setminus B$, for some set $B\subseteq G_{i}$.
The corresponding changes to the sequence $\left\langle
H_{d-1},\dots,H_{0}\right\rangle$ are then:
(a)
all components are as before, except that set $H_{i+1}$ is replaced by
$H_{i+1}\cup B$, and set $H_{i}$ is replaced by $H_{i}\setminus B$;
(b)
all components are as before, except that set $H_{i}$ is replaced by $B$, and
set $H_{i-1}$ is replaced by $H_{i}\setminus B$.
To implement the update of type (a), it suffices to perform the following
update to the succinct representation:
$S^{\prime}_{k}\>=\>\begin{cases}S_{k}&\text{if $\mathrm{bit}_{k}(i+1)=\mathrm{bit}_{k}(i)$},\\ S_{k}\cup B&\text{if $\mathrm{bit}_{k}(i+1)=1$ and $\mathrm{bit}_{k}(i)=0$},\\ S_{k}\setminus B&\text{if $\mathrm{bit}_{k}(i+1)=0$ and $\mathrm{bit}_{k}(i)=1$},\end{cases}$
and to implement the update of type (b), it suffices to perform the
following:
$S^{\prime}_{k}\>=\>\begin{cases}S_{k}&\text{if $\mathrm{bit}_{k}(i)=\mathrm{bit}_{k}(i-1)$},\\ S_{k}\setminus(H_{i}\setminus B)&\text{if $\mathrm{bit}_{k}(i)=1$ and $\mathrm{bit}_{k}(i-1)=0$},\\ S_{k}\cup(H_{i}\setminus B)&\text{if $\mathrm{bit}_{k}(i)=0$ and $\mathrm{bit}_{k}(i-1)=1$}.\end{cases}$
This completes the proof sketch of the main technical result in this section.
###### Theorem 5.1.
There is a symbolic algorithm that solves $(n,d)$-small parity games using
$O(\lg d)$ symbolic set variables, $O(\log d\cdot\log n)$ bits of conventional
space, and whose running time is polynomial if $d=O(\log n)$, and
$n^{2\lg(d/{\lg n})+O(1)}$ if $d=\omega(\log n)$.
###### Acknowledgements.
The first author has been supported by the EPSRC grant EP/P020992/1 (Solving
Parity Games in Theory and Practice). The idea behind the design of the
universal algorithm was discovered independently, and later, by Nathanaël
Fijalkow; we thank him for sharing his conjectures with us and for exchanging
ideas about adaptive tree-pruning rules. We also thank Alexander Kozachinskiy and
Thejaswini K. S. for helpful comments on earlier drafts of the paper.
## Appendix
###### Proof of Proposition 4.3.
Firstly, we argue that $T\cap R$ is a trap for Odd in subgame $\mathcal{G}\cap
R$. Let $v\in(T\cap R)\cap V_{\mathrm{Odd}}$. We need to argue that if
$(v,w)\in E$ and $w\in R$ then $w\in T$, which follows directly from the
assumption that $T$ is a trap for Odd in $\mathcal{G}$. Let $v\in(T\cap R)\cap
V_{\mathrm{Even}}$. We need to argue that there is $(v,w)\in E$, such that
$w\in T\cap R$. Observe that there is an edge $(v,w)\in E$ such that $w\in
T$, because $T$ is a trap for Odd in $\mathcal{G}$. Moreover, since $v$ is an
Even vertex in $R$ and $R$ is a trap for Even in $\mathcal{G}$, every
successor of $v$ lies in $R$; in particular, $w\in T\cap R$.
Now suppose that $T$ is not only a trap for Odd in $\mathcal{G}$ but it is
also an Even dominion in $\mathcal{G}$ and let $\sigma$ be an Even dominion
strategy on $T$. Note that $\sigma$ is also an Even dominion strategy on
$T\cap R$ in subgame $\mathcal{G}\cap R$ because $R$ is a trap for Even. ∎
###### Proof of Proposition 4.4.
Firstly, we argue that $A\cap T\subseteq A^{\prime}$. Let $\sigma$ be an Even
reachability strategy from $A$ to $B$ in $\mathcal{G}$. Note that $\sigma$ is
also an Even reachability strategy from $A\cap T$ to $B\cap T$ in
$\mathcal{G}\cap T$ because $T$ is a trap for Even in $\mathcal{G}$. It then
follows that $A\cap T\subseteq A^{\prime}$ because $A^{\prime}$ is the Even
attractor to $B\cap T$ in $\mathcal{G}\cap T$.
Secondly, we argue that $T\setminus A^{\prime}$ is a trap for Even in
$\mathcal{G}\setminus A$. Note that $T\setminus A^{\prime}$ is a trap for Even
in $\mathcal{G}$ by trap transitivity: $T$ is a trap for Even in
$\mathcal{G}$, and $T\setminus A^{\prime}$ is a trap for Even in
$\mathcal{G}\cap T$ because it is the complement of an Even attractor within
that subgame. From
$A\cap T\subseteq A^{\prime}$ it follows that $T\setminus A^{\prime}$ is
disjoint from $A$, and hence it is also a trap for Even in
$\mathcal{G}\setminus A$. ∎
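Both proofs above lean on the standard facts that attractors are least
fixpoints of a one-step controlled-predecessor operator and that the
complement of an Even attractor is a trap for Even. The following minimal
Python sketch of the textbook attractor computation is included only for
intuition; it uses an assumed explicit representation (sets of vertices and a
set of edge pairs), not the symbolic encoding of this paper, and it assumes,
as usual for parity games, that every vertex has at least one outgoing edge:

```python
def even_attractor(V_even, V_odd, E, B):
    # least set A containing B such that: an Even vertex joins A if SOME
    # successor is in A, and an Odd vertex joins A if ALL successors are in A
    A = set(B)
    changed = True
    while changed:
        changed = False
        for v in (V_even | V_odd) - A:
            succs = {w for (u, w) in E if u == v}
            if v in V_even and succs & A:
                A.add(v)
                changed = True
            elif v in V_odd and succs and succs <= A:
                A.add(v)
                changed = True
    return A
```

The complement of the computed attractor is then a trap for Even: an Even
vertex outside the attractor has no edge into it (otherwise it would have
been attracted), and an Odd vertex outside has some successor outside
(otherwise all of its successors would be inside and it, too, would have been
attracted).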